Today in The Future in Five Questions we talk to AI guru Oren Etzioni. A pioneering researcher in web search and machine reading, and founding CEO of the influential Allen Institute for AI, Etzioni has emerged as a powerful voice in AI development and policy. Read on to hear his thoughts about augmented intelligence, high-skilled immigration and the “big bang” moment for powerful image generation models. Responses have been edited for length and clarity.

What’s one underrated big idea?

I would say “augmented intelligence” — my favorite definition of AI: a tool that we use, rather than a challenge to us or something that displaces us. We’re distracted by all these concerns about AI taking over the world, or AI’s legitimate threat to jobs. There are legitimate concerns around AI, but the narrative has been too negative.

What’s a technology you think is overhyped?

I think AI is also overhyped. AI is the latest tool that we have for statistical data processing or modeling. It’s natural that we now have software where, instead of writing down rules, it automatically generates rules or policies based on data. But the overhyped part is [when] people somehow jump from that to the fundamental intellectual enterprise of AI, which is understanding the human mind — building a true artificial “general” intelligence, as it’s sometimes called. To me, it’s like the kid who scampers up to the top of the tree and yells, “I’m on my way to the moon!” The moon is still very far away.

What book most shaped your conception of the future?

When I was in high school, I had the opportunity to read “Gödel, Escher, Bach” by Douglas Hofstadter. That book [described] mathematics and music and artificial intelligence, in both aesthetic and profoundly scientific ways. It got me thinking about answers to the biggest questions, and understanding the tremendous possibilities of software in particular.

What could government be doing regarding tech that it isn’t?
First of all, we have a huge shortage of skilled workers in the area, so I advocated in a Wired article a while back for an AI visa program — but it’s really more general. We need to re-engage students coming here.

I also think we need better AI literacy. We did a survey of over 1,000 American adults, and we found that they really don’t know very much about what AI can and cannot do — 84 percent received a failing grade in our national survey. So the government needs to foster AI literacy.

I sit on the Biden administration’s advisory board for NAIRR — the National AI Research Resource. At the core of [the advisory board] is the fact that academia does not have enough students, and the public sector does not have enough computational power, to run the models that we know and love at scale. So we see companies building these tremendous [models] that require a huge amount of data to run, and the government could do a lot to level the playing field here.

What has surprised you most this year?

For many of us in the field, it’s the success of these generative models like GPT-3 and DALL-E — models that are given very limited problems and produce very rich documents, very rich images. Those abilities, when these models are produced at scale, I think have surprised and staggered almost everybody in the field — and, of course, those outside it. We are 10 seconds from the “big bang” of these models… they will change the creative process from a solitary writing process to one that’s very interactive between us and our AI tools.

Extra, extra — What does this week’s AI Bill of Rights mean for tech policy?

The most important point to clarify is that this is a set of principles — not regulations. But the reason the AI Bill of Rights will still be impactful is that we need to start from first principles. We’ve identified the principles that ought to guide regulation, that ought to guide enforcement, and that ought to be elaborated and refined.
The five principles in the AI Bill of Rights are neither mutually exclusive nor exhaustive, because it’s a fast-moving field. I view these principles as a stake in the ground — a focal point, like when Thor strikes his hammer against the earth and there are huge reverberations.