Hello, and welcome to this week’s installment of the Future In Five Questions. This week I spoke with Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania and author of the One Useful Thing Substack about the impact of AI on work and education. His forthcoming book, “Co-Intelligence: Living and Working with AI,” urges readers to integrate AI tools into their lives intelligently and responsibly. We talked today about how unpredictable the AI development ecosystem is at this moment, why its “apocalyptic” capabilities are overrated and the need for government to set clear regulatory guidelines around AI. An edited and condensed version of our conversation follows:

What’s one underrated big idea?

The usefulness of AI is actually underrated. Right now there’s a lot of talk about the theoretical future of AI and what it means for society, but it’s already getting a lot of work done for a lot of people who aren’t talking about it. I’d like to see more attention paid to its actual use in the world, which is much more advanced than people think. The technology is very hyped, but it’s a future hype, not a current hype.

What’s a technology that you think is overhyped?

My non-technical answer is that people are hyping the wrong kind of apocalypse, the kind with a machine that wakes up and murders us all. We’re already seeing human-level performance out of AI in a lot of fields, and it doesn’t need to be a sentient machine to cause a lot of disruption. That disruption happens at the level of industries, companies, or even individual jobs, where we have agency and control over things. The problem with the big-picture, “will AI doom us all, should we pause development” type of questions is that they make AI an abstract thing that we don’t have control over. It makes it something that only policy people and Silicon Valley control, as opposed to something where we all have a say in how it’s used.
What book most shaped your conception of the future?

I have a historical example and a recent example. The historical example is Nicholas Negroponte’s “Being Digital,” which was a big phenomenon in the early 1990s. Negroponte was the head of the MIT Media Lab, which is really interesting because the lab was right about everything, and then became obsolete because of how absolutely right it was; it’s now sort of struggling in an environment where everybody else is doing the same kind of thing. He laid out what it means to move from an analog to a digital world, and it absolutely blew my mind and was part of the reason why I did the things I’ve done since then.

The more recent book that I think about often is a very strange book called “The Knowledge” by Lewis Dartnell. It’s a very detailed history of technology, and a guide to how you would reconstruct it from the ground up if civilization collapsed. How would you reconstruct the chemical industry? It turns out doing all this stuff is really complicated. You can’t just instantly make antibiotics by scraping mold off of something. There are steps. It makes you realize how hard it is to get where we are. It wasn’t one discovery; it took a lot of effort to build these systems up.

What could government be doing regarding technology that it isn’t?

Providing positive as well as negative guidance. I’m glad there are regulatory concerns about AI. There should be. But I also think giving lanes of permission and experimentation is important. If that doesn’t happen, a lot of large, regulated industries won’t experiment with AI in reasonable or healthy ways while small competitors around the world will. I would like to see much more regulatory action in every industry, whether it’s finance, health care or whatever, to define positive, doable, clear uses for AI as fast as possible.

What surprised you most this year?

From an AI perspective, it’s the exponential growth.
We had a giant leap to GPT-4, and the fact that powering up this technology turns out to be not that hard a problem to solve is pretty weird. Who knew that this was a relatively simple technological problem given enough computing power? And then it keeps going. The flip side is that it’s also weird that no one has beaten GPT-4 a year later. We have this miraculous technology made by OpenAI, which is not the size of Google or Microsoft or Amazon, and yet nobody beats their model in a year. That’s weird, and nobody quite knows what it means or why.