If “artificial general intelligence” ever arrives (an AI that surpasses human intelligence and capability), what will it actually do to society, and how can we prepare for it? That’s the big, long-term question looming over the effort to regulate this new technological force. Tech executives have tried to reassure Washington that their new AI products are tools for harmonious progress, not scary techno-revolution. But read between the lines of a new, exhaustive profile of OpenAI, published yesterday in Wired, and the implications of the company’s takeover of the global tech conversation become stark. They go a long way toward answering those big existential questions.

Veteran tech journalist Steven Levy spent months with the company’s leaders, employees and former engineers, and came away convinced that Sam Altman and his team don’t just believe that artificial general intelligence, or AGI, is inevitable; they believe it’s likely to transform the world entirely. That makes their mission a political one, even if it doesn’t track neatly along our current partisan boundaries, and they’re taking halting but deliberate steps toward achieving it behind closed doors in San Francisco.

They expect AGI to change society so much that the company’s bylaws contain written provisions for an upended, hypothetical version of the future in which our current contracts and currencies have no value. “Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered,” Levy notes. “After all, it will be a new world from that point on.”

Sandhini Agarwal, an OpenAI policy researcher, put a finer point on how she sees the company’s mission right now: “Look back at the industrial revolution — everyone agrees it was great for the world… but the first 50 years were really painful… We’re trying to think how we can make the period before adaptation of AGI as painless as possible.”

OpenAI’s race to AGI raises an immediately obvious laundry list of questions, most of them still unanswered: Who will be spared the pain of this “period before adaptation of AGI”? How might it transform civic and economic life? And just who decided that Altman and his team get to be the ones to set its parameters, anyway?

The biggest players in the AI world see the achievement of OpenAI’s mission as a sort of biblical Jubilee, erasing all debts and winding back the clock to a fresh start for our social and political structures. So if that’s really the case, how is it possible that the government isn’t kicking down the doors of OpenAI’s San Francisco headquarters like the faceless, space-suited agents in “E.T.”?

In a society based on principles of free enterprise, of course, Altman and his employees are as legally entitled to pursue AGI as they would be to build a dating app or an Uber competitor. They’ve also made a serious effort to demonstrate their agreement with the White House’s own stated principles for AI development. Levy reported that risks to democracy were a major concern in releasing progressively more powerful GPT models, with chief technology officer Mira Murati telling him the company “did a lot of work with misinformation experts and did some red-teaming” and that “there was a lot of discussion internally on how much to release” around the 2019 release of GPT-2.
Those nods toward social responsibility are a key part of OpenAI’s business model and media stance, but not everyone is satisfied with them, including some of the company’s former top executives, who left to found Anthropic in 2021. That company’s CEO, Dario Amodei, told the New York Times this summer that Anthropic’s goal isn’t necessarily to make money or usher in AGI, but to set safety standards that its top competitors will feel compelled to adopt.

The big questions about AI changing the world might all seem theoretical. But people within the AI community, and a growing number of watchdogs and politicians, are already taking them deadly seriously (even as a steadfast chorus of computer scientists remains skeptical that AGI is possible at all).

Just take a recent jeremiad from Foundation for American Innovation senior economist Samuel Hammond, who in a series of blog posts has tackled the political implications of AGI boosters’ claims, taken at face value, and of a potential response from government: “The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion,” Hammond writes. “It’s up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan.”

For now, that’s a far-fetched scenario. But as Levy’s profile of OpenAI reveals, it’s one that the people with the most money, computing power and public sway in the AI world hold as gospel truth. Should the AGI revolution put politicians across the globe on their back foot, or out of power entirely, they won’t be able to say they weren’t warned.