Elon Musk’s lawsuit against OpenAI is more than another episode of corporate warfare. It’s also the latest chapter in what can, without much hyperbole, be described as a religious conflict. In the lawsuit filed last Thursday, Musk explicitly asks a San Francisco court to rule that OpenAI’s GPT-4 “constitutes artificial general intelligence,” a heretofore hazily defined super-advanced AI that can meet or exceed human capabilities. Musk believes that OpenAI is working to develop an AI deity — he told Andrew Ross Sorkin a few months ago that by the time OpenAI’s copyright lawsuits are settled, “we’ll have digital God,” rendering the point moot. His own lawsuit hinges on the assertion that by building such a thing for profit, OpenAI is in breach of the agreement under which he originally gave it money.

Washington leaders and other policymakers are still struggling to form their early impressions of AI risk — is it just a fancy algorithm, recycling the same old harms of the social media era? Could it unleash nuclear or biological mayhem? Will it upend the white-collar economy, or campaign season as we know it? The stark, apocalyptic terms the field’s leaders use when they talk about this technology keep the stakes high, reminding everyone just how much disruption it might cause and how much collateral damage they might be willing to overlook along the way.

Musk has been vocally worried about the risk AI might pose to humanity for more than a decade. Those worries led him to fund OpenAI in the first place, back when it was purely a nonprofit, and today they are ostensibly fueling his own competitor to what he sees as OpenAI’s corrupted mission. Meanwhile, other tech titans — including OpenAI’s Sam Altman, Meta’s Mark Zuckerberg and Google DeepMind’s Demis Hassabis — claim that, yes, they are building AGI, but one that will be friendly, and very good for both business and democracy.
“A misaligned superintelligent AGI could cause grievous harm to the world,” Altman wrote in February 2023, in contrast to the version his company is planning to usher into existence.

Analysts have spilled gallons of digital ink trying to determine the sincerity of this belief in a human-like (or superior) AI. Leading AI critic Gary Marcus has repeatedly, publicly (albeit unsuccessfully) tried to get Musk to wager on the possibility that his bold AI predictions could fall short. And, of course, activists and watchdogs like the AI Now Institute hound Silicon Valley over the potential risks and abuses that AI is causing, well, now. (On the other hand you have the community of effective accelerationists, or e/acc, an internet subculture dedicated to the idea that yes, machine god really is imminent, and we should hasten its arrival as quickly as possible.)

At this point, you might be stopping to ask yourself exactly what all this spiritual hyperbole is concealing. After all, as out-there as Silicon Valley can be, these figures are primarily businessmen whose job is building companies and making money. Do they actually believe this stuff, or is it all just cover for cutthroat corporate competition, elbowing both their rivals and pesky regulators aside in the name of protecting humanity?

There’s a third way to answer this question, however, one that gives adequate weight to both Silicon Valley’s longstanding spiritual aspirations and the fact that the AI race has made those at its top very, very rich. Instead of evaluating what Musk, Altman, et al. say in public and picking it apart to discern its sincerity, we can simply look at their long public track record of development and business decisions and settle on the simplest explanation. Once you do that, the question of sincerity fades in favor of what the hype means for how these companies are behaving right now.
When I spoke with the University of Pennsylvania’s Ethan Mollick last week for our Future In Five Questions feature, we had a side conversation about these developments, where he posited that a belief in imminent, super-intelligent AI actually explains many tech firms’ seemingly unconventional business decisions, like OpenAI’s unique corporate structure, or the lack of public detail about its rumored “Q*” project.

“If you view it through the lens of normal business press, where this is a big new market and they have to struggle with the hype cycle, their decisions seem to make less sense,” Mollick said.

Mollick told me he thinks much of the belief in superintelligence is sincere. He asserted that Microsoft seems like it might be the least convinced of AI’s theoretical godlike power, because it is so aggressive about commercializing it. So assuming that belief is sincere, like humans’ millennia-old belief in a non-machine God, proving it right, wrong, or insincere becomes less important than understanding what those believers end up doing in its name.

“The real issue to me when I think about this is that I think they don’t even know how much disruption they’re causing in the meantime,” Mollick said. “They didn’t at first think they were releasing a thing that invalidated all homework. So even if they don’t succeed at creating ‘machine God,’ the idea of a machine that’s 10 times smarter than the current version of AI is already going to do a lot of weird stuff.”

This recalls the view of AI maximalists like the Foundation for American Innovation’s Samuel Hammond, who has written extensively about how AI might wreak havoc on American life, and how our bureaucracy must adapt to deal with it. Assuming that happens, it stands to reason that the companies building such powerful machines will enjoy a powerful, lucrative position in the world AI transforms.
In that light, it’s almost a fool’s errand to determine whether Musk’s lawsuit against OpenAI is mainly a business move, a continuation of a personal grudge, or a desperate cry for judges to stop the computer apocalypse. The answer may well be “all of the above.”