Even the world’s fastest-developing technology cannot outrun the culture war.

With Congress set to grapple with AI in hearings on Tuesday and Wednesday, and the White House getting into the game as well, the people who have thought most about its risks remain divided into feuding camps, sniping at each other on Twitter and in the press.

On one side of the divide are researchers focused on current and imminent risks of the technology, like AI systems that perpetuate harmful bias against marginalized groups. On the other side are those focused on the more distant risk that future AIs could start working against humanity in ways we can't stop, and become an existential threat to our species.

The split among experts is a big problem for leaders in Washington, Brussels and every other power center thinking about AI right now: If you don't know what to worry about, then you really don't know what to do about it. Both sides have persuasive points, but the feud is sabotaging an effective response to AI.

Enter a third group of experts: those saying the two camps have more in common than either is willing to admit.

“There’s a lot of common ground,” Carina Prunkl, a researcher at Oxford's Institute for Ethics in AI and author of a 2020 paper on the divide, tells DFD. “It would be much more useful if there was more engagement across these disciplines.”

Research on immediate issues like AI bias, this group argues, is very likely to yield a toolkit of insights and strategies that will also help in addressing the more nebulous existential risks down the line.

Despite the reasons to make common cause, reconciling social justice advocates with the prophets of a techno-apocalypse will be no mean feat.

Those concerned with social justice — a younger, more diverse crowd — say the existential risk camp is old, white and out of touch, worried about sci-fi scenarios at the expense of real problems affecting people right now, like unfair prison sentences and biased screening of job applications. The existential risk camp accuses the social justice camp of prioritizing a relatively narrow set of issues over the long-term survival of humanity.

Granted anonymity to frankly discuss the vicious world of academic politics, AI researchers described a set of social dynamics that have magnified a difference of opinion about research priorities — and which will be dispiritingly familiar to anyone following national politics.

The AI research community is clustered around a handful of college towns like Oxford, England, and Berkeley, California. It is a close-knit group, especially in circles focused on existential AI risk, which include many effective altruists, members of the utilitarian movement sponsored by Sam Bankman-Fried. It's common for members to blur work and life, rooming with and dating each other.

In this cloistered world, researchers said, it is difficult for the camps involved to see beyond their factional disputes.

One researcher said that those in the social justice camp often envy the lavish funding and media coverage showered on the existential risk crowd, which has attracted a bevy of billionaire patrons. The same researcher said that the utilitarian thinking of many existential risk researchers is often literal-minded to the point of absurdity, making them difficult to work with.
She recounted a conversation with one in which a request to turn off a light devolved into a lengthy fight over the long-term costs and benefits of going to the trouble of turning off the lights at night.

Another researcher said his mental health had improved after muting a handful of the most outspoken voices in the debate on Twitter. They included leading voices on both sides, such as existential risk researcher Eliezer Yudkowsky and Timnit Gebru, a former Google researcher focused on bias issues.

Rather than galvanize the two camps around a shared cause, the rapid advance of AI and the attention it has generated have only made the debate more acrimonious, according to researchers who follow it.

“That just turns the volume up on everything,” said Stephen Casper, a PhD student at MIT who has laid out the case for reconciliation on message boards popular among AI researchers. “There really should be no inherent tradeoff between making sure AI systems are cautiously approached,” Casper said, “compared to making AIs very aligned with humans from a justice perspective.”

But recent weeks have only brought more flashpoints.

In March, an open letter signed by Elon Musk and others calling for a six-month safety pause on AI research drew criticism for focusing too much on long-term risks and not enough on discrimination and real-world unfairness. And the decision last month by “AI godfather” Geoffrey Hinton to quit Google in order to raise the alarm about AI has drawn complaints that he did not back female co-workers, including Gebru, who raised internal concerns about AI bias years earlier. Hinton has responded that Gebru’s concerns “weren’t as existentially serious.”

Seth Baum, executive director of the Global Catastrophic Risk Institute and author of a 2018 paper calling for reconciliation between the two factions, suggested one tried-and-true strategy for bringing people together across political boundaries. Baum told DFD the two factions should focus less scrutiny on each other and more on the activities of tech companies like Microsoft and Google.

“Trying to unite people around a common enemy,” he said. “That works sometimes.”