Last week’s AI Safety Summit in the United Kingdom was, to hear its participants tell it, a rousing success. Critics, however, accuse those leaders of living in a fantasy world.

Those critics are part of a growing rift in the AI community over how much to focus on the “existential” risk of “frontier” models that could possibly, well… end the world. The AI policy community is at a crossroads that will determine whether the technology is governed with here-and-now societal risks in mind or with an eye toward a sci-fi future in which ideas about governance are effectively upended, even as each side claims its view of the technology encompasses both kinds of risk.

“It’s disappointing to see some of the most powerful countries in the world prioritize risks that are unprovable and unfalsifiable,” Data & Society policy director Brian Chen said in an email. “Countries are dedicating massive institutional resources to investigate existential claims that can’t hold up under basic principles of empirical inquiry.”

In other words, the fight over AI should be less about preventing Skynet from killing us and more about protecting consumers from opaque algorithms that reject a home loan or decline coverage for a medical procedure.

Chen and his peers do believe the government has a role to play in AI safety. But the merest whiff of “doomerism” in Silicon Valley triggers a fear that the biggest AI developers are trying to cement their dominance in the field by playing up hypothetical threats at the expense of present-day ones.

Amba Kak of the AI Now Institute, one of the few representatives of civil society at last week’s summit, said at the event’s conclusion that “we are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions.” In her remarks, Kak acknowledged the Biden administration’s efforts to encourage fair competition and redress bias in AI, but said future gatherings should include voices from across society, not just the biggest tech companies and government leaders.

Some groups see current-day AI safety and a competitive industry open to new players as inextricably linked. Mark Surman, president of the Mozilla Foundation, and the researcher Camille François said in a blog post yesterday that “competition is an antidote” to what they see as the undemocratic nature of current AI policy debates, which are dominated by industry giants. They emphasize making AI development tools available to everybody, accusing major players like OpenAI of using “the fear of existential risk to propose approaches that would shut down open-source AI.” (A “joint statement” published on Halloween, signed by Surman, François, and no less a figure than Meta AI chief Yann LeCun, called for making open-source AI development a “global priority.”)

As Kak alluded to, some leaders in the U.S. have spoken out about these issues. Vice President Kamala Harris was outspoken at last week’s summit in urging other global leaders and AI companies to prioritize the here-and-now risks of algorithmic discrimination. Federal Trade Commission Chair Lina Khan wrote in The New York Times in May that “the expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms,” and urged anti-monopoly policy choices along the lines of those called for by Mozilla.
The Biden administration’s executive order did address those topics: it specifically directs Khan’s FTC to investigate monopolistic practices in AI, establishes privacy protections for government uses of AI, and orders the Department of Housing and Urban Development to issue guidance on stopping discriminatory AI systems in lending. The Bletchley Declaration itself elaborates on both immediate human risks and the doomy predictions of apocalypse-by-frontier-model.

Still, some outside industry and government who have studied the tech policy fights of past eras will believe that AI giants voluntarily accept accountability for their products’ potential harms when they see it.

“We… need a more holistic, human-centered vision of AI systems — their impact on workers, their extraction of data, their massive consumption of energy and water,” Chen said. “This was lacking at last week’s summit.”