The United Kingdom’s much-hyped AI Safety Summit kicked off today — and it’s already revealing what world leaders hope for, and fear, in an AI-powered future.

The most concrete takeaway from the first day of the two-day summit is the “Bletchley Declaration,” named after the Bletchley Park estate where the summit is being held (and where Alan Turing pioneered the codebreaking that helped the Allies win World War II). The document, with 29 signatories (including China), uses mostly boilerplate language to address both existential concerns about the threats posed by powerful AI models in the future and more quotidian policy worries about how AI might supercharge the more harmful or biased parts of existing bureaucracy.

But if you look at the individual statements from attendees of the summit, a clearer vision begins to emerge of where their priorities lie, and what they see when they project five, 10, or even 100 years ahead.

Starting with the hosts: U.K. Prime Minister Rishi Sunak’s fear of an AI apocalypse is well-documented. Fueled by speculation among some academics and high-level AI developers (including Elon Musk) that a “God-like” AI could potentially wipe out the human race, Sunak favors reining in super-powered frontier models, and the Bletchley Declaration nods to that (theoretical) extinction-level threat.

The United States’ representative does not share that priority, or at least not to the same extent. As POLITICO’s Vincent Manancourt, Eugene Daniels and Brendan Bordelon reported today, Vice President Kamala Harris is more worried about the threats AI poses to society right now.

“Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential… when people around the world cannot discern fact from fiction because of a flood of AI enabled myths and disinformation. I ask, is that not existential for democracy?” Harris said.

That is to say, the harms generated by humans using AI might be worth more scrutiny at this moment than those generated by machines at some undefined point in the future. (This also happened to be the implicit message of the most recent “Mission: Impossible” film, a point President Biden might have gleaned from a DFD edition earlier this year.)

This philosophical split isn’t a matter of excluding or ignoring one type of risk entirely; the Bletchley Declaration, after all, nods to both. But governments have a limited amount of political capital and sway, and the regulations on which they’ll decide to focus are far from certain at this point.

Companies are afraid of that uncertainty, and some are bristling at how a patchwork of regulations focused on different priorities might complicate what they do. POLITICO’s Rebecca Kern kept tabs on Meta president of global affairs Nick Clegg for DFD yesterday as he made the media rounds pre-summit, and he said the race by world governments to regulate AI is confusing the industry.

“All of that doesn’t quite seem to fit together,” he said at an AI Fringe event, hoping this week’s summit will lead to governments at the very least being “corralled in a similar direction.” (He also said he hopes governments can agree on a common definition of what a “frontier model” actually is, a concept still largely up for debate even as Sunak et al. discuss how to regulate them.)
Meanwhile, developers, academics, and activists who aren’t necessarily in the public or private centers of power worry that the AI future will look… pretty much exactly like the present, with a lack of coordination in government leading to tech giants riding roughshod over their competitors and over notions of privacy or consumer protection. As outlined in a letter shared with POLITICO’s Digital Future Daily yesterday, a group of open-source AI advocates — which, it should be noted, included Meta’s AI lead Yann LeCun — is warning that “quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” echoing worries that restricting AI models’ use too tightly will simply mean that a potentially world-changing technology is used to uphold the status quo.