In the end, it could be multipolarity that brings about the singularity.

As leaders from the U.S. and Europe struggled to coordinate their AI response in Sweden last week, the government of Abu Dhabi was toasting the recent climb of its own large language model to the top of a global performance ranking.

The milestone, along with the decision a week prior to make the model open-source, highlights two trends complicating the transatlantic effort to rein in the development of artificial intelligence. One is the global decentralization of digital know-how. The other is a growing willingness in the rest of the world to buck the wishes of the Western alliance.

As POLITICO reported last week, leaders on both sides of the Atlantic are looking for common ground between the (more permissive) U.S. and (more restrictive) European postures toward AI, in hopes of presenting a joint proposal to the G7 this fall.

Meanwhile, a letter released last week by AI experts compared the technology's risks to those of nuclear weapons. It took the USSR only four years after Hiroshima to detonate its own bomb, and the technology spread to roughly a half-dozen more nations from there. The barriers to the spread of AI are, to put it mildly, lower.

That means policymakers in Washington, Brussels and London are at risk of fighting the last war: stuck in a moment when the most important consumer markets and the firms on the bleeding edge of technical innovation were largely confined to the U.S. and the European Union. In 2023, it's becoming more important to account for developments in the rest of the world.

Of course, there are the usual suspects outside this bloc: Russia's state-owned Sberbank has invested heavily in AI, while China's global share of AI patents grew from less than 5 percent in 1996 to more than a quarter in 2021, according to one estimate, putting it in contention as a global leader in the technology.
But it's increasingly apparent that "Russia, China and the West" is not the whole story either when it comes to the global AI race.

Late last month, Abu Dhabi's state-run Technology Innovation Institute announced that it was making its Falcon 40B LLM, first released in March, open-source. The move coincided with the model's ascent to the top of a performance ranking of about 100 open AI models maintained by the New York-based AI firm Hugging Face. The institute, founded in 2020, touts its open-sourcing decision as a means of making AI more "inclusive."

DFD has written about the challenges of reining in open-source AI. When some of those open-source models are backed by sovereign states with a degree of geopolitical independence, the challenge is only compounded.

Outside the trans-Atlantic alliance, even a close ally like G7 member Japan views emerging digital technologies as an opportunity to chip away at U.S. tech dominance. That, too, promises to complicate coordinated AI regulation.

One point of divergence between the allies is already clear. In the U.S., in lieu of near-term rules tailored to AI, copyright law is seen as a possible avenue for slowing the technology's advance, given its potential to restrict the use of copyrighted material for training AI models. Japan's existing copyright laws, by contrast, are less restrictive. Domestic content creators have stepped up pressure on Japan's government in recent weeks to address the use of copyrighted works to train AI, but a fact sheet released last month by the office of Prime Minister Fumio Kishida reaffirmed the legality of training generative AI models on copyrighted works.

Kishida has said he wants Japan to lead global rule-making on AI through its chairmanship of the G7 this year.
Even assuming the G7 can reach a meaningful consensus, the gap between that and an effective global consensus is getting bigger, not smaller, at a time when AI advancement shows no signs of slowing down.