In responding to the rise of artificial intelligence, Washington has turned to its usual playbook, with the White House hosting tech CEOs as lawmakers float a variety of proposals on Capitol Hill. But the speed at which AI is developing, and the dire warnings from many of those who understand the technology best, are, to put it mildly, unusual.

That’s why one think tanker immersed in the technology believes the federal response needs to include a super-charged research project that will force tech companies to coordinate their efforts, create cordoned-off environments to test risky advances and pour resources into studying how these large language models actually work. In other words, a Manhattan Project for AI safety, as Samuel Hammond put it in an opinion essay published this afternoon in POLITICO Magazine.

Such an initiative, he argues, could forestall the risks of AI while giving regulators and technologists a chance to understand its inner workings well enough to make sure it does not bring about catastrophe.

Hammond is a senior economist at the Foundation for American Innovation, the new name for what until recently had been the Lincoln Network, a tech-focused think tank with a libertarian bent. In recent months, he has been wrestling with the societal implications of AI’s rapid rise.

In December, Hammond published an edition of his newsletter, Second Best, presciently titled “Before the Flood,” predicting the technology would strain many existing governance structures. It’s worth the click just to be reminded of the old days of five months ago, when the outputs of AI image generators still had a rough, dreamlike quality.

Compare that to the video Hammond created last month, a faux ad for Balenciaga — the avant-garde Paris fashion house known for its edgy marketing — featuring AI renderings of the George Mason University economics faculty reimagined as fashion icons.
Hammond told me he made the whole thing from scratch in a couple of hours, using publicly available media AI tools to generate scripts, mock-ups of their voices and the video itself. The video’s subject is absurd and entertaining, but the output, and the speed with which it was created, are uncanny.

In calling for a second Manhattan Project, Hammond brings the perspective of someone thinking professionally about AI policy as he tinkers at the outer boundary of what’s possible with the technology. His conclusion, as he writes in today’s piece: “Making the most of AI’s tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations.”

In arguing for this approach, Hammond rules out a moratorium on AI development in favor of maximum engagement with the technology. Indeed, recent events suggest that a moratorium might be impossible to enforce. On Thursday night, Bloomberg reported that a Google engineer recently warned in an internal document that open-source large language models threaten to out-compete privately owned versions being developed by tech companies.

In some sense, the cats are already out of the bag, and are rapidly evolving on their own in the wild. But this poses problems for a proactive federal response. Attempting to corral, study and domesticate these AI models is not the kind of problem our 18th-century governance architecture and 20th-century federal agencies were built for.

The original Manhattan Project succeeded in beating another government research program, that of Nazi Germany. A similar project for AI alignment would not be a race against some other government, but against the progress of a technology that is developing, in large part, independently of any government.

On the one hand, this strengthens the case for an exceptional, Manhattan Project-style effort.
On the other hand, it raises the question of whether even that would be inadequate to this new and confounding sort of challenge.