AI is supposed to change everything. But what if it changes… literally everything, right down to the very fabric of government and society?

That's the vision presented in a recent series of essays from Samuel Hammond, a Canadian-American economist who has written extensively on AI, technology and social policy. Titled "AI and Leviathan," the essay series compares AI's impact on society to the landmark work of political philosophy that recast man's relationship with authority in the aftermath of another information revolution, that of the printing press.

The libertarian-leaning Hammond's central argument is simple, but has far-reaching implications. As AI tools become indistinguishable from magic, to borrow from Arthur C. Clarke, he believes modern governments will very quickly lose their grip on power as AI-powered companies and individuals move too fast for them to control. He envisions cybercriminals harrying outmatched law enforcement and security agencies, companies providing services far more efficiently than the federal bureaucracy ever could, and a court system incapacitated by an onslaught of AI-generated briefs and complaints.

In his most recent essay he predicts three possible outcomes for world governments: "Chinese-style police state / Gulf-style monarchy; anarchic failed state; or high-tech open society with AI-fortified e-governments on the Estonia model."

As Washington wrestles with regulating AI, his version of the future is an unsettling one where it might not matter what leaders do — the horse is already out of the barn. I interviewed Sam today about how he came to this prediction, and what, if anything, leaders in Washington and elsewhere can do to make sure democratic government still has a voice in this process.

The following conversation has been edited and condensed for clarity:

At what point did AI-fueled disruption go from a fantasy scenario to something you felt was an important policy issue?

Information technology has coexisted alongside our existing institutions for a while now, but it's reaching a point of criticality — where, especially with artificial intelligence, it's able to combine bits into atoms and do useful things in the real world, more than just affect pure communication.

There's a growing user experience gap between government and the private sector. You can hail an Uber, and you're upset if it doesn't arrive within three minutes. Then you go to the DMV, or try to file your taxes, or petition the government on any particular issue, and the difference between the two experiences is very stark.

How is the U.S. positioned to respond to this challenge, compared to other liberal democracies?

The U.S. is in a tricky position. We're the center of the AI revolution, and no other country comes close to us in terms of talent, investment and potential to build the startups that are going to change the world. But at the same time, our political system is at a low point for trust and consensus.

There's also a sense in which our institutions have become crystallized. A lot of people have noted that the U.S. reforms itself every 40 to 60 years through a kind of internal regime change. The industrial revolution created a series of social dynamics that necessitated the construction of an administrative state and a small welfare state. That led to a reinterpretation of the Constitution that would allow for regulation of interstate commerce, for administrative agencies and for programs like Social Security.
If those things didn't happen, I don't think the U.S. would have been the dominant country in the 20th century. There was enormous social, political and economic pressure for these changes to occur, including bottom-up cultural pressure; that led to this "living Constitution" that tracked in fits and starts with where the public was going.

Today we're living off the capital that was built up during the post-war period, and our understandings of legitimacy and procedure are much more crystallized and brittle. If we ever need to undergo a burst of legislative activity as productive as the New Deal era, or a reinterpretation of our Constitution to adapt to modern technology, I think it's going to be much more difficult than in the past.

You wrote in 2015 about the "Estonia model" as a more harmonious way for government to integrate new technology. What is that, and how have you updated the idea for the AI era?

This has only become more important, as we're realizing with deepfakes and disinformation and the war in Ukraine. When Estonia built its modern government in the mid-1990s, it had a lot of young people in government who had a hacker ethic and were early adopters of the internet. They laid a foundation for an internet-native government. Today, almost everything Estonia does runs on a cryptographically secured, distributed data exchange layer — basically an early version of the blockchain that you can use for everything from riding the bus to paying your taxes to enrolling in school.

That was done because Estonia felt a sense of threat from being so close to Russia, which is very aggressive with cyberattacks. Being so distant and separated by oceans has given the United States a false sense of security. Even the people who should know aren't really 100 percent sure how compromised our infrastructure is.

The Estonia model is where we need to get to, but it's incredibly hard to get there because of path dependency. It's a wicked problem: even if we can all agree that it's a good endpoint, the path there involves upsetting different groups and pulling and pushing against different interests.

So, preview the next part of this series for us: What's a path to avoiding the more undesirable scenarios you've laid out so far?

It will be figuring out how to get to Estonia. There's a need for every major function of government to embrace AI early and often. I wrote about what will happen when AI tax accountants let everyone embed their tax liability in multi-tiered partnerships. The only way you fight that fire is with AI tax examiners. The IRS should be building paired datasets and fine-tuning language models on auditor training manuals and so forth, so that it can field the "100x"-type examiner, just to keep up. This is the "Red Queen" dynamic, from the scene in "Through the Looking-Glass" where the Red Queen tells Alice she needs to run just to stay in place.

How can pragmatists in government keep up with the sometimes very hardcore AI community?

I'm trying to talk to the accelerationists and say, look, I understand accelerationism. [Futurist philosopher] Nick Land, to the extent that he's a figure in the background of all this, has the view that the intelligence explosion already happened, that this is all baked in as an autocatalytic process, and that most of what governments do can be reinterpreted as trying to contain capitalism.
And so similarly, you can try to contain artificial intelligence, but as soon as it breaks through it's going to replicate and affect the world, and we should embrace this because we'll all become atomic ions and upload ourselves into the Lovecraftian nightmare machine. There are people who really believe that.

I'm an internet native and I think it'd be awesome to upload my brain and merge with the hive mind, but I just want ordinary people and our leadership to know what they're signing up for, and to read the waiver very closely before we load civilization into the world-historic Titan submersible. I'm not saying decelerate, but if we're going to go through this, everyone should read the fine print. If there is an opportunity for us to shape the path of history, we have to do it now, because the initial conditions set the long-term trend.