Artificial intelligence has already brought us killer robots, chatbots that can pen government speeches, programs that can process data faster than our mammalian minds, and software that can make apparently original art upon request. So it makes sense that world powers are racing to dominate this new technological landscape.

We’re on the cusp of a “new industrial revolution” defined by rapid innovations in new technology, longtime defense expert Paul Scharre writes in his new book Four Battlegrounds: Power in the Age of Artificial Intelligence, released today. In the book, Scharre, who now serves as the Center for a New American Security’s vice president and director of studies, draws on his experiences as an Army Ranger in Iraq and Afghanistan, leading a working group that shaped the Defense Department’s autonomous weapons policy, and learning about AI chatbots before they were cool to tell the story of how AI is shaping geopolitics and competition.

Our interview has been edited for clarity and brevity.

There are many personal anecdotes in the book. Was there one specific moment or event when you realized artificial intelligence would likely play a major role in the future?

When I was in Iraq during the surge in 2007, I saw this little ground robot being used to defuse a roadside bomb, and the lightbulb went on in my head about the value of robotics for the military. That was, for me, a real catalyst in the work that I’ve done since then. In the 15 years since, first at the Pentagon and now at CNAS, I’ve been looking at how autonomy and artificial intelligence are affecting military operations and global security. I saw the value, from a military standpoint, of keeping people out of harm’s way. Obviously since then, we’ve seen this explosion of AI in the deep learning revolution. So, when I finished up Army of None, I already had this book in mind.

China plays a major role in the book, given its major investments in and implementation of AI. How did it come to this, with the U.S. and China squaring off on a technological, futuristic battleground?

China is a major leader in artificial intelligence, and they’ve been investing heavily in improving the Chinese AI research community. The reason I think it’s been a shock to Washington is that people in D.C. have sort of taken U.S. technology leadership for granted. The energy that existed in the U.S. government for investing in science and innovation in the ’60s and ’70s just hasn’t been there. For the last several decades, the U.S. government has stepped out of the game of funding basic research and development, and the private sector has played a much larger role. That’s starting to change with the $200 billion investment that Congress made in the CHIPS and Science Act, which is a really significant down payment on investing in American science and innovation. So, I think we’re seeing Washington belatedly pivot to this technology competition with China. But China’s a serious technology competitor.

China has things we simply don’t allow here, like facial recognition software everywhere. Do lax policies like that give Beijing a leg up?

That’s a great question, because I think it’s one that’s on a lot of people’s minds. Data on facial recognition doesn’t necessarily help you in other areas. One of the arguments in favor of China’s alleged data advantage is that China is a larger country with a much larger user base. That’s true. But what probably matters much more is the user base that tech companies have, and U.S. tech companies have global reach.
Facebook and YouTube both have over 2 billion users, compared to WeChat’s 1.2 billion. I actually think the conclusion is counter to what a lot of people might initially think — I don’t think that China has a data advantage. It’s probably a relatively level playing field between the U.S. and China, and what’s going to matter more is whether companies or government institutions have the ability to refine the data they have and apply it toward machine learning applications.

Let’s go back to the U.S. The book opens with a riveting anecdote about the U.S. military’s AlphaDogfight Trials, which pitted a human F-16 pilot against an AI pilot. If it had been a real duel, the human would’ve been toast. How worried should people — both average citizens and those in the military — be about the incorporation of AI in real-life battle?

Pilots should be worried because AI is coming for their jobs. The reality is that, as in many areas, AI is going to offload tasks that people can do, and there are a lot of advantages to that in the private sector or in the military. Pilots take some heat for this in the military because automation in drones has been able to physically replace what pilots do, in terms of enabling remotely piloted aircraft. Over time, with more autonomous features, it’ll increasingly hand over the piloting of the plane itself so that pilots are in a supervisory capacity, which is a good thing from a military standpoint.

One of the challenges is the problem that we’re seeing with chatbots like Microsoft’s Bing and AI systems everywhere. AI systems break. They fail. They do surprising things, particularly in novel situations. Making AI systems safe is in many ways harder than making them capable. That’s a problem for large tech companies, and it’s a problem for the military as well.

So, the U.S. has guidelines it has to follow to ensure weapons involving AI are responsibly developed and deployed. But not every country has those, especially not authoritarian regimes. How could that play into the race for AI weapons?

The U.S. has been very forward-leaning in terms of its AI policies. But we don’t see that same level of transparency from other countries, especially competitors like China. There’s good reason to be concerned about how other countries might use AI, and that they might do it in a way that’s not consistent with the law of war. There’s also a concern that countries may not be investing enough in making sure their systems are safe, which potentially could be destabilizing by risking accidents or some kind of unintended escalation in a crisis. The balloon incident is a good example of this, where the latest information out of the U.S. government is that the balloon may have initially been blown off course. It highlights this challenge that militaries can often have of controlling uncrewed or unmanned systems once they’re released.

Stepping away from military stuff – you had a sneak peek of ChatGPT before it was released. I assume the biggest danger with the software isn’t the AI becoming sentient and taking over the world. What worries you most about this type of technology?

I think there’s value in people finding ways to embrace the technology, where it might be useful or increase productivity. The caveat is that it does sometimes make things up, so you shouldn’t trust it.
I do think it’s quite likely that we’ll see a flood of AI-driven spam and propaganda, and that’s something that we’re going to have to find ways to address as a society in the long run. While there has been some interest in developing AI text-generation detectors that can tell whether text has been written by, for example, ChatGPT, I’m skeptical that detectors will work.

A bigger concern is that the best tech companies in the world don’t know how to make these systems safe. It’s not necessarily that these chatbots are claiming to be in love with people or arguing with them. The problem is that some of the best AI scientists don’t know how to make them stop doing that. The solution that Microsoft has put in place right now, which is basically to cut off the conversation after a few replies, is very much a band-aid solution.

In the book, you say it seems likely China could reach its goal of being the world’s AI leader by 2030, unless the U.S. makes some “course corrections.” What would those corrections look like?

There are two really important things that the United States should be focusing on that are asymmetric advantages we have over China in the AI competition. One is talent. The U.S. is the destination of choice for AI scientists from around the world, including from China, and China can’t compete with that. The best AI scientists in the world want to come to the United States; they don’t want to go to China. In fact, China’s top AI students are coming to the U.S. for their graduate studies. That’s a tremendous advantage that the U.S. has in the global competition for talent, and we want to double down on that. We should be making it easier for people with advanced STEM degrees to stay in the United States.

The second area is hardware, where we’ve seen, with the Biden administration’s export controls in October, the leverage that the U.S. has over China’s access to advanced AI hardware. Those controls have effectively cut off China’s access to advanced AI chips and the tooling needed to manufacture its own chips. It’s an incredibly powerful point of leverage. If you can deny China access to the most advanced AI chips, then they’re simply not able to compete at the frontier of AI research and development with the most cutting-edge models.

What do you most want people to take away from this book?

The U.S. has tremendous strengths in an AI competition with China, and I firmly believe that the United States can remain the global leader in artificial intelligence. That’s if we harness those strengths and work with U.S. allies; if we double down on advantages in talent, drawing on some of the best and brightest from around the world, bringing them to the States and keeping them here; and if we invest in the next generation of research into semiconductor technology to ensure that U.S. companies stay dominant at key points of the semiconductor supply chain. The U.S. has the opportunity to maintain a leadership position in AI that China simply cannot compete with.