‘Four Battlegrounds’ shaping the U.S. and China’s AI race

From: POLITICO's Digital Future Daily - Tuesday, Feb 28, 2023, 09:02 pm
Presented by TikTok: How the next wave of technology is upending the global economy and its power structures

By Matt Berg

Presented by TikTok

With help from Derek Robertson

China's flag next to the American flag on the side of the Old Executive Office Building. | AP Photo

Artificial intelligence has already brought us killer robots, chatbots that can pen government speeches, programs that can process data faster than our mammalian minds, and software that can make apparently original art upon request.

So it makes sense that world powers are racing to dominate this new technological landscape. We’re on the cusp of a “new industrial revolution” defined by rapid innovations in new technology, longtime defense expert Paul Scharre writes in his new book Four Battlegrounds: Power in the Age of Artificial Intelligence, released today.

In the book, Scharre, who now serves as the Center for a New American Security’s vice president and director of studies, draws on his experience as an Army Ranger in Iraq and Afghanistan, his time leading a working group that shaped the Defense Department’s autonomous weapons policy, and his early look at AI chatbots before they were cool to tell the story of how AI is shaping geopolitics and competition. Our interview has been edited for clarity and brevity.

There are many personal anecdotes in the book. Was there one specific moment or event when you realized artificial intelligence would likely play a major role in the future?

When I was in Iraq during the surge in 2007, I saw this little ground robot being used to defuse a roadside bomb, and the lightbulb went on in my head about the value of robotics for the military. That was, for me, a real catalyst in the work that I've done since then.

In the 15 years since, first at the Pentagon and now at CNAS, I’ve been looking at how autonomy and artificial intelligence are affecting military operations and global security. I saw the value from a military standpoint of keeping people out of harm's way. Obviously since then, we've seen this explosion of AI in the deep learning revolution. So, when I finished up Army of None, I already had this book in mind.

China plays a major role in the book, given its major investments in and implementation of AI. How did it come to this, with the U.S. and China squaring off on a technological, futuristic battleground?

China is a major leader in artificial intelligence, and they've been investing heavily in improving the Chinese AI research community. The reason I think it's been a shock to Washington is that people in D.C. have sort of taken U.S. technology leadership for granted. The energy that existed in the U.S. government for investing in science and innovation in the ‘60s and ‘70s just hasn't been there.

For the last several decades, the U.S. government has stepped out of the game of funding basic research and development, and the private sector has played a much larger role. That's starting to change, with the $200 billion investment that Congress made in the CHIPS and Science Act, which is a really significant down payment on investing in American science and innovation. So, I think we're seeing Washington belatedly pivot to this technology competition with China. But China's a serious technology competitor.

China has things we simply don’t allow here, like facial recognition software everywhere. Do lax policies like that give Beijing a leg up? 

That’s a great question, because I think it's one that's on a lot of people's minds. Data on facial recognition doesn't necessarily help you in other areas. One of the arguments in favor of China's alleged data advantage is that China is a larger country with a much larger user base. That's true.

But what probably matters much more is the user base that tech companies have, and U.S. tech companies have global reach. Facebook and YouTube both have over 2 billion users, compared to WeChat’s 1.2 billion users. I actually think the conclusion is counter to what a lot of people might initially think — I don't think that China has a data advantage.

It's probably a relatively level playing field between the U.S. and China, and what's going to matter more is whether companies or government institutions have the ability to refine the data that they have and apply it toward machine learning applications.

Let’s go back to the U.S. The book opens with this riveting anecdote about the U.S. military’s AlphaDogfight Trials, which pitted a human F-16 pilot against an AI pilot in simulation. If it had been a real duel, the human would’ve been toast. How worried should people — both average citizens and those in the military — be about the incorporation of AI in real-life battle?

Pilots should be worried because AI is coming for their jobs. The reality is that, as in many areas, AI is going to offload tasks that people can do, and there are a lot of advantages to that in the private sector and in the military.

Pilots take some heat for this in the military because automation in drones has been able to physically replace what pilots do, enabling remotely piloted aircraft. Over time, with more autonomous features, AI will increasingly take over the piloting of the plane itself so that pilots are in a supervisory capacity, which is a good thing from a military standpoint.

One of the challenges is the problem that we’re seeing with chatbots like Microsoft's Bing and AI systems everywhere. AI systems break. They fail. They do surprising things, particularly in novel situations. Making AI systems safe is in many ways harder than making them capable. That's a problem for large tech companies, and it's a problem for the military as well.

So, the U.S. has guidelines it has to follow to ensure weapons involving AI are responsibly developed and deployed. But not every country has those, especially not authoritarian regimes. How could that play into the race for AI weapons?

The U.S. has been very forward-leaning in terms of its AI policies. But we don't see that same level of transparency from other countries, especially competitors like China. There's good reason to be concerned about how other countries might use AI, and that they might do it in a way that’s not consistent with the law of war.

There’s also a concern that countries may not be investing enough in making sure their systems are safe, which could be destabilizing by risking accidents or some kind of unintended escalation in a crisis. The balloon incident is a good example of this, where the latest information out of the U.S. government is that the balloon may have initially been blown off course. It highlights the challenge militaries can often have in controlling uncrewed or unmanned systems once they’re released.

Stepping away from military stuff – you had a sneak peek at ChatGPT before it was released. I assume the biggest danger with the software isn't the AI becoming sentient and taking over the world. What worries you most about this type of technology?

I think there's value in people finding ways to embrace the technology, where it might be useful or increase productivity. The caveat is that it does sometimes make things up, so you shouldn’t trust it.

I do think it's quite likely that we see a flood of AI-driven spam and propaganda, and that's something we're going to have to find ways to address as a society in the long run. There has been some interest in developing detectors that can tell whether text was written by, for example, ChatGPT, but I'm skeptical that they will work.

A bigger concern is that the best tech companies in the world don't know how to make these systems safe. It's not necessarily that these chatbots are claiming to be in love with people or arguing with them. The problem is that some of the best AI scientists don't know how to make them stop doing that. The solution that Microsoft has put in place right now, which is basically to cut off the conversation after a few replies, is very much a band-aid solution.

In the book, you say it seems likely China could reach its goal of being the world’s AI leader by 2030 unless the U.S. makes some “course corrections.” What would those corrections look like?

There are two really important things the United States should be focusing on that represent asymmetric advantages we have over China in the AI competition.

One is talent. The U.S. is the destination of choice for AI scientists from around the world, including from China, and China can't compete with that. The best AI scientists in the world want to come to the United States; they don't want to go to China. In fact, China's top AI students are coming to the U.S. for their graduate studies. That's a tremendous advantage the U.S. has in the global competition for talent, and we want to double down on it. We should be making it easier for people with advanced STEM degrees to stay in the United States.

The second area is hardware, where the Biden administration's export controls in October showed the leverage the U.S. has over China's access to advanced AI hardware. Those controls have effectively cut off China's access to advanced AI chips and to the tooling needed to manufacture its own. It's an incredibly powerful point of leverage: if you can deny China access to the most advanced AI chips, they're simply not able to compete at the frontier of AI research and development with the most cutting-edge models.

What do you most want people to take away from this book?

The U.S. has tremendous strengths in an AI competition with China, and I firmly believe that the United States can remain the global leader in artificial intelligence. That’s if we harness those strengths and work with U.S. allies; if we double down on our advantages in talent, drawing on some of the best and brightest from around the world, bringing them to the States and keeping them here; and if we invest in the next generation of research into semiconductor technology to ensure that U.S. companies stay dominant at key points in the semiconductor supply chain.

The U.S. has the opportunity to maintain a leadership position in AI that China simply cannot compete with.

 

A message from TikTok:

Everyone has a different idea of what privacy means, which is why TikTok made its privacy settings completely customizable. With just a few taps, anyone can decide who's able to message them directly or comment on their videos. When someone posts a video, they choose who sees it, whether it’s only them, friends or everyone. These are just some of the ways TikTok strives to keep everyone’s experience safer and more positive. Learn More.

 
ai rules of the road

The Partnership on AI, a nonprofit whose membership features, well, pretty much everybody who matters in tech and which is focused on establishing ethical standards and practices for the rapidly growing technology, yesterday released a guide for “Responsible Practices in Synthetic Media.”

Aimed at establishing standards for the generative AI models now taking the world by storm (not literally), the document offers guidance for those developing, using and selling the models. It describes “responsible” usage as including, but not limited to, entertainment, art, satire, education and research; in keeping with most similar industry-wide standards-setting documents, it urges transparency and collaboration on the part of those building models like DALL-E or ChatGPT.

“Synthetic media is not inherently harmful, but the technology is increasingly accessible and sophisticated, magnifying potential harms and opportunities,” the authors write in conclusion, citing its potential for use in scams, harassment, and electioneering. — Derek Robertson

 


 
a crypto tidbit

The Robinhood app icon is seen on a smartphone screen. | Patrick Sison/AP Photo

As the regulatory state’s crackdown on crypto continues, one company better known for its business in traditional finance has found itself under scrutiny for dipping its toe in the pool.

POLITICO’s Declan Harty reported for Pros yesterday that the company that produces the stock-trading app Robinhood received an investigative subpoena from the Securities and Exchange Commission in December, “related to Robinhood's crypto listings, its custody of crypto tokens and platform operations” in the wake of FTX’s collapse. (Robinhood declined to comment.)

As POLITICO’s Zach Warmbrodt and Bjarke Smith-Meyer noted over the weekend and in yesterday’s Morning Money newsletter, this year’s flurry of regulatory activity around crypto has been so disruptive that it has Europe eyeing a potential spot as a more stable home base for the young industry. — Derek Robertson

 

JOIN POLITICO ON 3/1 TO DISCUSS AMERICAN PRIVACY LAWS: Americans have fewer privacy rights than Europeans, and companies continue to face a minefield of competing state and foreign legislation. There is strong bipartisan support for a federal privacy bill, but it has yet to materialize. Join POLITICO on 3/1 to discuss what it will take to get a federal privacy law on the books, potential designs for how this type of legislation could protect consumers and innovators, and more. REGISTER HERE.

 
 
tweet of the day

The main reason for OpenAI to partner with Microsoft: Microsoft has long solved the Alignment Problem

the future in 5 links
  • Sci-fi magazines are experiencing a deluge of (very bad) AI-generated stories.
  • Meta and Snap are marshaling major resources behind generative AI.
  • The Biden administration has barred CHIPS Act recipients from using the cash for stock buybacks.
  • Read a horror story about facial recognition and mistaken identity.
  • A Tokyo-based startup is hoping to kickstart the “lunar economy.”

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 

DOWNLOAD THE POLITICO MOBILE APP: Stay up to speed with the newly updated POLITICO mobile app, featuring timely political news, insights and analysis from the best journalists in the business. The sleek and navigable design offers a convenient way to access POLITICO's scoops and groundbreaking reporting. Don’t miss out on the app you can rely on for the news you need, reimagined. DOWNLOAD FOR iOS | DOWNLOAD FOR ANDROID.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 

 


