A security loophole in Biden’s AI executive order

POLITICO’s Digital Future Daily | Thursday, Nov. 30, 2023

By Derek Robertson



President Joe Biden signs a new executive order guiding his administration's approach to artificial intelligence. (Photo by Chip Somodevilla/Getty Images)

As the gears start turning to implement President Joe Biden’s immense executive order on AI, questions have been percolating in the tech world: Yes, it’s long and sweeping, but does it focus on the right things?

Two computer science professors — Swarat Chaudhuri of the University of Texas at Austin and Armando Solar-Lezama of MIT — wrote us with their concerns about flaws in the order that might hinder our ability to improve safety and cybersecurity in an increasingly AI-driven world.

A year to the day after ChatGPT launched, we invited them to elaborate on their concerns about the White House approach to AI in a guest essay.

The Biden administration’s AI executive order sets new standards for the safety and security of artificial intelligence, and specifically calls out security risks from “foundation models,” the general-purpose statistical models trained on massive datasets that power AI systems like ChatGPT and DALL-E.

As researchers, we agree the safety and security concerns around these models are real.

But the approach in the executive order has the potential to make those risks worse, by focusing on the wrong things and closing off access to the people trying to fix the problem.

Large foundation models have shown an astounding ability to generate code, text and images, and the executive order considers scenarios where such models — like the AI villain in last summer's "Mission: Impossible" — create deadly weapons, perform cyberattacks and evade human oversight. The order’s response is to impose a set of reporting requirements on foundation models whose training takes more than a certain (very large) amount of computing power.
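
To make the scale of that threshold concrete: the order's reporting trigger is 10^26 integer or floating-point operations, and a common back-of-the-envelope rule puts the cost of training a dense transformer at roughly 6 x parameters x training tokens operations. The sketch below simply runs that arithmetic; the rule of thumb and the model sizes are illustrative assumptions, not figures for any real system.

```python
# Back-of-the-envelope check against the executive order's reporting threshold.
# The "6 * parameters * tokens" estimate is a rough community heuristic, and the
# model sizes below are hypothetical, chosen only to show the arithmetic.
THRESHOLD_OPS = 1e26  # the order's reporting trigger, in total training operations

def training_ops(parameters: float, tokens: float) -> float:
    """Approximate total operations needed to train a dense transformer."""
    return 6 * parameters * tokens

for params, tokens in [(7e9, 2e12), (70e9, 5e12), (2e12, 15e12)]:
    ops = training_ops(params, tokens)
    status = "reportable" if ops > THRESHOLD_OPS else "below the threshold"
    print(f"{params:.0e} parameters x {tokens:.0e} tokens ~ {ops:.1e} ops ({status})")
```

Only the last, very large hypothetical run crosses the line; everything else, including models big enough to do real damage, stays comfortably below it.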

 


The specific focus on the risks of the largest models, though well-intentioned, is flawed in three major ways. First, it’s inadequate: by focusing on large foundation models, it overlooks the havoc smaller models can wreak. Second, it’s unnecessary: we can build targeted mechanisms for protecting ourselves from the bad applications. Third, it represents a regulatory creep that could, in the long run, end up favoring a few large Silicon Valley companies at the expense of broader AI innovation.

FraudGPT, a malicious AI service already available on the dark web, is a good illustration of the shortcomings of the Biden approach. Think of FraudGPT as an evil cousin of ChatGPT: While ChatGPT has built-in safety guardrails, FraudGPT excels at writing malicious code that forms the basis of cyberattacks.

To build a system like FraudGPT, you would start with a general-purpose foundation model and then "fine-tune" it using additional data — in this case, malicious code downloaded from seedy corners of the internet. The foundation model itself doesn't have to be a regulation-triggering behemoth. You could build a significantly more powerful FraudGPT completely under the radar of Biden’s executive order. This doesn't make FraudGPT benign.
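
To see what that "fine-tune" step looks like mechanically, here is a minimal sketch using the open-source Hugging Face libraries. The tiny base model, the two-line stand-in dataset and the training settings are placeholders for illustration, not a reconstruction of FraudGPT.

```python
# A minimal sketch of the generic fine-tuning recipe described above, using
# open-source Hugging Face libraries. Model, data and settings are placeholders.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "distilgpt2"  # a small, freely downloadable foundation model (placeholder)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-family models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Stand-in for the "additional data" used to specialize the model.
extra_data = ["example snippet one", "example snippet two"]
dataset = Dataset.from_dict({"text": extra_data}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # yields a specialized model; no reporting threshold is triggered
```

The point is how cheap this generic recipe is: specializing an existing model never comes close to the training-compute threshold the order watches for.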

Just because one can build models like FraudGPT and sneak them under the reporting threshold doesn't mean that cybersecurity is a lost cause, however. In fact, AI technology may offer a way to strengthen our software infrastructure.

Most cyberattacks work by exploiting bugs in the programs being attacked. In fact, the world's software systems are, to an embarrassing degree, full of bugs. If we could make our software more robust overall, the threat posed by rogue AIs like FraudGPT — or by human hackers — could be minimized.

This may sound like a tall order, but the same technologies that make rogue AIs such a threat can also help create secure software. There’s an entire sub-area of computer science called "formal verification" that focuses on methods to mathematically prove a program is bug-free. Historically, formal verification has been too labor-intensive and expensive to be broadly deployed — but new foundation-model-based techniques for automatically solving mathematical problems can bring down its cost.
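
To give a flavor of what "mathematically prove a program is bug-free" means, here is a minimal sketch using the open-source Z3 solver, one verification tool among many (the order names none). It checks, for every possible pair of 32-bit bounds, whether a binary-search midpoint calculation can fall out of range: the naive version has a classic overflow bug, while the safe variant is proved correct for all inputs. The property and the toy program are illustrative choices.

```python
# A small taste of formal verification with the Z3 solver (pip install z3-solver).
from z3 import And, BitVec, Not, Solver, unsat

lo, hi = BitVec("lo", 32), BitVec("hi", 32)   # signed 32-bit integers
pre = And(lo >= 0, lo <= hi)                  # precondition: valid, non-negative bounds

naive_mid = (lo + hi) / 2                     # can overflow when lo + hi exceeds 2**31 - 1
safe_mid = lo + (hi - lo) / 2                 # overflow-safe variant

def proved_in_range(mid):
    """True iff 'lo <= mid <= hi' holds for every input satisfying the precondition."""
    s = Solver()
    s.add(pre, Not(And(lo <= mid, mid <= hi)))  # look for any counterexample
    return s.check() == unsat                   # no counterexample exists => proved

print("naive midpoint proved safe:", proved_in_range(naive_mid))  # False: overflow found
print("safe midpoint proved safe: ", proved_in_range(safe_mid))   # True: proved for all inputs
```

Unlike testing, which samples a handful of inputs, the solver reasons about every possible input. The catch is that writing such specifications for real systems is labor-intensive, which is exactly the cost that foundation-model-based automation could bring down.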

 


To its credit, the executive order does acknowledge the potential of AI technology to help build secure software. This is consistent with other positive aspects of the order, which call for solving specific problems such as algorithmic discrimination or the potential risks posed by AI in healthcare.

By contrast, the order's requirements on large foundation models do not respond to a specific harm. Instead, they respond to a narrative that focuses on potential existential dangers posed by foundation models, and on how a model is created rather than how it is used.

Focusing too tightly on the big foundation models also poses a different kind of security risk.

The current AI revolution was built on decades of decentralized, open academic research and open-source software development. And solving difficult, open-ended problems like AI safety or security also requires an open exchange of ideas.

Tight regulations around the most powerful AI models could, however, shut this off and leave the keys to the AI kingdom in the hands of a few Silicon Valley companies.

Over the past year, companies like OpenAI and Anthropic have feverishly warned the world about the risks of foundation models while developing those very models themselves. The subtext is that they alone can be trusted to safeguard foundation model technology.

Looking ahead, it’s reasonable to worry that the modest reporting requirements in the executive order may morph into the sort of licensing requirements for AI work that OpenAI CEO Sam Altman called for last summer. Especially as new ways to train models with limited resources emerge, and as the price of computing goes down, such regulations could start hurting the outsiders — the researchers, small companies, and other independent organizations whose work will be necessary to keep a fast-moving technology in check.

 

 
Kissinger, the AI pundit?


As you wade through the barrage of assessments of Henry Kissinger’s legacy (he died this week, age 100), it’s worth remembering his late-life interest in AI.

In 2021, the former statesman co-authored, with Google mogul Eric Schmidt and the computer scientist Daniel Huttenlocher, a book modestly titled “The Age of AI and Our Human Future,” warning that AI could disrupt civilization and required global responses.

Although it wasn’t always kindly received — Kevin Roose, in the Times, called it “cursory and shallow in places, and many of its recommendations are puzzlingly vague” — Kissinger did not let go of the subject. He recorded lengthy videos on AI, and this spring, at a sprightly 99, proclaimed in a Wall Street Journal op-ed that generative AI presented challenges “on a scale not experienced since the beginning of the Enlightenment,” an observation that gave the U.S. business elite a wake-up call.

As recently as last month, Kissinger co-wrote an essay in Foreign Affairs on “The Path to AI Arms Control,” with Harvard’s Graham Allison.

It's hard to know exactly what Kissinger wrote himself, or what motivated this final intellectual chapter — we did email one of his co-authors, who didn't respond by press time. (He was reportedly introduced to the topic by Eric Schmidt at a Bilderberg conference.) But it’s not hard to imagine that as a persuasive, unorthodox thinker often accused of inhumanity, Kissinger saw an alien new thought process that was even more unorthodox, even less human, potentially even more persuasive — and he wanted people to know it was time to worry. — Stephen Heuser

 

Tweet of the Day

Tweet by @TopNotchQuark aka Quarked up Shawty: Spotify wrapped is to the chronically online what Myers Briggs is to your average tech bro.

via @TopNotchQuark on Twitter

The Future in 5 Links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com) and Daniella Cheslow (dcheslow@politico.com).


 

