The fight over AI biosecurity risk takes a twist

From: POLITICO's Digital Future Daily - Tuesday, Feb 06, 2024, 09:45 pm
How the next wave of technology is upending the global economy and its power structures

By Brendan Bordelon

With help from Rebecca Kern and Derek Robertson

The OpenAI logo. | Stefani Reynolds/AFP via Getty Images

In the pantheon of existential dangers posed by the rise of artificial intelligence, few loom larger than biosecurity — the fear that generative AI could help bad actors engineer superviruses and other pathogens, or even that an AI could one day create deadly bioweapons all on its own.

The Biden administration has paid special attention to the issue, giving biosecurity a prominent place in the AI executive order it unveiled in October. Key members of the Senate are also anxious about the merger of AI and biotechnology.

But how realistic is the threat, and what evidence exists to support it? Those questions have started to take some big twists lately.

A white paper published by OpenAI last week poured gasoline on the growing debate over the possibility that terrorists, or scientists, or just mischief-makers could use artificial intelligence to build a world-ending bioweapon.

The paper largely downplayed the concern, concluding that GPT-4, OpenAI’s most powerful large language model, provides “at most a mild uplift” for biologists working to create lethal viruses. But the company’s relatively sanguine view was attacked by Gary Marcus, an emeritus psychology professor at New York University who has more recently become a figure in the AI policy space.

On Sunday, Marcus accused OpenAI researchers of misanalyzing their own data. He said the company used an improper statistical test, and argued that the paper’s findings actually show that AI models like GPT-4 do meaningfully raise the ability of biologists, particularly expert ones, to create dangerous new pathogens.

The NYU professor added that if he had peer-reviewed OpenAI’s paper, he would’ve sent it back with “a firm recommendation of ‘revise and resubmit.’”

If we’re wrong about the risks, Marcus pointed out, humans don’t get to make that mistake twice: “If an LLM equips even one team of lunatics with the ability to build, weaponize and distribute even one pathogen as deadly as covid-19, it will be a really, really big deal,” he warned.

In response to Marcus’ critique, Aleksander Madry, head of preparedness at OpenAI, said the company was “very careful to only report what our research data says, and in this case, we found there was a (mild) uplift in accessing biological information with GPT-4 that merits additional research.”

In a nod to Marcus’ claim that OpenAI used the wrong testing parameters, Madry said that the research paper “included discussion of a range of statistical approaches and their relevance.” But he also said that more work needs to be done on “the science of preparedness, including how we determine when risks become ‘meaningful.’”
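
For readers wondering why the choice of statistical test carries so much weight in this dispute, here is a minimal sketch of the underlying issue, written in Python with the statsmodels library and entirely invented numbers that are not OpenAI's data: when a study runs many comparisons, a multiple-comparison correction such as Bonferroni raises the bar for significance, so the same handful of results can register as a real uplift without the correction and as statistical noise with it.

    # Hypothetical illustration only: these p-values are invented and are not
    # drawn from OpenAI's study or from Marcus's critique.
    from statsmodels.stats.multitest import multipletests

    # Imagine five task-level comparisons (say, accuracy, completeness, speed
    # and so on) between an internet-only group and an internet-plus-GPT-4 group.
    raw_p_values = [0.03, 0.04, 0.20, 0.50, 0.70]

    # Uncorrected, two of the five comparisons clear the usual 0.05 threshold.
    print([p < 0.05 for p in raw_p_values])          # [True, True, False, False, False]

    # A Bonferroni correction multiplies each p-value by the number of comparisons,
    # and nothing clears the threshold: the same data now reads as "no uplift."
    reject, corrected, _, _ = multipletests(raw_p_values, alpha=0.05, method="bonferroni")
    print([bool(r) for r in reject])                 # [False, False, False, False, False]
    print([round(float(p), 2) for p in corrected])   # [0.15, 0.2, 1.0, 1.0, 1.0]

Whether that kind of correction is appropriate for a safety evaluation, where missing a real effect may be costlier than a false alarm, is essentially the methodological question Marcus and OpenAI are arguing over.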

It’s easy to understand why many observers fear the looming marriage of AI and biotechnology. One of AI’s most powerful demonstrations to date has been in biology, where a system called AlphaFold — developed by Google DeepMind — has proved remarkably good at predicting the structures of complex proteins. And automated synthesis machines can already crank out genetic material on request.

Accordingly, concern has swept across the highest levels of government. In April, Sen. Martin Heinrich (D-N.M.), one of Senate Majority Leader Chuck Schumer’s three top lieutenants on AI legislation, told POLITICO that AI-boosted bioweapons were one of the “edge cases” keeping him up at night. A paper published in June by researchers at the Massachusetts Institute of Technology sent a shudder across Capitol Hill with its warning that AI-powered chatbots could assist in the development of new pathogens, including for people “with little or no laboratory training.” In September, researchers from the RAND Corp. and other top think tanks warned senators that “existing AI models are already capable of assisting nonstate actors with biological attacks that would cause pandemics, including the conception, design, and implementation of such attacks.”

By October, the anxiety had reached the White House — the AI executive order signed by President Joe Biden included new screening mechanisms for companies involved in gene synthesis and promoted know-your-customer rules for firms providing synthetic genes and other biotech tools to researchers. Top researchers at RAND played a key role in ensuring those biosecurity requirements made it onto the president’s desk.

But many experts still see a big gap between what’s theoretically possible and what’s actually likely to happen, and they question how much AI would really change the equation.

Skeptical researchers say there’s almost nothing an LLM can teach amateur biologists that they couldn’t already learn on Google, and question whether policymakers should spend time and energy on such a speculative risk.

Researchers like Nancy Connell, a biosecurity expert at Rutgers University, have even claimed that an avalanche of tech dollars is skewing how policy experts approach the risks at the intersection of AI and biosecurity. Groups like Open Philanthropy, an effective altruist organization funded by billionaire Facebook co-founder Dustin Moskovitz, have pumped hundreds of millions of dollars into Washington’s AI ecosystem in an effort to focus policymakers on the technology’s existential risks to humanity, including bioweapons.

The OpenAI paper is part of a small wave of new research casting doubt on the potential bio-risks of AI. The congressionally mandated National Security Commission on Emerging Biotechnology (NSCEB) issued a report last week that claimed LLMs “do not significantly increase the risk of the creation of a bioweapon.” Even RAND has walked back some of its earlier claims, publishing a new report last month that found the current generation of LLMs “[do] not measurably change the operational risk” of a biological attack.

But the debate over AI’s impact on biosecurity is far from over. Even skeptical researchers say it’s wise to keep a close eye on the nexus of fast-moving technologies like AI and biotech. While the NSCEB downplayed fears over the current generation of LLMs, it is concerned about the potential for “biological design tools,” or BDTs — AI models that process biological data in the same way that LLMs process human language — to supercharge the ability of trained biologists to create deadly new diseases.

The commission warned that if BDTs are one day merged with LLMs, even amateur biologists could get a boost.

Gregory C. Allen, an AI researcher at the Center for Strategic and International Studies think tank, gave OpenAI credit for “proactively” examining whether its technology raises biosecurity risks. But he takes little solace in the company’s finding that today’s AI systems are unlikely to help create killer pathogens.

“When you have a few notable leaders in this industry predicting human-level AI in as little as five years, we should recognize that where we currently are doesn’t necessarily tell us very much about where we might be going in terms of future AI and bioweapon risk,” Allen said.

meta's pre-plan

Meta says it will start labeling AI-generated images on Instagram, Facebook and Threads — eventually.

As election season approaches, large social media companies are getting serious about the threats that artificial intelligence could pose to democracy (like that robocall of an artificially generated Joe Biden asking voters to skip the New Hampshire primary). Last fall, Meta began applying an “Imagined with AI” label to photorealistic images created with its own Meta AI system. Now, the company said in a blog post, it will start applying visible labels in the coming months to AI-generated images from its competitors as well.

But the technology isn’t quite ready for prime time. Meta’s announcement was thin on specifics, since there still isn’t an industry-wide standard for how to label AI-generated content. The plan doesn’t cover AI-generated audio and video, Meta said, because other companies are not yet embedding metadata in those types of content.

Nick Clegg, Meta’s president of global affairs, said the company is working with the industry forum Partnership on AI and will start by labeling AI-generated images from OpenAI, Google, Adobe, Midjourney and Shutterstock as those companies implement their plans for adding metadata to images created by their tools.

“Meta’s policy is an important but still inadequate step to address these profound concerns,” Robert Weissman, president of watchdog group Public Citizen, told POLITICO. He said the “major worry” is the lack of an industry standard for video and audio, meaning the most concerning deepfake videos and audio will still evade Meta’s policy. — Rebecca Kern

europe's risk factor

Now that the European Union has finally set the text of its proposed AI Act, it’s worth taking a closer look at what’s actually in it.

POLITICO’s Gian Volpicelli took a deep dive for Pros today, finding a few key takeaways:

  • The risk model: The act applies progressively tighter regulatory strictures to AI systems based on the “risk” their applications pose, meaning silly camera filters likely go untouched while AI used in college admissions definitely does not. It also bans some uses outright, and places specific rules on “general-purpose” models like OpenAI’s ChatGPT.
  • The bans: Among those uses outright banned are subliminal or exploitative practices, biometric detection of race, sex, or union membership, China-style social scoring, and real-time facial recognition, among others.
  • The big models: The Act imposes special rules on the general-purpose AI models that power more targeted applications. Developers “will have to keep detailed technical documentation; help the companies or people deploying their models understand the tools’ functionality and limits; provide a summary of the copyrighted material (such as texts or images) used to train the models; and cooperate with the European Commission and the national enforcing authorities” on compliance, Gian reports, with additional reporting requirements for models deemed to pose a “systemic risk” of catastrophe. — Derek Robertson
Tweet of the Day

Ten months ago, we launched the Vesuvius Challenge to solve the ancient problem of the Herculaneum Papyri, a library of scrolls that were flash-fried by the eruption of Mount Vesuvius in 79 AD. Today we are overjoyed to announce that our crazy project has succeeded. After 2000 years, we can finally read the scrolls


Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).




