In the pantheon of existential dangers posed by the rise of artificial intelligence, few loom larger than biosecurity — the fear that generative AI could help bad actors engineer superviruses and other pathogens, or even that an AI could one day create deadly bioweapons all on its own.

The Biden administration has paid special attention to the issue, giving biosecurity a prominent place in the AI executive order it unveiled in October. Key members of the Senate are also anxious about the merger of AI and biotechnology.

But how realistic is the threat, and what evidence exists to support it? Those questions have started to take some big twists lately.

A white paper published by OpenAI last week poured gasoline on the growing debate over the possibility that terrorists, or scientists, or just mischief-makers could use artificial intelligence to build a world-ending bioweapon. The paper largely downplayed the concern, concluding that GPT-4, OpenAI’s most powerful large language model, provides “at most a mild uplift” for biologists working to create lethal viruses.

But the company’s relatively sanguine view was attacked by Gary Marcus, an emeritus psychology professor at New York University who has more recently become a figure in the AI policy space.

On Sunday, Marcus accused OpenAI researchers of misanalyzing their own data. He said the company used an improper statistical test, and argued that the paper’s findings actually show that AI models like GPT-4 do meaningfully raise the ability of biologists, particularly expert ones, to create dangerous new pathogens. The NYU professor added that if he had peer-reviewed OpenAI’s paper, he would’ve sent it back with “a firm recommendation of ‘revise and resubmit.’”

If we’re wrong about the risks, Marcus pointed out, humans don’t get to make that mistake twice: “If an LLM equips even one team of lunatics with the ability to build, weaponize and distribute even one pathogen as deadly as covid-19, it will be a really, really big deal,” he warned.

In response to Marcus’ critique, Aleksander Madry, head of preparedness at OpenAI, said the company was “very careful to only report what our research data says, and in this case, we found there was a (mild) uplift in accessing biological information with GPT-4 that merits additional research.”

In a nod to Marcus’ claim that OpenAI used the wrong testing parameters, Madry said that the research paper “included discussion of a range of statistical approaches and their relevance.” But he also said that more work needs to be done on “the science of preparedness, including how we determine when risks become ‘meaningful.’”

It’s easy to understand why many observers fear the looming marriage of AI and biotechnology. One of AI’s most powerful demonstrations to date has been in biology, where a system called AlphaFold — now owned by Google DeepMind — has proved incredibly good at predicting the structures of complex proteins. And automated synthesis machines can already crank out genetic material on request.

Accordingly, concern has swept across the highest levels of government. In April, Sen. Martin Heinrich (D-N.M.), one of Senate Majority Leader Chuck Schumer’s three top lieutenants on AI legislation, told POLITICO that AI-boosted bioweapons were one of the “edge cases” keeping him up at night.
A paper published in June by researchers at the Massachusetts Institute of Technology sent a shudder across Capitol Hill with its warning that AI-powered chatbots could assist in the development of new pathogens, even for people “with little or no laboratory training.”

In September, researchers from the RAND Corp. and other top think tanks warned senators that “existing AI models are already capable of assisting nonstate actors with biological attacks that would cause pandemics, including the conception, design, and implementation of such attacks.”

By October, the anxiety had reached the White House — the AI executive order signed by President Joe Biden included new screening mechanisms for companies involved in gene synthesis and promoted know-your-customer rules for firms providing synthetic genes and other biotech tools to researchers. Top researchers at RAND played a key role in ensuring those biosecurity requirements found their way onto the president’s desk.

But many experts still see a big gap between what’s theoretically possible and what could actually happen, and doubt how much AI would really add to the danger. Skeptical researchers say there’s almost nothing an LLM can teach amateur biologists that they couldn’t already learn on Google, and question whether policymakers should spend time and energy on such a speculative risk.

Researchers like Nancy Connell, a biosecurity expert at Rutgers University, have even claimed that an avalanche of tech dollars is skewing how policy experts approach the risks posed by AI and biosecurity. Groups like Open Philanthropy, an effective altruist organization funded by billionaire Facebook co-founder Dustin Moskovitz, have pumped hundreds of millions of dollars into Washington’s AI ecosystem in an effort to focus policymakers on the technology’s existential risks to humanity, including bioweapons.

The OpenAI paper is part of a small wave of new research casting doubt on the potential bio-risks of AI. The congressionally mandated National Security Commission on Emerging Biotechnology issued a report last week that claimed LLMs “do not significantly increase the risk of the creation of a bioweapon.” Even RAND has walked back some of its earlier claims, publishing a new report last month that found the current generation of LLMs “[do] not measurably change the operational risk” of a biological attack.

But the debate over AI’s impact on biosecurity is far from over. Even skeptical researchers say it’s wise to keep a close eye on the nexus of fast-moving technologies like AI and biotech.

While the NSCEB downplayed fears over the current generation of LLMs, it is concerned about the potential for “biological design tools,” or BDTs — AI models that process biological data in the same way that LLMs process human language — to supercharge the ability of trained biologists to create deadly new diseases. The commission warned that if BDTs are one day merged with LLMs, even amateur biologists could get a boost.

Gregory C. Allen, an AI researcher at the Center for Strategic and International Studies think tank, gave OpenAI credit for “proactively” examining whether its technology raises biosecurity risks. But he takes little solace in the company’s finding that today’s AI systems are unlikely to help create killer pathogens.
“When you have a few notable leaders in this industry predicting human-level AI in as little as five years, we should recognize that where we currently are doesn’t necessarily tell us very much about where we might be going in terms of future AI and bioweapon risk,” Allen said.