The AI safety summit, and its critics

From: POLITICO's Digital Future Daily - Wednesday Nov 08, 2023 09:22 pm
Presented by the Computer & Communications Industry Association: How the next wave of technology is upending the global economy and its power structures

By Derek Robertson


World leaders pose for a family photo on the second day of the UK Artificial Intelligence Safety Summit at Bletchley Park on Nov. 2, 2023 in Bletchley, England. | Leon Neal/Getty Images

Last week’s AI Safety Summit in the United Kingdom was, to hear its participants tell it, a rousing success — but critics accuse those leaders of living in a fantasy world.

Those critics are part of the growing rift in the AI community over how much to focus on the “existential” risk of “frontier” models that could possibly, well… end the world. The AI policy community is at a crossroads: will the technology be governed with here-and-now societal risks in mind, or with an eye toward a sci-fi future that effectively upends ideas about governance? Each side claims its view of the technology encompasses both sets of risks.

“It’s disappointing to see some of the most powerful countries in the world prioritize risks that are unprovable and unfalsifiable,” Data & Society policy director Brian Chen said in an email. “Countries are dedicating massive institutional resources to investigate existential claims that can’t hold up under basic principles of empirical inquiry.”

In other words, the fight over AI should be less about preventing Skynet from killing us and more about protecting consumers from opaque algorithms that decide to reject a home loan or decline coverage for a medical procedure. Chen and his peers do believe the government has a role to play in AI safety. But the merest whiff of “doomerism” in Silicon Valley triggers a fear that the biggest AI developers are trying to cement their dominance in the field by playing up hypothetical threats at the expense of present-day ones.

Amba Kak of the AI Now Institute, one of the few representatives of civil society at last week’s summit, said at the event’s conclusion that “we are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions.” In her remarks, Kak acknowledged efforts by the Biden administration to encourage fair competition and redress bias in AI, but said future gatherings should include voices from across society, not just the biggest tech companies and governmental leaders.

Some groups see current-day AI safety and a competitive industry open to new players as inextricably paired. Mark Surman, president of the Mozilla Foundation, and the researcher Camille François said in a blog post yesterday that “competition is an antidote” for what they see as the undemocratic nature of current AI policy debates, dominated by industry giants.

They emphasize making AI development tools available to everybody, accusing major players like OpenAI of using “the fear of existential risk to propose approaches that would shut down open-source AI.” (A “joint statement” published on Halloween with signatures from Surman, François, and no less than Meta AI chief Yann LeCun called for making open-source AI development a “global priority.”)

As Kak alluded to, some leaders in the U.S. have spoken out about these issues. Vice President Kamala Harris was outspoken at last week’s summit in urging other global leaders and AI companies to prioritize the here-and-now risks of algorithmic discrimination. Federal Trade Commission Chair Lina Khan wrote in the New York Times in May that “The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms,” and urged anti-monopolistic policy choices along the lines of those called for by Mozilla.

The Biden administration’s executive order did address those topics, specifically directing Khan’s FTC to investigate monopolistic practices in AI, establishing privacy protections for government uses of AI, and ordering the Department of Housing and Urban Development to provide guidance for stopping discriminatory AI systems in lending. The Bletchley Declaration itself elaborates on both immediate human risk and the doomy predictions of apocalypse-by-frontier-model.

Still, some outside industry and government who have studied the tech policy fights of past epochs will believe that AI giants voluntarily accept accountability for their products’ potential harms when they see it.

“We… need a more holistic, human-centered vision of AI systems — their impact on workers, their extraction of data, their massive consumption of energy and water,” Chen said. “This was lacking at last week’s summit.”

 

A message from the Computer & Communications Industry Association:

Whether it's life-saving medical innovations or a video call with a loved one, technology is improving our lives in ways big and small. The world in front of us is full of possibilities. American entrepreneurs are opening new doors and American workers are expanding career and training opportunities powered by technology. America’s tech innovators: powering opportunity for us all. Learn more by clicking here.

 
full disclosure

Meta made another tweak to how it will police political ads and content across its platforms.

The company announced in a blog post today that it will disclose to users when content is created or altered by generative AI or other digital tools.

Starting “in the new year,” advertisers will be required to disclose when they use AI-generated content to depict events, or speech, that didn’t actually happen in the context of a real-life political debate. Meta will then notify viewers that advertisers made one of these disclosures; if advertisers fail to do so, Meta will remove the ad, with “penalties” for repeated failures to disclose.

 


 
rules for ai risk

University of California, Berkeley researchers published a report today that sets sweeping standards for evaluating risks in AI systems.

The 118-page document sets best practices meant to complement those set out by the National Institute of Standards and Technology and the International Organization for Standardization.

“We intend this Profile document primarily for use by developers of large-scale, state-of-the-art GPAIS,” or general-purpose AI systems, the authors write, saying it “aims to help key actors… achieve outcomes of maximizing benefits, and minimizing negative impacts, to individuals, communities, organizations, society, and the planet,” including “protection of human rights, minimization of negative environmental impacts, and prevention of adverse events with systemic or catastrophic consequences at societal scale.”

Key recommendations include establishing risk-tolerance thresholds for AI’s use, the use of red-teaming and adversarial testing, and involving outside actors and users of the system in the risk-identification process.

 


 
 
Tweet of the Day

my artificial intelligence executive order summary

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com) and Daniella Cheslow (dcheslow@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 


 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 


