The AI trust deficit

From: POLITICO's Digital Future Daily - Monday, Jan 29, 2024 09:33 pm
Presented by Samsung: How the next wave of technology is upending the global economy and its power structures
 

By Christine Mui

Presented by

Samsung


President Joe Biden. | Alex Brandon/AP

There is still a lot we don’t know about the robocall in New Hampshire that impersonated President Joe Biden, likely with AI voice cloning technology.

It’s not certain who was behind the call (the New Hampshire attorney general’s office is investigating), what software was used in its creation (one analysis blames ElevenLabs, but the startup’s own tool disagreed) or how many voters received one.

But it made one thing clear: the pressure is on for regulators to take action on deepfakes ahead of the election.

The impact of deepfakes on society, and on elections in particular, has been a source of anxiety for years. Easy-to-use generative AI tools have recently turned them from a niche concern into a top security risk across the board. Before the Biden robocall, AI deepfakes were used in attempts to disrupt elections in Slovakia and Taiwan.

Congress has taken note. Sen. Mike Rounds (R-SD) told POLITICO that as the Senate hashes out its priorities on AI legislation, there is growing recognition that tackling the use of AI in campaign ads and communications should top the list. And after explicit deepfakes of Taylor Swift spread on X last week, lawmakers renewed calls for urgent legislation on the issue.

A reminder: no federal laws currently prohibit the sharing or creation of deepfakes, though several bills have been proposed in Congress and some states have passed laws to crack down on manipulated media. The Federal Election Commission, too, has been considering rule changes to regulate the use of AI deepfakes in campaign materials.

“Deepfakes is the first test that generative AI has thrown at us because it fundamentally eliminates all trust,” Vijay Balasubramaniyan, CEO of the phone fraud detection company Pindrop, told Steven Overly on a POLITICO Tech podcast episode that delved into the Biden robocall incident. “If we can't get together and figure out how to solve that problem, yeah, the killer robots will definitely get us.”

No surprise that’s easier said than done. One especially tricky part will be figuring out how to tackle the full range of manipulated media — from older techniques like splicing in fake audio to the new generative AI-fueled advancements, and all the hybrids in between. The robocall, for one, was not a very advanced audio deepfake, according to Matthew Wright, who chairs Rochester Institute of Technology’s cybersecurity department.

“There are tools available now that can do a better job, and consequently be more dangerous,” he told DFD.

Looking at the proposed federal bills and enacted state laws, it turns out there's not a whole lot they collectively agree on, starting with what should even be regulated.

California and Washington’s laws target false depictions only of political candidates, while Texas and Minnesota go further to include those created with the intention of harming a political candidate or influencing election outcomes.

Consensus on what constitutes a deepfake is also lacking. Some bills distinctly cover images and video, while others extend to audio.

“This episode does highlight how important it is to have audio be included in these efforts,” said Mekela Panditharatne, counsel for the Brennan Center’s Democracy Program. “It could be kind of separated and done piecemeal. But I do think it makes sense to consider those different forms of gen-AI production together.”

Piecemeal seems to be the way regulation on deepfakes is moving. Wright drew parallels with the landscape for privacy legislation, where a patchwork of laws offers varying levels of protection.

A key question is who should be held accountable: phone service providers, platforms, developers or distributors of the deepfakes? How you answer that ends up defining the focus of proposed solutions.

At the federal level, bills have assigned responsibility to two main groups, said Panditharatne. The first includes the actors that fall under campaign finance disclosure requirements: campaigns, super PACs and donors. Often, the resulting bills address the timing of deepfakes — like one act that bans false endorsements and knowingly misleading voting information within 60 days of a federal election — or transparency, as in the case of Rep. Yvette Clarke's (D-N.Y.) bill, which requires that political ads reveal their use of AI-generated material through mandatory labeling, watermarking or audio disclosures.

The second category targets deepfake disseminators, in some cases only when they meet certain knowledge or intent requirements. Rep. Joe Morelle’s (D-N.Y.) Preventing Deepfakes of Intimate Images Act would make it illegal to share deepfake pornography without consent.

“There is relatively little attention both at the federal and state level in holding other actors to account for deepfakes,” Panditharatne added, giving social media companies and AI developers as examples.

As with past content moderation issues, social media giants enjoy some protection from legal liability under federal law (thanks to the famous Section 230 of the Communications Decency Act), which complicates such efforts. The bipartisan Senate NO FAKES Act is one attempt; it proposes holding liable anyone who makes or publicly shares an unauthorized digital replica — including companies — and allowing for penalties that start at $5,000 per violation.

Still, it’s unclear to Wright whether any regulations under consideration, or industry solutions in development, could have prevented the Biden robocall. Wright, who has built a deepfake detection tool of his own, also offered one solution for which the technology does not currently exist on phones: “Every microphone is going to have to have even live audio being constantly re-certified. That might have to be what’s required.”

The scheme’s design exploited an area where detection efforts focus less: a direct phone line, with no real-time feedback from social media and limited playback capabilities.

Enforcing the regulations being floated will require some sort of detection mechanism (many have been invented). But for now, bad actors with just a voter registration list, a phone and a 30-second clip of a political figure can easily fly under the radar. The FTC has sponsored a challenge with a $25,000 top prize for the most effective approach to safeguard against the misuse of AI-enabled voice cloning, covering everything from imposter fraud to using someone's voice without consent in music creation. Its suggestions include real-time detection and monitoring to alert users to voice cloning or block calls.

 

A message from Samsung:

Samsung has been operating in the U.S. for over 45 years—inspiring a new age of innovation and pushing the boundaries of technology that will help secure America's economic future. From semiconductors and display technology to AI and EV batteries, Samsung plays a crucial role at the intersection of the world's most pivotal industries. We are dedicated to fostering a landscape where innovation contributes to a more sustainable tomorrow. Learn more about how we work.

 
cftc vs. crypto


CFTC Chair Rostin Behnam. | Francis Chung/EENews/POLITICO via AP Images

Commodity Futures Trading Commission Chair Rostin Behnam is throwing cold water on the idea that regulations will legitimize the crypto industry.

POLITICO’s Declan Harty reported for Pros on Behnam’s push for greater regulatory authority over crypto, which he sees as especially urgent after the Securities and Exchange Commission allowed the sale of Bitcoin ETFs.

"[The ETFs] have taken a speculative and volatile asset, wrapped it in a thin layer of indirect regulation, and packaged it as a shiny new product," Behnam said at an event in Florida last week.

Declan writes that Behnam warned there’s still “nothing firmly in place” to protect investors from potential crypto fraud or volatility, something he says Congress could remedy by giving his agency more regulatory authority. The CFTC brought almost 50 crypto-related regulatory actions in 2023. — Derek Robertson

 


 
policing data

Italy formally warned OpenAI it’s violating the General Data Protection Regulation.

POLITICO’s Clothilde Goujard reported on the regulatory shot across the bow for Pros this morning, with Italy’s data protection agency saying OpenAI has 30 days to contest alleged violations of data privacy committed by ChatGPT. Penalties can run up to four percent of a company’s global revenue.

The move follows Italy’s temporary ban of ChatGPT in the country in March 2023, and open investigations into the company’s practices in Spain, France, and Germany. — Derek Robertson

 

YOUR GUIDE TO EMPIRE STATE POLITICS: From the newsroom that doesn’t sleep, POLITICO's New York Playbook is the ultimate guide for power players navigating the intricate landscape of Empire State politics. Stay ahead of the curve with the latest and most important stories from Albany, New York City and around the state, with in-depth, original reporting to stay ahead of policy trends and political developments. Subscribe now to keep up with the daily hustle and bustle of NY politics. 

 
 
Tweet of the Day

Google presents Learning Universal Predictors

paper page: https://huggingface.co/papers/2401.14953

Meta-learning has emerged as a powerful approach to train neural networks to learn new tasks quickly from limited data. Broad exposure to different tasks leads to versatile representations enabling general problem solving. But, what are the limits of meta-learning? In this work, we explore the potential of amortizing the most powerful universal predictor, namely Solomonoff Induction (SI), into neural networks via leveraging meta-learning to its limits. We use Universal Turing Machines (UTMs) to generate training data used to expose networks to a broad range of patterns. We provide theoretical analysis of the UTM data generation processes and meta-training protocols. We conduct comprehensive experiments with neural architectures (e.g. LSTMs, Transformers) and algorithmic data generators of varying complexity and universality. Our results suggest that UTM data is a valuable resource for meta-learning, and that it can be used to train neural networks capable of learning universal prediction strategies.

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from Samsung:

With 20,000+ employees nationwide and over $60 billion in investments in U.S. advanced manufacturing, research and development, and emerging technologies, Samsung is deeply invested in America's future. Most recently, we have dedicated $18 billion in manufacturing at the Samsung Austin Semiconductor site in Texas, one of the world's most advanced manufacturing facilities.

Our commitment extends beyond innovation to community impact, with contributions exceeding $145 million to 2,900+ schools, hospitals, and foundations throughout the U.S. since 2010. With continued investment in our employees and communities, we’re ushering in the future of technology, together.

Learn more about our commitments and areas of focus.

 
 

JOIN 1/31 FOR A TALK ON THE RACE TO SOLVE ALZHEIMER’S: Breakthrough drugs and treatments are giving new hope for slowing neurodegenerative diseases like Alzheimer’s disease and ALS. But if that progress slows, the societal and economic cost to the U.S. could be high. Join POLITICO, alongside lawmakers, officials and experts, on Jan. 31 to discuss a path forward for better collaboration among health systems, industry and government. REGISTER HERE.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

Follow us

Facebook | Twitter | Instagram | Apple Podcasts
 

To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

This email was sent to by: POLITICO, LLC 1000 Wilson Blvd. Arlington, VA, 22209, USA

Privacy Policy | Terms of Service

More emails from POLITICO's Digital Future Daily

Jan 26, 2024 09:03 pm - Friday

5 questions for David Ulevitch

Jan 25, 2024 09:32 pm - Thursday

A red flag for AI translation

Jan 24, 2024 09:04 pm - Wednesday

Welcome to AI university

Jan 23, 2024 09:24 pm - Tuesday

Poll: AI is looking more partisan

Jan 22, 2024 09:26 pm - Monday

Silicon Valley's crush on fusion

Jan 19, 2024 09:02 pm - Friday

5 questions for Julius Krein

Jan 18, 2024 09:02 pm - Thursday

The looming AI monopolies