How to tell what's real online

From: POLITICO's Digital Future Daily - Wednesday, Apr 05, 2023, 08:02 pm
Presented by TikTok: How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

Presented by TikTok

With help from Derek Robertson

A deepfake video of former President Barack Obama shows technology used to make people say things they've never said. | AP Photo

The video opens with a woman in a black turtleneck, who says you can’t trust your eyes.

As it turns out, she’s an AI-generated deepfake. The woman is named Nina Schick, and although it looks like a completely natural video shot in a studio, she has never said those words, that way, in real life.

Scary stuff, right? But ah — the video also tells on itself. A labeled dropdown menu in the top left helpfully informs you that the video “contains AI-generated content.” There’s a timestamp, and a credit.

The video is the latest salvo in a war between people who create or share fake visuals and the people who want to find a way to flag them — basically, alert the public to what is real. It was released on Tuesday by technology company Truepic and production studio Revel.ai to promote the idea of a relatively new transparency standard for digitally created content.

Right now, so-called “deepfakes” are getting better and better — think about the relatively benign image of the pope in a spiffy Balenciaga jacket that fooled Twitter for a hot minute there. It was made by a Chicago-based construction worker using the AI image generator Midjourney. The growing unease around convincing, mass-producible synthetic media has prompted some big players in the tech industry to pursue the idea of an internal standard for authenticating content — something that companies and publishers can agree on and which consumers could look for when they decide what to believe.

Tuesday’s video is cryptographically signed under a content certification standard called the C2PA. The technical-sounding name is just the acronym of the group behind it, the Coalition for Content Provenance and Authenticity. The C2PA standard is backed by a set of tech giants, including Adobe, Arm, Intel, Microsoft and Truepic. All those companies see a need to bolster trust in digital content in the era of generative AIs that are making it easier and easier to create convincing fake content.
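For a rough sense of what that signing involves, here is a minimal conceptual sketch in Python. It bundles a few provenance claims (creator, timestamp, an "AI-generated" flag and a hash of the content) into a manifest, signs the manifest, and verifies it later. It relies on the third-party cryptography package, and the manifest layout, field names and creator string are illustrative assumptions, not the actual C2PA manifest format or toolchain.

```python
# Conceptual sketch of signed provenance metadata (NOT the real C2PA format).
# Assumes the third-party "cryptography" package: pip install cryptography
import json
import hashlib
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


def make_manifest(content: bytes, creator: str, ai_generated: bool) -> bytes:
    """Bundle basic provenance claims with a hash of the content bytes."""
    claims = {
        "creator": creator,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": ai_generated,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(claims, sort_keys=True).encode()


def verify(content: bytes, manifest: bytes, signature: bytes, public_key) -> bool:
    """Check the signature over the manifest, then check the manifest still matches the content."""
    try:
        public_key.verify(signature, manifest)
    except InvalidSignature:
        return False
    claims = json.loads(manifest)
    return claims["content_sha256"] == hashlib.sha256(content).hexdigest()


# Demo with stand-in "image" bytes and a hypothetical creator string.
key = Ed25519PrivateKey.generate()
image = b"\x89PNG...stand-in image bytes for illustration"
manifest = make_manifest(image, creator="Example Studio (hypothetical)", ai_generated=True)
signature = key.sign(manifest)

print(verify(image, manifest, signature, key.public_key()))                 # True
print(verify(image + b"tampered", manifest, signature, key.public_key()))   # False
```

In the real standard, the signed claims travel inside the file itself and chain back to a certificate, which is roughly what the labeled dropdown in Tuesday's video is surfacing.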

Ok, but will it work?

A world in which the C2PA standard becomes the solution to a growing digital trust problem requires a great deal of coordination.

Without a law — and there’s no prospect of a law anytime soon — a whole chain of players would need to adopt it. Companies that build video and photo tools — including cellphone and camera manufacturers — would need to incorporate the C2PA authentication standard at the point of capture. Users, like the Bellingcat founder who created the AI-generated images of Trump’s arrest, would need to be proactive about including content credentials in the visuals they produce. Mainstream publishers and social media companies would need to look for the credentials before displaying the image on their platforms. Viewers would need to expect a little icon with a dropdown menu before they trust an image or video.

One point of the deepfake exercise was to publicly call out companies that aren’t participating, despite having access to content authentication tools like the C2PA. Schick, the person in the video, laid out C2PA’s case in an interview, naming companies that build AI tools for users to generate images and those that provide a platform for users to post these images: “Why isn’t OpenAI doing it? Why isn’t Stability AI? Why aren’t Twitter or Facebook?” she asked.

Andrew Jenks, co-founder and chair of the C2PA project, sees the authentication standard as an important digital literacy effort whose closest parallel is the widespread adoption of the SSL lock icon that signals an encrypted, certificate-verified connection to a Web page. “We had to train users to look for the little padlock icon that you see in every browser today,” Jenks said. “That was a really hard problem and it took a really long time. But it's exactly the same kind of problem as we're facing with media literacy today.”
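The padlock analogy is useful because the browser, not the user, does the heavy lifting: it validates a certificate chain automatically and only then shows the icon. The sketch below, using only Python's standard ssl and socket modules, pulls the certificate a browser would check for an arbitrary example hostname; treat it as an illustration of that automatic verification, not anything specific to C2PA.

```python
# What the browser padlock stands for: a verified certificate chain.
# Standard library only; the hostname is an arbitrary example.
import socket
import ssl

hostname = "www.politico.com"
context = ssl.create_default_context()  # validates the chain against trusted CAs

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # only returned after validation succeeds

subject = dict(item[0] for item in cert["subject"])
issuer = dict(item[0] for item in cert["issuer"])
print("Issued to:", subject.get("commonName"))
print("Issued by:", issuer.get("commonName"))
print("Expires:  ", cert["notAfter"])
```

Content credentials aim for the same habit: a small indicator backed by machine verification, rather than asking viewers to judge pixels by eye.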

By day, Jenks is a principal program manager at Microsoft’s Azure Media Security. “Everyone in the C2PA has a day job as well,” Jenks told me. “This is a volunteer army.”

But in the larger war against misinformation, not everyone thinks a new file standard will solve the big problems. Dr. Kalev Leetaru, a media researcher and senior fellow at George Washington University, pointed out that “fake images” are just one part of the issue. “Much of the image-based misinformation in the past was not edited imagery, but rather very real imagery shared with false context,” he said. And we already have strong tools to trace an image across the web and track it back to its origin, “but no platform deploys this in production. The problem is that social media is all about context-free resharing.”
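One family of tools Leetaru is alluding to is perceptual hashing, which lets a platform recognize an image it has seen before even after resizing or recompression, and therefore reattach the original context. A minimal sketch, assuming the third-party Pillow and imagehash packages and two hypothetical local files:

```python
# Perceptual hashing: recognize a re-shared image despite resizing or recompression.
# Assumes third-party packages: pip install Pillow imagehash
# "original.jpg" and "reshared.jpg" are hypothetical stand-in filenames.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
reshared = imagehash.phash(Image.open("reshared.jpg"))

# Hamming distance between the two 64-bit hashes; a small distance means
# "almost certainly the same picture," even if the file bytes differ.
distance = original - reshared
if distance <= 8:  # the threshold is a judgment call, not a standard
    print(f"Likely the same image (distance {distance}); attach the original context.")
else:
    print(f"Probably different images (distance {distance}).")
```

As he notes, the limitation is less capability than deployment: no platform runs this kind of matching in production at the moment of resharing.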

And then there’s the wider world, where misinformation is, if anything, more dangerous than it is in the U.S. “We're talking about this from the standpoint of the U.S. and the West,” Leetaru noted. “Even if this technology is rolled out on every new iPhone and Android phone that's out there today, think about how long it’s going to take before it propagates outward across the world.”

Leetaru’s concern is that during the period before widespread adoption of a standard, images or video recordings from citizen journalists that don’t carry the credentials will be mistrusted. And anonymity can also be a critical tool for dissidents living under authoritarian governments, meaning a cryptographic tool intended to trace an image’s provenance back to its point of origin can backfire on those capturing the images. (For what it’s worth, Truepic and Microsoft announced a pilot program last week called Project Providence to authenticate images coming out of Ukraine, taken by Ukrainian users documenting the country’s cultural heritage.)

And to be clear: even its advocates don’t think the C2PA is a “silver bullet.” Jenks said the C2PA is simply “one part of what we in the security world would call defense in depth.”

Still, there’s growing support for the idea of authenticating images at their source. “I've done hundreds of meetings on C2PA technology at this point. I do not believe a single person has said, ‘That's not something we need,’” Jenks told me.

 

A message from TikTok:

TikTok is building systems tailor-made to address concerns around data security. What’s more, these systems will be managed by a U.S.-based team specifically tasked with managing all access to U.S. user data and securing the TikTok platform. It’s part of TikTok’s commitment to securing personal data while still giving the global TikTok experience people know and love. Learn more at http://usds.TikTok.com.

 
the dawn of the ai scam

It was all but inevitable that “AI,” generally (and vaguely) defined, would become as powerful a tool for scams as it is for computing.

And so it’s gone in Texas, as POLITICO’s Sam Sutton reported yesterday afternoon for Pros: Regulators in three states have issued cease-and-desist orders to “YieldTrust.ai,” a trading platform that claimed to execute “70 times more trades with 25 times higher profits than any human trader could” through the power of AI, according to the Texas State Securities Board’s complaint.

Joe Rotunda, the Board’s Director of Enforcement, said the company was in reality promoting “the equivalent of nothing” — a platform that state analysis concluded could “Blacklist users and prevent them from withdrawing funds, receiving interest or receiving refunds” and “Change the used token at any time, potentially preventing users from withdrawing funds,” among a number of other risks. The publication of that report led YieldTrust.ai to announce it was shutting down, but regulators say it continued to accept investor funds.

It seems like a pretty unsophisticated scam, but Rotunda told Sam it’s likely just the beginning: “The initial scams are less sophisticated,” he said. “More sophisticated, more dangerous cases will come.” — Derek Robertson

 

 
chatgpt's european troubles

The Italian ban on ChatGPT might be just the beginning of OpenAI’s troubles in Europe.

POLITICO’s Clothilde Goujard and Gian Volpicelli reported today on how the company, which has no European office, is likely to bump up against the European Union’s more robust regulatory regime in other member states — specifically the General Data Protection Regulation under which Italy’s regulators announced their ban.

The European Consumer Organization also asked the EU and member governments to investigate ChatGPT last week, warning that the Union’s upcoming AI Act might not take effect in time, “leaving consumers at risk of harm from a technology which is not sufficiently regulated during this interim period and for which consumers are not prepared.”

As Gabriela Zanfir-Fortuna, of the Future of Privacy Forum think tank, told Clothilde and Gian: “Data protection regulators are slowly realizing that they are AI regulators.” — Derek Robertson

 

GO INSIDE THE 2023 MILKEN INSTITUTE GLOBAL CONFERENCE: POLITICO is proud to partner with the Milken Institute to produce a special edition "Global Insider" newsletter featuring exclusive coverage, insider nuggets and unparalleled insights from the 2023 Global Conference, which will convene leaders in health, finance, politics, philanthropy and entertainment from April 30-May 3. This year’s theme, Advancing a Thriving World, will challenge and inspire attendees to lean into building an optimistic coalition capable of tackling the issues and inequities we collectively face. Don’t miss a thing — subscribe today for a front row seat.

 
 
tweet of the day

no intelligence is general until it can build general intelligence

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from TikTok:

TikTok has partnered with a trusted, third-party U.S. cloud provider to keep all U.S. user data here on American soil. These are just some of the serious operational changes and investments TikTok has undertaken to ensure layers of protection and oversight. They’re also a clear example of our commitment to protecting both personal data and the platform's integrity, while still allowing people to have the global experience they know and love. Learn more at http://usds.TikTok.com.

 
 

STEP INSIDE THE WEST WING: What's really happening in West Wing offices? Find out who's up, who's down, and who really has the president’s ear in our West Wing Playbook newsletter, the insider's guide to the Biden White House and Cabinet. For buzzy nuggets and details that you won't find anywhere else, subscribe today.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 

 

To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

This email was sent by: POLITICO, LLC, 1000 Wilson Blvd., Arlington, VA 22209, USA

