The video is the latest salvo in a war between the people who create or share fake visuals and the people who want a way to flag them, alerting the public to what is real. It was released on Tuesday by technology company Truepic and production studio Revel.ai to promote a relatively new transparency standard for digitally created content.

Right now, so-called "deepfakes" are getting better and better. Think of the relatively benign image of the pope in a spiffy Balenciaga jacket that fooled Twitter for a hot minute; it was made by a Chicago-based construction worker using the AI image generator Midjourney. The growing unease around convincing, mass-producible synthetic media has prompted some big players in the tech industry to pursue a shared standard for authenticating content, something that companies and publishers can agree on and that consumers could look for when deciding what to believe.

Tuesday's video carries cryptographically signed provenance data under a content certification standard called C2PA. The technical-sounding name is just the acronym of the group behind it, the Coalition for Content Provenance and Authenticity. The C2PA standard is backed by a set of tech giants, including Adobe, Arm, Intel, Microsoft and Truepic, all of which see a need to bolster trust in digital content in an era when generative AI is making it easier and easier to create convincing fakes.

Ok, but will it work?

A world in which the C2PA standard becomes the solution to a growing digital trust problem requires a great deal of coordination. Without a law (and there is no prospect of one anytime soon), a whole chain of players would need to adopt it. Companies that build video and photo tools, including cellphone and camera manufacturers, would need to incorporate the C2PA authentication standard at the point of capture.
Users, like the Bellingcat founder who created the AI-generated images of Trump's arrest, would need to be proactive about including content credentials in the visuals they produce. Mainstream publishers and social media companies would need to check for the credentials before displaying an image on their platforms. And viewers would need to expect a little icon with a dropdown menu before they trust an image or video.

One point of the deepfake exercise was to publicly call out companies that aren't participating despite having access to content authentication tools like the C2PA. Nina Schick, whose likeness appears in the video, laid out C2PA's case in an interview, naming both the companies that build AI image-generation tools and the platforms where users post the results: "Why isn't OpenAI doing it? Why isn't Stability AI? Why aren't Twitter or Facebook?"

Andrew Jenks, co-founder and chair of the C2PA project, sees the authentication standard as an important digital literacy effort, and its closest parallel as the widespread adoption of the SSL lock icon that signals a secure Web page. "We had to train users to look for the little padlock icon that you see in every browser today," Jenks said. "That was a really hard problem and it took a really long time. But it's exactly the same kind of problem as we're facing with media literacy today."

By day, Jenks is a principal program manager at Microsoft's Azure Media Security. "Everyone in the C2PA has a day job as well," Jenks told me. "This is a volunteer army."

But in the larger war against misinformation, not everyone thinks a new file standard will solve the big problems. Dr. Kalev Leetaru, a media researcher and senior fellow at George Washington University, pointed out that "fake images" are just one part of the issue. "Much of the image-based misinformation in the past was not edited imagery, but rather very real imagery shared with false context," he said.
And we already have strong tools to trace an image across the web and track it back to its origin, "but no platform deploys this in production. The problem is that social media is all about context-free resharing."

Then there's the wider world, where misinformation is, if anything, more dangerous than it is in the United States. "We're talking about this from the standpoint of the U.S. and the West," Dr. Leetaru noted. "Even if this technology is rolled out on every new iPhone and Android phone that's out there today, think about how long it's going to take before it propagates outward across the world."

Leetaru's concern is that in the period before a standard is widely adopted, images or video recordings from citizen journalists that carry no credentials will be mistrusted by default. And anonymity can be a critical tool for dissidents living under authoritarian governments, meaning a provenance tool designed to trace an image back to its point of origin can backfire on the people capturing the images. (For what it's worth, Truepic and Microsoft announced a pilot program last week called Project Providence to authenticate images coming out of Ukraine, taken by Ukrainian users documenting the country's cultural heritage.)

And to be clear: even its advocates don't think the C2PA is a "silver bullet." Jenks said the C2PA is simply "one part of what we in the security world would call defense in depth." Still, there's growing support for the idea of authenticating images at their source. "I've done hundreds of meetings on C2PA technology at this point. I do not believe a single person has said, 'That's not something we need,'" Jenks told me.
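For the technically curious, the chain described above (a capture device signs provenance metadata, and a publisher verifies it before display) can be sketched in a few lines of Python. This is a deliberate simplification, not the actual C2PA mechanism: the real standard uses X.509 certificates, public-key signatures and a structured manifest format, while this sketch substitutes a shared-secret HMAC, and the function names and claim fields are invented for illustration.

```python
import hashlib
import hmac
import json

# Illustrative stand-in only: real C2PA manifests are signed with
# public-key certificates, not a shared-secret HMAC.
SIGNING_KEY = b"capture-device-secret"  # hypothetical device key

def attach_credentials(image_bytes: bytes, claims: dict) -> dict:
    """Simulate a capture device embedding signed provenance metadata."""
    manifest = {
        "claims": claims,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credentials(image_bytes: bytes, credential: dict) -> bool:
    """Simulate a publisher checking credentials before display."""
    manifest = credential["manifest"]
    # 1. Has the image itself been altered since capture?
    if hashlib.sha256(image_bytes).hexdigest() != manifest["content_hash"]:
        return False
    # 2. Was the manifest produced by the trusted signer?
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

image = b"...raw pixels..."
cred = attach_credentials(image, {"tool": "camera", "generative_ai": False})
assert verify_credentials(image, cred)             # untouched image passes
assert not verify_credentials(image + b"x", cred)  # any edit breaks the chain
```

The sketch captures the key property the article's "chain of players" depends on: once anything about the image or its claimed origin changes, verification fails, so every link from camera to platform to viewer can rely on the same check.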