AI companies try to self-regulate

From: POLITICO's Digital Future Daily - Wednesday Aug 02, 2023 08:57 pm
How the next wave of technology is upending the global economy and its power structures
 

By Mohar Chatterjee

Check it out: This morning we launched a new podcast, POLITICO Tech. It’s a daily download on the disruption that technology is bringing to politics and policy, from AI and the metaverse to disinformation and microchips. Today, departing White House adviser Ronnie Chatterji talks about America’s multi-billion dollar chips plan, and what he’s worried about on the tech infrastructure front. Listen to today’s podcast below, or sign up here.



An AI logo. | AFP via Getty Images

The rise of new, powerful AI models over the past year has raised huge questions about who, or what, is supposed to oversee their growth and impact on the world.

Congress has struggled to get its arms around the issue; the White House is in voluntary-guideline-and-press-conference mode. Europe is struggling to update its AI policies to account for the speed of tech development.

Two weeks ago, the industry itself stepped in with a move: The top AI builders said they were launching the Frontier Model Forum, a group that promises to work on establishing best practices for the industry and researching AI safety. ("Frontier model" is the term for the biggest, most cutting-edge generative AI models, like OpenAI’s GPT-4 and DALL-E and Google’s PaLM 2.)

The group has only four members so far: Microsoft, OpenAI, Google and Anthropic — companies building the biggest and most powerful AI platforms. But it could be very important to AI policy: one of its core objectives is to be a conduit between the exploding AI industry and policymakers.

So… what’s the forum really going to do? That’s the question on a lot of minds, and we started asking.

The group doesn’t have an address or an email yet, and spokespeople from the individual members (we reached out to all four) didn’t have much to share. Its opening blog post promises to establish an advisory board over the next couple of months to determine its priorities. A forum spokesperson from Microsoft said more news about the board and potential new group members would be coming down the pike soon — around early September.

The forum has also signaled international ambitions. In their launch announcement, the member companies said they want to influence multilateral government AI efforts like the G7 Hiroshima process; the OECD’s work on AI risks, standards, and social impact; and the AI work underway at the U.S.-EU Trade and Technology Council.

Of course, a small group of companies banding together to push their own interests is a move Washington has seen before. In one sense, the Frontier Model Forum is a normal industry group, this time for the very rarefied club of companies deploying super-sophisticated AI platforms. “This is a familiar kind of institution for new kinds of technologies,” said Nick Garcia, policy counsel at Public Knowledge, a public interest nonprofit.

With AI, though, even skeptical watchdogs agree that something like this might actually be useful for society more broadly. “There absolutely is a need for private voluntary action to deal with the safety and ethics issues around generative AI,” said Robert Weissman, president of Public Citizen, the progressive consumer rights advocacy nonprofit.

“Normally, our view would be: industry doesn't have any role in policy development,” Weissman said. “But in this case, I think that's not feasible or even desirable. There’s too much that’s confusing, or unknown, or fast moving.”

Garcia agrees with the need to have industry involved in standard-setting, and drew a comparison to the telecommunications industry: “Certainly, a lot of the internet is built on standards organizations that work collaboratively in order to figure things out. A lot of telecom relies on these standards organizations,” he said.

But some are still wary of what the forum could become. “We have years of history with corporate self-regulation, where it tends to mean very little,” said Wendell Wallach, a fellow at the Carnegie Council for Ethics in International Affairs and co-director of its Artificial Intelligence and Equality Initiative.

Weissman voiced similar concerns: “If this is just another trade association — another formulation in a different guise of industry leveraging its influence to block meaningful controls and maintain maximum freedom for generative AI companies — that would be very harmful.”

For now, what they are looking for from the forum is specificity: Advocates want AI companies to propose clear, shared rules for AI model audits with information on what’s being tested and by whom.

So far, they haven’t seen those kinds of specifics. “If you look at the White House agreement with the companies a couple of weeks ago, it's relatively high level of generality. And in almost all of the company statements of principles, you see a high level of generality,” Weissman said.

OpenAI and Anthropic did not respond by deadline to a request for comment.

 

A NEW PODCAST FROM POLITICO: Our new POLITICO Tech podcast is your daily download on the disruption that technology is bringing to politics and policy around the world. From AI and the metaverse to disinformation and cybersecurity, POLITICO Tech explores how today’s technology is shaping our world — and driving the policy decisions, innovations and industries that will matter tomorrow. SUBSCRIBE AND START LISTENING TODAY.

 
 
Breakthrough or bogus? Now you can bet.

Science lovers with a knack for online gambling, of sorts, have taken to wagering on whether a South Korean team really did test a new superconductor that could unlock ground-breaking technology.

The South Korean team claims it created a material that can act as a superconductor at room temperature. If true, the invention would be earth-shattering, since current superconductors must be kept at extremely low temperatures, making them cost-prohibitive for most everyday uses. Cheap, room-temperature superconductors could revolutionize everything from quantum computing and high-speed rail to the power grid.

The team’s announcement immediately drew skepticism, based in part on a history of similar claims that have since been debunked.

That’s prompted Manifold, an online prediction market, to take wagers on whether the South Korean results can be replicated. As of Wednesday afternoon, the odds stand at 36 percent in favor of replication. That follows an initial spike of optimism that put the odds at over 80 percent, before a weekend trough of despair sent them down into the low teens.

Unlike in Sin City, successful bettors will not be able to spend their winnings on a night of debauchery. Manifold users wager the platform’s “play money,” called mana; the website says it can be used to make charitable donations. —Ben Schreckinger


HITTING YOUR INBOX AUGUST 14—CALIFORNIA CLIMATE: Climate change isn’t just about the weather. It's also about how we do business and create new policies, especially in California. So we have something cool for you: A brand-new California Climate newsletter. It's not just climate or science chat, it's your daily cheat sheet to understanding how the legislative landscape around climate change is shaking up industries across the Golden State. Cut through the jargon and get the latest developments in California as lawmakers and industry leaders adapt to the changing climate. Subscribe now to California Climate to keep up with the changes.

 
 

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 

To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

This email was sent to by: POLITICO, LLC 1000 Wilson Blvd. Arlington, VA, 22209, USA

