The billionaire bucks shaping AI policy

From: POLITICO's Digital Future Daily - Monday, Dec 18, 2023, 09:28 pm
How the next wave of technology is upending the global economy and its power structures

By Ben Schreckinger

President Joe Biden (left) and California Gov. Gavin Newsom discuss AI at the Fairmont Hotel in San Francisco, Calif., on June 20, 2023. | Andrew Caballero-Reynolds/AFP via Getty Images

Who influenced President Joe Biden’s new executive order on artificial intelligence?

Over the weekend, POLITICO’s Brendan Bordelon uncovered the fingerprints of the RAND Corporation, which has cultivated ties to a growing influence network backed by tech money.

For months, Brendan has been following the money shaping the regulatory debate over AI.

And much of that money is coming from wealthy backers of “effective altruism,” a prominent ideology in Silicon Valley circles that places a great deal of weight on the existential risks of AI.

Brendan’s latest reporting reveals that RAND, one of the nation’s oldest, most venerable think tanks, is facing a wave of internal concern. Some employees have voiced worry that after taking funding from Open Philanthropy, a foundation started by Facebook co-founder and effective altruist Dustin Moskovitz, RAND has become a vehicle for inserting the movement’s ideas into policy.

DFD caught up with Brendan to dig into the discontent at RAND and the factions fighting over the future of AI policy. Our conversation has been edited and condensed for clarity.

You reported that RAND staffers — some of whom joined the think tank from the Biden administration last year — shaped parts of Biden’s executive order. But you also obtained audio of a meeting where other RAND workers expressed concerns about the organization's work on AI. What’s the problem? Aren’t think tanks supposed to be close to governments and shape policy?

I don't think it's necessarily uncommon that RAND CEO Jason Matheny and senior information scientist Jeff Alstott jumped straight from the National Security Council and the White House Office of Science and Technology Policy to this think tank that is still very much embedded in how the White House and the federal government approach AI.

The bigger issue is the links these folks have with an ideological movement. And it's a movement that's very much associated with the top AI companies driving the field at the moment. That's where folks start to raise eyebrows.

That's particularly supercharged when you see the money coming in from groups like Moskovitz’s foundation, Open Philanthropy, that are aligned with effective altruism and are building a much broader network than just RAND. In some ways, RAND is just one piece of a broader machine that effective altruists are building in Washington.

RAND’s position is that the funding from Open Philanthropy does not influence the substance of its policy recommendations. If funders don’t dictate policy recommendations, what’s the issue here?

The way that Open Philanthropy throws around money in this space, it becomes difficult to say no to. They are prolific funders, and my understanding is they often come in without a ton of strings attached beyond “Here's the area we'd like you to focus on.”

And because of that money, it's difficult to get thinkers in Washington to go on record with their concern that the explicit focus on existential risk crowds out many other problems in the AI space.

It has increasingly had a chilling effect on policy people who say, “Hey, look, I understand why you're concerned about the AI apocalypse. But we need to base policymaking on evidence. And if there's no real evidence at the moment, beyond sort of philosophical evidence, we need to spend our time and money elsewhere.”

The problem is, when you have a lot of money coming in with an explicit or implicit desire for research to focus on those questions, it starts to take on a life of its own. You have think tanks across the board all saying the same thing: “Existential AI risk is the key problem of our time, we have to spend all this time and attention on it.” And then it becomes an echo chamber.

Effective altruism is an idealistic worldview backed by savvy business moguls. Are these influence efforts really altruistic, or is this just a new guise for typical industry lobbying?

I've put this question to a ton of people. And I don't think it's an either/or.

There is a sense that a lot of the issues effective altruists are raising are real questions to ask. Even critics believe that most of these people are true believers when it comes to existential AI risks.

The problem is, there's just not enough evidence at the moment to back up some of the fears about frontier AI systems.

So if there's not this evidence, why are people so fixated on these concerns — to the point where tens of millions of dollars are being pumped into research and policymaking communities pushing in this direction, and away from the near-term harms of AI and biotech?

One of the easy ways to explain it is — because it is tech money, because it is folks at these top AI companies who are driving this — this focus on civilization-ending concerns is a deliberate distraction. That way, policymakers don't focus on things like privacy, bias and copyright: the kinds of regulations that could actually affect these companies in the short term.

What other factions are out there exerting influence on AI policy?

You're starting to see this come up on the internet, actually, and I don't know how much money is behind it, but “effective accelerationists” are growing in prominence.

This group also believes in the transformative power of AI technology, but they just feel like it's going to be good.

You see a lot of that lately from Andreessen Horowitz and other Silicon Valley venture capital firms that are increasingly concerned that effective altruists are slowing down the technology’s development.

You see it in questions around access to computing power, or a potential licensing regime — and the big thing right now, the open source-closed source debate. Should these open source models be allowed to proliferate? Or should the government come in and lock those down?

So … another group of wealthy tech investors who just want to see less regulation of AI?

You hear this characterization from a lot of AI researchers on the ground. They say AI technology is not going to be overwhelmingly transformative, neither in the super positive nor in the super negative sense. It’s going to be like any other technology, where there's going to be fits and starts in its development and people are going to have to muddle through unexpected setbacks that arise.

That argument’s not getting a lot of money or attention. And that's where some think tankers are really frustrated by what's happening right now.

On the one hand we’ve got a tight network of rich and powerful people that is being compared in some corners to a cult. On the other hand, the thing binding them together is a very nerdy set of beliefs about technology. Come midnight on the next full moon, are we more likely to find effective altruists performing “Eyes Wide Shut”-style rites or DM’ing with Eliezer Yudkowsky about thought experiments?

Obviously the latter.

 


 
 
pac men

A group of pro-crypto super PACs backed by Andreessen Horowitz, Coinbase, and the Winklevii is raising big money to influence the 2024 election.

POLITICO’s Jasper Goodman reported this morning on the push, which has so far raised $78 million to back crypto-friendly candidates. The project coincides with major crypto legislation working its way through the House of Representatives and with crypto boosters’ desperation to rehabilitate their political image in the wake of the FTX scandal.

It’s become “more apparent that the only way to counteract the lobbies of the big banks and big tech is to show that crypto and blockchain can be a force, too,” Andreessen Horowitz’s Chris Dixon wrote on X today, announcing the firm’s investment in the pro-crypto Fairshake PAC. He said the goal is “bringing together responsible actors in web3 and crypto to help advance clear rules of the road that will support American innovation while holding bad actors to account.” — Derek Robertson

preparing for the worst

OpenAI is adopting a framework to track and prepare for what it sees as potential “catastrophic risks” posed by artificial intelligence models.

The “Preparedness Framework,” unveiled in a blog post and 27-page document Monday and reported in today's National Security Daily newsletter, details how the ChatGPT maker will “develop and deploy our frontier models safely.” Among the steps OpenAI will take are running evaluations to assess risk, searching for “unknown categories of catastrophic risk,” and limiting deployment of models deemed too high-risk.

“The study of frontier AI risks has fallen far short of what is possible and where we need to be,” the company wrote.

OpenAI launched the framework weeks after its board ousted CEO Sam Altman over reported safety concerns, only to reinstate him days later. Most of the board members who worked to remove Altman have since resigned.

The effort comes as U.S. lawmakers have struggled to regulate artificial intelligence, while Europe leads the world in passing laws that place guardrails on the tech. Last week, Pope Francis called for a global treaty to regulate AI, a move that Sen. Mark Warner (D-Va.) said Washington is not ready for. — Matt Berg

Tweet of the Day: Customer service going the extra mile
