Who influenced President Joe Biden’s new executive order on artificial intelligence? Over the weekend, POLITICO’s Brendan Bordelon uncovered the fingerprints of the RAND Corporation, which has cultivated ties to a growing influence network backed by tech money.

For months, Brendan has been following the money shaping the regulatory debate over AI. And much of that money is coming from wealthy backers of “effective altruism,” a prominent ideology in Silicon Valley circles that places a great deal of weight on the existential risks of AI.

Brendan’s latest reporting reveals that RAND, one of the nation’s oldest and most venerable think tanks, is facing a wave of internal concern. Some employees have voiced worry that after taking funding from Open Philanthropy, a foundation started by Facebook co-founder and effective altruist Dustin Moskovitz, RAND has become a vehicle for inserting the movement’s ideas into policy.

DFD caught up with Brendan to dig into the discontent at RAND and the factions fighting over the future of AI policy. Our conversation has been edited and condensed for clarity.

You reported that RAND staffers — some of whom joined the think tank from the Biden administration last year — shaped parts of Biden’s executive order. But you also obtained audio of a meeting where other RAND workers expressed concerns about the organization’s work on AI. What’s the problem? Aren’t think tanks supposed to be close to governments and shape policy?

I don’t think it’s necessarily uncommon that RAND CEO Jason Matheny and senior information scientist Jeff Alstott jumped straight from the National Security Council and the White House Office of Science and Technology Policy to this think tank that is still very much embedded in how the White House and the federal government approach AI.

The bigger issue is the links these folks have with an ideological movement. And it’s a movement that’s very much associated with the top AI companies driving the field at the moment. That’s where folks start to raise eyebrows.

That’s particularly supercharged when you see the money coming in from groups like Moskovitz’s foundation, Open Philanthropy, that are aligned with effective altruism and that are building a much broader network than just RAND. In some ways, RAND is just one piece of a broader machine that effective altruists are building in Washington.

RAND’s position is that the funding from Open Philanthropy does not influence the substance of its policy recommendations. If funders don’t dictate policy recommendations, what’s the issue here?

The way that Open Philanthropy throws around money in this space, it becomes difficult to say no to. They are prolific funders, and my understanding is they often come in without a ton of strings attached beyond “Here’s the area we’d like you to focus on.”

And because of that money, it’s difficult to get thinkers in Washington who are worried about the explicit focus on existential risk, to the exclusion of many other problems in the AI space, to go on record with those concerns. It has increasingly had a chilling effect on policy people who say, “Hey, look, I understand why you’re concerned about the AI apocalypse. But we need to base policymaking on evidence. And if there’s no real evidence at the moment, beyond sort of philosophical evidence, we need to spend our time and money elsewhere.”

The problem is, when you have a lot of money coming in with an explicit or implicit desire for research to focus on those questions, it starts to take on a life of its own.
You have think tanks across the board all saying the same thing: “Existential AI risk is the key problem of our time, we have to spend all this time and attention on it.” And then it becomes an echo chamber.

Effective altruism is an idealistic worldview backed by savvy business moguls. Are these influence efforts really altruistic, or is this just a new guise for typical industry lobbying?

I’ve put this question to a ton of people. And I don’t think it’s an either/or. There is a sense that a lot of the problems that effective altruists are raising are real questions to ask. Even critics believe that most of these people are true believers when it comes to existential AI risks.

The problem is, there’s just not enough evidence at the moment to back up some of the fears about frontier AI systems. So if there’s not this evidence, why are people so fixated on these concerns — to the point where tens of millions of dollars are being pumped into research and policymaking communities pushing in this direction, and away from the near-term harms of AI and biotech?

One of the easy ways to explain it is — because it is tech money, because it is folks at these top AI companies who are driving this — that the focus on civilization-ending concerns is a deliberate distraction. That way policymakers don’t focus on things like privacy, bias and copyright: regulations that could actually impact these companies in the short term.

What other factions are out there exerting influence on AI policy?

You’re starting to see this come up on the internet, actually, and I don’t know how much money is behind it, but “effective accelerationists” are growing in prominence. This group also believes in the transformative power of AI technology, but they just feel it’s going to be good. You see a lot of that from Andreessen Horowitz lately, and from other Silicon Valley venture capital groups that are increasingly concerned that effective altruists are slowing down the technology’s development.

You see it in questions around access to computing power, or a potential licensing regime — and the big thing right now, the open source-closed source debate. Should these open source models be allowed to proliferate? Or should the government come in and lock those down?

So … another group of wealthy tech investors who just want to see less regulation of AI?

You hear this characterization from a lot of AI researchers on the ground. They say AI technology is not going to be overwhelmingly transformative, in neither the super positive nor the super negative sense. It’s going to be like any other technology: there are going to be fits and starts in its development, and people are going to have to muddle through unexpected setbacks as they arise. That argument’s not getting a lot of money or attention, and that’s where some think tankers are really frustrated by what’s happening right now.

On the one hand, we’ve got a tight network of rich and powerful people that is being compared in some corners to a cult. On the other hand, the thing binding them together is a very nerdy set of beliefs about technology. Come midnight on the next full moon, are we more likely to find effective altruists performing “Eyes Wide Shut”-style rites or DM’ing with Eliezer Yudkowsky about thought experiments?

Obviously the latter.