A few short years ago, Daniel Colson was taking a startup investment from OpenAI founder Sam Altman and rubbing shoulders with other AI pioneers in the Bay Area tech scene.

Now, the tech entrepreneur is launching a think tank aimed at recruiting Washington's policymakers to stop his one-time funder.

Colson views it this way: The top scientists at the biggest AI firms believe they can make artificial intelligence a billion times more powerful than today's most advanced models, creating "something like a god" within five years.

His proposal to stop them: Prevent AI firms from acquiring the vast supplies of hardware they would need to build super-advanced AI systems by making it illegal to build computing clusters above a certain processing power. Because of the scale of computing systems needed to produce a superintelligent AI, Colson argues such endeavors would be easy for governments to monitor and regulate.

"I see that science experiment as being too dangerous to run," he said.

As Washington's policy scene reorients toward AI, Colson, 30, is the latest arrival to see cosmic stakes in the looming fights over the technology. But his Artificial Intelligence Policy Institute is looking to start with a humbler contribution to the emerging policy landscape: polling.

Last week, AIPI released its first poll, a survey of a thousand respondents, which found that 72 percent of American voters support measures to slow the advance of AI.

Lamenting the lack of quality public polling on AI policy, Colson said he believes such surveys can shift the narrative in favor of decisive government action ahead of looming legislative fights.

To do that, Colson has enlisted a roster of tech entrepreneurs and policy wonks.

"AI safety is just massively under-surveyed," said Sam Hammond, an AI safety researcher listed among AIPI's advisers.

Colson is also getting advice from one adviser who goes unmentioned on AIPI's website. Progressive pollster Sean McElwee, an expert in using polling to shape public opinion who is best known for his relationships with the Biden White House and Sam Bankman-Fried, is advising Colson behind the scenes.

A spokesman for Colson, Sam Raskin, described McElwee as "one of many advisers."

McElwee, who was ousted last year from the left-wing polling firm Data for Progress, reportedly in part over his Bankman-Fried ties, did not respond to a request for comment.

As AI safety proponents confront the technology's rapid advance, Colson has been participating in calls convened in recent months by Rethink Priorities, a nonprofit launched in 2018, to formulate a policy response among like-minded researchers and activists.

Rethink Priorities is associated with Effective Altruism, a utilitarian philosophy that is widespread in the tech world. Though many Effective Altruists also worry about AI's potential existential risks, Colson distances himself from the movement.

He traces his misgivings to an Effective Altruism gathering at the University of Oxford in 2016, where Google DeepMind CEO Demis Hassabis gave a talk assuring attendees that the company considered AI safety a top priority.

"All of the [Effective Altruists] in the audience were extremely excited and started clapping," Colson recalled. "I remember thinking, 'Man, I think he just co-opted our movement.'"

(A spokeswoman for DeepMind said Hassabis "has always been vocal about how seriously Google DeepMind takes the safe and responsible deployment of artificial intelligence.")

A year later, Colson co-founded Reserve, a stablecoin-focused crypto startup that landed investments from Altman and Peter Thiel. He found himself running in the same circles as many of the people who were then laying the foundations for the current AI boom.

But Colson said his experience as a Bay Area tech founder left him with the conviction that AI scientists' vision for advancing the technology is unsafe.

OpenAI did not respond to a request for comment.

Colson also concluded that Effective Altruists' vision for containing AI is too focused on technological fixes while ignoring the potential for government regulation to ensure public safety. That, he said, motivated the launch of AIPI.

The group's funding has come from a handful of individual donors in the tech and finance worlds, whom Colson declined to name.

In addition to more polling, AIPI is planning to publish an analysis of AI policy proposals this fall. Colson said he views the next 18 months as the best window for passing effective legislation.

Because of the industrial scale of computing needed to achieve the ambitions of AI firms, he argues that computing clusters are a natural bottleneck at which to focus regulation. He estimates such a measure could forestall the arrival of computer superintelligence by about 20 years.

Congress, he suggested, could cap the computing used to train AI models at 10 to the 25th flops, a count of the total floating-point operations consumed by a training run. (By comparison, GPT-2, which was state of the art in 2019, was trained with roughly 10 to the 21st flops, Colson said.) Or better yet, he said, set the cap five orders of magnitude lower, at 10 to the 20th flops. "That's what I would choose."
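A minimal back-of-the-envelope sketch, assuming the figures above are total training flops (as Colson's GPT-2 comparison implies), shows how far apart those caps sit; the numbers come straight from the article, and the script is illustrative only:

```python
import math

# Figures quoted in the article, read as total floating-point
# operations used to train a model (not operations per second).
GPT2_TRAINING_FLOPS = 1e21   # GPT-2, state of the art in 2019, per Colson
CAP_LOOSE = 1e25             # the cap Colson suggests Congress could set
CAP_STRICT = 1e20            # the stricter cap he says he would choose

def orders_of_magnitude(a: float, b: float) -> float:
    """Powers of ten separating a from b (positive means a > b)."""
    return math.log10(a / b)

print(f"Loose cap vs. GPT-2:  {orders_of_magnitude(CAP_LOOSE, GPT2_TRAINING_FLOPS):+.0f}")
print(f"Strict cap vs. GPT-2: {orders_of_magnitude(CAP_STRICT, GPT2_TRAINING_FLOPS):+.0f}")
# Loose cap vs. GPT-2:  +4
# Strict cap vs. GPT-2: -1
```

Put plainly, the looser cap would permit training runs about 10,000 times larger than GPT-2's, while the stricter cap Colson favors would rule out even a GPT-2-scale run.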