Earlier this year, to galvanize public concern about the growing risks of AI, Tristan Harris and Aza Raskin — co-founders of the Center for Humane Technology — uploaded an hourlong YouTube video they recorded in March at a private gathering in San Francisco. Since then, nearly 3 million people have watched their TED-style talk on “The A.I. Dilemma.”

One of those was California Gov. Gavin Newsom, who watched the video — multiple times, according to his office — took notes, and forwarded it to his cabinet and senior staff.

The talk was meant to urge policymakers to put guardrails on the technology now. And in California, it just paid off.

Roughly six months later, on Wednesday, Newsom signed an executive order to shape the state’s own handling of generative AI, and to study the development, use and risks of the technology.

Newsom’s order was one of the most definitive moves yet to regulate AI — a technology that has been the focus of sudden attention in Congress, the West Wing and state capitals, with little clarity on what should actually be done.

In an interview with DFD, Newsom’s deputy chief of staff, Jason Elliott, talked in more detail about how the order came about, and about Newsom’s long-term goals.

With AI’s full long-term impact still unclear, focusing the order on the government’s own use of the technology was a strategic choice, he said — a way to avoid boiling the whole AI ocean at once. “We first seek to control that which we can control,” Elliott said. (Independently, Sen. Gary Peters is trying out a similar approach with some success in the Senate.)

Elliott said technology vendors had been vying to sell AI tools to the California government since before ChatGPT burst into public consciousness. “There really isn’t much in the way of best practices or guidelines for government procurement and licensing of genAI technology,” he said.
“Well, that feels to us like a perfect place for California to step up.”

Newsom is hoping to set an example for how other state governments should contract with government technology vendors on generative AI, Elliott said. And in doing so, he hopes to influence wider industry standards for the technology.

The lead-by-example approach California is taking has the blessing of the White House, which is also looking at ways the government should use generative AI, Elliott said. “We’re working very closely with the president’s team. To the extent that they want to push for legislation, we’re obviously going to be supportive of where Joe Biden is headed with this,” he said.

California’s lawmakers are also looking into AI: Several pieces of AI legislation are floating around the Capitol this session, although only one — a bill affirming the legislature’s commitment to the White House’s AI Bill of Rights — has been signed into law so far.

With attention at every level of government, Newsom’s office is mindful of its lane. “This is really something where we recognize our role in the federal system,” Elliott said, also mentioning the state legislature’s “ideas on how to approach consumer protection, bias, misinformation, and financial protection.”

“This executive order is not the be-all, end-all of California’s entire posture on AI forevermore,” he said.

And the governor’s office is still hoping that Washington — whether Congress or the White House, or both — will lay out a national framework on AI. “We’re very sensitive to companies not wanting a state-by-state patchwork quilt,” Elliott said. “But at the same time, we’re not going to abdicate our responsibility.”

To that end, Elliott said part of the executive order was crafted so that Newsom’s office could start figuring out the security risks of AI for itself, instead of taking its cues from tech interest groups.
And at the bottom of it, for the state whose tech hubs birthed generative AI, there’s a bit of pride involved in stepping up to the plate ahead of others. When it comes to AI policy, “California is a natural first mover,” Elliott said. “We are the literal home to a majority of these companies and a majority of the patents and a majority of the venture capital globally.”

“This is really about us embracing that first mover advantage, and trying to put some meat on the bones of what we mean when we say safe, ethical AI,” Elliott said.