DRIVING THE DAY: Gov. Gavin Newsom is still in Beijing as his visit to China continues, and he is expected to visit the Great Wall today. Our intrepid colleague Blanca Begert is traveling with the governor during his tour. You can read her coverage in the California Climate newsletter.

THE BUZZ — Artificial intelligence has taken off at a breakneck sprint, and California lawmakers are hustling to make sure they don’t get lapped.

From the state that brought you standard-setting laws in digital privacy and social media now come efforts to place limits on AI before it’s too late. After a few faltering attempts to regulate the new frontier this year, lawmakers in Sacramento are again preparing to take aim at what may become a powerful global force. And, like so many things in California, what they do could end up setting the tone for the rest of the nation.

“You’ll see, probably next year, that the most important legislation in the United States for the next five years will come out of California,” said Jim Steyer, founder and CEO of Common Sense Media, a nonprofit organization that played a leading role in efforts to pass the state’s landmark laws on privacy and social media.

California is no stranger to taking on Big Tech. Attorney General Rob Bonta has taken great pride in challenging social media companies over what he — and other AGs — see as problematic practices.

But Steyer said he thinks AI will be a “bigger deal” than social media, and tech companies have already taken note of the rumblings in state legislatures. As our colleagues in Washington reported earlier this year, groups like NetChoice have begun funneling resources away from the Capitol and into statehouses in response to emerging regulations.

Khara Boender, state policy director for the Computer & Communications Industry Association, said many of the conversations happening across the country relate to new innovations started in California, and that the group anticipates California “will explore proposals related to AI in earnest” next year.

State Sen. Scott Wiener in September introduced the bones of a bill intended to be a safety framework for artificial intelligence. Wiener told Playbook that AI has “so much promise,” but that he doesn’t want California to be blindsided by its negative impacts the way it fell behind on regulating social media.

“Almost nothing happened for many years, in terms of safety regulation, at the federal and state level,” Wiener said of social media and privacy. “And the horse was really out of the barn. Then, you try to put it back in the barn, and it's really hard.”

Industry groups were also closely engaged on Rebecca Bauer-Kahan’s Assembly Bill 331 earlier this year, which would have set regulations on automated decision-making tools to protect against “algorithmic bias.” That bill was killed in the Appropriations Committee, but the subject could make a reappearance next year.

The core tension with AI will be between companies’ willingness to self-regulate and the need for government guardrails, said Samantha Gordon, chief programs officer at TechEquity, a California-based progressive tech group. Self-regulation is risky in such a lucrative industry. Earlier this year, the Biden administration persuaded 15 companies to sign voluntary commitments to safety and security, a move applauded as a productive first step but one that also underscores the difficulty of the battle ahead.
"That was the president of the United States of America asking publicly for these things,” Gordon said. "I think that sort of tells you this is not easy.” GOOD MORNING. Happy Wednesday. Thanks for waking up with Playbook.