What’s driving California’s new move on AI policy

From: POLITICO's Digital Future Daily - Thursday, Sep 07, 2023, 08:02 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

With help from Derek Robertson


California Gov. Gavin Newsom. | Patrick T. Fallon/AFP/Getty Images

Earlier this year, to galvanize public concern about the growing risks of AI, Tristan Harris and Aza Raskin — co-founders of the Center for Humane Technology — uploaded an hourlong YouTube video they recorded in March at a private gathering in San Francisco. Since then, nearly 3 million people have watched their TED-style talk on “The A.I. Dilemma.”

One of those was California Gov. Gavin Newsom, who watched the video — multiple times, according to his office — took notes, and forwarded the video to his cabinet and senior staff.

The talk was meant to spur policymakers to put guardrails on the technology now. And in California, it just paid off.

Roughly six months later, on Wednesday, Newsom signed an executive order to shape the state’s own handling of generative AI, and to study the development, use and risks of the technology.

Newsom’s order was one of the most definitive moves yet to regulate AI — a technology that has been the focus of sudden attention in Congress, the West Wing and state capitals with little clarity on what should actually be done.

In an interview with DFD, Newsom’s deputy chief of staff, Jason Elliott, talked in more detail about how the order came about, and Newsom’s long-term goals.

With AI’s full long-term impact still unclear, focusing the order on the government's own use of the technology was a strategic choice, he said — a way to avoid boiling the whole AI ocean at once. “We first seek to control that which we can control,” Elliott said. (Independently, Sen. Gary Peters is trying out a similar approach with some success in the Senate.)

Elliott said technology vendors had been vying to sell AI tools to the California government since before ChatGPT burst into public consciousness. “There really isn't much in the way of best practices or guidelines for government procurement and licensing of genAI technology,” he said. “Well, that feels to us like a perfect place for California to step up.”

Newsom is hoping to set an example for how other state governments should contract with government technology vendors on generative AI, Elliott said. And in doing so, he hopes to influence wider industry standards for the technology.

The lead-by-example approach that California is taking has the blessing of the White House, which is also looking at ways the government should use generative AI, Elliott said. “We're working very closely with the president's team. To the extent that they want to push for legislation, we're obviously going to be supportive of where Joe Biden is headed with this,” he said.

California’s lawmakers are also looking into AI: Several pieces of AI legislation are floating around the Capitol this session, although only one — a bill affirming the legislature’s commitment to the White House’s AI Bill of Rights — has been signed into law so far. With attention at every level of government, Newsom’s office is mindful of its lane: “This is really something where we recognize our role in the federal system,” Elliott said, also mentioning the state legislature’s “ideas on how to approach consumer protection, bias, misinformation, and financial protection.”

“This executive order is not the be-all, end-all of California's entire posture on AI forevermore,” he said.

And the governor’s office is still hoping that Washington — whether Congress or the White House, or both — will lay out a national framework on AI. “We're very sensitive to companies not wanting a state-by-state patchwork quilt,” Elliott said. “But at the same time, we're not going to abdicate our responsibility.”

To that end, Elliott said part of the executive order was crafted so that Newsom’s office could start figuring out the security risks of AI for itself, instead of taking its cues from tech interest groups.

And underlying it all, for the state whose tech hubs birthed generative AI, there’s a bit of pride in stepping up to the plate ahead of others. When it comes to AI policy, “California is a natural first mover,” Elliott said. “We are the literal home to a majority of these companies and a majority of the patents and a majority of the venture capital globally.”

“This is really about us, embracing that first mover advantage, and trying to put some meat on the bones of what we mean when we say safe, ethical AI,” Elliott said.

 

A NEW PODCAST FROM POLITICO: Our new POLITICO Tech podcast is your daily download on the disruption that technology is bringing to politics and policy around the world. From AI and the metaverse to disinformation and cybersecurity, POLITICO Tech explores how today’s technology is shaping our world — and driving the policy decisions, innovations and industries that will matter tomorrow. SUBSCRIBE AND START LISTENING TODAY.

 
 
summits for everybody

POLITICO’s Mark Scott is back with a new edition of Digital Bridge, examining the seemingly redundant web of upcoming global AI summits meant to set norms and standards for the technology.

Mark points to a policy draft expected from a meeting of G7 officials today, which will be shared with experts and watchdogs at a subsequent October summit and then approved by the G7’s digital ministers sometime before the end of the year. He characterizes the process as a “massive game of horse-trading” over what will ultimately go into the guidance, one that reveals a philosophical split between Western powers that want a more hands-off approach and those that support Europe’s AI Act.

“In that context, the G7 is trying to thread the needle so that countries can pursue their own forms of AI governance, while also creating a patchwork of international cooperation,” Mark writes. And then… there’s also a summit planned in India for December, including a wider set of non-Western countries, and one in the U.K. at the beginning of November, the focus of which remains unclear — except for the U.K.’s insistence on including China, which represents an entirely different set of competing interests. — Derek Robertson

the west vs. the rest

Speaking of which, in a new essay for the Harvard Business Review, Hemant Taneja and Fareed Zakaria lay out the implications of what they describe as a “new digital cold war” between the West and China over the development of powerful AI technologies. That’s no small issue as China touts the release of a new Tencent-designed chatbot and gloats over having overcome trade embargoes on powerful microchips.

Taneja and Zakaria insist the West’s only hope of heading off a China-led, surveillance-focused, authoritarian digital global order is to band together. “For a future to prevail that prizes openness and individual rights, democratic nations need to be market leaders in AI,” they write. “The only way to ensure this is by promoting international collaboration, especially between democracies and other defenders of the rules-based order.”

They take pains to point out that means not just collaboration across governments, but between governments and the private sector itself. “We cannot risk AI going awry and putting democracies off track in this competitive race,” they write in conclusion. “Since the impacts of AI will be felt across every sector of society, accounting for broad stakeholder interests is both a moral responsibility and the only way to bring about sustainable transformation.” — Derek Robertson

Tweet of the day

most of recent waves of “tech” (aside from AI) has been laundering the glamor of technology companies into older industries to attract talent they would otherwise never see

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

DON’T MISS POLITICO’S TECH & AI SUMMIT: America’s ability to lead and champion emerging innovations in technology like generative AI will shape our industries, manufacturing base and future economy. Do we have the right policies in place to secure that future? How will the U.S. retain its status as the global tech leader? Join POLITICO on Sept. 27 for our Tech & AI Summit to hear what the public and private sectors need to do to sharpen our competitive edge amidst rising global competitors and rapidly evolving disruptive technologies. REGISTER HERE.

 
 
 

