5 questions for Stanford's Fei-Fei Li

From: POLITICO's Digital Future Daily - Friday, Nov 03, 2023 08:06 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

Fei-Fei Li. | Drew Kelly/Stanford Institute For Human-Centered Artificial Intelligence

Hello, and welcome to this week’s edition of The Future in 5 Questions. This week we interviewed Stanford University’s Fei-Fei Li, a computer scientist and co-director of the Stanford Institute for Human-Centered Artificial Intelligence who served on the Biden administration’s National Artificial Intelligence Research Resource Task Force. In her new book “The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI,” she writes about the importance of centering AI systems around human goals and desires, the founding of her institute to achieve that goal, and the importance of widening the circle of voices involved in AI policy — all subjects covered in this interview. This conversation has been edited and condensed for clarity:

What’s one underrated big idea?

The discussion around AI is taking human agency out of the conversation, from the technology itself to policy. Humans have a profound relationship with machines and technology that we should truly invest in, and explore.

There is a huge possibility that this technology could super-power humans. Large language models have huge potential to be a co-pilot for knowledge workers, but it butts up against concerns about changes to jobs. If we take advantage of what this technology can offer and develop it in a direction that is much more collaborative, rather than as a replacement technology, we can truly tap into human capital and human talents in profound ways. There's so much that AI can do, but humans have to be put in the center.

What’s a technology that you think is overhyped?

The idea of AI “sentience.” This is an intellectual question that's worth exploring, and human sentience is also still a question among philosophers. But the hype here is around the doomy rhetoric about AI sentience, which takes away from recognizing the technology's immediate, real, and potentially catastrophic risks. We need a lot of hands on deck to work on those. If we become so over-focused on a deeply, profoundly intellectual debate about sentience, we take away some of the more immediate focus.

What book most shaped your conception of the future?

“Homo Deus” is a good book for this conversation. I can’t say I agree with everything in it, but it's a good book coming from a philosopher slash historian, who is looking at humanity and civilization’s trajectory, and the relationship between humans and the bigger force of the divine, eventually arriving at a conjecture that powerful algorithms will function in such a profound and impactful way that it will shape humans’ relationship with the concept of God. It’s an example of a multidisciplinary discourse that invites not only technologists, but humanists, and I think it's really, really important.

Even when I was writing my book, I didn't want to just write a book about pure tech. The arc of the book is a scientist eventually turning into a humanist, because in the future we will need scientists, social scientists, humanists, and everyone on the planet to have a moral understanding of the world as powerful technologies like AI shape our civilization.

What could government be doing regarding technology that it isn’t?

What is concerning to me is that the government's attention to AI seems to be more of a total enamoredness with the products of commercial efforts. The conversation that's happening right now is centered around the commercial leaders. I'm not saying there's zero involvement with the public sector, but the lack of public sector leadership engagement, the lack of public sector voices in AI and policy, is deeply concerning.

For example, recently, we released an evaluation of all the publicly released large language models against the EU AI Act’s privacy or transparency indexes. Imagine if that came from a private company. Are you going to trust that? Government should invest not only in AI research, cloud, and data repositories, but also take a moonshot mentality and invest in national labs, regional labs, and personnel. I'm concerned that these voices are not at the table.

What has surprised you the most this year?

2023 turned out to be the year of AI policy. I sat on the National AI Research Resource Task Force for the past two years; we planned for two years and sent the plan to Congress, and now the CREATE AI Act is active, with bipartisan sponsors.

We really hope that kind of investment in the public sector will help reshape the imbalance we see and add some direly needed resources. The government is paying attention and looking at investment: there's the executive order, the summits, and the meetings called by government leaders.

 


 
musk and sunak, bffs

United Kingdom Prime Minister Rishi Sunak and Elon Musk. | Getty Images

Although Elon Musk ditched day two of the United Kingdom’s AI Safety Summit, he joined U.K. Prime Minister Rishi Sunak in London last night for an extremely chummy interview.

That’s according to POLITICO’s Tom Bristow and Dan Bloom, who recapped the conversation last night — writing that although “The format was meant to be Sunak interviewing Musk… the PM's lengthy questions diverged into listing his own achievements and heaping praise onto the tech tycoon.”

Still, the conversation was occasionally revealing when it came to Musk's own views on AI. He thanked Sunak for inviting China to the summit, saying "Having them here is essential. If they're not participants, it's pointless." He predicted that AI would usher in a future where "You can have a job if you want to have a job … but the AI will be able to do everything." And on the titular "safety" question of this week's summit, he (somewhat vaguely) asserted the need for models to have a built-in "referee" and "off switch" that can "throw it into a safe state."

summit takeaways

Oh, and then there was the actual summit: Day two concluded with an agreement between world governments and leading AI companies to test models before releasing them for public use.

POLITICO’s Vincent Manancourt and Tom Bristow reported on the agreement, signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K., and on the private end of the deal, Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI and OpenAI.

Sunak said the companies agreed to provide more access to his Frontier AI Taskforce. He also elaborated on an agreement reached between 28 countries to form an international advisory panel on AI risk modeled on the Intergovernmental Panel on Climate Change, which will help academic Yoshua Bengio produce a “State of Science” report sometime next year.

Tweet of the Day

The enduring mystery of VCs is that they will choose to talk even when the EV of doing so is negative.

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com) and Daniella Cheslow (dcheslow@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

