5 questions for Google DeepMind's Tom Lue

From: POLITICO's Digital Future Daily - Friday Dec 08, 2023 09:04 pm

By Derek Robertson

Presented by Connect The Future


Google DeepMind general counsel and head of governance Tom Lue. | Google DeepMind

Hello, and welcome to this week’s installment of The Future in Five Questions. This week we interviewed Tom Lue, general counsel and head of governance at Google DeepMind (which this week launched the AI chatbot Gemini). Lue, a recent guest on the POLITICO Tech podcast, is a veteran at the intersection of emerging tech, policy, and the world of law. He served as deputy general counsel at former President Barack Obama’s Office of Management and Budget, with professional experience as a Congressional staffer and Supreme Court clerk. We discussed the need for AI talent and knowledge in government, the surprising pace of regulatory efforts around AI, and the sci-fi of Kazuo Ishiguro. This conversation was edited and condensed for length and clarity:

What’s one underrated big idea?

Using AI to time travel! Not literally, of course, at least not yet. But AI has already been a powerful catalyst for speeding up innovation and scientific discovery.

Revolutionary discoveries historically have taken years of trial and error and often cost (many) millions of dollars. The Human Genome Project took more than a decade, thousands of researchers, and roughly $3 billion, but its achievements kicked off the modern study of genomics. By automating expensive and painstaking processes, AI can leapfrog many of the barriers limiting researchers today.

For example, scientists for decades tried to find a method to reliably determine a protein’s structure to uncover how it works and help find new medicines. It would take a student an entire PhD to decode just one protein structure. But using our AI system AlphaFold, we were able to map the entire known protein universe — over 200 million proteins — and we made it freely available to researchers around the world, saving countless dollars and years in the lab and dramatically accelerating the pace of progress in structural biology.

This is also true when we look at enabling new technologies. Modern tech, from computer chips to solar panels, all relies on stable inorganic crystals. Behind each new stable crystal can be months of painstaking experimentation. Just last month we published GNoME, our AI tool which discovered 2.2 million new crystals, including 380,000 stable materials that could power future technologies.

And this is just the beginning.

What’s a technology that you think is overhyped?

Overhyped isn’t the right word here, but I think polarizing discourse around AI and existential risk can sometimes generate headlines and overshadow the need to address the more immediate risks that exist in AI systems today. Whether in developing AI systems or putting in place the guardrails to keep them safe, it’s critical to consider the entire spectrum of risks — both the near and longer term — and to bring together the different communities of experts, industry, and civil society that each have important perspectives to share on how AI can be developed and governed most effectively.

What book most shaped your conception of the future?

Kazuo Ishiguro’s "Klara and the Sun" is a beautiful, thought-provoking, and moving story about what a world with digital companions (what the book terms “Artificial Friends”) could look like. All of the book’s themes — around love, creativity, loss, physical and emotional connection, and how AI might powerfully shape our most important life experiences — really stuck with me. And that’s why it’s so important to take a human-centered approach to the development of AI systems, and to make sure they reflect and respect the broad diversity of human experiences and values.

What could the government be doing regarding technology that it isn’t?

We need more AI expertise and talent in government. Having spent nearly a decade in the U.S. government across all three branches earlier in my career, I’ve seen how difficult it can be for policymakers and regulators to keep up with advances in industry — let alone an industry moving at the breakneck pace we’ve seen in AI over the past couple of years.

The good news is that we are seeing positive signs that governments are ramping up their efforts to recruit top AI talent. We know that in the U.K. for example, PhD-level experience among senior government officials working on AI safety has grown significantly this year alone, and the new AI Safety Institutes recently announced by the U.S. and U.K. governments are a promising development which should play a key role in continuing to build up that in-house expertise. We also very much welcomed the commitment in President Biden’s Executive Order to accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge.

What has surprised you the most this year?

The momentum around global coordination on AI. Before this year, there wasn’t much international coordination on AI governance. But 2023 brought a wave of collaboration, with efforts such as the White House AI commitments, the G7 Code of Conduct, the creation of the UN’s High-Level Advisory Body on AI, and the first global Safety Summit on AI, hosted by the U.K. Government.

These milestones are laying the foundation for established norms for how this technology should be governed. This is crucial as AI models continue to advance at a rapid pace. Indeed, this week Google introduced Gemini to the world — the most capable and general model we’ve ever built. Gemini was built from the ground up to be multimodal, which means it can seamlessly understand, operate across, and combine different types of information including text, code, audio, image, and video. Crucially, it also has the most comprehensive safety evaluations of any Google AI model to date.

We’re heading into the new year with models more capable than ever before, with more robust safety testing and international coordination on AI clearly prioritized across the ecosystem.

 

A message from Connect The Future:

Unfair pole costs jeopardize efforts to connect 100% of Americans to broadband. Replaced poles deliver benefits to pole owners for decades – that’s why in Canada they share at least 50% of the investment. With similar cost-sharing, the FCC could speed deployment and make programs like BEAD more effective. Learn more.

 
mother's little helper

A New York nonprofit is hoping AI can become a mom's best friend.

POLITICO's Sophie Gardner reported today for the Women Rule newsletter on PaidLeave.ai, a language model meant to help New Yorkers figure out how much paid maternity leave they're entitled to and how to access it. The model is run by Moms First, a nonprofit advocacy group. CEO Reshma Saujani said she connected early in the development process with OpenAI co-founder Sam Altman, who provided her with guidance from his development team.

The software’s secret sauce: Instead of being trained on the contents of the entire internet, like ChatGPT, it’s trained only on official documents related to paid leave from New York state. By targeting its data set so specifically, the software vastly reduces the “hallucinations” and irrelevant information ChatGPT can be prone to cough up.
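The approach described here, answering only from a narrow, vetted set of documents, lends itself to a short illustration. The sketch below is hypothetical: the tiny stand-in corpus, the overlap-based retrieval, and the function names are all assumptions, not PaidLeave.ai's actual implementation. It simply shows one generic way a scoped assistant can ground its answers in source material and decline to answer anything outside it.

```python
# Hypothetical sketch: answer questions only from a fixed set of source
# documents, and decline when nothing in the corpus matches. This is one
# generic pattern for limiting hallucinations; it is NOT PaidLeave.ai's code.

from collections import Counter
import math

# A tiny stand-in corpus of "official documents" (placeholder text).
CORPUS = {
    "eligibility": "Most employees who work in New York State for a covered "
                   "employer become eligible for Paid Family Leave after a "
                   "qualifying period of full-time or part-time work.",
    "benefits": "Eligible employees can take up to 12 weeks of job-protected, "
                "paid time off to bond with a new child.",
}

def _tokens(text):
    """Lowercase bag-of-words counts for a piece of text."""
    return Counter(text.lower().split())

def retrieve(question, corpus, min_score=0.15):
    """Return the best-matching document, or None if nothing matches well."""
    q = _tokens(question)
    best_key, best_score = None, 0.0
    for key, doc in corpus.items():
        d = _tokens(doc)
        overlap = sum((q & d).values())  # shared-token count
        score = overlap / math.sqrt(sum(q.values()) * sum(d.values()))
        if score > best_score:
            best_key, best_score = key, score
    return (best_key, corpus[best_key]) if best_score >= min_score else None

def answer(question):
    hit = retrieve(question, CORPUS)
    if hit is None:
        # Refusing to guess is what keeps a scoped assistant from making things up.
        return "I can only answer questions covered by the source documents."
    key, passage = hit
    # In a real system, the passage would be handed to a language model as
    # context; here we simply return it with its source label.
    return f"[{key}] {passage}"

if __name__ == "__main__":
    print(answer("How many weeks of paid leave can I take to bond with a new child?"))
    print(answer("What is the capital of France?"))
```

Run as written, the sketch should answer the bonding question from the benefits passage and refuse the off-topic one, which is the basic property a narrowly scoped tool like this is after.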

Saujani told Sophie the software would “help more women get access to benefits, which is going to boost women’s economic empowerment,” and that she’s currently talking to governors in 13 other states with paid family leave.

 


 
about that deadline

European legislators tacked on another day of negotiation over the AI Act, after a marathon 22-hour session Thursday failed to produce an agreement.

POLITICO's Gian Volpicelli reported that negotiators could not agree on AI use by security agencies and law enforcement, including a proposed ban on facial recognition technology. The rift was largely between member governments, which want access to those tools, and the European Parliament, which wants to ban them.

Civil rights groups are not happy: “Drawing out the trilogue is a tactic to force sleepy negotiators to accept a weaker deal,” Sarah Chander, a senior policy advisor at European Digital Rights, told Gian. “There’s a major concern that some in the Parliament will accept a disastrous deal when it comes to facial recognition and predictive policing.”

Negotiations, as of this writing, are still ongoing.

 

GET A BACKSTAGE PASS TO COP28 WITH GLOBAL PLAYBOOK: Get insider access to the conference that sets the tone of the global climate agenda with POLITICO's Global Playbook newsletter. Authored by Suzanne Lynch, Global Playbook delivers exclusive, daily insights and comprehensive coverage that will keep you informed about the most crucial climate summit of the year. Dive deep into the critical discussions and developments at COP28 from Nov. 30 to Dec. 12. SUBSCRIBE NOW.

 
 


Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com) and Daniella Cheslow (dcheslow@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from Connect The Future:

Unfair pole replacement costs jeopardize efforts to connect 100% of Americans to broadband. When pole owners don’t pay their fair share for investments in their own infrastructure, unserved families and small businesses pay the price. The FCC has the opportunity to fix the pole replacement cost issue and expedite broadband expansion programs like BEAD. Learn More.

 
 

JOIN WOMEN RULE ON 12/12: For centuries, women were left out of the rooms that shaped policy, built companies and led countries. Now, society needs the creativity and entrepreneurship of women more than ever. How can we make sure that women are given the space and opportunity to shape the world’s future for the better? Join POLITICO's Women Rule on Dec. 12 for Leading with Purpose: How Women Are Reinventing the World to explore this and more. REGISTER HERE.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 

