5 questions for Stanford’s Lloyd B. Minor

From: POLITICO's Digital Future Daily - Friday, Aug 04, 2023 08:22 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

Lloyd B. Minor — Dean of the Stanford University School of Medicine. | Stanford University

Hello, and welcome back to The Future in 5 Questions. This Friday, we have Lloyd B. Minor — Dean of the Stanford University School of Medicine, and a big advocate for using tech to make health care more precise. Minor’s tenure has seen Stanford Medicine develop a focus on personalized, preventative health care and launch a new initiative on responsible AI in health care. He also hosts a podcast called Minor Consult.

Read on to hear Minor’s thoughts on the rise of predictive medicine, why AI in healthcare is currently overhyped — and the risks of technology that reads your mind.

Responses have been edited for length and clarity.

What’s one underrated big idea?

The idea of prediction and prevention and early detection. In the United States, we have an amazing sick care system. We've pioneered major advances in transplantation surgery, ultra-complex procedures for advanced heart disease and advanced cancer. But we don't do a very good job of predicting and preventing disease or diagnosing it early. And that's a real opportunity. So how do we move from sick care — which we do well, although there are access problems in our country, for sure — to health care?

We know for a very small group of genes — the BRCA genes, for example — the enhanced proclivity for cancer in patients who have those genetic variants. But there are a host of others that remain to be identified, validated, and then rolled out into diagnostic testing. It should be possible for each of us to have a much more accurate profile of our propensity for disease, and know what type of screening we need to undergo based upon our profile.

What’s a technology you think is overhyped? 

The application of AI in medicine is both overhyped in terms of some of its immediate effects, and underhyped in terms of its long term effects. We tend to overestimate what we can get done in a short period of time, and underestimate what we can get done in a longer period of time when it comes to the application of technology.

In the longer term, I think [generative AI models] are going to become more and more accurate. Already today, they are helping physicians to supplement our medical knowledge — knowledge of complex diseases that occur rarely, that even the experts in a narrow field don't carry around in their head.

An example of overhyping would be saying that AI is going to somehow eliminate fields of medicine or radically change the way healthcare is practiced over a short period of time. I think it's going to profoundly affect the way health care is delivered and the way each of us can engage with our health.

What book most shaped your conception of the future?

Nita A. Farahany’s “The Battle for Your Brain.” It talks about neurotechnology as it is being applied today and in the future. It's very well written and engaging. She does a great job of describing the profoundly good impact of that technology on diseases like epilepsy and Parkinson's disease. Of course, there are potential downsides to the technology — like, will it be possible to simply analyze our brain and know what we're thinking? Know how we're going to vote, for example, and therefore seriously invade our privacy or alter our interactions with others? Those are real concerns. She does a really nice job of describing the opportunities but also some of the things we have to be careful of in the future as well.

What could government be doing regarding tech that it isn’t?

Polls have shown that people today, understandably, are deeply skeptical about the application of AI to health and healthcare delivery, because of the potential for doing real harm. But the polls also indicate that we need to have much more of a dialogue with the public about the status of AI today — particularly generative AI. Where's it going in the future? How do we develop responsible ways of oversight, regulation and compliance?

There's a lot of discussion going on in Washington about that now. Both policymakers and those driving the science are going to benefit from a much broader swath of the public being engaged in this dialogue. Otherwise, no matter how well thought out, the regulations won't necessarily earn the trust and the respect of the public.

Government plays a really important role in being a convener of these dialogues. Certainly, elected representatives can and should play an important role with their constituents. Federal agencies like the FDA and the CDC, particularly coming out of COVID, are doing much more to reach out to the public. All of us learned through COVID the importance of broad engagement of the public — including discussing with the public when we don't know the answer to something.

What has surprised you most this year?

It's hard to believe, but generative AI just came to the public in November of 2022. I've been playing around with various generative AI models and it's incredible. I can get information assimilated in a meaningful way so much faster. I could do the same thing with internet searches, but it would be on my back to piece together the various different components.

I'll just mention an anecdote. Earlier in my career, I drove the field of vestibular (inner ear balance) physiology for many years. So I just typed into a large language model: What is superior canal dehiscence syndrome? I got back two paragraphs that were, quite frankly, probably as good as I could write. They covered the major points and even got some of the subtleties correct. Now, as I tried to push it more, it broke down. But at a high level, it was far better than you would get from doing just a simple internet search. That to me was like, “Wow, this thing has real power.” And it will, of course, only get better with time.

 

A NEW PODCAST FROM POLITICO: Our new POLITICO Tech podcast is your daily download on the disruption that technology is bringing to politics and policy around the world. From AI and the metaverse to disinformation and cybersecurity, POLITICO Tech explores how today’s technology is shaping our world — and driving the policy decisions, innovations and industries that will matter tomorrow. SUBSCRIBE AND START LISTENING TODAY.

 
 
HOLLYWOOD’S AI APOCALYPSE COMETH:

There’s a future in which artificial intelligence can insert you into movies or build TV shows around your viewing preferences — but it just might kill the artistic expression and experimentation that has propelled Hollywood for a century.

That’s the warning filmmaker Justine Bateman had for my colleague Steven Overly on today’s episode of POLITICO Tech, a daily podcast that launched earlier this week. For Bateman, AI-generated entertainment will feel like “breathing circulated air” with no fresh ideas to surprise audiences or challenge their thinking.

And while movies and television today bring people together for shared viewing experiences — Barbenheimer, anyone? — generative AI has the power to give each person a tailored experience that lacks a sense of unity. If that sounds a lot like social media to you, Bateman sees those parallels, too.

Be sure to listen to the podcast interview and read Steven’s full Q&A for Bateman’s thoughts on how Washington policymakers can act. A few excerpts:

On entertainment’s ‘set it and forget it’ future: 

One way that coders think about things is a little bit of set it and forget it. There are some apps that work like that, some platforms that work like that. TikTok, Twitter — though I don’t know what Twitter’s turning into — Instagram, Facebook. They’re not having to fill the platform with a bunch of stuff. They’re not having to fill the shelves; other people do.

I don’t see any reason why [studios and streamers] wouldn’t want all-AI films that are customized to somebody’s viewing history and even have an upcharge for people who want to get themselves scanned or just upload a photo of themselves and put themselves in the film. … That’s a set it and forget it kind of thing.

On the loss of shared entertainment: 

Not only with these new films that are tailored to your viewing history, but also they have the technology now to go back into, you name the film, “Citizen Kane,” and tailor it to you. Give you a new version of “Citizen Kane,” give me a different version of “Citizen Kane.”

That, to me, it’s so wrong on so many levels. First off, it eliminates what you just said, being able to talk about a shared experience, even though we are experiencing that film at different times. It completely rapes the filmmaker and everybody who is involved in it by destroying what they created. And by doing all of this, but using all this generative AI in entertainment, you’re never going to see anything new.

On one positive use of AI: 

I will say that I’ve seen it used in the vein of experimental video and, in that sense, you see how strange and trippy the results are. That’s interesting for experimental video. — Steven Overly

Listen to today’s podcast below, or sign up here.


The Future in 5 Links
 

HITTING YOUR INBOX AUGUST 14—CALIFORNIA CLIMATE: Climate change isn’t just about the weather. It's also about how we do business and create new policies, especially in California. So we have something cool for you: A brand-new California Climate newsletter. It's not just climate or science chat, it's your daily cheat sheet to understanding how the legislative landscape around climate change is shaking up industries across the Golden State. Cut through the jargon and get the latest developments in California as lawmakers and industry leaders adapt to the changing climate. Subscribe now to California Climate to keep up with the changes.

 
 

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

