Hello, and welcome back to The Future in 5 Questions. This Friday, we have Lloyd B. Minor — Dean of the Stanford University School of Medicine, and a big advocate for using tech to make health care more precise. Minor's tenure has seen Stanford Medicine develop a focus on personalized, preventative health care and launch a new initiative on responsible AI in health care. He also hosts a podcast called Minor Consult. Read on to hear Minor's thoughts on the rise of predictive medicine, why AI in health care is currently overhyped — and the risks of technology that reads your mind.

Responses have been edited for length and clarity.

What's one underrated big idea?

The idea of prediction, prevention and early detection. In the United States, we have an amazing sick care system. We've pioneered major advances in transplantation surgery and ultra-complex procedures for advanced heart disease and advanced cancer. But we don't do a very good job of predicting and preventing disease, or diagnosing it early. And that's a real opportunity. So how do we move from sick care — which we do well, although there are access problems in our country, for sure — to health care?

We know that for a very small group of genes — the BRCA genes, for example — there is an enhanced proclivity for cancer in patients who have those genetic variants. But there are a host of others that remain to be identified, validated and then rolled out into diagnostic testing. It should be possible for each of us to have a much more accurate profile of our propensity for disease, and to know what type of screening we need to undergo based upon that profile.

What's a technology you think is overhyped?

The application of AI in medicine is both overhyped in terms of some of its immediate effects, and underhyped in terms of its long-term effects. We tend to overestimate what we can get done in a short period of time, and underestimate what we can get done over a longer period of time when it comes to the application of technology.
In the longer term, I think [generative AI models] are going to become more and more accurate. Already today, they are helping physicians to supplement our medical knowledge — knowledge of complex diseases that occur rarely, that even the experts in a narrow field don't carry around in their heads. An example of overhyping would be saying that AI is going to somehow eliminate fields of medicine or radically change the way health care is practiced over a short period of time. I think it's going to profoundly affect the way health care is delivered and the way each of us can engage with our health.

What book most shaped your conception of the future?

Nita A. Farahany's "The Battle for Your Brain." It talks about neurotechnology as it is being applied today and in the future. It's very well written and engaging. She does a great job of describing the profoundly good impact of that technology on diseases like epilepsy and Parkinson's disease. Of course, there are potential downsides to the technology — like, will it be possible to simply analyze our brains and know what we're thinking? Know how we're going to vote, for example, and therefore seriously invade our privacy or alter our interactions with others? Those are real concerns. She does a really nice job of describing the opportunities, but also some of the things we have to be careful of in the future as well.

What could government be doing regarding tech that it isn't?

Polls have shown that people today, understandably, are deeply skeptical about the application of AI to health and health care delivery, because of the potential for doing real harm. But the polls also indicate that we need to have much more of a dialogue with the public about the status of AI today — particularly generative AI. Where's it going in the future? How do we develop responsible ways of oversight, regulation and compliance? There's a lot of discussion going on in Washington about that now.
Both policymakers and those driving the science are going to benefit from a much broader swath of the public being engaged in this dialogue. Otherwise, no matter how well thought out, the regulations won't necessarily earn the trust and the respect of the public. Government plays a really important role in being a convener of these dialogues. Certainly, elected representatives can and should play an important role with their constituents. Federal agencies like the FDA and the CDC, particularly coming out of COVID, are doing much more to reach out to the public. All of us learned through COVID the importance of broad engagement with the public — including discussing with the public when we don't know the answer to something.

What has surprised you most this year?

It's hard to believe, but generative AI just came to the public in November of 2022. I've been playing around with various generative AI models, and it's incredible. I can get information assimilated in a meaningful way so much faster. I could do the same thing with internet searches, but it would be on my back to piece together the various components.

I'll just mention an anecdote. Earlier in my career, I helped drive the field of vestibular — inner ear balance — physiology for many years. So I typed into a large language model: What is superior canal dehiscence syndrome? I got back two paragraphs that were, quite frankly, probably as good as I could write. They covered the major points and even got some of the subtleties correct. Now, as I tried to push it more, it broke down. But at a high level, it was far better than what you would get from doing just a simple internet search. That to me was like, "Wow, this thing has real power." And it will, of course, only get better with time.