The doctor is AI

From: POLITICO's Digital Future Daily - Thursday, Apr 27, 2023, 08:02 pm
How the next wave of technology is upending the global economy and its power structures

By Ben Leonard

With help from Derek Robertson

The Cleveland Clinic. | Getty

This week, Digital Future Daily is focusing on the fast-moving landscape of generative AI and the conversation about how and whether to regulate it — from pop culture to China to the U.S. Congress. Read day one's coverage on AI havoc in the music business, day two on AI inventors, and day three on a proposed revolution in data rights.

The chatbot is here to see you.

That’s the future many health care organizations envision. As workforce shortages plague the health care sector, executives and tech companies are looking for ways to boost efficiency and let providers spend more time with patients and less time in their digital systems.

Generative artificial intelligence is one potential way to do that, by allowing chatbots to answer the kinds of health care questions that often leave doctors’ inboxes overflowing.

Right now, at least half of health care organizations are planning some kind of generative AI pilot program this year, according to a recent report by the consulting firm Accenture. Some pilots could involve patients directly: AI could make it easier for patients to understand a provider’s notes, or could listen to visits and summarize them.

But what about… you know, actual doctoring? So far, in limited research, chatbots have proven decent at answering simple health questions. Researchers from the Cleveland Clinic and Stanford recently asked ChatGPT 25 questions on heart disease prevention. Its responses were appropriate for 21 of the 25, including on how to lose weight and reduce cholesterol.

But it stumbled on more nuanced questions, in one instance “firmly recommending” cardio and weightlifting, which could be dangerous for some patients.

Steven Lin, a physician and executive director of the Stanford Healthcare AI Applied Research Team, said that the models are fairly solid at getting things like medical school test questions right. However, in the real world, questions from patients are often messy and incomplete, Lin said, unlike the structured questions on exams.

“Real patient cases do not fit into the oversimplified ‘textbook’ presentations that LLMs are trained on,” Lin told Digital Future Daily. “The best questions for LLMs like ChatGPT are those with clear, evidence-based, widely established, and rarely disputed answers that apply universally, such as ‘What are the most common side effects of medication X?’”

Lin is skeptical that generative AI will quickly become widespread in frontline health care. He pointed out that electronic health record systems were first rolled out in the 1960s but didn’t become ubiquitous until the 2010s.

“Soon, in the next 6-12 months, the hype will cool down and we'll slow down and realize how much work still needs to be done,” Lin said.

Brian Anderson, chief digital health physician at MITRE, a nonprofit that helps manage federal science research, sees it moving quite a bit faster, becoming widespread in the sector within a year or two given the business case for letting providers work more efficiently.

“Algorithms... will help drive improved clinical outcomes in the near term," Anderson said. "The comput[ing] power is there to really drive some of the value … it’s just a matter of time.”

The potential for explosive growth is exactly what worries a lot of people about generative AI: They see a highly disruptive and unaccountable new tech platform, still largely experimental, with nearly no brakes on its adoption across society.

Health care, though, is a little different. It’s a highly regulated space with a much slower pace of innovation, big liability risks when things go wrong and a lot of existing protections around individual data.

To help users determine whether a model is appropriate for them, the Department of Health and Human Services earlier this month proposed new rules that would require makers of AI in health care — as well as health technology developers that incorporate other companies' AI — to open up those algorithms to scrutiny if they want HHS’ certification.

That health IT certification for software like electronic health records systems is voluntary, but is required when the technology is used in many government and private health care settings.

Micky Tripathi, HHS’ national coordinator for health IT, understands why the industry is excited about AI’s potential to better inform care, but said the anxiety surrounding generative AI in the wider economy applies to health care as well.

He didn’t mince words at a recent industry conference: People should have “tremendous fear” about potential safety, quality and transparency issues.

With that in mind, both tech companies and health systems are calling for steps to ensure trust in the technology, and have formed groups to address the issues.

The Coalition for Health AI, whose members include Anderson’s MITRE, Google, Microsoft, Stanford and Johns Hopkins, recently released a blueprint to facilitate trust in artificial intelligence’s use in health care. It called for any algorithms used in the treatment of disease to be testable, safe, transparent, and explainable, and for software developers to take steps to mitigate bias and protect privacy.

Anderson said he's excited about the technology's potential but is "nervous" about future capabilities that can be difficult to anticipate. He said managing physicians' inboxes and helping draft letters could be strong early use cases.

“I'm hopeful those are some of the first areas of exploration, rather than helping me decide what to do with this patient that's suffering from some kind of acute sepsis,” Anderson said.

 

have compute, will travel

In the race for AI supremacy, it might not be computing firepower that gives researchers a leg up.

In a study published today by Georgetown University’s Center for Security and Emerging Technology, the authors set out to learn more about how “compute” (the term of art for computing power) factors into researchers’ concerns around AI development. The answer? Not as much as you might think.

“More respondents report talent as an important factor for project success, a higher priority with more funding, and a more limiting factor when deciding what projects to pursue,” they write in the paper’s introduction, hence its title: “The Main Resource Is The Human.” Remarkably, this finding mostly held true across both private industry and academia, something the CSET researchers say indicates that policymakers’ focus on semiconductor manufacturing and competition with China might be misguided.

“In light of these results… this report suggests that policymakers temper their expectations regarding the impact that restrictive policies may have on computing resources, and that policymakers instead direct their efforts at other bottlenecks such as developing, attracting, and retaining talent,” the researchers write. — Derek Robertson

wrapping up the eu's metaverse panel

The European Commission concluded its first-ever citizens’ panel on the metaverse this week, producing some notable, if vague, policy recommendations.

Patrick Grady, a policy analyst at the Center for Data Innovation, has been chronicling the three-part citizens’ panel at his blog, and posted a concluding recap this morning. Near the end he makes a notable observation: citizens’ main concerns about the metaverse appear to center on accessibility and privacy, while the Commission’s own consultation appears to be focused on competition and industrial policy.

The panel’s top five priorities, in descending order, were to: “Provide training programs to teachers” on VR technology; to establish clear regulation around anonymity; to “guarantee for all citizens free and easy access” to information about VR; to research its impact on health; and to establish “terms and conditions” for data access and privacy in virtual worlds. The European Union will soon publish its own report on the panel, ahead of the Commission’s wider guidance for the technology now expected in June. — Derek Robertson

Tweet of the Day

I gave GPT-4 eyes. Here’s what I did:
- added some data to a vision model
- gave the AI camera access
- asked it questions about the scene
- it identified objects
- it searched web for info
- used that info to accurately answer
Watch it get 3 questions 100% correct!

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 

