This week, Digital Future Daily is focusing on the fast-moving landscape of generative AI and the conversation about how and whether to regulate it — from pop culture to China to the U.S. Congress. Read day one's coverage on AI havoc in the music business, day two on AI inventors, and day three on a proposed revolution in data rights.

The chatbot is here to see you. That’s what many health care organizations envision in the future.

As workforce shortages plague the health care sector, executives and tech companies are looking for ways to boost efficiency and let providers spend more time with patients and less time in their digital systems. Generative artificial intelligence is one potential way to do that, by allowing chatbots to answer the kinds of health care questions that often leave doctors’ inboxes overflowing.

At least half of health care organizations are planning to launch some kind of generative AI pilot program this year, according to a recent report by consulting firm Accenture. Some could involve patients directly: AI could make it easier for patients to understand a provider’s notes, or listen to visits and summarize them.

But what about… you know, actual doctoring? So far, in limited research, chatbots have proven decent at answering simple health questions. Researchers from Cleveland Clinic and Stanford recently asked ChatGPT 25 questions on heart disease prevention. Its responses were appropriate for 21 of the 25, including on how to lose weight and reduce cholesterol. But it stumbled on more nuanced questions, in one instance “firmly recommending” cardio and weightlifting, which could be dangerous for some patients.

Steven Lin, a physician and executive director of the Stanford Healthcare AI Applied Research Team, said the models are fairly solid at getting things like medical school test questions right. In the real world, however, questions from patients are often messy and incomplete, Lin said, unlike the structured questions on exams.

“Real patient cases do not fit into the oversimplified ‘textbook’ presentations that LLMs are trained on,” Lin told Digital Future Daily. “The best questions for LLMs like ChatGPT are those with clear, evidence-based, widely established, and rarely disputed answers that apply universally, such as ‘What are the most common side effects of medication X?’”

Lin is skeptical that generative AI will quickly become widespread in frontline health care. He pointed out that electronic health records systems were first rolled out in the 1960s but did not become ubiquitous until the 2010s.

“Soon, in the next 6-12 months, the hype will cool down and we'll slow down and realize how much work still needs to be done,” Lin said.

Brian Anderson, chief digital health physician at MITRE, a nonprofit that helps manage federal science research, sees it moving quite a bit faster — becoming widespread in the sector within a year or two, given the business case for letting providers be more efficient.

“Algorithms... will help drive improved clinical outcomes in the near term,” Anderson said. “The comput[ing] power is there to really drive some of the value … it’s just a matter of time.”

The potential for explosive growth is exactly what worries a lot of people about generative AI: They see a highly disruptive and unaccountable new tech platform, still largely experimental, with nearly no brakes on its adoption across society.

Health care, though, is a little different.
It’s a highly regulated space with a much slower pace of innovation, big liability risks when things go wrong and a lot of existing protections around individual data.

To help users determine whether a model is appropriate for them, the Department of Health and Human Services earlier this month proposed new rules that would require makers of AI in health care — as well as health technology developers that incorporate other companies' AI — to open up those algorithms to scrutiny if they want HHS’ certification. That health IT certification for software like electronic health records systems is voluntary, but it is required when the technology is used in many government and private health care settings.

Micky Tripathi, HHS’ national coordinator for health IT, understands why the industry is excited about AI’s potential to help better inform care — but he said the anxiety surrounding generative AI in the wider economy applies to health care as well. He didn’t mince words at a recent industry conference: People should have “tremendous fear” about potential safety, quality and transparency issues.

With that in mind, both tech companies and health systems are calling for steps to ensure trust in the technology, and have formed groups to address the issues. The Coalition for Health AI, whose members include Anderson’s MITRE, Google, Microsoft, Stanford and Johns Hopkins, recently released a blueprint to facilitate trust in artificial intelligence’s use in health care. It called for any algorithms used in the treatment of disease to be testable, safe, transparent and explainable, and for software developers to take steps to mitigate bias and protect privacy.

Anderson said he's excited about the technology's potential but is "nervous" about future capabilities that can be difficult to anticipate. He said physician inbox management and help with drafting letters could be strong early use cases.

“I'm hopeful those are some of the first areas of exploration, rather than helping me decide what to do with this patient that's suffering from some kind of acute sepsis,” Anderson said.