A warning for health care AI

From: POLITICO's Digital Future Daily - Tuesday, May 09, 2023 08:02 pm
How the next wave of technology is upending the global economy and its power structures

By Ruth Reader

With help from Derek Robertson


A recent study suggested that ChatGPT might deliver more thorough, empathetic answers to patient questions than actual doctors.

ChatGPT has doctors thinking about how bots can deliver better patient care.

But when it comes to the bigger picture of integrating artificial intelligence into health care, Gary Marcus, an AI entrepreneur and professor emeritus of psychology and neuroscience at NYU, is worried the government isn’t paying close enough attention.

“My biggest concern right now is there's a lot of technology that is not very well regulated, like chat search engines that could give people medical advice,” he told Digital Future Daily. There aren’t any systems in place, he said, to monitor how these new technologies are being used and assess whether they’re causing harm.

Marcus is both a doomsayer and an optimist. He believes that the future of medicine involves personalized medicine delivered by artificial intelligence. But he also wants to see proof that it works.

“We need to have actual clinical data here,” he said. And, he said, systematic studies need to answer key questions before unleashing this technology on the public: “How well do these things work? In what contexts do they work? When do they not work?”

In his new podcast, Humans vs. Machines, the first story he tells is of the rise and fall of IBM’s Watson, a technology that was supposed to revolutionize the field of health care and ultimately failed.

In 2011, Watson, an AI platform designed by IBM, famously won Jeopardy! While Watson was still basking in the spotlight, the company moved quickly to capitalize on the moment. It pivoted to health care and struck a deal with Memorial Sloan Kettering to come up with personalized cancer treatments for patients.

The episode outlines many problems with this project, including, perhaps, IBM’s narrow focus on generating profit over validating its technology. But, Marcus said, the bigger issue was that Watson could not do what humans do, which is to be able to come to their own conclusions.

“Watson was not a general purpose AI system that, for example, could read the medical literature with comprehension,” he said. Watson was like a search tool on steroids, he said, good at locating information relevant to a specific topic. When doctors diagnose someone, they combine what they know about the individual patient with the existing medical literature and molecular biology to arrive at a solution.

“That requires a sophisticated level of reasoning, and Watson just wasn't capable of doing that; it was not a reasoning engine,” he said.

Watson may feel like an irrelevant precursor to current bots like ChatGPT. But even the latest bots suffer from these same problems.

They can’t think on their feet. “One of the things people forget about machines is that they're not people,” said Marcus.

He’s worried that tools like ChatGPT are really good at convincing people that they are human, as evidenced by Kevin Roose’s conversation with Bing. In another instance, a Belgian man ended his life after talking to a bot named Eliza about climate change. Marcus is concerned people will turn to these online bots not only for medical advice, but for mental health treatment.

“We're messing with human beings here when we portray these systems as being more human than they are,” he said. That’s why he sees a need for regulators to get involved.

Regulators are starting to step in. The Food and Drug Administration has taken steps to regulate artificial intelligence in medicine, particularly algorithms that offer up a diagnosis or make recommendations to doctors about patient care. And in niche contexts that kind of AI is making progress. For example, research shows progress in helping doctors spot sepsis.

But when it comes to online chatbots that offer up bad information, regulators are totally out of the loop.

“We know these systems hallucinate, they make stuff up,” Marcus said. “There's a big question in my mind about what people will do with the chat style searches and whether they can restrict them so that they don't give a lot of bad advice.”

 


 
 
meta's potential moneymaker

This morning Meta released a Deloitte-authored report on the economic impact of the metaverse, speculating it could add hundreds of billions of dollars to the U.S.’ annual GDP by 2035.

The report argues that the U.S.’ global leadership on metaverse tech means it’s well-positioned to drive both innovation and potentially lucrative exports. It also touts a slate of familiar proposed benefits of virtual worlds, from their potential to augment dangerous medical procedures to their use in city planning or engineering.

That doesn’t mean the report is totally sanguine about VR’s prospects for world (or at least economic) domination, however. The authors note that serious work needs to be done to improve data connectivity across the country to support bandwidth-hungry 3D environments, and that the public remains widely skeptical about safety in virtual worlds, given the problems that plague their existing 2D counterparts.

“In order for consumers and businesses to adopt these technologies, they will need to feel safe using them,” the authors write. “This is not an issue for one party to face alone.” — Derek Robertson

checking chatgpt

What if chatbots… bad?

Amelia Wattenberger, a user interface researcher at GitHub, proposed in a recent essay that chatbots might actually be one of the worst possible interfaces for revolutionary AI technology from a design standpoint. The problem, in short: they put too much onus on the user to tell the machine what they want it to do.

“Good tools make it clear how they should be used,” Wattenberger writes. “The only clue we receive [with ChatGPT] is that we should type characters into the textbox… Of course, users can learn over time what prompts work well and which don't, but the burden to learn what works still lies with every single user. When it could instead be baked into the interface.”

Wattenberger argues that a piece of software like ChatGPT, which requires so much context from the user to deliver what they want, should be constructed more like a tool that shows the user what they can or should do (think a glove, or a hammer) than like a machine: a mysterious black box that we feed instructions and then await a response.

“I believe the real game changers are going to have very little to do with plain content generation,” Wattenberger writes. “Let's build tools that offer suggestions to help us gain clarity in our thinking, let us sculpt prose like clay by manipulating geometry in the latent space, and chain models under the hood to let us move objects (instead of pixels) in a video.” — Derek Robertson

Tweet of the Day

saw the new Oppenheimer trailer this morning, I'm shocked y'all haven't been posting AI safety memes from it. Lots of great material


Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 
