Why AI isn't changing health care faster

From: POLITICO's Digital Future Daily - Monday, Aug 15, 2022, 08:06 pm
How the next wave of technology is upending the global economy and its power structures
 

By Ruth Reader

With help from Derek Robertson


A patient hospitalized for Covid-19 and a nurse. | Elaine Thompson/AP Photo

Artificial intelligence was supposed to revolutionize human health: taking notes for doctors as they work, reviewing medical imaging, even drawing blood from patients. The field attracts billions of dollars in venture funding a year.

Geoffrey Hinton, a famous computer and cognitive scientist who pioneered a system of machine learning that mimics how human brains work, boldly suggested at a machine learning conference that “people should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better.”

That was six years ago. Radiologists are still fully employed.

Not only has health care AI failed to pay off as fully and quickly as technologists hoped, but the field's view of what the technology can realistically do has narrowed.

I got curious about AI's potential impact on health care last year, when two preeminent artificial intelligence experts told me it would be at least a decade before AI could meaningfully affect health care. And it's not just health care, I came to find out: Expectations for what AI can do are shrinking across sectors.

For example, Anthony Levandowski, who worked on ambitious self-driving car programs for both Google and Uber (which led to an incredible legal fracas), now has a business of his own — and it's much narrower. His firm makes self-driving dump trucks that haul rocks between two points in environments with few, if any, humans.

In health care, a system even more complex than roads, AI is largely being used for administrative tasks and automated billing. Health systems are starting to use it to predict patient health risks and to read radiological scans. But the dream of AI directly improving health, or even reducing the workload of overburdened doctors, seems very distant. These early technologies all run the risk of adding to a doctor's workload rather than reducing it.

Why has change been so slow? My colleague Ben Leonard and I looked into the state of AI in health care to see exactly what this technology can and can't do right now — and how regulators in Washington are preparing for what comes in the next wave.

We found a few key themes:

  • Data is the lifeblood of AI — but data in health systems is organized for billing, not for artificial intelligence. It has taken time and investment to make that data ready for AI.  
  • Algorithms that work for one health system do not work for all, so AI is developing in smaller, more localized ways.
  • Health care is one of the most highly regulated sectors in America, and it's not yet clear how (and how much) Washington plans to oversee AI as it arrives. The FDA in particular has worked to develop a framework for AI regulation, but that framework overlooks one key area.

Read the full story on our homepage.

 


 
 
whose authority?


Senate Agriculture Chairwoman Debbie Stabenow (D-Mich.) and Ranking Member John Boozman (R-Ark.), who have jointly introduced legislation to regulate cryptocurrency. | Francis Chung/E&E News

One of the stories we’ve been tracking most closely here at DFD is the regulatory sword of Damocles hanging over the crypto world, as Washington and Silicon Valley wrangle over how, exactly, cryptocurrency should be classified under the law.

In today’s Morning Money newsletter, POLITICO’s Sam Sutton presented a novel argument from a policy veteran: that the Supreme Court’s June 30 decision in West Virginia v. EPA means that, try as any federal agency might to set the agenda, the crypto industry will never be on solid regulatory footing until Congress passes a law on the subject.

Tomicah Tillemann, a former State Department official and current chief policy officer at the VC firm Haun Ventures, told Sam that although agencies aren’t entirely powerless, “legislation is going to carry substantially more weight and have substantially more impact than anything that the agencies are going to be able to do on their own” after the Supreme Court decision.

Though the West Virginia ruling technically addressed only the EPA, it curbed the agency’s rulemaking power under the “major questions doctrine,” which requires clear congressional authorization for agency action of major economic or political significance. That holding has strong implications for other federal rulemaking agencies — such as the SEC and CFTC, which both have their eyes on crypto.

This means the crypto industry needs to watch Congress carefully. Legislators haven’t exactly been slacking on this front: Two major bills introduced this session address how to regulate crypto, and who might be its sheriff, although their fate remains to be seen. — Derek Robertson

more an art than a science

Scholars are just starting to wrap their heads around the legal implications of artificial intelligence, and one of the most high-profile, meme-friendly battlegrounds is intellectual property rights in art. Can an AI infringe copyright? Who would be held liable?

Andres Guadamuz, a legal scholar at the University of Sussex who edits a scholarly journal on intellectual property rights, tackled the question in a recent blog post — just in time for an OpenAI competitor’s announcement that it would make a tool similar to that company’s DALL-E image generator available for public use.

What happens if an AI accidentally recreates an existing piece of art, depicts copyrighted figures, or mimics a working artist’s style too closely? Would OpenAI or Google be liable? Guadamuz is skeptical there will be enough legal ammunition to use in such cases, writing that there likely won’t be enough “substantive reproduction” to qualify, although, as always with the law, it will vary case by case. — Derek Robertson


Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Konstantin Kakaes (kkakaes@politico.com); and Heidi Vogt (hvogt@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

 
