Move fast and break… your brain?

From: POLITICO's Digital Future Daily - Thursday Jun 01, 2023 08:43 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

With help from Derek Robertson

Neuralink founder Elon Musk. | Benjamin Fanjoy/AP Photo

There are quite a few companies trying to connect a computer directly to your brain, but only one gets all the attention right now.

That’s because Neuralink is the one run by Elon Musk.

Musk’s brain implant startup became the latest to receive FDA approval to begin clinical trials in humans last week. Normally, catching up to the rest of the pack isn’t the kind of story that makes national news, but Musk has a way of making his ideas seem unique. In this case, he’s made a huge sales pitch for his product, tying it to the idea of humans competing with AI — and everyone wonders where exactly he might take it long-term.

But in the short term, Neuralink is a window into a different important question: What happens when a tech-style mogul tries to make the leap into health care?

The fields are superficially similar — highly profitable, with a hyper-educated workforce and venture-backed idea ecosystem — but health care operates with far more stringent regulatory controls, and requires much deeper reservoirs of consumer trust.

The market Neuralink has chosen isn’t exactly easy to enter or compete in. Brain-computer interfaces are built to correct for severe impairments: According to its website, Neuralink is currently intended for people with quadriplegia, meant to give them the ability to control digital devices with their thoughts. Other BCIs are being tested on people with severe paralysis in both upper limbs or neurodegenerative diseases like ALS.

Part of the challenge with devices like this is simply time. Musk is known for radically speeding up slow industries like EV carmaking or private space travel. But medical devices move on a timeline of their own, and other firms are already ahead, at least in proving their devices’ safety.

Synchron, a Neuralink rival backed by Jeff Bezos and Bill Gates, published a study earlier this year in which four severely paralyzed patients were able to control a computer using only their brain activity. That study took almost two years to complete and required follow-up visits and training sessions at home and at a university clinic in Australia. The largest analysis of intracortical BCI safety concluded just this year, drawing on more than 17 years of data from clinical trials. It came from the team behind BrainGate, a multi-institutional effort that began at Brown University.

Regulators have made some efforts to speed things up: Last year, the rapid progression of the BCI field prompted the FDA to issue specific recommendations on non-clinical testing for BCIs to make the regulatory process more efficient for its many entrants. But speeding up can only go so far: Since so little data is available on the long-term efficacy and safety of these devices, the FDA recommended a minimum follow-up period of a year after clinical trials are conducted — adding to the total time any clinical trial would take.

Of course, that is not fast enough for an entrepreneur. In typical Musk fashion, the CEO reportedly put Neuralink employees under immense pressure to speed up animal trials in order to begin human trials, leading to botched animal experiments, a Reuters investigation revealed. In December 2022, the USDA’s watchdog, the Office of the Inspector General, launched an investigation into the company for potential animal-welfare violations. The same investigators were also scrutinizing the USDA itself for its past oversight of Neuralink.

Neuralink did not respond to a request for comment for this story. In response to allegations of animal abuse, the company released a blog post in February 2022 saying it was “absolutely committed to working with animals in the most humane and ethical way possible” and noted that (as of the time of the post) it had “never received a citation from the USDA inspections of our facilities and animal care program.”

That’s not the end of the company’s federal challenges. As recently as February, Neuralink was under investigation again, this time by the U.S. Department of Transportation, for allegedly packaging and transporting contaminated hardware in an unsafe manner back in 2019.

All those issues are hurdles you can expect to hit when you move quickly through a super-regulated area like health care.

Another one will come at the far end of the process.

Right now, Musk is selling Neuralink not just as useful to people in need, but as cooler than the competition: he talks about it in very Silicon Valley terms as the fastest and highest-bandwidth connection being tried.

“The constraint on having human interests align with machine interests is bandwidth, especially the output,” Musk said at the Wall Street Journal’s CEO Council last week. He was referring to the bit-rate at which people can type or speak into existing digital devices. “With Neuralink, you can increase that by a million, probably.”

He veered off the company’s therapeutic message to tease the idea of more sci-fi human enhancement. “How do we even appreciate or understand what the computer is doing? How do we go along for the ride? And if we have a better brain-machine interface that's a million times faster, then we'll go along for the ride a lot better than interfacing with a phone using two slow-moving meat sticks,” Musk added.

But Neuralink is not — and will not be, for many years — a consumer product like a car, or a social-media platform.

Even for the people who need it most, the demand might not be automatic. When the team at BrainGate analyzed their 17 years of safety data on clinical trials, they noted that not everyone might want a device that helped them in this way. Ultimately, they concluded the “potential utility of implanted BCIs will reflect the personal risk/benefit assessment of the people for whom these devices are being developed” — that is to say, each person will have to determine whether the risk of getting a brain implant is worth the potential benefit.

The people Musk will ultimately have to convince are risk-averse FDA regulators, and the doctors who will recommend the implant, and the patients who will have it inserted into their most complex organ, and the caregivers who will have to deal with the results — that’s a tall order for any tech company.

 


 
 
a middle path on ai regulation

And now, from the AI skeptic’s side: POLITICO’s own Mark Scott, who writes in today’s Digital Bridge that it might be time for those rushing to take regulatory action on AI to stop and think about how well they understand what they’re taking action on.

He implores policymakers to take a middle approach between outright panic and laissez-faire business as usual: “Call it the ‘trees and woods’ theory, in which officials must combat the existing problems bubbling around AI, while also keeping an eye on the long-term systemic risks this emerging technology may represent over the next 10 years,” he writes. “Governments are desperate to be seen as responding to a technology that often feels more like science fiction than [responding] to mundane questions around what data goes into AI models, how you test new systems in a controllable manner, and what global oversight looks like for a cross-border problem.”

So what should they actually… do, aside from get up to speed on the nitty-gritty of those problems? “Focus more on the short-term concerns than get lost in the long-term scaremongering… The current AI models, based on how they were trained with skewed data, favor me, a white man living in a Western country, more than those from minority groups,” Mark writes. “That would be a clear place to start if we’re looking to reduce harms by making such systems more equitable.” — Derek Robertson

bringing the temperature down

Okay, but what if you’re still not convinced AI doesn’t pose a considerable “x-risk” (think existential) to humanity?

Maybe Juergen Schmidhuber, a German computer scientist and pioneering AI developer, can reassure you. In an interview last week, Schmidhuber threw cold water on the recent warning from AI industry leaders that the technology should be paused until “alignment” with human values is certain, attempting to put it in perspective with other historic threats.

“A large nuclear warhead doesn’t need fancy face recognition to kill an individual. No, it simply wipes out an entire city with ten million inhabitants,” Schmidhuber said. “...what has been consistent throughout history is that those who are using technology to achieve their own ends are themselves facing the same technology, because the opposing side is learning to use it against them.”

So, an AI detente? Schmidhuber argues that market incentives will ultimately win out in the end, more so than any political or regulatory attempt to steer AI values: “There's intense commercial pressure towards good AI rather than bad AI because companies want to sell you something, and you are going to buy only stuff you think is going to be good for you. So already just through this simple, commercial pressure, you have a tremendous bias towards good AI… However, doomsday scenarios like in Schwarzenegger movies grab more attention than documentaries on AI that improve people’s lives.” — Derek Robertson

Tweet of the Day

We need autonomous vehicles like this. Imagine America's roadways filled with helmet cars.

the future in 5 links
  • An automated part of your new car soon, whether you like it or not: the brakes.
  • A new Nvidia model creates 3D images out of 2D ones.
  • “AI” has already become an epithet in the world of cultural criticism.
  • Tether has fully recovered its value after a spectacular 2022 crash.
  • The future of fish farming could be… on land.

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 


