Elon Musk's biggest worry

From: POLITICO's Digital Future Daily - Tuesday, Apr 26, 2022, 08:23 pm
Presented by FTX: How the next wave of technology is upending the global economy and its power structures

By Konstantin Kakaes

Presented by FTX

With help from Derek Robertson and Ben Schreckinger

Elon Musk and a person in a robot costume at Tesla's 2021 "AI Day." | Tesla, via YouTube

It’s not Twitter.

In 2017, at a meeting of the National Governors Association, Musk opined that “the scariest problem” is artificial intelligence — an invention that could pose an unappreciated “fundamental existential risk for human civilization.”

Musk has, for years, seemed attuned to the dangers of AI. As far back as 2014, he told students at MIT: “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

So it might seem that Musk would be especially cautious about how his companies deploy AI, and careful to keep them within regulators’ guidelines.

Not exactly. Musk is a big player in AI, in part through his car business. He has described Tesla as the world’s biggest robotics company. “Our cars are semi-sentient robots on wheels,” he said last year at the company’s “AI Day” event. At that event, he also announced plans to build a prototype humanoid robot sometime in 2022. The robot, he said, is intended to be friendly and to eliminate “dangerous, repetitive, and boring tasks.” He added that he would make it slow enough for a person to run away from, and weak enough to overpower.

Over the years, Tesla has not only pushed its AI-powered Autopilot system beyond what safety authorities like the National Transportation Safety Board say is prudent, but has also failed for over four years to “implement critical NTSB safety recommendations,” according to an October 2021 letter from Jennifer Homendy, the agency’s chair. And as Fortune reported in February, Neuralink, a brain-chip startup that Musk also runs, may have misled federal regulators about Musk’s role at the company. Musk says he wants Neuralink chips to help humans achieve a “symbiosis with artificial intelligence.”

Musk’s track record with securities regulators raises broader questions about how Neuralink might comply with regulations for brain-computer interfaces, rules that experts argue urgently need to be written.

Emanuel Moss, a postdoctoral scholar at Cornell Tech and the Data and Society Research Institute, said that “it serves Musk’s interests to position himself and his companies as best able to address an elevated imagining of the risks around AI.”

In Moss's telling, Musk argues that his companies are the "few who are capable of addressing the risks of AI in a technically astute or robust way." But Musk, he said, "wants to sell a shiny box that solves the problems. He thinks there are technical solutions to what are in fact social problems."

That's also the view of Alex John London, the director of the Center for Ethics and Policy at Carnegie Mellon University, who said that "warnings about AI make industry look socially minded and are often window-dressing meant to build trust without that trust being warranted."

Gianclaudio Malgieri, a professor at EDHEC Business School in Paris who studies AI regulation and automated decision making, said he sees Musk's marketing strategy as "having AI as an enhancement of humanity, and not a substitution of humanity."

But this distinction is not a clear one. People alive 50 years ago, Malgieri said, would be shocked to learn how much of our mental capacity we have already given to AI — think how easy it now is to Google basic facts or rely on GPS and AI to find directions to a friend's house, or how thoroughly algorithmic recommendations now shape people's musical preferences.

Immediately before Musk spoke about Tesla’s robotic ambitions at AI Day, a person wearing a tight white bodysuit and a blank-faced black mask walked stiffly onto the stage, as though trying to fool the audience into thinking they were a highly capable robot, before dancing maniacally to electronic music. It was a jarring attempt to blur the lines between people and robots.

Malgieri recounted the fable of the frog in a saucepan of water that is slowly brought to a boil, and doesn’t realize it is going to die until it’s too late. “When do we start,” he wondered, “to give away our humanity to machines?”

Musk said at the AI Day event that he wants to be able to ask a robot to go to the store to pick up groceries. The question Malgieri asks: What is lost when robots do the shopping?

A message from FTX:

FTX guiding principles promote safe and equitable access to digital assets, creating strong investment opportunities for Americans. The FTX US application before the Commodity Futures Trading Commission (CFTC) is intended to expand access to digital-asset products for all investors, promote competitive markets in the U.S., and better position the U.S. as a marketplace for digital assets globally. Get the facts on our application here.

 
A bit of advice


Mozilla's 2022 festival showcased the organization's tech activism. | Business Wire

As we covered last week here at DFD, Europe is miles ahead of the States when it comes to putting regulatory guardrails in place around artificial intelligence. One Silicon Valley group providing its expertise to lawmakers and regulators is the Mozilla Foundation, which published a blog post Monday listing recommendations for the European Union’s far-reaching AI Act.

The post, written by Mozilla’s executive director Mark Surman and senior policy researcher Maximilian Gahntz, points out three major areas where the act as currently written could be improved: balancing accountability for responsible AI use between developers and users; writing more stringent disclosure requirements around the use of so-called “high-risk” AI technologies; and creating a means for end users to file complaints about perceived misuse.

“Technologies that are potentially neutral, or that may have biases themselves but their design doesn't imply an inherently high-risk activity, can [still] be used for high-risk purposes or low-risk purposes,” Surman said. “We see our role — we see the need for the commission — to wrestle to the ground the practical questions of how to deal with that; we think right now the act is just too simplistic.”

The researchers recommended that EU legislators “future-proof” the bill by broadening its scope so it can address potential AI-driven harms that might not even exist yet.

“They define eight areas in which this can be amended,” Gahntz said. “That unnecessarily limits the room to maneuver for the Commission and the European legislators in the future. Just because we don't know right now that something may be risky and may harm people doesn't mean that two years or three years from now that might not change.”

Surman and Gahntz said that European regulators have been largely receptive to their recommendations, and that they’ll continue to offer expertise as the lengthy legislative process rolls on. (The Digital Services Act recently agreed to in principle by EU legislators was first proposed in December 2020.) As with that law and the rest of Europe’s pioneering data privacy regulation, don’t be surprised if the debates playing out in Brussels today over AI pop up again in Washington… eventually. — Derek Robertson

For the record

Following Friday’s item on the ties between crypto mogul Brock Pierce’s independent Vermont Senate run and Trump world, Pierce’s campaign sent along a statement today saying that he has parted ways with his team of Donald Trump aides over “ideological differences.” Pierce said that in addition to Steve Bannon, he has consulted with Bill de Blasio on his Senate ambitions, and that his campaign is now working with a team of Democratic and independent operatives including Ben Kinsley, Tyree Morton, Jeff Leb and David Weiner. — Ben Schreckinger

Afternoon Snack

Artificial gullibility? One unsettling fact emerging about AI: just as machine learning can detect patterns that humans can’t, it can also be fooled in ways that would never fool a person.

This isn’t just theoretical: In a somewhat disturbing Twitter thread last week, the writer Cory Doctorow laid out a laundry list of examples of how machine learning algorithms have been tricked and manipulated by researchers, occasionally in a crude and simplistic fashion with potentially dangerous implications:

  • “A Chinese team showed that they could paint invisible, tiny squares of infrared light on any face and cause a facial recognition system to think it was any other face.”
  • “...the attack that added inaudible sounds to a room that only a smart-speaker would hear and act on”
  • “A team from Toronto found that a classifier that reliably identified everything in a normal living room became completely befuddled when they added an elephant to the room”
  • “In 2019, a Tencent team showed that they could trick a Tesla's autopilot into crossing the median by adding small, innocuous strips of tape to the road-surface”

In his thread, which he also compiled as a blog post, Doctorow recapped a recent paper showing how these “adversarial examples” — data that, when introduced to a machine learning system, causes it to malfunction — could be added to basically any such system, for any purpose — and they could be undetectable to anyone who didn’t already know where to look.

“In other words, if you train a facial-recognition system with one billion faces, you can alter any face in a way that is undetectable to the human eye, such that it will match with any of those faces,” Doctorow writes. “Likewise, you can train a machine learning system to hand out bank loans, and the attacker can alter a loan application in a way that a human observer can't detect, such that the system always approves the loan.” — Derek Robertson
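To make the trick concrete, here is a minimal sketch of one classic recipe for constructing an adversarial example, the “fast gradient sign method.” It is illustrative only, not the technique from the paper Doctorow cites: the untrained model, the random image and the label below are stand-in placeholders, and a real attack would target a trained production system.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier: any differentiable image model would do here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_example(x, true_label, epsilon=0.03):
    # Nudge every pixel a tiny amount (at most epsilon) in whichever
    # direction most increases the model's loss on the true label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Hypothetical 28x28 grayscale image and its correct label.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_example(x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))

Because each pixel moves by at most epsilon, the altered image looks unchanged to a person, which is exactly why a human reviewer inspecting the input would see nothing wrong even as the classifier’s answer changes.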

 


 

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Konstantin Kakaes (kkakaes@politico.com); and Heidi Vogt (hvogt@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up here. And read our mission statement here.

A message from FTX:

The FTX US application before the CFTC is intended to expand access to digital-asset products for all investors, promote competitive markets in the U.S., and better position the U.S. as a marketplace for digital assets globally.

Congress has mandated that the CFTC enable competitive markets driven by innovation, new technologies, and opportunity for American investors. Most U.S. trading volumes for derivatives trade on a very small number of exchanges. The CFTC should fulfill its statutory mandate to promote competition and responsible innovations to give the U.S. investing public more choice for investing and risk-management opportunities. The public, moreover, is better served if there are fewer barriers to entry and investors at least have the choice to access markets through an internet connection if they choose, rather than other methods that can be costlier. Learn more.

 
 

Looking for in-depth and actionable technology policy news? The Morning Tech newsletter is exclusively available to POLITICO Pro subscribers; please visit our website to learn more about the benefits of a subscription.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Konstantin Kakaes @kkakaes

Heidi Vogt @HeidiVogt

 

 

