It’s not Twitter. In 2017, at a meeting of the National Governors Association, he opined that “the scariest problem” is artificial intelligence, an invention that could pose an unappreciated “fundamental existential risk for human civilization.” Musk has for years seemed attuned to the dangers of AI. As far back as 2014, he told students at MIT that “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

So it might seem that Musk would be cautious in how his companies deploy AI, and careful to stay within regulatory guidelines. Not exactly.

Musk is a big player in AI, in part through his car business. He has described Tesla as the world’s biggest robotics company. “Our cars are semi-sentient robots on wheels,” he said in a speech last year at an “AI day” event the company held. There, he also announced plans to build a prototype robot sometime in 2022. The robot, he said, is intended to be friendly and to eliminate “dangerous, repetitive, and boring tasks.” He added that he would make the robot slow enough to run away from and weak enough to overpower.

Over the years, Tesla has not only pushed its AI-powered Autopilot system beyond what regulators like the National Transportation Safety Board say is prudent, but has also failed for more than four years to “implement critical NTSB safety recommendations,” according to an October 2021 letter by Jennifer Homendy, the agency’s chair. And as Fortune reported in February, Neuralink, a brain-chip startup that Musk also runs, may have misled federal regulators about his role.
Musk says he wants Neuralink chips to help humans achieve a “symbiosis with artificial intelligence.” His apparent unwillingness to comply with securities regulations raises broader questions about how Neuralink might comply with regulations for brain-computer interfaces, rules that experts argue urgently need to be written.

Emanuel Moss, a postdoctoral scholar at Cornell Tech and the Data and Society Research Institute, said that “it serves Musk’s interests to position himself and his companies as best able to address an elevated imagining of the risks around AI.” In Moss’s telling, Musk casts his companies as among the “few who are capable of addressing the risks of AI in a technically astute or robust way.” But Musk, he said, “wants to sell a shiny box that solves the problems. He thinks there are technical solutions to what are in fact social problems.”

That is also the view of Alex John London, the director of the Center for Ethics and Policy at Carnegie Mellon University, who said that “warnings about AI make industry look socially minded and are often window-dressing meant to build trust without that trust being warranted.”

Gianclaudio Malgieri, a professor at EDHEC Business School in Paris who studies AI regulation and automated decision making, said he sees Musk’s marketing strategy as “having AI as an enhancement of humanity, and not a substitution of humanity.” But that distinction is not a clear one. People alive 50 years ago, Malgieri said, would be shocked to learn how much of our mental capacity we have already given over to AI: think how easy it now is to Google basic facts or to rely on GPS and AI for directions to a friend’s house, or how thoroughly algorithmic recommendations now shape people’s musical preferences.
Immediately before Musk spoke about Tesla’s robotic ambitions at AI day, a person in a tight white bodysuit and a blank-faced black mask walked stiffly onto the stage, as though attempting to fool the audience into thinking they were a highly capable robot, before dancing maniacally to electronic music. It was a jarring attempt to blur the line between people and robots.

Malgieri recounted the fable of the frog in a saucepan of water that is slowly brought to a boil, and doesn’t realize it is going to die until it’s too late. “When do we start,” he wondered, “to give away our humanity to machines?”

Musk said at the AI day event that he wants to be able to ask a robot to go to the store and pick up groceries. The question Malgieri asks is: What is lost when robots do the shopping?