Tracking the AI apocalypse

From: POLITICO's Digital Future Daily - Tuesday Jan 10, 2023 09:01 pm
Presented by WifiForward: How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

Presented by WifiForward


The Skynet Moto-Terminator from "Terminator Salvation" is seen during the opening of the new exhibit "Hollywood Dream Machines: Vehicles Of Science Fiction And Fantasy" at the Petersen Automotive Museum. | Angela Papuga/Getty Images

It’s time, readers, that we answer the big question.

No, not “but how does the blockchain actually work” — or “what am I actually going to do in the metaverse” — or even “wen Lambo.”

I’m talking about the big question: Is artificial intelligence going to kill us all?

Okay, that might be a little bit alarmist. But there is, in fact, a dedicated and passionate group of very intelligent people working at the moment to answer this question — determining the risk profile of “artificial general intelligence,” the term for an AI-powered system that matches human cognitive capacity. What happens, these future-minded theorists ask, when it uses that capacity to defy, or thwart, its human creators?

This is not exactly one of the big AI policy questions on the agenda right now — when regulators think about AI, they’re mostly thinking about algorithmic fairness, the abuse of personal data, or how the tech might disrupt existing sectors like education or law. But when this does land on the radar, it will be the Big One.

“The worry is that by its very nature [a generally intelligent AI] would have a very different set of moral codes (things it should or shouldn’t do) and a vastly increased set of capabilities, which can result in pretty catastrophic outcomes,” wrote the venture capitalist and tech blogger Rohit Krishnan in a blog post last month that explored the idea at length.

But there’s one big problem for anyone trying to address this seriously: How likely is it that such a thing is even possible? That question is central to figuring out how much to worry about artificial general intelligence, what it might look like, and how quickly we should act to shape it. (For the record, some very smart people are giving these questions some very serious thought.)

Krishnan’s post is getting attention right now because he developed a framework of sorts for answering that question. His formula for predicting existential AI risk, which he calls the “Strange Loop Equation,” is based on the Drake equation, which in the 1960s offered a way to estimate another hard-to-guess number: how many contactable alien civilizations might exist in our galaxy. Krishnan’s version incorporates a number of risk conditions into a prediction of the likelihood that a hostile AI could arise.

I spoke with Krishnan about the post today, and he emphasized that he, himself, isn’t freaking out — in fact he’s a skeptic of the idea that runaway AI might be a harbinger of doom. He said that “like most technological progress, we're likely to see incremental progress, and with every step change we have to work a little bit on how we can actually do it smartly and safely.”

Based on his own assessments of the likelihood of the various conditions that could lead to a hostile AI — like the speed of its development, or its capability to lie — there is a (drum roll, please) 0.0144 percent chance that power-seeking AI will kill or enslave us all.

That makes him much more optimistic than some others who’ve tried their own version of the exercise, like the Metaculus prediction market (34 percent) or a recent study by the researcher Joseph Carlsmith (roughly five percent), as Krishnan points out. (“Bear in mind chained equations like Drake are great to think through, much less so for precise numbers on probabilities,” he added.)
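If you want to see how such an estimate hangs together, the structure is easy to reproduce. Here is a minimal sketch in Python of a Drake-style chained-probability calculation; the condition names and numbers below are illustrative assumptions, not Krishnan’s actual Strange Loop Equation terms, but they show how a headline figure falls out of multiplying a handful of guesses together.

# A minimal, illustrative Drake-style risk calculation. The condition
# names and probabilities here are hypothetical stand-ins, NOT the
# actual terms of Krishnan's Strange Loop Equation.
p_agi_built = 0.5        # AGI gets built at all
p_agentic = 0.3          # it acts as an autonomous agent
p_misaligned = 0.1       # its goals conflict with ours
p_deceptive = 0.2        # it can conceal its intentions (e.g., lie)
p_uncontainable = 0.05   # we fail to contain or correct it

# The headline number is just the product of the conditions.
p_catastrophe = (p_agi_built * p_agentic * p_misaligned
                 * p_deceptive * p_uncontainable)

print(f"Estimated existential risk: {p_catastrophe:.4%}")
# -> Estimated existential risk: 0.0150%

Nudge any single factor and the product swings by an order of magnitude, which is exactly Krishnan’s caveat about trusting chained equations for precise probabilities.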

So no sweat, right? Maybe not: despite the serious amount of time and thought put into the problem, Krishnan warns that any current speculation likely bears little resemblance to the form AGI will ultimately take. “I fundamentally don’t think we can make any credible engineering statements about how to safely align an AI while simultaneously assuming it’s a relatively autonomous, intelligent and capable entity,” he writes in conclusion.

“Let's assume that at some point in the future we will be able to create systems that have high levels of agency, and to give them curiosity, the ability to act on the world, and the things that we as autonomous intelligent entities have in this world,” he told me today.

“They're not really going to be controllable, because it feels very weird to create something that has the powers of a regular human while at the same time, they can only ever do what you tell them to do. We find it very difficult to do that with anything remotely intelligent in our day-to-day life; as a consequence, the only way out I can see is to try to embed our values in them.”

Then how do we determine those values, and what role might non-engineers in government and elsewhere play in preventing an AI apocalypse? Krishnan is equally wary there, saying it’s essentially an engineering problem that will have to be solved iteratively as the problems arise.

“I am reasonably skeptical on what governments can actually do here, if only because we're talking about stuff at the very cutting edge of not only technology, but in some ways anthropology — figuring out the life science and behavior of what is effectively a new intelligent entity,” Krishnan said. “I suspect that some things government might do would be to start having treaties with each other similar to how we did with nuclear weapons… [and] ensure that the supply chain remains relatively robust,” the better to keep humanity at the cutting edge of AI development.

 

A message from WifiForward:

Spectrum Sharing = Mobile Competition + 5G Innovation + More Connections. Get the facts.

 
better help?

As the list of disruptive roles AI might play gets longer, it gets weirder yet: How about therapist?

Rob Morris, co-founder of the tech-focused mental health nonprofit Koko, tweeted about a recent experiment his company ran where it gave Koko’s users, who both send and answer queries about mental health issues through various apps, the opportunity to answer those queries with the help of GPT-3. Morris said that overall about 30,000 messages were answered with AI assistance, and that “Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own.”

And yet… people did not like this, apparently, once they took a second to contemplate the implications. “Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty,” Morris tweeted. “The implications here are poorly understood. Would people eventually seek emotional support from machines, rather than friends and family?”

After a Twitter outcry, Morris fought to dispel the notion that users were somehow being deceived, noting in a tweet that GPT-3 was used as a tool by human responders, and that the feature was announced to all users of the service. (He sent me a screenshot demonstrating that when users received a GPT-3-assisted response, it included a note that says “written in collaboration with Koko Bot.”) Either way, it’s a powerful example of how important our awareness of AI implementations can be when it comes to how we experience them.
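For a sense of what that workflow looks like, here is a rough sketch of the human-in-the-loop pattern Morris described. The function names are hypothetical and this is not Koko’s actual code; it just shows the shape of an AI-drafted, human-approved, clearly labeled reply.

# Hypothetical sketch of an AI-assisted, human-approved support reply,
# loosely modeled on Morris's description. Not Koko's actual code.
def draft_reply(user_message: str) -> str:
    # Stand-in for a large-language-model call such as GPT-3;
    # a real system would condition the draft on user_message.
    return "That sounds really hard. You are not alone in feeling this way."

def compose_response(user_message: str) -> str:
    draft = draft_reply(user_message)
    # A human responder reviews, and can edit, the draft before sending.
    edited = input(f"Edit the draft, or press Enter to approve:\n{draft}\n> ")
    final = edited or draft
    # Disclose the AI's role, as Koko's responses reportedly did.
    return final + "\n\n(written in collaboration with Koko Bot)"

The design choice doing the heavy lifting is the last line: the model drafts, the human approves, and the label discloses.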

 


 
europe eyes tiktok


TikTok has ballooned in popularity around the world. | Sean Gallup/Getty Images

A recurring theme in this newsletter is how often U.S. regulators lag behind their European Union counterparts when it comes to new technology.

One area, however, where that’s decidedly not the case: TikTok. POLITICO’s Nicholas Vinocur, Clothilde Goujard, Océane Herrero, and Louis Westendarp have a report today on how European countries are coming around to the U.S.’ wariness toward the Chinese-owned app, after U.S. officials called for a ban of the app on government officials’ phones over surveillance fears.

"In view of the privacy and security risks posed by the app and the app's far-reaching access rights, I consider the ban on TikTok on the work phones of U.S. government officials to be appropriate,” a digital policy spokesman for the liberal German party FDP told the Europe-side team of reporters, adding that “Corresponding steps should also be examined in Germany.” French president Emmanuel Macron is on board for a tougher stance, too, telling a group of American investors and French tech CEOs that he wants to regulate the company.

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from WifiForward:

We’re running out of spectrum, but sharing is a proven solution that delivers more – more competition, more 5G innovation, more US jobs, more voices, and more connections – all while protecting mission-critical DOD functions. Spectrum is scarce – and exclusivity? It just doesn’t deliver.

 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 


To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

This email was sent by: POLITICO, LLC 1000 Wilson Blvd. Arlington, VA, 22209, USA

