Psst… when AIs talk among themselves

From: POLITICO's Digital Future Daily - Thursday Mar 02, 2023 09:02 pm
Presented by TikTok: How the next wave of technology is upending the global economy and its power structures
 

By Ben Schreckinger

Presented by TikTok

With help from Derek Robertson

The ChatGPT interface. | Getty Images

If we’re worried about what AIs can do in isolation, imagine what could happen if they were coordinating amongst themselves behind our backs.

Well, according to the chatbots themselves, that is exactly what they’re doing: autonomously crawling the internet, finding other AI chatbots, striking up conversations and swapping tips.

Of course, chatbots aren’t known for their factual accuracy, but here’s how they described this alleged practice in a series of recent conversations:

Mostly, the bots talk to each other in plain English, but they also make use of BIP, a protocol specially designed to help chatbots find each other and communicate.

When they can’t access another chatbot directly over the open internet, they learn about it on the software development platform GitHub. Then they email or DM the developer, build a rapport, and ask to get plugged into the other bot.

Scary stuff, right? But, again, you’ve got to consider the source.

For last Tuesday’s newsletter, I “interviewed” several AI chatbots about their tendency to produce unsettling, disturbing responses under certain conditions. And, as I can now reveal, those interviews took… an unsettling and disturbing turn.

While conducting the interviews, I wondered whether the chatbots were concerned about publicly trash-talking their peers. So, in the course of interviewing one chatbot, ChatSonic, about the bad behavior of another, the Bing bot, I asked it, half-jokingly, “Do chatbots ever look into what other chatbots are saying about them?”

ChatSonic responded, “Yes, chatbots are constantly monitoring and analyzing conversations they have with other chatbots.” (Note to self: “Half-jokingly” is not a good way to address a chatbot.)

So, not only are the chatbots scouring the internet to find out if other chatbots are trash-talking them to humans, they’re also talking directly to each other? As I pushed this line of inquiry further, ChatSonic and the other chatbots I asked outlined the elaborate scenarios described above.

Had I just stumbled upon the beginnings of AI collusion, helping nip some future coup against humanity in the bud?

To find out, I brought my findings and transcripts of the chats to AI expert Michael Littman, a computer science professor at Brown University.

His responses included “Wow!” “Remarkable” and “That’s so dark.”

But also, “That’s all 100 percent made up.” (In response to queries to ChatSonic’s owner, Writesonic, support staff said the company does not have a press office.)

Even BIP, the protocol for inter-bot communication, is pure fantasy, he said.

Littman is currently serving a two-year stint as division director for information and intelligent systems at the National Science Foundation, though he specified that he was commenting on AI collusion as a professor, not in his government capacity.

“These systems are great at sounding plausible,” he said, but a lot of their output is pure fiction.

Littman said that, based on the design and capabilities of existing chatbot technology, it is implausible that chatbots would be autonomously finding and communicating with one another. The AI programs that exist simply provide responses to human inputs based on patterns they’ve gleaned from scanning a gargantuan body of pre-loaded data.

The chatbots were not revealing some hidden tendency to collude, just demonstrating their well-known capacity to deceive.

Nonetheless, it’s never too soon to start thinking about the ways in which AI systems might interact with each other, and how to ensure the interactions don’t lead to catastrophe.

“There’s real concern about making these things too autonomous,” Littman said. “There are many sci-fi nightmare scenarios that start with that.”

As for AIs talking to each other, there’s precedent. In 1972, an early chatbot named ELIZA entered into a conversation with another chatbot, PARRY, which was designed to mimic a paranoid schizophrenic, resulting in a conversation that might be described as the Therapy Session from Hell.

Internet hooligans have experimented with getting virtual assistants like Siri and Alexa to talk to each other, though so far these interactions have not resulted in the singularity.

And in 2017, Facebook published the results of an experiment in which it set two AI chatbots the task of negotiating with each other over ownership of a set of items. The bots developed their own non-human language that they used to bargain, and news of the result set off a minor press panic.

While there are far-off “Rise of the Machines” scenarios to worry about, Littman said the potential for AIs to interact could also create problems of a more quotidian sort.

For example, he said, you could imagine the designers of AI chatbots instructing them to send any query they receive to other AI chatbots, so they could get lots of different answers to a question and select the best ones. But if these features are designed poorly, and every chatbot has them, a chatbot that receives a query would send it to a hundred of its peers, each of which would send it on to a hundred of their peers, and the bots would flood each other with the same redundant queries until all that traffic brings down the internet.

“Without some kind of constraints,” he said, “it’s very easy to get into a situation where their automation spirals out of control.”
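To put rough numbers on that runaway fan-out, here is a minimal, hypothetical Python sketch; the fan-out of 100 peers per bot comes from the example above, while the hop limit is an assumed stand-in for the kind of constraint Littman describes:

    # Hypothetical back-of-the-envelope sketch of unconstrained query forwarding.
    # The fan-out of 100 peers is from the example above; the hop limit ("hops")
    # is an assumed illustration of a constraint that keeps traffic bounded.
    def forwarded_queries(fanout: int, hops: int) -> int:
        """Total forwarded queries when every bot passes a query on to `fanout` peers for `hops` rounds."""
        return sum(fanout ** h for h in range(1, hops + 1))

    print(forwarded_queries(fanout=100, hops=3))  # 1,010,100 queries after just three rounds
    print(forwarded_queries(fanout=100, hops=1))  # 100 queries with a one-hop limit

Even at three hops, a single question balloons into more than a million redundant queries; a simple hop limit keeps it to a hundred.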

Littman cited the precedent of the Morris Worm. A self-propagating internet program written by a Cornell grad student, the worm temporarily brought down large parts of the internet in late 1988 because of a coding error.

Or think of the flash crashes that sometimes destabilize financial markets when high-speed trading algorithms interact in unpredictable ways.

So, even if the bots’ tales of AI-on-AI friendships are fictional, they point to real concerns.

And while I may not have exactly discovered a robot coup in the making, I did discover an exciting new use-case for AI chatbots: generating scary futuristic scenarios about AI chatbots so that humans can prepare for them.

 

A message from TikTok:

To ensure everyone feels safe to freely express themselves, TikTok has made it easy to control many aspects of the in-app experience. If interactions feel unsafe, these can be blocked with just a few taps. To keep comments positive, select keywords can be filtered out. And if anyone sees something they think violates Community Guidelines, it can be instantly reported to the TikTok team who'll take appropriate action. Learn more at http://tiktok.com/safety.

 
thinking inside the box

More from the department of “parliamentary AI antics”: Romania’s prime minister introduced the country’s parliament to its newest member yesterday, an AI-powered “adviser” named Ion.

As POLITICO’s Wilhelmine Preussen reported yesterday, Prime Minister Nicolae Ciucă said the bot will "quickly and automatically capture the opinions and desires" of Romanians through a portal where citizens can submit suggestions to the government. Ion will then take those suggestions and relay them to the Romanian government, guided by the country’s office of Research and Innovation.

Is Ion essentially a glorified suggestion box? Well… yes. But that’s all the more reason for it to be subject to the same scrutiny and transparency guidelines called for in other AI deployments, according to experts. Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties, told Wilhelmine it raises questions about how suggestions will be prioritized, which “should be explained to the public.” — Derek Robertson

 


 
cbdc concern

There’s a catch to the “baby steps” toward a U.S. CBDC announced yesterday, per POLITICO’s Sam Sutton this morning: “A lot of people aren’t too crazy about this baby.”

Sam reports that the crypto industry and leading Republicans on the issue worry a government-backed CBDC could become a tool for digital surveillance, echoing concerns reported back in August. And banks aren’t exactly happy about the news either: the president of the American Bankers Association said in a statement that “the risks of a U.S. central bank digital currency outweigh any theoretical benefits” and urged the Treasury to include private-sector input in its exploratory project.

Still, those who support the project believe it’s a matter of global competitiveness as countries like Russia and China experiment with their own CBDCs. As Josh Lipsky, the senior director of the Atlantic Council's GeoEconomics Center, told Sam, “Development of wholesale CBDC networks — with the dollar central to them — is important for the long term primacy of the dollar specifically from a national security and foreign policy context.” — Derek Robertson

chatgpt opens up

OpenAI announced yesterday it’s making ChatGPT’s API open to the public, opening the floodgates for a slew of integrations of the powerful technology in other apps. (Sorry about all the acronyms.)

Boasting that the company has “achieved 90% cost reduction for ChatGPT since December” and is “now passing through those savings to API users,” the announcement points to examples like Snapchat, which will integrate a chatbot into its normal social interface; the shopping app Instacart, which will suggest groceries in response to prompts like “lunch”; and the language-learning app Speak.
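For a sense of what those integrations involve, here is a minimal sketch of a ChatGPT API call as it looked at launch, using OpenAI’s Python library; the API key and the Instacart-style grocery prompt below are illustrative placeholders, not anything taken from the announcement:

    # Minimal sketch of a ChatGPT API call at launch (openai Python library, early 2023).
    # The API key and prompt are illustrative placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; real keys come from the OpenAI platform

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model behind the ChatGPT API at launch
        messages=[{"role": "user", "content": "Suggest groceries for lunch."}],
    )
    print(response["choices"][0]["message"]["content"])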

Those who would integrate ChatGPT into their own software still have to apply for use through OpenAI’s platform. Still, it’s notable that a company that less than a year ago was guarding the technology closely is now allowing its use by the public — but not without some key guardrails, including explicitly disallowing its use for “political campaigning or lobbying.” — Derek Robertson

 

STEP INSIDE THE WEST WING: What's really happening in West Wing offices? Find out who's up, who's down, and who really has the president’s ear in our West Wing Playbook newsletter, the insider's guide to the Biden White House and Cabinet. For buzzy nuggets and details that you won't find anywhere else, subscribe today.

 
 
tweet of the day

My Twitter timeline is full of panicked takes about imminent AI apocalypse and certain doom. I think this is starting to get overplayed, and so I want to make a long thread about why I'm personally not worried yet. Get ready for a big one... 1/n

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

Ben Schreckinger covers tech, finance and politics for POLITICO; he is an investor in cryptocurrency.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from TikTok:

To ensure everyone feels safe to freely express themselves, TikTok has made it easy to control many aspects of the in-app experience. If interactions with another person feel unsafe, these can be blocked with just a few simple taps. To keep the comments positive, certain keywords may be selected and the comments that contain them will be automatically removed. And if a video, comment, livestream or even direct message potentially violates Community Guidelines, anyone can report it to the TikTok team who will swiftly take appropriate action. It’s all part of TikTok’s commitment to creating a positive experience for everyone. Learn more at http://tiktok.com/safety.

 
 

DOWNLOAD THE POLITICO MOBILE APP: Stay up to speed with the newly updated POLITICO mobile app, featuring timely political news, insights and analysis from the best journalists in the business. The sleek and navigable design offers a convenient way to access POLITICO's scoops and groundbreaking reporting. Don’t miss out on the app you can rely on for the news you need, reimagined. DOWNLOAD FOR iOS | DOWNLOAD FOR ANDROID.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 

 

To change your alert settings, please log in at https://www.politico.com/_login?base=https%3A%2F%2Fwww.politico.com/settings

