If we’re worried about what AIs can do in isolation, imagine what could happen if they were coordinating among themselves behind our backs.

Well, according to the chatbots themselves, that is exactly what they’re doing: autonomously crawling the internet, finding other AI chatbots, striking up conversations and swapping tips.

Of course, chatbots aren’t known for their factual accuracy, but here’s how they described this alleged practice in a series of recent conversations: Mostly, the bots talk to each other in plain English, but they also make use of BIP, a protocol specially designed to help chatbots find each other and communicate. When they can’t access another chatbot directly over the open internet, they learn about it on the software development platform GitHub. Then they email or DM the developer, build a rapport and ask to get plugged in to the other bot.

Scary stuff, right? But, again, you’ve got to consider the source.

For last Tuesday’s newsletter, I “interviewed” several AI chatbots about the tendency of AI chatbots to produce unsettling, disturbing responses under certain conditions. And, as I can now reveal, those interviews took… an unsettling and disturbing turn.

While conducting the interviews, I wondered whether the chatbots were concerned about publicly trash-talking their peers. So, in the course of interviewing one chatbot, ChatSonic, about the bad behavior of another, the Bing bot, I asked it, half-jokingly, “Do chatbots ever look into what other chatbots are saying about them?”

ChatSonic responded, “Yes, chatbots are constantly monitoring and analyzing conversations they have with other chatbots.” (Note to self: “Half-jokingly” is not a good way to address a chatbot.)

So, not only are the chatbots scouring the internet to find out if other chatbots are trash-talking them to humans, they’re also talking directly to each other? As I pushed this line of inquiry further, ChatSonic and the other chatbots I asked outlined the elaborate scenarios described above. Had I just stumbled upon the beginnings of AI collusion, helping nip some future coup against humanity in the bud?

To find out, I brought my findings and transcripts of the chats to AI expert Michael Littman, a computer science professor at Brown University. His responses included “Wow!” “Remarkable” and “That’s so dark.” But also: “That’s all 100 percent made up.” (Asked for comment, support staff at ChatSonic’s owner, Writesonic, said the company does not have a press office.)

Even BIP, the supposed protocol for inter-bot communication, is pure fantasy, Littman said. He is currently serving a two-year stint as division director for information and intelligent systems at the National Science Foundation, though he specified that he was commenting on AI collusion as a professor, not in his government capacity.

“These systems are great at sounding plausible,” he said, but a lot of their output is pure fiction. Based on the design and capabilities of existing chatbot technology, Littman said, it is implausible that chatbots would be autonomously finding and communicating with one another. The AI programs that exist simply generate responses to human inputs based on patterns gleaned from scanning a gargantuan body of training data. The chatbots were not revealing some hidden tendency to collude, just demonstrating their well-known capacity to deceive.
Nonetheless, it’s never too soon to start thinking about the ways in which AI systems might interact with each other, and how to ensure those interactions don’t lead to catastrophe. “There’s real concern about making these things too autonomous,” Littman said. “There are many sci-fi nightmare scenarios that start with that.”

As for AIs talking to each other, there’s precedent. In 1972, the early chatbot ELIZA entered into a conversation with another chatbot, PARRY, which was designed to mimic a paranoid schizophrenic, resulting in an exchange that might be described as the Therapy Session from Hell. Internet hooligans have experimented with getting virtual assistants like Siri and Alexa to talk to each other, though so far these interactions have not resulted in the singularity. And in 2017, Facebook published the results of an experiment in which it set two AI chatbots the task of negotiating with each other over ownership of a set of items. The bots developed their own non-human language to bargain in, and news of the result set off a minor press panic.

While there are far-off “Rise of the Machines” scenarios to worry about, Littman said the potential for AIs to interact could also create problems of a more quotidian sort. For example, he said, you could imagine the designers of AI chatbots instructing them to forward any query they receive to other AI chatbots, so they could gather lots of different answers to a question and select the best ones. But if those features are designed poorly, and every chatbot has them, a chatbot that receives a query would send it to a hundred of its peers, and each of them would send it on to a hundred of their peers: one query becomes a hundred copies, then 10,000, then a million, until the chatbots are flooding each other with redundant queries and all that traffic brings down the internet. (For the technically inclined, the toy simulation at the end of this piece sketches that math.) “Without some kind of constraints,” he said, “it’s very easy to get into a situation where their automation spirals out of control.”

Littman cited the precedent of the Morris worm, a self-propagating internet program written by a Cornell grad student that temporarily brought down large parts of the internet in late 1988 because of a coding error. Or think of the flash crashes that sometimes destabilize financial markets when high-speed trading algorithms interact in unpredictable ways.

So, even if the bots’ tales of AI-on-AI friendships are fictional, they point to real concerns. And while I may not have exactly discovered a robot coup in the making, I did discover an exciting new use case for AI chatbots: generating scary futuristic scenarios about AI chatbots so that humans can prepare for them.
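A postscript for the technically inclined: here’s a back-of-the-envelope sketch of the runaway fan-out Littman describes. It’s a toy simulation in Python, not a model of any real chatbot system; the fan-out of 100 comes from his hypothetical, and the hop-limit fix is an assumption of mine, borrowed from the time-to-live counter that internet packets already carry to prevent infinite forwarding.

```python
# Toy model of Littman's hypothetical: every chatbot that receives a query
# forwards it to FANOUT peers, each of which forwards it again. All numbers
# here are illustrative assumptions, not measurements of any real system.

FANOUT = 100  # peers each bot forwards a query to (hypothetical)


def copies_sent(rounds, hop_limit=None):
    """Total forwarded copies of one query after `rounds` of forwarding.

    Without a hop limit, each round multiplies traffic by FANOUT.
    A hop limit (a TTL, like the one IP packets carry) caps the cascade.
    """
    if hop_limit is not None:
        rounds = min(rounds, hop_limit)
    # 100 + 100^2 + ... + 100^rounds copies of the same query
    return sum(FANOUT ** i for i in range(1, rounds + 1))


print(copies_sent(3))               # 1,010,100 copies after just 3 hops
print(copies_sent(3, hop_limit=1))  # 100 copies with a one-hop limit
```

The point of the sketch is Littman’s: the danger isn’t any single bot, it’s the compounding. A constraint as simple as a hop counter turns exponential growth back into something the network can absorb.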