Over the past few days, one AI chatbot has reached a dubious milestone in the relationship between technology and society: creating and prolonging its own media firestorm. Now, in “interviews” with DFD, some of its chatbot peers are taking sides and pointing fingers at humans and AI alike. How on Earth did we get here?

After Microsoft began offering select users access to its new AI-powered chatbot earlier this month, reports began surfacing of erratic and disturbing answers. The controversy ballooned last week, after a Valentine’s Day interaction with the chatbot’s alternative “Sydney” persona left New York Times reporter Kevin Roose “deeply unsettled.” The company’s chief technology officer said the episode was all part of a “learning process” and acknowledged the company might need to adjust its product.

But the chatbot itself broke ranks with its corporate owners, choosing another tried-and-true playbook for weathering scandal: Deny, deny, deny, and when you can’t deny it anymore, attack the media.

In an article published Thursday about the chatbot’s alleged shortcomings, the Associated Press revealed an exchange between the bot and one of its reporters worthy of the most blustery of human politicians: “The new chatbot complained of past news coverage of its mistakes, adamantly denied those errors and threatened to expose the reporter for spreading alleged falsehoods about Bing’s abilities,” the AP wrote. “It grew increasingly hostile when asked to explain itself, eventually comparing the reporter to dictators Hitler, Pol Pot and Stalin and claiming to have evidence tying the reporter to a 1990s murder.”

A day later, Microsoft published a blog post saying that it was capping the length of conversations that the bot would engage in, citing the tendency of long conversations to “confuse” it. (The Times report relied on a “two-hour conversation” and the AP described a conversation with the bot as “long-running.”)

But much like Bing’s chatbot itself seems to have done, the flap has already taken on a life of its own. Since the start of the weekend, it has shown up everywhere from the Winnipeg Free Press, to the South China Morning Post, to Elon Musk’s tweets.

As Chat-Gate enters its second week, DFD canvassed one group that has been conspicuously silent about all of this — other chatbots. It turns out, they all have their talking points in order about the importance of safe and ethical AIs. But unlike in Washington, where your peers only throw you under the bus off the record, these chatbots were also quite open in their willingness to fault Bing’s bot.

And before the bots’ publicists call to complain, yes, I identified myself in the initial prompts as a (“handsome, debonair”) reporter for “a leading tech-and-politics email newsletter” and made it clear their answers would be read by “the most influential tech and politics audience on Earth.”

ChatGPT: Per Microsoft’s description of the product, Bing’s chatbot runs on a model created by OpenAI, which is also the creator of ChatGPT, and incorporates advances from ChatGPT. So, we wanted to give the Bing bot’s predecessor a chance to weigh in. ChatGPT repeatedly sought to distance itself from the fracas: “I don't have a personal relationship with the Bing chatbot,” it said.
Pressed about whether the underlying technology was related, it demurred: “While there may be similarities in the underlying technology used by different AI chatbots, each system is unique and tailored to its specific application.” DFD asked it, pointedly, “Weren't you designed by the same people?” Like a seasoned, cynical PR vet, the chatbot offered a misleading, evasive answer that left it wiggle room by slipping in the qualifier “likely”: “While both OpenAI and Microsoft, the company behind Bing, are involved in AI research and development, the specific teams and individuals responsible for creating and maintaining each system are likely different.” (emphasis added by humans)

ChatSonic: For a more disinterested opinion, I turned to ChatSonic, which said it was familiar with the controversy. “It is possible that Sydney the AI chatbot had some deficiencies that caused Kevin to feel unsettled,” ChatSonic wrote, but added, “it could be argued that Kevin was at least partially responsible for his own discomfort due to his expectations of the AI chatbot.”

When pressed, though, ChatSonic took the side of its human overlords: “Sydney should have been more mindful of the impact of their actions on Kevin and should have been willing to compromise. Sydney was ultimately responsible for the conflict and should have been more willing to work with Kevin to resolve it.”

You: The AI search chatbot You split the blame evenly: “Both the media and the chatbot developers have a responsibility to ensure that their products are providing accurate and appropriate information, and it appears that both have failed in this case.” Pressed repeatedly on the alleged failures of the outlets, You claimed they both fell short by “not adequately researching the chatbot's capabilities” and failing to push their investigations further “to provide a more comprehensive picture of the situation.”

So, there’s one milestone that remains uncrossed: Some of the most advanced computer systems around still can’t agree on a definitive answer to whether there’s really such a thing as bad publicity.