Add defamation to the long list of legal areas on a collision course with AI. Yesterday, the mayor of an Australian shire northwest of Melbourne went public with a threat to sue ChatGPT creator OpenAI over outputs that allegedly implicated him in a bribery scandal in which he was actually the whistleblower. His threat raises the question of whether creators of large language models can be held liable for defamatory statements made by their chatbots. If courts around the world find that they can, it could severely limit the use of existing AI chatbots, which are prone to producing false outputs.

As it turns out, people have already given this some thought. The verdict: No one knows for sure, and of course, laws vary between jurisdictions.

Last month, one editorial cartoonist, nonplussed to see ChatGPT falsely implicate him in a rancorous feud with a rival cartoonist, quizzed a panel of American jurists in the Wall Street Journal opinion pages about his chances of prevailing in a libel suit. Opinion was mixed: Harvard Law’s Laurence Tribe said a claim would be plausible, arguing the law does not care whether libel is generated by a human or a machine. The University of Utah’s RonNell Andersen Jones said it’s a stretch, because the law was written to apply to a defendant with “a state of mind.” That means the AI would have to know the output was false, or generate it with reckless disregard for whether it was true, a difficult standard to apply to an inanimate tool. Yale Law’s Robert Post said a human would have to spread the libelous output to others for any claim to arise. In the case of the Australian mayor, Reuters reported that members of the public told him about the false output, though it is unclear how widely it spread beyond that.

In the U.S., one possible defense against such claims could come from Section 230 of the Communications Decency Act, which shields website operators from liability for a host of infractions, including defamation, arising from content posted on their sites by third parties. But there is at least one big reason to suspect it would not apply to a chatbot defamation case in the United States. Judges tend to give weight to the intentions of the legislators who created a given law, and Section 230’s co-authors, Sen. Ron Wyden, an Oregon Democrat, and former Rep. Chris Cox, a California Republican, told the Washington Post last month that it should not cover chatbot creators: “To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told the paper.

OpenAI did not immediately respond to a request for comment, but it may resort to a more direct defense. The company’s terms of use contain a lengthy disclaimer requiring users to take ChatGPT’s statements with a grain of salt: “Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.” Lawyers sometimes invoke a person’s reputation for embellishment or exaggeration as a defamation defense, so it’s reasonable to expect explicit disclaimers like this one to play a prominent role in defamation-by-AI cases.

Of course, there is one entity that seems to be getting off scot-free in all of this: ChatGPT itself.
I’m not aware of anyone having sued an artificial intelligence directly, but there’s a first time for everything. After all, Western law has a long tradition of anthropomorphizing non-human entities, as evidenced by the legal personhood granted to inanimate corporations in many, many contexts. In fact, in pre-modern law, animals were sometimes named as parties in legal proceedings and assigned lawyers to advocate their cause. Take the 16th-century French rats that ruined a crop of barley and then ignored a court summons (for the latter affront, their counsel blamed cats).

So, could ChatGPT itself be liable here? Asked by DFD whether it defamed the Australian mayor, it responded, “I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience.”

But give it a few months, and the bot may start referring pesky inquiries like this to its AI lawyer.