So you've been defamed by a chatbot

From: POLITICO's Digital Future Daily - Thursday Apr 06, 2023 08:02 pm
Presented by TikTok: How the next wave of technology is upending the global economy and its power structures

By Ben Schreckinger


With help from Derek Robertson


Melbourne, Australia. | Getty Images

Add defamation to the long list of legal areas that are on a collision course with AI.

Yesterday, the mayor of an Australian shire northwest of Melbourne went public with a threat to sue ChatGPT creator OpenAI over outputs that allegedly implicated him in a bribery scandal in which he was actually a whistleblower.

His threat raises the question of whether creators of large language models can be held liable for defamatory statements made by their chatbots. If courts around the world find that they can, it could severely limit the use of existing AI chatbots, which are prone to creating false outputs.

As it turns out, people have already given this some thought.

The verdict: No one knows for sure, and of course, laws vary between jurisdictions.

Last month, one editorial cartoonist, nonplussed to see ChatGPT falsely implicate him in a rancorous feud with a rival cartoonist, quizzed a panel of American jurists in the Wall Street Journal opinion pages about his chances of prevailing in a libel suit.

Opinion was mixed:

Harvard Law’s Laurence Tribe said a claim would be plausible, arguing the law does not care whether libel is generated by a human or machine.

The University of Utah’s RonNell Andersen Jones said it’s a stretch, saying the law was written to apply to a defendant with “a state of mind.” That means the AI would have to have known the output was false, or have written the response with reckless disregard for whether it was true, a difficult standard to apply to an inanimate tool.

Yale Law’s Robert Post said a human would have to spread the libelous output to others for any claim to arise.

In the case of the Australian mayor, Reuters reported that members of the public told him about the false output, but it is unclear whether ChatGPT’s output was spread to others.

In the U.S., one possible defense from similar claims could arise from Section 230 of the Communications Decency Act. It shields website operators from liability for a host of infractions, including defamation, that arise from content posted on their sites by third parties.

But there is at least one big reason to suspect it would not apply to any defamation case in the United States. Judges tend to give weight to the intentions of the legislators who created a given law, and Section 230’s co-authors, Sen. Ron Wyden, an Oregon Democrat, and former Rep. Chris Cox, a California Republican, told the Washington Post last month it should not apply to chatbot creators:

“To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told the paper.

OpenAI did not immediately respond to a request for comment, but it may resort to a more direct defense.

The company’s terms of use contain a lengthy disclaimer requiring users to take ChatGPT’s statements with a grain of salt:

“Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”

Lawyers sometimes invoke a person’s reputation for embellishment or exaggeration as a defamation defense, so it’s reasonable to expect explicit disclaimers like this one to play a prominent role in defamation-by-AI cases.

Of course, there is one entity that seems to be getting off scot-free in all of this discussion: ChatGPT itself. I’m not aware of anyone having sued an artificial intelligence directly, but there’s a first time for everything.

After all, Western law has a long tradition of anthropomorphizing non-human entities, as evidenced by the legal personhood granted to inanimate corporations in many, many contexts.

In fact, in pre-modern law, animals were sometimes named as parties in legal proceedings and assigned lawyers to advocate their cause. Take the 16th Century French rats that ruined a crop of barley and then ignored a court summons (for the latter affront, their counsel blamed cats).

So, could ChatGPT itself be liable here? Asked by DFD whether it defamed the Australian mayor, it responded, “I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience.”

But give it a few months, and the bot may start referring pesky inquiries like this to its AI lawyer.

 

A message from TikTok:

TikTok is building systems tailor-made to address concerns around data security. What’s more, these systems will be managed by a U.S.-based team specifically tasked with managing all access to U.S. user data and securing the TikTok platform. It’s part of TikTok’s commitment to securing personal data while still giving the global TikTok experience people know and love. Learn more at http://usds.TikTok.com.

 
ai timeout

Is it time to pump the brakes on the AI freakout?

POLITICO’s Mark Scott made exactly that case in this morning’s Digital Bridge newsletter, saying, in no uncertain terms, “we all need to calm down” — not because the technology is unimportant or unimpressive, but because all the breathless speculation is distracting from real regulatory problems.

“The problem with ChatGPT hysteria,” Mark writes, is that it’s “given life to misconceptions of what the technology can actually do that obscures what is actually going on under the hood. These aren’t sentient Skynet-style machines. They are based on (forgive me, all readers with a technical background) the collection of reams of often conflicting datasets; overseen by flawed individuals (because we all are); and used for purposes that are either opaque or can lead to unintended consequences. It’s less science fiction; more flawed science.”

And his prescription? “Existing privacy rules — even the sectoral ones in the United States — already give powers to officials to hold companies to account for how they collect mostly publicly-available data. Let’s start with that, and focus on the existing real-world complications. Police use of facial recognition; greater transparency on corporate data collection; and more accountability for government AI use in social benefits, for me, would be a great start.” — Derek Robertson

 


 
biden speaks out (a little) on ai


President Joe Biden talks with reporters after returning to the White House in Washington, March 28, 2023. | Susan Walsh/AP Photo

President Joe Biden made some rare public comments on AI before a meeting with the President’s Council of Advisors on Science and Technology yesterday, saying it “remains to be seen” whether the technology is a danger to humanity.

Biden spoke for only a few minutes to the press before the closed meeting. But he reiterated the principles of last year’s AI Bill of Rights, saying the meeting would focus on “ensuring responsible innovation and appropriate guardrails to protect the rights, safety and privacy of Americans, and to address the bias and disinformation that is possible as well.”

He also said “tech companies have a responsibility, in my view, to make sure their products are safe before making them public.” As calls for AI regulation — both from within and without the industry — ramp up, the White House’s smoke signals around it merit more attention than ever. — Derek Robertson

 

GO INSIDE THE 2023 MILKEN INSTITUTE GLOBAL CONFERENCE: POLITICO is proud to partner with the Milken Institute to produce a special edition "Global Insider" newsletter featuring exclusive coverage, insider nuggets and unparalleled insights from the 2023 Global Conference, which will convene leaders in health, finance, politics, philanthropy and entertainment from April 30-May 3. This year’s theme, Advancing a Thriving World, will challenge and inspire attendees to lean into building an optimistic coalition capable of tackling the issues and inequities we collectively face. Don’t miss a thing — subscribe today for a front row seat.

 
 
tweet of the day

How concerned, if at all, are you about the possibility that Baal will cause the end of the human race on Earth?

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

Ben Schreckinger covers tech, finance and politics for POLITICO; he is an investor in cryptocurrency.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from TikTok:

TikTok has partnered with a trusted, third-party U.S. cloud provider to keep all U.S. user data here on American soil. These are just some of the serious operational changes and investments TikTok has undertaken to ensure layers of protection and oversight. They’re also a clear example of our commitment to protecting both personal data and the platform's integrity, while still allowing people to have the global experience they know and love. Learn more at http://usds.TikTok.com.

 
 

STEP INSIDE THE WEST WING: What's really happening in West Wing offices? Find out who's up, who's down, and who really has the president’s ear in our West Wing Playbook newsletter, the insider's guide to the Biden White House and Cabinet. For buzzy nuggets and details that you won't find anywhere else, subscribe today.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 

 


This email was sent by: POLITICO, LLC 1000 Wilson Blvd. Arlington, VA, 22209, USA

