The expanding AI hall of shame

From: POLITICO's Digital Future Daily - Wednesday, May 31, 2023, 08:10 pm
How the next wave of technology is upending the global economy and its power structures
 

By Derek Robertson


The Pentagon, a fake image of which sent a brief shiver through the stock market. | AP

Even by the dizzying pace that AI development has already set, this spring has been… a lot.

There’s the headline news, of course, like OpenAI founder Sam Altman warning Congress of the potential existential harms AI might pose, or yesterday’s open letter saying that AI should be treated with the risk profile of “pandemics and nuclear war.”

But there’s also the near-constant drumbeat of weird, embarrassing and disorienting AI news — not the stuff of a techno-thriller plot, but just as important to keep an eye on as the technology rapidly percolates through society.

“There are equally dangerous problems that are far less speculative because they are already here,” said Louis Rosenberg, a computer scientist who published an academic paper earlier this year on “Conversational AI as a Threat to Epistemic Agency.” “You don't need a sentient AI to wreak havoc, you just need a few sentient humans controlling current AI technologies.”

You could call it the (early-days) AI Hall of Shame. Even AI optimists need to think hard about what these incidents mean — and how they suggest what tools we might need to actually deal with this disruptive technology.

CheatGPT

Jared Mumm’s students had a bit of a rough end to their spring semester. A few weeks ago, the animal science professor at Texas A&M University-Commerce emailed his class to inform them that he had run their essays through ChatGPT to see whether they had been composed… by ChatGPT.

Which, the bot dutifully reported, they were, and therefore every student in the class would be receiving a grade of “incomplete,” potentially endangering their diplomas.

Except they weren’t. After one student proved via timestamps in Google Docs that she composed her essay herself, Mumm gave his students the opportunity to submit an alternate assignment, and a university spokesman noted to the Washington Post that “several students have been exonerated and their grades have been issued, while one student has come forward admitting his use of [ChatGPT] in the course.”

Whatever the final outcome for those harried students, this example is perhaps the most straightforward one yet of how blind human trust in AI-generated content can lead to disaster. AI gets many things wrong, and reliably detecting AI-generated text is especially difficult. For Mumm’s students, that meant a fraught end to their semester, to say the very least, and it could have far more serious repercussions in scenarios with less margin for error.

An AI “flight” of fancy

… Like, for example, a lawsuit that makes it to federal court. As the New York Times reported over the weekend, a federal judge in Manhattan is threatening to sanction a lawyer who submitted a 10-page brief filled with references to imaginary decisions and precedents — all invented by ChatGPT.

The lawyer, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, insisted he had no intent to defraud the court when he cited entirely made-up cases like “Varghese v. China Southern Airlines” as part of his client’s personal injury lawsuit against the airline Avianca. As the Times notes, he even said that he “asked the program to verify that the cases were real,” which, of course, it dutifully confirmed. Good enough, right?

Not for the judge, who has scheduled Schwartz for a hearing on June 8 to “discuss potential sanctions.” We’re moving up the risk ladder: the law has significantly less room for leniency than a classroom does, and being overly credulous toward AI could threaten not only the credibility of a given case, but that of the attorneys (and the legal system) behind it.

An unexpected blast radius

And then there are the real catastrophes. Well, fake real catastrophes — that have real consequences, despite the lack of any “real” damage or danger. Washington was rocked last week by a fake image that circulated on social media claiming to show an explosion near the Pentagon, most prominently shared by a popular national security Twitter account with more than 300,000 followers.

There was no explosion. But the image sent very real shockwaves across the country: The S&P 500 briefly dipped by a quarter of a percent. The White House press shop went into full crisis preparation mode, as West Wing Playbook reported this morning. Twitter announced it would expand its “community notes” crowd-sourced fact-checking feature to include images.

This is already pretty bad, and it doesn’t include any of the additional scenarios — mass blackmail, propaganda, targeted financial fraud — helpfully outlined in a 2022 Department of Homeland Security memo. How should regulators know where to start when it comes to AI-proofing our most vulnerable systems?

If human error or gullibility is behind most of today’s AI harms, the European Union’s risk-based framework, outlined in the draft text of its AI Act, begins to look fairly sensible — the more sensitive the domain, the more legal restrictions are placed on the use of AI systems within it.

“The AI Act from the EU is a good step towards controlling many of the risks of AI,” Rosenberg said, pointing out that it could be quite useful in regulating potential harms of institutional AI deployment, like in parole, hiring, or lending decisions.

But outside those institutions there’s still a Wild West of human error, laziness, and advantage-taking, and guarding against that will take a lot more than federal regulatory strictures.

Regulators “need to be focused on the problems that are about to hit because AI capabilities are moving so quickly,” Rosenberg said. “The EU proposal is very good, but it needs to look a little further ahead. By this time next year, we will all be talking to AI systems on a regular basis, engaging interactively, and we're not ready for those dangers.”

 

DON’T MISS POLITICO’S HEALTH CARE SUMMIT: The Covid-19 pandemic helped spur innovation in health care, from the wide adoption of telemedicine, health apps and online pharmacies to mRNA vaccines. But what will the next health care innovations look like? Join POLITICO on Wednesday June 7 for our Health Care Summit to explore how tech and innovation are transforming care and the challenges ahead for access and delivery in the United States. REGISTER NOW.

 
 
congress' role on ai



Sam Altman answers questions before Congress. | Patrick Semansky/AP Photo

Should Congress play more of a lead role in AI regulation than the executive branch?

Michigan State University professor Anjana Susarla argues the legislative branch should, in a recent op-ed published in The Conversation. That’s because centralizing AI regulation in a single federal agency, as some have proposed, would carry some steep risks.

“Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act,” Susarla writes. “That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.”

Susarla also argues for the importance of a licensing regime around AI development, building on a call from OpenAI CEO Sam Altman for the very same thing. She writes that the government needs to ensure individuals maintain rights to their own data when it’s used to train AI models, echoing the rhetoric of the data dignity movement.

tt-see you later

The fourth annual EU-U.S. Trade and Technology Council meeting came to its conclusion today, and POLITICO’s transatlantic reporting team has the details on how the two geopolitical forces are tackling this historic moment in AI.

POLITICO’s Mohar Chatterjee, Mark Scott, and Gian Volpicelli reported on the action, which culminated in what they describe as a “rough draft, at best” of a plan for transatlantic cooperation on AI. Margrethe Vestager, executive vice president of the European Commission, told the team that a “voluntary code of conduct” devised to prevent harm from AI is currently a mere two-page note, which she personally handed to U.S. Secretary of Commerce Gina Raimondo.

In today’s edition of Morning Tech, Mohar added context to the negotiations from Aaron Cooper, vice president of global policy at BSA | The Software Alliance, who suggested that EU and U.S. leaders would be best served by legislating on AI according to the relative risk of any given deployment — much as the EU’s AI Act plans to do.

It’s still an open question how much “cooperation” is truly possible between the EU’s explicitly regulatory approach and the more hands-off, “guideline”-oriented ethos of the Biden administration and Congress thus far. It might remain that way for a while: as Mohar, Mark, and Gian write, “Ongoing political divisions within Congress make it unlikely any AI-specific legislation will be passed before next year’s U.S. election.”

Tweet of the Day

Hearing someone created a deep fake of me. First of all, it's about time. Second of all, please know that I will never call or text you and ask to talk immediately. A real deep fake would say

the future in 5 links
  • What is a Shoggoth, and why has it come to symbolize AI?
  • Apple could gain AR/VR supremacy with an old tool from its toolbox.
  • The proliferation of AI fakes is rocking the White House press shop.
  • Hedge funds are already putting ChatGPT to (relatively) good use.
  • AI chatbots are not that great at shopping, as it turns out.

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

GET READY FOR GLOBAL TECH DAY: Join POLITICO Live as we launch our first Global Tech Day alongside London Tech Week on Thursday, June 15. Register now for continuing updates and to be a part of this momentous and program-packed day! From the blockchain, to AI, and autonomous vehicles, technology is changing how power is exercised around the world, so who will write the rules? REGISTER HERE.

 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

 

