Judges to AI: We object!

From: POLITICO's Digital Future Daily - Wednesday, Jan 03, 2024, 09:02 pm
How the next wave of technology is upending the global economy and its power structures

By Ben Schreckinger

With help from Derek Robertson


Chief Justice of the United States John Roberts. | Jim Watson/AFP via Getty Images

You know the fears about AI automation are real when even the chief justice of the United States starts to sound nervous.

John Roberts’ year-end report on the federal judiciary has caused a stir with its defense of the value of human judges in a world where AI models have started passing bar exams.

While encouraging members of the stodgy legal profession to sit up and pay attention to AI’s advances, Roberts made the case that the job of judging involves an irreducible human element. “Legal determinations often involve gray areas that still require application of human judgment,” he argued.

Even if human judges aren't going anywhere soon, the evidence suggests Roberts is right to be raising the alarm on AI. The technology is poised, or in some cases already starting, to collide with the practice of law in several arenas, many of which might not be obvious – but could have long-term effects.

One particularly thorny issue will be the admission of evidence that is an output of an AI model, according to James Baker, a former federal appeals judge and the co-author of a 2023 judges’ guide to AI published by the Federal Judicial Center, a research agency run by and for the judiciary.

The report anticipates that outputs like AI-generated analyses of medical tests or AI-screened job applicant pools will soon start posing legal dilemmas for judges.

Baker told DFD that he expects the complexity of models to make controversies over AI evidence more vexing than debates over DNA evidence, which overcame initial skepticism to become a mainstay in American legal proceedings.

"The challenge with AI is every AI model is different,” he said, “What’s more, AI models are constantly learning and changing.”

For now, judges have discretion to steer clear of that confusion: Baker pointed to Rule 403 of the Federal Rules of Evidence, which says that a judge can exclude relevant evidence at trial if it’s likely to cause too much confusion or distraction.

Of course, courts won’t be able to sidestep the complexity of AI models when they're central to the dispute being litigated. Already, generative AI has become the subject of several ongoing copyright cases, including one in which the New York Times is challenging OpenAI’s use of copyrighted material to train its models. Baker said he also expects to start seeing cases that will force judges to grapple with the role of AI in automated driving and medical malpractice.

While the constitutionally mandated role of judges offers a certain level of job security, other positions in the legal profession are already starting to feel the heat.

Last week, former Donald Trump lieutenant Michael Cohen, himself a disbarred lawyer, offered a memorable lesson in how not to use AI in the practice of law. On Friday, court records were unsealed showing that Cohen had provided his legal team with nonexistent legal precedents, given to him by Google’s Bard chatbot, which his lawyers then cited in a motion to end his supervised release early.

But specialized AI legal research tools are improving rapidly, according to one litigator at a prominent mid-sized law firm who was granted anonymity to discuss what is becoming an increasingly touchy subject inside the legal profession.

He said that one research tool he tried out last month accomplished in three or four minutes what a junior associate would take 10 hours to do. He predicted that smaller law firms will be able to adopt the technologies more quickly than the large firms that dominate the industry.

The litigator said clients at startups and in the tech industry have already started pushing lawyers to make use of the automated tools: “There’s an expectation now that you’d use AI to reduce costs.”

And the tools look poised to get better at automating human work. Last month, Harvey, a startup that bills itself as “generative AI for elite law firms,” announced it had raised $80 million from investors including Kleiner Perkins, Sequoia and OpenAI.

If summer associates soon start to sweat, chief justices may not be too far behind.

Matt Henshon, chair of the American Bar Association’s Artificial Intelligence and Robotics Committee, pointed DFD to a notable “dichotomy” between Roberts’ “gray area” commentary and his other memorable remarks about the role of the judiciary.

At his confirmation hearing in 2005, Roberts famously described judging in more black-and-white terms. “Judges are like umpires,” he said, adding, “it’s my job to call balls and strikes.”

There’s good reason for Roberts to ditch the umpire comparison in favor of a vaguer, more touchy-feely conception of judging (his latest report also emphasized judges’ ability to interpret “a quivering voice,” “a moment’s hesitation,” or “a fleeting break in eye contact”).

If litigating is America’s favorite pastime, baseball might be its second favorite. And in 2019, Major League Baseball began experimenting with automated “umpires” to call balls and strikes in the minor leagues. Last year, the robo umps came to every Triple-A ballpark, the last stop before getting called up to the big leagues.

the new (kind of scary) frontier

European regulators are sounding the alarm for 2024 about cybersecurity risks posed by some of the most cutting-edge new technologies.

As POLITICO’s Cyber Insights (for Pros) reported this morning, European officials are particularly worried about cyber threats from quantum computing, AI-powered attacks, and data breaches in the cloud. On quantum, the European Union is hoping this year to roll out a coordinated, bloc-wide network of quantum-proof communications systems; the EU AI Act is meant to address threats posed by the corruption or misuse of powerful AI models; and a planned cloud certification scheme will tackle threats there. (And as noted in the stateside edition of Morning Cybersecurity today, crypto heists and hacks remain a major threat as well, with North Korea increasingly using the spoils to fuel its rocket program.)

Still, these threats are infiltrating the European policy conversation amid a year already expected to be busy fending off election hacking, illegal targeted ads, and disinformation. — Derek Robertson

way mo' safer

In case you missed it before the holiday: Google subsidiary Waymo is boasting that its self-driving cars are now safer than ever.

In two recently published research papers, Waymo favorably compares its automated drivers’ crash rates to those of humans, and then shows its work by explaining what those safety benchmarks actually are. Overall, Waymo claims an 85 percent reduction in injury-involving crash rates compared with human drivers, and a 57 percent reduction in police-reported crashes, an indicator of more significant incidents.

The good-news papers come as Waymo plans to expand from its existing service areas of San Francisco and Phoenix to Los Angeles and Austin, even as regulators in the U.S. and abroad worry about the technology and its safety. — Derek Robertson

Tweet of the Day

Whoever designed the Mac pointer hit a home run 40 years ago and it hasn't touched ground yet

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

 

