Experts cold on police AI

From: POLITICO's Digital Future Daily - Thursday Feb 22, 2024 09:01 pm
How the next wave of technology is upending the global economy and its power structures

By Alfred Ng

With help from Derek Robertson

Text from the ChatGPT page of the OpenAI website in this Feb. 2, 2023 photo. | Richard Drew/AP

The RoboCop of the near future could be spending its days filing paperwork.

Companies and former police officers are looking into using generative AI to help cut down time on writing police reports — a time suck that officers argue could be better used for investigating crimes.

The Louisiana-based company 365 Labs, which develops software for law enforcement agencies, and PoliceReports.ai, founded by a former Florida police officer, are pitching technology that can write police reports within a matter of seconds.

They work in different ways, but both follow the same basic premise: Police share details of an incident with a chatbot and get back a legally admissible document written by AI.

Police are drawn to the same promise that has made generative AI attractive in other industries: The technology offers a quick way to spit out clean copy in seconds, sparing officers one of the big nuisances of their jobs. A 2019 survey found that police spend up to three hours a shift on paperwork.

But government agencies and academics are already raising alarms, pointing out that the idea steers straight into two weaknesses of the technology: its propensity to make mistakes, even falsify facts, and the risk of leaking sensitive information.

Large language models such as ChatGPT are known for inaccuracies, often described as “hallucinations.” Some of the higher-profile examples have resulted in allegations of defamation and of perpetuating racist theories. These mistakes, though harmless in many contexts, could have life-altering consequences when the technology is used in law enforcement, said Chris Gilliard, a privacy researcher and Just Tech fellow at the Social Science Research Council.

“If you want to use it to tell your kid a bedtime story about a ninja in the style of a rap by Eminem, sure,” Gilliard said. “But when it’s a life-or-death thing, or it has the potential for long-term effects, it doesn’t seem like a good application.”

In its marketing material, 365 Labs claims that AI-written police reports could actually improve accuracy over reports written by officers, because the software is less prone to misspellings and grammatical errors. The software automatically slots details such as names and locations into templates, according to the company. PoliceReports.ai’s website says that its AI produces accurate reports, but also notes that it’s essential to review and validate all outputs.

365 Labs and PoliceReports.ai didn’t respond to requests for comment.

While there are no federal regulations on how government agencies should use AI, many proposed guidelines for AI’s use in the public sector stress the need for human review. New Jersey’s guidelines for government use of generative AI warn that AI-written content shouldn’t be used without careful editing, and that the technology shouldn’t be used for any sensitive topics.

An Interpol report on ChatGPT’s effects on law enforcement also highlighted that AI does not meet the standards of accuracy and impartiality needed for police reports.

“The final responsibility for the accuracy and quality of police reports shall always remain with police officers who have the necessary training and expertise,” the report stated.

The requirement for human review raises the question of how much time the software would actually save, if it’s creating a new task for police officers to handle on top of generating the reports themselves.

Jonathan Parham, a former police director in Rahway, New Jersey, said he doesn’t support a police report mostly written by AI, calling the concept “troubling.” He raised concerns that AI-written reports would fail to include nuanced details that only a human could describe, and that even a slight error or omission could invalidate the entire report.

But he’s not against using AI to save time. Instead, he has proposed using generative AI more as an aid than as a writer, creating a ChatGPT bot he called the “Police Report-Writer assistant.” He points out that many officers struggle with spelling and grammar, making writing mistakes that could jeopardize an entire case.

His ChatGPT bot, he said, asks officers a set of questions related to the incident, and then organizes the details while cleaning up grammar and spelling errors, but never fully writes the reports.
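The workflow he describes (collect answers to a fixed set of incident questions, then organize and polish them without composing new narrative) could be sketched roughly like this. The question list, function names, and stubbed grammar pass are all hypothetical stand-ins, since his actual bot runs on ChatGPT:

```python
# Hypothetical sketch of a question-then-organize assistant in the spirit
# of the one Parham describes. The LLM grammar/spelling pass is stubbed
# out (as whitespace normalization) so the skeleton runs on its own.

INCIDENT_QUESTIONS = [
    ("location", "Where did the incident occur?"),
    ("time", "When did it occur?"),
    ("parties", "Who was involved?"),
    ("narrative", "Describe what happened, in order."),
]

def polish(text: str) -> str:
    """Placeholder for the LLM grammar/spelling cleanup step.

    In a ChatGPT-backed bot this would be a model call; the officer's
    own wording is preserved, only surface errors are corrected."""
    return " ".join(text.split()).strip()

def organize_report(answers: dict) -> str:
    """Assemble officer-supplied answers into a report skeleton.

    The assistant never invents content: any unanswered question is
    flagged for the officer rather than filled in."""
    lines = []
    for key, question in INCIDENT_QUESTIONS:
        value = answers.get(key)
        body = polish(value) if value else "[OFFICER TO COMPLETE]"
        lines.append(f"{question}\n  {body}")
    return "\n".join(lines)

report = organize_report({
    "location": "400 block of  Main St",
    "time": "approx. 2315 hours",
    "narrative": "responded to report of a  disturbance",
})
print(report)
```

The key design point, matching Parham's framing, is that the tool only reorganizes and cleans up what the officer supplied; gaps stay visibly flagged instead of being filled in by the model.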

“The AI should never replace the officer — it should enhance their operational competency,” Parham said.

He said his chatbot has gotten mixed feedback from police who’ve tried it out, with younger cops touting its benefits, and veteran officers against the technology.

 

 
chips part two

Secretary of Commerce Gina Raimondo. | Andrew Caballero-Reynolds/AFP via Getty Images

Secretary of Commerce Gina Raimondo says that to achieve dominance in the semiconductor industry, the U.S. might need a sequel to the CHIPS and Science Act.

As POLITICO’s Brendan Bordelon reported yesterday for Pros, Raimondo told Intel CEO Pat Gelsinger at an event that she’s “out of breath running as fast as I can to implement CHIPS one,” but “all of that being said, I suspect there will have to be — whether you call it CHIPS Two or something else — continued investment… if we want to lead the world — look, we fell pretty far. We took our eye off the ball.”

The U.S. gave the domestic semiconductor industry a $53 billion subsidy as part of the CHIPS Act passed in 2022. Intel is a recipient of grant money from that bill, which will be used to build a massive microchip complex outside Columbus, Ohio, the details of which Gelsinger said would be announced “very soon.”

state of the llms

An international team of AI researchers found that while large language models have made a great leap over the past few years, there are still both technical and ethical hurdles the field desperately needs to clear.

In a pre-print that looks at the three leading LLM families — OpenAI’s GPT, Meta’s LLaMA, and Google’s PaLM — the researchers led by Snap machine learning lead Shervin Minaee evaluate the technology's current state and make some recommendations for developers going forward. They conclude that while “the pace of innovation is increasing rather than slowing down” in the field, there’s still much to do.

Take context, for example: In the scenario they describe, an LLM needs a lot of data about a user before it can efficiently recommend a good movie. But attention-based models, the dominant form of LLM at the moment, “are highly inefficient for longer contexts,” which might drive more research into different AI architectures.
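The inefficiency the researchers point to is easy to quantify: standard attention compares every token with every other token, so the score matrix for a context of n tokens has n² entries, and doubling the context quadruples that cost. A rough, illustrative calculation (assuming one attention head and 2-byte fp16 scores, figures chosen purely for scale):

```python
# Back-of-the-envelope cost of the n x n attention-score matrix that
# standard (quadratic) attention builds for a context of n tokens.
# Assumes one head and 2-byte (fp16) scores; real models multiply this
# across many heads and layers.

def score_matrix_bytes(n_tokens: int, bytes_per_score: int = 2) -> int:
    """Memory for one head's n x n attention-score matrix."""
    return n_tokens * n_tokens * bytes_per_score

for n in (1_024, 8_192, 65_536):
    mib = score_matrix_bytes(n) / 2**20
    print(f"{n:>6} tokens -> {mib:>10.1f} MiB per head")
```

An 8x longer context costs 64x the score-matrix memory, which is why long-context efficiency is an active research direction rather than a tuning knob.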

They also emphasize that “As LLMs are increasingly deployed in real world applications, they need to be protected from potential threats, to prevent them being used to manipulate people or spread mis-information,” something they find the field is currently working on.


Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).


 

 

