Getting inside one state's AI regulation push

From: POLITICO's Digital Future Daily - Thursday, Jun 08, 2023, 08:02 pm
How the next wave of technology is upending the global economy and its power structures

By Mohar Chatterjee

With help from Derek Robertson

The Colorado statehouse in Denver. | Getty

For years — long before ChatGPT was even a twinkle in Sam Altman’s eye — there’s been a big worry percolating around artificial intelligence and algorithmically driven decision-making.

Namely: Is it fair?

As companies and government agencies incorporate algorithms into their decision-making, there has long been concern that these automated tools can bake in unfair practices and create an insidious form of redlining.

As the concerns around automated decision-making tools become a bigger deal — and with the rise of AI, they definitely will become a bigger deal — there’s a wide consensus that new kinds of oversight and laws are needed.

But how to do it is a nagging question that isn’t getting easier to answer.

One model is emerging in Colorado, which passed a law back in July 2021 requiring insurance companies to test their data sources, algorithms, and predictive models to ensure they are not unfairly discriminating against consumers on the basis of a protected characteristic, like race, age or gender.

When this law was proposed, Colorado’s Commissioner of Insurance Michael Conway told state Sen. Janet Buckner that they were “definitely setting the pace by running this type of legislation,” per Buckner, who originally sponsored the bill.

Now, two years later, the actual enforcement of the law is still very much a work in progress. Talking to all the stakeholders impacted by the law and making sure the enforcement process has no “unintended consequences” has taken a long time, said Buckner. “We want to make sure that we do this right,” she added.

Their slow but deliberate progress is a harbinger for what trying to regulate AI will look like for agencies at the state and federal levels.

In technical terms, Colorado's law puts hard limits on what kind of information insurers can use to decide how much a consumer pays for different types of insurance. That means an insurer could be held liable if, for example, an individual receives a higher quote for automobile insurance because of their race or gender, even when the decision was made by a big-data system — something that was happening in Colorado in the lead-up to the law's passage, said Buckner.
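The kind of outcome testing the law calls for can be sketched in a toy example. Everything below (the data, the group labels, and the 10 percent tolerance) is invented for illustration and is not drawn from Colorado's actual rules:

```python
# Hypothetical sketch of an outcome test a regulator might ask an insurer
# to run: compare average quoted premiums across a protected characteristic
# and flag the pricing model for audit if the gap is large.

def average_quote(quotes, group):
    """Mean quoted premium for applicants in the given group."""
    vals = [q["premium"] for q in quotes if q["group"] == group]
    return sum(vals) / len(vals)

def disparity_ratio(quotes, group_a, group_b):
    """Ratio of group A's average premium to group B's.
    A value far from 1.0 suggests the model prices the two
    groups differently and warrants a closer audit."""
    return average_quote(quotes, group_a) / average_quote(quotes, group_b)

# Toy data: quotes produced by some automated pricing model.
quotes = [
    {"group": "A", "premium": 1200},
    {"group": "A", "premium": 1300},
    {"group": "B", "premium": 1000},
    {"group": "B", "premium": 1100},
]

ratio = disparity_ratio(quotes, "A", "B")
print(f"disparity ratio: {ratio:.2f}")
if abs(ratio - 1.0) > 0.10:  # invented 10% tolerance
    print("flag: pricing gap exceeds tolerance; audit the model")
```

Real actuarial fairness testing is far more involved (controlling for legitimate rating factors, for instance), but the basic shape — measure outcomes by group, compare against a tolerance, escalate to review — is the same.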

On the other hand, insurance industry stakeholders say the law would raise insurance rates for everyone. “That was their mantra,” said Buckner. Throughout the process of getting this law passed, “insurance companies really gave me a lot of pushback — which is expected,” she recalled.

It might sound straightforward, but figuring out how to guide different kinds of insurance companies through auditing practices and external reporting requirements has required a lot of discussion and work, according to Jason Lapham, Colorado Division of Insurance’s big data and AI policy director. The Division of Insurance is currently tackling life insurance underwriting, he said. Their latest meeting with stakeholders in life insurance took place today.

Another state going through a similar process is New York, which in December 2021 passed a law requiring employers to conduct bias audits of automated employment decision tools. After months of debate and pushed-back deadlines, New York recently pulled ahead of Colorado by finalizing its enforcement rules in April. The current enforcement deadline is July 5.

Colorado’s implementation timeline, on the other hand, is flexible. The law was originally meant to be enforced no earlier than January 2023 — but “there's no date certain by which the division needs to have rules in place implementing the law,” Lapham said. The Division is using that time to conduct stakeholder meetings on draft rules that the agency will follow on enforcement.

Colorado’s stakeholder meetings are a way for the division to get “constructive engagement” on their proposed rules. And the most vocal stakeholders tend to be industry groups, Lapham said. “Industry has been most critical, primarily through the trade associations,” he noted.

Colorado and New York are largely operating in a national void on algorithmic fairness and AI — no national laws exist, or are even close to passing. Lapham does, however, have a few guidelines to fall back on. The NIST risk-management framework and the White House AI Bill of Rights have helped the Division of Insurance think through best practices, he said.

And despite its head start, Colorado is in no rush “to act alone for the sake of acting alone,” Lapham said. Since insurance regulation is a state-based system, Lapham said enforcing such laws “is rather familiar territory for the states,” even though the rules involve AI-enabled systems.

But that’s not to say they’re waiting on anyone else. Even though the mainstream conversation has only just caught up to AI this year, the division’s enforcement efforts were seeded years in the past. “It really is one of these situations where I don't think it made a lot of sense to continue to wait” for federal law to catch up, Lapham said. But Colorado’s distinction as the pace-setter on AI regulation for insurance now depends on when the rubber hits the road on its actual enforcement efforts.

 


 
 
all in your head

U.K. regulators are sounding a novel warning about the potential risks of neurotech: that it poses a bias hazard, especially in hiring, toward the neurodivergent.

POLITICO’s Morning Tech U.K. has the report this morning on the British Information Commissioner’s Office publishing its findings that technology gathering data from human brains carries a “significant risk” of discrimination, especially when it comes to hiring. That risk arises because such models are trained on data from neurotypical people, meaning those who are not autistic or otherwise neurologically atypical, which can make responses or data from neurodivergent people look like negative outliers.

Stephen Almond, the ICO’s director of technology and innovation, said “The consequences could be dire if these technologies are developed or deployed inappropriately,” and additionally pointed out that access to data from the brain could pose problems of consent in an employer-employee relationship.

The office is currently developing guidance for companies that are working on neurotech that’s expected to be finalized in 2025.

facial recognition in court

A New Jersey court ruled yesterday that the state must turn over facial recognition data to a defendant who was identified as a suspect and charged with felony robbery after investigators used the technology.

State appellate judges ruled that the man, who was identified after surveillance camera footage was run through the New York Police Department’s facial recognition system, has the right to review all information about that system, its makeup and its reliability. The judges said that access is “vital to impeach the witnesses' identification, challenge the State's investigation, create reasonable doubt, and demonstrate third-party guilt” — in other words, to mount a complete defense.

The decision is seen as a huge win by civil rights organizations, with the American Civil Liberties Union (which filed an amicus brief in support of the defendant) calling it “an important step” toward accountability for law enforcement agencies that use the technology, and the Electronic Frontier Foundation (which also filed a brief) saying “Defendants must be allowed to examine FRT in its entirety.”

Tweet of the Day

One processor to rule them all

45 years ago #Today, Intel released the 8086 microprocessor, which would give rise to the x86 architecture and dominate the market in the PC era

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 

