For years — long before ChatGPT was even a twinkle in Sam Altman’s eye — there’s been a big worry percolating around artificial intelligence and algorithmically driven decision-making. Namely: Is it fair? As companies and government agencies fold algorithms into their decision-making, critics have warned that these automated tools can bake in unfair practices and create an insidious form of redlining.

As the concerns around automated decision-making tools become a bigger deal — and with the rise of AI, they definitely will become a bigger deal — there’s a broad consensus that new kinds of oversight and laws are needed. But how to do that is a nagging question that isn’t getting easier to answer.

One model is emerging in Colorado, which passed a law back in July 2021 requiring insurance companies to test their data sources, algorithms, and predictive models to ensure they are not unfairly discriminating against consumers on the basis of a protected characteristic, like race, age, or gender.

When the law was proposed, Colorado’s Commissioner of Insurance Michael Conway told state Sen. Janet Buckner that they were “definitely setting the pace by running this type of legislation,” per Buckner, who originally sponsored the bill.

Now, two years later, the actual enforcement of the law is still very much a work in progress. Talking to all the stakeholders affected by the law and making sure the enforcement process has no “unintended consequences” has taken a long time, said Buckner. “We want to make sure that we do this right,” she added. That slow but deliberate progress is a harbinger of what regulating AI will look like for agencies at the state and federal levels.

In concrete terms, Colorado’s law puts hard limits on what kinds of information insurers can use to decide how much a consumer pays for different types of insurance. This means insurers could be held liable if, for example, an individual received a higher quote for their automobile insurance because of their race or gender, even if the decision was made by a big data system — something that was happening in Colorado in the lead-up to the law’s passage, said Buckner.
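What that kind of testing might look like in practice: the sketch below compares a pricing model’s average quotes across a protected class, a simple disparity ratio of the sort bias audits often start with. Everything in it (the quote_model stub, the applicant fields, the 10 percent tolerance) is a hypothetical illustration, not the Division of Insurance’s actual methodology.

```python
# Hypothetical sketch of the kind of bias test Colorado's law contemplates:
# comparing a pricing model's quotes across a protected class. The model,
# data, and threshold below are illustrative assumptions only.
from statistics import mean

def quote_model(applicant: dict) -> float:
    """Stand-in for an insurer's predictive pricing model."""
    # A real model trained on historical data might pick up proxies for
    # protected traits (ZIP code, credit history); this stub prices only
    # on prior claims.
    return 1000.0 + 250.0 * applicant["prior_claims"]

def disparity_ratio(applicants: list[dict], group_key: str) -> float:
    """Ratio of the highest to lowest mean quoted premium across groups.
    A ratio well above 1.0 flags the model for closer investigation."""
    groups: dict[str, list[float]] = {}
    for a in applicants:
        groups.setdefault(a[group_key], []).append(quote_model(a))
    group_means = [mean(quotes) for quotes in groups.values()]
    return max(group_means) / min(group_means)

applicants = [
    {"prior_claims": 0, "gender": "F"},
    {"prior_claims": 1, "gender": "F"},
    {"prior_claims": 0, "gender": "M"},
    {"prior_claims": 1, "gender": "M"},
]

ratio = disparity_ratio(applicants, "gender")
if ratio > 1.10:  # illustrative tolerance, not a regulatory standard
    print(f"Flag for review: mean premium ratio across groups is {ratio:.2f}")
else:
    print(f"No disparity flagged (ratio {ratio:.2f})")
```

In a real audit, the model would be the insurer’s own, the applicant records would be actual or synthetic test data, and any acceptable threshold would come from the regulator’s final rules.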
Insurance industry stakeholders, on the other hand, said the law would raise insurance rates for everyone. “That was their mantra,” said Buckner. Throughout the process of getting this law passed, “insurance companies really gave me a lot of pushback — which is expected,” she recalled.

It might sound straightforward, but figuring out how to guide different kinds of insurance companies through auditing practices and external reporting requirements has required a lot of discussion and work, according to Jason Lapham, the Colorado Division of Insurance’s big data and AI policy director. The division is currently tackling life insurance underwriting, he said; its latest meeting with life insurance stakeholders took place today.

Another state going through a similar process is New York, which in December 2021 passed a law requiring employers to conduct bias audits of automated employment decision tools. After months of debate and deadline pushbacks, New York recently pulled ahead of Colorado by finalizing its enforcement rules in April. Its current enforcement deadline is July 5th.

Colorado’s implementation timeline, on the other hand, is flexible. The law was originally meant to be enforced no earlier than January 2023 — but “there’s no date certain by which the division needs to have rules in place implementing the law,” Lapham said.

The division is using that time to hold stakeholder meetings on the draft rules it will follow on enforcement, a way to get “constructive engagement” on the proposals. The most vocal stakeholders tend to be industry groups, Lapham said. “Industry has been most critical, primarily through the trade associations,” he noted.

Colorado and New York are largely operating in a national void on algorithmic fairness and AI — no national laws exist, or are even close to passing. Lapham does, however, have a few guidelines to fall back on: the NIST AI Risk Management Framework and the White House’s Blueprint for an AI Bill of Rights have helped the division think through best practices, he said.

And despite its head start, Colorado is in no rush “to act alone for the sake of acting alone,” Lapham said. Since insurance regulation is a state-based system, enforcing such laws “is rather familiar territory for the states,” he said, even though the rules involve AI-enabled systems.

But that’s not to say the state is waiting on anyone else. Even though the mainstream conversation has only just caught up to AI this year, the division’s enforcement efforts were seeded years ago. “It really is one of these situations where I don't think it made a lot of sense to continue to wait” for federal law to catch up, Lapham said. Colorado’s distinction as the pace-setter on AI regulation for insurance, though, now depends on when the rubber actually hits the road on enforcement.