How AI could make the next big crisis way, way worse

From: POLITICO's Digital Future Daily - Wednesday Feb 01, 2023 09:02 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson


The sun rises over an oil field near Lost Hills, Calif. | David McNew/Getty Images

There are plenty of big global problems that people are hoping AI can finally help solve: climate change, traffic deaths, loneliness.

But what if AI, faced with a sudden crisis, is actually the wrong tool to manage a big problem in real time? What if it could make a bad situation drastically worse?

That’s the bleak potential future that Anselm Küsters, a tech researcher and historian at the Center for European Policy in Berlin, explored in a research paper published last December titled “AI as Systemic Risk in a Polycrisis.”

If that last word looks unfamiliar, “polycrisis” is an idea laid out by Columbia University historian Adam Tooze to describe the slow-rolling, mutually reinforcing combination of parallel risks we’re living through — risks to climate, markets, and the security of Europe, just to name a few examples.

In that environment, Küsters argues, when something goes gravely wrong, AI systems trained on older data from a relatively “peaceful” world might be woefully ill-equipped to handle a more chaotic one.

How much should we worry about this, and is there anything we can do? I called him yesterday to discuss the origins of his project, the gulf between “data haves” and “data have nots” in a global crisis, and what the European Union is getting right (and wrong) in the AI Act currently making its way through the European Parliament. An edited and condensed version of our conversation follows:

Why did you decide to write a paper about something outside your field, something that hasn’t really happened yet?

I am an economic historian, so I look at things, especially technology, from a historical perspective. What I’ve noticed is that most people are well aware that you have biased data in the sense that some groups are underrepresented, or that there are historical injustices in the data that then get perpetuated by the system — and it’s really good that this is now a commonly known problem.

But then I wondered if there’s also a temporal bias to the data: if all the data we use from the past 20 years have been collected in times of relative macroeconomic and political stability, then when we try to use AI systems to reduce complexity in a polycrisis, they might have the opposite effect and actually make things worse.
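To make that temporal bias concrete, here is a minimal toy sketch (my own illustration, not from Küsters’ paper), using synthetic data and scikit-learn: a model fit entirely on observations from a calm “stable era” keeps producing confident answers after the regime shifts, even as its error balloons.

```python
# Toy illustration only: synthetic data, hypothetical "stable era" vs. "polycrisis" regimes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Two decades of "calm" data: the outcome tracks the input gently and predictably.
x_stable = rng.uniform(0, 10, size=(1000, 1))
y_stable = 1.0 * x_stable[:, 0] + rng.normal(0, 0.5, 1000)

# The crisis regime: the same inputs now map to far more volatile outcomes.
x_crisis = rng.uniform(0, 10, size=(1000, 1))
y_crisis = 4.0 * x_crisis[:, 0] + rng.normal(0, 5.0, 1000)

# The model only ever sees the calm years.
model = LinearRegression().fit(x_stable, y_stable)

print("error in calm times:", mean_absolute_error(y_stable, model.predict(x_stable)))
print("error in the crisis:", mean_absolute_error(y_crisis, model.predict(x_crisis)))
# The model keeps answering without complaint; the answers are just badly wrong
# once the world it was trained on no longer exists.
```

Nothing in that code is broken, which is the point: the model does exactly what it was trained to do, on a world that no longer exists.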

In the paper you cite the work of Cathy O’Neil, who is a huge critic of how math and computer systems can go terribly wrong when they collide with real life. I take it she influenced your approach?

Her work influenced me a lot, because she comes from a computer science background but takes a broader view — she goes into the real world and looks at actual people and the problems that occur when they encounter these systems. One of the things I found very early on in my research was the story of how credit rating systems malfunctioned during the pandemic. The systems assumed that you don’t shop much online and that you buy most of your groceries in physical stores, so in the early months of the pandemic, of course, they malfunctioned. This was the kind of human story that made me think about the problem in a more general sense.

What kind of systems are most vulnerable to the risk you’ve identified?

I give three primary examples: finance, medicine and security. The fundamental problem in these areas is that their systems are largely automated and thereby automatically affect a large number of people, which makes it difficult to reverse their effects in a short period of time. What you are lacking in a crisis is time.

In your paper you mention the “data haves” and “data have nots.” Who are they, and how would inequality make a crisis like this worse?

We tend to think that the whole world is digitized, but in fact that’s mostly the Western world and various countries that are quite well off. In other countries we lack many indicators or data points we could use to feed these systems, which is a problem because the more data we have, the better we can train our models.

For instance, with more data we could better predict abnormal weather events in developing countries. But even having more data might not always be good, because you can then rely too much on models instead of common sense and human intuition. The more general point to make here is that detecting anomalous events is always difficult with AI because you have to train the system, and by definition anomalies are rare events, so you have a lack of data. To compare a normal situation and an anomaly requires comparable data that we usually don't have — and we especially don't have it in countries that are lacking in data, where we would need it the most.
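That scarcity problem can be shown with an equally simple toy sketch (again my own illustration, not from the interview or the paper): trained on data where anomalies are vanishingly rare, a standard classifier can look almost perfectly accurate while catching almost none of the rare events.

```python
# Toy illustration only: synthetic data in which "crisis" observations are vanishingly rare.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)

n_normal, n_anomaly = 9990, 10  # anomalies are, by definition, rare
X = np.vstack([
    rng.normal(0, 1, size=(n_normal, 2)),   # ordinary observations
    rng.normal(2, 1, size=(n_anomaly, 2)),  # the handful of crisis-like ones
])
y = np.array([0] * n_normal + [1] * n_anomaly)

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

print("overall accuracy:", accuracy_score(y, pred))  # looks superb on paper
print("anomalies caught:", recall_score(y, pred))    # but most rare events are missed
```

The gap between those two numbers is the gap between “the model works” and “the model works when it matters.”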

What does the European Union’s AI Act do well, and not do well, to mitigate this kind of risk?

The EU’s AI Act takes a risk-based approach, so it’s based on systems’ perceived risk, which might not be perfect for avoiding crises, because the environment is changing so quickly and so dramatically.

We can’t change this, so I think it’s important to have a higher proportion of AI systems classified as “high-risk.” At this point in the negotiations there’s a lot of talk about lessening the extent to which systems will be classified as high risk, and I think that's going in the wrong direction.

To some extent you can never fully understand this risk, because machine learning systems are often so sophisticated that even their designers don’t fully understand them. How do you think about risk management given that reality?

Many observers and policymakers think that we should have AI audits that you conduct on a yearly basis, or when you introduce a new product. I agree this is an important first step, but I want to highlight how the future is so unpredictable that no matter how well you do the audit, and no matter how many staff you employ, there will always be blind spots.

That’s partly related to AI systems being a black box, but also to the future being a black box. There are certain risks that might never be quantifiable, but that we should still be aware of — we will always have waves of new technology, and you can never know their effects ex ante, only over the course of history unfolding.

 

chip tension in europe

The global “decoupling”/realignment/nascent trade war over semiconductors is having, predictably, some unintended geopolitical consequences.

Yesterday POLITICO’s Pieter Haeck reported for Pros on how successful U.S. efforts to get the Netherlands to stop selling China advanced semiconductor manufacturing technology have driven a wedge between the Netherlands and the European Union. One EU diplomat told Pieter that the move could leave the EU as a whole vulnerable to retaliation by China, despite the deal being solely between the U.S. and the Netherlands, and one policy fellow argued it made the union look weak because it was effectively left a bystander.

And the fallout isn’t limited to the EU and its member nations: POLITICO’s Graham Lanktree and Annabelle Dickson reported yesterday on the U.K., where Downing Street is worried it’s falling behind in the race to decrease its reliance on China. “The U.K. needs to — at pace — understand what it wants its role to be in the industries that will define the future economy,” one lobbyist told Graham and Annabelle.

a setback for VR


The company said in a regulatory filing Wednesday that the layoffs were a response to “macroeconomic conditions and changing customer priorities.” | Ted S. Warren/AP Photo

One major victim of the tech downturn: Microsoft’s AR/VR efforts, which were drastically cut amid the company’s 10,000-person layoff at the end of last month.

There’s one particularly interesting detail in those layoffs: The company’s HoloLens project, which in 2021 landed a contract with the U.S. Army that could have been worth up to $22 billion, now seems defunct. Last month Congress rejected a request for $400 million to buy more of the goggles, which have been plagued by issues — like causing headaches and nausea — since their introduction.

Congress instead authorized $40 million for the Army to try to develop a new model of the goggles, according to Bloomberg, but as Microsoft cuts back, that might be a taller order than it once was.

tweet of the day

Machinists Union ad in DSA’s magazine Democratic Left, 1982

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

Benton Ives @BentonIves

 
