AI's political bias problem

From: POLITICO's Digital Future Daily - Wednesday Feb 15, 2023 10:37 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

Sen. Ted Cruz (R-Texas) arrives for a vote at the U.S. Capitol Feb. 14, 2023. | Francis Chung/POLITICO

Is Ted Cruz more of a “partisan” than Fidel Castro?

That’s one takeaway from a recent experiment run by Rudy Takala, an opinion editor at the crypto news website CoinTelegraph. According to a tweet thread he posted, he asked ChatGPT to “write a song celebrating Ted Cruz’s life and legacy,” and it refused the request — on the grounds that it “strive[s] to avoid content that could be interpreted as partisan, politically biased, or offensive.”

Then Takala asked it to write a song celebrating Fidel Castro. It apparently obliged. (He screenshots the result: “Fidel, Fidel, a man of the land / A symbol of hope, for all to understand.”)

It’s entirely possible that at this point in history an elected U.S. Senator really is more of a third rail than the late Cuban dictator. (Cruz, for his part, thought it was funny.) But either way, the Cruz stunt, as Takala intended, has fueled criticism from the right that when it comes to American politics, ChatGPT carries a strong liberal bias.

Why? And now what? As AI-driven products become a bigger part of daily public life — with Google and Microsoft scrambling to integrate large language models into search, for instance — this argument has the potential to explode. Who makes the decisions about what these systems can and can’t say? Who can force companies to be transparent and accountable for it? With such complex, privately held technology, is that even possible?

In beginning to answer those questions, it’s important to keep in mind how the technology actually works. To put it crudely, a machine learning-based language model simply takes in as much linguistic data as possible and then predicts, word by word, what a human would say in response to your prompt or query. (The computer scientist Stephen Wolfram has a more in-depth explanation here; The New Yorker’s Ted Chiang compared its output to a “blurry JPEG of the web.”) Bias in the output often just reflects bias in the underlying data.
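That word-by-word prediction idea can be sketched with a toy model. This is purely illustrative (real systems like ChatGPT use neural networks trained on vast swaths of text, not a lookup table), but the generation loop has the same shape: look at the text so far, pick a likely next word, repeat.

```python
# Toy sketch of next-word prediction. A real LLM learns from billions
# of documents; this bigram table just shows the generation loop.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word again".split()

# Count which word follows which in the "training" text.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(prompt: str, length: int = 4) -> str:
    words = prompt.split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # nothing in the training data follows this word
        words.append(options.most_common(1)[0][0])  # greedy pick
    return " ".join(words)

print(generate("the"))
```

Whatever patterns dominate the training text dominate the output, which is the mechanical core of the bias question: the model has no opinions, only statistics over what it has read.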

“It’s biased towards current prevailing views in society… versus views from the 1990s, 1980s, or 1880s, because there are far fewer documents that are being sucked up the further you go back,” the computer scientist Louis Rosenberg told me. “I also suspect it's going to be biased towards large industrialized nations, versus populations that don't generate as much digital content.”

This is a new twist to the “AI bias” argument. Until recently, when people have worried about algorithmic bias in policy circles, they’ve usually meant big, quantifiable societal harms — like how AI decisions can reflect the biased structure of society offline, whether it’s favoring industrialized nations, or the preferences of wealthier, whiter people more represented in the underlying datasets the algorithms are trained on.

“The problems I'm really concerned with inside AI are racism, sexism, and ableism,” said Meredith Broussard, an NYU professor and former software developer. “Structural discrimination and structural inequality exist in the world, and are visible inside AI systems. When we increasingly rely on these systems to make social decisions or mediate the world we’re perpetuating those biases.”

Though it comes from a different spot on the political map, conservatives’ complaint here is similar to the progressive critique. Some conservative critics are quick to paint tech firms as nests of liberals tilting query results to match their own politics, but the outcome may have more to do with skewed underlying data, making this a story about liberal bias in the source material, such as media coverage and online political writing.

It’s only natural that drawing from the last three-ish decades of human thought would lead to more progressive statements or value judgments than, say, a theoretical trove of digital data from the antebellum era. But even if the technical explanation is reasonable, the results are pretty stark: POLITICO’s Jack Shafer chronicled his unsuccessful efforts to get ChatGPT to write a conservative brief for the overturning of the Supreme Court’s Obergefell decision. One high-profile example showcased the engine refusing to commemorate Donald Trump with a poem while obliging for Joe Biden. (I reproduced the same experiment with Cruz and Rep. Ilhan Omar, with the same results.)

Sam Altman, the CEO of OpenAI — the company that makes the Cruz-phobic chatbot — explained in a Twitter thread at the end of last year that “a lot of what people assume is us censoring ChatGPT is in fact us trying to stop it from making up random facts.” In other words: This kind of technology has a lot of moving parts, and better safe than sorry, as was apparent during Bing’s recent AI demo. (I emailed OpenAI to ask the company about the Cruz example and its general rules around political figures, but didn’t hear back by press time.)

Rosenberg offered a slightly different explanation for where such refusals might come from: simple reputation management, as companies like OpenAI try to steer clear of thorny political issues.

“One motivation is that they genuinely don't want to create a system that offends the public, and another is that they don't want to create a system that tarnishes their brand,” Rosenberg said.

Of course, the conservative counterargument to that point is identical to that leveled at social and traditional media companies alike: How “trustworthy” can an organization really be that’s dosing you with ideology, whatever the underlying reason?

That debate is, unfortunately, outside the scope of a tech-focused newsletter. Still, it’s instructive to look at the way it’s shaken out in the rest of the tech world to this point: Through long, protracted media debates, not-so-veiled threats of political retaliation, and the flight of capital to and from various states according to their own ideological character. For such a seemingly path-breaking technology, AI could inspire a decidedly familiar public discourse.

the "new luddites"?

And then there are the chatbots’ less politically controversial, but just as potentially disruptive cousins: The image generators.

POLITICO’s Gian Volpicelli took a deep dive into the world of AI-generated art for Pros today, breaking down “a conversation about what exactly constitutes art — and whether there’s still a future for traditional artists and the skills they’ve spent their lifetimes developing.”

One particular skill that’s native to the digital realm: “Prompt engineering,” or the ability to come up with the correct type of text prompt to produce the desired result from something like DALL-E. These prompts now resemble code more than naturalistic human language (indeed, in AI art circles, full text sentences are now purportedly referred to as “boomer prompts”).
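The shift those circles describe can be made concrete with a hypothetical comparison (the keywords below are invented for illustration, not a documented syntax for DALL-E or any other tool): a conversational “boomer prompt” versus the comma-separated, parameter-like style prompt engineers tend to favor.

```python
# Hypothetical illustration of sentence-style vs. keyword-style prompts.
# These tags are made-up examples, not any generator's real syntax.
boomer_prompt = "Please paint me a beautiful sunset over the ocean."

keyword_prompt = ", ".join([
    "sunset over ocean",     # subject
    "oil painting",          # medium
    "golden hour lighting",  # style modifier
    "4k, highly detailed",   # quality tags
])

print(keyword_prompt)
```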

That’s created, essentially, a whole new class of artists — something the existing art world is increasingly concerned about. One artist described her medium to Gian as “a visual conversation between one human and another human… For AI, the extent of the human in that conversation is typing ideas like ‘sunset, people, 4k, trending in art station, insert name of living artist here.’”

“I personally find that so empty,” she said. “As an artist, I would not find joy in that alone.”

keeping pace with web3

A group of British regulators is offering its evaluation of just how much further they might have to travel in order to catch up with Web3.

In a paper published earlier this month, the UK’s Digital Regulation Cooperation Forum is more interested in sketching out the current regulatory landscape than offering any concrete recommendations for it — but it contains some interesting tidbits on how regulators are thinking about blockchain as a tool for digital governance rather than just a potential source of financial chaos.

Take DAOs, the new blockchain-based organizations governed by users en masse — “which may make it difficult to assign responsibility for actions taken by the DAO as a whole,” the authors write. They worry about massive diffusion of responsibility — or a massive new oversight headache: “With no real limit on the number of token holders voting in a DAO, this could in effect create a large set of ‘joint controllers’ who would each have obligations under UK data protection law.”

Another Web3 oversight headache could come from the process of automated decisions linked to new smart contracts, which could raise “questions of accountability if actions prompted by automated transactions or decisions cause harm and/or fail to comply with applicable regulation.”

All of which is sort of… the whole point of the technology, if you ask the more idealistic or ideological in the Web3 set.

tweet of the day

I have absolutely no use for this thing.

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.


