Timnit Gebru's anti-'AI pause'

From: POLITICO's Digital Future Daily - Tuesday, Apr 11, 2023, 08:01 pm
How the next wave of technology is upending the global economy and its power structures

By Mark Scott

With help from Derek Robertson

Then-Google AI Research Scientist Timnit Gebru speaks during the TechCrunch Disrupt SF 2018 conference. | Kimberly White/Getty Images for TechCrunch

Last Thursday, POLITICO’s Mark Scott, author of the Digital Bridge newsletter, interviewed the computer scientist and activist Timnit Gebru about a recent open letter from her Distributed AI Research Institute, which argued — contra the Future of Life Institute’s high-profile letter calling for an “AI pause” — that the major harms caused by AI are already here, and that “Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Mark asked her what she thinks regulators’ role should be in this fast-moving landscape, and how society might take a more proactive approach to shaping AI before it simply shapes us. This conversation has been edited for length and clarity.

Why is it important to increase the transparency and accountability for how AI systems are deployed, and how would it benefit people's understanding of how the technology works?

First, this would show us what data is being used. Was it obtained with opt-in informed consent or stolen? Second, it would show us what the quality of the data is. What data sources are they using? It’s important that the onus be on the corporations to show us these things before deployment, rather than understaffed agencies auditing or inquiring about them after the fact.

What concerns do you have about a small group of companies potentially dominating AI, and how would you mitigate that threat?

These corporations want one model that does everything for everyone, everywhere, so that we all pay one or two companies to literally do any task in our lives. OpenAI founder Sam Altman said that "In the next five years, computer programs that can think will read legal documents and give medical advice." He has no evidence for this claim, but people will actually think it’s true and start to use these systems as such. Why would we want to have a world where every single task is done by one model from OpenAI, and the whole world just pays them to do it?

What is your appeal to policymakers? What would you want Congress and regulators to do now to address the concerns you outline in the open letter?

Congress needs to focus on regulating corporations and their practices, rather than playing into their hype of “powerful digital minds.” This, by design, ascribes agency to the products rather than the organizations building them. This language obfuscates the amount of data that is being collected — and the amount of worker exploitation involved with those who are labeling and supplying the datasets, and moderating model outputs.

Congress needs to ensure corporations are not using people's data without their consent, and hold them responsible for the synthetic media they produce — whether it is text or media spewing disinformation, hate speech or other types of harmful content. Regulations need to put the onus on corporations, rather than understaffed agencies. There are probably existing regulations these organizations are breaking. There are mundane "AI" systems being used daily; we just heard about another Black man being wrongfully arrested because of the use of automated facial analysis systems. But that's not what we're talking about, because of the hype.

The European Union is moving ahead with AI-specific legislation, and already has expansive privacy regulations that address some of the issues you mention in the open letter. Are you optimistic about what the Europeans are doing?

They're doing something, which is much better than doing nothing. However, with things like the General Data Protection Regulation, the onus is on individuals to prove harm, rather than on corporations to prove, before they release a product, that it fulfills a certain set of requirements. I'd like regulation that doesn't put the onus on individuals and understaffed agencies to prove harm after AI products have already proliferated.

What do you mean by this phrase from the open letter: “We should be building machines that work for us, instead of ‘adapting’ society to be machine readable and writable.”

In the Future of Life Institute's AI “pause” letter, they used words like "cope" to deal with "disruptions to democracy." Why should we cope? This sounds so ridiculous to me.

Society should build technology that helps us, rather than simply adjusting to whatever technology comes our way. Paris Marx’s “Road to Nowhere” describes suggestions for what people should wear in order to co-exist with self-driving cars, because the designers simply assumed that self-driving cars need to exist at any cost — and that we have to be made “machine readable,” to adjust who we are, in order to co-exist with them. This frames technology not as a tool that helps us exist how we want to exist, but as a thing that has to exist, and that we have to bend to its will. It is a very strange framing.

 

lifelike ai 'agents'

What are AI bots doing, um, without us?

A buzzy pre-print from a group of Stanford University and Google researchers, published last week, answers that question in slightly alarming detail. The team created a virtual town filled with “generative agents” — essentially, software bots programmed with LLM-like technology to simulate human behavior — and watched them go about their business, in the course of which they proceeded to “wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.”

There are even cute graphics to illustrate it. One freakily specific example of the bots coordinating: “starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time.”

Given their lifelike nature, the researchers end the paper with a note of caution about the risk of “people forming parasocial relationships with generative agents even when such relationships may not be appropriate… developers of generative agents must ensure that the agents, or the underlying language models, be value-aligned so that they do not engage in behaviors that would be inappropriate given the context, e.g., to reciprocate confessions of love.” — Derek Robertson

ai anti-hype

And now, for the other side of the debate: In the New Statesman today, sociologist and automation researcher Aaron Benanav argues that ChatGPT and similar technologies are unlikely to be anywhere near as disruptive as some might expect.

Citing the Oxford researchers Carl Benedikt Frey and Michael Osborne, who predicted in 2013 that around half of all jobs would disappear due to automation within two decades, Benanav says current AI optimists (and doomsayers) are making the same mistake Frey and Osborne did — namely, basing their predictions on the relatively rigid job descriptions used by the Occupational Information Network.

“...Computer experts turn out to be bad at predicting computers’ capacities for autonomous operation,” Benanav writes. “Jobs such as school teacher or lathe operator look different across workplaces in the United States, and vary even more so across Germany, India and China. Legal frameworks, collective bargaining agreements, wage-levels, comparative advantages and business strategies all shape how jobs evolve, in terms of technologies used and tasks required.”

He goes on to argue that the fields where LLMs are most likely to have a big impact are computer programming and technical and legal writing, but that even in those cases that might not mean disappearing jobs — it could unleash pent-up demand, as the cost of coding, for example, plummets. “Even veritable revolutions, like those inaugurated by the advent of the steam engine in the 19th century and the internet in the 20th, unfolded gradually,” Benanav writes. “So do not believe every company press release proclaiming a revolutionary advance. Do not rely on over-exuberant self-reported tests. Wait for actual, on-the-ground results.” — Derek Robertson

tweet of the day

The meta experience of going to Peppa Pig World to sit in her living room and watch Peppa Pig on TV. Gen post-Z already exists in the metaverse.

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

 

