A radical new idea for regulating AI

From: POLITICO's Digital Future Daily - Wednesday, Apr 26, 2023, 09:01 pm
How the next wave of technology is upending the global economy and its power structures
 

By Derek Robertson

The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, Tuesday, March 21, 2023, in Boston. | AP Photo/Michael Dwyer

This week, Digital Future Daily is focusing on the fast-moving landscape of generative AI and the conversation about how and whether to regulate it — from pop culture to China to the U.S. Congress. Read day one here, on the controversy over a fake single from Drake and the Weeknd, and day two here, on the not-actually-that-crazy idea of automated inventors.

There’s a growing push to regulate AI. 

But what would that actually even look like?

AI is either an idea or a tool, depending on who you ask. And as a rule, the government doesn’t just start regulating tools or ideas. It usually finds a lever — a place to intervene and, ideally, a hard rationale for doing so.

Last week, the leading futurist Jaron Lanier laid out a powerful case for both in the New Yorker. He argued for a principle called “data dignity” — or the concept that “digital stuff would typically be connected with the humans who want to be known for having made it.” In practical terms, this means you or I would actually have some claim on the huge data trails we leave — and on the ways they’re being used to train powerful artificial minds like GPT-4.

For Lanier and the wider community of experts who have been exploring this idea, “data dignity” has two key pillars. For one, it’s a way to keep AIs closely tethered to people, rather than spinning off on their own in terrifying ways. It also offers clear, practical guidelines by which to regulate how they’re built and used — as well as who profits from them.

Which, in some ways, seems almost obvious. If these models are worthless without the stuff we post on the internet, shouldn’t we have a say over when, where, and how that stuff is used?

But in another way, it represents a radical change in how we think about data online — one that likely has plenty of public support, even if the concept of “data dignity” itself is still unfamiliar to most people.

On the internet we’ve come to know during the past 20 years, the working assumption is that free services require us to hand over control of massive gobs of data to Big Tech platforms — which then use that data to serve us hyper-targeted ads for, I don’t know, a dating service for singles in the Florida Panhandle who love Bruce Springsteen. It’s a trade we make knowingly, even if there are a lot of objections to the business model.

But the rise of large language models and other tools that scrape the internet relentlessly for our personal data now adds a new dimension to the debate over that data, and that debate has serious ramifications for privacy, accountability and even the global economy.

“It's not just creative people whose work is being transformed and reproduced,” said E. Glen Weyl, an economist and author who has written extensively about data ownership and provenance. (Weyl is also a researcher at Microsoft, but spoke to me in his personal capacity.)

“Pretty much everyone who has done anything on the internet — and we don't know exactly what these models were trained on, because it hasn’t been disclosed, but imagine that it's most things that are publicly available online — anyone who's contributed there is helping create these models, and therefore helping create something that's both an engine of productivity, and also potentially an engine of labor displacement.”

It’s a familiar doomy tale of technological exploitation: Companies that are just as much black boxes as their AI models are taking our data and using it to build machines that will put us out of jobs.

The promise of “data dignity,” as its advocates lay it out, is that internet users will not just have more authority over and awareness of how their data is used, but that they might even be proportionally compensated for it.

“Just like on Spotify, where there's a royalty that gets paid when a song gets played, there should be attribution of [data] both in moral terms so that people know where it's coming from, and in economic terms [accruing] to the people who created that,” Weyl said.
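The arithmetic behind that analogy is simple enough to sketch. Below is a toy Python version of a Spotify-style split, assuming (purely hypothetically) that a model could report how much each person’s data influenced a given output. No deployed model exposes that kind of attribution today, and the names and numbers here are invented for illustration.

    # Toy sketch of a Spotify-style royalty split for data contributors.
    # Hypothetical premise: the model reports per-person influence scores.
    def split_royalty(pool_cents: int, influence: dict[str, float]) -> dict[str, int]:
        """Divide a royalty pool in proportion to each contributor's
        measured influence on a model output."""
        total = sum(influence.values())
        # Naive rounding; a real scheme would have to reconcile remainders.
        return {person: round(pool_cents * share / total)
                for person, share in influence.items()}

    # Example: a 100-cent pool split across three imagined contributors.
    print(split_royalty(100, {"alice": 0.5, "bob": 0.3, "carol": 0.2}))
    # -> {'alice': 50, 'bob': 30, 'carol': 20}

The division is the easy part; producing those influence scores in the first place is the open problem.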

Is that even possible, in practical terms? It’s an idea that would totally upend the status quo of the digital economy — so naturally, some big-stick government regulation would be required to start enforcing it.

I asked Maritza Johnson, a data privacy expert and the founding director of the University of San Diego’s Center for Digital Society, what she thinks the government could do to steer the current regime of digital rights in this direction if “data dignity” really does become a broader political cause. She made the case that it can use the same regulatory tools for accountability that already exist for other industries.

“We need to move away from this fairy tale that [data rights are] up to the individual and recognize this is a collective problem,” Johnson said. She cited the examples of Facebook and Twitter being caught using phone numbers, collected ostensibly for two-factor authentication, for advertising, and said regulators should be empowered to set explicit rules for how data is used and presented to users, and to punish companies that don’t follow them.

Weyl said much the same thing, arguing that tracking the provenance of data use is quite easy and simply needs a regulatory push to ensure it happens. For the trickier part — that is, getting users paid — he argued regulators could take a page from labor law.
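What might “tracking provenance” look like concretely? Here is a minimal sketch, assuming a rule that every document swept into a training set must carry a machine-readable record; the schema is invented for illustration and not drawn from any existing standard.

    # Toy provenance record for one scraped training document.
    # The fields are hypothetical, not an existing standard.
    from dataclasses import dataclass
    from hashlib import sha256

    @dataclass(frozen=True)
    class ProvenanceRecord:
        source_url: str    # where the text was scraped from
        author: str        # the human contributor, when known
        license: str       # terms under which the text was published
        content_hash: str  # fingerprint tying the record to the exact text

    def make_record(url: str, author: str, license: str, text: str) -> ProvenanceRecord:
        return ProvenanceRecord(url, author, license,
                                sha256(text.encode()).hexdigest())

    # Example: one imagined blog post swept into a training corpus.
    rec = make_record("https://example.com/post", "alice", "CC-BY-4.0", "Hello, world.")
    print(rec.content_hash[:12])  # short, stable fingerprint of the text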

“From the legal perspective, [regulators could try] to give support and encouragement to some of these collective management organizations the same way that labor law did for labor unions… the Authors’ Guild, the Writers’ Guild, etc. can move beyond just the basics of limiting what these models can do, to try to invest in building the regime they really want once these things are pervasive.”

When you start talking about humanity’s future, there’s another reason that tying the data LLMs use back to the original source could be important. Some thinkers in the data dignity movement argue that the risk to privacy posed by AI is existential, making it imperative that internet users have control over how personal data, like their speech patterns, written syntax, and even their gait, is used.

“These systems are capable of imitating just about anything,” Weyl told me. “They can create an essentially perfect replica of the person who is being imitated… and have it do arbitrary things to people that you care about. Many of the things that you believe are secret are not going to be secret anymore.”

Johnson argues this is why regulators need to act now, before the AI business model runs roughshod over privacy.

“Privacy and security are really hard to retrofit onto a system, which is why you have a lot of talk about ‘privacy by design,’” she said. “It would take a really big regulatory stick to be sure that companies actually do it, but it’s extremely important.”

 


 
 
japan's ai strategy

Well-wishers hold Japanese national flags during the New Year's appearance by the Japanese royal family at the Imperial Palace in Tokyo, Monday, Jan. 2, 2023. | Tomohiro Ohsumi/Pool Photo via AP

Japan’s ruling Liberal Democratic Party has released a white paper outlining its vision for where the country fits into the rapidly growing global ecosystem around AI.

The report points out that Japan today ranks 29th out of 63 nations in the Digital Competitiveness Ranking, which tracks countries’ adoption and development of cutting-edge technology. The paper proposes, among other things, that Japan should boost its domestic semiconductor industry and establish a “support team” within the government to seek out opportunities to implement AI in the public sector.

It also, notably, proposes a “new approach to AI regulations,” suggesting the party observe and borrow from the European Union, U.S., and China, especially with regard to the significant risks AI poses to human rights, national security and the democratic process. The LDP also nods to a U.K.-like hands-off approach to encouraging innovation, writing that the government should “develop and expand an environment where businesses can challenge new businesses without being restricted by existing regulations.”

cap on ai

One of America’s leading liberal think tanks is making a pitch for how the Biden administration should handle AI.

In a report released today, the Center for American Progress proposed the administration give its AI plan teeth via “an immediate executive order to implement the Blueprint for an AI Bill of Rights” and a slew of other “safeguards” around the technology, including around risk assessment, labor disruption and competition.

“The president should immediately issue a new executive order on artificial intelligence (AI EO) centered on implementing the Blueprint for an AI Bill of Rights,” the authors write. They also propose the creation of a White House council on AI that would coordinate agencies’ responses and specifically note the national security risks posed by AI, writing that “The challenges faced by the Trump administration and the Biden administration in acting against TikTok — which is owned by a foreign company — are illustrative of potential challenges a president might face in taking action against a dangerous, domestic AI system.”

Tweet of the Day

This @spolsky paragraph always comes back to me, about how Google built an eng org that operated with higher-level abstractions as if they were primitives. There are non-AI companies now quietly using LLMs the way their competitors use the if statement.

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); and Benton Ives (bives@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

