The plan for AI to eat the world

From: POLITICO's Digital Future Daily - Wednesday, Sep 06, 2023, 08:25 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

OpenAI CEO Sam Altman. | JOEL SAGET/AFP via Getty Images

If “artificial general intelligence” ever arrives — an AI that surpasses human intelligence and capability — what will it actually do to society, and how can we prepare ourselves for it?

That’s the big, long-term question looming over the effort to regulate this new technological force.

Tech executives have tried to reassure Washington that their new AI products are tools for harmonious progress and not scary techno-revolution. But if you read between the lines of a new, exhaustive profile of OpenAI — published yesterday in Wired — the implications of the company’s takeover of the global tech conversation become stark, and go a long way toward answering those big existential questions.

Veteran tech journalist Steven Levy spent months with the company’s leaders, employees and former engineers, and came away convinced that Sam Altman and his team don’t only believe that artificial general intelligence, or AGI, is inevitable, but that it’s likely to transform the world entirely.

That makes their mission a political one, even if it doesn’t track easily along our current partisan boundaries, and they’re taking halting, but deliberate, steps toward achieving it behind closed doors in San Francisco. They expect AGI to change society so much that the company’s bylaws contain written provisions for an upended, hypothetical version of the future where our current contracts and currencies have no value.

“Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered,” Levy notes. “After all, it will be a new world from that point on.”

Sandhini Agarwal, an OpenAI policy researcher, put a finer point on how she sees the company’s mission at this point in time: “Look back at the industrial revolution — everyone agrees it was great for the world… but the first 50 years were really painful… We’re trying to think how we can make the period before adaptation of AGI as painless as possible.”

There’s an immediately obvious laundry list of questions that OpenAI’s race to AGI raises, most of them still unanswered: Who will be spared the pain of this “period before adaptation of AGI,” for example? Or how might it transform civic and economic life? And just who decided that Altman and his team get to be the ones to set its parameters, anyway?

The biggest players in the AI world see the achievement of OpenAI’s mission as a sort of biblical Jubilee, erasing all debts and winding back the clock to a fresh start for our social and political structures.

So if that’s really the case, how is it possible that the government isn’t kicking down the doors of OpenAI’s San Francisco headquarters like the faceless space-suited agents in “E.T.”?

In a society based on principles of free enterprise, of course, Altman and his employees are as legally entitled to pursue this scenario as they would be if they were building a dating app or an Uber competitor. They’ve also made a serious effort to demonstrate their agreement with the White House’s own stated principles for AI development. Levy reported that caution was a major concern in releasing progressively more powerful GPT models: chief technology officer Mira Murati told him the company “did a lot of work with misinformation experts and did some red-teaming” around the 2019 release of GPT-2, and that “there was a lot of discussion internally on how much to release.”

Those nods toward social responsibility are a key part of OpenAI’s business model and media stance, but not everyone is satisfied with them. That includes some of the company’s top executives, who split to found Anthropic in 2021. That company’s CEO, Dario Amodei, told the New York Times this summer that his company’s goal isn’t necessarily to make money or to usher in AGI, but to set safety standards that other top competitors will feel compelled to adopt.

The big questions about AI changing the world all might seem theoretical. But those within the AI community, and increasing numbers of watchdogs and politicians, are already taking them deadly seriously (despite a steadfast chorus of computer scientists still entirely skeptical about the possibility of AGI at all).

Just take a recent jeremiad from Foundation for American Innovation senior economist Samuel Hammond, who in a series of blog posts has tackled the political implications of AGI boosters’ claims if taken at face value, and the implications of a potential response from government:

“The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion,” Hammond writes. “It’s up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan.”

For now, that’s a far-fetched future scenario. But as Levy’s profile of OpenAI reveals, it’s one that the people with the most money, computing power and public sway in the AI world hold as gospel truth. Should the AGI revolution put politicians across the globe on their back foot, or out of power entirely, they won’t be able to say they didn’t have a warning.

 


 
 
an ai prescription

On today’s POLITICO Tech podcast, an AI leader recommends some very specific tools for the government to put in its toolbox when it comes to making AI safe globally.

Mustafa Suleyman, CEO of Inflection AI and co-founder of Google DeepMind, told POLITICO’s Steven Overly that Washington needs to put limits on the sale of AI hardware and appoint a cabinet-level regulator for the tech.

“It is a travesty that we don’t have senior technical contributors in cabinet and in every government department given how critical digitization is to every aspect of our world,” Suleyman told Steven, and he writes in his new book that “the next five or so years are absolutely critical, a tight window when certain pressure points can still slow technology down.”

To hear the full interview with Suleyman and other tech leaders, subscribe to POLITICO Tech on Apple, Spotify, Google or wherever you get your podcasts.

newsom takes on ai

California Gov. Gavin Newsom. | Josh Edelson/AFP/Getty Images

The top official on the AI revolution’s home turf is laying down some rules for the state’s use of the technology.

California Gov. Gavin Newsom issued an executive order today directing the state’s agencies to research the risks AI poses, devise new policies and put rules in place to ensure its ethical and legal use.

“This is a potentially transformative technology – comparable to the advent of the internet – and we’re only scratching the surface of understanding what GenAI is capable of,” Newsom said in a press release. “We recognize both the potential benefits and risks these tools enable.”

That makes California just the latest state to tackle AI in its own idiosyncratic manner, as Newsom took care in his remarks to note the role its tech industry plays in the technology’s development. POLITICO’s Mohar Chatterjee reported for DFD in June on AI legislative efforts in Colorado, and Massachusetts saw similar efforts with a novel twist this year as well.

Tweet of the day

people love to complain about crypto UX but this is what treasury direct bond buying website used to look like

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

 

