A prism on the AI future

From: POLITICO's Digital Future Daily - Wednesday, Nov 01, 2023, 08:03 pm
How the next wave of technology is upending the global economy and its power structures
 

By Derek Robertson


Bletchley Park, near Milton Keynes, north of London. | AFP via Getty Images

The United Kingdom’s much-hyped AI Safety Summit kicked off today — and it’s already revealing what world leaders hope for, and fear, in an AI-powered future.

The most concrete takeaway from the first day of the two-day summit is the “Bletchley Declaration,” named after the Bletchley Park estate where the summit is hosted (and where Alan Turing pioneered the codebreaking that helped the Allies win World War II). The document, with 29 signatories (including China), uses mostly boilerplate language to address both existential concerns about the threats posed by powerful AI models in the future and more quotidian policy worries about how AI might supercharge the more harmful or biased parts of existing bureaucracy.

But if you look at the individual statements from attendees of the summit, a clearer vision begins to emerge of where their priorities lie, and what they see when they project five, 10, or even 100 years ahead.

Starting with the hosts, U.K. Prime Minister Rishi Sunak’s fear of an AI apocalypse is well-documented. Fueled by speculation among some academics and high-level AI developers (including Elon Musk) that a “God-like” AI could potentially wipe out the human race, Sunak favors reining in super-powered frontier models, and the Bletchley Declaration nods to that (theoretical) extinction-level threat.

The United States’ representative does not share that priority, or at least not to the same extent. As POLITICO’s Vincent Manancourt, Eugene Daniels and Brendan Bordelon reported today, Vice President Kamala Harris is more worried about the threats AI poses to society right now.

“Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential… when people around the world cannot discern fact from fiction because of a flood of AI enabled myths and disinformation. I ask, is that not existential for democracy?” Harris said.

That is to say, the harms generated by humans using AI might be worth more scrutiny at this moment than those generated by machines at some undefined point in the future. (This also happened to be the implicit message of the most recent “Mission: Impossible” film, as President Biden might have done well to read in a DFD edition from earlier this year.)

This philosophical split isn’t a matter of excluding or ignoring one type of risk over the other entirely, as the Bletchley Declaration nods to both. But governments have a limited amount of political capital, and sway — and the regulations on which they’ll decide to focus are far from certain at this point. Companies are afraid of that uncertainty, and some are bristling at how a patchwork of regulations focused on different priorities might complicate what they do.

POLITICO’s Rebecca Kern kept tabs on Meta president of global affairs Nick Clegg for DFD yesterday as he made the media rounds pre-summit, and he said the race by world governments to regulate AI is confusing the industry.

“All of that doesn’t quite seem to fit together,” he said at an AI Fringe event, hoping this week’s summit will lead to governments at the very least being “corralled in a similar direction.” (He also said he hopes that governments can agree to some common definition of what a “frontier model” actually is, a concept still largely up for debate even as Sunak et al. discuss how to regulate them.)

Meanwhile, developers, academics, and activists who aren’t necessarily in the public or private centers of power are worried that the AI future will look… pretty much exactly like the present, with a lack of coordination in government leading to tech giants riding roughshod over their competitors and notions of privacy or consumer protection.

As outlined in a letter shared with POLITICO’s Digital Future Daily yesterday, a group of open-source AI advocates — which, it should be noted, includes Meta’s AI lead Yann LeCun — is warning that “quickly rushing towards the wrong kind of regulation can lead to concentrations of power in ways that hurt competition and innovation,” echoing worries that restricting AI models’ use too tightly will simply mean that a potentially world-changing technology is used to uphold the status quo.

 


 
a positive outlook on the metaverse

A tech consulting firm’s annual trend report is bullish on the metaverse — largely because of the boost it might get from generative AI.

The 2024 tech and media report from the consulting group Activate expects there to be more than 600 million metaverse users globally by 2026, defining “metaverse user” as someone who uses any tool related to a virtual world, from fully immersive experiences like Meta’s to games like Minecraft or Microsoft’s virtual workspace software.

The secret sauce for metaverse growth, they predict: generative AI tools that will allow those people not just to use virtual spaces but to design them. They note that 67 percent of those who use generative AI to make online content are also active in virtual worlds, and that increasingly sophisticated technology will erase some of the problems with avatars that have made existing metaverse spaces look somewhat low-rent.

praying with ai

While the rest of the world figures out how to manage an AI-powered future, South Koreans are integrating the technology into a very old practice: Christianity.

The Financial Times recently reported (paywall) on the tens of thousands of South Koreans who are using chatbots and other AI tools as part of their religious practice. (Think of a bot called “Ask Jesus,” or one that assists busy pastors in writing sermons.)

“We faced strong resistance from churches initially with their suspicion that we are trying to replace God and pastors,” one AI executive told the FT’s reporters. “But pastors began to appreciate our service as it helps them save time in preparing for sermons, and find more time to take care of lonely, troubled followers.”

The effect might not be entirely salutary, however. When I interviewed the “digital theologian” Adam Graber in July on how he thought AI and Christianity might collide, he had a slightly more jaundiced view of its effect on scriptural interpretation: “There’s a semantic overload of meaning… we become overloaded with these different interpretations, and we don’t actually know how to make sense of what’s a legitimate interpretation or not. Even before AI systems we were struggling with that, and I think AI systems will amplify that even further.”

Tweet of the Day

like if i had an enormous overcollateralized balance sheet of no-risk short-term treasuries paying me billions of dollars a year of pure profit, would i resist getting it audited just to mess with my haters? maybe? but it's a fun choice!

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com) and Daniella Cheslow (dcheslow@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

Follow us on Twitter

Ben Schreckinger @SchreckReports

Derek Robertson @afternoondelete

Steve Heuser @sfheuser

 



This email was sent by POLITICO, LLC, 1000 Wilson Blvd., Arlington, VA 22209, USA.

