The global summit to solve the future

From: POLITICO's Digital Future Daily - Thursday Jun 15, 2023 08:04 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

Global Tech Day

A funny thing has happened in the year-plus since this newsletter launched: A good deal of the “future” technology and policy we’ve set out to cover has made its way into the present.

At POLITICO’s Global Tech Day, held today in London, a global group of policymakers and thinkers gathered to hash out how governments might respond to dizzying developments in AI, digital payments, and competition with China. Some of the day’s highlights, issue by issue:

Artificial intelligence: Sen. Ted Cruz (R-Texas) kicked off the day with some incendiary comments about Congress’ ability to meaningfully regulate AI, saying with Texan tact that the body “doesn’t know what the hell it’s doing” on the topic.

“This is an institution [where] I think the median age in the Senate is about 142. This is not a tech savvy group,” Cruz said, before criticizing the European Union’s sweeping approach to AI regulation through the forthcoming AI Act. By his lights, however, U.S. lawmakers’ cluelessness might not be a bad thing: He compared America’s hands-off approach favorably to Europe’s sticky fingers, saying the latter was “far less concerned with creating an environment where innovation can flourish.”

Sen. Mark Warner (D-Va.) chimed in from the other side of the aisle on the recently proposed push from Senate Majority Leader Chuck Schumer to legislate around AI, saying that from his perspective as chair of the Senate Intelligence Committee, it’s a national security issue as well as a tech one.

“Many of us believe that we are in an enormous technology competition, particularly with China, and that national security means winning the battle around AI,” Warner said.

A group of European regulators were, of course, on hand to (implicitly) defend and discuss their approach — particularly when it comes to generative AI, the popular rise of which occurred late into the writing of the AI Act. Regulators from the U.K., Italy, and Romania discussed the very practical, real-world regulatory problems AI poses already, especially around data and privacy.

“Privacy is one [AI concern]... but also beyond privacy, there are issues of bias and discrimination” around generative AI, said the European Commission’s Lucilla Sioli, by way of touting the AI Act’s “risk-based” regulatory approach. That structure places tighter restrictions on AI use depending on the sensitivity or potential for harm in certain tasks it might be used for.

That’s the stuff we know. The assembled regulators also addressed the inherently unknowable parts of AI risk, with Stephen Almond of the U.K.’s Information Commissioner’s Office saying the country already sees a potentially existential AI risk as part and parcel of its overall policy approach.

Almond said he doesn’t think of that risk as separate from today’s policy issues, but that “the bigger risk is a progression in the growth of technology… by solving the immediate, here-and-now risk we get better and better, and we can put in place the institutions that we need.”

Digital payments: Jon Cunliffe, a deputy governor of the Bank of England, spoke with POLITICO’s Izabella Kaminska about the U.K.’s investigation of a potential “digital pound,” a central bank digital currency similar to one the U.S. is exploring. (And yes, China has already embraced its own.)

Cunliffe said the case for a British CBDC is that it would “ensure confidence in money, and the uniformity of money” in a sometimes-bewildering digital marketplace. “People won’t have to think, ‘Am I using a stablecoin? Am I using an HSBC deposit? What form of money is this, what is it worth?’”

He warned that “retrofitting legislation on them, once they become established, is hugely difficult,” and that the U.K.’s proactiveness is an attempt to “deal with likely futures before we’re surprised, and suddenly we’re running after them trying to catch up.” (DFD’s Ben Schreckinger reported Monday on how Europe sees an opportunity to potentially surpass the U.S. on the new technology, given pushback from conservatives in Washington and Silicon Valley.)

Global competition: Let’s be real — one of the main reasons we’re here, both reading (and writing) this newsletter and listening to the machers in London today, is that it really matters who gets to write the rules for these powerful new technologies. That self-awareness was markedly on display today, especially as POLITICO’s Mark Scott and Brendan Bordelon reported on the growing unease in Europe and among some Asian countries with the U.S.’ efforts to box out China on tech.

Officials from Singapore, the European Commission, and Malaysia all insisted that they would continue to engage with China. “Malaysia is a neutral country, we do adhere to a free market policy,” said Fahmi Fadzil, Malaysia’s communications and digital minister.

And it’s not just China that scrambles the calculus when it comes to how governments deal with these slippery, border-crossing digital technologies. Julie Brill, Microsoft’s chief privacy officer, sat down with POLITICO CEO Goli Sheikholeslami to argue for more transatlantic collaboration on tech. “We need to see regulators move forward starting to demand transparency” and “make companies live up to what they’re supposed to be doing.”

 

dod on ai

The U.S. Defense Department has worked for over a decade to ensure AI's responsible use. | Patrick Semansky/AP Photo

On the ongoing story of whether AI should be allowed to kill you: Kathleen Hicks, the Deputy Secretary of Defense, wrote in POLITICO Magazine today to explain how the Pentagon is deploying artificial intelligence.

Hicks first points out that the DOD has been working on this for quite some time: there’s the responsible use policy from 2012 (which was updated in January), a series of “ethical principles” published in 2021, and a “responsible AI strategy” from last year. But this field is moving fast. What does Hicks have to say about the Pentagon’s plan to keep up, especially as geopolitical competition with China engulfs the world of tech?

“Our commitment to values is one reason why the United States and its military have so many capable allies and partners around the world, and growing numbers of commercial technology innovators who want to work with us,” Hicks writes. It’s a line that continues the Pentagon’s efforts to frame the global tech competition as a philosophical one that will mirror military and diplomatic efforts to ensure it’s more in countries’ self-interest to align with U.S. principles instead of China’s.

Hicks adds a disclaimer: “Even as our use of AI reflects our ethics and our democratic values, we don’t seek to control innovation. … While that makes me choose our free-market system over China’s statist system any day of the week, it doesn’t mean the two systems cannot coexist.” She also drops a few tidbits for the safety-minded when it comes to automated weapons, including “a bright line when it comes to nuclear weapons” that would ensure they’re impossible to deploy without human involvement, a refusal to “use AI to censor, constrain, repress or disempower people,” and a hands-off approach to the industry itself.

garbage in, garbage out

A new academic pre-print posits there might be a limit to how much AI-generated content can fill up the internet before making AI systems themselves unusable.

A group of British and Canadian researchers found that once AI models start being trained on AI-generated content, they essentially… break.

“We find that use of model-generated content in training causes irreversible defects in the resulting models,” they write, calling the resulting phenomenon “model collapse.” When a model “collapses,” they find, it becomes pretty much useless: it forgets the original, human-generated data on which it was trained and produces more and more errors and nonsensical output.
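To make that dynamic concrete, here is a toy simulation of my own devising, not the researchers’ experimental setup: each “generation” is trained only on samples drawn from the previous generation’s output, so rare items fall out of the data, never come back, and the variety of the original distribution steadily erodes.

```python
import numpy as np

# Toy sketch of the "model collapse" dynamic (an illustration, not the
# paper's experiments): each generation's "model" is just the empirical
# distribution of its training corpus, and each new corpus is sampled
# from the previous generation's model.
rng = np.random.default_rng(0)

vocab = np.arange(1, 1001)     # 1,000 distinct "tokens"
probs = 1.0 / vocab            # Zipf-like frequencies: a long tail of rare tokens
probs = probs / probs.sum()

# Generation 0: "human" data drawn from the true distribution.
corpus = rng.choice(vocab, size=20_000, p=probs)
print(f"gen 0: {np.unique(corpus).size} distinct tokens")

for gen in range(1, 11):
    # "Train" on the current corpus, then generate the next corpus
    # by sampling from that model's (empirical) distribution.
    corpus = rng.choice(corpus, size=20_000)
    print(f"gen {gen}: {np.unique(corpus).size} distinct tokens")
```

Run it and the count of distinct tokens only ever shrinks: once a rare token misses one generation’s sample it can never reappear, a simplified version of the tail-forgetting behavior the researchers describe in far more capable models.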

They propose that developers ensure that an AI-content-free, human-generated dataset is always available to retrain on or reintroduce to their AI models. As one of the researchers told VentureBeat: “Data needs to be backed up carefully, and cover all possible corner cases. … As progress drives you to retrain your models, make sure to include old data as well as new. This will push up the cost of training, yet will help you to counteract model collapse, at least to some degree.”
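For a rough sense of what that advice could look like in a retraining pipeline, here is a minimal sketch. It assumes an archived, verified-human corpus kept alongside newly scraped data; the helper function, variable names, and 50/50 mixing ratio are illustrative choices of mine, not the researchers’ recipe.

```python
import random

def build_training_set(archived_human_texts, newly_scraped_texts,
                       human_fraction=0.5, seed=0):
    """Mix a guaranteed share of preserved human-written data into a retraining run.

    Hypothetical helper: the 50/50 split is an arbitrary illustrative choice,
    not a ratio recommended by the paper.
    """
    rng = random.Random(seed)
    n_total = len(archived_human_texts) + len(newly_scraped_texts)
    n_human = max(1, int(human_fraction * n_total))
    # Draw from the archived human corpus (with replacement, so the archive
    # can be smaller than the target share) and combine it with the new data.
    human_part = rng.choices(archived_human_texts, k=n_human)
    mixed = human_part + list(newly_scraped_texts)
    rng.shuffle(mixed)
    return mixed

# Example: the old, verified-human data rides along with every fresh scrape.
training_set = build_training_set(["human text A", "human text B"],
                                  ["fresh scrape 1", "fresh scrape 2"])
```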

Tweet of the Day

> solar
> heat pump
> tankless water heater
> induction stove
> battery backup

it’s really cool that everything you need for a digital closed loop self-sufficient home is now mundane and available at Home Depot

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 



 

