The AI-sized holes in the UN cybercrime treaty

From: POLITICO's Digital Future Daily - Wednesday Aug 23, 2023 08:03 pm
How the next wave of technology is upending the global economy and its power structures
 

By Mohar Chatterjee

With help from Derek Robertson


The United Nations flag.

Negotiators within the United Nations are grappling with how to address artificial intelligence and potential state surveillance of political dissidents in a new cybersecurity treaty that’s in the works.

Like many tech policy discussions lately, the rapid emergence of AI as a dual-use tool for carrying out and protecting against cyberattacks has thrown a wrench in the proceedings in New York City, as negotiators sketch out how countries should cooperate with each other when investigating cybercrime. The treaty would bind countries to common standards for sharing data and information, shaping how countries deal with criminal investigations in the digital realm for decades to come.

With the current session wrapping up on Sept. 1, negotiators from different member states are duking it out over critical definitions in the treaty, with wide-reaching implications for what qualifies as a cybercrime and what safeguards need to be placed on the flow of information between countries.

One of the core tensions playing out is how much information the U.S. and its allies must provide to countries like Russia and China with less than democratic regimes — particularly on cybercrime investigations that could double as surveillance operations.

Some countries want the treaty to broadly cover the misuse of information and communication technologies, which would allow access to “everything that touches the flow of data,” said Deborah McCarthy, a retired ambassador who is the U.S.’ lead negotiator on the treaty. “That will include AI, in all aspects, in all its forms,” she said.

The United States wants more specific definitions and a treaty focused instead on a narrow set of crimes, in order to limit the control a country can exert over its own or other nations’ information space.

Digital rights advocate Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation, said the broad scope of the current treaty could authorize sharing personal data with law enforcement in other countries — including biometric information and datasets used to train AI. Rodriguez said the treaty’s lack of precision about what kinds of data need to be shared “could potentially lead to sharing of intrusive data without a specific assistance request.”

“In theory, the long arm of this treaty could access citizens in other countries who may express opinions counter to the government of the country that is requesting [information on the citizen],” McCarthy said. “And we’re saying no, it has to be for certain crimes, under certain conditions and safeguards would apply.”

Negotiators will hammer out safeguards this afternoon for the flow of information between law enforcement agencies, McCarthy said. The U.S. and its allies specifically want to lay the groundwork for denying information-gathering requests that could be used to target political dissidents.

Digital rights advocates are also worried that, in its current iteration, the treaty’s broad definitions of cybercrime might criminalize legitimate cybersecurity research on emerging technologies like AI, chilling work in the field.

Protections for private citizens carrying out cybersecurity research are still under debate on the global stage, even as the U.S. federal government turns to hackers to help it catch vulnerabilities in large language models. Raman Jit Singh Chima, Asia policy director and senior international counsel for the digital rights advocacy group Access Now, said the UN treaty does “not actually help those who are trying to make sure that AI does not result in an explosion in cybercrime.”

McCarthy noted that the need for built-in protections for cybersecurity researchers was a “consistent message” from industry, think tanks and human rights groups, and that proposals for such protections are “still being discussed.”

 


 
 
ai school in session

Illustration by Glenn Harvey for POLITICO

With the new school year here, educators are slowly learning to embrace ChatGPT and other AI tools in the classroom.

That’s the main takeaway from a report this morning by POLITICO’s Blake Jones, Madina Touré, and Juan Perez Jr., who write that after early bans and panic over the technology, it’s now being consciously integrated into curricula across the country.

Olli-Pekka Heinonen, the director general of the International Baccalaureate program, told them that “AI will be affecting societies to a large extent, and they are so strongly influencing the basic ways of how we make sense of reality, how we know things, and how we create things, that it would be a mistake if we would leave schools out of that kind of development.”

Although individual schools and local and state governments are getting more ChatGPT-friendly, there still isn’t an education-focused regulatory response to the technology (with the exception of guidance issued in May by the Department of Education for personalized learning). The POLITICO team reports that nonprofits, unions, and educators are largely concerned with privacy, security, and job preparation. — Derek Robertson

from one futurist to another


Ethereum co-founder Vitalik Buterin. | Michael Ciaglo/Getty Images

What does one of the highest-profile champions of open technology think about Elon Musk’s efforts to crowdsource fact-checking on X?

Ethereum co-founder Vitalik Buterin offered his thoughts in a recent blog post, arguing that the “community notes” feature meant to provide Wikipedia-like, consensus-driven fact-checking on the platform formerly called Twitter is not only “informative and valuable” but highly aligned with the ethos of the crypto world.

“Community Notes are not written or curated by some centrally selected set of experts; rather, they can be written and voted on by anyone, and which notes are shown or not shown is decided entirely by an open source algorithm,” Buterin writes. “It's not perfect, but it's surprisingly close to satisfying the ideal of credible neutrality, all while being impressively useful, even under contentious conditions, at the same time.”

He writes that although it doesn’t quite add up to the vision of “decentralized” social media that many in the crypto world hold, it could play a big role in driving interest in, and preference for, the principles that the open-source world holds dear. — Derek Robertson

Tweet of the Day

After a failed attempt nearly four years ago, India made history Wednesday by becoming the first country to touch down near the moon’s south pole, joining the United States, the Soviet Union, and China in achieving a moon landing.

THE FUTURE IN 5 LINKS

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


 
 
 

 
