Negotiators at the United Nations are grappling with how to address artificial intelligence and potential state surveillance of political dissidents in a new cybersecurity treaty in the works. As in many recent tech policy discussions, the rapid emergence of AI as a dual-use tool, capable of both carrying out and defending against cyberattacks, has thrown a wrench into the proceedings in New York City as negotiators sketch out how countries should cooperate when investigating cybercrime.

The treaty would bind countries to common standards for sharing data and information, shaping how they handle criminal investigations in the digital realm for decades to come. With the current session wrapping up on Sept. 1, negotiators from member states are duking it out over critical definitions in the treaty with wide-reaching implications for what qualifies as a cybercrime and what safeguards need to be placed on the flow of information between countries.

One of the core tensions playing out is how much information the U.S. and its allies must provide to countries with less-than-democratic regimes, such as Russia and China, particularly in cybercrime investigations that could double as surveillance operations.

Some countries want the treaty to broadly cover the misuse of information and communication technologies, which would allow access to "everything that touches the flow of data," said Deborah McCarthy, a retired ambassador who is the United States' lead negotiator on the treaty. "That will include AI, in all aspects, in all its forms," she said. The United States wants more specific definitions, and for the treaty to focus instead on a narrow set of crimes, in order to limit the control a country can exert over its own or other nations' information space.
Digital rights advocate Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation, said the broad scope of the current treaty could authorize sharing personal data with law enforcement in other countries, including biometric information and datasets used to train AI. Rodriguez said the treaty's lack of precision about what kinds of data must be shared "could potentially lead to sharing of intrusive data without a specific assistance request."

"In theory, the long arm of this treaty could access citizens in other countries who may express opinions counter to the government of the country that is requesting [information on the citizen]," McCarthy said. "And we're saying no, it has to be for certain crimes, under certain conditions and safeguards would apply."

Negotiators will hammer out safeguards for the flow of information between law enforcement agencies this afternoon, McCarthy said. The U.S. and its allies specifically want to lay groundwork for denying information-gathering requests that could be used to target political dissidents.

Digital rights advocates also worry that the treaty's broad definitions of cybercrime, as currently drafted, could criminalize legitimate cybersecurity research on emerging technologies like AI, chilling work in the field. Protections for private citizens carrying out cybersecurity research remain under debate on the global stage, even as the U.S. federal government turns to hackers to help it catch vulnerabilities in large language models.
Raman Jit Singh Chima, Asia policy director and senior international counsel for the digital rights advocacy group Access Now, said the UN treaty does “not actually help those who are trying to make sure that AI does not result in an explosion in cybercrime.” McCarthy noted that the need for built-in protections for cybersecurity researchers was a “consistent message” from industry, think tanks and human rights groups, and that proposals for such protections are “still being discussed.”