In the 1866 novel Crime and Punishment, Russian writer Fyodor Dostoevsky drills straight into a dark and perplexing question: Is it ever acceptable for a human to take another human's life?

More than a century and a half later, a fitting reinterpretation would cast Raskolnikov, the homicidal main character, as a robot. That's because military analysts and human rights advocates have been battling over a newer moral frontier: Is it ever okay for a fully autonomous machine to kill a human?

The answer, right now, seems to be no — not in any official sense, but by informal, global consensus. That's despite experts believing fully autonomous weapons have already been deployed on the battlefield in recent years.

But that question may be pushed to the official forefront very quickly in Europe: Ukrainian officials are developing so-called "killer robots," possibly to be used in the country's fight against Russia. Military experts warn that the longer the war goes on — we're approaching the one-year anniversary in February — the more likely we'll see drones that can select, engage and kill targets without an actual human finger on the trigger.

Fully autonomous killer drones are "a logical and inevitable next step" in weapons development, Mykhailo Fedorov, Ukraine's digital transformation minister, told the Associated Press earlier this month. Ukraine has been doing "a lot" of research and development on the topic, and he believes "the potential for this is great in the next six months."

You might think someone would be frantically trying to prevent this, and you'd be right: the Campaign to Stop Killer Robots, an international coalition of non-governmental organizations, has for nearly a decade been pressuring governments and United Nations members to call for a preemptive ban on the weapons. And right now, it is very worried about Ukraine.

Deploying fully autonomous weapons "changes the relationship between people and technology by handing over life and death decision-making to machines," Catherine Connolly, the group's automated decision research manager, told Digital Future Daily.

The United Nations has been discussing the issue for years without coming to any kind of consensus. Groups like Stop Killer Robots, Human Rights Watch and the International Committee of the Red Cross have called for an international, legally binding treaty on autonomous weapons systems. That requires agreement among U.N. members, which has so far been impossible to achieve.

But there seems to be momentum in the anti-killer-robot camp. In October, 70 states delivered a joint statement on autonomous weapons systems at the U.N. General Assembly, calling for "adopting appropriate rules and measures" for the weapons. It's the largest cross-regional statement ever made to the U.N. on the issue, with signers including the United States, Germany, the United Kingdom and other highly militarized nations.

Not everyone's in agreement, though. So far in the U.N., some nations believe a preemptive ban could hinder their militaries' ability to use AI technology in the future. And in the academic world, there's some skepticism that the moral distinction is as clear as advocates assume. One provocative study even argues the weapons could be "good news," going so far as to say concerns surrounding killer robots are totally unfounded.

"The reality is war is horrifying, horrible, and there's always going to be [soldiers] shooting a bullet through someone's head and splattering their guts all over the wall. Like, that's not particularly pleasant, right? And it doesn't matter too much if it's a human doing it," Zak Kallenborn, a George Mason University weapons innovation analyst, told Digital Future Daily.

For now, the pace of technology is saving us from having to decide. Many countries have already developed fully autonomous technology, but it's been hard to work out the kinks, Kallenborn said. Deploying killing machines that might mistake a school bus full of children for an enemy tank, for instance, would be a bad idea.

"Some of the issues that you've run into are that they're not trustworthy or reliable, and it's often tough to explain why they made a decision," Kallenborn said. "It's really tough to align the system and use it if you don't really know" how it makes a decision.

One key question, as these weapons stumble forward without clear regulations, is who would be held accountable for actions undertaken by a robot with a mind of its own. Neither criminal law nor civil law guarantees that people directly involved in the use of killer robots would be held accountable, per a report from Human Rights Watch. If a civilian is mistakenly killed, it's unclear who should face the consequences when there was no human input.

"When people say it doesn't matter if it's a machine that's used … [humans] still have accountability and responsibility. It is humans who have the moral responsibility to make targeting decisions, and we can't delegate that to machines," Connolly said.

For now, the decade-long arguments rage on. The U.N. will meet again in March and May to discuss provisions for the technology, but if members can't come to a consensus, the issue will be punted another year.

"At this point, the time for talking is kind of done," Connolly said. "It's time for action."