As militaries incorporate more and more autonomous technology, who bears responsibility when things go wrong? It's one issue under international law if a sergeant orders an illegal attack on civilians. It's another entirely when an AI-powered system makes a snap decision nobody could have predicted. Rebecca Crootof, a University of Richmond law professor, has been wrestling with questions like this for years — stretching the boundaries of how law and liability should apply to the ever-changing landscape of AI. Read on to hear her thoughts on the cost of war, military AI, and remote interference in our weekly feature: The Future in 5 Questions.

Responses have been edited for length and clarity.

What's one underrated big idea?

It's absolutely bonkers that there is no accountability for civilian harms in armed conflicts. We just sort of accept that accidents happen in war and then it's like, "Too bad, everybody!" after that.

I understand that we accept it because it's how things have been. I understand that it made some relative sense in the world where the laws of war were created — a world of traditional battlefields. But as we shift from wars on battlefields to wars increasingly fought in urban settings — and as the civilian casualty rate skyrockets — the fact that states can simply shrug off mistakenly killing people strikes me as fundamentally wrong, and as untenable if the laws of war are to have any real moral force. So I've been arguing that we need to come up with some form of state accountability for accidents of war.

On one hand, I recognize that it's completely ambitious. On the other hand, it's just intuitive: states are waging war for their own purposes, and they shouldn't be allowed to shift the costs of their mistakes onto foreign civilians.

International criminal law was unthinkable 100 years ago — you just would not have had individual criminal liability for war crimes. And the crime of aggression was a pie-in-the-sky thing — until this year. So the idea of having some form of state accountability for accidents in war, some form of obligation to pay and some route to a remedy for civilians… I see it as part of the same trajectory.

What's a technology you think is overhyped?

Military AI in general. When I talk about military AI, most people jump to autonomous weapons systems. The more I've worked on them, studied them and learned about them, the more I think they are overhyped. They are often discussed as these amazing things that will do everything, and they're often conceptualized as a sort of humanoid soldier or Terminator-type thing that's just going to replace human soldiers. That's a completely mistaken way of conceptualizing how military AI is being integrated and used.

Even with a more nuanced understanding of military AI, it's still being credited with way too much. AI is incredibly useful for certain things, but it is often treated as a Swiss army knife — like it's going to solve every single problem — without, I think, enough awareness of what it's not good at.

What book most shaped your conception of the future?

I grew up reading science fiction. My dad had this huge collection of old paperbacks in the attic, and I would sneak up there and read things that were definitely too old for me and terrified me. One of the books I read was Level 7, which was written in the late 1950s and is about an accidental nuclear war and the resulting extinction of the human race.
It's written from the point of view of a person who is seven levels down in a bunker, meant to be protected from the horrors of nuclear annihilation. One by one, the levels go dark, and eventually everyone's dead.

When the first Gulf War began, I must've been nine or ten. I remember being on the school bus thinking, "This is it. We're all going to die." And so I guess that shaped my interest.

What could government be doing regarding tech that it isn't?

Under current liability rules, if a company pushes an update or changes the terms of service, contractually, it could easily avoid liability for any harm that results from that change. So if there's a smart oven and the company changes the terms on how it operates, and it ends up being left on or turning on inappropriately — depending on the analogies a court might use, the company could be found not liable.

To me, this should be an easy one: there should be liability for companies that engage in remote interference that foreseeably causes harm. You could accomplish this in a variety of ways. You could set up a new kind of product liability. You could bar companies from disclaiming liability, which many do in their contracts. Or you could say there is a duty not to engage in remote interference when there is a foreseeable risk of harm.

Unfortunately, I don't see any likelihood of legislative action on this front until something absolutely horrific happens. I'm really concerned that we will go down a path where consumers just start to think, "Oh, yeah, if I use this thing, this risk is attached to it" — because the legal standard is tied to what people expect. So I hope that, judicially, judges will use analogies that foster the development of this type of duty from companies toward consumers.

What has surprised you most this year?

A combination of three things: that Russia invaded Ukraine in the first place, that Ukrainians have done so unexpectedly well, and that Russia has done so unexpectedly poorly. Many of the common assumptions about Russian military might — and autonomous military capabilities — have evaporated. The whole conflict has really showcased the import of technology — in that Ukraine has been doing so well in part because of the cyber and physical tech support it has received — and the import of human emotion — in that Ukrainians are doing so well in part because of what they are fighting for.