Hello, and welcome to this week’s installment of The Future In Five Questions. This week I spoke with Zak Kallenborn, a George Mason University policy fellow. He’s a leading expert on autonomous weapons who’s been featured in several of our pieces this year on the topic, including on the ethics of killer drones and killer drone swarms. Kallenborn has also been dubbed a “mad scientist” by the U.S. Army. Today, he talks about the recent use of autonomous weapons in Ukraine, the potential for Israel to use drones in its imminent ground invasion of the Gaza Strip, why AI might be slightly overhyped (as we discussed in last week’s edition) and his hopes for autonomous weapons policy going forward.

The following has been condensed and edited for clarity:

What’s one underrated big idea right now?

Drone warfare has been shifting, and people have really started waking up to what's going on here. I would have told you recently that we're not paying enough attention. It's still somewhat true — we're not paying enough attention to drones in other domains. But I think Ukraine has really woken people up, and drone warfare is drawing increasing attention.

I also suspect that drone warfare will be an important part of the conflict between Israel and Hamas. We've already seen that a little bit with Hamas’ use of drones to knock out communication towers. I expect to see Israel employ a significant number of drones if and when it launches that ground invasion, given the dense urban environment in Gaza and its extensive tunnels, where the potential risk of surprise ambush is very high. So, if you can shift that risk from the individual soldier to a robot, where you don’t care if the robot dies — no one's gonna cry except the accountants — that could be really useful.

What’s a technology that you think is overhyped?

I think hype is valuable. It’s really just another way of saying excitement: what are people talking about, and what are their interests? That excitement is what drives people to go off into the crazy unknown, see what happens, take bigger risks and hope for big rewards.

To answer the question more directly, I think AI is somewhat overhyped, but I think the hype is good. I’m skeptical that AI is going to lead to drastic revolutions in warfare with everything incorporated, at least in the short term. But I also think that AI absolutely has tremendously broad applications, where you can potentially apply it pretty much everywhere in some capacity.

What book most shaped your conception of the future?

“Wired for War” by P.W. Singer was absolutely influential in thinking about the future of war and robotic systems writ large. It’s an appreciation for where the technology is and where it's going, and the level of involvement across organizations, from militaries to tech companies.

Also significant is the general work of Philip Tetlock in “Superforecasting.” It’s excellent in terms of how we think about the future, and also being very humble about it. One of his findings is that experts who have studied this stuff for a long time are actually not much better in general than laypeople in terms of forecasting political events. It's very important to have that humility, and he explains how to think more usefully about the future.

What could government be doing regarding technology that it isn’t?

We could do more to lead global collaboration to reduce catastrophic and existential risks, especially technology-related risks like artificial intelligence and nanotechnology.
Although the U.S. has been doing a lot in individual silos like climate change and planetary defense, we could do more to integrate governmental efforts at high levels of policy and lead global collaboration. The Global Catastrophic Risk Management Act that was included in the recent NDAA is a great start to looking seriously at catastrophic risk. However, Congress and the White House should ensure it’s the beginning of a larger, concentrated effort. They should also ensure that the findings of the DHS report on catastrophic risks are made public.

I'd also like to see us incorporate catastrophic and existential risk into existing risk assessment frameworks; put the topics on the agenda at the G7 and other international fora; and consider new international bodies to assess, manage and reduce global catastrophic risk. Perhaps a "planetary defense council" could facilitate diplomacy, policy coordination, funding development and intelligence-sharing concerning catastrophic and existential risks.

What has surprised you most this year?

There’s nothing particularly surprising, but I think there have been some good developments. The move by the U.N. General Assembly to take up the autonomous weapons discussion, shifting it at least somewhat out of the governmental experts process, I think is a good development. Last week's confirmation that Ukraine used the first autonomous weapon in combat was also important, in that you have someone going out and saying, “Hey, I recognize that there's some ethical and policy concerns here, but we're in a war and people are dying. So, we gotta use what we gotta use.”