So, you think killer robots are scary? Try an entire swarm of them.

It's no secret that militaries around the world are competing to develop the smartest weapons. But AI in warfare doesn't necessarily mean high-powered brains; it can also mean a blizzard of dumb-ish little vehicles overwhelming an enemy. Vladimir Putin, in a speech about AI and warfare several years ago, predicted that "when one party's drones are destroyed by drones of another, it will have no other choice but to surrender."

So where's the Pentagon on this? Developing an effective drone swarm, a group of autonomous drones that can communicate to achieve a goal, is "without a doubt a priority" for the U.S. military, Elke Schwarz, author of Death Machines: The Ethics of Violent Technologies, told Digital Future Daily.

The Pentagon doesn't openly discuss many of its most advanced technologies, but last year it called for proposals from the defense industry for a new program called AMASS, short for Autonomous Multi-Domain Adaptive Swarms-of-Swarms. The goal: to develop the ability to deploy thousands of autonomous land, sea and air drones to overwhelm and dominate an enemy's area defenses, according to recently updated documents.

As for where they'd send such a swarm: officials haven't named names, but observers reading between the lines think they may envision using it in the event of a Chinese invasion of Taiwan. (A Pentagon spokesperson did not immediately respond to a request for comment.)

"I am not surprised that DARPA and DoD are working on this considering they are in a tech race with China, which also has its own swarm accomplishments to date," the Center for a New American Security's Samuel Bendett told Digital Future Daily. Last week, the Hudson Institute's Bryan Clark also called for the U.S. to challenge China with drone swarms.
The AMASS program isn't the first time the Strategic Technology Office of the Defense Advanced Research Projects Agency, better known as DARPA, has looked into using autonomous drone swarms to gain an upper hand. Six years ago, the agency launched OFFSET, the OFFensive Swarm-Enabled Tactics program, which aimed to perfect the use of swarms to assist Army ground forces. Last year, six months after the Pentagon ran its final OFFSET test, a top DARPA official told FedScoop that the U.S. military could be capable of launching swarms of up to 1,000 drones within the next five years.

So far, the number of real-life military drone swarms known to have been deployed stands at one: In 2021, Israel sent a fully autonomous swarm of small drones to locate, identify and attack Hamas militants in concert with other missiles and weapons.

Israel's swarm was "just the beginning," George Mason policy fellow Zak Kallenborn wrote in Defense One. While AI was used, the drones weren't as sophisticated as future swarms could be, he wrote, since they coordinated with mortars and ground-based missiles to strike targets miles away. In the future, "swarms will not be so simple."

So how about drone-swarm ethics? And limits? In the wrong hands, experts warn, drone swarms have the potential to be weapons of mass destruction, for two reasons: they can inflict harm on lots of people at once, and they lack the controls needed to ensure they don't harm civilians. Because the drones in a swarm communicate with one another, unlike a group of drones acting independently, the risk of catastrophe if something goes wrong is much higher.

The DoD does have some guardrails in place. The department updated its autonomous weapons policy to adhere to its AI Ethical Principles, which govern the design, development, deployment and use of AI.
In the case of drone swarms, the policy would require the technology to be entirely foolproof, with no risk of deadly miscalculations or unpredicted actions, before it could be used. But nations without such safeguards could do irreparable damage. Drones can be cheap and easy to build. Networks can be created by unethical programmers. In short, a drone swarm is a fairly scary technology accessible to many countries, and even to insurgent groups.

"They could be used for wide-scale surveillance as well as wide-scale indiscriminate attacks," drone researcher Arthur Holland Michel said. And for malign actors like terrorist groups, and for states without AI rules, "the fact that swarms are terrifying and unpredictable and indiscriminate," he said, "would actually be a major selling point."