Should We Ban Killer Robots?

A man in Colorado Springs recently made headlines across the internet by venting his frustrations on a defenseless personal computer, emptying his pistol into the device. Just days prior, the international community came together to ask: would, or should, we ever allow the machines to reply in kind, either in self-defense or acting on human orders?

On April 14th, leading military, scientific and legal experts met in Geneva for a five-day round of talks under the United Nations Convention on Certain Conventional Weapons, the second such meeting of experts, involving 90 member states. The talks were called to establish how best to regulate the potential future development and deployment of Lethal Autonomous Weapons, or LAWs. In other words: should we create robots to be used as soldiers? And if so, how should we control their use?

What is a Lethal Autonomous Weapon?

Unlike current remote-controlled drones, LAWs are armed machines capable of acting independently, without a controlling operator.

Roboticist Noel Sharkey, a professor and leading member of the Campaign to Stop Killer Robots, explains: “According to the US Department of Defense, a LAW is a military weapon that, once activated, will engage targets without any further intervention. And what that means is that it will track and select its own targets to kill them or apply violent force without meaningful human control. In essence, a killer robot.”

Organizations such as the Campaign to Stop Killer Robots, Amnesty International and Human Rights Watch have been appealing to member states to ensure that future autonomous weapon systems always remain subject to “meaningful human control.”

To be clear, we’re not talking about sentient killing machines, either. A Roomba vacuum-cleaning robot, for instance, is autonomous: it can change its own cleaning route if your foot is in its way, or stop if it encounters a steep drop like a flight of stairs. Autonomous military machines could similarly make independent decisions in the field based on sensory input and pre-planned objectives, without constant supervision and guidance by expensive and inefficient human operators.
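As a loose illustration of the kind of autonomy being described, here is a minimal Python sketch of the sense-decide-act loop a Roomba-like robot runs. The sensor names, probabilities and actions are all invented for illustration, not drawn from any real product; the point is only that once started, the loop makes its own decisions with no operator in it.

```python
import random
import time

def read_sensors():
    """Hypothetical sensor readings; a real robot would poll hardware here."""
    return {
        "obstacle_ahead": random.random() < 0.2,   # e.g. a foot in the way
        "cliff_detected": random.random() < 0.05,  # e.g. a flight of stairs
    }

def decide(sensors):
    """Map sensory input to an action -- no human operator involved."""
    if sensors["cliff_detected"]:
        return "stop"      # never drive off a steep drop
    if sensors["obstacle_ahead"]:
        return "turn"      # re-plan the route around the obstacle
    return "forward"       # carry on with the pre-planned objective

def act(action):
    print(f"robot action: {action}")

# The autonomy loop: once activated, it runs without further intervention.
for _ in range(10):
    act(decide(read_sensors()))
    time.sleep(0.1)
```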

How close are we to the killer robots seen in movies?

The subject of armed automatons has captured the public imagination in both heroic and villainous guises throughout modern science fiction, from the Star Wars prequels to Marvel’s recent Avengers: Age of Ultron. But one of the most pertinent examples of killer machines in film comes from last year’s RoboCop (2014). During the film’s introductory vision of 2028, a patrolling platoon of US military robots deployed in a fictional Iranian suburb is confronted by a team of insurgents; the machines use target recognition and threat evaluation software to identify and destroy their targets.

However, Prof. Sharkey doubts that recognition software sophisticated enough to reliably discern legitimate targets will be developed at the same pace as other aspects of LAW technology. He said, “At the moment, these weapons could not be discriminate, the principle of distinction that’s the cornerstone of the Geneva Conventions. The ability to recognize civilians, surrendering or wounded soldiers, we just don’t have that at the moment. It doesn’t seem likely that’ll happen quickly… Current smart targeting systems are designed so you don’t waste ammunition.”

“These systems can see and accurately identify a tank by its silhouette if it’s in an open desert, but as soon as you put it near trees or woodland, it can’t tell the difference between the tank and a civilian truck.”
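For a sense of what identifying a tank “by its silhouette” can mean in practice, here is a minimal, hypothetical sketch of template matching by normalized cross-correlation, a classic technique in this family; the “tank,” “desert” and “trees” below are toy arrays, not real sensor data. On the clean background the template scores a sharp peak, but adding clutter degrades the score, which is the failure mode Sharkey describes.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide the template over the image; return the best match score and position."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_score, best_pos = -1.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y+th, x:x+tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((p * t).mean())  # 1.0 means a perfect match
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_score, best_pos

# A toy "tank" silhouette: a hull with a turret on top.
template = np.zeros((5, 8))
template[2:, :] = 1
template[0:2, 3:5] = 1

# Open desert: the silhouette on an empty background gives a sharp peak.
desert = np.zeros((20, 30))
desert[10:15, 12:20] = template
print(normalized_cross_correlation(desert, template))

# Near "trees": random clutter blurs the peak and degrades the score.
cluttered = desert + (np.random.default_rng(0).random((20, 30)) > 0.7)
print(normalized_cross_correlation(cluttered, template))
```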

“Another problem is the concept of proportionality. When deciding what is a target, and what amount of force to apply, human commanders must make decisions in complex situations, weighing humanitarian needs against military advantage. You can’t make an algorithm for that; it requires a human to make that call.”


Cold and calculating: not something to fear


Still, the advantages of successfully implementing this technology would be significant. International law expert Dr. William Boothby, a former RAF Air Commodore, believes future LAWs could one day avoid the pitfalls of human error and emotional fallibility, if the technology is allowed to develop further. As an example he cites, “… A young infantryman, tasked with clearing an unlit building within a war zone. When entering the darkened room, he detects movement in the corner of his eye, and in that instant, in terror, unloads his magazine, and kills a hiding family of civilians.”

“Now imagine a time where we have become capable of developing an infantry robot that is able to differentiate between targets, and is placed within that same situation. Although this time, it may refrain from firing, not only because it might instantly recognise the absence of weapons, but also because it would be immune to feelings of fear, of anger, of seeking revenge. Such systems could be more objective, restrained and controlled than soldiers.”

Drones are the beginning

Further development of drones and autonomous weapon systems could bring significant advantages to militaries across the world. Even relatively small and cash-strapped nations could take advantage of the ever-decreasing cost of computing power to produce robotic forces, augmenting expensive conventional capabilities to secure an edge in local conflicts. Quadcopters, aerial drones whose four rotors give them an exceptional ability to take off and land reliably, are already available on the commercial market for around $1,000; a typical civilian model has a range of 10-20 km and a battery life of half an hour. According to the BBC, South African firm Desert Wolf has already produced and sold 25 units of its octocopter “Skunk” security drone, equipped with pepper-spray-firing launchers, to domestic and international clients. Fitting an automatic rifle to similar units could be relatively simple.

In the report 20YY: Preparing for War in the Robotic Age, published by the Center for a New American Security in January last year, it was argued that such low-cost robotic hardware could be used to support border patrols, provide battlefield reconnaissance or even fight en masse in swarming formations. An autonomous swarm, guided by complex algorithms beyond the capabilities of human operators, might be capable of entirely enveloping an objective such as a town in order to assault defenders from all sides.
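As a toy illustration of what “enveloping an objective from all sides” might look like algorithmically, here is a hypothetical Python sketch in which simulated drones steer toward evenly spaced slots on a perimeter ring around a target. Everything here, from the drone count to the steering rule, is invented for illustration and bears no relation to any real swarm controller.

```python
import math

def perimeter_slots(center, radius, n):
    """Assign each of n drones an evenly spaced slot on a ring around the objective."""
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def step_towards(pos, target, speed=1.0):
    """Move one step of at most `speed` toward the target, snapping on arrival."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

objective = (50.0, 50.0)
drones = [(0.0, float(i * 10)) for i in range(8)]          # starting positions
slots = perimeter_slots(objective, radius=10.0, n=len(drones))

# Each tick, every drone closes on its slot until the ring is complete.
for tick in range(100):
    drones = [step_towards(d, s) for d, s in zip(drones, slots)]
    if all(d == s for d, s in zip(drones, slots)):
        print(f"objective enveloped after {tick + 1} ticks")
        break
```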

Cheaper than human soldiers – but with hidden dangers

With such potential for cheap and effective robotic hardware, human soldiers could come to look incredibly expensive by comparison. Pentagon comptroller Robert Hale estimated that in 2012 the average US infantryman deployed in Afghanistan cost the government $850,000 a year in equipment, training and living costs. Whilst maintaining, equipping and supplying robotic armed forces would carry costs of its own, robots require no ongoing training, no salaries and none of the duty of care afforded to human soldiers, so the money-saving potential could be enormous. And in a political climate where deploying human soldiers into battlefields with the potential for large casualties is increasingly controversial among domestic populations, especially in Western nations, governments could come to see robotic alternatives as both a financial and a political solution.

Speaking on the potential savings in military casualties and tax dollars, Prof. Sharkey argues that these benefits carry a hidden cost. He said, “In fact it’s one of the problems. I don’t want to see young men killed in war, but the threat of body bags means there will be some reluctance in going to war. If you have virtually no body bags coming home, when you can just send in cheaper robots, then there’s no stopping the number of conflicts you can get into.”

Already banned?


Dr. Boothby believes that current weapons targeting and procurement law is already sufficient to prevent the deployment of offensive LAW systems, and warns that existing law could be undermined by any new prohibition implemented through the UN: “There is existing law to deal with this, and I’m worried that discussion of a ban diverts our attention from it. It would be far better if states consistently reviewed their weapons plans, applied the existing law and recognised that offensive autonomous weapons based on current technology do not comply with that law… You could create a prohibition that states that never intend to develop LAWs are happy to go along with, but others will simply ignore. And this could further weaken the very concept of international law.”

New laws may ban existing defensive technology

Another of Dr. Boothby’s concerns is that a new prohibition could be worded so broadly that it catches currently automated but entirely defensive systems, such as Israel’s Iron Dome and the US Phalanx anti-missile systems. “Depending on how a provision against LAWs is worded, my concern is it might catch some technologies that states already rely on for their defense, and that states will need to consider these proposals very carefully.”

Six countries are currently considered capable of producing semi- or fully autonomous fighting machines, and all of them already operate partially automated weapon systems ranging from remote-controlled aerial drones to anti-missile programs: the US, Israel, China, South Korea, the UK and Russia.

None of these countries has publicly stated that it is actively pursuing LAWs, yet very few governments worldwide have been willing to declare outright support for any pre-emptive prohibition on combat-ready robots; only Pakistan, Egypt, Cuba, Ecuador and the Holy See have advocated such a measure. Explaining this reticence, Dr. Boothby noted, “Generally speaking states are reluctant to ban technologies which haven’t been developed yet. They did ban blinding lasers before such weapons were fielded, but states prefer to be able to understand the pros and cons of a future technology first before reaching a firm decision as to their acceptability.”

The US and Russia aren’t banning killer robots

In its opening statement to the recent conference, the United States indicated it will not rule out the possibility of acquiring LAWs in future. Russia has kept its own position guarded, having previously expressed “severe doubts” over how constructive further talks might be. Russia did, however, welcome an initial UN report in 2013, noting that particular attention should be paid to the serious implications the use of such weapons could have for societal foundations, including “the negating of human life.”

Neither country appears to favor the kind of international ban sought by humanitarian organisations such as Human Rights Watch and the Campaign to Stop Killer Robots. In fact, both seem eager to develop more advanced drone technologies that might one day benefit from fully autonomous functions. The US Navy’s latest X-47B carrier-borne jet drone already demonstrates semi-autonomous capability: it can plan its own flight route, take off and land with an operator merely setting a final destination. Meanwhile, Russian media sources such as RT.com have recently speculated that a new Russian battle tank, the T-14 Armata, may be further developed into the world’s first fully robotic tank; reports suggest it already possesses an unmanned gun turret, with only two operating crew members. Prototypes of the vehicle are due on parade in Moscow for the upcoming May 9th Victory Day celebrations.


David Rodgers is a freelance writer and journalist with interests in speculative science fiction, tabletop gaming and toy robots. You can follow him at boilerplated.tumblr.com for future articles and thoughts.

