Critical Intel
Killer Robots and Collateral Damage

Robert Rath | 20 Dec 2012 16:00

"I'm Don't Worry About the Guy That Wants to Hijack a Plane, I Worry About a Guy Who Wants to Hijack All the Planes"

Despite its absurd alarmism, the quote above - said by Oliver North in one of Black Ops II's widely-criticized pseudo-documentary ads - nails the game's primary theme: What happens if we lose control and our drones turn against us? Essentially, Raul Menendez's plot boils down to turning America's high-tech weaponry against it as punishment for the misery the U.S. caused through Cold War proxy conflicts. (At least that's the most sense I can tease out of it. The plot of Black Ops II is convoluted at best, and at its worst is just a litany of contemporary shout-outs.) However, Black Ops II also deals with the issue of drone autonomy, a question that may become one of the defining ethical debates of the next twenty years.

Black Ops II depicts drone autonomy as neutral but problematic. Drones under the player's control serve as a potent force multiplier for JSOC, allowing Section and his team to take on much larger forces. However, the game also explores the horrifying potential that drones might be badly programmed or fall into enemy hands. During one mission, a hovering drone sweeps the flooded streets of Lahore like a pre-programmed death squad, killing anyone caught in its searchlights. Later on, Section and his team try to stop a terrorist attack on a floating resort in the Cayman Islands, only to have autonomous security drones open fire on them, since the robots perceive anyone carrying a weapon as an enemy. Overall, the game presents autonomy as a useful technology but warns that its imperfections could render it extremely dangerous.
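To see how a "badly programmed" drone goes wrong, it helps to imagine the targeting logic as code. The sketch below is purely hypothetical - invented names and fields, nothing from the game's files - but it captures the Cayman Islands failure: a rule that equates "armed" with "hostile" will happily shoot the rescue team.

```python
# Purely hypothetical sketch of the targeting rule the game's security
# drones seem to follow. All names and fields are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    armed: bool
    transponder_id: Optional[str]  # friendly IFF code, if broadcasting one

FRIENDLY_IDS = {"JSOC-SECTION", "RESORT-SECURITY"}

def naive_is_hostile(contact: Contact) -> bool:
    # The flaw Black Ops II dramatizes: "armed" is treated as "hostile",
    # so a counter-terrorist team carrying rifles gets engaged on sight.
    return contact.armed

def better_is_hostile(contact: Contact) -> bool:
    # One obvious refinement: check identity before checking weapons.
    if contact.transponder_id in FRIENDLY_IDS:
        return False
    return contact.armed

section = Contact(armed=True, transponder_id="JSOC-SECTION")
print(naive_is_hostile(section))   # True - the drones open fire
print(better_is_hostile(section))  # False
```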

Drones currently have a limited amount of autonomy, mostly in regard to stabilizing the aircraft and plotting flight paths. However, the technology is improving fast, and with each generation drones take over more tasks from their human handlers. This year the Navy began testing a stealth drone called the X-47B that can take off and land on a carrier deck with the click of a mouse and perform pre-programmed missions without human guidance (though its systems will be monitored by a human). This is the type of drone Black Ops II portrays - a fully autonomous or near-autonomous combat system that can chart flight paths and engage targets independently of an operator - and it's this degree of autonomy that worries experts both inside and outside the military.
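What "pre-programmed missions without human guidance" means in practice is easier to grasp as a sketch. The following is an assumption about the general shape of such a system, not the X-47B's actual software: a mission loop where the human monitor can watch and abort, but never steers.

```python
# Minimal illustrative sketch of a monitored-but-autonomous mission loop.
# This is an assumed structure for illustration, not the X-47B's software.

WAYPOINTS = [(36.6, -76.3), (36.9, -75.8), (37.1, -75.5)]  # (lat, lon)

def fly_mission(waypoints, abort_requested, navigate_to):
    """Visit each waypoint autonomously. The human's only input is the
    abort check; routing and execution never ask for guidance."""
    for wp in waypoints:
        if abort_requested():  # the one place a human enters the loop
            return "aborted"
        navigate_to(wp)
    return "complete"

# Stand-in callbacks for the demo:
status = fly_mission(
    WAYPOINTS,
    abort_requested=lambda: False,
    navigate_to=lambda wp: print(f"navigating to {wp}"),
)
print(status)  # complete
```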

One ethical quandary we've already discussed is how autonomous drones will differentiate between combatants and non-combatants. Will targets be selected and neutralized through internal systems, or by the human monitor? Suppose a drone identifies a high-value target but there are civilians in the strike zone - will programmers need to write an algorithm that allows a drone to calculate how much collateral damage is too much? And if civilians are killed by an autonomous drone, who can be held accountable for that action under international human rights law? Depending on the specific circumstances of an incident, you could make equally strong arguments that responsibility lies with the drone's commander, mission programmer, or even manufacturer. These questions seem like the outline for an Asimov novel, but the issue is real enough to warrant a 50-page report from Human Rights Watch titled Losing Humanity: The Case Against Killer Robots, in which the organization urges an international ban on autonomous drones due to the risks to civilians and the accountability gap they create. While laudable, an international ban would prove impractical for a number of reasons - mostly because autonomy is an elusive concept to define technically, and would be even more so in drafting an international agreement (consider that the UN has been trying to create an internationally-accepted definition of "terrorism" since 1937).
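To make that question concrete, here's what such an algorithm might look like stripped to its bones. Everything in it - the fields, the threshold, the very idea of a numeric "target value" - is invented for illustration, which is rather the point: someone, somewhere, would have to pick these numbers.

```python
# A deliberately crude, hypothetical "proportionality" check. Every
# field and threshold here is invented; no real system is described.

def strike_permitted(target_value: float,
                     expected_civilian_casualties: int,
                     max_casualties_per_value: float = 0.1) -> bool:
    """Permit a strike only if expected collateral damage falls under
    an arbitrary proportionality threshold."""
    if expected_civilian_casualties == 0:
        return True
    return (expected_civilian_casualties / target_value
            <= max_casualties_per_value)

# If this returns True and civilians die, who is accountable - the
# commander who set the threshold, the programmer who wrote the
# function, or the manufacturer who shipped it?
print(strike_permitted(target_value=50.0, expected_civilian_casualties=4))
```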

However, the U.S. military shares some of the same concerns as human rights groups, and it is taking its own form of action. Last month, Deputy Defense Secretary Ashton Carter signed a series of instructions that prevent the military from purchasing autonomous or semi-autonomous drones "that could lead to unintended engagements" - that is, drones that target and attack of their own accord. Basically, the Pentagon believes that the decision to engage a target should always come from a human and pass through the military's traditional command structure, rather than an automated process. The debate doesn't end there, though - the instructions specifically allow autonomous systems relating to surveillance, cyber weapons like Stuxnet and Flame, deploying mines, and unguided and operator-guided munitions.
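In software terms, the Pentagon's position amounts to a hard gate between nominating a target and releasing a weapon. A hypothetical sketch of that gate (invented names throughout, not anything from the actual instructions) might look like this:

```python
# A hypothetical sketch of the human-in-the-loop gate the instructions
# require: the machine may nominate targets, but weapons release needs
# an explicit, attributable human sign-off. All names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Authorization:
    officer_id: str   # who approved - attributable up the chain of command
    target_id: str    # what, specifically, they approved
    approved: bool

def engage(target_id: str, auth: Optional[Authorization]) -> str:
    # Without a matching human sign-off, the engagement path simply
    # doesn't exist - no "unintended engagements" by design.
    if auth is None or not auth.approved or auth.target_id != target_id:
        return "HOLD: no valid human authorization"
    return f"ENGAGE {target_id} (authorized by {auth.officer_id})"

print(engage("TGT-07", None))
print(engage("TGT-07", Authorization("MAJ-0413", "TGT-07", True)))
```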

But what are the odds of these newly-autonomous drones being turned against their operators? Well, for now the chances are pretty slim. There have been some data breaches of U.S. drones, most notably in late 2008, when U.S. forces in Iraq arrested a Shiite militant whose laptop was full of intercepted camera feeds from a Predator. The next year, American troops found more laptops with hours of drone footage and, after an investigation, determined that Iran was teaching militants how to intercept the unencrypted feeds with a $26 piece of software. Iran, in fact, has met with some success countering the U.S. drones that cross its border to spy on its uranium enrichment facilities. In late 2011, Iran captured an RQ-170 Sentinel drone that was likely operated by the CIA. While the circumstances surrounding the capture are still unclear, Iran claimed that it jammed the Sentinel's GPS signal and then spoofed it, tricking the Sentinel into thinking it was back home and landing it at an Iranian air base. U.S. experts, on the other hand, find that claim dubious, since there's no indication Iran possesses that level of capability, and say the drone probably crashed due to pilot error. However, those same experts admit that the exploit Iran described is a real vulnerability and the scheme would be possible, while stressing that fully hijacking and controlling a drone via computer is still in the realm of science fiction.
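The jam-and-spoof exploit Iran described also has a commonly discussed class of countermeasure: cross-check the satellite fix against the drone's inertial navigation, and distrust any fix that jumps farther than the aircraft could plausibly have drifted. Here's a simplified sketch, with invented thresholds and rough flat-earth distance math:

```python
# Simplified, illustrative sketch of a GPS-spoofing plausibility check:
# compare each GPS fix against the inertial dead-reckoning estimate.
# The threshold and the distance approximation are invented for the demo.
import math

def distance_m(a, b):
    """Rough equirectangular distance between two (lat, lon) points."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6_371_000
    dy = math.radians(b[0] - a[0]) * 6_371_000
    return math.hypot(dx, dy)

def gps_fix_plausible(gps_fix, ins_estimate, max_error_m=500.0):
    """Reject a GPS fix that disagrees with inertial navigation by more
    than the drift we'd expect between updates."""
    return distance_m(gps_fix, ins_estimate) <= max_error_m

# A spoofed fix that "teleports" the drone tens of kilometers fails:
print(gps_fix_plausible((32.0, 53.0), (32.001, 53.001)))  # True (~150 m)
print(gps_fix_plausible((32.5, 53.5), (32.001, 53.001)))  # False (~70 km)
```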

Call of Duty: Black Ops II isn't focused on giving us a thoughtful experience with emerging technology; it mostly just wants to give us cool things to shoot at and cool weapons to shoot at them with. However, despite its bombastic storyline, it does raise interesting points about drone autonomy and the dangers we face by letting an automated system determine who lives and who dies.
