The National Security Agency’s SKYNET program, which uses metadata and learning algorithms to select targets for drone strikes, may be targeting thousands of innocent people.

The NSA’s dramatically titled SKYNET program was touted as a learning algorithm, a pseudo-intelligence that uses metadata from cell phone records to build profiles of suspected terrorists. Now, close examination of documents leaked by Edward Snowden, along with new statements from whistleblowers, paints a bleaker picture: the program’s chance of false positives is frighteningly high, and it may have led to the deaths of thousands of innocent people under the assumption that they had extremist connections.

The documents released include slide presentations of SKYNET’s methods and successes (here and here), from which much of this information was gleaned.

SKYNET would guide Predator drone strikes, as well as death squads, based on the data it acquired – so just what kind of airtight information could lead it to single out a person as a terrorist?

That largely comes down to social networks and location. Among the many points of data SKYNET examined were the movements of suspects (and suspects, to SKYNET, can include the entire population of a country such as Pakistan): travelling to or from a region with high terrorist activity, especially if the timing lines up so that the suspect arrives just before or after a major event.

A person’s social network also comes into play. At the risk of oversimplification, if person A calls or texts person B, who has already been flagged by SKYNET, person A will likely be flagged by the algorithm, as well.
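The leaked slides don’t spell out the exact mechanism, but the guilt-by-association logic described above can be sketched as a simple flag-propagation pass over a contact graph. Everything here is invented for illustration – the names, the graph, and the one-hop spreading rule are assumptions, not SKYNET’s actual implementation:

```python
# Hypothetical sketch of contact-graph flagging: anyone who calls or texts
# an already-flagged person gets flagged too. Names and graph are made up.

def propagate_flags(contacts, seed_flags):
    """Spread suspicion through a call/text graph until no new flags appear.

    contacts: dict mapping each person to the set of people they contacted.
    seed_flags: the initial set of flagged people.
    """
    flagged = set(seed_flags)
    changed = True
    while changed:
        changed = False
        for person, peers in contacts.items():
            # Flag anyone with at least one contact already in the flagged set.
            if person not in flagged and peers & flagged:
                flagged.add(person)
                changed = True
    return flagged

contacts = {
    "A": {"B"},          # A texts B, who is already flagged...
    "B": {"A", "C"},
    "C": {"B"},
    "D": {"E"},          # D and E never touch the flagged cluster
    "E": {"D"},
}
print(sorted(propagate_flags(contacts, {"B"})))  # → ['A', 'B', 'C']
```

Note that the flags spread transitively: C never contacted the original suspect B’s flagged neighbors, only B himself, yet C ends up flagged – which is exactly how a journalist who phones known extremists for interviews would be swept in.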

Trying to protect one’s own privacy also tips off SKYNET. A suspect who takes the battery out of their phone, changes SIM cards, and so forth will be considered suspicious. Even turning the phone off can be seen as a way to evade detection.

One person touted as a “confirmed member of Al-Qa’ida” is a man named Ahmed Zaidan, who fit the profile of a terrorist perfectly, according to the NSA’s standards. Zaidan travelled to and from locations where terrorist activity was known to occur, and had numerous phone conversations with known extremists. The thing is, Ahmed Zaidan is Al-Jazeera’s Islamabad Bureau Chief – a journalist who covers terrorist activity and communicates directly with the factions involved.

Already, SKYNET’s methods seem to break down. What’s more, the algorithm required a base set of known extremists to be fed into it in order to develop its own profile of how an extremist might behave. The problem is that the pool of “known extremists” is vanishingly small, so the NSA supplemented it by adding “suspected extremists” to the list.
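Why does a tiny training set matter so much? Because when real extremists are a minuscule fraction of the population, even a classifier with a very low false-positive rate will flag far more innocent people than guilty ones. The numbers below are made up for the sake of the arithmetic – they are not taken from the NSA documents:

```python
# Illustrative base-rate arithmetic. All figures are assumptions chosen
# only to show the shape of the problem, not SKYNET's real performance.

population = 55_000_000      # assumed: roughly a national cell-user base
true_extremists = 5_000      # assumed: a vanishingly small positive class
false_positive_rate = 0.001  # assumed: 0.1%, generous for any classifier
true_positive_rate = 0.5     # assumed: the model catches half of real targets

false_alarms = (population - true_extremists) * false_positive_rate
real_hits = true_extremists * true_positive_rate
precision = real_hits / (real_hits + false_alarms)

print(f"innocent people flagged: {false_alarms:,.0f}")   # → 54,995
print(f"real extremists flagged: {real_hits:,.0f}")      # → 2,500
print(f"share of flags that are correct: {precision:.2%}")  # → 4.35%
```

Under these assumptions, more than 95% of everyone the system flags would be innocent – the base-rate problem at the heart of the false-positive criticism.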

On top of that, the program may have been in development for more than a decade, and its methods possibly tested on gamers.

The Escapist spoke to a former DARPA consultant, who spoke on condition of anonymity. He stated that the Department of Defense has been working on terrorist-detection algorithms since at least 2000, and that as early as 2001 they knew they had a problem with false positives. The project the former consultant was involved with was an attempt to test those algorithms against massively multiplayer games. It’s unclear how successful this attempt was, but gamers may have been the guinea pigs for these algorithms. (Note: The Escapist was not able to independently verify these allegations.)

It’s unknown now, and may remain unknown forever, how many false positives SKYNET delivered, and how many of those were acted on by people with the power to do so.

For most of us, SKYNET seems far removed from our lives – but algorithms increasingly infiltrate our day-to-day routines. Would we notice if the results of these algorithms shifted from targeted marketing to targeted arrests, or worse? It’s a scary thought, and maybe a naive one. I’ll leave that up to the Escapists; please, comment and share your thoughts on the matter.

Source: Ars Technica
