MIT Researcher Proposes Rights for Robots


Humans may find it beneficial to grant robots rights on par with pets.

We should all be pretty well aware at this point that the robot apocalypse (or “robopocalypse,” if you will) is on its way. Our most gifted storytellers have been warning us about it for years, from the legend of the golem to James Cameron’s Skynet. In their latest volley against an unsuspecting human race, our metallic overlords-to-be have conscripted MIT researcher Kate Darling to draft a new research paper suggesting that humans grant rights to robots. According to Darling, robots don’t need rights on par with humans (yet), but because of the emotional connections humans form with them, we may find it beneficial to grant them rights similar to those we give our pets.

All right, calling Darling a pawn of the robots may be going too far, considering that her 18-page paper lays out a digestible, cogent argument for robot rights sooner rather than later. “The typical debate surrounding ‘rights for robots’ assumes a futuristic world of fully autonomous and highly sophisticated androids that are nearly indistinguishable from humans,” she writes. “While technological development may someday lead to such a Blade Runner-esque scenario, the future relevant legal issues are currently shrouded by unforeseeable factors.” Darling describes the robots of today, from Sony’s robotic dogs to Paro the seal (which has proven successful in geriatric therapy) and even Roomba vacuum cleaners, explaining that each one can generate a companionate, emotional reaction in humans, especially in small children. This interaction, she argues, is not the same as an interaction with a nonresponsive toy. “While a child is aware of the projection onto an inanimate toy and can engage or not engage in it at will, a robot that demands attention by playing off of our natural responses may cause a subconscious engagement that is less voluntary.”

Children are not the only ones who anthropomorphize robots, either. Darling describes a situation in which a battle-hardened army colonel could not bear to watch a mine-detecting robot modeled after a stick insect get leg after leg blown off during a trial run. “[The] colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane,” Darling quotes from a 2007 article by Joel Garreau. Even gamers should be familiar with granting human qualities to totally inanimate objects simply because they mimic some small human quality. “In the video game Portal … which requires the player to incinerate the companion cube that has accompanied them throughout the game, some players will opt to sacrifice themselves rather than the object, forfeiting their victory.”

The obvious criticism of Darling’s paper is that simple companion robots, unlike animals, have no desires, feelings, or capacities for pain and pleasure. However, she reminds readers that “our desire to protect animals from harm may not necessarily be based on their inherent attributes, but rather on the projection of ourselves onto these animals.” Furthermore, a discussion of robot rights in the present might make a similar discussion easier if, or when, robots develop sentience somewhere down the line. Just remember that protecting Paro the seal today might make it that much harder to fight back against the Terminator a few years from now.

Source: Social Science Research Network via Computerworld