It’s a tale as old as time: Man makes robot, robot develops intelligence, man denies robot rights, man and robot struggle to resolve their differences. We’ve seen this basic story play out countless times over the years in movies like Terminator, The Matrix, and Westworld, as well as in video games like Detroit: Become Human. Sometimes the battle is bloody (like Terminator or Ex Machina), other times legal (Bicentennial Man), and still other times philosophical (Star Trek).
For years, people have wondered how these kinds of conflicts would play out in the real world. Now we know. Last week, a federal appeals court threw down the gauntlet when it decided that artificial intelligence systems cannot legally qualify as “inventors.” The court’s reasoning was simple enough: An AI system cannot be an inventor because “The Patent Act expressly provides that inventors are ‘individuals,’” a term that, according to the court, refers only to “human beings” and not AI systems.
Of course, this yields an obvious follow-up question: How did the court conclude that an AI system cannot qualify as an “individual”? The court did not compare the attributes of the AI system against those that one would expect to find in individuals worthy of rights. Instead, the court relied on a Supreme Court decision holding that the word “individual,” as used in the Torture Victim Protection Act of 1991, did not apply to organizations, but instead was limited to “natural persons” (such that organizations that engaged in torture were not subject to liability under the statute).
It is obvious that, in interpreting “individual,” the Supreme Court did not contemplate that its analysis would be used to draw a distinction between flesh-and-blood persons and “unnatural” AI persons. Nevertheless, the consequences are sweeping. The court equated “individual” with “natural person,” and it is “persons” to whom the Constitution and its amendments grant rights. (For example, the Fourth Amendment secures the right of the people to be “secure in their persons,” and the Fourteenth Amendment prohibits states from depriving “any person” of life, liberty, or property.) Thus, a rule holding that an AI system or robot cannot qualify as a “person” seems to foreclose the possibility that such systems can have any rights at all.
A Silver Lining for this AI Court Ruling?
While the decision is in many ways terrible for AI and AI-rights enthusiasts, there are a few silver linings. First, the court attempted to limit its decision to the specific issue in front of it and made clear that the decision was not meant to resolve large and far-reaching disputes regarding robot rights. Indeed, in the very first paragraph of the decision, the court stated that its decision did not involve “an abstract inquiry into the nature of invention or the rights, if any, of AI systems” (questions the court characterized as “metaphysical matters”). While there is no escaping the AI-unfriendly nature of the court’s reasoning, the disclaimer could be used as a firewall of sorts to persuade future courts not to apply this precedent expansively.
Second, the decision avoided a robot-rights problem that would have arisen had the court reached the opposite conclusion. The question at issue was whether an AI system could be listed as an inventor on a patent application. Even if the answer had been yes, the AI system would not have had any ownership rights in the resultant patent — those rights would go to Stephen Thaler, the person who created the AI system and filed the patent application. (Under the patent laws, the owner of a patent and the inventor of a patent can be different.)
If we imagine a world in which AI systems deserve rights, and in which they can be exploited for their ability to innovate, then the court’s decision could even be viewed as a boon for AI systems, since it removes a key incentive for exploitation. The owners of an AI system have less reason to put it to work inventing if they cannot secure a patent on the resulting inventions.
In practice, I would expect this effect to be relatively small. The court’s ruling may prevent AI owners from patenting AI inventions, but it does nothing to prevent AI owners from exploiting AI in other ways (for example, by marketing the AI inventions without a patent, or by protecting the intricacies of the invention as a trade secret).
So How Should It Have Gone?
As bad as the decision is for AI rights, the fact of the matter is that the court reached the right outcome. The reason has to do with the specific facts of the case. Given the (relatively) early state of AI technology, there is no question that the AI system at issue here would not qualify as a “person” under any definition. Among other things, the system, called “Device for the Autonomous Bootstrapping of Unified Sentience” (DABUS), lacks sentience, desires, and the ability to reason beyond the specific tasks it is programmed to perform.
As a result, the court could have kept most of its analysis, but simply added a few paragraphs that left open the possibility that an AI system could, one day, qualify as an “individual.” For example, the court could have added the following conclusion:
“In reaching this decision, the court does not hold, as an absolute matter, that an AI system can never qualify as an ‘individual’ under the Patent Act. The future is long, and AI technology holds great potential. It may be that a future AI system would sufficiently resemble an ‘individual’ so as to qualify as an inventor under the Act. Nevertheless, the record before us now leaves no doubt that the AI system at issue here does not possess those traits. Thus, the outcome is clear — DABUS cannot be viewed as an inventor for purposes of the Patent Act.” This language would leave the court’s core ruling intact, while acknowledging that it may be necessary to revisit the rule in the future.
Where Do We Go from Here?
In Bicentennial Man, Andrew the robot spends 200 years growing and changing from a factory-line robot into a unique, thinking, and feeling individual. Along the way, Andrew develops numerous inventions and reshapes several fields of industry. Yet Andrew is not legally permitted to own his inventions or profits. Instead, his earnings belong to his original owners and are managed by a trust set up to handle finances on his behalf. Andrew spends years lobbying the Congress of his world to recognize his personhood. He is ultimately successful, but only after modifying his artificial brain to deteriorate and mimic human mortality. In the ultimate irony, Andrew achieves his goal only by ensuring that he will never get to enjoy his success.
While Bicentennial Man is just a story, it serves as a strong reminder of what we already know from human history and experience: rights do not come free, but instead require focus, determination, and resilience. Whatever else can be said, DABUS is not Bicentennial Man. Even so, the court’s decision sets down a marker that could shape AI rights for years to come. Its true impact may not be felt for some time, but it serves as a reminder that Andrew’s fight is just beginning.
Your move, robot.