Ever since roboticists first broached the possibility of “thinking machines”, science fiction has treated a future with sentient robots as inevitable. Maybe they’ll befriend us, maybe they’ll kill us, but either way they can think logically, hold conversations, and usually express some range of emotions – or a distinct lack thereof. At least that’s how it works in the movies – but how would a truly sentient robot actually behave?
The truth is we have no idea. The birth of AI consciousness is called the Singularity for a reason – until it happens, we have no way of predicting how it could possibly unfold. Not that it’s stopped Hollywood from trying, but in most cases these questions have to be simplified so they don’t bog down the plot. Ex Machina is one recent exception. It tells the story of Caleb, a computer programmer asked to judge whether an AI named Ava could pass as human. During Ex Machina’s theatrical release, it received widespread critical praise for examining the deeper implications of consciousness and robotics.
That’s thanks partly to Murray Shanahan, a UK-based roboticist who acted as advisor on the film. His book Embodiment and the Inner Life was an inspiration for Ex Machina’s script, while his new book The Technological Singularity breaks down potential future scenarios in more detail. The Escapist recently spoke with Shanahan about what a human-level intelligence might be like. Could it reflect human behaviors and emotions? Would it be coldly logical while surpassing us in raw brain-power?
“I think we can imagine human-level AI that is not human-like, and that lacks human-like emotions,” Shanahan told The Escapist. “But it’s also possible to build a human-level AI that is very human-like and does have human-like emotions. In other words both are possible, and right now we have no idea what the first human-level AI – which might be decades away – will be like.”
Part of the problem is that when we talk about robots, we’re not usually talking about consciousness – we’re talking about intelligence. Since we consider intelligence to be a distinctly human trait, it’s an easy way to measure whether an AI is humanlike, or whether it can outsmart us. That explains why humans get so worked up when a computer beats a chess champion, instead of asking whether it actually enjoys playing chess.
It’s also why the Turing Test has become sci-fi shorthand for determining how well AI can mimic human behavior. But the truth is that human speech isn’t based on intelligence alone. In fact, we’ve recently learned that programmers can trick testers by adding human-sounding nonsense. “The Turing Test is usually thought of as a test of intelligence rather than self-awareness or consciousness,” Shanahan continued. “But one of the problems with the Turing Test is that it’s possible to build chatbots that seem human-like because they behave in eccentric or silly ways.”
In reality, there are all kinds of elements which reflect consciousness, and most AI captures only a sliver of that range. As we’ve mentioned before, while computers have far more processing power, our organic brains are capable of far more complex and nuanced thought, which we usually take for granted. “The most impressive achievements in AI today are only better than humans in very narrow domains, such as playing chess or finding patterns in huge amounts of data,” Shanahan said. “We still have no idea how to make a computer that can perform the range of intellectual tasks that a human can perform, or that can learn new tasks and even invent entirely new kinds of behaviour.”
So what behaviors really show that something has consciousness, instead of mere intelligence? According to Shanahan, the biggest stumbling blocks for AI are common sense and creativity, thought processes which are almost impossible to replicate. The computers which recently made headlines by creating Magic: The Gathering cards are a step in the right direction, but even those technically relied on patterns in Magic’s rules – they couldn’t innovate to create an entirely new card game. And creativity is a faculty the Turing Test isn’t equipped to evaluate – according to Shanahan, the test Caleb conducts in Ex Machina is a much better approach.
“The test that Caleb is asked to conduct on Ava is much better than the Turing Test,” he said. “In the Turing Test the machine is hidden, and the task is to fool the judge into thinking it is human. As Nathan says, ‘The real test is to show you [Ava] is a robot. Then see if you still feel she has consciousness.'”
Ex Machina is still just a movie, however, and there’s no consensus on when robots might reach these goals. Some roboticists believe we’ll reach the Singularity by mid-century, while Shanahan himself suspects it may take even longer. Still, if an AI ever quietly became expert at common sense and creativity, it’s possible it would be sentient and we’d never pick up on it.
Ex Machina was released on Blu-ray and DVD on Tuesday, July 14th.