Be sure to check out part 3 of this 5-part series: “5 Uses for the Astonishing Power of 2D Materials,” and check back next week for part 5!
No matter how strong the spirit, the flesh is weak. Fragile. Mortal. Usually doesn’t smell very good. Cursed with a decidedly unappealing combination of corpse-pale skin and dense, almost werewolfesque body hair – at least when it’s my flesh.
DARPA, the Defense Advanced Research Projects Agency – the folks who brought you the foundations of the Internet, the MQ-1 Predator attack drone, and Metal Gear REX – recently performed a remarkable experiment. The subject was a 28-year-old man who’d lost his hand a decade prior. In its place, a mechanical hand was attached, and electrodes wired to the artificial limb were implanted in the motor cortex of his brain. There, they detect the electrical activity of firing nerve cells: when the patient thinks of moving his missing hand, the implants pick up the signal and send it to the artificial hand, which moves in accordance with his will like a real limb.
Meanwhile, mechanical sensors in the limb’s fingers send their own signals back to another implant, this one an array of electrodes in the subject’s somatosensory cortex – the main area of the brain responsible for the sense of touch. When the hand’s sensors detect pressure, the electrodes stimulate the surrounding nerves to produce corresponding tactile sensations in the brain – replicating that sense of touch. This is not the first such experiment, but it is the most impressive to date. The volunteer, while blindfolded, was able to tell with almost perfect accuracy which of his fingers was being touched.
A machine has pressure applied to some torque sensors, and a conscious human mind feels it.
Attempts to send information directly between machines and organic brains date back to the 1960s. From crude origins, research into this area – whether by actually implanting devices into the brain or through less invasive means – has grown increasingly sophisticated.
There are many challenges still to overcome. Our understanding of the human nervous system is quite limited. The amount of information that can be transmitted across existing implants is very modest – a human brain and a computer using them to communicate is a bit like a pair of brilliant scientists trying to discuss the details of their research by blinking at each other in Morse code. And brains are a lot like people: They often react poorly to having bits of metal stuck into them. Luckily, progress is being made in all of these areas, and shows no sign of stopping.
This could be just the beginning. Brain-computer interfaces (BCIs) could have incredibly far-reaching implications, not just for people with disabilities but for everyone. Medicine, work, entertainment, the legal system – all could be deeply affected as our ability to link mind and machine grows. The technology could even go beyond restoring normal human abilities to those who’ve lost them, enhancing people past previous human limits – perhaps, someday, past what many people would even consider human.
It would also revolutionize porn, but that goes without saying, since the first thing every new communication technology is used for is revolutionizing porn. Nonetheless, here are five other ways in which brain-computer interfaces will change the world – and us.
1. Treating Disabilities

The biggest motivator behind research into brain-computer interface technology has been the treatment of disabilities. The first successful attempt came in 1978, when Dr. William Dobelle implanted electrodes into the visual cortex of a man with acquired blindness and connected them to a camera. The vision this gave the patient was very crude, just a narrow field of gray shapes moving at about five frames a second. It also, in its original form, required a two-ton supercomputer to process the visual data.
Still, it was something. A machine was communicating directly with a human brain.
In 2002, Jens Naumann – possibly the world’s unluckiest man, losing first one eye and then the other in two unrelated accidents three years apart – received the first in a new generation of Dobelle’s implants. These restored considerably more of his sight, though still far from all of it. And a quarter century of advancements in computer technology meant he didn’t have to be plugged into a supercomputer that weighed more than a car.
In 1998, a man named Johnny Ray, who’d lost virtually all voluntary motor function after a catastrophic brain stem stroke the year before, became able to move a computer cursor with mental commands using an implant. Not “movement” in the usual sense, but an important milestone nevertheless – it was now possible for the motor center of the human brain to control a machine. Since then, people with quadriplegia have used brain implants to control robotic arms and perform some fairly impressive feats, like picking up a bottle and drinking from it.
With prosthetics like the recent DARPA experiment, we’re learning to restore the other function an artificial hand will need to be a full substitute for a real one: tactile sensation. The human hand is a sensory organ of tremendous precision, one of the biggest concentrations of sensory nerves in the human body. (Search “sensory homunculus” if you’d like to see a visual representation of this. And possibly have horrible nightmares.) Without a sense of touch, even simple interactions with the world around us become clumsy and awkward, never mind complex ones.
There is an important hurdle that must be overcome first. Naumann’s artificial vision didn’t last, and eight weeks after the surgery he was again completely blind. The same happened to the other recipients of the implants. Unfortunately, the immune system often perceives implants as a threat, with glial cells eventually producing scar tissue around them that degrades the electrodes’ ability to interact with surrounding neurons. (Fans of the Deus Ex series may recall this was actually a major factor in the plot of Human Revolution.)
New implant designs and materials that get along better with their organic surroundings are thus an important part of research if long-term use of brain implants is to be practical. Noninvasive interfaces using sensors outside the skull are also possible, and potentially quite useful, as we’ll see. But they’re not as precise, which is fine for many applications but problematic if you hope to someday make an artificial limb as precise and dexterous as a human one. And they can’t do what makes the DARPA hand so exciting – communicate in two directions.
Still, this needn’t be an insurmountable challenge; promising new implant designs and materials meant to address this problem are already in development. Implanted brain-computer interfaces could bring us to an era where the blind see, lost arms and legs are replaced with artificial limbs barely distinguishable from the original, and a severed spinal cord is an inconvenience for electronics to detour signals around instead of a permanent disability.
2. Video Games

Brain-computer interfaces have their more frivolous applications too, especially in interactive entertainment like video games. “Wait,” you may be saying. “Being able to walk again is one thing, but I’m not letting anyone drill a hole in my skull and poke stuff into my brain just so I can see Geralt fornicating with a minotaur in The Witcher IX: Vesna’s Revenge at higher resolution.”
Luckily, you won’t have to. (Though your lack of dedication disappoints me.) There’s a lot you can do with noninvasive technologies alone, most likely a headset incorporating sensors that pick up brain activity. Electroencephalography (EEG) is the leading candidate, using electrodes placed on the scalp to detect the tiny voltage fluctuations produced when large groups of neurons fire together.
Games could be made to interpret particular neuroelectric patterns as in-game commands. This might be done by directly monitoring the player’s motor cortex, so that the player imagining a particular motion (run, jump, caress minotaur, etc.) is translated directly into its in-game equivalent. Neural controls needn’t be so literal-minded, however – pretty much any thought could be assigned to any in-game action, provided the thought corresponds with a particular pattern of neuroelectric activity that can be consistently detected.
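As a toy sketch of that idea – the pattern labels, the classifier that would produce them, and the game commands are all invented here, since no real BCI library is involved – the mapping itself could be as simple as a lookup table:

```python
# Toy sketch: translate classified neuroelectric patterns into in-game
# commands. A real system would first train a classifier on each
# player's recorded brain activity; the labels below are hypothetical.

ACTION_MAP = {
    "imagine_run": "move_forward",
    "imagine_jump": "jump",
    "imagine_grip": "interact",
}

def dispatch(pattern_label: str) -> str:
    """Return the in-game command assigned to a classified pattern."""
    # Unrecognized patterns map to a harmless no-op.
    return ACTION_MAP.get(pattern_label, "idle")

print(dispatch("imagine_jump"))   # -> jump
print(dispatch("stray_signal"))   # -> idle
```

The arbitrariness of the table is the point: any reliably detected pattern can be wired to any action.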
You could even have controls based not on specific thoughts but on emotions or mental states. There’s at least one game already based on this concept, the marvelously titled Throw Trucks With Your Mind. It uses conventional first-person shooter controls for movement, but the strength of your character’s psychokinetic powers is based on your own level of focus and calm tracked by an EEG headset.
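In the simplest case, a mental-state control like that reduces to mapping a couple of EEG band-power readings onto a single ability multiplier. The weighting below is a made-up heuristic for illustration, not the game’s actual algorithm:

```python
# Toy model of "Throw Trucks With Your Mind"-style control: a scalar
# focus/calm score derived from EEG band power scales the strength of
# an in-game ability. The band weights here are invented.

def psychokinesis_strength(alpha_power: float, beta_power: float) -> float:
    """Map relative EEG band power to a 0-1 ability multiplier.

    Alpha activity is often associated with calm, beta with focus;
    this particular weighted sum is a hypothetical stand-in.
    """
    score = 0.6 * alpha_power + 0.4 * beta_power
    # Clamp to a sane range so noisy readings can't break the game.
    return max(0.0, min(1.0, score))

print(psychokinesis_strength(0.9, 0.5))  # ≈ 0.74
```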
This might also usher in a new era of video game peripherals that are impossible to use without looking like a complete tool, rivaling the glory days of the U-Force, Konami Laserscope, and Sega Activator – but sometimes progress demands sacrifice.
Looking farther into the future… I only said that you didn’t have to resort to something as drastic as neural implants. I didn’t say you couldn’t – and it would allow for a degree of both precision and two-way communication impossible with electronics that have to work with your thick skull in the way. Images, sounds, and even senses previously unavailable to video games, like touch and smell, might be sent directly into the appropriate parts of the brain. (Who among us HASN’T wondered what Pyramid Head smells like?)
That might seem outlandish, but if neural implants become cheaper, safer, and more commonplace, who knows?
3. Reading Minds
One likely application of brain-computer interfaces is a direct outgrowth of how the technology works: understanding which patterns of neuroelectric activity correspond to which thoughts, sensations, or intents. The DARPA artificial arm that kicked off this article works because it understands that one pattern of electrical activity means “I want my hand to open,” another means “Something is touching my index finger,” and so on.
Consider what you could do with a more sophisticated sensor system and understanding of the brain. You might learn which neural signals correspond to sounds or words, allowing people to “speak” with thought. You could use this to communicate with a computer, or with other people in a sort of electronic telepathy – the United States Army is already researching the latter possibility. People unable to speak normally due to illness or injury, or who have trouble with verbal communication due to conditions like autism, could speak via a voice synthesizer as quickly and easily as the rest of us speak with our own voices – no typing, just thought turned into sound.
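To make the idea concrete, here is a hugely simplified sketch of such a decoder: compare an incoming neural feature vector against stored templates for known words and pick the nearest. The vectors, the vocabulary, and the notion that three numbers could capture a word are all illustrative inventions – real speech decoders are far more sophisticated:

```python
# Hypothetical "electronic telepathy" decoder: nearest-template
# matching of a neural feature vector to a tiny vocabulary.
import math

TEMPLATES = {
    "hello": [0.9, 0.1, 0.2],
    "yes":   [0.1, 0.8, 0.3],
    "no":    [0.2, 0.2, 0.9],
}

def decode(features):
    """Return the word whose stored template is nearest (Euclidean)."""
    return min(TEMPLATES, key=lambda w: math.dist(features, TEMPLATES[w]))

print(decode([0.85, 0.15, 0.25]))  # -> hello
```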
Of course, this wouldn’t just allow people to speak with their minds. It could be used to force them. The ability to actually hear people’s thoughts would be incredibly useful to police, courts, employers, and all sorts of other people seeking information possessed by someone who might not want to reveal it. Even something much less ambitious, like a lie detector that actually works consistently, could radically change the legal system, among other things.
The ethical concerns raised by this are considerable. Such technology could be used to intrude on someone’s privacy on a level beyond anything possible today. Throughout history, even in the most oppressive regimes or the most horrifically abusive families, the oppressed could at least rebel in their own thoughts; a day could come when that’s no longer true.
Even with more well-intentioned applications, questions remain. For instance, in the United States and other nations with legal systems based in English common law, criminal suspects cannot be compelled to testify against themselves. Suppose you put someone accused of murder under some sensors and saw them think, “Yes, I killed her, and I’d do it again! Whose waifu is trash NOW, Mom?”
Is that brainscan data testimony, or is it more akin to forensic evidence like a DNA sample? If it’s testimony, then any evidence of guilt gained by scanning the brain of a suspect without their consent amounts to forced self-incrimination.
The point of the dire speculations raised in this section is not to be alarmist, or to give the impression that brain-computer interfaces must inevitably lead to a nightmarish dystopia where totalitarian regimes peer into their subjects’ very souls and/or innocent waifus are calumniated with impunity. It’s not inevitable at all. It’s a possibility, just as using surgical tools to maim and kill instead of heal is a possibility. If that happens, it won’t be the technology’s fault.
4. Daily Life
Direct interface between mind and machine could change the more mundane parts of daily life. The same sort of sensors – whether invasive or not – that allow amputees to use artificial limbs or gamers to hurl trucks might be adapted for mental control of other machines, from industrial machinery to home appliances to vehicles, revolutionizing daily life for the disabled and the merely lazy alike.
This article began with using thought to control a mechanical limb designed to mimic the motions of a human body part, but the things you can use a brain-computer interface to control with your thoughts don’t have to be direct analogues of what your mind normally controls. Recall that the earliest successful experiments involving humans controlling a machine by thought didn’t involve mechanical limbs – they controlled a computer cursor! The brain is highly adaptable, and computers can be trained to interpret a given neural signal as any command desired.
For example, at the University of Minnesota, Dr. Bin He has created a small helicopter drone controlled by thought, using noninvasive sensors that detect signals in the operator’s motor cortex. Thinking “clench your right fist” steers it to the right, “clench your left fist” steers it left, striking the palm of your left hand thrice against your sternum arms the axially mounted coilgun and enables Autonomous Hunter-Killer Mode, and so on.
(In the interest of journalism, I probably ought to stress that Dr. He is actually a humane man who wants to build technology that helps people with disabilities, like mind-controlled wheelchairs or mechanical limbs, not an armada of coilgun-firing flying robots. At least as far as I’m aware.)
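In the same spirit as Dr. He’s setup – though the labels, commands, and the classifier that would produce the labels are all invented here for illustration – the control scheme amounts to a loop that turns a stream of classified motor-imagery events into flight commands:

```python
# Illustrative sketch of motor-imagery drone steering: classified
# motor-cortex events are consumed one at a time and mapped to flight
# commands. Event labels and commands are hypothetical.

COMMANDS = {
    "clench_right_fist": "bank_right",
    "clench_left_fist":  "bank_left",
    "clench_both_fists": "climb",
}

def fly(imagery_stream):
    """Yield one flight command per classified mental event."""
    for label in imagery_stream:
        # Anything unrecognized defaults to a safe hover, not a crash.
        yield COMMANDS.get(label, "hover")

print(list(fly(["clench_left_fist", "daydream", "clench_right_fist"])))
# -> ['bank_left', 'hover', 'bank_right']
```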
You might use your thoughts to change the channel on your television, drive a vehicle, or tell your computer what file or web site to open. Industrial machinery could be operated and monitored mentally, with greater potential safety and efficiency. Office work and Internet flame wars could move with blazing speed as thought is turned directly into text. With the growing potential for drones in commercial applications, being able to pilot one in your mind could be a handy skill, too.
5. Beyond Human
We began with brain-computer interfaces being used to replace lost limbs and allow disabled people to enjoy the same abilities as the rest of us. But we needn’t stop there.
There’s no reason a replacement limb has to be limited to merely restoring normal functionality. As the technology behind them advances, an artificial limb might be made not merely equal but superior to the organic original – stronger, faster, more durable, more precise, more sensitive. An artificial eye might see into the infrared or ultraviolet, produce higher-resolution images, or improve on humans’ shoddy peripheral vision.
You have to be careful with this. Maybe your mechanical superarm can support a thousand pounds – what about the soft, squishy torso it’s attached to? Still, you could do quite a bit to improve on nature.
Prosthetic body parts controlled via BCI wouldn’t necessarily have to mimic organic ones in form – as we’ve seen, BCI can already be used to control everything from computer cursors to little flying drones. An artificial arm might have extra digits, or tools attached or built in, or more joints, or… who knows? This would require advances not only in BCI technology but also in areas like materials science, electronics, and power storage. But there’s nothing particularly implausible about it.
And yet, the ability to run faster or lift heavier weights or have chainsaws for hands wouldn’t be the biggest consequence. With sufficiently advanced linkages between brains and computers, we could improve our mental faculties as well. A brain might be connected to computer memory storage, processors, and software that it interacts with in real time, giving abilities far beyond what would be possible for an unaugmented person. Intelligence, recall, reaction speed, spatial awareness – all might be affected.
This may eventually raise questions about just what a “human” is. Is someone with a mechanical arm human? Obviously. What about someone whose brain is linked to a processor that lets them do complex math problems in their head, or a hard drive that gives them perfect recall? Of course.
What about someone who’s connected to a lot of computer hardware and isn’t just using it to outsource stuff that machines are better at than humans? Suppose the computers are interwoven with their organic thought processes to the point that they’re involved in processing “general intelligence” tasks, emotions, social interaction, and creativity too? Or what about someone who’s replaced most of their original body with artificial parts, so that they’re basically a brain riding around in a mechanical shell?
I’d still say “of course they’re human,” but not everybody would.
What about someone who is far more intelligent than any unaugmented human who ever lived and can divide their attention enough to do calculus problems, cheer for their beloved Chicago Cubs, read the final A Song of Ice and Fire book when it’s finally published in the early 2080s, and conduct multiple unrelated conversations – simultaneously? And does 95 percent of the processing necessary for this on computers that use their human brain as a central hub? What if they then replace some of their flesh-and-blood brain with electronics that mimic its functions but do them more efficiently? What if they replace most of it? What if they eventually replace all of it?
At what point – if any – would a being who has transcended normal human limits by joining their mind with machines cease to be human? Does the question even matter, as far as how we ought to treat such a being, or how it ought to treat us?
This may sound like a fantastical scenario. (Especially the part about a hyper-intelligent posthuman being a Cubs fan.) There’s no guarantee it will ever come – but neither is there any guarantee it won’t, and the continuing advancement of brain-computer interface technology brings us closer and closer to a day when it might.