“Why make videogames when you can make art?” David Cage asks, as I visit him at Quantic Dream headquarters on Boulevard Davout in Paris.
His delivery is plain, without pomposity. He doesn’t even look at me, but seems lost in thought about his motivations: what drives him down this difficult path, looking for new ways to tell stories with emotional complexity in a medium where the safe bet is to make another shooter.
Cage is short, unshaven, thin-haired and soft-spoken. His face is agile, with eyes that dart away in response to difficult questions, probing the corners of the room for answers. No hint of showy, executive self-assertion; instead, a sense of ceaseless searching.
It’s a somewhat scruffy mug for a game studio CEO, but one befitting a videogame auteur (though he shuns the “videogame” moniker, preferring “interactive drama”) responsible for the offbeat 2005 effort Indigo Prophecy. At the moment, Cage commands an army of 70 employees, hard at work on his latest interactive drama. The game in progress, Heavy Rain, has already generated a lot of buzz, based only on a tech demo presented at last year’s E3.
The demo, called “The Casting,” is a five-minute monologue in which a young woman named Mary Smith, a hopeful actress at an audition, moves through wistful happiness, broken-hearted sadness, suicidal despair and lethal determination to, finally, disappointment. It’s not a scene from Heavy Rain, but a showcase of the cutting-edge motion capture Quantic is using, commissioned by Sony for E3.
Facial animation has become a key feature in Sony’s next-gen vision. At E3 2005, the pliable cheeks, breakable noses and rolling eyes in Fight Night Round 3 were highlights among scarce PS3 offerings. Last year “The Casting” raised eyebrows on and off screen, and since then Heavenly Sword has impressed us with cut scenes created at Peter Jackson’s CGI company, Weta Digital.
And Sony is far from alone in chasing the facial animation dream. Valve keeps refining its muscular system in Episode Two, and BioWare’s upcoming space opera Mass Effect relies heavily on sophisticated animation to make dialogue vibrant.
“Videogames today are about primitive emotions,” Cage says. “We need to bring it to the next level, closer to theatre, literature and cinema – art forms that can convey any kind of emotion and tell any kind of story. Except, we don’t only want to watch it, we want to be a part of it.”
The potential rewards are threefold. First, better facial animation means greater range and nuance in acting: the sheer artistic difference between Johnny Drama and Helen Mirren. Second, expressive faces make players care about characters and story, motivating us to keep playing. Third, it opens possibilities for new types of gameplay. When characters smirk, wink, wince and display facial tics, it’s no longer enough to merely listen and read the HUD. You need to keep your irony detector, gaydar and bullshit gauge calibrated, and adjust your approach depending on emotional response.
The motion capture for “The Casting” was done in Quantic’s in-house studio, on the floor below Cage’s office. It’s an empty space with an intricate pattern on the floor, looking like a basketball court with scrambled lines. As Aurélie Brancilhon performed the role of Mary Smith on this strange stage, she was covered with reference points, about 50 markers in the face alone, for the cameras lining the walls and ceiling to focus on.
It’s an expensive setup, but a game like Heavy Rain would be impossible to make without an internal studio, Cage says. While the details of the game are still shrouded in secrecy, he says it’s a game with “no weapon – there are much more interesting things to play with than weapons,” and that just shooting body movement and facial expressions takes an estimated five months. That’s longer than it takes to shoot a movie.
“The actors are our vehicles for emotion. We need to make sure that we capture every last bit of their performance and translate the emotion into the game. You know, getting 80 percent of that is quite easy. But after that every additional percent takes a lot of effort,” Cage says. “Right now, we’re working on lips and the revealing of teeth. It’s incredibly complex, a matter of fractions of a millimeter, and if it’s not right you’ll immediately notice. We’re also working on the tongue. You may not think you see my tongue moving while I’m talking, but believe me, if it stopped you would be horrified.”
Quantic’s latest milestone is a new system for eye movement, completed in September. Eyes can’t be motion captured, since you can’t attach markers to them, and Quantic has been experimenting with video tracking, struggling to perfectly capture the minute movements of a living eye. Cage is visibly proud as he shows me two bloodshot eyes gazing out from a 28-inch screen, restlessly moving, blinking in soft, rapid motions.
“This is in the engine. It’s not CG, it’s in the engine,” he says. “It’s really alive. It has soul. Soul is in the eyes.”
With such devotion to detail, it seems odd that Quantic employs only three animators, but this is a consequence of Cage’s philosophy of keeping performances pure, captured by high-fidelity technology and preserved in pristine condition. “As soon as you start touching up an animation, you kill it. Life is a very fragile thing.”
To some, the illusion of life is so fragile that the slightest aberrations destroy it. Some flinch at the sight of Mary Smith in “The Casting” because of slight glitches in her facial movements. There’s a botox-like rigidity to her upper lip, a vaguely disquieting quality in the clipping of her eyelids – little things that would go unnoticed if her body language were more stylized, but are jarring in such a lifelike performance.
The unease caused by artificial humans that are close to, but not entirely, realistic is often called “the uncanny valley,” a term coined by Japanese roboticist Masahiro Mori in 1970. According to Mori’s theory, a replica of a human will elicit more empathy the more realistic it is, up until a point where the simulacrum becomes so lifelike that small faults pop out and make it seem uncanny. The “valley” is the dip in a graph plotting the familiarity one feels with the simulacrum against its human likeness.
But the uncanny valley works differently for different people. “The Casting,” for instance, divides people into two categories: the critical-minded gripe about disturbing flaws, while those more willing to suspend disbelief are amazed at the depth of human emotion expressed by Mary Smith.
Jens Matthies, the Art Director at Starbreeze Studios, has an ambivalent relationship to moving faces and the uncanny. He explains over the phone that he subscribes to Mori’s theory and holds a pessimistic view about the long climb out of the valley. Still, Starbreeze worked hard on the facial animations in The Darkness, going as far as developing a proprietary system for facial animation, a process Matthies describes as “insanely laborious.”
Matthies and his colleagues dubbed their method “VoCap,” since it captures voiceover and motion simultaneously. After spending lots of time developing the system, they set up shop in a sound recording studio in Santa Monica, California. The process was fraught with unforeseen problems, like finding a mocap suit large enough for a 350-pound mob character and creating workarounds for limitations in the studio equipment.
“We certainly had to pay for our high ambitions with incredibly hard work, but in the end I’m happy with the result. And we learned a lot that will go straight into our next production,” Matthies says.
He hopes to reach an even higher level of authenticity in upcoming games, drawing on past experience, and he says Starbreeze is still dedicated to advancing facial animation, even though it could take decades until animations feel genuinely human. “We want to make people really feel something, and this means we want the characters to be as real as possible. There’s so much potential to take the medium further with acting, script and storytelling, and we’ll keep working on it. This business is a constant arms race. That’s part of the fun.”
An arms race indeed. Three years ago, Half-Life 2 was the gold standard for facial animation. “The Casting” raised the bar in 2006, and at the moment the cut scenes Weta Digital created for Ninja Theory’s Heavenly Sword are bleeding-edge. In the immediate future, the game to watch is BioWare’s Mass Effect.
BioWare has its work cut out for it: creating complex human expressions by combining modular “performance elements.” The reason for this approach, Project Director Casey Hudson explains in an email, is that they want to be able to create huge amounts of high-quality interactive dialogue without having to motion capture everything.
Hudson says this generation’s hardware power provides new possibilities, like wrinkling skin, and this means interactive scenes, for the first time, can rival the drama of live-action movies. “In one scene in Mass Effect, you can reprimand a female officer for doing something dangerous, and as she looks back at you with eyes that appear to well with tears, you suddenly feel apologetic. At that point you realize that you’re experiencing something entirely different from anything you’ve played before.”
BioWare takes the integration of facial animation one step further, beyond synthesizing emotion. As the animation system came together, the designers realized they could use it as a part of the gameplay. “One example is where you find out that a certain character has a ‘tell’ – an expression that indicates that he’s lying,” Hudson says. “If you learn this about him, you can call his bluff at the right time and get him to admit to what he’s really up to.”
Back in Paris, Cage hesitates to make that leap. In Indigo Prophecy the mental health of the main character was reflected in his face and body language, but there was still a gauge in the corner of the screen that spelled it out for the player.
Cage doubts that he will be confident enough to make the performances an integral part of the gameplay in Heavy Rain. Sure, he wants to create avant-garde art, but at the same time he desperately wants people to simply get what he’s doing. Cutting-edge facial animation is not an end in itself, but a tool he wields to hack the medium.
“I started working on videogames because I thought these toys for kids would one day become a new mainstream art form. Now, there are still gamers who want to drive and shoot and fight, but there are also gamers who are interested in a different type of gameplay, based on emotion, characters and storytelling,” Cage says with an absent look in his eyes.
As I look down into my notebook, preparing to shut it and thank him for his time, Cage leans forward in his chair, fixing me with his gaze to make sure I’m paying attention.
“If we can make simple scenes from daily life interesting to play, like two people just talking, then we have a whole new world in front of us,” he says. “Then we can do anything.”
Sam Sundberg is a freelance writer based in Stockholm. He is also the games editor at Svenska Dagbladet, a leading Swedish newspaper.