When Microsoft's J Allard told developers the Xbox 360 would ship with half a gig of RAM, they whooped and cheered. It meant more headroom for high polygon counts, detailed textures, and bigger levels. But when publishers looked at the resulting development budgets of $15 or $20 million to pay all those additional programmers and artists, they flipped out.
Three years ago, persuading a publisher to sink $5 million into a non-franchise title was an uphill battle; now that same game could cost five times as much, but units sold and retail prices haven't risen accordingly. Only a half-dozen or so games per year break into the world of multi-million unit shifters. Just as in Hollywood, a small number of tent poles support all the projects that fail. When EA wanted to make games more like the movies, this is not what they had in mind.
Welcome to the future of gaming.
Three years ago, I flew to Montreal and knocked on the door of an office I'd never seen before. After I was inside, a technician put little sticky, colored dots all over me and took digital photographs of my front, back, sides, hands, and head. Some men typed on keyboards. A few minutes later, there it was: a complete 3D model of me from head to toe, with photographic color textures already applied. There were plenty of glitches, but the result was nonetheless amazing. A full-body, high-resolution scan, geometry and texture, ready for cleanup and use in a game. It wasn't even one of those whirring, revolving laser things. They just took pictures, connected the sticky dots across the images, and the software interpolated my entire form. For a couple grand, I was my own avatar.
It was the summer of 2002, and we'd all seen the early screenshots from Doom 3. The world got its first big look at normal mapping, a technique where you start with a high-polygon model, bake its fine surface detail into a texture of per-pixel normals, and then apply that texture to the in-game low-polygon model so it catches light as if the detail were still there. The result is me, right there on the screen: a two-thousand-polygon playable character that looks almost indistinguishable from the two-million-polygon original. It's the kind of innovation that happens when you only have 64MB of video RAM. The developers I worked for engineered their own normal mapping code, from pipeline to playback, and I was their guinea pig. When I saw myself looking back from the screen in our game engine, photo-realistic and dressed exactly as I'd been that day in Montreal, I thought I was looking into the future.
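For the curious, the trick at playback time can be sketched in a few lines. This is a minimal illustration, not any engine's actual shader code (real engines do this per-pixel on the GPU, usually in tangent space); the function names are mine. A normal map stores a surface direction in each texel, remapped from [-1, 1] into RGB's [0, 255], and lighting a pixel is then just a dot product between that stored normal and the light direction:

```python
# Minimal sketch of normal-mapped diffuse lighting (illustrative names,
# not from any particular engine).

def decode_normal(rgb):
    """Unpack an 8-bit RGB texel into a unit normal vector.
    Each component is remapped from [0, 255] back to [-1, 1]."""
    n = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

def lambert(normal, light_dir):
    """Diffuse intensity: dot product of the decoded normal and the
    light direction, clamped at zero for surfaces facing away."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# The texel (128, 128, 255) encodes a normal pointing straight out of
# the surface, so a head-on light gives (nearly) full brightness.
flat = decode_normal((128, 128, 255))
lit = lambert(flat, (0.0, 0.0, 1.0))
```

The point is that the expensive geometry is gone at runtime: all that survives of the two-million-polygon scan is a texture of directions, and the lighting math makes the cheap mesh react as if the detail were still there.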
Two years later, the future arrived. Doom 3, Half-Life 2, Halo 2, and more. Characters were lifelike, environments were vivid, lighting was dramatic. Compared to the games of 2002, it was a quantum leap in technology, a real moment of future frisson.
And the games - well, they were okay.
That's not an easy sentence to write. But "okay" is about where they ended up. Doom 3? Some nice intensity, a lot of zombies in closets, and one measly design innovation: in the dark you could shoot or you could see, but you couldn't do both at the same time. (Most gamers hated that.) Half-Life 2? Amazing characters, impressive production, and the most tiresome and annoying physics-based jumping puzzles ever conceived. (Valve never got the memo that jumping puzzle + FPS = misery.) Halo 2? Plenty of two-gun fun, but no ending, and embarrassing cut-scene glitches.