As someone who does a bit of graphics programming both in my day job and as a hobby, I can safely say I’m a fan of graphics technology. Despite that, I think it’s long past time for PC developers to stop worrying about graphical spectacle, because the pursuit of graphics is doing us more harm than good.
In the original Wolfenstein 3D, the “level editor” was a simple little program that let you draw squares on a grid to create gamespace. You could make a playable room in well under a minute. It was laughably simple and primitive by today’s standards, but the game was at least forty hours long because the content was so easy to produce. (It would have almost been possible for someone to make levels as fast as you could play them.) A few years later in the Doom and Duke Nukem 3D era, level design had become slightly more elaborate. It took time to get the textures to line up and make the lighting interesting; that same room of gamespace might take five or ten minutes to produce. With Quake, the bar was raised even higher. Level design was basically 3D modeling, and it might take a whole hour to make the same amount of content.
You can see where this is going. The one hour room gave way to two hours, and eventually led to teams of people working for days to make just a few moments of playable content. Now you have someone designing the level, someone else making unique meshes to decorate the space, a specialized texture artist, and a lot of work being done to set up complex lighting systems, moving machinery, special environmental effects, and all of the other steps needed to take advantage of current-gen graphics engines. That’s more than a thousandfold increase in the amount of work required to give players a few seconds of entertainment. This inflation of man-hours is obviously unsustainable, and even the amount of work we’re putting into games now is probably too much. Taking another step forward is folly.
Don’t get me wrong: these sexy new polygons look great, and I certainly wouldn’t want to go back to the days of pixelated 2-D sprites sliding around a repetitive blocky room. The problem is that each new graphical step forward has cost us more and given us less in return, and at this point we’re getting a lot less than we’re giving up.
As these costs rose, we started getting less game for our money. Games began getting shorter. Forty hour games became twenty hour games. Then ten hour games. When developers couldn’t make any more cuts to gameplay, they began protecting their investments by simply taking fewer risks. It’s one thing to try something outlandish and innovative when a game costs half a million dollars to produce. It’s quite another to do so if the game is going to cost twenty million, and anything less than a complete commercial success will spell bankruptcy for your company.
In 1992, you could pay $40 for a forty hour game that was unlike anything we’d ever seen before. Today, you’ll pay $60 for a ten hour game that plays much like a lot of the titles you already have on your shelf. (Assuming you can get the thing to run at all.) We’re getting shorter, buggier games and less innovation. All this, and developers are still having trouble keeping up financially and technologically. The constant push to improve visuals is hurting both parties, and I think it would be great if we could just call a graphical time-out and try to make the most of what we have now.
It costs a lot to jump from one generation of technology to the next. Each new graphics engine has its own tools, its own quirks, its own limitations, its own visual trade-offs. It takes time to master these tools, and for the most part we’re throwing them out just when artists are getting good at them. Compare the debut PS2 titles with the games that came out near the end of its lifespan. (Which would be now, I guess. Quality PS2 titles are still coming out.) The newer games look better and run smoother, even though the hardware hasn’t changed. It’s possible to improve the visuals and performance of your game without changing the hardware at all, just by giving artists enough time to become adept with the tools.
What developers should do – and what should have happened years ago – is start treating the PC (and if we’re lucky, the Mac) like consoles. Pick a nice safe spot on the tech curve and make that your baseline target platform. Now keep it there for eight years or so. When you finish a game in 2003, make another game aimed at the same 2003 level hardware. Then another. Get three or four games out of your tech before you re-invent the wheel. Sure, it means the graphics will look a little stale the third or fourth time around, but the games will be cheaper to produce. Millions of dollars cheaper.
And at this point in the tech curve, a lot of people might not even notice you’re standing still. Quake II came out five years after Wolfenstein 3D. In those five years we’d seen the world of in-game graphics revolutionized twice. (At least.) Anyone who released a game with Wolfenstein-level graphics in 1997 would have been laughed at. Yet here we are five years after the release of Doom 3, and that game barely looks dated at all. You could be pumping out games based on 2004-level technology and produce something that’s commercially viable, attractive to look at, and relatively cheap to produce. (Cheap compared to chasing after the next engine, anyway.) I suspect that with strong art direction and experienced artists you could actually get another five years out of that 2004 technology before you absolutely had to move to a new generation.
Yes, there are mainstream game reviewers out there who are obsessed with graphics and spend their non-gaming hours masturbating to the NVIDIA product catalog. They will indeed give you a hard time because you’re not using the next-gen bling mapping. I’m sorry about those guys. But for what it’s worth, some reviewers won’t do that, and I think consumers will be happy to pony up for your game as long as it’s fun. This might sound risky, but think about the millions you’ll save in development costs. You’ll be producing a game for less money that can run on a far larger portion of PCs. It will run smoother, be less of a support headache, and give gamers more value for their gaming dollar. That sounds like a winning strategy to me. All you have to do is sacrifice a bit of your graphical spectacle. The odd snarky review might cost you a few sales, but I can’t imagine it will hurt you as badly as riding the bleeding edge. What are you after here? Do you want the approval of a jaded graphics fetishist or do you want to make awesome games?