Every “three-dimensional” game you play is a phony. On your flatscreen TV or monitor, your so-called 3-D game presents nothing but an illusion of depth, a sleight of hand, a fake. When will we get computer gaming in three real, no-kidding spatial dimensions?
Answer: By some standards, it’s already here. But if you’re waiting for something really cool, stick with the fakes.
Getting to Third
A device that creates 3-D imagery is stereoscopic. If the images are viewable without special gadgetry (glasses, goggles, View-Master), the device is autostereoscopic.
Flat-panel monitors create 3-D effects using “spatially-multiplexed parallax” – presenting a different image to each eye. Everyone knows anaglyph images, those movies and comic books with red and blue images printed out of register. The illusion of depth arises when you view them through red-blue glasses. One anaglyphic computer game, among many, is Polish programmer Jacek Fedorynski’s Quake II for Red-Blue 3-D Glasses; he’s the guy who did Text Mode Quake II. There are several other kinds of stereoscopic goggles, like electric shutter glasses that flick open and shut 50 times per second in synchrony with alternating “right” and “left” frames on your monitor. If you try playing Counter-Strike that way, please post Flickr photos of you wearing the goggles. Tag them “dork.”
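The anaglyph trick itself is almost embarrassingly simple: keep the left image’s red channel and the right image’s green and blue channels, and let the tinted lenses sort out which eye sees which. A minimal sketch in Python – images here are just lists of rows of (R, G, B) tuples; a real game would do this per frame on the GPU:

```python
def make_anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph.

    Each image is a list of rows of (R, G, B) tuples. The red lens
    passes only the left image's red channel to the left eye; the
    cyan (blue-green) lens passes the right image's green and blue
    channels to the right eye.
    """
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# A one-pixel "stereo pair": red comes from the left view,
# green and blue from the right view.
left = [[(200, 10, 10)]]
right = [[(10, 150, 250)]]
print(make_anaglyph(left, right))  # [[(200, 150, 250)]]
```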
The main problem with stereoscopic glasses is, after using them you throw up. Fortunately, there are many ways to create autostereoscopic (non-goggled, non-nauseating) parallax, such as physical parallax barriers, polarization and beam splitting. A promotional PDF brochure from 3-D monitor company SeeReal Technologies includes short summaries of some major hardware approaches.
You can also get an autostereoscopic effect with software alone, such as the popular Raster3D program scientists employ for molecular imaging. Workstations that use Raster3D often split the screen into side-by-side “stereo pair” images, so you must cross your eyes to see them in 3-D. Here’s a sample stereo pair, but be warned: If you have trouble seeing those Magic Eye stereograms, don’t even try this.
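What makes a stereo pair read as 3-D is horizontal disparity: nearer objects sit farther apart between the two half-images. The standard pinhole-camera relation – disparity equals baseline times focal length divided by depth – fits in one line of Python; the numbers below (roughly human 65mm eye spacing, an 800-pixel focal length) are illustrative assumptions, not anything Raster3D prescribes:

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal pixel disparity between left and right views of a
    point at the given depth, for a pinhole stereo camera pair."""
    return baseline_m * focal_px / depth_m

# Assumed setup: ~65mm interocular baseline, 800px focal length.
for depth in (0.5, 2.0, 10.0):
    print(f"{depth:5.1f} m -> {disparity_px(0.065, 800, depth):6.2f} px")
# Nearby points shift by tens of pixels; distant ones barely move.
# That falloff is exactly the cue your crossed eyes fuse into depth.
```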
The 3-D monitor industry simmers continually. The Stereoscopic 3-D Web Ring currently has 139 members. Established enterprises like Dimension Technologies and Real D Scientific, as well as relative newcomers like SeeReal, cater mainly to the military, oil companies and specialized markets like industrial CAD/CAM prototyping, scientific (molecular and medical) imaging and photogrammetry, the spy’s practice of deducing physical dimension of objects from measurements on photographs.
Gaming? Not so much. In 2004, SeeReal licensed its technology to the German game company Trinigy for use in the Vision game engine – seemingly without result. None of the other companies boast gaming connections.
Meanwhile, work in holographic animation, or holovideo, has slowed a lot since MIT’s famous Media Lab closed its Spatial Imaging Group in 2004. The holographic video project created only two prototypes, the Mark-I and the Mark-II: “the Mark-I display is capable of rendering full color 25x25x25mm images with a 15 degree view zone at rates around 20 frames per second. The Mark-II display provides 150x75x150mm images with a 36 degree view zone at rates of around 2.5 frames per second” – unexciting benchmarks for a Halo player.
Some MIT Media Lab grads later founded Zebra Imaging, a 3-D holographic imaging company that makes a cool Holo-Touch Workstation not far short of Minority Report. But again, they promise no gaming prospects.
If you want this hard-nosed commercial tech adapted to something as frivolous as gaming, it turns out you must go – where else? – on campus.
In June 2004, the online edition of the electrical engineering journal IEEE Spectrum made a rare venture into science fiction with Vernor Vinge’s story “Synthetic Serendipity,” excerpted from his 2006 novel Rainbows End. Vinge presents a world blanketed with always-on, pervasively networked sensors. Characters move easily through this ocean of mediated information, interfacing with it through contact-lens displays and clothing that senses twitching muscles. Harry Goldstein’s companion article to “Serendipity” outlines the tech underlying this augmented environment – cheap sensors, mesh networks, ubiquitous computing, haptics and what some call “augmented reality” (AR).
Goldstein mentions ARQuake, a 2001-02 research project by the Wearable Computer Lab at the University of South Australia in Adelaide. As described in the ARQuake FAQ, “We modified Quake to take its view information from a GPS and orientation sensor, and so as you walk around, Quake moves in sync with the real world. Monsters and buildings appear to sit in the real world as though they were really there, and then you can play a game of Quake while you are in the real physical world.” They used Quake because it’s open-source, and because they needed to keep the monsters stupid: “When running around outside, it is not possible to move at the same speed as on a desktop … so if the monsters are too smart and too fast, they will always beat you. In Quake, we use monsters which are intentionally slow and not too powerful, to give the player a chance to actually beat them.” The team later revised ARQuake as part of the ongoing Tinmith project, which includes their Black & White-like Hand of God. You yourself can’t play ARQuake – the lab is busy, and the hardware costs way too much – but some of the principals have started a company, A-Rage (for “Augmented Reality Active Game Engine”) that is working on AR Sky Invaders, a commercial Space Invaders-type game.
The Augmented Environments Laboratory at the Georgia Institute of Technology has created ARCraft, a real-time strategy game. “Using head tracking and wand-based interaction, each player navigates his or her fighting force around obstacles while hunting for the enemy. Our research goal is to investigate how people can use AR to work together (or against each other) in a shared virtual space while maintaining remote physical spaces.” Again, though, ARCraft is private. You can’t play. Go home.
If not augmented reality, what about virtual reality? It’s just like AR, except you can’t move around. Arguably the world’s best-established 3-D view technology is the CAVE, or Cave Automatic Virtual Environment. Standing in a dark 10-foot cube with sides made of rear-projection screens, you wear (groan) shutter glasses with positional sensors. Projected imagery surrounds you and moves in response to your head movements. CAVEs have been around for 15 years, and lots of campus computer science departments have one. They’re used for medical visualization, prototyping, cognitive psychology research and urban planning, but not for gaming – at least not where the Board of Regents can find out.
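The geometry that keeps a CAVE wall convincing is off-axis projection: every frame, each wall’s view frustum is recomputed from the tracked head position, so the projected image skews correctly as you move. A minimal sketch of that math in Python, assuming a flat rectangular wall with known corner positions (the corner coordinates below are illustrative):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def normalize(a):
    m = math.sqrt(dot(a, a))
    return tuple(x / m for x in a)

def offaxis_frustum(pa, pb, pc, pe, near):
    """Frustum extents (left, right, bottom, top) at the near plane
    for a screen with corners pa (lower-left), pb (lower-right),
    pc (upper-left), viewed from eye position pe."""
    vr = normalize(sub(pb, pa))        # screen right axis
    vu = normalize(sub(pc, pa))        # screen up axis
    vn = normalize(cross(vr, vu))      # screen normal, toward the eye
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)                   # eye-to-screen distance
    return (dot(vr, va) * near / d, dot(vr, vb) * near / d,
            dot(vu, va) * near / d, dot(vu, vc) * near / d)

# Head dead center in front of a 2x2m wall: a symmetric frustum.
print(offaxis_frustum((-1, -1, -1), (1, -1, -1), (-1, 1, -1), (0, 0, 0), 1.0))
# Head steps 0.5m to the right: the frustum skews left to compensate.
print(offaxis_frustum((-1, -1, -1), (1, -1, -1), (-1, 1, -1), (0.5, 0, 0), 1.0))
```

Those four numbers drop straight into a projection call like OpenGL’s glFrustum; simulation sickness is what happens when the pipeline can’t recompute and redraw them fast enough to keep up with your head.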
In case you can’t muster the six-digit installation cost for your own CAVE, you might try the budget versions, ImmersaDesk and Infinity Wall – though these basically amount to home theater setups.
One nasty problem with CAVEs and augmented reality is, after you try them for a few minutes – stop me if you’ve heard this one – you throw up. Simulation sickness arises from lag between your head movements and the onscreen visuals; even delays measured in milliseconds can make sensitive users queasy. They never talk about that on the Star Trek holodeck.
Volume, Volume, Volume
You’ve already noticed all these methods still use bogus 2-D flatscreen fakery. For genuine three-dimensional graphics out in real space, you’re talking volumetric displays. As the remarkably good Wikipedia article notes, the chief problem with volumetric displays of the “Help me, Obi-Wan Kenobi” kind is the mind-boggling amount of data they require: “For example, a 24-bit 70 volume/sec 1024×1024×768 display might need up to 180 GB/s transferred to the electro-optic modulator components.”
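That figure is easy to sanity-check with back-of-the-envelope arithmetic: voxels per volume, times bytes per voxel, times volume refresh rate. The raw pixel math lands near 169 GB/s; Wikipedia’s “up to 180 GB/s” presumably leaves headroom for overhead. A quick check in Python:

```python
# Sanity-check Wikipedia's bandwidth figure for a volumetric display.
voxels_per_volume = 1024 * 1024 * 768  # the display's voxel grid
bytes_per_voxel = 3                    # 24-bit color
volumes_per_sec = 70                   # volume refresh rate

rate = voxels_per_volume * bytes_per_voxel * volumes_per_sec
print(f"{rate:,} bytes/s = {rate / 1e9:.1f} GB/s")
# -> 169,114,337,280 bytes/s = 169.1 GB/s, comfortably "up to 180 GB/s"
```

For scale, that is hundreds of times the memory bandwidth of a 2006 gaming PC – which is why the hardware, not the rendering, is the bottleneck.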
Volumetric engineering has made slow but steady progress. Actuality Systems already markets a pricey industrial 3-D display, the Perspecta: “Perspecta provides hologram-like spatial 3-D full-color and full-motion images that occupy a volume in space. Users experience an all-encompassing 360-degree view and simultaneous multi-view collaboration without goggles or any assistive device.” Actuality targets medical markets, though an October 2006 interview with Actuality founder Gregg Favalora tantalizingly mentions “exciting, hologram-like displays for the desktop and arcade.”
San Francisco-based IO2 Technology sells a “floating free-space interactive display,” the M2 Heliodisplay. It’s a high-tech fog generator that projects an image onto a cloud of micro-droplets. So far, it generates only a 2-D image. Inventor Chad Dyner said in an interview, “You can play games on the Heliodisplay, but the picture quality would work for only certain types of games today.” He added, optimistically, “This is not to say that with a future version this would not be more widely adopted.”
Research continues. In late November 2006, Japan’s National Institute of Information and Communications Technology (NICT) and Kobe University demonstrated a thin-panel device that forms 3-D images in the air.
What do all the companies and institutions cited here have in common? That’s right – no gaming connection whatever. For volumetric gaming, you can’t find anything beyond a few oddball art projects, like the Matrixx 3-D display built in April 2006 by electrical engineering students at the Delft (Netherlands) University of Technology. What was the Matrixx? A giant array of 8,000 red diodes in 8,000 suspended ping-pong balls, connected with four kilometers of copper wire. This grandiose thingy, the largest electronic 3-D display in the history of the world, let passersby play – woohoo, hold me back! – 3-D Snake, Duck Hunt and Pong. That, friends, is the state of the art in true 3-D gaming. Oh well, at least it’ll get here faster than your robot housekeeper and your personal helicopter.