Using the Kinect driver released last week, a hacker has built an application that renders 3D images from real-time video.
We all knew that the possibilities of a hacked Kinect would blow our minds, but I have to admit that I didn’t see this coming. Oliver Kreylos wrote an application in C++ that takes the video from the Kinect unit and maps each pixel into 3D space using the depth information the sensor provides. The result is a freaky-looking video that proves the concept of manipulating a 3D image in real time. The video of Kreylos shifting the viewpoint of the 3D image to hover above his own position is really amazing. Because the data comes from a single Kinect unit, half of Kreylos’ body is gone, and the black shadow represents everything the Kinect camera cannot see. But that’s what his room looks like in three dimensions.
The video itself is a little rough around the edges, but I kind of like the rotoscoping effect that it provides.
Keep in mind that this is done only with a single Kinect unit. If the application that Kreylos wrote could take information and video from two or more Kinects, then it’s possible that this image could have even more fidelity. And if that information could be interpreted into another 3D application, like say, a videogame, then that would be even more awesome.
We could use Kinect to model 3D environments and easily make games set in our office/classroom/apartment. I can finally throw away that ugly motion capture suit. Sweet.