Oliver Kreylos has constructed a telepresence system using an Oculus Rift, three Microsoft Kinect sensors, and a fair amount of coding know-how.
Video captured by the Kinect trio is fused to create a mushy but stable 3D video of Kreylos.
"One of the things we've noticed since we started working with 3D video to create "holographic" avatars many years ago was that, even with low-res and low-quality 3D video, the resulting avatars just feel real," said Kreylos on his blog. "I believe it's related to the uncanny valley principle, in that fuzzy 3D video that moves in a very lifelike fashion is more believable to the brain than high-quality avatars that don't quite move right."
All the work shown in the above video was achieved with Xbox 360-era Kinect sensors. If and when Kreylos upgrades to the newer Kinect for Xbox One sensors, the 3D video should see a serious bump in quality.
The hardware used by Kreylos on the project? "It's run by a single Linux computer (Intel Core i7 @ 3.5 GHz, 8 GB RAM, Nvidia Geforce GTX 770), which receives raw depth and color image streams from three Kinects-for-Xbox, is connected to an external tracking server for head position and wand position and orientation, and drives an Oculus Rift and a secondary monoscopic view from the same viewpoint (exactly the view shown in the video) on the desktop display."
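Kreylos's own software isn't shown here, but the core fusion step he describes — turning each Kinect's raw depth stream into 3D points and merging the three cameras into one shared scene — can be sketched roughly as follows. This is a minimal illustration, not his implementation: the pinhole back-projection is the standard model for depth cameras, and the intrinsics and camera-to-world transforms below are hypothetical placeholders standing in for a real per-Kinect calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-space 3D points
    with a pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

def fuse_clouds(depth_images, intrinsics, extrinsics):
    """Merge per-camera point clouds into one world-space cloud.
    extrinsics[i] is a 4x4 camera-to-world transform obtained from
    calibrating Kinect i against a common reference frame."""
    clouds = []
    for depth, (fx, fy, cx, cy), T in zip(depth_images, intrinsics, extrinsics):
        pts = depth_to_points(depth, fx, fy, cx, cy)
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        clouds.append((homo @ T.T)[:, :3])
    return np.vstack(clouds)
```

In a real pipeline like the one described above, each fused point would also carry a color sampled from the matching Kinect color stream, and the merged cloud would be re-rendered from the Rift's head-tracked viewpoint every frame.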
[Additional Source: Engadget]