Abstract
We consider a face-to-face videoconferencing system that uses a Kinect camera at each end of the link for 3D modeling and an ordinary 2D display for output. The Kinect camera allows a 3D model of each participant to be transmitted; the (assumed static) background is sent separately. Furthermore, the Kinect tracks the receiver's head, allowing our system to render a view of the sender that depends on the receiver's viewpoint. The resulting motion parallax gives receivers a strong impression of 3D viewing as they move, yet the system needs only an ordinary 2D display. This is cheaper than a full 3D system, and avoids disadvantages such as the need to wear shutter glasses or a VR headset, or to sit in the particular position required by an autostereoscopic display. Perceptual studies show that users experience a greater sensation of depth with our system than with a typical 2D videoconferencing system.
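The abstract does not spell out the rendering math. A standard way to realize head-coupled ("fish-tank") motion parallax on a fixed 2D display is an off-axis (asymmetric-frustum) projection driven by the tracked head position; the sketch below illustrates that general technique, not necessarily the authors' implementation. The function name `head_coupled_matrices`, the NumPy dependency, and the example screen geometry and near/far planes are illustrative assumptions.

```python
import numpy as np

def head_coupled_matrices(eye, screen_ll, screen_lr, screen_ul, near=0.1, far=10.0):
    """Projection and view matrices for rendering a fixed screen from a tracked eye.

    eye       -- tracked head/eye position in world units (e.g. metres), from the depth camera
    screen_*  -- world positions of the display's lower-left, lower-right, upper-left corners
    Returns (P, V): a 4x4 off-axis projection matrix and a 4x4 view matrix.
    """
    # Orthonormal basis of the screen plane
    vr = screen_lr - screen_ll; vr = vr / np.linalg.norm(vr)   # screen "right"
    vu = screen_ul - screen_ll; vu = vu / np.linalg.norm(vu)   # screen "up"
    vn = np.cross(vr, vu);      vn = vn / np.linalg.norm(vn)   # screen normal, towards the viewer

    # Vectors from the eye to the screen corners, and the eye-to-screen distance
    va = screen_ll - eye
    vb = screen_lr - eye
    vc = screen_ul - eye
    d = -np.dot(va, vn)

    # Frustum extents on the near plane; they become asymmetric as the head moves off-centre
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard glFrustum-style projection matrix
    P = np.array([
        [2 * near / (r - l), 0.0,                (r + l) / (r - l),            0.0],
        [0.0,                2 * near / (t - b), (t + b) / (t - b),            0.0],
        [0.0,                0.0,               -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0.0,                0.0,               -1.0,                          0.0],
    ])

    # View matrix: rotate the world into the screen's basis, then translate by -eye
    R = np.eye(4)
    R[0, :3], R[1, :3], R[2, :3] = vr, vu, vn
    T = np.eye(4)
    T[:3, 3] = -eye
    V = R @ T
    return P, V

# Example: a screen 0.5 m wide and 0.3 m tall centred at the origin, with the tracked head
# 0.6 m in front and 0.1 m to the right -- the frustum skews, shifting the rendered view.
eye = np.array([0.1, 0.0, 0.6])
P, V = head_coupled_matrices(eye,
                             screen_ll=np.array([-0.25, -0.15, 0.0]),
                             screen_lr=np.array([ 0.25, -0.15, 0.0]),
                             screen_ul=np.array([-0.25,  0.15, 0.0]))
```

Re-computing these matrices every frame from the latest tracked head position is what produces the motion-parallax effect described above; the transmitted 3D model of the sender is simply re-rendered under the new viewpoint.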
| Original language | English |
|---|---|
| Pages (from-to) | 131-142 |
| Number of pages | 12 |
| Journal | Computational Visual Media |
| Volume | 2 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 1 Mar 2016 |
Keywords
- motion parallax
- naked-eye 3D
- real-time 3D modeling
- videoconferencing