
There is rapidly growing interest in the creation of rendered environments and content for virtual and augmented reality. Currently, the most popular approaches include polygonal environments created with game engines and 360-degree spherical cameras used to capture live-action video. However, these tools were not originally designed to leverage the richer visual cues available in immersive environments, where users laterally shift viewpoints, manually interact with models, and employ stereoscopic vision. A fresh look is needed at computer graphics techniques that can capitalize on the unique affordances that make virtual and augmented reality so compelling. Furthermore, the manual creation of high-quality immersive content is time- and labor-intensive. To address these challenges, the Illusioneering Lab researches new techniques and technologies for creating lifelike immersive content through the automatic capture and digitization of real-world objects, people, and behaviors.

A complete pipeline for the capture, reconstruction, and real-time rendering of a photorealistic object with dynamic lighting using a single handheld RGB-D camera (Chen and Suma Rosenberg 2018); a code sketch of the fusion stage appears after these highlights.
A rapid avatar capture system that can generate a photorealistic, interactive virtual character capable of movement, emotion, speech, and gesture through a near-automatic process that takes less than 20 minutes (Feng et al. 2017).
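To illustrate one stage of a pipeline like the one described above, the sketch below fuses a sequence of RGB-D frames into a colored triangle mesh using volumetric TSDF integration. This is a minimal example, not the implementation from the cited papers: the use of the open-source Open3D library, and the file names, camera intrinsics, poses, and parameter values, are all illustrative assumptions.

    import numpy as np
    import open3d as o3d

    # Illustrative input: a short sequence of RGB-D frames on disk.
    # File names are placeholders; in practice these come from a handheld sensor.
    frames = [("color_000.jpg", "depth_000.png"),
              ("color_001.jpg", "depth_001.png")]

    # Camera-to-world poses for each frame. Identity matrices stand in here;
    # a real pipeline would estimate them with RGB-D odometry or tracking.
    poses = [np.eye(4) for _ in frames]

    # A scalable TSDF volume fuses noisy per-frame depth into a single
    # truncated signed distance field, from which a mesh can be extracted.
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=4.0 / 512.0,  # roughly 8 mm voxels
        sdf_trunc=0.04,            # truncation distance in meters
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8,
    )

    # Assumed intrinsics: Open3D's PrimeSense defaults, typical of
    # commodity depth sensors.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault
    )

    for (color_path, depth_path), pose in zip(frames, poses):
        color = o3d.io.read_image(color_path)
        depth = o3d.io.read_image(depth_path)
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_trunc=1.0, convert_rgb_to_intensity=False
        )
        # integrate() expects a world-to-camera extrinsic, hence the inverse.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

    # Extract a colored triangle mesh suitable for real-time rendering.
    mesh = volume.extract_triangle_mesh()
    mesh.compute_vertex_normals()
    o3d.io.write_triangle_mesh("object.ply", mesh)

TSDF fusion is a common choice for this step because it averages away per-frame depth noise while keeping integration incremental, one frame at a time.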

Selected Publications

Virtual Content Creation Using Dynamic Omnidirectional Texture Synthesis

C. Chen and E. Suma Rosenberg. IEEE Virtual Reality, pp. 521–522, 2018.

View-Dependent Virtual Reality Content from RGB-D Images

C. Chen, M. Bolas, and E. Suma Rosenberg. IEEE International Conference on Image Processing, pp. 2931–2935, 2017.

Just-in-Time, Viable, 3-D Avatars from Scans

A. Feng, E. Suma Rosenberg, and A. Shapiro. Computer Animation and Virtual Worlds, 28 (3–4), e1769, 2017.

Rapid Photorealistic Blendshape Modeling from RGB-D Sensors

D. Casas, A. Feng, O. Alexander, G. Fyffe, P. Debevec, R. Ichikari, H. Li, K. Olszewski, E. Suma, and A. Shapiro. International Conference on Computer Animation and Social Agents, pp. 121–129, 2016.

Creating Near-Field VR Using Stop Motion Characters and a Touch of Light-Field Rendering

M. Bolas, A. Kuruvilla, S. Chintalapudi, F. Rabelo, V. Lympouridis, C. Barron, E. Suma, C. Matamoros, C. Brous, A. Jasina, Y. Zheng, A. Jones, P. Debevec, and D. Krum. ACM SIGGRAPH Posters, 2015.

Rapid Avatar Capture and Simulation Using Commodity Depth Sensors

A. Shapiro, A. Feng, R. Wang, H. Li, M. Bolas, G. Medioni, and E. Suma. Computer Animation and Virtual Worlds, 25 (3–4), pp. 201–211, 2014.