Computer Vision Talks at Columbia University
Acquiring, Synthesizing, and Compressing 3-D Textures over Variable Lighting and Viewpoint
Melissa L. Koudelka
Yale University
CAVE Lab, 6th Floor CEPSR
Host: Prof. Shree Nayar
Abstract
Accurately rendering the detailed surface structure of objects as the conditions around them change is a vital element in enhancing the visual realism of a synthetic scene. Real-world surfaces such as tree bark, moss, fur, or skin often have complicated geometry that leads to effects such as self-shadowing, masking, specularity, and interreflection as the lighting or viewpoint in the scene changes. We present an image-based method for generating textured surfaces with correct geometric and lighting effects, such that the end result is a photorealistic image of the textured object. The textures are generated without an explicit geometric model, using image data alone to give the appearance of fine-scale geometric structure. We further present a method for compressing the 5000 to 6000 images in each 4-D texture dataset, which spans variations in both lighting and viewpoint, representing the entire dataset in just under 4 MB.
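The abstract does not say how the 4-D datasets are compressed, only the input size (thousands of images) and the output size (under 4 MB). As a rough illustration of one standard image-based approach to this kind of problem, the sketch below keeps a low-rank (PCA-style) approximation of a stack of flattened texture images. The function names, array sizes, and the choice of a global PCA basis are all assumptions for illustration, not the speaker's method.

```python
# Illustrative sketch only: the talk does not specify the compression
# technique. This shows a generic low-rank (PCA-style) compression of a
# stack of texture images; all names and sizes here are hypothetical.
import numpy as np

def compress_texture_stack(images, n_components=20):
    """Low-rank compression of a stack of texture images.

    images: array of shape (n_images, height, width), float32.
    Returns the mean image, per-image coefficients, and basis images.
    """
    n, h, w = images.shape
    X = images.reshape(n, h * w)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Thin SVD; the top right singular vectors form the texture basis.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    coeffs = U[:, :n_components] * S[:n_components]   # (n, k)
    basis = Vt[:n_components]                         # (k, h*w)
    return mean, coeffs, basis

def reconstruct(mean, coeffs, basis, shape):
    """Rebuild approximate images from the compressed representation."""
    X = coeffs @ basis + mean
    return X.reshape((-1,) + shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.random((100, 64, 64)).astype(np.float32)  # stand-in data
    mean, coeffs, basis = compress_texture_stack(imgs, n_components=20)
    approx = reconstruct(mean, coeffs, basis, (64, 64))
    raw = imgs.nbytes
    packed = mean.nbytes + coeffs.nbytes + basis.nbytes
    print(f"compression ratio: {raw / packed:.1f}x")
```

In this toy setup the stored representation is one mean image, a small coefficient matrix, and a handful of basis images, which is how a few thousand correlated texture images can shrink to a few megabytes when most of their variance lies in a low-dimensional subspace.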