Abstract
Obtaining photo-realistic geometric and photometric models is an
important part of building image-based rendering systems that use real-world
imagery as their input. Applications of such systems include novel view
generation and the mixing of live imagery with synthetic computer graphics. In
this talk, I review a number of image-based representations (and their
associated reconstruction algorithms) that we have developed over the last few years.
I begin by reviewing some recent approaches to the classic problem of recovering
a depth map from two or more images. I then describe some of our newer
representations and reconstruction algorithms, including volumetric
representations, layered plane-plus-parallax representations (with the
recovery of transparent and reflected layers), and multiple depth maps. Each of
these techniques has its own strengths and weaknesses, which I will address. I
will also present our work in video-based rendering, in which we synthesize
novel video from short sample clips by discovering their (quasi-repetitive)
temporal structure.