Abstract
This talk presents an overview of our efforts in modeling-from-reality, which automatically creates photo-realistic virtual models from observations of real objects. This effort comprises three research components: geometric, photometric, and environmental modeling. Geometric modeling creates a complete and consistent surface model from a sequence of partial views of an object by aligning and merging the observed data. Photometric modeling creates photo-realistic new appearances of an object from an analysis of its surface. Environmental modeling seamlessly integrates a virtual object into a real scene by estimating the illumination environment of that scene. I will review these component algorithms, developed in my group over the last five years, and then describe a project that uses them to preserve and restore the Kamakura Great Buddha in virtual space.