Digital projection technologies, such as Digital Light Processing (DLP) and
Liquid Crystal Displays (LCD), are increasingly used in consumer, commercial,
and scientific applications. Many of these applications require the projectors
to be focused for best performance. In practice, projectors are designed to
produce bright images on a single screen; as a result, they have large
apertures and hence a narrow depth of field. An analysis of the defocus
properties of projectors is therefore beneficial as it could lead to new
methods that take advantage of, as well as compensate for, projection defocus.
In this project, we present a simple linear model for projector defocus; based
on this model, we develop methods for robust scene capture as well as enhanced
image display.
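As a rough sketch of what such a linear model can look like (the isotropic
Gaussian kernel and the depth-to-blur mapping below are illustrative
assumptions of ours, not necessarily the formulation used here), the light
arriving at a scene patch can be treated as the projector input convolved with
a point-spread function whose width grows with the patch's distance from the
plane of focus:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def defocus_sigma(depth, focus_depth, k=2.0):
        """Illustrative depth-to-blur mapping: blur width grows with the
        patch's distance from the projector's plane of focus
        (k is an assumed lens-dependent constant)."""
        return k * abs(depth - focus_depth) / focus_depth

    def project_with_defocus(input_image, depth, focus_depth):
        """Linear defocus model (sketch): the light received by a
        fronto-parallel patch at the given depth is the projector input
        convolved with a depth-dependent PSF, taken here to be an
        isotropic Gaussian for simplicity."""
        sigma = defocus_sigma(depth, focus_depth)
        return gaussian_filter(input_image.astype(float), sigma=sigma)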
In particular, we develop a novel temporal defocus method to recover scene
depth at each image pixel using its intensity variation over time. This method,
unlike most depth recovery methods, generates complete depth maps with sharp
discontinuities.
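A minimal sketch of the temporal idea, under our own simplifying assumptions: a
known pattern is shifted over time while a camera records each pixel's
intensity profile; the observed profile is matched against profiles predicted
(or calibrated) for a set of candidate depths, and the best match gives that
pixel's depth. Because each pixel is processed independently, depth edges are
not smoothed over. The calibration table profiles_per_depth and the
normalized-correlation matching criterion are assumptions made for
illustration.

    import numpy as np

    def estimate_depth_per_pixel(observed, candidate_depths, profiles_per_depth):
        """Sketch of temporal defocus analysis.

        observed           : (T, H, W) camera intensities recorded while a
                             known pattern is shifted over T time steps.
        candidate_depths   : (D,) array of depths with known defocus behavior.
        profiles_per_depth : (D, T) temporal profile of a single pixel
                             predicted (or calibrated) for each candidate depth.
        """
        T, H, W = observed.shape
        obs = observed.reshape(T, -1).T                       # (H*W, T)
        obs = obs - obs.mean(axis=1, keepdims=True)
        obs /= np.linalg.norm(obs, axis=1, keepdims=True) + 1e-8
        ref = profiles_per_depth - profiles_per_depth.mean(axis=1, keepdims=True)
        ref /= np.linalg.norm(ref, axis=1, keepdims=True) + 1e-8
        best = np.argmax(obs @ ref.T, axis=1)                 # per-pixel best match
        return candidate_depths[best].reshape(H, W)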
Using the same model, we also develop a defocus compensation method that
filters a projection image in a scene-adaptive manner to minimize its defocus
blur after it is projected onto the scene. This method effectively increases
the depth of field of a projector without modifying its optics.
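A minimal sketch of such pre-filtering for a single scene region, again under
the illustrative assumption that the expected blur kernel is known; the
regularized (Wiener-style) inverse below is used purely as an example of how a
pre-filter can be built, and the clipping step reflects the fact that projector
inputs must remain displayable intensities:

    import numpy as np

    def centered_otf(psf, shape):
        """Embed a small, centered PSF into a full-size array with its peak at
        the origin, then take its FFT (the optical transfer function)."""
        padded = np.zeros(shape)
        kh, kw = psf.shape
        padded[:kh, :kw] = psf
        padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        return np.fft.fft2(padded)

    def prefilter_for_defocus(image, psf, noise_to_signal=1e-2):
        """Sketch of defocus compensation for one scene region: pre-filter the
        projector input with a regularized inverse of the expected blur so the
        subsequent optical defocus approximately cancels the pre-filter."""
        H = centered_otf(psf / psf.sum(), image.shape)
        G = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
        out = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
        return np.clip(out, 0.0, 1.0)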
Finally, we present an algorithm that exploits projector defocus to reduce the
strong pixelation artifacts produced by digital projectors, while preserving
the quality of the projected image.
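As a toy illustration of why a small amount of defocus helps (this simulates
the effect only; it is not the algorithm itself), one can model each projector
pixel as a bright square that fills only part of its cell, leaving a dark grid
between cells, and observe that a mild blur spreads light into the gaps and
hides the grid at a small cost in sharpness; the upsampling factor and fill
factor below are arbitrary choices:

    import numpy as np
    from scipy.ndimage import zoom, gaussian_filter

    def screen_door(image, upsample=8, fill_factor=0.8):
        """Toy pixelation model for a 2-D grayscale image: each projector
        pixel is rendered as a bright square covering only `fill_factor` of
        its cell, with dark gaps in between."""
        big = zoom(image.astype(float), upsample, order=0)   # nearest-neighbor
        lit = int(round(upsample * np.sqrt(fill_factor)))
        cell = np.zeros((upsample, upsample))
        cell[:lit, :lit] = 1.0
        grid = np.tile(cell, image.shape)
        return big * grid

    def slightly_defocused(projected, sigma=2.0):
        """A mild optical blur spreads light into the inter-pixel gaps, making
        the grid far less visible at a small cost in sharpness."""
        return gaussian_filter(projected, sigma=sigma)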
We have experimentally verified each of our methods using real scenes.