Yoav Schechner

Technion - Israel Institute of Technology

Computer Vision Talks at Columbia University

11:00 am, June 28th, 1999

Location: Interschool Lab, 7th Floor CEPSR

Abstracts 

Depth from Defocus vs. Stereo: How different really are they?
In recent years range imaging based on the limited depth of field of lenses has been gaining popularity. Methods based on this principle are normally considered to be a separate class, distinguished from triangulation techniques such as depth from stereo, vergence or motion.

We unify these approaches and show that Depth from Focus (DFF) and Depth from Defocus (DFD) methods can be regarded as realizations of the geometric triangulation principle. Fundamentally, the depth sensitivities of DFF and DFD are no different from those of stereo (or motion) based systems having the same physical dimensions. Contrary to common belief, DFD does not inherently avoid the matching (correspondence) problem, and DFD/DFF do not avoid the occlusion problem any more than ``classic'' triangulation techniques do. However, they are more stable in the presence of occlusions.

The fundamental advantage of DFF and DFD methods is the two-dimensionality of the aperture, allowing more robust estimation. We analyze the effect of noise in different spatial frequencies, and derive the optimal changes of the focus settings in DFD. These results elucidate the limitations of methods based on depth of field and provide a foundation for fair performance comparison between DFF/DFD and shape from stereo (or motion) algorithms.
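The triangulation principle the talk appeals to can be illustrated numerically. The following is a minimal sketch, not taken from the talk itself; the symbols (f, b, d) are the standard pinhole-stereo ones and the example values are assumptions:

```python
def depth_from_disparity(f, b, d):
    """Pinhole-stereo triangulation: Z = f * b / d, with focal length f,
    baseline b, and disparity d (all in consistent units)."""
    return f * b / d

def depth_sensitivity(f, b, d):
    """|dZ/dd| = f * b / d**2: depth sensitivity scales with the baseline b."""
    return f * b / d ** 2

# The talk's unification, phrased in these terms: in DFD/DFF the lens
# aperture diameter plays the role of the baseline b, and the blur-circle
# diameter plays the role of the disparity d, so systems with the same
# physical dimensions (stereo baseline = aperture) share the same
# depth sensitivity.
z = depth_from_disparity(f=0.05, b=0.10, d=0.001)  # roughly 5 m for these values
```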


Separation of Transparent Layers and the Inclination of an Invisible Surface
Consider scenes deteriorated by reflections off a semi-reflecting medium (e.g., a glass window) that lies between the observer and an object. We present two approaches to recover the real and the virtual scenes, using algorithms that we developed to take advantage of the following optical cues:

1) POLARIZATION CUE *
Raw polarization filtering of reflections does not suffice for most inclinations of the transparent surface. Reconstruction by inverting the imaging process requires the estimation of the orientation of the invisible (semi-reflecting) surface in space, particularly its inclination angle. This angle is estimated by seeking the value which leads to the minimum mutual information of the reconstructed scenes. Each scene is automatically labeled as transmitted (real) or reflected (virtual). We also discuss a fundamental ambiguity in the determination of the plane of incidence.
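The mutual-information criterion described above can be sketched as follows, assuming grayscale images as NumPy arrays; the `reconstruct` inversion of the polarized imaging model is a hypothetical placeholder, not the talk's actual implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) of two images, estimated from their
    joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def estimate_inclination(i_par, i_perp, reconstruct, angles):
    """Pick the inclination angle whose reconstruction minimizes the
    mutual information between the two recovered scenes.
    `reconstruct(i_par, i_perp, angle) -> (transmitted, reflected)` is a
    hypothetical inversion of the polarized imaging model."""
    return min(angles,
               key=lambda t: mutual_information(*reconstruct(i_par, i_perp, t)))
```

The intuition is that at the correct inclination the two layers decouple, so their mutual information drops; at a wrong angle each reconstruction retains traces of the other scene, raising it.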

2) FOCUS CUE **
This approach is based on searching for the images in which either of the scenes is focused. Focusing gives an initial separation of the layers. The separation is enhanced via mutual blurring of the perturbing components in the images, based on the depth estimates and the parameters of the imaging system.
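The mutual-blurring idea can be sketched as an alternating subtraction, here in 1-D Python with box blurs standing in for the true defocus PSFs; the kernel widths and iteration count are illustrative assumptions, not the talk's actual algorithm:

```python
import numpy as np

def box_blur(signal, width):
    """1-D box blur: a crude stand-in for a defocus PSF."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def separate_layers(i_a, i_b, width_a, width_b, iters=30):
    """Alternately subtract a blurred estimate of the perturbing layer.
    i_a: image with layer A focused (B defocused by a width_b PSF);
    i_b: image with layer B focused (A defocused by a width_a PSF).
    The PSF widths are assumed known from the depth estimates and the
    imaging-system parameters, as in the abstract."""
    la, lb = i_a.copy(), i_b.copy()
    for _ in range(iters):
        la = i_a - box_blur(lb, width_b)  # remove blurred B from i_a
        lb = i_b - box_blur(la, width_a)  # remove blurred A from i_b
    return la, lb
```

Each pass attenuates the cross-talk by the product of the two blur gains at every spatial frequency, so zero-mean layers are progressively recovered.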

* Joint work with Joseph Shamir and Nahum Kiryati.
** Joint work with Nahum Kiryati and Ronen Basri.