Abstract
This talk combines two mini-talks. I will describe two recent results
that we have obtained on the topic of 3D scene reconstruction.
Factorization With Uncertainty*
Factorization using Singular Value Decomposition (SVD) is often used for
recovering 3D shape and motion from feature correspondences across
multiple views. However, the plain squared error that SVD minimizes is
the correct criterion only when the x and y positional errors in the
features are uncorrelated and identically distributed. This is rarely
the case in real data, where the uncertainty in feature position depends
on the underlying spatial intensity structure of the image, which has
strong directionality.
The proper measure to minimize is the covariance-weighted squared error
(i.e., the Mahalanobis distance). In this talk, I will describe a new
approach to covariance-weighted factorization, which can factor noisy
feature correspondences with a high degree of directional uncertainty
into structure and motion. Our approach is based on transforming the
raw data into a covariance-weighted data space, where the components of
noise in the different directions are uncorrelated and identically
distributed. We show empirically that our method does not degrade as the
directionality of the uncertainty increases, even in the extreme case
when only "normal flow" data is available. It thus provides a unified
approach for treating corner-like points together with points along
linear structures in the image.
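The core idea of moving to a covariance-weighted data space can be illustrated with a small sketch. This is not the authors' exact algorithm, only the standard whitening step it relies on: if each feature's positional noise has a known 2x2 covariance, multiplying measurements by the inverse matrix square root of that covariance yields noise that is uncorrelated and identically distributed, after which ordinary SVD-based factorization applies.

```python
import numpy as np

def inverse_sqrt(cov):
    """Inverse matrix square root of a symmetric positive-definite covariance."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

# Strongly directional uncertainty: large variance roughly along x.
cov = np.array([[9.0, 2.0],
                [2.0, 1.0]])
W = inverse_sqrt(cov)

# In the whitened space the noise covariance becomes the identity,
# so squared error there equals the Mahalanobis distance in the original space.
print(np.allclose(W @ cov @ W.T, np.eye(2)))  # True
```

The transform W is applied to each feature's image measurements before factorization, with one W per feature computed from that feature's own covariance.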
Integrating Local Affine Models into Global Perspective Models in
Multiview Geometry**
The fundamental matrix defines a nonlinear 3D variety in the joint image
space of multiple views. The tangents to this variety correspond to taking
an affine (or "para-perspective") projection approximation within a
shallow portion of the 3D scene. In the case of two views, we show that
this variety is a 4D cone whose vertex is the joint epipole (namely the 4D
point obtained by stacking the two epipoles in the two images). We use
these observations to develop a new approach for recovering multiview
geometry by integrating multiple local affine joint images into the global
projective joint image. The local affine models are recovered by analyzing
multiple (more than two) views using a factorization method or a direct
estimation technique. For every pair of views, the recovered affine
model parameters from multiple image patches are combined to obtain the
epipolar geometry between those views. We describe a novel algorithm that
uses the local affine models to directly recover the image epipoles
without recovering the fundamental matrix as an intermediate step.
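As a small illustrative sketch (not the algorithm described above, which avoids the fundamental matrix): the two epipoles that make up the joint epipole are the right and left null vectors of a rank-2 fundamental matrix F, and can be recovered numerically via SVD.

```python
import numpy as np

def epipoles(F):
    """Return the right and left null vectors of F (unit norm)."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]                 # F @ e = 0: epipole in the first image
    _, _, Vt2 = np.linalg.svd(F.T)
    e_prime = Vt2[-1]          # F.T @ e' = 0: epipole in the second image
    return e, e_prime

# Any rank-2 3x3 matrix serves as a synthetic fundamental matrix here.
F = (np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
     + np.outer([0.0, 1.0, 1.0], [1.0, 0.0, 2.0]))

e, e_prime = epipoles(F)
print(np.allclose(F @ e, 0), np.allclose(F.T @ e_prime, 0))  # True True
```

Stacking e and e_prime (in inhomogeneous image coordinates) gives the 4D joint epipole that serves as the vertex of the cone described above.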
__________________________________________________
* Joint work with Michal Irani (Weizmann Institute, Israel).
** Joint work with Shai Avidan.