Computer Vision Talks at Columbia University

Learning to separate transparencies

Assaf Zomet

Hebrew University

Interschool Lab, 7th Floor CEPSR 

Host: Prof. Shree Nayar 


Abstract 

Understanding a viewed scene from images is challenging: it may require segmenting the images into objects, estimating the geometry and color of the scene, and so on. The talk will begin by presenting some examples of vision tasks that could be solved via scene understanding. It will be shown that such tasks can be approached even without understanding the viewed scene; low-level statistics or reasonable approximations can be used instead.

The main task presented will be the separation of transparent layers: how to separate a single image consisting of two superimposed layers into its two constituent layers. Certain simple images are known to trigger a percept of transparency: the input image $I$ is perceived as the sum of two images, $I(x,y)=I_1(x,y)+I_2(x,y)$. We will present a model for choosing the "best" decomposition from the infinitely many ways to express $I$ as a sum of two images.
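The ill-posedness is easy to see: given any candidate layer $I_1$, setting $I_2 = I - I_1$ always yields a valid decomposition. A minimal sketch (illustrative only; the 1-D array and variable names follow the abstract's notation, not any actual implementation):

```python
import numpy as np

# A toy 1-D "image": any split I = I1 + I2 is mathematically valid.
rng = np.random.default_rng(0)
I = rng.random(8)

# Two of the infinitely many decompositions:
I1_a = 0.5 * I            # split the image evenly between the layers
I2_a = I - I1_a

I1_b = rng.random(8)      # any arbitrary layer works...
I2_b = I - I1_b           # ...as long as the residual is the other layer

assert np.allclose(I1_a + I2_a, I)
assert np.allclose(I1_b + I2_b, I)
```

Since the constraint $I = I_1 + I_2$ alone cannot select a decomposition, some additional criterion is needed to prefer one split over the others.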

We suggest that transparency is the rational percept of a system adapted to the statistics of natural scenes. We present a probabilistic model of images based on the qualitative statistics of derivative filters and "corner detectors" in natural scenes, and we use this model to find the most probable decomposition of a novel image. The optimization is performed using loopy belief propagation. We show that our model computes perceptually "correct" decompositions of real and synthetic images.
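The flavor of such a prior can be sketched with a toy example. Natural-image derivative statistics are heavy-tailed (sparse): a few strong edges are more probable than many weak ones. Under a sub-linear penalty on derivatives, assigning each edge wholly to one layer beats smearing both edges across both layers. This is a hedged illustration of the general idea, assuming a simple $|d|^{0.5}$ penalty and a 1-D signal; it is not the authors' actual model or inference procedure:

```python
import numpy as np

def layer_penalty(layer, alpha=0.5):
    """Negative log-probability proxy for one layer: sum of
    |derivative|**alpha. Sub-linear alpha favors sparse, concentrated
    edges over many small ones (heavy-tailed prior)."""
    d = np.diff(layer)
    return np.sum(np.abs(d) ** alpha)

def score(I1, I2):
    """Total penalty of a decomposition; lower means more probable."""
    return layer_penalty(I1) + layer_penalty(I2)

# Two step edges superimposed into a single 1-D signal.
edge1 = np.concatenate([np.zeros(4), np.ones(4)])   # step at position 4
edge2 = np.concatenate([np.zeros(6), np.ones(2)])   # step at position 6
I = edge1 + edge2

# Decomposition A: one edge per layer (the perceptually "correct" split).
# Decomposition B: an even split that puts half of each edge in each layer.
assert score(edge1, edge2) < score(0.5 * I, 0.5 * I)
```

With unit edges, decomposition A costs $1^{0.5} + 1^{0.5} = 2$, while the even split costs $4 \cdot 0.5^{0.5} \approx 2.83$, so the sparse prior prefers the perceptually correct separation. In the actual work, maximizing such a probability over full 2-D images is what requires an approximate inference scheme like loopy belief propagation.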

The talk includes work done with Anat Levin, Yair Weiss, Shmuel Peleg, Daphna Weinshall, Doron Feldman, and Alex Rav-Acha.