Computer Vision Talks at Columbia University

Limits on Super-Resolution and How to Break Them


Simon Baker

Robotics Institute, Carnegie Mellon University

Host: Shree K. Nayar

2:00 p.m., November 29th, 1999

CAVE (Computer Vision Lab), 6th floor CEPSR, Schapiro Building, Computer Science


Abstract


Super-resolution is the process of combining multiple low-resolution images, for example from a video sequence, to form a single higher-resolution image. Most algorithms are based on the constraints that the super-resolution image, when appropriately warped and down-sampled to model the image formation process, should yield the low-resolution input images. Algorithms that use these constraints are known as "reconstruction-based."
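
Written as an equation (the notation below is mine, not necessarily the speaker's), the reconstruction constraints say roughly that each low-resolution input Lo_k should be reproduced by warping, blurring, and down-sampling the sought high-resolution image Hi:

\[
  \mathrm{Lo}_k \;\approx\; D\,B\,W_k\,\mathrm{Hi}, \qquad k = 1,\dots,N,
\]

where W_k warps Hi into the coordinate frame of the k-th input, B models the camera blur, and D down-samples. Reconstruction-based algorithms typically estimate Hi by minimizing the total mismatch \(\sum_k \|\mathrm{Lo}_k - D\,B\,W_k\,\mathrm{Hi}\|^2\), possibly together with a prior term.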

In the first part of this talk I will present a detailed analysis of the super-resolution reconstruction constraints. I will describe three results, all of which show that the reconstruction constraints provide far less useful information as the magnification factor increases. The analysis shows that two factors combine to cause these difficulties: (1) the discretization of the continuous intensities into grey-levels, and (2) the integration of the illumination over the photosensitive area of the pixels.
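
As a hedged sketch (again with my own notation), these two factors correspond to an image-formation model of the form

\[
  \mathrm{Lo}_k(m,n) \;=\; Q\!\left[\, \int_{A_{m,n}} E_k(x,y)\, dx\, dy \,\right],
\]

where \(E_k\) is the continuous irradiance incident on the sensor in frame k, \(A_{m,n}\) is the photosensitive area of pixel (m,n), and \(Q[\cdot]\) quantizes the integrated value into a finite set of grey-levels. Both the integration and the quantization discard fine detail, and the loss grows with the magnification factor.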

It is well established that the use of a smoothness prior may help somewhat. However, for large enough magnification factors the use of such priors leads to overly smooth results. In the second part of this talk, I will describe an algorithm which learns a prior on the spatial distribution of the image gradient for specific classes of objects or scenes. I will present results demonstrating that such priors give far better results than standard smoothness priors, for both human faces and text data.
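
In rough terms (my own notation, under the usual MAP formulation of super-resolution), such a class-specific prior takes the place of a generic smoothness term:

\[
  \widehat{\mathrm{Hi}} \;=\; \arg\max_{\mathrm{Hi}} \; \Pr(\mathrm{Hi}) \prod_{k=1}^{N} \Pr(\mathrm{Lo}_k \mid \mathrm{Hi}),
\]

where the prior \(\Pr(\mathrm{Hi})\) is learned from training images of the object class in question (e.g., faces or text) and is expressed over the spatial distribution of the image gradient, rather than simply penalizing large gradients.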