Programmable Imaging: Controllable Apertures

In this project, we develop a novel camera that uses a controllable aperture instead of an imaging lens to image the scene of interest. In the most general setting, the aperture is a volumetric light attenuator that is controllable in space and in time. Such a volumetric attenuator can be implemented using a stack of flat attenuators (liquid crystal sheets, for example). Since our camera does not have an imaging lens, each point on the detector collects light from the entire field of view. Therefore, the aperture defines the mapping between the scene and the image detector. By assigning different attenuation patterns to the aperture, it is possible to implement imaging functionalities that cannot be realized with conventional cameras.
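
Because the aperture fully determines the scene-to-detector mapping, image formation can be viewed as a linear operation on the scene radiance. The following sketch illustrates this idea; it is not the prototype's implementation, and the array sizes and names are illustrative assumptions.

    import numpy as np

    # Illustrative sizes (assumptions for this sketch, not the prototype's).
    N_SCENE = 64    # number of scene directions (1D scene for simplicity)
    N_PIXELS = 64   # number of detector pixels

    # Each row of T holds the transmittances with which one detector pixel
    # sees every scene direction; the aperture pattern determines these weights.
    # With no pattern, every pixel integrates the whole field of view equally.
    T = np.ones((N_PIXELS, N_SCENE)) / N_SCENE

    scene = np.random.rand(N_SCENE)   # unknown scene radiance
    image = T @ scene                 # lensless measurement: a linear map of the scene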

We demonstrate several useful functionalities of our camera. For example, the aperture can be programmed to form a pinhole that is shifted electronically at each time instant. As a result, the camera can change its viewing direction without the use of moving parts. The camera can also be programmed to capture images with spatially varying properties. For example, it can split the field of view and capture sub-divided images, where each image part corresponds to a different viewing direction. Alternatively, the camera can be used as a computational sensor: the computations are performed by the camera optics and the result is captured by the detector. All of these functionalities, and others, can be implemented with the same physical camera, and the camera can switch between them from one frame to the next.

Publications

"Lensless Imaging with a Controllable Aperture,"
A. Zomet and S.K. Nayar,
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Jun, 2006.
[PDF] [bib] [©]

Images

  Imaging with a Volumetric Aperture:

In its most general form, the camera uses a controllable volumetric aperture to map scene points to image points. Unlike in a traditional lens camera, the 4D light field incident upon the aperture is modified by the 3D attenuation function of the aperture before the final 2D image is captured. This enables the camera to achieve scene-to-image mappings that are not possible with a conventional lens camera.
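
In notation introduced here for illustration (the symbols are ours, not the paper's), the captured image can be written as an integral of the incident light field weighted by the net transmittance of the aperture volume along each ray:

    I(x, y) = \int_{\Omega} T\big(r_{x,y}(\omega)\big)\, L(x, y, \omega)\, d\omega

where L(x, y, ω) is the light field arriving at detector point (x, y) from direction ω, r_{x,y}(ω) is the corresponding ray through the aperture volume, T is the transmittance accumulated by the 3D attenuation function along that ray, and Ω is the set of directions subtended by the aperture.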

  Multi-Layered Aperture:

A volumetric aperture can be implemented using a stack of flat attenuating layers. One way to implement controllable attenuating layers is with liquid crystal sheets. Other spatial light modulators, such as a digital micromirror device (DMD) or a liquid crystal on silicon (LCOS) device, can be used as well.
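
To a first approximation (ignoring diffraction and inter-reflections between layers), the net attenuation a ray experiences is the product of the transmittances of the cells it crosses in each layer. A minimal sketch of that composition, with illustrative array sizes and names:

    import numpy as np

    def ray_transmittance(layers, crossings):
        """Net transmittance of one ray passing through a stack of layers.

        layers[k] is a 2D array of per-cell transmittances in [0, 1] for layer k;
        crossings[k] = (row, col) is the cell where the ray intersects layer k.
        """
        t = 1.0
        for layer, (r, c) in zip(layers, crossings):
            t *= layer[r, c]
        return t

    # Two illustrative 8x8 layers; the ray crosses cell (3, 4) on the first
    # layer and cell (3, 5) on the second.
    layers = [np.full((8, 8), 0.9), np.full((8, 8), 0.5)]
    print(ray_transmittance(layers, [(3, 4), (3, 5)]))   # 0.45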

  Prototype Camera:

This prototype includes an off-the-shelf digital still camera (without the lens) as the detector and an off-the-shelf LCD sheet in front of it as the controllable aperture. When needed, additional attenuating layers are realized by adding physical apertures.

  Flexible Pinhole Imaging:

A pinhole camera is implemented by setting all attenuator transmittances to zero except over a small area, where the transmittance is set to the maximum value. At each time instant, the pinhole can be moved to any location on the aperture. This way, we can change the field of view of the camera instantaneously and arbitrarily. In contrast, conventional cameras rely on pan-tilt motors, which are limited by mechanical constraints and produce motion blur.
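
A sketch of how such a pattern might be programmed on a single attenuating layer (the resolution, pinhole radius, and function names below are illustrative, not taken from the prototype):

    import numpy as np

    def pinhole_mask(shape, center, radius=1):
        """Opaque aperture pattern with a small transparent disc at 'center'."""
        mask = np.zeros(shape)                      # transmittance 0 everywhere
        rr, cc = np.ogrid[:shape[0], :shape[1]]
        mask[(rr - center[0])**2 + (cc - center[1])**2 <= radius**2] = 1.0
        return mask

    # Shifting the pinhole to a new aperture location each frame changes the
    # viewing direction with no mechanical motion.
    for center in [(120, 160), (120, 200), (120, 240)]:
        aperture = pinhole_mask((240, 320), center)
        # display 'aperture' on the LCD layer, then expose the detector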

  A View with Multiple Views:

This image shows how split field of view imaging can be used to track an object while keeping an eye on the periphery of a larger field of view.

  Optical Computations:

The aperture can modulate the incoming light such that the captured images are the results of computations. This way, the camera can be used to perform expensive computations at the speed of light. In the configuration shown in this picture, one half of the captured image is simply the image of the scene of interest while the second half is the result of the correlation of the scene image with a correlation mask that is applied to one of the attenuating layers.
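
Conceptually, the optically computed half of the frame plays the role of a 2D correlation between the scene image and the mask. A software analogue of that computation (using SciPy; the arrays here are placeholders):

    import numpy as np
    from scipy.signal import correlate2d

    scene = np.random.rand(240, 320)   # stand-in for the scene half of the frame
    mask = np.random.rand(16, 16)      # stand-in for the correlation mask

    # The optically computed half of the frame corresponds, conceptually, to this
    # correlation; the camera obtains it at the speed of light rather than in software.
    corr = correlate2d(scene, mask, mode='same')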

  Face Detection in Optics:

The bottom part of the image is the result of the correlation of the top part of the image with a face template. This correlation was performed by the optics. The detected candidate faces are shown by the red boxes. Although some false positives are detected, the relevant information is significantly pruned so that more sophisticated detectors can be applied to just the small number of candidate pixels.
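
A software analogue of the pruning step, in which the optically computed correlation map is thresholded to obtain the small set of candidate locations handed to a more sophisticated detector (the threshold and names here are illustrative):

    import numpy as np

    def candidate_locations(corr_map, threshold):
        """Return (row, col) positions where the correlation exceeds a threshold."""
        return np.argwhere(corr_map > threshold)

    corr_map = np.random.rand(240, 320)               # stand-in for the optical correlation result
    threshold = corr_map.mean() + 3 * corr_map.std()  # illustrative threshold
    candidates = candidate_locations(corr_map, threshold)
    # Only these few candidate pixels need to be passed to a more expensive detector.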


Videos

If you are having trouble viewing these .mpg videos in your browser, please save them to your computer first (by right-clicking and choosing "Save Target As..."), and then open them.

  Panning Without Moving Parts:

This video shows how the camera can change its viewing direction (without moving) to keep a moving object in the scene within its field of view.

  Split Field of View Imaging:

By using two attenuating layers, the camera can capture disjoint parts of the scene in a single frame without capturing the regions in between them. This way, the camera can capture far-apart scene parts at higher resolution. In contrast, conventional cameras must distribute the limited resolution of the detector uniformly over a single contiguous field of view.
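
One way to picture this (a schematic sketch under our own simplifying assumptions, not the layer patterns used in the paper): one layer carries two pinholes, one per desired viewing direction, and a second layer near the detector suppresses cross-talk so that each detector half images only through its own pinhole.

    import numpy as np

    H, W = 240, 320   # illustrative aperture resolution

    # Layer 1 (far from the detector): two pinholes, one per viewing direction.
    layer1 = np.zeros((H, W))
    layer1[H // 2, W // 4] = 1.0        # pinhole seen by the left half of the detector
    layer1[H // 2, 3 * W // 4] = 1.0    # pinhole seen by the right half of the detector

    # Layer 2 (near the detector): an opaque divider that blocks cross-talk, so
    # rays from each pinhole reach only the corresponding half of the detector.
    layer2 = np.ones((H, W))
    layer2[:, W // 2 - 2 : W // 2 + 2] = 0.0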


Slides

CVPR 2006 presentation     With videos (zip file)

Related Projects

Programmable Imaging: Micro-Mirror Arrays

Coded Rolling Shutter Photography: Flexible Space-Time Sampling