When Does Computational Imaging Improve Performance?
Figure 1: Performance of computational imaging
for naturally occurring lighting conditions. We show that
CI techniques (solid curve) give a negligible
performance gain over conventional (impulse) imaging (dotted line) if
the illumination level is higher than that of a typical living room.
This is an example plot for spectral, light field, and illumination
multiplexing systems with the following scene and sensor
characteristics: average scene reflectivity is 0.5, exposure time
is 20 ms, aperture setting is F/2.1, pixel size is 1 micron,
quantum efficiency is 0.5, and read noise standard deviation is
4 electrons. We give similar performance plots
for defocus and motion deblurring in the paper.
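To make this operating point concrete, the sketch below (our own illustration, not code from the paper) estimates the average photoelectron count J and the resulting single-pixel SNR from the scene and sensor characteristics listed above. The function name avg_photoelectrons and the lux-to-photon constant are our own; the constant assumes monochromatic 555 nm light, a standard approximation.

```python
import math

def avg_photoelectrons(illuminance_lux, reflectivity=0.5, exposure_s=0.020,
                       f_number=2.1, pixel_pitch_m=1e-6, quantum_eff=0.5):
    """Average photoelectron count J for one pixel."""
    # Lambertian scene: sensor-plane illuminance falls off as 1 / (4 N^2).
    sensor_lux = illuminance_lux * reflectivity / (4.0 * f_number ** 2)
    photons_per_lux = 4.1e15          # photons / (s * m^2) per lux at 555 nm
    return sensor_lux * photons_per_lux * pixel_pitch_m ** 2 * exposure_s * quantum_eff

def snr(J, read_noise_e=4.0):
    """Single-pixel SNR under Poisson photon noise plus Gaussian read noise."""
    return J / math.sqrt(J + read_noise_e ** 2)

for lux in (1, 10, 100, 1000):        # deep twilight .. bright interior
    J = avg_photoelectrons(lux)
    print(f"{lux:5d} lux: J = {J:9.1f} e-, SNR = {snr(J):6.1f}")
```

Under these assumptions, J already exceeds the read noise variance (16 electrons) at roughly 100 lux, which is consistent with the thresholds quoted throughout this page.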
Project Description
A number of computational imaging techniques have been introduced to
improve image quality by increasing light throughput. These techniques
use optical coding to measure a stronger signal level.
However, the performance of these techniques is limited by the decoding
step, which amplifies noise. While it is well understood that
optical coding can increase performance at low light levels, little
is known about the quantitative performance advantage of
computational imaging in general settings. In
this paper, we derive the performance bounds for various
computational imaging techniques. We then discuss the implications of these bounds for several
real-world scenarios (illumination conditions, scene properties and
sensor noise characteristics). Our results show that computational imaging techniques do not provide a significant performance advantage when imaging with illumination brighter than typical daylight. These results can be
readily used by practitioners to design the most suitable imaging
systems given the application at hand.
Publications
"When Does Computational Imaging Improve Performance?," O. Cossairt, M. Gupta and S.K. Nayar, IEEE Transactions on Image Processing (accepted), 2012. [PDF] [bib] [©]
Images
Linear image formation model:
All CI techniques discussed in this
paper can be modeled using a linear image formation model.
In order to recover the desired image, these techniques require an additional decoding step, which amplifies noise.
Impulse imaging techniques measure
the signal directly, without requiring any decoding: a stopped-down
aperture can be used to avoid defocus blur, a shorter exposure can
be used to avoid motion blur, and a pinhole mask can be placed near the sensor to directly measure the light field.
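The sketch below (our own illustration) makes the model concrete: coded measurements y = Wx + noise are decoded as x̂ = W⁻¹y, and inverting the multiplexing matrix W amplifies the measurement noise relative to an impulse capture. The 0/1 matrix W here is a hypothetical stand-in for any of the coding schemes above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7
x = rng.uniform(50.0, 100.0, n)            # true signal (photoelectrons)

# Hypothetical 0/1 multiplexing matrix: each row sums about half the unknowns,
# so every coded measurement collects more light than an impulse measurement.
W = (rng.uniform(size=(n, n)) < 0.5).astype(float)
while np.linalg.matrix_rank(W) < n:        # retry until W is invertible
    W = (rng.uniform(size=(n, n)) < 0.5).astype(float)

read_sigma = 4.0
def capture(signal):                       # Poisson photon noise + read noise
    return rng.poisson(signal) + rng.normal(0.0, read_sigma, signal.shape)

x_hat = np.linalg.solve(W, capture(W @ x)) # decoding step amplifies noise
x_imp = capture(x)                         # impulse imaging: no decoding

print("coded   RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
print("impulse RMSE:", np.sqrt(np.mean((x_imp - x) ** 2)))
```

At this well-lit signal level the decoded estimate is typically noisier than the direct impulse measurement, despite the higher raw signal captured by the coded system.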
Analytical performance bound:
We analyze the performance of a variety of CI techniques and derive a
bound on their performance in terms of SNR. We show that CI
techniques provide a significant performance advantage only if the
average signal level is significantly lower than the sensor read
noise variance. Here, we simulate the performance of several defocus deblurring
cameras [1][2][3][4][5],
motion deblurring cameras [6][7],
and light field multiplexing cameras [2][8]. All
techniques perform at or below the performance bound derived in the paper.
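The exact bound is derived in the paper; as an illustrative stand-in that captures the stated condition, the sketch below plots the proxy G(J) = sqrt(1 + σ_r²/J), which is large only when the average signal J is well below the read noise variance σ_r². This closed form is our own simplification, not the paper's result.

```python
import numpy as np
import matplotlib.pyplot as plt

sigma_r = 4.0                          # read noise standard deviation (e-)
J = np.logspace(-2, 4, 400)            # average signal level (electrons)
gain = np.sqrt(1.0 + sigma_r ** 2 / J) # proxy: large only when J << sigma_r^2

plt.loglog(J, gain)
plt.axhline(1.0, color="k", ls=":", label="no gain")
plt.axvline(sigma_r ** 2, color="k", ls="--", label=r"$J = \sigma_r^2$")
plt.xlabel("average signal J (electrons)")
plt.ylabel("SNR gain (proxy)")
plt.legend()
plt.show()
```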
Practical guidelines for computational imaging:
We provide guidelines for when to
use CI given an imaging scenario. The scenarios are defined in terms
of the application (e.g., motion deblurring, defocus deblurring),
real-world lighting (e.g., moonlit night or cloudy day, indoor or
outdoor), scene properties (albedo, object velocities, depth range)
and sensor characteristics. These figures show contour plots of the SNR gain bound for motion and defocus deblurring cameras. For both cameras, the SNR gain is always negligible when the illuminance
is greater than 125 lux (typical indoor lighting).
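A chart in the same spirit can be sketched by combining the photon-budget model and the gain proxy from the snippets above; this is our own illustration over illuminance and exposure time, not the paper's exact bound.

```python
import numpy as np
import matplotlib.pyplot as plt

def photoelectrons(lux, t, R=0.5, N=2.1, pitch=1e-6, qe=0.5):
    # Photon budget from the earlier sketch: Lambertian scene, 555 nm approx.
    return lux * R / (4.0 * N ** 2) * 4.1e15 * pitch ** 2 * t * qe

sigma_r = 4.0
lux = np.logspace(-1, 4, 200)          # 0.1 lux .. 10,000 lux
t = np.logspace(-4, -1, 200)           # 0.1 ms .. 100 ms exposures
L, T = np.meshgrid(lux, t)
gain_db = 20.0 * np.log10(np.sqrt(1.0 + sigma_r ** 2 / photoelectrons(L, T)))

cs = plt.contour(L, T, gain_db, levels=[1, 3, 6, 10, 20])
plt.clabel(cs, fmt="%g dB")
plt.xscale("log"); plt.yscale("log")
plt.xlabel("scene illuminance (lux)")
plt.ylabel("exposure time (s)")
plt.show()
```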
Simulated flutter-shutter images:
We use simulations to compare the performance of flutter-shutter [6] and impulse cameras (i.e., a camera with a short exposure). The top row of this figure shows an image blurred by
the flutter sequence given in [6]. The second and fourth rows show the results after deblurring with linear
inversion and the BM3D algorithm [15], respectively. The third row shows the
results from the impulse camera. The last row shows the
results of denoising the images in the third row with the BM3D
algorithm. The flutter-shutter camera has higher
SNR only when the illuminance is less than 100 lux.
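The sketch below reproduces the flavor of this experiment in 1-D. A random binary code stands in for the optimized sequence of [6] (which we do not reproduce here), deblurring is plain Fourier division, and the impulse camera is modeled as a single open chop.

```python
import numpy as np

rng = np.random.default_rng(1)
n, chops = 256, 32
scene = 60.0 * (1.0 + np.sin(np.linspace(0, 6 * np.pi, n)))  # electrons per chop

code = rng.integers(0, 2, chops).astype(float)  # stand-in for the code of [6]
code[0] = code[-1] = 1.0                        # keep the endpoints open

kernel = np.zeros(n)
kernel[:chops] = code                           # blur kernel of the coded exposure

def noisy(img, sigma_r=4.0):                    # photon noise + read noise
    return rng.poisson(np.clip(img, 0.0, None)) + rng.normal(0.0, sigma_r, img.shape)

# Coded capture: circular convolution with the shutter code (for brevity).
coded = noisy(np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(kernel))))

# Linear deblurring = Fourier division; random codes can be ill-conditioned,
# which is exactly why [6] optimizes the sequence for a flat spectrum.
deblurred = np.real(np.fft.ifft(np.fft.fft(coded) / np.fft.fft(kernel)))

impulse = noisy(scene)                          # one chop of exposure, no decoding

print("coded   RMSE:", np.sqrt(np.mean((deblurred - scene) ** 2)))
print("impulse RMSE:", np.sqrt(np.mean((impulse - scene) ** 2)))
```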
Simulated focal sweep images:
We use simulations to compare the performance of focal sweep [4][5] and impulse cameras (i.e., a camera with a stopped-down aperture). The top row shows an image blurred by
a focal sweep PSF. The second and fourth rows show the results after deblurring with linear
inversion and the BM3D algorithm [15], respectively. The third row shows the results from the impulse camera. The last row shows the results of
denoising the images in the third row with the BM3D algorithm.
The focal sweep camera always has a higher SNR than
impulse imaging, but the improvement becomes negligible when
the illuminance is greater than 100 lux.
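The sketch below illustrates how a focal-sweep PSF can be formed: averaging defocus discs of varying radius over the sweep yields a kernel that is nearly the same for all scene depths [4][5]. The geometry and units are purely illustrative.

```python
import numpy as np

def pillbox(radius, size=65):
    # Uniform defocus disc, normalized to unit energy.
    y, x = np.mgrid[:size, :size] - size // 2
    disc = (x ** 2 + y ** 2 <= max(radius, 0.5) ** 2).astype(float)
    return disc / disc.sum()

def focal_sweep_psf(max_radius=12.0, steps=50, size=65):
    # Average the instantaneous defocus disc as the focal plane sweeps,
    # which yields a nearly depth-invariant blur kernel.
    radii = np.linspace(0.0, max_radius, steps)
    return np.mean([pillbox(r, size) for r in radii], axis=0)

psf = focal_sweep_psf()
print("peak:", psf.max(), " sum:", psf.sum())   # sum == 1: energy preserved
```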
Simulations with different priors and metrics:
We also provide empirical results
using perceptually motivated metrics and regularized deblurring algorithms.
Here we show performance for the MSE,
SSIM [9], VIF [10], and UQI [11] metrics.
The top row shows performance for the focal sweep camera, and the bottom row shows
performance for the flutter shutter camera. For each plot, the
performance gain is plotted on a log scale. The black line corresponds to our derived performance bound. The magenta lines correspond to
performance gain using direct linear inversion. The red, green, and blue curves
correspond to reconstructions using Gaussian [12][13],
TV [14], and BM3D [15] priors.
The bound derived in the paper is empirically found to be an upper
bound on performance across all metrics and priors.
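As a sketch of how such scores can be computed, the snippet below evaluates MSE, SSIM [9], and a global UQI [11] on a toy restoration. It assumes scikit-image is installed; VIF [10] is omitted because it requires a multi-scale statistical model, and the toy images are random stand-ins for real reconstructions.

```python
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def uqi(x, y):
    """Universal Quality Index of Wang and Bovik [11], computed globally."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(2)
truth = rng.uniform(0.0, 1.0, (64, 64))                       # toy ground truth
restored = np.clip(truth + rng.normal(0.0, 0.05, truth.shape), 0.0, 1.0)

print("MSE :", mean_squared_error(truth, restored))
print("SSIM:", structural_similarity(truth, restored, data_range=1.0))
print("UQI :", uqi(truth, restored))
```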
Acknowledgements
This research was supported in part by DARPA Award No. W911NF-10-1-0214 and ONR MURI Award No. N00014-08-1-0638.
Oliver Cossairt was supported by an NSF Graduate Research Fellowship.
Flexible Depth of Field
Spectral Focal Sweep
Gigapixel Computational Imaging
Multiplexed Illumination
Compressive Structured Light
Multispectral Imaging
What is a Computational Camera
Jitter Camera
Single Shot Video
References
- [1] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH, 2007.
- [2] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. In SIGGRAPH, 2007.
- [3] C. Zhou and S. Nayar. What are good apertures for defocus deblurring? In ICCP, 2009.
- [4] G. Hausler. A method to increase the depth of focus by two step image processing. Optics Communications, 1972.
- [5] H. Nagahara, S. Kuthirummal, C. Zhou, and S. Nayar. Flexible Depth of Field Photography. In ECCV, 2008.
- [6] R. Raskar, A. Agrawal, and J. Tumblin. Coded exposure photography: motion deblurring using fluttered shutter. In SIGGRAPH, 2006.
- [7] A. Levin, P. Sand, T. Cho, F. Durand, and W. Freeman. Motion-invariant photography. In SIGGRAPH, 2008.
- [8] D. Lanman, R. Raskar, A. Agrawal, and G. Taubin. Shield fields: modeling and capturing 3d occluders. In SIGGRAPH, 2008.
- [9] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli. Image quality assessment: From error visibility to structural similarity. TIP, 2004.
- [10] H. Sheikh and A. Bovik. Image information and visual quality. TIP, 2006.
- [11] Z. Wang and A. Bovik. A universal image quality index. Signal Processing Letters, 2002.
- [12] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH, 2007.
- [13] S. Hasinoff, K. Kutulakos, F. Durand, and W. Freeman. Time-constrained photography. In ICCV, 2009.
- [14] J. Bioucas-Dias and M. Figueiredo. A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration. TIP, 2007.
- [15] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. TIP, 2007.