Removing Image Artifacts Due to Dirty Camera Lenses
and Thin Occluders

A common assumption in computer graphics, as well as in digital photography and imaging systems, is that the radiance emitted from a scene point is observed directly at the sensor. However, there are often physical layers or media lying between the scene and the imaging system. For example, the lenses of consumer digital cameras, or the front windows of security cameras, often accumulate various types of contaminants over time (e.g., fingerprints, dust, dirt). Also, photographs are often taken through a layer of thin occluders (e.g., fences, meshes, window shutters, curtains, tree branches) which partially obstruct the scene. Both artifacts are annoying for photographers, and may also damage important scene information for applications in computer vision or digital forensics.

While a simple solution is to clean the camera lens, or to retake the picture from a better spot, this is impossible for existing images and impractical for some applications, such as outdoor security cameras, underwater cameras, or covert surveillance behind a fence. In this project, we therefore develop new ways of capturing images, and new computational algorithms, to remove dirty-lens and thin-occluder artifacts. Unlike image inpainting and hole-filling methods, our algorithms rely on an understanding of the physics of image formation to directly recover the image information point-wise, provided that each point is partially visible in at least one of the captured images.

We show that both effects can be described by a single image formation model, in which an intermediate layer (of dust, dirt, or thin occluders) both attenuates the incoming light and scatters stray light toward the camera. Because of camera defocus, these artifacts are low-frequency and either additive or multiplicative, which makes it possible to recover the original scene radiance point-wise. We develop a number of physics-based methods to remove these effects from digital photographs and videos.

Publications

"Removing Image Artifacts Due to Dirty Camera Lenses and Thin Occluders,"
J. Gu, R. Ramamoorthi, P.N. Belhumeur and S.K. Nayar,
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia),
December 2009.
[PDF] [bib] [©]

Images

  Scenarios where an Automatic Computational Cleaning Method is Needed:

For automated imaging systems (e.g., outdoor security cameras) and sophisticated imaging systems (e.g., telescopes and microscopes), image artifacts caused by dust or debris significantly degrade image quality, but manually cleaning the optics is usually very expensive. We therefore need automatic, computational methods to remove these artifacts from captured images and videos.

  Image Formation Model:

Dust on the lens both attenuates and scatters light. Attenuation is a local effect: it darkens regions of the image, such as the sky. Scattering is a global effect: the dust receives radiance from the entire environment, including stray light sources such as the sun, and scatters some of it toward the sensor, brightening some regions of the image.

The attenuation can be modeled as the product of the background scene I0(x) and the defocused attenuation map, i.e., alpha(x) convolved with the defocus kernel k(x). For the scattering, the dust layer acts like a light source, and its contribution to the captured image is I_a(x) convolved with k(x), where I_a(x) is the radiance scattered by the dust.
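This two-term model can be sketched numerically. The snippet below is an illustrative toy, not the authors' code: the defocus kernel k is approximated by a simple box blur, and the dirt layout, attenuation values, and scattering strength are made-up numbers.

```python
import numpy as np

# Illustrative sketch of the full image formation model (a toy, not the
# authors' code). A dirty lens both attenuates the scene radiance and adds
# defocused scattered light:
#   I(x) = I0(x) * (alpha conv k)(x) + (I_a conv k)(x)
# where "conv k" denotes convolution with the defocus kernel k.

def defocus(img, r=3):
    """Crude box blur standing in for convolution with the defocus kernel k."""
    out = np.zeros_like(img)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

rng = np.random.default_rng(0)
I0 = rng.uniform(0.2, 1.0, (64, 64))       # background scene radiance
alpha = np.ones((64, 64))
alpha[20:30, 20:30] = 0.5                  # dirt attenuates light locally
I_a = 0.1 * (1.0 - alpha)                  # dirt scatters stray light where it sits

I = I0 * defocus(alpha) + defocus(I_a)     # simulated dirty-lens capture
```

Because the defocus blur acts on alpha(x) and I_a(x) but not on the scene, the resulting artifacts are low-frequency, which is what makes point-wise recovery tractable.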

We simplify the image formation model by approximating I_a(x) as the product of a function of the attenuation map and the aggregate of the outside illumination. This assumption is validated experimentally below. With it, the model reduces to I(x) = I0(x) a(x) + c b(x), where a(x) and b(x) are the attenuation map and the scattering map; both are determined by the camera and the dirty lens. The scalar c is the aggregate of the outside illumination; it is scene-dependent and is estimated for each captured image.
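Under the simplified model, recovery is a point-wise inversion. The sketch below illustrates this on synthetic data; the maps a(x), b(x), the scalar c, and the helper `recover_scene` are our own illustrative stand-ins, not the paper's released code.

```python
import numpy as np

# Point-wise recovery under the assumed simplified model
#   I(x) = I0(x) * a(x) + c * b(x)
# Given the attenuation map a(x), scattering map b(x), and per-image
# scalar c, the scene radiance follows directly (illustrative sketch).

def recover_scene(I, a, b, c, eps=1e-6):
    """Invert the simplified dirty-lens model point-wise."""
    return (I - c * b) / np.maximum(a, eps)  # eps guards near-opaque dirt

# Round trip on synthetic data:
rng = np.random.default_rng(1)
I0 = rng.uniform(0.2, 1.0, (32, 32))   # ground-truth scene
a = rng.uniform(0.6, 1.0, (32, 32))    # attenuation map
b = rng.uniform(0.0, 0.2, (32, 32))    # scattering map
c = 0.8                                # aggregate outside illumination
I = I0 * a + c * b                     # simulated capture
I0_hat = recover_scene(I, a, b, c)     # recovered scene
```

The division is only well conditioned where a(x) is not too small, which is why the method requires each point to be at least partially visible.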

  Validation of the Model Simplification for Artifacts Caused by a Dusty Camera Lens:

Experimental validation of the model simplification. (a) A sequence of shifted checkerboard patterns is projected onto a scene. (b) The point-wise maximum of the captured images, I_max(x), includes both the attenuation and the scattering. (c) The point-wise minimum of the captured images (amplified 20 times for demonstration), I_min(x), directly measures the scattering of the lens dirt. (d) The attenuation can be computed simply as I_max(x) - I_min(x). As shown in (c), the scattering is related only to the attenuation pattern, not to the background scene. (e) shows I_max(x) + I_min(x), and (f) is the image captured when we project a white pattern on the scene. (e) should equal (f), because the checkerboard patterns turn on half the projector pixels, so the scattering in (c) is half of the scattering in (f) while the attenuation stays the same. Indeed, we found that (e) and (f) closely match, with a mean absolute percentage error of 0.6%.
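The max/min decomposition behind this validation can be sketched as follows. This toy uses two complementary binary checkerboards under the simplified model, and it ignores the halving of the scattering term discussed in the caption; all names and values are illustrative.

```python
import numpy as np

# Illustrative sketch of the max/min decomposition. Each frame under a
# shifted checkerboard is modeled (simplified, scattering held fixed) as
#   I_j(x) = P_j(x) * I0(x) * a(x) + c * b(x)
# where P_j is the binary checkerboard. Over shifts covering every pixel,
# the point-wise min isolates scattering and max - min the attenuated scene.

rng = np.random.default_rng(2)
H = W = 16
I0 = rng.uniform(0.5, 1.0, (H, W))     # scene radiance
a = rng.uniform(0.6, 1.0, (H, W))      # attenuation map
b = rng.uniform(0.0, 0.2, (H, W))      # scattering map
c = 0.5                                # aggregate illumination

# Two complementary checkerboards together cover all pixels
P1 = np.indices((H, W)).sum(axis=0) % 2
P2 = 1 - P1
frames = [P * I0 * a + c * b for P in (P1, P2)]

I_min = np.minimum(*frames)            # = c * b(x): pure scattering
I_max = np.maximum(*frames)            # = I0(x) * a(x) + c * b(x)
attenuated = I_max - I_min             # = I0(x) * a(x)
```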

  Removal of Dirty-Lens Artifacts (with Calibration) - Calibration:

Calibration of a dirty-lens camera. (a) The dirt pattern on the lens can be measured by taking several pictures (>=2) of a stripe pattern, as shown in (b). From these calibration images, we estimate (c) the attenuation map a(x) and (d) the scattering map b(x) for the dirty-lens camera.

  Removal of Dirty-Lens Artifacts (with Calibration) - Results:

Removal of dirty-lens image artifacts with calibration. Given the attenuation map a(x) and the scattering map b(x) from the calibration, we can remove the dirty-lens artifacts from each input image. (a) shows four input images, (b) the recovered images, and (c) insets of the input and recovered images.

  Removal of Dirty-Lens Artifacts from Multiple Images (No Calibration) - Estimation:

Estimation of the attenuation map a(x) and the scattering map b(x) from a video taken with a dirty-lens camera. (a) The input is a 5-minute video clip consisting of 7200 frames. (b) The average image over all frames. (c) The average image gradient (amplified 20 times for demonstration) over all frames. (d)-(g) show intermediate results of the iterative polynomial fitting, where black pixels mark the outliers corresponding to dirt regions in the captured images. (h) and (i) show the fitting results, and (j) and (k) show the estimated attenuation map a(x) and scattering map b(x).

  Removal of Dirty-Lens Artifacts from Multiple Images (No Calibration) - Results:

Removal of dirty-lens image artifacts from a video without calibration. Based on the estimated attenuation and scattering maps, the dirty-lens artifacts can be removed from each frame. We show several examples where (a) and (c) are the original frames, and (b) and (d) are the recovered images.

  Removal of Thin Occluder Artifacts from Two Images (Known Depths):

A similar image formation model can be used to remove thin-occluder artifacts from captured images. Here we show an example where the point spread function is roughly known; two images taken with different aperture settings are needed to remove the artifacts automatically.

  Removal of Thin Occluder Artifacts from Three Images (Unknown Depths):

If the depth of the thin occluder is unknown, three images and an iterative method are needed to remove the artifacts.


Slides

SIGGRAPH Asia 2009 Presentation     With videos (zip file)

Dirty Glass: Contamination on Transparent Surfaces

Vision through Fog and Haze