Many computer vision techniques assume very simple models of the world: scenes are assumed to be convex, nearly diffuse, and opaque. Real-world scenes are far more complex. Imagine a robot navigating an underground cave, a surgical instrument inside the human body, or a movie director imaging an actor's face. In these scenarios, light misbehaves: it bounces between scene points, scatters inside translucent materials, and reflects specularly off shiny surfaces. These optical phenomena prevent vision systems from performing reliably. To make vision systems work in real-world settings, it is imperative to model and account for these effects.
My research goal is to develop simple computational models of light transport. These models, together with novel sensors and programmable illumination, make it possible to recover scene properties (geometry, appearance, and material properties) in real-world scenarios.
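To give a concrete flavor of what a simple light-transport model combined with programmable illumination can do, below is a minimal sketch of the well-known fast direct/global separation technique of Nayar et al. (SIGGRAPH 2006), which is related in spirit to several of the projects listed here. The Python/NumPy setup and the function name are illustrative, not code from any of these papers:

```python
import numpy as np

def separate_direct_global(images):
    """Direct/global separation from images captured under shifted
    high-frequency (e.g., checkerboard) illumination patterns.

    images : float array of shape (K, H, W); each frame lights roughly
             half the scene, with the pattern shifted between frames so
             that every pixel is lit in some frame and unlit in another.
    """
    # When a pixel is directly lit, it receives its full direct
    # component plus ~half the global component; when unlit, it
    # receives only ~half the global component.
    L_plus = images.max(axis=0)   # ~ direct + global / 2
    L_minus = images.min(axis=0)  # ~ global / 2
    direct = L_plus - L_minus
    global_illum = 2.0 * L_minus
    return direct, global_illum

# Usage sketch (frames is a hypothetical stack of captured images):
# frames = np.stack(captured_images).astype(np.float64)
# direct, glob = separate_direct_global(frames)
```

The key idea is that high-frequency illumination modulates the direct component of each pixel while leaving the (spatially smooth) global component nearly unchanged, so a per-pixel max and min over the pattern shifts suffice to separate the two. Following are some examples (click on thumbnails for details):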
A Combined Theory of Defocused Illumination and Global Light Transport
[CVPR 2009] [IJCV 2011]
High Resolution Tracking of Facial Expressions
[ICCV 2005] [IJCV 2008]
Flexible Voxels for Motion-Aware Videography
[ECCV 2010]
Optimal Coded Sampling for Temporal Super-Resolution
[CVPR 2010]
Multiplexed Illumination for Scene Recovery in the Presence of Global Illumination
[ICCV 2011]
Underwater Imaging: Seeing Clearer and Farther in Poor-Visibility Environments
[CVPR 2008]
Fast Simulation and Rendering of Dynamic, Non-homogeneous Volumetric Media
[SCA 2007]
Measuring Scattering Properties of Volumetric Media
[SIGGRAPH 2006]
Capturing Video in a Single Image
[ICCV 2011]
Measuring Shape in the Presence of Inter-reflections, Sub-surface Scattering and Defocus
[CVPR 2011] [IJCV 2012]
When Does Computational Imaging Improve Performance?
[IEEE TIP 2012]