Learning Sparse Representations for Vision
Abstract
I will describe some recent results on the problem of function approximation
and sparse representations that connect regularization theory, Support
Vector Machine Regression (Vapnik), Basis Pursuit Denoising (Chen, Donoho,
Saunders), and PCA techniques. I will motivate the appeal of learning sparse
representations from an overcomplete dictionary of basis functions in terms
of recent results in two different fields: neuroscience and computer vision.
In particular, physiological data from IT cortex suggest that individual
neurons encode a large vocabulary of elementary shapes before the
representation converges on cells tuned to specific views of specific 3D
objects. In computer vision, we have developed a trainable object detection
architecture that learns a sparse representation from an overcomplete set
of Haar wavelets and uses it to perform difficult detection tasks.
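
For concreteness, the sparse-representation problem referred to above can be
written in the standard Basis Pursuit Denoising form (the notation y, D,
alpha, lambda here is illustrative, not taken from the talk):

\[
\min_{\alpha} \; \tfrac{1}{2}\,\lVert y - D\alpha \rVert_2^2
  + \lambda \lVert \alpha \rVert_1 ,
\]

where y is the signal to be represented, the columns of D form the
overcomplete dictionary of basis functions, and the l1 penalty weighted by
lambda > 0 drives most coefficients of alpha to zero, yielding a sparse
representation.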