Research

I enjoy working on various aspects of machine learning and high-dimensional statistics. I am especially interested in understanding and exploiting the intrinsic structure in data (e.g., manifold or sparse structure) to design effective learning algorithms. See my research statement for more details.

Selected Publications

  • Time-accuracy tradeoffs in kernel prediction: controlling prediction quality. Samory Kpotufe and Nakul Verma. Journal of Machine Learning Research (JMLR), 2017. [pdf] [code]
  • Sample complexity of learning Mahalanobis distance metrics. Nakul Verma and Kristin Branson. Neural Information Processing Systems (NIPS), 2015. [pdf] [talk] [poster]
  • Distance preserving embeddings for general n-dimensional manifolds (a.k.a. An algorithmic realization of Nash's embedding theorem). Nakul Verma. Journal of Machine Learning Research (JMLR), 2013. [pdf] [oldpdf] [slides] [video] [poster]
  • Efficient energy management and data recovery in sensor networks using latent variables based tensor factorization. Bojan Milosevic, Jinseok Yang, Nakul Verma, Sameer Tilak, Piero Zappi, Elisabetta Farella, Luca Benini, and Tajana Rosing. Conference on Modelling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), 2013.
  • Learning from data with low intrinsic dimension. Nakul Verma. Ph.D. thesis, Dept. of Computer Science and Engineering, UC San Diego, 2012. [pdf]
  • Distance preserving embeddings for general n-dimensional manifolds (a.k.a. An algorithmic realization of Nash's embedding theorem). Nakul Verma. Conference on Learning Theory (COLT), 2012. [pdf] [oldpdf] [slides] [video] [poster]
  • Learning hierarchical similarity metrics. Nakul Verma, Dhruv Mahajan, Sundararajan Sellamanickam, and Vinod Nair. Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [pdf] [poster]
  • A note on random projections for preserving paths on a manifold. Nakul Verma. UC San Diego Tech. Report CS2011-0971, 2011. [pdf]
  • Latent variables based data estimation for sensing applications. Nakul Verma, Piero Zappi, and Tajana Rosing. Conference on Intelligent Sensors, Sensor Networks, and Information Processing (ISSNIP), 2011.
  • Multiple instance learning with manifold bags. Boris Babenko, Nakul Verma, Piotr Dollar, and Serge Belongie. International Conference on Machine Learning (ICML), 2011. [pdf] [slides] [poster]
  • Which spatial partition trees are adaptive to intrinsic dimension? Nakul Verma, Samory Kpotufe, and Sanjoy Dasgupta. Conference on Uncertainty in Artificial Intelligence (UAI), 2009. [pdf] [poster] [software]
  • Mathematical advances in manifold learning. Nakul Verma. Survey, UC San Diego Tech. Report, 2008. [pdf] [slides]
  • Learning the structure of manifolds using random projections. Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. Neural Information Processing Systems (NIPS), 2007. [pdf] [poster] [software]
  • A concentration theorem for projections. Sanjoy Dasgupta, Daniel Hsu, and Nakul Verma. Conference on Uncertainty in Artificial Intelligence (UAI), 2006. [pdf] [poster]

Talks

  • Distance preserving embeddings for Riemannian manifolds [slides]
    • Carnegie Mellon University, Machine Learning Department (Aarti Singh)
    • IBM Research, Almaden (Ken Clarkson)
    • University of Washington, Math Department (Marina Meila)
    • Yahoo Labs, Bangalore (Dhruv Mahajan)
  • An introduction to statistical theory of learning [slides]
    • Neurotheory seminar, Janelia Research Campus, HHMI (Shaul Druckmann)
  • A tutorial on metric learning with some recent advances [slides]
    • Bay Area Machine Learning Group (Tony Tran)

Software

Spatial trees are recursive space-partitioning data structures that help organize high-dimensional data. They can be used to analyze the underlying data density, perform fast nearest-neighbor searches, and produce high-quality vector quantizations. Here we implement several instantiations (KD-tree, RP-tree, PCA-tree) to study their relative strengths.
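As a rough illustration (this is a sketch, not the released software), the three instantiations share one recursive skeleton and differ only in how each node chooses its split direction: a coordinate axis (KD), a random unit vector (RP), or the top principal direction (PCA). A simple "defeatist" descent to a single leaf then gives a fast approximate nearest neighbor. All function names and defaults below are hypothetical.

```python
import numpy as np

def build_tree(data, min_leaf=10, rule="kd", depth=0, rng=None):
    """Recursively partition points; `rule` picks the split direction."""
    rng = rng if rng is not None else np.random.default_rng(0)
    if len(data) <= min_leaf:
        return {"leaf": True, "points": data}
    if rule == "kd":               # KD-tree: cycle through coordinate axes
        direction = np.eye(data.shape[1])[depth % data.shape[1]]
    elif rule == "rp":             # RP-tree: random unit direction
        direction = rng.standard_normal(data.shape[1])
        direction /= np.linalg.norm(direction)
    else:                          # PCA-tree: top principal direction
        centered = data - data.mean(axis=0)
        direction = np.linalg.svd(centered, full_matrices=False)[2][0]
    proj = data @ direction
    threshold = np.median(proj)    # split at the median projection
    left, right = data[proj <= threshold], data[proj > threshold]
    if len(left) == 0 or len(right) == 0:   # degenerate split: stop here
        return {"leaf": True, "points": data}
    return {"leaf": False, "dir": direction, "t": threshold,
            "left": build_tree(left, min_leaf, rule, depth + 1, rng),
            "right": build_tree(right, min_leaf, rule, depth + 1, rng)}

def defeatist_nn(tree, query):
    """Descend to one leaf, then scan that leaf for the closest point.
    Fast, but only approximate: the true neighbor may lie in another leaf."""
    while not tree["leaf"]:
        side = "left" if query @ tree["dir"] <= tree["t"] else "right"
        tree = tree[side]
    pts = tree["points"]
    return pts[np.argmin(np.linalg.norm(pts - query, axis=1))]
```

The median split keeps the tree balanced (depth O(log n)), so a defeatist query touches only one root-to-leaf path plus a small leaf scan; how well each split rule adapts to low intrinsic dimension is exactly what the papers above study.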

Useful Links