Suggested topics

Johnson-Lindenstrauss

Dasgupta and Gupta, An elementary proof of a theorem of Johnson and Lindenstrauss

Ailon and Chazelle, Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform

Ailon and Liberty, Almost optimal unrestricted fast Johnson-Lindenstrauss transform

Kane and Nelson, Sparser Johnson-Lindenstrauss transforms
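
The lemma these papers refine fits in a few lines; a minimal NumPy sketch with a dense Gaussian projection (the fast and sparse transforms above replace this matrix with structured or sparse ones; the dimensions here are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300            # points, ambient dim, target dim
X = rng.standard_normal((n, d))

# Dense Gaussian projection scaled by 1/sqrt(k): with k = O(log n / eps^2),
# all pairwise distances are preserved up to a factor 1 +/- eps w.h.p.
P = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ P.T

# Distortion on one pair of points: the squared ratio is a chi-squared
# with k degrees of freedom divided by k, so it concentrates near 1.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
```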

Kernel approximation methods

Rahimi and Recht, Random Features for large-scale kernel machines

Rahimi and Recht, Weighted sums of random kitchen sinks

Le, Sarlos, and Smola, Fastfood -- approximating kernel expansions in loglinear time

Bach, Sharp analysis of low-rank kernel matrix approximations

Fowlkes, Belongie, Chung, and Malik, Spectral grouping using the Nystrom method

Gittens and Mahoney, Revisiting the Nystrom method for improved large-scale machine learning
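
As a concrete instance of the random-features idea, here is a sketch of Rahimi-Recht random Fourier features for the Gaussian kernel (Fastfood and the Nystrom papers above trade this dense construction for faster or data-dependent ones; the widths and feature counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 5, 2000, 0.5         # input dim, num features, RBF width

# For k(x, y) = exp(-gamma * ||x - y||^2), sample frequencies from the
# kernel's Fourier transform: a Gaussian with variance 2 * gamma.
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    """Random feature map: E[phi(x) . phi(y)] equals the kernel value."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-gamma * np.linalg.norm(x - y) ** 2)
approx = phi(x) @ phi(y)
```

The approximation error decays like 1/sqrt(D), which is why a few thousand features already suffice here.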

Compressed sensing and sparse regression

Donoho, Compressed sensing

Candes, The restricted isometry property and its implications for compressed sensing

Rudelson and Zhou, Reconstruction from anisotropic random measurements

Bickel, Ritov, and Tsybakov, Simultaneous analysis of Lasso and Dantzig selector

Zhang, On the consistency of feature selection using greedy least squares regression
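
A quick empirical look at the restricted isometry property behind these papers: a Gaussian measurement matrix with n = O(s log(d/s)) rows approximately preserves the norm of sparse vectors. This sketch only checks random sparse vectors, a sanity check rather than the uniform guarantee over all supports that RIP actually asserts:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 80, 400, 5               # measurements, ambient dim, sparsity

# Gaussian measurement matrix with unit-norm columns in expectation.
A = rng.standard_normal((n, d)) / np.sqrt(n)

# ||Ax|| / ||x|| should concentrate near 1 over random s-sparse vectors.
ratios = []
for _ in range(500):
    x = np.zeros(d)
    x[rng.choice(d, s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
ratios = np.array(ratios)
```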

Locality sensitive hashing

Andoni, Datar, Immorlica, Indyk, and Mirrokni, Locality-sensitive hashing using stable distributions

Andoni and Indyk, Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions

Cayton and Dasgupta, A learning framework for nearest neighbor
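
A minimal sketch of one hash family from the stable-distributions construction above: h(x) = floor((a.x + b) / r) with Gaussian (2-stable) a. Collision counts over many independent hash functions estimate the collision probability, which decreases with distance (bucket width and dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 20, 4.0, 200             # dimension, bucket width, num hash fns

# m independent hash functions from the 2-stable LSH family.
A = rng.standard_normal((m, d))
B = rng.uniform(0, r, size=m)

def hashes(x):
    return np.floor((A @ x + B) / r).astype(int)

x = rng.standard_normal(d)
near = x + 0.05 * rng.standard_normal(d)   # small perturbation of x
far = rng.standard_normal(d)               # an unrelated point

# Nearby points collide in far more buckets than distant points do.
near_coll = int(np.sum(hashes(x) == hashes(near)))
far_coll = int(np.sum(hashes(x) == hashes(far)))
```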

Spatial partition trees

Fakcharoenphol, Rao, and Talwar, A tight bound on approximating arbitrary metrics by tree metrics

Beygelzimer, Kakade, and Langford, Cover trees for nearest neighbor

Dasgupta and Freund, Random projection trees and low dimensional manifolds

Verma, Kpotufe, and Dasgupta, Which spatial partition trees are adaptive to intrinsic dimension?

Dhesi and Kar, Random projection trees revisited

McFee and Lanckriet, Large-scale music similarity search with spatial trees
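
For concreteness, a stripped-down random projection tree in the spirit of Dasgupta-Freund, omitting their jittered split and cell-diameter rules: recursively split the points at the median of a random 1-d projection (leaf size and data are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def rp_tree(X, idx, max_leaf=10):
    """Recursively split index set idx by the median of a random projection."""
    if len(idx) <= max_leaf:
        return idx
    u = rng.standard_normal(X.shape[1])    # random split direction
    proj = X[idx] @ u
    med = np.median(proj)
    left, right = idx[proj <= med], idx[proj > med]
    if len(left) == 0 or len(right) == 0:  # degenerate split: stop
        return idx
    return (rp_tree(X, left, max_leaf), rp_tree(X, right, max_leaf))

def leaves(t):
    return [t] if isinstance(t, np.ndarray) else leaves(t[0]) + leaves(t[1])

X = rng.standard_normal((200, 20))
sizes = [len(l) for l in leaves(rp_tree(X, np.arange(200)))]
```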

Correlation / covariance analysis

Hardoon, Szedmak, and Shawe-Taylor, Canonical correlation analysis: an overview with application to learning methods

Ando and Zhang, Two-view feature generation model for semi-supervised learning

Chaudhuri and Rao, Learning mixtures of product distributions using correlations and independence

Chaudhuri, Kakade, Livescu, and Sridharan, Multi-view clustering with canonical correlation analysis

Foster, Johnson, Kakade, and Zhang, Multi-view dimensionality reduction via canonical correlation analysis

Hsu, Kakade, and Zhang, A spectral algorithm for learning hidden Markov models
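
A small sketch of classical CCA, the tool common to the papers above: whiten each view, then take the SVD of the cross-covariance; the singular values are the canonical correlations. The two-view data here is synthetic, with a single shared latent signal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.standard_normal(n)                 # latent signal shared by both views

# Two noisy linear views of z; CCA should find one strong shared direction.
X = np.outer(z, [1.0, 0.5]) + 0.5 * rng.standard_normal((n, 2))
Y = np.outer(z, [0.3, -1.0, 0.8]) + 0.5 * rng.standard_normal((n, 3))

def whiten(M):
    """Center and whiten so the empirical covariance is the identity."""
    M = M - M.mean(axis=0)
    evals, evecs = np.linalg.eigh(M.T @ M / len(M))
    return M @ evecs / np.sqrt(evals)

# Canonical correlations = singular values of the whitened cross-covariance.
corrs = np.linalg.svd(whiten(X).T @ whiten(Y) / n, compute_uv=False)
```

Because only one latent dimension is shared, the top canonical correlation is large while the second is near zero.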

Fast linear algebra

Mahoney, Randomized algorithms for matrices and data

Halko, Martinsson, and Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
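
The proto-algorithm of the Halko-Martinsson-Tropp survey fits in a few lines: sketch the range with a Gaussian test matrix, orthonormalize, and do the SVD in the small subspace. The test matrix below is exactly rank 5, so the sketch captures its range essentially exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# An exactly rank-5 test matrix.
m, n, r = 200, 100, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

def randomized_svd(A, k, p=5):
    """Randomized range finder with oversampling p, then a small SVD."""
    G = rng.standard_normal((A.shape[1], k + p))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ G)                     # orthonormal basis for sketched range
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U, s, Vt

U, s, Vt = randomized_svd(A, 5)
err = np.linalg.norm(A - (U[:, :5] * s[:5]) @ Vt[:5]) / np.linalg.norm(A)
```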

Matrix tail inequalities

Tropp, User-friendly tools for random matrices: an introduction
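
An empirical illustration of the flavor of statement these tools make quantitative: the sample covariance of n isotropic Gaussian vectors deviates from the identity in spectral norm at rate roughly sqrt(d/n):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50

# Spectral-norm error of the sample covariance at two sample sizes;
# matrix Bernstein/Chernoff bounds predict decay like sqrt(d/n).
errs = []
for n in (500, 5000):
    X = rng.standard_normal((n, d))
    errs.append(np.linalg.norm(X.T @ X / n - np.eye(d), 2))
```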

Planted partition models

McSherry, Spectral partitioning of random graphs

Bshouty and Long, Finding planted partitions in nearly linear time using arrested spectral clustering

Chaudhuri, Chung, and Tsiatas, Spectral clustering of graphs with general degrees in the extended planted partition model

Mossel, Neeman, and Sly, Stochastic block models and reconstruction
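
In a well-separated regime, the basic spectral recovery these papers analyze works in a few lines: the eigenvector of the second-largest adjacency eigenvalue splits the two planted blocks by sign (block sizes and edge probabilities below are arbitrary, chosen well inside the recoverable regime):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 200, 0.5, 0.1            # block size, within/between edge probs

# Planted partition: two blocks of n nodes, edge prob p inside, q across.
labels = np.repeat([0, 1], n)
probs = np.where(labels[:, None] == labels[None, :], p, q)
upper = np.triu(rng.random((2 * n, 2 * n)) < probs, 1)
A = (upper + upper.T).astype(float)

# McSherry-style spectral recovery via the second-largest eigenvalue.
vals, vecs = np.linalg.eigh(A)     # eigenvalues in ascending order
pred = (vecs[:, -2] > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```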

Learning mixture models

Dasgupta, Learning mixtures of Gaussians

Arora and Kannan, Learning mixtures of separated nonspherical Gaussians

Dasgupta and Schulman, A probabilistic analysis of EM for mixtures of separated, spherical Gaussians

Vempala and Wang, A spectral algorithm for learning mixtures of distributions

Achlioptas and McSherry, On spectral learning of mixtures of distributions

Brubaker and Vempala, Isotropic PCA and affine-invariant clustering

Belkin and Sinha, Polynomial learning of distribution families

Moitra and Valiant, Settling the polynomial learnability of mixtures of Gaussians

Hsu and Kakade, Learning mixtures of spherical Gaussians: moment methods and spectral decompositions

Chaudhuri, Dasgupta, and Vattani, Learning mixtures of Gaussians using the k-means algorithm

Kumar and Kannan, Clustering with spectral norm and the k-means algorithm

Awasthi and Sheffet, Improved spectral-norm bounds for clustering

Nellore and Ward, Recovery guarantees for exemplar-based clustering
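
A small sketch of the spectral idea running through several of these papers (Vempala-Wang in particular): for a mixture of two spherical Gaussians, the top singular direction of the centered data aligns with the line between the means, so a 1-d projection separates the clusters. The separation below is deliberately generous:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, sep = 100, 200, 8.0

# Mixture of two spherical Gaussians with means sep apart along a
# random direction u; labels are kept only to score the clustering.
u = rng.standard_normal(d)
u /= np.linalg.norm(u)
labels = rng.integers(0, 2, n)
X = rng.standard_normal((n, d)) + np.outer(2 * labels - 1, (sep / 2) * u)

# Project onto the top right-singular vector and threshold at zero.
Xc = X - X.mean(axis=0)
proj = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]
pred = (proj > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```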

Robust PCA

Candes, Li, Ma, and Wright, Robust Principal Component Analysis?

Chandrasekaran, Sanghavi, Parrilo, and Willsky, Rank-sparsity incoherence for matrix decomposition

Hsu, Kakade, and Zhang, Robust matrix decomposition with sparse corruptions

McCoy and Tropp, Two proposals for robust PCA using semidefinite programming

Hardt and Moitra, Can we reconcile robustness and efficiency in unsupervised learning?

Xu, Caramanis, and Mannor, Principal component analysis with contaminated data: the high dimensional case

k-means clustering

Kumar, Sabharwal, and Sen, A simple linear time (1+\epsilon)-approximation algorithm for k-means clustering in any dimensions

Kanungo, Mount, Netanyahu, Piatko, Silverman, and Wu, A local search approximation algorithm for k-means clustering

Arthur and Vassilvitskii, k-means++: the advantages of careful seeding
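
The k-means++ seeding rule itself is short; a sketch on synthetic blobs (the data and cluster count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three well-separated planar blobs.
X = np.vstack([rng.normal((0, 0), 1, (50, 2)),
               rng.normal((10, 0), 1, (50, 2)),
               rng.normal((0, 10), 1, (50, 2))])

def kmeans_pp_seed(X, k, rng):
    """k-means++ seeding: each new center is drawn with probability
    proportional to its squared distance from the nearest center so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

C = kmeans_pp_seed(X, 3, rng)
```

The D^2-weighting makes it likely that each blob contributes one seed, which is what drives the O(log k) approximation guarantee.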

Manifold learning

Cayton, Algorithms for manifold learning

Verma, Distance preserving embeddings for general n-dimensional manifolds

Belkin and Niyogi, Towards a theoretical foundation for Laplacian based manifold methods

Niyogi, Smale, and Weinberger, Finding homology of submanifolds with high confidence from random samples

Sindhwani, Belkin, and Niyogi, The geometric basis of semi-supervised learning

Sparse PCA / hidden clique

Johnstone and Lu, Sparse principal components analysis

Berthet and Rigollet, Complexity theoretic lower bounds for sparse principal component detection

Berthet and Rigollet, Optimal detection of sparse principal components in high dimensions

Yuan and Zhang, Truncated power method for sparse eigenvalue problems

Ma, Sparse principal component analysis and iterative thresholding

Alon, Krivelevich, and Sudakov, Finding a large hidden clique in a random graph

Feige and Ron, Finding hidden cliques in linear time

Dekel, Gurel-Gurevich, and Peres, Finding hidden cliques in linear time with high probability

Deshpande and Montanari, Finding hidden cliques of size \sqrt{N/e} in nearly linear time
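
A sketch of the spectral starting point of the Alon-Krivelevich-Sudakov line: plant a k-clique in G(n, 1/2); for k well above sqrt(n), the top eigenvector of the centered adjacency matrix localizes on the clique (the papers add cleanup steps and push k down; the sizes here are arbitrary but comfortably above threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 100                    # graph size, planted clique size

# G(n, 1/2) with a planted k-clique.
clique = rng.choice(n, k, replace=False)
upper = np.triu(rng.random((n, n)) < 0.5, 1)
A = (upper + upper.T).astype(float)
A[np.ix_(clique, clique)] = 1.0
np.fill_diagonal(A, 0.0)

# Center the entries to +/- 1/2 and read off the top eigenvector: its
# k largest-magnitude coordinates should mostly lie on the clique.
M = A - 0.5
np.fill_diagonal(M, 0.0)
v = np.linalg.eigh(M)[1][:, -1]
guess = np.argsort(np.abs(v))[-k:]
precision = len(set(guess) & set(clique)) / k
```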

(more to come)