Daniel Hsu | Research papers

I work on algorithmic statistics and machine learning.

Many of my research papers are available on arXiv.

[word cloud image based on paper titles]

Preprints

Loss minimization and parameter estimation with heavy tails.
Daniel Hsu and Sivan Sabato.
Preprint, 2013.
[arxiv, slides for a related talk]

Analysis of a randomized approximation scheme for matrix multiplication.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Preprint, 2012.
[arxiv]

Tensor decompositions for learning latent variable models.
Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky.
Preprint, 2012.
[arxiv, slides for a related talk]

Postprints

2014

A spectral algorithm for learning class-based n-gram models of natural language.
Karl Stratos, Do-kyum Kim, Michael Collins, and Daniel Hsu.
Thirtieth Conference on Uncertainty in Artificial Intelligence, 2014.
[pdf]

Taming the monster: a fast and simple algorithm for contextual bandits.
Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire.
Thirty-First International Conference on Machine Learning, 2014.
[pdf, arxiv, link, slides]

Heavy-tailed regression with a generalized median-of-means.
Daniel Hsu and Sivan Sabato.
Thirty-First International Conference on Machine Learning, 2014.
[arxiv, link]

A tensor approach to learning mixed membership community models.
Anima Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade.
Journal of Machine Learning Research, 15(Jun):2239-2312, 2014.
[pdf, arxiv, link]

Random design analysis of ridge regression.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Foundations of Computational Mathematics, 14(3):569-600, 2014.
[pdf, arxiv, link]

2013

Contrastive learning using spectral methods.
James Zou, Daniel Hsu, David Parkes, and Ryan Adams.
Advances in Neural Information Processing Systems 26, 2013.
[pdf]

When are overcomplete topic models identifiable? Uniqueness of tensor Tucker decompositions with structured sparsity.
Anima Anandkumar, Daniel Hsu, Majid Janzamin, and Sham M. Kakade.
Advances in Neural Information Processing Systems 26, 2013.

Stochastic convex optimization with bandit feedback.
Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Alexander Rakhlin.
SIAM Journal on Optimization, 23(1):213-240, 2013.
[pdf, arxiv, link]

A tensor spectral approach to learning mixed membership community models.
Anima Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade.
Twenty-Sixth Annual Conference on Learning Theory, 2013.
[pdf (short version), arxiv (full version)]

Learning linear Bayesian networks with latent variables.
Anima Anandkumar, Daniel Hsu, Adel Javanmard, and Sham M. Kakade.
Thirtieth International Conference on Machine Learning, 2013.
[pdf (short version), arxiv (full version)]

Learning mixtures of spherical Gaussians: moment methods and spectral decompositions.
Daniel Hsu and Sham M. Kakade.
Fourth Innovations in Theoretical Computer Science, 2013.
[pdf (short version), arxiv (full version), talk slides]

2012

A spectral algorithm for learning hidden Markov models.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Journal of Computer and System Sciences, 78(5):1460-1480, 2012.
[pdf, arxiv, link, errata]

A tail inequality for quadratic forms of subgaussian random vectors.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Electronic Communications in Probability, 17(52):1-6, 2012.
[pdf, link]

A spectral algorithm for latent Dirichlet allocation.
Anima Anandkumar, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Yi-Kai Liu.
Advances in Neural Information Processing Systems 25, 2012.
[pdf (short version), arxiv (full version)]

Identifiability and unmixing of latent parse trees.
Daniel Hsu, Sham M. Kakade, and Percy Liang.
Advances in Neural Information Processing Systems 25, 2012.
[pdf, arxiv]

Learning mixtures of tree graphical models.
Anima Anandkumar, Daniel Hsu, Furong Huang, and Sham M. Kakade.
Advances in Neural Information Processing Systems 25, 2012.

Convergence rates for differentially private statistical estimation.
Kamalika Chaudhuri and Daniel Hsu.
Twenty-Ninth International Conference on Machine Learning, 2012.
[pdf]

A method of moments for mixture models and hidden Markov models.
Anima Anandkumar, Daniel Hsu, and Sham M. Kakade.
Twenty-Fifth Annual Conference on Learning Theory, 2012.
[pdf, arxiv, talk slides, slides for a related talk]

Random design analysis of ridge regression.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Twenty-Fifth Annual Conference on Learning Theory, 2012.
[pdf (short version), arxiv (full version)]

Tail inequalities for sums of random matrices that depend on the intrinsic dimension.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Electronic Communications in Probability, 17(14):1-13, 2012.
[pdf, link, errata]

2011

Spectral methods for learning multivariate latent tree structure.
Anima Anandkumar, Kamalika Chaudhuri, Daniel Hsu, Sham M. Kakade, Le Song, and Tong Zhang.
Advances in Neural Information Processing Systems 24, 2011.
[pdf, arxiv]

Stochastic convex optimization with bandit feedback.
Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Alexander Rakhlin.
Advances in Neural Information Processing Systems 24, 2011.
[pdf (journal version), arxiv]

Efficient optimal learning for contextual bandits.
Miroslav Dudik, Daniel Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, and Tong Zhang.
Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, 2011.
[pdf]

Sample complexity bounds for differentially private learning.
Kamalika Chaudhuri and Daniel Hsu.
Twenty-Fourth Annual Conference on Learning Theory, 2011.
[pdf, talk slides]

Robust matrix decomposition with sparse corruptions.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
IEEE Transactions on Information Theory, 57(11):7221-7234, 2011.
[pdf, link]

2010

Agnostic active learning without constraints.
Alina Beygelzimer, Daniel Hsu, John Langford, and Tong Zhang.
Advances in Neural Information Processing Systems 23, 2010.
[pdf]

An online learning-based framework for tracking.
Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu.
Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, 2010.
[pdf]

Algorithms for active learning.
Daniel Hsu.
Ph.D. dissertation, University of California, San Diego, 2010.
[pdf]

2009

Multi-label prediction via compressed sensing.
Daniel Hsu, Sham M. Kakade, John Langford, and Tong Zhang.
Advances in Neural Information Processing Systems 22, 2009.
[pdf, arxiv, talk slides]

A parameter-free hedging algorithm.
Kamalika Chaudhuri, Yoav Freund, and Daniel Hsu.
Advances in Neural Information Processing Systems 22, 2009.
[pdf, note about epsilon-quantile regret]

A spectral algorithm for learning hidden Markov models.
Daniel Hsu, Sham M. Kakade, and Tong Zhang.
Twenty-Second Annual Conference on Learning Theory, 2009.
[pdf (journal version), arxiv, errata]

2008 and earlier

Hierarchical sampling for active learning.
Sanjoy Dasgupta and Daniel Hsu.
Twenty-Fifth International Conference on Machine Learning, 2008.
[pdf]

A general agnostic active learning algorithm.
Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni.
Advances in Neural Information Processing Systems 20, 2007.
[pdf]

On-line estimation with the multivariate Gaussian distribution.
Sanjoy Dasgupta and Daniel Hsu.
Twentieth Annual Conference on Learning Theory, 2007.
[pdf]

A concentration theorem for projections.
Sanjoy Dasgupta, Daniel Hsu, and Nakul Verma.
Twenty-Second Conference on Uncertainty in Artificial Intelligence, 2006.
[pdf]