Learning Poisson Binomial Distributions
C. Daskalakis, I. Diakonikolas, and R. Servedio.
44th Annual ACM Symposium on Theory of Computing (STOC), 2012.


Abstract:

We consider a basic problem in unsupervised learning: learning an unknown \emph{Poisson Binomial Distribution}. A Poisson Binomial Distribution (PBD) over $\{0,1,\dots,n\}$ is the distribution of a sum $X = X_1 + \cdots + X_n$ of $n$ independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 \cite{Poisson:37} and are a natural $n$-parameter generalization of the familiar Binomial Distribution. We work in a framework where the learner is given access to independent draws from the distribution and must (with high probability) output a hypothesis distribution which has total variation distance at most $\epsilon$ from the unknown target PBD. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal.
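To make the setup concrete, here is a short Python sketch (ours, not from the paper) that draws samples from a PBD given a vector of Bernoulli expectations, computes the exact PBD probability mass function by repeated convolution, and measures total variation distance between two distributions on $\{0,1,\dots,n\}$; all function names are our own.

import numpy as np

def sample_pbd(p, num_samples, rng=None):
    """Draw samples of X = X_1 + ... + X_n with X_i ~ Bernoulli(p[i])."""
    rng = np.random.default_rng() if rng is None else rng
    # Each row is one draw of the n independent Bernoulli variables.
    return (rng.random((num_samples, len(p))) < np.asarray(p)).sum(axis=1)

def pbd_pmf(p):
    """Exact pmf of the PBD, convolving in one Bernoulli at a time."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf  # length n + 1; entry k is Pr[X = k]

def total_variation(q, r):
    """Total variation distance between two pmfs on {0, ..., n}."""
    m = max(len(q), len(r))
    q = np.pad(q, (0, m - len(q)))
    r = np.pad(r, (0, m - len(r)))
    return 0.5 * np.abs(q - r).sum()

For instance, the total variation distance between pbd_pmf(p) and the empirical histogram of sample_pbd(p, m) shrinks as the number of draws m grows.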

We essentially settle the complexity of the learning problem for this basic class of distributions. As our main result we give a highly efficient algorithm which learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^3)$ samples, \emph{independent of $n$}. The running time of the algorithm is \emph{quasilinear} in the size of its input data, i.e., $\tilde{O}(\log(n)/\epsilon^3)$ bit-operations (observe that each draw from the distribution is a $\log(n)$-bit string). This is nearly optimal, since any algorithm must use $\Omega(1/\epsilon^2)$ samples. We also give positive and negative results for some extensions of this learning problem.
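For contrast with the guarantee above, the following deliberately naive Python baseline (ours; it is \emph{not} the paper's algorithm) illustrates the draws-in, hypothesis-out interface: it fits a single Binomial by matching the first two empirical moments, also forms the empirical histogram, and keeps whichever hypothesis is closer in total variation to a held-out histogram. It reuses total_variation from the sketch above; all names are our own.

import numpy as np
from scipy.stats import binom

def fit_binomial_moments(samples, n):
    """Illustrative moment matching: choose k, q so that Binomial(k, q)
    roughly matches the sample mean and variance."""
    mean, var = samples.mean(), samples.var()
    if mean <= 0:
        pmf = np.zeros(n + 1)
        pmf[0] = 1.0  # every draw was 0: put all mass at 0
        return pmf
    q = float(np.clip(1.0 - var / mean, 1e-9, 1 - 1e-9))  # Binomial: var = mean * (1 - q)
    k = int(np.clip(round(mean / q), 1, n))                # Binomial: mean = k * q
    return binom.pmf(np.arange(n + 1), k, q)

def empirical_pmf(samples, n):
    """Empirical histogram over {0, ..., n}."""
    counts = np.bincount(samples, minlength=n + 1)
    return counts / counts.sum()

def learn_pbd_baseline(train, holdout, n):
    """Return whichever candidate is closer in TV to the holdout histogram."""
    target = empirical_pmf(holdout, n)
    candidates = [fit_binomial_moments(train, n), empirical_pmf(train, n)]
    return min(candidates, key=lambda h: total_variation(h, target))

Note the gap this baseline leaves: the empirical histogram needs a sample size that grows with $n$ to be accurate, whereas the paper's algorithm achieves $\epsilon$-accuracy with $\tilde{O}(1/\epsilon^3)$ samples regardless of $n$.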

pdf of conference version

pdf of full version


