PAC Analogues of Perceptron and Winnow via Boosting the Margin.
R. Servedio. Machine Learning 47(2/3), 2002, pp. 133-152 (special issue for COLT 2000).
Preliminary version in Thirteenth Annual Conference on Computational Learning Theory (COLT), 2000, pp. 148-157.
Best Student Paper award, COLT 2000.


We describe a novel family of PAC-model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, suggesting that these well-studied online algorithms in some sense correspond to instances of boosting. We show that the new algorithms can be viewed as natural PAC analogues of the online $p$-norm algorithms which have recently been studied by Grove, Littlestone, and Schuurmans \cite{GLS} and Gentile and Littlestone \cite{GL}. As special cases, taking $p=2$ and $p=\infty$ yields natural boosting-based PAC analogues of Perceptron and Winnow respectively. The $p = \infty$ case of our algorithm can also be viewed as a generalization (with an improved sample complexity bound) of Jackson and Craven's PAC-model boosting-based algorithm for learning ``sparse perceptrons'' \cite{JC}. The analysis of the generalization error of the new algorithms relies on techniques from the theory of large margin classification.
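To make the online side of this correspondence concrete, the following is a minimal sketch of the quasi-additive $p$-norm algorithm in the style of Grove, Littlestone, and Schuurmans: an auxiliary vector $\theta$ receives additive Perceptron-style updates on mistakes, and the prediction weights are obtained from $\theta$ through a $p$-norm link function. For $p=2$ the link is the identity and the algorithm is exactly the classical Perceptron; as $p$ grows the behavior approaches Winnow's multiplicative updates. The function names and the specific normalization below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def p_norm_link(theta, p):
    # Link function mapping the additively-updated vector theta to the
    # prediction weights w.  For p = 2 this reduces to the identity
    # (norm ** (p - 2) == 1), recovering the classical Perceptron.
    norm = np.linalg.norm(theta, p)
    if norm == 0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (p - 1) / norm ** (p - 2)

def p_norm_perceptron(examples, p=2, epochs=1):
    # Illustrative online p-norm algorithm: predict with sign(w . x),
    # and on each mistake apply the additive update theta += y * x.
    # Labels y are assumed to be in {-1, +1}.
    n = len(examples[0][0])
    theta = np.zeros(n)
    for _ in range(epochs):
        for x, y in examples:
            w = p_norm_link(theta, p)
            if y * np.dot(w, x) <= 0:  # mistake (or zero margin)
                theta = theta + y * np.asarray(x, dtype=float)
    return p_norm_link(theta, p)
```

For example, running `p_norm_perceptron([([1, 0], 1), ([0, 1], -1)], p=2, epochs=2)` returns a weight vector that separates the two points. The boosting-based PAC algorithms of the paper replace these online mistake-driven updates with a weak learner combined by a booster, but exhibit analogous complexity bounds.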

PostScript or PDF of conference version

PDF of journal version
