DNF are Teachable in the Average Case.
H. Lee, R. Servedio, and A. Wan.
Machine Learning, 69: 79--96, 2007.
Preliminary version in Nineteenth Annual Conference on Computational Learning Theory (COLT), 2006, pp. 214--228.


Abstract: We study the average number of well-chosen labeled examples that are required for a helpful teacher to uniquely specify a target function within a concept class. This "average teaching dimension" has been studied in learning theory and combinatorics and is an attractive alternative to the "worst-case" teaching dimension of Goldman and Kearns, which is exponential for many interesting concept classes. Recently, Balbach showed that the classes of 1-decision lists and 2-term DNF each have linear average teaching dimension.
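
For concreteness, these notions can be formalized in the standard way going back to Goldman and Kearns (the notation $TD$ and $\overline{TD}$ below is ours, not necessarily the paper's). For a concept class $C$ over a domain $X$ and a concept $c \in C$, a teaching set for $c$ is a labeled sample $T \subseteq X \times \{0,1\}$ that is consistent with $c$ and with no other concept in $C$. The teaching dimension of $c$ is $TD(c,C) = \min\{|T| : T \mbox{ is a teaching set for } c\}$, the worst-case teaching dimension of $C$ is $TD(C) = \max_{c \in C} TD(c,C)$, and the average teaching dimension is $\overline{TD}(C) = \frac{1}{|C|} \sum_{c \in C} TD(c,C)$.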

As our main result, we extend Balbach's teaching result for 2-term DNF by showing that for any $1 \leq s \leq 2^{\Theta(n)}$, the well-studied concept classes of at-most-$s$-term DNF and at-most-$s$-term monotone DNF each have average teaching dimension $O(ns)$. The proofs use detailed analyses of the combinatorial structure of "most" DNF formulas and monotone DNF formulas. We also establish asymptotic separations between the worst-case and average teaching dimension for various other interesting Boolean concept classes such as juntas and sparse $GF_2$ polynomials.
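
As a purely illustrative companion to the definitions above (not code from the paper), the short Python sketch below computes the worst-case and average teaching dimension of a tiny concept class by exhaustive search; the class used here, monotone conjunctions over three variables, is a stand-in chosen so that brute force is feasible and is not one of the DNF classes analyzed in the paper.

from itertools import combinations, product

# Brute-force teaching-dimension computation for a tiny concept class.
# Illustration only: monotone conjunctions over n = 3 variables stand in
# for the much larger DNF classes studied in the paper.
n = 3
domain = list(product([0, 1], repeat=n))

def conjunction(var_set):
    # Truth table of the monotone conjunction of the variables in var_set
    # (the empty conjunction is the all-ones function).
    return tuple(int(all(x[i] for i in var_set)) for x in domain)

concepts = [conjunction(S) for r in range(n + 1) for S in combinations(range(n), r)]

def teaching_dim(target):
    # Size of a minimum teaching set: the smallest set of examples on which
    # target disagrees with every other concept in the class.
    for k in range(len(domain) + 1):
        for idx in combinations(range(len(domain)), k):
            if all(any(other[i] != target[i] for i in idx)
                   for other in concepts if other != target):
                return k
    return len(domain)

dims = [teaching_dim(c) for c in concepts]
print("worst-case teaching dimension:", max(dims))
print("average teaching dimension:", sum(dims) / len(dims))

The exhaustive search over subsets of examples is exponential in the size of the domain, so it is only workable for toy classes; the results above are of interest precisely because they bound the average teaching dimension of DNF classes analytically rather than by enumeration.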

Postscript or pdf of full version of conference paper

pdf of journal version

