Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas.
V. Feldman, H. Lee, and R. Servedio.
24th Annual Conference on Learning Theory (COLT), 2011.


Much work has been done on learning various classes of ``simple'' monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model.
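For orientation, recall the standard SQ model (this is the textbook definition, included here as background rather than a restatement from the paper): instead of receiving labeled examples, the learner asks statistical queries, each of which the oracle may answer with adversarial additive error up to the query's tolerance. Formally, a query is a pair $(\psi, \tau)$ with $\psi : \{0,1\}^n \times \{-1,1\} \to [-1,1]$ and tolerance $\tau > 0$, and the oracle returns any value $v$ satisfying
\[
\bigl| \, v - \mathbf{E}_{x \sim U}\bigl[\psi(x, f(x))\bigr] \, \bigr| \;\le\; \tau,
\]
where $f$ is the unknown target function and $U$ is the uniform distribution over $\{0,1\}^n$.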

We introduce a new approach to understanding the learnability of ``simple'' monotone functions that is based on a recent characterization of strong SQ learnability by \citet{Simon-2007}. Using this characterization, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned by any polynomial-time SQ algorithm to accuracy $1 - 1/(\log n)^{\Omega(1)}$. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to a certain $\frac{1}{2} + o(1)$ accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of ``mildly hard'' learning problems in either the PAC or the SQ framework. This hardness amplification for learning builds on the ideas in the work of \citet{ODonnell-2002} on hardness amplification for approximating functions by small circuits, and is applicable to a number of other contexts.
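The abstract does not spell out the amplification construction, but its general shape, in the spirit of \citet{ODonnell-2002}, can be sketched as follows (the specific combining function used in the paper is omitted here): starting from a target $f$ on which every efficient learner must err on at least a $\delta$ fraction of inputs, one defines an amplified target
\[
f'\bigl(x^{(1)}, \ldots, x^{(k)}\bigr) \;=\; C\bigl(f(x^{(1)}), \ldots, f(x^{(k)})\bigr)
\]
for a suitably noise-sensitive combining function $C : \{-1,1\}^k \to \{-1,1\}$, chosen so that approximating $f'$ to accuracy noticeably better than $\frac{1}{2}$ would yield an approximation of $f$ to accuracy better than $1 - \delta$. Presumably, the jump from depth 3 to depth 4 reflects the one extra layer contributed by a shallow monotone combining function.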

Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to the problem of learning depth-3 monotone formulas.
