We consider the relative abilities and limitations of computationally efficient algorithms for learning in the presence of noise, under two well-studied and challenging adversarial noise models for learning Boolean functions: malicious noise, in which an adversary can arbitrarily corrupt a random subset of the examples given to the learner; and nasty noise, in which an adversary can arbitrarily corrupt an adversarially chosen subset of the examples given to the learner. We study both the distribution-independent and fixed-distribution settings. Our main results highlight a dramatic difference between these two settings:
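As a point of reference, the following is a minimal sketch (not taken from the paper) contrasting how a sample is corrupted under the two noise models. The target function, the example distribution, and the two adversary callbacks are hypothetical placeholders; only the corruption pattern matters here.

\begin{verbatim}
# Sketch only: contrasts malicious vs. nasty corruption of a labeled sample.
import random

def malicious_sample(m, eta, target, draw_example, adversary_replace):
    """Each example is independently corrupted with probability eta;
    the corrupted positions form a *random* subset of the sample."""
    sample = []
    for _ in range(m):
        x = draw_example()
        if random.random() < eta:
            sample.append(adversary_replace())   # arbitrary (x, y) of the adversary's choice
        else:
            sample.append((x, target(x)))        # clean labeled example
    return sample

def nasty_sample(m, eta, target, draw_example, adversary_choose):
    """The adversary inspects the whole clean sample and corrupts an
    *adversarially chosen* subset of at most eta * m examples."""
    clean = [(x, target(x)) for x in (draw_example() for _ in range(m))]
    budget = int(eta * m)
    corrupt_idx, replacements = adversary_choose(clean, budget)
    sample = list(clean)
    for i, pair in zip(corrupt_idx, replacements):
        sample[i] = pair                         # replace chosen positions arbitrarily
    return sample
\end{verbatim}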
To complement the negative result given in (2) for the fixed-distribution setting, we define a broad and natural class of algorithms, namely those that ignore contradictory examples (ICE). We show that for these algorithms, malicious noise and nasty noise are equivalent up to a factor of two in the noise rate: any efficient ICE learner that succeeds under $\eta$-rate malicious noise can be converted into an efficient learner that succeeds under $\eta/2$-rate nasty noise. We further show that this factor of two is necessary, again under a standard cryptographic assumption.
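To illustrate, here is a minimal sketch of one natural reading of the ICE condition: a learner that discards every instance appearing in the sample with both labels before invoking a hypothetical base learner. This is only an illustrative interpretation of "ignoring contradictory examples", not the paper's formal definition.

\begin{verbatim}
# Sketch only: ICE-style wrapper around a hypothetical base_learner.
from collections import defaultdict

def ice_wrapper(sample, base_learner):
    """Drop every example whose instance x occurs in the sample with
    conflicting labels, then train on the remaining examples.
    Instances are assumed hashable (e.g., tuples of bits)."""
    labels_seen = defaultdict(set)
    for x, y in sample:
        labels_seen[x].add(y)
    filtered = [(x, y) for x, y in sample if len(labels_seen[x]) == 1]
    return base_learner(filtered)
\end{verbatim}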
As a key ingredient in our proofs, we show that the success probability of nasty noise learners can be efficiently amplified in a black-box fashion. Perhaps surprisingly, this was not previously known; the argument turns out to be non-obvious, and we believe the result may be of independent interest.