Despite the representational similarity, as a training algorithm, boosted naive Bayesian (BNB) learning is quite different from backpropagation. BNB has definite advantages: it requires only linear time and constant space, and "hidden" nodes are learned incrementally, starting with the most important ones. On the real-world datasets on which the method has been tried so far, generalization performance is as good as or better than the best published result obtained with any other learning method, and BNB was the winner of the data mining competition at the recent KDD conference.
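To make the training procedure concrete, here is a minimal sketch of boosting with a naive Bayesian base learner, in the spirit of AdaBoost. The smoothing constants, the reweighting rule, and all identifiers are illustrative assumptions, not the paper's exact algorithm; note that each boosting round adds one hypothesis to the ensemble, matching the incremental growth of "hidden" nodes, and each round is a single weighted pass over the data (linear time, constant extra space).

```python
# Sketch: AdaBoost-style boosting with a weighted naive Bayes base
# learner, on binary features with labels in {0, 1}. Illustrative
# assumptions throughout (Laplace smoothing, exponential reweighting).
import math

def train_nb(X, y, w):
    """One weighted naive Bayes hypothesis (the boosting 'weak learner')."""
    n_feat = len(X[0])
    cls_w = {0: 1e-9, 1: 1e-9}                      # weighted class counts
    feat_w = {c: [1.0] * n_feat for c in (0, 1)}    # Laplace prior of 1
    tot_w = {c: [2.0] * n_feat for c in (0, 1)}
    for xi, yi, wi in zip(X, y, w):
        cls_w[yi] += wi
        for j, v in enumerate(xi):
            tot_w[yi][j] += wi
            if v:
                feat_w[yi][j] += wi

    def predict(xi):
        # Class score = log prior + sum of per-feature log-likelihoods.
        scores = {}
        for c in (0, 1):
            s = math.log(cls_w[c])
            for j, v in enumerate(xi):
                p = feat_w[c][j] / tot_w[c][j]
                s += math.log(p if v else 1.0 - p)
            scores[c] = s
        return 0 if scores[0] >= scores[1] else 1
    return predict

def boost_nb(X, y, rounds=5):
    """AdaBoost loop: each round fits one NB hypothesis ('hidden node')."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []                                   # (alpha, hypothesis) pairs
    for _ in range(rounds):
        h = train_nb(X, y, w)
        err = sum(wi for xi, yi, wi in zip(X, y, w) if h(xi) != yi)
        err = min(max(err, 1e-9), 1 - 1e-9)         # avoid log(0) / div by 0
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight: emphasize the examples this round misclassified.
        w = [wi * math.exp(-alpha if h(xi) == yi else alpha)
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]

    def predict(xi):
        vote = sum(a * (1 if h(xi) == 1 else -1) for a, h in ensemble)
        return 1 if vote > 0 else 0
    return predict

# Toy usage: the label simply copies feature 0.
X = [(0, 0), (0, 1), (1, 0), (1, 1)] * 2
y = [0, 0, 1, 1] * 2
clf = boost_nb(X, y, rounds=3)
```

The important first hypotheses get the largest weights `alpha`, which is why the ensemble can be grown starting with the most important components.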
Unlike backpropagation and other standard learning methods, BNB learning can be done in logarithmic time with a linear number of processors. Accordingly, it is more plausible neurocomputationally than other methods as a model of animal learning. A review of experimentally established phenomena of primate supervised learning leads to the conclusion that BNB learning is also more plausible behaviorally, since it alone exhibits the same phenomena.
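The logarithmic-time claim can be illustrated with a small sketch (my reading, not the paper's construction): a naive Bayes class score is a sum of per-feature log-likelihood terms, and a sum of n terms can be combined in O(log n) parallel depth by a pairwise tree reduction, with one processor per term. The function below simulates that reduction sequentially while counting parallel steps; `log_terms` stands in for precomputed log P(feature_j | class) values.

```python
# Sketch of the parallelism argument: summing n log-likelihood terms
# takes ceil(log2(n)) parallel steps with n processors. Sequential
# simulation; each while-iteration models one parallel step.
import math

def tree_reduce(values):
    """Pairwise-sum reduction, returning (total, parallel depth)."""
    vals = list(values)
    depth = 0
    while len(vals) > 1:
        # All pairs at this level could be summed simultaneously.
        vals = [vals[i] + (vals[i + 1] if i + 1 < len(vals) else 0.0)
                for i in range(0, len(vals), 2)]
        depth += 1
    return vals[0], depth

# Eight illustrative conditional probabilities for one class.
log_terms = [math.log(p) for p in (0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2)]
score, depth = tree_reduce(log_terms)
# 8 terms reduce in 3 parallel steps.
```

Backpropagation, by contrast, is inherently sequential across layers and across gradient-descent iterations, which is the basis of the neurocomputational comparison.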