In this talk I will address this issue as I discuss the general problem of learning from examples. First, I will introduce a general framework for characterizing the problem of learning from examples, developed in the context of pattern recognition using neural networks. I will show that the generalization error has two components --- an approximation error due to the finiteness of the neural network (the hypothesis class) and an estimation error due to the finiteness of the data available for learning. I will show how to balance the two kinds of error, resulting in an optimal choice of the network structure that can solve the problem efficiently. I will argue that this kind of balancing act will have to be performed in all learning problems as we search through an infinite class of models for the one of the right complexity. Furthermore, prior information about the problem domain will have to be incorporated for the balancing act to have a chance of succeeding in practical applications.
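The two-component decomposition of the generalization error can be sketched as follows (the notation here is illustrative, not taken from the talk). Writing $f^{*}$ for the target function, $f_{\mathcal{H}}$ for the best approximation within the hypothesis class $\mathcal{H}$, and $\hat{f}_{n}$ for the function learned from $n$ examples, with $E(\cdot)$ the expected error:

$$
E(\hat{f}_{n}) - E(f^{*})
= \underbrace{\bigl[E(f_{\mathcal{H}}) - E(f^{*})\bigr]}_{\text{approximation error}}
+ \underbrace{\bigl[E(\hat{f}_{n}) - E(f_{\mathcal{H}})\bigr]}_{\text{estimation error}}
$$

Enlarging $\mathcal{H}$ (e.g., a bigger network) shrinks the first term but, for fixed $n$, tends to grow the second; the balancing act is the choice of $\mathcal{H}$ that minimizes the sum.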
In the second part of the talk, I will discuss applications of machine learning in the areas of natural language learning, speech recognition, and object detection. In each of these cases, I will show how the particular application relates to the general framework and point out ways in which domain-specific prior information is injected into the learning process to achieve successful performance.