Here are some references on language modeling. We provide these papers because they may be of interest if you want to learn more about current research in language modeling. The background readings are optional -- the lecture slides contain all material required for the class. The first reading (Chen and Goodman) gives a comprehensive overview of different smoothing techniques. The second reading (McAllester and Schapire) proves convergence results for Good-Turing estimators. The remaining references include both survey articles and current research in language modeling.
Stanley Chen and Joshua Goodman. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Harvard University Computer Science, 1998.
David McAllester and Robert E. Schapire. On the Convergence Rate of Good-Turing Estimators. In Proceedings of COLT 2000.
Y. W. Teh. A Hierarchical Bayesian Language Model based on Pitman-Yor Processes. In Proceedings of COLING/ACL 2006.
P. Xu and F. Jelinek. Random Forests in Language Modeling. In Proceedings of EMNLP 2004, Barcelona, Spain, July 2004.
R. Rosenfeld. Two decades of statistical language modeling: where do we go from here? Proceedings of the IEEE, 88(8):1270-1278, August 2000.
S. F. Chen and R. Rosenfeld. A survey of smoothing techniques for ME models. IEEE Transactions on Speech and Audio Processing, 8(1):37-50, January 2000.
C. Chelba and F. Jelinek. Exploiting syntactic structure for language modeling. In Proceedings of COLING-ACL, 1998.
R. Rosenfeld. A maximum entropy approach to adaptive statistical language modeling. Computer Speech and Language, 1996.
Hermann Ney, Ute Essen, and Reinhard Kneser. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language, 8(1):1-38, January 1994.