Introduction to Computational Learning Theory
Instructor: Rocco Servedio.
Office: 517 CSB. Office hours: Fri 9:30-11:30.
Teaching assistants: Clement Canonne (OH: 2-4 Tues, CSB 516),
Igor Carboni Oliveira (OH: 4-6 Fri, CSB 516), Long Chen (OH: 12-2 Fri),
Xiaorui Sun (OH: Wed 11-1, TA room)
Classroom: 535 Mudd
Time: Mon/Wed 1:10-2:25.
Course website (this page): http://www.cs.columbia.edu/~cs4252/
Course email (use for all course-related issues):
coms4252columbia2012 at gmail dot com.
The question "Can machines learn from experience?"
is one that has fascinated people for a long time.
For decades many researchers in computer science
have studied this question from different points of view and
with different motivations.
This course will give an introduction to some of the central
topics in computational learning theory, a field
which studies the above question from a theoretical computer science
perspective. We will study
well-defined mathematical and computational
models of learning in which it is possible
to give precise and rigorous analyses of learning problems and algorithms.
A big focus of the course will be the computational efficiency
of learning in these models. We'll develop computationally
efficient algorithms for certain learning problems, and will see
why efficient algorithms are not likely to exist for other problems.
List of Topics
This is a preliminary list of core topics.
Other topics may be covered depending on how
the semester progresses. Most topics will take several lectures.
- Introduction: what is computational learning theory (and why)?
Basic notions (learning models, concept classes).
- The online mistake-bound learning model. Online algorithms for simple
learning problems (elimination, Perceptron, Winnow).
- General algorithms and lower bounds for online learning
(halving algorithm, Weighted Majority algorithm, VC dimension).
- The Probably Approximately Correct (PAC) learning model: definition
and examples. Online-to-PAC conversions.
- Occam's Razor: learning by finding a consistent hypothesis.
- The VC dimension and uniform convergence.
- Weak versus strong learning: boosting algorithms.
- Computational hardness results for efficient learning, based on cryptography.
- PAC learning from noisy data. Malicious noise and
random classification noise. Learning from Statistical Queries.
- Exact learning from membership and equivalence queries.
Learning monotone DNF and learning finite automata.
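As a small taste of the online algorithms listed above, here is a minimal Perceptron sketch in Python. This is our own illustration, not part of the course materials; the function name and representation are hypothetical.

```python
import numpy as np

def perceptron(examples):
    """Online Perceptron sketch (illustrative only).

    examples: a sequence of (x, y) pairs with y in {-1, +1}.
    On each example, predict sign(w . x); on a mistake (or a zero
    score), update w by adding y * x.
    Returns the final weight vector and the total mistake count.
    """
    d = len(examples[0][0])
    w = np.zeros(d)
    mistakes = 0
    for x, y in examples:
        x = np.asarray(x, dtype=float)
        if y * np.dot(w, x) <= 0:  # mistake: update the weights
            w += y * x
            mistakes += 1
    return w, mistakes
```

The classical guarantee we will prove in class is of this flavor: on linearly separable data with margin gamma and examples of norm at most R, the Perceptron makes at most (R/gamma)^2 mistakes, regardless of the number or order of examples.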
Prerequisites
You should be familiar with the following topics:
Important: Big-O notation and basic analysis of algorithms.
Chapter 3 in "Introduction to Algorithms" by
Cormen, Leiserson, Rivest and Stein (CLRS) is sufficient background.
Important: Basic discrete math and probability. The course
Computer Science 3203 and the "Mathematical Background" section
(Appendix VIII) of CLRS are good sources here.
Important: You should be comfortable with
reading and writing proofs. You will encounter many proofs in class and
will be doing proofs on problem sets.
Chapter 0 of Sipser's book "Introduction to the Theory of Computation"
is a good source here.
Less important but helpful:
Some basic knowledge of the theory of NP-completeness (e.g. Chapter 7 of
Sipser or Chapter 34 of CLRS) and of elementary cryptography
(e.g. pages 881-887 of CLRS) would be helpful, but this knowledge
is not required --
the course will be self-contained with respect to these topics.
Grading and Requirements
The requirements of the course are as follows:
Biweekly problem sets (70% of grade). Your solutions must be typed
and prepared as LaTeX documents
(instructions will be given on how to use LaTeX.)
The biweekly problem sets will be due on Wednesdays by 1:10pm (start of class);
you must bring a hard copy to class and give it to the TA, and
also turn in an electronic copy to the course account coms4252columbia2012.
You are allowed a total of six late days for the semester.
Each late day is exactly 24 hours; late days
cannot be subdivided -- five minutes late counts as one late day.
Note also that problem sets cannot be subdivided with respect to late days
(i.e. you cannot use 1/6 of a late day on a 6-problem problem set by
handing in one problem one day late).
Late days over and beyond the allotted six late days will be
penalized at a rate of 10% per late day (with the same system in place;
five minutes late for one problem counts as one late day for the entire set).
For an exception to this policy, you must have your undergraduate advisor (for undergrads) or your
graduate advisor (for graduate students) contact me.
The problem sets will be challenging; you are advised to start them early.
Final project (20% of grade).
For your final project, you will choose a research paper in computational
learning theory, read it, and submit a written report giving a clear
and coherent exposition, in your own words, of what you learned from it.
The paper can either be on some topic we touched on in class where you
want to learn the topic in more depth and detail, or on a CLT topic that
we didn't get to in class but you are interested in.
Final Projects will be due in December (exact date TBA, but it will be
around the last day of classes). More details will be available later in the
semester on the projects page.
Final exam (10% of grade). The final exam will be held
in class on the final day of class, Monday December 10.
It will be closed-book and closed-note, and will be cumulative --
anything that was covered this semester is fair game. The best way
to prepare for the exam is to go over course notes, readings,
and homework problems and solution sets.
You are encouraged to discuss the course material and the homework
with each other in small groups (2-3 people), but you must list all
partners on your problem set.
Discussion of homework problems may include
brainstorming and verbally discussing possible solution approaches, but must
not go as far as one person telling the others how to solve the problem.
In addition, each person must write up her/his solutions
independently; you may not look at another student's written solutions.
You may consult outside materials, but all materials used must be
appropriately acknowledged, and you must always write up your
solutions in your own words.
Violation of any portion of this policy will result
in a penalty to be assessed at the instructor's discretion. This may include
receiving a zero grade for the assignment in question AND a failing grade
for the whole course, even for the first infraction.
Students are expected to adhere to the Academic Honesty policy of the
Computer Science Department; this policy can be found in full on the department website.
Please contact the instructor with any questions.
If you dispute the grade received for an assignment, you must
submit, in writing, a detailed and clearly stated
argument for what you believe is incorrect and why. This must be submitted
no later than the beginning of class 1 week after the assignment was returned.
(For example, if the assignment were returned to the class on Wednesday, your
regrade request would have to be submitted before the start of class
on the next Wednesday).
Requests for a re-grade after this time will not be accepted.
A written response will be
provided within one week indicating your final score.
Requests for a re-grade of a specific problem may result in a re-grade of
the entire assignment. This re-grade and written response is final.
Keep in mind that a re-grade request may result in the overall score
for an assignment being lowered.
"I therefore perused this Sheet with wonderful
Application, and in about a Day's time discovered that I could not
understand it." -- Henry Fielding, A Journey from This
World to the Next
The textbook for this course is:
This book is available for purchase on-line;
it's also available on reserve in the engineering library, but you will
likely want to have your own copy.
Note that several topics which we'll cover
(particularly early in the semester) are not in the book.
A good source for material which we will cover in the first few weeks
of class are the following two papers:
- Avrim Blum's survey paper
"Online Algorithms in Machine Learning"
This has an overview of the online mistake-bound learning model, the Weighted
Majority algorithm, the online decision list learning algorithm, and much more.
- There are countless online sources where you can find expositions of the
Perceptron algorithm and proofs of convergence. The language tends to
vary from presentation to presentation; one which hews fairly closely
to the presentation we'll give in class can be found here.
There are also many writeups describing the ``kernel trick'' and how
it can be used in conjunction with the Perceptron algorithm.
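As a rough illustration of how the kernel trick combines with the Perceptron (our own sketch, not taken from any particular writeup): instead of maintaining an explicit weight vector, store the examples on which mistakes were made and predict using a weighted sum of kernel evaluations against them.

```python
def kernel_perceptron(examples, kernel):
    """Kernelized Perceptron sketch (illustrative only).

    examples: sequence of (x, y) pairs with y in {-1, +1}.
    kernel: a function K(a, b) computing the inner product in
    feature space. The implicit weight vector is the sum of
    y_i * phi(x_i) over mistake examples, so the score on x is
    sum_i y_i * K(x_i, x).
    Returns the list of (x, y) mistake examples ("support" set).
    """
    support = []
    for x, y in examples:
        score = sum(yi * kernel(xi, x) for xi, yi in support)
        if y * score <= 0:  # mistake: remember this example
            support.append((x, y))
    return support

def predict(support, kernel, x):
    """Predict the label of x from the stored mistake examples."""
    score = sum(yi * kernel(xi, x) for xi, yi in support)
    return 1 if score > 0 else -1
```

With the linear kernel K(a, b) = a . b this behaves exactly like the ordinary Perceptron; substituting, say, the quadratic kernel K(a, b) = (1 + a . b)^2 lets the same code learn concepts that are only linearly separable in an expanded feature space, without ever computing that space explicitly.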
For the unit on boosting, a good reference is Rob Schapire's survey paper
"The Boosting approach to machine learning: An overview".
The first three sections give a (very condensed) treatment of the
AdaBoost algorithm we'll cover in class, and later sections have good
overviews of more advanced material (this is a good starting point
for a project on boosting). Of course, the original paper on
AdaBoost, "A decision-theoretic generalization of on-line learning and
an application to boosting", is a good source as well.
Click here for a handy copy of the AdaBoost algorithm.
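To give a concrete feel for boosting, here is a minimal AdaBoost sketch following the standard description (our own illustrative code with hypothetical names; consult Schapire's survey for the authoritative treatment):

```python
import math

def adaboost(examples, weak_learner, T):
    """Minimal AdaBoost sketch (illustrative only).

    examples: list of (x, y) pairs with y in {-1, +1}.
    weak_learner(examples, D): returns a hypothesis h with
    h(x) in {-1, +1}, trained against distribution D.
    T: number of boosting rounds.
    Returns the weighted-majority-vote combined hypothesis H.
    """
    m = len(examples)
    D = [1.0 / m] * m          # distribution over the examples
    hyps = []                  # list of (alpha, h) pairs
    for _ in range(T):
        h = weak_learner(examples, D)
        # Weighted error of h under the current distribution
        eps = sum(D[i] for i, (x, y) in enumerate(examples) if h(x) != y)
        if eps >= 0.5:         # no advantage over random guessing: stop
            break
        if eps == 0:           # perfect weak hypothesis: keep it and stop
            hyps.append((1.0, h))
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        hyps.append((alpha, h))
        # Reweight: mistakes go up, correct examples go down; normalize
        D = [D[i] * math.exp(-alpha * y * h(x))
             for i, (x, y) in enumerate(examples)]
        Z = sum(D)
        D = [d / Z for d in D]
    def H(x):
        return 1 if sum(a * h(x) for a, h in hyps) > 0 else -1
    return H
```

The key point, which we will prove in class, is that if every weak hypothesis has error at most 1/2 - gamma, the training error of the combined vote H drops exponentially fast in the number of rounds T.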
Click here to get to the homework page.
Here is an anticipated schedule of topics. Note that the ordering
of some topics may change, and we may spend more or less than one lecture per topic.