COMS 4252:
Introduction to Computational Learning Theory
Fall 2018
SYLLABUS
General Information
Introduction
List of Topics
Prerequisites
Grading, Requirements, and Academic Honesty
Readings
Advice
Problem Sets
Schedule of Topics
Link to Courseworks/Piazza
Notes from Lectures
Schedule of Deadlines
General Information
Instructor: Rocco Servedio.
Office: CSB 450 (computer science department chair's office).
Office hours: Fri 9–11
Instructional assistants:
Emmanouil Vasileios Vlatakis Gkaragkounis, Sandip Sinha, Steven Shao. Emmanouil's OH: Tues 1–3pm in CSB 516. Sandip's OH: Thurs 4:30–6:30pm in TA room on 1st floor of Mudd. Steven's OH: Wed 12:30–2:30pm in TA room on 1st floor of Mudd.
Classroom: 451 Computer Science Building
Time: Tues/Thurs 8:40–9:55
Course website (this page): http://www.cs.columbia.edu/~cs4252/
Course email (use to contact teaching staff about any administrative issues):
coms4252columbia2018 at gmail dot com
Introduction
The question "Can machines think?"
is one that has fascinated people for a long time;
see here
for a non-technical perspective (you may need to be on the Columbia
network to access this resource). This is pretty close
to the question "Can machines learn?",
which has been studied from different points of view
by many researchers in computer science.
This course will give an introduction to some of the central
topics in computational learning theory, a field
which approaches the above question from a theoretical computer science
perspective. We will study
well-defined mathematical and computational
models of learning in which it is possible
to give precise and rigorous analyses of learning problems and
learning algorithms.
A big focus of the course will be the computational efficiency
of learning in these models. We'll develop computationally
efficient algorithms for certain learning problems, and will see
why efficient algorithms are not likely to exist for other
problems.
List of Topics
This is a preliminary list of core topics.
Other topics may be covered depending on how
the semester progresses. Most topics will take several lectures.

- Introduction: what is computational learning theory (and why)? Basic notions (learning models, concept classes).
- The online mistake-bound learning model. Online algorithms for simple learning problems (elimination, Perceptron, Winnow).
- General algorithms and lower bounds for online learning (halving algorithm, Weighted Majority algorithm, VC dimension).
- The Probably Approximately Correct (PAC) learning model: definition and examples. Online-to-PAC conversions.
- Occam's Razor: learning by finding a consistent hypothesis.
- The VC dimension and uniform convergence.
- Weak versus strong learning: boosting algorithms.
- Computational hardness results for efficient learning based on cryptography.
- PAC learning from noisy data. Malicious noise and random classification noise. Learning from Statistical Queries.
- Exact learning from membership and equivalence queries. Learning monotone DNF and learning finite automata.
Prerequisites
You should be familiar with the following topics:

- Important: Big-O notation and basic analysis of algorithms. Chapter 3 of "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein is sufficient background.
- Important: Basic discrete math and probability. The course Computer Science 3203 and the "Mathematical Background" section (Appendix VIII) of CLRS are good sources here.
- Important: You should be comfortable reading and writing proofs. You will encounter many proofs in class and will be doing proofs on problem sets. Chapter 0 of Sipser's book "Introduction to the Theory of Computation" is a good source here.
- Less important but helpful: Some basic knowledge of the theory of NP-completeness (e.g., Chapter 7 of Sipser or Chapter 34 of CLRS) and of elementary cryptography (e.g., pages 881–887 of CLRS) would be helpful, but this knowledge is not required; the course will be self-contained with respect to these topics.
Grading, Requirements and Academic Honesty
The requirements of the course are as follows:
- Biweekly problem sets (70% of grade). Your solutions must be typed and prepared as LaTeX documents (instructions will be given on how to use LaTeX). The (more or less) biweekly problem sets will be due by 8:40am (start of class) on each due date; you must submit a pdf file of your HW solutions on Courseworks by this time or your homework will be considered late (details about HW submission can be found on the homework page). You are allowed a total of six late days for the semester. Each late day is exactly 24 hours; late days cannot be subdivided: five minutes late counts as one late day. Note also that problem sets cannot be subdivided with respect to late days (i.e., you cannot use 1/6 of a late day on a six-problem problem set by handing in one problem one day late). Late days over and beyond the allotted six will be penalized at a rate of 10% of the initial grade per late day (with the same system in place: five minutes late for one problem counts as one late day for the entire set). Thus, for example, a problem set which is four days late after all six late days have been used would receive 60% of the points that it would have received had it been turned in on time. No problem sets will be accepted after two weeks have elapsed from the initial due date. For any exceptions, you must have your undergraduate advisor (for undergrads) or your graduate advisor (for graduate students) contact me. Some problems will be challenging; you are advised to start the problem sets early.
- Midterm evaluation (10% of grade). Details TBA, but the midterm will be on October 18.
- In-class evaluation (20% of grade). There will be an in-class evaluation on the last day of class, Thursday, December 6. It will be closed-book and closed-note, and will be cumulative: anything that was covered this semester is fair game. The best way to prepare for the evaluation is to go over course notes, readings, homework problems, and solution sets.
Homework policy:
You are encouraged to discuss the course material and the homework
problems
with each other in small groups (2–3 people), but you must list all
discussion partners on your problem set.
Discussion of homework problems may include
brainstorming and verbally discussing possible solution approaches, but
must not go as far as writing up solutions together;
each person MUST WRITE UP HIS/HER SOLUTIONS INDEPENDENTLY. You
may not collaborate with another student on writing up solutions or even look at another student's written solutions. If your homework writeup resembles that of another student in a way which suggests that you have violated the above policy, you may be suspected of academic dishonesty.
You may consult certain outside materials, specifically lecture notes and videos of other classes, any textbooks, and research papers. You may not consult any other materials, including solved homework problems for this or any other class. For all outside materials used, you must provide a detailed acknowledgement of the precise materials used. Whether or not you consult outside materials, you must always write up your solutions in your own words. If your homework writeup resembles any outside source in a way which suggests that you have violated the above policy, you may be suspected of academic dishonesty.
Violation of any portion of this policy will result
in a penalty to be assessed at the instructor's discretion. This may include
receiving a zero grade for the assignment in question AND a failing grade
for the whole course, even for the first infraction.
More generally, students are expected to adhere to the Academic Honesty policy of the
Computer Science Department; this policy can be found in full
here.
Please contact the instructor with any questions.
Regrade policy:
If you dispute the grade received for an assignment, you must
submit, in writing, a detailed and clearly stated
argument for what you believe is incorrect and why. This must be submitted
no later than the beginning of class 1 week after the assignment was returned.
(For example, if the assignment were returned to the class on Tuesday, your
regrade request would have to be submitted before the start of class
on the next Tuesday).
Requests for a regrade after this time will not be accepted.
A written response will be
provided within one week indicating your final score.
Requests for regrade of a specific problem may result in a regrade of
the entire assignment. This regrade and written response is final.
Keep in mind that a regrade request may result in the overall score
for an assignment being lowered.
Readings
"I therefore perused this Sheet with wonderful
Application, and in about a Day's time discovered that I could not
understand it." – Henry Fielding, A Journey from This
World to the Next
The textbook for this course is An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani (the "KV" of the chapter references in the schedule below).
This book is available for purchase online.
It's also available on reserve in the science and engineering
library, and is electronically available through the Columbia library
here (you will need to be signed in to access this).
Note that several topics which we'll cover
(particularly early in the semester) are not in the book.
A good source for material which we will cover in the first few weeks
of class are the following two papers:
- Avrim Blum's survey paper "Online Algorithms in Machine Learning". This has an overview of the online mistake-bound learning model, the Weighted Majority algorithm, the online decision list learning algorithm, and much more.
- There are countless online sources where you can find expositions of the Perceptron algorithm and proofs of convergence. The language tends to vary from presentation to presentation; one which hews fairly closely to the presentation we'll give in class can be found here.
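For concreteness, here is a minimal sketch of the Perceptron algorithm in the online mistake-bound setting (this is an illustrative sketch, not course-provided code, and the toy data set below is a hypothetical separable example):

```python
# Perceptron sketch: keep a weight vector w, predict sign(w . x),
# and on each mistake update w <- w + y*x.

def perceptron(examples, dim, max_passes=100):
    """Cycle through labeled examples until a full pass makes no mistakes."""
    w = [0.0] * dim
    for _ in range(max_passes):
        mistakes = 0
        for x, y in examples:  # each label y is +1 or -1
            dot = sum(wi * xi for wi, xi in zip(w, x))
            if (1 if dot > 0 else -1) != y:  # mistake: additive update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:  # consistent with all examples
            break
    return w

# Hypothetical linearly separable data: label is the sign of the
# first coordinate.
data = [((1.0, 0.5), 1), ((2.0, -0.3), 1),
        ((-1.5, 0.2), -1), ((-0.5, -1.0), -1)]
w = perceptron(data, dim=2)
```

The convergence proofs covered in class bound how many such mistake-driven updates can occur when the data is separable with some margin.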
A handy overview of Chernoff and Hoeffding bounds can be found
here (thanks to Clement Canonne for
preparing this).
For the unit on boosting, a good reference is Rob Schapire's survey paper
"The Boosting approach to machine learning: An overview".
The first three sections give a (very condensed) treatment of the
AdaBoost algorithm we'll cover in class, and later sections have good
overviews of more advanced material (this is a good starting point
for a project on boosting). Of course, the original paper on
AdaBoost, "A decision-theoretic generalization of on-line learning and
an application to boosting", is a good source as well.
Click here for a handy copy of the AdaBoost
algorithm.
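To make the reweighting step concrete, here is a minimal sketch of AdaBoost in Python. The "weak learner" here just searches a small fixed pool of hypotheses, and the pool and toy data are hypothetical illustrations, not course materials:

```python
import math

# AdaBoost sketch: each round, find a low-weighted-error weak hypothesis,
# give it a vote alpha, and reweight the examples to emphasize mistakes.

def adaboost(examples, weak_hyps, rounds):
    """Boost a pool of +1/-1 hypotheses over labeled examples."""
    m = len(examples)
    dist = [1.0 / m] * m          # D_1: uniform distribution over examples
    ensemble = []                 # list of (alpha_t, h_t) pairs
    for _ in range(rounds):
        # "weak learner": pick the pool member with lowest weighted error
        errs = [(sum(d for d, (x, y) in zip(dist, examples) if h(x) != y), h)
                for h in weak_hyps]
        eps, h = min(errs, key=lambda pair: pair[0])
        if eps == 0:              # a perfect hypothesis: just use it
            return h
        if eps >= 0.5:            # no remaining hypothesis has an edge
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, h))
        # reweight so misclassified examples get more mass, then normalize
        dist = [d * math.exp(-alpha * y * h(x))
                for d, (x, y) in zip(dist, examples)]
        z = sum(dist)
        dist = [d / z for d in dist]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

# Hypothetical weak hypotheses over {0,1}^2: one stump per coordinate,
# plus the constant +1 hypothesis; the target is the OR of the coordinates.
h1 = lambda x: 1 if x[0] == 1 else -1
h2 = lambda x: 1 if x[1] == 1 else -1
h3 = lambda x: 1
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
final = adaboost(data, [h1, h2, h3], rounds=3)
```

No single hypothesis in the pool is consistent with all four examples, but the weighted-majority combination produced after three rounds is, illustrating how boosting turns weak hypotheses into a strong one.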
Advice
"Realizing the gravity of the situation, we immediately set
about taking notes." – Witold Gombrowicz, Philidor's Child Within
Lectures will often be cumulative, in the sense that most lectures will
build on concepts and results that were covered in previous lectures; so if
you fall significantly behind it can be difficult to catch up.
You are advised to take careful notes during lectures and to review
those notes between lectures.
Problem Sets
Click here to get to the homework page.
Schedule of Topics
Here is an anticipated schedule of topics. Note that the ordering
of some topics may change, and we may spend more or less than one lecture per
topic.
- Introduction to machine learning theory. Learning models and learning problems.
- Online mistake-bound learning. Algorithms for simple concept classes. Attribute-efficient learning with the Winnow algorithm.
- Winnow algorithm continued. Perceptron algorithm. General bounds for online mistake-bound learning. The Halving algorithm, the Standard Optimal Algorithm.
- Best Experts and Weighted Majority.
- The Probably Approximately Correct (PAC) learning model. PAC learning algorithms for simple concept classes.
- More PAC learning algorithms. Conversions from online learning to PAC learning. (KV chapter 1)
- Occam's Razor: learning by finding consistent hypotheses. Applications (KV chapter 2).
- Computational hardness results for finding consistent hypotheses.
- Vapnik–Chervonenkis dimension. Upper and lower bounds on sample complexity (KV chapter 3).
- VC dimension and sample complexity continued.
- VC dimension and sample complexity continued.
- Weak learning, strong learning, and Boosting (KV chapter 4).
- Boosting continued.
- Boosting continued.
- Learning in the presence of noise. Classification noise, malicious noise.
- Statistical Query learning (KV chapter 5).
- Learning with noise continued.
- Learning with noise continued.
- Cryptographic limitations on efficient learning (KV chapter 6).
- Cryptographic limitations on learning continued.
- Cryptographic limitations and reductions in PAC learning (KV chapters 6, 7).
- The model of exact learning from membership and equivalence queries. Learning monotone DNF formulas.
- Learning deterministic finite automata.
- Learning deterministic finite automata continued.
Link to Courseworks/Piazza
Click
here
for Courseworks (from which you can access Piazza, the discussion forum for the class).
Notes from Lectures
You are strongly advised to take your own notes during lecture, and to cross-check them against these (unedited) notes of what was produced during class.
Warning: the notes below were generated in real time and have not been edited. They may contain typos or other errors. They may also contain aesthetically displeasing color combinations.
Lecture #2, Thurs Sept 6 2018
Lecture #3, (scheduled for) Tues Sept 11 2018
Lecture #4, (scheduled for) Thurs Sept 13 2018
(The videos for Lectures 3 and 4 are available to anyone at Columbia at
this link.)
Lecture #5, Tues Sept 18 2018
Lecture #6, Thurs Sept 20 2018
Lecture #7, Tues Sept 25 2018
Lecture #8, Thurs Sept 27 2018
Lecture #9, Tues Oct 2 2018
Lecture #10, Thurs Oct 4 2018
Lecture #11, (scheduled for) Tues Oct 9 2018
Lecture #12, Thurs Oct 11 2018
Lecture #13, Tues Oct 16 2018
Schedule of Deadlines
Click
here
for a handy overview of all course deadlines.