Columbia Theory Reading Group, Fall 2006

Abstracts for the talks are given below. See the main page for schedule and location information.


Monday, September 11:
Towards Models for Backtracking and Dynamic Programming
Josh Buresh-Oppenheim
Simon Fraser University

Since most algorithms that people use can intuitively be classified into broad paradigms such as greedy, dynamic programming, linear and semidefinite programming, local search, etc., it is natural to ask what problems can be computed by these paradigms. This question can be seen as an alternative to asking what problems can be computed by, say, all poly-time algorithms, in that the restriction on the algorithms is conceptual rather than complexity-theoretic. Of course, to ask a question about an algorithmic paradigm, you first have to formally define the paradigm. We offer one very natural model, pBT, which captures many algorithms generally considered to be dynamic programming or backtracking. This model is an extension of the Priority Algorithm model of Borodin, Nielsen and Rackoff, which captures greedy algorithms. We demonstrate many upper and lower bounds in this model for problems such as interval scheduling, knapsack and SAT. We then define a stronger model, pBP, and show that it seems to capture some aspects of dynamic programming that pBT does not, but that it still fails to solve certain tractable problems that do not intuitively admit dynamic programming algorithms.
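
As a concrete illustration of the kind of algorithm the priority model captures (an example of mine, not one from the talk): the classical earliest-finish-time greedy for unweighted interval scheduling fixes one ordering of the input items and makes a single irrevocable accept/reject decision per item, which is exactly the one-pass structure of a priority algorithm.

    # Earliest-finish-time greedy for interval scheduling: a classic
    # priority-style algorithm (one ordering pass, one irrevocable
    # accept/reject decision per item). Illustrative sketch only.
    def interval_scheduling(intervals):
        """intervals: list of (start, finish) pairs. Returns a
        maximum-size set of pairwise disjoint intervals."""
        accepted = []
        last_finish = float("-inf")
        # Priority step: fix one ordering of the items (by finish time).
        for start, finish in sorted(intervals, key=lambda iv: iv[1]):
            # Irrevocable decision: accept iff compatible with choices so far.
            if start >= last_finish:
                accepted.append((start, finish))
                last_finish = finish
        return accepted

    print(interval_scheduling([(1, 4), (3, 5), (0, 6), (5, 7), (6, 8)]))
    # -> [(1, 4), (5, 7)], one optimal solution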

This is joint work with Mikhail Alekhnovich, Allan Borodin, Sashka Davis, Russell Impagliazzo, Avner Magen and Toni Pitassi.


Monday, September 18:
Colored Range Searching via Matrix Multiplication
Elad Verbin
Tel Aviv University

In a Colored Range Searching Problem, one is given a set of n points, each point colored in one of c colors, and a set of n hypercubes, each colored in one of c colors. The goal is to report all pairs of colors (c_1,c_2) such that a point of color c_1 is contained in a hypercube of color c_2. The same question can be asked for halfspaces, triangles, etc.

In a Sparse Matrix Multiplication Problem, one is given a matrix A with c rows and at most n non-zero entries (the number of columns is not significant). The goal is to compute A*A^T, i.e. the product of A and its transpose.

It is relatively easy to see that the Colored Range Searching Problem is harder than the Sparse Matrix Multiplication Problem, via a simple reduction. In this talk we prove the other direction: we give an algorithm that solves Colored Range Searching using Sparse Matrix Multiplication. Furthermore, we give an efficient algorithm for Sparse Matrix Multiplication, and outline some generalizations and open problems.
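
To make the connection concrete, here is a naive Python sketch (mine, not the talk's algorithm) of the quantity being computed: for a 0/1 matrix A, entry (i, j) of A*A^T counts the columns in which rows i and j are both nonzero, so its nonzero pattern is exactly a list of interacting row pairs, just as colored range searching reports interacting color pairs.

    from collections import defaultdict

    def sparse_self_product(rows):
        """rows: dict mapping row index -> set of nonzero column indices
        of a 0/1 matrix A. Returns {(i, j): (A A^T)[i][j]} over the
        nonzero entries of the product."""
        cols = defaultdict(list)              # column -> rows nonzero there
        for i, nonzeros in rows.items():
            for j in nonzeros:
                cols[j].append(i)
        product = defaultdict(int)
        for rows_in_col in cols.values():     # each shared column...
            for i in rows_in_col:             # ...contributes to every
                for j in rows_in_col:         # pair of its rows
                    product[(i, j)] += 1
        return dict(product)

    A = {0: {0, 1}, 1: {1, 2}, 2: {3}}
    print(sparse_self_product(A))
    # {(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 2, (2, 2): 1}

The inner loops cost quadratic time on a dense column; the point of an efficient sparse matrix multiplication algorithm, as in the talk, is to do substantially better.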

No prior knowledge of computational geometry is assumed.


Monday, September 25:
Biology as Computation
Leslie Valiant
Harvard University

We argue that computational models have an essential role in uncovering the principles behind a variety of biological phenomena. In particular, we consider recent results relating to the following three questions: How can brains, given their known resource constraints such as the sparsity of connections and slow elements, do any significant information processing at all? How can evolution, in only a few billion years, have produced mechanisms as complex as those we observe? How can cognitive systems manipulate large amounts of uncertain knowledge and get usefully reliable results? We show that each of these problems can be formulated as a quantitative question for a computational model, and argue that solutions to these formulations provide some understanding of these biological phenomena.


Monday, October 16:
Logarithmic Regret Algorithms for Online Convex Optimization
Amit Agarwal
Princeton University

We introduce a new algorithm and a new analysis technique that is applicable to a variety of online optimization scenarios, including regret minimization for Lipschitz regret functions studied by Merhav-Feder, Cover's universal portfolio management, Zinkevich's online convex optimization and online utility maximization of Kalai-Vempala. The algorithm extends a method proposed by Hannan in the 1950s, called "Follow the Leader", and shows a surprising connection to the Newton method for offline optimization. One application of the new algorithm is universal portfolio management, for which our algorithm combines optimal regret with computational efficiency. For more general settings, our algorithm is the first to achieve optimal regret.
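
To give a feel for the Newton connection, here is my unconstrained toy sketch of the style of update involved (the algorithm from the associated papers, known as Online Newton Step, additionally projects each iterate onto the feasible set in the norm induced by A_t; that projection is omitted here, and the losses and parameters below are illustrative choices):

    import numpy as np

    def ons_iterates(loss_grads, dim, gamma=0.5, eps=1.0):
        """loss_grads: list of callables x -> gradient of f_t at x.
        Returns the iterates x_1, ..., x_{T+1}."""
        A = eps * np.eye(dim)            # A_t = eps*I + sum_s g_s g_s^T
        x = np.zeros(dim)
        iterates = [x.copy()]
        for grad in loss_grads:
            g = grad(x)
            A += np.outer(g, g)          # fold gradient into second moments
            # Newton-like step: precondition the gradient by A_t^{-1}.
            x = x - (1.0 / gamma) * np.linalg.solve(A, g)
            iterates.append(x.copy())
        return iterates

    # Toy run on quadratic losses f_t(x) = (a_t . x - b_t)^2.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(20, 3))
    b = rng.normal(size=20)
    losses = [lambda x, a=a[t], b=b[t]: 2 * (a @ x - b) * a for t in range(20)]
    xs = ons_iterates(losses, dim=3)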

The talk is based on joint papers with Elad Hazan, Adam Kalai, Satyen Kale and Rob Schapire.


Thursday, October 26:
Online Scheduling with Known Arrival Times
Chris Potts
University of Southampton

We consider an online scheduling environment where decisions are made without knowledge of the data of jobs that may arrive later. However, additional jobs can only arrive at known future times. This environment interpolates between the classical offline and online scheduling environments, and approaches the classical online environment when there are many equally spaced potential job arrival times. The objective is to minimize the sum of weighted completion times, a widely used measure of work-in-process inventory cost and customer service.

For a non-preemptive single machine environment, we show that a lower bound on the competitive ratio of any online algorithm is the solution of a mathematical program. This lower bound is between (1+sqrt{5})/2 and 2, with the exact value depending on the potential job arrival times. We also provide a "best possible" online scheduling algorithm, and show that its competitive ratio matches this lower bound. A key element in establishing this result is a system of equations and inequalities that characterizes the schedule delivered by our online algorithm for an arbitrary problem instance; we use this system to construct a feasible solution to the mathematical program that establishes the lower bound. This proof technique has potential application to other online scheduling problems. Finally, we analyze two practically motivated special cases in which the potential job arrival times have special structure.
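
For context on the objective (an illustrative aside of mine, not material from the talk): in the purely offline single-machine case, the sum of weighted completion times is minimized by Smith's rule, which sequences jobs in nondecreasing order of processing time divided by weight; the difficulty studied in the talk comes from not knowing which of the potential arrivals will materialize.

    # Offline baseline for the talk's objective: on one machine, the sum
    # of weighted completion times sum_j w_j C_j is minimized by Smith's
    # rule (sequence by p_j / w_j). Sketch of the objective only; not
    # the talk's online algorithm.
    def weighted_completion_time(schedule):
        """schedule: list of (processing_time, weight) in execution order."""
        time, total = 0, 0
        for p, w in schedule:
            time += p                  # completion time C_j of this job
            total += w * time
        return total

    jobs = [(3, 1), (1, 4), (2, 2)]                  # (p_j, w_j)
    smith = sorted(jobs, key=lambda j: j[0] / j[1])  # p_j / w_j ascending
    print(smith, weighted_completion_time(smith))
    # [(1, 4), (2, 2), (3, 1)] 16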


Tuesday, October 31:
Qubit Complexity of Continuous Problems
Joseph Traub
Columbia University

For the foreseeable future the number of qubits will be a crucial computational resource on a quantum computer. We show how to lower-bound the qubit complexity using the classical query complexity.

We use this result to present a simple problem which cannot be solved on a quantum computer in the standard quantum setting with deterministic queries but can be solved on a classical computer using randomized queries (Monte Carlo). This suggests introducing a quantum setting with randomized queries.

We apply this setting to high dimensional integration and to path integration. In particular, there is an exponential improvement in the qubit complexity of path integration using the quantum setting with randomized queries.

We end by discussing future directions and where to learn more.


Friday, November 10:
New Market Models and Algorithms
Vijay Vazirani
Georgia Institute of Technology

The notion of a "market" has undergone a paradigm shift with the Internet: totally new and highly successful markets have been defined and launched by companies such as Google, Yahoo!, Amazon, MSN and eBay. Another major change is the availability of massive computational power for running these markets in a centralized or distributed manner.

In view of these new realities, the study of market equilibria, an important, though essentially non-algorithmic, theory within Mathematical Economics, needs to be revived and rejuvenated with new models, ideas, and an inherently algorithmic approach.

In this talk, I will give a feel for the exciting work under way on this front and present new results on resource allocation markets. Interestingly enough, this work has also contributed handsomely to the theory of algorithms itself. In particular, the highly successful primal-dual schema from exact and approximation algorithms, which had so far been used for combinatorially solving special classes of linear programs, has been extended to solving nonlinear convex programs.
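
For a flavor of the nonlinear convex programs in question (my gloss; the EG papers linked below supply the details): the Eisenberg-Gale convex program captures equilibrium allocations in a Fisher market with linear utilities, where buyer i arrives with budget m_i, each good j is available in unit supply, and u_{ij} is buyer i's utility per unit of good j:

    \max \;\; \sum_i m_i \log u_i
    \text{subject to} \quad u_i = \sum_j u_{ij} x_{ij} \;\; \forall i, \qquad
    \sum_i x_{ij} \le 1 \;\; \forall j, \qquad
    x_{ij} \ge 0 \;\; \forall i, j.

Its optimal solutions are exactly the market equilibrium allocations, which is what makes a combinatorial primal-dual algorithm for such a program a statement about markets and not just about convex optimization.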

This talk is meant for a general audience. It is based on the following three papers:

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/adwords.pdf

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/EG.pdf

http://www-static.cc.gatech.edu/fac/Vijay.Vazirani/EG2.pdf


Thursday, November 16:
On Computational Hardness of Agnostic Learning
Vitaly Feldman
Harvard University

The agnostic learning framework of Kearns, Schapire and Sellie is an extension of Valiant's PAC learning model in which the examples given to the learning algorithm are labelled by an unknown and unrestricted (and hence adversarial) function. An agnostic learning algorithm for a class of functions $C$ is required to produce a hypothesis whose error (with respect to the distribution of examples) is "close" to the best achievable by a function from $C$. A large number of questions regarding the learnability of even the simplest function classes in this natural model have remained open since the model's introduction in 1992.

In this talk I will show that agnostic learning of parities with respect to the uniform distribution reduces to learning of parities with random classification noise. Learning with random noise is a substantially more tractable model, and, in particular, using the noise-tolerant parity learning algorithm of Blum, Kalai and Wasserman, we obtain the first non-trivial algorithm for agnostic learning of parities with respect to the uniform distribution. The reduction also shows that learning of juntas and DNF expressions with respect to the uniform distribution can be reduced to learning parities with random classification noise.
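
To fix the model the reduction targets (my sketch of the standard definition, not code from the talk): in learning parities with random classification noise, each example is a uniformly random x in {0,1}^n labeled by the parity of x on a hidden index set S, with the label flipped independently with probability eta < 1/2. The set S and the parameters below are illustrative.

    import random

    # Example oracle for "parities with random classification noise":
    # uniform x in {0,1}^n, label = parity of x on a hidden set S,
    # flipped with probability eta.
    def noisy_parity_example(n, S, eta, rng=random):
        x = [rng.randrange(2) for _ in range(n)]
        label = sum(x[i] for i in S) % 2      # the parity chi_S(x)
        if rng.random() < eta:                # random classification noise
            label ^= 1
        return x, label

    S = {0, 3}                                # hidden parity, unknown to learner
    sample = [noisy_parity_example(8, S, eta=0.1) for _ in range(5)]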

The second problem we'll discuss, formulated by Avrim Blum, is weak agnostic learning of monomials by a proper learning algorithm (that is, an algorithm that produces a monomial as its hypothesis). We resolve this open problem by showing that it is NP-hard. In other words, we prove that finding a monomial that is consistent with a $1/2+\epsilon$ fraction of examples is NP-hard even when there exists a monomial consistent with a $1-\epsilon$ fraction of examples (for any constant $\epsilon$). The result can also be interpreted as an optimal hardness-of-approximation result for maximizing agreements with a monomial, and it substantially improves on previous hardness-of-approximation results for this problem (due to Ben-David et al. and to Bshouty and Burroughs).

The talk will be largely self-contained, and both results will also be interpreted outside of the learning context. Parts of this talk are based on joint work with Parikshit Gopalan, Subhash Khot and Ashok Ponnuswami.


Monday, November 27:
Fully Polynomial Time Approximation Schemes for Convex and for Monotone Stochastic Dynamic Programming
Nir Halman
MIT

We develop a framework for obtaining a (deterministic) Fully Polynomial Time Approximation Scheme (FPTAS) for stochastic univariate dynamic programming problems with either a convex or a monotone single-period cost function.
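
As a rough illustration of the kind of machinery such frameworks rest on (my sketch of the standard K-approximation compression idea; the actual framework handles full stochastic DP recursions, which this toy does not): a nondecreasing positive function on a huge integer interval can be stored at only logarithmically many breakpoints while preserving every value up to a factor K.

    import bisect

    def compress_monotone(f, lo, hi, K):
        """Breakpoints [(x, f(x)), ...] such that stepwise lookup
        K-approximates a nondecreasing positive f on lo..hi (K > 1)."""
        points = [(lo, f(lo))]
        x = lo
        while x < hi:
            # Largest y in [x, hi] with f(y) <= K * f(x), by binary
            # search (valid because f is nondecreasing).
            a, b = x, hi
            while a < b:
                mid = (a + b + 1) // 2
                if f(mid) <= K * f(x):
                    a = mid
                else:
                    b = mid - 1
            if a == hi:
                break
            x = a + 1                      # value has grown by > K here
            points.append((x, f(x)))
        return points

    def lookup(points, q):
        """Evaluate the stepwise K-approximation at q."""
        i = bisect.bisect_right([x for x, _ in points], q) - 1
        return points[i][1]

    f = lambda x: x * x + 1
    pts = compress_monotone(f, 0, 10**6, K=1.1)
    print(len(pts), lookup(pts, 123456) / f(123456))
    # a few hundred breakpoints for a domain of a million points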

Using our framework, we give the first FPTAS for several NP-hard problems in various fields of research.

Joint work with Diego Klabjan, Chung-Lun Li, James Orlin and David Simchi-Levi.

