Columbia Theory Reading Group, Spring 2004

Abstracts for the talks are given below. See the main page for schedule and location information.


Thursday January 29:
On Conflict-Free Colorings
Shakhar Smorodinsky

Motivated by frequency assignment problems in cellular networks, we study some coloring problems of the following flavor:

What is the minimum number $f(n)$ such that one can assign colors to any set $P$ of $n$ points in the plane, using a total of at most $f(n)$ colors, so that the coloring has the following property (which we refer to as a Conflict-Free coloring): for any disc $d$ in the plane with nonempty intersection with $P$, there is at least one point of $P$ inside $d$ whose color is unique among the points inside $d$.

We show that $f(n) = O(\log n)$. This is asymptotically tight in the worst case. We extend this result to many other classes of ranges (other than discs), and also to the dual type of problem, where we want to color a given set of ranges so that for each point $p$ there is a range with a unique color among the ranges that contain $p$.
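
To illustrate how such colorings achieve O(log n) colors, consider the simpler one-dimensional analogue in which the points lie on a line and the ranges are intervals; the following Python sketch (an illustration only, not the paper's construction for discs) colors the median point at each recursion depth and verifies the conflict-free property by brute force:

    from collections import Counter

    def cf_color_intervals(points):
        """Color sorted points on a line: the median gets the current
        recursion depth as its color, then recurse on both halves."""
        colors = {}
        def recurse(lo, hi, depth):
            if lo > hi:
                return
            mid = (lo + hi) // 2
            colors[points[mid]] = depth
            recurse(lo, mid - 1, depth + 1)
            recurse(mid + 1, hi, depth + 1)
        recurse(0, len(points) - 1, 0)
        return colors

    def is_conflict_free(points, colors):
        """Brute force: every contiguous run of points (the possible
        intersections with an interval) must have a unique color."""
        for i in range(len(points)):
            for j in range(i, len(points)):
                counts = Counter(colors[p] for p in points[i:j + 1])
                if 1 not in counts.values():
                    return False
        return True

    pts = sorted([3, 7, 11, 19, 23, 31, 40, 52])
    assert is_conflict_free(pts, cf_color_intervals(pts))  # ~log2(n)+1 colors

The check only needs contiguous runs, since an interval's intersection with the point set is always such a run; the point of minimal depth in any run is unique, because two points of equal depth are separated by a point of smaller depth.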

Some parts of this talk are joint work with Guy Even, Zvika Lotker and Dana Ron (Tel-Aviv University); other parts are joint work with Sariel Har-Peled (UIUC).


Thursday February 12:
Completeness in Two-Party Secure Computation - A Computational View
Danny Harnik

Secure Function Evaluation (SFE) of a two-variable function f is a protocol that allows two parties with inputs x and y to evaluate f(x,y) in a manner where neither party learns ``more than is necessary''. A rich body of work deals with the study of completeness for secure two-party computation. A function f is complete for SFE if a protocol for securely evaluating f allows the secure evaluation of all (efficiently computable) functions. The questions investigated are which functions are complete for SFE and which functions have SFE protocols unconditionally. The previous study of these questions was conducted mainly from an information-theoretic point of view and provided strong answers in the form of combinatorial properties. However, we show that there are major differences between the information-theoretic and computational settings. In particular, we exhibit functions that the combinatorial criteria classify as having unconditional SFE, but that are actually complete in the computational setting.

We initiate the fully computational study of these fundamental questions. Somewhat surprisingly, we manage to provide an almost full characterization of the complete functions in this model as well. More precisely, we present a computational criterion for a function f to be complete for the asymmetric case. Furthermore, we show a matching criterion for f to have a simple SFE (based on no additional assumptions). These criteria are close to being the negation of one another, and thus we essentially characterize all ``nice'' functions as either complete or having SFE unconditionally.

Joint work with Moni Naor, Omer Reingold and Alon Rosen.


February 23:
A Quantitative Approach to Reductions in Secure Computation
Amos Beimel

Secure computation is one of the most fundamental cryptographic tasks. It is known that all functions can be computed securely in the information-theoretic setting, given access to a black box for some complete function such as AND. However, without such a black box, not all functions can be securely computed. This gives rise to two types of functions: those that can be computed without a black box (``easy'') and those that cannot (``hard''). Beyond this, however, no further distinction among the hard functions is made.

In this talk, I'll present a quantitative approach, associating with each function $f$ the minimal number of calls to the black box required for securely computing $f$. This approach leads to a better understanding of the inherent complexity of securely computing a given function and may yield more efficient protocols.

In this talk I'll consider the two-party, honest-but-curious, information-theoretic case. For this setting, I'll provide a complete characterization for deterministic protocols, and explore the hierarchy for randomized protocols as well, giving upper and lower bounds, and comparing it to the deterministic hierarchy.

Joint work with Tal Malkin.


March 1:
The Hardness of Metric Labeling
Julia Chuzhoy

The Metric Labeling problem is an elegant and powerful mathematical model capturing a wide range of classification problems. The input to the problem consists of a set of labels and a weighted graph. Additionally, a metric distance function on the labels is defined, and for each label and each vertex, an assignment cost is given. The goal is to find a minimum-cost assignment of the vertices to the labels. The cost of the solution consists of two parts: the assignment costs of the vertices and the separation costs of the edges (each edge pays its weight times the distance between the two labels to which its endpoints are assigned).
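
As a concrete illustration of this objective (with made-up instance data), the following Python sketch evaluates the cost of a labeling and finds an optimal labeling of a tiny instance by exhaustive search:

    from itertools import product

    # Tiny illustrative Metric Labeling instance (all data made up).
    labels = [0, 1, 2]
    vertices = [0, 1, 2]
    edges = [(0, 1, 2.0), (1, 2, 1.0)]       # (u, v, weight)
    assign_cost = [[1.0, 3.0, 2.0],          # assign_cost[v][l]
                   [2.0, 0.5, 2.0],
                   [3.0, 2.0, 0.2]]
    dist = lambda a, b: abs(a - b)           # a metric on the labels

    def labeling_cost(assignment):
        """Assignment costs plus separation costs, as defined above."""
        cost = sum(assign_cost[v][assignment[v]] for v in vertices)
        cost += sum(w * dist(assignment[u], assignment[v])
                    for u, v, w in edges)
        return cost

    best = min(product(labels, repeat=len(vertices)), key=labeling_cost)
    print(best, labeling_cost(best))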

Due to its simple structure and the variety of its applications, the problem and its special cases (with various distance functions on the labels) have received much attention recently. Metric Labeling has a known logarithmic approximation, and it has been an open question for several years whether a constant-factor approximation exists. We refute this possibility and show a $\sqrt{\log n}$ hardness of approximation.

Joint work with Seffi Naor.


March 8:
Extracting Randomness from Weak Random Sources
Ariel Elbaz

Randomized algorithms are widely used in computer science. Usually, randomness is assumed to be sampled from uniform distributions, which provide truly random, independent and identically distributed (i.i.d.) bits. The problem of extracting bits that are (close to) truly random from distributions that are statistically far from uniform dates back to 1951 [vN51]. Extractors, which extract randomness from a weak random source using an additional (short) truly random seed, have received considerable attention in recent years. There are explicit constructions that are optimal (up to constant factors) in all parameters.
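
(For reference, the standard definition: a seeded extractor is a function $Ext: \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m$ such that for every source $X$ over $\{0,1\}^n$ with min-entropy at least $k$, the output $Ext(X, U_d)$ is $\epsilon$-close in statistical distance to uniform, where $U_d$ is a truly random $d$-bit seed.)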

For extracting randomness without an additional random seed, known constructions are still far from optimal. In this talk we consider the following problem: given two independent weak random sources X, Y over $\{0,1\}^n$, with min-entropies $b_X, b_Y$ (respectively), we want to extract from X, Y many bits that are close to truly random, where the number and quality of the extracted bits depend on $b_X, b_Y$. There exist (non-explicit) functions that can extract almost $b_X + b_Y$ random bits, even for very low min-entropies $b_X, b_Y$. In contrast, all known explicit results require $b_X + b_Y > n + \mathrm{polylog}(n)$. The previous explicit construction, due to Vazirani [Vaz87], allowed the extraction of $k = \Omega(b_X + b_Y - n)$ bits that are statistically close to uniform.

We use an idea of Dodis and Oliveira [DO03], showing that the $k$ extracted bits look random even to someone who sees Y. This allows us to use these bits as a random seed for a seeded extractor applied to Y, extracting a total of $b_Y + \Omega(b_X + b_Y - n)$ random bits.
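
Structurally, the composition looks as follows. This Python sketch is schematic only: both components are hypothetical stand-ins (a real construction needs a Vazirani-style two-source step and a genuine seeded extractor; a hash function is neither, and merely marks where each ingredient goes):

    import hashlib, os

    def two_source_step(x: bytes, y: bytes, k: int) -> bytes:
        """Stand-in: extract k bits from (x, y) that stay close to
        uniform even given y (the [DO03]-style guarantee)."""
        return hashlib.sha256(x + y).digest()[:k // 8]

    def seeded_extract(y: bytes, seed: bytes, m: int) -> bytes:
        """Stand-in: a seeded extractor applied to y."""
        return hashlib.sha256(seed + y).digest()[:m // 8]

    def two_source_extractor(x: bytes, y: bytes, k: int, m: int) -> bytes:
        seed = two_source_step(x, y, k)           # ~ Omega(b_X+b_Y-n) bits
        return seed + seeded_extract(y, seed, m)  # ~ b_Y more bits from y

    out = two_source_extractor(os.urandom(32), os.urandom(32), k=64, m=128)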

The talk will be self-contained; no prior knowledge of randomness extraction is assumed.

Joint work with Ran Raz.


March 22:
Automatic Synthesis and Simulation of Quantum Circuits
Igor Markov

The evolution of a given quantum-mechanical system can be captured by a quantum circuit. While a theorist may be satisfied with an existence proof of this result, a quantum computer engineer would want gate counts and practical algorithms for circuit synthesis. Recent results include sharp numerical bounds for two-qubit circuits and asymptotically sharp bounds for n-qubit circuits. All upper bounds are constructive and achieved by practical algorithms. Ongoing work deals with measurement-aware synthesis and quantum don't-cares.

Quantum circuits are useful for simulating physics on quantum or classical computers. In the latter case, the quantum circuits themselves must be simulated with classical computation, but this approach can be surprisingly successful. One such simulation technique uses graph-based compression of matrices that automatically optimizes away redundant matrix elements. In the absence of noise, this facilitates density-matrix simulations at the cost of state-vector simulations, and small amounts of noise often do not affect numerical efficiency. We are currently developing a reusable simulator with a Matlab-like interface that automatically selects the most appropriate algorithms and data structures on the fly.
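
For contrast, here is a minimal dense state-vector simulator (plain Python/numpy, not the graph-compressed representation described above); the exponentially large vectors it manipulates explicitly are exactly what the compression technique avoids storing redundantly:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    X = np.array([[0, 1], [1, 0]])                 # NOT gate

    def apply_gate(state, gate, target, n):
        """Apply a single-qubit gate to the target qubit of an
        n-qubit state vector of length 2**n."""
        state = state.reshape([2] * n)
        state = np.moveaxis(state, target, 0)
        state = np.tensordot(gate, state, axes=([1], [0]))
        state = np.moveaxis(state, 0, target)
        return state.reshape(-1)

    n = 3
    state = np.zeros(2 ** n); state[0] = 1.0       # |000>
    state = apply_gate(state, H, 0, n)             # superpose first qubit
    print(np.abs(state) ** 2)                      # measurement probabilities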


March 29:
Learning Mixtures of Product Distributions
Ryan O'Donnell

We present a polynomial time algorithm for learning a mixture of any constant number of unknown product distributions over {0,1}^n. Previous polynomial time algorithms were only able to learn mixtures of two product distributions.

Via a reduction from a notoriously hard problem in computational learning theory, we also give evidence that no polynomial time algorithm can learn mixtures of a superconstant number of product distributions over {0,1}^n. This reduction suggests that our results may be essentially the best that can be achieved by any polynomial time algorithm for this problem.

In fact, our technique is quite general and can efficiently learn many classes of mixtures of O(1) product distributions over R^n, including mixtures of axis-aligned Gaussians and mixtures of product distributions over sets of finite size.
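
To make the learning target concrete, the following Python sketch (all parameters made up) draws samples from a mixture of two product distributions over {0,1}^n; a learner sees only the sample matrix, never the hidden component labels:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10
    weights = [0.3, 0.7]                      # mixing weights
    p = np.array([[0.9] * n, [0.1] * n])      # p[i][j] = Pr[bit j = 1 | comp i]

    def sample(m):
        comps = rng.choice(len(weights), size=m, p=weights)  # hidden labels
        return (rng.random((m, n)) < p[comps]).astype(int)

    X = sample(1000)   # the learner's input: 1000 samples from the mixture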

Joint work with Jon Feldman and Rocco Servedio of Columbia University.


April 6:
A Fast Quantum Algorithm for Computing the Unit Group of a Number Field
Sean Hallgren

Computing the unit group of a number field is one of the main tasks in computational algebraic number theory. Factoring integers reduces to a special case of computing the unit group, but a reduction in the other direction is not known and appears more difficult. We give a polynomial-time quantum algorithm for computing the unit group when the number field has constant degree.
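
For intuition in the simplest nontrivial case, degree 2: the units of the order $Z[\sqrt{d}]$ are generated (up to sign) by a fundamental unit $x + y\sqrt{d}$ with $x^2 - dy^2 = \pm 1$, which the classical continued-fraction (Pell's equation) method below finds. Since the solution can have exponentially many digits in the length of $d$, this approach does not scale, hinting at why the problem is hard classically. A minimal sketch:

    from math import isqrt

    def fundamental_solution(d):
        """Smallest (x, y) with x^2 - d*y^2 = +-1, via the continued
        fraction expansion of sqrt(d); d must not be a perfect square."""
        a0 = isqrt(d)
        m, q, a = 0, 1, a0
        x_prev, x = 1, a0         # convergent numerators
        y_prev, y = 0, 1          # convergent denominators
        while x * x - d * y * y not in (1, -1):
            m = a * q - m
            q = (d - m * m) // q
            a = (a0 + m) // q
            x_prev, x = x, a * x + x_prev
            y_prev, y = y, a * y + y_prev
        return x, y

    print(fundamental_solution(61))   # (29718, 3805): 29718^2 - 61*3805^2 = -1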


April 19:
Exponential Algorithmic Speedup by Quantum Walk
Edward Farhi

I will give an introduction to continuous time quantum walks. I will also show how a quantum walk algorithm can solve a particular oracle problem with a polynomial number of queries whereas no classical algorithm can solve this problem with fewer than an exponential number of queries.
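
For a minimal illustration of the model (not the oracle problem from the talk), a continuous-time quantum walk on a graph evolves a state by the unitary $e^{-iAt}$, where $A$ is the adjacency matrix; the sketch below runs such a walk on a cycle:

    import numpy as np
    from scipy.linalg import expm

    # Continuous-time quantum walk on a cycle of N vertices:
    # |psi(t)> = exp(-i*A*t) |psi(0)>, with the adjacency matrix A
    # playing the role of the Hamiltonian.
    N, t = 16, 3.0
    A = np.zeros((N, N))
    for v in range(N):
        A[v, (v + 1) % N] = A[(v + 1) % N, v] = 1

    psi0 = np.zeros(N, dtype=complex); psi0[0] = 1.0   # start at vertex 0
    psi_t = expm(-1j * t * A) @ psi0
    print(np.abs(psi_t) ** 2)   # distribution over vertices at time t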


April 26:
Concurrent Flows in O*(1/epsilon) Time
Garud Iyengar

We adapt a method proposed by Nesterov to design an algorithm that computes eps-optimal solutions to fractional packing problems by solving O*(\sqrt{Kn}/eps) separable convex quadratic programs, where K is the maximum number of non-zeros per row and n is the number of variables. We also show that the quadratic program can be approximated to any degree of accuracy by an appropriately defined piecewise-linear program. For the special case of the maximum concurrent flow problem on a graph G = (V,E) with rational capacities and demands, we obtain an algorithm that computes an eps-optimal flow by solving O*(1/eps log(1/eps)) shortest path problems. We also show that the complexity of computing a maximum multicommodity flow is O*(1/eps log^2(1/eps)). In contrast, previous algorithms required 1/eps^2 iterations.
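
(For reference, a fractional packing problem in the notation above is the linear program $\max\{c^T x : Ax \le b,\ x \ge 0\}$, where $A$ is a nonnegative matrix with at most K non-zeros per row and $x$ ranges over $R^n$; an eps-optimal solution is a feasible $x$ whose objective value is at least $(1 - eps)$ times the optimum.)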


May 7:
Learnability and Automatizability
Toniann Pitassi

In this talk we prove new upper and lower bounds on the proper PAC learnability of decision trees, DNF formulas, and intersections of halfspaces. Several of our results were obtained by exploring a new connection between automatizability in proof complexity and learnability. After explaining this basic connection, we will prove the following new results.

(1) We give new upper bounds for proper PAC learning of decision trees and DNF, based on analogous known algorithms for the automatizability of Resolution.

(2) We show that it is not possible to PAC learn DNF by DNF in polynomial time unless $NP \subseteq BPP$. We also prove the same negative result for proper PAC learning of intersections of halfspaces.

(3) We show that decision trees cannot be properly PAC learned, under a different (less standard) complexity-theoretic assumption.

This is joint work with Misha Alekhnovich, Mark Braverman, Vitaly Feldman, and Adam Klivans.


May 10:
Approximation via Cost Sharing (or, How to Build Good Networks by Flipping Coins)
Tim Roughgarden

In this talk, I will describe a family of randomized approximation algorithms for several well-studied NP-hard network design problems. These algorithms are extremely simple, and yet improve over the previously best known approximation ratios, in some cases by orders of magnitude. The analysis of these algorithms is driven by a novel connection between approximation algorithms and cost sharing.

Joint work with Anupam Gupta (CMU), Amit Kumar (IIT Delhi), and Martin Pal (Cornell).


May 24:
The Security of Protocols in Modern Network Settings - a Survey
Yehuda Lindell

In modern network settings, secure protocols are run concurrently with other arbitrary protocols. The interaction of different protocols with each other can be exploited by malicious parties to successfully mount attacks on protocols that are secure when considered in isolation. To make things worse, adversarial parties may design and release "malicious" protocols whose sole purpose is to interact with secure protocols and compromise their security. In fact, it has been shown that in modern network settings like the Internet, it is easy to launch such attacks on real protocols. In this talk, we survey recent progress on the question of whether or not it is possible to construct protocols that remain secure in this general adversarial setting. We consider a number of different models and present both positive and negative results that together provide a rather comprehensive picture of feasibility. Finally, we discuss research directions and goals for the future.


Back to Spring 2004 Theory Seminar main page