Columbia Theory Reading Group, Spring 2008

Abstracts for the talks are given below. See the main page for schedule and location information.


Friday, January 18:
Succinct Approximate Convex Pareto Curves
Ilias Diakonikolas
Columbia University
We study the succinct approximation of convex Pareto curves of multiobjective optimization problems. We propose the concept of the $\epsilon$-convex Pareto ($\epsilon$-CP) set as the appropriate one for the convex setting, and observe that it can offer arbitrarily more compact representations than $\epsilon$-Pareto sets in this context. We characterize when an $\epsilon$-CP can be constructed in polynomial time in terms of an efficient routine $\textrm{Comb}$ for optimizing (exactly or approximately) monotone linear combinations of the objectives. We investigate the problem of computing minimum-size $\epsilon$-convex Pareto sets, both for discrete (combinatorial) and continuous (convex) problems, and present general algorithms using a $\textrm{Comb}$ routine. For bi-objective problems, we show that if we have an exact $\textrm{Comb}$ optimization routine, then we can compute the minimum $\epsilon$-CP for continuous problems (this applies, for example, to bi-objective Linear Programming and Markov Decision Processes), and a factor-2 approximation to the minimum $\epsilon$-CP for discrete problems (this applies, for example, to bi-objective versions of polynomial-time solvable combinatorial problems such as Shortest Paths, Spanning Tree, etc.). If we have an approximate $\textrm{Comb}$ routine, then we can compute factor-3 and factor-6 approximations respectively to the minimum $\epsilon$-CP for continuous and discrete bi-objective problems. We also consider the case of three or more objectives and present some upper and lower bounds.

(Joint work with Mihalis Yannakakis.)
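To give a feel for how a $\textrm{Comb}$ routine drives such constructions, here is a minimal Python sketch of a chord/bisection scheme for the bi-objective case. It is an illustrative approach, not the algorithm from the talk: the comb(w1, w2) oracle is an assumed interface returning a point minimizing w1*f1 + w2*f2 exactly, and an additive tolerance eps is used where the talk's guarantees are multiplicative.

def approx_convex_pareto(comb, eps):
    """Sketch: collect points whose convex hull is close to the convex
    Pareto curve of a bi-objective minimization problem.

    comb(w1, w2) -> (f1, f2): a feasible point minimizing w1*f1 + w2*f2,
    for weights w1, w2 >= 0 (an exact Comb oracle is assumed).
    eps: additive tolerance (the talk's results use multiplicative error).
    """
    left = comb(1.0, 0.0)    # optimum of the first objective
    right = comb(0.0, 1.0)   # optimum of the second objective
    points = {left, right}

    def refine(p, q):
        # Nonnegative weights normal to the chord from p to q.
        w1, w2 = p[1] - q[1], q[0] - p[0]
        if w1 <= 0 or w2 <= 0:
            return
        r = comb(w1, w2)
        # How far below the chord the true curve dips, in direction w.
        gap = (w1 * p[0] + w2 * p[1]) - (w1 * r[0] + w2 * r[1])
        if gap <= eps * (w1 + w2):   # chord already eps-close: stop
            return
        points.add(r)
        refine(p, r)
        refine(r, q)

    refine(left, right)
    return sorted(points)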


Wednesday, January 30:
Competitive Queue Management for Latency Sensitive Packets
Uri Nadav
Tel Aviv University

We consider the online problem of non-preemptive queue management. Packets arrive online, each with an associated intrinsic value; a packet can be accepted into a FIFO queue or discarded. The profit gained by transmitting a packet diminishes over time and is equal to its value minus the delay. This corresponds to Naor's well-known and strongly motivated model from operations research.

We give a queue management algorithm whose competitive ratio equals the golden ratio ($\phi\approx 1.618$) in the case that all packets have the same value, along with a matching lower bound. For the case of differentiated services, where packets have different intrinsic values, we derive $\Theta(1)$ upper and lower bounds on the competitive ratio. Our results extend to more general models for the loss of value over time.

Finally, we re-interpret our online algorithms in the context of selfish agents, producing an online mechanism that approximates the optimal social welfare to within a constant factor.

Joint work with Amos Fiat and Yishay Mansour.
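To make the model concrete, here is a toy Python simulation of a non-preemptive FIFO queue under a simple threshold admission policy. The policy and its parameter are illustrative stand-ins, not the golden-ratio algorithm from the talk; unit transmission times and delay measured as waiting time are also simplifying assumptions.

def simulate(arrivals, threshold):
    """Toy FIFO queue management: admit a packet iff its profit at its
    projected transmission time would be at least `threshold`.
    Profit of a packet transmitted at time t is value - (t - arrival),
    following the model above (unit transmission time assumed)."""
    free_at = 0.0     # time at which the link becomes free
    profit = 0.0
    for arrival, value in sorted(arrivals):
        start = max(free_at, arrival)      # FIFO: wait out the backlog
        projected = value - (start - arrival)
        if projected >= threshold:         # admit into the queue
            free_at = start + 1.0          # occupy one transmission slot
            profit += projected
        # otherwise the packet is discarded on arrival
    return profit

# A burst of unit-value packets at time 0, plus a late arrival.
packets = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (5.0, 1.0)]
print(simulate(packets, threshold=0.5))    # admits 2 of the 4 packets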


Thursday, February 7:
Lower Bounds and the Power of Randomness
Emanuele Viola
Columbia University

Computation is a ubiquitous phenomenon in nature, and of increasing relevance to many fields such as economics, biology, and mathematics. In this talk we survey some of our results in two surprisingly interrelated areas that are central to the understanding of computation: lower bounds and the power of randomness.

In particular, we consider the fundamental problem of following a path in a graph whose edges are distributed among several collaborating players. We obtain a lower bound on the amount of communication that the players need to exchange, answering a decade-old question. We also consider the problem of constructing sequences of bits that ``look random'' despite being generated with little or no randomness. Here we present a pseudorandom generator that stretches a few random bits into a long sequence that looks random to polynomials. Our result marks the first progress on this problem since 1993, and has spurred interaction between computer scientists and mathematicians, leading to advances on an outstanding conjecture in combinatorics.


Wednesday, February 27:
Partitioning a System of Polynomial Equations via Graph Theory
Gregory V. Bard
Fordham University

Solving a polynomial system of equations is NP-hard, and therefore very hard in practice. However, this problem is at the root of many important applications, including algebraic cryptanalysis. Many techniques exist to tackle these problems, such as Groebner basis methods, SAT solvers, linearization, and simple brute-force guess-and-check. The security of block and stream ciphers depends directly on the difficulty of solving systems of this type.

All of those methods are heavily dependent on the number of variables. The technique which will be presented today partitions the system into two distinct parts, which are otherwise connected only by a small number of variables which need to be guessed. So for example, consider a system of 100 variables and equations, with a small "bridge set" of 20 variables that connects two parts of the system. We show how to identify the presence and membership of the bridge, and thus break the system in two. The 20 variables must be guessed, but solving two 40-variable systems $2^{20}$ times is much closer to feasible ($2^{61}$ work if using brute force) than solving the original ($2^{100}$ work if using brute force). Potentially, this could move infeasible problems into the realm of feasibility.
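As a sanity check on these numbers: guessing the 20 bridge variables costs a factor of $2^{20}$, and each guess leaves two independent 40-variable systems, each brute-forceable in $2^{40}$ steps, so the total work is

$$2^{20} \cdot \left(2 \cdot 2^{40}\right) = 2^{61} \ll 2^{100}.$$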


Thursday, February 28:
Black-box Construction of a Non-Malleable Encryption Scheme from Any Semantically Secure One
Dana Dachman-Soled
Columbia University

A non-malleable encryption scheme is one wherein given an encryption of a message, it is infeasible to generate an encryption of a related message. We show how to transform any semantically secure encryption scheme into a non-malleable one, with a black-box construction that achieves a quasi-linear blow-up in the size of the ciphertext. This improves upon the previous non-black-box construction of Pass, Shelat and Vaikuntanathan (Crypto '06). Our construction also extends readily to guarantee non-malleability under a bounded-CCA2 attack, thereby simultaneously improving on both results in the work of Cramer et al. (Asiacrypt '07).

Our construction departs from the oft-used paradigm of re-encrypting the same message with different keys and then proving consistency of encryptions in zero-knowledge. Instead, we encrypt an encoding of the message with certain locally testable and self-correcting properties. We exploit the fact that low-degree polynomials are simultaneously good error-correcting codes and a secret-sharing scheme.

Joint work with Seung Geol Choi, Tal Malkin, and Hoeteck Wee.
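The dual role of low-degree polynomials mentioned above is easy to see concretely. In the Python sketch below (toy parameters, illustrating the underlying fact rather than the paper's scheme), the evaluations of a random degree-$t$ polynomial with constant term $m$ form both a Reed-Solomon codeword encoding $m$ and a $t$-private Shamir secret sharing of $m$: any $t+1$ shares recover $m$, while the code's distance supports error correction.

import random

P = 101  # toy prime; all arithmetic is over GF(101)

def share(secret, t, n):
    """Evaluate a random degree-t polynomial with constant term `secret`
    at the points 1..n.  The result is simultaneously a Reed-Solomon
    codeword encoding `secret` and a t-private Shamir sharing of it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(points):
    """Recover the secret from any t+1 shares (x, y) by Lagrange
    interpolation at 0."""
    secret = 0
    for x_i, y_i in points:
        num = den = 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, t=3, n=10)
pairs = list(enumerate(shares, start=1))  # (evaluation point, share)
print(reconstruct(pairs[:4]))             # any 4 shares give back 42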


Wednesday, March 5:
Degree Bounded Network Design
Nikhil Bansal
IBM TJ Watson Research Center
Computing the minimum cost spanning tree in an undirected graph has been well understood since the 1920s. However, consider the generalization where we are given degree bounds b_v on each vertex v, and we want to find the minimum cost spanning tree subject to these bounds. In a recent breakthrough result, Goemans gave an LP-based approximation algorithm that computes a minimum cost tree while violating the degree bounds by at most an additive +2. This was subsequently improved and simplified by Singh and Lau, who gave an additive guarantee of +1.

In this talk, I will describe a new extremely simple proof of the +1 result. Then I will discuss extensions to directed graphs, matroids and more general network design problems. Most of the talk will consist of giving a flavor of the polyhedral techniques used in the design and analysis of these algorithms.

Joint work with Rohit Khandekar and Viswanath Nagarajan.
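For reference, the polytope behind these results is the natural LP relaxation of the degree-bounded spanning tree problem: the spanning tree polytope with degree constraints added. For a graph $G=(V,E)$ with costs $c_e$ and bounds $b_v$:

$$\min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad \sum_{e \in E(S)} x_e \le |S| - 1 \;\; \forall\, \emptyset \ne S \subsetneq V, \qquad \sum_{e \in E} x_e = |V| - 1, \qquad \sum_{e \in \delta(v)} x_e \le b_v \;\; \forall v \in V, \qquad x \ge 0,$$

where $E(S)$ denotes the edges with both endpoints in $S$ and $\delta(v)$ the edges incident to $v$. Singh and Lau's algorithm iteratively relaxes an extreme point of this LP, returning a tree of cost at most the LP optimum whose degrees exceed the $b_v$ by at most the stated additive amount.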


Thursday, March 6:
Optimality of Merkle's Scheme, and Black-Box Separation of Key-Exchange from One-Way Function
Mohammad Mahmoody-Ghidary
Princeton University
We will see how to break any key-exchange protocol in the random oracle model using $O(q^2)$ queries to the oracle, where $q$ is the number of queries asked by Alice and Bob in the protocol. The bound is tight up to a constant factor, since Merkle's simple scheme from 1979 cannot be broken with $o(q^2)$ queries. Our attack improves on (and, through a totally different analysis, simplifies) the $\tilde{O}(q^6)$-query attack of Impagliazzo and Rudich ('89), which was the main step in their rejection of black-box constructions of key exchange from one-way functions.

Joint work with Boaz Barak.
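For intuition, here is a toy Python rendition of Merkle's scheme in the random-oracle style: with about q oracle queries each, Alice and Bob agree on a key, while a brute-force eavesdropper needs on the order of q^2 queries. This is a simplified sketch of the folklore scheme, not the protocol analysis from the talk.

import hashlib, random

Q = 100                    # oracle queries per honest party
DOMAIN = Q * Q             # domain of size q^2

def H(x):                  # stand-in for the random oracle
    return hashlib.sha256(str(x).encode()).hexdigest()

# Alice: hash q random domain points and publish the hashes.
alice_secret = {H(x): x for x in random.sample(range(DOMAIN), Q)}
published = set(alice_secret)          # all that Bob and Eve see

# Bob: query random points until one hits a published hash;
# q of the q^2 points are marked, so this takes about q tries.
while True:
    x = random.randrange(DOMAIN)
    if H(x) in published:
        break

# Bob announces H(x); both parties now share the preimage x as a key.
assert alice_secret[H(x)] == x
# Eve, seeing only hashes, must search the whole domain: about q^2
# oracle calls -- quadratically more than the ~q spent by each party.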


Thursday, March 13:
Lossy Trapdoor Functions and Their Applications
Chris Peikert
SRI International
We introduce a new general primitive called lossy trapdoor functions (lossy TDFs), and show new approaches for constructing many important cryptographic primitives, including standard (injective) trapdoor functions, collision-resistant hash functions, chosen-ciphertext-secure cryptosystems, and more. All of our constructions are simple, efficient, and black-box.

We show how to realize lossy TDFs under various number theoretic assumptions, including hardness of the decisional Diffie-Hellman (DDH) problem and the worst-case hardness of standard lattice problems. Taken all together, our results resolve some long-standing open problems in cryptography: they give the first known (injective) trapdoor functions based on problems not directly related to integer factorization, and provide the first known CCA-secure cryptosystem based solely on worst-case lattice assumptions.

This is joint work with Brent Waters.
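The functionality (though none of the security) of a lossy TDF can be illustrated with a toy linear-algebra example over GF(2): the injective key is a random invertible matrix with its inverse as the trapdoor, the lossy key is a low-rank matrix, and evaluation is matrix-vector multiplication. In the real constructions the two key types are computationally indistinguishable (e.g., under DDH); in this Python sketch they obviously are not, and everything here is an illustrative assumption.

import random

N = 8  # toy dimension

def matvec(A, x):                      # matrix-vector product over GF(2)
    return [sum(a & b for a, b in zip(row, x)) % 2 for row in A]

def invert(A):
    """Gauss-Jordan inversion over GF(2); None if A is singular."""
    n = len(A)
    M = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def keygen(mode):
    if mode == "injective":    # random invertible A; A^-1 is the trapdoor
        while True:
            A = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
            Ainv = invert(A)
            if Ainv is not None:
                return A, Ainv
    u = [random.randint(0, 1) for _ in range(N)]   # lossy: rank-one u v^T
    v = [random.randint(0, 1) for _ in range(N)]
    return [[ui & vj for vj in v] for ui in u], None

A, trapdoor = keygen("injective")
x = [random.randint(0, 1) for _ in range(N)]
assert matvec(trapdoor, matvec(A, x)) == x    # trapdoor inverts f(x) = Ax

L, _ = keygen("lossy")
images = {tuple(matvec(L, [int(b) for b in format(i, f"0{N}b")]))
          for i in range(2 ** N)}
print(len(images))   # at most 2 of 256 possible outputs: x is mostly lost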


Wednesday, March 26:
Interactive and Noninteractive Zero Knowledge Are Equivalent in the Help Model
Iordanis Kerenidis
CNRS - Univ of Paris
We show that interactive and noninteractive zero-knowledge are equivalent in the `help model' of Ben-Or and Gutfreund (J. Cryptology, 2003). In this model, the shared reference string is generated by a probabilistic polynomial-time dealer who is given access to the statement to be proven. Our results do not rely on any unproven complexity assumptions and hold for statistical zero knowledge, for computational zero knowledge restricted to AM, and for quantum zero knowledge when the help is a pure quantum state.

This is joint work with Florin Ciocan, Andre Chailloux, and Salil Vadhan.


Wednesday, April 2:
Developments in Holographic Algorithms
Jin-Yi Cai
Computer Sciences Dept.
Univ. of Wisconsin -- Madison
Radcliffe Institute, Harvard University
Valiant's theory of holographic algorithms is a new design method for producing polynomial-time algorithms. Information is represented in a superposition of linear vectors in a holographic mix. This mixture creates the possibility of exponential-sized cancellations of fragments of local computations. The underlying computation is done by invoking the Fisher-Kasteleyn-Temperley method for counting perfect matchings in planar graphs, which uses Pfaffians and runs in polynomial time. In this way some seemingly exponential-time computations can be done in polynomial time, even though some minor variations of the problems are known to be NP-hard or #P-hard. Holographic algorithms challenge our conception of what polynomial-time computations can do, in view of the P vs. NP question.

In this talk we will survey some new developments in holographic algorithms.
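As a small illustration of the FKT method invoked above: once a planar graph is equipped with a Pfaffian orientation (which FKT guarantees exists and finds in polynomial time; here it is assumed as given), the number of perfect matchings equals $|\mathrm{Pf}(A)| = \sqrt{\det A}$ for the signed adjacency matrix $A$, so counting reduces to a determinant. A minimal Python sketch:

import math

def count_perfect_matchings(A):
    """Given the signed adjacency matrix of a Pfaffian-oriented graph,
    return |Pf(A)| = sqrt(det A), the number of perfect matchings.
    Plain floating-point elimination is fine for a toy example."""
    n = len(A)
    M = [[float(v) for v in row] for row in A]
    det = 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[piv][i]) < 1e-9:
            return 0
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return round(math.sqrt(abs(det)))

# 4-cycle 1-2-3-4-1, oriented 1->2, 2->3, 3->4, 1->4 (a Pfaffian
# orientation: the inner face has an odd number of clockwise edges).
C4 = [[ 0,  1,  0,  1],
      [-1,  0,  1,  0],
      [ 0, -1,  0,  1],
      [-1,  0, -1,  0]]
print(count_perfect_matchings(C4))   # -> 2: {12, 34} and {14, 23}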


Wednesday, April 16:
Simple encryption schemes against sophisticated attacks
Hoeteck Wee
Columbia University
Along with the increasing reliance on computers and the Internet for myriad tasks, from voting to auctions, comes a pressing need to develop cryptographic tools and protocols with stronger guarantees. Traditional cryptographic guarantees, such as data privacy amidst wiretapping and security against a static collection of malicious network entities, do not meet the security requirements for many of these tasks:

-- An adversary may be unable to learn your bid in an online auction if the bid is encrypted; however, it could potentially modify the ciphertext to obtain one corresponding to a bid that is a dollar higher than yours.

-- An adversary that adaptively determines which electronic voting machines to break into during the course of an election has a better chance of influencing the outcome than one that makes its choices before the election commences.

I will present new constructions of encryption schemes addressing each of these attacks. The first scheme guarantees that given an encryption of a message, it is infeasible to generate an encryption of a related message. The second improves upon an important building block used in constructing protocols for general multi-party computation that are secure against an adversary that adaptively corrupts up to one third of the parties. Compared to most previous constructions, our schemes are simpler, more efficient, and can be realized under a larger class of cryptographic assumptions.

(joint work with Seung Geol Choi, Dana Dachman-Soled and Tal Malkin.)
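The first attack above is easy to demonstrate against a textbook homomorphic scheme. Here is a toy Python example with exponential ElGamal (the message sits in the exponent): anyone can turn an encryption of bid $b$ into an encryption of $b+1$ without learning $b$. This illustrates the malleability problem being addressed; it is of course not the scheme from the talk, and the parameters are deliberately tiny.

import random

# Toy exponential ElGamal (message in the exponent); the parameters are
# far too small to be secure and are for illustration only.
p, g = 2039, 7                    # prime modulus and a generator
sk = random.randrange(1, p - 1)   # secret key
pk = pow(g, sk, p)                # public key

def enc(m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (pow(pk, r, p) * pow(g, m, p)) % p

def dec(c):
    c1, c2 = c
    gm = c2 * pow(c1, p - 1 - sk, p) % p      # g^m = c2 / c1^sk
    for m in range(1000):                     # brute-force small exponents
        if pow(g, m, p) == gm:
            return m

bid = enc(100)                    # an honest bidder encrypts $100
c1, c2 = bid
outbid = (c1, c2 * g % p)         # attacker multiplies in g^1: bid + 1
print(dec(outbid))                # -> 101, without ever learning the bid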


Monday, April 14:
Unique Games on Expanding Constraint Graphs are Easy
Alexandra Kolla
UC Berkeley and Princeton University
I will present efficient algorithms to find a good solution to the Unique Games problem when the constraint graph is an expander. I will show two different ways of finding highly satisfying assignments when they exist. The first approach introduces a new analysis of the vector solution for the standard SDP that involves correlations among distant vertices. The second approach is based on the observation that the information of the satisfying labeling is encoded within the first few eigenvectors of the label-extended graph. This leads to a simple spectral partitioning algorithm for recovering highly satisfying assignments.

This is joint work with Sanjeev Arora, Subhash Khot, David Steurer, Madhur Tulsiani, and Nisheeth Vishnoi.
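The label-extended graph in the second approach is easy to construct: each vertex $v$ is split into $k$ copies $(v, i)$, and a constraint on edge $(u, v)$ with permutation $\pi$ connects $(u, i)$ to $(v, \pi(i))$ for every label $i$. A satisfying labeling $\ell$ then appears as a copy of the constraint graph on the vertices $(v, \ell(v))$, which is what the top eigenvectors detect. A minimal Python sketch (the toy instance and the dense eigensolver are illustrative choices; the spectral recovery step itself is omitted):

import numpy as np

def label_extended_graph(n, k, constraints):
    """Adjacency matrix of the label-extended graph of a unique game.

    n vertices, k labels; constraints is a list of (u, v, pi) with pi a
    length-k tuple encoding the constraint label(v) = pi[label(u)].
    Copy (v, i) gets index v * k + i.
    """
    A = np.zeros((n * k, n * k))
    for u, v, pi in constraints:
        for i in range(k):
            A[u * k + i, v * k + pi[i]] = 1
            A[v * k + pi[i], u * k + i] = 1
    return A

# Toy satisfiable instance: a triangle with k = 3 labels.
constraints = [(0, 1, (1, 2, 0)), (1, 2, (0, 1, 2)), (0, 2, (1, 2, 0))]
A = label_extended_graph(3, 3, constraints)
eigenvalues, eigenvectors = np.linalg.eigh(A)
print(eigenvalues[-1])   # 2.0: the triangle's top eigenvalue survives,
                         # witnessed by a vector supported on (v, label(v))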


Wednesday, April 23:
Evolvability from Learning Algorithms
Vitaly Feldman
IBM Almaden Research Center
In this talk I will present results on the power of the model of evolvability recently introduced by Leslie Valiant. Valiant's model addresses one of the most important and least understood aspects of the theory of evolution: how complex and adaptable mechanisms result from relatively short sequences of random mutations guided (primarily) by natural selection. The fundamental insight of his theory is that evolution is a form of learning in which the feedback from the environment is provided solely by natural selection. Valiant therefore suggests that the appropriate framework for understanding the power of evolution as a mechanism of learning is that of computational learning theory. Accordingly, in his model, evolvability of certain useful functionality is cast as a problem of learning the desired functionality through a process in which, at each step, the most ``fit'' candidate function is chosen from a small pool of candidates. A certain class of functions is considered evolvable if there exists a single mechanism that guarantees convergence to the desired function for any function in this class. Here the requirements closely follow those of the celebrated PAC learning model introduced by Valiant in 1984.

Valiant showed that classes of functions evolvable in his model are also learnable in the statistical query (SQ) model of Kearns, which is widely regarded as almost as powerful as PAC learning. He also asked whether the converse is true.

We show that evolvability is equivalent to learnability by a restricted form of statistical queries. Based on this equivalence we prove that for any fixed distribution D over the instance space, every class of functions learnable by SQs over D is evolvable over D. Previously, only the evolvability of monotone conjunctions of Boolean variables over the uniform distribution was known. On the other hand, we prove that the answer to Valiant's question is negative when distribution-independent evolvability is considered. To demonstrate this, we develop a technique for proving lower bounds on evolvability and use it to show that decision lists and linear threshold functions are not evolvable in a distribution-independent way.
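For reference, a statistical query in Kearns' model specifies a bounded function $\chi$ of a labeled example and a tolerance $\tau$, and the oracle returns $\mathrm{E}[\chi(x, f(x))]$ up to $\pm\tau$. Below is a minimal simulated oracle in Python (the sampling-based implementation and the toy target are assumptions for illustration); the results above concern learners that interact with the target only through such queries, further restricted in how the answers may be used.

import random

def sq_oracle(chi, tau, target, sample):
    """Simulate a statistical query STAT(chi, tau): estimate
    E[chi(x, f(x))] for x drawn from the example distribution,
    to within about +-tau (here via sampling and a Chernoff bound;
    the model's oracle guarantees the tolerance outright)."""
    trials = int(4 / tau ** 2)
    draws = (sample() for _ in range(trials))
    return sum(chi(x, target(x)) for x in draws) / trials

# Toy use: correlation of an unknown target with the first input bit,
# under the uniform distribution on {0,1}^5 (both are assumptions).
target = lambda x: x[0] ^ x[1]
sample = lambda: tuple(random.randint(0, 1) for _ in range(5))
chi = lambda x, y: 1 if x[0] == y else -1      # +-1 agreement predicate
print(sq_oracle(chi, tau=0.05, target=target, sample=sample))  # ~ 0.0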


Wednesday, April 30:
Hardness amplification proofs require majority
Emanuele Viola
Columbia University
Hardness amplification is a major line of research that mainly seeks to transform a given lower bound (e.g. a function that has correlation at most 99% with small circuits) into a strongly average-case one (i.e. a function that has negligible correlation with small circuits). Strongly average-case lower bounds are of central importance in complexity theory and in particular are necessary for most cryptography and pseudorandom generators.

In this work we show that standard techniques for proving hardness amplification against a class of circuits require that the same class of circuits be able to compute the Majority function.

Our work is most significant when coupled with the celebrated ``natural proofs'' result by Razborov and Rudich (J. CSS '97) and Naor and Reingold (J. ACM '04), which shows that most lower-bounding techniques cannot be applied to circuits that can compute Majority. The combination of our results with theirs shows that standard techniques for hardness amplification can only be applied to those circuit classes for which standard techniques cannot prove circuit lower bounds. This in particular explains the lack of strong average-case lower bounds for a number of circuit classes for which we do have lower bounds.

Our results also show a qualitative difference between the direct product lemma and Yao's XOR lemma, and they give tight bounds on the number of queries needed for hardness amplification.

Joint work with Ronen Shaltiel.
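For context, Yao's XOR lemma mentioned above says, in one standard form: if every circuit of size $s$ computes $f$ correctly on at most a $1-\delta$ fraction of inputs, then every circuit of size $s'$ (polynomially related to $s$, $\delta$, and $\epsilon$) has correlation at most

$$(1 - 2\delta)^k + \epsilon$$

with $f^{\oplus k}(x_1, \ldots, x_k) = f(x_1) \oplus \cdots \oplus f(x_k)$, i.e., the correlation decays exponentially in $k$. The direct product lemma makes the analogous statement for the concatenation $(f(x_1), \ldots, f(x_k))$, with the success probability itself decaying exponentially in $k$.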


Wednesday, May 7th:
Optimal Algorithms and Inapproximability Results for Every CSP?
Prasad Raghavendra
University of Washington
Semidefinite programming (SDP) is one of the strongest algorithmic techniques used in the design of approximation algorithms. In recent years, the Unique Games Conjecture (UGC) has proved to be intimately connected to the limitations of semidefinite programming.

Making this connection precise, we show the following result: if the UGC is true, then for every constraint satisfaction problem (CSP), the best approximation ratio is given by a certain simple SDP. Specifically, we show a generic conversion from SDP integrality-gap instances to UGC hardness results. This result holds both for maximization and minimization problems over arbitrary finite domains. Using this connection between integrality gaps and hardness results, we obtain a generic polynomial-time algorithm for all CSPs. Assuming the Unique Games Conjecture, this algorithm achieves the optimal approximation ratio for every CSP.

Unconditionally, for all 2-CSPs the algorithm achieves an approximation ratio equal to the integrality gap of the natural SDP. Thus the algorithm achieves at least as good an approximation ratio as the best known algorithms for several problems like MaxCut, Max2SAT and Unique Games.
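As a concrete instance, the natural SDP for MaxCut is the Goemans-Williamson relaxation, in which each vertex $i$ receives a unit vector $v_i$:

$$\max \sum_{(i,j) \in E} \frac{1 - \langle v_i, v_j \rangle}{2} \quad \text{subject to} \quad \|v_i\|^2 = 1 \;\; \forall i \in V.$$

Under the UGC, the result above says that the worst-case ratio between the value of this relaxation and the true maximum cut (its integrality gap) equals the best approximation ratio achievable for MaxCut in polynomial time.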


Wednesday, May 14th:
Optimal Cryptographic Hardness of Learning Monotone Functions
Andrew Wan
Columbia University
A wide range of positive and negative results have been established for learning different classes of Boolean functions from uniformly distributed random examples. However, polynomial-time algorithms have thus far been obtained almost exclusively for various classes of monotone functions, while the computational hardness results obtained to date have all been for various classes of general (nonmonotone) functions. Motivated by this disparity between known positive results (for monotone functions) and negative results (for nonmonotone functions), we establish strong computational limitations on the efficient learnability of various classes of monotone functions. We give several such hardness results which are provably almost optimal, since they nearly match known positive results.

Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than $1/2 + 1/\sqrt{n}$; this accuracy bound is close to optimal by known positive results (Blum et al., FOCS '98). Other results show that under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation '04). Our main tool is a complexity-theoretic approach to hardness amplification via the noise sensitivity of monotone functions that was pioneered by O'Donnell (JCSS '04).

Joint work with Dana Dachman-Soled, Homin K. Lee, Tal Malkin, Rocco A. Servedio, and Hoeteck Wee.


Wednesday, May 14th:
Reconstruction of depth-3 arithmetic circuits
Amir Shpilka
Technion
Depth-3 arithmetic circuits compute polynomials that can be represented as sums of products of linear functions. In spite of their simple structure, we are far from understanding such circuits. In this talk we will focus on the reconstruction problem for depth-3 circuits that have a constant number of multiplication gates. That is, we have access to some unknown polynomial that can be represented as a sum of a constant number of products of linear functions, and by asking for its values on (a small number of) inputs of our choice we would like to find a depth-3 circuit computing it. We will show how to reconstruct such depth-3 circuits in time exp(polylog(s)), where s is the size of a depth-3 circuit computing the unknown polynomial. Our techniques rely on a structural theorem for depth-3 circuits that compute the zero polynomial, and on a zero-testing algorithm that it implies.
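For concreteness, here is a minimal Python representation of the circuits in question: a depth-3 circuit is a list of multiplication gates, each a product of linear forms, and black-box access means we may evaluate it at points of our choice. The reconstruction algorithm itself is far more involved; this sketch only pins down the model.

# A linear form is (coeffs, const), computing sum(c * x) + const.
# A depth-3 (Sigma-Pi-Sigma) circuit is a list of multiplication
# gates, each gate a list of linear forms; the circuit outputs the
# sum over gates of the product of the gate's linear forms.

def evaluate(circuit, x):
    total = 0
    for gate in circuit:
        prod = 1
        for coeffs, const in gate:
            prod *= sum(c * xi for c, xi in zip(coeffs, x)) + const
        total += prod
    return total

# Example with two multiplication gates: (x0 + x1)(x0 - x1) + (2*x1 + 1).
circuit = [
    [((1, 1), 0), ((1, -1), 0)],   # gate 1: (x0 + x1) * (x0 - x1)
    [((0, 2), 1)],                 # gate 2: the single form 2*x1 + 1
]
print(evaluate(circuit, (3, 2)))   # (5)(1) + 5 = 10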


Back to Spring 2008 Theory Seminar main page