Columbia Theory Reading Group, Fall 2007

Abstracts for the talks are given below. See the main page for schedule and location information.


Wednesday, September 19:
Succinct Approximation of Multiobjective Optimization Problems
Ilias Diakonikolas
Columbia University
In multiobjective optimization, solutions to an optimization problem are evaluated with respect to several cost criteria, and we are interested in the class of solutions that captures the trade-off between these objectives, the so-called Pareto curve. The problem is that the Pareto curve typically has exponential size, and hence we cannot efficiently construct the full curve.

In recent years we initiated a systematic investigation to develop the theory of multiobjective approximation along the same rigorous lines as the approximation of single-objective problems. We address the problem of efficiently computing a succinct approximation of the Pareto curve using as few solutions as possible. If we are to select only a certain number of solutions, how should we pick them so that they represent the whole spectrum of possibilities as accurately as possible?
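To make the object of study concrete, the sketch below illustrates, for two minimization objectives and an explicitly given finite solution set, the standard notion of an epsilon-Pareto set: a subset of solutions such that every solution is matched within a factor (1 + epsilon) in every objective by some selected solution. The greedy selection and the data are only illustrative; this is not the size-optimal construction from the talk.

    def pareto_front(points):
        """Exact Pareto front of a finite set of 2-objective cost vectors
        (both objectives to be minimized): keep the points that are not
        dominated by any other point."""
        return [p for p in points
                if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

    def eps_pareto_set(points, eps):
        """A simple (not size-optimal) epsilon-Pareto set: a subset S such that
        every solution is within a factor (1 + eps), in both objectives, of some
        member of S.  A greedy sweep over the Pareto front yields a valid
        (though not necessarily smallest) such set."""
        kept = []
        for q in sorted(pareto_front(points)):
            if not any(p[0] <= (1 + eps) * q[0] and p[1] <= (1 + eps) * q[1]
                       for p in kept):
                kept.append(q)
        return kept

    # Made-up bi-objective solution costs, e.g. (cost, makespan) pairs.
    solutions = [(1, 10), (2, 8), (2.05, 7.9), (4, 5), (4.1, 4.9), (9, 1)]
    print(eps_pareto_set(solutions, eps=0.1))   # a 4-point approximation of the curve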

Time permitting, we will discuss a different notion of approximation and present some of the many open problems in this area.

(Based on joint works with Mihalis Yannakakis.)


Wednesday, October 3:
Lossless Expanders from Parvaresh-Vardy Codes with Applications to Randomness Extraction
Venkatesan Guruswami
Univ. of Washington and Institute for Advanced Study

We give a new construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Our expanders have a short and self-contained description and analysis, based on the algebraic ideas underlying the recent list-decodable error-correcting codes of Parvaresh and Vardy (FOCS '05).

Our expanders can be interpreted as "randomness condensers" that achieve near-optimal compression of a weak random source while preserving all of its entropy. We also show, using ideas from the work of Guruswami and Rudra (STOC '06), that a natural construction based on Reed-Solomon codes yields a condenser that retains a (1 - delta) fraction of the source min-entropy, for any desired constant delta > 0.

Our results reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that extract 99.9% of the source entropy using a logarithmic (up to constant factors) length seed. Our construction is much simpler than the previous construction of Lu et al. (STOC '03) that achieved a similar result, and in addition improves upon it when the error parameter is small (e.g. 1/poly(n)).

Joint work with Christopher Umans and Salil Vadhan.


Wednesday, October 10:
k-means++: The advantages of careful seeding
Yahoo Research

The k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster. Although it offers no accuracy guarantees, its simplicity and speed are very appealing in practice. By augmenting k-means with a very simple, randomized seeding technique, we obtain an algorithm that is Θ(log k)-competitive with the optimal clustering. Our experiments show that this augmentation improves both the speed and the accuracy of k-means, often quite dramatically.
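The seeding technique in question is D^2 weighting: pick the first center uniformly at random, then pick each subsequent center with probability proportional to its squared distance from the nearest center chosen so far. A minimal sketch (with made-up data) follows; the returned centers would then be handed to the usual Lloyd iteration of k-means.

    import random

    def kmeanspp_seed(points, k):
        """D^2-weighted seeding: the first center is chosen uniformly at random;
        each later center is a data point chosen with probability proportional
        to its squared distance from the nearest center picked so far."""
        centers = [random.choice(points)]
        while len(centers) < k:
            d2 = [min(sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers)
                  for x in points]
            centers.append(random.choices(points, weights=d2, k=1)[0])
        return centers

    # Toy 2-D data set with three visible clusters.
    data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (9.0, 0.1), (8.8, 0.3)]
    print(kmeanspp_seed(data, 3))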


Wednesday, October 17: Two FOCS 2007 Practice Talks
Buy-at-Bulk Network Design with Protection
Spyros Antonakopoulos (Columbia University)
We consider approximation algorithms for buy-at-bulk network design, with the additional constraint that demand pairs be protected against edge or node failures in the network. In practice, the most popular model used in high speed telecommunication networks for protection against failures is the so-called 1+1 model. In this model, two edge or node-disjoint paths are provisioned for each demand pair. We obtain the first non-trivial approximation algorithms for buy-at-bulk network design in the 1+1 model for both edge and node-disjoint protection requirements. Our results are for the single-cable cost model, which is prevalent in optical networks. More specifically, we present a constant-factor approximation for the single-sink case and an O(log^3 n) approximation for the multi-commodity case. These results are of interest for practical applications and also suggest several new challenging theoretical problems.

Pseudorandom Bits for Polynomials
Emanuele Viola (Columbia University)

A pseudorandom generator is an efficient deterministic algorithm that stretches a randomly chosen seed into a sequence which is much longer but nevertheless "fools" certain randomness tests, in the sense that the tests cannot distinguish it from random. In this talk we focus on the randomness tests that correspond to low-degree polynomials in n variables over a finite field, say GF(2).

For the special case of degree-1 polynomials, Naor and Naor ('90) give a generator with optimal seed length s=O(log n). This generator has been one of the most celebrated results in pseudorandomness, with applications ranging from derandomization to PCPs. On the other hand, the case of higher degree has seen little progress, with only a couple of papers (Luby, Velickovic, and Wigderson, '93; Bogdanov, '05) giving much worse seed length s > 2^(sqrt(log n)). In particular, prior to our work generators with logarithmic seed length were not known even for polynomials of degree 2.

In this work we present a new approach to constructing pseudorandom generators that fool low-degree polynomials over finite fields, based on the so-called Gowers norms. Using this approach, we obtain the following main results:

  • Unconditionally, we exhibit generators that fool degree-3 polynomials with seed length s = O(log n).
  • Assuming the "Gowers inverse conjecture," we exhibit, for every d, generators that fool degree-d polynomials with seed length s = O(d log n).

Our generator for degree-d polynomials is simply the component-wise sum of d generators for degree-1 polynomials (on independent seeds).
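As a toy illustration of this composition (not of its analysis), the sketch below XORs the outputs of d independent copies of a small-bias generator, using the powering construction of Alon, Goldreich, Hastad, and Peralta over GF(2^8) as the degree-1-fooling ingredient; the choice of that particular small-bias generator, the field size, and the parameters are ours, not the authors'.

    import secrets

    def gf_mul(a, b, poly=0x11B, m=8):
        """Multiplication in GF(2^8) modulo the irreducible polynomial
        x^8 + x^4 + x^3 + x + 1 (the AES field polynomial)."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if a & (1 << m):
                a ^= poly
        return r

    def small_bias_gen(x, y, n):
        """Powering construction of a small-bias (degree-1-fooling) generator:
        output bit i is the GF(2) inner product of the bit vectors of x^i and y."""
        bits, xi = [], 1                      # xi holds x^i, starting at x^0 = 1
        for _ in range(n):
            bits.append(bin(xi & y).count("1") & 1)
            xi = gf_mul(xi, x)
        return bits

    def prg_for_degree_d(n, d, m=8):
        """The composition described above: the componentwise sum over GF(2),
        i.e. XOR, of d copies of a degree-1-fooling generator on independent seeds."""
        out = [0] * n
        for _ in range(d):
            x, y = secrets.randbits(m), secrets.randbits(m)   # fresh seed per copy
            out = [a ^ b for a, b in zip(out, small_bias_gen(x, y, n))]
        return out

    print(prg_for_degree_d(n=32, d=3))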

Joint work with Andrej Bogdanov.


Wednesday, October 31:
Optimal Mechanism Design: From the 2007 Nobel Prize in Economics to the Foundations of Internet Algorithms
Jason Hartline
Northwestern University

As the Internet has developed to become the single most important arena for resource sharing among parties with diverse and selfish interests, traditional algorithmic and distributed-systems approaches are insufficient. To prevent undesirable Internet phenomena such as spam in email systems, bid-sniping in eBay's auction marketplace, free-loading in file-sharing networks, and click-fraud in Internet advertising, game-theoretic and economic considerations must be taken into account. This issue is addressed by the economics field of mechanism design. Consider a mechanism to be a protocol in which selfish agents interact with a central server that then chooses an outcome (e.g., which agents are given access to which Internet resources) and possibly payments (sometimes it is necessary to make agents pay for the resources they use).

The first half of this talk surveys the seminal 1981 work of economist Roger Myerson, who was recognized last Thursday with the 2007 Nobel Prize in Economics. His work gave a characterization of the kinds of outcomes that are implementable by a mechanism. From this characterization, he reduces the problem of optimizing the total payments (i.e., the profit of the mechanism) to the problem of optimizing the social welfare (i.e., the total utility generated by the outcome selected). This result has since, directly and indirectly, implied countless other significant results in economics, hence the Nobel recognition.
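As a concrete (and purely illustrative) instance of this reduction: for a single-item auction with values drawn i.i.d. from a distribution F with density f, revenue maximization reduces to allocating to the bidder with the highest nonnegative virtual value phi(v) = v - (1 - F(v))/f(v) and charging the threshold bid. The sketch below instantiates this for the uniform[0,1] distribution, where phi(v) = 2v - 1 and the optimal auction is a second-price auction with reserve price 1/2; the example bids are made up.

    def virtual_value_uniform(v):
        """Myerson virtual value for the uniform[0,1] distribution:
        phi(v) = v - (1 - F(v)) / f(v) = v - (1 - v) = 2v - 1."""
        return 2 * v - 1

    def myerson_single_item(bids):
        """Revenue-optimal single-item auction for i.i.d. uniform[0,1] values:
        allocate to the highest nonnegative virtual value and charge the winner
        the threshold bid (the smallest bid with which it would still win),
        i.e. a second-price auction with reserve price 1/2."""
        phis = [virtual_value_uniform(b) for b in bids]
        winner = max(range(len(bids)), key=lambda i: phis[i])
        if phis[winner] < 0:                      # no bid clears the reserve
            return None, 0.0
        reserve = 0.5                             # the value v with phi(v) = 0
        others = [b for i, b in enumerate(bids) if i != winner]
        payment = max([reserve] + others)         # threshold bid for the winner
        return winner, payment

    print(myerson_single_item([0.3, 0.8, 0.65]))  # -> (1, 0.65)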

In the second part of this talk we discuss implications and extensions of this result for the design of Internet algorithms. In particular, implementing good mechanisms often requires that agents make payments. For example, spam would not exist in an email mechanism that charged one cent for each email sent. Yet many Internet protocols do not allow payments to be made. Would you pay for email? A computational approach to this problem is to use computational payments instead of monetary payments: to send an email, the sender's computer must first solve a computational task that provably takes ten seconds; this would effectively stop email spam. Notice that in order to obtain a desirable outcome (no spam), we have to waste the computational resources of good agents (which is a loss to society). We show how Myerson's characterization can be used to design a mechanism that optimizes the social welfare when payments are not monetary but instead are in units of wasted resources. These results apply in very general settings.
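The abstract does not commit to a specific computational task; purely to illustrate the flavor of a computational payment, the sketch below uses a hashcash-style puzzle (find a nonce whose hash has a prescribed number of leading zero bits). Such puzzles bound the sender's work only in expectation, and this is our illustration rather than the mechanism analyzed in the talk.

    import hashlib
    import itertools

    def mint_stamp(message, difficulty_bits=16):
        """Hashcash-style computational payment: find a nonce such that
        SHA-256(message:nonce) has `difficulty_bits` leading zero bits.
        Minting costs about 2**difficulty_bits hash evaluations in expectation;
        verifying costs a single hash evaluation."""
        target = 1 << (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify_stamp(message, nonce, difficulty_bits=16):
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = mint_stamp("to: alice@example.com")
    print(verify_stamp("to: alice@example.com", nonce))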


Wednesday, November 7:
Terminal Backup, 3D Matching and Covering Cubic Graphs
Elliot Anshelevich
Rensselaer Polytechnic Institute

We define a problem called Simplex Matching, and show that it is solvable in polynomial time. While Simplex Matching is interesting in its own right as a nontrivial extension of non-bipartite min-cost matching, its main value lies in the many (seemingly very different) problems that can be solved using our algorithm. For example, suppose that we are given a graph with terminal nodes, non-terminal nodes, and edge costs. Then the Terminal Backup problem, which consists of finding the cheapest forest connecting every terminal to at least one other terminal, is reducible to Simplex Matching. Simplex Matching is also useful for various tasks that involve forming groups of at least 2 members, such as project assignment and variants of facility location.

In an instance of Simplex Matching, we are given a hypergraph H with edge costs, whose edges have size at most 3. We show how to find the min-cost perfect matching of H efficiently, if the edge costs obey a simple and realistic inequality that we call the Simplex Condition. The algorithm we provide is relatively simple to understand and implement, but difficult to prove correct. In the process of this proof we show some powerful new results about covering cubic graphs with simple combinatorial objects.
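To make the problem statement concrete, here is a brute-force sketch of min-cost perfect matching in a hypergraph whose edges have size 2 or 3: choose pairwise-disjoint edges covering every vertex exactly once, at minimum total cost. This exponential-time enumeration on a made-up instance only pins down the definition; the talk's contribution is a polynomial-time algorithm under the Simplex Condition.

    def min_cost_perfect_matching(vertices, edges):
        """Brute-force min-cost perfect matching in a hypergraph with edges of
        size 2 or 3.  `edges` maps frozensets of vertices to costs.  Returns
        (total cost, list of chosen edges), or (inf, None) if no perfect
        matching exists.  Exponential time; illustrative only."""
        def solve(remaining):
            if not remaining:
                return 0.0, []
            v = min(remaining)                    # v must be covered by some edge
            best = (float("inf"), None)
            for e, cost in edges.items():
                if v in e and e <= remaining:
                    sub_cost, sub_match = solve(remaining - e)
                    if cost + sub_cost < best[0]:
                        best = (cost + sub_cost, [e] + sub_match)
            return best

        return solve(frozenset(vertices))

    edges = {frozenset({1, 2}): 4.0, frozenset({3, 4}): 2.0,
             frozenset({1, 2, 3}): 5.0, frozenset({4, 5, 6}): 3.0,
             frozenset({5, 6}): 2.5}
    print(min_cost_perfect_matching({1, 2, 3, 4, 5, 6}, edges))   # cost 8.0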


Wednesday, November 14:
Deterministic Extractors for Small Space Sources
Anup Rao
Institute for Advanced Study

It is widely acknowledged that the physical universe is unpredictable. In computer science this unpredictability, modelled as randomness, turns out to be an enabling feature in algorithm design, cryptography, and distributed computing.

Unfortunately, there is a gap between the physical assumption of unpredictability and the computer science model of pure randomness. Physical sources do not yield pure, independent random bits; instead they produce "impure" bits, with some entropy.

The area of randomness extraction attempts to convert this unpredictability into pure randomness, either by injecting a few pure random bits (which is good enough for algorithmic use), or deterministically, by exploiting structure in the source or combining several independent weak sources (which may be the only way to produce cryptographically strong randomness).

In this talk we describe work along the second direction above: we show how to deterministically extract high-quality randomness from a source generated by a small-width branching program or a small-space algorithm. This seems like a model that could capture defective sources of randomness in nature, and extractors for such sources could be useful in the construction of pseudorandom generators for space-bounded machines.
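To give a feel for the source model (in one common formalization: a bounded-width layered branching program that, in each step, probabilistically emits a bit and moves to a new state), here is a toy sampler; the width-2 transition table is invented for illustration and is not from the talk.

    import random

    def sample_small_space_source(n, transitions, start=0):
        """Sample n bits from a toy small-space source: at each step the current
        state chooses a (probability, output bit, next state) triple and emits
        the bit.  The number of states bounds the 'space' of the source."""
        state, bits = start, []
        for _ in range(n):
            options = transitions[state]
            weights = [p for p, _, _ in options]
            _, bit, state = random.choices(options, weights=weights, k=1)[0]
            bits.append(bit)
        return bits

    # A width-2 source: state 0 is biased toward emitting 0, state 1 toward 1.
    transitions = {
        0: [(0.8, 0, 0), (0.2, 1, 1)],
        1: [(0.3, 0, 0), (0.7, 1, 1)],
    }
    print(sample_small_space_source(20, transitions))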


Wednesday, November 28:
Information-Theoretically Secure Protocols and Security Under Composition
Tal Rabin
IBM TJ Watson Research Center

We investigate the question of whether security of protocols in the information-theoretic setting (where the adversary is computationally unbounded) implies the security of these protocols under concurrent composition. This question is motivated by the folklore that all known protocols that are secure in the information-theoretic setting are indeed secure under concurrent composition. We provide answers to this question for a number of different settings (i.e., considering perfect versus statistical security, and concurrent composition with adaptive versus fixed inputs). Our results enhance the understanding of what is necessary for obtaining security under composition, as well as provide tools (i.e., composition theorems) that can be used for proving the security of protocols under composition while considering only the standard stand-alone definitions of security.

Joint work with Eyal Kushilevitz and Yehuda Lindell.


Wednesday, December 5:
Computing Approximate Nash Equilibria
Costas Daskalakis
UC Berkeley
In view of the recent hardness results for mixed Nash equilibria, there has been increased interest in computing approximate equilibrium points, although progress has been slow. We will overview the hardness results and present recent developments on the subject, in particular algorithms that achieve constant-factor approximations for 2-player games, and a quasi-polynomial time approximation scheme for the multi-player setting. Next, we will consider a very natural and important class of games, the anonymous games, in which, as in certain auction settings, a player is oblivious to the identities of the other players. These games and their variants are appealing models for games with many players, not least because the description size they require is polynomial in the number of players, as opposed to the exponential size that a game in standard form would require. We will present a polynomial time approximation scheme for the anonymous setting and surprising connections with Stein's method in probability theory.
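For concreteness, a mixed-strategy profile (x, y) of a bimatrix game (A, B) is an epsilon-Nash equilibrium if neither player can gain more than epsilon, in expectation, by deviating. The sketch below merely measures that epsilon for a given profile of a made-up game; it does not implement the approximation algorithms discussed in the talk.

    def epsilon_of_profile(A, B, x, y):
        """Return the largest expected gain either player can obtain by deviating
        to a pure strategy, given payoff matrices A (row player) and B (column
        player) and mixed strategies x, y.  The profile is an eps-Nash
        equilibrium exactly when this value is at most eps."""
        rows, cols = len(A), len(A[0])
        row_payoff = sum(x[i] * y[j] * A[i][j] for i in range(rows) for j in range(cols))
        col_payoff = sum(x[i] * y[j] * B[i][j] for i in range(rows) for j in range(cols))
        best_row = max(sum(y[j] * A[i][j] for j in range(cols)) for i in range(rows))
        best_col = max(sum(x[i] * B[i][j] for i in range(rows)) for j in range(cols))
        return max(best_row - row_payoff, best_col - col_payoff)

    # Matching pennies: the uniform profile is an exact (0-approximate) equilibrium.
    A = [[1, -1], [-1, 1]]
    B = [[-1, 1], [1, -1]]
    print(epsilon_of_profile(A, B, [0.5, 0.5], [0.5, 0.5]))   # -> 0.0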

Based on joint work with Christos Papadimitriou.


Wednesday, December 12:
Trapdoors for Hard Lattices, and New Cryptographic Constructions
Vinod Vaikuntanathan
Massachusetts Institute of Technology

Integer *lattices* have emerged as a powerful alternative to number theory as a source of hard problems for cryptography. Cryptographic schemes based on lattices are attractive for several reasons: their computations are simple and of low complexity, they have thus far resisted quantum attacks, and their security can be based on the *worst-case* hardness of lattice problems. Unfortunately, the use of lattices in cryptography has so far been limited to relatively simple primitives such as collision-resistant hashing and public-key encryption.

In this talk, we will describe a new way to securely generate and exploit natural "trapdoors" in lattices. The cryptographic applications include various kinds of trapdoor functions, simple and efficient digital signature schemes, oblivious transfer, and identity-based encryption, all based on *worst-case* lattice assumptions.

The talk will be self-contained.

Joint work with Craig Gentry and Chris Peikert.


Back to Fall 2007 Theory Seminar main page