For Spring 2009, the usual time for the meetings will be Mondays at 11:00am in the CS conference room, CSB 453. Abstracts for talks are given below the talk schedule.

Pavel Hrubes

IAS

**Abstract:**
The problem of proving superpolynomial lower bounds on sizes of formulas computing an explicit polynomial is one of the important open questions in complexity theory. The same applies to more restricted computational models, such as homogeneous formulas. In particular, it is not known whether homogeneous formulas are more powerful than unrestricted formulas.
I will illustrate this problem on the example of elementary symmetric polynomials, which are natural candidates for a potential separation.

This is joint work with Amir Yehudayoff.

Arkadev Chattopadhyay

IAS

**Abstract:**
Obtaining strong lower bounds for the `Number on the Forehead' model of multiparty communication has many applications. For instance, it yields lower bounds on the size of constant-depth circuits and the size of proofs in important proof systems.

Recently, Sherstov (2008) and Shi and Zhu (2008), independently introduced two closely related techniques for proving lower bounds on 2-party communication complexity of certain functions. We extend both techniques to the multiparty setting. Each of them yields a lower bound of n^{\Omega(1)} for the k-party communication complexity of Disjointness as long as k is a constant. No superlogarithmic lower bounds were known previously on the three-party communication complexity of Disjointness.

Part of this is joint work with Anil Ada.

Ilias Diakonikolas

Columbia

**Abstract:**
We show that any distribution on {-1,1}^n that is k-wise independent fools any halfspace h with error \eps for k = O(\log^2(1/\eps)/\eps^2). Up to logarithmic factors, our result matches a lower bound of Benjamini, Gurel-Gurevich, and Peled (2007) showing that k = \Omega(1/(\eps^2 \cdot \log(1/\eps))) is necessary. Using standard constructions of k-wise independent distributions, we obtain the first explicit pseudorandom generators G: {-1,1}^s --> {-1,1}^n that fool halfspaces. Specifically, we fool halfspaces with error \eps and seed length s = k \log n = O(\log n \cdot \log^2(1/\eps)/\eps^2).
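For intuition about the objects involved, here is a minimal sketch (illustrative only, not the construction from the talk) of the classical pairwise-independent (k = 2) case: it generates n bits in {-1,1} from only about log n seed bits, yet every pair of coordinates is exactly uniform over {-1,1}^2. The fooling result above requires much larger k, obtained from standard constructions in the same spirit.

```python
import math
from collections import Counter
from itertools import combinations

def pairwise_independent_samples(n):
    """All outputs of a classic pairwise-independent generator over {-1,1}^n:
    x_i = (-1)^{<v_i, s>}, where v_1, ..., v_n are distinct nonzero vectors
    in GF(2)^m and the seed s ranges over GF(2)^m (seed length m ~ log n)."""
    m = max(1, math.ceil(math.log2(n + 1)))
    vs = list(range(1, n + 1))            # n distinct nonzero m-bit vectors
    for s in range(2 ** m):
        # <v, s> over GF(2) is the parity of the bitwise AND of v and s
        yield tuple(1 - 2 * (bin(v & s).count("1") % 2) for v in vs)

n = 7
samples = list(pairwise_independent_samples(n))
# Every pair of coordinates is exactly uniform over {-1,1}^2, even though
# the whole distribution is supported on only ~n points instead of 2^n.
for i, j in combinations(range(n), 2):
    counts = Counter((x[i], x[j]) for x in samples)
    assert all(c == len(samples) // 4 for c in counts.values())
print(f"{len(samples)} seeds give pairwise independence on n={n} bits")
```

Because any two distinct nonzero vectors over GF(2) are linearly independent, the pair of inner products is uniform over GF(2)^2, which is exactly pairwise independence.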

Our approach combines classical tools from real approximation theory with structural results on halfspaces by Servedio (Computational Complexity 2007).

Joint work with Parikshit Gopalan, Ragesh Jaiswal, Rocco Servedio and Emanuele Viola.

Zvika Brakerski

Weizmann Institute of Science

**Abstract:**
Verifiable random functions (VRFs), introduced by Micali, Rabin and
Vadhan, are pseudorandom functions in which the owner of the seed
produces a public-key that constitutes a commitment to all values of
the function. It can then produce, for any input $x$, a proof that
the function has been evaluated correctly on $x$, preserving
pseudorandomness for all other inputs. No public-key (even a falsely
generated one) allows for proving more than one value per input
(compare to signature schemes where soundness is only required for
legal public-keys).

VRFs are both a natural and a useful primitive, and previous works have suggested a variety of constructions and applications. Still, there are many open questions in the study of VRFs, especially their relation to more widely studied cryptographic primitives and constructing them from a wide variety of cryptographic assumptions. In this work we define a natural relaxation of VRFs that we call weak verifiable random functions, where pseudorandomness is required to hold only for randomly selected inputs. We conduct a study of weak VRFs, focusing on applications, constructions, and their relationship to other cryptographic primitives. We show:

* Constructions. We present constructions of weak VRFs based on a variety of assumptions, including general assumptions such as (enhanced) trapdoor permutations, as well as constructions based on specific number-theoretic assumptions such as the Diffie-Hellman assumption in bilinear groups.

* Separations. Verifiable random functions (both weak and standard) cannot be constructed from one-way permutations in a black-box manner. This constitutes the first result separating (standard) VRFs from any cryptographic primitive.

* Applications. Weak VRFs capture the essence of constructing non-interactive zero-knowledge proofs for all NP languages.

Joint work with Shafi Goldwasser, Guy Rothblum and Vinod Vaikuntanathan.

Texas A&M University

**Abstract:**
Parameterized algorithms have witnessed tremendous growth in the last decade and have become increasingly important in dealing with NP-hard problems that arise in practical computation. In this talk, after a brief review of the basic concepts in parameterized computation, we will focus on two new algorithmic techniques that have proved useful in the recent development of parameterized algorithms: randomized processing of a small unknown subset of a given universal set, and implicitly enforced parameter bounds in a branch-and-search process. These techniques are simple, effective, and have led to significantly improved algorithms for a number of well-known NP-hard problems.
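One common reading of "randomized processing of a small unknown subset" is the random-partition idea used in several parameterized algorithms (this interpretation is mine, not the abstract's): a uniform random 2-coloring of the universe places an unknown k-element target set entirely into a fixed color class with probability 2^{-k}, so O(2^k) independent repetitions succeed with constant probability. A small simulation, with all names and parameters illustrative:

```python
import random

def random_partition_hits(universe_size, target, trials, seed=0):
    """Estimate the probability that a uniform random 2-coloring of the
    universe puts the (unknown) target subset entirely into color 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        coloring = {e: rng.randrange(2) for e in range(universe_size)}
        if all(coloring[e] == 0 for e in target):
            hits += 1
    return hits / trials

k = 4
target = set(range(k))          # the algorithm does not know this set
est = random_partition_hits(universe_size=100, target=target, trials=20000)
print(est, 2 ** -k)             # the estimate should be close to 2^-k
```

The success probability depends only on k, not on the size of the universe, which is what makes the trick useful for fixed-parameter algorithms.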

Jonathan Gross

Columbia University

**Abstract:**
We present a quadratic-time algorithm for calculating the sequence
of numbers g0, g1, g2, ... of topologically distinct ways to draw a
3-regular outerplanar graph G on each of the respective orientable
surfaces S0, S1, S2, ... (without edge-crossings). The total number
of ways over all surfaces is 2^n, where n is the number of vertices of
G. The key algorithmic features are a characterization of 3-regular
outerplanar graphs in terms of plane trees and a subsequent synthesis
of the graphs by sequences of amalgamations of building-block graphs
according to post-order traversals of those plane trees.
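The 2^n total mentioned in the abstract can be checked directly: by the Heffter-Edmonds principle, embeddings of a graph into orientable surfaces correspond to rotation systems (cyclic orderings of the edge-ends at each vertex), so the total over all surfaces is the product of (deg(v) - 1)! over all vertices; for a 3-regular graph on n vertices that is exactly 2^n. A short sketch (function names are illustrative):

```python
from math import factorial

def total_embeddings(adjacency):
    """Count rotation systems: the product of (deg(v) - 1)! over all
    vertices, which by the Heffter-Edmonds principle equals the number of
    embeddings of the graph summed over all orientable surfaces."""
    result = 1
    for v, neighbors in adjacency.items():
        result *= factorial(len(neighbors) - 1)
    return result

# K4 is 3-regular with n = 4 vertices, so the total is 2^4 = 16.
k4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
print(total_embeddings(k4))  # 16
```

The hard part, and the subject of the talk, is splitting this total into the genus distribution g0, g1, g2, ... rather than just summing it.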

Bhaskar Dasgupta

University of Illinois at Chicago

**Abstract:**
In this talk, I will discuss our ongoing investigations of approximability issues for a few somewhat-related optimization problems.

We first consider several versions of the transitive reduction problem on directed graphs. For all these problems we establish the following: a polynomial-time approximation algorithm with ratio 1.5 for the minimization version, a polynomial-time approximation algorithm with ratio 2 for the maximization version, and MAX-SNP-hardness of both versions even if the length of simple cycles is limited to 5.
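For intuition about the underlying object, here is a minimal pure-Python sketch (illustrative, not the talk's algorithms) of transitive reduction restricted to DAGs, where the reduction is unique and easy to compute; the approximation and hardness results above concern graphs that may contain cycles.

```python
def transitive_reduction(dag):
    """Minimum equivalent sub-DAG: drop edge (u, v) whenever v is still
    reachable from u by some other path. For DAGs the transitive reduction
    is unique and computable in polynomial time."""
    def reachable(src, dst, skip_edge):
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u in seen:
                continue
            seen.add(u)
            stack.extend(w for w in dag.get(u, []) if (u, w) != skip_edge)
        return False

    reduced = {u: set(vs) for u, vs in dag.items()}
    for u in dag:
        for v in list(dag[u]):
            if reachable(u, v, skip_edge=(u, v)):
                reduced[u].discard(v)   # (u, v) is implied by a longer path
    return reduced

dag = {0: [1, 2], 1: [2], 2: []}
print(transitive_reduction(dag))  # edge (0, 2) is implied by 0 -> 1 -> 2
```

In a DAG it is safe to test each edge against the original graph, since deleting a redundant edge never changes reachability.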

Roughly speaking, the Horn formula optimization problem asks, for a given Horn formula, for another equivalent Horn formula that optimizes certain objective criteria. We will discuss approximability of a special case of this problem via transitive reductions, and super-polylogarithmic inapproximability for a slightly more general case.

(Ongoing work with various subsets of the following authors: Amitabha Bhattacharya, Piotr Berman, Marek Karpinski and Gyorgy Turan. None of the results have been published yet.)

Falk Unger

UC Berkeley

**Abstract:**
Threshold direct product theorems are statements of the following
form: If one instance of a given problem can be solved with
probability at most p, then solving significantly more than a p
fraction among multiple instances has negligible probability. Results
of this kind are crucial when distinguishing whether a process
succeeds with probability s or c, for 0 < s < c < 1. Here standard direct
product theorems are of no help since even a process which can solve
one instance with probability c will only be able to solve all k
instances with exponentially small probability.
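To see why a threshold statement is the right target, the following sketch computes the relevant probabilities in the easy case of fully independent instances (the theorems in the talk concern harder, correlated and adversarial settings); the parameters are illustrative.

```python
from math import comb

def tail(k, t, q):
    """P[Binomial(k, q) >= t]: the chance of solving at least t of k
    independent instances when each is solved with probability q."""
    return sum(comb(k, i) * q**i * (1 - q)**(k - i) for i in range(t, k + 1))

k, s, c = 100, 0.4, 0.6          # soundness s vs completeness c
threshold = k // 2               # accept if at least half are solved
# Demanding success on ALL k instances is useless: even at q = c the
# probability is astronomically small.
print(f"all-{k} success at q={c}: {c**k:.1e}")
# A threshold test cleanly separates the two cases.
print(f">= {threshold} at q={c}: {tail(k, threshold, c):.3f}")  # close to 1
print(f">= {threshold} at q={s}: {tail(k, threshold, s):.3f}")  # close to 0
```

A threshold direct product theorem says the same separation survives even when the k instances are not solved independently.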

I will present a probabilistic inequality which can turn XOR Lemmas into threshold (and standard) direct product theorems in an information-theoretically optimal way. I will give examples of this approach and show how to obtain threshold direct product theorems for example for quantum XOR games, quantum random access codes, circuits, (multi-party) communication complexity and polynomials over GF(2). We also show how other recent results can be simplified and extended.

It is well-known that direct product theorems and XOR Lemmas are ``essentially'' equivalent. It will be clear from my talk that one direction is often even tight: going from XOR Lemmas to (threshold) direct product theorems is possible without loss in the parameters.

Seung Geol Choi

Columbia University

**Abstract:**
We present a new construction of non-committing encryption schemes.
Unlike the previous constructions of Canetti et al. (STOC '96) and
of Damg{\aa}rd and Nielsen (Crypto '00), our construction achieves
all of the following properties:

As a result, we obtain the first non-committing public-key encryption schemes under hardness of factoring and worst-case lattice assumptions; previously, such schemes were only known under the DDH and RSA assumptions. Combined with existing work on secure multi-party computation, we obtain protocols for multi-party computation secure against a malicious adversary that may adaptively corrupt an arbitrary number of parties under weaker assumptions than were previously known. Specifically, we obtain the first adaptively secure multi-party protocols based on hardness of factoring in both the stand-alone setting and in the UC setting with a common reference string.

This is a joint work with Dana Dachman-Soled, Tal Malkin, and Hoeteck Wee.

David Xiao

Princeton University

**Abstract:**
The PAC model (Valiant, CACM '84) is one of the central models
studied in computational learning theory. There is evidence that
many specific classes of functions (e.g. intersections of
half-spaces, parity functions with noise, etc.) are hard to learn by
efficient algorithms, and cryptographic assumptions imply that
learning small circuits is hard. We say that PAC learning is hard if
no efficient algorithm can learn all functions computable by small
circuits.

In this talk, we show that the black-box complexity of PAC learning lies strictly between NP and ZK. More precisely, if P = NP then PAC learning is easy and if ZK != BPP then PAC learning is hard, but black-box techniques (with some additional restrictions) do not suffice to prove equivalence in either case.

With regard to NP, we rule out non-adaptive reductions using a PAC learning oracle to solve an NP-complete problem by showing this would imply that coNP is contained in AM, which is considered implausible.

With regard to ZK, we rule out relativizing proofs that ZK != BPP based on hardness of learning. Using the characterization of ZK of Ong and Vadhan (EUROCRYPT '07), we also show that any black-box construction of a (computational) ZK protocol for a language L based on hardness of learning implies that L actually has a *statistical* zero knowledge proof (i.e. L is in SZK), and hence such a black-box construction is unlikely to exist for NP-complete languages.

Parts of this talk are based on joint work with Benny Applebaum and Boaz Barak.

Contact

There is a mailing list for the reading group. General information about the mailing list (including how to subscribe yourself to it) is available here. If you want to unsubscribe or change your options, send email to theoryread-request@lists.cs.columbia.edu with the word `help' in the subject or body (don't include the quotes), and you will get back a message with instructions.

Comments on this page are welcome; please send them to rocco-at-cs.columbia.edu.

Last updated 1/30/2009.