Abstracts for the talks are given below. See the main page for schedule and location information.

Rocco Servedio

In this talk we consider a fundamental (and seemingly quite difficult) problem in computational learning theory: learning an arbitrary Boolean function which depends on an unknown set of k out of n Boolean variables. (Here k is much less than n, e.g. log(n) or even smaller.) This problem was posed by Avrim Blum in 1994 and is an important special case of several notorious open problems such as learning decision trees or DNF formulas.

We give an algorithm for learning such functions given only a source of uniformly distributed random labeled examples. Our algorithm achieves the first polynomial factor improvement on a naive n^k time bound (which can be achieved via exhaustive search). The algorithm and its analysis are based on new structural properties which hold for all Boolean functions and may be of independent interest.
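As a concrete point of reference (not the algorithm of the paper, whose details are beyond this abstract), the naive $n^k$-time exhaustive search mentioned above can be sketched as follows: try every $k$-subset of variables and keep one whose projection is consistent with all labeled examples. Names and parameters here are illustrative only.

```python
from itertools import combinations
import random

def learn_junta_naive(examples, n, k):
    """Exhaustive-search junta learner: try every k-subset of the n
    variables and keep the first one whose projection is consistent
    with all labels.  This is the roughly C(n, k) ~ n^k baseline."""
    for subset in combinations(range(n), k):
        table = {}
        consistent = True
        for x, y in examples:
            key = tuple(x[i] for i in subset)
            if table.setdefault(key, y) != y:
                consistent = False
                break
        if consistent:
            return subset, table
    return None

# Hidden 3-junta on variables {1, 4, 7}: the parity of those bits.
random.seed(0)
n, k = 10, 3
examples = []
for _ in range(200):
    x = tuple(random.randint(0, 1) for _ in range(n))
    y = (x[1] + x[4] + x[7]) % 2
    examples.append((x, y))

subset, table = learn_junta_naive(examples, n, k)
print(subset)  # with enough uniform examples this recovers (1, 4, 7)
```

With 200 uniform examples, a wrong subset is consistent only with negligible probability, so the search isolates the relevant variables; its cost, however, grows as $n^k$, which is exactly the bound the talk's algorithm improves upon.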

Joint work with Elchanan Mossel (UC Berkeley) and Ryan O'Donnell (MIT), to appear in STOC 2003.

Cliff Stein

Tal Malkin

Approximation algorithms are often useful in obtaining efficient solutions to problems where no efficient exact computation is known, for example when the input data is extremely large, or when the problem is NP-hard. Secure multiparty computation allows mutually distrustful parties to compute a function of their inputs without revealing unnecessary information. Both secure multiparty computation and approximation algorithms are major research fields, and a rich body of work has emerged in each over the last few decades. However, some settings require the benefits offered by both concepts. For example, two companies may want to compute joint statistics on their data, which is typically huge, without revealing any additional information. Combining techniques of secure multiparty computation with approximation algorithms is complex, and if done naively the resulting algorithm may be neither efficient nor secure. Here, we initiate the study of secure multiparty computation of approximations, realizing most of the cost savings available by using an approximation algorithm, without losing the privacy of a secure computation.

We make three contributions: First, we introduce the concept and give formal definitions of secure multiparty approximate computation. Second, we present efficient private approximation protocols for distance functions, which are useful for computations on massive data sets. In particular, we give an efficient, sublinear-communication, private approximation for the Hamming distance, and an efficient, polylogarithmic-communication solution for the $L^{2}$ distance in a relaxed model. Finally, we give an efficient private approximation of the permanent and other related \#P-hard problems.
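The private protocols themselves are the subject of the paper; as a hypothetical, non-private illustration of why approximation saves communication on massive data, the Hamming distance between two long strings can be estimated from a small random sample of coordinates (only the sampled coordinates, not all $n$, need be exchanged). All names below are illustrative.

```python
import random

def approx_hamming(a, b, num_samples, rng):
    """Estimate the Hamming distance between equal-length strings by
    sampling coordinates uniformly at random and rescaling.  This is a
    non-private toy: communication is num_samples coordinates, not n."""
    n = len(a)
    hits = sum(a[i] != b[i]
               for i in (rng.randrange(n) for _ in range(num_samples)))
    return hits * n / num_samples

rng = random.Random(1)
n = 100_000
a = [rng.randint(0, 1) for _ in range(n)]
b = a[:]
for i in rng.sample(range(n), 10_000):  # plant distance exactly 10,000
    b[i] ^= 1

est = approx_hamming(a, b, 2_000, rng)
print(est)  # close to 10,000 in expectation
```

The estimator is unbiased with standard deviation well under the true distance here; the protocols in the paper achieve comparable sublinear communication while additionally guaranteeing that neither party learns more than the (approximate) answer.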

This talk is based on joint work with J. Feigenbaum, Y. Ishai, K. Nissim, M. Strauss, and R. Wright.

Venkatesan Guruswami

(University of Washington)

Expanders, which are sparse yet highly connected graphs with additional pseudorandom properties, have found applications in varied settings. In particular, they have been the main combinatorial tool in the explicit constructions of error-correcting codes with linear time encoding and decoding algorithms. Recently, these constructions were improved to the point where the rate (a measure of efficiency of the code) vs. error-correction trade-off is near-optimal.

In this talk, I'll begin by presenting a brief overview of how expanders are used in these constructions. I'll then talk about some recent work which uses expanders and related combinatorial tools to construct codes with *linear time* ``list decoding'' algorithms. Under list decoding, the decoder is permitted to output a small *list* of candidate messages which must include the transmitted message, and this enables meaningful error recovery from much larger amounts of noise. Unlike the prior constructions of list decodable codes which were algebraic (either explicitly or in disguise), our construction is combinatorial, and in addition to expanders, relies on spectral partitioning tools to find dense clusters in graphs.

I'll also highlight some related future challenges in the area of list decoding.

Based on joint works with Piotr Indyk.

Anargyros Papageorgiou and Joe Traub

Risi Kondor

Kernel-based learning algorithms, such as Support Vector Machines, are currently very popular in the Machine Learning community. One key feature of these algorithms has only recently been fully appreciated: by abstracting the structure of the input space into the kernel, the same algorithms can be applied unchanged in wildly different domains.

After a brief introduction to kernel based learning methods, I will discuss learning on graphs, finite groups and Riemannian manifolds. In particular, I will show how the concept of Laplace's operator and diffusion can unify all these domains and allow us to tap into 200 years of mainstream mathematics.
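A minimal numerical sketch of the diffusion idea on a graph, assuming the standard construction $K = e^{-\beta L}$ with $L = D - A$ the combinatorial Laplacian (the concrete graph and parameter below are illustrative, not from the talk):

```python
import numpy as np

def diffusion_kernel(adj, beta):
    """Diffusion kernel K = exp(-beta * L), where L = D - A is the
    combinatorial graph Laplacian.  L is symmetric PSD, so K is a
    symmetric positive-definite kernel on the graph's vertices."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    w, V = np.linalg.eigh(lap)  # matrix exponential via eigendecomposition
    return V @ np.diag(np.exp(-beta * w)) @ V.T

# Path graph on 4 vertices: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
K = diffusion_kernel(A, beta=0.5)
print(np.round(K[0], 3))  # similarity decays with graph distance
```

Because $K$ is positive semidefinite, it can be plugged unchanged into any kernel method (an SVM, for instance), which is precisely the "same algorithm, different domain" point above.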

Moti Yung

Digital signatures emerge naturally from Public-Key Encryption based on trapdoor permutations, and the ``duality'' of the two primitives was noted as early as Diffie and Hellman's seminal work. The current work is centered on the crucial observation that two well known cryptographic primitives whose connection has not been noticed so far in the literature enjoy an analogous duality. The primitives are the privacy mechanism of "Group Signature Schemes" and the digital rights management mechanism called "Traitor Tracing Schemes." Based on the observed duality, we introduce new strategies (methodologies) for designing group signatures based on traitor tracing schemes.

Our first methodology applies to generic "public-key traitor tracing schemes." We show its power by applying it to the Boneh-Franklin scheme, and obtaining its dual group-signature. This dual scheme is the first provably secure group-signature scheme whose signature size is not proportional to the size of the group and is based on a traditional assumption, namely the Decisional Diffie-Hellman (and a random oracle). Our second methodology introduces a generic way of turning any group signature scheme with signature size linear in the group size into a group signature scheme with poly-logarithmic signature size. To this end it employs the notion of "traceability codes" (a central component of combinatorial traitor tracing schemes, already used in the first such scheme by Chor, Fiat and Naor). Without the random oracle assumption our schemes give rise to provably secure and efficient Identity Escrow schemes.

This is joint work with Aggelos Kiayias.

Ted Diament

The classical uncertainty principle states that a function and its Fourier spectrum cannot both be highly localized or concentrated. In quantum mechanics, this implies, most famously, that a particle's position and momentum cannot both be determined simultaneously to arbitrary precision.

We show that an analog of the uncertainty principle can be established in a purely algebraic setting (in particular without relying on any structural properties of the complex plane). Specifically, we show that if $F$ is any field, $G$ is any finite subgroup of $(F^*)^m$, and $f:G \rightarrow F$ is any function not identically zero, then the product of the number of nonzero Fourier coefficients of $f$ and the number of nonzero values of $f$ is at least $|G|$. In other words, a function and its Fourier spectrum cannot both have small support.

If we focus on finite fields, our result has the following interpretation.

Let $F$ be a finite field, $F^*$ its multiplicative group of order $n$, $\alpha$ a primitive element of $F$, and let $d_1,...,d_m$ be divisors of $n$. Then a $k$-sparse multivariate polynomial $f(x_1,..., x_m)$ with coefficients in $F$ and multi-degree $(d_1-1,..., d_m-1)$ is either identically zero, or it is nonzero on at least a fraction $1/k$ of the inputs in $\langle\alpha^{n/d_1}\rangle \times ... \times \langle\alpha^{n/d_m}\rangle$.

This fundamental property of polynomials is a generalization of a 1996 result by Karpinski and Shparlinski. It emerges here as a special case of the uncertainty principle. We will discuss implications of this result for polynomial identity testing and some approximation problems.
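As a toy numerical check of the support-size inequality above (illustrative only; the choice of field and group below is an assumption, not from the talk): take the cyclic group of $n$-th roots of unity in $F_p^*$ with $n \mid p-1$, compute the Fourier transform of random sparse functions over $F_p$, and verify that the supports of $f$ and $\hat{f}$ multiply to at least $|G|$.

```python
import random

def gf_dft(f, p, omega):
    """Fourier transform of f : Z_n -> F_p with respect to the group of
    n-th roots of unity in F_p (omega must have order n modulo p)."""
    n = len(f)
    return [sum(f[k] * pow(omega, j * k, p) for k in range(n)) % p
            for j in range(n)]

p, n, omega = 13, 12, 2          # 2 has multiplicative order 12 mod 13
rng = random.Random(0)
for _ in range(200):
    f = [rng.randrange(p) if rng.random() < 0.3 else 0 for _ in range(n)]
    if not any(f):
        continue
    fhat = gf_dft(f, p, omega)
    supp_f = sum(v != 0 for v in f)
    supp_fhat = sum(v != 0 for v in fhat)
    # the algebraic uncertainty principle: |supp f| * |supp f^| >= |G|
    assert supp_f * supp_fhat >= n
print("verified on 200 random functions")
```

The extreme cases behave as expected: a delta function (support 1) has a spectrum of full support $n$, saturating the bound, mirroring the classical Donoho-Stark phenomenon over the complex numbers.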

The talk is intended for a general audience of computer scientists. No special background in algebra or Fourier analysis will be assumed.
