Columbia Theory Reading Group, Fall 2003

Abstracts for the talks are given below. See the main page for schedule and location information.


September 18:
Decoding Error-Correcting Codes via Linear Programming
Jon Feldman

Error-correcting codes are fundamental tools used to transmit digital information over an unreliable channel. Their study goes back to the work of Shannon, who used them as the basis for the field of information theory. The problem of decoding up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel.

In this talk we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new "LP decoders" have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the maximum-likelihood (ML) codeword sent over the channel. We refer to this property as the "ML certificate" property. We also introduce the concept of the "fractional distance" of an LP relaxation, and show that LP decoding always corrects a number of errors up to half the fractional distance.
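As a concrete illustration of the kind of relaxation involved, here is a minimal Python sketch (assuming NumPy and a recent SciPy; the helper name lp_decode, the toy parity-check matrix, and the cost values are illustrative only). It minimizes a log-likelihood-ratio cost over the usual relaxed polytope obtained from the parity checks, and reports whether the optimum is integral, which is precisely the case in which the ML certificate property applies.

    # Minimal sketch of LP decoding for a small binary linear code.
    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        """H: 0/1 parity-check matrix; llr[i] = log P(y_i|x_i=0) - log P(y_i|x_i=1).
        Returns (rounded solution, flag telling whether the LP optimum was integral)."""
        m, n = H.shape
        A_ub, b_ub = [], []
        for j in range(m):
            nbrs = list(np.flatnonzero(H[j]))
            # For every odd-sized subset S of the check's neighborhood, add the
            # "forbidden set" inequality: sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1.
            for size in range(1, len(nbrs) + 1, 2):
                for S in combinations(nbrs, size):
                    row = np.zeros(n)
                    row[list(S)] = 1.0
                    rest = [i for i in nbrs if i not in S]
                    if rest:
                        row[rest] = -1.0
                    A_ub.append(row)
                    b_ub.append(len(S) - 1)
        res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        x = res.x
        integral = bool(np.all(np.isclose(x, np.round(x))))  # ML certificate holds iff integral
        return np.round(x).astype(int), integral

    # Toy example: the length-3 repetition code with checks x0+x1=0 and x1+x2=0 (mod 2).
    H = np.array([[1, 1, 0], [0, 1, 1]])
    llr = np.array([-1.2, 0.4, -0.7])   # channel evidence, on balance favouring the codeword 111
    print(lp_decode(H, llr))            # -> (array([1, 1, 1]), True)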

We analyze specific LP decoders for two major families of codes: "turbo" codes and "low-density parity-check (LDPC)" codes. These families of codes have received a great deal of attention recently due to their unprecedented error-correcting performance. Our decoder is particularly attractive for these codes because the standard message-passing algorithms used for decoding are often difficult to analyze. We compare the performance of the LP decoder to the standard message-passing techniques, both analytically and experimentally. We also give new provably convergent message-passing decoders based on linear programming duality.

This is joint work with David Karger and Martin Wainwright.


October 2:
Separating the Power of Monotone Span Programs over Different Fields
Enav Weinreb

Monotone span programs are a linear-algebraic model of computation. They are equivalent to linear secret sharing schemes and have various applications in cryptography and complexity. Part of the specification of a span program is the field in which the algebraic operations are performed. A fundamental question is whether the choice of the field affects the power of the span program. We answer this question positively by proving that the power of monotone span programs over finite fields of different characteristics is incomparable. More precisely, we show a super-polynomial separation between any two fields with different characteristics.
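For reference, one standard way to formalize the model (the notation below is a common convention, not necessarily the one used in the talk): a monotone span program over a field $F$ is a matrix $M \in F^{m \times d}$ together with a labeling $\rho$ of its rows by the variables $x_1, \dots, x_n$ and a target vector $t \in F^{d}$ (often the first unit vector). On input $x \in \{0,1\}^n$ the program accepts iff

    t \in \mathrm{span}\{\, M_j \;:\; \text{row } j \text{ is labeled by a variable } x_i \text{ with } x_i = 1 \,\},

and it computes the monotone Boolean function that is 1 exactly on the accepted inputs; its size is the number of rows $m$. The separation result compares the smallest size achievable over fields of different characteristics.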

In this talk, we will introduce the model of monotone span programs, and briefly discuss its applications in complexity and cryptography. In the main part of the talk, we will describe the ideas of proving the separation result.


October 16:
Statistical zero-knowledge proofs with efficient provers: lattice problems and more
Daniele Micciancio

We construct several new statistical zero-knowledge proofs with _efficient provers_, i.e., ones where the prover strategy runs in probabilistic polynomial time given an NP witness for the input string.

Our first proof systems are for approximate versions of the Shortest Vector Problem (SVP) and Closest Vector Problem (CVP), where the witness is simply a short vector in the lattice or a lattice vector close to the target, respectively. Our proof systems are in fact proofs of knowledge, and as a result, we immediately obtain efficient lattice-based identification schemes which can be implemented with arbitrary families of lattices in which the approximate SVP or CVP are hard.
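For concreteness, the approximate versions are usually phrased as the following promise problems (this is the standard gap formulation; the particular approximation factors obtained in the talk are not restated here):

    \mathrm{GapSVP}_\gamma:\ \text{given } (B, d),\ \text{YES if } \lambda_1(\mathcal{L}(B)) \le d,\ \text{NO if } \lambda_1(\mathcal{L}(B)) > \gamma d;
    \mathrm{GapCVP}_\gamma:\ \text{given } (B, t, d),\ \text{YES if } \operatorname{dist}(t, \mathcal{L}(B)) \le d,\ \text{NO if } \operatorname{dist}(t, \mathcal{L}(B)) > \gamma d.

In both cases a YES witness is exactly the object mentioned above: a nonzero lattice vector of norm at most d, or a lattice vector within distance d of the target t.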

We then turn to the general question of whether _all_ problems in SZK ∩ NP admit statistical zero-knowledge proofs with efficient provers. Towards this end, we give a statistical zero-knowledge proof system with an efficient prover for a natural restriction of Statistical Difference, a complete problem for SZK. We also suggest a plausible approach to resolving the general question in the positive.
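For reference, in the standard formulation of Statistical Difference an instance is a pair of Boolean circuits (C_0, C_1), each viewed as sampling a distribution when fed uniformly random input bits, and the quantity of interest is the statistical distance

    \Delta(C_0, C_1) \;=\; \tfrac{1}{2} \sum_{y} \bigl|\Pr[C_0(U)=y] - \Pr[C_1(U)=y]\bigr| ;

YES instances are those with \Delta \ge 2/3 and NO instances those with \Delta \le 1/3.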

Joint work with Salil Vadhan (Harvard University). Talk based on a paper presented at CRYPTO 2003.


November 6:
Energy Aware Algorithm Design via Probabilistic Computing: From Algorithms and Models to Moore's Law and Novel (Semiconductor) Devices
Krishna Palem

The energy consumed by computations is becoming an increasing concern, both in the context of high-performance systems and of embedded systems, on a par with the traditional focus on raw speed and its derivative, performance. In this talk, we will outline a novel framework for designing and analyzing algorithms wherein the figure of merit is the energy complexity - a measure of the (physical) energy consumed. Using the formulation of an energy-aware switch, and a network of such switches, fundamental limits will be established for the energy needed for switching deterministically, as well as for the energy savings derived from probabilistic switching with a probability p of being correct. Specifically, it is shown that a single deterministic switching step for computing a BIT consumes at least kT ln(2) Joules of energy, whereas the energy consumed by a single probabilistic switching step to compute a PBIT can be as low as kT ln(2p) Joules. These results are developed within the context of an idealized switching device introduced here, constrained by the laws of classical (statistical) thermodynamics (of Maxwell, Boltzmann and Gibbs), as well as by the constraints of idealized semiconductor devices. Based on this notion of switching, models for algorithm analysis and design, upper and lower bounds on energy complexity, and hence, for the first time, asymptotic energy savings via the use of probabilistic computing will be established. Possible approaches to realizing these probabilistic switches using conventional CMOS technology, as well as their potential for accelerating the current semiconductor roadmap that is based on deterministic computing, including the projections implied by Moore's law, will be outlined. This work draws upon basic concepts from computer science, microelectronics and classical thermodynamics, and, in the interest of being self-contained, the presentation will include a brief survey of the relevant thermodynamics.
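Written out, and with the orientation figures below added here only for context (they are not from the abstract), the two bounds read

    E_{\mathrm{det}} \;\ge\; kT \ln 2, \qquad E_{\mathrm{prob}}(p) \;\ge\; kT \ln(2p), \qquad \tfrac{1}{2} \le p \le 1,

so the saving per switching step is kT ln(1/p); at room temperature (T ≈ 300 K, k ≈ 1.38 × 10^-23 J/K) the deterministic bound kT ln 2 is roughly 2.9 × 10^-21 Joules.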

This work was supported in part by DARPA Seedling Contract #F30602-02-2-0124.


November 13:
Exposure-Resilient Cryptography
Yevgeniy Dodis

Much successful research has focused on developing cryptographic protocols and algorithms which are secure (in some appropriate and well-defined sense) under the assumption that "secret" information is kept hidden from the adversary. However, as cryptographic algorithms are increasingly deployed on inexpensive, lightweight, mobile, and/or unprotected devices, the risk of *key exposure* is becoming a serious threat to the security of many real-world systems. Indeed, in practice attacks of this sort are, in many cases, more likely than attacks which directly "crack" the cryptographic assumptions on which the security of the scheme is based. And while at first glance it might appear that not much can be done to prevent or mitigate the damage caused by key exposure, the study of *exposure-resilient cryptography* has led to a variety of diverse and effective approaches for combating key exposure.

In this talk, I will survey several recent methodologies in the field of exposure-resilient cryptography in which I was involved. These methodologies include:

(1) remotely-keyed cryptography

(2) two-party schemes (i.e., client-server model)

(3) key evolution (including forward-secure, key-insulated and intrusion-resilient cryptography)

(4) partial key exposure protection (incl. secret sharing)

(5) biometric authentication

(6) intentional key exposure protection (incl. traitor tracing).

The talk will be introductory and concentrate on items (1)-(3). Many of the relevant papers can be found at http://cs.nyu.edu/~dodis
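As a toy illustration of the key-evolution idea behind item (3), the Python sketch below derives each period's key one-way from the previous one and erases the old key, so that compromising the current key does not expose keys, or authenticated messages, from earlier periods. This is only a didactic hash-chain sketch, not any of the specific forward-secure, key-insulated, or intrusion-resilient schemes surveyed in the talk.

    import hashlib, hmac, os

    class ForwardSecureMAC:
        def __init__(self):
            self.period = 0
            self.key = os.urandom(32)                 # K_0

        def update(self):
            """Advance one period: K_{i+1} = H("evolve" || K_i), then erase K_i."""
            self.key = hashlib.sha256(b"evolve" + self.key).digest()
            self.period += 1

        def tag(self, message: bytes) -> bytes:
            """MAC the message under the current period's key."""
            return hmac.new(self.key, message, hashlib.sha256).digest()

    fs = ForwardSecureMAC()
    t0 = fs.tag(b"message authenticated in period 0")
    fs.update()                                       # K_0 is now gone
    # An attacker who steals fs.key at this point cannot invert the hash to
    # recover K_0, and hence cannot forge tags for period 0.
    print(fs.period, t0.hex()[:16])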


November 20:
Two Unconventional Aspects of Trees
Fabrizio Luccio

A tree T=(V, E) is a connected graph which is either void, or contains a leaf v with exactly one incident edge e such that the subgraph T'=(V-{v}, E-{e}) is still a tree. This definition leads to two interrelated problems:

1. Hercules and the Hydra. The Hydra was a monster with many heads living in a marsh of Lerna. Hercules fought the Hydra, chopping off her heads with a club; however, each time he chopped off a head, two new heads grew forth from the monster's body. Hercules won. What mythology did not tell us, however, is that no matter in which order Hercules had attacked the heads, the monster would have been annihilated. Although less charming, we shall treat the Hydra as a rooted tree and her heads as leaves, specifying the rules for head chopping and re-growing (in particular, no new leaf is allowed to grow forth from the root). The problem was originally formulated by Kirby and Paris (Bull. London Math. Soc., 1982), who proved, using transfinite induction, that the number of heads converges to zero, and showed how head chopping is related to Goodstein sequences. To the best of our knowledge, an elementary combinatorial proof of convergence is still unknown. We shall discuss a new chopping strategy and prove its convergence.
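To make the game concrete, here is a small Python simulation of one standard formulation (the Kirby-Paris rule, in which the n-th chop causes n fresh copies of the chopped head's parent subtree to sprout from the grandparent, and heads attached directly to the root regrow nothing); the exact chopping and re-growing rules treated in the talk may differ in detail.

    import copy

    def chop(hydra, path, n):
        """hydra: nested lists, each node being the list of its child subtrees.
        path: child indices from the root down to the head (leaf) to chop.
        n: number of the current chop (controls how many copies regrow)."""
        if len(path) == 1:              # the head hangs directly off the root:
            del hydra[path[0]]          # remove it, nothing grows back
            return
        grandparent = hydra
        for i in path[:-2]:             # walk down to the head's grandparent
            grandparent = grandparent[i]
        parent = grandparent[path[-2]]
        del parent[path[-1]]            # chop the head off its parent
        for _ in range(n):              # n copies of the (modified) parent subtree
            grandparent.append(copy.deepcopy(parent))   # sprout from the grandparent

    # A tiny hydra: the root carries one neck with two heads.
    hydra = [[[], []]]
    chop(hydra, [0, 0], 1)              # -> [[[]], [[]]]
    chop(hydra, [0, 0], 2)              # -> [[], [[]], [], []]
    print(hydra)                        # the theorem: any sequence of chops eventually kills the Hydra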

2. Dense Trees. A natural extension of the definition of a tree is obtained by letting a leaf be a vertex with at most k incident edges, for a given integer parameter k>=1. We call the resulting structure a k-dense tree, with 1-dense trees being standard trees. The relations between k-dense trees and other families of graphs are discussed, and their properties are investigated. To some extent, dense trees can be seen as reinforced trees, in their role as fault-tolerant spanning trees of a network or as search tree structures. In particular, they arise in the field of distributed systems, where node failures may be determined according to the soundness of the neighbors. Much work on dense trees remains to be done.
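In symbols, and carrying over the connectivity requirement from the ordinary-tree definition above, the recursive definition reads

    T=(V,E) \text{ is a } k\text{-dense tree} \iff V = \emptyset, \text{ or } \exists\, v \in V \text{ with } \deg(v) \le k \text{ such that } T - v \text{ is again a } k\text{-dense tree},

where T - v denotes the subgraph (V-{v}, E minus the edges incident to v); k = 1 recovers the ordinary trees defined at the top of this abstract.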
