New York Area Theory Day
Organized by: IBM/NYU/Columbia
External sponsorship by: Google
Friday, December 11, 2009
Recital Hall, 365 Fifth Avenue (between 34th
Street and 35th Street), at CUNY Graduate Center, New York.
The New York Area Theory Day is a semiannual conference, aimed at bringing
together people in the New York metropolitan area for one day of interaction
and discussion. The Theory Day features several (usually 4-5) hour-long
presentations by leading theoretical computer scientists about
state-of-the-art advances in various areas. Some presentations give a
survey of the latest advances in some area, while others may concentrate on a
particular result.
The meeting is free and open to everyone; in particular, students are
encouraged to attend.
Program:
9:30 - 10:00   Coffee and bagels
10:00 - 10:55  Prof. Costis Daskalakis (MIT)
               On the Complexity of Approximating a Nash Equilibrium
10:55 - 11:05  Short break
11:05 - 12:00  Dr. Yael Tauman Kalai (Microsoft Research)
               A Survey on Leakage-Resilient Cryptography
12:00 - 2:00   Lunch break
2:00 - 2:55    Prof. Bobby Kleinberg (Cornell University)
               Pricing Randomized Allocations
2:55 - 3:15    Coffee break
3:15 - 4:10    Prof. Moni Naor (Weizmann Institute and Princeton)
               Backyard Cuckoo Hashing: Constant Worst-Case
               Operations with a Succinct Representation
For directions, please see
http://www.cs.gc.cuny.edu/dept/location
To subscribe to our mailing list, follow instructions at
http://www.cs.nyu.edu/mailman/listinfo/theoryny
Organizers:
Amotz Bar-Noy amotz@sci.brooklyn.cuny.edu
Yevgeniy Dodis dodis@cs.nyu.edu
Tal Rabin talr@us.ibm.com
Baruch Schieber sbar@us.ibm.com
Cliff Stein cliff@ieor.columbia.edu
============================================================
Abstracts
On the Complexity of Approximating a Nash Equilibrium
Prof. Costis Daskalakis
(MIT)
According to Bob Aumann, strictly competitive games (a generalization
of zero-sum games) are "one of the few areas in game theory, and
indeed in the social sciences, where a fairly sharp, unique prediction
is made". We present a generalization of these games to multiplayer
games played on a network, where the nodes are players and the edges
are strictly competitive games between their endpoints; that is, we
are looking at a network of competitors. Our generalization of the
min-max theorem to the network setting comes with interesting
consequences: convexity of equilibria, polynomial-time tractability,
and convergence of no-regret algorithms to equilibria. And what about
extending our results beyond zero-sum games? Previous work has
established that computing exact equilibria is computationally
intractable, but research on approximation algorithms has not made
progress beyond finite values of the approximation, while
inapproximability results have evaded current techniques. We
provide the first inapproximability result for Nash equilibria, for
finite values of relative approximation.
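The convergence of no-regret dynamics to equilibrium in zero-sum games can be illustrated with a small sketch (a hypothetical setup with parameters chosen here for illustration, not code from the talk): two multiplicative-weights learners play rock-paper-scissors, whose unique min-max equilibrium is uniform play, and their time-averaged strategies approach it.

```python
import math

# Rock-paper-scissors payoffs for the row player; the column player's
# payoff is the negation (a zero-sum game of value 0, whose unique
# min-max equilibrium is uniform play).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def normalize(w):
    s = sum(w)
    return [v / s for v in w]

def mwu_zero_sum(A, steps=50000, eta=0.005):
    """Both players run multiplicative weights (a no-regret algorithm);
    returns their time-averaged strategies."""
    n, m = len(A), len(A[0])
    wx = [float(i + 1) for i in range(n)]  # start away from equilibrium
    wy = [float(m - j) for j in range(m)]  # so the dynamics actually move
    avg_x, avg_y = [0.0] * n, [0.0] * m
    for _ in range(steps):
        x, y = normalize(wx), normalize(wy)
        for i in range(n):
            avg_x[i] += x[i] / steps
        for j in range(m):
            avg_y[j] += y[j] / steps
        # Expected payoff of each pure strategy against the opponent's mix.
        gain_x = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
        gain_y = [sum(A[i][j] * x[i] for i in range(n)) for j in range(m)]
        # Row player maximizes, column player minimizes.
        wx = normalize([wx[i] * math.exp(eta * gain_x[i]) for i in range(n)])
        wy = normalize([wy[j] * math.exp(-eta * gain_y[j]) for j in range(m)])
    return avg_x, avg_y

avg_x, avg_y = mwu_zero_sum(A)
# Both time-averaged strategies approach (1/3, 1/3, 1/3).
```

The last iterates of such dynamics cycle rather than converge, which is why the sketch averages over time; the min-max structure is what makes the averages meaningful.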
============================================================
A Survey on Leakage-Resilient Cryptography
Dr. Yael Tauman Kalai
(Microsoft Research)
Traditionally, the theory of cryptography community has proved the security
of cryptographic primitives under the assumption that no information
about the secret key is leaked. However, there is a growing
realization that, in reality, information about the secret key may be
leaked via various so-called "side channel" attacks. Thus, the
problem of designing cryptographic primitives that are provably secure
even with keys that may be partly compromised has recently attracted
much attention. In this talk, I will survey some of these recent
results.
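As a concrete illustration of why side channels matter (a toy example constructed here, not one from the talk): a string comparison that exits at the first mismatch leaks, through its running time, how long a prefix of an attacker's guess is correct. Counting loop iterations makes the channel explicit.

```python
import hmac

def naive_equals(a, b):
    """Early-exit byte comparison: returns (equal?, iterations performed).
    The iteration count -- and hence the running time -- leaks the length
    of the matching prefix, a classic timing side channel."""
    steps = 0
    for x, y in zip(a, b):
        steps += 1
        if x != y:
            return False, steps
    return len(a) == len(b), steps

secret = b"s3cretkey"                         # hypothetical secret
_, s1 = naive_equals(secret, b"a" * 9)        # wrong at position 1
_, s2 = naive_equals(secret, b"s3cretkex")    # wrong only at position 9
# s2 > s1: the counter reveals how much of each guess is correct,
# letting an attacker recover the secret one position at a time.
# A constant-time comparison closes this particular channel:
assert hmac.compare_digest(secret, secret)
```

Leakage-resilient cryptography asks for a stronger guarantee than patching individual channels like this one: provable security even when some function of the key leaks.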
============================================================
Pricing Randomized Allocations
Prof. Bobby Kleinberg
(Cornell University)
We study the use of randomized outcomes (henceforth, "lotteries") in
mechanism design. It is well known to economists that optimal
mechanisms for single-parameter agents do not randomize over outcomes,
but this ceases to be the case when one considers multi-parameter
problems. To what extent can a seller improve revenue by pricing
lotteries rather than items, and does this modification of the problem
affect its computational tractability? We investigate these questions
in the context of an archetypical profit maximization problem: selling
heterogeneous items in unlimited supply to unit-demand bidders. The
answers hinge on whether consumers can purchase only one lottery (the
buy-one model) or purchase any set of lotteries and receive an
independent sample from each (the buy-many model). In the buy-one
model, lotteries can improve revenue by an unbounded factor, and an
optimal lottery pricing system can be computed in polynomial time
despite the inapproximability of the corresponding item pricing
problem. In the buy-many model, lotteries improve profit by only a
logarithmic factor and the optimal pricing problem is inapproximable
unless NP has subexponential-time randomized algorithms.
This is joint work with Patrick Briest, Shuchi Chawla, and Matt
Weinberg.
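The buy-one phenomenon can be seen on a toy instance (constructed here for illustration, not an example from the talk): with three equally likely unit-demand types over two items, a menu containing a single 50/50 lottery strictly beats every deterministic item pricing.

```python
from itertools import product

# Three equally likely unit-demand buyer types over two items
# (a hypothetical instance constructed for illustration).
TYPES = [(2.0, 0.0), (0.0, 2.0), (1.0, 1.0)]

def revenue(menu, types):
    """Average revenue of a menu of options (prob_item1, prob_item2, price).
    A risk-neutral buyer picks the utility-maximizing option; ties are
    broken toward the higher price (the seller-friendly convention)."""
    total = 0.0
    for v1, v2 in types:
        best_u, best_p = 0.0, 0.0  # outside option: buy nothing
        for q1, q2, price in menu:
            u = q1 * v1 + q2 * v2 - price
            if u > best_u + 1e-12 or (abs(u - best_u) <= 1e-12 and price > best_p):
                best_u, best_p = u, price
        total += best_p
    return total / len(types)

# Best deterministic item pricing, found by grid search over price pairs
# (the optimum here is 4/3, attained e.g. at prices (2, 2)).
grid = [i * 0.05 for i in range(51)]
item_rev = max(revenue([(1, 0, p1), (0, 1, p2)], TYPES)
               for p1, p2 in product(grid, grid))

# Adding one cheap 50/50 lottery strictly increases revenue: the (1,1)
# type buys the lottery while the extreme types still buy full items.
lottery_menu = [(1, 0, 1.9), (0, 1, 1.9), (0.5, 0.5, 0.99)]
lottery_rev = revenue(lottery_menu, TYPES)
```

The lottery acts as a screening device: it extracts revenue from the middle type that no deterministic price can reach without cannibalizing the item sales to the extreme types, which is the effect the unbounded-gap result amplifies.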
============================================================
Backyard Cuckoo Hashing: Constant Worst-Case
Operations with a Succinct Representation
Prof. Moni Naor
(Weizmann Institute and Princeton)
The performance of a dynamic dictionary is measured mainly by its update
time, lookup time, and space consumption. Although the first analysis of a
dynamic dictionary dates back more than 45 years (Knuth analyzed
linear probing in 1963), the tradeoff between these aspects of performance
is still not completely understood.
In this talk I will focus on a recent line of work with the goal of
achieving the best possible performance guarantees simultaneously.
In particular, the following constructions will be described:
- A deamortization of cuckoo hashing that stores n elements using
roughly 2n memory words, and guarantees constant-time operations
in the worst case with high probability, for any sequence of polynomially
many operations.
- The first dynamic dictionary that enjoys the best of both worlds: a
two-level variant of cuckoo hashing that stores n elements using
(1+o(1))n memory words, and guarantees constant-time operations in
the worst case as above. The construction is based on augmenting
cuckoo hashing with a "backyard" that handles a large fraction of the
elements, together with a deamortized perfect hashing scheme for
eliminating the dependency on bin sizes.
- A variant of the previous construction that stores n elements using
only (1+o(1))B bits, where B is the information-theoretic lower bound
for representing a (static) set of size n taken from a universe of size u,
and guarantees constant-time operations in the worst case with
high probability, as before. This problem was open even in the
amortized setting.
Joint work with Yuriy Arbitman and Gil Segev.
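For reference, the invariant that gives cuckoo hashing its constant worst-case lookup time can be sketched as follows (a minimal textbook-style sketch of plain cuckoo hashing with toy hash functions and integer keys, not the paper's backyard construction):

```python
class CuckooHash:
    """Plain cuckoo hashing sketch: every key may live in only one of
    two cells, so a lookup probes at most two cells -- constant time
    in the worst case. Insertions evict and relocate occupants."""

    def __init__(self, capacity=64, max_kicks=32):
        self.size = capacity
        self.table = [None] * capacity
        self.max_kicks = max_kicks

    def _slots(self, key):
        # Two fixed toy hash functions for the sketch; a real
        # implementation would use randomly seeded hash functions.
        return key % self.size, (31 * key + 17) % self.size

    def lookup(self, key):
        s1, s2 = self._slots(key)
        return self.table[s1] == key or self.table[s2] == key

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(self.max_kicks):
            s1, s2 = self._slots(key)
            for s in (s1, s2):
                if self.table[s] is None:
                    self.table[s] = key
                    return
            # Both cells occupied: evict the occupant of s1 and try to
            # re-insert the evicted key in the next round.
            key, self.table[s1] = self.table[s1], key
        # Too many evictions (a likely cycle): a full implementation
        # would rehash here. Avoiding the amortized cost of such
        # rebuilds is exactly what the constructions above address.
        raise RuntimeError("rehash needed")
```

In classical cuckoo hashing the eviction loop makes insertion time constant only in expectation or amortized; the talk's constructions make all operations constant time in the worst case with high probability.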
