For Fall 2014, meetings are usually held on Fridays, 12:00 - 13:30, in the open meeting area, CSB 477. Abstracts for the talks are given below the talk schedule.

**Theory Seminar on Google Calendar**

Siyao Guo

Chinese University of Hong Kong

**Abstract:**
Pseudorandom functions (PRFs) play a fundamental role in symmetric-key cryptography. However, they are inherently complex and cannot be implemented in the class AC0(MOD2). Weak pseudorandom functions (weak PRFs) do not suffer from this complexity limitation, yet they suffice for many cryptographic applications.

We study the minimal complexity requirements for constructing weak PRFs.

We argue that such a complexity-driven approach can play a role in bridging the gap between the theory and practice of cryptography.

Joint work with Adi Akavia, Andrej Bogdanov, Akshay Kamath, and Alon Rosen.

Mike Rosulek

Oregon State University

**Abstract:**
Secure 2-party computation (2PC) is becoming practical in some domains. However, most approaches are limited by the fact that the desired functionality must be represented as a boolean circuit. In response, the random-access machine (RAM) model has recently been investigated as a promising alternative to circuits.

In this talk, I will discuss some pitfalls of basing malicious-secure 2PC on the RAM model rather than circuits. I will then describe two new protocols for malicious-secure 2PC of RAM programs, whose performance relative to the semi-honest model matches the state of the art for circuit-based 2PC techniques. For malicious security with statistical security parameter $2^{-s}$, our protocol without preprocessing has overhead $s$ compared to the semi-honest model; our protocol with preprocessing has overhead $\sim 2s/\log T$, where $T$ is the running time of the RAM program.
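To get a feel for the two overhead bounds stated above, here is a small back-of-the-envelope comparison with illustrative parameter values (the choices $s = 40$ and $T = 2^{20}$ are hypothetical, not taken from the paper, and the logarithm is assumed to be base 2):

```python
# Illustrative comparison of the two protocols' overhead over the semi-honest
# model, using hypothetical parameters (not the paper's benchmarks).
import math

s = 40          # statistical security parameter: cheating probability 2^-40
T = 2 ** 20     # running time of the RAM program

overhead_no_preprocessing = s                     # first protocol: factor s
overhead_preprocessing = 2 * s / math.log2(T)     # second protocol: ~2s / log T

print(overhead_no_preprocessing)  # 40
print(overhead_preprocessing)     # 4.0
```

For long-running RAM programs the $2s/\log T$ bound is the far smaller of the two, which is what makes the preprocessing variant attractive.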

Joint work with Arash Afshar, Payman Mohassel, and Zhangxiang Hu.

Yaoyun Shi

University of Michigan, Ann Arbor

**Abstract:**
Secrecy is randomness. A perfect secret is one for which all alternatives
are equally likely from the adversary's point of view. For secrecy to be
possible, we have to assume that the world is not deterministic. Here we
show how that necessary assumption, together with the validity of quantum
mechanics and relativity, allows us to generate an initial, almost perfect
secret, and then how to expand it to be arbitrarily long. Unlike all
classical solutions, our construction is provably unconditionally secure,
i.e., secure against an all-powerful quantum adversary.

Technically speaking, we formulate a precise model of extracting randomness from quantum devices whose inner workings may be imperfect or even malicious. We then construct such a "physical extractor" that needs only a single, arbitrarily weak classical source, and outputs arbitrarily long and almost perfect randomness. This is an impossible task for classical randomness extractors, which require two or more *independent* sources. Our construction also differs from quantum-based random number generators on the market, as those all require users to trust their quantum inner workings. Our results can also be used for distributing cryptographic keys.

No prior knowledge of quantum mechanics or randomness extractors will be assumed.

Joint work with Carl A. Miller (arXiv:1402.0489 and forthcoming), and with Kin-Min Chung and Xiaodi Wu (arXiv:1402.4797).

Aryeh Kontorovich

Ben-Gurion University

**Abstract:**
We revisit the classical decision-theoretic problem of weighted expert
voting from a statistical learning perspective. In particular, we
examine the consistency (both asymptotic and finitary) of the optimal
Nitzan-Paroush weighted majority and related rules. In the case of
known expert competence levels, we give sharp error estimates for the
optimal rule. When the competence levels are unknown, they must be
empirically estimated. We provide frequentist and Bayesian analyses
for this situation. Some of our proof techniques are non-standard and
may be of independent interest. The bounds we derive are nearly
optimal, and several challenging open problems are posed. Experimental
results are provided to illustrate the theory.
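For the case of known competence levels mentioned above, the classical Nitzan-Paroush result states that the optimal weighted majority rule assigns each independent expert $i$, with competence $p_i$ (the probability of voting correctly), the log-odds weight $\log(p_i/(1-p_i))$. A minimal sketch of that rule:

```python
# A minimal sketch of the Nitzan-Paroush optimal weighted-majority rule for
# independent experts with known competences p_i: expert i gets weight
# log(p_i / (1 - p_i)), the log-odds of being correct.
import math

def nitzan_paroush_weights(competences):
    """Optimal weights for independent experts with known competences."""
    return [math.log(p / (1 - p)) for p in competences]

def weighted_majority(votes, weights):
    """votes[i] in {-1, +1}; returns the sign of the weighted vote sum."""
    total = sum(w * v for w, v in zip(weights, votes))
    return 1 if total > 0 else -1

# One strong expert (p = 0.9) against three moderate dissenters (p = 0.6):
# log(0.9/0.1) ~ 2.20 outweighs 3 * log(0.6/0.4) ~ 1.22, so the optimal rule
# follows the strong expert rather than the simple majority.
ps = [0.9, 0.6, 0.6, 0.6]
ws = nitzan_paroush_weights(ps)
print(weighted_majority([+1, -1, -1, -1], ws))  # 1
```

The example shows the rule departing from simple majority exactly when one expert's log-odds weight exceeds the combined weight of the dissenters.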

Joint work with Daniel Berend.

Yash Kanoria

Columbia University

**Abstract:**
We study bilateral matching markets, such as marriage markets, labor
markets, and school/college admissions, in which participants form
partnerships with each other for mutual benefit. We analyze competition
in matching markets with random heterogeneous preferences. First, we
find that such markets are extremely competitive. In unbalanced markets,
each agent on the short side of the market is matched to one of his top
preferences, while each agent on the long side does almost no better than
being matched to a random partner. Second, we find that even the slightest
imbalance leads to competition that yields an essentially unique stable
matching. Our results suggest that any matching market is likely to have
a small core, explaining why small cores are empirically ubiquitous.

We obtain broadly similar results in random assignment markets (matching markets with transfers).
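The stable matchings discussed above can be computed by the classical Gale-Shapley deferred acceptance algorithm. The following sketch (sizes and preferences are illustrative, not the paper's model details) builds an unbalanced market with random preferences, runs proposer-side deferred acceptance, and verifies stability directly:

```python
# Sketch: deferred acceptance (Gale-Shapley) on an unbalanced market with
# random preferences, plus a direct check that the result is stable.
import random

def deferred_acceptance(prop_prefs, recv_prefs):
    """prop_prefs[p] = receivers in p's preference order;
    recv_prefs[r][p] = rank receiver r assigns proposer p (lower is better).
    Returns dict receiver -> proposer (proposer-optimal stable matching)."""
    next_choice = {p: 0 for p in prop_prefs}
    match = {}                       # receiver -> proposer
    free = list(prop_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(prop_prefs[p]):
            continue                 # p has exhausted his list
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif recv_prefs[r][p] < recv_prefs[r][match[r]]:
            free.append(match[r])    # r trades up; old partner re-enters
            match[r] = p
        else:
            free.append(p)           # r rejects p
    return match

def is_stable(match, prop_prefs, recv_prefs):
    """True iff no proposer-receiver pair prefer each other to their matches."""
    assigned = {p: r for r, p in match.items()}
    for p, prefs in prop_prefs.items():
        cur = prefs.index(assigned[p]) if p in assigned else len(prefs)
        for r in prefs[:cur]:        # receivers p prefers to his current match
            if r not in match or recv_prefs[r][p] < recv_prefs[r][match[r]]:
                return False
    return True

random.seed(0)
n = 20                               # short side: 20 proposers, long side: 21 receivers
proposers, receivers = range(n), range(n + 1)
prop_prefs = {p: random.sample(list(receivers), n + 1) for p in proposers}
recv_prefs = {r: {p: rank for rank, p in enumerate(random.sample(list(proposers), n))}
              for r in receivers}
match = deferred_acceptance(prop_prefs, recv_prefs)
print(is_stable(match, prop_prefs, recv_prefs))  # True
```

In instances like this one, every short-side proposer ends up matched while exactly one long-side receiver is left out, the imbalance that drives the competition effects described in the abstract.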

Joint work with Itai Ashlagi and Jacob Leshno; and Daniela Saban and Jay Sethuraman.

Vincent Conitzer

Duke University

**Abstract:**
Algorithms for computing game-theoretic solutions are now
deployed in real-world security domains, including air travel and
harbors. These applications raise some hard questions. How do we deal
with the equilibrium selection problem? How is the temporal and
informational structure of the game best modeled? What assumptions can
we reasonably make about the utility functions of the attacker and the
defender? And, last but not least, can we make all these modeling
decisions in a way that allows us to scale to realistic instances? I
will present our ongoing work on answering these questions.

Igor Carboni Oliveira

Columbia University

**Abstract:**
During the eighties, Razborov and Smolensky proved lower bounds on the
size of depth-$d$ Boolean circuits extended with modulo $p$ gates computing
the Majority function. This result remains one of the strongest circuit lower
bounds known for an explicit Boolean function.

In this work, we obtain information about the structure of polynomial-size Boolean circuits with modulo $p$ gates computing Majority. We show that for any $d$, at least $n/(\log n)^{O(d)}$ wires must enter the $d$-th layer of the circuit, which is essentially optimal. This result follows from the investigation of a more general framework called interactive compression games (Chattopadhyay and Santhanam, 2012), which combines computational complexity and communication complexity, and has applications in cryptography, parameterized complexity and circuit complexity.

This is joint work with Rahul Santhanam (Univ. Edinburgh).

Roei Tov

Bar-Ilan university

**Abstract:**
Designing routing schemes is perhaps one of the most basic tasks in distributed networks. The goal is to design a mechanism that allows messages to be routed efficiently between the nodes of a network (graph). In compact routing schemes we aim to achieve the same goal, but in addition we also want to minimize the size of the routing tables stored at each node (i.e., storing $o(n)$ space at each vertex). This approach implies a tradeoff between the size of the routing tables (space) and the optimality of the routing path length (accuracy).

This problem has been studied extensively and has a long history spanning the last three decades. Nevertheless, for most space-accuracy tradeoffs, there is a substantial gap between the upper bounds in the centralized model (distance oracles) and the upper bounds in the corresponding distributed model (routing schemes).

A basic idea, used by many routing schemes, is to route from every vertex in the graph to its nearby vertices. We present two new routing techniques that extend this idea, and use them to obtain new routing schemes that improve on those of previous works. Furthermore, most of our routing schemes almost close the gap with their corresponding state-of-the-art distance oracles.
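One classic building block in this area (illustrative background, not the new techniques of this talk) is interval labeling on a spanning tree: each node stores only its DFS interval and the intervals of its neighbors, yet every node can compute the next hop toward any target locally.

```python
# Sketch of interval-label routing on a tree: each node keeps its DFS
# interval and its neighbors' intervals; to route toward t, forward to the
# child whose interval contains t's entry time, otherwise go to the parent.

def dfs_intervals(tree, root):
    """tree: dict node -> list of children. Returns node -> (enter, exit)."""
    label, counter = {}, [0]
    def visit(u):
        enter = counter[0]; counter[0] += 1
        for c in tree.get(u, []):
            visit(c)
        label[u] = (enter, counter[0] - 1)   # interval covering u's subtree
    visit(root)
    return label

def next_hop(u, t, tree, parent, label):
    """One routing step from u toward t, using only local interval labels."""
    for c in tree.get(u, []):
        lo, hi = label[c]
        if lo <= label[t][0] <= hi:
            return c                         # t lies in c's subtree
    return parent[u]                         # otherwise route upward

# Toy tree: 0 -> {1, 2}, 1 -> {3, 4}
tree = {0: [1, 2], 1: [3, 4]}
parent = {1: 0, 2: 0, 3: 1, 4: 1}
label = dfs_intervals(tree, 0)

# Route from 3 to 2: up to 1, up to 0, then down to 2.
u, path = 3, [3]
while u != 2:
    u = next_hop(u, 2, tree, parent, label)
    path.append(u)
print(path)  # [3, 1, 0, 2]
```

The per-node state is a constant number of $O(\log n)$-bit intervals, which is the kind of small-table guarantee compact routing schemes aim for.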

Specifically, we show the following routing schemes: Let $G=(V,E)$ be an undirected graph with $n$ vertices and $m$ edges.

John Wilmes

University of Chicago

**Abstract:**
Coherent configurations (CCs) are generalizations of association schemes
which arise naturally in the context of the Graph Isomorphism problem.
They are obstacles to combinatorial divide-and-conquer approaches to
graph isomorphism testing. We give an algorithm for testing isomorphism
of primitive coherent configurations with worst-case time-complexity
$\exp(\widetilde{O}(n^{1/3}))$. This improves the previous best bound of
$\exp(\widetilde{O}(n^{1/2}))$, due to Babai, which stood for over 30 years.
Our proof also implies a classification of large primitive coherent configurations
with many automorphisms.

A key step in our work is proving the existence of clique geometries in constituent graphs of primitive coherent configurations in a certain range of the parameters. Specifically, given certain inequalities on the parameters of a primitive coherent configuration, we find a union of constituent graphs with a collection of maximal cliques of asymptotically equal order with the property that every edge lies in a unique clique. Such clique structures appear in many familiar graphs, including Johnson graphs, Hamming graphs over non-binary alphabets, and more generally line graphs of partial geometries.
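The "every edge lies in a unique maximal clique" property is easy to verify on the smallest familiar example mentioned above. This sketch checks it for the Hamming graph $H(2,3)$ (vertices are pairs over a 3-letter alphabet, edges join pairs differing in one coordinate), whose maximal cliques are the "lines" obtained by fixing one coordinate:

```python
# Sanity check: in the Hamming graph H(2, 3), every edge lies in exactly one
# line (a maximal clique obtained by fixing one coordinate).
from itertools import product, combinations

q = 3
vertices = list(product(range(q), repeat=2))
edges = [(u, v) for u, v in combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

# Lines: fix one coordinate to a value, vary the other coordinate.
lines = []
for fixed in range(2):
    for val in range(q):
        lines.append({v for v in vertices if v[fixed] == val})

# Count, for each edge, how many lines contain both endpoints.
counts = [sum(u in line and v in line for line in lines) for u, v in edges]
print(all(c == 1 for c in counts))  # True
```

An edge's two endpoints agree on exactly one coordinate, so exactly one line contains them both, which is the clique-geometry structure in miniature.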

Joint work with Xiaorui Sun.

Grigory Yaroslavtsev

University of Chicago

**Abstract:**
I will present sublinear algorithms for approximately testing properties of
real-valued data under $L_p$ distance measures (for $p = 1, 2$). Our algorithms
allow one to distinguish datasets that have a certain property from datasets
that are far from having it with respect to $L_p$ distance. While the classical
property testing framework developed under the Hamming distance has been
studied extensively, testing under $L_p$ distances has received little attention.

For applications involving noisy real-valued data, using $L_p$ distances is natural because, unlike the Hamming distance, they allow one to suppress distortion introduced by the noise. Moreover, we show that they also allow one to design simple and fast algorithms for classic problems, such as testing monotonicity, convexity and Lipschitz conditions (also known as "sensitivity"). Our algorithms require minimal assumptions on the choice of the sampled data (either uniform or easily samplable random points suffice). We also show connections between our $L_p$-testing model and the standard framework of property testing under the Hamming distance. In particular, some of our results improve existing bounds for the Hamming distance.
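A toy example (not from the paper) of the noise-suppression point: perturb every point of a monotone function by a tiny amount. The Hamming distance to the original function jumps to 1, since every point changed, while the normalized $L_1$ distance stays proportional to the noise magnitude.

```python
# Toy illustration: tiny pointwise noise makes Hamming distance maximal
# while the normalized L_1 distance remains small.
n = 1000
f = [i / n for i in range(n)]                       # monotone function on [n]
noise = 0.01
g = [f[i] + (noise if i % 2 == 0 else -noise) for i in range(n)]

hamming = sum(a != b for a, b in zip(f, g)) / n     # fraction of changed points
l1 = sum(abs(a - b) for a, b in zip(f, g)) / n      # normalized L_1 distance

print(hamming)  # 1.0
print(l1)       # 0.01 (up to floating point)
```

Under the Hamming metric the noisy function looks maximally far from the clean one; under $L_1$ it is still very close, which is exactly why $L_p$ distances are the natural choice for noisy real-valued data.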

Joint work with Piotr Berman and Sofya Raskhodnikova (appeared in STOC 2014).

Elliot Anshelevich

Rensselaer Polytechnic Institute

**Abstract:**
We will discuss both integral and fractional versions of "correlated stable matching" problems. Each player is a node in a social network and strives to form a good match with a neighboring player; the player utilities from forming a match are correlated. We consider the existence, computation, and inefficiency of stable matchings from which no pair of players wants to deviate. We especially focus on networks where players are embedded in a social context, and may incorporate friendship relations or altruism into their decisions.

When the benefits from a match are the same for both players, we show that incorporating the well-being of other players into their matching decisions significantly improves the quality of stable solutions. Furthermore, a good stable matching always exists and can be reached in polynomial time. We extend these results to more general matching rewards, when players matched to each other may receive different utilities from the match. For this more general case, we show that incorporating social context (i.e., "caring about your friends") can make an even larger difference, and greatly reduce the price of anarchy. Finally, we extend most of our results to network contribution games, in which players can decide how much effort to contribute to each incident edge, instead of simply choosing a single node to match with.
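A minimal sketch of the basic correlated setting described above, with both endpoints of an edge receiving the same reward: with distinct edge rewards, greedily matching the globally heaviest available edge yields a stable matching, one with no edge whose two endpoints both prefer each other to their current partners. (This illustrates only the symmetric-reward baseline, not the social-context extensions of the talk.)

```python
# Sketch: greedy matching by descending edge reward is stable when both
# endpoints of an edge share the same (distinct) reward.

def greedy_stable_matching(edges):
    """edges: list of (weight, u, v) with distinct weights."""
    match = {}
    for w, u, v in sorted(edges, reverse=True):
        if u not in match and v not in match:
            match[u], match[v] = v, u
    return match

def blocking_edges(edges, match):
    """Edges whose endpoints both strictly prefer each other to their matches."""
    weight = {frozenset((u, v)): w for w, u, v in edges}
    def gain(x):                    # reward x currently receives (0 if unmatched)
        return weight[frozenset((x, match[x]))] if x in match else 0.0
    return [(u, v) for w, u, v in edges if w > gain(u) and w > gain(v)]

edges = [(5.0, 'a', 'b'), (4.0, 'b', 'c'), (3.0, 'c', 'd'), (1.0, 'a', 'd')]
match = greedy_stable_matching(edges)
print(match)                          # {'a': 'b', 'b': 'a', 'c': 'd', 'd': 'c'}
print(blocking_edges(edges, match))   # []
```

The stability argument is short: a blocking edge would have been available, and heaviest, at the moment the greedy pass reached it, so it would have been matched.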

Clement Canonne

Columbia University

**Abstract:**
The communication complexity of many fundamental problems
reduces greatly when the communicating parties share randomness
that is independent of the inputs to the communication task. Natural
communication processes (say between humans) however often
involve large amounts of shared correlations among the communicating
players, but rarely allow for perfect sharing of randomness. Can the
communication complexity benefit from shared correlations as well as
it does from shared randomness? This question was considered mainly
in the context of simultaneous communication by Bavarian et al.
(ICALP 2014). In this work we study this problem in the standard
interactive setting and give some general results.

In particular, we show that every problem with communication complexity of $k$ bits with perfectly shared randomness has a protocol using imperfectly shared randomness with complexity $2^{O(k)}$ bits. We also show that this is best possible by exhibiting a promise problem with complexity $k$ bits with perfectly shared randomness which requires $2^{\Omega(k)}$ bits when the randomness is imperfectly shared. Along the way we also highlight some other basic problems such as compression, and agreement distillation, where shared randomness plays a central role and analyze the complexity of these problems in the imperfectly shared randomness model.

The technical highlight of this work is the lower bound that goes into the result showing the tightness of our general connection. This result builds on the intuition that communication with imperfectly shared randomness needs to be less sensitive to its random inputs than communication with perfectly shared randomness. The formal proof invokes results about the small-set expansion of the noisy hypercube and an invariance principle to convert this intuition to a proof, thus giving a new application domain for these fundamental results.

Joint work with V. Guruswami, R. Meka and M. Sudan.

Justin Thaler

Yahoo! Labs

**Abstract:**
The $\epsilon$-approximate degree of a Boolean function $f$ is the minimum
degree of a real polynomial that pointwise approximates $f$ to error
$\epsilon$. Approximate degree has wide-ranging applications in theoretical
computer science, from computational learning theory to communication,
query, and circuit complexity. Despite its importance, our understanding
of approximate degree remains somewhat limited, with few general
results known.

The focus of this talk will be on a relatively new method for proving lower bounds on approximate degree: specifying *dual polynomials*, which are dual solutions to a certain linear program capturing the approximate degree of any function. After surveying earlier work on approximate degree, I will describe how the method of dual polynomials has recently enabled progress on several open problems.
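As background for the notion of degree involved: every Boolean function has a unique exact multilinear representation over the reals, computable by Möbius inversion, and its degree upper-bounds the approximate degree. The sketch below computes that representation for OR on 3 bits, which is $1 - (1-x_1)(1-x_2)(1-x_3)$ and has full degree 3 (whereas the approximate degree of OR$_n$ is known to be $\Theta(\sqrt{n})$, the kind of gap that dual polynomials help certify).

```python
# Exact multilinear representation of a Boolean function via Mobius inversion:
# coeff(S) = sum over T subseteq S of (-1)^{|S|-|T|} f(indicator of T).
from itertools import combinations

def multilinear_coeffs(f, n):
    """Coefficients of the unique multilinear polynomial agreeing with f
    on {0,1}^n, indexed by subsets S of variables."""
    coeffs = {}
    for size in range(n + 1):
        for S in combinations(range(n), size):
            total = 0
            for tsize in range(size + 1):
                for T in combinations(S, tsize):
                    x = [1 if i in T else 0 for i in range(n)]
                    total += (-1) ** (size - tsize) * f(x)
            coeffs[S] = total
    return coeffs

OR = lambda x: int(any(x))
coeffs = multilinear_coeffs(OR, 3)
degree = max(len(S) for S, c in coeffs.items() if c != 0)
print(degree)             # 3
print(coeffs[(0, 1, 2)])  # 1  (coefficient of x1*x2*x3)
```

Approximate degree relaxes this exact representation to pointwise approximation, which is what allows the degree to drop and what the linear-programming duality mentioned above captures.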

Anthi Orfanou

Columbia University

**Abstract:**
Anonymous games are a large and well-studied class of succinct
multiplayer games, in which the payoff of each player depends only
on her own strategy and the number of other players playing each
strategy. In a sequence of papers, Daskalakis and Papadimitriou
studied the problem of finding an approximate Nash equilibrium in
an anonymous game, and obtained polynomial-time approximation
schemes when the number of strategies is bounded. In this talk we
briefly review their results and show that the problem is
PPAD-complete when the number of strategies is at least seven and the
approximation parameter is exponentially small, confirming a
conjecture of Daskalakis and Papadimitriou.
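To see why such games are succinct: a player's payoff depends only on her own strategy and the histogram of the other $n-1$ players' strategies, and with $k$ strategies there are only $\binom{n+k-2}{k-1}$ such histograms. An illustrative size comparison (the parameter values are arbitrary):

```python
# Succinctness of anonymous games: payoff tables indexed by histograms of
# the other n-1 players' strategies, versus the explicit normal form.
from math import comb

n, k = 10, 3
histograms = comb(n + k - 2, k - 1)   # multisets of size n-1 over k strategies
anonymous_size = n * k * histograms   # payoff entries across all players
normal_form_size = k ** n             # strategy profiles for one player's table

print(histograms)        # 55
print(anonymous_size)    # 1650
print(normal_form_size)  # 59049
```

Even at these tiny parameters the anonymous representation is over an order of magnitude smaller per player, and the gap grows exponentially in $n$, which is why equilibrium computation is studied directly on the succinct form.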

Joint work with Xi Chen and David Durfee.

Mikhail Kapralov

IBM

**Abstract:**
Graph sparsification is a standard algorithmic tool for reducing the number of edges in a graph while approximately preserving its structural properties. Sparsification is especially important when the input graph is so large that its edge set cannot be stored in memory, a setting modeled by the well-studied streaming model of computation.

In the streaming model, edges of the graph are presented to the algorithm as a sequence of edge insertions (insertion-only streams) or insertions and deletions (dynamic streams). Graph problems in the insertion-only model have been studied extensively. On the other hand, small-space solutions to even basic graph problems in the dynamic model have only been obtained very recently: in SODA'12, Ahn, Guha and McGregor gave a nearly optimal space algorithm for dynamic connectivity via linear measurements of the graph, starting the field of graph sketching.

In this talk I will present the first algorithm for constructing spectral sparsifiers of graphs in the dynamic model that takes only a single pass over the stream of edge updates and uses essentially optimal space. Our algorithm maintains a randomized linear sketch of the graph's incidence matrix in dimension nearly linear in the number of vertices. Using this sketch, at any point, the algorithm can output a spectral sparsifier for the graph with high probability. The core of our approach is a novel application of $\ell_2/\ell_2$ sparse recovery that allows us to sample edges of the sketched graph with probabilities proportional to their effective resistance.

Joint work with Yin Tat Lee, Cameron Musco, Christopher Musco and Aaron Sidford.
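The effective resistances that drive the sampling above can be computed exactly offline (this sketch illustrates the quantity itself, not the streaming construction): solve the grounded Laplacian system $L x = e_u - e_v$ and read off $R(u,v) = x_u - x_v$. A useful sanity check is Foster's theorem, which says the effective resistances of the edges of any connected $n$-vertex graph sum to $n-1$.

```python
# Exact effective resistance via a grounded Laplacian solve (pure Python,
# exact rational arithmetic). Illustrates the sampling weights, not the sketch.
from fractions import Fraction

def effective_resistance(n, edges, u, v):
    """Effective resistance between u and v; node n-1 is grounded."""
    m = n - 1                                   # size of the grounded system
    L = [[Fraction(0)] * m for _ in range(m)]
    for a, b in edges:
        for x, y in ((a, b), (b, a)):
            if x < m:
                L[x][x] += 1
                if y < m:
                    L[x][y] -= 1
    b = [Fraction(0)] * m
    if u < m: b[u] += 1
    if v < m: b[v] -= 1
    # Gauss-Jordan elimination on the (invertible) grounded Laplacian.
    A = [row[:] + [rhs] for row, rhs in zip(L, b)]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(m):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    x = [A[r][m] / A[r][r] for r in range(m)] + [Fraction(0)]
    return x[u] - x[v]

# 4-cycle: each edge sees 1 ohm in parallel with 3 ohms, i.e., 3/4; the four
# resistances sum to 3 = n - 1, as Foster's theorem requires.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
rs = [effective_resistance(n, edges, u, v) for u, v in edges]
print(sum(rs))  # Fraction(3, 1)
```

Edges with high effective resistance are "electrically important" (think bridges), which is why sampling proportional to effective resistance preserves the graph's spectral structure.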

Contact

There is a mailing list for the reading group. General information about the mailing list (including how to subscribe yourself to it) is available here. If you want to unsubscribe or change your options, send email to theoryread-request@lists.cs.columbia.edu with the word `help' in the subject or body (don't include the quotes), and you will get back a message with instructions.

Comments on this page are welcome; please send them to
`xichen-at-cs.columbia.edu`