Columbia Theory Seminar, Spring 2015

For Spring 2015, the meetings will usually be held on Fridays, 12:00 - 13:30, in the open meeting area, CSB 477. Abstracts for the talks are given below the schedule.

Theory Seminar on Google Calendar


  • Friday, January 23, Cryptography Day at Columbia University
  • Friday, January 30, 12:00, CSB 477: Tyson Williams : Siegel's theorem, edge coloring, and a holant dichotomy (Abstract)
  • Friday, February 20, 12:00, CSB 477: Ilya Razenshteyn : Sketching and Embedding are Equivalent for Norms (Abstract)
  • Friday, March 6, 12:00, CSB 477: Clement Canonne : Sampling Correctors (Abstract)
  • Friday March 13, 12:00, CSB 477: Mrinal Kumar : Homogeneous depth four circuits and recent progress on arithmetic circuit lower bounds (Abstract)
  • Friday March 27, 12:00, CSB 477: Tsvi Kopelowitz : Higher lower bounds from the 3SUM conjecture (Abstract)
  • Friday April 3, 12:00, CSB 477: Igor Shinkar : Zero-Fixing Extractors for Sub-Logarithmic Entropy (Abstract)
  • Friday April 10, 12:00, CSB 477: Nisheeth Vishnoi : Natural Algorithms for Flow Problems (Abstract)
  • Monday April 13, 12:00, CSB 477: Li-Yang Tan : An average-case depth hierarchy theorem for Boolean circuits (Abstract)
  • Friday April 17, New York Theory Day at Columbia.
  • Friday April 24, 12:00, CSB 477: Ryan O'Donnell : Quantum spectrum testing (Abstract)
  • Friday May 1, 12:00, CSB 477: Mahesh Viswanathan : Probabilistic Automata on Finite and Infinite Words (Abstract)
  • Friday May 8, 12:30, CSB 477 (*Please note the unusual time*): Igor C. Oliveira : The Power of Negations in Cryptography (Abstract)
  • Monday May 11, 1:30pm, CSB 477 (*Please note the unusual time*): Jin-Yi Cai : A Holant Dichotomy: Is the FKT Algorithm Universal? (Abstract)
  • Friday May 15, 12:15, CSB 453 Conference Room (*Please note the unusual place*): Noah Stephens-Davidowitz : SVP in 2^n Time Using Discrete Gaussian Sampling (Abstract)
  • Friday May 22, 12:00, CSB 477: Aaron Bernstein : Fully Dynamic Matching in Bipartite Graphs (Abstract)

  • Talk Abstracts

    Friday, January 30:
    Siegel's theorem, edge coloring, and a holant dichotomy
    Tyson Williams
    University of Wisconsin-Madison

    Abstract: What does Siegel's theorem on finiteness of integer solutions have to do with complexity theory? In this talk we discuss a new complexity dichotomy theorem for counting problems. Such a dichotomy is a classification of a class of problems into exactly two kinds: those that are polynomial time computable, and those that are #P-hard, and thus intractable. An example problem in this dichotomy is the problem of counting the number of valid edge colorings of a graph. We show that an effective version of Siegel's theorem and some Galois theory are key ingredients in the proof of this dichotomy.
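
    As a concrete reference point, the counting problem mentioned above can be stated as: given a graph and k colors, count the assignments of colors to edges in which no two edges sharing a vertex get the same color. The brute-force Python counter below is a minimal sketch for intuition (not part of the talk); the dichotomy theorem concerns when such counts are polynomial-time computable versus #P-hard.

        # A minimal brute-force counter for proper edge colorings; exponential
        # time, for illustration only.
        from itertools import product

        def count_edge_colorings(edges, k):
            """Count assignments of one of k colors to each edge such that
            no two edges sharing a vertex receive the same color."""
            def proper(coloring):
                for i in range(len(edges)):
                    for j in range(i + 1, len(edges)):
                        if set(edges[i]) & set(edges[j]) and coloring[i] == coloring[j]:
                            return False
                return True
            return sum(proper(c) for c in product(range(k), repeat=len(edges)))

        # Example: a triangle has 3! = 6 proper 3-edge-colorings.
        triangle = [(0, 1), (1, 2), (0, 2)]
        print(count_edge_colorings(triangle, 3))  # 6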

    This is joint work with Jin-Yi Cai and Heng Guo.


    Friday, February 20:
    Sketching and Embedding are Equivalent for Norms
    Ilya Razenshteyn
    MIT

    Abstract: Imagine the following communication task. Alice and Bob each have a point from a metric space. They want to transmit a few bits and decide whether their points are close to each other or are far apart. Of particular interest are sketching protocols: Alice and Bob both compute short summaries of their inputs and then a referee, given these summaries, makes the decision; sketches are very useful for nearest neighbor search, streaming, randomized linear algebra, etc. Indyk (FOCS 2000) showed that for the l_p spaces with 0 < p <= 2 the above problem allows a very efficient sketching protocol. Consequently, any metric that can be mapped into the l_p space with all the distances being approximately preserved has a good protocol as well.
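
    As a rough illustration of the l_p sketching primitive (a simplified shared-randomness toy in Python, not Indyk's exact protocol), for p = 1 one can project with 1-stable (Cauchy) random variables: each coordinate of C(x - y) is distributed as ||x - y||_1 times a standard Cauchy, so the referee can estimate the distance from a median of absolute values.

        # Toy Cauchy sketch for estimating l_1 distance (hedged sketch).
        import numpy as np

        rng = np.random.default_rng(0)

        def cauchy_sketch(x, C):
            return C @ x

        d, m = 1000, 200                      # dimension, sketch size
        C = rng.standard_cauchy((m, d))       # shared randomness
        x, y = rng.random(d), rng.random(d)

        sx, sy = cauchy_sketch(x, C), cauchy_sketch(y, C)
        estimate = np.median(np.abs(sx - sy))   # median of |Cauchy| is 1
        print(estimate, np.abs(x - y).sum())    # estimate ~ true l_1 distance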

    I will show that for normed spaces (a very important class of metric spaces) embedding into l_p is the only possible technique for solving the communication problem. Slightly more formally, we show that any normed space that admits a good communication (in particular, sketching) protocol for distinguishing close and far pairs of points embeds well into l_p with p being close to 1. The proof uses tools from communication complexity and functional analysis.

    As a corollary, we will show communication lower bounds for the planar Earth Mover's Distance (minimum-cost matching metric) and for the trace norm (the sum of the singular values of a matrix) by deriving them from the (known) non-embeddability theorems and (the contrapositive of) our result.

    The talk is based on a joint paper with Alexandr Andoni and Robert Krauthgamer (arXiv:1411.2577).


    Friday, March 6:
    Sampling Correctors
    Clement Canonne
    Columbia University

    Abstract: In many situations, sample data is obtained from a noisy or imperfect source. In order to address such corruptions, we introduce the concept of a "sampling corrector". Such algorithms use structure that the distribution is purported to have, in order to allow one to make 'on-the-fly' corrections to samples drawn from probability distributions. These algorithms then act as filters between the noisy data and the end user.

    We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms. We show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms. We then focus on the question of whether algorithms for sampling correctors can be more efficient in terms of sample complexity than learning algorithms for the analogous families of distributions. When correcting monotonicity, we show that this is indeed the case for distributions that are very close to monotone.

    We consider the question of whether an additional source of independent random bits is required by sampling correctors to implement the correction process. We show that for correcting close-to-uniform distributions and close-to-monotone distributions, no additional source of random bits is required, as the samples from the input source itself can be used to produce this randomness.

    Finally, we consider a restricted error model that aims at capturing "missing data" corruptions. In this model, we show that distributions that are close to monotone have sampling correctors that are significantly more efficient than achievable by the learning approach.
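
    To make the notion concrete, here is a toy corrector for the close-to-uniform case (a hedged Python sketch illustrating the concept, not the construction from the paper): adding two independent samples modulo n convolves the source with itself, which quadratically shrinks the distance to uniform, and notably consumes no randomness beyond the source's own samples.

        # Toy sampling corrector for close-to-uniform distributions over Z_n.
        # If dist(p, uniform) = eps, the output is within 2*eps^2 of uniform.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 10
        p = np.full(n, 1.0 / n)
        p[0] += 0.05; p[1] -= 0.05             # a slightly corrupted uniform source

        def corrected_sample():
            a, b = rng.choice(n, size=2, p=p)  # two independent source samples
            return (a + b) % n

        samples = np.array([corrected_sample() for _ in range(200000)])
        freq = np.bincount(samples, minlength=n) / len(samples)
        print(np.abs(freq - 1.0 / n).sum() / 2)  # TV distance ~0.005, up to sampling noise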

    Joint work with Themis Gouleakis (CSAIL, MIT) and Ronitt Rubinfeld (CSAIL, MIT and Tel Aviv University).


    Friday, March 13:
    Homogeneous depth four circuits and recent progress on arithmetic circuit lower bounds
    Mrinal Kumar
    Rutgers University

    Abstract:

    Starting with a beautiful result of Gupta et al. in 2012, the last few years have seen exciting progress on the problem of proving lower bounds for interesting special classes of homogeneous depth four arithmetic circuits. These lower bounds are particularly interesting since they seem to come tantalizingly close to resolving the algebraic analog of the P vs NP conjecture.

    I will talk about these developments and end with a discussion of a recent exponential lower bound for general homogeneous depth 4 arithmetic circuits. It is known that any asymptotic improvement in the exponent of this lower bound would imply super-polynomial lower bounds for general arithmetic circuits.

    Based on joint work with Shubhangi Saraf.


    Friday, March 27:
    Higher lower bounds from the 3SUM conjecture
    Tsvi Kopelowitz
    University of Michigan

    Abstract:

    The 3SUM hardness conjecture has proven to be a valuable and popular tool for proving conditional lower bounds on the complexities of dynamic data structures and graph problems. This line of work was initiated by Patrascu [STOC 2010] and has received a lot of recent attention. Most of these lower bounds are based on reductions from 3SUM to a special set intersection problem introduced by Patrascu, which we call Patrascu's Problem. However, the framework introduced by Patrascu that reduces 3SUM to Patrascu's Problem suffers from some limitations, which in turn produce polynomial gaps between the achievable lower bounds via this framework and the known upper bounds.
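
    For reference, the 3SUM problem itself asks: given n numbers, do any three of them sum to zero? The standard sort-and-scan algorithm below (a textbook sketch, not from the talk) runs in O(n^2) time; the conjecture asserts that no truly subquadratic, i.e. O(n^{2-eps}), algorithm exists.

        # Classic O(n^2) two-pointer algorithm for 3SUM.
        def three_sum(a):
            a = sorted(a)
            n = len(a)
            for i in range(n - 2):
                lo, hi = i + 1, n - 1
                while lo < hi:
                    s = a[i] + a[lo] + a[hi]
                    if s == 0:
                        return (a[i], a[lo], a[hi])
                    elif s < 0:
                        lo += 1
                    else:
                        hi -= 1
            return None

        print(three_sum([-7, 3, 1, 4, -5, 9, 2]))  # (-7, 3, 4)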

    We address these issues by providing a tighter and more versatile framework for proving 3SUM lower bounds via a new reduction to Patrascu's Problem. Furthermore, our framework does not become weaker if 3SUM can be solved in truly subquadratic time, and provides some immediate higher conditional lower bounds for several problems, including for set intersection data structures. For some problems, the new higher lower bounds meet known upper bounds, giving evidence to the optimality of such algorithms.

    During the talk, we will discuss this new framework, and show some new (optimal) lower bounds conditioned on the 3SUM hardness conjecture. In particular, we will demonstrate how some old and new triangle listing algorithms are optimal for any graph density, and, time permitting, prove a conditional lower bound for incremental Maximum Cardinality Matching which introduces new techniques for obtaining amortized lower bounds.

    Joint work with Seth Pettie and Ely Porat.


    Friday, April 3:
    Zero-Fixing Extractors for Sub-Logarithmic Entropy
    Igor Shinkar
    NYU

    Abstract:

    An (n,k)-bit-fixing source is a distribution on n-bit strings that is fixed on n-k of the coordinates and uniform on the remaining k bits. Explicit constructions of extractors for bit-fixing sources by Gabizon, Raz and Shaltiel (SICOMP 2006) and Rao (CCC 2009) extract (1-o(1))k bits for k = polylog(n), almost matching the probabilistic argument. Intriguingly, a result of Kamp and Zuckerman (SICOMP 2006) shows that, unlike for other well-studied sources of randomness, for any k some small portion of the entropy in an (n,k)-bit-fixing source can be extracted: although the extractor does not extract all the entropy, it does extract log(k)/2 bits.
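
    As a minimal illustration of the model (a hedged Python sketch, not the Kamp-Zuckerman extractor): one can sample from an (n,k)-bit-fixing source and extract a single perfectly uniform bit via parity, since the parity of all n bits equals the parity of the k free uniform bits XORed with a constant. Extracting many more bits is the hard part.

        # Sampling an (n,k)-bit-fixing source and extracting one uniform bit.
        import random

        def bit_fixing_sample(n, free_positions, fixed_bits):
            """Sample from a source fixed outside free_positions."""
            x = list(fixed_bits)                 # length-n template
            for i in free_positions:
                x[i] = random.randint(0, 1)      # the k uniform bits
            return x

        def parity_extractor(x):
            return sum(x) % 2                    # one perfectly uniform bit

        n, free = 16, [2, 5, 11]                 # k = 3
        fixed = [1] * n
        outs = [parity_extractor(bit_fixing_sample(n, free, fixed))
                for _ in range(10000)]
        print(sum(outs) / len(outs))             # ~0.5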

    In this work we prove that when the entropy k is small enough compared to n, this exponential entropy-loss is unavoidable, and it is impossible to extract more than log(k)/2 bits from (n,k)-bit-fixing sources. In some sense the remaining entropy is inaccessible, information theoretically.

    In fact, our impossibility result holds for what we call zero-fixing sources. These are bit-fixing sources where the fixed bits are set to 0. We complement our negative result by giving an explicit construction of an (n,k)-zero-fixing extractor that outputs Omega(k) bits, even for k = polyloglog(n).

    Furthermore, we give a construction of an (n,k)-bit-fixing extractor, for entropy k = (1+o(1))loglog(n), with running time n^polyloglog(n). We also give a new construction of an (n,k)-bit-fixing extractor, for entropy k = polylog(n) based on multi-source extractors.

    Joint work with Gil Cohen.


    Friday, April 10:
    Natural Algorithms for Flow Problems
    Nisheeth Vishnoi
    EPFL

    Abstract:

    In the last few years, there has been significant interest in the computational abilities of Physarum polycephalum (a slime mold). This interest drew on a remarkable experiment which showed that this organism can compute shortest paths in a maze. Subsequently, the workings of Physarum were mathematically modeled as a dynamical system and algorithms inspired by this model were proposed to solve several graph problems. If these algorithms could be shown to work, it would raise the tantalizing possibility that nature, via evolution, has developed algorithms that could efficiently solve some of the most complex graph problems. An important step towards this was taken by Becchetti et al. who proved that one such algorithm for the shortest path problem on undirected graphs is efficient. In this talk I will show that Physarum-based algorithms can also efficiently solve flow problems on undirected and directed graphs.
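
    For intuition, a discretized Physarum shortest-path dynamics can be simulated in a few lines (a hedged Python sketch with unit step size and unit current, not the analysis from the talk): each edge carries a conductance x_e, a unit electrical s-t flow q is computed, and x_e is replaced by |q_e|; mass then concentrates on the shortest path.

        # Discretized Physarum dynamics on a tiny graph.
        import numpy as np

        edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 3.0)]  # (u, v, length)
        n, s, t = 4, 0, 3
        x = np.ones(len(edges))                      # initial conductances

        b = np.zeros(n); b[s], b[t] = 1.0, -1.0      # unit current in at s, out at t
        for _ in range(100):
            L = np.zeros((n, n))
            for (u, v, ln), xe in zip(edges, x):
                w = xe / ln                          # conductance over length
                L[u, u] += w; L[v, v] += w
                L[u, v] -= w; L[v, u] -= w
            phi = np.linalg.pinv(L) @ b              # electrical potentials
            q = np.array([(xe / ln) * (phi[u] - phi[v])
                          for (u, v, ln), xe in zip(edges, x)])
            x = np.abs(q)                            # Physarum update
        print(np.round(x, 3))  # ~[1, 1, 0, 0]: all flow on the length-2 path 0-1-3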

    This is joint work with Damian Straszak.


    Monday, April 13:
    An average-case depth hierarchy theorem for Boolean circuits
    Li-Yang Tan
    Simons Institute at UC Berkeley

    Abstract:

    We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every $d \geq 2$, there is an explicit $n$-variable Boolean function $f$, computed by a linear-size depth-$d$ formula, such that any depth-$(d-1)$ circuit that agrees with $f$ on a $(1/2 + o_n(1))$ fraction of all inputs must have size $\exp(n^{\Omega(1/d)})$. This answers an open question posed by Håstad in his Ph.D. thesis.

    Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Håstad, Cai, and Babai. We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus answering a question posed by O'Donnell, Kalai, and Hatami.

    A key ingredient in our proof is a notion of random projections which generalize random restrictions.
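
    For readers unfamiliar with the classical method, here is a toy random restriction (a hedged Python sketch; the paper's random projections generalize this by also identifying variables, which is not shown here): each variable survives with probability p and is otherwise fixed to a random bit.

        # A toy random restriction and the restricted function it induces.
        import random

        def random_restriction(n, p):
            return [None if random.random() < p else random.randint(0, 1)
                    for _ in range(n)]

        def restrict(f, rho):
            free = [i for i, b in enumerate(rho) if b is None]
            def g(y):                          # function on surviving variables
                x = list(rho)
                for i, bit in zip(free, y):
                    x[i] = bit
                return f(x)
            return g, len(free)

        parity = lambda x: sum(x) % 2          # parity survives restrictions
        g, k = restrict(parity, random_restriction(8, 0.25))
        print(k, g([1] * k))                   # parity restricts to (+-)parity on k vars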

    Joint work with Ben Rossman and Rocco Servedio.


    Friday, April 24:
    Quantum spectrum testing
    Ryan O'Donnell
    CMU

    Abstract:

    A quantum experiment produces a particle that can be in one of d unknown orthogonal states v_1, ..., v_d in C^d. In fact, it's a randomized experiment, so the particle's full state is a probability distribution p_1, ..., p_d over v_1, ..., v_d. (In other words, it's given by the density matrix rho = p_1 |v_1><v_1| + ... + p_d |v_d><v_d|.) The experimenter wishes to test whether the probability distribution (p_1, ..., p_d) (i.e., rho's spectrum) has some property. Examples include:

  • Testing whether the distribution is close to the uniform distribution (1/d, ..., 1/d) (i.e., whether rho is the maximally mixed state).
  • Testing whether the distribution is close to having support-size r (i.e., whether rho has rank at most r).

    The experimenter can prepare n copies of the state and then measure them. The goal is to minimize n as a function of d.
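
    As a small numerical illustration of this setup (a hedged Python/NumPy sketch using classical linear algebra only, not the n-copy measurement analysis of the talk): the spectrum of rho is independent of the unknown basis, the purity Tr(rho^2) equals 1/d exactly for the maximally mixed state, and the rank of rho is the support size of (p_1, ..., p_d).

        # Build rho = sum_i p_i |v_i><v_i| and inspect its spectrum.
        import numpy as np

        rng = np.random.default_rng(2)
        d = 4
        Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
        p = np.array([0.4, 0.3, 0.3, 0.0])              # the unknown spectrum
        rho = sum(p[i] * np.outer(Q[:, i], Q[:, i].conj()) for i in range(d))

        spectrum = np.sort(np.linalg.eigvalsh(rho))[::-1]
        print(np.round(spectrum, 3))                    # recovers p (basis-independent)
        print(np.isclose(np.trace(rho @ rho).real, 1/d))  # purity is 1/d iff maximally mixed
        print(np.sum(spectrum > 1e-9))                  # rank 3: support size of p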

    We describe an approach to proving tight upper and lower bounds for these kinds of problems. The approach uses the asymptotic representation theory of the symmetric group; in particular Kerov's algebra of polynomial functions on Young diagrams.

    This is joint work with John Wright (CMU).


    Friday, May 1:
    Probabilistic Automata on Finite and Infinite Words
    Mahesh Viswanathan
    University of Illinois at Urbana-Champaign

    Abstract:

    Probabilistic automata are finite state machines that roll dice in each step to decide the next state on reading an input symbol. Such machines were first introduced by Rabin in the 1960s as a computational model on finite words, and, more recently, the theory has been extended to consider infinite length inputs. This talk will survey results about such machines, contrasting it with the theory of regular languages, and discuss open problems and applications to model checking.
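
    As a minimal example of the model (a hedged Python sketch, not from the talk): a Rabin-style probabilistic automaton keeps one row-stochastic matrix per input symbol, the acceptance probability of a word w is pi M_{w_1} ... M_{w_m} 1_F, and languages are carved out by comparing this probability to a cut-point.

        # A two-state probabilistic finite automaton on finite words.
        import numpy as np

        pi = np.array([1.0, 0.0])                      # start in state 0
        M = {
            'a': np.array([[0.5, 0.5],                 # on 'a', state 0 flips a coin
                           [0.0, 1.0]]),               # state 1 is absorbing
            'b': np.array([[1.0, 0.0],
                           [0.0, 1.0]]),               # 'b' does nothing
        }
        final = np.array([0.0, 1.0])                   # accept in state 1

        def accept_prob(word):
            v = pi.copy()
            for c in word:
                v = v @ M[c]
            return v @ final

        print(accept_prob('aab'))       # 0.75: probability of reaching state 1
        print(accept_prob('a') > 0.5)   # cut-point 1/2 membership: False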

    Results presented will include joint work with Rohit Chadha, Dileep Kini, and Prasad Sistla.


    Friday, May 8:
    The Power of Negations in Cryptography
    Igor C. Oliveira
    Columbia University

    Abstract:

    The study of monotonicity and negation complexity for Boolean functions has been prevalent in complexity theory as well as in computational learning theory, but little attention has been given to it in the cryptographic context. Recently, Goldreich and Izsak (2012) have initiated a study of whether cryptographic primitives can be monotone, and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot.

    In this paper, we start by filling in the picture and proving that many other basic cryptographic primitives cannot be monotone. We then initiate a quantitative study of the power of negations, asking how many negations are required. We provide several lower bounds, some of them tight, for various cryptographic primitives and building blocks including one-way permutations, pseudorandom functions, small-bias generators, hard-core predicates, error-correcting codes, and randomness extractors. Among our results, we highlight the following.

    i) Unlike one-way functions, one-way permutations cannot be monotone.

    ii) We prove that pseudorandom functions require log n - O(1) negations (which is optimal up to the additive term).

    iii) Error-correcting codes with optimal distance parameters require log n - O(1) negations (again, optimal up to the additive term).

    iv) We prove a general result for monotone functions, showing a lower bound on the depth of any circuit with t negations on the bottom that computes a monotone function f in terms of the monotone circuit depth of f.
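
    For concreteness, monotonicity of a Boolean function, the property at the heart of these results, can be checked by brute force (a hedged illustrative Python sketch): f is monotone if raising any input bit from 0 to 1 never lowers the output.

        # Brute-force monotonicity check for a Boolean function on n bits.
        from itertools import product

        def is_monotone(f, n):
            for x in product((0, 1), repeat=n):
                for i in range(n):
                    if x[i] == 0:
                        y = x[:i] + (1,) + x[i + 1:]
                        if f(x) > f(y):        # raising an input lowered f
                            return False
            return True

        maj3 = lambda x: int(sum(x) >= 2)      # majority: monotone
        xor3 = lambda x: sum(x) % 2            # parity: needs negations
        print(is_monotone(maj3, 3), is_monotone(xor3, 3))  # True False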

    Joint work with Siyao Guo, Tal Malkin and Alon Rosen.


    Monday, May 11:
    A Holant Dichotomy: Is the FKT Algorithm Universal?
    Jin-Yi Cai
    University of Wisconsin - Madison

    Abstract: The Fisher-Kasteleyn-Temperley (FKT) algorithm can count the number of perfect matchings over planar graphs in polynomial time. For four decades this stood as *the* important counting problem that has a P-time algorithm over planar graphs while the same problem over general graphs is intractable. In 2001, Les Valiant introduced matchgates, and subsequently holographic algorithms based on matchgates. These solve a number of counting problems over planar graphs which are #P-hard over general graphs.
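
    As a baseline for what FKT accomplishes (a hedged exponential-time Python sketch, not the FKT algorithm): one can count perfect matchings by direct recursion over a bitmask of unmatched vertices; FKT's achievement is that for planar graphs this count equals a Pfaffian computable in polynomial time.

        # Brute-force perfect matching counter (exponential time).
        from functools import lru_cache

        def count_perfect_matchings(n, edges):
            adj = [set() for _ in range(n)]
            for u, v in edges:
                adj[u].add(v); adj[v].add(u)

            @lru_cache(maxsize=None)
            def go(free):                             # bitmask of unmatched vertices
                if free == 0:
                    return 1
                u = (free & -free).bit_length() - 1   # lowest unmatched vertex
                return sum(go(free ^ (1 << u) ^ (1 << v))
                           for v in adj[u] if free >> v & 1)
            return go((1 << n) - 1)

        # 4-cycle (planar): two perfect matchings.
        print(count_perfect_matchings(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2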

    Over the past decade, a substantial theory has been developed, which aims to classify a broad class of counting problems by the local constraint functions that define them. Out of this work, a strong theme has emerged, which supports the following sweeping statement: All such locally defined counting problems can be classified into *exactly* three categories: (1) those that are P-time solvable over general graphs; (2) those that are P-time solvable over planar graphs but #P-hard over general graphs; and (3) those that remain #P-hard over planar graphs. Moreover, category (2) consists *precisely* of those problems that can be captured by Valiant's holographic reduction to the FKT. In other words, Valiant's holographic reduction followed by the FKT appears to be a *universal* method. This sweeping statement is supported by a number of complexity dichotomy theorems of ever broader scope.

    So the question is: Is Valiant + FKT universal?

    In this talk, I will answer this question.

    Joint work with Zhiguo Fu, Heng Guo and Tyson Williams.


    Friday, May 15:
    SVP in 2^n Time Using Discrete Gaussian Sampling
    Noah Stephens-Davidowitz
    NYU

    Abstract: We give a $2^{n+o(n)}$-time algorithm for the shortest vector problem on lattices. The previous best-known algorithm of Micciancio and Voulgaris runs in time $2^{2n+o(n)}$. The algorithm is based on an elementary yet powerful observation that by properly combining samples from a Gaussian distribution over the lattice, we can produce samples from a \emph{narrower} Gaussian distribution on the lattice. We use this to sample from an arbitrarily narrow Gaussian distribution over the lattice, allowing us to find a shortest vector.
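
    The narrowing step can be demonstrated over the one-dimensional lattice Z (a hedged numerical toy in Python, not the lattice algorithm itself): the average of two discrete Gaussian samples that agree modulo 2 lies back in Z and is distributed, up to small error, as a discrete Gaussian narrower by a factor of about sqrt(2).

        # Narrowing a discrete Gaussian over Z by combining paired samples.
        import math, random

        def sample_dgauss(s, cut=50):
            """Sample D_{Z,s} (prob. proportional to exp(-pi x^2 / s^2)) by enumeration."""
            xs = list(range(-cut, cut + 1))
            ws = [math.exp(-math.pi * x * x / (s * s)) for x in xs]
            return random.choices(xs, weights=ws)[0]

        def narrowed_sample(s):
            while True:
                x, y = sample_dgauss(s), sample_dgauss(s)
                if (x - y) % 2 == 0:               # same coset of 2Z
                    return (x + y) // 2            # back in Z, width ~ s / sqrt(2)

        s = 8.0
        wide = [sample_dgauss(s) for _ in range(20000)]
        narrow = [narrowed_sample(s) for _ in range(20000)]
        std = lambda v: (sum(t * t for t in v) / len(v)) ** 0.5
        print(std(wide), std(narrow), std(wide) / std(narrow))  # ratio ~ sqrt(2)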

    Joint work with Divesh Aggarwal, Daniel Dadush, and Oded Regev.


    Friday, May 22:
    Fully Dynamic Matching in Bipartite Graphs
    Aaron Bernstein
    Columbia University

    Abstract: I will talk about our recent paper on fully dynamic approximate matching, which will appear in ICALP '15. Given a dynamic graph G, the goal is to maintain an approximately maximum matching as edges are inserted into and deleted from G. Looking at the state of the art, maintaining an approximate matching actually breaks down into two very different problems: maintaining a 2-or-worse approximation, and maintaining a better-than-2 approximation. The reason is that in the former case it is enough to maintain a *maximal* matching, which turns out to be much easier. For the former case, the state of the art maintains a 2-approximation and requires only O(log(n)) amortized time per edge change in G. For the latter case, the state of the art maintains a (1+eps)-approximation but requires O(m^{1/2}) time per edge change; nothing faster than O(m^{1/2}) was known for this problem, even if we allowed a 1.99 approximation.
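
    To make the "maximal is easier" point concrete, here is a naive dynamic maximal matching (a hedged Python sketch with O(deg) updates, far simpler than the algorithms in the talk); any maximal matching is a 2-approximate maximum matching, since each matched edge blocks at most two edges of an optimal matching.

        # Naive dynamic maximal matching under edge insertions and deletions.
        class DynamicMaximalMatching:
            def __init__(self):
                self.adj = {}        # vertex -> set of neighbors
                self.mate = {}       # vertex -> matched partner (or absent)

            def _try_match(self, u):
                if u in self.mate:
                    return
                for v in self.adj.get(u, ()):
                    if v not in self.mate:       # greedily match u to a free neighbor
                        self.mate[u] = v
                        self.mate[v] = u
                        return

            def insert(self, u, v):
                self.adj.setdefault(u, set()).add(v)
                self.adj.setdefault(v, set()).add(u)
                if u not in self.mate and v not in self.mate:
                    self.mate[u] = v; self.mate[v] = u

            def delete(self, u, v):
                self.adj[u].discard(v); self.adj[v].discard(u)
                if self.mate.get(u) == v:        # rematch the freed endpoints
                    del self.mate[u], self.mate[v]
                    self._try_match(u); self._try_match(v)

        m = DynamicMaximalMatching()
        m.insert(1, 2); m.insert(2, 3)
        m.delete(1, 2)                  # frees 1 and 2; 2 rematches with 3
        print(sorted({tuple(sorted(e)) for e in m.mate.items()}))  # [(2, 3)]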

    We present an algorithm that goes beyond this barrier in *bipartite* graphs. In particular, our main result maintains a (3/2+eps)-approximation in update time O(m^{1/4}). This is the fastest algorithm to maintain a better-than-2 approximation, and is also the fastest deterministic algorithm for *any* constant approximation. We also show how to maintain a (1+eps)-approximation with update time O(log(n)) if the bipartite graph has constant arboricity. Previous results that focused on fast algorithms for the constant arboricity case only achieved a 2-approximation.

    Joint work with Cliff Stein.


    Contact xichen-at-cs-dot-columbia-dot-edu if you would like to volunteer to give a talk (students are especially encouraged!). The talk can be about your own or others' work. It can be anything from a polished presentation of a completed result, to an informal black-board presentation of an interesting topic where you are stuck on some open problem. It should be accessible to a general theory audience. I will be happy to help students choose papers to talk about.
    There is a mailing list for the reading group. General information about the mailing list (including how to subscribe yourself to it) is available here. If you want to unsubscribe or change your options, send email to theoryread-request@lists.cs.columbia.edu with the word `help' in the subject or body (don't include the quotes), and you will get back a message with instructions.

    Comments on this page are welcome; please send them to xichen-at-cs.columbia.edu

    Back to Theory of Computation at Columbia main page