Abstracts for the talks are given below. See the main page for schedule and location information.
Can NP-complete problems be solved efficiently using the resources of the physical universe? I'll survey approaches including soap bubbles, folding proteins, DNA, analog computing, relativistic time dilation, quantum computing (including the quantum adiabatic algorithm), quantum advice, hidden-variable theories, "subquantum" computing, nonlinear quantum mechanics, TQFTs, time travel, quantum gravity, and various self-selection and anthropic principles. I'll argue that none of these approaches are likely to succeed. I'll then advocate the hardness of NP-complete problems as a physical axiom akin to the Second Law of Thermodynamics.
Decision trees are an expressive representation for Boolean functions, yet they have proven hard to learn in worst-case learning models. An even more challenging problem is that of learning an unknown DNF (Disjunctive Normal Form formula, i.e., an OR of ANDs of Boolean variables) or CNF formula in the worst case.
In this work we give the first efficient algorithm for the average-case versions of these learning problems. Our algorithm for decision trees runs in polynomial time for random trees of logarithmic depth. Our algorithm for DNF and CNF formulas runs in polynomial time for random formulas with a subquadratic number of AND gates.
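To make the objects concrete, here is a minimal Python sketch of a DNF formula (an OR of ANDs of literals) and one simple way to sample a random one. This is illustrative only: the function names are made up, and the paper's precise random model and learning algorithm are not shown.

```python
import random

def eval_dnf(terms, x):
    """Evaluate a DNF on assignment x (a list of 0/1 values).
    terms: list of terms; each term is a list of (variable_index, required_value) pairs.
    The DNF accepts iff some term has all of its literals satisfied."""
    return any(all(x[i] == val for i, val in term) for term in terms)

def random_dnf(n, num_terms, term_size):
    """Sample a toy 'random' DNF over n variables: each term picks term_size
    distinct variables with random polarities. (The paper's exact distribution
    of random formulas may differ.)"""
    return [[(i, random.randint(0, 1)) for i in random.sample(range(n), term_size)]
            for _ in range(num_terms)]

# Example: a random 10-variable DNF with 5 terms of size 3, evaluated on a random input.
f = random_dnf(n=10, num_terms=5, term_size=3)
x = [random.randint(0, 1) for _ in range(10)]
print(eval_dnf(f, x))
```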
The talk will be self-contained and assumes no prior background in computational learning theory.
This is joint work with Jeffrey Jackson.
In this work we study hardness amplification within NP -- this problem was first studied by O'Donnell (STOC 2002). He showed how to take a mildly average-case hard function f in NP, and produce a function f', also in NP, such that every "small" circuit fails to compute f' on roughly a (1/2 - 1/n) fraction of inputs. O'Donnell also showed that constructions of a certain general form cannot amplify hardness beyond (1/2 - 1/n).
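To fix ideas about the "general form" referred to here, such constructions have the shape (a schematic sketch; the precise class of constructions ruled out is defined in O'Donnell's paper)

    f'(x_1, ..., x_k) = C(f(x_1), ..., f(x_k)),

where C is a fixed "combining function" applied to f evaluated on independent inputs. Yao's XOR lemma is the classical instance C = parity, but parity does not preserve membership in NP; amplification within NP instead uses monotone combining functions (so a nondeterministic verifier for f' only needs witnesses for the coordinates it guesses to be 1), and the (1/2 - 1/n) barrier applies to constructions of this restricted form.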
In this work we prove hardness amplification up to (1/2 - 1/2^{\sqrt{n}}) within NP. That is, starting from a mildly average-case hard f in NP, we produce f' in NP such that every "small" circuit fails to compute f' on a (1/2 - 1/2^{\sqrt{n}}) fraction of inputs.
To bypass the barrier shown in O'Donnell's work, we employ two new techniques. First, we show how to derandomize O'Donnell's construction using a certain "pseudorandom generator". Second, we show how to use nondeterminism in the hardness amplification -- this is in contrast to previous hardness amplification constructions, which are deterministic.
Joint work with Salil Vadhan and Emanuele Viola.
In this work the authors show that the existence of a general obfuscator is impossible. They also show that the results extend to obfuscators with approximate functionality and to very simple complexity classes (TC^0). Additionally, they rule out potential applications of obfuscators by constructing unobfuscatable signature schemes, encryption schemes, and pseudorandom function families.
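As a rough illustration of the flavor of counterexample behind such impossibility results (a toy Python sketch with illustrative names, not the formal circuit-based construction of the paper): there are function families for which possession of any program computing the function reveals a secret that mere black-box access keeps hidden, so no obfuscator can make the code "as good as" an oracle.

```python
import secrets

K = 16  # toy security parameter (bytes)

def make_family():
    """Sample secrets (alpha, beta) and return the two components of the function."""
    alpha, beta = secrets.token_bytes(K), secrets.token_bytes(K)

    def C(x: bytes) -> bytes:
        # Point function: reveals beta only on the hidden input alpha.
        return beta if x == alpha else bytes(K)

    def D(prog) -> bool:
        # Tests whether a given *program* agrees with C at alpha.
        return prog(alpha) == beta

    return C, D

C, D = make_family()

# With the CODE in hand (any program computing C, obfuscated or not), the
# hidden relation is exposed by feeding the C-part to the D-part:
print(D(C))  # True

# With only BLACK-BOX access, a simulator cannot produce a program that passes D,
# except with negligible probability: it would have to guess alpha or beta.
print(D(lambda x: bytes(K)))  # False (except with probability about 2^-128)
```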
Andrew will present the proof of the main result and survey the extensions. If there is time, he will also discuss "Positive Results and Techniques for Obfuscation" (Lynn, Prabhakaran, Sahai).
We study the effects of locality on the fault-tolerance threshold for quantum computation. We analytically estimate how the threshold depends on a scale parameter r, which captures the scale-up in the size of the circuit due to encoding. We carry out a detailed semi-numerical threshold analysis for concatenated coding using the 7-qubit CSS code in the local and "nonlocal" settings. First, we find that the threshold in the local model for the [[7,1,3]] code has a 1/r dependence, consistent with our analytical estimate. Second, beyond this 1/r dependence, the threshold does not depend strongly on the noise levels for transporting qubits. Beyond these results, we find that it is important to look at more than one level of concatenation in order to estimate the threshold, and that it may be beneficial in certain places, such as the transportation of qubits, to do error correction only infrequently.
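Read as a scaling relation, the 1/r dependence reported above amounts roughly to

    p_threshold(local, r) ~ c / r,

where r is the circuit scale-up factor due to encoding and c is a constant fixed by the detailed analysis (and by the transport noise levels); this is a schematic restatement of the claim above, not an additional result.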
This is joint work with Barbara Terhal (IBM) and David DiVincenzo (IBM). The paper has been posted to the arXiv: quant-ph/0410047
Our main result is an on-line algorithm with competitive ratio phi = (1 + \sqrt{5})/2 (approximately 1.618), the golden ratio, for instances with agreeable deadlines. Both the algorithm and the analysis are strikingly simple. Note that this competitive ratio is the best achievable by a deterministic algorithm.
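For concreteness, "agreeable deadlines" is standardly taken to mean that the jobs can be ordered so that release times and deadlines are simultaneously non-decreasing. A small hypothetical Python check of that property (names are illustrative):

```python
def has_agreeable_deadlines(jobs):
    """jobs: list of (release_time, deadline) pairs.
    Agreeable deadlines: the jobs can be ordered so that release times
    and deadlines are simultaneously non-decreasing."""
    ordered = sorted(jobs)  # by release time, breaking ties by deadline
    deadlines = [d for _, d in ordered]
    return all(a <= b for a, b in zip(deadlines, deadlines[1:]))

print(has_agreeable_deadlines([(0, 3), (1, 3), (2, 5)]))  # True
print(has_agreeable_deadlines([(0, 5), (1, 3)]))          # False
```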
This is joint work with Prof. Jay Sethuraman and Prof. Cliff Stein, both in the IEOR Department at Columbia University.
We determine k_d for all d>0, proving that it is the smallest integer k such that d < 2k log k.
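A tiny sketch of that characterization in code, assuming (as is usual in this setting) that log denotes the natural logarithm; the function name is illustrative:

```python
from math import log

def k_of_d(d):
    """Smallest integer k such that d < 2*k*log(k) (natural log assumed)."""
    k = 2  # 2*1*log(1) = 0, so no d > 0 satisfies the condition at k = 1
    while d >= 2 * k * log(k):
        k += 1
    return k

# Example: d = 10 gives k = 4, since 2*3*log(3) ~ 6.6 <= 10 < 2*4*log(4) ~ 11.1.
print(k_of_d(10))  # 4
```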
The proof requires taking a new look at one of the most basic tools in probability and combinatorics, the second moment method. Specifically, we show that for a large class of random constraint satisfaction problems (including random graph coloring, LDPC codes, and random k-SAT) the second moment method reduces to understanding "entropy-energy" tradeoffs, controlled by the problem's constraint density. Critical values of this density correspond to points where the organization of the space of solutions of the problem undergoes dramatic change. We will show that identifying such points: (i) yields the location of "thresholds" such as the one above for graph coloring, and (ii) provides rigorous insight into why algorithms often have a hard time solving random CSPs close to such thresholds.
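For reference, the second moment method referred to here is the elementary bound (via Cauchy-Schwarz, or the Paley-Zygmund inequality) that for a nonnegative random variable Z, typically the number of solutions,

    Pr[Z > 0] >= (E[Z])^2 / E[Z^2],

so proving E[Z^2] = O((E[Z])^2) shows that solutions exist with probability bounded away from zero; the "entropy-energy" tradeoffs mentioned above are what control this ratio for random CSPs.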
Based on joint works with Assaf Naor and Yuval Peres.