# Columbia computer scientists present four papers at FOCS

*An average-case depth hierarchy theorem for Boolean circuits*, which solved a problem open for almost 30 years, earned the best paper award. A high-level description of this paper is given below, with abstracts from each of the other papers.

**An average-case depth hierarchy theorem for Boolean circuits**

Benjamin Rossman, *National Institute of Informatics, Japan (to start a professorship at the University of Toronto in 2016)*

Rocco Servedio, *Columbia University*

Li-Yang Tan, *Columbia University, CS PhD’14, now research assistant professor at TTI-Chicago*

In *An average-case depth hierarchy theorem for Boolean circuits*, Benjamin Rossman, Rocco Servedio, and Li-Yang Tan resolve a question that had been open for almost 30 years: does the depth hierarchy for Boolean circuits survive in the average case? They give an affirmative answer, proving that for every *d*, even in the average case, there are functions that are efficiently computable by depth-(*d*+1) circuits but not by circuits of depth *d*. Rocco Servedio described the result as follows: “The paper shows that there’s a problem that can be solved very efficiently in 101 parallel time steps, i.e., it doesn’t require many processors if you get to use that much time. But if you only get 100 rather than 101 parallel time steps, without a really huge number of processors there’s no way to do much better than random guessing—for a yes/no question, you won’t be able to get the right answer even 51% of the time.”

*An average-case depth hierarchy theorem for Boolean circuits* received the Best Paper Award at FOCS 2015.

**On the Complexity of Optimal Lottery Pricing and Randomized Mechanisms**

Xi Chen, *Columbia University*

Ilias Diakonikolas, *School of Informatics, University of Edinburgh*

Anthi Orfanou, *Columbia University*

Dimitris Paparas, *Columbia University*

Xiaorui Sun, *Columbia University*

Mihalis Yannakakis, *Columbia University*

In *On the Complexity of Optimal Lottery Pricing and Randomized Mechanisms*, the authors show that even a simple lottery pricing scenario, with one seller and one unit-demand buyer, presents a computational problem that cannot be solved efficiently in polynomial time without adding conditions or special cases. Via a folklore connection, the result also provides the same hardness evidence for the problem of optimal mechanism design under the same simple setup.

**Indistinguishability Obfuscation from the Multilinear Subgroup Elimination Assumption**

Craig Gentry, *IBM T.J. Watson Research Center*

Allison Lewko Bishop, *Columbia University*

Amit Sahai, *University of California at Los Angeles*

Brent Waters, *University of Texas at Austin*

Previous constructions of indistinguishability obfuscation relied on *ad-hoc* assumptions: in the original constructive work of Garg *et al*. (FOCS 2013), the underlying explicit computational assumption encapsulated an exponential family of assumptions for each pair of circuits to be obfuscated. In the more recent work of Pass *et al*. (Crypto 2014), the underlying assumption is a *meta-assumption* that also encapsulates an exponential family of assumptions, and this meta-assumption is invoked in a manner that captures the specific pair of circuits to be obfuscated. The assumptions underlying both these works substantially capture (either explicitly or implicitly) the actual structure of the obfuscation mechanism itself.

**Robust Traceability from Trace Amounts**

Cynthia Dwork, *Microsoft Research*

Adam D. Smith, *Pennsylvania State University*

Thomas Steinke, *Harvard University*

Jonathan Ullman, *Columbia University CS PhD’14, now assistant professor at Northeastern University*

In 2008, Homer *et al*. (*PLoS Genetics*) demonstrated the privacy risks inherent in the release of a large number of summary statistics (specifically, 1-way marginals of SNP allele frequencies) about participants in a genome-wide association study. Specifically, they showed how given a large number of summary statistics from a case group (a dataset of individuals that have been diagnosed with a particular disease), the data of a single target individual, and statistics from a sizable reference dataset independently drawn from the same population, they could determine with confidence whether or not the target is in the case group. That is, the *trace amount* of data (DNA) that the target individual contributes to the summary statistics enables his or her presence in the case group to be *traced* by an adversary. The Homer *et al*. work had a major impact, changing the way that the US NIH and Wellcome Trust allow access to information from studies they fund.
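The flavor of such a tracing attack can be illustrated with a simplified inner-product test statistic on simulated data. This is a hedged sketch, not the exact statistic of Homer *et al*. or of this paper: the attribute counts, group sizes, and frequency ranges below are all hypothetical, chosen only to make the separation between member and non-member scores visible.

```python
import random

random.seed(0)

m = 5000        # number of binary attributes (e.g., SNP alleles); illustrative size
n_case = 100    # size of the case group whose summary statistics are released
n_ref = 100     # size of the adversary's independent reference sample

# Hypothetical population allele frequencies, drawn uniformly from [0.2, 0.8].
freqs = [random.uniform(0.2, 0.8) for _ in range(m)]

def sample_person():
    """Draw one individual's binary attribute vector from the population."""
    return [1 if random.random() < f else 0 for f in freqs]

case_group = [sample_person() for _ in range(n_case)]
reference = [sample_person() for _ in range(n_ref)]

# Released summary statistics: the per-attribute means (1-way marginals).
case_means = [sum(p[j] for p in case_group) / n_case for j in range(m)]
ref_means = [sum(p[j] for p in reference) / n_ref for j in range(m)]

def trace_score(target):
    # Inner-product statistic: positive in expectation when the target's
    # data contributed to the released case statistics, and mean zero
    # when the target is an independent draw from the population.
    return sum((target[j] - ref_means[j]) * (case_means[j] - ref_means[j])
               for j in range(m))

in_case_score = trace_score(case_group[0])    # a true member of the case group
out_score = trace_score(sample_person())      # an independent non-member

print(f"member score: {in_case_score:.2f}, non-member score: {out_score:.2f}")
```

With many attributes and a modest case group, the member's score concentrates well above the non-member's, which is what lets an adversary threshold the statistic and declare membership with confidence.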

The new paper strengthens the attack considerably: tracing remains possible even when the released statistics are significantly distorted, and even when the adversary's reference data is reduced to a *single sample* drawn from the underlying population. The attack is not specific to genomics.