# A TCS Quiver

## FOCS 2019 Workshop: Saturday, November 9 (Baltimore)

| Time | Speaker |
| --- | --- |
| 8:30-8:40 | Clément Canonne |
| 8:40-10:00 | Omri Weinstein |
| *Coffee break* | |
| 10:15-11:15 | Anindya De |
| 11:15-11:35 | Clément Canonne |
| *Lunch break* | |
| 13:00-14:20 | Steven Wu |
| 14:20-14:30 | Clément Canonne |

Cell-sampling is an elementary information-theoretic technique for proving unconditional lower bounds on the “locality” of algorithms, via a compression-style argument. Despite its simplicity, cell-sampling yields state-of-the-art lower bounds in many computational models, such as static and dynamic data structures, hashing, locally decodable codes (LDCs), and matrix rigidity. I will sketch some of these applications, including time-space tradeoffs for near-neighbor search and the Katz–Trevisan lower bound for general LDCs.

[slides], [slides (PPTX)]

The central limit theorem is one of the cornerstones of modern probability theory. In recent years, probably to no one's surprise, the theorem and its variants have found applications in several areas of theoretical computer science, including complexity theory, learning theory, and algorithm design, among others. In this talk, I will discuss some of these variants, their applications, and some of the techniques used to prove such central limit theorems.
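As a minimal numerical illustration of the plain central limit theorem (the choice of Uniform[0,1] summands and the parameters below are arbitrary, not taken from the talk): the standardized sum of n i.i.d. variables should be approximately standard Gaussian.

```python
import math
import random

def standardized_sum(n, rng):
    """Sum of n i.i.d. Uniform[0,1] variables, centered and rescaled.

    Uniform[0,1] has mean 1/2 and variance 1/12, so the sum has
    mean n/2 and variance n/12.
    """
    s = sum(rng.random() for _ in range(n))
    return (s - n / 2) / math.sqrt(n / 12)

def empirical_cdf_at_zero(n, trials, seed=0):
    """Fraction of standardized sums that fall at or below 0."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if standardized_sum(n, rng) <= 0)
    return hits / trials

# The standard Gaussian CDF at 0 is exactly 1/2, so for moderate n the
# empirical value should already be close to 0.5.
print(empirical_cdf_at_zero(n=30, trials=20000))
```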

[slides]

I will discuss two (unrelated) small facts that have proven quite useful in various domains. The first goes by many names (the Gibbs variational principle and the Donsker–Varadhan formula, among others), and provides a very fruitful variational characterization of relative entropy (KL divergence). The second, more algorithmic, is an improvement over naive averaging-type/bucketing arguments, sometimes known as *Levin's economical investment strategy*, allowing one to leverage a lower bound on the expected value of some quantity without losing quadratic or logarithmic factors.
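The Donsker–Varadhan formula states that D(P ∥ Q) = sup_f E_P[f] − log E_Q[e^f], with the supremum attained at f = log(p/q). A small numerical sketch on finite distributions (the distributions below are arbitrary, chosen only for illustration):

```python
import math

def kl(p, q):
    """KL divergence D(P || Q) for finite distributions given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dv_objective(f, p, q):
    """Donsker-Varadhan objective E_P[f] - log E_Q[e^f] for a test function f."""
    return sum(pi * fi for pi, fi in zip(p, f)) - math.log(
        sum(qi * math.exp(fi) for qi, fi in zip(q, f)))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]

# The optimal test function f* = log(p/q) attains the supremum exactly.
f_star = [math.log(pi / qi) for pi, qi in zip(p, q)]
print(kl(p, q), dv_objective(f_star, p, q))  # the two values coincide

# Any other choice of f only yields a lower bound on the divergence.
print(dv_objective([1.0, 0.0, -1.0], p, q) <= kl(p, q))
```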

Differential privacy is a notion of algorithmic stability that provides a rigorous foundation for the study of privacy-preserving data analyses. However, tools developed for differential privacy have also found applications in areas of research beyond privacy. In this talk, I will describe how one can leverage the stability guarantee in differential privacy to obtain 1) incentive-compatibility in mechanism design, 2) statistical validity in adaptive data analysis, and 3) certified robustness to adversarial examples.
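For concreteness, the standard definition of the stability notion in question (not spelled out in the abstract itself): a randomized algorithm A is ε-differentially private if, for every pair of neighboring inputs x, x' and every event S,

```latex
\sup_S \frac{\Pr[ A(x) \in S ]}{\Pr[ A(x') \in S ]} \leq e^{\varepsilon}.
```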

[slides]

- Anindya De is an Assistant Professor at the University of Pennsylvania. Previously, he was an Assistant Professor at Northwestern University, a postdoc at IAS/DIMACS, and a graduate student at Berkeley. If you ask him what he is working on, a somewhat likely response is “I have been reading about this central limit theorem ...” (and hence the talk).
- Steven Wu is an Assistant Professor at the University of Minnesota. Previously, he was a postdoc at Microsoft Research-NYC, and before that a Ph.D. student at the University of Pennsylvania. His recent work focuses on (1) how to make machine learning better aligned with societal values, especially privacy and fairness, and (2) how social and economic interactions influence machine learning.
- Omri Weinstein is an Assistant Professor at Columbia University. He is interested in the interplay between information theory, complexity, and data structures. Previously, he was a PhD student at Princeton University and a Simons Society Junior Fellow at the Courant Institute.