A weₐᵉkly quiz on theoretical computer science (TCS), math, and random things.

📊 It's Thursday, allegedly: time for our wækly quiz. So, completely unrelated to anything, this one's going to be about influence, majority, tribes, noise, and dictators.

(I didn't get "chaos" in, but I'm keeping that option open for the future. #gaussianchaos )

1/11 pic.twitter.com/eeRtkIpNmr

— Clément Canonne (@ccanonne_) September 24, 2020
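Since the quiz revolves around influence, dictators, and majority, here is a minimal brute-force sketch (an editorial addition, not taken from the thread) of the central definition: the influence Inf_i(f) = Pr_x[f(x) ≠ f(x with bit i flipped)] for uniformly random x, evaluated on the dictator and the 3-bit majority.

```python
from itertools import product

def influence(f, n, i):
    """Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)], x uniform on {0,1}^n."""
    count = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[i] ^= 1  # flip the i-th coordinate
        if f(x) != f(tuple(y)):
            count += 1
    return count / 2 ** n

dictator = lambda x: x[0]                # ignores every bit but the first
majority3 = lambda x: int(sum(x) >= 2)   # majority of 3 bits

print([influence(dictator, 3, i) for i in range(3)])   # [1.0, 0.0, 0.0]
print([influence(majority3, 3, i) for i in range(3)])  # [0.5, 0.5, 0.5]
```

The dictator concentrates all influence on one voter; majority splits it evenly (each bit matters exactly when the other two disagree, probability 1/2).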

📊 Answers and discussion for yesterday's quiz on voting rules (a.k.a. Boolean functions for social choice).↴

Disclaimer: for a much better and more comprehensive treatment, @BooleanAnalysis 's book is the place to look ( https://t.co/f9OGjFKAVu ).

1/ https://t.co/jy0JzLWwiq

— Clément Canonne (@ccanonne_) September 26, 2020

📊 It's (weakly) Thursday: time for a weekly quiz. For this one let's be merry, and play games—repeatedly!

But, what's a game? You have 2 players, Alice 👩🏻‍💻 and Bob 👨🏾‍💻, and a game master, say Guillaume🧔🏼. Guillaume draws (x,y), gives (separately!) x to Alice and y to Bob...

1/9 pic.twitter.com/NMYxoRelJ5

— Clément Canonne (@ccanonne_) September 10, 2020

📊 Answers and discussion for yesterday's quiz on games. Technically, "repeated games and direct product (parallel repetition) theorems."

Or, as one may put it: if I keep playing, how likely is it that I'll keep winning?🤷

1/15 https://t.co/TvjcHG52Gf

— Clément Canonne (@ccanonne_) September 11, 2020

📊 It may not be Thursday everywhere yet, but let's have our weakly(weekly(quiz))!

Today: no general theme. Just things that may be true, or false, and surprisingly so―or not. Beware of the traps.

1/7 pic.twitter.com/VL50ZjMgS8

— Clément Canonne (@ccanonne_) September 3, 2020

📊Answers and discussion for yesterday's quiz on "Things I Probably Would Have Gotten Wrong (Did You?)"

Ft. Complexity theory (Q1), Probability (Q2), Measure theory (Q4), Planar graphs and pandas 🐼 (Q5), and Linear algebra (Q3)... in that order.

1/13 https://t.co/AG9nyiCw5F

— Clément Canonne (@ccanonne_) September 4, 2020

📊 It's Thursday for many, time for this week's quiz! As alluded earlier, we're going to flip coins. Like, a lot.

Today: you have a biased coin which lands heads with some probability p∊(0,1), and you can toss it as many times as you want. You don't know p, though...

1/10 pic.twitter.com/YCYEzSC5zJ

— Clément Canonne (@ccanonne_) August 27, 2020
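The quiz is about what you can simulate from a bias-p coin; a classic warm-up in that spirit (not necessarily one of the quiz's actual questions) is von Neumann's trick for extracting a perfectly fair coin from a coin of unknown bias.

```python
import random

def biased_coin(p, rng):
    return 1 if rng.random() < p else 0

def von_neumann_fair(p, rng):
    """Toss the p-coin in pairs; output the first toss if the pair disagrees,
    otherwise retry. Since P(HT) = P(TH) = p(1-p), the output is exactly
    fair for any p in (0,1) -- no knowledge of p needed."""
    while True:
        a, b = biased_coin(p, rng), biased_coin(p, rng)
        if a != b:
            return a

rng = random.Random(0)
flips = [von_neumann_fair(0.9, rng) for _ in range(20000)]
print(sum(flips) / len(flips))  # ≈ 0.5, even though the coin has bias 0.9
```

The expected number of tosses per fair bit is 1/(p(1-p)), which blows up as p approaches 0 or 1.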

📊 Answers and discussion for yesterday's quiz on coin flipping (a.k.a. the power of small change?)

Reminder: given a fixed function f, we want a procedure which, given independent flips of a coin with unknown probability of heads p ("bias")...

1/14 https://t.co/GThwTq3VPB

— Clément Canonne (@ccanonne_) August 28, 2020
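The procedure being described is what's known as a Bernoulli factory: from flips of a p-coin, simulate a coin with bias f(p). The simplest instance, f(p) = p², needs just two tosses; a seeded sketch:

```python
import random

def p_coin(p, rng):
    return rng.random() < p

def f_coin_square(p, rng):
    """Simulate a coin of bias f(p) = p^2 from the p-coin: toss twice,
    output heads iff both tosses are heads. Works without knowing p."""
    return p_coin(p, rng) and p_coin(p, rng)

rng = random.Random(1)
est = sum(f_coin_square(0.6, rng) for _ in range(50000)) / 50000
print(est)  # ≈ 0.36 = 0.6^2
```

Which functions f admit such a procedure at all (and at what cost) is exactly the subtle part the thread gets into.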

📊 It's Thursday, time for a we•kly quiz! As promised last week, today will be about Gaussians, CLTs, Berry–Esseen, and a little bit of Invariance Principle.

Let's start. You have n i.i.d. real-valued r.v.s X₁,...,Xₙ, with mean zero. You sum them, and hope for the best.

1/10 pic.twitter.com/vWuIrcxO62

— Clément Canonne (@ccanonne_) August 20, 2020
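As a concrete companion to the setup (assuming, purely for illustration, Rademacher ±1 summands, so the variance is 1), a seeded simulation showing the normalized sum Sₙ/√n settling to mean 0 and unit variance, as the CLT predicts:

```python
import random
import statistics

rng = random.Random(42)
n, trials = 400, 5000

# Each trial: sum n i.i.d. mean-zero +/-1 variables, normalize by sqrt(n).
samples = [
    sum(rng.choice((-1, 1)) for _ in range(n)) / n ** 0.5
    for _ in range(trials)
]
print(statistics.mean(samples))    # ≈ 0
print(statistics.pstdev(samples))  # ≈ 1: S_n / sqrt(n) → N(0,1)
```

Berry–Esseen quantifies *how fast* this convergence happens, in terms of the third moment of the summands.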

📊 Answers and discussion for yesterday's quiz on CLTs, Gaussians, and a bit of Invariance Principle.

Let's start: Q1 on the Central Limit Theorem was a trap 🧐!

1/16 https://t.co/AqzMX6YdgI

— Clément Canonne (@ccanonne_) August 21, 2020

📊 It's Thursday [reference needed], time for a weᵄekly quiz! Given some recent events about randomized polynomial time (RP), it feels like a short discussion about 🎲 is topical.

So, let's go: Las Vegas, Monte Carlo, and... Bellagio (?) algorithms!

1/8 pic.twitter.com/DYFqOeMG8q

— Clément Canonne (@ccanonne_) August 6, 2020

📊Thread: answers and discussions for yesterday's quiz on randomized complexity classes (RP, ZPP, BPP).

Of course, I am barely scratching the surface. I find randomness in computation fascinating, and hope you will too. https://t.co/DQNWorXxa0
1/15 pic.twitter.com/CIFcqaqaF8

— Clément Canonne (@ccanonne_) August 7, 2020
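One textbook example of the Monte Carlo (one-sided error) style of algorithm the thread discusses is Freivalds' matrix-product verification; this sketch is illustrative, not taken from the quiz itself.

```python
import random

def freivalds(A, B, C, rng, rounds=20):
    """Monte Carlo check of whether A @ B == C (n x n integer matrices).
    If they are equal: always accepts. If not: each round catches the
    mismatch with probability >= 1/2, so the check rejects with
    probability >= 1 - 2**(-rounds). Each round costs O(n^2), vs O(n^3)
    for recomputing the product."""
    n = len(A)
    for _ in range(rounds):
        r = [rng.randrange(2) for _ in range(n)]          # random 0/1 vector
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # certificate that A @ B != C
    return True  # probably equal

rng = random.Random(7)
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(freivalds(A, B, C_good, rng))  # True
print(freivalds(A, B, C_bad, rng))   # False
```

The one-sided error profile ("yes" answers may rarely be wrong, "no" answers never are) is exactly the RP/coRP flavor.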

📊 Today, (weakly|weekly) quiz will be about graphs. The ones with nodes and edges, not those plotting against you.

Specifically, *planar* graphs. The nice ones.

1/7

— Clément Canonne (@ccanonne_) July 30, 2020

📊 Answers and discussion for yesterday's quiz on planar graphs. Recall: a graph G=(V,E) on n=|V| vertices and m=|E| edges is planar if you can "embed it in the plane" (draw it without edge crossings). 🕸️

Why should we care, you ask?

1/10 https://t.co/yGrMgZ9ndB

— Clément Canonne (@ccanonne_) July 31, 2020
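A consequence of Euler's formula worth keeping handy: a simple planar graph on n ≥ 3 vertices has at most m ≤ 3n − 6 edges (and m ≤ 2n − 4 if it is triangle-free, e.g. bipartite). A small checker, already enough to rule out K₅ and K₃,₃:

```python
def passes_planar_edge_bound(n, m, bipartite=False):
    """Necessary (NOT sufficient!) condition for a simple graph with
    n >= 3 vertices and m edges to be planar, via Euler's formula:
    m <= 3n - 6 in general, m <= 2n - 4 when triangle-free/bipartite."""
    bound = 2 * n - 4 if bipartite else 3 * n - 6
    return m <= bound

# K5: n=5, m=10 > 3*5 - 6 = 9  -> not planar.
print(passes_planar_edge_bound(5, 10))                 # False
# K_{3,3}: m=9 <= 3*6 - 6 = 12, but 9 > 2*6 - 4 = 8   -> not planar.
print(passes_planar_edge_bound(6, 9, bipartite=True))  # False
# A tree on 10 vertices (m = 9) passes, consistent with planarity.
print(passes_planar_edge_bound(10, 9))                 # True
```

Passing the bound proves nothing (the condition is only necessary); failing it is a certificate of non-planarity.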

📊 It's Thursday: time for our wÆkly quiz! I'll build a little bit on last week's statistics/algorithms thread (see below for a recap), and ask you two 2️⃣ questions.

General topic: testing distributions, "leaky" measurements, and adaptivity. 🔍

1/6 https://t.co/YhEpCn4yov pic.twitter.com/aj5mYWKqkp

— Clément Canonne (@ccanonne_) July 23, 2020

📊 Answers and discussions for yesterday's we(a)ekly quiz on "testing whether data is uniformly distributed, given some leaky query access to the samples." #statistics #quiz

To paraphrase Tolstoy when starting "War and Peace": I'll be brief. https://t.co/NMK2GJ2DqF

1/10

— Clément Canonne (@ccanonne_) July 24, 2020

📊 Two days ago, I asked a question. Way more people answered than expected, and... well, this week's weₐekly #quiz will be slightly ≠: a long thread on uniformity testing, trickling down all day long.

Be careful what you wish for :) #statistics

1/n https://t.co/3ECldvbekE

— Clément Canonne (@ccanonne_) July 16, 2020
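One standard approach to uniformity testing (whether or not it is the one this particular thread develops) is counting collisions: under the uniform distribution on [n], each pair of samples collides with probability exactly 1/n, while any distribution far from uniform has a larger collision probability ‖p‖₂² > 1/n. A seeded sketch:

```python
import random
from itertools import combinations

def collision_count(samples):
    """Number of colliding pairs among the samples. Under uniform on [n],
    E[collisions] = C(m,2)/n; a distribution far from uniform collides
    more often, since its collision probability is ||p||_2^2 > 1/n."""
    return sum(1 for a, b in combinations(samples, 2) if a == b)

rng = random.Random(2)
n, m = 100, 500
uniform_samples = [rng.randrange(n) for _ in range(m)]
skewed_samples = [rng.randrange(10) for _ in range(m)]  # only 10 of 100 elements

print(collision_count(uniform_samples))  # ≈ C(500,2)/100 ≈ 1248
print(collision_count(skewed_samples))   # ≈ C(500,2)/10  ≈ 12475
```

Thresholding the collision count between 1/n and ‖p‖₂² gives a tester using O(√n/ε²)-ish samples; the thread's point is what changes under leaky/adaptive access.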

📊 A short discussion about last week's thread on uniformity testing: I wrote things down!

📝 https://t.co/jLEIKSxMIi (LaTeX)
📄 https://t.co/Uqq0yjWrwp (PDF)

Comments welcome.

1/2 https://t.co/6jqSvwoabC pic.twitter.com/1baBnVWWVl

— Clément Canonne (@ccanonne_) July 20, 2020

📊 It's Thursday (somewhere), time for a weₐekly quiz! Today: #machinelearning ! Ish. Let's say... computational learning. Some of it. In theory.

OK. VC dimension. You got me. Anyways, #COLT2020 is starting tomorrow, so... what does it mean to "learn," exactly?

1/10

— Clément Canonne (@ccanonne_) July 9, 2020

📊 Answers and comments about yesterday's online quiz on learning.

Not a quiz on online learning. Though there was an (online learning) component on that online (learning quiz), which maybe led to some learning online, so... I am confused now. https://t.co/2VO9AqXudR

1/16 pic.twitter.com/rNmo5tmwWM

— Clément Canonne (@ccanonne_) July 10, 2020

📊 Wæekly quiz: "The answer may surprise you (or not)."

Four results I find surprising (among many). Three of the statements below are true, one is false. Choose... wisely.

1/11 pic.twitter.com/lAV5UMTwRh

— Clément Canonne (@ccanonne_) July 2, 2020

⁉️ Answers and discussion for yesterday's quiz: and yes, you are a Ravenclaw. (BuzzFeed has nothing on me.) https://t.co/csDmh8T4bk
First I'd like to apologize for the original erroneous phrasing of Q1. If your answer was wrong, it was my fault.

[As usual, refs at the end]

1/19 pic.twitter.com/aODklwZQKB

— Clément Canonne (@ccanonne_) July 4, 2020

📊 Weᵃₑkly quiz: linear threshold functions (a.k.a. perceptrons, a.k.a. halfspaces), and what makes them so unique.

In other words, "the Chow Parameters."

Let's start with Boolean functions: i.e., functions of the form f: {-1,1}ⁿ→{-1,1}. There are 2^2^n of those.

1/9 pic.twitter.com/jJVRaJnyQh

— Clément Canonne (@ccanonne_) June 25, 2020

🧑‍🏫 Answers and discussions for yesterday's quiz on Boolean functions, halfspaces (LTFs), and Chow parameters.

Our focus here will be functions of the form f: {-1,1}ⁿ→{-1,1} (i.e., bits=±1)

[References for all results at the end of the thread.]

1/18 https://t.co/RYVrBItOrn pic.twitter.com/NZjzVdFJn1

— Clément Canonne (@ccanonne_) June 26, 2020

📊 Weakly weekly quiz: a short one—next week's will be longer. "How do you prove you won't always fall short of your expectations?"

(No, it's not about eating Cheerios on the sofa, or the last time I wore actual socks. It's about constant-probability statements.)

1/4 pic.twitter.com/fJN7fBYx8B

— Clément Canonne (@ccanonne_) June 18, 2020

📊 Answers and discussions for yesterday's quiz: "minimal assumptions for X to be, with constant probability, no less than a constant fraction of 𝔼[X]?"

E.g., if an algo finds something w/ large value in expectation, can it do it w/ probability 1/10?

1/7 https://t.co/wH3dh5qtLd

— Clément Canonne (@ccanonne_) June 19, 2020
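One standard answer, under a second-moment assumption, is the Paley–Zygmund inequality: for X ≥ 0 and θ ∈ [0,1], P(X > θ𝔼[X]) ≥ (1−θ)²·𝔼[X]²/𝔼[X²]. A finite-support numeric check (the example distribution is illustrative):

```python
def paley_zygmund_check(values, probs, theta):
    """For a nonnegative finitely-supported r.v. X, return both sides of
    Paley-Zygmund: P(X > theta*E[X]) >= (1-theta)^2 * E[X]^2 / E[X^2]."""
    ex = sum(v * p for v, p in zip(values, probs))
    ex2 = sum(v * v * p for v, p in zip(values, probs))
    lhs = sum(p for v, p in zip(values, probs) if v > theta * ex)
    rhs = (1 - theta) ** 2 * ex * ex / ex2
    return lhs, rhs

# X = 10 w.p. 0.1, else 0: E[X] = 1, E[X^2] = 10.
lhs, rhs = paley_zygmund_check([0, 10], [0.9, 0.1], 0.5)
print(lhs, rhs)  # 0.1 >= 0.025: the bound holds (and can be loose)
```

A first-moment bound alone cannot give this: without control of 𝔼[X²], the mass of X can escape to a huge, vanishingly-rare value.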

📊 Weakly-weekly quiz: this week, "how hard is it to learn when you keep forgetting things?"
(This quiz is about learning functions. The question applies to history as well...)

You observe a stream of labelled samples (xᵢ,yᵢ) where yᵢ=f(xᵢ), and want to learn f...

1/9 pic.twitter.com/OsryHLJyH4

— Clément Canonne (@ccanonne_) June 11, 2020

📊 Answers and discussion for yesterday's quiz on sample/memory tradeoffs for learning: a thread. ↴

Overall question: "how hard does it become to learn a (linear) function when the algorithm can't keep too much information in memory?"

1/11 https://t.co/VXHILDT8Ww

— Clément Canonne (@ccanonne_) June 12, 2020

📊 For this edition of the weekly quiz, let's look at random sums of random things, and their tails.

Specifically, we have some i.i.d. r.v.'s X₁,X₂,..., and some integer-valued (possibly infinite) r.v. T. We look at S = X₁+X₂+...+X_T.

1/7

— Clément Canonne (@ccanonne_) June 4, 2020

📊 Answers and discussion about yesterday's w(ee|ea)kly quiz on random sums: S = X₁+X₂+...+X_T, w/ Xₙ's independent and T itself an integer-valued random variable.

Under mild assumptions on all those, what can we say about the tails of the sum S? https://t.co/uNnZUyfuTc

1/11 pic.twitter.com/ACoAo8m5pe

— Clément Canonne (@ccanonne_) June 5, 2020
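One clean fact in this setting (assuming T has finite mean and is independent of the Xᵢ's, or more generally is a stopping time) is Wald's identity: 𝔼[S] = 𝔼[T]·𝔼[X₁]. A seeded sanity check with illustrative choices of T and X:

```python
import random

rng = random.Random(3)

def random_sum(rng):
    """S = X_1 + ... + X_T with T ~ Uniform{1,...,5}, independent of the
    X_i ~ Exp(mean 2). Wald's identity: E[S] = E[T] * E[X] = 3 * 2 = 6."""
    T = rng.randint(1, 5)
    return sum(rng.expovariate(0.5) for _ in range(T))  # rate 0.5 -> mean 2

trials = 20000
est = sum(random_sum(rng) for _ in range(trials)) / trials
print(est)  # ≈ 6
```

The tails of S are the subtler part: heavy tails can sneak in through T even when each Xᵢ is well-behaved.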

📊 Today's weₐekly quiz: a 3-series ∑ edition! Nothing too fancy, maybe a cute probability proof in disguise somewhere.

Let's start: we all know—maybe love—the exponential function. For all k in ℝ, ∑ₙ kⁿ/n! = eᵏ. Fair enough... but what if we perturb things a bit?

1/5

— Clément Canonne (@ccanonne_) May 28, 2020
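A flavor of such perturbations (a guess at the style, not the quiz's actual questions): inserting polynomial weights into the exponential series gives ∑ₙ n·kⁿ/n! = k·eᵏ and ∑ₙ n²·kⁿ/n! = (k²+k)·eᵏ, via the derivative trick — or, "in disguise," the moments of a Poisson(k) random variable.

```python
import math

def perturbed_exp_sum(k, weight, terms=60):
    """Numerically evaluate sum_{n >= 0} weight(n) * k^n / n!
    (truncated; the tail is negligible for moderate k)."""
    return sum(weight(n) * k ** n / math.factorial(n) for n in range(terms))

k = 2.0
print(perturbed_exp_sum(k, lambda n: 1))      # e^k          ≈ 7.389
print(perturbed_exp_sum(k, lambda n: n))      # k e^k        ≈ 14.778
print(perturbed_exp_sum(k, lambda n: n * n))  # (k^2+k) e^k  ≈ 44.334
```

Dividing each identity by eᵏ reads as 𝔼[1], 𝔼[N], 𝔼[N²] for N ~ Poisson(k), which is the "cute probability proof in disguise."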

🧑‍🏫 Answers and discussions for yesterday's quiz on "series that kind of look like the exponential one."

If you couldn't see properly the questions (non-utf8er's gonna non-utf8), I had screenshots here: https://t.co/65TqGvrZh1

1/8 https://t.co/e7Y1ChYOAI

— Clément Canonne (@ccanonne_) May 29, 2020

📊 Here we go again: we{e,a}kly quiz. Let's have a look at two of our basic* friends, the Poisson 🐟 and Binomial 🐪 distributions. And what happens when they meet.

Just so we're all on the same page: X~Binom(n,p) if X=∑ₖ Xₖ (n terms)...

*"I'm not basic. THEY're basic."

1/6

— Clément Canonne (@ccanonne_) May 21, 2020

Answers and discussions for yesterday's quiz on Poisson 🐟, Binomial 🐪, and I guess 🐘 distributions.

Here's the plan: answers, discussion of what a "Poisson Binomial distribution" 🐡 is and what it's for, related references.

1/9 https://t.co/di7wk0I48I

— Clément Canonne (@ccanonne_) May 22, 2020
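A quantitative version of "what happens when they meet": Le Cam's theorem bounds the total variation distance between a Poisson Binomial with parameters p₁,…,pₙ and Poisson(∑pᵢ) by ∑pᵢ². A direct computation for the i.i.d. case Binomial(n, λ/n) vs Poisson(λ):

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(lam, k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def tv_binom_poisson(n, p, support=200):
    """TV distance between Binomial(n, p) and Poisson(np), computed over a
    truncated support (the tail mass beyond it is negligible here)."""
    return 0.5 * sum(
        abs(binom_pmf(n, p, k) - poisson_pmf(n * p, k))
        for k in range(min(n, support) + 1)
    )

n, p = 100, 0.02  # lambda = n*p = 2
tv = tv_binom_poisson(n, p)
print(tv)          # small
print(n * p ** 2)  # Le Cam's bound: TV <= sum p_i^2 = n p^2 = 0.04
```

So Binomial(n, λ/n) converges to Poisson(λ) in TV at rate λ²/n, which is the precise sense in which 🐟 approximates 🐪 for rare events.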

📊 It's time for a (short) installment of our weakly quiz. Setup: you are a *deterministic* algorithm 🤖 (sorry). You're given i.i.d. draws from some discrete distribution p over [n]={1,2,...,n}, *close-ish* to uniform. You want to output i.i.d. draws *closer* to uniform.

1/4

— Clément Canonne (@ccanonne_) May 14, 2020

📊 Answers for yesterday's quiz: "if you're given i.i.d. samples from something close* to uniform on n elements, can you deterministically combine them to get something more uniform**? ↴

* within TV distance ε
** within TV distance ε'

1/9 https://t.co/LYNwhDFNzS

— Clément Canonne (@ccanonne_) May 15, 2020
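For the special binary case n = 2 (the quiz is over general [n], where the answer may well differ), there is a clean deterministic combiner: XOR two independent samples, and the bias toward one outcome drops from δ to 2δ². An exact computation:

```python
def xor_bias(delta):
    """A bit with P(1) = 1/2 + delta (TV distance delta from uniform).
    XOR of two independent copies has
    P(1) = 2*(1/2 + delta)*(1/2 - delta) = 1/2 - 2*delta^2,
    i.e., TV distance to uniform shrinks from delta to 2*delta^2."""
    p1 = 0.5 + delta
    p_xor_is_1 = 2 * p1 * (1 - p1)
    return abs(p_xor_is_1 - 0.5)

print(xor_bias(0.1))   # 0.02   (down from 0.1)
print(xor_bias(0.01))  # 0.0002 (down from 0.01)
```

Iterating squares the bias each round, at the cost of doubling the number of samples consumed per output bit.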

📊 Here is a *semi non-technical* weekly quiz: a two-parter, first with an opinion poll, then a one-question quiz.

Q1: What would you like these quizzes (and/or this account) to focus on more in the future? 🙋

1/3

— Clément Canonne (@ccanonne_) April 30, 2020

Answers and discussions for yesterday's poll+quiz: thread below. ↴

First, the poll: random stuff, and probability distribution-related things win the day. I won't promise puns will disappear, though. As a TCS person, I have promise problems.

1/6 https://t.co/iAMBxcjHbo

— Clément Canonne (@ccanonne_) May 1, 2020

📊It's Thursday, time for a we[ae]kly quiz! Let's talk about juntas. The non-military kind.
(As usual, questions below, answers tomorrow)

What is a junta? Basically, a very misleading function. For simplicity, think of f:{0,1}ⁿ→{0,1} (the def generalizes to f:Σⁿ→ℝ)...

1/7

— Clément Canonne (@ccanonne_) April 23, 2020

⌛Time for the answers to yesterday's quiz on junta functions*! See thread below. ↴

* "Technically n variables, morally k ≪ n"

1/12 https://t.co/x4QISb3Kyn

— Clément Canonne (@ccanonne_) April 24, 2020

Weakly weekly quiz, new installment! I'm assuming everyone is very busy with either the #FOCS2020 deadline, the #ICML2020 reviews, or the current global health crisis and juggling with 5 toddlers & 7 Zoom online classes, so I'll keep it short.

Adaptivity 🗘 and testing 🔎.

1/7

— Clément Canonne (@ccanonne_) April 9, 2020

📚 Here are (a day late) the answers to this week's quiz on adaptivity 🗘 in property testing 🥚. A thread ↴

1/10 https://t.co/1pcUXJ05Dt

— Clément Canonne (@ccanonne_) April 11, 2020

🗒️ Today's quiz will be short, and focus on a very useful primitive called "hypothesis selection."

Say you've got, as usual, data points coming from some unknown probability distribution p over some domain 𝓧. And you have a list of possible 'models' q₁, q₂,..., qₙ.

1/6

— Clément Canonne (@ccanonne_) March 26, 2020

Answers and discussions for yesterday's quiz on hypothesis selection: a thread.

The problem: you have data from some unknown probability distribution p, a list of n hypotheses q₁,..., qₙ. Assuming one of the qₜ's is good, find one 'not too bad.'

1/12 https://t.co/09lOKAy0Cl

— Clément Canonne (@ccanonne_) March 27, 2020
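A deliberately simplified selector in that spirit (the sharper standard tool is the Scheffé tournament, which compares hypotheses pairwise; this toy version just picks the hypothesis closest in TV to the empirical distribution, and its names and parameters are illustrative):

```python
import random

def empirical(samples, k):
    """Empirical distribution of samples over {0, ..., k-1}."""
    counts = [0] * k
    for s in samples:
        counts[s] += 1
    return [c / len(samples) for c in counts]

def tv(p, q):
    """Total variation distance between two distributions on [k]."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def min_distance_select(samples, hypotheses, k):
    """Toy minimum-distance selector: return the index of the hypothesis
    closest in TV to the empirical distribution of the samples."""
    emp = empirical(samples, k)
    return min(range(len(hypotheses)), key=lambda i: tv(hypotheses[i], emp))

rng = random.Random(5)
k = 4
p = [0.7, 0.1, 0.1, 0.1]          # true (unknown) distribution
hypotheses = [
    [0.25, 0.25, 0.25, 0.25],     # uniform
    [0.7, 0.1, 0.1, 0.1],         # the good hypothesis
    [0.1, 0.7, 0.1, 0.1],         # a permuted decoy
]
samples = rng.choices(range(k), weights=p, k=5000)
print(min_distance_select(samples, hypotheses, k))  # 1
```

The Scheffé approach improves on this by never needing the full empirical distribution, only empirical masses of the pairwise "Scheffé sets," which is what makes it work over large domains.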

📊 The weakly weekly quiz is back: in today's edition, "learning and testing, how are they related?" Specifically, we're going to look at functions f: {0,1}ⁿ→{0,1} and probability distributions p over the domain [k] of size k.

Hope you find that enjoyable. 1/11

— Clément Canonne (@ccanonne_) March 19, 2020

Answers and discussions for yesterday's quiz on learning v. testing Boolean functions and probability distributions: a thread. 🧵 https://t.co/dG0N5anxMW

1/16

— Clément Canonne (@ccanonne_) March 20, 2020

Quiz of the week: I'm slightly jetlagged, so it'll be a bit short. Let's talk about 📉*monotone* 📈 probability distributions (univariate, discrete).
That is, you have a distribution p over [k] = {1,2,...,k}, and you know its probability mass function is (wlog) non-increasing.
1/6

— Clément Canonne (@ccanonne_) March 5, 2020

Answers and discussion for yesterday's polls on inference for (discrete) monotone distributions 📈📉 below. I hope you won't find it too... monotonous↴
1/16 https://t.co/Tv1AdqRh7e

— Clément Canonne (@ccanonne_) March 7, 2020

📊The past two weeks, we talked about learning (density estimation) of discrete distributions. Now, let's look at the flip side, and see a bit of what happens w/ *testing* (goodness-of-fit, hypothesis testing,...).
Throughout, our distance measure will be total variation (TV).
1/8

— Clément Canonne (@ccanonne_) February 27, 2020

Answers and discussion for yesterday's polls on testing 📊🐘 below. Some things, I'd say, are *far* from obvious.↴ https://t.co/gueiqlGBYA
1/14

— Clément Canonne (@ccanonne_) February 28, 2020

So last week was density estimation (distribution learning) of distributions 📊 over a domain of size k in total variation, Hellinger, and KL distances. Now you may wonder: "What about the rest?"
Let's see some of the rest: ℓ₂, ℓ_∞, and Kolmogorov.

First: "Kolmogorov"?
1/9

— Clément Canonne (@ccanonne_) February 20, 2020

Answers and discussion for yesterday's poll on "density estimation in other distances, such as Kolmogorov" below. The answer MAY surprise you (with probability ≥ 0.428). ↴
1/9 https://t.co/6jo6kAhKtT

— Clément Canonne (@ccanonne_) February 21, 2020

📊 Learning discrete distributions over a finite domain, a short thread.↴
Given n i.i.d. samples from some unknown p over a domain of size k and parameters ε,δ, you want to output some hypothesis q s.t.
d(p,q) < ε
with probability at least 1-δ. How large must n be?
1/8

— Clément Canonne (@ccanonne_) February 15, 2020

📊 So, the results... Learning discrete distributions over a finite domain of size k to distance ε, with probability 1-δ: how hard can it be?
1/9 https://t.co/AANZoxkm95

— Clément Canonne (@ccanonne_) February 16, 2020
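The classical upper bound here is achieved by the empirical distribution itself, with n = O((k + log(1/δ))/ε²) samples for TV error ε. A seeded check that the TV error indeed lands around √(k/n) (the skewed true distribution below is just an arbitrary example):

```python
import random

def learn_empirical(samples, k):
    """The empirical distribution: the classical learner achieving TV
    error eps with n = O((k + log(1/delta)) / eps^2) samples."""
    counts = [0] * k
    for s in samples:
        counts[s] += 1
    return [c / len(samples) for c in counts]

def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

rng = random.Random(11)
k = 10
weights = [i + 1 for i in range(k)]        # an arbitrary skewed p
total = sum(weights)
p = [w / total for w in weights]

n = 10000  # roughly k / eps^2 for eps around 0.03
q = learn_empirical(rng.choices(range(k), weights=p, k=n), k)
print(tv(p, q))  # small, on the order of sqrt(k/n) ≈ 0.03
```

The matching lower bound says no learner can do better than Ω(k/ε²) in the worst case, so the naive estimator is optimal up to constants.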

Test your intuition (or, in my case, lack thereof): hypothesis testing in high dimensions. You are given n i.i.d. samples from an unknown multivariate Gaussian p = N(μ,Σ) in ℝ^d. You want to know if p is *the* standard Gaussian N(0,I). 1/8

— Clément Canonne (@ccanonne_) October 27, 2019

So, the answers:
- Q1 and Q2 both have the same answer, √d/ε².
That they coincide follows from the discussed equivalence between the TV distance to N(0,I) and the ℓ₂ norm of the mean.

Learning the mean to ε in ℓ₂ would take d/ε² samples. But estimating ||μ||₂ is quadratically cheaper! 1/8

— Clément Canonne (@ccanonne_) October 28, 2019
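One way to see where the quadratic saving comes from (assuming identity covariance, as in the quiz): for independent X, X' ~ N(μ, I), the inner product ⟨X, X'⟩ is an unbiased estimator of ‖μ‖₂², since all cross terms with the noise vanish in expectation. So one can estimate the norm without learning μ coordinate by coordinate. A seeded illustration:

```python
import random

rng = random.Random(9)
d = 50
mu = [0.2] * d  # ||mu||_2^2 = 50 * 0.04 = 2

def sample(rng):
    """One draw from N(mu, I)."""
    return [m + rng.gauss(0.0, 1.0) for m in mu]

# <X, X'> for independent X, X' ~ N(mu, I) has expectation ||mu||_2^2:
# writing X = mu + Z, X' = mu + Z', all terms involving Z or Z' average out.
trials = 4000
est = sum(
    sum(a * b for a, b in zip(sample(rng), sample(rng)))
    for _ in range(trials)
) / trials
print(est)  # ≈ 2 = ||mu||_2^2
```

The variance of a single ⟨X, X'⟩ scales like d, which (after averaging) is where the √d/ε² sample complexity of testing, rather than the d/ε² of learning, ultimately comes from.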