Columbia computer scientists are presenting four papers at this week’s 47th
Symposium on the Theory of Computing:
On the Complexity of Nash Equilibria in Anonymous Games
Xi Chen (Columbia University), David Durfee (Georgia Institute of Technology), Anthi Orfanou (Columbia University)
The celebrated theorem of John Nash states that every game has an equilibrium point. During the past decade, computer theorists have investigated the computational complexity of finding exact or approximate Nash equilibria in various types of games. For anonymous games with a bounded number of strategies, an impressive sequence of papers by Constantinos Daskalakis and Christos Papadimitriou shows that an approximate Nash equilibrium can be computed efficiently. (Anonymous games are an important class of multiplayer games in which there are no special players; a single player’s payoff depends only on the player’s own strategy and the number, but not the identity, of other players playing each strategy. An example might be someone’s choice of whether to attend a departmental meeting: if enough people attend, no matter who they may be, others may also decide to attend to avoid being conspicuously absent.)
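To make the anonymity condition concrete, here is a minimal Python sketch of a payoff function for the meeting example. The strategy names, attendance threshold, and payoff values are invented for illustration and are not taken from the paper.

```python
from collections import Counter

# Toy anonymous game: each player chooses to "attend" or "skip" a meeting.
# A player's payoff depends only on her own strategy and on HOW MANY of the
# other players chose each strategy, never on WHO they are. All numbers
# below are made up for illustration.

def payoff(own_strategy, others):
    """Payoff for one player, given the list of the other players' strategies."""
    counts = Counter(others)              # anonymity: only the counts matter
    attendees = counts["attend"]
    if own_strategy == "attend":
        return 1.0                        # attending has a fixed, modest value
    # Skipping is pleasant when the meeting is sparse, but costly (one looks
    # conspicuously absent) once enough other people attend.
    return 2.0 if attendees < 5 else -1.0

# Anonymity check: permuting the other players never changes the payoff.
others = ["attend"] * 6 + ["skip"] * 3
assert payoff("skip", others) == payoff("skip", list(reversed(others)))
print(payoff("attend", others), payoff("skip", others))  # 1.0 -1.0
```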
However, whether an exact Nash equilibrium (or an approximate one with exponentially high accuracy) can be computed efficiently in such games remained open, though it was conjectured to be hard. In “On the Complexity of Nash Equilibria in Anonymous Games,” Columbia computer scientists Xi Chen and Anthi Orfanou, along with David Durfee (Georgia Institute of Technology), prove that computing an exact equilibrium in anonymous games with a bounded number of strategies is indeed hard, and that the results obtained by Daskalakis and Papadimitriou cannot be extended to find a highly accurate Nash equilibrium.
While the paper’s result came as no surprise, it was not immediately obvious how to prove it: such a proof requires embedding some other problem known to be hard, and the highly symmetric payoff structure of anonymous games makes embeddings difficult. Chen and his coauthors were, however, able to focus on a carefully chosen class of anonymous games in which different players play a particular strategy with probabilities of very different scales. This structure allowed them to analyze the Nash equilibria of these games and carry out the embedding. The method may prove useful in investigating the computational complexity of other types of games.
Boolean function monotonicity testing requires (almost) n^{1/2} non-adaptive queries
Xi Chen (Columbia University), Anindya De (IAS), Rocco A. Servedio (Columbia University), Li-Yang Tan (UC Berkeley)
A natural framework for many types of data sets is to view them as a collection of (input, output) pairs. It is often important to know whether a data set is monotone or not, in the following sense: does increasing the input value always cause the output value to increase? Or can it sometimes cause the output to increase and sometimes to decrease? The standard way to check a data set for monotonicity is to scan through all the (input, output) pairs. However, this brute-force method is too inefficient for today’s massive high-dimensional data sets, for which it is computationally too expensive to make even a single complete pass over the data.
In some situations, it suffices to know only whether a data set is monotone versus far from monotone. In this case, the key question becomes how many queries are needed to differentiate the two cases. The 1998 paper “Testing Monotonicity” (by Oded Goldreich, Shafi Goldwasser, Eric Lehman, and Dana Ron) gives an algorithm that requires only n queries for n-dimensional data sets (with 2^n binary data entries), setting an upper bound on the query complexity of testing monotonicity of n-variable Boolean functions. Subsequent work by Deeparnab Chakrabarty and C. Seshadhri recently pushed the number of queries down from n to n^{7/8}.
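For a sense of what a query-based tester looks like, here is a minimal Python sketch in the spirit of the classic “edge tester” from the Goldreich et al. line of work: it queries the function on randomly chosen hypercube edges and rejects if any edge is violated. The query budget and the example functions are illustrative, and this is not the algorithm of Chakrabarty and Seshadhri nor the subject of the lower bound discussed next.

```python
import random

def edge_tester(f, n, num_queries):
    """Non-adaptive monotonicity tester for f: {0,1}^n -> {0,1}.

    Samples random hypercube edges (lo, hi), where lo and hi differ in one
    coordinate, and rejects if f(lo) > f(hi). Monotone functions are always
    accepted; functions far from monotone are rejected with probability
    growing with num_queries.
    """
    for _ in range(num_queries):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1               # the edge in direction i
        if f(tuple(lo)) > f(tuple(hi)):   # violation: output drops as input grows
            return "reject"
    return "accept"

n = 15
majority = lambda x: int(sum(x) > n // 2)   # monotone
negated = lambda x: 1 - majority(x)         # far from monotone
print(edge_tester(majority, n, 1000))       # accept
print(edge_tester(negated, n, 1000))        # reject, with high probability
```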
In the paper “Boolean function monotonicity testing requires (almost) n^{1/2} non-adaptive queries,” Xi Chen, Anindya De (IAS), Rocco A. Servedio, and Li-Yang Tan (UC Berkeley) study lower bounds on the number of queries required for monotonicity testing. Their paper proves a lower bound of n^{1/2-c}, for any positive constant c, on the query complexity of two-sided error non-adaptive algorithms for testing whether an n-variable Boolean function is monotone versus far from monotone. The bound of n^{1/2} is also conjectured to be tight. Their result represents an improvement over the previous lower bound of n^{1/5} by Chen, Servedio, and Tan in 2014.
Update: After this paper was announced, a new result by Subhash Khot, Dor Minzer, and Muli Safra improves the upper bound to n^{1/2} as conjectured, matching the lower bound of Chen, De, Servedio, and Tan.
Indistinguishability Obfuscation for Turing Machines with Unbounded Memory
Venkata Koppula (University of Texas at Austin), Allison Bishop (Columbia University), Brent Waters (University of Texas at Austin)
Indistinguishability obfuscation is the process of encrypting software so that it performs its function without the software itself being intelligible. It was long thought to be impossible, but in 2013 six researchers (Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters) published a paper describing the first plausible candidate indistinguishability obfuscator, built from a new type of mathematical structure called multilinear maps. This candidate allows for the obfuscation of any polynomial-size circuit.
While circuits represent a general model of computation, they have the drawback that the size of a circuit description of a computation is proportional to the computation’s running time, which may be far larger than the size of the computation’s description in a different form.
“Indistinguishability Obfuscation for Turing Machines with Unbounded Memory,” co-authored by Allison Bishop along with Venkata Koppula and Brent Waters (both University of Texas at Austin), replaces circuits (where the input length is fixed) with Turing machines to show conceptually what can be done for varying input sizes. The proof of concept described in the paper shows that obfuscation time grows polynomially with the machine description size |M|, as opposed to its worst-case running time T. In addition to being an application in its own right, such a construction paves the way for other applications, such as delegation of computation.
Faster Canonical Forms for Primitive Coherent Configurations
Xiaorui Sun (Columbia University) and John Wilmes (University of Chicago)
Given two graphs with the same number of nodes and edges but arranged differently, how efficiently can an algorithm determine whether the two graphs are isomorphic? The problem looks innocent, and there are many different heuristics for finding features that might differentiate two graphs. In practice these heuristics run efficiently, but it cannot be proved they always give the right answer in polynomial time. This graph isomorphism problem is notorious for its unresolved complexity status, and there has been no improvement on the exp(Õ(n^{1/2}))-time algorithm obtained more than three decades ago.
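One of the best-known such heuristics is color refinement (the one-dimensional Weisfeiler-Leman method), sketched below in Python. Differing color histograms certify that two graphs are not isomorphic, but equal histograms are inconclusive; the heuristic famously fails on regular graphs, for example. This is a standard textbook routine, not the algorithm of the paper discussed next.

```python
def color_refinement(adj):
    """Color refinement (1-dimensional Weisfeiler-Leman) on an adjacency list.

    Repeatedly recolor every vertex by its current color together with the
    multiset of its neighbors' colors, until the coloring stabilizes. The
    sorted list of final colors is an isomorphism invariant: isomorphic
    graphs always produce the same list.
    """
    n = len(adj)
    colors = [0] * n
    for _ in range(n + 1):                 # a safe bound on the rounds needed
        signatures = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in range(n)]
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        new_colors = [palette[sig] for sig in signatures]
        if new_colors == colors:           # coloring is stable
            break
        colors = new_colors
    return sorted(colors)

# A path and a star on 4 vertices: same numbers of nodes and edges, but the
# color histograms differ, so the graphs cannot be isomorphic.
path = [[1], [0, 2], [1, 3], [2]]
star = [[1, 2, 3], [0], [0], [0]]
print(color_refinement(path) != color_refinement(star))  # True
```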
In “Faster Canonical Forms for Primitive Coherent Configurations,” Xiaorui Sun (Columbia University) and John Wilmes (University of Chicago), both PhD students, look at a special family of graphs, primitive coherent configurations (PCCs), which are edge-colored digraphs with certain regularity structures. Sun and Wilmes develop a new combinatorial structure theory for PCCs that in particular demonstrates the presence of “asymptotically uniform clique geometries” in PCCs within a certain range of parameters. Their results imply a faster algorithm for isomorphism testing of PCCs, with running time exp(Õ(n^{1/3})), and the same upper bound on the number of automorphisms of PCCs, making the first progress in 33 years on a 1982 conjecture of Babai.