For Fall 2008, the usual time for the meetings will be Fridays at 11:00am in the CS conference room, CSB 453. Abstracts for talks are given below the talk schedule.
Abstract: We will discuss a general framework for showing lower bounds on communication complexity based on matrix norms and approximation norms, where the approximation norm of a target matrix is the minimum norm of any matrix entrywise close to it. An advantage of this approach is that one can use duality theory to obtain a lower bound quantity phrased as a maximization problem, which can be more convenient to work with when showing lower bounds.
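As a rough sketch of the quantities involved (our notation, not the talk's): for a norm $\Phi$ on matrices and a target matrix $A$, the approximation norm is

    $\tilde{\Phi}_\epsilon(A) = \min \{ \Phi(B) : |A_{ij} - B_{ij}| \le \epsilon \text{ for all } i,j \}$,

and since the feasible set is convex, a minimax/duality argument turns this minimization into a maximization over witness matrices, roughly

    $\tilde{\Phi}_\epsilon(A) = \max_{W \ne 0} \frac{\langle A, W \rangle - \epsilon \|W\|_1}{\Phi^*(W)}$,

where $\Phi^*$ is the dual norm and $\|W\|_1$ is the entrywise $\ell_1$ norm. Exhibiting any single witness $W$ then certifies a lower bound.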
Time permitting, we will discuss two applications of this approach.
1. One of the strongest lower bound techniques for randomized and quantum communication complexity is approximation rank, the minimum rank of a matrix which is entrywise close to the target matrix. We show that an approximation norm known as gamma_2 (a definition is sketched after this list) is polynomially related to approximation rank. This gives a polynomial-time algorithm to approximate approximation rank, and also shows that approximation rank lower bounds quantum communication with shared entanglement, as gamma_2 does.
2. We show how the norm framework naturally generalizes to the case of multiparty complexity, and how an approximation norm for multiparty number-on-the-forehead complexity was recently used, in combination with techniques developed by Sherstov and Chattopadhyay, to show nontrivial lower bounds on the disjointness function for up to c log log n players, for a constant c < 1.
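For reference, the gamma_2 norm mentioned in item 1 has the following standard definition (background we are supplying; the abstract itself does not spell it out):

    $\gamma_2(A) = \min_{X,Y \,:\, XY = A} \big( \max_i \|X_{i*}\|_2 \big) \big( \max_j \|Y_{*j}\|_2 \big)$,

the best bound achievable by factoring $A$ into matrices with short rows and short columns. The approximation version $\tilde{\gamma}_2$ takes this minimum over all matrices entrywise close to $A$, exactly as in the framework above.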
This talk surveys joint work with Adi Shraibman.
Abstract: We study the approximation of multiobjective optimization problems. We propose the concept of $\epsilon$-convex Pareto ($\epsilon$-CP) set as the appropriate one for the convex setting, and observe that it can offer arbitrarily more compact representations than $\epsilon$-Pareto sets in this context.
We characterize when an $\epsilon$-CP set can be constructed in polynomial time in terms of an efficient routine $\textrm{Comb}$ for optimizing (exactly or approximately) monotone linear combinations of the objectives.
We investigate the problem of computing minimum-size $\epsilon$-convex Pareto sets, both for discrete (combinatorial) and continuous (convex) problems, and present general algorithms using a $\textrm{Comb}$ routine. For bi-objective problems, we show that if we have an exact $\textrm{Comb}$ optimization routine, then we can compute the minimum $\epsilon$-CP set for continuous problems (this applies, for example, to bi-objective Linear Programming and Markov Decision Processes), and a factor-2 approximation to the minimum $\epsilon$-CP set for discrete problems (this applies, for example, to bi-objective versions of polynomial-time solvable combinatorial problems such as Shortest Paths, Spanning Tree, etc.). If we have an approximate $\textrm{Comb}$ routine, then we can compute factor-3 and factor-6 approximations, respectively, to the minimum $\epsilon$-CP set for continuous and discrete bi-objective problems. We also consider the case of three or more objectives and present some upper and lower bounds.
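To make the role of the $\textrm{Comb}$ routine concrete, here is a minimal hypothetical sketch for the bi-objective case. The interface comb(w1, w2), the recursion on chords, and the stopping rule are our own illustrative choices under stated assumptions, not the paper's algorithms.

    # Hypothetical sketch: building an epsilon-convex Pareto set for a
    # bi-objective minimization problem from an exact Comb routine.
    # comb(w1, w2) is assumed to return the objective vector (f1, f2) of a
    # feasible solution minimizing w1*f1 + w2*f2; objective values are
    # assumed positive, so the multiplicative (1 + eps) test is meaningful.
    def build_eps_cp(comb, eps):
        left = comb(1.0, 0.0)    # optimum of objective 1 alone
        right = comb(0.0, 1.0)   # optimum of objective 2 alone
        points = {left, right}

        def refine(p, q):
            # Weight vector normal to the chord from p to q (p upper-left,
            # q lower-right in objective space).
            w1, w2 = p[1] - q[1], q[0] - p[0]
            if w1 <= 0 or w2 <= 0:
                return
            chord_val = w1 * p[0] + w2 * p[1]   # equals w1*q[0] + w2*q[1]
            r = comb(w1, w2)
            opt_val = w1 * r[0] + w2 * r[1]
            # Keep r only if it beats the chord by more than a (1+eps) factor.
            if opt_val * (1.0 + eps) < chord_val:
                points.add(r)
                refine(p, r)
                refine(r, q)

        refine(left, right)
        return points

The returned set consists of extreme points whose convex hull, scaled by (1 + eps), dominates every point the exact $\textrm{Comb}$ routine can find.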
Joint work with Mihalis Yannakakis.
Abstract: A social network - the graph of relationships and interactions within a group of individuals - plays a fundamental role as a medium for the spread of information, ideas, influence, or diseases among its members. An idea or innovation will appear, and it can either die out quickly or make significant inroads into the population. Similarly, an infectious disease may either affect a large share of the population, or be confined to a small fraction.
The collective behavior of individuals and the spread of diseases in a social network have a long history of study in sociology and epidemiology. In this talk, we will investigate graph-theoretic optimization problems relating to the spread of information or diseases. Specifically, we will focus on two types of questions: influence maximization, wherein we seek to identify influential individuals with whom to seed an innovation so as to maximize the expected number of eventual adopters of the resulting cascade; and infection minimization, wherein we seek to remove nodes so as to keep a given infected component small.
We will present constant factor and bicriteria algorithms for versions of these problems, and also touch on many open problems and issues regarding competition among multiple innovators.
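As a concrete illustration of the influence maximization side, here is a minimal sketch of greedy hill-climbing under the independent cascade model, one standard model in this line of work; the graph encoding and the Monte Carlo parameters p and trials are illustrative assumptions, not choices from the talk.

    import random

    def simulate_cascade(graph, seeds, p):
        # One Monte Carlo run of the independent cascade model: each newly
        # activated node gets one chance to activate each neighbor, with
        # probability p. graph maps each node to a list of its neighbors.
        active, frontier = set(seeds), list(seeds)
        while frontier:
            new = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        new.append(v)
            frontier = new
        return len(active)

    def greedy_influence(graph, k, p=0.1, trials=200):
        # Greedy hill-climbing: repeatedly add the node giving the largest
        # estimated expected spread. Submodularity of the spread function
        # underlies the constant-factor (1 - 1/e) guarantee, up to the
        # sampling error of the Monte Carlo estimates.
        seeds = set()
        for _ in range(k):
            best, best_spread = None, -1.0
            for v in graph:
                if v in seeds:
                    continue
                est = sum(simulate_cascade(graph, seeds | {v}, p)
                          for _ in range(trials)) / trials
                if est > best_spread:
                    best, best_spread = v, est
            if best is None:
                break
            seeds.add(best)
        return seeds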
(This talk represents joint work with Jon Kleinberg, Eva Tardos, Elliot Anshelevich, Shishir Bharathi, Ara Hayrapetyan, Martin Pal, Mahyar Salek, and Zoya Svitkina.)
Abstract: This talk describes new explicit constructions of dispersers for affine sources of dimension below the notorious n/2 threshold. The main novelty in our construction lies in the method of proof, which relies (solely) on elementary properties of linearized polynomials. In this respect we differ significantly from previous solutions to the problem, due to [Barak et al. 2005] and [Bourgain 2007]. These two breakthrough results used recent sum-product theorems over finite fields, whereas our analysis relies on properties of linearized polynomials that have been well known since the work of Ore in the 1930s.
Definition of affine dispersers: A disperser for affine sources of dimension $d$ is a function $Disp: F_2^n \to F_2$ that is nonconstant on every affine subspace of dimension greater than $d$. Formally, for every affine subspace $S \subseteq F_2^n$ with $\dim(S) > d$, we have $\{Disp(s) : s \in S\} = \{0,1\}$.
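For background on the key tool named above (standard facts going back to Ore, supplied by us rather than the abstract): a linearized polynomial over $F_{2^n}$ is one of the form

    $L(x) = \sum_{i=0}^{k} c_i x^{2^i}$, with $c_i \in F_{2^n}$.

Because squaring is additive in characteristic 2, $L(x+y) = L(x) + L(y)$, so $L$ is an $F_2$-linear map; in particular its set of roots in $F_{2^n}$ is an $F_2$-subspace, and a nonzero $L$ of this form has at most $2^k$ roots.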
Joint work with Swastik Kopparty.
Abstract: We study the point-to-point shortest path problem in a setting where preprocessing is allowed. The two main techniques we address are ALT and REACH. ALT is A* search with lower bounds based on landmark distances and triangle inequality. The REACH approach of Gutman precomputes locality values on vertices and uses them to prune the search.
We improve on REACH in several ways. In particular, we add shortcut arcs which reduce vertex reaches. Our modifications greatly reduce both preprocessing and query times. Our algorithm also combines naturally with ALT, yielding significantly better query times and allowing a wide range of time-space trade-offs.
The resulting algorithms are quite practical for our motivating application, computing driving directions, both for server and for portable device applications. The ideas behind our algorithms are elegant and may have other applications.
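To make the ALT idea concrete, here is a minimal hypothetical sketch (our own illustration, not the authors' code), assuming an undirected graph with nonnegative arc lengths and precomputed distances from a few landmarks.

    import heapq

    def alt_query(graph, landmark_dists, source, target):
        # A* search whose heuristic comes from precomputed landmark
        # distances: for any landmark L, d(L,t) - d(L,v) <= d(v,t) by the
        # triangle inequality, so the max over landmarks of
        # |d(L,t) - d(L,v)| is a valid lower bound on an undirected graph.
        # graph maps u to a list of (v, w) arcs; landmark_dists is a list
        # of dicts, each mapping every vertex to its distance from one
        # landmark (assumed precomputed).
        def h(v):
            return max(abs(d[target] - d[v]) for d in landmark_dists)

        dist, done = {source: 0}, set()
        pq = [(h(source), source)]
        while pq:
            _, u = heapq.heappop(pq)
            if u == target:
                return dist[u]
            if u in done:
                continue
            done.add(u)
            for v, w in graph.get(u, []):
                nd = dist[u] + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd + h(v), v))
        return None    # target unreachable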
(Joint work with Haim Kaplan and Renato Werneck)
Abstract: Direct Product Theorems are formal statements of the intuition: "if solving one instance of a problem is hard, then solving multiple instances is even harder". For example, a Direct Product Theorem with respect to bounded-size circuits computing a function is a statement of the form: "if a function f is hard to compute on average for small size circuits, then f^k(x_1, ..., x_k) = f(x_1), ..., f(x_k) is even harder on average for certain smaller size circuits". The proof of such a statement is by contradiction: we start with a circuit which computes f^k on some non-negligible fraction of the inputs and then use this circuit to construct another circuit which computes f on almost all inputs. By viewing such a constructive proof as decoding a certain error-correcting code, it was independently observed by Trevisan and Impagliazzo that constructing a single circuit is not possible in general. Instead, we can only hope to construct a list of circuits such that one of them computes f on almost all inputs. This makes the list size an important parameter of the Theorem, one which we seek to minimize. We achieve the optimal list size, a substantial improvement over previous proofs of the Theorem. In particular, this new version can be applied to uniform models of computation (e.g., randomized algorithms), whereas all previous versions applied only to nonuniform models (e.g., circuits).
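Schematically, and in our own paraphrase with parameters suppressed: if every circuit of size $s$ computes $f$ correctly on at most a $1-\delta$ fraction of inputs, then every circuit of a certain smaller size computes $f^k$ correctly on at most an $\epsilon \approx (1-\delta)^{\Omega(k)}$ fraction of tuples $(x_1, ..., x_k)$; the constructive proof, given a circuit achieving success $\epsilon$ on $f^k$, outputs a list of candidate circuits one of which computes $f$ on almost all inputs, with list size on the order of $1/\epsilon$.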
Consider the following stronger and more general version of the previous Direct Product Theorem statement: "if a problem is hard to solve on average, then solving more than the expected fraction of problem instances from a pool of multiple independently chosen instances becomes even harder". Such statements are useful in cryptographic settings where the goal is to amplify the gap between the ability of legitimate users to solve a type of problem and that of attackers. We call such statements "Chernoff-type Direct Product Theorems" and prove such a statement for a very general setting.
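In the same schematic terms (again our paraphrase rather than the talk's precise statement): if $f$ is hard to compute on more than a $1-\delta$ fraction of inputs, then for any $\gamma > 0$, a resource-bounded attacker given $k$ independent instances solves more than $(1-\delta+\gamma)k$ of them with probability at most roughly $2^{-\Omega(\gamma^2 k)}$; this matches the Chernoff bound obeyed by the naive attacker that handles each instance independently.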
Comments on this page are welcome; please send them to rocco-at-cs.columbia.edu
Last updated 09/02/2008.