For Fall 2010, the usual meeting time is Thursdays, 10:30-12:00, in the CS conference room, CSB 453. Abstracts for the talks are given below the talk schedule.

Ilias Diakonikolas

Columbia University

**Abstract:**
We study optimization problems with multiple objectives. Such problems
are pervasive across many diverse disciplines -- economics,
engineering, healthcare, and biology, to name but a few -- and heuristic
approaches to solve them have already been deployed in several areas, in
both academia and industry. Hence, there is a real need for a rigorous
investigation of the relevant questions.

In such problems we are interested not in a single optimal solution, but in the tradeoff between the different objectives. This is captured by the tradeoff or Pareto curve: the set of all feasible solutions whose vector of objective values is not dominated by any other solution. Typically, we have a small number of objectives and we wish to plot the tradeoff curve to get a sense of the design space. Unfortunately, the tradeoff curve typically has exponential size for discrete optimization problems, even for two objectives (and is typically infinite for continuous problems). Hence, a natural goal in this setting is, given an instance of a multiobjective problem, to efficiently obtain a "good" approximation to the entire solution space with "few" solutions. This has been the underlying goal in much of the research in the multiobjective area, with many heuristics proposed for this purpose -- typically, however, without any performance guarantees or complexity analysis.

We develop efficient algorithms for the succinct approximation of the Pareto set for a large class of multiobjective problems. First, we investigate the problem of computing a minimum set of solutions that approximates, within a specified accuracy, the Pareto set of a multiobjective optimization problem. We provide approximation algorithms with tight performance guarantees for biobjective problems and make progress on the more challenging case of three and more objectives. Subsequently, we propose and study the approximate convex Pareto set: a novel notion of approximation to the Pareto set, appropriate for the convex setting. We characterize when such an approximation can be efficiently constructed and investigate the problem of computing minimum-size approximate convex Pareto sets, both for discrete and convex problems. Next, we turn to the problem of approximating the Pareto set as efficiently as possible. To this end, we analyze the Chord algorithm, a popular, simple method for the succinct approximation of curves that is widely used, under different names, in a variety of areas such as multiobjective and parametric optimization, computational geometry, and graphics.
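As a concrete illustration of the objects involved (not of the talk's algorithms), here is a minimal Python sketch of the Pareto front and a greedy (1+eps)-approximate Pareto set for a finite biobjective point set, assuming both objectives are minimized. The point data and the greedy rule are purely illustrative, and the greedy cover makes no minimality claim:

```python
def pareto_front(points):
    """Return the non-dominated points: no other point is <= in both
    coordinates (and different), assuming both objectives are minimized."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

def eps_pareto(points, eps):
    """Greedy (1+eps)-approximate Pareto set: every point of the front
    (hence every feasible point) is (1+eps)-dominated by a kept point."""
    kept = []
    for p in sorted(pareto_front(points)):
        if not any(k[0] <= (1 + eps) * p[0] and k[1] <= (1 + eps) * p[1]
                   for k in kept):
            kept.append(p)
    return kept

pts = [(1, 9), (2, 7), (2.1, 6.9), (4, 4), (7, 2), (9, 1), (5, 5)]
print(pareto_front(pts))   # six non-dominated points; (5, 5) is dropped
print(eps_pareto(pts, 0.1))  # a smaller subset that (1.1)-covers the front
```

Even on this toy instance the approximate set is strictly smaller than the exact front, which is the effect the talk's minimum-size constructions quantify.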

Spyridon Antonakopoulos

Computing and Software Principles Research, Bell Labs

**Abstract:**
Given a network, a set of demands and a cost function f(.), the min-cost
network design problem is to route all demands with the objective of
minimizing the sum of f(l_e), where l_e is the traffic load on link e
under the routing. We focus on cost functions of the form f(x) = s +
x^a for x > 0, with f(0) = 0. For a <= 1, f(.) is subadditive and
exhibits behavior consistent with economies of scale. This problem
corresponds to the well-studied Buy-at-Bulk network design problem, for
which polylogarithmic approximation algorithms and matching hardness
results are known.

In this paper, we focus on the less studied scenario of a > 1 with a positive startup cost s > 0. Now, the cost function f(.) is neither subadditive nor superadditive. This is motivated by minimizing network-wide energy consumption when supporting a set of traffic demands. It is commonly accepted that, for some computing and communication devices, doubling processing speed more than doubles the energy consumption. Hence, in Economics parlance, such a cost function reflects diseconomies of scale.
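A quick numeric check (with illustrative values s = 1 and a = 2, not taken from the paper) shows why such a cost function is neither subadditive nor superadditive: the startup cost dominates at small loads, while the superlinear term dominates at large loads:

```python
# Energy-style cost: f(0) = 0 and f(x) = s + x**a for x > 0,
# with startup cost s > 0 and exponent a > 1 (here s = 1, a = 2).
def f(x, s=1.0, a=2.0):
    return 0.0 if x == 0 else s + x ** a

# Subadditivity would require f(x + y) <= f(x) + f(y) for all x, y;
# superadditivity the reverse.  Both fail:
print(f(0.1) + f(0.1), f(0.2))  # small loads: aggregating traffic is cheaper
print(f(10) + f(10), f(20))     # large loads: splitting traffic is cheaper
```

This is exactly why neither Buy-at-Bulk techniques (built for economies of scale) nor purely convex-cost techniques apply directly.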

We begin by discussing why existing routing techniques such as randomized rounding and tree-metric embedding fail to generalize directly. We then present our main contribution, which is a polylogarithmic approximation algorithm. We obtain this result by first deriving a bicriteria approximation for a related capacitated min-cost flow problem that we believe is interesting in its own right. Our approach for this problem builds upon the well-linked decomposition due to Chekuri-Khanna-Shepherd, the construction of expanders via matchings due to Khandekar-Rao-Vazirani, and edge-disjoint routing in well-connected graphs due to Rao-Zhou. However, we also develop new techniques that allow us to keep a handle on the total cost, which was not a concern in the aforementioned literature.

This is joint work with Matthew Andrews and Lisa Zhang, and will appear in FOCS 2010.

Rocco Servedio

Columbia University

**Abstract:**
The goal of this talk is to provide background and context for the
upcoming October 21-22 workshop on "Analysis and Geometry of Boolean
Threshold Functions." The talk will survey a range of results -- some
old, some new -- on linear threshold functions and polynomial threshold
functions, to complement the results that will be presented at the
workshop.

A recurring theme of the talk will be that establishing bounds on various natural parameters of polynomial threshold functions (degree, average sensitivity, etc.) yields efficient algorithms for a range of well-studied problems in computational learning theory.
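As a small, self-contained illustration of the objects in question (the examples and the brute-force search are ours, not the talk's), the following sketch verifies by exhaustive search over small integer weights that majority on 3 bits is a linear threshold function, while 2-bit parity is not -- parity instead has polynomial threshold degree 2, since sign(x1*x2) = x1*x2:

```python
from itertools import product

def ltf(w, theta, x):
    """A linear threshold function over {-1,1}^n: sign(w . x - theta)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta > 0 else -1

def is_ltf(f, n, weight_range=range(-3, 4)):
    """Brute-force search for small integer weights/threshold realizing f."""
    for w in product(weight_range, repeat=n):
        for theta in weight_range:
            if all(ltf(w, theta, x) == f(x)
                   for x in product((-1, 1), repeat=n)):
                return True
    return False

maj = lambda x: 1 if sum(x) > 0 else -1   # majority on an odd number of bits
xor = lambda x: x[0] * x[1]               # 2-bit parity in {-1,1} encoding

print(is_ltf(maj, 3))   # majority is realized by weights (1,1,1), theta 0
print(is_ltf(xor, 2))   # no LTF computes parity
```

The search range is tiny, but for majority the witness (1, 1, 1) with threshold 0 lies inside it, and for parity no weights of any size work.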

Gilad Tsur

Tel-Aviv University

**Abstract:**
We initiate the study of testing properties of images that
correspond to *sparse* $0/1$-valued matrices of size $n\times n$.
Our study is related to but different from the study initiated by
Raskhodnikova (*Proceedings of RANDOM, 2003*), where the images
correspond to *dense* $0/1$-valued matrices. Specifically,
while distance between images in the model studied by
Raskhodnikova is the fraction of entries
on which the images differ taken with respect to
all $n^2$ entries, the distance measure in our model is defined by
the fraction of such entries taken with respect to the
actual number of $1$'s in the matrix.
We study several natural properties: connectivity, convexity, monotonicity,
and being a line. In all cases we give testing algorithms
with sublinear complexity, and in some of the cases we also provide
corresponding lower bounds.
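To make the two distance measures concrete, here is a small illustrative sketch (the exact normalization in the paper may differ in details, e.g. in how the number of 1's is chosen when the two images disagree on it):

```python
def differing_entries(A, B):
    """Number of entries on which two n x n 0/1 matrices differ."""
    n = len(A)
    return sum(A[i][j] != B[i][j] for i in range(n) for j in range(n))

def dense_distance(A, B):
    """Dense model (Raskhodnikova): differences over all n^2 entries."""
    n = len(A)
    return differing_entries(A, B) / (n * n)

def sparse_distance(A, B):
    """Sparse model: differences over the number of 1's in the image."""
    ones = sum(sum(row) for row in A)
    return differing_entries(A, B) / ones

A = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
B = [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(dense_distance(A, B))   # 1/16: removing one of two 1's barely registers
print(sparse_distance(A, B))  # 1/2: relative to the 1's, the change is large
```

The same one-entry change is negligible under the dense measure but constant under the sparse one, which is why sparse images need their own testing model.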

This is joint work with Dana Ron.

Dana Dachman-Soled

Columbia University

**Abstract:**
A fair two-party coin tossing protocol is one in which both parties
output the same bit that is almost uniformly distributed (i.e. it
equals 0 and 1 with probability that is at most negligibly far from
one half). It is well known that it is impossible to achieve fair coin
tossing even in the presence of fail-stop adversaries (Cleve, FOCS
1986). In fact, Cleve showed that for every coin tossing protocol
running for $r$ rounds, an efficient fail-stop adversary can bias the
output by $\Omega(1/r)$. Since this is the best possible, a protocol
that limits the bias of any adversary to $O(1/r)$ is called
optimally-fair. The only optimally-fair protocol that is known to
exist relies on the existence of oblivious transfer, because it uses
general secure computation (Moran, Naor and Segev, TCC 2009). However,
it is possible to achieve a bias of $O(1/\sqrt{r})$ in $r$ rounds
relying only on the assumption that there exist one-way functions.

In this paper we show that it is impossible to achieve optimally-fair coin tossing via a black-box construction from one-way functions for $r$ that is less than $O(n/\log n)$, where $n$ is the security parameter (i.e., input/output length) of the one-way function used. An important corollary is that it is impossible to construct an optimally-fair coin tossing protocol via a black-box construction from one-way functions whose round complexity is independent of the security parameter $n$ determining the security of the one-way function being used. Thus we put forward another barrier (alternative to Cleve's) on the number of rounds in an optimally-fair coin tossing protocol, depending on the primitive used in the construction. Our result extends to any primitive (used to construct optimally-fair coin tossing) which can be realized in a black-box manner from a random oracle; examples of such primitives are exponentially hard one-way functions and collision-resistant hash functions.

Joint work with Yehuda Lindell, Mohammad Mahmoody and Tal Malkin

Aaron Bernstein

Columbia University

**Abstract:**
We present a new technique for developing faster approximation algorithms
for several shortest-path related problems. Our inspiration comes from
graph sparsification: here, the idea is that instead of working with the
algorithms, we work with the graph itself. In particular, we can transform
an originally dense graph into a sparse one (at the cost of some approximation),
so that existing algorithms run faster. This is widely used in approximation
algorithms for shortest paths, but it is limited in scope because it cannot
help on sparse graphs.

To overcome this, we focus on another nice property that the transformed graph might possess. In particular, we present a technique for reducing the diameter of an arbitrary undirected graph (the diameter is defined as the maximum shortest distance in the graph, with the smallest edge weight normalized to 1). Put otherwise, we create an approximation to the original graph where all distances are small. Many variations of the shortest path problem admit extremely simple and efficient algorithms for small distances, so reducing the diameter and running these algorithms leads to significant improvements in running time.
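One simple instance of this phenomenon (an illustration of the general point, not the paper's algorithm) is Bellman-Ford-style relaxation cut off after D rounds: it is exact whenever every shortest path uses at most D edges, so its O(D*m) running time improves directly as the hop diameter shrinks:

```python
def bounded_hop_sssp(n, edges, src, max_hops):
    """Single-source shortest paths, exact when every shortest path
    from src uses at most max_hops edges.  Runs in O(max_hops * m)."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(max_hops):
        updated = False
        for u, v, w in edges:          # relax every directed edge once
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:                # converged early
            break
    return dist

# A small directed example: the direct edge 0 -> 2 of weight 3 loses to
# the two-hop path 0 -> 1 -> 2 of weight 2.
edges = [(0, 1, 1), (1, 2, 1), (0, 2, 3), (2, 3, 1)]
print(bounded_hop_sssp(4, edges, 0, max_hops=3))
```

On a graph whose diameter has been reduced, `max_hops` can be set small, which is the kind of savings the diameter-reduction technique is after.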

We present two concrete applications, in addition to many other problems that seem likely to benefit from this technique. The first application is to fully dynamic all-pairs shortest paths in weighted, undirected graphs [Bernstein, FOCS 2009]. In this problem we have a graph where edges are inserted and deleted one at a time, and our goal is to reconstruct the shortest paths after each edge change. The previous best approximation algorithm was a 3-approximation requiring O(m*sqrt(n)) time per edge change. We reduce this to a (2 + epsilon)-approximation requiring close to O(m) time per update (technically, O(m * 2^(sqrt(log n)))). We also present an improved approximation algorithm for the restricted shortest path problem.

Mahesh Viswanathan

University of Illinois at Urbana-Champaign

**Abstract:**
The continuous run-time monitoring of the behavior of a system is a
technique that is used both as a complementary approach to formal
verification and testing to ensure reliability, as well as a means to
discover emergent properties in a distributed system, like intrusion and
event correlation. The monitors in all these scenarios can be abstractly
viewed as automata that process an (unbounded) stream of events to and
from the component being observed, and raise an ``alarm'' when an error
or intrusion is discovered. These monitors indicate the absence of error
or intrusion in a behavior implicitly by the absence of an alarm.
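As a minimal illustration of this abstract view (the event names and the safety property are hypothetical), here is a deterministic two-state monitor that raises an alarm on a toy violation and is otherwise silent, implicitly indicating the absence of error:

```python
def monitor(events):
    """A finite-memory monitor: flag the first "request" event that
    arrives after a "shutdown".  Memory is a single two-valued state."""
    state = "running"
    for i, e in enumerate(events):
        if e == "shutdown":
            state = "down"
        elif e == "request" and state == "down":
            return f"alarm at event {i}"   # safety violation observed
    return None                            # silence: no error on this prefix

print(monitor(["request", "shutdown", "request"]))  # alarm at event 2
print(monitor(["request", "request", "shutdown"]))  # None
```

The talk asks what changes when such a monitor may additionally toss coins, i.e. when the transition out of each state is probabilistic rather than deterministic.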

In this talk, we will investigate the power of randomization in run-time monitoring. Specifically, we examine finite memory monitoring algorithms that toss coins to make decisions on the behavior they are observing. We give a number of results that characterize, topologically as well as with respect to their computational power, the sets of sequences the monitors permit. We will present the complexity of various decision problems and discuss the connections to a theory of omega-languages recognized by probabilistic automata.

This is joint work with Rohit Chadha, and A. Prasad Sistla.

Bio: Mahesh Viswanathan obtained his bachelor's degree in computer science from the Indian Institute of Technology at Kanpur in 1995, and his doctorate from the University of Pennsylvania in 2000. He was a post-doctoral fellow at DIMACS with a joint appointment with Telcordia Technologies in 2000-01. Since 2001, he has been on the faculty at the University of Illinois at Urbana-Champaign. His research interests are in the core areas of logic, automata theory, and algorithm design, with applications to the algorithmic verification of systems.

Isamu Teranishi

Columbia University and NEC

**Abstract:**
Key Dependent Message (KDM) secure encryption is an attractive new research area of cryptography that has been
extensively studied in the past few years. A KDM secure scheme w.r.t. a function set F provides security even if one
encrypts a key-dependent message f(sk) for any f \in F.
The main result of this work is the construction of an efficient public key encryption scheme which is
KDM secure with respect to quite a large function set F.
Proposing such a scheme is significant because, although KDM security has practical applications,
all previous works in the standard model are either feasibility results based on "bit by bit"
encryptions, or are limited to a small set of functions such as linear functions.

The efficiency of our scheme is dramatically improved compared to the previous feasibility results, and our function set is significantly richer than the set of linear functions. It contains all rational functions whose denominator and numerator have polynomially bounded degree and which can be computed by a Straight Line Program of polynomial length. We also put forth criteria for the adversarial settings in the KDM definition that include, in addition to the allowed set of functions, a set of flexibility parameters of the scheme. Our encryption technique can be dubbed ``Cascaded Paillier ElGamal''. Along the way we also suggest a ``triple mode proof framework'' as a general proof methodology for KDM-secure schemes.

Contact

There is a mailing list for the reading group. General information about the mailing list (including how to subscribe yourself to it) is available here. If you want to unsubscribe or change your options, send email to theoryread-request@lists.cs.columbia.edu with the word `help' in the subject or body (don't include the quotes), and you will get back a message with instructions.

Comments on this page are welcome; please send them to
`tal-at-cs.columbia.edu`

Last updated 11/02/2010.