Fall 2011

DISTINGUISHED LECTURE SERIES

The Laplacian Paradigm: Emerging Algorithms for Massive Graphs
Shang-Hua Teng
University of Southern California
Wednesday, September 28, 2011
ABSTRACT: In Computer Science and Applied Mathematics, we often represent a graph by its adjacency matrix. This linear algebraic view of graphs can be very useful. For example, it has facilitated the design of fast algorithms for the shortest path problem and the introduction of PageRank.
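As a concrete illustration of this linear algebraic view, here is a minimal Python sketch of PageRank computed by power iteration; the four-node graph, the damping factor 0.85, and the iteration count are illustrative assumptions, not examples taken from the talk.

    import numpy as np

    # Hypothetical 4-node directed graph; A[i, j] = 1 means an edge i -> j.
    # PageRank is the stationary distribution of a random surfer who follows
    # an outgoing link with probability d and teleports uniformly otherwise.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    n, d = A.shape[0], 0.85
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

    pr = np.full(n, 1.0 / n)               # start from the uniform distribution
    for _ in range(100):                   # power iteration
        pr = (1 - d) / n + d * (pr @ P)
    print(pr)                              # converged PageRank scores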

In this talk, I focus on the Laplacian matrix, a simple modification of the adjacency matrix. The Laplacian matrix has been extensively used for partitioning graphs that arise in VLSI design and high-performance computing. As the starting point of our discussion, I will show that every Laplacian matrix can be inverted efficiently; that is, the linear system Ax = b, where A is a Laplacian matrix, can be solved in nearly linear time. I will then describe an emerging paradigm, which we refer to as the Laplacian Paradigm, for the design of efficient algorithms for massive graphs. This paradigm is built on the nearly-linear-time Laplacian solver, as well as a few other algorithms (such as those for local clustering and for computing high-quality spanning trees) developed to support this solver.
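For readers who want to experiment with this idea, the sketch below assembles the Laplacian L = D - A of a small weighted graph and solves Lx = b. SciPy's conjugate gradient method stands in for the nearly-linear-time solver described in the talk, and the toy graph and right-hand side are assumptions made for illustration.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    # A small weighted undirected graph, given as (u, v, weight) triples.
    edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 2.0)]
    n = 4
    rows, cols, vals = [], [], []
    for u, v, w in edges:
        rows += [u, v]; cols += [v, u]; vals += [w, w]
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # L = D - A

    # L is singular (the all-ones vector spans its nullspace), so Lx = b is
    # solvable only when the entries of b sum to zero.
    b = np.array([1.0, -1.0, 0.0, 0.0])
    x, info = cg(L.tocsr(), b, atol=1e-10)
    assert info == 0
    print(x - x.mean())   # fix the free additive constant

Conjugate gradient here is only an easy-to-run placeholder; it carries none of the nearly-linear-time guarantees of the solver discussed in the talk.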

I will discuss some recent progress in applying the Laplacian Paradigm. Our first example will be a nearly-linear-time solution to a problem we all learned in high school: determining the electrical flows in a circuit of resistors. We then show that, by solving a sequence of electrical flow problems, we obtain the fastest known algorithm for computing an approximate maximum s-t flow in an undirected graph. Recently, significant improvements have been made to the Laplacian solver, and practical implementations might become available in the near future.
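To make the first example concrete: with edge weights read as conductances, the node voltages v for one unit of current injected at s and extracted at t satisfy Lv = e_s - e_t, and Ohm's law then gives the flow w(v_u - v_v) on each edge (u, v). The sketch below uses a made-up resistor network and, again, conjugate gradient as a stand-in for the fast Laplacian solver.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg

    # Hypothetical resistor network: (u, v, conductance) triples.
    edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 1.0), (1, 2, 0.5)]
    n, s, t = 4, 0, 3
    rows, cols, vals = [], [], []
    for u, v, w in edges:
        rows += [u, v]; cols += [v, u]; vals += [w, w]
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

    # Kirchhoff + Ohm: voltages for one unit of s-t current solve Lv = e_s - e_t.
    b = np.zeros(n); b[s], b[t] = 1.0, -1.0
    volt, info = cg(L.tocsr(), b, atol=1e-10)
    assert info == 0

    for u, v, w in edges:
        print(f"flow on ({u},{v}): {w * (volt[u] - volt[v]):+.3f}")
    print("effective s-t resistance:", volt[s] - volt[t])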

The goal of this presentation is to encourage more researchers to consider the use of the Laplacian Paradigm to develop faster algorithms for solving fundamental problems in combinatorial optimization, scientific computing, machine learning, network analysis, and other applications that involve massive graphs.

Joint work with Daniel Spielman (Yale), Paul Christiano (MIT), Jonathan Kelner (MIT), and Aleksander Madry (MIT).

BIOGRAPHY: Shang-Hua Teng’s main research is in algorithm design, analysis, and applications. He and coauthor Dan Spielman received the Gödel Prize (2008) and the Fulkerson Prize (2009) for developing Smoothed Analysis, a rigorous framework that explains the practical success of the simplex algorithm on real data, which could not be clearly understood through traditional techniques. His recent work addresses Spectral Graph Theory and the Laplacian Paradigm and its applications to maximum flows, for which he and coauthors Paul Christiano, Jon Kelner, Aleksander Madry, and Dan Spielman received the Best Paper Award at ACM STOC 2011. In addition, he has conducted research in Algorithmic Game Theory, where he is known for his joint paper with Xi Chen and Xiaotie Deng resolving the complexity of computing an approximate Nash equilibrium, and for his joint papers on market equilibria. Teng is also interested in mathematical board games. He and his former Ph.D. student Kyle Burke designed and analyzed a game called Atropos, which is played on Sperner’s triangle and is based on the beautiful, celebrated Sperner’s Lemma. Teng’s past research interests include scientific computing, Internet algorithms, computational geometry, compiler optimization, parallel algorithms, computer graphics, and three-dimensional mesh generation. Teng is an ACM Fellow and an Alfred P. Sloan Fellow, and has received the Senior Xerox Award for Outstanding Faculty Research (UIUC) and an NSF CAREER Award.

Shang-Hua Teng is the Seeley G. Mudd Professor and Chairman of the Computer Science Department at the USC Viterbi School of Engineering. He received a dual B.S. degree in computer science and electrical engineering from Shanghai Jiao Tong University in 1985, an M.S. degree in computer science from USC in 1988, and a Ph.D. degree in computer science from Carnegie Mellon University (CMU) in 1991. He was an instructor in the Department of Mathematics at MIT and a professor in the Computer Science Departments of Boston University, the University of Minnesota, and UIUC. He has worked for and consulted with Microsoft Research, Akamai, IBM Almaden Research Center, Intel Corporation, Xerox PARC, Cray Research/SGI, Thinking Machines Corporation, and NASA Ames Research Center. He has received more than ten US patents for his work on compiler optimization, Internet technology, and social network analysis.


The Power and Weakness of Randomness (When You Are Short on Time)
Avi Wigderson
Institute for Advanced Study, Princeton, NJ
Wednesday, October 26, 2011
ABSTRACT: Man has grappled with the meaning and utility of randomness for centuries. Research in the Theory of Computation over the last thirty years has enriched this study considerably. I will describe two main aspects of this research on randomness, demonstrating respectively its power and its limits for making algorithms faster. I will also address the role of randomness in other computational settings, such as space-bounded computation and probabilistic and zero-knowledge proofs.
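A standard illustration of randomness buying speed, not taken from this talk, is Freivalds' algorithm: it checks a claimed matrix product A·B = C in O(n^2) time per trial using random vectors, whereas recomputing A·B deterministically takes matrix-multiplication time. The sketch below uses made-up test matrices.

    import numpy as np

    def freivalds(A, B, C, trials=20):
        """Randomized check that A @ B == C. Each trial costs O(n^2) (three
        matrix-vector products) instead of the roughly O(n^3) needed to
        recompute A @ B; a wrong C survives one trial with probability <= 1/2."""
        n = C.shape[0]
        for _ in range(trials):
            r = np.random.randint(0, 2, size=n)      # random 0/1 vector
            if not np.allclose(A @ (B @ r), C @ r):
                return False                         # certainly wrong
        return True                                  # correct with high probability

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100))
    B = rng.standard_normal((100, 100))
    print(freivalds(A, B, A @ B))    # True
    C_bad = A @ B
    C_bad[3, 7] += 1.0               # a single corrupted entry
    print(freivalds(A, B, C_bad))    # False, except with probability 2**-20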
BIOGRAPHY: Avi Wigderson is the Herbert H. Maass Professor at the School of Mathematics, Institute for Advanced Study in Princeton, NJ. He obtained his B.Sc. in Computer Science from the Technion in 1980 and his Ph.D. from Princeton in 1983. He was a member of the faculty at the Hebrew University in Jerusalem from 1986 to 2003. His research interests span Complexity Theory, Algorithms, Randomness, Cryptography, and other topics in theoretical computer science. His awards include the Yoram Ben-Porat Presidential Prize for Outstanding Researcher (1994), the Rolf Nevanlinna Prize of the International Mathematical Union (1994), the Conant Prize of the American Mathematical Society (2008), and the Gödel Prize (2009). He is a Member of the American Academy of Arts and Sciences.


Towards a Theory of Trust in Networks of Humans and Computers
Virgil Gligor
Carnegie Mellon University
Monday, December 12, 2011
ABSTRACT: We argue that a general theory of trust in networks of humans and computers must be built on both a theory of behavioral trust and a theory of computational trust. This argument is motivated by the increased participation of people in social networking, crowdsourcing, human computation, and socio-economic protocols, e.g., protocols modeled by trust and gift-exchange games, norms-establishing contracts, and scams/deception. User participation in these protocols relies primarily on trust, since on-line verification of protocol compliance is often impractical; e.g., verification can lead to undecidable problems, co-NP-complete test procedures, and user inconvenience. Trust is captured by participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants. Both preferences and beliefs can be enhanced whenever protocol non-compliance leads to punishment of untrustworthy participants; i.e., it seems natural that betrayal aversion can be decreased and belief in trustworthiness increased by properly defined punishment. We argue that a general theory of trust should focus on the establishment of new trust relations where none were possible before. This focus would help create new economic opportunities by increasing the pool of usable services, removing cooperation barriers among users, and, at the very least, taking advantage of “network effects.” Hence a new theory of trust would also help focus security research on areas that promote trust-enhancement infrastructures in human and computer networks. Finally, we argue that a general theory of trust should mirror, to the largest possible extent, human expectations and mental models of trust without relying on false metaphors and analogies with the physical world.
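For readers unfamiliar with the trust and gift-exchange games mentioned above, the following sketch implements the payoffs of the standard one-shot investment (trust) game from experimental economics; the rules are the textbook ones, while the particular endowment, multiplier, and return fractions are illustrative assumptions.

    # One-shot trust game: the investor sends part of an endowment, the amount
    # triples in transit, and the trustee chooses what fraction to return.
    def trust_game(endowment, sent, returned_fraction):
        """Return the (investor, trustee) payoffs."""
        tripled = 3 * sent
        back = returned_fraction * tripled
        return endowment - sent + back, tripled - back

    print(trust_game(10, 10, 0.5))   # (15.0, 15.0): mutual trust creates surplus
    print(trust_game(10, 10, 0.0))   # (0.0, 30.0):  betrayal tempts the trustee
    print(trust_game(10, 0, 0.0))    # (10.0, 0.0):  no trust, no surplus

The gap between the last two outcomes is precisely where the preferences, beliefs, and punishment mechanisms discussed in the abstract come into play.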
BIOGRAPHY: Virgil D. Gligor received his B.Sc., M.Sc., and Ph.D. degrees from the University of California at Berkeley. He taught at the University of Maryland between 1976 and 2007, and is currently a Professor of Electrical and Computer Engineering at Carnegie Mellon University and co-Director of CyLab. Over the past thirty-five years, his research interests have ranged from access control mechanisms, penetration analysis, and denial-of-service protection to cryptographic protocols and applied cryptography. Gligor was an editorial board member of several IEEE and ACM journals and the Editor-in-Chief of the IEEE Transactions on Dependable and Secure Computing. He served as the chair of ACM’s Special Interest Group on Security, Audit and Control, and received the 2006 National Information Systems Security Award, given jointly by NIST and NSA in the US.