2011-2012 DISTINGUISHED LECTURE SERIES

September 28, 2011

Shang-Hua Teng, University of Southern California

The Laplacian Paradigm: Emerging Algorithms for Massive Graphs

Bio:
Shang-Hua Teng's main research is in algorithm design, analysis, and applications. He and coauthor Dan Spielman received the Gödel Prize (2008) and the Fulkerson Prize (2009) for developing Smoothed Analysis, a rigorous framework that explains the practical success of the simplex algorithm on real data, which could not be clearly understood through traditional techniques. His recent work addresses Spectral Graph Theory and the Laplacian Paradigm and its applications to maximum flows, for which he and coauthors Paul Christiano, Jon Kelner, Aleksander Madry, and Dan Spielman received the best paper award at ACM STOC 2011. In addition, he has conducted research in Algorithmic Game Theory, where he is known for his joint paper with Xi Chen and Xiaotie Deng resolving the complexity of computing an approximate Nash equilibrium, and for his joint papers on market equilibria. Teng is also interested in mathematical board games: he and his former Ph.D. student Kyle Burke designed and analyzed a game called Atropos, which is played on Sperner's triangle and based on the beautiful, celebrated Sperner's Lemma. Teng's past research interests include scientific computing, Internet algorithms, computational geometry, compiler optimization, parallel algorithms, computer graphics, and three-dimensional mesh generation. Teng is an ACM Fellow, an Alfred P. Sloan Fellow, a winner of the Senior Xerox Award for Outstanding Faculty Research (UIUC), and a recipient of the NSF CAREER Award. Shang-Hua Teng is the Seeley G. Mudd Professor and Chairman of the Computer Science Department at the USC Viterbi School of Engineering. He received a dual B.S. degree in computer science and electrical engineering from Shanghai Jiao Tong University in 1985, an M.S. degree in computer science from USC in 1988, and a Ph.D. degree in computer science from Carnegie Mellon University (CMU) in 1991. He was an instructor in the Department of Mathematics at MIT and a professor in the Computer Science Departments of Boston University, the University of Minnesota, and UIUC. He has worked for and consulted with Microsoft Research, Akamai, IBM Almaden Research Center, Intel Corporation, Xerox PARC, Cray Research/SGI, Thinking Machines Corporation, and NASA Ames Research Center. He has received more than ten US patents for his work on compiler optimization, Internet technology, and social network analysis. host: Xi Chen

Abstract:

In Computer Science and Applied Mathematics, we often represent a graph by its adjacency matrix. This linear-algebraic view of graphs can be very useful. For example, it has facilitated the design of fast algorithms for the shortest-path problem and the introduction of PageRank.
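
As one concrete illustration of this linear-algebraic view, PageRank can be sketched as a power iteration on a stochastic matrix derived from the adjacency matrix. The 4-node graph, damping factor, and iteration count below are illustrative assumptions of ours, not anything from the talk:

    import numpy as np

    # Illustrative sketch: PageRank as power iteration on a transition
    # matrix built from the adjacency matrix. The graph is made up.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)   # A[i, j] = 1 if edge i -> j

    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic (no dangling nodes here)
    d = 0.85                               # standard damping factor

    r = np.full(n, 1.0 / n)                # uniform initial rank vector
    for _ in range(100):                   # power iteration to (approximate) convergence
        r = d * (P.T @ r) + (1 - d) / n

    print(r / r.sum())                     # normalized PageRank scores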

In this talk, I focus on the Laplacian matrix, a simple modification of the adjacency matrix. The Laplacian matrix has been used extensively for partitioning graphs that arise in VLSI design and high-performance computing. As the starting point of our discussion, I will show that every Laplacian matrix can be inverted efficiently; that is, the linear system Ax = b, where A is a Laplacian matrix, can be solved in nearly linear time. I then describe an emerging paradigm, which we refer to as the Laplacian Paradigm, for the design of efficient algorithms for massive graphs. This paradigm is built on the nearly-linear-time Laplacian solver as well as a few other algorithms (such as those for local clustering and high-quality spanning trees) developed to support this solver.
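
To make the solver's task concrete, here is a minimal sketch, assuming a small made-up graph, of forming the Laplacian L = D - A and solving Lx = b. SciPy's conjugate gradient is only a stand-in for the nearly-linear-time Spielman-Teng solver the talk describes:

    import numpy as np
    from scipy.sparse.linalg import cg

    # Minimal sketch: form L = D - A for a small made-up graph (a 4-cycle)
    # and solve L x = b. Conjugate gradient stands in for the
    # nearly-linear-time Laplacian solver discussed in the talk.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian, L = D - A

    # L is singular (constant vectors lie in its kernel), so the
    # right-hand side must sum to zero for the system to be consistent.
    b = np.array([1.0, 0.0, 0.0, -1.0])

    x, info = cg(L, b)                   # info == 0 signals convergence
    print(x - x.mean())                  # pin down the free additive constant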

I will discuss some recent progress in applying the Laplacian Paradigm. Our first example will be a nearly-linear-time solution to a problem that we all learned in high school: determining the electrical flows in a circuit of resistors. We then show that, by solving a sequence of electrical flow problems, we obtain the fastest known algorithm for computing an approximate maximum s-t flow in an undirected graph. Recently, significant improvements have been made to the Laplacian solver, and practical implementations may become available in the near future.
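
The electrical-flow problem reduces to a single Laplacian solve: inject one unit of current at the source and extract it at the sink, solve for the vertex potentials v = L^+ b, and read off each edge's current as the potential difference divided by its resistance. A minimal sketch, with an assumed example circuit and a dense pseudoinverse standing in for the fast solver:

    import numpy as np

    # Illustrative sketch: electrical flows in a resistor network via one
    # Laplacian solve. The edges and resistances are made-up examples.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a resistor on each edge
    resistance = [1.0, 1.0, 2.0, 1.0]          # ohms

    n = 4
    L = np.zeros((n, n))
    for (u, v), r in zip(edges, resistance):
        w = 1.0 / r                            # conductance
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w

    b = np.zeros(n); b[0], b[3] = 1.0, -1.0    # 1 A in at node 0, out at node 3

    potentials = np.linalg.pinv(L) @ b         # v = L^+ b (nearly linear time in the talk)
    for (u, v), r in zip(edges, resistance):
        print(f"current {u}->{v}: {(potentials[u] - potentials[v]) / r:+.3f} A")

At this level of description, repeating such solves with edge resistances re-weighted according to congestion is the core of the approximate maximum-flow algorithm mentioned above.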

The goal of this presentation is to encourage more researchers to consider the use of the Laplacian Paradigm to develop faster algorithms for solving fundamental problems in combinatorial optimization, scientific computing, machine learning, network analysis, and other applications that involve massive graphs.

Joint work with Daniel Spielman (Yale), Paul Christiano (MIT), Jonathan Kelner (MIT), and Aleksander Madry (MIT).

October 26, 2011

Avi Wigderson, IAS, Princeton

The power and weakness of randomness (when you are short on time)

Bio:
Avi Wigderson is the Herbert H. Maass Professor at the School of Mathematics, Institute for Advanced Study in Princeton, NJ. He obtained his B.Sc. in Computer Science from the Technion in 1980 and his Ph.D. from Princeton in 1983. He was a member of the faculty at the Hebrew University in Jerusalem from 1986 to 2003. His research interests span Complexity Theory, Algorithms, Randomness, Cryptography, and other topics in theoretical computer science. His awards include the Yoram Ben-Porat Presidential Prize for Outstanding Researcher (1994), the Rolf Nevanlinna Prize of the International Mathematical Union (1994), the Conant Prize of the American Mathematical Society (2008), and the Gödel Prize (2009). He is a Member of the American Academy of Arts and Sciences. host: Rocco Servedio

Abstract:

Man has grappled with the meaning and utility of randomness for centuries. Research in the Theory of Computation in the last thirty years has enriched this study considerably. I'll describe two main aspects of this research on randomness, demonstrating respectively its power and its limits for making algorithms faster. I will also address the role of randomness in other computational settings, such as space-bounded computation and probabilistic and zero-knowledge proofs.
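
One classical illustration of the power of randomness for speed (our illustration, not necessarily one from the talk) is Freivalds' algorithm: it checks a claimed matrix product A·B = C in O(n^2) time per trial with one-sided error, and k independent trials drive the error probability below 2^-k. A minimal sketch:

    import numpy as np

    # Illustrative sketch: Freivalds' randomized check of A @ B == C.
    # Each trial costs O(n^2); k trials give error probability <= 2**-k.
    def freivalds(A, B, C, trials=20):
        n = C.shape[0]
        for _ in range(trials):
            x = np.random.randint(0, 2, size=n)       # random 0/1 vector
            if not np.array_equal(A @ (B @ x), C @ x):
                return False                           # certainly wrong
        return True                                    # correct with high probability

    n = 50
    A = np.random.randint(0, 10, (n, n))
    B = np.random.randint(0, 10, (n, n))
    C = A @ B
    print(freivalds(A, B, C))       # True
    C[0, 0] += 1
    print(freivalds(A, B, C))       # almost surely False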

December 12, 2011

Virgil Gligor, Carnegie Mellon University

Towards a Theory of Trust in Networks of Humans and Computers

* Based on joint work with Jeannette Wing.

Abstract:

We argue that a general theory of trust in networks of humans and computers must be built on both a theory of behavioral trust and a theory of computational trust. This argument is motivated by the increased participation of people in social networking, crowdsourcing, human computation, and socio-economic protocols, e.g., protocols modeled by trust and gift-exchange games, norms-establishing contracts, and scams/deception. User participation in these protocols relies primarily on trust, since on-line verification of protocol compliance is often impractical; e.g., verification can lead to undecidable problems, co-NP-complete test procedures, and user inconvenience. Trust is captured by participant preferences (i.e., risk and betrayal aversion) and beliefs in the trustworthiness of other protocol participants. Both preferences and beliefs can be enhanced whenever protocol non-compliance leads to punishment of untrustworthy participants; i.e., it seems natural that betrayal aversion can be decreased and belief in trustworthiness increased by properly defined punishment. We argue that a general theory of trust should focus on the establishment of new trust relations where none were possible before. This focus would help create new economic opportunities by increasing the pool of usable services, removing cooperation barriers among users, and, at the very least, taking advantage of
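
For readers unfamiliar with the trust games mentioned above, here is a minimal sketch in the standard (Berg-Dickhaut-McCabe) form; the endowment, multiplier, and returned fraction are the usual conventions, with made-up numbers:

    # Minimal sketch of the standard trust game the abstract alludes to.
    # The investor's sent amount measures trust; the trustee's returned
    # amount measures trustworthiness.
    def trust_game(endowment, sent, returned_fraction, multiplier=3):
        assert 0 <= sent <= endowment
        pot = sent * multiplier                 # transferred amount grows in transit
        returned = pot * returned_fraction      # trustee chooses how much to give back
        investor = endowment - sent + returned
        trustee = pot - returned
        return investor, trustee

    # Full trust with half returned: both gain over the no-trust outcome (10, 0).
    print(trust_game(endowment=10, sent=10, returned_fraction=0.5))   # (15.0, 15.0)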

February 27, 2012

Alfred Spector, Google, Inc.

Computer Science—The Ever-Expanding Sphere

Bio:
Alfred Z. Spector is Vice President for research and special initiatives at Google, and is responsible for research across Google, as well as its University Relations Programs, Open Source Activities, and internal technical education. Previously, Spector was Vice President of Strategy and Technology for IBM's Software Business, and prior to that, he was Vice President of Services and Software Research across IBM. He was also founder and CEO of Transarc Corporation, a pioneer in distributed transaction processing and wide-area file systems, and was an Associate Professor of Computer Science at Carnegie Mellon University, specializing in highly reliable, highly scalable distributed computing. Spector received his Ph.D. in Computer Science from Stanford and his A.B. in Applied Mathematics from Harvard. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a Fellow of the IEEE and ACM, and the recipient of the IEEE Computer Society's 2001 Tsutomu Kanai Award for work in scalable architectures and distributed systems. host: Joe Traub

Abstract:

Computer Science continues to have a very productive research agenda in its core areas, such as building planetary-scale parallel systems, improving software engineering, increasing the scale and utility of machine learning, and securing systems. However, Computer Science is also an expanding sphere, with its surface growing into ever more domains. In this talk, based on my experiences at Google and IBM, I will discuss a few of the challenges in the core of computer science. But I will mostly focus on computer science research opportunities at the edge of the expanding sphere, particularly in education and healthcare. I will illustrate some of the more basic research challenges, but also discuss some of the engineering needs as advanced computing moves ever further into life.

March 07, 2012

Rajesh P. N. Rao, University of Washington

Controlling Objects by Thought: The Emerging Science of Brain-Computer Interfacing

Bio:
Rajesh P. N. Rao is an associate professor in the Department of Computer Science and Engineering and director of the Neural Systems Laboratory at the University of Washington, Seattle. He received his PhD from the University of Rochester and was an Alfred P. Sloan postdoctoral fellow at the Salk Institute for Biological Studies in San Diego. He is the recipient of an NSF CAREER award, an ONR Young Investigator Award, a Sloan Faculty Fellowship, and a David and Lucile Packard Fellowship for Science and Engineering. Over the course of his career, his research interests have ranged from computational complexity theory and computer vision to computational neuroscience, robotics, brain-computer interfacing, and computer analysis of ancient scripts. He has co-edited two books, Probabilistic Models of the Brain and Bayesian Brain (both published by MIT Press).

Abstract:

The idea of controlling objects by thought alone has long been a staple of science fiction novels and movies. However, the advent of brain-computer interfaces (BCIs) is rapidly making this idea a reality, with researchers demonstrating the control of cursors, prosthetic arms, wheelchairs, and robotic avatars all through brain signals. BCIs have now enabled communication in locked-in patients, helped restore movement in paralyzed and disabled individuals, and are increasingly being used in novel applications such as security, lie detection, alertness monitoring, entertainment, gaming, education, and human augmentation. This talk will provide an overview of recent developments in the field, with highlights from two approaches that my collaborators and I are exploring at the University of Washington: a non-invasive technique known as electroencephalography (EEG) for recording from the scalp and an invasive technique known as electrocorticography (ECoG) for recording from the brain surface. I will describe how devices such as a computer cursor, a prosthetic hand, and a humanoid robot can be controlled through motor imagery and stimulus-evoked potentials decoded from EEG and ECoG signals. The talk will conclude with a discussion of some of the moral and ethical issues we will have to confront as a society when BCIs start making the transition from the laboratory to our daily lives.
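
To give a flavor of how such decoding pipelines often work, here is a generic sketch, not the UW group's actual system: motor imagery modulates band power in the mu band (8-12 Hz) over motor cortex, and a linear classifier maps those features to a control command. The sampling rate, band, and classifier weights below are illustrative assumptions:

    import numpy as np

    # Generic sketch of a motor-imagery decoder: mu-band (8-12 Hz) power
    # features plus a linear classifier. Sampling rate, band, and weights
    # are illustrative assumptions, not the actual UW pipeline.
    FS = 256                                   # sampling rate in Hz

    def mu_band_power(window):
        """Average 8-12 Hz power per channel for a (samples, channels) window."""
        spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
        freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / FS)
        band = (freqs >= 8) & (freqs <= 12)
        return spectrum[band].mean(axis=0)

    def decode(window, weights, bias):
        """Linear classifier: 'move' if weighted band power crosses a threshold."""
        features = mu_band_power(window)
        return "move" if features @ weights + bias > 0 else "rest"

    # Toy usage: random data stands in for a 1-second, 8-channel recording.
    window = np.random.randn(FS, 8)
    weights, bias = np.random.randn(8), 0.0    # in practice learned from training trials
    print(decode(window, weights, bias))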

April 11, 2012

Harry Shum, Microsoft Corporation

Bing Dialog Model: Intent, Knowledge and Interaction

Bio:
Harry Shum is the corporate vice president responsible for search product development at Microsoft Corporation (www.bing.com). Previously he oversaw the research activities at Microsoft Research Asia and the lab's collaborations with universities in the Asia Pacific region, and was responsible for the Internet Services Research Center, an applied research organization dedicated to long-term and short-term technology investments in search and advertising at Microsoft. Shum joined Microsoft Research in 1996 as a researcher based in Redmond, Washington. He moved to Beijing as one of the founding members of Microsoft Research China (later renamed Microsoft Research Asia). There he began a nine-year tenure as a research manager, subsequently moving on to become assistant managing director, managing director of Microsoft Research Asia, Distinguished Engineer, and Corporate Vice President. Shum is an IEEE Fellow and an ACM Fellow for his contributions to computer vision and computer graphics. He received a doctorate in robotics from the School of Computer Science at Carnegie Mellon University.

Abstract:

The decade-old form of Internet search results, the document-centric "ten blue links," is no longer sufficient for Internet users. Many studies have shown that when users are ushered off the conventional search result pages through blue links, their needs are often, at best, only partially met in a hit-or-miss fashion. To tackle this challenge, we have designed Bing (www.bing.com), Microsoft's search engine, not just to navigate users to a landing page through a blue link but to continue engaging with them to clarify intent and facilitate task completion with Bing.
