DISTINGUISHED LECTURE SERIES
Computer Science: The Ever-Expanding Sphere
Alfred Spector
Google
Monday, February 27, 2012
ABSTRACT: Computer Science continues to have a very productive research agenda in its core areas, such as building planetary-scale parallel systems, improving software engineering, increasing the scale and utility of machine learning, and securing systems. However, Computer Science also continues to be an expanding sphere, with its surface growing into ever more domains. In this talk, based on my experiences at Google and IBM, I will discuss a few of the challenges in the core of computer science, but I will mostly focus on research opportunities at the edge of the expanding sphere—particularly in education and healthcare. I will illustrate some of the more basic research challenges, but also discuss some of the engineering needs as advanced computing moves ever further into life’s most pervasive and important pursuits.
BIOGRAPHY: Alfred Z. Spector is Vice President for research and special initiatives at Google, and is responsible for research across Google, as well as its University Relations Programs, Open Source Activities, and internal technical education. Previously, Spector was Vice President of Strategy and Technology for IBM’s Software Business, and prior to that, he was Vice President of Services and Software Research across IBM. He was also founder and CEO of Transarc Corporation, a pioneer in distributed transaction processing and wide area file systems, and was an Associate Professor of Computer Science at Carnegie Mellon University, specializing in highly reliable, highly scalable distributed computing. Spector received his Ph.D. in Computer Science from Stanford and his A.B. in Applied Mathematics from Harvard. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, a Fellow of the IEEE and ACM, and the recipient of the 2001 IEEE Computer Society’s Tsutomu Kanai Award for work in scalable architectures and distributed systems.
Controlling Objects by Thought: The Emerging Science of Brain-Computer Interfacing
Rajesh P. N. Rao
University of Washington
Wednesday, March 7, 2012
ABSTRACT: The idea of controlling objects by thought alone has long been a staple of science fiction novels and movies. However, the advent of brain-computer interfaces (BCIs) is rapidly making this idea a reality, with researchers demonstrating the control of cursors, prosthetic arms, wheelchairs, and robotic avatars all through brain signals. BCIs have now enabled communication in locked-in patients, helped restore movement in paralyzed and disabled individuals, and are increasingly being used in novel applications such as security, lie detection, alertness monitoring, entertainment, gaming, education, and human augmentation. This talk will provide an overview of recent developments in the field, with highlights from two approaches that my collaborators and I are exploring at the University of Washington: a non-invasive technique known as electroencephalography (EEG) for recording from the scalp and an invasive technique known as electrocorticography (ECoG) for recording from the brain surface. I will describe how devices such as a computer cursor, a prosthetic hand, and a humanoid robot can be controlled through motor imagery and stimulus-evoked potentials decoded from EEG and ECoG signals. The talk will conclude with a discussion of some of the moral and ethical issues we will have to confront as a society when BCIs start making the transition from the laboratory to our daily lives.
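For readers unfamiliar with BCI decoding, the sketch below shows the simplest form such a pipeline can take: band-power features from motor-imagery EEG fed to a linear classifier. This is an illustrative toy, not the speaker’s actual system; the sampling rate, channel count, frequency band, classifier choice, and the random data standing in for recordings are all assumptions.

import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # assumed EEG sampling rate (Hz)

def bandpower_features(trials, band=(8, 30)):
    # trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels) log band powers
    feats = []
    for trial in trials:
        freqs, psd = welch(trial, fs=FS, nperseg=FS)    # per-channel power spectra
        mask = (freqs >= band[0]) & (freqs <= band[1])  # mu/beta band, where motor imagery shows up
        feats.append(np.log(psd[:, mask].mean(axis=1)))
    return np.array(feats)

# Hypothetical labeled trials: imagined left-hand (0) vs. right-hand (1) movement.
X = bandpower_features(np.random.randn(40, 8, 2 * FS))
y = np.repeat([0, 1], 20)
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(bandpower_features(np.random.randn(1, 8, 2 * FS))))  # 0 or 1, i.e., a control signal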
BIOGRAPHY: Rajesh P. N. Rao is an associate professor in the Department of Computer Science and Engineering and director of the Neural Systems Laboratory at the University of Washington, Seattle. He received his PhD from the University of Rochester and was an Alfred P. Sloan postdoctoral fellow at the Salk Institute for Biological Studies in San Diego. He is the recipient of an NSF CAREER award, an ONR Young Investigator Award, a Sloan Faculty Fellowship, and a David and Lucile Packard Fellowship for Science and Engineering. Over the course of his career, his research interests have ranged from computational complexity theory and computer vision to computational neuroscience, robotics, brain-computer interfacing, and computer analysis of ancient scripts. He has co-edited two books, Probabilistic Models of the Brain and Bayesian Brain (both published by MIT Press).
The Bing Dialog Model: Intent, Knowledge and Interaction
Harry Shum
Microsoft Research
Wednesday, April 11, 2012
ABSTRACT: The decade-old presentation of Internet search results, manifested in the document-centric form of “ten blue links,” is no longer sufficient for Internet users. Many studies have shown that when users are ushered off the conventional search result pages through blue links, their needs are often partially met at best in a “hit-or-miss” fashion. To tackle this challenge, we have designed Bing, Microsoft’s search engine, not just to navigate users to a landing page through a blue link but to continue engaging with users to clarify intent and facilitate task completion with Bing’s deep understanding of the underlying entities and domains of user interest. Powering this new paradigm is the Bing Dialog Model that consists of three building blocks: an indexing system that comprehensively collects information from the web and systematically harvests knowledge, an intent model that statistically infers user intent and predicts the next action based on the harvested knowledge, and an interaction model that elicits user intent through mathematically optimized presentations of web information and knowledge that match user needs. In this talk, I’ll describe the Bing Dialog Model in detail and demonstrate it in action through some innovative features, in particular applying social contexts and entity understanding to user task completion.
BIOGRAPHY: Harry Shum is the corporate vice president responsible for search product development at Microsoft Corporation. Previously he oversaw the research activities at Microsoft Research Asia and the lab’s collaborations with universities in the Asia Pacific region, and was responsible for the Internet Services Research Center, an applied research organization dedicated to long-term and short-term technology investments in search and advertising at Microsoft. Shum joined Microsoft Research in 1996 as a researcher based in Redmond, Washington. He moved to Beijing as one of the founding members of Microsoft Research China (later renamed Microsoft Research Asia). There he began a nine-year tenure as a research manager, subsequently moving on to become assistant managing director, managing director of Microsoft Research Asia, Distinguished Engineer, and Corporate Vice President. Shum is an IEEE Fellow and an ACM Fellow for his contributions to computer vision and computer graphics. Shum received a doctorate in robotics from the School of Computer Science at Carnegie Mellon University.
FACULTY CANDIDATE SEMINARS
Michael Bernstein
MIT
Monday, January 30, 2012
ABSTRACT: Crowd-powered systems combine computation with human intelligence, drawn from large groups of people connecting and coordinating online. These hybrid systems enable applications and experiences that neither crowds nor computation could support alone. Unfortunately, crowd work is error-prone and slow, making it difficult to incorporate crowds as first-order building blocks in software systems. I introduce computational techniques that decompose complex tasks into simpler, verifiable steps to improve quality, and optimize work to return results in seconds. These techniques advance crowdsourcing into a platform that is reliable and responsive to the point where crowds can be used in interactive systems. In this talk, I will present two crowd-powered systems to illustrate these ideas. The first, Soylent, is a word processor that uses paid micro-contributions to aid writing tasks such as text shortening and proofreading. Using Soylent is like having access to an entire editorial staff as you write. The second system, Adrenaline, is a camera that uses crowds to help amateur photographers capture exactly the right moment for a photo. It finds the best smile and catches subjects in mid-air jumps, all in real time. These systems point to a future where social and crowd intelligence are central elements of interaction, software, and computation.
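To make the decompose-and-verify idea concrete, here is a minimal sketch in the spirit of Soylent’s find-fix-verify pattern. The ask_crowd function is a hypothetical stand-in for a crowdsourcing backend, not a real API, and the agreement thresholds are illustrative assumptions.

def ask_crowd(prompt, n_workers):
    # Placeholder: post `prompt` to n_workers and return their answers as a list of strings.
    raise NotImplementedError("wire this to a real crowdsourcing platform")

def shorten(paragraph, agreement=2):
    # Find: workers independently mark spans that could be tightened.
    marks = ask_crowd("Mark one wordy span in: " + paragraph, n_workers=5)
    spans = [s for s in set(marks) if marks.count(s) >= agreement]
    shortened = paragraph
    for span in spans:
        # Fix: a separate set of workers proposes rewrites for each agreed-on span.
        proposals = ask_crowd("Rewrite this more concisely: " + span, n_workers=3)
        # Verify: a third set votes, filtering out lazy or low-quality rewrites.
        votes = {p: ask_crowd("Keep this rewrite? yes/no: " + p, n_workers=3).count("yes")
                 for p in proposals}
        best = max(votes, key=votes.get)
        if votes[best] >= agreement:
            shortened = shortened.replace(span, best)
    return shortened

Splitting the work this way is what lets independent, error-prone contributions be checked against each other before they touch the document.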
BIOGRAPHY: Michael Bernstein is a PhD candidate in Computer Science at the Massachusetts Institute of Technology. His research in human-computer interaction focuses on crowdsourcing and social computing systems. He has been awarded Best Student Paper at UIST 2010, Best Paper at ICWSM 2011, the NSF Graduate Research Fellowship, and the Microsoft Research PhD Fellowship. His work has appeared in venues such as the New York Times, Slate, CNN, and Wired. He earned his master’s degree in Computer Science at MIT and a bachelor’s degree in Symbolic Systems at Stanford University.
The Battle for Control of Online Communications
Nick Feamster
Georgia Tech
Monday, February 13, 2012
ABSTRACT: The Internet offers users many opportunities for communicating and exchanging ideas, but abuse, censorship, and the manipulation of Internet traffic have put free and open communication at risk. Recent estimates suggest that spam constitutes about 95% of all email traffic; hundreds of thousands of online scam domains emerge every day; online social networks may be used to spread propaganda; and more than 60 countries around the world censor Internet traffic. In this talk, I will present approaches that we have developed to preserve free and open communication on the Internet in the face of these threats. First, I will describe the threat of message abuse (e.g., spam) and the methods we have developed for mitigating it. I will briefly discuss a 13-month study of the network-level behavior of spammers, and present SNARE, a spam filtering system we developed that classifies email messages based on the network-level traffic characteristics of the messages rather than their contents. Next, I will turn to information censorship and describe Collage, a system that circumvents censorship without arousing the suspicion of the censor. Finally, I will discuss the various forms of information manipulation, including the spread of propaganda in social networks and online “filter bubbles”. Although it is difficult to prevent all forms of manipulation, our goal is to make manipulation more transparent to users. Toward this goal, I will describe my broader research agenda and plans, which aim to improve Internet transparency for aspects of Internet communication ranging from network performance to social media to search results, using data aggregated from a wide variety of vantage points.
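As a rough illustration of classifying email from network-level characteristics alone (no message content), the sketch below trains an off-the-shelf classifier on a few hypothetical per-message features; the feature set, the learner, and the toy data are assumptions for illustration, not SNARE’s exact design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-message features: [sender-receiver geodesic distance (km),
# hour of day, sender AS number, message size (bytes), number of recipients]
X = np.array([
    [9200.0,  3, 4134, 2100, 48],   # spam-like pattern
    [ 310.0, 14, 7922, 5400,  1],   # legitimate-looking pattern
    # ... in practice, many thousands of labeled messages
])
y = np.array([1, 0])  # 1 = spam, 0 = ham

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([[8800.0, 4, 4134, 1800, 52]]))  # classified without ever reading the message body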
BIOGRAPHY: Nick Feamster is an associate professor in the College of Computing at Georgia Tech. He received his Ph.D. in Computer Science from MIT in 2005, and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. His research focuses on many aspects of computer networking and networked systems, including the design, measurement, and analysis of network routing protocols, network operations and security, and anonymous communication systems. In December 2008, he received the Presidential Early Career Award for Scientists and Engineers (PECASE) for his contributions to cybersecurity, notably spam filtering. His honors include the Technology Review “Top Young Innovators Under 35” (TR35) award, a Sloan Research Fellowship, the NSF CAREER award, the IBM Faculty Fellowship, and award papers at SIGCOMM 2006 (network-level behavior of spammers), NSDI 2005 (fault detection in router configuration), Usenix Security 2002 (circumventing web censorship using Infranet), and Usenix Security 2001 (web cookie analysis).
Ankur Moitra
IAS
Wednesday, February 15, 2012
ABSTRACT: Suppose we are given a gigantic communication network, but are only interested in a small number of nodes (clients). There are many routing problems we could be asked to solve for our clients. Is there a much smaller network – that we could write down on a sheet of paper and put in our pocket – that approximately preserves all the relevant communication properties of the original network? As we will demonstrate, the answer to this question is YES, and we call this smaller network a vertex sparsifier. In fact, if we are asked to solve a sequence of optimization problems characterized by cuts or flows, we can compute a good vertex sparsifier ONCE and discard the original network. We can run our algorithms (or approximation algorithms) on the vertex sparsifier as a proxy – and still recover approximately optimal solutions in the original network. This novel pattern saves both space (because the network we store is much smaller) and time (because our algorithms run on a much smaller graph). Additionally, we apply these ideas to obtain a master theorem for graph partitioning problems – as long as the integrality gap of a standard linear programming relaxation is bounded on trees, then the integrality gap is at most a logarithmic factor larger for general networks. This result implies optimal bounds for many well studied graph partitioning problems as a special case, and even yields optimal bounds for more challenging problems that had not been studied before.
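To make the “network in your pocket” idea concrete, here is one standard way to state the guarantee of a cut-based vertex sparsifier, in my notation rather than necessarily the talk’s exact definitions: given $G = (V, E)$ with a set of terminals (clients) $K \subseteq V$, a vertex sparsifier is a graph $H$ on the vertex set $K$ such that, for every subset $S \subseteq K$,
\[
\mathrm{mincut}_G(S,\, K \setminus S) \;\le\; \mathrm{cut}_H(S,\, K \setminus S) \;\le\; \alpha \cdot \mathrm{mincut}_G(S,\, K \setminus S),
\]
where $\mathrm{mincut}_G$ denotes the minimum cut in $G$ separating the two terminal groups and $\alpha \ge 1$ is the quality of the sparsifier. Any cut- or flow-based optimization over the terminals can then be run on $H$, losing at most a factor $\alpha$ in the approximation guarantee.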
BIOGRAPHY: Ankur Moitra is currently an NSF CI Fellow (postdoc) in the School of Mathematics at the Institute for Advanced Study in Princeton; his main research interests are in algorithms, learning, and convex geometry. He was a Fannie and John Hertz Foundation Fellow at MIT, where he completed his PhD in 2011 and his MS in 2009, both in theoretical computer science. He received a George M. Sprowls Award for best thesis and a William A. Martin Award for best master’s thesis in computer science at MIT.
Algorithmic Solutions to Some Statistical Questions
Greg Valiant
UC Berkeley
Wednesday, February 22, 2012
ABSTRACT: I will discuss three classical statistical problems for which the computational perspective unlocks insights into the fundamental nature of these tasks; additionally, our solutions suggest new algorithmic approaches to coping with the increasing size of real-world datasets. The first problem is the basic task of finding correlated vectors. Given a set of n binary vectors, with the promise that some pair is r-correlated, can one find the correlated pair faster than simply searching all O(n^2) pairs? We give a surprisingly simple algorithm for finding such a pair in time poly(1/r)*n^1.6, which is the first sub-quadratic time algorithm for this task with a polynomial dependence on the correlation, and improves upon the prior best of n^(2-O(r)), given by the Locality Sensitive Hashing approach. This result can be leveraged to yield significantly improved algorithms for several important problems in learning theory, including the problems of learning juntas and learning parities with noise. The second problem we consider is recovering the parameters of a mixture of Gaussian distributions. Given data drawn from a single Gaussian distribution, the sample mean and covariance of the data trivially yield good estimates of the parameters of the true distribution; if, however, some of the data points are drawn according to one Gaussian, and the rest of the data points are drawn according to a different Gaussian, how can one recover the parameters of each Gaussian component? This problem was first proposed by Pearson in the 1890s and, in the last decade, has been revisited by computer scientists. In a pair of papers with Adam Kalai and Ankur Moitra, we give the first polynomial-time algorithm for this problem that has provable success guarantees. The final problem, investigated in a series of papers with Paul Valiant, considers the task of estimating a broad class of statistical properties, which includes entropy, various distance metrics between pairs of distributions, and support size. Our estimators are the first proposed estimators for these properties that achieve small error using a sub-linear number of samples, and are based on a novel approach of approximating the portion of the distribution from which one draws no samples. There are several implications of our results, including resolving the sample complexity of the “distinct elements problem” (i.e., given a vector of length n, how many random indices must one query to accurately estimate the number of distinct elements?); we show that on the order of n/log n queries is both necessary and sufficient, improving upon the prior lower bounds and contradicting the previously widely held belief that a linear number of samples is necessary.
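Restating the first problem in LaTeX notation (my paraphrase, with the bounds as quoted in the abstract): given $x_1, \dots, x_n \in \{0,1\}^d$ with the promise that some pair is $r$-correlated, find that pair. Exhaustive search compares all $\binom{n}{2} = O(n^2)$ pairs, whereas the algorithm above runs in time
\[
\mathrm{poly}(1/r) \cdot n^{1.6},
\]
compared with the $n^{2 - O(r)}$ running time obtained via Locality Sensitive Hashing. Similarly, the distinct-elements result says the query complexity of estimating the number of distinct elements in a length-$n$ vector is $\Theta(n / \log n)$.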
BIOGRAPHY: Gregory Valiant is currently finishing his PhD at UC Berkeley under the supervision of Christos Papadimitriou. Prior to Berkeley, Gregory received a BA in mathematics and an MS in computer science from Harvard University. His research interests include algorithms, learning theory, applied probability and statistics, and game theory, with his recent work focusing on algorithmic approaches to several classical questions in statistics.
Cloudy with a Chance of Consistency
Tim Kraska
UC Berkeley
Monday, March 5, 2012
ABSTRACT: Cloud computing promises virtually unlimited scalability and high availability at low cost. However, it is commonly believed that a system’s consistency must be relaxed in order to achieve these properties. We evaluated existing commercial cloud offerings for transactional workloads and found that consistency is indeed expensive and limits scalability in these systems. Because of this expense, many system designers have chosen to provide only relaxed consistency guarantees, if any. This choice, however, makes such systems inappropriate for many mission-critical applications. This dichotomy is based on unrealistically pessimistic assumptions about Big Data environments. First, it assumes that consistency is an all-or-nothing decision that must be applied uniformly to all data in a system. Second, even in situations where strong consistency is required, previous transaction commit protocols were based on worst-case assumptions regarding the likelihood of conflicts. In this talk, I describe two techniques that build on a more nuanced view of consistency requirements and the costs of maintaining them.
I first describe Consistency Rationing, which builds on inventory holding models used in Operations Research to help classify and manage data based on their consistency requirements. Consistency Rationing exploits the fact that for some data the cost of maintaining consistency outweighs the benefit obtained by avoiding inconsistencies. In the second part of the talk, I present a new optimistic commit protocol for the wide-area network. For a long time, synchronized wide-area replication was considered infeasible with strong consistency. With MDCC, I will show how we can achieve strong consistency with response-time guarantees similar to those of eventual consistency in the normal operational case. This work was done as part of a larger project on Big Data management. At the end of the talk, I will provide an overview of some of my other projects and give an outline of future work.
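A small sketch of the consistency-rationing idea described above: route each record to a consistency level by comparing the expected cost of an inconsistency against the cost of enforcing strong consistency. The field names, category boundaries, and cost model below are simplified assumptions for illustration, not the system’s actual policy.

from dataclasses import dataclass

@dataclass
class Record:
    key: str
    value_at_risk: float         # e.g., revenue lost if this record is oversold
    conflict_probability: float  # estimated from recent access patterns

STRONG_CONSISTENCY_COST = 0.05   # assumed per-transaction overhead of strong consistency

def consistency_level(rec: Record) -> str:
    expected_penalty = rec.value_at_risk * rec.conflict_probability
    if expected_penalty > STRONG_CONSISTENCY_COST:
        return "strong"    # serializable transactions; pay the coordination cost
    return "eventual"      # cheaper; occasional inconsistencies are tolerable

hot_item = Record("sku-123", value_at_risk=40.0, conflict_probability=0.02)
cold_item = Record("sku-987", value_at_risk=40.0, conflict_probability=0.0001)
print(consistency_level(hot_item), consistency_level(cold_item))  # strong eventual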
BIOGRAPHY: Tim Kraska is a postdoctoral researcher in the AMPLab, which is part of the Computer Science Division at UC Berkeley. His research currently focuses on Big Data management in the cloud and hybrid human/machine database systems. Before joining UC Berkeley, he received his PhD from ETH Zurich, where he worked on transaction management and stream processing in the cloud as part of the Systems Group. He received a Swiss National Science Foundation Prospective Researcher Fellowship (2010), a DAAD Scholarship (2006), a University of Sydney Master of Information Technology Scholarship for outstanding achievement (2005), the University of Sydney Siemens Prize (2005), and a VLDB Best Demo Award (2011).
Algorithms for Learning Latent Variable Models
Daniel Hsu
Microsoft Research
Monday, March 19, 2012
ABSTRACT: Latent variable models are widely used in applications to automatically recover simple underlying signals from noisy high-dimensional data. The challenge in estimating such models stems from the presence of hidden (unobserved) variables, and typical local search methods used for this task (e.g., E-M) generally lack basic performance guarantees such as statistical consistency and computational efficiency. In this talk, I will discuss recent developments in linear algebraic methods for learning certain classes of latent variable models, including parameter estimation for hidden Markov models, and structure learning of latent variable tree models. Unlike the local search heuristics, the proposed linear algebraic methods come with statistical and computational efficiency guarantees under mild conditions on the data distribution. Central to the new techniques is the characterization of the models in terms of low-order moments (e.g., averages, correlations) of observable variables, which are readily estimated from data.
This talk is based on various joint works with Anima Anandkumar, Kamalika Chaudhuri, Sham Kakade, Le Song, and Tong Zhang.
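One standard example of the low-order-moment characterization mentioned in the abstract, simplified here to the case of exchangeable observed views rather than any specific model from the talk: if a hidden variable $h$ takes $k$ states with probabilities $w_1, \dots, w_k$ and each observed view has conditional mean $\mu_i$ given $h = i$, then
\[
\mathbb{E}[x_1 \otimes x_2] = \sum_{i=1}^{k} w_i\, \mu_i \otimes \mu_i,
\qquad
\mathbb{E}[x_1 \otimes x_2 \otimes x_3] = \sum_{i=1}^{k} w_i\, \mu_i \otimes \mu_i \otimes \mu_i,
\]
so the parameters $\{(w_i, \mu_i)\}$ can be recovered from empirical second- and third-order moments by singular value and tensor decompositions rather than by local search.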
BIOGRAPHY: Daniel Hsu is a postdoctoral researcher at Microsoft Research New England. Previously, he was a postdoc with the Department of Statistics at Rutgers University and the Department of Statistics at the University of Pennsylvania from 2010 to 2011, supervised by Tong Zhang and Sham M. Kakade. He received his Ph.D. in Computer Science in 2010 from UC San Diego, where he was advised by Sanjoy Dasgupta, and his B.S. in Computer Science and Engineering in 2004 from UC Berkeley. His research interests are in algorithmic statistics and machine learning.
Geometric Capture of Human Performances
Hao Li
Columbia
Monday, March 26, 2012
ABSTRACT: Real-time depth sensors reliably capture both geometry and motion at remarkably high resolution, regardless of a subject’s appearance or the illumination of the environment. While the subtlest human body deformations and facial expressions can be recovered, only partial information is available at a time and the input data is typically noisy. The key to obtaining a complete digitization from incomplete 3D scans is to establish full correspondences throughout the entire recording while modeling the deformed shape in occluded regions using optimal geometric priors. I will first present a highly robust general-purpose non-rigid registration algorithm that is capable of accurately finding dense correspondences despite consecutive scans undergoing significantly large and complex deformations. I will explain why it is important to simultaneously couple the optimization of correspondence positions, warping field, and overlapping regions. In the second part of the talk, I will focus on a fully integrated system for real-time facial animation using depth sensors and discuss two novel data-driven techniques: one that effectively extracts high-level expression semantics for retargeting and one that significantly improves robustness to unforeseen occlusions and severe noise using probabilistic animation priors. I will give a live demonstration using Microsoft’s Kinect sensor and show how low-quality input data can be effectively mapped to realistic facial expressions. Going beyond computer graphics and vision, I will show several applications in science where my technique has been successfully applied, such as radiation oncology and cardiac surface mechanics. I believe that these foundational tools for data-driven performance capture will be key to collecting massive data sets of detailed dynamic geometry and real-time analysis of performance data in everyday surroundings.
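Schematically, and in my notation rather than the talk’s, the coupled optimization described above can be written as a single objective over the warping field $X$, the correspondence positions $c_i$, and per-point reliability weights $w_i$ that mark the overlapping regions:
\[
\min_{X,\; c,\; w} \;\; \sum_i w_i \,\bigl\| X(p_i) - c_i \bigr\|^2 \;+\; \alpha\, E_{\mathrm{smooth}}(X) \;+\; \beta \sum_i \Psi(w_i),
\]
where the first term measures fitting error at the source points $p_i$, the second regularizes the warp, and the third penalizes discarding points so that non-overlapping regions are pruned rather than forced to match. The point of the simultaneous formulation is that correspondences, warp, and overlap are optimized together instead of in alternation.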
BIOGRAPHY: Hao Li has been a postdoctoral researcher in Computer Science at Columbia and Princeton Universities since 2011. His research focuses on developing geometric algorithms and frameworks that effectively couple two worlds: data capture and the modeling of human performances. While primarily developed for high-resolution geometric capture and realistic facial animation in film production, his work on markerless dynamic shape reconstruction has also impacted the fields of human shape analysis and biomechanics. His algorithms are widely deployed in industry, ranging from leading visual effects studios to manufacturers of state-of-the-art radiation therapy systems. He was awarded the SNF Fellowship for Prospective Researchers in 2011 and a Best Paper Award at SCA 2009. He obtained his PhD from ETH Zurich in 2010 and received his MSc degree in Computer Science in 2006 from the University of Karlsruhe (TH). He was a visiting researcher at EPFL in 2010, Industrial Light & Magic (Lucasfilm) in 2009, Stanford University in 2008, the National University of Singapore in 2006, and ENSIMAG in 2003.
Machine Learning Algorithms for 3D Shape Analysis and Synthesis
Vangelis Kalogerakis
Stanford University
Monday, April 2, 2012
ABSTRACT: The emergence of modern geometry acquisition devices, such as the Kinect, and the appearance of large-scale shape repositories, such as the Google Warehouse, are revolutionizing computer graphics, making three-dimensional content ubiquitous. The need for algorithms that understand and intelligently process 3D shapes is thus greater than ever. In this talk, I will describe a new generation of algorithms that analyze and synthesize complex three-dimensional shapes. The algorithms are based on probabilistic models that reason about both geometry and semantics. In contrast to traditional approaches that consider individual shapes in isolation and require laborious hand-tuning, these algorithms learn from collections of shapes. Specifically, I will present a data-driven shape segmentation technique that outperforms all previous approaches to shape segmentation. I will then describe a shape synthesis technique that can construct high-quality novel shapes from complex domains, such as aircraft, ships, and furniture. Finally, I will discuss new opportunities for geometry processing and 3D modeling enabled by these algorithms.
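For the segmentation part, one common way to instantiate a probabilistic model that reasons about both geometry and semantics is a conditional random field over mesh faces; the schematic energy below uses my notation, and the talk’s exact model may differ:
\[
E(\mathbf{c}) \;=\; \sum_{i} \psi_{\mathrm{unary}}\bigl(c_i \mid \mathbf{x}_i\bigr) \;+\; \lambda \sum_{(i,j)\ \mathrm{adjacent}} \psi_{\mathrm{pair}}\bigl(c_i, c_j \mid \mathbf{y}_{ij}\bigr),
\]
where $c_i$ is the part label of face $i$, the unary terms are learned from labeled training shapes over per-face geometric features $\mathbf{x}_i$, and the pairwise terms discourage label changes except across likely boundaries described by edge features $\mathbf{y}_{ij}$ such as dihedral angles.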
BIOGRAPHY: Evangelos Kalogerakis is a postdoctoral researcher in computer science at Stanford University. His research deals with the development of computer graphics techniques that support human creativity and automate complex visual content processing tasks for novice users, scientists, and artists. He is particularly interested in developing machine learning algorithms for 3D content analysis and synthesis. He earned a PhD from the Department of Computer Science at the University of Toronto. He has been awarded the NSERC Alexander Graham Bell Research Fellowship.
Robust Replication (or how I learned to stop worrying and love failures)
Allen Clement
Max Planck Institute for Software Systems
Monday, April 9, 2012
ABSTRACT: The choice between Byzantine and crash fault tolerance is viewed as a fundamental design decision when building fault tolerant systems. We show that this dichotomy is not fundamental, and present a unified model of fault tolerance in which the number of tolerated faults of each type is a configuration choice. Additionally, we observe that a single fault is capable of devastating the performance of existing Byzantine fault tolerant replication systems. We argue that fault tolerant systems should, and can, be designed to perform well even when failures occur. In this talk I will expand on these two insights and describe our experience leveraging them to build a generic fault tolerant replication library that provides flexible fault tolerance and robust performance. We use the library to build a (Byzantine) fault tolerant version of the Hadoop Distributed File System.
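For context, the classical requirements that such a unified model interpolates between are $n \ge 2f + 1$ replicas to tolerate $f$ crash faults and $n \ge 3f + 1$ replicas to tolerate $f$ Byzantine faults. A model that separately budgets $u$ total failures, of which at most $r$ may be commission (Byzantine) failures, can instead require on the order of
\[
n \;\ge\; 2u + r + 1,
\]
recovering $2f + 1$ when $r = 0$ (crash-only) and $3f + 1$ when $r = u$ (fully Byzantine). This particular bound is my reading of this line of work and is offered only to illustrate how the fault budget becomes a configuration choice rather than a fixed design decision.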
BIOGRAPHY: Allen Clement is a Post-doctoral Research Fellow at the Max Planck Institute for Software Systems. He received a Ph.D. from the University of Texas at Austin in 2010 and an A.B. in Computer Science from Princeton University. His research focuses on the challenges of building robust and reliable distributed systems. In particular, he has investigated practical Byzantine fault tolerant replication, systems robust to both Byzantine and selfish behaviors, consistency in geo-replicated environments, and how to leverage the structure of social networks to build Sybil-tolerant systems. When not in the lab he enjoys ultimate frisbee and bouldering.
Making the Future Sound Better: Physically Based Sound Rendering
Changxi Zheng
Cornell University
Monday, April 16, 2012
ABSTRACT: The real world is full of sounds: a babbling brook winding through a tranquil forest, an agitated shopping cart plunging down a flight of stairs, or a falling piggy bank breaking on the ground. Unfortunately, the virtual worlds produced by current simulation algorithms are still inherently silent. Sounds are added as afterthoughts, often using “canned sounds” which have little to do with the animated geometry and physics. While recent decades have seen the dramatic success of 3D computer animation, our brains still expect a full spectrum of sensations. As computers become faster, the lack of realistic sound rendering methods will continue to cripple our ability to deliver highly interactive and realistic virtual experiences. In this talk, I will introduce my attempts to change that. My research on physics-based sound rendering aims to enable future virtual experiences in which simulated images, motion, and sound are synchronized and highly engaging. Exploring this direction raises many unique challenges for computer simulation. I will illustrate ways in which we have addressed them for two major classes of sound-producing objects: solids and liquids. Finally, I will briefly overview some of my other graduate work on robot learning, scientific computing, and Internet routing, and outline my future plans.
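As a flavor of what physics-based sound rendering computes at its core, the sketch below implements modal synthesis, the standard building block for rigid-body impact sounds: an impulse excites a bank of exponentially damped sinusoids whose frequencies, dampings, and gains would normally be derived from the object’s geometry and material. The mode table here is invented for illustration, not taken from any real simulation.

import numpy as np

FS = 44100                       # audio sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / FS)  # one second of audio

# (frequency Hz, damping 1/s, gain) -- illustrative values only
modes = [(420.0, 8.0, 1.0), (960.0, 14.0, 0.6), (2310.0, 30.0, 0.3)]

def impact_sound(impulse=1.0):
    # Sum of exponentially damped sinusoids excited by a single impulse.
    s = np.zeros_like(t)
    for f, d, g in modes:
        s += impulse * g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return s / np.max(np.abs(s))  # normalize to [-1, 1]

audio = impact_sound()
# e.g., scipy.io.wavfile.write("clink.wav", FS, (audio * 32767).astype(np.int16))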
BIOGRAPHY: Changxi Zheng is currently a PhD candidate in Computer Science at Cornell University. While he has a wide range of research interests, he is particularly interested in computer simulation methods. His graduate work explored the new direction of synthesizing realistic virtual sounds for physics-based simulations in computer animation, and has received press coverage from NPR, the BBC, Science Daily, New Scientist, and Slashdot. Changxi received his MS from Cornell and his BS from Shanghai Jiaotong University, both in Computer Science.