Spring 2010

DISTINGUISHED LECTURE SERIES

Three Rivers in Machine Learning: Data, Computation and Risk
John Lafferty
Carnegie Mellon University
Monday, February 1, 2010
ABSTRACT: Machine learning is a confluence of computer science and statistics that is empowering technologies such as search engines, robotics, and personalized medicine. Fundamentally, the goal of machine learning is to develop computer programs that predict well, according to some measure of risk or accuracy. The predictions should get better as more historical data become available. The field is developing interesting and useful frameworks for building such programs, which often demand large computational resources. Theoretical analyses are also being advanced to help understand the tradeoffs between computation, data, and risk that are inherent in statistical learning. Two types of results have been studied: the consistency and scaling behavior of specific convex optimization procedures, which have polynomial computational efficiency, and lower bounds on any statistically efficient procedure, without regard to computational cost. This talk will give a survey of some of these developments, with a focus on structured learning problems for graphs and shared learning tasks in high dimensions.
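For readers outside the field, the following minimal sketch (not drawn from the talk itself) shows one representative polynomial-time convex procedure of the kind whose data/computation/risk tradeoffs such analyses study: l1-regularized least squares (the lasso) fit by proximal gradient descent. All names, parameters, and data here are illustrative.

    # Illustrative only: l1-regularized least squares (the lasso), a representative
    # polynomial-time convex procedure, solved by proximal gradient descent (ISTA).
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_ista(X, y, lam, n_iters=500):
        # Minimize (1/2n) * ||y - X b||^2 + lam * ||b||_1.
        n, p = X.shape
        beta = np.zeros(p)
        step = n / (np.linalg.norm(X, 2) ** 2)   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iters):
            grad = X.T @ (X @ beta - y) / n
            beta = soft_threshold(beta - step * grad, step * lam)
        return beta

    # Toy high-dimensional problem: more features (p) than samples (n), sparse truth.
    rng = np.random.default_rng(0)
    n, p = 50, 200
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:5] = 2.0
    y = X @ beta_true + 0.1 * rng.standard_normal(n)
    beta_hat = lasso_ista(X, y, lam=0.1)
    print("indices of nonzero estimated coefficients:", np.flatnonzero(np.abs(beta_hat) > 0.05))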
BIOGRAPHY: John Lafferty is a professor in the Computer Science Department at Carnegie Mellon University, with joint appointments in the Machine Learning Department and the Department of Statistics. His research interests include machine learning, statistical learning theory, natural language processing, information theory, and information retrieval. Prof. Lafferty received his Ph.D. in Mathematics from Princeton University, where he was also a member of the Program in Applied and Computational Mathematics. Before joining CMU in 1994, he held a post-doctoral position at Harvard University and was a Research Staff Member at the IBM Thomas J. Watson Research Center in Yorktown Heights, NY. More recently, he has held visiting positions at the University of California, Berkeley and the University of California, San Diego. Prof. Lafferty is an IEEE Fellow, has served as co-director of CMU’s new Machine Learning Ph.D. Program, and currently serves as associate editor of the Journal of Machine Learning Research and the Electronic Journal of Statistics.


Statistical Models of Language
Michael Collins
Massachusetts Institute of Technology
Wednesday, March 3, 2010
ABSTRACT: A vast amount of text and speech data is now available in electronic form, making methods that recover structure in this data increasingly important. In this talk I’ll describe work on machine learning approaches for the analysis of natural language. The central question addressed in the talk will be the following: how can we design statistical models, and associated learning algorithms, that operate over rich linguistic representations? As examples of work in this area, I will describe models for syntax (parsing), semantics (mapping sentences to underlying representations of their meaning) and machine translation. While the focus of this talk will be on language, many of the techniques I will describe are applicable to machine learning problems in other fields, for example speech recognition, computer vision, and computational biology.
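As one concrete instance of a learning algorithm that operates over structured outputs, the sketch below implements a generic structured perceptron, an algorithm associated with Prof. Collins's earlier work on structured prediction; the feature map and candidate decoder here are toy placeholders, not the parsing or translation models discussed in the talk.

    # A generic structured perceptron: learns a linear scoring function over
    # (input, structured output) pairs. The decoder and feature map are supplied
    # by the caller; below they are toy placeholders.
    from typing import Callable, Dict, Iterable, List, Tuple

    Features = Dict[str, float]

    def dot(weights: Features, feats: Features) -> float:
        return sum(weights.get(k, 0.0) * v for k, v in feats.items())

    def structured_perceptron(
        data: List[Tuple[object, object]],                 # (input, gold structure)
        candidates: Callable[[object], Iterable[object]],  # decoder / candidate outputs
        phi: Callable[[object, object], Features],         # joint feature map phi(x, y)
        epochs: int = 5,
    ) -> Features:
        w: Features = {}
        for _ in range(epochs):
            for x, y_gold in data:
                # Decode: pick the highest-scoring structure under current weights.
                y_hat = max(candidates(x), key=lambda y: dot(w, phi(x, y)))
                if y_hat != y_gold:
                    # Update toward the gold features, away from the prediction.
                    for k, v in phi(x, y_gold).items():
                        w[k] = w.get(k, 0.0) + v
                    for k, v in phi(x, y_hat).items():
                        w[k] = w.get(k, 0.0) - v
        return w

    # Toy usage: "tag" a one-word sentence as NOUN or VERB from a single feature.
    data = [("flies", "VERB"), ("dog", "NOUN")]
    candidates = lambda x: ["NOUN", "VERB"]
    phi = lambda x, y: {f"word={x},tag={y}": 1.0}
    print(structured_perceptron(data, candidates, phi))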
BIOGRAPHY: Michael Collins is an associate professor of computer science at MIT. His research focuses on statistical parsing, structured prediction problems in machine learning, and applications such as machine translation, dialog systems, and speech recognition. His awards include a Sloan fellowship, an NSF CAREER award, and best paper awards at EMNLP (2002 and 2004), UAI (2004 and 2005), and CoNLL (2008).


COMPUTER VISION AND GRAPHICS (CVGC) DISTINGUISHED LECTURE SERIES

New Approaches To Modeling And Control Of Complex Dynamics
Adrien Treuille
Carnegie Mellon University
Wednesday, March 31, 2010
ABSTRACT: Complex phenomena such as animal morphology, human motion, and large fluid systems challenge even our most sophisticated simulation and control techniques. My overarching research goal has been to develop fundamentally new methods to approach such high-dimensional and nonlinear problems. This talk presents my work solving these problems across a wide range of phenomena, including a new model-reduction approach to fluids that is orders-of-magnitude faster than standard simulation methods and enables interactive high-resolution fluid simulation for the first time. Another example is a continuum approach to crowd dynamics which efficiently reproduces empirical aspects of large crowd behavior that would be difficult or impossible to achieve with traditional agent models. The talk will also cover work on several other phenomena including human animation, animal morphology, and protein folding. Such new algorithmic approaches advance not only our ability to simulate and control complex systems but also our understanding of the systems themselves.
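To make the model-reduction idea concrete, the sketch below shows a generic proper-orthogonal-decomposition (POD) / Galerkin reduction of a linear system: a low-dimensional basis is learned from full-simulation snapshots, and the dynamics are then advanced only in the reduced coordinates. This is a schematic illustration of the general technique, not the fluid-specific formulation presented in the talk.

    # Schematic POD/Galerkin model reduction of a linear system x_{t+1} = A x_t:
    # build a low-dimensional basis from snapshots, then time-step the reduced state.
    import numpy as np

    def pod_basis(snapshots, r):
        # Leading r left singular vectors of an (n_dofs x n_snapshots) matrix.
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :r]

    def reduce_operator(A, U):
        # Galerkin projection of the full operator onto span(U).
        return U.T @ A @ U

    # Toy example: a 1000-degree-of-freedom linear system reduced to 10 modes.
    rng = np.random.default_rng(1)
    n, r, steps = 1000, 10, 50
    A = np.eye(n) + 0.001 * rng.standard_normal((n, n))
    x = rng.standard_normal(n)

    snaps = [x]                    # collect snapshots from one full simulation
    for _ in range(steps):
        snaps.append(A @ snaps[-1])
    snaps = np.column_stack(snaps)

    U = pod_basis(snaps, r)
    A_r = reduce_operator(A, U)    # r x r instead of n x n

    q = U.T @ x                    # reduced initial state
    for _ in range(steps):
        q = A_r @ q                # cheap time-stepping in the reduced space
    x_reduced = U @ q              # lift back to the full space when needed

    x_full = snaps[:, -1]          # reference: the full simulation at the same step
    print("relative error of reduced model:",
          np.linalg.norm(x_reduced - x_full) / np.linalg.norm(x_full))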
BIOGRAPHY: Adrien Treuille, an assistant professor of computer science and robotics at Carnegie Mellon University who specializes in real-time computer simulation techniques, is the recipient of a National Science Foundation CAREER award and has been recognized by Technology Review magazine as one of the world’s top 35 innovators under the age of 35. He received his Ph.D. under Zoran Popovic in the computer graphics group at the University of Washington, and was a postdoc with Zoran Popovic and David Baker in the Baker Group. He was one of the creators of Foldit, the computer game where users contribute to science by folding proteins. He also pursues research in the simulation and animation of high-dimensional nonlinear phenomena like animal morphology, human motion, and large fluid systems.


FACULTY CANDIDATE SEMINARS

Digital Doubles And The Synergistic Role Of Scientific Computing, Biomechanics, And Computer Animation
Eftychios Sifakis
University of California Los Angeles
Wednesday, April 7, 2010
ABSTRACT: Digital doubles have not only evolved into prevalent elements of motion pictures and games, but are also finding an ever growing application base including medical diagnostics, surgical planning and design of vehicles and crafts. At the same time, current and developing applications demand improved photorealism, enhanced biomechanical accuracy, better subject-specificity and faster simulation algorithms. As these demands often outgrow the evolution of computer hardware, new algorithms for biomechanical modeling and simulation are necessary to ensure that upcoming computational platforms are utilized to the best of their capacity. Additionally, biomechanical simulation has provided a great opportunity for transformative advances in medical practice using virtual models of the human body for disease prevention and treatment. These emerging applications mandate an increased level of attention to the unique demands of subject-specificity and anatomical accuracy for clinical uses of biomechanical modeling and simulation. This talk will outline a number of techniques that were designed to facilitate modeling and simulation of digital doubles with high fidelity and efficiency. Finally, I will discuss the cross-cutting impact of such advances on character animation, scientific computing and virtual surgery.
BIOGRAPHY: Eftychios Sifakis is a post-doctoral researcher at the University of California Los Angeles (with a joint appointment in Mathematics and Computer Science). He received B.Sc. degrees in Computer Science (2000) and Mathematics (2002) from the University of Crete, Greece, and a Ph.D. in Computer Science (2007) from Stanford University. His research focuses on scientific computing, physics-based modeling and computer graphics. He is particularly interested in biomechanical modeling for applications such as character animation, medical simulations and virtual surgical environments. Eftychios is a research consultant with Walt Disney Animation Studios, and has previously held consulting positions at Intel Corporation and SimQuest LLC.


From Games to Markets: On the Computation of Equilibria
Xi Chen
University of Southern California
Monday, April 12, 2010
ABSTRACT: In the past decade, models and approaches from Game Theory and Economics have been widely applied in the study of large-scale competitive environments such as the Internet and e-commerce. The concept of equilibria, i.e., that of stable states, serves as one of the key tools in such studies, and its computation has attracted great attention. In this talk, we will focus on the problem of computing equilibria in two of the most fundamental models of competition in Game Theory and Economics: games and markets. Both problems have a long intellectual history. In 1950, Nash showed that every finite game has an equilibrium. In 1954, Arrow and Debreu showed that under very mild conditions, every market has an equilibrium. While games and Nash equilibria are used to predict the behavior of selfish agents in conflicting situations, the study of markets and market equilibria laid down the foundation of competitive pricing. Other than the fact that both existence proofs rely heavily on fixed point theorems, the two models look very different from each other. In this talk, we will review the recent characterizations of how difficult it is to compute or approximate Nash equilibria in two-player games. We will then show that these results have also significantly advanced our understanding of equilibria in the market setting.
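As a concrete illustration of the computational problem (though not of the complexity results surveyed in the talk), the sketch below finds a Nash equilibrium of a small two-player game by brute-force support enumeration; its exponential worst-case running time is precisely what motivates the complexity-theoretic questions.

    # Brute-force support enumeration for a Nash equilibrium of a small bimatrix
    # game with payoff matrices A (row player) and B (column player).
    from itertools import combinations
    import numpy as np

    def nash_support_enumeration(A, B, eps=1e-9):
        m, n = A.shape
        for k in range(1, min(m, n) + 1):               # equal-size supports
            for S in combinations(range(m), k):
                for T in combinations(range(n), k):
                    # Column player's y makes the row strategies in S indifferent.
                    M1 = np.block([[A[np.ix_(S, T)], -np.ones((k, 1))],
                                   [np.ones((1, k)), np.zeros((1, 1))]])
                    # Row player's x makes the column strategies in T indifferent.
                    M2 = np.block([[B[np.ix_(S, T)].T, -np.ones((k, 1))],
                                   [np.ones((1, k)), np.zeros((1, 1))]])
                    rhs = np.append(np.zeros(k), 1.0)
                    try:
                        yu = np.linalg.solve(M1, rhs)    # [y on T, payoff u]
                        xv = np.linalg.solve(M2, rhs)    # [x on S, payoff v]
                    except np.linalg.LinAlgError:
                        continue
                    y_T, u = yu[:k], yu[k]
                    x_S, v = xv[:k], xv[k]
                    if (y_T < -eps).any() or (x_S < -eps).any():
                        continue
                    x = np.zeros(m); x[list(S)] = x_S
                    y = np.zeros(n); y[list(T)] = y_T
                    # No profitable deviation outside the supports.
                    if (A @ y <= u + eps).all() and (x @ B <= v + eps).all():
                        return x, y
        return None

    # Example: matching pennies, whose unique equilibrium mixes 50/50 for both players.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(nash_support_enumeration(A, -A))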
BIOGRAPHY: Xi Chen received his B.S. degree in Physics in 2003 and his Ph.D. in Computer Science in 2007, both from Tsinghua University, China. He then became a postdoctoral researcher at the Institute for Advanced Study and at Princeton University, hosted by Avi Wigderson and Sanjeev Arora. This year he has been at the University of Southern California, hosted by Shang-Hua Teng. Dr. Chen’s research interests lie mainly in Algorithmic Game Theory and, more broadly, Theoretical Computer Science. He is particularly interested in characterizing the intrinsic difficulties of natural and fundamental problems that arise in the game-theoretic study of the Internet and e-commerce. He won best paper awards at the 47th IEEE Symposium on Foundations of Computer Science (FOCS) in 2006 (for his work on the computation of Nash equilibria) and the 20th International Symposium on Algorithms and Computation (ISAAC) in 2009 (for his work on the computation of market equilibria).


Addressing the Mobile Social Data Deluge
Augustin Chaintreau
Technicolor
Wednesday, April 14, 2010
ABSTRACT: Extracting the full economic and scientific value of the “data deluge”, which follows from the information produced and consumed online by individuals, is redefining the frontier of computer science. Five years after one of the first experiments on mobile social dynamics, the size and scope of data collected or accessed through mobile devices have increased dramatically. In this talk, it is argued that understanding and releasing the potential of mobile social networks is possible provided that three key challenges are addressed: the lack of a guiding theory, the need to design algorithms exploiting social properties, and the presence of entities with competing goals. Although these broad challenges are likely to exist for some time, this talk presents three examples in which these issues are addressed, using new analytical and algorithmic tools, to improve the efficiency of information dissemination.
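For intuition, the toy sketch below simulates epidemic (opportunistic) dissemination of a message over a timestamped contact trace, the kind of mobile contact data such experiments collect; the trace and forwarding rule are hypothetical stand-ins, not the algorithms presented in the talk.

    # Toy epidemic dissemination over a timestamped contact trace: any informed
    # node passes the message on at its next contact with an uninformed node.
    from typing import Dict, Iterable, Tuple

    Contact = Tuple[float, str, str]   # (time, node_a, node_b)

    def epidemic_spread(contacts: Iterable[Contact], source: str) -> Dict[str, float]:
        # Return the first time each node receives the message.
        delivery = {source: 0.0}
        for t, a, b in sorted(contacts):           # process contacts in time order
            if a in delivery and b not in delivery:
                delivery[b] = t
            elif b in delivery and a not in delivery:
                delivery[a] = t
        return delivery

    # Tiny synthetic trace: (time, who met whom).
    trace = [(1.0, "alice", "bob"), (2.5, "bob", "carol"),
             (3.0, "carol", "dave"), (0.5, "dave", "erin")]
    print(epidemic_spread(trace, "alice"))
    # {'alice': 0.0, 'bob': 1.0, 'carol': 2.5, 'dave': 3.0}; erin's only contact
    # occurred before dave was informed, so she never receives the message.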
BIOGRAPHY: A. Chaintreau graduated in 2006 from Ecole Normale Superieure, Paris. He joined Technicolor (previously known as Thomson) to contribute to the creation of a new research lab on advanced communication platforms, where his research deals with mobile and social dynamics in information systems. Prior to that, he worked at Intel Research Cambridge, where he was involved in conducting the first measurement campaign of opportunistic mobile dissemination. During his Ph.D., under the supervision of Francois Baccelli, he worked with Alcatel Bell and the IBM T.J. Watson Research Center on characterizing scalable resource sharing systems in the presence of fairness and reliability constraints.