NLP LECTURE SERIES
Behind the Scenes with Watson at the Jeopardy!-IBM Challenge
Tuesday, March 29, 2011
ABSTRACT: On February 14th, 15th, and 16th, the Jeopardy! TV game show aired a unique exhibition match: a contest between the two best-known Jeopardy! champions, Ken Jennings and Brad Rutter, and an IBM computer nicknamed Watson. In this talk I will give an overview of the challenges facing an open-domain automatic question-answering system, specifically in the context of Jeopardy!. I will describe the high-level architecture of Watson, how its performance has evolved over the last four years, and why IBM was interested in this task. I will describe some of the approaches taken by Watson and look at some interesting questions; in particular, I will discuss “Toronto”. I will try to leave a lot of time for questions.
BIOGRAPHY: Since coming to work for IBM in 1979, I have worked in several research areas that have converged in my current project. From 1979 to 1992, I worked at the former IBM Cambridge Scientific Center (in Cambridge, MA) on several projects involving Intelligent User Interfaces. From 1992 to the present, I have been at T.J. Watson, working again on interfaces, as well as on Search and Question Answering. Since 2006 I have been on the DeepQA research team developing Watson, the computer system designed to play the Jeopardy! question-answering quiz game at a champion human level.
My research interests lie within the area of Artificial Intelligence, which can be loosely defined as getting computers to perform those (cognitive) tasks that humans do very well but for which correct and complete algorithms are beyond our current ability to write. Specifically, I am interested in using user and domain models to inform activities such as question answering. I am also interested in all aspects of language, both natural and computer, and their processing; this interest informed my work on Watson, which has concentrated on the different kinds of puzzles, word-play, and puns common in the Jeopardy! challenge.
My Ph.D. (1979) at the University of Massachusetts was in low-level vision, in particular exploring how a moving subject can use its rapidly changing visual input to help analyze the scene ahead. I received an M.A. (1977) and a Diploma in Computer Science with Distinction (1975) from the University of Cambridge (the other Cambridge); my dissertation was on intelligent help systems. My B.A. was also from the University of Cambridge (1974).
CVGC DISTINGUISHED LECTURE SERIES
From Eye-Balls to Ball-Games: Next-Gen Motion Capture for Science and Entertainment
New York University
Thursday, April 21, 2011
ABSTRACT: This talk will cover several research projects centered on the use of vision and motion capture for animation, recognition, and gaming. These span human movements as diverse as subtle eye-blinks, lip motions, spine deformations, walks and dances, the body language of politicians, and the pitches of baseball players, as well as the production of the largest motion-capture game to date. The technical content of the talk focuses on the trade-off between data-driven models of human motion and analytically derived, perceptually driven models built with dancers, animators, linguists, and other domain experts. This is demonstrated by sub-pixel tracking in Hollywood productions, reading the body language of public figures, visualizing the pitches of the NY Yankees’ Mariano Rivera, and the making of crowd motion-capture games in various cultures.
BIOGRAPHY: Chris Bregler is an Associate Professor of Computer Science at NYU’s Courant Institute. He received his M.S. and Ph.D. in Computer Science from U.C. Berkeley in 1995 and 1998 and his Diplom from Karlsruhe University in 1993. Prior to NYU he was on the faculty at Stanford University and worked for several companies including Hewlett Packard, Interval, Disney Feature Animation, and Lucasfilm’s ILM. He was named Stanford Joyce Faculty Fellow and Terman Fellow in 1999. He received the Olympus Prize for achievements in computer vision and AI in 2002, and was named a Sloan Research Fellow in 2003. He was the chair for the SIGGRAPH 2004 Electronic Theater and Computer Animation Festival. At CVPR 2008 he was awarded the IEEE Longuet-Higgins Prize for “Fundamental Contributions in Computer Vision that have withstood the test of time”.
FACULTY CANDIDATE SEMINARS
Surface Comparison using Conformal Geometry
Wednesday, February 16, 2011
ABSTRACT: One of the core problems in geometry processing is comparing shapes and finding correspondences between different but similar shapes. Perhaps the most popular instance of this problem is matching and comparing surfaces (2-dimensional manifolds), a crucial component in a large number of applications ranging from the matching of cortical surfaces, faces, bones, and other biological surfaces to more synthetic applications like shape morphing and attribute transfer. In this talk we will present a few applications of conformal geometry to the problems of surface matching and comparison. In particular, we will show how certain ideas originating in the theory of conformal geometry can be used to define novel metrics that measure dissimilarities and find correspondences between pairs of surfaces automatically. The key idea is to exploit the prominent low-dimensionality of conformal mappings to construct metrics that are computationally efficient. I will report results on a few datasets of biological surfaces as well as other standard 3D surface datasets. I will end the talk with a broader picture of how ideas originating in differential geometry can prove useful in describing and constructing algorithms for solving different geometric problems.
BIOGRAPHY: Yaron Lipman is a Postdoctoral Fellow in the Computer Science Department, and the Program in Applied and Computational Mathematics at Princeton University. His research interests are mainly in geometric processing and modeling, discrete differential geometry, and approximation theory with applications. Dr. Lipman received his B.Sc. in Computer Science and Mathematics, and a Ph.D. in Applied Mathematics from Tel-Aviv University. He received the 2009 Eurographics Young Researcher award and the 2010 Blavatnik Award for Young Scientists (postdoc category).
Personalized Adaptation to Accommodate Diverse User Needs
University of Washington
Wednesday, March 2, 2011
ABSTRACT: Software interfaces offer immense potential to support individual user needs through adaptation. The benefits of personalization include, for example, hiding unnecessarily complex options from a user performing only basic tasks, or interpreting input specially to accommodate a user with a severe motor impairment. However, effectively tapping into this potential is a major challenge: automatically adapting the user interface can improve performance and satisfaction, but, if not done well, it can also have the opposite effect. The overarching goal of my research is to accommodate diverse user needs through personalized adaptation, reducing information complexity, improving performance, and facilitating access for users with a range of motor and cognitive abilities. In this talk, I will first give an overview of my dissertation work on fundamental aspects of personalized interaction in the context of automatically adapting command structures (e.g., menus and toolbars) to reduce software application complexity. I will also discuss two ongoing projects where I am applying personalization to specific problem domains: (1) improving touch-screen text input and (2) easing input for users with motor impairments.
BIOGRAPHY: Leah Findlater is an NSERC Postdoctoral Fellow in the Information School at the University of Washington. Her research focuses on accommodating diverse user needs through personalized adaptation to reduce information complexity and facilitate accessibility for a range of education levels and motor and cognitive abilities. Her work has been recognized with CHI 2009 and CHI 2010 Best Paper Awards. Leah received her PhD in Computer Science in 2009 from the University of British Columbia, where she was awarded an IBM Centers for Advanced Studies Fellowship. She has collaborated with the IBM Toronto Software Lab and with Microsoft Research in India and Redmond, WA.
Search and the Social Web: Organizing the World’s People and Making them Accessible and Useful
Monday, March 21, 2011
ABSTRACT: In the past few years, we have seen a tremendous growth in public human communication and self-expression, through blogs, microblogs, and social networks. In addition, we are beginning to see the emergence of a social technology stack on the web, where profile and relationship information gathered by some applications can be used by other applications. This technology shift, and the cultural shift that has accompanied it, offers a great opportunity for computer scientists, artists, and sociologists to study (and organize) people at scale. In this talk I will discuss how the changing web suggests new paradigms for search and discovery. I will discuss some recent projects that use web search to study human nature, and use human nature to improve web search. I will describe the underlying principles behind these projects and suggest how they might inform future work in search, data mining, and social computing.
BIOGRAPHY: Sep Kamvar is a consulting professor of Computational and Mathematical Engineering at Stanford University. His research focuses on social computing and information management. From 2003 to 2007, Sep was the head of personalization at Google. Prior to Google, he was founder and CEO of Kaltix, a personalized search company that was acquired by Google in 2003. Sep is the author of two books and over 40 technical publications and patents in the fields of search and social computing. His artwork is in the permanent collections of the Museum of Modern Art in New York and the Museum of Fine Arts in Houston, and has been exhibited in a number of other museums, including the Victoria and Albert Museum in London and the National Museum of Contemporary Art in Athens. He holds a Ph.D. in Scientific Computing and Computational Mathematics from Stanford University, and an A.B. in Chemistry from Princeton University.
Computational Regulatory Genomics and Epigenomics in Human, Fly, and Yeast
Monday, March 28, 2011
11:00 AM – Interschool Lab, CEPSR
ABSTRACT: Advances in high-throughput technologies such as DNA sequencing are enabling the generation of massive amounts of biological data. This data provides unprecedented opportunities to gain a systematic understanding of the genomes of organisms and the regulation of the genes encoded in them, but it calls for new computational approaches for its analysis.
To address these challenges, I have developed computational methods for genome interpretation and for understanding gene regulation. (1) I developed a clustering method, STEM, for the analysis of short time series of gene expression data, and initially applied it to data on the immune response in human. STEM has since become a widely used method in many species and contexts. (2) I developed DREM, a method for integrating time series gene expression data with transcription factor–gene interactions that reveals the temporal dynamics of gene regulation; I applied it originally in yeast and most recently in the context of the Drosophila modENCODE project. (3) Given the increasing availability of epigenetic information on chromatin modifications, I developed a method for predicting targets of transcription factors across the human genome by integrating sequence, annotation, and chromatin features. (4) To exploit epigenomic information more systematically, I developed an algorithm for discovering and characterizing biologically significant combinations of chromatin modifications, or ‘chromatin states’, based on their recurring patterns across the genome. (5) I used these chromatin states to study the dynamics of epigenetic changes across nine cell types in the context of the human ENCODE project, revealing a dynamic epigenomic landscape that identifies causal regulators for cell-type-specific enhancers and provides new insights for interpreting disease-associated SNPs from genome-wide association studies (GWAS).
These methods provide a systematic way to discern regulatory information amidst the vast non-coding space of the human genome, towards a systematic understanding of gene regulation in the context of health and disease.
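As a toy illustration of the profile-matching idea behind a method like STEM, consider assigning each gene’s short expression time series to the best-correlated member of a small set of model profiles. The profiles, gene names, and expression values below are invented for illustration, and STEM itself enumerates model profiles systematically and tests their statistical significance; this sketch shows only the assignment step:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

# Hand-picked candidate model profiles over 4 time points
# (hypothetical; STEM generates its profiles systematically).
profiles = {
    "rising":  [0, 1, 2, 3],
    "falling": [3, 2, 1, 0],
    "peak":    [0, 3, 3, 0],
}

def assign(series):
    """Name of the model profile best correlated with `series`."""
    return max(profiles, key=lambda name: pearson(series, profiles[name]))

# Invented expression measurements for two hypothetical genes.
genes = {
    "geneA": [0.1, 0.9, 1.8, 3.2],
    "geneB": [2.9, 2.1, 1.2, 0.3],
}
for gene, series in genes.items():
    print(gene, "is best matched by", assign(series))
```

In STEM proper, the count of genes assigned to each profile is then compared against a permutation-based null to find significantly enriched profiles; the sketch stops at the assignment.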
BIOGRAPHY: Jason Ernst is an NSF postdoctoral fellow in Manolis Kellis’s group within the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. He completed a PhD advised by Ziv Bar-Joseph in the Machine Learning Department within the School of Computer Science at Carnegie Mellon University. His research is in the area of computational biology, and involves the development and application of machine learning methods to address problems in epigenomics and gene regulation.
Regaining Control Over Mobile and Cloud Data
University of Washington
Monday, April 4, 2011
Davis Auditorium, CEPSR
ABSTRACT: Emerging technologies, such as cloud and mobile computing, offer previously unimaginable global access to data; however, they also threaten our ability to control the use of our data: its lifetime, accessibility, privacy, management properties, etc. My research focuses on restoring to users the control they have ceded to the cloud and mobile devices. In this talk I will describe two examples of this work. First, I will present Keypad, an auditing file system for theft- and loss-prone mobile devices that lets users track and control accesses to their mobile data even after a device has been stolen. Second, I will describe Vanish, a global-scale distributed-trust system that allows users to cause all copies of selected Web data objects, online or offline, to self-destruct simultaneously at a specified time. A common thread of these efforts is the integration of systems and cryptographic techniques to solve new problems in data management brought on by technological change.
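Vanish’s self-destruct property rests on threshold secret sharing: data is encrypted, and the key is split into shares scattered across a distributed hash table whose nodes naturally age out state, so once too few shares survive, the key (and hence the data) becomes unrecoverable. A minimal sketch of Shamir-style (k, n) secret sharing over a prime field, illustrating the principle rather than the Vanish implementation:

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field must exceed the secret

def split(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = split(key, k=3, n=5)
assert reconstruct(shares[:3]) == key        # any 3 shares suffice
assert reconstruct(shares[1:4]) == key
# With only 2 shares the key is information-theoretically hidden.
```

In the Vanish design, the shares are pushed into a public DHT under random indices; the system never deletes anything explicitly, it simply relies on the DHT forgetting.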
BIOGRAPHY: Roxana Geambasu is a doctoral candidate in the Department of Computer Science & Engineering at the University of Washington. Her interests span broad areas of systems research, including cloud and mobile computing, operating systems, file systems, and databases, with a focus on security and privacy. She received her B.S. in Computer Science from the Polytechnic University of Bucharest in 2005 and was the recipient of the first Google Fellowship in Cloud Computing in 2009.
Sensing and Feedback of Everyday Activities to Promote Environmentally Sustainable Behaviors
University of Washington
Wednesday, April 6, 2011
Interschool Lab, 750 CEPSR
ABSTRACT: There is often a profound disconnect between our everyday behaviors and the effects those behaviors have on our health and on the environment around us. My research focuses on the role of technology in bridging this disconnect: in particular, how technology can be used to effectively sense and visualize information about our own behaviors to promote awareness and enable positive behavior change. In this talk, I will provide an overview of my dissertation work on sensing and feedback systems for environmental behaviors, focusing on home resource consumption and personal transportation. These two domains account for a large percentage of an individual’s environmental footprint.
My work covers the entire spectrum of information flow: from sensing physical events, to intelligently interpreting and classifying the resulting data, to building novel feedback interfaces that inform and motivate behavior. I will discuss the design and evaluation of UbiGreen, a mobile-phone-based system that semi-automatically tracks personal transit behaviors such as bicycling or riding in a car and continuously displays this information on the phone’s background wallpaper. I will also talk about HydroSense, the first water disaggregation system to automatically track water-usage activities down to the fixture level (e.g., upstairs shower vs. kitchen sink) from a single sensing point. Finally, I will discuss my current research on Reflect, a real-time ambient water-usage feedback display for the home. Throughout the talk, I will interweave a design space of feedback technology that incorporates findings from behavioral and environmental psychology and human-computer interaction.
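A sketch of the “interpreting and classifying” step, in the spirit of (but not taken from) UbiGreen’s transit-mode inference: a nearest-neighbor classifier over accelerometer-style features. The feature values, labels, and feature choice here are all hypothetical:

```python
import math

# Hypothetical labeled examples: (mean, variance) of accelerometer
# magnitude for four transit modes -- illustrative numbers only.
train = [
    ((0.02, 0.001), "still"),
    ((0.30, 0.050), "walking"),
    ((0.55, 0.120), "cycling"),
    ((0.10, 0.004), "driving"),
]

def classify(sample, k=1):
    """Majority vote among the k nearest training points (Euclidean)."""
    ranked = sorted(train, key=lambda t: math.dist(sample, t[0]))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

print(classify((0.52, 0.11)))  # -> cycling
```

Real systems of this kind typically add a temporal smoothing layer over per-window predictions and fall back on user confirmation when confidence is low, which is what makes the tracking “semi-automatic.”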
BIOGRAPHY: Jon Froehlich is a PhD candidate and Microsoft Research Graduate Fellow in Human Computer Interaction and Ubiquitous Computing at the University of Washington (UW), advised by Professors James Landay and Shwetak Patel. In 2010, he was selected as the UW College of Engineering Graduate Student Innovator of the Year. His research focuses on designing, building, and evaluating technology that addresses high-impact social problems such as environmental sustainability, personal health and well-being, and computer accessibility. His dissertation is on promoting sustainable behaviors through automated sensing and feedback technology, work that has led to a number of top-tier publications including a UbiComp 2009 best paper nomination and a CHI 2010 best paper. His work on HydroSense, an advanced water sensing system, was recently licensed to Belkin International, Inc. Jon received his MS in Information and Computer Science from the University of California, Irvine, where he was advised by Paul Dourish.