September 23, 2020
Costis Daskalakis, MIT
Constantinos (aka “Costis”) Daskalakis is a Professor of Electrical Engineering and Computer Science at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens, and a Ph.D. in Electrical Engineering and Computer Science from UC Berkeley, advised by Christos Papadimitriou. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning, and Statistics. He has been honored with the ACM Doctoral Dissertation Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft Research Faculty Fellowship, the Simons Investigator Award, the Rolf Nevanlinna Prize from the International Mathematical Union, the ACM Grace Murray Hopper Award, and the Bodossaki Foundation Distinguished Young Scientists Award.
A common assumption in machine learning and statistics is the existence of training data comprising independent observations from the entire distribution of relevant data. In practice, data deviates from this assumption in various ways. Data might be biased samples from the distribution of interest, due to systematic selection bias, societal biases, incorrect experimental design, or legal restrictions that might prevent the use of all available data. Moreover, observations might be collected on a social network or a spatial or temporal domain, and may thus not be independent but intricately dependent. Finally, data might be affected by the choices of other learning agents who are learning and making decisions in the same environment where our data is collected and must be acted upon. In the presence of these deviations from the standard i.i.d. model, naively trained models fail. In this talk, we overview recent work with various collaborators suggesting avenues to address the resulting challenges through a combined approach involving tools from truncated statistics, high-dimensional probability, statistical physics, and game theory.
October 07, 2020
Meredith Ringel Morris, Microsoft Research
Meredith Ringel Morris is a Sr. Principal Researcher at Microsoft Research and Research Area Manager for Interaction, Accessibility, and Mixed Reality. She founded Microsoft Research’s Ability research group and is a member of the lab’s Leadership Team. She is also an Affiliate Professor at the University of Washington in the Allen School of Computer Science & Engineering and in The Information School. Dr. Morris is an expert in Human-Computer Interaction; in 2020, she was inducted into the ACM SIGCHI Academy in recognition of her research in collaborative and social computing. Her research on collaboration and social technologies has contributed new systems, methods, and insights to diverse areas of computing, including gesture interaction, information retrieval, and accessibility. Dr. Morris earned her Sc.B. in Computer Science from Brown University and her M.S. and Ph.D. in Computer Science from Stanford University.
In this lecture, I show how considering the intersection of collaborative and social scenarios with other domains of computing can reveal end-user needs and result in innovative technical systems. I give examples of this approach from my work in gesture interaction, information retrieval, and accessibility, focusing particularly on the topics of creating more efficient and expressive augmentative and alternative communication technologies and of making social media more accessible to screen reader users. I close by identifying future opportunities for creating inclusive collaboration and social technologies.
October 19, 2020
Yejin Choi, University of Washington
Yejin Choi is the Brett Helsel Associate Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research manager at AI2, overseeing the project Mosaic. Her research interests include commonsense knowledge and reasoning, neural language (de-)generation, language grounding, and AI for social good. She is a co-recipient of the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, IEEE’s AI Top 10 to Watch in 2015, the ICCV Marr Prize in 2013, and the inaugural Alexa Prize Challenge in 2017.
Neural language models, as they grow in scale, continue to surprise us with utterly nonsensical and counterintuitive errors despite their otherwise remarkable performance on leaderboards. In this talk, I will argue that it is time to break out of the currently dominant paradigm of sequence-to-sequence models with task-specific supervision built on top of large-scale pre-trained neural networks.
First, I will argue for unsupervised inference-time algorithms to make better lemonade out of neural language models. As examples, I will demonstrate how unsupervised decoding algorithms can elicit advanced reasoning capabilities such as non-monotonic reasoning (e.g., counterfactual and abductive reasoning) out of off-the-shelf left-to-right language models, and how in some controlled text generation benchmarks, unsupervised decoding can match or even outperform supervised approaches.
Next, I will highlight the importance of melding explicit and declarative knowledge encoded in symbolic knowledge graphs with implicit and observed knowledge encoded in neural language models. As a concrete case study, I will present Social Chemistry 101, a new conceptual formalism, a knowledge graph, and neural models to reason about social, moral, and ethical norms.
November 04, 2020
Joan Feigenbaum, Yale
Joan Feigenbaum is the Grace Murray Hopper Professor of Computer Science at Yale University. She received a BA in Mathematics from Harvard and a Ph.D. in Computer Science from Stanford. Between finishing her Ph.D. in 1986 and starting at Yale in 2000, she was with AT&T, where she participated broadly in the company's Information-Sciences research agenda, e.g., by creating a research group in Algorithms and Distributed Data, of which she was the manager in 1998-99. Professor Feigenbaum's research interests include security, privacy, anonymity, and accountability; Internet algorithmics; and computational complexity. Her recent service contributions to the research community include Program Chair of the ACM Symposium on Theory of Computing (2013), Department Chair of the Yale Computer Science Department (July 2014 through June 2017), General Chair of the inaugural ACM Symposium on Computer Science and Law (2019), and ACM Vice President (July 2020 through June 2022). Professor Feigenbaum is an Amazon Scholar, a Fellow of the ACM, a Fellow of the AAAS, a Connecticut Technology Council Woman of Innovation, and a winner of the Test-of-Time Award from the IEEE Symposium on Security and Privacy for her 1996 paper (with Matt Blaze and Jack Lacy) entitled "Decentralized Trust Management."
Computer scientists have often treated law as though it can be reduced purely to a finite set of rules about which the only meaningful computational questions are those of decidability and complexity. Similarly, legislators and policy makers have often advocated general, imprecisely defined requirements and assumed that the tech industry could solve whatever technical problems arose in the design and implementation of products and services that conform to those requirements. The research area of Computer Science and Law seeks to replace these flawed, disciplinary approaches with a multidisciplinary focus on co-development of computing techniques, laws, and public policies. This talk will present ongoing efforts and open problems in this emerging area.
November 09, 2020
Distinguished Lecture - Joel Emer, NVIDIA/MIT
For over 40 years, Joel Emer held various research and advanced development positions investigating processor microarchitecture and developing performance modeling and evaluation techniques. He has made architectural contributions to a number of VAX, Alpha, and X86 processors and is recognized as one of the developers of the widely employed quantitative approach to processor performance evaluation. More recently, he has been recognized for his contributions in the advancement of deep learning accelerator design, spatial and parallel architectures, processor reliability analysis, cache organization, and simultaneous multithreading. Currently, he is a professor at the Massachusetts Institute of Technology and spends part-time as a Senior Distinguished Research Scientist in Nvidia's Architecture Research group. Previously, he worked at Intel where he was an Intel Fellow and Director of Microarchitecture Research. Even earlier, he worked at Compaq and Digital Equipment Corporation. He earned a doctorate in electrical engineering from the University of Illinois in 1979. He received a bachelor's degree with highest honors in electrical engineering in 1974, and his master's degree in 1975 -- both from Purdue University. Recognitions of his contributions include an ACM/SIGARCH-IEEE-CS/TCCA Most Influential Paper Award for his work on simultaneous multithreading and six other papers that were selected as IEEE Micro's Top Picks in Computer Architecture. Among his professional honors, he is a Fellow of both the ACM and IEEE and a member of the NAE. In 2009 he was a recipient of the Eckert-Mauchly award for lifetime contributions in computer architecture.
Recent history is replete with examples of new applications that have changed the course of computing and the world. These include the spreadsheet, visual editing, graphics, networking, and many more. Behind each of these advances were programs developed on easily-programmable and ever-faster processors. Unfortunately, as is widely acknowledged, the technological trend articulated by Moore's Law, which contributed significantly to creating the "ever faster" part of that recipe, is dead (or at least slowing significantly). However, as outlined in our "Science" article, "There's plenty of room at the top", there is promise in continuing Moore's Law-like improvements through a multi-pronged approach that includes software performance engineering, algorithm improvements, and hardware architecture advances. Among those researchers focusing on the hardware architecture prong, many advocate significant specialization of the hardware to specific domains, which will typically be well-understood and of widely-acknowledged importance. This approach, however, is likely to impede the development of the next big application because there will be no generally-programmable platform on which to develop it. Therefore, I believe that the biggest challenge in evolving hardware architectures in the post-Moore era lies in striking the right balance between preserving broad programmability and enhancing efficiency. In this talk, I will discuss how we have approached that challenge by focusing on the aspects of the hardware that give the most leverage to improve efficiency and by providing an abstraction that makes it possible to compile to the new hardware. More specifically, since data movement has become the dominant consumer of energy, I will describe structures that facilitate "data orchestration," reducing and optimizing data movement.
I will also describe abstractions that are intended to make it possible to compile high-level programs to these new hardware structures.
November 11, 2020
Distinguished Lecture - Barbara Tversky, Teachers College
Barbara Tversky studied cognitive psychology at the University of Michigan and has held positions at the Hebrew University and at Stanford University, where she is emerita professor of psychology, and is now at Teachers College. Her work has spanned memory, categorization, mental models, spatial thinking and language, event perception and cognition, diagrammatic reasoning, information visualization, gesture, and creativity. She has enjoyed collaborations with linguists, neuroscientists, computer scientists, domain scientists, designers, and artists. She has served on the editorial boards of many journals, on the organizing committees of many international and interdisciplinary meetings, on the governing boards of many professional societies, and as President of the Association for Psychological Science. She was elected to the Academy of Arts and Sciences and the Society for Experimental Psychology and is a fellow of the Association for Psychological Science, the Cognitive Science Society, and the Russell Sage Foundation. She was recently awarded the Kampé de Fériet Prize. Her book, Mind in Motion: How Action Shapes Thought, was published in 2019.
All creatures must move and act in space to survive. Moving in space creates representations of place in the hippocampus and of spatial relations among places in the entorhinal cortex. In people, the same brain structures that represent place in spatial relations also represent ideas in conceptual relations. This supports the conclusion that spatial thinking is the foundation of abstract thought, not the entire edifice, but the foundation. This view is supported by evidence from language, from gesture, and from diagrams. Gesture and diagrams express meaning more directly than language, which bears arbitrary connections to meaning. Like language, gestures and diagrams are structured, with a syntax and semantics, using marks in space and place in space. Gestures are actions on ideas rather than on objects. Those actions design our world, which reflects our minds.
Due to technical issues, the video for this presentation is unavailable.
November 16, 2020
Distinguished Lecture - Susan Landau, Tufts
Susan Landau is Bridge Professor in Cyber Security and Policy at the Fletcher School of Law and Diplomacy and the School of Engineering, Department of Computer Science, Tufts University; Senior Fellow at the Fletcher School Center for International Law and Governance; and Visiting Professor in the Department of Computer Science, University College London. Landau's most recent book, Listening In: Cybersecurity in an Insecure Age, was published by Yale University Press; she is also the author of Surveillance or Security? The Risks Posed by New Wiretapping Technologies (MIT Press) and co-author, with Whitfield Diffie, of Privacy on the Line: The Politics of Wiretapping and Encryption (MIT Press). Landau has testified before Congress, written for the Washington Post, Science, and Scientific American, and frequently appears on NPR and the BBC. Landau has been a Senior Staff Privacy Analyst at Google, a Distinguished Engineer at Sun Microsystems, and a faculty member at Worcester Polytechnic Institute, the University of Massachusetts Amherst, and Wesleyan University. She received the 2008 Women of Vision Social Impact Award, was a 2010-2011 fellow of the Radcliffe Institute for Advanced Study, was a 2012 Guggenheim fellow, and was inducted into the Cybersecurity Hall of Fame in 2015 and into the Information System Security Association Hall of Fame in 2018. She is also a fellow of the American Association for the Advancement of Science and the Association for Computing Machinery.
Dr. Anthony Fauci ticked off the timeline: "First notice at the end of December, hit China in January, hit the rest of the world in February, March, April, May, early June." COVID spread like wildfire. This disease turned out to be Fauci's "worst nightmare."
Pandemics end because we shut down the infection source, eradicate it, or vaccinate against it. But if these techniques don't work, then we contact trace. For COVID-19, which spreads through respiratory transmission even before someone shows symptoms, manual contact tracing can be too slow. Phone-based apps might be able to speed this up, but they raise many issues.
We need to know: Is the app efficacious? Does it help or hinder the efforts of human-based contact tracing, a practice central to ending epidemics? If not (and efficacy must be measured across different communities), there is no reason to consider its use any further. Is the use of the app equitable? What are the social and legal protections for people who receive an exposure notification? Does a contact-tracing app improve public health more effectively than other efforts? Does the public support its use? Without public support, apps fail.
The next pandemic will be different from COVID-19. Now is the time to decide what sorts of medical and social interventions we will make and what choices we want. The choices we make now will reverberate forever.
December 09, 2020
Umesh Vazirani, UC Berkeley
Umesh Vazirani is the Roger A. Strauch Professor of Electrical Engineering and Computer Science at UC Berkeley, and a member of the National Academy of Sciences. One of the pioneers of Quantum Computation, he is the director of the Berkeley Quantum Computation Center.
The recent demonstration of quantum supremacy by Google is a first step towards the era of small- to medium-scale quantum computers. In this talk I will explain what the experiment accomplished and the theoretical work it is based on, as well as what it did not accomplish and the many theoretical and practical challenges that remain. I will also describe recent breakthroughs in the design of protocols for the testing and benchmarking of quantum computers, a task that has deep computational and philosophical implications. Specifically, this leads to protocols for scalable and verifiable quantum supremacy, certifiable quantum random number generation, and verification of quantum computation.
The talk will be aimed at a broad audience.