2020-2021 DISTINGUISHED LECTURE SERIES

September 23, 2020

Costis Daskalakis, MIT

Three ways Machine Learning fails and what to do about them

Bio:
Constantinos (aka “Costis”) Daskalakis is a Professor of Electrical Engineering and Computer Science at MIT. He holds a Diploma in Electrical and Computer Engineering from the National Technical University of Athens and a Ph.D. in Electrical Engineering and Computer Science from UC Berkeley, where he was advised by Christos Papadimitriou. He works on Computation Theory and its interface with Game Theory, Economics, Probability Theory, Machine Learning, and Statistics. He has been honored with the ACM Doctoral Dissertation Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft Research Faculty Fellowship, the Simons Investigator Award, the Rolf Nevanlinna Prize from the International Mathematical Union, the ACM Grace Murray Hopper Award, and the Bodossaki Foundation Distinguished Young Scientists Award.

Abstract:

A common assumption in machine learning and statistics is the existence of training data comprising independent observations from the entire distribution of relevant data. In practice, data deviates from this assumption in various ways. Data might be biased samples from the distribution of interest, due to systematic selection bias, societal biases, incorrect experimental design, or legal restrictions that prevent the use of all available data. Moreover, observations might be collected on a social network or over a spatial or temporal domain, and may thus be intricately dependent rather than independent. Finally, data might be affected by the choices of other learning agents who are learning and making decisions in the same environment where our data is collected and must be acted upon. In the presence of these deviations from the standard i.i.d. model, naively trained models fail. In this talk, we overview recent work with various collaborators that suggests avenues for addressing the resulting challenges through a combined approach involving tools from truncated statistics, high-dimensional probability, statistical physics, and game theory.

Watch Video of Event

October 07, 2020

Meredith Ringel Morris, Microsoft Research

Collaboration as a Lens for Inclusive Technical Innovation

Bio:
Meredith Ringel Morris is a Sr. Principal Researcher at Microsoft Research and Research Area Manager for Interaction, Accessibility, and Mixed Reality. She founded Microsoft Research’s Ability research group and is a member of the lab’s Leadership Team. She is also an Affiliate Professor at the University of Washington in the Allen School of Computer Science & Engineering and in The Information School. Dr. Morris is an expert in Human-Computer Interaction; in 2020, she was inducted into the ACM SIGCHI Academy in recognition of her research in collaborative and social computing. Her research on collaboration and social technologies has contributed new systems, methods, and insights to diverse areas of computing including gesture interaction, information retrieval, and accessibility. Dr. Morris earned her Sc.B. in Computer Science from Brown University and her M.S. and Ph.D. in Computer Science from Stanford University.

Abstract:

In this lecture, I show how considering the intersection of collaborative and social scenarios with other domains of computing can reveal end-user needs and result in innovative technical systems. I give examples of this approach from my work in gesture interaction, information retrieval, and accessibility, focusing particularly on the topics of creating more efficient and expressive augmentative and alternative communication technologies and of making social media more accessible to screen reader users. I close by identifying future opportunities for creating inclusive collaboration and social technologies.

October 19, 2020

Yejin Choi, University of Washington

Intuitive Reasoning as (Un)supervised Neural Generation

Bio:
Yejin Choi is a Brett Helsel associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research manager at AI2 overseeing the project Mosaic. Her research interests include commonsense knowledge and reasoning, neural language (de-)generation, language grounding, and AI for social good. She is a co-recipient of the AAAI Outstanding Paper Award in 2020, the Borg Early Career Award (BECA) in 2018, IEEE’s AI Top 10 to Watch in 2015, the ICCV Marr Prize in 2013, and the inaugural Alexa Prize Challenge in 2017.

Abstract:

Neural language models, as they grow in scale, continue to surprise us with utterly nonsensical and counterintuitive errors despite their otherwise remarkable performances on leaderboards. In this talk, I will argue that it is time to break out of the currently dominant paradigm of sequence-to-sequence models with task-specific supervision built on top of large-scale pre-trained neural networks.

First, I will argue for unsupervised inference-time algorithms to make better lemonade out of neural language models. As examples, I will demonstrate how unsupervised decoding algorithms can elicit advanced reasoning capabilities such as non-monotonic reasoning (e.g., counterfactual and abductive reasoning) out of off-the-shelf left-to-right language models, and how in some controlled text generation benchmarks, unsupervised decoding can match or even outperform supervised approaches.

Next, I will highlight the importance of melding explicit and declarative knowledge encoded in symbolic knowledge graphs with implicit and observed knowledge encoded in neural language models. As a concrete case study, I will present Social Chemistry 101, a new conceptual formalism, a knowledge graph, and neural models to reason about social, moral, and ethical norms.

Watch Video of Event

November 04, 2020

Joan Feigenbaum, Yale

Computer Science and Law: Opportunities and Research Directions

Bio:
Joan Feigenbaum is the Grace Murray Hopper Professor of Computer Science at Yale University. She received a BA in Mathematics from Harvard and a Ph.D. in Computer Science from Stanford. Between finishing her Ph.D. in 1986 and starting at Yale in 2000, she was with AT&T, where she participated broadly in the company's Information-Sciences research agenda, e.g., by creating a research group in Algorithms and Distributed Data, of which she was the manager in 1998-99. Professor Feigenbaum's research interests include security, privacy, anonymity, and accountability; Internet algorithmics; and computational complexity. Her recent service contributions to the research community include Program Chair of the ACM Symposium on Theory of Computing (2013), Department Chair of the Yale Computer Science Department (July 2014 through June 2017), General Chair of the inaugural ACM Symposium on Computer Science and Law (2019), and ACM Vice President (July 2020 through June 2022). Professor Feigenbaum is an Amazon Scholar, a Fellow of the ACM, a Fellow of the AAAS, a Connecticut Technology Council Woman of Innovation, and a winner of the Test-of-Time Award from the IEEE Symposium on Security and Privacy for her 1996 paper (with Matt Blaze and Jack Lacy) entitled "Decentralized Trust Management."

Abstract:

Computer scientists have often treated law as though it can be reduced purely to a finite set of rules about which the only meaningful computational questions are those of decidability and complexity. Similarly, legislators and policy makers have often advocated general, imprecisely defined requirements and assumed that the tech industry could solve whatever technical problems arose in the design and implementation of products and services that conform to those requirements. The research area of Computer Science and Law seeks to replace these flawed, single-discipline approaches with a multidisciplinary focus on the co-development of computing techniques, laws, and public policies. This talk will present ongoing efforts and open problems in this emerging area.

November 09, 2020

Joel Emer, NVIDIA/MIT

Data Orchestration is the New Compute: Computer Architecture for the Post-Moore Era

Bio:
For over 40 years, Joel Emer has held various research and advanced development positions investigating processor microarchitecture and developing performance modeling and evaluation techniques. He has made architectural contributions to a number of VAX, Alpha, and X86 processors and is recognized as one of the developers of the widely employed quantitative approach to processor performance evaluation. More recently, he has been recognized for his contributions in the advancement of deep learning accelerator design, spatial and parallel architectures, processor reliability analysis, cache organization, and simultaneous multithreading. Currently, he is a professor at the Massachusetts Institute of Technology and works part-time as a Senior Distinguished Research Scientist in Nvidia's Architecture Research group. Previously, he worked at Intel, where he was an Intel Fellow and Director of Microarchitecture Research. Even earlier, he worked at Compaq and Digital Equipment Corporation. He earned a doctorate in electrical engineering from the University of Illinois in 1979. He received a bachelor's degree with highest honors in electrical engineering in 1974, and his master's degree in 1975 -- both from Purdue University. Recognitions of his contributions include an ACM/SIGARCH-IEEE-CS/TCCA Most Influential Paper Award for his work on simultaneous multithreading and six other papers that were selected as IEEE Micro's Top Picks in Computer Architecture. Among his professional honors, he is a Fellow of both the ACM and IEEE and a member of the NAE. In 2009 he was a recipient of the Eckert-Mauchly Award for lifetime contributions in computer architecture.

Abstract:

Recent history is replete with examples of new applications that have changed the course of computing and the world. These include the spreadsheet, visual editing, graphics, networking, and many more. Behind each of these advances were programs developed on easily-programmable and ever-faster processors. Unfortunately, as is widely acknowledged, the technological trend articulated by Moore's Law, which contributed significantly to creating the "ever faster" part of that recipe, is dead (or at least slowing significantly). However, as outlined in our "Science" article, "There's plenty of room at the top", there is promise in continuing Moore's Law-like improvements through a multi-pronged approach that includes software performance engineering, algorithm improvements, and hardware architecture advances. Among the researchers focusing on the hardware architecture prong, many advocate significant specialization of the hardware to specific domains, which will typically be well-understood and of widely-acknowledged importance. This approach, however, is likely to impede the development of the next big application because there will be no generally-programmable platform on which to develop it. Therefore, I believe that the biggest challenge in evolving hardware architectures in the post-Moore era lies in striking the right balance between preserving broad programmability and enhancing efficiency. In this talk, I will discuss how we have approached that challenge by focusing on the aspects of the hardware that give the most leverage for improving efficiency and by providing abstractions that make it possible to compile to the new hardware. More specifically, since data movement has become the dominant consumer of energy, I will describe structures that facilitate "data orchestration", reducing and optimizing data movement. I will also describe abstractions intended to make it possible to compile high-level programs to these new hardware structures.

November 16, 2020

Susan Landau, Tufts

Distinguished Lecture - Susan Landau

Bio:
Susan Landau works at the intersection of cyber security, national security, law, and policy. She has testified before Congress, has written for the Washington Post, Science, and Scientific American, and appears frequently on NPR and the BBC. Her previous positions include senior staff privacy analyst at Google, distinguished engineer at Sun Microsystems, and faculty member at Worcester Polytechnic Institute, the University of Massachusetts Amherst, and Wesleyan University.
