September 09, 2019
Anna Karlin, University of Washington
Anna R. Karlin is the Microsoft Professor of Computer Science and Engineering at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She received her Ph.D. from Stanford University in 1987. Before coming to the University of Washington, she spent five years as a researcher at (what was then) Digital Equipment Corporation's Systems Research Center. Her research is primarily in theoretical computer science: the design and analysis of algorithms, particularly algorithmic game theory and probabilistic and online algorithms. She is coauthor of the book "Game Theory, Alive", published in 2017 by the American Mathematical Society. She is a Fellow of the Association for Computing Machinery and a member of the American Academy of Arts and Sciences.
The traveling saleswoman problem (TSP) is one of the most celebrated and well-studied NP-complete problems we know. A classic result from Christofides in the 70s tells us that there is a fast algorithm that returns a solution whose cost is at most 3/2 times that of the optimal one. Since then, however, no better approximation algorithm has been found. Finding a polynomial-time algorithm that breaks this 3/2 barrier is a famous open problem in theoretical computer science. In this talk, we will give an overview of research toward this goal and describe a new sub-3/2 approximation algorithm for an interesting special case, the so-called "half-integral" TSP instances.
This is joint work with Nathan Klein and Shayan Oveis Gharan.
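Christofides' guarantee comes from an explicit construction: build a minimum spanning tree, add a minimum-cost perfect matching on its odd-degree vertices, and shortcut an Euler tour. As a small illustration (a sketch of the simpler MST-doubling variant, which already gives a 2-approximation on metric instances, not Christofides' matching step), assuming the instance is given as a complete symmetric distance matrix:

```python
def mst_preorder_tour(dist):
    """2-approximation for metric TSP: preorder walk of a minimum
    spanning tree, with repeated vertices shortcut away.
    (Christofides improves this to 3/2 by adding a min-cost perfect
    matching on the odd-degree MST vertices.)"""
    n = len(dist)
    # Prim's algorithm: grow an MST from vertex 0.
    in_tree = [False] * n
    parent = [0] * n
    best = [float("inf")] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v] = dist[u][v]
                parent[v] = u
    # A preorder walk of the MST, skipping repeats, is the shortcut tour;
    # by the triangle inequality its cost is at most twice the MST weight.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_cost(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```

Since the optimal tour minus one edge is a spanning tree, the MST weight lower-bounds the optimum, which gives the factor-2 bound; Christofides' matching step is what sharpens this to 3/2.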
(Event video not available.)
September 18, 2019
Zeynep Tufekci, University of North Carolina - Chapel Hill
Zeynep Tufekci is an associate professor at the University of North Carolina, the author of Twitter and Tear Gas: The Power and Fragility of Networked Protest (Yale University Press), an opinion writer at the New York Times, and a columnist for Scientific American and Wired. Her research revolves around the interaction between society and technology, with a special focus on how big data, algorithms, and artificial intelligence are affecting social and political structures, the workings of power, politics, and resistance.
As software "eats the world," more and more questions arise from the interaction of computing systems with society, including many unexpected questions and novel challenges. It is hard to ponder a world in which speculative branch prediction deep in processor microarchitecture interacts, for example, with human rights and keeping dissidents and minorities safe, and in which machine learning artifacts and features can affect potentially billions of lives, for better or worse. This has put a lot of stress on both computer science researchers and social scientists as they race to provide understanding and solutions, often while big tech companies and governments have different priorities. This talk is a reflection on this state of affairs as well as a discussion of how we could proceed, especially in the academy.
Host: Steve Bellovin
(Event video not available.)
September 25, 2019
Toniann Pitassi, University of Toronto
Toniann Pitassi received bachelor's and master's degrees from Pennsylvania State University and a PhD from the University of Toronto in 1992. She then spent two years as a postdoc at UCSD before teaching at the University of Pittsburgh and then at the University of Arizona. In the fall of 2001, Pitassi moved back to Toronto, where she is currently a professor in the Computer Science Department, with a joint appointment in Mathematics.
Ever since Yao introduced the communication complexity model in 1979, it has played a pivotal role in our understanding of limitations for a wide variety of problems in Computer Science. In this talk, I will present the lifting method, whereby communication lower bounds are obtained by lifting much simpler lower bounds. I will show how lifting theorems have been used to solve many open problems in a variety of areas of computer science, including: circuit complexity, proof complexity, combinatorial optimization (size of linear programming formulations), cryptography (linear secret sharing schemes), game theory and privacy. Time permitting, I will sketch a proof of a unified lifting theorem and discuss some new applications to pseudodeterminism.
September 30, 2019
Richard Zemel, University of Toronto
Richard Zemel is a Professor of Computer Science and Industrial Research Chair in Machine Learning at the University of Toronto, and a co-founder and the Research Director at the Vector Institute for Artificial Intelligence. Prior to that he was on the faculty at the University of Arizona, and a Postdoctoral Fellow at the Salk Institute and at CMU. He received the B.Sc. in History & Science from Harvard, and a Ph.D. in Computer Science from the University of Toronto. His awards and honors include a Young Investigator Award from the ONR and a US Presidential Scholar award. He is a Senior Fellow of the Canadian Institute for Advanced Research, an NVIDIA Pioneer of AI, and a member of the NeurIPS Advisory Board.
Machine learning models, and more specifically deep neural networks, are achieving state-of-the-art performance on difficult pattern-recognition tasks such as object recognition, speech recognition, drug discovery, and more. However, deep networks are notoriously difficult to understand, both in how they arrive at their responses and in how those responses can be influenced. As these systems become more prevalent in real-world applications, it is essential to allow users to exert more control over the learning system. In particular, a wide range of applications can be facilitated by imposing some structure on the learned representations, enabling users to manipulate, interpret, and in some cases obfuscate them. In this talk I will discuss recent work that takes some steps toward these goals, allowing users to interact with and control representations.
Host: David Blei
November 11, 2019
Shafi Goldwasser, UC Berkeley
Shafi Goldwasser is the Director of the Simons Institute for the Theory of Computing and a professor of computer science at UC Berkeley. She is also the RSA Professor of Electrical Engineering and Computer Science at MIT, and a professor of computer science and applied mathematics at the Weizmann Institute of Science in Israel. Goldwasser received a BS in applied mathematics from Carnegie Mellon University in 1979, and an MS and PhD in computer science from UC Berkeley in 1984. Goldwasser was the recipient of the ACM Turing Award for 2012. She was also the recipient of the Gödel Prize in 1993 and again in 2001 for her work on interactive proofs and connections to approximation, and was awarded the ACM Grace Murray Hopper Award (1996), the RSA Award in Mathematics (1998), the ACM Athena Award for women in computer science (2008), the Benjamin Franklin Medal in Computer and Cognitive Science (2010), the IEEE Emanuel R. Piore Award (2011), the Barnard College Medal of Distinction (2016), and the Suffrage Science Award (2016). She is a member of the AAAS, ACM, NAE, NAS, the Israeli Academy of Sciences, the London Mathematical Society, and the Russian Academy of Sciences.
Cryptography and Computational Learning have shared a curious history: a scientific success for one has often provided an example of an impossible task for the other. Today, the goals of the two fields are aligned. Cryptographic models and tools can and should play a role in ensuring the safe use of machine learning. We will discuss this development with its challenges and opportunities.
Host: Jeannette Wing
(Event video not available.)
November 13, 2019
Srini Devadas, MIT
Srini Devadas is the Webster Professor of EECS at MIT, where he has been on the faculty since 1988. His current research interests are in computer security, computer architecture, and applied cryptography. Devadas received the 2015 ACM/IEEE Richard Newton Award, the 2017 IEEE W. Wallace McDowell Award, and the 2018 IEEE Charles A. Desoer Award for his research in secure hardware. He is a Fellow of the ACM and IEEE. He is a MacVicar Faculty Fellow and a recipient of the Everett Moore Baker and Bose awards, which are considered MIT's highest teaching honors.
As the world becomes more connected, privacy is becoming harder to maintain. From social media services to Internet service providers to state-sponsored mass-surveillance programs, many outlets collect sensitive information about users and the communication between them, often without the users ever knowing about it. In response, many Internet users have turned to encrypted communication tools such as Signal and TLS to protect the content of their communication. Unfortunately, these tools do little to hide the metadata of the communication, such as when and with whom a user is communicating. In scenarios where the metadata are sensitive, encryption alone is not sufficient to ensure users' privacy.
Most prior systems that provide metadata-private communication fall into one of two categories: systems that (1) provide formal privacy guarantees against global adversaries but do not scale to large numbers of users, or (2) scale easily to a large user base but do not provide strong guarantees against global adversaries. I will present two systems that aim to bridge the gap between the two categories to enable private communication with strong guarantees for many millions of users: Quark, a horizontally scalable anonymous broadcast system that defends against a global adversary who monitors the entire network and controls a significant fraction of the servers, and Crossroads, which provides metadata-private communication between two honest users against the same adversary using a novel cryptographic primitive.
This talk is based on Albert Kwon's recently completed MIT PhD dissertation.
Host: Simha Sethumadhavan
November 25, 2019
Barbara Grosz, Harvard University
Computing technologies have become pervasive in daily life, sometimes bringing unintended but harmful consequences. For students to learn to think not only about what technology they could create but also about whether they should create it, and to recognize the ethical considerations that should constrain their designs, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard's Embedded EthiCS initiative, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new ones. The talk will begin with a short description of my experiences teaching the course "Intelligent Systems: Design and Ethical Challenges," which inspired the design of Embedded EthiCS. It will then describe the goals behind the design, the way the program works, lessons learned, and the challenges of implementing such a program sustainably across different types of academic institutions.
Host: Augustin Chaintreau
December 02, 2019
Bill Freeman, MIT
Bill Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, joining the faculty in 2001. His current research interests include motion re-rendering, computational photography, and learning for vision. Freeman received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and won "Test of Time" awards for papers written in 1990, 1995, and 2005. Previous research topics include steerable filters and pyramids, the generic viewpoint assumption, color constancy, bilinear models for separating style and content, and belief propagation in networks with loops. He holds over 30 patents.
Vision and audition are rich perceptual channels that provide opportunities for cross-modal learning. To learn the visual appearance of different materials, we ask, "What can you tell if you hit everything you see with a drumstick?" This is joint work with Andrew Owens and others at MIT.
I will also present "Looking to Listen," a research project from my team at Google. Given video filmed in a noisy environment with multiple speakers, we filter the noisy audio to play only the sound from a selected speaker. This may have applications in closed captioning, assistive devices, and human-computer interaction.
Host: Carl Vondrick
February 20, 2020
Umesh Vazirani, UC Berkeley
Umesh Vazirani is the Roger A. Strauch Professor of Electrical Engineering and Computer Science at UC Berkeley, and a member of the National Academy of Sciences. One of the pioneers of Quantum Computation, he is the director of the Berkeley Quantum Computation Center.
The lecture has been postponed and will be rescheduled for a future date in March, to be announced.
The recent demonstration of quantum supremacy by Google is a first step toward the era of small- to medium-scale quantum computers. In this talk, I will explain what the experiment accomplished and the theoretical work it is based on, as well as what it did not accomplish and the many theoretical and practical challenges that remain. I will also describe recent breakthroughs in the design of protocols for the testing and benchmarking of quantum computers, a task that has deep computational and philosophical implications. Specifically, this leads to protocols for scalable and verifiable quantum supremacy, certifiable quantum randomness generation, and verification of quantum computation.
The talk will be aimed at a broad audience.
March 13, 2020
Marine Carpuat, University of Maryland
Marine Carpuat is an Assistant Professor in Computer Science at the University of Maryland. Her research focuses on multilingual natural language processing and machine translation. Before joining the faculty at Maryland, she was a Research Scientist at the National Research Council Canada. She received a PhD in Computer Science and an MPhil in Electrical Engineering from the Hong Kong University of Science & Technology, and a Diplôme d'Ingénieur from the French Grande École Supélec. She is the recipient of an NSF CAREER award, research awards from Google and Amazon, best paper awards at *SEM and TALN, and an Outstanding Teaching Award.
Despite the explosion of online content worldwide, much information remains isolated by language barriers. While deep neural networks have dramatically improved machine translation (MT), truly breaking language barriers requires not only translating accurately, but also understanding what is said and how it is said across languages. I will first challenge the assumption that translation always preserves meaning, and discuss how to automatically detect when the meaning of a translation diverges from its source. Next, I will show how modeling divergences between MT model hypotheses and reference human translations can improve MT. Finally, I will argue that translation does not necessarily need to preserve all properties of the input and introduce a family of models that let us tailor translation style while preserving input meaning. Taken together, these results illustrate how modeling divergences from common assumptions about translation data can not only improve MT, but also broaden the framing of MT to make it more responsive to user needs.
April 01, 2020
Hal Daume, University of Maryland
Hal Daumé III is a Perotto Chair Professor in Computer Science and Language Science at the University of Maryland, and a Senior Principal Researcher at Microsoft Research. His research focuses on developing learning algorithms for natural language processing, with a focus on interactive learning methods, and techniques for mitigating harms that can arise from automated systems. He earned his Ph.D. from the University of Southern California in 2006, was an inaugural diversity and inclusion co-chair at NeurIPS 2018, is an action editor for TACL, and is program co-chair for ICML 2020.
To have the broadest possible positive impact, machine learning-based natural language processing systems must be able to (a) learn when limited training data exists for the target tasks, languages (and varieties), and domains of interest, and (b) identify and mitigate potential harms in their use, in particular arising from the signals on which they are trained. I will first present new algorithms and applications for learning language processing systems through interaction with people, where implicit and/or explicit user feedback drives learning. I will then discuss learning challenges around "fairness" and how such interactive learning mechanisms can help address them.