September 09, 2019

Anna Karlin, University of Washington

Towards an improved approximation algorithm for the Travelling Saleswoman Problem

Anna R. Karlin is the Microsoft Professor of Computer Science and Engineering at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She received her Ph.D. from Stanford University in 1987. Before coming to the University of Washington, she spent five years as a researcher at (what was then) Digital Equipment Corporation's Systems Research Center. Her research is primarily in theoretical computer science: the design and analysis of algorithms, particularly algorithmic game theory and probabilistic and online algorithms. She is a coauthor of the book “Game Theory, Alive”, published in 2017 by the American Mathematical Society. She is a Fellow of the Association for Computing Machinery and a member of the American Academy of Arts and Sciences.


The traveling saleswoman problem (TSP) is one of the most celebrated and well-studied NP-hard optimization problems. A classic result of Christofides from the 1970s gives a polynomial-time algorithm that returns a tour of cost at most 3/2 times the optimum. Since then, however, no better approximation algorithm has been found. Finding a polynomial-time algorithm that breaks this 3/2 barrier is a famous open problem in theoretical computer science. In this talk, we will give an overview of research towards this goal and describe a new sub-3/2 approximation algorithm for an interesting special case: so-called "half-integral" TSP instances.
This is joint work with Nathan Klein and Shayan Oveis Gharan.
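As background for the 3/2 barrier discussed above, here is a minimal sketch of the classical MST-doubling 2-approximation for metric TSP, the simpler algorithm that Christofides improves to 3/2 by adding a minimum-weight matching on the odd-degree MST vertices. This is an illustrative sketch only, not the algorithm presented in the talk; the function and instance names are hypothetical.

```python
import math

def mst_tsp_tour(points):
    """2-approximation for Euclidean (metric) TSP: take a preorder
    walk of a minimum spanning tree and shortcut repeated vertices.
    By the triangle inequality the tour costs at most twice the MST
    weight, which is itself a lower bound on the optimal tour."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm: grow an MST from vertex 0, recording children.
    in_tree = {0}
    children = {i: [] for i in range(n)}
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda i: best[i][0])
        _, parent = best.pop(v)
        in_tree.add(v)
        children[parent].append(v)
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)

    # Preorder DFS of the MST visits every vertex exactly once,
    # implicitly shortcutting the doubled-edge Euler walk.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour

points = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0.5)]
tour = mst_tsp_tour(points)
```

Christofides' refinement replaces the "double every MST edge" step with a minimum-weight perfect matching on the odd-degree vertices of the MST, which is what brings the guarantee from 2 down to 3/2.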

Event video not available.

September 18, 2019

Zeynep Tufekci, University of North Carolina - Chapel Hill

As Computer Science and Social Sciences Converge: Reflections on Complex Systems and Their Challenges

Zeynep Tufekci is an associate professor at the University of North Carolina at Chapel Hill, the author of Twitter and Tear Gas: The Power and Fragility of Networked Protest (Yale University Press), an opinion writer at the New York Times, and a columnist for Scientific American and Wired. Her research revolves around the interaction between society and technology, with a special focus on how big data, algorithms, and artificial intelligence are affecting social and political structures and the workings of power, politics, and resistance.


As software "eats the world," more and more questions arise from the interaction of computing systems with society, many of them unexpected and posing novel challenges. It would once have been hard to imagine a world in which speculative execution in processor microarchitecture interacts with human rights and with keeping dissidents and minorities safe, and in which machine learning artifacts and features can affect potentially billions of lives, for better or worse. This has put a lot of stress on both computer science researchers and social scientists as they race to provide understanding and solutions, often while big tech companies and governments have different priorities. This talk is a reflection on this state of affairs as well as a discussion of how we could proceed, especially in the academy.

Host: Steve Bellovin

Event video not available.

September 25, 2019

Toniann Pitassi, University of Toronto

Lifting Lower Bounds, Communication, and Proofs

Toniann Pitassi received bachelor's and master's degrees from Pennsylvania State University and a PhD from the University of Toronto in 1992. She then spent two years as a postdoc at UCSD and taught at the University of Pittsburgh and, later, the University of Arizona. In the fall of 2001, Pitassi moved back to Toronto, where she is currently a professor in the Computer Science Department, with a joint appointment in Mathematics.


Ever since Yao introduced the communication complexity model in 1979, it has played a pivotal role in our understanding of limitations for a wide variety of problems in computer science. In this talk, I will present the lifting method, whereby communication lower bounds are obtained by lifting much simpler lower bounds. I will show how lifting theorems have been used to solve many open problems in a variety of areas of computer science, including circuit complexity, proof complexity, combinatorial optimization (size of linear programming formulations), cryptography (linear secret sharing schemes), game theory, and privacy. Time permitting, I will sketch a proof of a unified lifting theorem and discuss some new applications to pseudodeterminism.

Watch Video of Event

September 30, 2019

Richard Zemel, University of Toronto

Controlling the Black Box: Learning Manipulable and Fair Representations

Richard Zemel is a Professor of Computer Science and Industrial Research Chair in Machine Learning at the University of Toronto, and a co-founder and the Research Director of the Vector Institute for Artificial Intelligence. Prior to that he was on the faculty at the University of Arizona, and a Postdoctoral Fellow at the Salk Institute and at CMU. He received a B.Sc. in History & Science from Harvard and a Ph.D. in Computer Science from the University of Toronto. His awards and honors include a Young Investigator Award from the ONR and a US Presidential Scholar award. He is a Senior Fellow of the Canadian Institute for Advanced Research, an NVIDIA Pioneer of AI, and a member of the NeurIPS Advisory Board.


Machine learning models, and more specifically deep neural networks, are achieving state-of-the-art performance on difficult pattern-recognition tasks such as object recognition, speech recognition, drug discovery, and more. However, deep networks are notoriously difficult to understand, both in how they arrive at their responses and in how those responses can be affected. As these systems become more prevalent in real-world applications, it is essential to allow users to exert more control over the learning system. In particular, a wide range of applications can be facilitated by imposing some structure on the learned representations, enabling users to manipulate, interpret, and in some cases obfuscate them. In this talk I will discuss recent work that takes some steps towards these goals, allowing users to interact with and control representations.

Host: David Blei

November 11, 2019

Shafi Goldwasser, UC Berkeley

Safe Machine Learning

Shafi Goldwasser is the Director of the Simons Institute for the Theory of Computing and a professor of computer science at UC Berkeley. She is also the RSA Professor of Electrical Engineering and Computer Science at MIT and a professor of computer science and applied mathematics at the Weizmann Institute of Science in Israel. Goldwasser received a BS in applied mathematics from Carnegie Mellon University in 1979, and MS and PhD degrees in computer science from UC Berkeley in 1984. Goldwasser was the recipient of the ACM Turing Award for 2012. She also received the Gödel Prize in 1993 and again in 2001 for her work on interactive proofs and connections to approximation, and was awarded the ACM Grace Murray Hopper Award (1996), the RSA Award in Mathematics (1998), the ACM Athena Lecturer Award for women in computer science (2008), the Benjamin Franklin Medal in Computer and Cognitive Science (2010), the IEEE Emanuel R. Piore Award (2011), the Barnard College Medal of Distinction (2016), and the Suffrage Science Award (2016). She is a member of the AAAS, the ACM, the NAE, the NAS, the Israel Academy of Sciences and Humanities, the London Mathematical Society, and the Russian Academy of Sciences.


Cryptography and Computational Learning have shared a curious history: a scientific success for one has often provided an example of an impossible task for the other. Today, the goals of the two fields are aligned. Cryptographic models and tools can and should play a role in ensuring the safe use of machine learning. We will discuss this development with its challenges and opportunities.

Host: Jeannette Wing

November 13, 2019

Srini Devadas, MIT

Towards anonymous and metadata private communication at Internet scale

Srini Devadas is the Webster Professor of EECS at MIT, where he has been on the faculty since 1988. His current research interests are in computer security, computer architecture, and applied cryptography. Devadas received the 2015 ACM/IEEE Richard Newton award, the 2017 IEEE W. Wallace McDowell award, and the 2018 IEEE Charles A. Desoer award for his research in secure hardware. He is a Fellow of the ACM and the IEEE. He is a MacVicar Faculty Fellow and a recipient of the Everett Moore Baker and Bose awards, considered MIT's highest teaching honors.


As the world becomes more connected, privacy is becoming harder to maintain. From social media services to Internet service providers to state-sponsored mass-surveillance programs, many outlets collect sensitive information about users and the communication between them – often without the users ever knowing about it. In response, many Internet users have turned to end-to-end encrypted tools such as Signal, or to transport encryption such as TLS, to protect the content of their communication. Unfortunately, these tools do little to hide the metadata of the communication, such as when and with whom a user is communicating. In scenarios where the metadata are sensitive, encryption alone is not sufficient to ensure users’ privacy.
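To illustrate the point about metadata, here is a toy sketch (not from the talk; all names are hypothetical, and hashing stands in for real encryption) showing how a passive network observer learns who talks to whom and when, even when message contents are completely hidden:

```python
import hashlib
import time

def send(observer_log, sender, receiver, plaintext):
    """Toy model of an encrypted channel under passive surveillance.
    The payload is hidden (SHA-256 stands in for encryption), but the
    observer still records the (sender, receiver, time) metadata."""
    ciphertext = hashlib.sha256(plaintext.encode()).hexdigest()
    # The observer never sees the plaintext, only the traffic pattern.
    observer_log.append({"from": sender, "to": receiver, "time": time.time()})
    return ciphertext

observer_log = []
send(observer_log, "alice", "bob", "meet at noon")
send(observer_log, "alice", "bob", "bring the documents")
# The pattern (alice -> bob, twice, close together in time) is fully
# visible even though the message contents are not.
```

Metadata-private systems like those in the talk aim to hide exactly this observer log, not just the ciphertexts.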

Most prior systems that provide metadata private communication fall into one of two categories: systems that (1) provide formal privacy guarantees against global adversaries but do not scale to large numbers of users, or (2) scale easily to a large user base but do not provide strong guarantees against global adversaries. I will present two systems that aim to bridge the gap between the two categories to enable private communication with strong guarantees for many millions of users: Quark, a horizontally scalable anonymous broadcast system that defends against a global adversary who monitors the entire network and controls a significant fraction of the servers, and Crossroads, which provides metadata private communication between two honest users against the same adversary using a novel cryptographic primitive.

This talk is based on Albert Kwon's recently completed MIT PhD dissertation.

Host: Simha Sethumadhavan

November 25, 2019

Barbara Grosz, Harvard University

From Ethical Challenges of Intelligent Systems to Embedding Ethics in Computer Science Education


Computing technologies have become pervasive in daily life, sometimes bringing unintended but harmful consequences. For students to learn to think not only about what technology they could create, but also about whether they should create it and to recognize the ethical considerations that should constrain their designs, computer science curricula must expand to include ethical reasoning about the societal value and impact of these technologies. This talk will describe Harvard's Embedded EthiCS initiative, a novel approach to integrating ethics into computer science education that incorporates ethical reasoning throughout courses in the standard computer science curriculum. It changes existing courses rather than requiring wholly new courses. The talk will begin with a short description of my experiences teaching the course "Intelligent Systems: Design and Ethical Challenges" that inspired the design of Embedded EthiCS. It will then describe the goals behind the design, the way the program works, the lessons learned, and the challenges to sustainable implementation of such a program across different types of academic institutions.

Host: Augustin Chaintreau

December 02, 2019

Bill Freeman, MIT

Distinguished Lecture - Bill Freeman
