Fall 2016

DISTINGUISHED LECTURE SERIES

Programming a Warehouse-Scale Computer
Luiz André Barroso
Google
Wednesday, October 12, 2016
ABSTRACT: Public clouds are quickly making massive-scale computing capabilities available to an ever larger population of programmers, and are no longer a playground restricted to a handful of institutions, such as national labs or large Internet services companies. In this talk I will highlight some of the features of this new class of computers, the challenges faced by their programmers, and tools/techniques we have developed to address some of those challenges.
BIOGRAPHY: Luiz André Barroso is a Google Fellow, and the VP of Engineering for the Geo Platform team, the group responsible for collecting and curating the maps, local, and imagery data that power Google consumer products (such as Google Search and Google Maps). While at Google he has co-authored some well-cited articles on warehouse-scale computing, energy-efficient computing, and storage system reliability. He also co-wrote “The Datacenter as a Computer”, the first textbook to describe the architecture of warehouse-scale computing systems, now in its 2nd edition. Before Google, he was a member of the research staff at Digital Equipment (later Compaq), where his group did some of the pioneering research on modern multi-core architectures.
The Future of Cybersecurity
Dan Geer
In-Q-Tel
Monday, September 26, 2016
ABSTRACT: Predicting the future of risk is itself risky, so why bother trying? The answer is time. Shortening the long tail of upgrade may ultimately require force, but the deployment of force takes time to be done well. Future-proof strategies are a kind of answer but if, and only if, the rate constant of their deployment is quicker than the rate constant of innovation in the opponent’s kit. Adaptive, autonomous technologies promise faster event detection and response, but such technologies are inherently dual use, and, in any case, the more optimized an algorithm is, the harder it is to know what the algorithm is really doing. This talk will make predictions — predictions contingent on the answers to questions we now face.
BIOGRAPHY: Dr. Dan Geer is a computer security analyst and risk management specialist, recognized for raising awareness of critical computer and network security issues before the risks were widely understood, and for ground-breaking work on the economics of security. While at MIT he led the design of the X Window System and Kerberos; he also established the first information security consulting firm on Wall Street, convened the first academic conference on electronic commerce, delivered the “Risk Management is Where the Money Is” speech that changed the focus of security, was President of the USENIX Association, made the first call for the eclipse of authentication by accountability, was principal author of and spokesman for “Cyberinsecurity: The Cost of Monopoly”, co-founded SecurityMetrics.Org, convened MetriCon, authored “Economics & Strategies of Data Security” and “Cybersecurity & National Policy”, and created the Index of Cyber Security and the Cyber Security Decision Market. He is currently Chief Scientist at In-Q-Tel, the investment arm of the US intelligence community. In addition, Dan is a founder of six companies, and has testified five times before Congress. Dan has a BS in Electrical Engineering and Computer Science from MIT and a Sc.D. in biostatistics from Harvard.
What’s this movie about? Automatic Content Analysis and Summarization
Mirella Lapata
University of Edinburgh
Wednesday, October 5, 2016
ABSTRACT: Movie analysis is an umbrella term for many tasks aiming to automatically interpret, extract, and summarize the content of a movie. Potential applications include generating shorter versions of scripts to help with the decision-making process in a production company, enhancing movie recommendation engines by abstracting over specific keywords to more general concepts (e.g., thrillers with psychopaths), and notably generating movie previews.
In this talk I will illustrate how NLP-based models together with video analysis can be used to facilitate various steps in the movie production pipeline. I will formalize the process of generating a shorter version of a movie as the task of finding an optimal chain of scenes and present a graph-based model that selects a chain by jointly optimizing its logical progression, diversity, and importance. I will then apply this framework to screenplay summarization, a task which could enhance script browsing and speed up reading time. I will also show that by aligning the screenplay to the movie, the model can generate movie previews with minimal modification. Finally, I will discuss how the computational analysis of movies can lead to tools that automatically create movie “profiles” which give a first impression of the movie by describing its plot, mood, location, or style.
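To make the chain-selection formulation above a little more concrete, here is a minimal, purely illustrative sketch (not the speaker's actual model): a greedy selector that builds a chain of scenes in screenplay order, scoring each candidate on importance, diversity with respect to scenes already chosen, and a crude proxy for logical progression. The Scene fields, the similarity measure, and the weights are all hypothetical placeholders.

```python
# Illustrative sketch only: a toy greedy scene-chain selector in the spirit of
# the graph-based formulation described in the abstract. The scene features,
# similarity measure, and weights are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Scene:
    index: int              # position in the screenplay
    importance: float       # e.g., derived from main-character presence
    features: List[float]   # e.g., a vector representation of the scene text

def similarity(a: Scene, b: Scene) -> float:
    """Cosine similarity between scene feature vectors."""
    dot = sum(x * y for x, y in zip(a.features, b.features))
    na = sum(x * x for x in a.features) ** 0.5
    nb = sum(x * x for x in b.features) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def select_chain(scenes: List[Scene], k: int,
                 w_imp: float = 1.0, w_div: float = 0.5, w_prog: float = 0.5) -> List[Scene]:
    """Greedily pick up to k scenes, trading off importance, diversity, and progression."""
    chain: List[Scene] = []
    for _ in range(k):
        best, best_score = None, float("-inf")
        for s in scenes:
            # Respect screenplay order so the chain reads as a logical progression.
            if s in chain or (chain and s.index <= chain[-1].index):
                continue
            diversity = -max((similarity(s, c) for c in chain), default=0.0)
            progression = similarity(s, chain[-1]) if chain else 0.0
            score = w_imp * s.importance + w_div * diversity + w_prog * progression
            if score > best_score:
                best, best_score = s, score
        if best is None:
            break
        chain.append(best)
    return chain
```

This greedy heuristic is only meant to show how the three criteria can be traded off in a single objective; the talk describes a graph-based model that optimizes them jointly over the whole chain.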
BIOGRAPHY: Mirella Lapata is a Professor at the School of Informatics at the University of Edinburgh. Her recent research interests are in natural language processing. She serves as an associate editor of the Journal of Artificial Intelligence Research (JAIR). She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Spärck Jones Award. She has also received best paper awards at leading NLP conferences and financial support from the EPSRC (the UK Engineering and Physical Sciences Research Council) and the ERC (the European Research Council).

CS SEMINAR

FACULTY CANDIDATE SEMINARS