September 26, 2016

Dan Geer, In-Q-Tel

The Future of Cybersecurity

Dr. Dan Geer is a computer security analyst and risk management specialist, recognized for raising awareness of critical computer and network security issues before the risks were widely understood, and for ground-breaking work on the economics of security. He led the design of the X Window System and Kerberos while at MIT, established the first information security consulting firm on Wall Street, convened the first academic conference on electronic commerce, delivered the "Risk Management is Where the Money Is" speech that changed the focus of security, was President of the USENIX Association, made the first call for the eclipse of authentication by accountability, was principal author of and spokesman for "Cyberinsecurity: The Cost of Monopoly", co-founded SecurityMetrics.Org, convened MetriCon, authored "Economics & Strategies of Data Security" and "Cybersecurity & National Policy", and created the Index of Cyber Security and the Cyber Security Decision Market. He is currently Chief Scientist at In-Q-Tel, the investment arm of the US intelligence community. In addition, Dan is a founder of six companies and has testified five times before Congress. Dan has a BS in Electrical Engineering and Computer Science from MIT and a Sc.D. in biostatistics from Harvard.


Predicting the future of risk is itself risky, so why bother trying? The answer is time. Shortening the long tail of upgrade may ultimately require force, but the deployment of force takes time to be done well. Future-proof strategies are a kind of answer but if, and only if, the rate constant of their deployment is quicker than the rate constant of innovation in the opponent's kit. Adaptive, autonomous technologies promise faster event detection and response, but such technologies are inherently dual use, and, in any case, the more optimized an algorithm is, the harder it is to know what the algorithm is really doing. This talk will make predictions -- predictions contingent on the answers to questions we now face.

October 05, 2016

Mirella Lapata, University of Edinburgh

What's this movie about? Automatic Content Analysis and Summarization

Mirella Lapata is a Professor at the School of Informatics at the University of Edinburgh. Her recent research interests are in natural language processing. She serves as an associate editor of the Journal of Artificial Intelligence Research (JAIR). She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Spärck Jones Award. She has also received best paper awards at leading NLP conferences and financial support from the EPSRC (the UK Engineering and Physical Sciences Research Council) and the ERC (the European Research Council).


Movie analysis is an umbrella term for many tasks that aim to automatically interpret, extract, and summarize the content of a movie. Potential applications include generating shorter versions of scripts to help with the decision-making process in a production company, enhancing movie recommendation engines by abstracting over specific keywords to more general concepts (e.g., thrillers with psychopaths), and, notably, generating movie previews.
In this talk I will illustrate how NLP-based models together with video analysis can be used to facilitate various steps in the movie production pipeline. I will formalize the process of generating a shorter version of a movie as the task of finding an optimal chain of scenes and present a graph-based model that selects a chain by jointly optimizing its logical progression, diversity, and importance. I will then apply this framework to screenplay summarization, a task which could enhance script browsing and speed up reading time. I will also show that by aligning the screenplay to the movie, the model can generate movie previews with minimal modification. Finally, I will discuss how the computational analysis of movies can lead to tools that automatically create movie "profiles" which give a first impression of the movie by describing its plot, mood, location, or style.
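The chain-selection idea above can be sketched as a simple greedy optimizer: pick scenes in story order while trading off importance, logical progression (overlap with the previously chosen scene), and diversity (penalizing overlap with all chosen scenes). This is a minimal illustration only, not the graph-based model from the talk; the keyword sets, weights, and scoring function are all hypothetical.

```python
# Illustrative sketch: greedy scene-chain selection balancing importance,
# logical progression, and diversity. All data and weights are hypothetical.

def jaccard(a, b):
    """Keyword overlap between two scenes (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_chain(scenes, importance, k, w_prog=1.0, w_div=1.0, w_imp=1.0):
    """Pick k scenes in story order, greedily maximizing a combined score."""
    chain = []
    last_idx = -1
    while len(chain) < k:
        best, best_score = None, float("-inf")
        # Only consider scenes that leave enough later scenes to finish the chain.
        for i in range(last_idx + 1, len(scenes) - (k - len(chain)) + 1):
            prog = jaccard(scenes[chain[-1]], scenes[i]) if chain else 0.0
            div = max((jaccard(scenes[j], scenes[i]) for j in chain), default=0.0)
            score = w_imp * importance[i] + w_prog * prog - w_div * div
            if score > best_score:
                best, best_score = i, score
        chain.append(best)
        last_idx = best
    return chain

# Hypothetical scenes as keyword sets, with per-scene importance scores.
scenes = [{"heist", "plan"}, {"bank", "heist"}, {"chase", "car"},
          {"chase", "police"}, {"trial", "verdict"}]
importance = [0.9, 0.6, 0.8, 0.5, 0.7]
print(select_chain(scenes, importance, 3))  # → [0, 2, 4]
```

The actual model in the talk jointly optimizes the whole chain over a scene graph rather than choosing greedily, but the objective terms play the same roles.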

October 12, 2016

Luiz André Barroso, Google

Programming a Warehouse-Scale Computer

Luiz André Barroso is a Google Fellow, and the VP of Engineering for the Geo Platform team, the group responsible for collecting and curating maps, local and imagery data that powers Google consumer products (such as Google Search and Google Maps). While at Google he has co-authored some well-cited articles on warehouse-scale computing, energy efficient computing and storage system reliability. He also co-wrote "The Datacenter as a Computer", the first textbook to describe the architecture of warehouse-scale computing systems, now in its 2nd edition. Before Google, he was a member of the research staff at Digital Equipment (later Compaq), where his group did some of the pioneering research on modern multi-core architectures.


Public clouds are quickly making massive-scale computing capabilities available to an ever larger population of programmers; warehouse-scale computing is no longer a playground restricted to a handful of institutions, such as national labs or large Internet services companies. In this talk I will highlight some of the features of this new class of computers, the challenges faced by their programmers, and tools and techniques we have developed to address some of those challenges.

October 31, 2016

Christos Papadimitriou, University of California, Berkeley

Algorithm as a Scientific Weltanschauung

Christos H. Papadimitriou is the C. Lester Hogan Professor of Computer Science at UC Berkeley. Before joining Berkeley in 1996 he taught at Harvard, MIT, Athens Polytechnic, Stanford, and UCSD. He has written five textbooks and many research articles on algorithms and complexity, and their applications to optimization, databases, AI and robotics, control theory, the Internet, game theory and economics, the theory of evolution, and brain science. Besides his PhD from Princeton he has received eight honorary doctorates. He is a member of the American Academy of Arts and Sciences, the National Academy of Engineering, and the National Academy of Sciences of the USA, and a Fellow of the ACM. He has won the Knuth Prize, the Gödel Prize, the EATCS Award, and the IEEE von Neumann Medal. He has also published three novels, including the graphic novel Logicomix (with Apostolos Doxiadis, Bloomsbury 2010).


When key problems in science are revisited from the computational viewpoint, occasionally unexpected progress results. There is a reason for this: Implicit algorithmic processes are present in the great objects of scientific inquiry—the cell, the brain, the market—as well as in the models developed by scientists over the centuries for studying them. This unexpected power of computational ideas, sometimes called "the algorithmic lens," has manifested itself in these past few decades in virtually all sciences, natural, life, or social: for example, in statistical physics through the study of phase transitions in terms of the convergence of Markov chain Monte Carlo algorithms, and in quantum mechanics through quantum computing. This talk will focus on three other instances. Almost a decade ago, ideas and methodologies from computational complexity revealed a subtle conceptual flaw in the solution concept of Nash equilibrium, which lies at the foundations of modern economic thought. In the study of evolution, a new understanding of century-old questions has been achieved through surprisingly algorithmic ideas. Finally, current work in theoretical neuroscience suggests that the algorithmic point of view may be useful in the central scientific question of our era, namely understanding how behavior and cognition emerge from the structure and activity of neurons and synapses.

Watch Video of Event

November 02, 2016

Sir Dermot Turing

Alan Turing: Computing Machinery and Intelligence

Dermot Turing is Alan Turing's nephew and the author of "Prof: Alan Turing Decoded", the most recent biography of his celebrated uncle. Dermot graduated from King's College, Cambridge and New College, Oxford in the UK. He spent his career in the legal profession, until 2014 as a partner of the international law firm Clifford Chance, where he specialized in financial markets issues. Nowadays he is concentrating on his writing.


Most people are familiar with Alan Turing's "Imitation Game" or the "Turing Test" to see whether computers can actually think, which was first set out in his 1950 paper called "Computing Machinery and Intelligence." Dermot Turing, who is Alan Turing's nephew and biographer, takes a new look at the development of early computing machinery, and Alan Turing's contribution to that, and also his role in secret intelligence in World War II, when the work done by the codebreakers at Bletchley Park involved an early foray into the realms of big data.

Watch Video of Event

November 16, 2016

Chad Jenkins, University of Michigan

Perception of People and Scenes for Robot Learning from Demonstration

Odest Chadwicke Jenkins, Ph.D., is an Associate Professor of Computer Science at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). Prof. Jenkins was selected as a Sloan Research Fellow in 2009. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based human tracking from video. He has also received Young Investigator awards from the Office of Naval Research (ONR) for his research in learning dynamical primitives from human motion, the Air Force Office of Scientific Research (AFOSR) for his work in manifold learning and multi-robot coordination, and the National Science Foundation (NSF) for robot learning from multivalued human demonstrations.


We are at the dawn of a robotics revolution where the visions of interconnected heterogeneous robots in widespread use will become a reality. Similar to "app stores" for modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated of developers and researchers. In order for people to fluently program autonomous robots, a robot must be able to interpret commands that accord with a human's model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that the critical missing component is the grounding of symbols that conceptually ties together low-level perception with user programs and high-level reasoning systems. Such a grounding will enable robots to perform tasks that require extended goal-directed autonomy as well as fluidly work with human partners.

Towards making robot programming more accessible and general, I will present our work on improving perception of people and scenes to enable robot learning from human demonstration. Robot learning from demonstration (LfD) has emerged as a compelling alternative to explicit coding in a programming language, where robots are programmed implicitly from a user's demonstration. Phrasing LfD as a statistical regression problem, I will present our multivalued regression algorithms for learning robot controllers in the face of perceptual aliasing. I will also describe how such regressors can be used within physics-based estimation systems to learn controllers for humanoids from monocular video of human motion. With respect to learning for sequential manipulation tasks, our recent work aims to perceive axiomatic descriptions of scenes from depth for planning goal-directed behavior.
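The core framing of LfD as regression can be sketched very simply: collect (state, action) pairs from a demonstration, fit a regressor, and use it as the robot's policy. The sketch below uses plain nearest-neighbor lookup with hypothetical 2-D states and 1-D actions; Jenkins's multivalued regressors go further by handling states that map to several valid actions (perceptual aliasing), which a single-valued regressor like this one cannot represent.

```python
# Minimal sketch of learning from demonstration as regression:
# fit a policy mapping perceived states to actions from demonstrated
# (state, action) pairs, then query it on a new state.
import math

def fit_policy(demos):
    """demos: list of (state, action) pairs, each a tuple of floats."""
    def policy(state):
        # 1-nearest-neighbor regression: return the action whose
        # demonstrated state is closest to the queried state.
        s, a = min(demos, key=lambda d: math.dist(d[0], state))
        return a
    return policy

# Hypothetical demonstrations: 2-D perceived state -> 1-D motor command.
demos = [((0.0, 0.0), (0.1,)),
         ((1.0, 0.0), (0.5,)),
         ((0.0, 1.0), (-0.2,))]
policy = fit_policy(demos)
print(policy((0.9, 0.1)))  # nearest demonstrated state is (1.0, 0.0) → (0.5,)
```

Under perceptual aliasing, two demonstrations could share the same state with different actions; a multivalued regressor would return the set of plausible actions rather than collapsing them into one.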

Watch Video of Event
