2012-2013 DISTINGUISHED LECTURE SERIES

September 12, 2012

Brian Kernighan, Princeton University

What Should a Well-informed Person Know about Computers?

Bio:
Brian Kernighan is a Professor in the Computer Science Department of Princeton University. Before joining Princeton, he worked at Bell Labs alongside the Unix creators Ken Thompson and Dennis Ritchie and contributed to the development of the Unix operating system. Kernighan is well known for his books and research on programming languages. His book with Dennis Ritchie, the creator of the C programming language, known simply as K&R, is widely regarded as the bible of C. He is also a coauthor of the widely used AWK and AMPL programming languages, and of the EQN and PIC typesetting languages. In collaboration with Shen Lin he devised well-known heuristics for two important NP-hard optimization problems: graph partitioning and the travelling salesman problem. (The former is called the Kernighan–Lin algorithm, the latter Lin–Kernighan.) Kernighan is also a coauthor of a number of widely read books on programming, including "The Elements of Programming Style", "Software Tools", "The Unix Programming Environment", and "The Practice of Programming". His latest book, "D is for Digital", explains clearly what any well-informed person should know about computers and communications. Kernighan has a B.A.Sc. in Engineering Physics from the University of Toronto and a Ph.D. in Electrical Engineering from Princeton University. He is a member of the National Academy of Engineering.
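As a concrete aside, the Kernighan–Lin heuristic named above is simple enough to sketch. The following minimal Python rendering of one improvement pass for 2-way graph partitioning is illustrative only; the names and data layout (an adjacency dict of edge weights) are ours, and real implementations add many refinements.

    def kl_pass(graph, A, B):
        """One Kernighan-Lin pass: tentatively pair-swap vertices between
        halves A and B to reduce the weight of edges crossing the cut.
        `graph` maps each vertex to a dict {neighbor: edge_weight}."""
        A, B = set(A), set(B)

        def d(v, own, other):
            # D-value: external cost minus internal cost of vertex v.
            return (sum(w for u, w in graph[v].items() if u in other) -
                    sum(w for u, w in graph[v].items() if u in own))

        D = {v: d(v, A, B) for v in A}
        D.update({v: d(v, B, A) for v in B})
        locked, gains, swaps = set(), [], []
        for _ in range(min(len(A), len(B))):
            # Pick the unlocked pair (a, b) whose swap gives the largest gain.
            g, a, b = max(((D[a] + D[b] - 2 * graph[a].get(b, 0), a, b)
                           for a in A - locked for b in B - locked),
                          key=lambda t: t[0])
            gains.append(g)
            swaps.append((a, b))
            locked |= {a, b}
            # Update D-values of unlocked vertices as if a and b were swapped.
            for v in (A | B) - locked:
                s = 1 if v in A else -1
                D[v] += 2 * s * (graph[v].get(a, 0) - graph[v].get(b, 0))
        # Commit the prefix of tentative swaps with the best cumulative gain.
        best_k, best_gain, running = 0, 0, 0
        for k, g in enumerate(gains, 1):
            running += g
            if running > best_gain:
                best_k, best_gain = k, running
        for a, b in swaps[:best_k]:
            A.remove(a); B.remove(b)
            A.add(b); B.add(a)
        return A, B, best_gain  # repeat passes until best_gain stays 0

The key idea is that all n/2 swaps in a pass are tentative: committing only the best prefix lets the heuristic climb out of local minima that a purely greedy swap rule would get stuck in.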

Abstract:

All of us are affected by computing, in ways we may not even realize. Some of the technology is highly visible, like laptops, cell phones, and the Internet; most is invisible, like the computers in everything from gadgets to infrastructure, or the myriad systems and services that quietly collect personal data about us.

Even though most people will not be directly involved with programming such systems, everyone is strongly affected by them, so a well-informed person should have a good, if rather high-level, understanding of how computer and communication systems operate.

This talk is based on my experience teaching "Computers in Our World," a course for students in the humanities and social sciences at Princeton and at Harvard. The course is meant to describe how computing works -- hardware, software, networking, and the systems built upon them -- for a non-technical audience. The intent, or perhaps just fond hope, is to help students understand computing and communications technologies, reason about how systems work, and be intelligently skeptical about technology and technological claims.

October 17, 2012

Manuela Veloso, CMU

Symbiotic Autonomy: Robots, Humans, and the Web

Bio:
Manuela M. Veloso is the Herbert A. Simon Professor of Computer Science at Carnegie Mellon University. Her research focuses on Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory for the study of multiagent systems in which agents Collaborate, Observe, Reason, Act, and Learn (http://www.cs.cmu.edu/~coral). Professor Veloso is an IEEE Fellow, AAAS Fellow, and AAAI Fellow, and the current President of AAAI. She was recently recognized by the Chinese Academy of Sciences as an Einstein Chair Professor. She also received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso is the author of the book "Planning and Learning by Analogical Reasoning" and the editor of several other books. She is also the author of over 280 journal articles and conference papers.

Abstract:

We envision ubiquitous autonomous mobile robots that coexist and interact with humans while performing assistance tasks. Such robots are still far from common, as our environments present great challenges to robust autonomous robot perception, cognition, and action. In this talk, I present symbiotic robot autonomy, in which robots are aware of their limitations and proactively ask humans for help, access the web for missing knowledge, and coordinate with other robots. Such symbiotic autonomy has enabled our CoBot robots to move through our multi-floor buildings performing a variety of service tasks, including escorting visitors and transporting packages between locations. I will describe CoBot's effective, fully autonomous algorithms for indoor mobile robot localization and navigation, its human-centered task planning, and its symbiotic interaction with humans and with the web. I will further discuss our ongoing research on knowledge learning from the robots' speech-based interaction with humans. The talk will be illustrated with results and examples from the robots' many hours-long runs in our buildings.
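The talk presents CoBot's own localization and navigation algorithms; purely as a generic illustration of how probabilistic indoor localization can work, here is a minimal 1-D Monte Carlo localization (particle filter) sketch with a made-up corridor map and door sensor. It is not CoBot's actual method.

    # Generic Monte Carlo localization sketch (not CoBot's algorithm):
    # particles track a robot's position along a 1-D corridor, combining
    # noisy odometry with a binary "am I near a door?" sensor.
    import random

    CORRIDOR = 20.0            # corridor length in meters (made-up map)
    DOORS = [3.0, 9.0, 14.0]   # door positions along the corridor

    def sense_door(x):
        """True if some door is within 0.5 m of position x."""
        return any(abs(x - d) < 0.5 for d in DOORS)

    def mcl_step(particles, move, saw_door):
        # 1. Motion update: shift every particle, adding odometry noise
        #    (corridor treated as a loop here purely for simplicity).
        particles = [(x + move + random.gauss(0, 0.2)) % CORRIDOR
                     for x in particles]
        # 2. Measurement update: weight particles by sensor agreement.
        weights = [0.9 if sense_door(x) == saw_door else 0.1
                   for x in particles]
        # 3. Resampling: redraw the particle set in proportion to weight.
        return random.choices(particles, weights, k=len(particles))

    particles = [random.uniform(0, CORRIDOR) for _ in range(500)]
    for move, saw_door in [(1.0, False), (1.0, False), (1.0, True)]:
        particles = mcl_step(particles, move, saw_door)
    print("estimated position:", sum(particles) / len(particles))

Each observation concentrates the particles on positions consistent with the sensor history, which is why such filters recover position even from a uniform initial guess.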

December 03, 2012

Frans Kaashoek, MIT

The multicore evolution and operating systems

Bio:
M. Frans Kaashoek is a full professor in MIT's EECS department and a member of the Computer Science and Artificial Intelligence Laboratory, where he co-leads the Parallel and Distributed Operating Systems group (http://www.pdos.csail.mit.edu/). He received a PhD (1992) from the Vrije Universiteit (Amsterdam, The Netherlands) for his work on group communication in the Amoeba distributed operating system, under the supervision of A.S. Tanenbaum. Frans's principal field of interest is designing and building computer systems. In collaboration with students and colleagues, his past contributions include the exokernel operating system, the Click modular router, the RON overlay, the self-certifying file system, the Chord distributed hash table, and the Asbestos/Flume secure operating system. Frans is a member of the National Academy of Engineering and the recipient of several awards, including the inaugural ACM SIGOPS Mark Weiser Award and the 2010 ACM-Infosys Foundation Award. He was a cofounder of Sightpath, Inc. (acquired by Cisco Systems, Inc.) and a founding board member of Mazu Networks, Inc. (acquired by Riverbed Technology). Host: Prof. Junfeng Yang

Abstract:

Multicore chips with hundreds of cores will likely be available soon. Although many applications have significant inherent parallelism (e.g., mail servers), their scalability on many cores can be limited by the underlying operating system. We have built or modified several kernels (Corey, Linux, and xv6) to explore OS designs that scale with an increasing number of cores. This talk will summarize our experiences by exploring questions such as how kernel scalability affects application scalability, whether a revolution in kernel design is necessary to achieve it, and what ultimately limits kernel scalability.

Joint work with: S. Boyd-Wickizer, A. Clements, Y. Mao, A. Pesterev, R. Morris, and N. Zeldovich
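As a back-of-the-envelope aside (our numbers, not the speaker's), Amdahl's law shows why even a small fraction of work serialized inside the kernel, for example on a contended lock or a shared cache line, caps application speedup long before hundreds of cores:

    # Illustrative Amdahl's-law calculation: if a fraction s of an
    # application's work serializes inside the kernel, speedup on n
    # cores is bounded by 1 / (s + (1 - s) / n).
    def speedup(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    for n in (8, 48, 128, 1024):
        print(n, round(speedup(0.02, n), 1))
    # Prints roughly 7.0, 24.7, 36.2, 47.7: with only 2% of the work
    # serialized, speedup can never exceed 1/0.02 = 50x, no matter how
    # many cores are added.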

December 10, 2012

David Dill, Stanford

The E-voting Battle

Bio:
David Dill has been on the faculty in the Computer Science Department at Stanford since 1987. He worked for many years in formal verification of hardware, software, and protocols, but his major research interest now is computational biology. He is a Fellow of the IEEE and the ACM. Prof. Dill has been working on policy issues in voting technology since 2003. He is the author of the "Resolution on Electronic Voting", which calls for a voter-verifiable audit trail on all voting equipment and has been endorsed by thousands of people, including many of the top computer scientists in the U.S. He has served on the California Secretary of State's Ad Hoc Task Force on Touch-Screen Voting. He has testified on electronic voting before the U.S. Senate and the Commission on Federal Election Reform, co-chaired by Jimmy Carter and James Baker III. He is the founder of the Verified Voting Foundation and VerifiedVoting.org and serves on the boards of those organizations. In 2004, he received the Electronic Frontier Foundation's "Pioneer Award" for "spearheading and nurturing the popular movement for integrity and transparency in modern elections." Host: Prof. Gail Kaiser

Abstract:

After the 2000 Florida election fiasco, Americans were clamoring for improved voting technology. By 2002, millions of dollars were available from the Federal government and some state governments to help local jurisdictions upgrade. But the planned "upgrades" to paperless electronic voting machines would have resulted in a system in which we would have no idea whether our leaders were selected by the voters or by errors or malicious software in the voting machines. The result has been a policy battle over voting technology that has continued since 2003, in which computer scientists and other technologists have played an important and unusual role.

I will describe the problem, the history since 2003, and where we stand after the recent Presidential election.
