2018-2019 DISTINGUISHED LECTURE SERIES

September 10, 2018

Pieter Abbeel, UC Berkeley

Deep Learning to Learn

Bio:
Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of AI@TheHouse, and Advisor to many AI/Robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on how to make robots learn from people (apprenticeship learning), how to make robots learn through their own trial and error (reinforcement learning), and how to speed up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA, early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE, and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.

Abstract:

Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In contrast, humans can pick up new skills far more quickly. To do so, humans might rely on a better learning algorithm or on a better prior (potentially learned from past experience), and likely on both. In this talk I will describe some recent work on meta-learning for action, where agents learn the imitation/reinforcement learning algorithm and learn the prior. This has enabled acquiring new skills from just a single demonstration or just a few trials. While designed for imitation and RL, our work is more generally applicable and has also advanced the state of the art in standard few-shot classification benchmarks such as Omniglot and Mini-ImageNet.
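For readers who want a concrete feel for learning-to-learn, the sketch below implements a first-order variant of gradient-based meta-learning (in the spirit of MAML) on a hypothetical family of 1-D regression tasks; it is an illustrative toy, not the method presented in the talk.

```python
# First-order gradient-based meta-learning on toy 1-D regression tasks.
# Hypothetical setup for illustration only. The outer loop learns an
# initialization (a "prior") from which a single inner-loop gradient
# step adapts well to any new task from the family.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is 'fit y = slope * x' for a slope drawn per task."""
    slope = rng.uniform(0.5, 2.5)
    def batch(n=20):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, slope * x
    return batch

def grad(w, x, y):
    """Gradient of mean squared error for the linear model w * x."""
    return 2.0 * np.mean((w * x - y) * x)

w = 0.0                  # meta-learned initialization
alpha, beta = 0.5, 0.05  # inner- and outer-loop step sizes

for _ in range(2000):
    batch = sample_task()
    x_tr, y_tr = batch()
    x_val, y_val = batch()
    w_task = w - alpha * grad(w, x_tr, y_tr)  # inner loop: fast adaptation
    w -= beta * grad(w_task, x_val, y_val)    # outer loop: meta-update

# The learned initialization lands near the center of the task family,
# so one gradient step on a few examples from a new task already fits it.
print(f"meta-learned init: {w:.2f}")
```

The key design choice is that the meta-update is evaluated after the inner adaptation step, so the initialization is optimized for how well it adapts rather than for how well it performs directly.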

Watch Video of Event

October 22, 2018

C. Mohan, IBM Almaden Research Center

Blockchains Untangled: 
Public, Private, Smart Contracts, Applications, Issues

Bio:
Dr. C. Mohan has been an IBM researcher for 37 years in the database and related areas, impacting numerous IBM and non-IBM products, the research and academic communities, and standards, especially with his invention of the well-known ARIES family of database locking and recovery algorithms and the Presumed Abort distributed commit protocol. An IBM Fellow (1997) and an ACM and IEEE Fellow (2002), he also served as the IBM India Chief Scientist for three years (2006-2009). In addition to receiving the ACM SIGMOD Innovations Award (1996), the VLDB 10 Year Best Paper Award (1999), and numerous IBM awards, Mohan was elected to the US and Indian National Academies of Engineering (2009) and named an IBM Master Inventor (1997). This Distinguished Alumnus of IIT Madras (1977) received his PhD from the University of Texas at Austin (1981). He is an inventor of 50 patents. He is currently focused on Blockchain, Big Data, and HTAP technologies (http://bit.ly/CMbcDB, http://bit.ly/CMgMDS). Since 2016, Mohan has been a Distinguished Visiting Professor at China's prestigious Tsinghua University. He has served on the advisory board of IEEE Spectrum and on numerous conference and journal boards. Mohan is a frequent speaker in North America, Europe, and Asia, and has given talks in 40 countries. He is very active on social media and has a huge network of followers. More information can be found on his Wikipedia page at http://bit.ly/CMwIkP.

Abstract:

The concept of a distributed ledger was invented as the underlying technology of the public, or permissionless, Bitcoin cryptocurrency network. But its adoption and further adaptation for use in private, or permissioned, environments is what I consider to be of practical consequence, and hence such private blockchain systems will be the sole focus of this talk.
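For a concrete feel for the underlying data structure, the sketch below shows the hash-chained block structure that public and permissioned ledgers share: each block commits to its predecessor's hash, so any tampering with history is detectable. This is an illustrative toy, not the architecture of any particular system.

```python
# Minimal sketch of a tamper-evident ledger: blocks linked by hashes.
# Real systems (e.g., Hyperledger Fabric) add endorsement policies,
# consensus, and smart contracts on top of this basic structure.
import hashlib, json

GENESIS = "0" * 64

def block_hash(body):
    """Hash a block body, which includes the previous block's hash."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else GENESIS
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain):
    """Recompute every hash; editing any earlier block breaks the links."""
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"],
                "transactions": block["transactions"]}
        prev = chain[i - 1]["hash"] if i else GENESIS
        if block["prev_hash"] != prev or block["hash"] != block_hash(body):
            return False
    return True

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify(ledger)
ledger[0]["transactions"][0]["amount"] = 500  # tamper with history...
assert not verify(ledger)                     # ...and it is detected
```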

Computer companies like IBM, Intel, Oracle, Baidu, and Microsoft, as well as many key players in different vertical industry segments, have recognized the applicability of blockchains in environments other than cryptocurrencies. IBM did some pioneering work by architecting and implementing Fabric and then open-sourcing it. Fabric is now being enhanced via the Hyperledger Consortium as part of The Linux Foundation. There is a great deal of momentum behind Hyperledger Fabric throughout the world. Other private blockchain efforts include Enterprise Ethereum, Hyperledger Sawtooth, and R3 Corda.

While currently there is no standard in the private blockchain space, all the ongoing efforts involve some combination of persistence, transaction, encryption, virtualization, consensus and other distributed systems technologies. Some of the application areas in which blockchain systems have been leveraged are: global trade digitization, derivatives processing, e-governance, Know Your Customer (KYC), healthcare, food safety, supply chain management and provenance management.

In this talk, I will describe some use-case scenarios, especially those in production deployment. I will then survey the landscape of private blockchain systems with respect to their architectures in general and their approaches to some specific technical areas. I will also discuss some of the opportunities that exist and the challenges that need to be addressed. Since most blockchain efforts are still in a nascent state, the time is right for mainstream database and distributed systems researchers and practitioners to get more deeply involved and focus on the numerous open problems. Extensive blockchain-related collateral can be found at http://bit.ly/CMbcDB.

Watch Video of Event

November 12, 2018

Daniel Wigdor, University of Toronto

Enabling Real Virtuality: Closing the Gap Between the Digital and the Physical

Bio:
Daniel Wigdor is an associate professor of computer science and the NSERC-Facebook Industrial Research Chair in Human-Machine Interaction, conducting his research in the Dynamic Graphics Project at the University of Toronto. His research is in the area of human-computer interaction, with major areas of focus in the architecture of highly performant UIs, in interaction and application models for mobile computing, in development methods for ubiquitous computing, and in post-WIMP interaction methods. Before joining the faculty at U of T in 2011, Daniel was a researcher at Microsoft Research, the user experience architect of the Microsoft Surface Table, and a company-wide expert in user interfaces for new technologies (2008-2010). Daniel has also served as a visiting associate professor at Cornell Tech (2017-2018), as an affiliate assistant professor at the University of Washington (2009-2011), and as a fellow and associate at Harvard University (2007-2008, 2011-2012). He also conducted research as an intern at Mitsubishi Electric Research Labs (2005-2008). For his research, he has been awarded an Ontario Early Researcher Award (2014) and the Alfred P. Sloan Foundation's Research Fellowship (2015), as well as best paper awards or honorable mentions at CHI 2018, CHI 2017, CHI 2016, CHI 2015, CHI 2014, Graphics Interface 2013, CHI 2011, and UIST 2004. Three of his projects were selected as the People's Choice Best Talks at CHI 2014 and CHI 2015. Daniel is a co-founder of Iota Wireless, a startup commercializing his research in mobile-phone gestural interaction; of Matter Matters, a startup commercializing his team's work in printed circuit board fabrication methods; of Tactual Labs, a startup commercializing his research in high-performance, low-latency user input; and of Chatham Labs, a design firm which helps clients plan long-term technology, intellectual property, and product roadmaps. Daniel is the co-author of Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, the first practical book for the design of touch and gesture interfaces. He has also published dozens of other works as invited book chapters and papers in leading international journals and conferences, and is an inventor of over four dozen patents and pending patent applications. Daniel is sought after as an expert witness, and has testified before courts and commissions in the United Kingdom and the United States. Further information, including publications and videos demonstrating some of his research, can be found at www.dgp.toronto.edu/~dwigdor.

Abstract:

As digital interaction spreads to an increasing number of devices, direct physical manipulation has become the dominant metaphor in HCI. The promise made by this approach is that digital content will look, feel, and respond like content from the real world. Current commercial systems fail to keep that promise, leaving a broad gulf between what users are led to expect and what they see and feel. In this talk, Daniel will discuss two areas where his lab has been making strides to address this gap. First, in the area of passive haptics, he will describe technologies intended to enable users to feel virtual content without having to wear gloves or hold "poking" devices. Second, in the area of systems performance, he will describe his team's work in achieving nearly zero-latency responses to touch and stylus input.

Watch Video of Event

November 26, 2018

Sanjeev Arora, Princeton/IAS

Toward theoretical understanding of deep learning

Bio:
Sanjeev Arora is the Charles C. Fitzmorris Professor in Computer Science. He joined Princeton in 1994 after earning his doctorate from the University of California, Berkeley. He was a visiting professor at the Weizmann Institute in 2007, a visiting researcher at Microsoft in 2006-07, and a visiting associate professor at Berkeley during 2001-02. Professor Arora's honors include the D.R. Fulkerson Prize in Discrete Mathematics (awarded by the American Mathematical Society and the Mathematical Optimization Society) in 2012, the ACM-Infosys Foundation Award in the Computing Sciences in the same year, the best paper award at the IEEE Symposium on Foundations of Computer Science (FOCS) in 2010, and the EATCS-SIGACT Gödel Prize (co-winner), also in 2010. He was appointed a Simons Foundation Investigator in 2012, and was elected an ACM Fellow in 2009. Professor Arora was the founding director and lead PI of the Center for Computational Intractability in 2008, a project funded partly by an NSF Expeditions in Computing grant.

Abstract:

Deep learning is driving progress in machine learning and artificial intelligence today. But many aspects of it are not rigorously understood: when and how fast does training work, how much training data does it require, and how should we interpret the answers that the trained model provides? This talk is a bird's-eye survey of ongoing efforts to develop a mathematical understanding of such issues. It will be largely self-contained.

Watch Video of Event

December 03, 2018

John Hennessy, Stanford

The End of the Road for General Purpose Processors and the Future of Computing

Bio:
Professor Hennessy initiated the MIPS project at Stanford in 1981. MIPS is a high-performance Reduced Instruction Set Computer (RISC), built in VLSI, and was one of the first three experimental RISC architectures. In addition to his role in the basic research, Hennessy played a key role in transferring this technology to industry. During a sabbatical leave from Stanford in 1984-85, he cofounded MIPS Computer Systems (later MIPS Technologies Inc. and now part of Imagination Technologies), which specializes in the production of chips based on these concepts. He also led the Stanford DASH (Distributed Architecture for Shared Memory) multiprocessor project. DASH was the first scalable shared-memory multiprocessor with hardware-supported cache coherence. More recently, he has been involved in FLASH (FLexible Architecture for Shared Memory), which is designed to support different communication and coherency approaches in large-scale shared-memory multiprocessors. In the 1990s, he served as the Founding Chairman of the Board of Atheros, an early wireless chipset company, now part of Qualcomm. Hennessy is also the coauthor of two widely used textbooks in computer architecture. In addition to his work as a Professor at Stanford, he has served as Chair of the Department of Computer Science (1994-96), Dean of the School of Engineering (1996-99), Provost (1999-2000), and President (2000-2016). He is currently the Director of the Knight-Hennessy Scholars Program, which each year will select 100 new graduate scholars from around the world to receive a full scholarship (with stipend) to pursue a wide-ranging graduate education at Stanford, with the goal of developing a new generation of global leaders.

Abstract:

After 40 years of remarkable progress in general-purpose processors, a variety of factors are combining to lead to a much slower rate of performance growth in the future. These limitations arise from three different areas: IC technology, architectural inefficiencies, and changing applications and usage. The end of Dennard scaling and the slowdown in Moore's Law will require much more efficient architectural approaches than those we have relied on. Although progress on general-purpose processors may hit an asymptote, domain-specific architectures may be the one attractive path for important classes of problems, at least until we invent a flexible and competitive replacement for silicon.

Watch Video of Event

December 10, 2018

Jelani Nelson, Harvard

Sketching algorithms

Bio:
Jelani Nelson is Associate Professor of Computer Science and John L. Loeb Associate Professor of Engineering and Applied Sciences at Harvard. His main research interest is in algorithm design and analysis, with a focus on streaming algorithms, dimensionality reduction, compressed sensing, and randomized linear algebra algorithms. He completed his Ph.D. in computer science at MIT in 2011, receiving the George M. Sprowls Award for the best computer science doctoral dissertation at MIT. He is the recipient of an NSF CAREER Award, an ONR Young Investigator Award, a Sloan Fellowship, and the Presidential Early Career Award for Scientists and Engineers (PECASE).

Abstract:

A "sketch" is a data structure supporting some pre-specified set of queries and updates to a database while consuming space substantially (often exponentially) less than the information theoretic minimum required to store everything seen. Thus sketching can be seen as some form of functional compression. The advantages of sketching include reduced memory consumption, faster algorithms, and reduced bandwidth requirements in distributed computing environments.
Sketching has been a core technique in several domains, including processing massive data streams with low memory footprint, 'compressed sensing' for lossy compression of signals with few linear measurements, and dimensionality reduction or 'random projection' methods for speedups in large-scale linear algebra algorithms, and high-dimensional computational geometry.
This talk will provide a glimpse into some recent progress on core problems in the theory of sketching algorithms.
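As one concrete example of such a data structure, here is a minimal Count-Min sketch (Cormode and Muthukrishnan), which approximates item frequencies in a stream using space far below an exact count table; this is an illustrative sketch, not material from the talk.

```python
# Count-Min sketch: approximate frequency counts in sublinear space.
# Estimates never undercount; collisions can only inflate them.
import hashlib

class CountMinSketch:
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        """A cheap per-row hash; the row number salts the digest."""
        h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def query(self, item):
        """Each row's counter is >= the true count, so take the minimum."""
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for word in ["cat", "dog", "cat", "bird", "cat"]:
    cms.update(word)
print(cms.query("cat"))  # 3, or slightly higher if hash collisions occur
```

Width controls the magnitude of the overestimate and depth the probability of a bad estimate, which is the standard space/accuracy trade-off for this family of sketches.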

Watch Video of Event

February 13, 2019

Tal Rabin, IBM

Exciting Times for Multiparty Computations -- Over 35 Years in the Making

Bio:
Tal Rabin is a Distinguished Research Staff Member and the manager of the Cryptographic Research Group at IBM's T.J. Watson Research Center. Her research focuses on the general area of cryptography and, more specifically, on secure multiparty computation and privacy-preserving computations. She has a PhD from the Hebrew University. Rabin is an ACM Fellow, an IACR (International Association for Cryptologic Research) Fellow, and a member of the American Academy of Arts and Sciences. She was named by Forbes as one of the Top 50 Women in Tech in 2018. In 2014 she won the Anita Borg Women of Vision Award for Innovation and was ranked #4 by Business Insider on its list of the 22 Most Powerful Women Engineers. Tal has served as the Program and General Chair of the leading cryptography conferences and is an editor of the Journal of Cryptology. She initiated and organizes the Women in Theory Workshop, a biennial event for graduate students in Theory of Computer Science.

Abstract:

The area of multiparty computation (MPC) started in the early 1980s and has become a very active area of research with thousands of results. In recent years it has added a focus on designing practical solutions that also provide privacy. This aspect of MPC is driven by rising privacy concerns and needs, the deployment of the cloud, and the emergence of cryptocurrencies.

In this talk we will present this journey, from the first days to current solutions for real-world problems.
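For a concrete feel for the basic machinery, the sketch below shows additive secret sharing over a prime field, one of MPC's simplest building blocks: three parties compute the sum of their private inputs without any single party seeing another's value. This is an illustrative toy, not a protocol from the talk.

```python
# Additive secret sharing: a value is split into random shares that
# sum to it modulo a prime; any subset of fewer than all the shares
# reveals nothing about the value. Illustration only.
import secrets

P = 2**61 - 1  # a Mersenne prime modulus

def share(value, n_parties=3):
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each party secret-shares its private input among all three parties.
inputs = [12, 30, 7]
all_shares = [share(v) for v in inputs]

# Party i locally adds up the i-th share of every input...
partial_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]

# ...and only combining the three partial sums reveals the total.
total = sum(partial_sums) % P
assert total == sum(inputs)  # 49, computed without exposing any input
```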

Watch Video of Event

March 29, 2019

Éva Tardos, Cornell University

Learning in Games

Bio:
Éva Tardos is the Jacob Gould Schurman Professor of Computer Science at Cornell University, where she was Computer Science department chair from 2006 to 2010. She received her BA and PhD from Eötvös University in Budapest and joined the faculty at Cornell in 1989. Tardos's research interest is algorithms and algorithmic game theory; she is best known for her work on network-flow algorithms and on quantifying the efficiency of selfish routing. She has been elected to the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences, is an external member of the Hungarian Academy of Sciences, and is a fellow of multiple societies (ACM, AMS, SIAM, INFORMS). She is the recipient of a number of fellowships and awards, including the Packard Fellowship and the Gödel, Dantzig, Fulkerson, and EATCS prizes. Most recently, IEEE announced that Dr. Tardos will receive the 2019 IEEE John von Neumann Medal in May for outstanding achievement in computer-related science and technology. She is editor-in-chief of the Journal of the ACM, has been editor-in-chief of the SIAM Journal on Computing, and has edited several other journals, including Combinatorica. She has served on the program committees of many conferences, and was program committee chair for the ACM-SIAM Symposium on Discrete Algorithms (1996), FOCS 2005, and EC 2013.

Abstract:

Selfish behavior can often lead to a suboptimal outcome for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games (including traffic routing as well as online auctions). In this talk we will focus on games where players use a form of learning to help them adapt to the environment. We consider two closely related questions: what are broad classes of learning behaviors that guarantee high social welfare in games, and are these results robust when the game or the population of players is dynamically changing, so that participants have to adapt to the changing environment?
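As a concrete instance of such learning dynamics, the sketch below runs multiplicative-weights ("Hedge") updates, the canonical no-regret algorithm, for two players repeatedly playing matching pennies; it is an illustrative toy, not a result from the talk.

```python
# Multiplicative weights (Hedge): each action's weight shrinks
# exponentially in its observed loss, giving a no-regret guarantee.
import random

def hedge_step(weights, losses, eta=0.1):
    new = [w * (1 - eta) ** l for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]  # normalize for numerical stability

def play(weights):
    """Sample an action with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

w1 = [0.5, 0.5]  # player 1 loses when the actions match
w2 = [0.5, 0.5]  # player 2 loses when the actions differ
for _ in range(10_000):
    a1, a2 = play(w1), play(w2)
    w1 = hedge_step(w1, [1 if a == a2 else 0 for a in range(2)])
    w2 = hedge_step(w2, [0 if a == a1 else 1 for a in range(2)])

# Instantaneous strategies cycle, but no-regret play drives the
# time-averaged behavior toward the game's unique 50/50 equilibrium.
print(w1, w2)
```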

Watch Video of Event

April 01, 2019

Elias Bareinboim, Purdue University

Causal Data Science: A general framework for data fusion and causal inference

Bio:
Elias Bareinboim is an assistant professor in the Departments of Computer Science and Statistics at Purdue University. His research focuses on causal and counterfactual inference and their applications in data-driven fields. His work was the first to propose a general solution to the problem of "data fusion" and provides practical methods for combining datasets generated under different experimental conditions. More recently, Bareinboim has been interested in the intersection of causal inference with reinforcement learning and fairness analysis. He received a Ph.D. in Computer Science from UCLA, advised by Judea Pearl. Bareinboim's recognitions include an NSF CAREER Award, IEEE AI's 10 to Watch, the Dan David Prize Scholarship, the 2014 AAAI Outstanding Paper Award, and the 2018 UAI Best Student Paper Award.

Abstract:

Causal inference is usually dichotomized into two categories, experimental (Fisher, Cox, Cochran) and observational (Neyman, Rubin, Robins, Dawid, Pearl), which, by and large, have evolved and been studied independently. However, a wide range of problems faced by the current generation of empirical scientists is more demanding. Experimental and observational studies are but two extremes of a rich spectrum of research designs that generate the bulk of the data available in practical, large-scale situations. In typical medical explorations, for example, data from multiple observational and experimental studies are collected from distinct locations, under different sampling conditions, and over heterogeneous populations. Piecing these data sources together presents a tremendous opportunity for data scientists, since the knowledge conveyed by the combined data would not be attainable from any individual source alone.

However, the biases that emerge in heterogeneous environments require a new set of principles and analytical tools. Some of these biases, including confounding, sampling selection, and cross-population (i.e., external validity) biases, have been addressed in isolation, largely in restricted parametric models. In this talk, I will present a general, non-parametric framework for handling these biases and, ultimately, a theoretical solution to the data fusion problem in causal inference tasks. I will further outline the connections of this theory to current challenges in AI and Machine Learning, including fairness analysis, explainability, and reinforcement learning. I'll end the talk with some reflections on where we are now in the grand scheme of automating the empirical sciences, a project that I call "Causal Data Science."
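For a flavor of the identities such a framework provides, two classical examples are shown below, stated informally with the identifying conditions omitted: Pearl's backdoor adjustment and a transport formula of the kind studied in this line of work.

```latex
% Backdoor adjustment: recovers an experimental quantity from purely
% observational data when Z blocks all backdoor paths from X to Y.
\[ P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z) \]

% Transport formula: re-weights source-domain experimental results by
% the target domain's covariate distribution P^{*}(z), when the two
% domains differ only in the mechanism generating Z.
\[ P^{*}(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid \mathrm{do}(x), z)\, P^{*}(z) \]
```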

Watch Video of Event
