September 10, 2018
Pieter Abbeel, UC Berkeley
Pieter Abbeel is Professor and Director of the Robot Learning Lab at UC Berkeley [2008- ], Co-Founder of covariant.ai [2017- ], Co-Founder of Gradescope [2014- ], Advisor to OpenAI, Founding Faculty Partner of AI@TheHouse, and Advisor to many AI/robotics start-ups. He works in machine learning and robotics. In particular, his research focuses on making robots learn from people (apprenticeship learning), making robots learn through their own trial and error (reinforcement learning), and speeding up skill acquisition through learning-to-learn (meta-learning). His robots have learned advanced helicopter aerobatics, knot-tying, basic assembly, organizing laundry, locomotion, and vision-based robotic manipulation. He has won numerous awards, including best paper awards at ICML, NIPS, and ICRA; early career awards from the NSF, DARPA, ONR, AFOSR, Sloan, TR35, and IEEE; and the Presidential Early Career Award for Scientists and Engineers (PECASE). Pieter's work is frequently featured in the popular press, including the New York Times, BBC, Bloomberg, the Wall Street Journal, Wired, Forbes, Tech Review, and NPR.
Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. In contrast, humans can pick up new skills far more quickly. To do so, humans might rely on a better learning algorithm or on a better prior (potentially learned from past experience), and likely on both. In this talk I will describe some recent work on meta-learning for action, where agents learn the imitation/reinforcement learning algorithm and learn the prior. This has enabled acquiring new skills from just a single demonstration or just a few trials. While designed for imitation and RL, our work is more generally applicable and has also advanced the state of the art on standard few-shot classification benchmarks such as Omniglot and Mini-ImageNet.
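The learning-to-learn idea can be made concrete with a toy sketch. Below is a first-order meta-learning loop in the style of Reptile, which is a deliberate simplification and not the specific algorithms from this work: tasks are 1-D regressions y = a·x that differ only in the slope a, and the meta-learner nudges a shared initialization toward whatever parameters a few adaptation steps produce, so that a new task can be fit from very little data.

```python
import numpy as np

def task_grad(w, a, xs):
    """Gradient of the MSE loss for the task y = a*x at parameter w."""
    return 2.0 * np.mean((w * xs - a * xs) * xs)

def inner_adapt(w, a, xs, lr=0.1, steps=5):
    """Few-shot adaptation: a handful of gradient steps on one task."""
    for _ in range(steps):
        w = w - lr * task_grad(w, a, xs)
    return w

def reptile(tasks, meta_steps=200, meta_lr=0.5):
    """Meta-learn an initialization that adapts quickly to any of the tasks."""
    xs = np.linspace(-1.0, 1.0, 20)
    w0 = 0.0  # meta-parameter: the shared initialization
    for i in range(meta_steps):
        a = tasks[i % len(tasks)]
        w_adapted = inner_adapt(w0, a, xs)
        w0 = w0 + meta_lr * (w_adapted - w0)  # move init toward adapted weights
    return w0

# The learned initialization sits near the "center" of the task family,
# so a few gradient steps suffice to specialize it to any one task.
init = reptile(tasks=[1.0, 2.0, 3.0])
```

The same inner/outer-loop structure underlies gradient-based meta-learning more broadly; full MAML additionally differentiates through the inner adaptation steps.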
October 22, 2018
C. Mohan, IBM Almaden Research Center
Dr. C. Mohan has been an IBM researcher for 37 years in the database and related areas, impacting numerous IBM and non-IBM products, the research and academic communities, and standards, especially with his invention of the well-known ARIES family of database locking and recovery algorithms and the Presumed Abort distributed commit protocol. An IBM Fellow (1997) and an ACM and IEEE Fellow (2002), he also served as the IBM India Chief Scientist for 3 years (2006-2009). In addition to receiving the ACM SIGMOD Innovations Award (1996), the VLDB 10 Year Best Paper Award (1999), and numerous IBM awards, Mohan was elected to the US and Indian National Academies of Engineering (2009) and named an IBM Master Inventor (1997). A Distinguished Alumnus of IIT Madras (1977), he received his PhD from the University of Texas at Austin (1981). He is an inventor on 50 patents. He is currently focused on blockchain, Big Data, and HTAP technologies (http://bit.ly/CMbcDB, http://bit.ly/CMgMDS). Since 2016, Mohan has been a Distinguished Visiting Professor at China's prestigious Tsinghua University. He has served on the advisory board of IEEE Spectrum and on numerous conference and journal boards. Mohan is a frequent speaker in North America, Europe, and Asia, and has given talks in 40 countries. He is very active on social media and has a huge network of followers. More information can be found on his Wikipedia page at http://bit.ly/CMwIkP
The concept of a distributed ledger was invented as the underlying technology of the public, or permissionless, Bitcoin cryptocurrency network. But it is the adoption and further adaptation of this technology for use in private, or permissioned, environments that I consider to be of practical consequence, and hence only such private blockchain systems will be the focus of this talk.
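The core ledger data structure is simple to illustrate. Below is a minimal, hypothetical sketch of a hash-linked append-only log in Python; it shows only the tamper-evidence idea and none of the consensus, endorsement, or smart-contract machinery that real systems such as Fabric layer on top.

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Append-only chain: every block commits to its predecessor's hash."""
    def __init__(self):
        self.chain = [{"index": 0, "tx": [], "prev": "0" * 64}]  # genesis block

    def append(self, transactions):
        self.chain.append({"index": len(self.chain),
                           "tx": transactions,
                           "prev": block_hash(self.chain[-1])})

    def verify(self):
        """Recompute every link; altering any past block breaks a prev-hash."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append([{"from": "alice", "to": "bob", "amount": 5}])
ledger.append([{"from": "bob", "to": "carol", "amount": 2}])
```

Because each block's hash covers its predecessor's hash, rewriting any historical transaction invalidates every subsequent link, which is what makes the log tamper-evident once it is replicated across mutually distrusting parties.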
Computer companies like IBM, Intel, Oracle, Baidu and Microsoft, and many key players in different vertical industry segments have recognized the applicability of blockchains in environments other than cryptocurrencies. IBM did some pioneering work by architecting and implementing Fabric, and then open sourcing it. Now Fabric is being enhanced via the Hyperledger Consortium as part of The Linux Foundation. There is a great deal of momentum behind Hyperledger Fabric throughout the world. Other private blockchain efforts include Enterprise Ethereum, Hyperledger Sawtooth and R3 Corda.
While currently there is no standard in the private blockchain space, all the ongoing efforts involve some combination of persistence, transaction, encryption, virtualization, consensus and other distributed systems technologies. Some of the application areas in which blockchain systems have been leveraged are: global trade digitization, derivatives processing, e-governance, Know Your Customer (KYC), healthcare, food safety, supply chain management and provenance management.
In this talk, I will describe some use-case scenarios, especially those in production deployment. I will also survey the landscape of private blockchain systems with respect to their architectures in general and their approaches to some specific technical areas. I will also discuss some of the opportunities that exist and the challenges that need to be addressed. Since most of the blockchain efforts are still in a nascent state, the time is right for mainstream database and distributed systems researchers and practitioners to get more deeply involved to focus on the numerous open problems. Extensive blockchain related collateral can be found at http://bit.ly/CMbcDB
November 12, 2018
Daniel Wigdor, University of Toronto
Daniel Wigdor is an associate professor of computer science and the NSERC-Facebook Industrial Research Chair in Human-Machine Interaction, conducting his research in the Dynamic Graphics Project at the University of Toronto. His research is in the area of human-computer interaction, with major areas of focus in the architecture of highly performant UIs, in interaction and application models for mobile computing, in development methods for ubiquitous computing, and in post-WIMP interaction methods. Before joining the faculty at U of T in 2011, Daniel was a researcher at Microsoft Research, the user experience architect of the Microsoft Surface Table, and a company-wide expert in user interfaces for new technologies (2008-2010). Daniel has also served as a visiting associate professor at Cornell Tech (2017-2018), as an affiliate assistant professor at the University of Washington (2009-2011), and as a fellow and associate at Harvard University (2007-2008, 2011-2012). He also conducted research as an intern at Mitsubishi Electric Research Labs (2005-2008). For his research, he has been awarded an Ontario Early Researcher Award (2014) and the Alfred P. Sloan Foundation's Research Fellowship (2015), as well as best paper awards or honorable mentions at CHI 2018, CHI 2017, CHI 2016, CHI 2015, CHI 2014, Graphics Interface 2013, CHI 2011, and UIST 2004. Three of his projects were selected as the People's Choice Best Talks at CHI 2014 and CHI 2015. Daniel is co-founder of Iota Wireless, a startup commercializing his research in mobile-phone gestural interaction, of Matter Matters, a startup commercializing his team's work in printed circuit board fabrication methods, of Tactual Labs, a startup commercializing his research in high-performance, low-latency user input, and of Chatham Labs, a design firm which helps clients plan long-term technology, intellectual property, and product roadmaps.
Daniel is the co-author of Brave NUI World: Designing Natural User Interfaces for Touch and Gesture, the first practical book for the design of touch and gesture interfaces. He has also published dozens of other works as invited book chapters and papers in leading international journals and conferences, and is an inventor of over four dozen patents and pending patent applications. Daniel is sought after as an expert witness, and has testified before courts and commissions in the United Kingdom and the United States. Further information, including publications and videos demonstrating some of his research, can be found at www.dgp.toronto.edu/~dwigdor.
As digital interaction spreads to an increasing number of devices, direct physical manipulation has become the dominant metaphor in HCI. The promise made by this approach is that digital content will look, feel, and respond like content from the real world. Current commercial systems fail to keep that promise, leaving a broad gulf between what users are led to expect and what they see and feel. In this talk, Daniel will discuss two areas where his lab has been making strides to address this gap. First, in the area of passive haptics, he will describe technologies intended to enable users to feel virtual content, without having to wear gloves or hold “poking” devices. Second, in the area of systems performance, he will describe his team’s work in achieving nearly zero latency responses to touch and stylus input.
November 26, 2018
Sanjeev Arora, Princeton/IAS
Sanjeev Arora is the Charles C. Fitzmorris Professor in Computer Science. He joined Princeton in 1994 after earning his doctorate from the University of California, Berkeley. He was a visiting professor at the Weizmann Institute in 2007, a visiting researcher at Microsoft in 2006-07, and a visiting associate professor at Berkeley during 2001-02. Professor Arora's honors include the D.R. Fulkerson Prize in Discrete Mathematics (awarded by the American Mathematical Society and the Mathematical Optimization Society) in 2012, the ACM-Infosys Foundation Award in the Computing Sciences in the same year, the Best Paper Award from IEEE Foundations of Computer Science in 2010, and the EATCS-SIGACT Gödel Prize (co-winner), also in 2010. He was appointed a Simons Foundation Investigator in 2012, and was elected an ACM Fellow in 2009. Professor Arora was the founding director and lead PI at the Center for Computational Intractability in 2008, a project funded partly by an NSF Expeditions in Computing grant.
Deep learning is driving progress in machine learning and artificial intelligence today. But many aspects of it are not rigorously understood: when and how fast does training work, how much training data does it require, and how should we interpret the answers that the trained model provides? This talk is a bird's-eye survey of ongoing efforts to develop a mathematical understanding of such issues. It will be largely self-contained.
December 03, 2018
John Hennessy, Stanford
Professor Hennessy initiated the MIPS project at Stanford in 1981. MIPS is a high-performance Reduced Instruction Set Computer (RISC), built in VLSI, and was one of the first three experimental RISC architectures. In addition to his role in the basic research, Hennessy played a key role in transferring this technology to industry. During a sabbatical leave from Stanford in 1984-85, he cofounded MIPS Computer Systems (later MIPS Technologies Inc. and now part of Imagination Technologies), which specializes in the production of chips based on these concepts. He also led the Stanford DASH (Distributed Architecture for Shared Memory) multiprocessor project. DASH was the first scalable shared memory multiprocessor with hardware-supported cache coherence. More recently, he has been involved in FLASH (FLexible Architecture for Shared Memory), which is designed to support different communication and coherency approaches in large-scale shared-memory multiprocessors. In the 1990s, he served as the Founding Chairman of the Board of Atheros, an early wireless chipset company, now part of Qualcomm. Hennessy is also the coauthor of two widely used textbooks in computer architecture. In addition to his work as a Professor at Stanford, he has served as Chair of the Department of Computer Science (1994-96), Dean of the School of Engineering (1996-99), Provost (1999-2000), and President (2000-2016). He is currently the Director of the Knight-Hennessy Scholars Program, which each year will select 100 new graduate scholars from around the world to receive a full scholarship (with stipend) to pursue a wide-ranging graduate education at Stanford, with the goal of developing a new generation of global leaders.
After 40 years of remarkable progress in general-purpose processors, a variety of factors are combining to lead to a much slower rate of performance growth in the future. These limitations arise in three different areas: IC technology, architectural inefficiencies, and changing applications and usage. The end of Dennard scaling and the slowdown of Moore's Law will require much more efficient architectural approaches than those we have relied on. Although progress on general-purpose processors may hit an asymptote, domain-specific architectures may be the one attractive path for important classes of problems, at least until we invent a flexible and competitive replacement for silicon.
December 10, 2018
Jelani Nelson, Harvard
Jelani Nelson is Associate Professor of Computer Science and John L. Loeb Associate Professor of Engineering and Applied Sciences at Harvard. His main research interest is in algorithm design and analysis, with focus on streaming algorithms, dimensionality reduction, compressed sensing, and randomized linear algebra algorithms. He completed his Ph.D. in computer science at MIT in 2011, receiving the George M. Sprowls Award for best computer science doctoral dissertations at MIT. He is the recipient of an NSF CAREER Award, ONR Young Investigator Award, Sloan Fellowship, and Presidential Early Career Award for Scientists and Engineers (PECASE).
A "sketch" is a data structure supporting some pre-specified set of queries and updates to a database while consuming space substantially (often exponentially) less than the information theoretic minimum required to store everything seen. Thus sketching can be seen as some form of functional compression. The advantages of sketching include reduced memory consumption, faster algorithms, and reduced bandwidth requirements in distributed computing environments.
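A classic example of such a structure, offered here as my own illustration rather than anything specific to the talk, is the Count-Min sketch: it estimates item frequencies in a stream using space independent of the number of distinct items, and since hash collisions can only inflate a counter, every estimate is a guaranteed upper bound on the true frequency.

```python
import random

class CountMin:
    """Approximate frequency counts in a small depth-by-width table."""
    def __init__(self, width=1024, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        # One salted hash function per row.
        self.salts = [rng.getrandbits(32) for _ in range(depth)]

    def _cell(self, row, item):
        return hash((self.salts[row], item)) % self.width

    def update(self, item, count=1):
        for r in range(self.depth):
            self.table[r][self._cell(r, item)] += count

    def query(self, item):
        # Collisions only ever add, so the minimum over rows is the
        # tightest estimate and never undercounts.
        return min(self.table[r][self._cell(r, item)] for r in range(self.depth))

cm = CountMin()
for _ in range(1000):
    cm.update("heavy")
cm.update("light")
```

With width w and depth d, the table uses O(w·d) counters regardless of how many distinct items appear, and each estimate exceeds the true count by more than a small additive error only with probability exponentially small in d.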
Sketching has been a core technique in several domains, including processing massive data streams with a low memory footprint, 'compressed sensing' for lossy compression of signals with few linear measurements, and dimensionality reduction or 'random projection' methods for speedups in large-scale linear algebra and high-dimensional computational geometry.
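The 'random projection' idea can itself be sketched in a few lines. This is a toy illustration under my own choice of parameters, not an example from the talk: multiplying the data by a random Gaussian matrix scaled by 1/sqrt(k) approximately preserves pairwise Euclidean distances, which is the content of the Johnson-Lindenstrauss lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 1000, 300          # 50 points in R^1000, projected down to R^300
X = rng.normal(size=(n, d))

# Random projection matrix: i.i.d. Gaussians, scaled so that squared
# norms are preserved in expectation.
P = rng.normal(size=(d, k)) / np.sqrt(k)
Y = X @ P

# Distance between the first two points before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
```

Downstream algorithms (nearest neighbors, regression, clustering) can then run on the k-dimensional sketch Y instead of the full data, trading a small, controllable distortion for large savings in time and space.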
This talk will provide a glimpse into some recent progress on core problems in the theory of sketching algorithms.
February 13, 2019
Tal Rabin, IBM
Tal Rabin is a Distinguished Research Staff Member and the manager of the Cryptographic Research Group at IBM's T.J. Watson Research Center. Her research focuses on the general area of cryptography and more specifically on secure multiparty computation and privacy-preserving computations. She has a PhD from the Hebrew University. Rabin is an ACM Fellow, an IACR (International Association for Cryptologic Research) Fellow, and a member of the American Academy of Arts and Sciences. She was named one of Forbes' Top 50 Women in Tech in 2018. In 2014 she won the Anita Borg Women of Vision Award for Innovation and was ranked #4 on Business Insider's list of the 22 Most Powerful Women Engineers. Tal has served as the Program and General Chair of the leading cryptography conferences and is an editor of the Journal of Cryptology. She has initiated and organizes the Women in Theory Workshop, a biennial event for graduate students in Theory of Computer Science.
The area of secure multiparty computation (MPC) started in the early 1980s and has become a very active area of research, with thousands of results. In recent years it has added a focus on designing practical solutions that also provide privacy. This aspect of MPC is driven by rising privacy concerns and needs, the deployment of the cloud, and the burst of cryptocurrencies.
In this talk we will present this journey, from the field's first days to current solutions to real-world problems.
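As a concrete taste of what MPC provides, here is a minimal, hypothetical sketch of additive secret sharing, one of the oldest building blocks in this line of work: each value is split into random shares that individually reveal nothing, yet parties can compute a sum by operating only on their own local shares.

```python
import random

P = 2**61 - 1  # a public prime modulus; all arithmetic is done mod P

def share(secret, n=3):
    """Split a secret into n additive shares; any n-1 of them look random."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Only the full set of shares recovers the secret."""
    return sum(shares) % P

# Secure addition: each party adds its two local shares; no input secret
# is ever reassembled until the final opening of the result.
a_shares = share(123)
b_shares = share(456)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
```

Multiplication of shared values is the harder step and is where the bulk of the protocol machinery (e.g. preprocessed multiplication triples, or threshold schemes like Shamir's) comes in; addition alone already conveys the core privacy idea.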