2022-2023 DISTINGUISHED LECTURE SERIES
September 19, 2022
Jignesh Patel, University of Wisconsin-Madison
Revenge of the database machine? Towards a hardware-software approach for high-performance databases
Jignesh Patel is a professor in the Computer Science Department at the University of Wisconsin-Madison, where he also holds an affiliate position in the Department of Biostatistics and Medical Informatics. His research interests are in data management systems. Several of his papers have been selected as best papers at top database venues, including SIGMOD and VLDB. He is a Fellow of the AAAS, the ACM, and the IEEE. He has also won teaching awards at the University of Wisconsin and the University of Michigan. He has a keen interest in technology transfer from university research and has spun off four startups from his research group. Further, at Wisconsin, he has co-founded entrepreneurship organizations at both the department and university levels to help other entrepreneurs.
Database applications have an insatiable appetite for higher performance. In the past, a large part of this appetite has been fed by leveraging the gift of Moore’s Law. The slowing down of Moore’s Law now requires a new approach. Fortunately, the hardware landscape is undergoing a Cambrian explosion of new architectures, and in this talk, I will describe how one class of architecture may provide part of the answer in our search for future high-performance database systems. This architecture, called processing-in-memory (PIM), packages compute and storage closely together and appears to be a good candidate for accelerating database analytic workloads. However, as I will describe in this talk, a true hardware-software co-design strategy is needed to engineer critical changes on both sides to achieve high performance. This talk will also highlight the rich opportunities in redesigning core database kernels to work with a broader class of hardware by considering the circuit-level parallelism that is present in most computing substrates. Finally, I will go back five decades to the early days of the database field, when a similar co-design approach (then called database machines) was dominant, and discuss how that history may hold valuable lessons for the path forward.
September 28, 2022
Allison Bishop, Proof Trading
How the US stock market is a cryptography problem that almost no one is working on
Allison Bishop is the President and co-founder of Proof Trading, an institutional broker-dealer for US equities, as well as a part-time visiting professor at City College. She is an appointed member of the board of the International Association for Cryptologic Research, and the creator of the annual Conference for Failed Approaches and Insightful Losses in cryptology (CFAIL). She was formerly a quantitative researcher at IEX (the stock exchange featured in Michael Lewis' book "Flash Boys") and an assistant professor of computer science at Columbia University. Her primary areas of research include stock market microstructure, applied machine learning, and cryptography.
On any given day, billions of shares are traded on the US stock market. Nonetheless, institutions who seek to buy or sell large amounts of stock may stick out in the crowd and fall victim to predatory behavior. Practitioners refer to this problem as "information leakage," which sounds technical, but there is no widely accepted definition of the phenomenon, and little in the way of public-facing research on the topic. In this talk, I'll describe why cryptographers and computer scientists are well-suited to approach this problem, and why more public research would be good for the health of the financial industry. I will also discuss the related historical and economic trends that have influenced the structure of the US stock market, leading to its current state.
October 17, 2022
Eric Brewer, Google
Trustworthy Open Source: The Consequences of Success
Eric is a VP and Fellow at Google and leads technical areas including Kubernetes, Serverless, and Anthos. A recent focus is security for open-source software, including supply-chain risks and helping start the OpenSSF. At Berkeley, he led work on cloud computing, network infrastructure, IoT, and the CAP Theorem. He has also led work on technology for developing regions, with projects in India, the Philippines, and Kenya among others, including communications, power, and health care. In 1996, he co-founded Inktomi Corporation and helped lead it onto the NASDAQ 100. In 2000, working with President Clinton, Professor Brewer helped to create USA.gov, the official portal of the Federal government. Major awards include membership in the NAE, the AAAS, and the American Academy of Arts and Sciences, the ACM Prize in Computing, and the ACM SIGOPS Mark Weiser Award.
Widespread use of open-source software is a remarkable achievement, but it also creates a tremendous responsibility. How can we collectively step up to ensure that open-source software is worthy of the trust the world now expects and deserves? We cover a range of structural and security challenges and how we might address them, including our hopes for a more sustainable future.
October 24, 2022
Lawrence Saul, Flatiron Institute
A Geometrical Connection Between Sparse and Low-rank Matrices and Its Uses for Machine Learning
Lawrence Saul is a Senior Research Scientist in the Center for Computational Mathematics (CCM) at the Flatiron Institute. He joined CCM in July 2022 as a group leader in machine learning; previously, he was a Professor and Vice Chair in the Department of Computer Science and Engineering at UC San Diego.
Many problems in high dimensional data analysis can be formulated as a search for structure in large matrices. One important type of structure is sparsity; for example, when a matrix is sparse, with a large number of zero elements, it can be stored in a highly compressed format. Another type of structure is linear dependence; when a matrix is low-rank, it can be expressed as the product of two smaller matrices. It is well known that neither one of these structures implies the other. But can one find more subtle connections by looking beyond the canonical decompositions of linear algebra?
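The claim that neither structure implies the other is easy to check directly. The following minimal NumPy illustration (mine, not from the talk) exhibits a sparse matrix of full rank and a dense matrix of rank one:

```python
import numpy as np

# The identity matrix is as sparse as an invertible matrix can be,
# yet it has full rank.
n = 6
I = np.eye(n)
print(np.count_nonzero(I), np.linalg.matrix_rank(I))   # 6 nonzeros (of 36), rank 6

# The all-ones matrix has no zero elements at all, yet it has rank 1:
# every row is a copy of the first.
J = np.ones((n, n))
print(np.count_nonzero(J), np.linalg.matrix_rank(J))   # 36 nonzeros, rank 1
```

So sparsity and low rank are independent properties, which is what makes the subtler connection described below interesting.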
In this talk, I will consider when a sparse nonnegative matrix can be recovered from a real-valued matrix of significantly lower rank. Of particular interest is the setting where the positive elements of the sparse matrix encode the similarities of nearby points on a low dimensional manifold. The recovery can then be posed as a problem in manifold learning—namely, how to learn a similarity-preserving mapping of high-dimensional inputs into a lower-dimensional space. I will describe an algorithm for this problem based on a generalized low-rank decomposition of sparse matrices. This decomposition has the interesting property that it can be encoded by a neural network with one layer of rectified linear units; since the algorithm discovers this encoding, it can also be viewed as a layerwise primitive for deep learning. Finally, I will apply the algorithm to data sets where vector magnitudes and small cosine distances have interpretable meanings (e.g., the brightness of an image, the similarity to other words). On these data sets, the algorithm is able to discover much lower dimensional representations that preserve these meanings.
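The encoding direction of the decomposition can be sketched in a few lines of NumPy. This is only an illustration of the general phenomenon, not the algorithm from the talk: applying one layer of rectified linear units to a generic low-rank matrix yields a matrix that is nonnegative, roughly half sparse, and of much higher rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3

# A random rank-r matrix: generically dense, with elements of both signs.
L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# One layer of rectified linear units: S = max(0, L).
S = np.maximum(L, 0.0)

print(np.linalg.matrix_rank(L))        # 3
print(np.linalg.matrix_rank(S))        # far exceeds 3
print(np.count_nonzero(S) / S.size)    # roughly 0.5
```

The interesting (and much harder) inverse problem, which the talk addresses, is recovering such a low-rank representation when only the sparse nonnegative matrix is given.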
October 31, 2022
Sanja Fidler, University of Toronto, NVIDIA
Towards A.I. for 3D Content Creation
Sanja Fidler is an Associate Professor at the University of Toronto, affiliated faculty at the Vector Institute (and one of its co-founding members), and VP of AI Research at NVIDIA, where she leads a research lab in Toronto. Prior to coming to Toronto in 2014, she was a Research Assistant Professor at the Toyota Technological Institute at Chicago, an academic institute located on the campus of the University of Chicago. Her work is in the area of computer vision and machine learning. She received the NVIDIA Pioneer of AI Award, an Amazon Academic Research Award, a Facebook Faculty Award, and the Connaught New Researcher Award. In 2018 she was appointed a Canada CIFAR AI Chair. Her work on semi-automatic object instance annotation won a Best Paper Honorable Mention at CVPR 2017. Her main research interests lie at the intersection of computer vision and graphics: 3D vision, 3D reconstruction and synthesis, and interactive methods for image annotation.
3D content is key in several domains, such as architecture, film, gaming, and robotics, and lies at the heart of metaverse applications. However, creating 3D content can be very time-consuming: artists need to sculpt high-quality 3D assets, compose them into large worlds, and bring these worlds to life by writing behavior models that "drive" the characters around in the world. For applications such as the metaverse, which feature extremely large 3D virtual worlds, A.I. that can help automate, scale up, and democratize 3D content creation is essential. In this talk, I'll present some of the ongoing efforts on A.I. for creating virtual worlds.