2024-2025 DISTINGUISHED LECTURE SERIES

October 23, 2024

Hanna Hajishirzi, University of Washington

OLMo: Accelerating the Science of Language Modeling

Bio:
Hanna Hajishirzi is the Torode Family Associate Professor in the Allen School of Computer Science and Engineering at the University of Washington and a Senior Director of NLP at AI2. Her current research delves into various domains within Natural Language Processing (NLP) and Artificial Intelligence (AI), with a particular emphasis on accelerating the science of language modeling, broadening the scope of language models, and enhancing their applicability and usefulness for human lives. She has published over 140 scientific articles in prestigious journals and conferences across ML, AI, NLP, and Computer Vision. She is the recipient of numerous awards, including the Sloan Fellowship, NSF CAREER Award, Intel Rising Star Award, Allen Distinguished Investigator Award, Academic Achievement UIUC Alumni Award, and Innovator of the Year Award by GeekWire. The work from her lab has been nominated for or has received best paper awards at various conferences and has been featured in numerous magazines and newspapers.

Abstract:

Language models (LMs) have become ubiquitous in both AI research and commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details of their training data, architectures, and development undisclosed. Given the significance of these details in scientifically studying these models, including their biases and potential risks, I argue that it is essential for the research community to have access to powerful, truly open LMs. In this talk, I present our OLMo project aimed at building strong language models and making them fully accessible to researchers along with open-source code for data, training, and inference. I describe our efforts in building language models from scratch, expanding their scope to make them useful in real-world applications, and investigating a new generation of LMs that address fundamental challenges inherent in current models.
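As a concrete note on that accessibility, the OLMo weights are distributed openly; below is a minimal sketch of loading a checkpoint with the Hugging Face transformers library. The "allenai/OLMo-7B-hf" model name and the prompt are assumptions for illustration, not details from the talk; check the OLMo repository for current checkpoint names.

```python
# Minimal sketch: loading an open OLMo checkpoint via Hugging Face transformers.
# Assumes a recent transformers release and the "allenai/OLMo-7B-hf" checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-hf")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-hf")

inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```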

Watch Video of Lecture

October 28, 2024

Matt Blaze, Georgetown University

Making Elections More Trustworthy (and trusted)

Bio:
Matt Blaze, Ph.D., is a professor of law at Georgetown Law and a professor of computer science at Georgetown University. For more than 25 years, Blaze’s research and scholarship have focused on security and privacy in computing and communications systems, especially as we rely on insecure platforms such as the internet for increasingly critical applications. His work has focused particularly on the intersection of this technology with public policy issues. For example, in 2007, he led several of the teams that evaluated the security of computerized election systems from several vendors on behalf of the states of California and Ohio.

Watch Video of Lecture

Abstract:

From voter registration to tallying ballots to reporting results, technology - computers and software - plays a central role in almost every aspect of US elections. Information technology has become essential for managing the US's complex elections, and, when all goes well, provides great benefits in efficiency, accuracy, and usability. But computers and software are also notoriously (and fundamentally) unreliable and vulnerable to tampering, and the systems we use for voting and election management are no exception. In some ways, the integrity of election outcomes has become dependent on the integrity of technology that may not always work as intended. Can we trust election outcomes? Should we?

Fortunately, recent advances have yielded reliable methods for conducting high-integrity elections even with flawed (or malicious) technology. This talk will examine the technologies used in elections, the ways they can fail, and practical safeguards that mitigate the risks they introduce.
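The abstract does not name specific methods, but one widely deployed example of such a safeguard is the risk-limiting audit (RLA). Below is a minimal sketch, under simplifying assumptions (two candidates, accurate reported totals, sampling with replacement), of the expected number of ballots a BRAVO-style ballot-polling audit (Lindeman & Stark) would examine; it is a Wald sequential-test approximation that ignores overshoot, not a production audit tool.

```python
import math

def bravo_expected_ballots(winner_share: float, risk_limit: float) -> float:
    """Approximate expected ballots sampled by a BRAVO ballot-polling
    risk-limiting audit when the reported outcome is correct;
    assumes two candidates and sampling with replacement."""
    p = winner_share
    # Expected per-ballot increase in the log likelihood ratio:
    # winner ballots multiply the ratio by 2p, loser ballots by 2(1 - p).
    mu = p * math.log(2 * p) + (1 - p) * math.log(2 * (1 - p))
    # The audit stops when the likelihood ratio reaches 1 / risk_limit.
    return math.log(1 / risk_limit) / mu

# A 55%-45% contest audited at a 5% risk limit needs on the order of a
# few hundred sampled ballots, regardless of how many ballots were cast.
print(round(bravo_expected_ballots(0.55, 0.05)))  # ~600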

November 20, 2024

Margaret Martonosi, Princeton University

Taking on the World's Challenges: The Role of Computer Systems and Architecture Research

Bio:
Margaret Martonosi is the H.T. Adams '35 Professor of Computer Science at Princeton University, where she has been on the faculty since 1994. In addition, while on leave from Princeton, Martonosi recently served a 4-year rotation leading the U.S. National Science Foundation’s Directorate for Computer and Information Science and Engineering. NSF is the primary source of federal research funding for computing. Martonosi’s role there was to lead budget and operational strategy in stewarding this funding for the community.

Martonosi is an elected member of the US National Academy of Engineering and the American Academy of Arts and Sciences. In 2021, she received computer architecture’s highest honor, the ACM/IEEE Eckert-Mauchly Award, for contributions to the design, modeling, and verification of power-efficient computer architecture. She is a Fellow of IEEE and ACM. Her papers have received numerous long-term impact awards in the SIGARCH, SIGMOBILE, and other communities. She received the 2023 ACM Frances E. Allen Award for Outstanding Mentoring, for her impacts on computer architecture and the broader computing community. Other notable awards include the 2018 IEEE Computer Society Technical Achievement Award, 2010 Princeton University Graduate Mentoring Award, and the 2019 ACM SIGARCH Alan D. Berenbaum Distinguished Service Award. Her work with others to co-found the ACM CARES movement was recognized by the Computing Research Association’s 2020 Distinguished Service Award.

Abstract:

Throughout human history, society has faced great opportunities and challenges, and has used its available technologies to navigate them. Today, many of the global opportunities and challenges we face call for the full engagement of the computer systems and architecture research community. Resiliently navigating climate trends will require computing techniques and systems to model the future, as well as innovative techniques to mitigate our carbon footprint through telepresence, optimized logistics, and more. Another grand challenge of our era is the ability for us, as individuals and as groups, to communicate with each other in a way that upholds accuracy, integrity, privacy, and trust. This talk will discuss how the different elements of the computer science ecosystem (academia, industry, professional organizations, and governments) can work together to meet these challenges. It will close with a call to action on how we can best navigate the next decade and beyond.

Watch Video of Lecture

December 04, 2024

Işıl Dillig, University of Texas at Austin

Neurosymbolic Program Synthesis: Bridging Perception and Reasoning in Real-World Applications

Bio:
Isil Dillig is a Professor of Computer Science at The University of Texas at Austin, where she leads the UToPiA research group. Her primary research interests span programming languages, formal methods, program synthesis, and software verification. She earned her Bachelor of Science, Master of Science, and Ph.D. degrees in Computer Science from Stanford University. Dr. Dillig’s work has been recognized with honors such as the Sloan Research Fellowship and the NSF CAREER Award, as well as best paper awards at conferences including PLDI, POPL, OOPSLA, and ETAPS. She has served as Program Committee Chair for PLDI 2022 and CAV 2019 and contributed to program committees for many conferences in her field. Finally, her dedication to teaching has been recognized with multiple awards such as the Texas 10 and the College of Natural Sciences Teaching Excellence Award.

Abstract:

Neurosymbolic Program Synthesis (NSP) integrates neural networks and symbolic reasoning to tackle complex tasks requiring both perception and logical reasoning. This talk provides an overview of the NSP framework and its applications in domains such as image editing, data extraction, and robot learning from demonstrations. We will delve into the key ideas behind NSP learning algorithms, focusing on the synergistic interplay between neural guidance and symbolic reasoning. Finally, we will discuss recent advances in ensuring the correctness of synthesized neurosymbolic programs, paving the way for robust and reliable AI systems.
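To make that interplay concrete, here is a toy sketch of neurally guided enumerative synthesis, invented for illustration rather than drawn from the speaker's systems: a hand-written heuristic stands in for the neural model and ranks operators in a tiny string-editing DSL, while exhaustive search symbolically checks each candidate program against the input-output examples.

```python
import itertools
from typing import Callable, Dict, List, Tuple

# Toy DSL of string-editing operators, standing in for the richer DSLs
# used in real neurosymbolic synthesizers.
DSL: Dict[str, Callable[[str], str]] = {
    "upper": str.upper,
    "lower": str.lower,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}

Examples = List[Tuple[str, str]]

def neural_scores(examples: Examples) -> Dict[str, float]:
    # Stand-in for a learned model that scores operators given the
    # examples; here, a hand-written heuristic for illustration only.
    scores = {op: 1.0 for op in DSL}
    if any(out.isupper() for _, out in examples):
        scores["upper"] += 1.0
    if any(out.islower() for _, out in examples):
        scores["lower"] += 1.0
    return scores

def synthesize(examples: Examples, max_len: int = 3):
    # Neural guidance orders the search; symbolic checking accepts only
    # programs consistent with every example.
    scores = neural_scores(examples)
    ranked = sorted(DSL, key=lambda op: -scores[op])
    for length in range(1, max_len + 1):
        for prog in itertools.product(ranked, repeat=length):
            def run(s: str) -> str:
                for op in prog:
                    s = DSL[op](s)
                return s
            if all(run(inp) == out for inp, out in examples):
                return prog
    return None

# Learns to strip whitespace and upper-case from two demonstrations:
print(synthesize([("  Hello ", "HELLO"), (" wOrLd ", "WORLD")]))  # ('upper', 'strip')
```

In real NSP systems, the learned component typically predicts a distribution over DSL productions to prioritize the search, and the symbolic component can also prune infeasible partial programs rather than only checking complete candidates.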

Watch Video of Lecture

December 09, 2024

Nick Feamster, University of Chicago

Fifteen Years of Measuring Access Network Performance: From Benchmarks to Equity

Bio:
Nick Feamster is the Neubauer Professor of Computer Science and the Director of Research at the Data Science Institute at the University of Chicago. Previously, he was a full professor in the Computer Science Department at Princeton University, where he directed the Center for Information Technology Policy (CITP). Prior to Princeton, he was a full professor in the School of Computer Science at Georgia Tech. His research spans many aspects of computer networking and networked systems, with a focus on network operations, network security, and censorship-resistant communication systems. He earned his Ph.D. in Computer Science from MIT in 2005 and his S.B. and M.Eng. degrees in Electrical Engineering and Computer Science from MIT in 2000 and 2001, respectively. He was an early-stage employee at Looksmart (acquired by AltaVista), where he developed the company's first web crawler, and at Damballa, where he helped design the company’s first botnet-detection algorithm.

Nick is an ACM Fellow and has received numerous awards for his contributions to computer networking and cybersecurity, including the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work on spam filtering. Other honors include the Technology Review 35 "Top Young Innovators Under 35" award, the ACM SIGCOMM Rising Star Award, a Sloan Research Fellowship, the NSF CAREER award, the IBM Faculty Fellowship, and the IRTF Applied Networking Research Prize. His research papers have received awards at ACM SIGCOMM (on the network-level behavior of spammers), the SIGCOMM Internet Measurement Conference (on measuring web performance bottlenecks), USENIX Security (on circumventing web censorship using Infranet and web cookie analysis), and USENIX Networked Systems Design and Implementation (on fault detection in router configuration and software-defined networking). His seminal work on the Routing Control Platform received the USENIX Test of Time Award for its impact on Software Defined Networking.

Abstract:

The last 15 years have seen significant advances in the area of measuring broadband access networks. In the mid-2000s, measuring access networks was simpler: access networks were typically the throughput bottleneck along an end-to-end path, and access speeds were slower, making it far easier to measure ISP performance and understand the contributions of the access ISP to overall end-to-end performance.

Today, as access network speeds have increased, performance bottlenecks may lie anywhere along an end-to-end path, complicating performance analysis. Furthermore, higher access throughput has shifted attention to other metrics, such as latency and application performance, which present their own challenges. The increasing importance of the Internet in everyday life has also amplified interest in questions of quality of experience (QoE) and equity of Internet access, adding new research directions to the age-old problems of Internet performance measurements.
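One way to see why bottleneck location matters is a naive single-connection download test, which measures the slowest element anywhere on the path rather than the access link specifically. A minimal sketch follows; the test-file URL is a hypothetical placeholder, and production tests such as Ookla's use multiple parallel connections and nearby servers to avoid exactly this pitfall.

```python
import time
import urllib.request

# Naive single-connection "speed test": time one HTTP download and report
# throughput. The URL is a hypothetical placeholder for a large test file.
URL = "https://speedtest.example.net/100MB.bin"

start = time.monotonic()
nbytes = len(urllib.request.urlopen(URL).read())
elapsed = time.monotonic() - start

# If anything other than the access link is the bottleneck (the server,
# peering, or a single TCP flow's congestion control), this result
# understates the access link's capacity.
print(f"{nbytes * 8 / elapsed / 1e6:.1f} Mbps over {elapsed:.1f} s")
```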

This talk narrates a 15-year research arc that starts with simple speed test designs and ends with problems in QoE and equity, all ultimately geared toward the larger goal of improving the lived experience of Internet users around the world.

I will first discuss our early and ongoing work designing and evaluating state-of-the-art "speed tests".

Then, I will present our early and ongoing research on application quality of experience (QoE) inference, focusing on techniques we have developed to infer quality metrics such as startup delay and resolution for encrypted video streaming services, including a multi-year investigative effort in collaboration with the Wall Street Journal that has now also formed the basis of a commercial venture, NetMicroscope. At the core of this work are machine learning models that perform quality inference across diverse services such as Netflix, YouTube, Amazon, and Twitch. These models provide fine-grained predictions, revealing, for instance, that higher Internet speeds often yield only marginal improvements to QoE metrics like startup delay and resolution.

Finally, I will share some of our ongoing work on Internet equity, as part of the Internet Equity Initiative, which I founded and direct at the University of Chicago. In recent work, we apply spatial modeling techniques to crowdsourced measurement datasets to construct stable sampling boundaries that reflect disparities in Internet performance across neighborhoods. By overlaying interpolated maps and clustering contiguous regions, we demonstrate how our methods outperform traditional approaches that rely on predefined social or political boundaries. I will close with a reflection on the evolution of Internet access network performance measurement, discussing what has changed over time and what aspects of the problem remain timeless.
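As a toy illustration of the style of QoE inference described above, a classifier can be trained on features observable from encrypted traffic, such as throughput, packet counts, and packet sizes, to predict a quality label like video resolution. The features, distributions, and synthetic data below are invented for the sketch; they are not the NetMicroscope models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-window features observable without decryption:
# [downstream throughput (Mbps), packet count, mean packet size (bytes)].
# Synthetic data stands in for labeled traces of a video service.
sd_traffic = np.column_stack([rng.normal(2, 0.5, 200),
                              rng.normal(2_000, 300, 200),
                              rng.normal(900, 100, 200)])
hd_traffic = np.column_stack([rng.normal(15, 3, 200),
                              rng.normal(12_000, 1_500, 200),
                              rng.normal(1_300, 80, 200)])
X = np.vstack([sd_traffic, hd_traffic])
y = np.array([0] * 200 + [1] * 200)  # 0 = standard definition, 1 = HD

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[12.0, 11_000, 1_280]]))  # likely [1], i.e., HD
```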

Watch Video of Lecture
