2023-2024 DISTINGUISHED LECTURE SERIES

October 09, 2023

Ben Y. Zhao, University of Chicago

Protecting Human Users from Misused AI

Bio:
Ben Zhao is the Neubauer Professor of Computer Science at the University of Chicago. He received his Ph.D. from U.C. Berkeley (2004) and his B.S. from Yale (1997). He is a Fellow of the ACM and a recipient of the NSF CAREER Award, MIT Technology Review's TR-35 Award (Young Innovators Under 35), the USENIX Internet Defense Prize, ComputerWorld Magazine's Top 40 Tech Innovators Award, the IEEE ITC Early Career Award, and faculty awards from Google, Amazon, and Facebook. His work has been covered by media outlets including the New York Times, CNN, NBC, the BBC, MIT Technology Review, the Wall Street Journal, Forbes, and New Scientist. He has published over 180 articles in the areas of security and privacy, machine learning, networking, and HCI. He served as TPC (co-)chair for the World Wide Web Conference (WWW 2016) and the ACM Internet Measurement Conference (IMC 2018), and serves on the steering committee for HotNets.

Abstract:

Recent developments in machine learning and artificial intelligence have taken nearly everyone by surprise. The arrival of arguably the most transformative wave of AI did not bring us smart cities full of self-driving cars, or robots that do our laundry and mow our lawns. Instead, it brought us over-confident token predictors that hallucinate, deepfake generators that produce realistic images and video, and ubiquitous surveillance. In this talk, I’ll describe some of our recent efforts to warn about, and later defend against, some of the darker sides of AI. In particular, I will tell the story of how our efforts to disrupt unauthorized facial recognition models led unexpectedly to Glaze, a tool to defend human artists against art mimicry by generative image models. I will share some of the ups and downs of implementing and deploying an adversarial ML tool to a global user base, and reflect on mistakes and lessons learned.
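As background for the adversarial-ML idea behind tools like Glaze, a style "cloak" can be thought of as a small, bounded perturbation that pushes an image's feature representation toward a different target style while staying visually subtle. The toy linear feature extractor, loss, and parameter values below are illustrative assumptions, not the actual Glaze implementation:

```python
import numpy as np

def cloak(image, features, target_style, steps=200, lr=0.005, budget=0.03):
    """Compute a small perturbation (a "cloak") that moves an image's
    feature representation toward a target style, clipping the change
    to a per-pixel budget so it stays visually subtle. `features` is a
    toy linear feature extractor (a matrix); real systems use deep
    feature encoders instead."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        # Gradient of ||features @ (image + delta) - target_style||^2
        # with respect to delta, for a linear feature map.
        err = features @ (image + delta) - target_style
        grad = 2.0 * features.T @ err
        delta -= lr * grad                       # step toward the target style
        delta = np.clip(delta, -budget, budget)  # keep the cloak imperceptible
    return image + delta
```

A model that trains on cloaked images then learns features closer to the decoy style than to the artist's true style, which is the mimicry-disruption effect the talk describes.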

Watch Video of Lecture

October 25, 2023

Sarita Adve, University of Illinois at Urbana-Champaign

Enabling the Era of Immersive Computing

Bio:
Sarita Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois at Urbana-Champaign where she directs IMMERSE, a campus-wide Center for Immersive Computing. Her research interests span the system stack, ranging from hardware to applications. Her work on the data-race-free, Java, and C++ memory models forms the foundation for memory models used in most hardware and software systems today. Her group released the ILLIXR (Illinois Extended Reality) testbed, an open-source extended reality system and research testbed, and launched the ILLIXR consortium to democratize XR research, development, and benchmarking. She is also known for her work on heterogeneous systems and software-driven approaches for hardware resiliency. She is a member of the American Academy of Arts and Sciences, a fellow of the ACM and IEEE, and a recipient of the ACM/IEEE-CS Ken Kennedy award. As ACM SIGARCH chair, she co-founded the CARES movement, winner of the CRA distinguished service award, to address discrimination and harassment in Computer Science research events. She received her PhD from the University of Wisconsin-Madison and her B.Tech. from the Indian Institute of Technology, Bombay.

Abstract:

Computing is on the brink of a new immersive era. Recent innovations in virtual/augmented/mixed reality (extended reality or XR) show the potential for a new immersive modality of computing that will transform most human activities and change how we design, program, and use computers. There is, however, an orders-of-magnitude gap between the power/performance/quality-of-experience attributes of current and desirable immersive systems. Bridging this gap requires an inter-disciplinary research agenda that spans end-user devices, edge, and cloud, is based on hardware-software-algorithm co-design, and is driven by end-to-end human-perceived quality of experience.

The ILLIXR (Illinois Extended Reality) project has developed an open source end-to-end XR system to enable such a research agenda. ILLIXR is being used in academia and industry to quantify the research challenges for desirable immersive experiences and provide solutions to address these challenges. To further push the interdisciplinary frontier for immersive computing, we recently established the IMMERSE center at Illinois to bring together research, education, and infrastructure activities in immersive technologies, applications, and human experience. This talk will give an overview of IMMERSE and a deeper dive into the ILLIXR project, including the ILLIXR infrastructure, its use to identify XR systems research challenges, and cross-system solutions to address several of these challenges.

Watch Video of Lecture

November 01, 2023

Heng Ji, University of Illinois at Urbana-Champaign

SmartBook: an AI Prophetess for Disaster Reporting and Forecasting

Bio:
Heng Ji is a professor in the Computer Science Department, and an affiliated faculty member of the Electrical and Computer Engineering Department and the Coordinated Science Laboratory, at the University of Illinois Urbana-Champaign. She is an Amazon Scholar and the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge-enhanced Large Language Models, Knowledge-driven Generation, and Conversational AI. In 2023 she was selected as a Young Scientist to attend the 6th World Laureates Association Forum and to participate in DARPA AI Forward. She was named a "Young Scientist" and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017, and one of the Women Leaders of Conversational AI (Class of 2023) by Project Voice. Her awards include the "AI's 10 to Watch" Award from IEEE Intelligent Systems in 2013, the NSF CAREER Award in 2009, PACLIC 2012 Best Paper Runner-up, "Best of ICDM 2013" and "Best of SDM 2013" paper awards, an ACL 2018 Best Demo Paper nomination, the ACL 2020 and NAACL 2021 Best Demo Paper Awards, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2014-2018. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030, and was invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023. She leads many multi-institution projects and tasks, including the U.S. ARL projects on information fusion and knowledge network construction, and the DARPA ECOLE MIRACLE, DARPA KAIROS RESIN, and DARPA DEFT Tinker Bell teams. She coordinated the NIST TAC Knowledge Base Population task from 2010 to 2021. She was an associate editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing, and served as Program Committee Co-Chair of many conferences, including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA, NSF, DoE, ARL, IARPA, AFRL, DHS) and industry (Amazon, Google, Facebook, Bosch, IBM, Disney).

Abstract:

History repeats itself, sometimes in a bad way. If we don’t learn lessons from history, we may suffer similar tragedies, many of which are preventable. For example, many experts now agree that some schools were closed for too long during COVID-19 and that abruptly removing millions of children from American classrooms has had harmful effects on their emotional and intellectual health. Many also wish we had invested in vaccines earlier, prepared more personal protective equipment and medical facilities, provided online consultation services for people suffering from anxiety and depression, and created better online education platforms for students. Similarly, genocides throughout history (from those in World War II to the one in Rwanda in 1994) have all shared early warning signs (e.g., the organization of hate groups, militias, and armies, and the polarization of the population), forming patterns that follow discernible progressions. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them, or ideally, eliminate them. Emerging events, such as the COVID pandemic and the Ukraine crisis, require a time-sensitive, comprehensive understanding of the situation to allow for appropriate decision-making and effective action. Automated generation of situation reports can significantly reduce the time, effort, and cost for domain experts preparing their official human-curated reports. However, AI research toward this goal has been very limited, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify, locate, and summarize important information, and lack detailed, structured, and strategic awareness.
We propose SmartBook, a novel framework for situation report generation, a task beyond the reach of ChatGPT alone: it consumes large volumes of news data to produce a structured situation report in which multiple hypotheses (claims) are summarized and grounded with rich links to factual evidence through claim detection, fact checking, misinformation detection, and factual error correction. Furthermore, SmartBook can also serve as a novel news event simulator, or an intelligent prophetess. Given “what-if” conditions and dimensions elicited from a domain-expert user concerning a disaster scenario, SmartBook will induce schemas from historical events and automatically generate a complex event graph, along with a timeline of news articles describing new simulated events, based on a new Λ-shaped attention mask that can generate text of unbounded length. By effectively simulating disaster scenarios in both event-graph and natural-language form, we expect SmartBook will greatly assist humanitarian workers and policymakers in exercising reality checks (what would the next disaster look like under these given conditions?), and thus in better preventing and responding to future disasters.
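The report-generation stages the abstract names (claim detection, evidence grounding, fact checking) can be sketched as a pipeline skeleton. The function interfaces here are hypothetical placeholders for illustration, not SmartBook's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)
    verified: bool = False

def build_situation_report(articles, detect_claim, find_evidence, check_fact):
    """Skeleton of a SmartBook-style report pipeline (hypothetical
    interfaces): detect claims in incoming news, ground each claim in
    links to evidence, and record whether it passes fact checking."""
    report = []
    for article in articles:
        for sentence in article.split(". "):
            if detect_claim(sentence):
                claim = Claim(text=sentence)
                claim.evidence = find_evidence(sentence)  # grounding links
                claim.verified = check_fact(sentence, claim.evidence)
                report.append(claim)
    return report
```

In a real system each stage would be a learned model rather than a stub, but the structure, claims first, then evidence, then verification, is what distinguishes a grounded report from free-form generation.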

Watch Video of Lecture

November 08, 2023

Caroline Uhler, Massachusetts Institute of Technology

Causal Representation Learning and Optimal Intervention Design

Bio:
Caroline Uhler is a Full Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society at MIT. In addition, she is a core institute member of the Broad Institute, where she co-directs the Eric and Wendy Schmidt Center. She holds an MSc in mathematics, a BSc in biology, and an MEd, all from the University of Zurich. She obtained her PhD in statistics from UC Berkeley in 2011 and then spent three years as an assistant professor at IST Austria before joining MIT in 2015. She is a SIAM Fellow, a Simons Investigator, a Sloan Research Fellow, and an elected member of the International Statistical Institute. In addition, she received an NIH New Innovator Award, an NSF CAREER Award, a Sofja Kovalevskaja Award from the Humboldt Foundation, and a START Award from the Austrian Science Foundation. Her research lies at the intersection of machine learning, statistics, and genomics, with a particular focus on causal inference, representation learning, and gene regulation.

Abstract:

Massive data collection holds the promise of a better understanding of complex phenomena and ultimately, of better decisions. Representation learning has become a key driver of deep learning applications, since it allows learning latent spaces that capture important properties of the data without requiring any supervised annotations. While representation learning has been hugely successful in predictive tasks, it can fail miserably in causal tasks, including predicting the effect of an intervention. This calls for a marriage between representation learning and causal inference. An exciting opportunity in this regard stems from the growing availability of interventional data (in medicine, advertisement, education, etc.). However, these datasets are still minuscule compared to the action spaces of interest in these applications (e.g., interventions can take on continuous values, like the dose of a drug, or can be combinatorial, as in combinatorial drug therapies). In this talk, we will present initial ideas towards building a statistical and computational framework for causal representation learning and discuss its applications to optimal intervention design in the context of drug design and single-cell biology.
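The gap between predictive and causal tasks mentioned in the abstract can be made concrete with a small simulation: when a hidden confounder drives both treatment and outcome, a purely observational regression overestimates the effect of an intervention, while randomized (interventional) data recovers it. All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A hidden confounder H drives both the treatment X and the outcome Y.
# The true causal effect of X on Y is 1.0.
h = rng.standard_normal(n)
x_obs = h + 0.1 * rng.standard_normal(n)
y_obs = 1.0 * x_obs + 2.0 * h + 0.1 * rng.standard_normal(n)

# Observational (purely predictive) estimate: regress Y on X.
# The confounder inflates the slope far above the causal effect of 1.0.
slope_obs = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Interventional data: X is set by randomization, severing the H -> X
# link, so the same regression now recovers the causal effect.
x_int = rng.standard_normal(n)
y_int = 1.0 * x_int + 2.0 * h + 0.1 * rng.standard_normal(n)
slope_int = np.cov(x_int, y_int)[0, 1] / np.var(x_int)
```

Here the intervention is a single randomized scalar; the talk's setting is far harder, since interventions can be continuous (drug dose) or combinatorial (drug combinations), which is what motivates optimal intervention design.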

Watch Video of Lecture

November 15, 2023

Monica Lam, Stanford University

Cognitive Workforce Revolution with Trustworthy and Self-Learning Generative AI

Bio:
Monica Lam is the Kleiner Perkins, Mayfield, Sequoia Capital Professor in the School of Engineering at Stanford, in the Department of Computer Science and, by courtesy, Electrical Engineering. She is the Faculty Director of the Stanford Open Virtual Assistant Laboratory. Prof. Lam is a member of the National Academy of Engineering and an ACM Fellow. She has won numerous best paper awards and has published over 150 papers on natural language processing, machine learning, compilers, computer architecture, operating systems, high-performance computing, and HCI. Her recent research on natural language processing led to the creation of the first conversational virtual assistant based on deep learning, which received Popular Science's Best of What's New Award in Security in 2019. She co-authored the "Dragon Book", the definitive text on compiler technology. She was on the founding team of Tensilica, the first startup in configurable processor cores. She received a B.Sc. from the University of British Columbia and a Ph.D. from Carnegie Mellon University.

Abstract:

Generative AI, and in particular Large Language Models (LLMs), have already changed how we work and study. To truly transform the cognitive workforce, however, LLMs need to be trustworthy so they can operate autonomously without human oversight. Unfortunately, language models are not grounded and have a tendency to hallucinate.

Our research hypothesis is that we can turn LLMs into useful workers across different domains if we (1) teach them how to acquire and apply knowledge from external corpora such as written documents, knowledge bases, and APIs; and (2) have them self-learn through model distillation of simulated conversations. We showed that by supplying different external corpora to our Genie assistant framework, we can readily create trustworthy agents that converse about open-domain topics from Wikidata, Wikipedia, or StackExchange; help users navigate services and products such as restaurants or online stores; persuade users to donate to charities; and improve the social skills of people with autism spectrum disorder.
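One ingredient of such grounding, answering only when the external corpus supports an answer and abstaining otherwise, can be sketched as follows. The keyword-overlap retrieval is a deliberately simplistic stand-in for the retrieval a real system like Genie would use; the function and its behavior are illustrative, not the framework's actual API:

```python
# Tiny stopword list for the toy retriever below (real systems use
# dense embeddings and learned rankers, not keyword overlap).
STOPWORDS = {"the", "is", "a", "an", "of", "in", "are", "what", "where"}

def keywords(text):
    """Content words of a string, lowercased, minus stopwords."""
    return {w for w in text.lower().split() if w not in STOPWORDS}

def grounded_answer(question, corpus, min_overlap=1):
    """Minimal sketch of corpus-grounded answering: retrieve the
    best-matching passage and answer only when it supports the
    question, abstaining otherwise."""
    q_words = keywords(question)
    best, best_score = None, 0
    for passage in corpus:
        score = len(q_words & keywords(passage))
        if score > best_score:
            best, best_score = passage, score
    if best_score < min_overlap:
        return "I don't know."             # abstain rather than hallucinate
    return f"{best} [grounded in corpus]"  # answer tied to its evidence
```

The design point is the abstention branch: a grounded agent trades coverage for trustworthiness by refusing to answer beyond its evidence, which is exactly what an ungrounded LLM fails to do when it hallucinates.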

Watch Video of Lecture (Audio Fixed at minute mark)

Other Lectures