The seminar is a series of talks by invited faculty and student speakers. All are welcome to attend.
This semester (Spring 2023) the standard time is 4:00-5:00pm ET on Mondays. However, several talks are scheduled on other days or at other times, so please check the calendar below carefully so you don't miss any talks!
The seminar will be hybrid this semester. In-person meetings will be held in the CS Conference room (unless noted otherwise). The Zoom link will be sent out to the NLP mailing list. If you are not on the mailing list but would like the link, please email us.
The seminar is co-organized by Emily Allaway and Fei-Tzin Lee. Please contact us with any questions.
Abstract: Executable language grounding focuses on mapping natural language instructions into code or actions executable within real-world contexts, including databases, web applications, and robotic environments. The field of natural language processing (NLP) has recently seen significant advances, particularly in language grounding, driven by large language models (LLMs) such as Codex, GPT-4, and ChatGPT-Plugins. This progress paves the way for the development of next-generation natural language interfaces. In this presentation, I will discuss our latest efforts to harness the capabilities of LLMs, primarily Codex and GPT-4, to create natural language interfaces capable of addressing a broader spectrum of data analysis needs. First, I will describe a Codex-based neural-symbolic framework that enhances code generation to address a more diverse range of questions by incorporating API calls into LLM-generated programs (e.g., in SQL or Python). In the second part of the talk, I will introduce our recent work on data science code generation with LLMs, which produces code solutions in response to StackOverflow questions about data science Python libraries such as NumPy and Pandas. Lastly, I will discuss ongoing and future research directions in this area.
Bio: Tao Yu is an Assistant Professor of Computer Science at The University of Hong Kong and serves as Co-Director of the HKU NLP group. His main research interest is in natural language processing. He completed his Ph.D. at Yale University and was a postdoctoral fellow in the UW NLP group at the University of Washington. His research aims to develop and design the next generation of natural language interfaces employing large language models to facilitate human interaction with data analysis, web applications, and robotic instruction through conversation. It involves executable language grounding, such as semantic parsing and code generation, efficient and generalizable large language models, and interactive systems. Tao is the recipient of the Google Research Scholar Award and the Amazon Research Award.