Binghui Peng (彭炳辉)

I am a visiting faculty researcher at Google Research, NYC. Previously, I was a postdoctoral research fellow at Stanford University (supervised by Aviad Rubinstein and Amin Saberi) and at the Simons Institute, UC Berkeley. I obtained my Ph.D. from Columbia University, advised by Xi Chen and Christos Papadimitriou. Prior to that, I received my bachelor's degree from the Yao Class at Tsinghua University.

I work broadly on machine learning theory, game theory, and theoretical computer science. Recently, I have been particularly interested in the theory of LLMs.

Email: bp2601 [at] columbia [dot] edu | Google scholar | dblp

I will join the Computer Science Department at the University of Maryland in January 2026.


Research

Theoretical foundations of Transformers

The Transformer is the backbone architecture of modern LLMs. What makes the Transformer special, and which problems can it solve efficiently? I study the representational power of Transformers.

Game theory

How should we reason about a multi-agent system, and how can these agents agree on an equilibrium? My research resolves decades-old open problems on equilibrium computation.

The role of memory in learning

Memory (or space) is a crucial computational resource for large-scale learning tasks. My research addresses the space requirements for fundamental learning problems, including convex optimization and online learning.

Continual learning

How can machines learn effectively from evolving data and ever-changing environments? Continual learning (or lifelong learning) is still in its early stages, but I believe theoretical insights are essential for its advancement.


A complete list of publications

Selected talks

Academia and industry experience

Teaching, mentoring and service