CS professors have been honored with Test of Time Awards in 2025, distinctions granted annually by premier computer science conferences to research that has demonstrated enduring impact: work that continues to shape the discipline long after its initial publication.
Rocco Servedio
At the 66th Annual Symposium on Foundations of Computer Science (FOCS 2025), Rocco Servedio, Adam T. Kalai, Adam R. Klivans, and Yishay Mansour were honored for their 2005 paper, Agnostically Learning Halfspaces. The research delivered a breakthrough in computational learning theory by proving that halfspaces can be efficiently learned under adversarial label noise, overturning prior assumptions of impossibility. Its results reshaped the field, sparking a wave of new advances in learning geometric concepts under more complex noise models and broader distributional settings.
Toniann Pitassi
Also at FOCS 2025, Toniann Pitassi, Mika Göös, and Thomas Watson received recognition for their 2015 paper, Deterministic Communication vs. Partition Number. The paper established a separation between the logarithm of the partition number and the deterministic communication complexity of a function, resolving a long‑standing open question and sparking extensive research on lifting theorems. The lifting framework, introduced by Raz and McKenzie, involves proving separations in query complexity and then extending them to communication complexity.
Richard Zemel and Toniann Pitassi were recognized for their paper, “Learning Fair Representations,” which helped establish the subfield of machine learning and fairness.
Five CS researchers received Test of Time awards for papers that have had a lasting impact on their fields. The influential papers were presented at their respective conferences in the past 25 years and have remained relevant to research and practice.
Moti Yung
IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC 2020) Test of Time Award
The IEEE Symposium on Security and Privacy (SP19) brings together researchers and practitioners to present and discuss developments in computer security and electronic privacy. Now in its 40th year, the conference issued Test of Time awards, one of which went to computer science professor Steven Bellovin. Papers from other CS researchers, detailed below, were also presented at SP19.
This paper created the subfield of cryptography now known as PAKE: Password-Authenticated Key Exchange. A Google Scholar search for “cryptography password PAKE” returns more than 1,300 hits, and that is an undercount because the early papers do not use the term “PAKE”; the coinage came somewhat later.
The world of 1992 was moving toward networks. Rather than having a password for every computer and having to enter it constantly, a user could instead download a cryptographic credential from a server. But how was this credential protected? The answer, of course, was the user’s password.
This means that an attacker who can eavesdrop on the network connection can capture the credential, which is encrypted with the password, and check whether a guess at the password yields something that looks like a valid credential. For example, a cryptographic credential often includes a username or an expiration date: if you decrypt with a guessed password, do you get something that looks like a username or a reasonable expiration date? This is known as “verifiable plaintext.”
The underlying problem relates to so-called “password-guessing attacks”. That is, an attacker guesses at a user’s password and tries to verify that guess. The classic attack of this type was described in a 1979 paper by Morris and Thompson; it assumed that the attackers had somehow obtained the “hashed password file”.
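Such a guessing attack against verifiable plaintext can be sketched in a few lines. The toy XOR stream cipher, the JSON credential format, and the password list below are all illustrative assumptions, not details from the original paper:

```python
import hashlib
import json

def keystream(password, n):
    # Toy keystream derived from the password (illustration only; not secure).
    out, block = b"", password.encode()
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor_crypt(password, data):
    # XOR stream cipher: the same operation encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(password, len(data))))

# A credential with recognizable structure -- the "verifiable plaintext".
credential = json.dumps({"user": "alice", "expires": "1995-01-01"}).encode()
ciphertext = xor_crypt("hunter2", credential)

def guess_password(ciphertext, dictionary):
    # An eavesdropper tries dictionary words and checks whether the
    # decryption "looks like" a valid credential.
    for guess in dictionary:
        candidate = xor_crypt(guess, ciphertext)
        try:
            if "user" in json.loads(candidate):
                return guess
        except (ValueError, TypeError):
            continue
    return None
```

A wrong guess decrypts to random-looking bytes that fail the structure check; the correct guess reveals a well-formed credential, confirming the password.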
Bellovin and Michael Merritt’s solution eliminated verifiable plaintext. Suppose an attacker eavesdrops on network traffic. The cryptographic credential is encrypted with the user’s password. An incorrect guess yields a string of random bytes; but a correct guess also yields something that “looks” like a string of random bytes, and the only way to tell that it is not is to solve an extremely hard problem, one that may be computationally infeasible regardless. This means there is no way for an attacker to verify a guess by looking at network traffic.
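The contrast can be illustrated with a minimal sketch: if the value encrypted under the password is itself indistinguishable from random bytes, every guess decrypts to something equally plausible. The toy cipher and the random stand-in value are assumptions for illustration, not the actual protocol:

```python
import hashlib
import os

def xor_crypt(password, data):
    # Toy XOR stream cipher keyed by the password (illustration only).
    ks, block = b"", password.encode()
    while len(ks) < len(data):
        block = hashlib.sha256(block).digest()
        ks += block
    return bytes(a ^ b for a, b in zip(data, ks))

# If what gets encrypted under the password looks uniformly random
# (here, a stand-in for an ephemeral protocol value), decryption
# offers nothing an attacker can check a guess against.
ephemeral = os.urandom(32)
ciphertext = xor_crypt("hunter2", ephemeral)

# Every guess, right or wrong, produces 32 equally plausible random bytes.
decryptions = {g: xor_crypt(g, ciphertext) for g in ["letmein", "hunter2", "qwerty"]}
```

Without structure in the plaintext, the eavesdropper has no test to run, which is exactly the property that defeats offline guessing.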
Bellovin no longer works on PAKEs but recommends Matt Green’s post for the state of PAKEs today.
This paper proposes PixelDP, a new defense against adversarial examples with rigorous guarantees.
Adversarial examples are an attack against machine learning (ML) models, and especially Deep Neural Networks (DNNs). These models are used in many domains to make predictions from input data (e.g. label the contents of a picture, recognize a spoken word, etc.).
In adversarial attacks, an attacker makes a small change to the input data, such as changes to an image’s pixels that are invisible to the naked eye, that fools the ML model into making a wrong prediction chosen by the adversary. These attacks can be particularly problematic when models are used in security-sensitive contexts, such as face recognition to unlock a phone, voice recognition to operate a device, or systems that detect malware and credit card fraud.
PixelDP is a defense that adds specifically calibrated noise to DNNs and adapts the training and prediction procedures to make them more difficult to attack with adversarial examples. By leveraging the theory of differential privacy, a privacy mechanism, PixelDP can also prove guarantees about the maximal change an attacker can trigger with small changes to the input data, giving the operator a measure of robustness for each specific prediction.
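The core mechanism can be sketched as follows. This is a simplified illustration of noise-based smoothing, not the actual PixelDP implementation (which inserts a calibrated noise layer inside the network and derives formal differential-privacy bounds); the `predict` function, `sigma`, and the number of draws are placeholder assumptions:

```python
import random

def smoothed_predict(predict, x, sigma, n_draws=200, seed=0):
    # Average the model's class scores over many noisy copies of the input.
    # Calibrated noise makes the averaged output stable under small
    # input perturbations, which is what the formal analysis quantifies.
    rng = random.Random(seed)
    totals = None
    for _ in range(n_draws):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        scores = predict(noisy)
        if totals is None:
            totals = [0.0] * len(scores)
        totals = [t + s for t, s in zip(totals, scores)]
    return [t / n_draws for t in totals]
```

With a toy model, a small pixel-level perturbation barely moves the averaged scores, so an attacker needs a large (and hence detectable) change to flip the prediction.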
Moreover, this approach is the first to give provable guarantees while scaling to the large DNNs and datasets used in practice. For example, it is the first defense with rigorous guarantees that supports large DNN models, such as Google’s pre-trained Inception network, on tasks such as large-scale image classification on the ImageNet dataset.
Fuzzing is a popular and simple technique for hunting bugs: it generates random test inputs and executes the target program on them to trigger potential security vulnerabilities. Thanks to its simplicity and low performance overhead, fuzzing has been very successful at finding numerous security vulnerabilities in real-world programs. However, existing fuzzers rely on large-scale trial and error and lack sophisticated guidance for generating the high-quality inputs that uncover bugs.
The researchers built the first practical neural-network-assisted fuzzer that finds software vulnerabilities in real-world programs. The neural network approximates program behavior by functioning as a differentiable program surrogate. Inspired by adversarial examples in deep learning, the fuzzer computes the gradient of the neural surrogate and performs efficient gradient-guided mutations.
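A single gradient-guided mutation step might look like the sketch below. The surrogate gradient function, the byte-selection rule, and the step size are illustrative assumptions rather than the paper's actual algorithm:

```python
def gradient_guided_mutate(surrogate_grad, seed_input, k=2, step=16):
    # surrogate_grad maps an input (a list of byte values) to one gradient
    # value per byte, taken from a differentiable surrogate of the program.
    grads = surrogate_grad(seed_input)
    # Mutate the k bytes the surrogate considers most influential,
    # stepping each one in the direction its gradient suggests.
    ranked = sorted(range(len(seed_input)), key=lambda i: abs(grads[i]), reverse=True)
    mutated = list(seed_input)
    for i in ranked[:k]:
        direction = 1 if grads[i] > 0 else -1
        mutated[i] = max(0, min(255, mutated[i] + direction * step))
    return bytes(mutated)
```

Compared with purely random mutation, the gradient concentrates changes on the bytes most likely to steer the program toward new behavior.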