The IEEE Symposium on Security and Privacy (SP19) brings together researchers and practitioners to present and discuss developments in computer security and electronic privacy. Now in its 40th year, the conference issued Test of Time awards, one of which was bestowed upon computer science professor Steven Bellovin. Also at SP19 were papers from other CS researchers, detailed below.
Encrypted Key Exchange: Password-Based Protocols Secure Against Dictionary Attacks
Steven M. Bellovin and Michael Merritt
Test of Time award – Proceedings of the IEEE Symposium on Research in Security and Privacy, Oakland, May 1992
This paper created the subfield of cryptography now known as PAKE: Password-Authenticated Key Exchange. A Google Scholar search for cryptography password PAKE gets more than 1,300 hits, and that’s an undercount because the early papers do not use “PAKE”; the coinage is somewhat later.
The world of 1992 was moving towards networks. Rather than having a password for every computer and entering it constantly, a user would download a cryptographic credential from a server. But how was this credential protected? The answer, of course, was the user’s password.
This means that an attacker who can eavesdrop on the network connection can pick up the credential that is encrypted with the password and check whether a guess at the password yields something that looks like a valid credential. For example, a cryptographic credential often includes a username or an expiration date: decrypt with a guessed password, and does the result look like a username or a reasonable expiration date? This is known as “verifiable plaintext”.
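To see why verifiable plaintext is dangerous, consider the following toy sketch (not real cryptography: the stream cipher, the credential format, and all names are invented for illustration). An eavesdropper who has captured the encrypted credential can test password guesses offline, using the recognizable structure of the plaintext as an oracle:

```python
import hashlib

def keystream(password: str, n: int) -> bytes:
    # Toy stream cipher: expand SHA-256(password || counter) into a keystream.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(password.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# The server sends the credential encrypted under the user's password.
credential = b"user=alice;expires=2026-01-01"
ciphertext = xor(credential, keystream("hunter2", len(credential)))

# An eavesdropper who captured `ciphertext` tests a dictionary of guesses:
# a decryption with recognizable structure confirms the guess.
for guess in ["password", "letmein", "hunter2"]:
    plaintext = xor(ciphertext, keystream(guess, len(ciphertext)))
    if plaintext.startswith(b"user="):
        print("password found:", guess)
```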
The underlying problem relates to so-called “password-guessing attacks”. That is, an attacker guesses at a user’s password and tries to verify that guess. The classic attack of this type was described in a 1979 paper by Morris and Thompson; it assumed that the attackers had somehow obtained the “hashed password file”.
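A hedged sketch of that classic attack, with SHA-256 standing in for the historical crypt() function and an invented password file:

```python
import hashlib

def h(salt: str, password: str) -> str:
    # Stand-in for the historical crypt() password hash.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# A stolen "hashed password file": (user, salt, hash) entries.
stolen = [("alice", "x1", h("x1", "letmein")), ("bob", "y2", h("y2", "dragon"))]
dictionary = ["123456", "letmein", "dragon", "qwerty"]

# Hash every dictionary word under every salt and compare.
for user, salt, digest in stolen:
    for word in dictionary:
        if h(salt, word) == digest:
            print(f"{user}: {word}")
```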
Bellovin and Merritt’s solution eliminated verifiable plaintext. Suppose an attacker eavesdrops on network traffic. The cryptographic credential is encrypted with the user’s password. An incorrect guess yields a string of random bytes; a correct guess yields something that also “looks” like a string of random bytes, and the only way to tell the difference is to solve an extremely hard problem, one that may be computationally infeasible in any case. This means that there is no way for an attacker to verify a guess by looking at network traffic.
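The core of their EKE protocol can be conveyed in a toy sketch: each side encrypts its Diffie-Hellman public value under the password, so decrypting with a wrong password simply yields another plausible-looking group element. (This is an educational simplification; real EKE must encode values carefully so ciphertexts leak nothing, and uses far larger parameters.)

```python
import hashlib, secrets

P = (1 << 127) - 1   # toy prime modulus; real deployments use much larger groups
G = 5                # toy generator

def mask(password: str) -> int:
    # 128-bit password-derived mask, used here as a toy one-time pad.
    return int.from_bytes(hashlib.sha256(password.encode()).digest()[:16], "big")

password = "hunter2"

# Alice: pick a secret, send her Diffie-Hellman value encrypted under the password.
a = secrets.randbelow(P)
wire_A = pow(G, a, P) ^ mask(password)

# Bob: decrypt, reply likewise, and derive the shared session key.
A = wire_A ^ mask(password)
b = secrets.randbelow(P)
wire_B = pow(G, b, P) ^ mask(password)
key_bob = pow(A, b, P)

# Alice completes the exchange with the same key.
B = wire_B ^ mask(password)
key_alice = pow(B, a, P)
assert key_alice == key_bob

# An eavesdropper decrypting wire_A with a wrong guess gets another
# random-looking number, with nothing recognizable to verify the guess.
print(wire_A ^ mask("letmein"))
```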
Bellovin no longer works on PAKEs but recommends Matt Green’s post for the state of PAKEs today.
Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana
The paper proposes a new defense against adversarial examples with rigorous guarantees.
Adversarial examples are an attack against machine learning (ML) models, and especially Deep Neural Networks (DNNs). These models are used in many domains to make predictions from input data (e.g., labeling the contents of a picture or recognizing a spoken word).
In adversarial attacks, an attacker makes a small change to the input data – such as small changes to the pixels of an image, invisible to the naked eye – that fools the ML model into making a wrong prediction chosen by the adversary. These attacks can be particularly problematic when models are used in security-sensitive contexts, such as face recognition to unlock a phone, voice recognition to operate a device, or the detection of malware and credit card fraud.
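As a concrete illustration, here is a hedged sketch of the fast gradient sign method (FGSM), one standard way such perturbations are computed. The “model” is a toy logistic regression with made-up weights, not the DNNs attacked in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1           # made-up model parameters
x = rng.normal(size=16)                   # a "clean" input

def predict(x):
    # Probability of class 1 under the toy logistic-regression model.
    return 1 / (1 + np.exp(-(w @ x + b)))

# The gradient of the logit with respect to the input is just w; FGSM
# nudges every feature by epsilon toward the opposite class.
epsilon = 0.25
direction = -1.0 if predict(x) > 0.5 else 1.0
x_adv = x + direction * epsilon * np.sign(w)

print("clean prediction:      ", predict(x))
print("adversarial prediction:", predict(x_adv))
```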
PixelDP is a defense that adds specifically calibrated noise to DNNs and adapts the training and prediction procedures to make them more difficult to attack with adversarial examples. By leveraging the theory behind differential privacy, a privacy mechanism, PixelDP can also prove guarantees about the maximal change an attacker can trigger with small changes to the input data, giving the operator a measure of robustness for each specific prediction.
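A minimal sketch of the prediction side of this idea, with a stand-in linear model rather than the paper’s DNNs and an arbitrary noise scale (PixelDP calibrates the scale to the sensitivity of the noise layer):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))    # stand-in 8-feature, 3-class "network"
x = rng.normal(size=8)
sigma = 0.5                    # noise scale; arbitrary here, calibrated in the paper

def noisy_scores(x):
    noisy = x + rng.normal(scale=sigma, size=x.shape)   # the noise layer
    z = W @ noisy
    return np.exp(z) / np.exp(z).sum()                  # softmax scores

# Expected scores are estimated by Monte Carlo over many noise draws; the
# gap between the top score and the runner-up underlies the robustness check.
scores = np.mean([noisy_scores(x) for _ in range(300)], axis=0)
print("averaged scores:", scores, "-> prediction:", scores.argmax())
```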
Moreover, this approach is the first to give provable guarantees while scaling to the large DNNs and datasets used in practice. For example, it is the first defense with rigorous guarantees that supports large DNN models, such as Google’s pre-trained Inception network, on tasks such as large-scale image classification on the ImageNet dataset.
NEUZZ: Efficient Fuzzing with Neural Program Smoothing
Dongdong She, Kexin Pei, Dave Epstein, Junfeng Yang, Baishakhi Ray, and Suman Jana
Fuzzing is a popular and simple technique for hunting bugs: generate random test inputs and execute the target program on them to trigger potential security vulnerabilities. Due to its simplicity and low performance overhead, fuzzing has been very successful at finding numerous security vulnerabilities in many real-world programs. However, existing fuzzers rely on trial-and-error schemes requiring huge numbers of executions and lack sophisticated guidance for generating the high-quality inputs that help uncover bugs.
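For reference, a minimal sketch of such baseline “blind” fuzzing; the target function and its planted bug are invented for illustration, and the bug is deliberately shallow (one magic byte) so the demo finishes quickly, whereas real bugs are far rarer:

```python
import random

def target(data: bytes) -> None:
    # Hypothetical program under test with a planted bug.
    if data and data[0] == 0xDE:
        raise RuntimeError("crash")

random.seed(1)
for i in range(2000):
    data = bytes(random.randrange(256) for _ in range(8))
    try:
        target(data)
    except RuntimeError:
        print(f"iteration {i}: input {data!r} crashed the target")
        break
```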
The researchers built the first practical neural-network-assisted fuzzer that finds software vulnerabilities in real-world programs. The neural network model approximates program behaviors by functioning as a differentiable program surrogate. Inspired by adversarial examples in deep learning, NEUZZ computes the gradient of the neural program surrogate and performs efficient gradient-guided mutations.
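A hedged sketch of the gradient-guided mutation step, with a toy one-layer surrogate with random weights standing in for the trained network NEUZZ uses:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bytes, n_edges = 64, 32
W = rng.normal(size=(n_edges, n_bytes))   # toy surrogate weights (untrained)

def surrogate(x):
    # Predicted edge-coverage bitmap for an input (bytes scaled to [0, 1]).
    return 1 / (1 + np.exp(-W @ x))

seed = rng.integers(0, 256, size=n_bytes).astype(float) / 255.0

# Gradient of one target edge's prediction with respect to the input bytes
# (chain rule through the sigmoid).
edge = 7
y = surrogate(seed)
grad = y[edge] * (1 - y[edge]) * W[edge]

# Mutate the few bytes where the gradient magnitude is largest.
k = 4
hot = np.argsort(-np.abs(grad))[:k]
mutant = seed.copy()
mutant[hot] = np.clip(mutant[hot] + 0.2 * np.sign(grad[hot]), 0.0, 1.0)
print("mutated byte positions:", hot)
```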
President Bollinger announced that Columbia University along with many other academic institutions (sixteen, including all Ivy League universities) filed an amicus brief in the U.S. District Court for the Eastern District of New York challenging the Executive Order regarding immigrants from seven designated countries and refugees. Among other things, the brief asserts that “safety and security concerns can be addressed in a manner that is consistent with the values America has always stood for, including the free flow of ideas and people across borders and the welcoming of immigrants to our universities.”
This recent action provides a moment for us to collectively reflect on our community within Columbia Engineering and the importance of our commitment to maintaining an open and welcoming community for all students, faculty, researchers, and administrative staff. As a School of Engineering and Applied Science, we are fortunate to attract students and faculty from diverse backgrounds, from across the country, and from around the world. It is a great benefit to be able to gather engineers and scientists of so many different perspectives and talents – all with a commitment to learning, a focus on pushing the frontiers of knowledge and discovery, and a passion for translating our work to impact humanity.
I am proud of our community, and wish to take this opportunity to reinforce our collective commitment to maintaining an open and collegial environment. We are fortunate to have the privilege to learn from one another, and to study, work, and live together in such a dynamic and vibrant place as Columbia.
Sincerely,
Mary C. Boyce
Dean of Engineering
Morris A. and Alma Schapiro Professor