April 2020
Zoom Security: The Good, the Bad, and the Business Model (2 April 2020)
Zoom Cryptography and Authentication Problems (4 April 2020)
Trusting Zoom? (6 April 2020)
Is Zoom's Server Security Just as Vulnerable as the Client Side? (13 April 2020)
In Memoriam: Joel Reidenberg (22 April 2020)
The Price of Lack of Clarity (26 April 2020)
Software Done in a Hurry (29 April 2020)

Zoom Security: The Good, the Bad, and the Business Model

2 April 2020

Zoom—one of the hottest companies on the planet right now, as businesses, schools, and individuals switch to various forms of teleconferencing due to the pandemic—has come in for a lot of criticism due to assorted security and privacy flaws. Some of the problems are real but easily fixable, some are due to a mismatch between what Zoom was intended for and how it’s being used now—and some are worrisome.

The first part is the easiest: there have been a number of simple coding bugs. For example, their client used to treat a Windows Universal Naming Convention (UNC) file path as a clickable URL; if you clicked on such a path sent by an attacker, you could end up disclosing your hashed password. Zoom’s code could have and should have detected that, and now does. I’m not happy with that class of bug, and while no conceivable effort can eliminate all such problems, efforts like Microsoft’s Software Development Lifecycle can really help. I don’t know how Zoom ensured software security before; I strongly suspect that whatever they were doing before, they’re doing a lot more now.
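To make the bug class concrete: the fix is to allow-list what a chat client will render as a clickable link, rather than trying to enumerate dangerous inputs. The sketch below is illustrative only—the function and regexes are my own, not Zoom's code—but it shows the kind of check that would have caught the UNC-path problem, since clicking a `\\attacker\share` link can leak a user's hashed (NTLM) credentials.

```python
import re

# Windows UNC path, e.g. \\attacker\share\file — must never be linkified,
# since following it can leak the user's hashed credentials to the host.
UNC_PATH = re.compile(r"^\\\\[^\\]+\\.+")

# Allow-list: only plain web URLs become clickable.
SAFE_URL = re.compile(r"^https?://", re.IGNORECASE)

def should_linkify(token: str) -> bool:
    """Return True only for chat tokens that are safe to render as links."""
    if UNC_PATH.match(token):
        return False  # UNC path: render as inert plain text
    return bool(SAFE_URL.match(token))
```

The design point is the allow-list: `should_linkify` rejects anything that isn't an explicit `http(s)` URL, so novel hostile schemes fail safe instead of requiring a new blocklist entry.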

Another class of problem involves deliberate features that were actually helpful when Zoom was primarily serving its intended market: enterprises. Take, for example, the ability of the host to mute and unmute everyone else on a call. I’ve been doing regular teleconferences for well over 25 years, first by voice and now by video. The three most common things I’ve heard are “Everyone not speaking, please mute your mic”, “Sorry, I was on mute”, and “Mute button!” I’ve also heard snoring and toilets flushing… In a work environment, giving the host the ability to turn microphones off and on isn’t spying; it’s a way to manage and facilitate a discussion in a setting where the usual visual and body language cues aren’t available.

The same rationale applies to things like automatically populating a directory with contacts, scraping LinkedIn data, etc.—it’s helping business communication, not spying on, say, attendees at a virtual religious service. You can argue whether these are useful features or not; you can even say that they shouldn’t be done even in a business context—but the argument against them in a business context is much weaker than it is when talking about casual users who just want to chat online with their friends.

There is, though, a class of problems that worries me: security shortcuts in the name of convenience or usability. Consider the first widely known flaw in Zoom: a design decision that allowed “any website to forcibly join a user to a Zoom call, with their video camera activated, without the user’s permission.” Why did it work that way? It was intended as a feature:

As Zoom explained, changes implemented by Apple in Safari 12 that “require a user to confirm that they want to start the Zoom client prior to joining every meeting” disrupted that functionality. So in order to save users an extra click, Zoom installed the localhost web server as “a legitimate solution to a poor user experience problem.”

They also took shortcuts with initial installation, again in the name of convenience. I’m all in favor of convenience and usability (and in fact one of Zoom’s big selling points is how much easier it is to use than its competitors), but that isn’t a license to engage in bad security practices.

To its credit, Zoom has responded very well to criticisms and reports of flaws. Unlike more or less any other company, they’re now saying things like “yup, we blew it; here’s a patch”. (They also say that critics have misunderstood how they do encryption.) They’ve even announced a plan for a thorough review, with outside experts. There are still questions about some system details, but I’m optimistic that things are heading in the right direction. Still, it’s the shortcuts that worry me the most. Those aren’t just problems that they can fix; they make me fear for the attitudes of the development team towards security. I’m not convinced that they get it—and that’s bad. Fixing that is going to require a CISO office with real power, as well as enough education to make sure that the CISO doesn’t have to exercise that power very often. They also need a privacy officer, again with real power; many of their older design decisions seriously impact privacy.

I’ve used Zoom in a variety of contexts for several years, and mostly like its functionality. But the security and privacy issues are real and need to be fixed. I wish them luck.


Here is my set of blog posts on Zoom and Zoom security.

  1. Notes on a Zoom Class
  2. Zoom Security: The Good, the Bad, and the Business Model
  3. Zoom Cryptography and Authentication Problems
  4. Trusting Zoom?
  5. Is Zoom’s Server Security Just as Vulnerable as the Client Side?
Tags: Zoom
https://www.cs.columbia.edu/~smb/blog/2020-04/2020-04-02.html