Trusting Zoom?

6 April 2020

Since the world went virtual, often by using Zoom, several people have asked me if I use it, and if so, do I use their app or their web interface. If I do use it, isn't this odd, given that I've been doing security and privacy work for more than 30 years and “everyone” knows that Zoom is a security disaster?

To give too short an answer to a very complicated question: I do use it, via both Mac and iOS apps. Some of my reasons are specific to me and may not apply to you; that said, my overall analysis is that Zoom's security, though not perfect, is quite likely adequate for most people. Security questions always have situation-specific answers; my students will tell you that my favorite response is “It depends!” This is no different. Let me explain my reasoning.

I'll start by quoting from Thinking Security, which I wrote about four years ago.

All too often, insecurity is treated as the equivalent of being in a state of sin. Being hacked is not perceived as the result of a misjudgment or of being outsmarted by an adversary; rather, it's seen as divine punishment for a grievous moral failing. The people responsible didn't just err; they're fallen souls to be pitied and/or ostracized. And of course that sort of thing can't happen to us, because we're fine, upstanding folk who have the blessing of the computer deity—$DEITY, in the old Unix-style joke—of our choice.


There's one more point to consider, and it goes to the heart of this book's theme: is living disconnected worth it? Employees have laptops and network connections because there's a business need for such things, and not just to provide them with recreation in lonely hotel rooms or a cheap way to make a video call home. As always, the proper question is not “is Wi-Fi access safe?”; rather, it's “is the benefit to the business from having connectivity greater than or less than the incremental risk?”

Let me put this another way.

  1. Are the benefits from me using Zoom (or any other conferencing system; much of this blog post is, as you will see, independent of the particular one involved) greater than the risks?
  2. What can Zoom itself do to improve security?
  3. If there are risks, what can I or my university do to minimize those risks?

The first part of the question is relatively easy. Part of my job is to teach; part of my employer's mission is to provide instruction. Given the pandemic, in-person teaching is unduly risky; some form of “distance learning” is the only choice we have right now, and one-way video just isn't the same. In other words, the benefit of using Zoom is considerable, and I have an ethical obligation to use it unless the risks to me, to my students, or to the university are greater.

Are there risks? Sure, as I discuss below, but at least for teaching, those risks are much less than providing no instruction at all. At most, I—really, the university—should switch to a different platform, but of course that platform would require its own security analysis.

The platform question is a bit harder, but the analysis is the same. My personal perception, given what I'm now teaching (Computer Security II, a lecture course) and how I teach (I generally use PDF slides), is that I can do a better job on my Mac, where I can use one screen for my slides and another to see my students via their own cameras. In other words, if my ethical obligation is to teach and to teach as well as I can, and if I feel that I teach better when using the Mac app, then that's what I should do unless I feel that the risks are too great.

In other words: I generally don't use Zoom on iOS for teaching, because I don't think it works as well. I have done so occasionally, where I wanted to use a stylus to highlight parts of diagrams—but I also had it running on my Mac to get the second view of the class.

My reasoning for not using the browser option is a bit different: I don't trust browsers enough to want one to have the ability to get at my camera or microphone. I (of course) use my browser constantly, and the best way to make sure that my browser doesn't give the wrong sites access to my camera or mic is to make sure that the browser itself has no access. Can a browser properly ensure that only the right sites have such access, despite all the complexities of IFRAMEs, JavaScript, cross-site scripting problems, and more? I'm not convinced. I think that browsers can be built that way, but I'm not convinced that any actually are.

Some will argue that the Chrome browser is more secure than more or less any other piece of code out there, including especially the Zoom app. That may very well be—Google is good at security. But apart from my serious privacy reservations, flaws in the Zoom app put me at risk while using Zoom, while flaws in a browser put me at risk more or less continuously.

Note carefully that my decision to use the Zoom app on a Mac is based on my classroom teaching responsibilities—and I regard everything I say in class as public and on the record. (Essentially all of my class slides are posted on my public web site.) There are other things I do for my job where the balance is a bit different: one-on-one meetings with research and project students, private meetings in my roles as both a teacher and as an advisor, faculty meetings, promotion and hiring meetings, and more. The benefits of video may be much less, and the subjects being discussed are often more sensitive, whether because the research is cutting edge and hence potentially more valuable, or because I'm discussing exceedingly sensitive matters. (Anyone who has taught for any length of time has heard these sorts of things from students. For those who haven't—imagine the most heart-wrenching personal stories you can think of, and then realize that a student has to relate them to a near-stranger who is in a position of authority. And I've heard tragic things that my imagination isn't good enough to have come up with.) Is Zoom (or some other system) secure enough for this type of work?

In order to answer these questions more precisely, though, I have to delve into the specifics of Zoom. Are Zoom's weaknesses sufficiently serious that my university—and I—should avoid it? Again, though, this is a question that can't be answered purely abstractly. Let me draw on some definitions from a National Academies report that I worked on:

A “vulnerability” is an error or weakness in the design, implementation, or operation of a system. An “attack” is a means of exploiting some vulnerability in a system. A “threat” is an adversary that is motivated and capable of exploiting a vulnerability.
There are certainly vulnerabilities in Zoom, as I and others have discussed. But is there a threat? That is, is there an adversary who is both capable and motivated?

In Thinking Security, I addressed that question by means of this diagram:

[Diagram: threats plotted along two axes, attacker skill and degree of targeting. Caption: “Are you being targeted? And how good is the attacker?”]

Let's first consider what I called an egregious flaw in Zoom's cryptography. In my judgment (and as I remarked in that blog post), “I doubt if anyone but a major SIGINT agency could exploit it”. That's already a substantial part of the answer: I'm not worried about the Andromedan cryptanalysts trying to learn about my students' personal tragedies. Yes, I suppose in theory I could have as a student someone who is a person of interest to some foreign intelligence agency and this person has a problem that they would tell to me and that agency would be interested enough in blackmailing this student that they'd go to the trouble of cryptanalyzing just the right Zoom conversation—but I don't believe it's at all likely and I doubt that you do. In other words, this would require a highly targeted attack by a highly skilled attacker. If it were actually a risk, I suspect that I'd know about it in advance and have that conversation via some other mechanism.

The lack of end-to-end encryption is a more serious flaw. To recap, the encryption keys are centrally generated by Zoom, sometimes in China; these keys are used to encrypt conversations. (Zoom has replied that the key generation in China was an accident and shouldn't have been possible.) Anyone who has access to those keys and to the ciphertext can listen in. Is this a threat? That is, is there an adversary who is both motivated to and capable of exploiting the vulnerability? That isn't clear.

It's likely that any major intelligence agency is capable of getting the generated keys. They could probably do it by legal process, if the keys are generated in their own country; alternatively, they could hack into either the key generation machine or any one of the endpoints of the call—very few systems are hardened well enough to resist that sort of attack. Of course, the latter strategy would work just as well for any conceivable competitor to Zoom—a weak endpoint is a weak endpoint. There may, under certain circumstances, be an incremental risk if the hosting government can compel production of a key, but this is still a targeted attack by a major enemy. This is only a general technical threat if some hacker group has continuous access to all of the keys generated by Zoom's servers. That's certainly possible—companies as large as Marriott and Nortel have been victimized for years—but again, this is the product of a powerful enemy.

There's another part of the puzzle for a would-be attacker who wants to exploit this flaw: they need access to the target's traffic. There are a variety of ways that that can be done, ranging from the trivial—the attacker is on-LAN with the target—to the complicated, e.g., via BGP routing attacks. Routing attacks don't require a government-grade attacker, but they're also well up there on the scale of abilities.

What it boils down to is this: exploiting the lack of true end-to-end encryption in Zoom is quite difficult, since you need access to both the per-meeting encryption key and the traffic. Zoom itself could probably do it, but if they were that malicious you shouldn't trust their software at all, no matter how they handled the crypto.

There's one more point. Per the Citizen Lab report, “the mainline Zoom app appears to be developed by three companies in China” and “this arrangement could also open up Zoom to pressure from Chinese authorities.” Conversations often go through Zoom's own servers; this means that, at least if you accept that premise, the Chinese government often has access to both the encryption key and the traffic. Realistically, though, this is probably a niche threat—virtually all of the new Zoom traffic is uninteresting to any intelligence agency. (Forcing military personnel to sit through faculty meetings probably violates some provisions of the Geneva Conventions. In fact, I'm surprised that public universities can even hold faculty meetings, since as state actors they're bound by the U.S. constitutional prohibition against cruel and unusual punishment.) In other words: if what you're discussing on Zoom is likely to be of interest to the Chinese government, and if the assertions about their power to compel cooperation from Zoom are correct, there is a real threat. Nothing that I personally do would seem to meet that first criterion—I try to make all of my academic work public as soon as I can—but there are some plausible university activities, e.g., development of advanced biotechnology, where there could be such governmental interest.

There's one more important point, though: given Zoom's architecture, there's an easy way around the cryptography for an attacker: be an endpoint. As I noted in my previous blog post about Zoom,

At a minimum, you need assurance that someone you're talking to is indeed the proper party, and not some interloper or eavesdropper. That in turn requires that anyone who is concerned about the security of the conversation has to have some reason to believe in the other parties' identities, whether via direct authentication or because some trusted party has vouched for them.
That is, simply join the call. Yes, everyone on the call is supposed to be listed, but for many Zoom calls, that's simply not effective. Only a limited number of participants' names are shown by default; if the group is large enough or if folks don't scroll far enough, the addition will go unnoticed. If, per the above, Zoom is malicious or has been pressured to be malicious, the extra participant won't even be listed. (Or extra participants—who's to say that there aren't multiple developers in thrall to multiple intelligence agencies? Maybe there's even a standard way to list all of the intercepting parties?
    struct do_not_list {
        char *username;
        char *agency;
    } secret_users[] = {
        {"Aldrich Ames", "CIA"},
        {"Sir Francis Walsingham", "GCHQ"},
        {"Clouseau", "DGSE"},
        {"Yael", "Mossad"},
        {"Achmed", "IRGC"},
        {"Vladimir", "KGB"},
        {"Bao", "PLA"},
        {"Natasha", "Pottsylvania"},
        /* … */
    };
The possibilities are endless…)

Let me stress that properly authenticating users would be very important even if the cryptography were perfect. Let's take a look at a competing product, Cisco's Webex. They appear to handle the encryption properly:

Cisco Webex also provides End-to-End encryption. With this option, the Cisco Webex cloud does not decrypt the media streams, as it does for normal communications. Instead it establishes a Transport Layer Security (TLS) channel for client-server communication. Additionally, all Cisco Webex clients generate key pairs and send the public key to the host’s client.

The host generates a symmetric key using a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG), encrypts it using the public key that the client sends, and sends the encrypted symmetric key back to the client. The traffic generated by clients is encrypted using the symmetric key. In this model, traffic cannot be decoded by the Cisco Webex server. This End-to-End encryption option is available for Cisco Webex Meetings and Webex Support.

The end-points generate key pairs; the host sends the session key to those end-points and only those end-points.
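That workflow is straightforward to sketch. In the minimal simulation below, toy finite-field Diffie-Hellman stands in for whatever public-key encryption Webex actually uses; the modulus, names, and wrapping function are all illustrative, not drawn from any real implementation. The point is only the architecture: the server can relay public keys and wrapped blobs without ever learning the meeting key.

```python
import hashlib
import secrets

# Toy DH group: 2**255 - 19 is prime, but this is NOT a vetted DH group;
# it just keeps the sketch short.
P = 2**255 - 19
G = 2

def keypair():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def xor_wrap(meeting_key: bytes, shared_secret: int) -> bytes:
    # Wrap (or unwrap: XOR is its own inverse) the 32-byte meeting key
    # under a pad derived from the DH shared secret.
    pad = hashlib.sha256(shared_secret.to_bytes(32, "big")).digest()
    return bytes(a ^ b for a, b in zip(meeting_key, pad))

# Each client generates a key pair and sends the public half to the host.
clients = {name: keypair() for name in ("alice", "bob", "carol")}

# The host draws the meeting key from a CSPRNG and has its own key pair.
meeting_key = secrets.token_bytes(32)
h_priv, h_pub = keypair()

# The host wraps the meeting key separately for each client; only these
# wrapped blobs (plus public keys) ever cross the server.
wrapped = {name: xor_wrap(meeting_key, pow(pub, h_priv, P))
           for name, (_priv, pub) in clients.items()}

# Each client recovers the meeting key with its own private key.
for name, (priv, _pub) in clients.items():
    assert xor_wrap(wrapped[name], pow(h_pub, priv, P)) == meeting_key
print("all clients share the meeting key; the server never saw it")
```

Note what the sketch also makes obvious: nothing in it tells the host *whose* public keys it is wrapping for—which is exactly the authentication question raised below.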

Note what this excerpt does not say: that the host has any strong assurance of the identity of these end-points. Do they authenticate to Cisco or to the host? The difference is crucial. Webex supports single sign-on, where users authenticate using their corporate credentials and no password is ever sent to Cisco—but it isn't clear to me if this authentication is reliably sent to the host, as opposed to Cisco. I hope that the host knows, but the text I quoted says “all Cisco Webex clients generate key pairs”, and doesn't say “all clients send the host their corporate-issued certificate”. I simply do not know, and I would welcome clarification by those who do know.

Though it's not obvious, the authentication problem is related to what is arguably the biggest real-world problem Zoom has: Zoombombing. People can zoombomb because it's just too easy to find or guess a Zoom conference ID. Zoom doesn't help things; if you use their standard password option, an encoded form of the password is included in the generated meeting URL. If you're in the habit of posting your meeting URLs publicly—my course webpage software generates a subscribable calendar for each lecture; I'd love to include the Zoom URL in it—having the password in the URL vitiates any security from that measure. Put another way, Zoom's default password authentication increases the size of the namespace of Zoom meetings, from 10^9 to 10^14. This isn't a trivial change, but it's also not the same as strong authentication. Zoom lets you require that all attendees be registered, but if there's an option by which I can specify the attendee list I haven't seen it. To me, this is the single biggest practical weakness in Zoom, and it's probably fixable without much pain.
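The arithmetic behind those numbers is worth a moment: a 9-digit meeting ID is about 10^9 possibilities, and folding in the URL-encoded password raises the space to roughly 10^14. The probe rate below is purely an assumption for illustration, not a measured figure for Zoom's servers:

```python
# Back-of-the-envelope time to enumerate the meeting namespace at an
# assumed (hypothetical) guessing rate.
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_enumerate(space: int, guesses_per_sec: int) -> float:
    return space / guesses_per_sec / SECONDS_PER_YEAR

rate = 1_000  # hypothetical guesses per second
print(f"IDs alone:       {years_to_enumerate(10**9, rate):.3f} years")
print(f"IDs + passwords: {years_to_enumerate(10**14, rate):,.0f} years")
```

At that rate, the bare ID space falls in days while the password-augmented space takes millennia—which is why the change matters even though it's no substitute for real authentication.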

To sum up: apart from Zoombombing, the architectural problems with Zoom are not serious for most people. Most conversations at most universities are quite safe if carried by Zoom. A small subset might not be, though; if you're a defense contractor or a government agency, you might want to think twice—but that doesn't apply to most of us.

That the threat is minimal, absent malign activity by one or more governments, does not mean that Zoom and its clients can't do things better. First and foremost, Zoom needs to clean up its act when it comes to its code. The serious security and privacy bugs we've seen, per my blog post the other day and a Rapid7 blog post, boil down to carelessness and/or a desire to make life easier for users, even at a significant cost in security. This is unacceptable, Zoom should have known better, and they have to fix it. Fortunately, I suspect that most of those changes will be all but invisible to users.

Second, the authentication and authorization model has to be fixed. Zoom has to support lists of authorized users, including shortcuts like "*". (If it's already there, they have to make the option findable. I didn't see it—and I'm not exactly a technically naive user…) Ideally, there will also be some provision for strong end-to-end authentication.

Third, the cryptography has to be fixed. I said enough about that the other day; I won't belabor the point.

Finally, Webex has the proper model for scalable end-to-end encryption. It won't work properly without proper, end-to-end, authentication, but even the naive model will eliminate a lot of the fear surrounding centralized key distribution.

In truth, I'd prefer multi-party PAKE, at least as an option, but it's not clear to me that suitably vetted algorithms exist. PAKE would permit secure cryptography even without strong authentication, as long as all of the participants shared a simple password. In the original PAKE paper, Mike Merritt and I showed how to build a public phone where people could have strong protection against wiretappers, even when they used something as simple as a shared 4-digit PIN.

There are things that Zoom's customers can do today. Zoom has stated that

an on-premise solution exists today for the entire meeting infrastructure, and a solution will be available later this year to allow organizations to leverage Zoom’s cloud infrastructure but host the key management system within their environment. Additionally, enterprise customers have the option to run certain versions of our connectors within their own data centers if they would like to manage the decryption and translation process themselves.
If you can, do this. If you host your own infrastructure, attackers will have a much harder time—you can monitor your own site for nasty traffic, and perhaps (depending on the precise requirements from Zoom) you can harden the system more. Besides, at least for Zoom meetings where everyone is on-site, routing attacks will be much harder.

Finally, enable the security options that are there, notably meeting passwords. They're not perfect, especially if your users post URLs with embedded passwords, but they're better than nothing.

One last word: what about Zoom's competitors? Should folks be using them instead? One reason Zoom has succeeded so well is user friendliness. In the last few weeks, I've had meetings over Zoom, Skype, FaceTime, and Webex. From a usability perspective, Zoom was by far the best—and again, I'm not technically naive, and my notion of an easy-to-use system has been warped by more than 40 years of using ed and vi for text editing. To give just two examples, recently a student and I spent several minutes trying to connect using Skype; we did not succeed. I've had multiple failures using FaceTime; besides, it's Apple-only and (per Apple) only supports 32 users on a call. There's also the support issue, especially for universities where the central IT department can't dictate what platforms are in use. Zoom supports more or less any platform, and does a decent job on all of them. Perhaps one of its competitors can offer better security and better usability—but that's the bar they have to clear. (I mean, it's 2020 and someone I know had to install Flash(!) for some online sessions. Flash? In 2020?)

I'd really like Zoom to do better. To its credit, the company is not reacting defensively or with hostility to these reports. Instead, it seems to be treating these reports as constructive criticism, and is trying to fix the real problems with its codebase while pointing out where the critics have not gotten everything right. I wish that more companies behaved that way.

Zoom Cryptography and Authentication Problems

4 April 2020

In my last blog post about Zoom, I noted that the company says “that critics have misunderstood how they do encryption.” New research from Citizen Lab shows that not only were the critics correct, Zoom's design also reflects complete ignorance about encryption. When companies roll their own crypto, I expect it to have flaws. I don't expect those flaws to be errors I'd find unacceptable in an introductory undergraduate class, but that's what happened here.

Let's start with the egregious flaw. In this particular context, it's probably not a real threat—I doubt if anyone but a major SIGINT agency could exploit it—but it's just one of those things that you should absolutely never do: use the Electronic Code Book (ECB) mode of encryption for messages. Here's what I've told my students about ECB:

Again, it would be hard to exploit here, but it suggests that the encryption code was written by someone who knew nothing whatsoever about the subject—and lays open the suspicion that there are deeper, more subtle problems. I mean, subtle problems are hard to avoid in cryptography even when you know what you're doing.
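The defining defect of ECB is easy to demonstrate: identical plaintext blocks always encrypt to identical ciphertext blocks, so message structure leaks straight through the encryption. In the sketch below, a keyed PRF (truncated HMAC-SHA256) stands in for the block cipher to keep things to the standard library; real AES in ECB mode leaks in exactly the same way:

```python
# Why ECB leaks: equal plaintext blocks yield equal ciphertext blocks.
import hashlib
import hmac

BLOCK = 16

def fake_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a block cipher's per-block encryption.
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB: each block encrypted independently, no IV, no chaining.
    assert len(plaintext) % BLOCK == 0
    return b"".join(fake_block_encrypt(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

key = b"0123456789abcdef"
# Two identical 16-byte blocks, then a different one.
msg = b"attack at dawn!!" * 2 + b"retreat at dusk!"
ct = ecb_encrypt(key, msg)
blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
print(blocks[0] == blocks[1])  # True: the repetition is visible to any eavesdropper
print(blocks[0] == blocks[2])  # False
```

An eavesdropper who never learns the key still sees exactly where the plaintext repeats—fatal for structured data like video frames.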

The more important error isn't that egregious, but it does show a fundamental misunderstanding of what “end-to-end encryption” means. The definition from a recent Internet Society brief is a good one:

End-to-end (E2E) encryption is any form of encryption in which only the sender and intended recipient hold the keys to decrypt the message. The most important aspect of E2E encryption is that no third party, even the party providing the communication service, has knowledge of the encryption keys.
As shown by Citizen Lab, Zoom's code does not meet that definition:
By default, all participants’ audio and video in a Zoom meeting appears to be encrypted and decrypted with a single AES-128 key shared amongst the participants. The AES key appears to be generated and distributed to the meeting’s participants by Zoom servers.
Zoom has the key, and could in principle retain it and use it to decrypt conversations. They say they do not do so, which is good, but this clearly does not meet the definition [emphasis added]: “no third party, even the party providing the communication service, has knowledge of the encryption keys.”

Doing key management—that is, ensuring that the proper parties, and only the proper parties know the key—is a hard problem, especially in a multiparty conversation. At a minimum, you need assurance that someone you're talking to is indeed the proper party, and not some interloper or eavesdropper. That in turn requires that anyone who is concerned about the security of the conversation has to have some reason to believe in the other parties' identities, whether via direct authentication or because some trusted party has vouched for them. On today's Internet, when consumers log on to a remote site, they typically supply a password or the like to authenticate themselves, but the site's own identity is established via a trusted third party known as a certificate authority.

Zoom can't quite do identification correctly. You can have a login with Zoom, and meeting hosts generally do, but often, participants do not. Again, this is less of an issue in an enterprise setting, where most users could be registered, but that won't always be true for, say, university or school classes. Without participant identification and authentication, it isn't possible for Zoom to set up a strongly protected session, no matter how good their cryptography; you could end up talking to Boris or Natasha when you really wanted to talk confidentially to moose or squirrel.

You can associate a password or PIN with a meeting invitation, but Zoom knows this value and uses it for access control, meaning that it's not a good enough secret to use to set up a secure, private conference.

Suppose, though, that all participants are strongly authenticated and have some cryptographic credential they can use to authenticate themselves. Can Zoom software then set up true end-to-end encryption? Yes, it can, but it requires sophisticated cryptographic mechanisms. Zoom manifestly does not have the right expertise to set up something like that, or they wouldn't use ECB mode or misunderstand what end-to-end encryption really is.

Suppose that Zoom wants to do everything right. Could they retrofit true end-to-end encryption, done properly? The sticking point is likely to be authenticating users. Zoom likes to outsource authentication to its enterprise clients, which is great for their intended market but says nothing about the existence of cryptographic credentials.

All that said, it might be possible to use a so-called Password-authenticated key exchange (PAKE) protocol to let participants themselves agree on a secure, shared key. (Disclaimer: many years ago, a colleague and I co-invented EKE, the first such scheme.) But multiparty PAKEs are rather rare. I don't know if there are any that are secure enough and would scale to enough users.
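For the curious, the core EKE trick can be sketched in a few lines for the two-party case (the multiparty case is precisely what's missing). Both sides run Diffie-Hellman, but each masks its public value under the shared password, so a passive wiretapper cannot even verify password guesses offline against the transcript. The additive masking and toy parameters below are simplifications standing in for the symmetric encryption of the original paper; this is a sketch of the idea, not a vetted protocol:

```python
import hashlib
import secrets

P = 2**255 - 19  # illustrative prime modulus, not a vetted DH group
G = 2

def pw_pad(password: bytes, label: bytes) -> int:
    # Password-derived mask; a real design needs a proper KDF and
    # careful encoding, elided here.
    return int.from_bytes(hashlib.sha256(label + password).digest() * 2,
                          "big") % P

def masked_dh_share(password: bytes, label: bytes):
    # DH public value g^x, hidden under the password-derived mask.
    x = secrets.randbelow(P - 2) + 1
    masked = (pow(G, x, P) + pw_pad(password, label)) % P
    return x, masked

def unmask_and_derive(password: bytes, label: bytes,
                      masked: int, my_priv: int) -> bytes:
    # Strip the peer's mask, then complete the DH exchange.
    peer_pub = (masked - pw_pad(password, label)) % P
    shared = pow(peer_pub, my_priv, P)
    return hashlib.sha256(shared.to_bytes(32, "big")).digest()

password = b"1234"  # even a short PIN resists passive wiretapping here
a_priv, a_masked = masked_dh_share(password, b"A")
b_priv, b_masked = masked_dh_share(password, b"B")

k_a = unmask_and_derive(password, b"B", b_masked, a_priv)
k_b = unmask_and_derive(password, b"A", a_masked, b_priv)
assert k_a == k_b  # both ends derive the same session key
```

The session key's strength comes from the DH exchange, not the PIN; the PIN only keeps interlopers from joining the exchange in the first place. Extending that property to dozens of meeting participants is the open problem.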

So: Zoom is doing its cryptography very badly, and while some of the errors can be fixed pretty easily, others are difficult and will take time and expertise to solve.

Tags: Zoom

Zoom Security: The Good, the Bad, and the Business Model

2 April 2020

Zoom—one of the hottest companies on the planet right now, as businesses, schools, and individuals switch to various forms of teleconferencing due to the pandemic—has come in for a lot of criticism due to assorted security and privacy flaws. Some of the problems are real but easily fixable, some are due to a mismatch between what Zoom was intended for and how it's being used now—and some are worrisome.

The first part is the easiest: there have been a number of simple coding bugs. For example, their client used to treat a Windows Universal Naming Convention (UNC) file path as a clickable URL; if you clicked on such a path sent by an attacker, you could end up disclosing your hashed password. Zoom's code could have and should have detected that, and now does. I'm not happy with that class of bug, and while no conceivable effort can eliminate all such problems, efforts like Microsoft's Software Development Lifecycle can really help. I don't know how Zoom ensured software security before; I strongly suspect that whatever they were doing before, they're doing a lot more now.
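The UNC bug is at root a link-classification failure: text like \\server\share looks vaguely URL-ish, and rendering it clickable lets one click trigger an SMB connection that leaks the NTLM password hash. The obvious defense is an allow-list check before linkifying anything; the function below is an illustrative sketch of that idea, not Zoom's actual code:

```python
import re

# Matches a Windows UNC path: two backslashes, a host, another
# backslash, then the share/path.
UNC_RE = re.compile(r"^\\\\[^\\\s]+\\\S*")

def safe_to_linkify(text: str) -> bool:
    # Refuse UNC paths outright, then allow-list only web schemes
    # instead of trying to enumerate every dangerous scheme.
    if UNC_RE.match(text):
        return False
    return text.lower().startswith(("http://", "https://"))

assert not safe_to_linkify(r"\\evil-host\share\doc.txt")
assert safe_to_linkify("https://example.com/meeting")
assert not safe_to_linkify("file:///etc/passwd")
```

The allow-list direction matters: a deny-list of bad schemes invites exactly the kind of oversight that caused the bug.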

Another class of problem involves deliberate features that were actually helpful when Zoom was primarily serving its intended market: enterprises. Take, for example, the ability of the host to mute and unmute everyone else on a call. I've been doing regular teleconferences for well over 25 years, first by voice and now by video. The three most common things I've heard are “Everyone not speaking, please mute your mic”; “Sorry, I was on mute”, and “Mute button!” I've also heard snoring and toilets flushing… In a work environment, giving the host the ability to turn microphones off and on isn't spying, it's a way to manage and facilitate a discussion in a setting where the usual visual and body language cues aren't available.

The same rationale applies to things like automatically populating a directory with contacts, scraping LinkedIn data, etc.—it's helping business communication, not spying on, say, attendees at a virtual religious service. You can argue whether these are useful features or not; you can even say that they shouldn't be done even in a business context—but the argument against them in a business context is much weaker than it is when talking about casual users who just want to chat online with their friends.

There is, though, a class of problems that worries me: security shortcuts in the name of convenience or usability. Consider the first widely known flaw in Zoom: a design decision that allowed “any website to forcibly join a user to a Zoom call, with their video camera activated, without the user's permission.” Why did it work that way? It was intended as a feature:

As Zoom explained, changes implemented by Apple in Safari 12 that “require a user to confirm that they want to start the Zoom client prior to joining every meeting” disrupted that functionality. So in order to save users an extra click, Zoom installed the localhost web server as “a legitimate solution to a poor user experience problem.”
They also took shortcuts with initial installation, again in the name of convenience. I'm all in favor of convenience and usability (and in fact one of Zoom's big selling points is how much easier it is to use than its competitors), but that isn't a license to engage in bad security practices.

To its credit, Zoom has responded very well to criticisms and reports of flaws. Unlike more or less any other company, they're now saying things like “yup, we blew it; here's a patch”. (They also say that critics have misunderstood how they do encryption.) They've even announced a plan for a thorough review, with outside experts. There are still questions about some system details, but I'm optimistic that things are heading in the right direction. Still, it's the shortcuts that worry me the most. Those aren't just problems that they can fix, they make me fear for the attitudes of the development team towards security. I'm not convinced that they get it—and that's bad. Fixing that is going to require a CISO office with real power, as well as enough education to make sure that the CISO doesn't have to exercise that power very often. They also need a privacy officer, again with real power; many of their older design decisions seriously impact privacy.

I've used Zoom in a variety of contexts for several years, and mostly like its functionality. But the security and privacy issues are real and need to be fixed. I wish them luck.

Tags: Zoom

Notes on a Zoom Class

14 March 2020

On Friday, I taught my first class using Zoom. It was an interesting experience, and I'm wondering what, if anything, I should change for next class. (About two years ago, I had conducted a class using Zoom, and I've been in many Zoom meetings, but this was rather different.)

I'm currently teaching Computer Security II. It's a rather small lecture class, with 22 (nominally) in-class students. This was the third day of Zoom-only classes at Columbia, so almost all of the students had established their own Zoom routines by that point. I requested that students turn on their video, because I try to look around the room for feedback if something is unclear—but essentially no one complied.

I had a few technical hiccups. I'd intended to run the class from an iPad Pro, but I'd forgotten that I'd need a USB-C to 3.5 mm jack adapter. Instead—and rather to my concern—I had to do a quick install of the macOS Zoom package. I also ran into a UI issue trying to do proper screensharing on the iPad—if you want to, e.g., show slides of some sort, you have to start that, switch to Zoom, start sharing your screen, and then switch back to the slide app. Furthermore, there's a permission pop-up about Zoom wanting to record your screen—that's not what it's doing, but that's the interface that it uses. You then have to tell the permission box that you want to record the screen (on the second pass, you say to authorize Zoom, I think). I'd tried the more obvious route: trying to open a Dropbox or iCloud file within Zoom. That sort-of works, but not well. (Btw: the official advice here is to use PDF slides and not Powerpoint; apparently, Zoom doesn't like Powerpoint as much. I'm not sure why not.) There was also some screen-sharing glitch at the start that required me to exit and restart the application.

It goes without saying that you should use earphones/earbuds when doing any sort of teleconference.

In an informal test on my own, I found a problem when two different screens had different aspect ratios—it matched the height, meaning that if I shared the screen that was longer relative to its height, I had to scroll left and right on the shorter screen. When I shared in the opposite direction, there were black bars on either side of the longer screen, but that's exactly what I'd want. So: make up your slides in, say, 4:3 or 11:8.5 format, rather than 16:9.

I strongly recommend using two screens: one for slides or video of yourself, and one for the participant list and chat panels. Using my Mac laptop, I could keep the slides on my external monitor; the laptop's screen showed the “gallery” view of logged-in students. I used an iPad for the participant list and chat window, and glanced at it on occasion. I may try switching those roles next time, to have more room for the student gallery—but again, that doesn't help much with no student video. And if I use my laptop to project, I'll have to move my external camera/mic or use the laptop's built-in camera.

The biggest problem I had was lack of visual feedback. I've recorded lectures in empty classrooms before; it's never gone pleasantly. In fact, in the past when I've had to prerecord a lecture because I'd be traveling, I've invited any students who were available to attend the recording session. A few would always take me up on the offer; that helped a lot. Perhaps, if the Zoom gallery view showed more students—even with a small class, it didn't show everyone—it would have been better, but again, essentially no one was willing to have their video on. (I can't say that they're wrong to value their privacy. In that class two years ago, one student forgot she was on video and changed her top during class.)

It would be interesting to try to teach a seminar class this way. I would insist on video for that. (I'll be attending a Zoom class Wednesday night; I'm curious how it will be conducted.) I normally do stop to ask questions of the class when lecturing; there were no instances where I wanted to yesterday, but next lecture will have some opportunities. I wonder how that will go.

I asked the class for feedback on whether I should run the next class differently. For a number of reasons, I always use slides when lecturing, but since I really want the visual feedback (if I can get the students to turn their cameras on…), I suggested that they download my slides in advance while I simply lectured on video. None of the students who responded liked that idea; they all wanted my slides projected.

I should note: even though I unmuted everyone for the discussion, almost no one was willing to respond verbally; they all preferred to type using the chat function. I do suggest, if feasible (it isn't for me), that a TA monitor the chat room continuously, so that the instructor doesn't have to glance over constantly.

There have been some privacy concerns expressed about Zoom. I'm still assessing those and may do another blog post later. For now, I'll opine that instructors should make sure that “attendee attention tracking” is disabled. (To be fair, as anyone who has ever taught knows, it's really obvious which students are not paying attention. This merely replicates that…) Also, some of the concerns in that article are about Zoom's “webinars”. My class is a “meeting”; Zoom treats them differently.

Finally: I did have fun with Zoom's virtual backgrounds. For the first half of the class, I appeared to be in Grand Teton National Park; for the second, I downloaded a picture of Saturn from a NASA website… I think I'll look at my own photo collection for some more interesting landscapes. I know I have good pictures of Mount Rainier, the hoodoos in Bryce National Park, and more. Or maybe I'll just appear in front of the Pillars of Creation or a spiral galaxy.


Y2038: It's a Threat

19 January 2020

Last month, for the 20th anniversary of Y2K, I was asked about my experiences. (Short answer: there really was a serious potential problem, but disaster was averted by a lot of hard work by a lot of unsung programmers.) I joked that, per this T-shirt I got from a friend, the real problem would be on January 19, 2038, at 03:14:08 GMT.

Picture of a T-shirt saying that Y2K
is harmless but that 2038 is dangerous

Why might that date be such a problem?

On Unix-derived systems, including Linux and MacOS, time is stored internally as the number of seconds since midnight GMT, January 1, 1970, a time known as “the Epoch”. Back when Unix was created, timestamps were stored in a 32-bit number. Well, like any fixed-size value, only a limited range of numbers can be stored in 32 bits: numbers from -2,147,483,648 to 2,147,483,647. (Without going into technical details, the first of those 32 bits is used to denote a negative number. The asymmetry in range is to allow for zero.)

I immediately got pushback: did I really think that 18 years hence, people would still be using 32-bit systems? Modern computers use 64-bit integers, which can allow for times up to 9,223,372,036,854,775,807 seconds since the Epoch. (What date is that? I didn't bother to calculate it, but it's about 292,271,023,045 years, a date that's well beyond when it is projected that the Sun will run out of fuel. I don't propose to worry about computer timestamps after that.)
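The arithmetic behind both limits is easy to check. Here's a small Python sketch—my own illustration, not code from any of the systems discussed—showing where the signed 32-bit limit lands on the calendar:

```python
import datetime

# The largest value a signed 32-bit time counter can hold
LIMIT_32 = 2**31 - 1
print(LIMIT_32)  # 2147483647

# That many seconds after the Epoch is the last representable
# second; the overflow comes one tick later.
rollover = datetime.datetime.fromtimestamp(LIMIT_32, datetime.timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```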

It turns out, though, that just as with Y2K, the problems don't start when the magic date hits; rather, they start when a computer first encounters dates after the rollover point, and that can be a lot earlier. In fact, I just had such an experience.

A colleague sent me a file from his Windows machine; looking at the contents, I saw this.

$ unzip -l
  Length      Date    Time    Name
---------  ---------- -----   ----
  2411339  01-01-2103 00:00   Anatomy…
---------                     -------

Look at that date: it's in the next century! (No, I don't know how that happened.) But when I looked at it after extracting on my computer, the date was well in the past:

$ ls -l Anatomy…
-rw-r--r--@ 1 smb staff 2411339 Nov 24 1966 Anatomy…

After a quick bit of coding, I found that the on-disk modification time of the extracted file was 4,197,067,200 seconds since the Epoch. That's larger than the limit! But it's worse than that. I translated the number to hexadecimal (base 16), which computer programmers use as an easy way to display the binary values that computers use internally. It came to FA2A29C0. (Since base 16 needs six more digits than our customary base 10, we use the letters A–F to represent them.) The first “F”, in binary, is 1111. And the first of those bits is the so-called sign bit, the bit that tells whether or not the number is negative. The value of FA2A29C0, if treated as a signed, 32-bit number, is -97,900,096, or about 3.1 years before the Epoch. Yup, that corresponds exactly to the Nov 24, 1966 date my system displayed. (Why should +4,197,067,200 come out to -97,900,096? As I indicated, that's moderately technical, but if you want to learn the gory details, the magic search phrase is “2's complement”.)
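The wraparound can be reproduced directly. This sketch (mine, for illustration; the timestamp is the one from the file above) squeezes the value into a signed 32-bit slot, the way old code with a 32-bit time variable would:

```python
import datetime
import struct

t = 4_197_067_200   # the extracted file's modification time, per above

# Keep only the low 32 bits, then reinterpret them as a signed ("i")
# rather than unsigned ("I") 32-bit integer -- this is exactly what
# 2's-complement truncation does to an out-of-range timestamp.
wrapped, = struct.unpack('<i', struct.pack('<I', t & 0xFFFFFFFF))

print(hex(t))      # 0xfa2a29c0
print(wrapped)     # -97900096
print(datetime.datetime.fromtimestamp(wrapped, datetime.timezone.utc).date())
# 1966-11-24
```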

So what happened? MacOS does use 64-bit time values, so there shouldn't have been a problem. But the “ls” command (and the Finder graphical application) do do some date arithmetic. I suspect that there is old code that is using a 32-bit variable, thus causing the incorrect display.

For fun, I copied the zip file to a Linux system. It got it right, on extraction and display:

$ ls -l Anatomy…
-rw-r--r-- 1 smb faculty 2411339 Jan 2 2103 Anatomy…

(Why January 2 instead of January 1? I don't know for sure; my guess is time zones.)

So: there are clearly some Y2038 bugs in MacOS, today. In other words, we already have a problem. And I'm certain that these aren't the only ones, and that we'll be seeing more over the next 18 years.

Update: I should have linked to this thread about a more costly Y2038 incident.

The Early History of Usenet, Part XI: Errata

9 January 2020

I managed to conflate RFCs 733 and 822, and wrote 722 in the last post. That's now been fixed.

Here is the table of contents, actual and projected, for this series.

  1. The Early History of Usenet: Prologue
  2. The Technological Setting
  3. Hardware and Economics
  4. File Format
  5. Implementation and User Experience
  6. Authentication and Norms
  7. The Public Announcement
  8. Usenet Growth and B-news
  9. The Great Renaming
  10. Retrospective Thoughts
  11. Errata

The tag URL will always take you to an index of all blog posts on this topic.

The Early History of Usenet, Part X: Retrospective Thoughts

9 January 2020

Usenet is 40 years old. Did we get it right, way back when? What could/should we have done differently, with the technology of the time and with what we should have known or could feasibly have learned? And what are the lessons for today?

A few things were obviously right, even in retrospect. For the expected volume of communications and expected connectivity, a flooding algorithm was the only real choice. Arguably, we should have designed a have/want protocol, but that was easy enough to add on later—and was, in the form of NNTP. There were discussions even in the mid- to late-1980s about how to build one, even for dial-up links. For that matter, the original announcement explicitly included a variant form:

Traffic will be reduced further by extending news to support "news on demand." X.c would be submitted to a newsgroup (e.g. "NET.bulk") to which no one subscribes. Any node could then request the article by name, which would generate a sequence of news requests along the path from the requester to the contributing system. Hopefully, only a few requests would locate a copy of x.c. "News on demand" will require a network routing map at each node, but that is desirable anyway.
Similarly, we were almost certainly right to plan on a linked set of star nodes, including of course Duke. Very few sites had autodialers, but most had a few dial-in ports.

The lack of a cryptographic authentication and hence control mechanisms is a somewhat harder call, but I still think we made the right decision. First, there really wasn't very much academic cryptographic literature at the time. We knew of DES, we knew of RSA, and we knew of trapdoor knapsacks. We did not know the engineering parameters for either of the latter two, and, as I noted in an earlier post, we didn't even know to look for a bachelor's thesis that might or might not have solved the problem. Today, I know enough about cryptography that I could, I think, solve the problem with the tools available in 1979 (though remember that there were no cryptographic hash functions then), but I sure didn't know any of that back then.

There's a more subtle problem, though. Cryptography is a tool for enforcing policies, and we didn't know what the policies should be. In fact, we said that, quite explicitly:

  1. What about abuse of the network?
    In general, it will be straightforward to detect when abuse has occurred and who did it. The uucp system, like UNIX, is not designed to prevent abuses of overconsumption. Experience will show what uses of the net are in fact abuses, and what should be done about them.
  2. Who would be responsible when something bad happens?
    Not us! And we do not intend that any innocent bystander be held liable either. We are looking into this matter. Suggestions are solicited.
  3. This is a sloppy proposal. Let's start a committee.
    No thanks! Yes, there are problems. Several amateurs collaborated on this plan. But let's get started now. Once the net is in place, we can start a committee. And they will actually use the net, so they will know what the real problems are.
This is a crucial point: if you don't know what you want the policies to be, you can't design suitable enforcement mechanisms. Similarly, you have to have some idea who is charged with enforcing policies in order to determine who should hold, e.g., cryptographic keys.

Today's online communities have never satisfactorily answered either part of this. Twitter once described itself as the “free speech wing of the free speech party”; today, it struggles with how to handle things like Trump's tweets and there are calls to regulate social media. Add to that the international dimension and it's a horribly difficult problem—and Usenet was by design architecturally decentralized.

Original Usenet never tried to solve the governance problem, even within its very limited domain of discourse. It would be simple, today, to implement a scheme where posters could cancel their own articles. Past that, it's very hard to decide in whom to vest control. The best Usenet ever had were the Backbone Cabal and a voting scheme for creation of new newsgroups, but the former was dissolved after the Great Renaming because it was perceived to lack popular legitimacy and the latter was very easily abused.

Using threshold cryptography to let M out of N chosen “trustees” manage Usenet works technically but not politically, unless the “voters”—and who are they, and how do we ensure one Usenet user, one vote?—agree on how to choose the Usenet trustees and what their powers should be. There isn't even a worldwide consensus on how governments should be chosen or what powers they should have; adding cryptographic mechanisms to Usenet wouldn't solve it, either, even for just Usenet.

We did make one huge mistake in our design: we didn't plan for success. We never asked ourselves, “What if our traffic estimates are far too low?”

There were a number of trivial things we could have done. Newsgroups could always have been hierarchical. We could have had more hierarchies from the start. We wouldn't have gotten the hierarchy right, but computers, other sciences, humanities, regional, and department would have been obvious choices and not that far from what eventually happened.

A more substantive change would have been a more extensible header format. We didn't know about RFC 733, the then-current standard for ARPANET email, but we probably could have found it easily enough. But we did know enough to insist on having “A” as the first character of a post, to let us revise the protocol more easily. (Aside: tossing in a version indicator is easy. Ensuring that it's compatible with the next version is not easy, because you often need to know something of the unknowable syntax and semantics of the future version. B-news did not start all articles with a “B”, because that would have been incompatible with its header format.)

The biggest success-related issue, though, was the inability to read articles by newsgroup and out of order within a group. Ironically, Twitter suffers from the same problem, even now: you see a single timeline, with no easy way to flag some tweets for later reading and no way to sort different posters into different categories (“tweetgroups”?). Yes, there are lists, but seeing something in a list doesn't mean you don't see it again in your main timeline. (Aside: maybe that's why I spend too much time on Twitter, both on my main account and on my photography account.)

Suppose, in a desire to relive my technical adolescence, I decided to redesign Usenet. What would it look like?

Nope, not gonna go there. Even apart from the question of whether the world needs another social noise network, there's no way the human attention span scales far enough. The cognitive load of Usenet was far too high even at a time when very few people, relatively speaking, were online. Today, there are literally billions of Internet users. I mean, I could specify lots of obvious properties for Usenet: The Next Generation—distributed, peer-to-peer, cryptographically authenticated, privacy-preserving—but people still couldn't handle the load and there are still the very messy governance problems like illegal content, Nazis, trolls, organization, and more. The world has moved on, and I have, too, and there is no shortage of ways to communicate. Maybe there is a need for another, but Usenet—a single infrastructure intended to support many different topics—is probably not the right model.

And there's a more subtle point. Usenet was a batch, store-and-forward network, because that's what the available technology would support. Today, we have an always-online network with rich functionality. The paradigm for how one interacts with a network would and should be completely different. For example: maybe you can only interact with people who are online at the same time as you are—and maybe that's a good thing.

Usenet was a creation of its time, but around then, something like it was likely to happen. To quote Robert Heinlein's The Door into Summer, “you railroad only when it comes time to railroad.” The corollary is that when it is time to railroad, people will do so. Bulletin Board Systems started a bit earlier, though it took the creation of the Hayes SmartModem to make them widespread in the 1980s. And there was CSnet, an official email gateway between the ARPANET and dial-up sites, started in 1981, with some of the same goals. We joked that when professors want to do something, they wrote a proposal and received lots of funding, but we, being grad students, just went and did it, without waiting for paperwork and official sanction.

Usenet, though, was different. Bulletin Board Systems were single-site, until the rise of Fidonet a few years later; Usenet was always distributed. CSnet had central administration; Usenet was, by intent, laissez-faire and designed for organic growth at the edges, with no central site that in some way needed money. Despite its flaws, it connected many, many people around the world, for more than 20 years until the rise of today's social network. And, though the user base and usage patterns have changed, it's still around, 40 years later.

This concludes my personal history of Usenet. I haven't seen any corrections, but I'll keep that link live in case I get some.

Correction: This post erroneously referred to RFC 722, by conflating 733 with 822, the revision.

Here is the table of contents, actual and projected, for this series.

  1. The Early History of Usenet: Prologue
  2. The Technological Setting
  3. Hardware and Economics
  4. File Format
  5. Implementation and User Experience
  6. Authentication and Norms
  7. The Public Announcement
  8. Usenet Growth and B-news
  9. The Great Renaming
  10. Retrospective Thoughts
  11. Errata

The tag URL will always take you to an index of all blog posts on this topic.

The Early History of Usenet, Part IX: The Great Renaming

26 December 2019

The Great Renaming was a significant event in Usenet history, since it involved issues of technology, money, and governance. From a personal perspective—and remember that this series of blog posts is purely my recollections—it also marked the end of my “official” involvement in “running” Usenet. I put “running” in quotation marks in the previous sentence because of the difficulty of actually controlling a non-hierarchical, distributed system with no built-in, authenticated control mechanisms.

As with so many other major changes in Usenet, the underlying problem was volume. Here, it wasn't so much the volume that individuals could consume as it was volume for sites to send, receive, and store. There was simply too much traffic. The problem was exacerbated by the newsgroup naming structure: it was too flat, and the hierarchy that did exist—net, fa (for “from ARPA”, ARPANET mailing lists that were gatewayed into Usenet newsgroups), and mod, for moderated newsgroups—wasn't very helpful for managing load. The hierarchy was not semantic; it was based on how content could appear: posted by anyone (net), relayed from a mailing list (fa), or controlled by a moderator (mod). Clearly, something had to be done to aid manageability. But who had both the authority and the power to make such decisions?

Although in theory, all Usenet nodes were equal, in practice some were more equal than others. In technical terms, though Usenet connectivity is considered a graph, in practice it was more like a set of star networks: a very few nodes had disproportionately high connectivity. These few nodes fed many end-sites, but they also talked to each other. In effect, those latter links were the de facto network backbone of Usenet, and the administrators of these major nodes wielded great power. They, together with a few Usenet old-timers, including me and Gene Spafford, constituted what became known as the “Backbone Cabal”. The Backbone Cabal had no power de jure; in practice, though, any newsgroups excluded by the entire cabal would have seen very little distribution outside of the originating region.

The problem had been recognized for quite a while before anything was actually done; see, e.g., this post by Chuq von Rospach, which is arguably the first detailed proposal. The essence of it and the scheme that was finally adopted were the same: organize groups into hierarchies that reflected both subject matter and signal-to-noise ratio. The latter was a significant problem; the volume of shouting in some newsgroups compares unfavorably to the “Comments” section of many web pages. The result was the same, though: sites could easily select what they wanted to receive, via broad categories rather than a long, long list of desired or undesired groups.

Contrary to what some, e.g., the Electronic Frontier Foundation, have said, the issue was not censorship, even censorship designed to ensure that Usenet never created the kind of scandal that would lead to public outcry that would threaten the project. And the backbone sites never had to hide from immediate management; as I have indicated, management was very well aware of Usenet and—for backbone sites—was willing to absorb the phone bills. (“Companies so big that their Usenet-related long distance charges were lost in the dictionary-sized bills the company generated every month”—sorry, it doesn't work that way in any organization I've ever been associated with. Every sub-organization had its own budget and had to cover its own phone bills.) There were budget issues and there were worries about scandal, but to the best of my recollection these were more on some non-backbone sites. But the backbone sites had to administer their feeds, and that demanded hierarchy.

To be sure, the top-level hierarchies into which some newsgroups were put were political. It couldn't help being political, because everyone knew that moving something to the talk hierarchy would sharply curtail its distribution. And yes, members of the Cabal (including, of course, me) had their own particular interests. But that notwithstanding, trying to impose a hierarchical classification system on knowledge is hard—ask any librarian. (Thought experiment: how would you classify Apollo 11? Under “rocketry”? The “space race”? The “Cold War”? What about Wernher von Braun's contribution to the project? Is he a subcategory of the Apollo Project? Or of the history of rocketry, or of World War II?) There was not and could not be a perfect solution.

(The Wikipedia article on the Great Renaming says that the two immediate drivers were the complexity of listing which groups which sites would receive, and/or the cost of the overseas links from seismo to Europe. That may very well be; I simply do not remember specific issues other than load writ broadly.)

The ultimate renaming scheme was the subject of a lot of discussion, and changes were made to the original proposals. Ultimately, it was adopted—and there was rapid counter-action. The alt hierarchy was created as a set of newsgroups explicitly outside the control of the Backbone Cabal. And it succeeded, because technology had changed. For one thing, the cost of phone calls was dropping. For another, the spread of the Internet to many sites meant that Usenet didn't have to flow via phone calls billed by the minute: RFC 977, which proposed a standard for transmitting Usenet over the Internet, came out in early 1986. In other words, the notional control of the Backbone Cabal over content and distribution was just that: notional. The success of the alt hierarchy showed that Usenet had passed a critical point, where the disappearance of a very few nodes could have killed the whole idea of Usenet. At least partially in reaction to this, the Backbone Cabal disappeared—but it left unanswered the question of governance: who could or should control the net?

Newsgroup creation was one early topic. Creation was approved by voting: rough, imperfect voting, which gave rise to proposals for change. There was also the issue of unwanted or improper content, the creation of cancelbots, and more. People worried about liability, jurisdiction, copyright, and more, very early on. These issues are still largely unresolved. Fundamentally, the debate then was between a purely hands-off approach and some form of control; the latter, though, required both consensus on who should have the right to exercise authority and also the creation of appropriate technical mechanisms. Both of these issues are still with us today. I'll have more to say on them in the next (and final substantive) installment of this series.

Here is the table of contents, actual and projected, for this series.

  1. The Early History of Usenet: Prologue
  2. The Technological Setting
  3. Hardware and Economics
  4. File Format
  5. Implementation and User Experience
  6. Authentication and Norms
  7. The Public Announcement
  8. Usenet Growth and B-news
  9. The Great Renaming
  10. Retrospective Thoughts
  11. Errata

The tag URL will always take you to an index of all blog posts on this topic.

The Early History of Usenet, Part VIII: Usenet Growth and B-news

30 November 2019

For quite a while, it looked like my prediction—one to two articles per day—was overly optimistic. By summer, there were only four new sites: Reed College, University of Oklahoma (at least, I think that that's what uucp node uok is), vax135, another Bell Labs machine—and, crucially, U.C. Berkeley, which had a uucp connection to Bell Labs Research and was on the ARPANET.

In principle, even a slow rate of exponential growth can eventually take over the world. But that assumes that there are no “deaths” that will drive the growth rate negative. That isn't a reasonable assumption, though. If nothing else, Jim Ellis, Tom Truscott, Steve Daniel, and I all planned to graduate. (We all succeeded in that goal.) If Usenet hadn't shown its worth to our successors by then, they'd have let it wither. For that matter, university faculty or Bell Labs management could have pulled the plug, too. Usenet could easily have died aborning. But the right person at Berkeley did the right thing.

Mary Horton was then a PhD student there. (After she graduated, she joined Bell Labs; she and I were two of the primary people who brought TCP/IP to the Labs, where it was sometimes known as the “datagram heresy”. The phone network was, of course, circuit-switched…) Known to her but unknown to us, there were two non-technical ARPANET mailing lists that would be of great interest to many potential Usenet users, HUMAN-NETS and SF-LOVERS. She set up a gateway that relayed these mailing lists into Usenet groups; these were at some point moved to the fa (“From ARPANET”) hierarchy. (For a more detailed telling of this part of the story, see Ronda Hauben's writings.) With an actual traffic source, it was easy to sell folks on the benefits of Usenet. People would have preferred a real ARPANET connection but that was rarely feasible and never something that a student could set up: ARPANET connections were restricted to places that had research contracts with DARPA. The gateway at Berkeley was, eventually, bidirectional for both Usenet and email; this enabled Usenet-style communication between the networks.

SF-LOVERS was, of course, for discussing science fiction; then as now, system administrators were likely to be serious science fiction fans. HUMAN-NETS is a bit harder to describe. Essentially, it dealt with the effect on society of widespread networking. If it still existed today, it would be a natural home for discussions of online privacy, fake news, and hate speech, as well as the positive aspects: access to much of the world's knowledge, including primary source materials that years ago were often hard to find, and better communications between people.

It is, in fact, unclear if the gateway was technically permissible. The ARPANET was intended for use by authorized ARPANET sites only; why was a link to another network allowed? The official reason, as I understand it, is that it was seen as a use by Berkeley, and thus passed muster; my actual impression is that it was viewed as an interesting experiment. The reason for the official restriction was to prevent a government-sponsored network from competing with then-embryonic private data networks; Usenet, being non-commercial, wasn't viewed as a threat.

Uucp email addresses, as seen on the ARPANET, were a combination of a uucp explicit path and an ARPANET hostname. This was before the domain name system; the ARPANET had a flat name space back then. My address would have been something like

but also
research!duke!unc!smb at BERKELEY
—in this era, " at " was accepted as a synonym for "@"…

With the growth in the number of sites came more newsgroups and more articles. This made the limitations of the A-news user interface painfully apparent. Mary designed a new scheme; a high school student, Matt Glickman, implemented what became B-news. There were many improvements.

The most important change was the ability to read articles by newsgroup, and to read them out of order. By contrast, A-news presented articles in order of arrival, and only stored the high-water mark of continuous articles read. The input file format changed, too, to one much more like email. Here's the sample from RFC 1036:

From: jerry@eagle.ATT.COM (Jerry Schwarz)
Path: cbosgd!mhuxj!mhuxt!eagle!jerry
Newsgroups: news.announce
Subject: Usenet Etiquette -- Please Read
Message-ID: <642@eagle.ATT.COM>
Date: Fri, 19 Nov 82 16:14:55 GMT
Followup-To: news.misc
Expires: Sat, 1 Jan 83 00:00:00 -0500
Organization: AT&T Bell Laboratories, Murray Hill

The body of the message comes here, after a blank line.

The most interesting change was the existence of both From: and Path: lines. The former was to be used for sending email; the latter was used to track which sites had already seen an article. There is also the implicit assumption that there would be a suitable ARPANET-to-uucp gateway, identified by a DNS MX record, to handle email relaying; at this time, such gateways were largely aspirational and mixed-mode addresses were still the norm.
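Because the B-news format was deliberately email-like, modern mail-parsing code handles it. Here's a purely illustrative Python sketch (A- and B-news themselves were written in C) that reads the RFC 1036 sample above:

```python
# Parse the B-news sample article with the stdlib email parser;
# the header/blank-line/body layout is the same as email's.
from email.parser import Parser

article = """From: jerry@eagle.ATT.COM (Jerry Schwarz)
Path: cbosgd!mhuxj!mhuxt!eagle!jerry
Newsgroups: news.announce
Subject: Usenet Etiquette -- Please Read
Message-ID: <642@eagle.ATT.COM>
Date: Fri, 19 Nov 82 16:14:55 GMT

The body of the message comes here, after a blank line.
"""

msg = Parser().parsestr(article)
print(msg['From'])                # who to email a reply to
print(msg['Path'].split('!'))     # sites that have already seen it
print(msg['Newsgroups'])          # news.announce
```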

B-news also introduced control messages. As noted, these were unauthenticated; mischief could and did result. Other than canceling messages, the primary use was for the creation of new newsgroups—allowing them to be created willy-nilly didn't scale.

There was also control message support for mapping the network, which did not work as well as we expected. Briefly, the purpose of the senduuname message was to allow a site to calculate the shortest uucp path to a destination, both to relieve users of the mental effort to remember long paths and also to allow a shorter email path than simply retracing the Usenet path. (This was also a reliability feature; uucp email, especially across multiple hops, was not very reliable.) My code worked (and, after a 100% rewrite by Peter Honeyman, became my first published paper), but it was never properly integrated into mailers, and the shorter paths were even less reliable than the long ones.
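The computation itself is just shortest-path search over the link map that the senduuname replies built up. A toy sketch—the site names and links here are hypothetical, and the real program was in C—using breadth-first search to find a fewest-hops route:

```python
# Find a fewest-hops uucp route over a (hypothetical) link map,
# rendering the result as a bang path.
from collections import deque

links = {                       # hypothetical uucp neighbor lists
    'duke':     ['unc', 'research'],
    'unc':      ['duke'],
    'research': ['duke', 'ucbvax'],
    'ucbvax':   ['research'],
}

def uucp_path(src, dst):
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return '!'.join(path)      # bang-path form
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                        # no known route

print(uucp_path('unc', 'ucbvax'))  # unc!duke!research!ucbvax
```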

Finally, there were internal changes. A-news had used a single directory for all messages, but as the number of messages increased, that became a serious performance bottleneck. B-news used a directory per newsgroup, and eventually subdirectories that reflected the hierarchical structure.

The growth of Usenet had negative consequences, too: some sites became less willing to carry the load. Bell Labs Research had been a major forwarding site, but Doug McIlroy, then a department head, realized that the exponent in Usenet's growth rate was, in fact, significant, and that the forwarding load was threatening to overload the site—star networks don't scale. He ordered an end to email relaying. This could have been very, very serious; fortunately, there were a few other sites that had started to pick up the load, most notably decvax at Digital Equipment Corporation's Unix Engineering Group. This effort, spearheaded by Bill Shannon and Armando Stettner, was quite vital. Another crucial relay site was seismo, run by Rick Adams at the Center for Seismic Studies; Rick later went on to found UUNET, which became the first commercial ISP in the United States. At Bell Labs, ihnp4, run by Gary Murakami, became a central site, too. (Amusingly enough, even though I joined the Labs in late 1982, I did not create another hub: as a very junior person, I didn't feel that I could. But it wasn't because management didn't know about Usenet; indeed, on my first day on the job, my center's director (three levels up from me) greeted me with, “Hi, Steve—I've seen your flames on Netnews.” I learned very early that online posts can convey one's reputation…)

More on load issues in the next post.

Here is the table of contents, actual and projected, for this series.

  1. The Early History of Usenet: Prologue
  2. The Technological Setting
  3. Hardware and Economics
  4. File Format
  5. Implementation and User Experience
  6. Authentication and Norms
  7. The Public Announcement
  8. Usenet Growth and B-news
  9. The Great Renaming
  10. Retrospective Thoughts
  11. Errata

The tag URL will always take you to an index of all blog posts on this topic.

The Early History of Usenet, Part VII: The Public Announcement

25 November 2019

Our goal was to announce Usenet at the January, 1980 Usenix meeting. In those days, Usenix met at universities; it was a small, comparatively informal organization, and didn't require hotel meeting rooms and the like. (I don't know just when Usenix started being a formal academic-style conference; I do know that it was no later than 1984, since I was on the program committee that year for what would later be called the Annual Technical Conference.) This meeting was in Boulder; I wasn't there, but Tom Truscott and Jim Ellis were.

Apart from the announcement itself, we of course needed non-experimental code—and my prototype was not going to cut it. Although I no longer remember precisely what deficiencies were in my C version, one likely issue was the ability to configure which neighboring sites would receive which newsgroups. Stephen Daniel, also at Duke CS, wrote the code that became known as “A-news”. One important change was the ability to have multiple hierarchies, rather than just the original “NET” or “NET.*”. (Aside: I said in a previous note that my C version had switched to “NET.*” for distributed groups, rather than the single NET. I'm now no longer sure of when that was introduced, in my C version or in Steve Daniel's version. He certainly supported other hierarchies; I certainly did not.) It was also possible in the production version to configure which groups or hierarchies a site would receive. For sanity's sake, this configuration would have to be in a file, rather than in an array built into the code.

That latter point was not always obvious. Uucp, as distributed, used an array to list the commands remote sites were permitted to execute:

/*  to remove restrictions from uuxqt, redefine CMDOK 0;
 *  to add allowable commands, add to the list under Cmds[] */
char *Cmds[] = {
	"mail", "rmail", "lpr", "opr", "fsend", "fget",
	0
};
To permit rnews to execute, a system administrator would have to change the source code (and most people had source code to Unix in those days) and recompile. This was, in hindsight, an obviously incorrect decision, but it was arguably justifiable in those days: what else should you be allowed to do? There were many, many fewer commands. (I should note: I no longer remember for certain what fsend, fget, or opr were. I think they were for sending and receiving files, and for printing to a Honeywell machine at the Bell Labs Murray Hill comp center. Think of the ancient GCOS field in the /etc/passwd file.)

To work around this problem, we supplied a mail-to-rnews program: a sending site could email articles, rather than try to execute rnews directly. A clock-driven daemon would retrieve the email messages and pass them to rnews. And it had to be clock-driven: in those days, there was no way to have email delivered directly to a program or file. (A security feature? No, simply the simplicity that was then the guiding spirit of Unix. But yes, it certainly helped security.) The remote-site configuration file in A-news therefore needed to know a command to execute, too.

The formal announcement can be seen here. The HTML is easier on the eyes, but there are a few typos and even some missing text, so you may want to look at the scanned version linked to at the bottom. A few things stand out. First, as I noted in Part III, there was a provision for Duke to recover phone charges from sites it polled. There was clearly faculty support at Duke for the project. For that matter, faculty at UNC knew what I was doing.

A more interesting point is what we thought the wide-area use would be: "The first articles will probably concern bug fixes, trouble reports, and general cries for help." Given how focused on the system aspects we were, what we really meant was something like the eventual newsgroup comp.sys.unix-wizards. There was, then, a very strong culture of mutual assistance among programmers, not just in organizations like Usenix (which was originally, as I noted, the Unix Users' Group), but also in the IBM mainframe world. The Wikipedia article on SHARE explains this well:

A major resource of SHARE from the beginning was the SHARE library. Originally, IBM distributed what software it provided in source form and systems programmers commonly made small local additions or modifications and exchanged them with other users. The SHARE library and the process of distributed development it fostered was one of the major origins of open source software.

Another proposed use was locating interesting source code, but not flooding it to the network. Why not? Because software might be bulky, and phone calls then were expensive. The announcement estimates that nighttime phone rates were about US$.50 for three minutes; that sounds about right, though even within the US rates varied with distance. In that time, at 300 bps—30 bytes per second—you could send at most 5400 bytes; given protocol overhead, we conservatively estimated 3000 bytes, or a kilobyte per minute. To pick an arbitrary point of comparison, the source to uucp is about 120KB; at 1KB/minute, that's two hours, or US$20. Adjusting for inflation, that's over US$60 in today's money—and most people don't want most packages. And there was another issue: Duke only had two autodialers; there simply wasn't the bandwidth to send big files to many places, and trying to do so would block all news transfers to other sites. Instead, the proposal was for someone—Duke?—to be a central repository; software could then be retrieved on demand. This was a model later adopted by UUNET; more on it in the next installment of this series.
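
The arithmetic in that estimate can be checked directly. The constants below are the announcement's figures as quoted above (an effective rate of about 1,000 bytes per minute after protocol overhead, and roughly US$0.50 per three-minute billing unit), not measured values:

```c
/* Estimated cost of sending `bytes` over a night-rate uucp call,
 * using the announcement's figures: ~1,000 effective bytes per
 * minute at 300 bps, and about US$0.50 per three-minute unit. */
static double transfer_cost_dollars(long bytes)
{
    double minutes = (double)bytes / 1000.0;
    return minutes * (0.50 / 3.0);
}
```

For the 120KB uucp source mentioned above, that works out to 120 minutes (two hours) and about US$20, matching the figures in the text.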

The most interesting thing, though, is what the announcement didn't talk about: any non-technical use. We completely missed social discussions, hobby discussions, political discussions, or anything else like that. To the extent we considered it at all, it was for local use—after all, who would want to discuss such things with someone they'd never met?
