We knew that Usenet needed some sort of management system, and we knew that that would require some sort of authentication, for users, sites, and perhaps posts. We didn’t add any, though—and why we didn’t is an interesting story. (Note: much of this blog post is taken from an older post.)
The obvious solution was something involving public key cryptography, which we (the original developers of the protocol: Tom Truscott, the late Jim Ellis, and myself) knew about: all good geeks at the time had seen Martin Gardner’s "Mathematical Games" column in the August 1977 issue of Scientific American (paywall), which explained both the concept of public key cryptography and the RSA algorithm. For that matter, Rivest, Shamir, and Adleman’s technical paper had already appeared; we’d seen that, too. In fact, we had code available for trapdoor knapsack encryption: the xsend command for public key encryption and decryption, which we could have built upon, was part of 7th Edition Unix, and that’s what Usenet ran on.
What we did not know was how to authenticate a site’s public key. Today, we’d use a certificate issued by a certificate authority. Certificates had been invented by then, but we didn’t know about them, and of course there were no search engines to come to our aid. (Manual finding aids? Sure—but apart from the question of whether any accessible to us would have indexed bachelor’s theses, we’d have had to know enough to even look. The RSA paper gave us no hints; it simply spoke of a "public file" or something like a phone book. It did speak of signed messages from a "computer network"—scare quotes in the original!—but we didn’t have one of those except for Usenet itself. And a signed message is not a certificate.) Even if we had known, there were no certificate authorities, and we certainly couldn’t create one along with creating Usenet.
Going beyond that, we did not know the correct parameters: how long a key to use (the estimates in the early papers were too low), what was secure (the xsend command used an algorithm that was broken a few years later), etc. Maybe some people could have made good guesses. We did not know and knew that we did not know.
The next thing we considered was neighbor authentication: each site could, at least in principle, know and authenticate its neighbors, due to the way the flooding algorithm worked. That idea didn’t work, either. For one thing, it was trivial to impersonate a site that appeared to be further away. Every Usenet message contains a Path: line; someone trying to spoof a message would simply have to claim to be a few hops away. (This is how the famous kremvax prank worked.)
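To make the weakness concrete, here is a minimal Python sketch of the Path:-based relay decision that the flooding algorithm used; the site names are hypothetical, and real implementations also tracked article IDs to suppress duplicates. The point is that the sender writes its own Path: header, so adding fake upstream hops makes a locally injected article indistinguishable from one that really traveled several hops:

```python
def should_relay(path: str, neighbor: str) -> bool:
    """Flood an article to a neighbor only if that site is not already in Path:."""
    return neighbor not in path.split("!")

# A legitimate article that arrived via "upstream":
print(should_relay("mysite!upstream!origin", "othersite"))  # True: send it on
print(should_relay("mysite!upstream!origin", "upstream"))   # False: it already has it

# A spoofer simply writes extra hops into Path: before injecting locally;
# nothing here distinguishes this from an article that really came from kremvax.
print(should_relay("mysite!fakehop!kremvax", "othersite"))  # True
```

Neighbor authentication only vouches for the last hop; everything earlier in Path: is an unverifiable claim, which is exactly what the kremvax prank exploited.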
It was possible, barely, to have separate uucp logins for different sites, but apart from the overhead of managing them, it isn’t clear that rnews could have handled this properly.
But there’s a more subtle issue. Usenet messages were transmitted via a generic remote execution facility. The Usenet program on a given computer executed the Unix command
uux neighborsite!rnews

where neighborsite is the name of the next-hop computer on which the rnews command would be executed. (Before you ask: yes, the list of allowable remotely requested commands was very small; no, the security was not perfect. But that’s not the issue I’m discussing here.) The trouble is that any knowledgeable user on a site could issue the uux command; it wasn’t, and couldn’t easily be, restricted to authorized users. Anyone could have generated their own fake control messages, without regard to any authentication and sanity checks built into the Usenet interface. And yes, we knew that at the time.
Could uux have been secured? This is itself a complex question that I don’t want to go into now; please take it on faith and don’t try to argue about setgid(), wrapper programs, and the like. It was our judgment then—and my judgment now—that such solutions would not be adopted. The minor configuration change needed to make rnews an acceptable command for remote execution was a sufficiently high hurdle that we provided alternate mechanisms for sites that wouldn’t do it.
That left us with no good choices. The infrastructure for a cryptographic solution was lacking. The uux command rendered illusory any attempts at security via the Usenet programs themselves. We chose to do nothing. That is, we did not implement fake security that would give people the illusion of protection but not the reality.
This was the right choice.
But the story is more complex than that. It was the right choice in 1979, but not necessarily right later, for several reasons. The most important is that the online world in 1979 was very different from what it is now. For one thing, only a very few people had access to Usenet, mostly CS students and tech-literate employees of large, sophisticated companies, so the norms were to some extent self-enforcing: if someone went too far astray, their school or employer could come down on them. And we did anticipate that some people would misbehave.
As I mentioned in the previous post, our projections of participation and volume were very low. On the one hand, a large network has much more need for management, including ways to deal with people and traffic that violate the norms. On the other, simply as a matter of statistics, a large network will have at least proportionately more malefactors. Furthermore, the increasing democratization of access meant that there were people who were not susceptible to school or employer pressure.
B-news (which I’ll get to in a few days) did have control messages. They were necessary, useful—and abused. Spam messages were often countered by cancelbots, but of course cancelbots were not available only to the righteous. And online norms are not always what everyone wants them to be. The community was willing to act technically against the first large-scale spam outbreak, but other issues (a genuine neo-Nazi, posts to the misc.kids newsgroup by a member of NAMBLA, trolls on the soc.motss newsgroup, and more) were dealt with by social pressure. (I should note: the first neo-Nazi appeared on Usenet very early on. And no, I’m not being even slightly hyperbolic when I call him that, but I won’t give him more publicity by mentioning his name.)
There are several lessons here. One, of course, is that technical honesty is important. A second, though, is that the balance between security and functionality is not fixed; environments, and hence needs, change over time. B-news was around for a long time before cancel messages were used or abused on a large scale, and this widespread good behavior was not because the insecurity went unrecognized: when I had a job interview at Bell Labs in 1982, the first thing Dennis Ritchie said to me was "[B-news] is a tool of the devil!" A third lesson is that norms can matter, but the community as a whole has to decide how to enforce them.
There’s an amusing postscript to the public key cryptography issue. In 1979-1981, when the Usenet software was being written, there were no patents on public key cryptography, nor had anyone heard about export licenses for cryptographic technology. If we’d been a bit more knowledgeable or a bit smarter, we’d have shipped software with such functionality. The code would have been very widespread before any patents were issued, making enforcement very difficult. On the other hand, Tom, Jim, Steve Daniel (who wrote the first released version of the software) and I might have had some very unpleasant conversations with the FBI. But the world of online cryptography would almost certainly have been very different. It’s interesting to speculate on how things would have transpired if cryptography had been widely used in the early 1980s.
As I alluded to above, we did anticipate possible trouble. In fact, the original public announcement warned about this:
What about abuse of the network?
In general, it will be straightforward to detect when abuse has occurred and who did it. The uucp system, like UNIX, is not designed to prevent abuses of overconsumption. Experience will show what uses of the net are in fact abuses, and what should be done about them.
Certain abuses of the net can be serious indeed. As with ordinary abuses, they can be thought about, looked for, and even programmed against, but only experience will show what matters. Uucp provides some measure of protection. It runs as an ordinary user, and has strict access controls. It is safe to say that it poses no greater threat than that inherent in a call-in line.
Who would be responsible when something bad happens?
Not us! And we do not intend that any innocent bystander be held liable either. We are looking into this matter. Suggestions are solicited.
It seems, though, that we were worried about other abuses as well. The announcement mentions overconsumption of resources as a risk; we knew of that from an article we had seen by Dennis Ritchie in the Bell System Technical Journal. Quoting him:
The weakest area is in protecting against crashing, or at least crippling, the operation of the system. Most versions lack checks for overconsumption of certain resources, such as file space, total number of files, and number of processes (which are limited on a per-user basis in more recent versions). Running out of these things does not cause a crash, but will make the system unusable for a period. When resource exhaustion occurs, it is generally evident what happened and who was responsible, so malicious actions are detectable, but the real problem is the accidental program bug.

Note the similarity between our "it will be straightforward…" and Ritchie’s conclusion.
The bottom line, though, was that we really did not know what to do, nor even what sorts of problems would actually occur. I personally did worry about security to some extent—I actually caught my first hackers around 1971, when some activity generated a console message and I went and examined the punch cards(!) for the program involved—but it wasn’t in any sense my primary focus. That said, when Morris and Thompson’s famous paper on passwords appeared, I coded up a quick-and-dirty password guesser and informed some people about how bad their passwords were. (One answer I received: "It’s not a problem that my password is ’abscissa’; no one else can spell it." Umm…) We would have received that issue of Communications of the ACM around the time that Usenet was being invented, but I do not recall when we saw it.
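The guesser itself was nothing sophisticated; a dictionary attack of that kind reduces to a few lines. Here is a sketch in modern Python, substituting a toy SHA-256 hash for the crypt(3) of the era; the usernames, passwords, and wordlist are all invented for illustration:

```python
import hashlib

def toy_hash(pw: str) -> str:
    # Stand-in for the Unix crypt() of the era (illustrative only; no salt).
    return hashlib.sha256(pw.encode()).hexdigest()

def guess(shadow: dict[str, str], wordlist: list[str]) -> dict[str, str]:
    """Try each dictionary word against each stored hash; return what cracks."""
    cracked = {}
    for user, h in shadow.items():
        for word in wordlist:
            if toy_hash(word) == h:
                cracked[user] = word
                break
    return cracked

# A user who thought "abscissa" was safe because no one else could spell it:
shadow = {"mathguy": toy_hash("abscissa"), "careful": toy_hash("x9$kQ2!f")}
wordlist = ["password", "abscissa", "secret"]
print(guess(shadow, wordlist))  # {'mathguy': 'abscissa'}
```

As Morris and Thompson observed, the weakness is not the hash but the predictability of what people choose; any word in a dictionary falls immediately, no matter how hard it is to spell.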
Were we worried about trolling and other forms of online misbehavior? From a vantage point of 40 years, it’s hard to say. As mentioned earlier, we did anticipate people posting things like used car ads to inappropriate places. Of course, there was no way to anticipate what Usenet would become in just a few short years.
Here is the table of contents, actual and projected, for this series.
- The Early History of Usenet: Prologue
- The Technological Setting
- Hardware and Economics
- File Format
- Implementation and User Experience
- Authentication and Norms
- The Public Announcement
- Usenet Growth and B-news
- The Great Renaming
- Retrospective Thoughts
The tag URL https://www.cs.columbia.edu/~smb/blog/control/tag_index.html#TH_Usenet_history will always take you to an index of all blog posts on this topic.