Crypto War III: Assurance

24 March 2018

For decades, academics and technologists have sparred with the government over access to cryptographic technology. In the 1970s, when crypto started to become an academic discipline, the NSA was worried, fearing that they'd lose the ability to read other countries' traffic. And they acted. For example, they exerted pressure to weaken DES, a campaign described in a declassified NSA document (Thomas R. Johnson, American Cryptology during the Cold War, 1945-1989: Book III: Retrenchment and Reform, 1972-1980, p. 232).

(For my take on other activity during the 1970s, see some class slides.)

The Second Crypto War, in the 1990s, is better known today, with the battles over the Clipper Chip, export rules, etc. I joined a group of cryptographers in criticizing the idea of key escrow as insecure. When the Clinton administration dropped the idea and drastically restricted the scope of export restrictions on cryptographic technology, we thought the issue was settled. We were wrong.

In the last several years, the issue has heated up again. A news report today says that the FBI is resuming the push for access:

F.B.I. and Justice Department officials have been quietly meeting with security researchers who have been working on approaches to provide such "extraordinary access" to encrypted devices, according to people familiar with the talks.

Based on that research, Justice Department officials are convinced that mechanisms allowing access to the data can be engineered without intolerably weakening the devices' security against hacking.

I'm as convinced as ever that "exceptional access"—a neutral term, as opposed to "back doors", "GAK" (government access to keys), or "golden keys", and first used in a National Academies report—is a bad idea. Why? Why do I think that the three well-respected computer scientists mentioned in the NY Times article (Stefan Savage, Ernie Brickell, and Ray Ozzie) who have proposed schemes are wrong?

I can give my answer in one word: assurance. When you design a security system, you want to know that it will work correctly, despite everything adversaries can do. In my view, cryptographic mechanisms are so complex and so fragile that tinkering with them to add exceptional access seriously lowers their assurance level, enough so that we should not have confidence that they will work correctly. I am not saying that these modified mechanisms will be insecure; rather, I am saying that we should not be surprised if and when that happens.

History bears me out. Some years ago, a version of the Pretty Good Privacy system that was modified to support exceptional access could instead give access to attackers. The TLS protocol, which is at the heart of web encryption, had a flaw that is directly traceable to the 1990s requirement for weaker, export grade cryptography. That's right: a 1994 legal mandate—one that was abolished in 2000—led to a weakness that was still present in 2015. And that's another problem: cryptographic mechanisms have a very long lifetime. In this case, the issue was something known technically as a "downgrade attack", where an intruder in the conversation forces both sides to fall back to a less secure variant. We no longer need export ciphers and hence have no need even to negotiate their use—but the protocol still supported that negotiation, and in an insecure fashion. Bear in mind that TLS has been proven secure mathematically—and it still had this flaw.
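The downgrade mechanism is easy to sketch. Here is a toy model of my own (not real TLS; the suite names and strength rankings are invented) showing how an in-path attacker can force negotiation down to the weakest suite both sides still accept:

```python
# Toy model of a protocol downgrade attack. The cipher-suite names and
# strength values are made up for illustration; real TLS is far more complex.
STRENGTH = {"EXPORT-RSA-512": 1, "RSA-1024": 2, "STRONG-AEAD": 3}

def negotiate(client_offer, server_supported):
    """Server picks the strongest suite both sides support."""
    common = [s for s in client_offer if s in server_supported]
    if not common:
        raise ValueError("no common cipher suite")
    return max(common, key=STRENGTH.get)

def mitm_downgrade(client_offer):
    """An in-path attacker rewrites the offer to contain only the weakest suite."""
    return [min(client_offer, key=STRENGTH.get)]

client = ["STRONG-AEAD", "RSA-1024", "EXPORT-RSA-512"]
server = {"STRONG-AEAD", "RSA-1024", "EXPORT-RSA-512"}

honest = negotiate(client, server)                      # → "STRONG-AEAD"
attacked = negotiate(mitm_downgrade(client), server)    # → "EXPORT-RSA-512"
print(honest, attacked)
```

The lesson of the sketch: as long as the server is still willing to accept the weak suite at all, the attacker wins, which is why a mandate that expired in 2000 could still bite in 2015.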

There are thus many reasons to be skeptical about not just the new proposals mentioned in the NY Times article but about the entire concept of exceptional access. In fact, a serious flaw has been found in one of the three. Many cryptographers, including myself, had seen the proposal—but someone else, after hearing a presentation about it for the first time, found a problem in about 15 minutes. This particular flaw may be fixable, but will the fix be correct? I don't think we have any way of knowing: cryptography is a subtle discipline.

So: the risk we take by mandating exceptional access is that we may never know if there's a problem lurking. Perhaps the scheme will be secure. Perhaps it will be attackable by a major intelligence agency. Or perhaps a street criminal who has stolen the phone or a spouse or partner will be able to exploit it, with the help of easily downloadable software from the Internet. We can't know for sure, and the history of the field tells us that we should not be sanguine. Exceptional access may create far more problems than it solves.

Ed Felten to be Named as a PCLOB Board Member

13 March 2018

Today, the White House announced that Ed Felten would be named as a member of the Privacy and Civil Liberties Oversight Board. That's wonderful news, for many reasons.

First, Ed is superbly qualified for the job. He not only has deep knowledge of technology, he understands policy and how Washington works. Second, there really are important technical issues in PCLOB's work—that's why I spent a year there as their first Technology Scholar.

But more importantly, Ed's appointment is a sign that computer science technical expertise is finally being valued in Washington at the policy level. My role was very explicitly not to set or opine on policy; rather, I looked at the technical aspects and explained to the staff and the Board what I thought they implied. The Board, though, made the policy decisions.

Ed will now have a voice at that level. That's good, partly because he is, as I said, so very well qualified, but also because he will likely be the first of many technologists in such roles. For years, I and many others have been calling for such appointments. I'm glad that one has finally happened.

Please Embed Bibliographic Data in Online Documents

7 March 2018

When I teach, I assign a lot of primary sources—technical papers, but also (especially in courses like Computers and Society) news stories. And when I assign something, I have to do laborious copying and pasting: I ask my students to use complete bibliography entries, rather than just URLs, so I do the same. Why? Among other things, "link rot": URLs are rarely good for more than a few years, save at places that have seriously thought through their naming scheme and made a commitment to stick to it.

Being the sort of person I am, I use scripts to generate my class syllabus pages. Since I already have copious BibTeX files, I use bibtex2html to generate (most of) the readings for each class. And therein lies the rub: I want all "archival" files—journal or conference paper PDFs, articles from major newspapers (e.g., the New York Times), etc.—to include machine-readable metadata. The HTML file should, by itself, be self-identifying to scholars (or at least to scholars with the right tools….). I don't care about the format chosen; I just want a single one that I can parse with a rational Python script.
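For concreteness, the kind of entry I mean looks like this (a made-up record; every field here is illustrative, not an actual citation):

```bibtex
@article{reporter18:example,
  author  = {Jane Q. Reporter},
  title   = {An Illustrative Headline},
  journal = {The Example Gazette},
  year    = {2018},
  month   = mar,
  note    = {\url{https://example.com/...}}
}
```

Everything in that record could be emitted mechanically by the publisher; the point is that today it has to be assembled by hand, per article, per site.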

This isn't a new concept. Most books published in recent years in the US contain Library of Congress cataloging information. Web pages and academic papers should, too. And there are plenty of standards to choose from; ideally, pick one.

For now, I've written scripts for two of the sites I cite the most, the New York Times and Ars Technica. They have most of the right information, but it's not in the same format. Ars Technica, for example, puts the interesting stuff in a single HTML tag, but in JSON format inside the tag. The New York Times uses a bunch of separate tags. I tried writing a variant for the Washington Post; as best I can tell, most of the information I need is there, but the reporters' names are in some JavaScript assignment statements.
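To give a flavor of what such a scraper looks like, here is a simplified sketch using only the Python standard library. The meta-tag names are common conventions (Open Graph and the like), not any particular site's actual markup, and the sample page is invented:

```python
# Sketch of extracting citation metadata from an article's <meta> tags.
# Tag names below are common conventions (Open Graph etc.), not any
# specific publisher's real markup -- which is exactly the problem:
# every site differs, so every site needs its own scraper.
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    WANTED = {"og:title": "title", "article:author": "author",
              "article:published_time": "date", "og:site_name": "publisher"}

    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        key = a.get("property") or a.get("name")
        if key in self.WANTED and "content" in a:
            self.fields[self.WANTED[key]] = a["content"]

page = """<html><head>
<meta property="og:title" content="An Example Headline">
<meta property="article:author" content="Jane Q. Reporter">
<meta property="article:published_time" content="2018-03-07">
<meta property="og:site_name" content="Example Gazette">
</head></html>"""

scraper = MetaScraper()
scraper.feed(page)
print(scraper.fields)
```

From there it's a short step to emitting a BibTeX entry; the hard part is that each publisher hides the fields somewhere different.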

I'm trying to do my part. My own web site has .bib entries for all of my papers, and I'm rewriting my blog software to generate similar files for each blog post. (Not, I think, that anyone but me has ever formally cited my blog…)

I'm not a librarian or archivist, but if I'm seeing this problem, I suspect that the pros are seeing it even more. And maybe I'm wrong, and there are standards that the New York Times is following—but in that case, can others please follow suit? The future will thank you.

Usenet, Authentication, and Engineering (or: Early Design Decisions for Usenet)

23 February 2018

A Twitter thread on trolls brought up mention of trolls on Usenet. The reason they were so hard to deal with, even then, has some lessons for today; besides, the history is interesting. (Aside: this is, I think, the first longish thing I've ever written about any of the early design decisions for Usenet. I should note that this is entirely my writing, and memory can play many tricks across nearly 40 years.)

A complete tutorial on Usenet would take far too long; let it suffice for now to say that in the beginning, it was a peer-to-peer network of multiuser time-sharing systems, primarily interconnected by dial-up 300 bps and 1200 bps modems. (Yes, I really meant THREE HUNDRED BITS PER SECOND. And some day, I'll have the energy to describe our home-built autodialers—I think that the statute of limitations has expired…) Messages were distributed via a flooding algorithm. Because these time-sharing systems were relatively big and expensive and because there were essentially no consumer-oriented dial-up services then (even modems and dumb terminals were very expensive), if you were on Usenet it was via your school or employer. If there was abuse, pressure could be applied that way—but it wasn't always easy to tell where a message had originated—and that's where this blog post really begins: why didn't Usenet authenticate requests?
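The flooding algorithm itself is simple enough to sketch. A toy simulation (site names and topology invented; the real implementation batched articles over dial-up links):

```python
# Toy Usenet-style flooding: each site relays a new article to all of its
# neighbors, which relay it onward; a site that has already seen the
# article's Message-ID drops the duplicate. Names and links are made up.
TOPOLOGY = {
    "siteA": ["siteB", "siteC"],
    "siteB": ["siteA", "siteD"],
    "siteC": ["siteA"],
    "siteD": ["siteB"],
}

def flood(site, msg_id, seen=None):
    """Deliver an article starting at `site`; return the set of recipients."""
    if seen is None:
        seen = set()
    if site in seen:            # already have this Message-ID: drop it
        return seen
    seen.add(site)
    for neighbor in TOPOLOGY[site]:
        flood(neighbor, msg_id, seen)
    return seen

print(sorted(flood("siteA", "<1@siteA>")))  # → ['siteA', 'siteB', 'siteC', 'siteD']
```

Note what the algorithm does not do: nothing here checks who injected the article in the first place, which is the question the rest of this post is about.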

We did understand the need for authentication. Without it, there was no way to perform control functions, such as deleting articles. We needed site authentication; as will be seen later, we needed user authentication as well. But how could this be done?

The obvious solution was something involving public key cryptography, which we (the original developers of the protocol: Tom Truscott, the late Jim Ellis, and myself) knew about: all good geeks at the time had seen Martin Gardner's "Mathematical Games" column in the August 1977 issue of Scientific American (paywall), which explained both the concept of public key cryptography and the RSA algorithm. For that matter, Rivest, Shamir, and Adleman's technical paper had already appeared; we'd seen that, too. In fact, we had code available: the xsend command for public key encryption and decryption, which we could have built upon, was part of 7th Edition Unix, and that's what Usenet ran on.
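The RSA mechanism that column described fits in a few lines of modern Python. This is textbook RSA with tiny, wildly insecure parameters, for illustration only; no padding, no key authentication, none of what real deployments need:

```python
# Textbook RSA with toy parameters -- utterly insecure at this size,
# shown only to illustrate the mechanism that was public by 1977.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m = 65
c = encrypt(m)
print(c, decrypt(c))  # → 2790 65
```

Knowing the mechanism, though, was the easy part; as the next paragraphs explain, the missing pieces were key distribution and parameter choices.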

What we did not know was how to authenticate a site's public key. Today, we'd use a certificate issued by a certificate authority. Certificates had been invented by then, but we didn't know about them, and of course there were no search engines to come to our aid. (Manual finding aids? Sure—but apart from the question of whether or not any accessible to us would have indexed bachelor's theses, we'd have had to know enough to even look. The RSA paper gave us no hints; it simply spoke of a "public file" or something like a phone book. It did speak of signed messages from a "computer network"—scare quotes in the original!—but we didn't have one of those except for Usenet itself. And a signed message is not a certificate.) Even if we had known, there were no certificate authorities, and we certainly couldn't create one along with creating Usenet.

Going beyond that, we did not know the correct parameters: how long a key to use (the estimates in the early papers were too low), what was secure (the xsend command used an algorithm that was broken a few years later), etc. Maybe some people could have made good guesses. We did not know and knew that we did not know.

The next thing we considered was neighbor authentication: each site could, at least in principle, know and authenticate its neighbors, due to the way the flooding algorithm worked. That idea didn't work, either. For one thing, it was trivial to impersonate a site that appeared to be further away. Every Usenet message contains a Path: line; someone trying to spoof a message would simply have to claim to be a few hops away. (This is how the famous kremvax prank worked.)
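A toy illustration of why Path:-based checks fail (invented site names aside from kremvax; this is the essence of the trick):

```python
# The Path: header is just text that the sender controls. A relaying site
# prepends its own name -- but a local forger can type the same thing by
# hand and claim any number of fake upstream hops. Names are invented.
def relay(message, my_site):
    """Each relaying site prepends its own name to the Path header."""
    message["Path"] = my_site + "!" + message["Path"]
    return message

genuine = relay({"Path": "kremvax"}, "siteB")   # article really relayed once
forged = {"Path": "siteB!kremvax"}              # same header, typed by hand

print(genuine == forged)  # → True
```

Since the two messages are byte-for-byte indistinguishable, a neighbor can only vouch for the hop it received the article from, not for where the article actually began.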

But there's a more subtle issue. Usenet messages were transmitted via a generic remote execution facility. The Usenet program on a given computer executed the Unix command

uux neighborsite!rnews
where neighborsite is the name of the next-hop computer on which the rnews command would be executed. (Before you ask: yes, the list of allowable remotely requested commands was very small; no, the security was not perfect. But that's not the issue I'm discussing here.) The trouble is that any knowledgeable user on a site could issue the uux command; it wasn't and couldn't easily be restricted to authorized users. Anyone could have generated their own fake control messages, without regard to any authentication and sanity checks built into the Usenet interface. (Could uux have been secured? This is itself a complex question that I don't want to go into now; please take it on faith and don't try to argue about setgid(), wrapper programs, and the like. It was our judgment then—and my judgment now—that such solutions would not be adopted. The minor configuration change needed to make rnews an acceptable command for remote execution was a sufficiently high hurdle that we provided alternate mechanisms for sites that wouldn't do it.)

That left us with no good choices. The infrastructure for a cryptographic solution was lacking. The uux command rendered illusory any attempts at security via the Usenet programs themselves. We chose to do nothing. That is, we did not implement fake security that would give people the illusion of protection but not the reality.

This was the right choice.

But the story is more complex than that. It was the right choice in 1979 but not necessarily right later, for several reasons. The most important is that the online world in 1979 was very different than it is now. For one thing, since only a very few people had access to Usenet—mostly CS students and tech-literate employees of large, sophisticated companies—the norms were to some extent self-enforcing: if someone went too far astray, their school or employer could come down on them. For another, our projections of participation and volume were very low. In my most famous error, I projected that Usenet would grow to 50-100 sites, and 1-2 articles a day, ever. The latest figures, per Wikipedia, put traffic at about 74 million posts per day, totaling more than 37 terabytes. (I suppose it's an honor to be off by seven orders of magnitude—not many people help create a system that's successful enough to have a chance at such a lack of foresight!) On the one hand, a large network has much more need for management, including ways to deal with people and traffic that violate the norms. On the other, simply as a matter of statistics a large network will have at least proportionately more malefactors. Furthermore, the increasing democratization of access meant that there were people who were not susceptible to school or employer pressure.

Traffic volume was the immediate driver for change. B-news came along in 1981, only a year or so after the original A-news software was released. B-news did have control messages. They were necessary, useful—and abused. Spam messages were often countered by cancelbots, but of course cancelbots were not available only to the righteous. And online norms are not always what everyone wants them to be. The community was willing to act technically against the first large-scale spam outbreak, but other issues—a genuine neo-Nazi, posts to the newsgroup by a member of NAMBLA, trolls on the soc.motss newsgroup, and more—were dealt with by social pressure.

There are several lessons here. One, of course, is that technical honesty is important. A second, though, is that the balance between security and functionality is not fixed—environments and hence needs change over time. B-news was around for a long time before cancel messages were used or abused on a large scale, and this mass good behavior was not because the insecurity wasn't recognized: when I had a job interview at Bell Labs in 1982, the first thing Dennis Ritchie said to me was "[B-news] is a tool of the devil!" A third lesson is that norms can matter, but that the community as a whole has to decide how to enforce them.

There's an amusing postscript to the public key cryptography issue. In 1979-1981, when the Usenet software was being written, there were no patents on public key cryptography nor had anyone heard about export licenses for cryptographic technology. If we'd been a bit more knowledgeable or a bit smarter, we'd have shipped software with such functionality. The code would have been very widespread before any patents were issued, making enforcement very difficult. On the other hand, Tom, Jim, Steve Daniel (who wrote the first released version of the software—my code, originally a Bourne shell script that I later rewrote in C—was never distributed beyond UNC and Duke) and I might have had some very unpleasant conversations with the FBI. But the world of online cryptography would almost certainly have been very different. It's interesting to speculate on how things would have transpired if cryptography had been widely used in the early 1980s.

Meltdown and Spectre: Security is a Systems Property

4 January 2018

I don't (and probably won't) have anything substantive to say about the technical details of the just-announced Meltdown and Spectre attacks. (For full technical details, go here; for an intermediate-level description, go here.) What I do want to stress is that these show, yet again, that security is a systems property: being secure requires that every component, including ones you've never heard of, be secure. These attacks depend on hardware features such as "speculative execution" (someone I know said that that sounded like something Stalin did), "cache timing", and the "translation lookaside buffer"—and no, many computer programmers don't know what those are, either. Furthermore, the interactions between components need to be secure, too.
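The cache-timing building block these attacks rely on can at least be illustrated with a simulation. This is pure make-believe in Python, with invented cycle counts; nothing here touches real hardware or real speculative execution:

```python
# Simulated cache-timing side channel. The "victim" touches one cache
# line selected by a secret; the "attacker" then times accesses to every
# line and infers the secret from which one is fast. Timings are made up.
CACHE_HIT, CACHE_MISS = 1, 100   # pretend cycle counts

def run_victim(secret, cache):
    cache.add(secret)            # the victim's access pulls a line into cache

def probe(cache, n_lines=8):
    """Attacker 'times' each line; the fastest one reveals the secret index."""
    times = [CACHE_HIT if i in cache else CACHE_MISS for i in range(n_lines)]
    return times.index(min(times))

cache = set()
run_victim(5, cache)
print(probe(cache))  # → 5
```

The real attacks are enormously more intricate (speculation, eviction, noise), but the punch line is the same: the secret leaks through timing, a channel the instruction-set architecture never promised to close.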

Let me give an example of that last point. These two attacks are only exploitable by programs running on your own computer: a hacker probing from the outside can't directly trigger them. Besides, since the effect of the flaws is to let one program read the operating system's memory, single-user computers, i.e., your average home PC or Mac, would seem to be unaffected; the only folks who have to worry are the people who run servers, especially cloud servers. Well, no.

Most web browsers support a technology called JavaScript, which lets the web site you're visiting run code on your computer. For Spectre, "the Google Chrome browser… allows JavaScript to read private memory from the process in which it runs". In other words, a malicious web site can exploit this flaw. And the malice doesn't have to be on the site you're visiting; ads come from third-party ad brokers.

In other words, your home computer is vulnerable because of (a) a hardware design flaw; (b) the existence of JavaScript; and (c) the economic ecosystem of the web.

Security is a systems property…

Bitcoin—The Andromeda Strain of Computer Science Research

30 December 2017

Everyone knows about Bitcoin. Opinions are divided: it's either a huge bubble, best suited for buying tulip bulbs, or, as one Twitter user rather hyperbolically expressed it, "the most important application of cryptography in human history". I personally am in the bubble camp, but I think there's another lesson here, on the difference between science and engineering. Bitcoin and the blockchain are interesting ideas that escaped the laboratory without proper engineering—and it shows.

Let's start with the upside. Bitcoin was an impressive intellectual achievement. Digital cash has been around since Chaum, Fiat, and Naor's 1988 paper. There have been many other schemes since then, with varying properties. All of the schemes had one thing in common, though: they relied on a trusted party, i.e., a bank.

Bitcoin was different. "Satoshi Nakamoto" conceived of the block chain, a distributed way to keep track of coins, spending, etc. Beyond doubt, his paper would have been accepted at any top cryptography or privacy conference. It was never submitted, though. Why not? Without authoritative statements directly from "Nakamoto", it's hard to say; my own opinion is that it originated from the anarchist libertarian wing of the cypherpunk movement. Cypherpunks believe in better living through cryptography; a privacy-preserving financial mechanism that is independent of any government fulfilled one of the ideals of the libertarian anarchists. (Some of them seemed to believe that the existence of such a mechanism would inherently cause governments to disappear. I don't know why they believed this, or why they thought it was a good idea, but the attitude was unmistakable.) In any event, they were more interested in running code than in academic credit.
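The core data structure is simple to sketch. A minimal illustration of a hash-chained ledger (no proof of work, no network, no signatures, no mining; everything interesting about the real system is omitted):

```python
# Minimal sketch of the block chain idea: each block commits to its
# predecessor's hash, so tampering with history invalidates every later
# block. Transactions here are just strings; real systems sign them.
import hashlib
import json

def make_block(prev_hash, transactions):
    body = {"prev": prev_hash, "txs": transactions}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"prev": prev_hash, "txs": transactions, "hash": digest}

def valid_chain(chain):
    for prev, blk in zip(chain, chain[1:]):
        recomputed = make_block(blk["prev"], blk["txs"])["hash"]
        if blk["prev"] != prev["hash"] or blk["hash"] != recomputed:
            return False
    return True

genesis = make_block("0" * 64, ["coinbase"])
b1 = make_block(genesis["hash"], ["alice pays bob 1"])
chain = [genesis, b1]
print(valid_chain(chain))               # → True
b1["txs"] = ["alice pays mallory 1"]    # tamper with history...
print(valid_chain(chain))               # → False
```

What makes Bitcoin novel is not this structure but the distributed consensus on which chain is authoritative; that's the part with the engineering consequences discussed below.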

So what went wrong? What happened to a system designed as an alternative to, e.g., credit cards, where the "cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions"? Instead, today the Bitcoin network is overloaded, leading to high transaction costs. The answer is a lack of engineering.

When you engineer a system for deployment you build it to meet certain real-world goals. You may find that there are tradeoffs, and that you can't achieve all of your goals, but that's normal; as I've remarked, "engineering is the art of picking the right trade-off in an overconstrained environment". For any computer-based financial system, one crucial parameter is the transaction rate. For a system like Bitcoin, another goal had to be avoiding concentrations of power. And of course, there's transaction privacy.

There are less obvious factors, too. These days, "mining" for Bitcoins requires a lot of computations, which translates directly into electrical power consumption. One estimate is that the Bitcoin network uses up more electricity than many countries. There's also the question of governance: who makes decisions about how the network should operate? It's not a question that naturally occurs to most scientists and engineers, but production systems need some path for change.
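The power estimate is just arithmetic. A back-of-envelope sketch, in which every input number is a deliberately made-up assumption (real hash rates and hardware efficiencies change constantly; plug in current figures to get a current answer):

```python
# Back-of-envelope mining power draw. BOTH INPUTS ARE ILLUSTRATIVE
# ASSUMPTIONS, not measurements of the actual network.
hash_rate_ths = 20_000_000   # assumed network hash rate, in TH/s
joules_per_th = 100          # assumed miner efficiency, in joules per TH

watts = hash_rate_ths * joules_per_th             # instantaneous draw, W
joules_per_year = watts * 3600 * 24 * 365
twh_per_year = joules_per_year / 3.6e15           # 1 TWh = 3.6e15 J
print(round(twh_per_year, 1))  # → 17.5
```

With these invented inputs the network draws on the order of tens of terawatt-hours per year, which is indeed the scale of a small country's electricity consumption; the structure of the calculation matters more than the particular numbers.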

In all of these, Bitcoin has failed. The failures weren't inevitable; there are solutions to these problems in the academic literature. But Bitcoin was deployed by enthusiasts who in essence let experimental code escape from a lab to the world, without thinking about the engineering issues—and now they're stuck with it. Perhaps another, better cryptocurrency can displace it, but it's always much harder to displace something that exists than to fill a vacuum.