Keys under the Doormat

7 July 2015

To those of us who have worked on crypto policy, the 1990s have become known as the Crypto Wars. The US government tried hard to control civilian use of cryptography. They tried to discourage academic research, restricted exports of cryptographic software, and—most memorably—pushed something called "escrowed encryption", a scheme wherein the government would have access to the short-term keys used to encrypt communications or stored files.

The technical community pushed back against all of these initiatives. (One side-effect was that it got a number of computer scientists, including me, professionally involved in policy issues.) Quite apart from privacy and civil liberties issues, there were technical issues: we needed strong cryptography to protect the Internet, compatibility meant that it had to be available world-wide, and simplicity was critical. Why? Most security problems are due to buggy code; increasing the complexity of a system always increases the bug rate.

Eventually, the government gave up. The need for strong crypto had become increasingly obvious, non-US companies were buying non-US products—and no one wanted escrowed encryption. Apart from the fact that it didn't do the job, it did increase complexity, as witnessed by the failure of one high-profile system. There were many papers and reports on the subject; I joined a group of very prominent security and cryptography experts (besides me, Hal Abelson, Ross Anderson, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, and Bruce Schneier) that wrote one in 1997.

The question of strong cryptography appeared to be settled 15 years ago—but it wasn't. Of late, FBI director James Comey has issued new calls for some sort of mandatory government access to plaintext; so has UK Prime Minister David Cameron. In fact, the push is stronger this time around; in the 1990s, the government denied any intention of barring unescrowed encryption. Now, they're insisting that their way is the only way. (President Obama hasn't committed to either side of the debate.)

It's still a bad idea. The underlying problem of complexity hasn't gone away; in fact, it's worse today. We're doing a lot more with cryptography, so the bypasses have to be more complex and hence riskier. There are also more serious problems of jurisdiction; technology and hence crypto are used in far more countries today than 20 years ago. Accordingly, the same group plus a few more (Matthew Green, Susan Landau, Michael Specter, and Daniel J. Weitzner) have written a new report. Our overall message is the same: deliberately weakening security systems is still a bad idea.

Section 4 is especially important. It has a list of questions that proponents of these schemes need to answer before opponents can make specific criticisms. In other words, "ignore this report; that isn't what we're suggesting" can't be used as a counterargument until the public is given precise details.

Facebook and PGP

2 June 2015

Facebook just announced support for PGP, an encrypted email standard, for email from them to you. It's an intriguing move on many levels, albeit one that raises some hard questions. The answers, and Facebook's possible follow-on moves, are even more interesting.

The first question, of course, is why Facebook has done this. It will only appeal to a very small minority of users. Using encrypted email is not easy. Very few people have ever created a PGP key pair; many who have done so have never used it, or simply used it once or twice and forgotten about it. I suspect that a significant number of people (a) will try to upload their private keys instead of their public keys; (b) will upload the right key, only to discover that they no longer remember the strong password they used to protect their private key; (c) will realize that they created their key pair three computers ago and no longer have PGP installed; or (d) more than one of the above.
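
For the curious, here's roughly what that key pair creation step looks like in code. This is a minimal sketch using the third-party python-gnupg wrapper, which drives a locally installed gpg binary; the name, email address, and passphrase are made up.

```python
# A throwaway keyring, a fresh key pair, and an export of the PUBLIC
# half, the half that belongs in Facebook's profile field. Assumes the
# third-party python-gnupg wrapper and a locally installed gpg binary;
# the name, email, and passphrase are made up.
import tempfile
import gnupg

gpg = gnupg.GPG(gnupghome=tempfile.mkdtemp())

params = gpg.gen_key_input(
    key_type="RSA",
    key_length=2048,
    name_real="Alice Example",          # hypothetical user
    name_email="alice@example.com",
    passphrase="correct horse battery staple",
)
key = gpg.gen_key(params)

# export_keys() without secret=True returns the ASCII-armored public
# key; uploading the private key instead is exactly pitfall (a) above.
public_key = gpg.export_keys(key.fingerprint)
print(public_key[:60], "...")
```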

The nasty cynical part of me thinks it's an anti-Google measure; if email to users is encrypted, gmail won't be able to read it. It's a delightfully Machiavellian scheme, but it makes no sense; far too few people are likely to use it. Unless, of course, they plan to make encrypted email easier to use? That brings up the second question: what will Facebook do to make encryption easier to use?

Facebook is, of course, one of the tech titans. They have some really sharp people, and of course they have the money to throw at the problem. Can they find a way to make PGP easy to use? That encompasses a wide range of activities: composing encrypted and/or signed email, receiving it and immediately realizing its status, being able to search encrypted messages—and doing all this without undue mental effort. Even for sophisticated users, it's really easy to make operational mistakes with encrypted email, mistakes that gut the security. To give just one example, their announcement says that if "encrypted notifications are enabled, Facebook will sign outbound messages using our own key to provide greater assurance that the contents of inbound emails are genuine." This could protect against phishing attacks against Facebook, but if and only if people notice when they've received unsigned email purporting to be from Facebook. Can this work? I'm dubious—no one has ever solved that problem for Web browsers—but maybe they can pull it off.
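
To make the verification burden concrete, here's a minimal sketch of what a mail client would have to do for every message claiming to come from Facebook. It assumes the python-gnupg wrapper; the file names and key are placeholders, not Facebook's actual code or key.

```python
# Import Facebook's published signing key once, then verify each inbound
# message. Assumes python-gnupg; file names and key are placeholders.
import gnupg

gpg = gnupg.GPG()

with open("facebook-notifications.asc") as f:
    gpg.import_keys(f.read())

with open("inbound-message.asc", "rb") as f:
    verified = gpg.verify_file(f)

if verified.valid:
    print("signed by", verified.username, verified.fingerprint)
else:
    # The hard human-factors part: will users notice, or care, when an
    # unsigned "Facebook" message shows up?
    print("unsigned or bad signature; treat as suspect")
```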

The third big question is mobile device support. As Facebook itself says, "public key management is not yet supported on mobile devices; we are investigating ways to enable this." Their target demographic lives on mobile devices, but there is not yet good support for PGP on iOS or Android. There are outboard packages available for both platforms, but those aren't likely to be usable by most people. Google has announced plans for GPG support for gmail on Chrome; it would be nice if they added such support to the built-in Android mailer as well. (Oh yes—how do you get the same key pair on your mobile device as on your laptop or desktop?)

The last and most interesting question is why they opted for PGP instead of S/MIME. While there are lots of differences in message formats and the like, the most important is how the certificates are signed and hence what the trust model is. It's a subtle question but utterly vital—and if Facebook does the right things here, it will be a very big boost to efforts to deploy encrypted email far more widely.

One of the very hardest technical things about cryptography (other than the user interface, of course) is how to get the proper keys. That is, if you want to send me encrypted email, how do you get my public key, rather than the public key of some other Steven Bellovin or a fake key that the NSA or the FSB created that claims to be mine? (I've posted my actual PGP key, but of course it could be replaced by someone who hacked the Columbia University Computer Science Department web server.) PGP and S/MIME have very different answers to the question of assuring that a retrieved key is genuine. With PGP, anyone can sign someone else's certificate, thus adding their attestation to the claim that some particular key is really associated with a particular person. Of course, this is an unstructured process, and a group of nasty people could easily create many fake identities that all vouch for each other. Still, it all starts with individuals creating key pairs for themselves. If they want, they can then upload the public key to Facebook even if no one has signed it.
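
Here's a small illustration of that key-retrieval problem, again assuming python-gnupg. Fetching a key from a keyserver is the easy part; the crucial step is comparing the fingerprint against one you obtained through some independent channel. The keyserver name is real enough, but the fingerprint below is a placeholder, not mine.

```python
# Fetching a key is the easy part; the crucial step is checking the
# fingerprint against one learned through an independent channel.
# Assumes the third-party python-gnupg wrapper and a local gpg binary;
# the fingerprint below is a placeholder, not anyone's real key.
import gnupg

gpg = gnupg.GPG()

EXPECTED_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

result = gpg.recv_keys("keyserver.ubuntu.com", EXPECTED_FPR)
print(result.count, "key(s) fetched")

if any(k["fingerprint"] == EXPECTED_FPR for k in gpg.list_keys()):
    print("fingerprint matches what you verified out of band")
else:
    print("mismatch: wrong Steven Bellovin, or a fake key")
```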

By contrast, S/MIME keys have to be signed by a certificate authority (CA) trusted by all parties. Still, in many ways, S/MIME is a more natural choice. It's supported by vendor-supplied mailers on Windows, Macs, and iToys (though not by the standard Android mailer). Facebook is big enough that it could become a CA. They already know enough about people that they've inherently solved one of the big challenges for an online CA: how do you verify someone's claim to a particular name? At the very least, Facebook could say "this key is associated with this Facebook account". No other company can do this, not even Google.
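
For contrast, here's a minimal sketch of where trust comes from in the S/MIME model, using the third-party Python cryptography package. A mailer looks at who issued the certificate and accepts it only if that issuer chains up to a CA it already trusts; if Facebook became a CA, its name would appear in the issuer field. The certificate file here is hypothetical.

```python
# Inspecting an S/MIME certificate with the third-party "cryptography"
# package. The certificate file is hypothetical; the point is where
# trust comes from: the issuer, not a web of individual signatures.
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("user-smime.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# A mailer accepts this certificate only if the issuer chains up to a
# CA it already trusts. If Facebook became a CA, its name would be here.
issuer = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)
subject_emails = cert.subject.get_attributes_for_oid(NameOID.EMAIL_ADDRESS)

print("issued by:", [a.value for a in issuer])
print("claims to belong to:", [a.value for a in subject_emails])
```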

This, then, is a possible future. Facebook could become a de facto CA, for PGP and/or S/MIME. It could sign certificates linked to Facebook accounts. It could make those certificates easily available. It could develop software—apps, desktop or laptop programs, what have you—that goes to Facebook to obtain other people's keys. The usability issues I outlined earlier would remain, but when it comes to certificate handling, Facebook has advantages that no one else has ever had. If this is the path they choose to go down, we could see a very large bump in the use of encrypted email.

Hacking: Users, Computers, and Systems

28 May 2015

As many people have heard, there's been a security problem at the Internal Revenue Service. Some stories have used the word hack; other people, though, have complained that nothing was hacked, that the only problem was unauthorized access to taxpayer data, albeit via authorized, intentionally built channels. The problem with this analysis is that it looks at security from far too narrow a perspective—and this narrowness is at the root of a lot of the security problems we have.

Fundamentally, there are three types of intrusion, which I'll dub "user", "computer", and "system". User intrusion is the easiest to understand: someone has gained illegitimate access to one or more user accounts. That's what's happened here: the authentication mechanisms were too weak to withstand a targeted attack on particular individuals' accounts. This is the kind of attack that the usual "pick a strong password" rules are designed to protect against, and the kind that two-factor authentication will protect against. Authentication failures are not the only way this can happen—there are things like cross-site scripting attacks—but in general, the impact is limited to some set of user accounts.
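
For readers who haven't seen the innards, here's a minimal sketch of one common second factor: a time-based one-time password (TOTP, RFC 6238), the scheme behind most authenticator apps. It uses only the Python standard library; the shared secret is a well-known demo value, not anything real.

```python
# Standard-library TOTP (RFC 6238), the algorithm behind most
# authenticator apps. The secret below is a well-known demo value.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, interval: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    # Both sides derive a counter from the current time...
    counter = struct.pack(">Q", int(time.time()) // interval)
    # ...and MAC it with the secret they shared once, at enrollment.
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A stolen password alone is no longer enough; the attacker also needs
# the current six-digit code.
print(totp("JBSWY3DPEHPK3PXP"))
```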

Computer intrusions are more serious: the attacker has gained the ability to access files and/or run code on one or more computers within a protected site. In general, someone who can do this can compromise many user accounts; it is thus strictly worse than a user intrusion. Generally speaking, this class of problem is caused by buggy software, though there are other paths for the attacker, such as social engineering or compromising an authorized user's credentials. (The attacker may even be an insider with legitimate access to the target machine.)

System intrusion is the most nebulous concept of the three, but often the most important. It can refer to almost any kind of security failure. Imagine, for example, that there was a magic URL that would cause Amazon to ship you some books, but without giving access to any of their computers and without billing any other customers. It's not a user intrusion, and it's not a computer intrusion—but the system, the collection of computers, people, and processes that collectively are Amazon, has done the wrong thing.
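
Here's a deliberately contrived sketch of that failure mode. Every line below "works as designed"; the flaw is that the design never asks who is making the request or who should pay. Flask, the route, and the warehouse call are all invented for illustration; this is not anyone's actual system.

```python
# A contrived sketch of a "magic URL": every component below works as
# built, yet the system as a whole does the wrong thing because the
# design never checks who is asking or who should pay.
from flask import Flask

app = Flask(__name__)

def enqueue_shipment(book_id: str, address: str) -> None:
    # Stand-in for a real warehouse back end.
    print(f"shipping book {book_id} to {address}")

@app.route("/ship/<book_id>/<path:address>")
def ship(book_id: str, address: str):
    # No login, no billing step: not a user intrusion, not a computer
    # intrusion, but anyone who learns this URL pattern gets free books.
    enqueue_shipment(book_id, address)
    return "shipped\n"

if __name__ == "__main__":
    app.run()
```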

System intrusions—hacks—often cannot be fixed in a single place. It may be that some interfaces were poorly designed for the humans who have to use them (and make no mistake, that's a serious technical issue), or it may require much more far-reaching changes. Let's go back to the IRS hack. Looked at simply, it's a user account intrusion; there are thus the predictable calls for two-factor authentication when logging in to the IRS. It sounds simple, but it's not; in fact, it's a fiendishly hard problem. Most people interact with the IRS once a year, when they file their tax returns. This particular application is especially useful when doing things like buying a house—and that's not something folks do every week. How will people possibly keep track of their credentials for this site? Use text messages to their phones? The IRS does not have a database of mobile phone numbers; suggestions that they should have one would be greeted by howls of protest from privacy advocates (and rightly so, I should add). Besides, if you changed your phone number, would you remember to update it on the IRS site? And even if you did remember, how would you authenticate to the site to make the change when you no longer have your old number? Authentication to the government is among the most difficult authentication problems in existence; it's a cradle-to-grave activity, and generally used infrequently enough that people will not remember their credentials.

Where does the blame lie? Arguably, the IRS should have had a better understanding of attackers' capabilities before deploying this system. It's not clear, though, that they can do much better. The choice may have been between not offering it and offering it knowing that there would be some level of abuse. In that case, the focus should be on detection and remediation, rather than on stronger authentication.

What Congress Should Do About Cybersecurity

24 April 2015

For the last few years, Congress has been debating an information-sharing bill to deal with the cybersecurity problem. Apart from the privacy issues—and they're serious—just sharing more information won't do much. A forthcoming column of mine in IEEE Security & Privacy magazine explains what Congress should do instead.

ISPs to Enforce Copyright Law

1 April 2015

A group of major ISPs and major content providers have agreed on a mechanism to enforce copyright laws in the network. While full details have not yet been released, the basic scheme involves using previously designed IP flags to denote public domain content. That is, given general copyright principles, it is on average a shorter code path and hence more efficient to set the flag on exempt material.

Authorization is, of course, a crucial component to this scheme. The proper (and encrypted) license information will be added to the IP options field. The precise layout will depend on the operating system that created it—Windows, MacOS/iOS, GNU/Linux, and the various BSDs each have their own ways of enforcing copyright—but back-of-the-envelope calculations suggest that a 256-byte field will hold most license data. (The GNU/Linux option is especially complex, since it has to deal with copyleft and the GPL as well; validity of the license depends on the presence of a valid URL pointing to a source code repository.) To deal with the occasional longer field, though, the IP options length field will be expanded to two bytes. Briefly, packets without the public domain flag set that do not have a valid license option would be dropped or sent to a clearinghouse for monitoring.
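
In keeping with this post's dateline, here's a sketch of what assembling such an option might look like. The option number, flag value, and license blob are all invented; the only details taken from the description above are the public domain flag and the two-byte length field.

```python
# The option number, the flag value, and the license blob below are all
# invented; the only details taken from the text above are the public
# domain flag and the proposal's two-byte option length field.
import struct

OPT_COPYRIGHT = 0x99        # hypothetical option number
FLAG_PUBLIC_DOMAIN = 0x01   # "exempt" flag for public domain content

def build_copyright_option(license_blob: bytes, public_domain: bool = False) -> bytes:
    flags = FLAG_PUBLIC_DOMAIN if public_domain else 0x00
    body = struct.pack("B", flags) + license_blob
    # A two-byte length field, since an encrypted license can easily
    # overflow the classic one-byte IP option length.
    return struct.pack("!BH", OPT_COPYRIGHT, len(body)) + body

# A back-of-the-envelope 256-byte license, as estimated above.
option = build_copyright_option(b"\x00" * 256)
print(len(option), "bytes of options per packet")
```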

It is clear that new border routers will be necessary to implement this scheme. Major router vendors have indicated that they will release appropriate products exactly one year from today's date.

Paying for deployment—routers, host changes, etc.—is problematic. One solution that has been proposed is to use the left-over funds appropriated by Congress to deploy CALEA. This has drawn some support from law enforcement agencies. One source who spoke on condition of anonymity noted that since terrorists and other subjects of wiretaps do not comply with copyright law, this scheme could also be used to identify them without additional bulk data collection. She also pointed out that the diversion option would work well to centralize wiretap collection, resulting in considerable cost savings.

When I learn more, I'll update this blog post—though that might not happen until this date next year.

Update on Net Neutrality

15 March 2015

A few days ago, I blogged about possible problems in handling dropped packets in the FCC's proposed network neutrality rules. The FCC has published them now; I'm happy to say that it was a non-issue.