Why I Wrote Thinking Security

24 November 2015

I have a new book out, Thinking Security: Stopping Next Year's Hackers. There are lots of security books out there today; why did I think another was needed?

Two wellsprings nourished my muse. (The desire for that sort of poetic imagery was not among them.) The first was a deep-rooted dissatisfaction with common security advice. This common "wisdom"—I use the word advisedly—often seemed to be outdated. Yes, it was the distillation of years of conventional wisdom, but that was precisely the problem: the world has changed; the advice hasn't.

Consider, for example, passwords (and that specifically was the other source of my discomfort). We all know what to do: pick strong passwords, don't reuse them, don't write them down, etc. That all seems like very sound advice—but it comes from a 1979 paper by Morris and Thompson. The world was very different then. Many people were still using hard-copy, electromechanical terminals, people had very few logins, and neither defenders nor attackers had much in the way of computational power. None of that is true today. Maybe the advice was still sound, or maybe it wasn't, but very few people seemed to be questioning it. In fact, the requirement was embedded in very static checklists that sites were expected to follow.

Suppose that passwords are in fact terminally insecure. What's the alternative? The usual answer is some form of two-factor authentication. Is that secure? Or is two-factor authentication subject to its own problems? If it's secure today, will it remain secure tomorrow? Computer technology is an extremely dynamic field; not only does the technology change, but the applications and the threats change as well. Let's put it like this: why should you expect the answers to any of these questions to remain the same?

The only solution, I concluded, was to go back to first principles. What were the fundamental assumptions behind security? For passwords, it turns out that the main reason you need a strong one is to resist offline guessing if a site's password database is compromised. In other words, a guessed password is the second failure; if the first could be avoided, the second isn't an issue. But if a site can't protect a password file, can it protect some other sort of authentication database? That doesn't seem likely. What does that mean for the security of other forms of authentication?
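
To make the offline-versus-online distinction concrete, here is a minimal back-of-the-envelope sketch. The guessing rates are my own assumptions, chosen only to show the orders of magnitude involved; they are not figures from the book.

```python
# Rough sketch (assumed rates, for illustration only): a strong password mostly
# matters when an attacker has stolen the password database and can guess
# offline; an online attacker is throttled by the server.

ONLINE_GUESSES_PER_SEC = 10          # assumption: rate-limited login attempts
OFFLINE_GUESSES_PER_SEC = 10**10     # assumption: GPU cracking of a fast hash

SECONDS_PER_YEAR = 3600 * 24 * 365

def years_to_exhaust(entropy_bits: int, guesses_per_sec: float) -> float:
    """Years needed to try every password in a space of the given entropy."""
    return 2 ** entropy_bits / guesses_per_sec / SECONDS_PER_YEAR

for bits in (20, 40, 60):
    online = years_to_exhaust(bits, ONLINE_GUESSES_PER_SEC)
    offline = years_to_exhaust(bits, OFFLINE_GUESSES_PER_SEC)
    print(f"{bits}-bit password: ~{online:.1e} years online, ~{offline:.1e} years offline")
```

Under those assumed rates, even a modest password holds up against throttled online guessing; it's the stolen-database, offline case where strength really matters.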

Threats also change. 21 years ago, when Bill Cheswick and I wrote Firewalls and Internet Security, no one was sending phishing emails to collect bank account passwords. Of course, there were no online banks then (there was barely a Web), but that's precisely the point. I eventually concluded that threats could be mapped along two axes: how skilled the attacker is, and how much your site is being targeted.

Your defenses have to vary. Enterprise-scale firewalls are useful against unskilled joy hackers, but they're only a speed bump to intelligence agencies, and targeted attacks are often launched by insiders who are, by definition, on the inside. Special-purpose internal firewalls, though, can be very useful.

All of this and more went into Thinking Security. It's an advanced book, not a collection of checklists. I do give some advice based on today's technologies and threats, but I show what assumptions that advice is based on, and what sorts of changes would invalidate it. I assume you already know what an encryption algorithm is, so I concentrate on what encryption is and isn't good for. The main focus is how to think about the problem. I'm morally certain that right now, someone in Silicon Valley or Tel Aviv or Hyderabad or Beijing or Accra or somewhere is devising something that 10 years from now we'll find indispensable, but that will have as profound an effect on security as today's smartphones have had. (By the way, the iPhone is only about 8 years old, but few people in high-tech can imagine life without it or an Android phone. What's next?) How will we cope?

That's why I wrote this new book. Threats aren't static, so our defenses and our thought processes can't be, either.

I'm Shocked, Shocked to Find There's Cryptanalysis Going On Here (Your plaintext, sir.)

15 October 2015

There's been a lot of media attention in the last few days to a wonderful research paper on the weakness of 1024-bit Diffie-Hellman and on how the NSA can (and possibly does) exploit this. People seem shocked about the problem and appalled that the NSA would actually exploit it. Neither reaction is right.

In the first place, the limitations of 1024-bit Diffie-Hellman have been known for a long time. RFC 3766, published in 2004, estimated that it takes a 1228-bit modulus just to reach 80 bits of strength, so 1024 bits gives less than that. That's clearly too little. Deep Crack cost $250,000 in 1997 and cracked a 56-bit cipher. A straight Moore's Law calculation takes us to 68 bits; we can get to 78 bits for $250 million—and that's without economies of scale, better hardware, better math, etc. Frankly, the only real debate in the cryptographic community—and I mean the open community, not NIST or the NSA—is whether 2048 bits is enough, or if people should go to 3072 or even 4096 bits. This is simply not a surprise.
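
For readers who want to check those numbers, here is a rough reconstruction of the arithmetic. The 18-month doubling period is my assumption, made explicit so the calculation can be followed; the dollar figures are the ones in the paragraph above.

```python
import math

base_bits = 56                # DES key length, broken by Deep Crack
base_cost = 250_000           # dollars, per the figure above
years_elapsed = 2015 - 1997   # time elapsed as of this post
doubling_years = 1.5          # assumed Moore's Law doubling period

# Each doubling of compute buys roughly one more bit of brute-force reach.
moore_bits = base_bits + years_elapsed / doubling_years             # 56 + 12 = 68

# Spending $250 million instead of $250,000 is a factor of 1000, about 2^10.
big_budget_bits = moore_bits + math.log2(250_000_000 / base_cost)   # ~78

print(round(moore_bits), round(big_budget_bits))  # 68 78
```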

That the NSA would exploit something like this (assuming that they can) is even less surprising. They're a SIGINT and cryptanalysis agency; that's their job. You may well think that SIGINT itself is unethical (shades of Stimson's "gentlemen do not read each other's mail"), but that the NSA would cryptanalyze traffic of interest is even less of a surprise than that 1024-bit Diffie-Hellman is crackable.

There's also been unhappiness that IPsec uses a small set of Diffie-Hellman moduli. Back when the IETF standardized those groups, we understood that this was a risk. It's long been known that the discrete log problem is "brittle": put in a lot of work up front, and you can then solve each individual instance relatively cheaply. The alternative seemed dangerous. The way Diffie-Hellman key exchange works, both parties need to use the same modulus and generator. The modulus has to be prime, and should be of the form 2q+1, where q is also prime. Where does the modulus come from? Presumably, one party has to pick it. The other party then has to verify its properties; the protocol has to guard against downgrades or other mischief just in agreeing on the modulus. Yes, it probably could have been done. Our judgment was that the risks weren't worth it. The real problem is that neither vendors nor the IETF abandoned the 1024-bit group. RFC 4307, issued ten years ago, warned that the 1024-bit group was likely to be deprecated and that the 2048-bit group was likely to be required in some future document.
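
As a rough illustration of what "agreeing on the modulus" involves, here is a minimal sketch with a toy 5-bit modulus; real groups are 2048 bits or more and are generated quite differently. The safe-prime test shown is the kind of verification a party would have to perform on a peer-supplied modulus.

```python
import random
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division; adequate only for the toy modulus below."""
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

def is_safe_prime(p: int) -> bool:
    """True if p is prime and p = 2q + 1 with q also prime."""
    return is_prime(p) and is_prime((p - 1) // 2)

# Toy parameters for illustration only; both sides must share (p, g).
p, g = 23, 5
assert is_safe_prime(p)

a = random.randrange(2, p - 1)       # one party's secret exponent
b = random.randrange(2, p - 1)       # the other party's secret exponent
A, B = pow(g, a, p), pow(g, b, p)    # the public values actually exchanged

# Both sides derive the same shared secret: g^(ab) mod p.
assert pow(B, a, p) == pow(A, b, p)
```

Because everyone shares a handful of fixed moduli like this, an attacker's one-time precomputation against a single popular modulus pays off against all of the traffic that uses it; that is exactly the brittleness the Weak DH paper exploits.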

By the way: that these two results are unsurprising doesn't mean that the Weak DH paper is trivial. That's not the case at all. The authors did the hard work to show just how feasible the attack actually is; "unsurprising" is not the same as "unimportant". It quite deserved its "Best Paper" award.

Keys under the Doormat

7 July 2015

To those of us who have worked on crypto policy, the 1990s have become known as the Crypto Wars. The US government tried hard to control civilian use of cryptography. They tried to discourage academic research, restricted exports of cryptographic software, and—most memorably—pushed something called "escrowed encryption", a scheme wherein the government would have access to the short-term keys used to encrypt communications or stored files.

The technical community pushed back against all of these initiatives. (One side-effect was that it got a number of computer scientists, including me, professionally involved in policy issues.) Quite apart from privacy and civil liberties issues, there were technical issues: we needed strong cryptography to protect the Internet, compatibility meant that it had to be available world-wide, and simplicity was critical. Why? Most security problems are due to buggy code; increasing the complexity of a system always increases the bug rate.

Eventually, the government gave up. The need for strong crypto had become increasingly obvious, non-US companies were buying non-US products—and no one wanted escrowed encryption. Apart from the fact that it didn't do the job, it did increase complexity, as witnessed by the failure of one high-profile system. There were many papers and reports on the subject; I joined a group of very prominent security and cryptography experts (besides me, Hal Abelson, Ross Anderson, Josh Benaloh, Matt Blaze, Whitfield Diffie, John Gilmore, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, and Bruce Schneier) that wrote one in 1997.

The question of strong cryptography appeared to be settled 15 years ago—but it wasn't. Of late, FBI director James Comey has issued new calls for some sort of mandatory government access to plaintext; so has UK Prime Minister David Cameron. In fact, the push is stronger this time around; in the 1990s, the government denied any intention of barring unescrowed encryption. Now, they're insisting that their way is the only way. (President Obama hasn't committed to either side of the debate.)

It's still a bad idea. The underlying problem of complexity hasn't gone away; in fact, it's worse today. We're doing a lot more with cryptography, so the bypasses have to be more complex and hence riskier. There are also more serious problems of jurisdiction; technology and hence crypto are used in far more countries today than 20 years ago. Accordingly, the same group plus a few more (Matthew Green, Susan Landau, Michael Specter, and Daniel J. Weitzner) have written a new report. Our overall message is the same: deliberately weakening security systems is still a bad idea.

Section 4 is especially important. It has a list of questions that proponents of these schemes need to answer before opponents can make specific criticisms. In other words, until the public is given precise details, "ignore this report; that isn't what we're suggesting" can't legitimately be used as a counterargument.

Facebook and PGP

2 June 2015

Facebook just announced support for PGP, an encrypted email standard, for email from them to you. It's an interesting move on many levels, albeit one that raises some hard questions. The answers, and Facebook's possible follow-on moves, are even more interesting.

The first question, of course, is why Facebook has done this. It will only appeal to a very small minority of users. Using encrypted email is not easy. Very few people have ever created a PGP key pair; many who have done so have never used it, or simply used it once or twice and forgotten about it. I suspect that a significant number of people (a) will try to upload their private keys instead of their public keys; (b) will upload the right key, only to discover that they no longer remember the strong password they used to protect their private key; (c) will realize that they created their key pair three computers ago and no longer have PGP installed; or (d) more than one of the above.

The nasty cynical part of me thinks it's an anti-Google measure; if email to users is encrypted, gmail won't be able to read it. It's a delightfully Machiavellian scheme, but it makes no sense; far too few people are likely to use it. Unless, of course, they plan to make encrypted email easier to use? That brings up the second question: what will Facebook do to make encryption easier to use?

Facebook is, of course, one of the tech titans. They have some really sharp people, and of course they have the money to throw at the problem. Can they find a way to make PGP easy to use? That encompasses a wide range of activities: composing encrypted and/or signed email, receiving it and immediately realizing its status, being able to search encrypted messages—and doing all this without undue mental effort. Even for sophisticated users, it's really easy to make operational mistakes with encrypted email, mistakes that gut the security. To give just one example, their announcement says that if "encrypted notifications are enabled, Facebook will sign outbound messages using our own key to provide greater assurance that the contents of inbound emails are genuine." This could protect against phishing attacks that impersonate Facebook, but if and only if people notice when they've received unsigned email purporting to be from Facebook. Can this work? I'm dubious—no one has ever solved that problem for Web browsers—but maybe they can pull it off.

The third big question is mobile device support. As Facebook itself says, "public key management is not yet supported on mobile devices; we are investigating ways to enable this." Their target demographic lives on mobile devices, but there is not yet good support for PGP on iOS or Android. There are outboard packages available for both platforms, but that's not likely to be very usable for most people. Google has announced plans for GPG support for gmail on Chrome; it would be nice if they added such support to the built-in Android mailer as well. (Oh yes—how do you get the same key pair on your mobile device as on your laptop or desktop?)

The last and most interesting question is why they opted for PGP instead of S/MIME. While there are lots of differences in message formats and the like, the most important is how the certificates are signed and hence what the trust model is. It's a subtle question but utterly vital—and if Facebook does the right things here, it will be a very big boost to efforts to deploy encrypted email far more widely.

One of the very hardest technical things about cryptography (other than the user interface, of course) is how to get the proper keys. That is, if you want to send me encrypted email, how do you get my public key, rather than the public key of some other Steven Bellovin or a fake key that the NSA or the FSB created that claims to be mine? (I've put my actual PGP key on my web page, but of course that could be replaced by someone who hacked the Columbia University Computer Science Department web server.) PGP and S/MIME have very different answers to the question of assuring that a retrieved key is genuine. With PGP, anyone can sign someone else's certificate, thus adding their attestation to the claim that some particular key is really associated with a particular person. Of course, this is an unstructured process, and a group of nasty people could easily create many fake identities that all vouch for each other. Still, it all starts with individuals creating key pairs for themselves. If they want, they can then upload the public key to Facebook even if no one has signed it.
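
To make the trust model concrete, here is a toy sketch (the names and the signature graph are invented for illustration) of a PGP-style web of trust: a key is accepted only if it can be reached through signatures starting from keys you already trust, so a clique of fake identities vouching for each other gains nothing unless it connects to your trust roots.

```python
# Toy model of a PGP-style web of trust; invented names, for illustration only.
from collections import deque

signatures = {                              # signer -> keys that signer has signed
    "alice": ["bob"],
    "bob": ["smb"],
    "mallory1": ["mallory2"],               # a clique of fake identities
    "mallory2": ["mallory1", "fake-smb"],   # vouching for each other and a bogus key
}

def trusted(key: str, roots: set) -> bool:
    """Breadth-first search from your trust roots along signature edges."""
    seen, queue = set(roots), deque(roots)
    while queue:
        signer = queue.popleft()
        if signer == key:
            return True
        for signed in signatures.get(signer, []):
            if signed not in seen:
                seen.add(signed)
                queue.append(signed)
    return key in seen

print(trusted("smb", {"alice"}))       # True: alice -> bob -> smb
print(trusted("fake-smb", {"alice"}))  # False: the fake clique is unreachable
```

Real PGP implementations are more nuanced (marginal versus full trust, multiple required signers), but this reachability idea is the core of the web of trust.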

By contrast, S/MIME keys have to be signed by a certificate authority (CA) trusted by all parties. Still, in many ways, S/MIME is a more natural choice. It's supported by vendor-supplied mailers on Windows, Macs, and iToys (though not by the standard Android mailer). Facebook is big enough that it could become a CA. They already know enough about people that they've inherently solved one of the big challenges for an online CA: how do you verify someone's claim to a particular name? At the very least, Facebook could say "this key is associated with this Facebook account". No other company can do this, not even Google.

This, then, is a possible future. Facebook could become a de facto CA, for PGP and/or S/MIME. It could sign certificates linked to Facebook accounts. It could make those certificates easily available. It could develop software (apps, desktop or laptop programs, what have you) that goes to Facebook to obtain other people's keys. The usability issues I outlined earlier would remain, but when it comes to certificate handling Facebook has advantages that no one else has ever had. If this is the path they choose to go down, we could see a very large bump in the use of encrypted email.

Hacking: Users, Computers, and Systems

28 May 2015

As many people have heard, there's been a security problem at the Internal Revenue Service. Some stories have used the word hack; other people, though, have complained that nothing was hacked, that the only problem was unauthorized access to taxpayer data, albeit via authorized, intentionally built channels. The problem with this analysis is that it's looking at security from far too narrow a perspective—and this is at the root of a lot of the security problems we have.

Fundamentally, there are three types of intrusion, which I'll dub "user", "computer", and "system". User intrusion is the easiest to understand: someone has gained illegitimate access to one or more user accounts. That's what's happened here: the authentication mechanisms were too weak to withstand a targeted attack on particular individuals' accounts. This is the kind of attack that the usual "pick a strong password" rules are designed to protect against, and the kind that two-factor authentication will protect against. Authentication failures are not the only way this can happen—there are things like cross-site scripting attacks—but in general, the impact is limited to some set of user accounts.

Computer intrusions are more serious: the attacker has gained the ability to access files and/or run code on one or more computers within a protected site. In general, someone who can do this can compromise many user accounts; it is thus strictly worse than a user intrusion. Generally speaking, this class of problem is caused by buggy software, though there are other paths for the attacker, such as social engineering or compromising an authorized user's credentials. (The attacker may even be an insider with legitimate access to the target machine.)

System intrusion is the most nebulous concept of the three, but often the most important. It can refer to any security failure. Imagine, for example, that there was a magic URL that would cause Amazon to ship you some books, but without giving access to any of their computers and without billing any other customers. It's not a user intrusion, and it's not a computer intrusion—but the system—the collection of computers, people, and processes that collectively are Amazon—has done the wrong thing.

System intrusions—hacks—often cannot be fixed in a single place. It may be that some interfaces were poorly designed for the humans who have to use them (and make no mistake, that's a serious technical issue), or it may require much more far-reaching changes. Let's go back to the IRS hack. Looked at simply, it's a user account intrusion; there are thus the predictable calls for two-factor authentication when logging in to the IRS. It sounds simple, but it's not; in fact, it's a fiendishly hard problem. Most people interact with the IRS once a year, when they file their tax returns. This particular application is especially useful when doing things like buying a house—and that's not something folks do every week. How will people possibly keep track of their credentials for this site? Use text messages to their phones? The IRS does not have a database of mobile phone numbers; suggestions that it should have one would be greeted by howls of protest from privacy advocates (and rightly so, I should add). Besides, if you change your phone number, would you remember to update it on the IRS site? If so, how would you authenticate to the site when you no longer have your old number? Authentication to the government is among the most difficult authentication problems in existence; it's a cradle-to-grave activity, and generally used infrequently enough that people will not remember their credentials.

Where does the blame lie? Arguably, the IRS should have had a better understanding of attackers' capabilities before deploying this system. It's not clear, though, that they can do much better. The choice may have been between not offering it and offering it knowing that there would be some level of abuse. In that case, the focus should be on detection and remediation, rather than on stronger authentication.

What Congress Should Do About Cybersecurity

24 April 2015

For the last few years, Congress has been debating an information-sharing bill to deal with the cybersecurity problem. Apart from the privacy issues—and they're serious—just sharing more information won't do much. A forthcoming column of mine in IEEE Security & Privacy magazine explains what Congress should do instead.