December 2008
Cybercrime and "Remote Search" (2 December 2008)
The Report on "Securing Cyberspace for the 44th Presidency" (15 December 2008)
Another Cluster of Cable Cuts (20 December 2008)
Companies, Courts, and Computer Security (30 December 2008)

Companies, Courts, and Computer Security

30 December 2008

The newswires and assorted technical blogs are abuzz with word that several researchers (Alex Sotirov, Marc Stevens, Jake Appelbaum, Arjen Lenstra, Benne de Weger, and David Molnar) have exploited the collision weakness in MD5 to create fake CA certificates that are accepted by all major browsers. I won’t go into the details here; if you’re interested in the attack, see Ed Felten’s post for a clear explanation. For now, let it suffice to say that I think the threat is serious.
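The reason a collision is so damaging is that a CA signs only the hash of the certificate body, not the body itself, so one signature covers every body with the same digest. A toy sketch of that structure (the `sign`/`verify` functions here are hypothetical stand-ins for a real RSA operation, not any CA's actual code):

```python
import hashlib

# Toy model: the CA hashes the certificate body with MD5, then
# "signs" the digest. sign/verify are hypothetical stand-ins for
# a real public-key signature; SECRET is a made-up key.
SECRET = b"ca-private-key"

def sign(body: bytes) -> bytes:
    digest = hashlib.md5(body).digest()              # hash the body...
    return hashlib.sha256(SECRET + digest).digest()  # ...sign only the digest

def verify(body: bytes, signature: bytes) -> bool:
    digest = hashlib.md5(body).digest()
    return hashlib.sha256(SECRET + digest).digest() == signature

benign = b"CN=harmless.example"   # what the CA thinks it is signing
rogue  = b"CN=rogue-ca.example"   # what the attacker wants signed

sig = sign(benign)
# With a real MD5 collision, the attacker crafts 'rogue' so that
# md5(rogue) == md5(benign); the CA's signature on 'benign' then
# verifies on 'rogue' as well, with the CA none the wiser.
print(verify(benign, sig))   # True
```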

What’s really interesting, from a broader perspective, is the entire process surrounding MD5’s weakness. We’ve known for a long time that MD5 is weak; Dobbertin found some problems in it in 1996. More seriously, Wang et al. published collisions in 2004, with full details a year later. But people reacted slowly — too slowly.

Verisign, in particular, appears to have been caught short. One of the CAs they operate still uses MD5. They said:

The RapidSSL certificates are currently using the MD5 hash function today. And the reason for that is because when you’re dealing with widespread technology and [public key infrastructure] technology, you have phase-in and phase-out processes that can take significant periods of time to implement.

But we’re talking about more than four years! Granted, it might take a year to plan a change-over, and another year to implement it. That time is long gone. Furthermore, the obvious change, from MD5 to SHA-1, is not that challenging; all browsers already support both algorithms. (Changing to a stronger algorithm, such as SHA-256, is much harder. Note that even SHA-1 is considered threatened; NIST is running a competition to select a replacement, but that process will take years to finish.)
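The ease of that particular switch is easy to see at the API level: in most crypto libraries the digest algorithm is just a parameter. A small illustration using Python's hashlib (my example, not anything Verisign actually runs):

```python
import hashlib

data = b"example to-be-signed certificate bytes"  # placeholder input

# Same API, different algorithm: swapping MD5 for SHA-1 (or SHA-256)
# is essentially a one-line change at the hashing layer.
md5_digest    = hashlib.md5(data).hexdigest()     # 128-bit; collisions found
sha1_digest   = hashlib.sha1(data).hexdigest()    # 160-bit; also threatened
sha256_digest = hashlib.sha256(data).hexdigest()  # 256-bit; the harder upgrade

# Digest sizes in bits (each hex character encodes 4 bits):
print(len(md5_digest) * 4, len(sha1_digest) * 4, len(sha256_digest) * 4)
# 128 160 256
```

The hard part of such a migration is not the code change but the deployment: every relying party must accept the new algorithm, which is why SHA-1 (universally supported) was the obvious interim step while SHA-256 support spread.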

The really scary thing, though, is that we might never have learned of this new attack:

Molnar says that the team pre-briefed browser makers, including Microsoft and the Mozilla Foundation, on their exploit. But the researchers put them under NDA, for fear that if word got out about their efforts, legal pressure would be brought to bear to suppress their planned talk in Berlin. Molnar says Microsoft warned Verisign that the company should stop using MD5.

Legal pressure? Sotirov and company are not "hackers"; they’re respected researchers. But the legal climate is such that they feared an injunction. Nor are such fears ill-founded; others have had such trouble. Verisign isn’t happy: "We’re a little frustrated at Verisign that we seem to be the only people not briefed on this". But given that the researchers couldn’t know how Verisign would react, in today’s climate they felt they had to be cautious.

This is a dangerous trend. If good guys are afraid to find flaws in fielded systems, that effort will be left to the bad guys. Remember that for academics, publication is the only way they’re really "paid". We need a legal structure in place to protect security researchers. To paraphrase an old saying, security flaws don’t crack systems, bad guys do.