August 2016
Does Apple's Cloud Key Vault Answer the Key Escrow Question? (24 August 2016)
Once Again, Don't Panic (25 August 2016)

Does Apple's Cloud Key Vault Answer the Key Escrow Question?

24 August 2016

In a recent talk at Black Hat, Apple’s head of security engineering (Ivan Krstić) described many security mechanisms in iOS. One in particular stood out: Apple’s Cloud Key Vault, the way that Apple protects cryptographic keys stored in iCloud. A number of people have criticized Apple for this design, saying that they have effectively conceded the "Going Dark" encryption debate to the FBI. They didn’t concede, and what they did was done for very valid business reasons—but they’re taking a serious risk, one that could answer the Going Dark question the other way: back-up copies of cryptographic keys are far too dangerous to leave lying around.

Going Dark, briefly, is the notion that law enforcement will be cut off from vital sources of evidence because of strong encryption. FBI director James Comey, among others, has called for vendors to provide some alternate way for law enforcement—with a valid warrant—to bypass the encryption. On the other hand, most non-government cryptographers feel that any possible "exceptional access" mechanism is unreasonably dangerous.

The problem Apple wanted to solve is this. Suppose that you have some sort of iToy—an iPhone, an iPad, etc.—or a Mac. These systems allow you to back up your keychain to Apple’s iCloud service, where it’s protected by your AppleID (an email address) and password. If you buy a new device from Apple, your keychain can be downloaded to it once you log on to iCloud. (Note: uploading keys to iCloud is optional and disabled by default, though you are prompted about enabling it during device setup.)
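
To make the stakes concrete, here is a minimal sketch of the general technique: the client wraps its keychain key under a key derived from the user’s password before uploading it. This is an illustration only, not Apple’s actual iCloud Keychain protocol; the function names, the use of PBKDF2, and the iteration count are all assumptions.

    import hashlib
    import os

    def wrap_keychain_key(keychain_key, password):
        """Wrap a keychain key under a password-derived key before upload.
        XOR with a full-length derived key keeps this sketch dependency-free;
        a real system would use an authenticated cipher such as AES-GCM."""
        salt = os.urandom(16)
        kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                  200_000, dklen=len(keychain_key))
        wrapped = bytes(a ^ b for a, b in zip(keychain_key, kek))
        return {"salt": salt, "wrapped_key": wrapped}

    def unwrap_keychain_key(record, password):
        """Re-derive the wrapping key on a new device and recover the
        keychain key. Anyone holding the record can run this over
        candidate passwords: that is the guessing attack below."""
        kek = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                  record["salt"], 200_000,
                                  dklen=len(record["wrapped_key"]))
        return bytes(a ^ b for a, b in zip(record["wrapped_key"], kek))

Nothing in this sketch stops whoever obtains the uploaded record from guessing passwords offline, which is precisely the problem discussed next.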

That’s a fine notion, and very consumer-friendly: people want to be able to back up their devices securely (remember that iToys themselves are strongly encrypted), and recover their information if their device is lost or stolen. The trick is doing this securely, and in particular guarding against brute-force attacks on the PIN or password. To do this, iOS uses a "Secure Enclave"—a special co-processor that rate-limits guesses and (by default) erases the phone after too many incorrect guesses. (The details are complex; see Krstić’s talk.) The problem is this: how do you ensure that level of protection for keys that are stored remotely, when the attacker can just hack into or subpoena an iCloud server and guess away? Apple’s solution is even more complex (again, see Krstić’s talk), but fundamentally, Apple relies on a Hardware Security Module (HSM) to protect these keys against guessing attacks. HSMs are supposed to be impossible to hack; they do have master keys that are written to smartcards, but Apple disposed of that risk very simply: they ran the smartcards through a blender…
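
To see why the guess limit has to be enforced by the escrow service itself (in Apple’s design, the HSM) rather than by the cryptography alone, here is a toy sketch of a rate-limiting escrow service. The class name, the verifier scheme, and the 10-try budget are illustrative assumptions, not Apple’s actual design.

    import secrets

    class EscrowService:
        """Toy escrow service enforcing a guess budget, as an HSM might.
        In the real design the escrowed key would itself be encrypted
        under a key that never leaves the HSM."""
        MAX_TRIES = 10

        def __init__(self):
            # account -> [password_verifier, escrowed_key, tries_left]
            self._records = {}

        def store(self, account, verifier, escrowed_key):
            self._records[account] = [verifier, escrowed_key, self.MAX_TRIES]

        def recover(self, account, candidate_verifier):
            record = self._records.get(account)
            if record is None:
                raise KeyError("no escrow record (or it was erased)")
            verifier, escrowed_key, tries_left = record
            if not secrets.compare_digest(candidate_verifier, verifier):
                record[2] = tries_left - 1
                if record[2] <= 0:
                    del self._records[account]  # erased for good
                raise PermissionError("wrong passcode")
            record[2] = self.MAX_TRIES  # a correct guess resets the budget
            return escrowed_key

In the real system, the essential property is that the counter and the decryption key live inside tamper-resistant hardware, so copying the stored records yields nothing that can be brute-forced offline.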

So: it would seem that this solves the Going Dark problem. Instead of destroying these smartcards, suppose that Apple stored one copy in Tim Cook’s safe and another in James Comey’s. Problem solved, right? Not so fast.

Unfortunately, solving Going Dark can’t be done with a simple piece of code in one place. It’s many different problems, each of which needs its own secure solution; furthermore, the entire system—the set of all of these solutions, and the processes they rely on—has to be secure, as well as the code and processes for combining them.

The first part is the cryptographic protocols and code to implement the key upload functions. As I mentioned, these mechanisms are exceedingly complex. Although I do not know of any flaws in either the protocols or the code, I won’t be even slightly surprised if bugs turn up in either or both. This stuff is really hard to get right.

The next step is protecting the master keys. Apple’s solution—a blender—is simple and elegant, and probably reliable. If we want some sort of exceptional access, though, we can’t do that: these smartcards have to exist. Not only must they be protected when not in use, they must be protected when in use: who can get them, what can be decrypted, how the requests are authenticated, what to do about requests from other countries, and more. This isn’t easy, either; it only seems that way from 30,000 feet. Apple got out of that game by destroying the cards, but if you want exceptional access, that doesn’t work.
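
For a sense of what "these smartcards have to exist" entails: HSM administrative master keys are commonly split across several cards so that no single card suffices. Here is a toy n-of-n XOR split illustrating that general technique; it is an assumption for illustration, not a description of Apple’s actual cards.

    import os
    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def split_master_key(master_key, num_cards):
        """Produce num_cards shares; every share is needed to rebuild
        the key, so destroying any one card destroys the key."""
        shares = [os.urandom(len(master_key)) for _ in range(num_cards - 1)]
        shares.append(reduce(xor_bytes, shares, master_key))
        return shares

    def reconstruct_master_key(shares):
        return reduce(xor_bytes, shares)

    key = os.urandom(32)
    cards = split_master_key(key, 3)
    assert reconstruct_master_key(cards) == key

Under a split like this, the blender is a one-way door: shredding even one card destroys the master key forever, which is exactly the property that exceptional access has to give up.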

There’s another risk, though, one that Apple still runs: are the HSMs really secure? The entire scheme rests on the assumption that they are, but is that true? We don’t know, but research suggests that they may not be. HSMs are, after all, computers, and their code and the APIs to them are very hard to get right. If there is a flaw, Apple may never know, but vital secrets will be stolen.

Was Apple wrong, then, to build such an elaborate system? Clearly, no lesser design would have met their requirements: being able to recover old backups with a password as the only authentication mechanism, while keeping a strict limit on password-guessing. But are those requirements correct? Here’s where life gets really tricky. Apple is a consumer device company; their prime goal is to make customers happy—and customers get really unhappy when they lose their data. There are more secure designs possible, if you give up this remote recovery requirement, but those are more appropriate for defense gear, for situations where it’s better to lose the data than to let it be compromised by the enemy. Apple’s real problem is that they’re trying to satisfy consumer needs while still defending against nation-state adversaries. I hope they’ve gotten it right—but I won’t be even slightly surprised if they haven’t.

Once Again, Don't Panic

25 August 2016

My Twitter feed is in an uproar over some newly discovered spyware that targets iOS with three zero-days. People are saying things like "Patch your iPhone NOW!", "everyone with an iphone should probably stop working and update to iOS 9.3.5 right now", "iOS 9.3.5 is now out. Update like you’ve never updated before", and more. Yes, the flaws are serious. But for almost everyone, my advice is relax, don’t panic, and wait a day or two to make sure that the patch doesn’t have fatal flaws.

The flaws are indeed serious, but at least for the moment they’re in the hands of a small group of attackers, principally governments. If you think that some government is targeting you because you’re an investigative journalist, a human rights worker, an official of some other government who might have information of value, etc., then you should indeed update right away. Most of us aren’t in that category. We have passwords, credit cards, and bank accounts, but ordinary phishers and scam artists don’t have the attack tool yet and may never have it; the vulnerabilities alone are quite literally worth millions of dollars.

So: yes, you should update your iPhones, iPads, and the like. But it’s probably not a crisis. (Why yes, journalists and activists are disproportionately represented in my Twitter feed. So are security people, who take things like this very personally…) Update soon, but for the average user it’s probably not an emergency.