25 August 2016
My Twitter feed is in an uproar over some newly discovered spyware that targets iOS with three zero-days. People are saying things like "Patch your iPhone NOW!", "everyone with an iphone should probably stop working and update to iOS 9.3.5 right now", "iOS 9.3.5 is now out. Update like you've never updated before", and more. Yes, the flaws are serious. But for almost everyone, my advice is: relax, don't panic, and wait a day or two to make sure that the patch doesn't have fatal flaws.
The flaws are indeed serious, but at least for the moment they're in the hands of a small group of attackers, principally governments. If you think that some government is targeting you because you're an investigative journalist, a human rights worker, an official of some other government who might have information of value, etc., then you should indeed update right away. Most of us aren't in that category. We have passwords, credit cards, and bank accounts, but ordinary phishers and scam artists don't have the attack tool yet and may never have it; the vulnerabilities alone are quite literally worth millions of dollars.
So: yes, you should update your iPhones, iPads, and the like. But it's probably not a crisis. (Why yes, journalists and activists are disproportionately represented in my Twitter feed. So are security people, who take things like this very personally…) Update soon, but for the average user it's probably not an emergency.
24 August 2016
In a recent talk at Black Hat, Apple's head of security engineering (Ivan Krstić) described many security mechanisms in iOS. One in particular stood out: Apple's Cloud Key Vault, the way that Apple protects cryptographic keys stored in iCloud. A number of people have criticized Apple for this design, saying that they have effectively conceded the "Going Dark" encryption debate to the FBI. They didn't, and what they did was done for very valid business reasons—but they're taking a serious risk, one that could answer the Going Dark question in the other way: back-up copies of cryptographic keys are far too dangerous to leave lying around.
Going Dark, briefly, is the notion that law enforcement will be cut off from vital sources of evidence because of strong encryption. FBI Director James Comey, among others, has called for vendors to provide some alternate way for law enforcement—with a valid warrant—to bypass the encryption. On the other hand, most non-government cryptographers feel that any possible "exceptional access" mechanism is unreasonably dangerous.
The problem Apple wanted to solve is this. Suppose that you have some sort of iToy—an iPhone, an iPad, etc.— or Mac. These systems allow you to back up your keychain to Apple's iCloud service, where they're protected by your AppleID (an email address) and password. If you buy a new device from Apple, your keychain can be downloaded to it once you log on to iCloud. (Note: uploading keys to iCloud is optional and disabled by default, though you are prompted about enabling it during device setup.)
That's a fine notion, and very consumer-friendly: people want to be able to back up their devices securely (remember that iToys themselves are strongly encrypted), and recover their information if their device is lost or stolen. The trick is doing this securely, and in particular guarding against brute force attacks on the PIN or password. To do this, iOS uses a "Secure Enclave"—a special co-processor that rate-limits guesses and (by default) erases the phone after too many incorrect guesses. (The details are complex; see Krstić's talk.) The problem is this: how do you ensure that level of protection for keys that are stored remotely, when the attacker can just hack into or subpoena an iCloud server and guess away? Apple's solution to this problem is even more complex (again, see Krstić's talk), but fundamentally, Apple relies on a Hardware Security Module (HSM) to protect these keys against guessing attacks. It's supposed to be impossible to hack HSMs, and while they do have master keys that are written to smartcards, Apple solved this problem very simply: they ran the smartcards through a blender…
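The guess-throttling half of that design can be sketched as a small state machine. This is purely illustrative: the class name, the delay schedule, and the structure here are my own toy model, not Apple's implementation.

```python
import hmac

class ToyEnclave:
    """Toy model of Secure Enclave-style passcode throttling (illustrative only)."""

    # Illustrative escalating delays (seconds) imposed after the Nth failure.
    DELAYS = {5: 60, 6: 300, 7: 3600, 8: 3600, 9: 3600}
    WIPE_AFTER = 10  # erase key material after this many consecutive failures

    def __init__(self, passcode: str):
        self._passcode = passcode
        self.failures = 0
        self.wiped = False

    def try_passcode(self, guess: str) -> tuple[bool, int]:
        """Return (success, seconds_to_wait_before_next_attempt)."""
        if self.wiped:
            raise RuntimeError("device erased; keys are gone")
        # Constant-time comparison, as real implementations use.
        if hmac.compare_digest(self._passcode, guess):
            self.failures = 0
            return True, 0
        self.failures += 1
        if self.failures >= self.WIPE_AFTER:
            self.wiped = True  # the secret key material is destroyed
            return False, 0
        return False, self.DELAYS.get(self.failures, 0)
```

The point of the remote-storage problem is that nothing like this state machine protects keys sitting on an ordinary server; the HSM is what re-imposes it.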
So: it would seem that this solves the Going Dark problem. Instead of destroying these smart cards, suppose that Apple stored one copy in Tim Cook's safe and another in James Comey's. Problem solved, right? Not so fast.
Unfortunately, solving Going Dark can't be done with a simple piece of code in one place. It's many different problems, each of which needs its own secure solution; furthermore, the entire system—the set of all of these solutions, and the processes they rely on—has to be secure, as well as the code and processes for combining them.
The first part is the cryptographic protocols and code to implement the key upload functions. As I mentioned, these mechanisms are exceedingly complex. Although I do not know of any flaws in either the protocols or the code, I won't be even slightly surprised by bugs in either or both. This stuff is really hard to get right.
The next step is protecting the master keys. Apple's solution—a blender—is simple and elegant, and probably reliable. If we want some sort of exceptional access, though, we can't do that: these smartcards have to exist. Not only must they be protected when not in use, they must be protected when in use: who can get them, what can be decrypted, how the requests are authenticated, what to do about requests from other countries, and more. This isn't easy, either; it only seems that way from 30,000 feet. Apple got out of that game by destroying the cards, but if you want exceptional access that doesn't work.
There's another risk, though, one that Apple still runs: are the HSMs really secure? The entire scheme rests on the assumption that they are, but is that true? We don't know, but research suggests that they may not be. HSMs are, after all, computers, and their code and the APIs to them are very hard to get right. If there is a flaw, Apple may never know, but vital secrets will be stolen.
Was Apple wrong, then, to build such an elaborate system? Clearly, no lesser design would have met their requirements: being able to recover old backups with just a password as the only authentication mechanism, while keeping a strict limit on password-guessing. But are those requirements correct? Here's where life gets really tricky. Apple is a consumer device company; their prime goal is to make the customers happy—and customers get really unhappy when they lose their data. There are more secure designs possible, if you give up this remote recovery requirement, but those are more appropriate for defense gear, for situations where it's better to lose the data than to let it be compromised by the enemy. Apple's real problem is that they're trying to satisfy consumer needs while still defending against nation-state adversaries. I hope they've gotten it right—but I won't be even slightly surprised if they haven't.
8 April 2016
What appears to be a leaked copy of the Burr-Feinstein bill on encryption back doors is circulating. Crypto issues aside—I and my co-authors have written on those before—this bill has many other disturbing features. (Note: I've heard a rumor that this is an old version. If so, I'll update this post as necessary when something is actually introduced.)
One of the more amazing oddities is that the bill's definition of "communications" (page 6, line 10) includes "oral communication", as defined in 18 USC 2510. Now, that section says that "oral communication"
"means any oral communication uttered by a person exhibiting an expectation that such communication is not subject to interception under circumstances justifying such expectation, but such term does not include any electronic communication".

Leaving aside the recursion in that definition, it states that oral communications are just that, oral—how could they be encrypted?
"Covered entities"—more below on what that means—have to provide an "intelligible" version (8:5) of information or data that has been "encrypted, enciphered, encoded, modulated, or obfuscated". We'll ignore the nit that "enciphered" is a subset of "encrypted". "Encoded" is more problematic; it might be a form of encryption, but it could also refer to a standard for representing information, e.g., ASCII: the American Standard Code for Information Interchange. It's tempting to think that the encryption meaning is the intended one, but look at the next word: "modulated". Modulation has nothing to do with secrecy. If law enforcement can't cope with a modulation technique, it points to a lack of technical capability, not an attempt by a vendor to conceal information. It also leaves me wondering what "encoded" means.
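The distinction matters because decoding, unlike decrypting, requires no secret. A two-line illustration (Base64 stands in here for any published encoding):

```python
import base64

message = b"attack at dawn"
encoded = base64.b64encode(message)  # transformed, but by a public, keyless rule
# Anyone can invert a published encoding without any secret:
assert base64.b64decode(encoded) == message
```

Compelling a vendor to "decode" in this sense compels nothing; anyone with the published specification can already do it.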
An "intelligible version" is supposed to be the "original form" (8:14) of the information or data. What does that mean, especially if we're talking about an encoding format that law enforcement doesn't understand? Let's consider, say, the Lytro camera. Lytro cameras use cool technology that lets you do things like change the focus after you've taken the picture. You can certainly get a JPG out of a Lytro image—but which one? Focus and depth of field matter. And there is no "original form" save for the actual three-dimensional objects in the field of view of the camera. It's also worth noting that the JPG format—a way to encode an image—is a lossy algorithm. That is, there is by design no way to go back to the "original". Should JPG be outlawed?
There's also language to vastly expand the amount of metadata that has to be available to law enforcement. The bill speaks of "communication identifying information" (4:19) as something that has to be made available. It sounds like classic metadata, but the definition is now expanded. It generally speaks of "dialing, routing, addressing, signaling, switching, processing, transmitting ... information", (4:21) while the older definition speaks just of "dialing, routing, addressing, or signaling". The new definition includes "public and local source and destination addressing" (5:8), "port numbers" (5:15), and, I believe, MAC addresses (5:17): "addresses or other information that uniquely identifies the equipment used...". The older terms were not defined; the new ones are worse. "Processing"? What does that mean? This section opens the door to unconstitutional overcollection. The WiFi router in your home is almost certainly covered by this bill (it's a "device" used to "facilitate a communication ... of data"), but when some of that information reaches your router there are no third parties involved. That makes it legally unavailable to law enforcement unless they have a wiretap warrant—but this bill requires that the information be available under "any order or warrant issued by a court of competent jurisdiction" (emphasis added). A court order sufficient to obtain metadata is not a warrant.
A "covered entity" (6:18) is more or less anyone: a software vendor, a hardware vendor, a provider of "remote" or "electronic" communication services, and more. At least as important, a "license distributor" (4:10)—the language isn't completely clear, but it seems to refer to app store operators—must ensure that anything distributed via their store (and it includes not just apps but also "services") conforms to these requirements. That's right: even if an app store does not already do vetting, it would be obligated to at least mandate the crypto back door. One wonders what that means for, say, GitHub—even open source software is generally distributed pursuant to a license that is included with the software.
There's more; I could go on. For example, Orin Kerr noted that the bill "doesn't require only reasonable assistance: It's 'assistance as is necessary' to decrypt". The application to home routers is incredibly invasive, since it requires features that such boxes have never had and either remote access by the manufacturer or a serious leak of private information. And all that is on top of the generally bad concept of crypto back doors.
This is a really bad bill.
Update: The official version of the bill has been released. There appear to be very few changes and none that affect anything I've said.
28 March 2016
As you probably know (if you read this blog), the FBI has gotten into Syed Farook's iPhone. Many people have asked the obvious questions: how did the FBI do it, will they tell Apple, did they find anything useful, etc.? I think there are deeper questions that really get to the full import of the break.
- How expensive is the attack?
- Security—and by extension, insecurity—are not absolutes. Rather, they're only meaningful concepts if they include some notion of the cost of an attack. If an attack is cheap, it can be used frequently; if it's expensive, it will be reserved for high-value targets.
We don't know anything about the cost. Did it require special hardware? Unusual skills?
- How long does the attack take to carry out?
- This attack took at most a few weeks to launch, probably less: the FBI would have wanted to test the exploit on unimportant phones before trying it on Farook's phone. But there's a big difference between an exploit that takes a few seconds and one that takes several days. The former is a risk to, for example, business travelers crossing a border; the latter would likely be used only if there is already good reason to think that valuable information will be retrieved.
- How easy is it to launch?
- Can the attack be launched remotely, say via a carefully crafted text message? Does it require tethering to a computer? Does it work if the phone has been pair-locked to a particular computer? Do chips have to be unsoldered? Is special equipment necessary?
These questions are interesting because together, they define the risk to everyone else if the hole is not patched. More importantly, the answers set the parameters of the vulnerabilities equities discussion without divulging information that the FBI may consider too sensitive to reveal as yet. A cheap, fast, easy attack is a serious risk; an expensive, slow, difficult attack is a risk to only a few.
1 February 2016
As someone who learned to program at age 14 and who benefited immensely from the opportunities my high school's computer provided, I think that it's a great idea to give more children a similar chance. Programming was more fun than just about anything else I'd ever done, and it quickly displaced math, physics, and law as possible career paths for me. (No, I probably would never have become a lawyer, since math and science were too much fun, but I was interested in law and policy even then.) And yes, quite obviously my career path was shaped by that early opportunity.
Given all that, I'm delighted by the White House's "Computer Science For All" initiative. Even people who don't become programmers—probably the large majority of students who will take these classes—will benefit from that sort of thinking. That said, I do have a few concerns.
- Teaching the teachers
- The White House recognizes that we need far more people qualified to teach computer science. It's a crucial need, but I wonder if $4 billion is nearly enough money. I wonder if we need another level: teaching the teachers who will teach the children's teachers.
The teachers have to really understand programming. I had another spot of luck when I was young: I had a relative who knew how to program and who could answer my questions. My career almost died aborning; there were two or three crucial ideas that I just didn't get at first. I don't know if I'd have been able to work past them on my own.
- Reteaching the teachers
- Computer science is incredibly dynamic, even at the introductory levels. Let's put it like this: the iPhone is less than 10 years old, but it's completely changed the industry. Teaching children to program but ignoring smart phones would be a bad idea, if only because they'll be less interested in the subject matter. But progress isn't stopping with smart phones; not very many years from now, school kids will want—need—to learn about programming the next big thing, whatever it will be. Internet of Things? Wearables? Programmable drones? I have no idea, but I'm sure it will happen.
In other words, the teachers are going to need frequent refreshers. The curriculum will also need frequent updates. There is in-service training today, but I suspect there will need to be more. In most subjects, the content of the course doesn't change drastically every five years; in computer science, it does. (Yes, programming is programming. But the details of what you program will change.)
In other words, the training budget has to be an ongoing commitment. Even if $4 billion is the right number now, more will be needed not very many years in the future.
- Buying Equipment
- Teaching programming requires computers and software. Computers age and they don't age gracefully; software is even worse. The hardware will need to be replaced every 4-5 years; the software will need to be upgraded every year or two. This, of course, also requires money.
I suspect that there also should be a subsidy for equipment and Internet connectivity for poorer households. You learn programming only by doing, and it takes hours of non-class time for every hour of instruction. Students who don't have easy access to current-enough computers and software won't learn.
In other words, teaching programming to all children will require a notable amount of extra money, on top of today's budgets, every single year. Furthermore, if extra funds are not allocated to poorer districts, much of the money spent there will be wasted and we'll worsen the digital divide.
I am not saying that the White House initiative is a bad idea—quite the contrary, in fact. I am saying it's just the down payment on a long-term effort. The challenge now is to identify where the continuing funding will come from. It might be reasonable to give priority on the initial outlays to districts and states that have identified and committed to a sustainable funding model—but again, this has to be done in a way that won't worsen poverty.
3 January 2016
In the debate over government "exceptional access" to encrypted communications, opponents with a technical bent (and that includes me) have said that it won't work: that such a scheme would inevitably lead to security problems. The response—from the policy side, not from technical folk—has been to assert that perhaps more effort would suffice. FBI Director James Comey has said, "But my reaction to that is: I'm not sure they've really tried." Hillary Clinton wants a "Manhattan-like project, something that would bring the government and the tech communities together". More effort won't solve the problem—but this misunderstanding lies at the heart of why exceptional access is so hard.
The Manhattan Project had to solve one problem. It was a very hard problem, one they didn't even know could be solved, but it was just one problem. Exceptional access is a separate problem for every app, every service, every device. Possibly, a few will get it right. More likely, they'll fail even more abysmally than they've failed at even simple encryption. Study ("Developers have botched encryption in seven out of eight Android apps and 80 percent of iOS apps") after study ("10,327 out of 11,748 applications that use cryptographic APIs—88% overall—make at least one mistake") after study ("root causes are not simply careless developers, but also limitations and issues of the current SSL development paradigm") after study ("We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries.") has shown the same thing: programmers don't get the crypto right—and these are primarily studies of apps that use standardized and well-understood protocols and APIs.
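Python's standard ssl module makes the failure mode concrete. The broken pattern that studies like these keep finding is a line or two away from the secure default (this snippet is my illustration, not taken from any of the cited studies):

```python
import ssl

# The recurring mistake: certificate checking switched off, often
# "temporarily" during development, and never turned back on.
insecure = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE  # accepts any certificate: trivially MITM-able

# What developers should write: the library default already validates
# both the certificate chain and the hostname.
secure = ssl.create_default_context()
assert secure.verify_mode == ssl.CERT_REQUIRED
assert secure.check_hostname
```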
Oppenheimer and company had the advantage of effectively unlimited resources. When confronted with two design choices, they could try both. This let them avoid dead ends, such as trying to build a gun-type bomb with plutonium. They could try gaseous diffusion, thermal diffusion, centrifuges, and electromagnetic separation for uranium enrichment. (The latter required more than $1 billion worth of silver wire—and they got it.) App developers don't have that luxury. Even if one or two do, they don't share their source code with each other. Besides, most developers don't know if they've gotten it right or wrong; the failure mode here is silent insecurity. Most of the time, holes like these are found by people who do a serious penetration study—and those people are generally the attackers.
One size doesn't fit all when it comes to cryptography. That's why cryptographic APIs are so complex. We can't solve the exceptional access problem once and for all, and individual efforts won't suffice for particular cases.