The Security Problem with HTML Email

14 May 2018

Purists have long objected to HTML email on aesthetic grounds. On functional grounds, it tempts too many sites to put essential content in embedded (or worse yet, remote) images, thus making the messages not findable via search. For these reasons, among others, Matt Blaze remarked that "I've long thought HTML email is the work of the devil". But there are inherent security problems, too (and that, of course, is some of what Matt was referring to). Why?

Although there are no perfect measures for how secure a system is, one commonly used metric is the "attack surface". While handling simple text email is not easy—have you ever read the complete specs for header lines?—it's a relatively well-understood problem. Web pages, however, are very complex. Worse yet, they can contain references to malicious content, sometimes disguised as ads. They thus have a very large attack surface.

Browsers, of course, have to cope with this, but there are two important defenses. First, most browsers check lists of known-bad web sites and won't go there without warning you. Second, and most critically, you have a choice—you can only be attacked by a site if you happen to visit it.

With email, you don't have that choice—the bad stuff comes to you. If your mailer is vulnerable—again, rendering HTML has a large attack surface—simply receiving a malicious email puts you at risk.
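To make the attack-surface point concrete, here is a minimal Python sketch, using only the standard library, that lists the remote references in a message's HTML body. (The file name is hypothetical, and real mail readers do far more than this.) Every reference it prints is something a mailer would fetch automatically if it rendered the message without restrictions, and every fetch is more code exposed to attacker-chosen input:

    # Minimal sketch: list remote references in an HTML email part.
    # A mail reader that renders HTML blindly would fetch all of these,
    # each one a tracking beacon and a bit more attack surface.
    from email import policy
    from email.parser import BytesParser
    from html.parser import HTMLParser

    class RemoteRefFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.refs = []

        def handle_starttag(self, tag, attrs):
            for name, value in attrs:
                if name in ("src", "href", "background") and value \
                        and value.lower().startswith(("http://", "https://")):
                    self.refs.append((tag, value))

    with open("suspect.eml", "rb") as f:  # hypothetical file name
        msg = BytesParser(policy=policy.default).parse(f)

    for part in msg.walk():
        if part.get_content_type() == "text/html":
            finder = RemoteRefFinder()
            finder.feed(part.get_content())
            for tag, url in finder.refs:
                print(f"remote reference in <{tag}>: {url}")

Many mailers now block remote content by default for exactly this reason, but the HTML rendering engine itself is still in the loop.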

Facebook's New Dating App

4 May 2018

I've been thinking about Facebook's new dating app. I suspect that it has the potential to be very good—or very, very bad.

Facebook is a big data company: they make their money because they can very precisely figure out what users will respond to. What if they applied that to online dating? Maybe it will look more like other dating apps, but remember how much Facebook knows about people. In particular, at this point it has many years of data not just on individuals, but on which of its users have partnered with which others, and (to some extent) on how long these partnerships last. That is, rather than code an algorithm that effectively says, "you two match on the following N points on your questions and answers", Facebook can run a machine learning algorithm that says "you two cluster with these other pairs who went on to serious relationships." (Three times already when typing this, my fingers typed "dataing" instead of "dating". Damn, make that four!)
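Purely as an illustration of what clustering pairs might look like (this is not a claim about Facebook's actual system, and the features and data below are invented), here is a minimal sketch in Python with scikit-learn:

    # Illustrative sketch only: cluster *pairs* of users by the joint
    # features of past relationships, then score a candidate pair by how
    # well its cluster did historically.  Features and data are invented;
    # this is not Facebook's algorithm.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Each row: joint features of a historical couple (age gap, shared
    # interests, interaction frequency, ...), plus a label for whether
    # the relationship lasted.
    pair_features = rng.normal(size=(1000, 8))
    lasted = rng.integers(0, 2, size=1000)      # 1 = long-term, 0 = flameout

    kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(pair_features)

    # Fraction of long-term relationships in each cluster.
    cluster_success = np.array([
        lasted[kmeans.labels_ == c].mean() for c in range(kmeans.n_clusters)
    ])

    def score_candidate_pair(joint_features):
        """Score a prospective couple by the historical success rate of
        the cluster their joint features fall into."""
        cluster = kmeans.predict(joint_features.reshape(1, -1))[0]
        return cluster_success[cluster]

    print(score_candidate_pair(rng.normal(size=8)))

Note what the sketch quietly assumes: someone decided what "lasted" means, and that choice is exactly what the system will optimize for.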

So what's wrong? Isn't that a goal of a dating app? Well, maybe. The thing about optimization is that you have to be very careful what you ask for—because you may get exactly that, rather than what you actually wanted. What will Facebook's metric for success be? A couple that seriously pairs off, e.g., moves in together and/or marries, fairly soon? A couple that starts more slowly but whose relationship lasts longer? A bimodal distribution of quick flameouts and long-term relationships? (Facebook says they're not trying for hookups, so I guess they don't need to buy data from Uber.)

There are, of course, all of the usual issues of preexisting human biases being amplified by ML algorithms, to say nothing of the many privacy issues here. I think, though, that the metric here is less obvious and more important. What is Facebook trying to maximize? And how will they profit from the answers?

Eran Tromer's Attack on Ray Ozzie's CLEAR Protocol

2 May 2018

There have been a number of mentions of an attack that Eran Tromer found against Ray Ozzie's CLEAR protocol, including in Steven Levy's Wired article and on my blog. However, there haven't been any clear descriptions of it.

Eran has kindly given me his description of it, with permission to publish it on my blog. The text below is his.



A fundamental issue with the CLEAR approach is that it effectively tells law enforcement officers to trust phones handed to them by criminals, and to give such phones whatever unlock keys they request. This provides a powerful avenue of attack for an adversary who uses phones as a Trojan horse.

For example, the following "man-in-the-middle" attack can let a criminal unlock a victim's phone that has come into their possession, if that phone is CLEAR-compliant. The criminal would turn on the victim's phone, perform the requisite gesture to display the "device unlock request" QR code, and copy this code. They would then program a new "relay" phone to impersonate the victim's phone: when the relay phone is turned on, it shows the victim's QR code instead of its own. (This behavior is not CLEAR-compliant, but that's not much of a barrier: the criminal can just buy a non-compliant phone or cobble one together from readily-available components.) The criminal would plant the relay phone in some place where law enforcement is likely to take a keen interest in it, such as a staged crime scene or near a foreign embassy. Law enforcement would diligently collect the phone and, under the CLEAR procedure, turn it on to retrieve the "device unlock request" QR code (which, unbeknownst to them, is actually the victim's code). Law enforcement would then obtain a corresponding search warrant, retrieve the unlock code from the vendor, and helpfully present it to the relay phone — which will promptly relay the code to the criminal, who can then enter the same code into the victim's phone. The victim's phone, upon receiving this code, will spill all its secrets to the criminal. The relay phone can even present law enforcement with a fake view of its own contents, so that no anomaly is apparent.

The good news is that this attack requires the criminal to go through the motions anew for every victim phone, so it cannot easily unlock phones en masse. Still, this would provide little consolation to, say, a victim whose company secrets or cryptocurrency assets have been stolen in a targeted attack.

It is plausible that such man-in-the-middle attacks can be mitigated by modern cryptographic authentication protocols coupled with physical measures such as tamper-resistant hardware or communication-latency measurements. But this is a difficult challenge that requires careful design and review, and it would introduce extra assumptions, costs, and fragility into the system. Blocking communication (e.g., using Faraday cages) is also a possible measure, though notoriously difficult, unwieldy, and expensive.

Another problem is that CLEAR phones must resist "jailbreaking", i.e., must not let phone owners modify the operating system or firmware on their own phones. This is because CLEAR critically relies on users not being able to tamper with their phones' unlocking functionality, and this functionality would surely be implemented in software, as part of the operating system or firmware, due to its sheer complexity (e.g., it includes the "device unlock request" screen, QR code recognition, cryptographic verification of unlock codes, and transmission of data dumps). In practice, it is well-nigh impossible to prevent jailbreaking in complex consumer devices, and even for state-of-the-art locked-down platforms such as Apple's iOS, jailbreak methods are typically discovered and widely circulated soon after every operating system update. Note that jailbreaking also exacerbates the aforementioned man-in-the-middle attack: to create the relay phone, the criminal may pick any burner phone from a nearby store; even if such phones are CLEAR-compliant by decree, jailbreaking one would allow it to be reprogrammed as a relay.

Additional risks stem from having attacker-controlled electronics operating within law enforcement premises. A phone can eavesdrop on investigators' conversations, or even steal private cryptographic keys from investigators' computers. For examples of how the latter may be done using a plain smartphone, or using hidden hardware that can fit in a customized phone, see http://cs.tau.ac.il/~tromer/acoustic, http://www.cs.tau.ac.il/~tromer/radioexp, and http://www.cs.tau.ac.il/~tromer/mobilesc. While prudent forensics procedures can mitigate this risk, these too would introduce new costs and complexity.

These are powerful avenues of attack, because phones are flexible devices with the capability to display arbitrary information, communicate wirelessly with adversaries, and spy on their environment. In a critical forensic investigation, you would never want to turn on a phone and run whatever nefarious or self-destructing software may be programmed in it. Moreover, the last thing you'd do is let a phone found on the street issue requests to a highly sensitive system that dispenses unlock codes (even if these requests are issued indirectly, through a well-meaning but hapless law enforcement officer who's just following procedure).

Indeed, in computer forensics, a basic precaution against such attacks is to never turn on the computer in an uncontrolled fashion; rather, you would extract its stored data and analyze it on a different, trustworthy computer. But the CLEAR scheme relies on keeping the phone intact, and even on turning it on and trusting it to communicate as intended during the recovery procedure. Telling the guards of Troy to bring any suspicious wooden horse inside the city walls, and to grant it an audience with the king, may not be the best policy solution to the "Going Greek" debate.
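To make the relay attack concrete, here is a minimal Python sketch of the message flow Eran describes. The class names, QR-code format, and vendor check are invented stand-ins, not the actual CLEAR design; the point is only that the unlock code issued for the victim's device ends up in the criminal's hands:

    # Toy model of the relay ("man-in-the-middle") attack described above.
    # All names and message formats are invented; the real protocol differs.

    class Phone:
        def __init__(self, device_id, secrets):
            self.device_id = device_id
            self.secrets = secrets

        def unlock_request_qr(self):
            # Shown after the "device unlock request" gesture.
            return f"UNLOCK-REQUEST:{self.device_id}"

        def enter_unlock_code(self, code):
            if code == f"UNLOCK-CODE:{self.device_id}":
                return self.secrets          # the phone spills its contents
            return None

    class RelayPhone(Phone):
        """Non-compliant phone programmed to impersonate the victim's phone."""
        def __init__(self, copied_victim_qr, report_to_criminal):
            super().__init__("relay-device", secrets="nothing interesting")
            self.copied_victim_qr = copied_victim_qr
            self.report_to_criminal = report_to_criminal

        def unlock_request_qr(self):
            return self.copied_victim_qr     # shows the victim's QR code

        def enter_unlock_code(self, code):
            self.report_to_criminal(code)    # forwards the code to the criminal
            return "fake, innocuous-looking contents"

    def vendor_issue_code(qr):
        # The vendor checks the warrant (not modeled) and returns the
        # unlock code for whatever device ID appears in the request.
        device_id = qr.split(":", 1)[1]
        return f"UNLOCK-CODE:{device_id}"

    # The attack, step by step:
    victim = Phone("victim-device", secrets="company keys, wallet seed")
    stolen_qr = victim.unlock_request_qr()          # criminal copies the QR code

    captured = {}
    relay = RelayPhone(stolen_qr, lambda code: captured.update(code=code))

    # Law enforcement recovers the planted relay phone and follows procedure:
    code = vendor_issue_code(relay.unlock_request_qr())
    relay.enter_unlock_code(code)                   # they see only fake contents

    # The criminal now unlocks the real victim phone:
    print(victim.enter_unlock_code(captured["code"]))   # the victim's secrets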

Ray Ozzie's Proposal: Not a Step Forward

25 April 2018

Steven Levy has just published an article describing a new proposal by Ray Ozzie to solve the exceptional access problem. I don't have time today for a detailed answer, but there are two points I want to make.

Ozzie presented his proposal at a meeting at Columbia—I was there—to a diverse group. Levy wrote that Ozzie felt that he had "taken another baby step in what is now a two-years-and-counting quest" and that "he'd started to change the debate about how best to balance privacy and law enforcement access". I don't agree. In fact, I think that one can draw the opposite conclusion.

At the meeting, Eran Tromer found a flaw in Ozzie's scheme: under certain circumstances, an attacker can get an arbitrary phone unlocked. That in itself is interesting, but to me the important thing is that a flaw was found. Ozzie has been presenting his scheme for quite some time. I first heard it last May, at a meeting with several brand-name cryptographers in the audience. No one spotted the flaw. At the January meeting, though, Eran squinted at it and looked at it sideways—and in real-time he found a problem that everyone else had missed. Are there other problems lurking? I wouldn't be even slightly surprised. As I keep saying, cryptographic protocols are hard.

The other point is that security is a systems problem. To give just one example, the international problem alone is a killer issue. If the United States adopts this scheme, other countries, including specifically Russia and China, are sure to follow. Would they consent to a scheme that relied on the cooperation of an American company, and with keys stored in the U.S.? Almost certainly not. Now: would the U.S. be content with phones unlockable only with the consent and cooperation of Russian or Chinese companies? I can't see that, either. Maybe there's a solution, maybe not—but the proposal is silent on the issue.

Crypto War III: Assurance

24 March 2018

For decades, academics and technologists have sparred with the government over access to cryptographic technology. In the 1970s, when crypto started to become an academic discipline, the NSA was worried, fearing that they'd lose the ability to read other countries' traffic. And they acted. For example, they exerted pressure to weaken DES. From a declassified NSA document (Thomas R. Johnson, American Cryptology during the Cold War, 1945-1989: Book III: Retrenchment and Reform, 1972-1980, p. 232):


(For my take on other activity during the 1970s, see some class slides.)

The Second Crypto War, in the 1990s, is better known today, with the battles over the Clipper Chip, export rules, etc. I joined a group of cryptographers in criticizing the idea of key escrow as insecure. When the Clinton administration dropped the idea and drastically restricted the scope of export restrictions on cryptographic technology, we thought the issue was settled. We were wrong.

In the last several years, the issue has heated up again. A news report today says that the FBI is resuming the push for access:

F.B.I. and Justice Department officials have been quietly meeting with security researchers who have been working on approaches to provide such "extraordinary access" to encrypted devices, according to people familiar with the talks.

Based on that research, Justice Department officials are convinced that mechanisms allowing access to the data can be engineered without intolerably weakening the devices' security against hacking.

I'm as convinced as ever that "exceptional access"—a neutral term, as opposed to "back doors", "GAK" (government access to keys), or "golden keys", and first used in a National Academies report—is a bad idea. Why? Why do I think that the three well-respected computer scientists mentioned in the NY Times article (Stefan Savage, Ernie Brickell, and Ray Ozzie) who have proposed schemes are wrong?

I can give my answer in one word: assurance. When you design a security system, you want to know that it will work correctly, despite everything adversaries can do. In my view, cryptographic mechanisms are so complex and so fragile that tinkering with them to add exceptional access seriously lowers their assurance level, enough so that we should not have confidence that they will work correctly. I am not saying that these modified mechanisms will be insecure; rather, I am saying that we should not be surprised if and when that happens.

History bears me out. Some years ago, a version of the Pretty Good Privacy system that was modified to support exceptional access could instead give access to attackers. The TLS protocol, which is at the heart of web encryption, had a flaw that is directly traceable to the 1990s requirement for weaker, export-grade cryptography. That's right: a 1994 legal mandate—one that was abolished in 2000—led to a weakness that was still present in 2015. And that's another problem: cryptographic mechanisms have a very long lifetime. In this case, the issue was something known technically as a "downgrade attack", where an intruder in the conversation forces both sides to fall back to a less secure variant. We no longer need export ciphers and hence have no need to even negotiate the issue—but the protocol still supported that negotiation, and in an insecure fashion. Bear in mind that TLS has been proven secure mathematically—and it still had this flaw.
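To show the shape of a downgrade attack (this is a toy negotiation, not the real TLS handshake or the specifics of the export-cipher attacks), consider the following sketch. A man in the middle edits the client's offer, and the server happily picks the weak suite, because in this toy protocol nothing protects the negotiation itself from tampering:

    # Toy sketch of a downgrade attack.  This is not TLS; it only shows
    # the shape of the problem: an intruder edits the offered options and
    # neither side notices.

    STRONG = "AES256"
    EXPORT = "EXPORT-RSA-512"        # weak, 1990s export-grade suite

    def client_hello():
        # The client offers everything it supports, strongest first.
        return [STRONG, EXPORT]

    def attacker_in_the_middle(offered):
        # The intruder strips the strong suites from the offer.
        return [s for s in offered if s == EXPORT]

    def server_choose(offered):
        # The server picks the best suite it sees in the (tampered) offer.
        return STRONG if STRONG in offered else EXPORT

    offer = client_hello()
    tampered = attacker_in_the_middle(offer)
    print("negotiated suite:", server_choose(tampered))   # EXPORT-RSA-512

Real TLS does authenticate the handshake transcript at the end, which is why the actual attacks also needed additional protocol and implementation flaws; but the lesson is the same: the weak option was still there to be chosen, long after anyone needed it.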

There are thus many reasons to be skeptical about not just the new proposals mentioned in the NY Times article but about the entire concept of exceptional access. In fact, a serious flaw has been found in one of the three. Many cryptographers, including myself, had seen the proposal—but someone else, after hearing a presentation about it for the first time, found a problem in about 15 minutes. This particular flaw may be fixable, but will the fix be correct? I don't think we have any way of knowing: cryptography is a subtle discipline.

So: the risk we take by mandating exceptional access is that we may never know if there's a problem lurking. Perhaps the scheme will be secure. Perhaps it will be attackable by a major intelligence agency. Or perhaps a street criminal who has stolen the phone, or a spouse or partner, will be able to exploit it, with the help of easily downloadable software from the Internet. We can't know for sure, and the history of the field tells us that we should not be sanguine. Exceptional access may create far more problems than it solves.

Ed Felten to be Named as a PCLOB Board Member

13 March 2018

Today, the White House announced that Ed Felten would be named as a member of the Privacy and Civil Liberties Oversight Board. That's wonderful news, for many reasons.

First, Ed is superbly qualified for the job. He not only has deep knowledge of technology, he understands policy and how Washington works. Second, there really are important technical issues in PCLOB's work—that's why I spent a year there as their first Technology Scholar.

But more importantly, Ed's appointment is a sign that computer science technical expertise is finally being valued in Washington at the policy level. My role was very explicitly not to set or opine on policy; rather, I looked at the technical aspects and explained to the staff and the Board what I thought they implied. The Board, though, made the policy decisions.

Ed will now have a voice at that level. That's good, partly because he is, as I said, so very well qualified, but also because he will likely be the first of many technologists in such roles. For years, I and many others have been calling for such appointments. I'm glad that one has finally happened.