14 May 2018
Purists have long objected to HTML email on aesthetic grounds. On functional grounds, it tempts too many sites to put essential content in embedded (or worse yet, remote) images, thus making the messages not findable via search. For these reasons, among others, Matt Blaze remarked that "I've long thought HTML email is the work of the devil". But there are inherent security problems, too (and that, of course, is some of what Matt was referring to). Why?
Although there are no perfect measures for how secure a system is, one commonly used metric is the "attack surface". While handling simple text email is not easy—have you ever read the complete specs for header lines?—it's a relatively well-understood problem. Web pages, however, are very complex. Worse yet, they can contain references to malicious content, sometimes disguised as ads. They thus have a very large attack surface.
Browsers, of course, have to cope with this, but there are two important defenses. First, most browsers check lists of known-bad web sites and won't go there without warning you. Second, and most critically, you have a choice—you can only be attacked by a site if you happen to visit it.
With email, you don't have that choice—the bad stuff comes to you. If your mailer is vulnerable—again, rendering HTML has a large attack surface—simply receiving a malicious email puts you at risk.
4 May 2018
I've been thinking about Facebook's new dating app. I suspect that it has the potential to be very good—or very, very bad.
Facebook is a big data company: they make their money because they can very precisely figure out what users will respond to. What if they applied that to online dating? Maybe it will look more like other dating apps, but remember how much Facebook knows about people. In particular, at this point it has many years of data not just on individuals, but on which of its users have partnered with which others, and (to some extent) on how long these partnerships last. That is, rather than code an algorithm that effectively says, "you two match on the following N points on your questions and answers", Facebook can run a machine learning algorithm that says "you two cluster with these other pairs who went on to serious relationships." (Three times already when typing this, my fingers typed "dataing" instead of "dating". Damn, make that four!)
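The difference between the two approaches can be sketched in a few lines. Everything here is invented for illustration (the features, the pair representation, the toy history); it is emphatically not Facebook's actual algorithm, just a minimal example of scoring a candidate couple by its resemblance to historical couples rather than by direct question-and-answer agreement:

```python
# Hypothetical sketch: instead of counting questionnaire agreements
# between two people, score a candidate pair by how closely it
# resembles historical pairs that went on to lasting relationships.
# Features, data, and method are all invented for illustration.

def pair_features(a, b):
    """Combine two users' feature vectors into one pair vector:
    elementwise average plus elementwise absolute difference."""
    avg = [(x + y) / 2 for x, y in zip(a, b)]
    diff = [abs(x - y) for x, y in zip(a, b)]
    return avg + diff

def distance(u, v):
    """Euclidean distance between two pair vectors."""
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def score_pair(a, b, history):
    """1-nearest-neighbor against labeled historical pairs:
    returns 1.0 if the closest known pair lasted, else 0.0."""
    candidate = pair_features(a, b)
    nearest = min(history, key=lambda rec: distance(candidate, rec[0]))
    return nearest[1]

# Toy history: features might be age, introversion, activity level, ...
history = [
    (pair_features([30, 0.8, 0.2], [31, 0.7, 0.3]), 1.0),  # lasted
    (pair_features([22, 0.1, 0.9], [40, 0.9, 0.1]), 0.0),  # flamed out
]

print(score_pair([29, 0.75, 0.25], [32, 0.7, 0.3], history))  # -> 1.0
```

The point of the sketch is the shift in what gets compared: the candidate pair is never matched question-by-question against each other, only against the recorded outcomes of other pairs.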
So what's wrong? Isn't that a goal of a dating app? Well, maybe. The thing about optimization is that you have to be very careful what you ask for—because you may get exactly that, rather than what you actually wanted. What will Facebook's metric for success be? A couple that seriously pairs off, e.g., moves in together and/or marries, fairly soon? A couple that starts more slowly but whose relationship lasts longer? A bimodal distribution of quick flameouts and long-term relationships? (Facebook says they're not trying for hookups, so I guess they don't need to buy data from Uber.)
There are, of course, all of the usual issues of preexisting human biases being amplified by ML algorithms, to say nothing of the many privacy issues here. I think, though, that the metric here is less obvious and more important. What is Facebook trying to maximize? And how will they profit from the answers?
2 May 2018
There have been a number of mentions of an attack that Eran Tromer found against Ray Ozzie's CLEAR protocol, including in Steven Levy's Wired article and on my blog. However, there haven't been any clear descriptions of it.
Eran has kindly given me his description of it, with permission to publish it on my blog. The text below is his.
A fundamental issue with the CLEAR approach is that it effectively tells law enforcement officers to trust phones handed to them by criminals, and to give such phones whatever unlock keys they request. This provides a powerful avenue of attack for an adversary who uses phones as a Trojan horse.
For example, the following "man-in-the-middle" attack can let a criminal unlock a victim's phone that has come into their possession, if that phone is CLEAR-compliant. The criminal would turn on the victim's phone, perform the requisite gesture to display the "device unlock request" QR code, and copy this code. They would then program a new "relay" phone to impersonate the victim's phone: when the relay phone is turned on, it shows the victim's QR code instead of its own. (This behavior is not CLEAR-compliant, but that's not much of a barrier: the criminal can just buy a non-compliant phone or cobble one from readily-available components.) The criminal would plant the relay phone in some place where law enforcement is likely to take keen interest in it, such as a staged crime scene or near a foreign embassy. Law enforcement would diligently collect the phone and, under the CLEAR procedure, turn it on to retrieve the "device unlock request" QR code (which, unbeknownst to them, is actually the victim's code). Law enforcement would then obtain a corresponding search warrant, retrieve the unlock code from the vendor, and helpfully present it to the relay phone — which will promptly relay the code to the criminal, who can then enter the same code into the victim's phone. The victim's phone, upon receiving this code, will spill all its secrets to the criminal. The relay phone can even present law enforcement with a fake view of its own contents, so that no anomaly is apparent.
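The flow of that attack can be simulated in a few dozen lines. The message formats, class names, and procedure below are invented stand-ins for the CLEAR scheme (whose actual details are not public in this form); the sketch only shows the structural flaw: the unlock code is bound to the request QR code, not to the physical device that displayed it, so whoever holds the phone that generated the code can use it.

```python
# Toy simulation of the relay ("man-in-the-middle") attack.
# All message formats here are invented for illustration.

class Phone:
    def __init__(self, serial, secrets):
        self.serial = serial
        self.secrets = secrets

    def unlock_request_qr(self):
        # Shown after the requisite owner-visible gesture.
        return f"UNLOCK-REQ:{self.serial}"

    def enter_unlock_code(self, code):
        if code == f"UNLOCK-CODE:{self.serial}":
            return self.secrets  # phone spills its contents
        return None

def vendor_issue_code(qr, warrant_ok=True):
    # The vendor checks the warrant, then issues a code for whatever
    # request it is shown -- it cannot tell which device displayed it.
    assert warrant_ok
    serial = qr.split(":")[1]
    return f"UNLOCK-CODE:{serial}"

# Step 1: criminal copies the QR code off the stolen victim phone.
victim = Phone("VICTIM-123", secrets="company keys")
stolen_qr = victim.unlock_request_qr()

# Step 2: a non-compliant relay phone impersonates the victim phone.
relay_channel = []  # wireless link back to the criminal

class RelayPhone(Phone):
    def unlock_request_qr(self):
        return stolen_qr                   # victim's code, not its own
    def enter_unlock_code(self, code):
        relay_channel.append(code)         # forward the code out
        return "innocuous fake contents"   # nothing looks anomalous

relay = RelayPhone("RELAY-999", secrets=None)

# Step 3: law enforcement follows the CLEAR procedure on the planted
# relay phone: read the QR code, get a warrant, present the code.
qr = relay.unlock_request_qr()
code = vendor_issue_code(qr)
relay.enter_unlock_code(code)

# Step 4: the criminal replays the forwarded code on the real phone.
loot = victim.enter_unlock_code(relay_channel[0])
print(loot)  # -> company keys
```

Nothing in the procedure gives the vendor or the officers a way to notice that the QR code and the device presenting it don't belong together; that binding is exactly what the cryptographic and physical countermeasures discussed below would have to supply.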
The good news is that this attack requires the criminal to go through the motions anew for every victim phone, so it cannot easily unlock phones en masse. Still, this would provide little consolation to, say, a victim whose company secrets or cryptocurrency assets have been stolen by a targeted attack.
It is plausible that such man-in-the-middle attacks can be mitigated by modern cryptographic authentication protocols coupled with physical measures such as tamper-resistant hardware or communication latency measurements. But this is a difficult challenge that requires careful design and review, and would introduce extra assumptions, costs, and fragility into the system. Blocking communication (e.g., using Faraday cages) is also a possible measure, though notoriously difficult, unwieldy, and expensive.
Another problem is that CLEAR phones must resist "jailbreaking", i.e., must not let phone owners modify the operating system or firmware on their own phones. This is because CLEAR critically relies on users not being able to tamper with their phones' unlocking functionality, and this functionality would surely be implemented in software, as part of the operating system or firmware, due to its sheer complexity (e.g., it includes the "device unlock request" screen, QR code recognition, cryptographic verification of unlock codes, and transmission of data dumps). In practice, it is well-nigh impossible to prevent jailbreaking in complex consumer devices, and even for state-of-the-art locked-down platforms such as Apple's iOS, jailbreak methods are typically discovered and widely circulated soon after every operating system update. Note that jailbreaking also exacerbates the aforementioned man-in-the-middle attack: to create the relay phone, the criminal may pick any burner phone from a nearby store, and even if such phones are CLEAR-compliant by decree, jailbreaking them would allow them to be reprogrammed as a relay.
Additional risks stem from having attacker-controlled electronics operating within law enforcement premises. A phone can eavesdrop on investigators' conversations, or even steal private cryptographic keys from investigators' computers. For examples of how the latter may be done using a plain smartphone, or using hidden hardware that can fit in a customized phone, see http://cs.tau.ac.il/~tromer/acoustic, http://www.cs.tau.ac.il/~tromer/radioexp, and http://www.cs.tau.ac.il/~tromer/mobilesc. While prudent forensics procedures can mitigate this risk, these too would introduce new costs and complexity.
These are powerful avenues of attack, because phones are flexible devices with the capability to display arbitrary information, communicate wirelessly with adversaries, and spy on their environment. In a critical forensic investigation, you would never want to turn on a phone and run whatever nefarious or self-destructing software may be programmed in it. Moreover, the last thing you'd do is let a phone found on the street issue requests to a highly sensitive system that dispenses unlock codes (even if these requests are issued indirectly, through a well-meaning but hapless law enforcement officer who's just following procedure).
Indeed, in computer forensics, a basic precaution against such attacks is to never turn on the computer in an uncontrolled fashion; rather, you would extract its storage data and analyze it on a different, trustworthy computer. But the CLEAR scheme relies on keeping the phone intact, and even on turning it on and trusting it to communicate as intended during the recovery procedure. Telling the guards of Troy to bring any suspicious wooden horse within the city walls, and to grant it an audience with the king, may not be the best policy solution to the "Going Greek" debate.