By Eran Tromer
There have been a number of mentions of an attack that Eran Tromer found against Ray Ozzie’s CLEAR protocol, including in Steven Levy’s Wired article and on my blog. However, there haven’t been any clear descriptions of it.
Eran has kindly given me his description of it, with permission to publish it on my blog. The text below is his.
A fundamental issue with the CLEAR approach is that it effectively tells law enforcement officers to trust phones handed to them by criminals, and to give such phones whatever unlock keys they request. This provides a powerful avenue of attack for an adversary who uses phones as a Trojan horse.
For example, the following "man-in-the-middle" attack can let a criminal unlock a victim’s phone that has come into their possession, if that phone is CLEAR-compliant. The criminal would turn on the victim’s phone, perform the requisite gesture to display the "device unlock request" QR code, and copy this code. They would then program a new "relay" phone to impersonate the victim’s phone: when the relay phone is turned on, it shows the victim’s QR code instead of its own. (This behavior is not CLEAR-compliant, but that’s not much of a barrier: the criminal can just buy a non-compliant phone or cobble one together from readily-available components.)

The criminal would plant the relay phone in some place where law enforcement is likely to take keen interest in it, such as a staged crime scene or near a foreign embassy. Law enforcement would diligently collect the phone and, under the CLEAR procedure, turn it on to retrieve the "device unlock request" QR code (which, unbeknownst to them, is actually the victim’s code). Law enforcement would then obtain a corresponding search warrant, retrieve the unlock code from the vendor, and helpfully present it to the relay phone — which will promptly relay the code to the criminal, who can then enter the same code into the victim’s phone. The victim’s phone, upon receiving this code, will spill all its secrets to the criminal. The relay phone can even present law enforcement with a fake view of its own contents, so that no anomaly is apparent.
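The structure of this relay attack can be made concrete with a toy model. Everything here is illustrative: CLEAR's actual message formats and escrow mechanism are not public, so the classes, names, and the idea of a per-device escrowed unlock code are simplifying assumptions, chosen only to show why handing the unlock code to an untrusted device defeats the scheme.

```python
# Toy model of the relay ("man-in-the-middle") attack described above.
# All names and message formats are hypothetical simplifications.
import secrets

class Phone:
    """A CLEAR-style phone: shows an unlock-request code tied to its
    identity, and spills its contents when given the matching unlock code."""
    def __init__(self, contents):
        self.device_id = secrets.token_hex(8)
        self.contents = contents
        # Assumption: the vendor escrows a per-device unlock code.
        self.unlock_code = secrets.token_hex(16)

    def show_unlock_request_qr(self):
        return self.device_id  # the QR code encodes the device identity

    def try_unlock(self, code):
        return self.contents if code == self.unlock_code else None

class Vendor:
    """The vendor's escrow service: maps device IDs to unlock codes,
    released only upon a (modeled) valid warrant."""
    def __init__(self, phones):
        self.escrow = {p.device_id: p.unlock_code for p in phones}

    def unlock_code_for(self, qr, warrant_ok=True):
        assert warrant_ok
        return self.escrow[qr]

class RelayPhone:
    """Non-compliant phone that replays a victim's QR code and leaks
    whatever unlock code law enforcement enters into it."""
    def __init__(self, victim_qr):
        self.victim_qr = victim_qr
        self.leaked_code = None

    def show_unlock_request_qr(self):
        return self.victim_qr          # impersonate the victim's phone

    def try_unlock(self, code):
        self.leaked_code = code        # relay the code to the criminal
        return {"notes": "nothing suspicious"}  # fake view of contents

# The attack, step by step:
victim = Phone(contents={"wallet_seed": "correct horse battery staple"})
vendor = Vendor([victim])

qr = victim.show_unlock_request_qr()   # criminal copies the victim's QR code
relay = RelayPhone(qr)                 # ...and programs the relay phone

# Law enforcement follows the CLEAR procedure on the planted relay phone:
code = vendor.unlock_code_for(relay.show_unlock_request_qr())
relay.try_unlock(code)                 # the relay leaks the code

# The criminal enters the leaked code into the victim's actual phone:
stolen = victim.try_unlock(relay.leaked_code)
```

Note that every party other than the relay phone follows its prescribed role honestly; the attack succeeds because nothing binds the unlock code to the physical device in front of the officer.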
The good news is that this attack requires the criminal to go through the motions anew for every victim phone, so it cannot easily unlock phones en masse. Still, this would provide little consolation to, say, a victim whose company secrets or cryptocurrency assets have been stolen by a targeted attack.
It is plausible that such man-in-the-middle attacks can be mitigated by modern cryptographic authentication protocols coupled with physical measures such as tamper-resistant hardware or communication latency measurements. But this is a difficult challenge that requires careful design and review, and would introduce extra assumptions, costs and fragility into the system. Blocking communication (e.g., using Faraday cages) is also a possible measure, though notoriously difficult, unwieldy and expensive.
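To illustrate what a latency-based defense might look like, here is a generic distance-bounding-style sketch — not anything in CLEAR. It assumes (hypothetically) that the device holds a secret key in tamper-resistant hardware and that an unlock terminal can issue rapid challenges; a relay forwarding challenges to a remote phone over a network adds round-trip latency and fails the timing check.

```python
# Sketch of a distance-bounding challenge-response presence check.
# The key material, round count, and latency bound are all illustrative.
import hashlib
import hmac
import secrets
import time

def respond(device_key, challenge):
    """Prover side: answer a challenge using the device-resident key."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify_presence(device_key, respond_fn, max_rtt_s=0.001, rounds=16):
    """Accept only if every challenge is answered correctly AND within
    max_rtt_s. A relay that forwards challenges to a distant device
    incurs extra round-trip time and is rejected."""
    for _ in range(rounds):
        challenge = secrets.token_bytes(16)
        t0 = time.perf_counter()
        response = respond_fn(challenge)
        rtt = time.perf_counter() - t0
        if rtt > max_rtt_s:
            return False  # too slow: likely relayed
        if not hmac.compare_digest(response, respond(device_key, challenge)):
            return False  # wrong key: not the claimed device
    return True
```

Real distance-bounding protocols are far more delicate than this (they bound single bits at radio timescales, and must resist early-response and guessing strategies), which is part of why the text above calls this a difficult design-and-review challenge rather than a ready solution.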
Another problem is that CLEAR phones must resist "jailbreaking", i.e., must not let phone owners modify the operating system or firmware on their own phones. This is because CLEAR critically relies on users not being able to tamper with their phones’ unlocking functionality, and this functionality would surely be implemented in software, as part of the operating system or firmware, due to its sheer complexity (e.g., it includes the "device unlock request" screen, QR code recognition, cryptographic verification of unlock codes, and transmission of data dumps). In practice, it is well-nigh impossible to prevent jailbreaking in complex consumer devices, and even for state-of-the-art locked-down platforms such as Apple’s iOS, jailbreak methods are typically discovered and widely circulated soon after every operating system update. Note that jailbreaking also exacerbates the aforementioned man-in-the-middle attack: to create the relay phone, the criminal may pick any burner phone from a nearby store, and even if such phones are CLEAR-compliant by decree, jailbreaking them would allow them to be reprogrammed as a relay.
Additional risks stem from having attacker-controlled electronics operating within law enforcement premises. A phone can eavesdrop on investigators’ conversations, or even steal private cryptographic keys from investigators’ computers. For examples of how the latter may be done using a plain smartphone, or using hidden hardware that can fit in a customized phone, see http://cs.tau.ac.il/~tromer/acoustic, http://www.cs.tau.ac.il/~tromer/radioexp, and http://www.cs.tau.ac.il/~tromer/mobilesc. While prudent forensics procedures can mitigate this risk, these too would introduce new costs and complexity.
These are powerful avenues of attack, because phones are flexible devices with the capability to display arbitrary information, communicate wirelessly with adversaries, and spy on their environment. In a critical forensic investigation, you would never want to turn on a phone and run whatever nefarious or self-destructing software may be programmed into it. Moreover, the last thing you’d do is let a phone found on the street issue requests to a highly sensitive system that dispenses unlock codes (even if these requests are issued indirectly, through a well-meaning but hapless law enforcement officer who’s just following procedure).
Indeed, in computer forensics, a basic precaution against such attacks is to never turn on the computer in an uncontrolled fashion; rather, you would extract its storage data and analyze it on a different, trustworthy computer. But the CLEAR scheme relies on keeping the phone intact, and even on turning it on and trusting it to communicate as intended during the recovery procedure. Telling the guards of Troy to bring any suspicious wooden horse inside the city walls, and to grant it an audience with the king, may not be the best policy solution to the "going dark" debate.