The FBI's request for single-warrant, remote computer searches: Examining the technical issues
With little fanfare, zero congressional review or debate, and barely any public awareness, the FBI is requesting a rule change that would grant it broad powers to remotely search multiple computers, regardless of their location, on a single warrant. The implications are far-reaching and apt to affect not only suspected criminals but the innocent as well, including victims of hackers and botnets. Setting aside the Fourth Amendment, privacy concerns, and potential diplomatic consequences, there are technical reasons to oppose the rule change as proposed. Remote searches require the installation of software, or malware, which often causes unintended computer problems. What provisions prevent damage to files or programs? How will computer owners be notified? These and other technical questions are raised in a comments document by Steven M. Bellovin, Matt Blaze (University of Pennsylvania), and Susan Landau (Worcester Polytechnic Institute) and summarized here.
Today when the FBI wants to search a computer, it first obtains a search warrant from a judge within the district where the computer is located. If a second computer is to be searched and is located in a different district than the first, the FBI must go to a judge in that district for a search warrant.
Once the FBI has a seized computer in its possession, the gold standard for forensic procedures is to create a perfect image copy and search the copy, not the original. This prevents evidence from being altered—viewing files changes time-stamp information, for example—and compromising the prosecution's case.
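The image-and-verify practice can be sketched in a few lines of Python. This is a hedged illustration, not the FBI's actual tooling: a plain file stands in for the seized drive (real acquisitions read the raw block device through a hardware write blocker), and the helper name `sha256_of` is invented for the example.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large images need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "original.img")
copy = os.path.join(workdir, "copy.img")

# Stand-in for the seized drive's contents.
with open(original, "wb") as f:
    f.write(os.urandom(1 << 20))

# Bit-for-bit duplicate; all examination happens on the copy.
shutil.copyfile(original, copy)

# Matching digests show the copy is faithful; examiners record the hash so
# any later alteration of the evidence is detectable.
assert sha256_of(original) == sha256_of(copy)
print("image verified:", sha256_of(copy))
```

Recording the hash at acquisition time is what makes the copy defensible in court: any subsequent change to the working image, accidental or otherwise, no longer matches the recorded digest.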
One problem in all this is that the location of the computer can't always be known. Criminals use Tor and other anonymizing software to deliberately disguise their IP addresses and locations. Another problem is investigating botnets, where large numbers of computers are spread over multiple districts, making it inconvenient to obtain search warrants from judges in each district.
For this reason, the FBI is proposing a change to Rule 41(b) of the Federal Rules of Criminal Procedure—the terms under which the FBI is allowed to conduct searches—that will allow the FBI in cases when an IP address is being disguised to go to any judge in any district to get a single warrant that would allow it to remotely search one or more computers in or outside that district.
The proposed rule change is being sought from the relatively unknown Advisory Committee on Criminal Rules, an administrative office of the courts. This committee is currently considering comments on the proposed rule change until February 17, 2015.
Among those providing comments are Steven M. Bellovin, Matt Blaze (University of Pennsylvania), and Susan Landau (Worcester Polytechnic Institute). Their document Comments on Proposed Remote Search Rules examines the technical issues raised by the proposed rule change.
The innocent, too
Before getting into the technical issues raised in Comments on Proposed Remote Search Rules, it's important to point out that the proposed rule change will affect not only suspected criminals. The comments document points out that innocent people, too, act to disguise their locations and IP addresses. Dissidents, human rights activists, whistleblowers, and journalists often deliberately disguise their identities and locations, especially in countries where their safety is at risk. The intent is to hide, but the motive is to maintain personal safety, not to harm others.
Users of VPNs (virtual private networks) would also fall into the category of those disguising their IP addresses. But here again the motive is not criminal. People use VPNs for a variety of innocent reasons, including accessing a work LAN (local area network) or protecting data or communications when connecting to untrusted public networks (hotels are notorious for not securing their networks).
Victims of botnets and other hacking schemes would also be potential targets of the FBI's expanded computer searches since their computers might contain evidence of a crime or pointers that lead back to the botnet's command-and-control node. The comments document points out that a single warrant to cover all nodes in a botnet will sweep up large numbers of innocent people, potentially hundreds of thousands and even millions. Thus the technical issues associated with remote searches extend potentially to the innocent as well.
Software installation carries risks
Remote computer searches require the installation of surveillance software. Because this software performs actions unwanted by or unknown to the computer owner, the comments document notes that the FBI's surveillance software meets at least some definitions of malware.
Install new software, especially third-party software written to run on multiple platforms, and there's a strong possibility something will break. Almost everyone has experienced such a problem: the cardiologist whose image-reading program stops working after a new browser version is installed, or the writer whose Office files refuse to open after an upgrade to a new version of Windows.
It's the nature of software. Every computer is different, and everyone is running a different version or patch of an operating system, browser, or program. The network environment, again different for everyone, adds another layer of complexity. There is no lab in the world that can test for every case. Even Apple with its vast resources and its reputation for quality is not immune. The company's iOS 8.0.1 release last year, presumably after much testing, broke the ability of some iPhones to make phone calls.
The comments document cites two other well-known cases to illustrate the point. The Stuxnet virus, surreptitiously installed on computers in the Iranian Natanz nuclear plant in 2009, worked as planned to cause the centrifuges to spin out of control. Unplanned was a bug that caused some computers to arbitrarily reboot. Plant engineers, noticing the strange reboot behavior, not the centrifuge problem, sent a computer off to a Belarusian security firm for testing. Only then was the attack software found.
In 2005, hackers installed unauthorized software on a mobile phone switch operated by Vodafone Greece. This software allowed the hackers to essentially hijack the intercept mechanism (meant for law enforcement purposes) to secretly wiretap 100 prominent people, including the prime minister of Greece. The problem was discovered only when the hackers upgraded the planted software and in the process inadvertently interfered with the forwarding of texts. It is not at all surprising to the technical community that software related to one part of the system (the intercept mechanism) broke an unrelated feature (text forwarding).
Remote search software is even more likely to cause problems because it runs as a "root" or administrator program in order to override file protections. The software won't cause problems on every computer, but install it on enough computers and someone, somewhere, is going to have problems.
From the perspective of the user whose computer stops working correctly, it may make little difference whether the responsible software was installed by the FBI or the cybercriminal down the block.
The problem of notification
Given that remote searches can cause damage, it seems only right that the FBI notify computer owners of a remote search. After all, as the comments document points out, traditional search warrants generally require notice to the target, including a receipt for items seized. The innocent, whose computers are searched only for evidence of a crime done by another, should be extended the same courtesy. But how?
The comments document notes four possible notification methods: a file installed on the computer, a pop-up notification window, email, and a physical letter. Each is problematic. Few people will notice a new file. Pop-up notifications and emails might be disregarded by users who may naturally assume they are the work of hackers. (The FBI in the past has warned of malicious spam email purporting to be from the FBI.) Physical letters are time-consuming and require the cooperation of ISPs to match the IP address to a physical one. It's not hard to imagine that ISPs might find such requests burdensome, especially when they number in the hundreds of thousands.
The proposed rule change acknowledges the difficulty of notification, requiring only that the "executing officer make reasonable efforts to provide notice," conceding "the officer may be unable to provide notice of the warrant" when it's not possible to reasonably determine the owner's whereabouts.
Most owners, it seems likely, will never be notified.
A general warrant for the computer age?
Underlying the whole issue is the lack of explicit statutory authority for computer searches, and a lack of guidance on what restrictions apply.
Physical searches are limited in scope by the Fourth Amendment's specificity requirement, which provides explicit guidance as to what can and cannot be searched. A search warrant for a particular house applies only to that house, not to adjacent buildings or to an offsite storage unit rented by the homeowner. Evidence seized in a search not covered by a warrant is often inadmissible in court under the exclusionary rule.
No explicit decision has ever been made on what specificity means in a computer context. Today a warrant to search a computer is often treated as carte blanche to search everywhere on the computer, with the "plain view" exception used to justify opening and looking through all files. Someone accused of a drug offense, whose searched computer also gives up child pornography, may be charged with this second crime. While it is hard to sympathize with a suspected criminal accused of two crimes rather than one, scale makes a difference, and technology is all about scale.
Imagine a warrant to search for evidence of a 100,000-member botnet. In an instant, with no effective way to notify computer owners presumed to be innocent victims, the FBI releases surveillance software onto 100,000 computers. While the initial goal is only to find botnet evidence, what happens when such searches yield evidence of tax evasion, purchases of Class A drugs, multiple prescriptions for opioids, or threatening emails? The proposed rule change does not address this issue.
In its one-size-fits-all approach with no explicit limits to what can be searched and targeting both the presumed guilty and the presumed innocent, a remote computer search begins to resemble a general warrant, the very thing the Fourth Amendment was intended to prevent.
Technology makes the difficult easy, but technology is not infallible. Bugs in the surveillance software or the examination process can affect results or make it easy to imperceptibly exceed the original scope of a warrant. Human mistakes—mistyped IP addresses, misspelled names—creep in.
Protecting the innocent from such errors becomes harder because it is not easy to ascertain how the evidence was collected. The FBI keeps its methods tightly under wraps, concealing even their existence and general principles of operation. A defendant believing evidence to be in error needs detailed technical information about how a search was conducted in order to determine the source of the error and analyze the scope of the intrusion.
Other law enforcement techniques are transparent while remaining effective. Everybody knows the police look at fingerprints and DNA, but this doesn't stop such evidence from being useful. Because the science is well understood and known to be reliable, juries and others trust these methods, when followed correctly, to decide guilt or innocence. When procedures are not followed correctly, the reliability of the evidence can be called into question, allowing defense attorneys an opportunity to cast doubt in the minds of juries.
This is the adversarial process guaranteed by the Constitution, but it breaks down when the methods used to procure evidence remain hidden. Prosecutors should be just as concerned as those charged with a crime. Evidence gathered through unknown means may make prosecutions more difficult. As well it should when it's not clear what software is being used, whether it has been adequately tested for reliability or veracity, or whether a forensic lab is using it correctly.
A need for discussion
The comments document acknowledges that law enforcement, not surprisingly, views remote computer searches as a boon to crime-fighting. For less effort than it takes to search the image copy of a single seized computer, it's possible to search millions of computers, collecting more evidence more quickly and in a less noticeable way.
But just because technology makes remote searches easy and unobtrusive, should we allow such searches? To do so means to casually disregard constitutional protections and legal processes that have been in place for generations. National sovereignty also begins to lose meaning and force when searches of computers easily extend beyond national borders. While individual computer owners may have little recourse or few options, there may be serious consequences to ignoring national borders. We risk alienating allies, and we give other countries a pretext to retaliate in kind.
There are choices. While technology may make it easy to circumvent legal and constitutional procedures, it can also be used to bolster those protections, providing solutions for gathering evidence that are less intrusive and destructive than remote searches. In the case of botnets, honeypot machines—programmed to act like normal computers for the express purpose of becoming infected—have been shown to work effectively in monitoring and locating botnets. Engaging the technical community may well result in other technologies that aid law enforcement in less intrusive and less risky ways, more in line with the legal process than remote computer searches.
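As a rough illustration of the honeypot idea, the sketch below opens a listening socket, records a single contact attempt, and never acts on the payload. It is a hedged skeleton under stated assumptions—loopback only, one connection, invented names throughout—whereas real botnet honeypots emulate vulnerable services and log far richer detail.

```python
import socket
import threading

def serve(srv: socket.socket, log: list, max_conns: int) -> None:
    """Accept connections and record each contact; capture, never execute, the payload."""
    for _ in range(max_conns):
        conn, addr = srv.accept()
        data = conn.recv(4096)
        log.append({"peer": addr[0], "payload": data})
        conn.close()

log = []
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
srv.listen(5)
port = srv.getsockname()[1]

t = threading.Thread(target=serve, args=(srv, log, 1))
t.start()

# Simulated bot contacting the honeypot.
bot = socket.create_connection(("127.0.0.1", port))
bot.sendall(b"JOIN #botnet-c2")
bot.close()

t.join()
srv.close()
print("contacts logged:", log)
```

The design point is that all evidence comes from machines the investigators own: the honeypot observes the botnet's behavior without installing anything on a victim's computer.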
The authors of Comments on Proposed Remote Search Rules do not oppose remote computer searches in principle, acknowledging that such methods are sometimes necessary to locate those who actively hide their locations to disguise criminal activity that affects us all. The question, however, is whether the extraordinary step of multi-district, single-warrant, remote searches—with all the risks such searches carry—should be applied as a matter of course even when searching bystanders' computers.
Maybe the answer in the end is "yes" or more likely "yes in certain cases." But the question needs to be posed in the first place, in stark and honest terms, so that the public debate produces the final answer. The authors suggest, and have made the argument at greater length elsewhere, that a legislative fix would be best given the intrusiveness of remote computer searches.
About the authors of Comments on Proposed Remote Search Rules
Steven M. Bellovin is a researcher on computer networking and security, and why the two don't get along. He is a Professor in the Computer Science department at Columbia University. Prior to joining the faculty at Columbia in 2005, he worked for many years at Bell Labs and AT&T Labs where he earned distinction as an AT&T Fellow.
He has long been interested in public policy. In 2012, Bellovin began serving as Chief Technologist of the Federal Trade Commission. He is a member of the National Academy of Engineering, and he serves on the Computer Science and Telecommunications Board of the National Academies, the Department of Homeland Security's Science and Technology Advisory Committee, and the Election Assistance Commission's Technical Guidelines Development Committee.
In 2007, Bellovin received the NIST/NSA National Computer Systems Security Award. He holds numerous other awards and distinctions. Along with Tom Truscott and Jim Ellis, he was awarded The Usenix Lifetime Achievement Award, The Flame, for his efforts in creating USENET. He has been actively involved with the Internet Engineering Task Force (IETF), most notably in areas pertaining to security. He also served on the Internet Architecture Board from 1996 to 2002 and as Security Area co-Director with the Internet Engineering Steering Group (IESG) from 2002 to 2004.
In addition to holding a number of patents on cryptographic and network protocols, Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker. He has been a member of numerous National Research Council Study committees during his professional career.
Matt Blaze is a cryptology expert and a computer science professor at the University of Pennsylvania, where he directs the Distributed Systems Lab. His research focuses on the architecture and design of secure systems based on cryptographic techniques, analysis of secure systems against practical attack models, and on finding new cryptographic primitives and techniques. He was a designer of swIPe, a predecessor of the now standard IPsec protocol for protecting Internet traffic. Another project, CFS, investigated and demonstrated the feasibility of including encryption as a file system service.
He is a member of the Institute for Medicine and Engineering and the author of numerous papers, many dealing with public policy issues, especially those that concern security technology and surveillance. He often contributes articles to Wired magazine.
Susan Landau is a professor in the Department of Social Science and Policy Studies at Worcester Polytechnic Institute, where she works in cybersecurity, privacy, and public policy.
She previously served as Senior Staff Privacy Analyst at Google and was a Visiting Scholar at the Computer Science Department, Harvard University, in 2012.
In 2010-2011, she was a Fellow at the Radcliffe Institute for Advanced Study at Harvard, where she investigated issues involving security of government systems, and their privacy and policy implications.
A 2012 Guggenheim fellow, Landau is the recipient of the 2008 Women of Vision Social Impact Award, and is also a fellow of the American Association for the Advancement of Science and the Association for Computing Machinery.
She is the author of Surveillance or Security?: The Risks Posed by New Wiretapping Technologies (2013) and co-wrote (with Whitfield Diffie) Privacy on the Line: The Politics of Wiretapping and Encryption.