29 August 2007
The Electronic Frontier Foundation has obtained documents on the FBI's DCS-3000 system. They're nicely summarized in a Wired story. In addition, Matt Blaze has written about technical weaknesses in the wiretap technology used. I won't repeat what they've done so well.
I'm concerned about a longer-term issue: I don't think the FBI really understands computer security. More precisely, while parts of the organization seem to, the overall design of the DCS-3000 system shows that when it comes to building and operating secure systems, they just don't get it.
The most obvious example is the account management scheme described in the DCS-3000 documents: there are no unprivileged userids. In fact, there are no individual userids; rather, there are two privileged accounts. Each has different powers; however, as the documents themselves note, each can change the other's permissions to restore the missing abilities. Where is the per-user accountability? Why should ordinary users run in privileged mode at all? The answers are simple and dismaying.
Instead of personal userids, the FBI relies on log sheets. This may provide sufficient accountability if everyone follows the rules. It provides no protection against rule-breakers. It is worth noting that Robert Hanssen obtained much of the information he sold to the Soviets by exploiting weak permission mechanisms in the FBI's Automated Case System. The DCS-3000 system doesn't have proper password security mechanisms, either, which brings up another point: why does a high-security system use passwords at all? We've known for years how weak they are. Why not use smart cards for authentication?
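To make the contrast concrete, here's a minimal sketch (my illustration, not anything from the DCS-3000 documents) of challenge-response authentication of the kind a smart card performs. The names (`Card`, `Server`, the userid) are hypothetical; the point is that the secret never leaves the token and no reusable password ever crosses the network, so there's nothing to write on a sticky note or replay.

```python
import hashlib
import hmac
import secrets

class Card:
    """Hypothetical token: holds a secret key that never leaves it."""
    def __init__(self, key: bytes):
        self._key = key

    def respond(self, challenge: bytes) -> bytes:
        # Compute an HMAC over the server's fresh challenge.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Server:
    """Verifier: knows each user's key, issues a fresh challenge per attempt."""
    def __init__(self, user_keys: dict):
        self._user_keys = user_keys  # userid -> shared key

    def authenticate(self, userid: str, card: Card) -> bool:
        challenge = secrets.token_bytes(32)  # unpredictable, used once
        response = card.respond(challenge)
        expected = hmac.new(self._user_keys[userid], challenge,
                            hashlib.sha256).digest()
        # Constant-time comparison; a recorded response is useless
        # against the next (different) challenge.
        return hmac.compare_digest(response, expected)
```

Note that this design also gives you per-user accountability for free: the server knows exactly which userid authenticated, which is precisely what shared privileged accounts and paper log sheets cannot guarantee.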
We can't even rely on just the log sheets: the systems support remote access, via unencrypted telnet.
Any security specialist will tell you that this design is a recipe for disaster. Indeed, the FBI's own security audit, as documented in the released documents, makes some of these very points. The problem is that the system was misdesigned in the first place.
There's another side to the problem, though: worries about threats that aren't particularly serious. The CI-100 component — a so-called "data diode" for moving data between different classification levels — is built from two Windows machines that are required to have anti-virus software. Why? These machines are forwarding data at the packet level. They are not receiving email, browsing the web, serving users, etc. Where will virus infections come from? It's not that it hurts to have anti-virus software, but requiring it makes me wonder how good the threat analysis is. And if viruses are a threat, why are Windows boxes used? It's not that other systems are necessarily more secure (though I can make a good case for that); however, viruses simply aren't a real-world threat. Furthermore, generic Windows machines are notoriously hard to lock down. Yes, there's some benefit to using a familiar platform, but this is a very specialized need.
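For what it's worth, a true data diode is conceptually tiny. Here's a hedged sketch (my own construction, not the CI-100 design; the addresses are hypothetical) of a one-way packet forwarder: it reads datagrams on the low side and emits them on the high side, with no reverse channel at all. UDP fits because it requires no return traffic. A machine whose only job is this has a very small attack surface, which is why the anti-virus requirement seems mismatched to the threat.

```python
import socket

# Hypothetical addresses for illustration only.
LOW_SIDE = ("127.0.0.1", 9000)   # where low-side traffic arrives
HIGH_SIDE = ("127.0.0.1", 9001)  # where it is re-emitted, one way

def forward_one(rx: socket.socket, tx: socket.socket) -> bytes:
    """Forward a single datagram from the low side to the high side.

    No data ever flows back toward the low side: the receiving socket
    is never written to, and the sending socket is never read.
    """
    data, _addr = rx.recvfrom(65535)  # take a packet from the low side
    tx.sendto(data, HIGH_SIDE)        # push it up; no acks, no replies
    return data
```

In hardware, the same guarantee is enforced physically (e.g., a fiber link with the return path absent), which is stronger than anything software can promise.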
My biggest concern, though, lies in the words of one of the FBI's own security evaluations: the biggest threat is from insiders. The network is properly encrypted for protection against outside attackers. The defenses against insiders — yes, rogue FBI agents or employees — are far too weak.
To sum up: we have a system that accesses very sensitive data, with few technical protections against inside attacks, and generic defenses that don't seem to fit the threat model.
Update: The Department of Justice's Office of the Inspector General has released a new report on the FBI's vulnerability to espionage from within. The report points out continuing serious problems with the Bureau's Automated Case Support (ACS) system, and calls for (among other things) "a third-party audit program to detect and give notice of unauthorized access to sensitive cases on a real-time basis". You can't do that with manual log sheets.
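What real-time, third-party auditing requires is machine-readable records that insiders can't quietly rewrite. A standard building block is a hash-chained log; the sketch below is my illustration of the idea, not anything from the OIG report. Each record binds a userid, the case accessed, and the hash of the previous record, so a retroactive edit breaks the chain and an independent auditor can detect it immediately.

```python
import hashlib
import json
import time

class AuditLog:
    """Hash-chained audit log: each record commits to its predecessor."""
    GENESIS = b"\x00" * 32

    def __init__(self):
        self.records = []        # list of (body, digest) pairs
        self._prev = self.GENESIS

    def append(self, userid: str, case_id: str) -> None:
        body = json.dumps({"user": userid, "case": case_id,
                           "time": time.time()}).encode()
        # The digest covers the previous digest plus this record's body,
        # chaining every entry to all entries before it.
        digest = hashlib.sha256(self._prev + body).digest()
        self.records.append((body, digest))
        self._prev = digest

    def verify(self) -> bool:
        """Recompute the chain; any altered or deleted record breaks it."""
        prev = self.GENESIS
        for body, digest in self.records:
            if hashlib.sha256(prev + body).digest() != digest:
                return False
            prev = digest
        return True
```

A third-party auditor holding only the latest digest can verify the whole history; contrast that with a paper log sheet, where a rogue insider simply doesn't sign.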