We often see news stories about one security flaw or another in some important package or system. Such stories are always depressing, but this week was worse than most for government systems.
The most publicized story, of course, was yet more reports of flaws in electronic voting machines. I and others have reported on that story. What’s so sad is that it isn’t a new problem, even conceptually. People have been warning about it for almost 20 years. Peter Neumann’s 1993 paper set forth security criteria for computerized voting systems; also see its bibliography. Of particular interest is Ronnie Dugger’s 1988 article in the New Yorker. Rebecca Mercuri and David Dill are two other early voices warning of the problem. But nothing much seems to have happened.
An official audit of a DHS border-control system concluded that "weaknesses existed in all control areas and computing device types reviewed." Predictably, DHS officials downplayed the risk, saying "the report raised many hypothetical problems and overstated others, because few outsiders can gain access to the system’s computers." That completely ignores insider attacks, of course, but that isn’t the whole problem; the investigators found that the system had inadequate physical controls and poor separation from other networks. On top of that, the cryptography was laughably poor: a single key was used for all traffic, certificates were not properly distributed, the same certificate was used for clients and servers, etc.
In the auditors’ words, "these weaknesses collectively increase the risk that unauthorized individuals could read, copy, delete, add, and modify sensitive information."
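The report gives no details of the system’s key management, but the single-shared-key flaw has a standard fix: derive a distinct key for each link or session from a master secret, so that compromising one endpoint doesn’t expose everyone’s traffic. A minimal sketch using HKDF (RFC 5869) over Python’s standard library; the "link:station-A" style labels are hypothetical, not from the report:

```python
import hashlib
import hmac
import os

def hkdf_sha256(master: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869): extract-then-expand with HMAC-SHA256."""
    # Extract: concentrate the master secret into a pseudorandom key.
    prk = hmac.new(salt, master, hashlib.sha256).digest()
    # Expand: generate output blocks bound to the caller-supplied context label.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = os.urandom(32)
salt = os.urandom(16)

# Distinct context labels yield independent keys from the same master secret.
key_a = hkdf_sha256(master, salt, b"link:station-A")
key_b = hkdf_sha256(master, salt, b"link:station-B")
assert key_a != key_b
```

Because the context label is mixed into the derivation, no two links share a key, yet key distribution still only requires provisioning the one master secret.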
Other countries probably have similar problems. A German security researcher, Lukas Grunwald, demonstrated that he could crash passport readers. New passports store pictures and fingerprints as JPEG files readable via an RFID chip; Grunwald cloned a legitimate chip but substituted a corrupted JPEG file. There is quite likely a penetration vulnerability, too; as he notes, "if you’re able to crash something you are most likely able to exploit it."
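Grunwald hasn’t published exactly what the corrupted file triggers, but crashes of this sort typically come from a parser trusting an attacker-controlled length field. A sketch, in Python for clarity, of a tag-length-value parser that includes the bounds check a vulnerable C reader would omit; the record format here is invented for illustration, not the actual passport encoding:

```python
def parse_tlv(buf: bytes) -> list[tuple[int, bytes]]:
    """Parse tag-length-value records, rejecting lengths that overrun the buffer."""
    records, offset = [], 0
    while offset < len(buf):
        if offset + 3 > len(buf):
            raise ValueError("truncated record header")
        tag = buf[offset]
        length = int.from_bytes(buf[offset + 1:offset + 3], "big")
        offset += 3
        # This is the check whose absence lets a hostile length field
        # drive a C parser past the end of its buffer.
        if offset + length > len(buf):
            raise ValueError("declared length overruns buffer")
        records.append((tag, buf[offset:offset + length]))
        offset += length
    return records

good = bytes([0x01, 0x00, 0x03]) + b"abc"   # tag 1, declares 3 bytes, has 3
bad = bytes([0x01, 0xFF, 0xFF]) + b"abc"    # tag 1, declares 65535, has 3
parse_tlv(good)
try:
    parse_tlv(bad)
except ValueError:
    pass  # malformed input rejected instead of over-read
```

In a memory-safe language the worst case is an exception; in C, the same missing comparison can mean an over-read or overflow, which is exactly why a crash so often signals something exploitable.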
Want some more? The US Internal Revenue Service, it turns out, is vulnerable to social engineering attacks. In a recent official audit, 60% of employees tested changed their passwords as instructed by a caller who claimed to be from the help desk.
There is no one solution to the problems described above. Some solutions are obvious: get rid of C to deal with the (probable) buffer overflow problem in the passport reader, improve employee training about passwords (better yet, switch to two-factor authentication), etc. It’s also clear that we need more research on the subject; I’ll blog about that in the near future.

But one point is worth stressing now: given the difficulty of writing correct, secure software, it can’t be done cheaply. Low-bid systems will never be secure. We do know something about building reliable software: think how rarely we see critical failures in phone switches, avionics, etc. Note well: I am not saying that any of these systems are perfect. They’re not, and the failures have been copiously reported in RISKS Digest. But they are a lot better than most of what we use.

Similarly, voting machines, DHS border control systems, passport readers, etc., can be a lot better than they are today (though for voting machines it’s unclear if they can be enough better than simpler, semi-manual alternatives). But we, as a society, have to want this badly enough to pay for it. Do we? Should we? Contemplating the consequences of any of these systems being compromised, I think the answer is obvious.