January 2008
Good NY Times Magazine Article on E-Voting (6 January 2008)
A License for Geiger Counters? (9 January 2008)
Hacking Trains (11 January 2008)
A New Internet Wiretapping Plan? (15 January 2008)
The CIA Blames Hackers for Power Outages (18 January 2008)
Apple Adds the Missing Applications to the iPod Touch (23 January 2008)
The Dangers of the Protect America Act (27 January 2008)
Massive Computer-Assisted Fraud (29 January 2008)

Good NY Times Magazine Article on E-Voting

6 January 2008

There’s a very good New York Times Magazine article on the problems with electronic voting machines. Others have blogged on it; I particularly recommend the Freedom to Tinker explanation of some things the article missed.

I’ll simply stress two points that are made in the article. First, even though I’m a security guy ("Paranoia is our Profession"), I think the biggest problem with e-voting machines is ordinary buggy code. Second, you’d think that computer scientists would be the strongest proponents of our own technology. That isn’t the case. Most (though not all) of us think such machines are far too unreliable. It may be possible to build really good e-voting machines, but that sort of programming is very expensive. By all accounts, that hasn’t even been tried. (In that vein, making voting machines open source won’t help, except to let everyone see how bad the code is. There are many good things about open source code, but it’s not a substitute for good practice and a lot of very hard work.)

The country finally seems to be moving to optical scanners; I agree that they’re the best choice. A crucial point will be a precise legislative definition of voter intent. Voting machines will need to be tested against this definition. We do not need the optical mark equivalent of hanging chads. (Aside: the very first law I ever read, some 40 years ago, was an amendment to New York’s election laws. It defined how to mark a paper ballot: two lines that touched or crossed within the designated box. Yes, that allows check marks and Xs; it also allows plus signs, inverted Vs, greater than and less than signs, etc. No matter — it’s a precise definition that includes all of the normal marks that people would make. Some years later, I worked as an observer during paper ballot counting in North Carolina. Yes, we all knew to challenge improperly-marked ballots that appeared to be for the other candidate…)
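
That definition is precise enough to check mechanically. Here is a minimal sketch, in Python, of how a scanner might apply the two-lines rule; the coordinates are hypothetical, and a real system would first have to extract the pen strokes from the scanned image:

    # A sketch of the New York rule: a mark counts if two pen strokes touch or
    # cross within the designated box.  Strokes are modeled as line segments;
    # all coordinates are hypothetical.

    def cross(ox, oy, ax, ay, bx, by):
        """2-D cross product of vectors OA and OB; the sign gives orientation."""
        return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

    def crossing_point(p1, p2, q1, q2):
        """Return the point where segments p1p2 and q1q2 cross, or None."""
        d1 = cross(*q1, *q2, *p1)
        d2 = cross(*q1, *q2, *p2)
        d3 = cross(*p1, *p2, *q1)
        d4 = cross(*p1, *p2, *q2)
        if (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0):
            # Proper crossing: interpolate along p1p2 to find the point.
            t = d1 / (d1 - d2)
            return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
        # Touching cases (an endpoint lying on the other segment) would need
        # extra collinearity checks, omitted here for brevity.
        return None

    def mark_counts(p1, p2, q1, q2, box):
        """box = (xmin, ymin, xmax, ymax).  True if the strokes cross inside it."""
        pt = crossing_point(p1, p2, q1, q2)
        if pt is None:
            return False
        xmin, ymin, xmax, ymax = box
        return xmin <= pt[0] <= xmax and ymin <= pt[1] <= ymax

    # An X drawn across a 10x10 box: the strokes cross at (5, 5), inside the box.
    print(mark_counts((0, 0), (10, 10), (0, 10), (10, 0), (0, 0, 10, 10)))  # True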

Finally, I want to stress the role of process. In an election, "process" includes things like properly accounting for all ballots and making sure that the ballot boxes are empty at the start of voting; it also includes random hand recounts of some precincts as a check on the automated scanners that will do most of the tallying.

Tags: voting

A License for Geiger Counters?

9 January 2008

In one of the stranger pieces of proposed "security" regulation, New York City is considering a bill to require licenses to possess detectors for biological, chemical or radiological attacks. The big question, of course, is why they want to do such a thing.

The rationale presented is unconvincing. The city wants to "prevent false alarms and unnecessary public concern by making sure that we know where these detectors are located and that they conform to standards of quality and reliability." Of course, restricting them is also a good way to prevent necessary public concern, though perhaps the latter could be more easily accomplished by regulating the Internet and assorted forms of mass media. Unconstitutional? Sure — but I’m not at all convinced that this idea would pass muster, either.

It’s not going to be light-weight regulation:

The Police Department would work with officials in the Departments of Fire, Health and Mental Hygiene and Environmental Protection, Dr. Falkenrath said, to "develop the appropriate standards for evaluating the applications, regarding not only the technical specifications for the detectors but also the applicant’s emergency response protocols."

In other words, you need to have a suitable plan for what you’ll do if you detect something. Devising such plans will be time-consuming for would-be owners; evaluating them — and re-evaluating the ones that failed to pass muster, and keeping up with the ever-changing technology of such devices — is going to keep the Police Department far too busy.

I suspect the bill will pass. It was proposed by the mayor’s office, and has the support of the City Council leadership. There is opposition, but it’s not the sort of issue to garner much public attention.

Now, there can be false alarms; in particular, there can be anomalous radiation detected that does not indicate any threat. But restricting possession of detectors won’t stop that; note that the detector here was a security guard’s. That phenomenon — a patient treated with a radioisotope setting off detectors — is sufficiently well known that doctors give patients notes to carry.

I had such treatment a few years ago, and was given such a note. (Well, the hospital I went to is less than 6 kilometers from where the incident I linked to took place, so they were undoubtedly very aware of the problem…) It’s an interesting exercise to consider what to do if a detector goes off. Imagine — I’m in New York Penn Station (and my daily commute goes through there), a police officer’s belt-worn scintillation detector goes off, and I’m stopped. The last thing I want to do is suddenly reach for my wallet.

Of course, the police have an equal problem. Do they blithely believe the information on the paper I’d show them? Do they call the phone number on it? The right answer is to look at the name of the facility where I was treated, independently check its number, and then call. Of course, that assumes that they can verify that it’s really a legitimate medical facility, and that they have some way to authenticate themselves to the facility to obtain confidential information about a patient…

It’s good that that law wasn’t in effect back then. You see, I borrowed a friend’s Geiger counter and tracked my radiation level over time…

It’s a semi-log plot. Theoretically, the level should be a straight line; I think what I actually measured is well within the margin of error. (The units on the X axis of the graph are arbitrary.) The drop-off is due both to radioactive decay and to excretion of the isotope.
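
For the curious, here is why a straight line is expected: physical decay and biological excretion are each roughly exponential, so their rates add. A small sketch of the arithmetic (the half-lives are illustrative only; 8 days happens to be iodine-131’s physical half-life, but I’m not saying what I received, and the biological figure is invented):

    import math

    # Physical decay and biological excretion are both (roughly) exponential,
    # so they combine into a single effective decay constant:
    #     A(t) = A0 * exp(-(lambda_phys + lambda_bio) * t)
    # which is a straight line on a semi-log plot.
    T_PHYS = 8.0    # physical half-life, days (illustrative)
    T_BIO = 12.0    # biological half-life, days (invented)

    lam = math.log(2) / T_PHYS + math.log(2) / T_BIO
    t_eff = math.log(2) / lam
    print(f"effective half-life: {t_eff:.1f} days")   # 4.8 days

    A0 = 1000.0  # initial reading, arbitrary units
    for day in range(0, 15, 2):
        a = A0 * math.exp(-lam * day)
        # log10 decreases by the same amount every step: a straight line.
        print(f"day {day:2d}: {a:8.1f}  log10 = {math.log10(a):.2f}")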

Hacking Trains

11 January 2008

The Register reports that a Polish teenager has been charged with hacking the Lodz tram network. Apparently, he built a device that could move the points (what Americans usually call switches), sending trams onto the wrong tracks. There were four derailments and twelve injuries.

The device is described in the original article as a modified TV remote control. Presumably, this means that the points are normally controlled by IR signals; what he did was learn the coding and perhaps the light frequency and amplitude needed. This makes a lot of sense; it lets tram drivers control where their trains go, rather than relying on an automated system or some such. Indeed, the article notes "a city tram driver tried to steer his vehicle to the right, but found himself helpless to stop it swerving to the left instead."
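
For readers unfamiliar with how consumer IR remotes work: the receiver sees bursts of modulated light separated by gaps, and the gap lengths encode the bits. Here is a minimal decoding sketch using the timings of the common NEC protocol; the tram system’s actual coding is not public, so this is purely an illustration of what "learning the coding" entails:

    # Consumer IR remotes (e.g., the NEC protocol) send a carrier burst
    # ("mark") followed by a silent gap ("space"); the gap length encodes
    # the bit.  Durations are in microseconds.  The timings below are NEC's,
    # used purely as an illustration -- the tram system's protocol is not
    # public.

    MARK = 560          # nominal burst length
    SPACE_0 = 560       # short gap  -> 0
    SPACE_1 = 1690      # long gap   -> 1
    TOLERANCE = 0.3     # accept +/- 30% timing error

    def near(measured, nominal):
        return abs(measured - nominal) <= nominal * TOLERANCE

    def decode(pulses):
        """pulses: list of (mark_us, space_us) pairs; returns the bits as an int."""
        value = 0
        for mark, space in pulses:
            if not near(mark, MARK):
                raise ValueError(f"unexpected mark length: {mark}")
            if near(space, SPACE_0):
                bit = 0
            elif near(space, SPACE_1):
                bit = 1
            else:
                raise ValueError(f"unexpected space length: {space}")
            value = (value << 1) | bit
        return value

    # Eight captured pairs, as a logic analyzer might report them:
    captured = [(560, 1690), (550, 560), (570, 1650), (560, 560),
                (560, 560), (555, 1700), (560, 560), (565, 1690)]
    print(f"decoded command: {decode(captured):08b}")   # 10100101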

Using IR signals to control traffic is reasonably common. In many parts of the world, emergency vehicles can use a device known as a MIRT (Mobile Infrared Transmitter) to turn lights green. Not surprisingly, these have been hacked; there are even plans available to build your own. Newer MIRT receivers use a more sophisticated encoding. In at least one system, emitters can be programmed to transmit a specific code number; that value is set by thumbwheels in the vehicle. It isn’t clear to me how the receiver value is changed; it doesn’t seem to be hard-coded in the device, so perhaps it can be downloaded and hence changed on a daily basis.

There are several lessons here. The first is that security through obscurity simply doesn’t work for SCADA systems, whether it’s a tram, a traffic light, or a sewage plant.

A second lesson is that security problems can have real-world consequences, such as injuries.

Finally, even though automated systems can have problems, the mere availability of a manual control doesn’t always protect you.

Tags: security

A New Internet Wiretapping Plan?

15 January 2008

Word is getting out about a new plan for large-scale tapping of the Internet. The New Yorker story says

"Ed Giorgio, who is working with McConnell on the plan, said that would mean giving the government the autority to examine the content of any e-mail, file transfer or Web search," author Lawrence Wright pens.

"Google has records that could help in a cyber-investigation, he said," Wright adds. "Giorgio warned me, ’We have a saying in this business: ‘Privacy and security are a zero-sum game.’

There are several interesting aspects here.

First, from a legal perspective there’s a difference between the government looking at e-mail and looking at Google searches. The former is governed by the Stored Communications Act (I won’t go into the legal technicalities; besides, some of these are still being litigated). Reading someone’s e-mail is considered an invasion of privacy, and a suitable court order is required.

Google searches, though, are considered "third party information". Under the doctrine set forth in Smith v. Maryland, 442 U.S. 735 (1979), someone who voluntarily gives information to a third party no longer has a privacy interest in it. To use the Supreme Court’s own analogy, it’s clear that if a librarian or research associate had been engaged to answer a question, that person could be subpoenaed, and the real target of the investigation would have no recourse. Why should the legal principles be different because Google has chosen to automate? Congress could make such access easier or harder — in United States v. Miller, 425 U.S. 435 (1976), the Supreme Court upheld the government’s right of access to financial records — but there are no constitutional barriers. Indeed, some would hold that the only protection of Google searches right now is the contract between Google and its users, though arguably Google could be considered a remote computing service and hence restricted by law in what they can give the government without a court order.

The next question is how such a plan would be implemented. Using wiretaps is the hard way; if you aren’t targeting a particular individual, you have to sift through an immense amount of information (and discard most of it), and you lack a lot of context. This was at the heart of many of the criticisms of Carnivore. Still, there is an existing legal framework. Wiretaps can be authorized under either existing criminal law wiretap procedures or the Foreign Intelligence Surveillance Act (FISA). There are also existing laws requiring predeployment of wiretap capability, e.g., CALEA.

On the other hand, a CALEA-like law for access to search engine data — that is, a prepositioned government path to the search data — carries its own set of risks. The risks are quite similar to those posed by CALEA: this is an intentional vulnerability that can be exploited by the wrong people. (That’s what happened to the Greek cellphone network.)

Regardless of how surveillance is done, we need to understand the oversight mechanism. A search warrant, after all, is fundamentally a barrier to unrestricted police powers. Even under FISA, a court must issue most warrants. Whether or not it’s true that "privacy and security are a zero-sum game", there needs to be some third party — probably a court — as a check or an oversight mechanism.

There are, then, three issues:

- the legal framework: what rules govern government access to e-mail as opposed to search data;
- the implementation: how the tapping would be done, and what new vulnerabilities any prepositioned access path would create;
- the oversight mechanism: what third party checks that the power is not abused.

Any informed public discussion must consider at least these points. (I’ll try to post again soon about the zero-sum game question.)

Update: the New Yorker article that had the original story is now online here. Also see this Washington Post story.

The CIA Blames Hackers for Power Outages

18 January 2008

According to the CIA, hackers have turned off power in some foreign cities as part of an extortion plot. The intrusions took place over the Internet.

It’s scary that this can happen, but it shouldn’t surprise anyone. Ten years ago, the National Security Agency conducted an operation known as Eligible Receiver, in which a team of simulated hackers showed that they could shut down the US power grid. Remember how much less use of the Internet there was then — and the system was still vulnerable.

It’s tempting to say that the operational networks for the power grid (or the financial system, or the railroads, or what have you) shouldn’t be connected to the public Internet. Unfortunately, that’s difficult to do, because there are operational needs for interconnection. For example, in some jurisdictions customers can switch among different power generating companies in real time. This isn’t just a billing matter, to be settled later; the total demand load on a given company has to be communicated to it, so that it can adjust the output of its generators. Even without that, there generally needs to be connectivity to internal corporate nets, so that engineers can monitor and adjust system performance.

Many people will respond that that doesn’t conflict with the ability to create separated nets. In theory, that’s true. In practice, maintaining the air gap is very hard. Even the Defense Department can’t always do it; viruses have spread to classified networks in the past.

As I noted a few days ago, computer security failures can have real-world consequences. This is yet another example.

Apple Adds the Missing Applications to the iPod Touch

23 January 2008

A while ago, I wrote about missing applications on the iPod Touch. Specifically, I wondered why the email, notes, map, and stock applications were missing. Apple has now added them via a software update. This is good; however, there are two possible flies in the ointment.

First, Apple is charging $20 for the update. Presumably, they’re charging because they’re adding new features; however, those features should have been there in the first place.

The second issue is more subtle and concerns the WiFi-based geolocation service on the new iPod Touch, which apparently uses technology developed by Skyhook Wireless. Skyhook Wireless has built up a database mapping WiFi access points to locations. When you initiate a location query, the iPod Touch (or the iPhone) listens for access points and sends the list to Skyhook Wireless; it replies with your approximate location. The question is what else happens to that data.
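
The principle is easy to sketch. Skyhook Wireless’s actual protocol and database are not public, so the version below simulates the lookup with a local table of invented access-point positions; a real client would send the list over the network:

    # A toy version of WiFi positioning: the client reports the BSSIDs (MAC
    # addresses) of the access points it can hear; the service looks each one
    # up in its wardriving database and returns a position estimate.  The
    # database here is a local dict with made-up entries; Skyhook's real
    # protocol and coverage data are not public.

    AP_DATABASE = {
        "00:11:22:33:44:55": (40.7505, -73.9935),   # hypothetical coordinates
        "66:77:88:99:aa:bb": (40.7508, -73.9940),
        "cc:dd:ee:ff:00:11": (40.7503, -73.9938),
    }

    def locate(observed_bssids):
        """Return the centroid of the known APs' positions, or None."""
        hits = [AP_DATABASE[b] for b in observed_bssids if b in AP_DATABASE]
        if not hits:
            return None   # fall back to IP-based geolocation
        lat = sum(p[0] for p in hits) / len(hits)
        lon = sum(p[1] for p in hits) / len(hits)
        return (lat, lon)

    # What a device might hear at some midtown street corner; unknown APs
    # are simply ignored.
    heard = ["00:11:22:33:44:55", "cc:dd:ee:ff:00:11", "de:ad:be:ef:00:00"]
    print(locate(heard))   # roughly (40.7504, -73.9936)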

Skyhook Wireless has a pretty good privacy policy. However, they inherently know your IP address, at least at some point — they couldn’t talk to you over the Internet without that — and IP addresses are sometimes (rightly) considered to be personal information. On the other hand, exactly how Skyhook Wireless treats the IP address is a bit confusing:

WPS may, if configured by the end-user application, also identify the Internet Protocol or IP address (a unique identifier assigned to any device accessing the Internet) of your computer. The identification of your IP address is a standard, commonly accepted practice used by a majority of the websites and services on the Internet.

I don’t know what it means to "identify" an IP address. The web page does note that if the access points within reach are not in their database, ordinary IP geolocation techniques — based on your IP address — may be used instead.

The question is what else can be done with your location data. The usual geotargeted advertising is permitted:

Skyhook may also use this information to deliver targeted, location-based advertising or to aggregate data about general usage (e.g., there are X number of WPS users in the 12345 zip code) to potential advertisers.

Furthermore, the data is accessible to law enforcement: "Skyhook may disclose location information if required to do so by law or in the good faith belief that such action is necessary to (a) conform to a legal order or comply with legal process served on Skyhook". But what is the legal standard for such court orders? It strikes me as likely that it would be treated as information voluntarily disclosed to a third party — Skyhook — and hence not within a subject’s expectation of privacy. This is unfortunate, since most people will have no idea what, if anything, is being transmitted or to whom (Apple’s web site sure doesn’t say), and a popular technology people are familiar with (GPS) is purely passive.

All that said, the new features make the iPod Touch very attractive. I’m still waiting to see what the terms and conditions are for the software development kit — but my iPod Nano is showing its age and may need replacing…

Tags: Apple

The Dangers of the Protect America Act

27 January 2008

Fundamentally, a wiretap is an intentional breach of security. It may be a desirable or even a necessary breach, but it is a breach nevertheless. Furthermore, the easier it is for the "good guys" to "break in", the easier it may be for the bad guys. The Greek cellphone tapping scandal is just one case in point.

There’s another, more subtle, problem: if your wiretap is done incorrectly, perhaps by relying on incorrect information, you may miss traffic that you’re entitled to hear (and should hear, to protect society).

The Protect America Act carries both risks. Matt Blaze, Whit Diffie, Susan Landau, Peter Neumann, Jennifer Rexford, and I have written an analysis of the dangers. It will appear soon in IEEE Security and Privacy; you can download a preprint here.

Massive Computer-Assisted Fraud

29 January 2008

Assorted business pages have been buzzing for the last several days about massive fraud at Société Générale, a major French bank. A mid-level trader allegedly exceeded his authorized access and cost the bank about €4.9 billion (~US$7.2 billion) via fraudulent and risky trades. Some analysts suspect that unwinding the mess contributed to the European stock market woes on 21 January.

What makes this story relevant to this blog is the computer angle. The person blamed had good computer skills:

Colleagues described him as a "computer genius" who was allegedly able to hack into the bank’s computers to hide his trading, until a basic slip-up on Friday, when he failed to disable the bank’s automatic alert system and his irregular trading suddenly showed up.

It might not have been technically sophisticated hacking; other references say that he wasn’t that good with computers. What he did, it appears, was use other people’s passwords.

There were other issues:

Even before his massive alleged fraud came to light, Kerviel had apparently triggered occasional alarms at Société Générale — France’s second-largest bank — with his trading, but not to a degree that led managers to investigate further.

"Our controls basically identified from time to time problems with this trader’s portfolio," Mustier said.

But Kerviel explained away the red flags as trading mistakes, Mustier added.

There are thus (at least) three issues. First, he was able to use other people’s passwords. The obvious security guy reaction to that is to say that some form of one-time password system should have been used. That would certainly be a good idea, but it isn’t clear that it would have solved the underlying problem. How common was sharing accounts or making improper trades at the bank? Did the corporate culture tolerate or even encourage such behavior? That’s a management failure, not a technical issue.
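
For reference, here is what I mean by a one-time password system. This is the standard HOTP construction of RFC 4226, sketched in Python; it is an illustration of the idea, not anything the bank deployed:

    import base64, hashlib, hmac, struct

    # The HOTP construction of RFC 4226: a shared secret plus a counter yields
    # a short one-time code.  Both sides advance the counter, so a captured
    # (or shoulder-surfed) code is useless for the next login.  The secret
    # below is an example value, not anything real.

    def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
        mac = hmac.new(key, msg, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                     # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    secret = "JBSWY3DPEHPK3PXP"      # example base32 secret
    for c in range(3):
        print(c, hotp(secret, c))    # a fresh 6-digit code per counter value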

There’s a second reason to wonder about management. The Wall Street Journal reports that the fraudulent trades started a year earlier than had been reported. He deflected management inquiries in a variety of ways; sometimes he’d "fabricate email messages from nonexistent trading partners to deflect supervisors’ concerns about unusual trades, a police official said." There are technical mechanisms that can guard against forged emails, but these have their limits; in particular, someone who creates a dummy company can "forge" email from it. The recipients would have to know that the company was fake to detect the problem.
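
It’s worth spelling out why signatures don’t help here. A signature proves that a message came from whoever holds the key, not that the keyholder is legitimate; a dummy company can mint its own key and sign its own mail. A sketch using the Python cryptography package (the message and scenario are invented):

    # A signature proves possession of a key, not legitimacy of the keyholder:
    # a dummy "trading partner" can generate its own keypair and sign away.
    # Uses the third-party 'cryptography' package; this is an illustration,
    # not anything the bank actually ran.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The fake company mints its own identity...
    fake_partner_key = Ed25519PrivateKey.generate()
    confirmation = b"We confirm the hedge on this trade."   # invented message
    signature = fake_partner_key.sign(confirmation)

    # ...and its signature verifies flawlessly.  The check that fails is not
    # cryptographic: does this counterparty actually exist?
    fake_partner_key.public_key().verify(signature, confirmation)  # no exception
    print("signature valid -- but from whom?")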

The third issue concerns the knowledge that he used. A bank executive said that Kerviel had used "knowledge of the bank’s risk-control software systems that he had gained from a previous back-office position". Ideally, security systems should work even against a knowledgeable attacker. But if he knew that procedures weren’t, in fact, followed — and again, that’s a management issue — he could easily have exploited the gap.

Tags: security