Facebook's Initiative Against "Revenge Porn"

16 November 2017

There’s been a bit of a furor recently over what Facebook calls its "Non-Consensual Intimate Image Pilot". Is it a good idea? Does it cause more harm than good?

There’s no doubt whatsoever that "revenge porn"—intimate images uploaded against the will of the subject, often by an unhappy former partner—is a serious problem. Uploading such images to Facebook is often worse than uploading them elsewhere on the web, because they’re more likely to be seen by friends and family of the victim, multiplying the embarrassment. I thus applaud Facebook for trying to do something about the problem. However, I have some concerns and questions about the design as described thus far. This is a pilot; I hope there will be more information and perhaps some changes going forward.

My evaluation criterion is very simple: will this scheme help more than harm? I’m not asking how effective the scheme is; any improvement is better than none. I’m not asking if Facebook is doing this because they really care, or because of external pressure, or because they fear people leaving their platforms if the problem isn’t addressed. Those are internal questions; Facebook as a corporation is more competent to evaluate those issues than I am.

There are two obvious limitations that I’m very specifically not commenting on: first, that Facebook is only protecting images posted on one of their platforms, rather than scouring the web; second, that the victim has to have a copy of the images in question. Handling those two cases as well would be nice—but they’re not doing it, and I won’t speculate here on why, or on whether they should.

I should also note that I have a great deal of respect for Facebook’s technical prowess. It is somewhere between quite possible and very probable that they’ve already considered and rejected some of my suggestions, simply because they don’t work well enough. More transparency on these aspects would be welcome, if only to dispel people’s doubts.

The process, as described, involves several steps; my comments on them follow.

The part that concerns me the most is the image submission process. I’m extremely concerned about new phishing scams. How will people react to email messages touting a "new, one-step image submission site", one that handles all social networks and not just Facebook? The two-step process here—a web site plus an unusual action on Facebook—would seem to exacerbate this risk; people could be lured to a fake website for either step. The experience with the US government-mandated portal for free annual credit reports doesn’t reassure me; there are numerous scam versions of the real site. A single-button submission portal would, I suspect, be better. Does Facebook have evidence to the contrary? What do they plan to do about this problem?

There has been criticism of the need for an upload process. Some have suggested doing the hashing on the submitter’s device. Facebook has responded that if the hashing algorithm were public, people would figure out ways around it. I’m not entirely convinced. After all, it has been a principle of cryptographic design since 1883 that "There must be no need to keep the system secret, and it must be able to fall into enemy hands without inconvenience."
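
For concreteness, here is roughly what device-side hashing could look like. Facebook has said nothing about its actual algorithm; this sketch uses dHash, a well-known public perceptual-hashing technique, purely as a stand-in, and assumes the Python Pillow library:

    from PIL import Image

    def dhash(img, hash_size=8):
        # Shrink to a tiny grayscale grid; this discards fine detail,
        # so re-encoding or mild resizing barely changes the hash.
        small = img.convert("L").resize((hash_size + 1, hash_size),
                                        Image.LANCZOS)
        px = list(small.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = px[row * (hash_size + 1) + col]
                right = px[row * (hash_size + 1) + col + 1]
                # One bit per adjacent-pixel comparison: 64 bits in all.
                bits = (bits << 1) | int(left > right)
        return bits

    def hamming(h1, h2):
        # Bits that differ; a small distance means "probably the same image."
        return bin(h1 ^ h2).count("1")

    # Matching is on closeness, not equality; the threshold here is assumed:
    # hamming(dhash(Image.open("a.jpg")), dhash(Image.open("b.jpg"))) <= 10

Note that nothing in such a design needs to be secret; only the database of submitted hashes does, which is just what Kerckhoffs’s principle asks for.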

However… It may very well be that Facebook’s hash algorithm does not meet Kerckhoffs’s principle, as it is known, but that they don’t know how to do better. Fair enough—but at some point, the algorithm is likely to leak, or people will find, by trial and error, images that get through. Still, under my evaluation criterion—is this initiative better than nothing?—Facebook has taken the right approach. If the algorithm leaks or if people work around it, we’re no worse off than we are today. In the meantime, keeping it secret delays that day, and if Facebook is indeed capable of protecting the images for the short time they’re on its servers (and they probably are), there is no serious incremental risk.
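
To see why trial and error is a plausible workaround, consider this sketch of an evasion loop against the illustrative dhash() above (again, an assumption, not Facebook’s actual system). Small geometric edits leave an image recognizable to a human while pushing its hash past any fixed match threshold:

    from PIL import Image

    def evade(img, blocked_hash, threshold=10):
        # Apply progressively larger rotations until the perceptual hash
        # no longer matches the blocked one.
        for angle in (1, 2, 3, 5, 10):
            candidate = img.rotate(angle, expand=True)
            if hamming(dhash(candidate), blocked_hash) > threshold:
                return candidate  # looks the same to a viewer; hash differs
        return None  # a determined uploader would also try crops and mirroring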

Another suggestion is to delay the human verification step, to do it if and only if there’s a hash match. While there’s a certain attractiveness to the notion, I’m not convinced that it would work well. For one thing, it would require near-realtime review, to avoid delays in handling a hash match. I also wonder how many submitted images won’t be matched—I suspect that most people will be very reluctant to share their own intimate images unless they’re pretty sure that someone is going to abuse them by uploading such pictures. By definition, these are very personal, sensitive pictures, and people will not want to submit them to Facebook in the absence of some very real threat.
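
In code, the difference between the two designs is roughly as follows (hypothetical names, reusing the dhash() and hamming() sketches above). Verifying up front keeps the upload path fast; verifying on match forces every matching upload to wait on a human:

    # Up-front design: BLOCKED holds only hashes a human has already verified.
    BLOCKED = set()
    # Deferred design: PENDING holds hashes of as-yet-unreviewed submissions.
    PENDING = set()

    def on_upload(img, threshold=10):
        h = dhash(img)
        if any(hamming(h, b) <= threshold for b in BLOCKED):
            return "reject"                 # decided instantly
        if any(hamming(h, p) <= threshold for p in PENDING):
            return "hold_for_human_review"  # stalls until a reviewer acts
        return "allow"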

My overall verdict is guarded approval. Answers to the questions I’ve raised above, especially about the phishing risk of the submission process and about the strength and secrecy of the matching algorithm, would help.

But I’m glad that someone is finally trying to do something about this problem!


Update: I’m informed that the pilot is restricted to people over 18, thus obviating any concerns about transmission of child pornography.