J. Feldman, T. Malkin, R. Servedio, C. Stein, and M. Wainwright.

Preliminary version in

We show that for low-density parity-check (LDPC) codes with sufficient expansion, the {\em Linear Programming (LP) Decoder} of Feldman, Karger and Wainwright (Allerton, 2003) can correct a constant fraction of errors. More specifically, we show that if the Tanner graph modeling the code has regular left degree $\ldeg$ and expands by a factor of more than $\delta \ldeg$ on all variable-node sets of size at most $\alpha n$, where $\delta > \frac{2}{3} + \frac{1}{3 \ldeg}$, then the LP decoder succeeds as long as at most $\frac{3 \delta - 2}{2 \delta -1} (\alpha n - 1)$ bits are flipped by the channel. A random regular graph has sufficient expansion with high probability~\cite{spielmanexpander}, and recent work by Capalbo \etal~\cite{capalbo} shows that such graphs can be constructed efficiently. A key element of our method is the construction of a zero-valued {\em dual} solution to the decoding linear program.
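The guarantee above is easy to evaluate numerically. The following sketch checks the expansion condition $\delta > \frac{2}{3} + \frac{1}{3 \ldeg}$ and evaluates the correctable-error bound $\frac{3\delta - 2}{2\delta - 1}(\alpha n - 1)$; the parameter values in the example are illustrative assumptions, not taken from the paper.

```python
def correctable_errors(d, delta, alpha, n):
    """Number of flipped bits the LP decoder is guaranteed to correct,
    per the bound (3*delta - 2)/(2*delta - 1) * (alpha*n - 1).
    Valid only when the expansion factor delta exceeds 2/3 + 1/(3*d),
    where d is the regular left degree of the Tanner graph."""
    if not delta > 2/3 + 1/(3*d):
        raise ValueError("expansion condition delta > 2/3 + 1/(3d) not met")
    return (3*delta - 2) / (2*delta - 1) * (alpha*n - 1)

# Illustrative parameters (hypothetical, for demonstration only):
# left degree d = 8, expansion delta = 0.75, alpha = 0.05, n = 10000.
# Condition: 2/3 + 1/24 ~ 0.708 < 0.75, so the bound applies.
t = correctable_errors(8, 0.75, 0.05, 10000)
# t = 0.25/0.5 * 499 = 249.5, i.e. up to 249 bit flips are corrected.
```

Note that the bound is linear in $n$ for fixed $\delta$ and $\alpha$, which is what makes the corrected fraction of errors constant.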

Our result implies that the word error rate of the LP decoder decreases exponentially in the code length under the binary symmetric channel (BSC). This is the first such result for LP decoding; the only previously known performance bounds were an inverse-polynomial word error rate bound for high-girth cycle codes, and a proof that at least $\Theta(n^{1 - \epsilon})$ errors can be corrected in general LDPC codes under bit-flipping channels.

The word error rate bound given here is stronger than all known finite-length bounds under the BSC for message-passing decoders such as min-sum and sum-product (belief propagation). Recent work by Koetter and Vontobel (Turbo Codes, 2003) shows that LP decoding and min-sum decoding of LDPC codes are closely related by the ``graph cover'' structure of their pseudocodewords; in their terminology, our result implies that the {\em pseudodistance} of these graph covers under the BSC can grow linearly in the length of the code.
