Low Density Parity Check Codes

Gordon McDonald1


Date: April 21, 2006

This paper discusses a brief history of the field of error-correcting codes and an interesting subfield known as Low Density Parity Check (LDPC) codes. It explores some explicit constructions of LDPC codes arising from the theory of design codes [S], [GU]. Lastly, it investigates decoding methods based upon majority decision decoding (a procedure for correcting a bit based upon how many failures it contributes to the syndrome). We discuss in detail an iterative method of decoding LDPC codes called bit-flipping and compare it to a similar iterative algorithm known as Gallager Hard Decision Decoding. The algorithms are illustrated by means of detailed examples and implemented in GAP, an open source computer algebra system. Unfortunately, I discovered that the examples arising from design codes did not work well with the bit-flipping algorithm; these unsuccessful trials were omitted from the thesis.

Introduction

Before we can explore the nuances of Low Density Parity Check codes we must first understand a little about general error-correcting code theory.1 Error-correcting codes are primarily used to correctly transmit data across a noisy channel. To do this one must encode the data so that if a few errors occur they can be corrected and the original message can still be read. Codewords are the encoded data; they are what is transmitted. A code is a set of codewords, and a linear code is a subspace of the vector space ${\mathbb{F}}^n$, where ${\mathbb{F}}=GF(q)$ is the finite field with $q$ elements. The length of the code is the integer $n$. A generator matrix of a linear code is any matrix whose rows form a basis for the subspace.

Let $C$ be a linear code of length $n$ over ${\mathbb{F}}$ with generator matrix $G$, where $q$ is a power of a prime $p$. If $p=2$, then the code is called binary. We assume that ${\mathbb{F}}^{n}$ has been given the standard basis $\mathbf{e}_{1}=(1,0,...,0)\in {\mathbb{F}}^{n}$, $\mathbf{e}_{2}=(0,1,0,...,0)\in {\mathbb{F}}^{n}$, ..., $\mathbf{e}_{n}=(0,0,...,0,1)\in {\mathbb{F}}^{n}$. The dimension of $C$ is denoted $k$, so the number of elements of $C$ is equal to $q^{k}$. Put simply, the dimension is the number of elements in a basis. The quantity $R=\frac{k}{n}$ is called the rate of the code and measures the amount of information that the code can transmit. A parity check matrix, $H$, of an $[n,k]$ code $C$ is an $(n-k) \times n$ matrix whose rows are linearly independent parity checks. The minimum distance of a code, $d$, is the minimum Hamming distance between any pair of distinct codewords. For a received word $y$ of a code $C$ with parity check matrix $H$, the syndrome, $s$, is computed by $s=Hy^{T}$.
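As a small illustration, here is a minimal GAP sketch of a syndrome computation using the GUAVA package (the same package used by the code later in this paper); the code and the received word are chosen only for illustration.

LoadPackage("guava");
C := HammingCode(3, GF(2));      ## the [7,4,3] binary Hamming code
H := CheckMat(C);                ## a parity check matrix for C
y := Codeword("1101001", GF(2)); ## a received word of length 7
s := Syndrome(C, y);             ## zero if and only if y lies in C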

History of Linear Codes and LDPC codes

The theory of error-correcting codes originated in the late 1940's with Richard Hamming, a mathematician who worked for Bell Telephone. Hamming's initial motivation was to program a computer to correct ``bugs'' which arose in punch-card programs. His overall motivation behind the theory of error-correcting codes was to enable reliable digital communication.

LDPC codes were first developed in a 1963 doctoral dissertation by R.G. Gallager. Gallager's work was largely ignored for approximately 30 years, until connections were drawn between the iterative methods used for decoding both LDPC codes and Turbo codes. Today, LDPC codes are becoming paramount in reliable digital communication over noisy channels, with applications ranging from cellular telephones to worldwide computer communication.

Low Density Parity Check Code Construction

Basic Construction of LDPC codes

Although LDPC codes can be defined over any finite field, they are most often considered over the field GF(2), the binary case. For simplicity, we consider LDPC codes in the binary case throughout.

Low Density Parity Check codes are $(n,c,r)$ codes defined by a parity check matrix which always has the following properties [G, p. 7] (a small GAP check of the row and column regularity appears just after this list):

  • The matrix is of low density. That is, it contains mostly 0's and very few 1's.
  • The block length is $n$. That is, both the generator matrix and the parity check matrix have $n$ columns.
  • Each row in the parity check matrix has exactly $r$ 1's.
  • Each column in the parity check matrix has exactly $c$ 1's.
  • $\frac{r}{n}$ and $\frac{c}{n}$ are `small' (this makes the check matrix `low density'). In general, $\frac{r}{n}, \frac{c}{n}\leq \frac{1}{4}$.
  • The linear binary code $C$ is defined by $C=\{ c\in {\mathbb{F}}^{n} \vert Hc=0\}$.
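As promised above, the row and column regularity conditions are easy to verify mechanically. Here is a minimal GAP sketch (the function name is ours and is not part of GUAVA) that tests whether a matrix over GF(2) has exactly $r$ 1's in each row and exactly $c$ 1's in each column:

## Returns true if the GF(2) matrix H has exactly r 1's in every
## row and exactly c 1's in every column (the (n,c,r) regularity).
## This helper is purely illustrative.
is_regular_ldpc_matrix:=function(H,c,r)
local row,col;
for row in H do
    if Length(Filtered(row,x->x<>0*Z(2)))<>r then
        return false;
    fi;
od;
for col in TransposedMat(H) do
    if Length(Filtered(col,x->x<>0*Z(2)))<>c then
        return false;
    fi;
od;
return true;
end;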

Before we can construct an LDPC matrix, it is important first to prove the following result (see for example [JH, Chapter 13]).

Theorem 1   The rate of an $ (n,c,r)$ LDPC code satisfies $ R\geq1-\frac{c}{r}$.

Proof: If the check matrix $H$ has $m$ rows, the total number of 1's in $H$ is $mr=nc$ (count the 1's first row by row, then column by column), so $m=\frac{nc}{r}$. Since the rows of $H$ need not all be linearly independent, the rank of $H$ is at most $m$, and hence the dimension $k$ of the code satisfies $k\geq n-m=n-\frac{nc}{r}$. Dividing by $n$ gives $R=\frac{k}{n}\geq 1-\frac{c}{r}$. $\Box$
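For instance, for the $(16,3,4)$ code of Example 2 below, the theorem gives $R\geq 1-\frac{3}{4}=\frac{1}{4}$; the actual rate of that code is $\frac{k}{n}=\frac{6}{16}=\frac{3}{8}$, comfortably above the bound.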

Noteworthy Examples

Let us consider an $(n,c,r)$ LDPC parity check matrix with $n=16$, $c=3$, and $r=4$. Divide the matrix into $c$ bands of rows, each band having a single 1 in each column. The resulting parity check matrix will look like the example below.

Example 2   A $(16,3,4)$ LDPC code. (This is a $[16,6,6]$ binary code, in $[n,k,d]$ notation.)

[The $12\times 16$ parity check matrix displayed here was garbled in the translation of this document. Recoverable details: it consists of $c=3$ bands of four rows, each band containing a single 1 in every column; each row has $r=4$ 1's; its first row is $(1\;1\;1\;1\;0\;0\;0\;0\;0\;0\;0\;0\;0\;0\;0\;0)$; and column 1 has its 1's in rows 1, 5 and 9, as used in Example 5 below.]

This example is borrowed from [HP, 15.6].

Remark 1   It is important to note that the rows of most LDPC matrices are linearly dependent as a result of their construction; at least $c-1$ of the rows depend linearly on the others. (In the band construction above, the rows of each band sum to the all-ones vector, so comparing each band with the first already yields $c-1$ dependence relations.)

Example 3   Here is an example of a less elegant check matrix of an LDPC code, with $r=10$, $c=2$, and $n=20$.

[The $4\times 20$ check matrix displayed here was garbled in the translation of this document; each of its rows has $r=10$ 1's and each of its columns has $c=2$ 1's.]

Note the redundancy in the above matrix. It would not suffice as an adequate Low Density Parity Check matrix because it is not of low density: $\frac{r}{n}=\frac{10}{20}=\frac{1}{2}>\frac{1}{4}$. It can also be shown that the matrix cannot correct even a single error. By contrast, this illustrates the elegance of a well constructed LDPC matrix.

Algorithms

Majority Decision

Let $w$ be a received `word' and $C$ a code with parity check matrix $H$. Recall that $s=Hw^{T}$ is the syndrome of $w$. Simply speaking, majority decision seeks to flip a bit in $w$ so as to minimize the number of syndrome `failures' (i.e., coordinates with $s_{i}\not= 0$). It is easiest to explain majority decision in terms of a specific situation. For a position $u$, let $C_{u}$ denote the set of parity checks (rows of $H$) involving $u$, and assume that $\vert C_{u} \cap C_{u'}\vert \leq a$ for all distinct positions $u$ and $u'$. Let us also assume that $c > a$ and, for simplicity, that exactly one bit of $w$ is in error, call it $w_{j}$. By definition, the checks in $C_{j}$ all contribute to the syndrome coordinates

$\displaystyle s_{i}=h_{i,1}w_{1}+\cdots+h_{i,n}w_{n}$

which `fail' ($s_{i}\not=0$) for $i \in C_{j}$, whereas all other syndrome coordinates `pass' ($s_{i}=0$ for $i\notin C_{j}$). Now suppose we flip the bit $w_{j'}$ (this means replacing $w=(w_{1},...,w_{n})$ by $w'=w+e_{j'}$, where $e_{j'}$ has a 1 in the $j'^{th}$ coordinate and 0's elsewhere). If $j\not= j'$ then the syndrome coordinates of $w'$ which fail are precisely the $s_{i}$ with $i$ in the symmetric difference $(C_{j}\cup C_{j'})-(C_{j}\cap C_{j'})$, so there will be at least $2(c-a)>0$ failures. However, if $j=j'$ then $w'$ will have no failing syndrome coordinates at all. Therefore, majority decision tells us to flip the $j^{th}$ bit. The majority decision decoding algorithm is the process of repeating bit flips determined by majority decision until either a codeword is obtained or it is determined that a codeword is unobtainable.

Remark 2   If $w$ has $t$ errors, $(t+1)c<n-k$, and $c>\frac{(t+1)ta}{2}$, then majority decision decodes $w$ to the closest codeword (in the Hamming metric). We shall not need this result, so it will not be proven, but it illustrates some limitations of the method: a received word can only afford to contain a few errors.
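To make the bookkeeping concrete, here is a small GAP sketch (the function name is ours and purely illustrative) which, given a check matrix H over GF(2) and a received word w, counts the syndrome failures left by flipping each bit in turn; majority decision flips a bit whose flip leaves the fewest failures.

## For each position jp, count the failing syndrome coordinates of
## w + e_{jp}.  A position whose flip minimizes this count is the
## one majority decision would choose.  (Illustrative helper.)
failures_after_flip:=function(H,w)
local n;
n:=Length(H[1]);
return List([1..n],function(jp)
    local wp,s;
    wp:=ShallowCopy(w);
    wp[jp]:=wp[jp]+Z(2);                      # flip bit jp
    s:=H*wp;                                  # syndrome of the flipped word
    return Length(Filtered(s,x->x<>0*Z(2)));  # number of failing checks
    end);
end;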

Bit-Flipping

An iterative decoding method for LDPC codes is called bit-flipping. This technique decodes single positions based upon majority decision. Let the parity check matrix be indexed as $H=[h_{uv}]$. We define the set of row indices $u$ in column $v$ such that $h_{uv}=1$ as

$\displaystyle R_{v}=\{u\in \{1,...,n-k\} \mid h_{uv}=1\}, \qquad 1\leq v\leq n.$

Similarly, $C_{u}$ is the set of column indices in row $u$ such that $h_{uv}=1$:

$\displaystyle C_{u}=\{v\in \{1,...,n\} \mid h_{uv}=1\}, \qquad 1\leq u\leq n-k.$

Suppose that a vector $w$ is received. Each coordinate of the syndrome $s=Hw^{T}$ of $w$,

$\displaystyle s_i=h_{i,1}w_1+\cdots+h_{i,n}w_n,$

involves exactly $r$ of the received symbols, since each row of $H$ contains exactly $r$ 1's. (These index sets are computed by the helper functions checksetJ and checksetI in the GAP code below.)

Consider the $c$ parity checks indexed by $R_{v}$. Each involves $r-1$ received symbols other than the symbol in position $v$, and all of these symbols are assumed to be distinct. If $p$ is the probability of bit error due to noise and $p(r-1)$ is small, then most of these sets of symbols will contain no errors, and most of the parity checks on a correct symbol will be satisfied. Therefore, for each position $v$, the symbol in position $v$ is changed if a majority of the parity checks indexed by $R_{v}$ are not satisfied.

In [JH, Chapter 13] it is also assumed that $R_{v'}\cap R_{v''}$ and $C_{u'}\cap C_{u''}$ each have at most one element, for all distinct $v',v''$ and all distinct $u',u''$.

Lemma 4   If fewer than $\frac{c}{2}$ errors occur among the $c(r-1)+1$ symbols involved in the majority decision for a position, then that position is correctly decoded.

Proof: If the symbol in position $v$ is correct, fewer than $\frac{c}{2}$ of the $c$ parity checks indexed by $R_{v}$ contain an error, so a majority of them are satisfied and the position is left unchanged. If the symbol in position $v$ is in error, fewer than $\frac{c}{2}$ of those parity checks contain an additional error, so a majority of them fail the test and the position is flipped. $\Box$

Remark 3   If more errors than this occur, the lemma no longer applies; however, the probability of two errors falling on the same parity checks is low.
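As a worked instance: for the $(16,3,4)$ code of Example 2, $c=3$ and $r=4$, so each majority decision involves $3(4-1)+1=10$ symbols and, by Lemma 4, the position is guaranteed to be correctly decoded whenever fewer than $\frac{3}{2}$ errors, that is, at most one error, occur among them.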

We are now prepared to implement the bit-flipping algorithm.

Algorithm: Bit-flipping.

Input: The received vector w.

(1)
Set $ q=w$.

(2)
Calculate $ s=Hq^{T}$.

(3)
If for some $v$, $\sum_{u\in R_{v}}s_{u}>\frac{c}{2}$, set $q_{v}=q_{v}+1\pmod 2$.

(4)
Repeat from (2) until $q$ is unchanged.

Output: The decoded word, $ q$.

Gallager Hard Decision Decoding Algorithm

The Gallager hard decision decoding algorithm is very similar to the bit-flipping algorithm; it too is based upon majority decision. Suppose a codeword of an $(n,c,r)$ binary LDPC code $C$ is transmitted and the vector $y$ is received. In computing the syndrome $Hy^{T}$, each received bit $y_{i}$ affects at most $c$ components of the syndrome, because the $i$th bit appears in $c$ parity checks. Let $S$ denote the set of all bits involved in these $c$ parity checks. If the $i$th bit is the only bit of $S$ in error, then all $c$ of these components of $Hy^{T}$ will equal $1$, indicating that the corresponding parity check equations are not satisfied and that the $i$th bit should be flipped. If more than one bit of $S$ is in error, then several of the $c$ components of $Hy^{T}$ may equal $1$, indicating that multiple bits may need to be flipped.

Algorithm: Gallager Hard Decision Decoding Algorithm.

(I)
Compute $Hy^{T}$ and determine the unsatisfied parity checks, that is, the components of $Hy^{T}$ which equal $1$.
(II)
For each of the $ n$ bits, compute the number of unsatisfied parity checks involving that bit.
(III)
Flip the bits of $y$ that are involved in the largest number of unsatisfied parity checks; call the resulting vector $y$ again.
(IV)
Repeat (I), (II), and (III) until either $Hy^{T}=0$, in which case the received vector is decoded as the latest $y$, or until a fixed number of iterations is reached, in which case the received vector is not decoded.

The Gallager Hard Decision Decoding algorithm is best understood through several examples.

Example 5   Let $C$ be the $[16,6,6]$ LDPC code with the $(16,3,4)$ parity check matrix $H$ of Example 2, so $r=4$ and $c=3$. Suppose that the vector

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1
\end{array}\right)
\end{displaymath}

is received. We compute

\begin{displaymath}
Hy^{T}=
\left(
\begin{array}{cccccccccccc}
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{array}\right)
^{T}
\end{displaymath}

This gives a total of three unsatisfied parity checks: rows 1, 5 and 9 of $H$.

Comparing the columns of $H$ with $Hy^{T}$, it is easily seen that column 1 of $H$ is involved in all 3 of the unsatisfied parity checks (column 1 has 1's in rows 1, 5 and 9). All other columns of $H$ are involved in two or fewer of the unsatisfied parity check rows. Thus, flip bit 1 of $y$ to obtain a new

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1
\end{array}\right)
\end{displaymath}

Iterating, we see that $Hy^{T}=0$, so the newest value of $y$ is the decoded codeword. This particular example required only one iteration of flipping to recover the sent codeword. However, the algorithm is designed to run multiple iterations. Let us look at an example that requires more than one iteration to decode.

Example 6   Let $C$ again be the $(16,3,4)$ LDPC code with the parity check matrix $H$ of Example 2. Suppose that the vector

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}\right)
\end{displaymath}

is received.

That makes

\begin{displaymath}
Hy^{T}=
\left(
\begin{array}{cccccccccccc}
1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0
\end{array}\right)
^{T}
\end{displaymath}

Parity checks 1, 4, 9 and 10 are unsatisfied; from $H$, columns 1, 2, 13 and 15 are each involved in two of these unsatisfied parity checks. So flip bits 1, 2, 13, and 15 of $y$ to yield a new

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1
\end{array}\right)
\end{displaymath}

Iterating,

\begin{displaymath}
Hy^{T}=
\left(
\begin{array}{cccccccccccc}
1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0
\end{array}\right)
^{T}
\end{displaymath}

This makes parity checks 1, 4, 6, 7, 10 and 11 unsatisfied. Columns 2 and 15 of $H$ are each involved in three of these unsatisfied parity checks; the other columns are involved in two or fewer. We now flip bits 2 and 15 of $y$ to yield a new

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1
\end{array}\right)
\end{displaymath}

Iterating again yields $ Hy^{T}=0$ and so this latest $ y$ is the decoded codeword.

Example 6 required one more iteration than Example 5 did, showing how bit-flipping works in the iterative sense. But there are cases when the received vector is so corrupted that it is un-decodable; Example 7 provides just such a case.

Example 7   Let $C$ again be the $(16,3,4)$ LDPC code with the parity check matrix $H$ of Example 2. Suppose that the vector

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1
\end{array}\right)
\end{displaymath}

is received.

In turn,

\begin{displaymath}
Hy^{T}=
\left(
\begin{array}{cccccccccccc}
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}\right)
^{T}
\end{displaymath}

There are a total of 8 unsatisfied parity checks: 5, 6, 7, 8, 9, 10, 11 and 12. Every column of $H$ is involved in at least two of these, so the algorithm flips every bit of $y$.

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0
\end{array}\right)
\end{displaymath}

Iterating,

\begin{displaymath}
Hy^{T}=
\left(
\begin{array}{cccccccccccc}
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{array}\right)
^{T}
\end{displaymath}

This is the same $Hy^{T}$ as before, indicating that we should again flip every bit of $y$. However, that would simply return us to the previous vector: the algorithm is caught in a cycle that will never yield a codeword. So the original received vector

\begin{displaymath}
y=
\left(
\begin{array}{cccccccccccccccc}
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1
\end{array}\right)
\end{displaymath}

is un-decodable by this method!

GAP Code

The following is a function, written in GAP, which constructs a random LDPC code.
random_ldpc_code2:=function(n,r,a)
    ## n = length
    ## r = number of checks
    ## a must divide gcd(n,r)
    ## Creates an (r/a) x (n/a) block matrix of random a x a
    ## permutation matrices and returns the code it defines.
    local C,i,j,H,P,G,p;
    G:=SymmetricGroup(a);
    if GcdInt(n,r) mod a <> 0 then
        Print("Wrong data\n");
        return [n,r,a];
    fi;
    p:=[];
    for i in [1..Int(r/a)] do
        p[i]:=[];
        for j in [1..Int(n/a)] do
            p[i][j]:=PermutationMat(Random(G),a,GF(2));
        od;
    od;
    ## BlockMatrix expects a single list of [i,j,block] triples,
    ## so the rows of triples are concatenated into one list.
    P:=List([1..Int(r/a)],i->List([1..Int(n/a)],j->[i,j,p[i][j]]));
    H:=MatrixByBlockMatrix(BlockMatrix(Concatenation(P),Int(r/a),Int(n/a)));
    C:=CheckMatCode(H,GF(2));
    return C;
end;
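For instance, one might build a small random LDPC code as follows (the parameters are only illustrative):

## A length-20 code with 10 checks, built from 5 x 5 permutation
## blocks; 5 divides gcd(20,10), as required.
C := random_ldpc_code2(20,10,5);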

The following are helper functions used in the implementation of the bit-flipping algorithm.

## checksetJ(H,w) returns the support of row w of H, that is, the
## set of column indices i with H[w][i] nonzero.
checksetJ:=function(H,w)
local i,I,n;
I:=[];
n:=Length(H[1]);
for i in [1..n] do
    if H[w][i]<>0*Z(2) then
        Add(I,i);
    fi;
od;
return I;
end;

## checksetI(H,u) returns the set of row indices of the checks
## involving position u, i.e., the support of column u of H.
checksetI:=function(H,u)
return checksetJ(TransposedMat(H),u);
end;

## counter(I,s) returns the number of failing syndrome coordinates
## of s among the index set I.
counter:=function(I,s)
local S;
S:=Codeword(List(I,i->List(s)[i]));
return WeightCodeword(S);
end;

The following is the GAP code created to implement the Bit-Flipping algorithm.

## bit_flip(H,v) performs a single round of bit-flipping: it flips
## the first bit whose checks fail a majority decision and returns
## the (possibly) corrected vector.
bit_flip:=function(H,v)
local j,s,q,n,gamma,I_j,tH;
tH:=TransposedMat(H);
q:=ShallowCopy(v);
s:=H*q;                    # syndrome of the current word
n:=Length(H[1]);
for j in [1..n] do
    gamma:=Length(Support(Codeword(tH[j])));  # column weight of column j
    I_j:=checksetI(H,j);                      # checks involving position j
    if counter(I_j,s)>gamma/2 then            # majority of checks fail
        q[j]:=q[j]+Z(2);                      # flip bit j
        break;             # one flip per round; bit_flipping iterates
    fi;
od;
return q;
end;

## bit_flipping(H,v) iterates bit_flip as long as the weight of the
## syndrome strictly decreases, implementing the "repeat until q is
## unchanged" step of the algorithm.
bit_flipping:=function(H,v)
local s,q,rho_new,rho_old;
q:=ShallowCopy(v);
s:=H*q;
rho_old:=Length(Support(Codeword(s)));
while rho_old>0 do
    q:=bit_flip(H,q);
    s:=H*q;
    rho_new:=Length(Support(Codeword(s)));
    if rho_new>=rho_old then
        break;             # no progress: stop (v may be undecodable)
    fi;
    rho_old:=rho_new;
od;
return Codeword(q);
end;
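An illustrative session (we assume here that GUAVA's CheckMat returns the defining check matrix of the code built above, and the decoding attempt may of course fail for an unlucky random code):

## Build a random LDPC code, corrupt a random codeword in one
## position, and try to decode it.
C := random_ldpc_code2(20,10,5);
H := CheckMat(C);
c := VectorCodeword(Random(C));   # a random codeword, as a vector
v := ShallowCopy(c);
v[1] := v[1]+Z(2);                # introduce a single bit error
w := bit_flipping(H,v);           # attempt to decode back to c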

The following is the GAP code created to implement the Gallager Hard Decision decoding algorithm.

## GallagerDecoder(H,y) repeatedly flips all bits involved in the
## largest number of unsatisfied parity checks, stopping when the
## syndrome is zero or when a syndrome repeats (a cycle, as in
## Example 7), in which case fail is returned.
GallagerDecoder:=function(H,y)
local s,syns,y0,Helper;

Helper:=function(H1,y1)
    ## flips the bits of y1 involved in the most unsatisfied checks
    local s1,supS,supH,meet,u,i,n1;
    n1:=Length(H1[1]);
    s1:=H1*y1;
    supS:=Support(Codeword(s1));    # indices of unsatisfied checks
    supH:=List([1..n1],i->Support(Codeword(TransposedMat(H1)[i])));
    meet:=List([1..n1],i->Length(Intersection(supS,supH[i])));
    u:=Maximum(meet);               # largest number of unsatisfied checks
    for i in Filtered([1..n1],i->meet[i]=u) do
        y1[i]:=y1[i]+Z(2);          # flip bit i
    od;
    return y1;
end;

y0:=ShallowCopy(y);
syns:=[];
s:=H*y0;
while Length(Support(Codeword(s)))>0 do
    if s in syns then
        return fail;                # syndrome seen before: a cycle
    fi;
    Add(syns,s);
    y0:=Helper(H,y0);
    s:=H*y0;
od;
return Codeword(y0);
end;
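With the check matrix of Example 2 entered as a GAP matrix over GF(2) and the received vectors of Examples 5-7, this function returns the decoded codeword for Examples 5 and 6 and returns fail for Example 7, where the syndrome repeats. A sketch of the call:

## Assuming H (the 12 x 16 matrix of Example 2, over GF(2)) and a
## received vector y have already been entered as GAP objects:
w := GallagerDecoder(H,y);   # a decoded codeword, or fail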

Conclusion

The three examples involving the Gallager Hard Decision decoding algorithm illustrate the capabilities and limitations of the algorithm. They show that if the received vector contains only one error, only one iteration is needed to decode it. The examples also show that the further the received vector is corrupted from the original codeword, the more iterations it takes to decode. Lastly, the examples illustrate that sometimes the received vector is so corrupted that no number of iterations will yield a decoded codeword. All of this reaffirms the necessity of well constructed LDPC codes, which allow more errors to occur without losing the original message. This concludes a brief walkthrough of the construction of LDPC codes and their major decoding processes. When dealing with codewords thousands of bits long, fast, reliable decoding is essential. The simple, low weight design of LDPC codes is what makes them fast and reliable, which, in turn, is what makes LDPC codes so important in today's world.2

Bibliography

DSV
I.B. Djordjevic, S. Sankaranarayanan, and B.V. Vasic, Projective-Plane Iteratively Decodable Block Codes for WDM High-Speed Long-Haul Transmission Systems, IEEE Journal of Lightwave Technology, Vol. 22, No. 3 (March 2004), 695-702.

G
Gallager, Robert G., ``Low Density Parity Check Codes'' Ph.D. diss., Massachusetts Institute of Technology, 1963.
http://www.ldpc-codes.com/papers/Robert_Gallager_LDPC_1963.pdf

GU
Guava Webpage, http://cadigweb.ew.usna.edu/~wdj/gap/GUAVA/

H
Hill, Raymond. A First Course in Coding Theory. (New York: Oxford University Press, 2004).

HP
W. Cary Huffman and Vera Pless, Fundamentals of Error Correction Codes (Cambridge: Cambridge University Press, 2003), 598-602.

JH
Justesen, Jorn, and Tom Hoholdt, A Course in Error-Correcting Codes (Zurich: European Mathematical Society Publishing House, 2004), 137-145.

M
MacKay, D.J.C., Good Error Correcting Codes Based on Very Sparse Matrices, IEEE Trans. Inform. Theory, IT-45 (1999), 399-431.

S
Sonata Webpage, http://www.gap-system.org/Packages/sonata.html



Footnotes

... McDonald1
Department of Mathematics, United States Naval Academy, Honors Thesis 2005-2006, Advisor Prof. Joyner, wdj@usna.edu
... theory.1
As a good general reference for this material, see [HP, pp. 1-52]
... world.2
After most of the thesis was written, I discovered an article, [DSV], which explains how well the bit-flipping error correcting algorithm works for certain types of LDPC codes arising from finite geometry. The search for this paper was inspired by comments made by Professor T.S. Michael. The article contains several elegant constructions of LDPC matrices, and it also provides a class of examples which meet an assumption made by [JH] (no two rows and no two columns in a check matrix for an LDPC code may have more than one '1' in common).
