**Gordon McDonald ^{1}**

*Date:* April 21, 2006

This paper gives a brief history of the field of error-correcting codes and of an interesting subfield known as Low Density Parity Check (LDPC) codes. It explores some explicit constructions of LDPC codes arising from the theory of design codes [S], [GU]. Lastly, it investigates decoding methods based upon majority decision decoding (a procedure for correcting a bit based upon how many failures it contributes to the syndrome). We discuss in detail an iterative method of decoding LDPC codes called bit-flipping and compare it to a similar iterative algorithm known as Gallager Hard Decision Decoding. The algorithms are illustrated by means of detailed examples and implemented in GAP, an open source computer algebra system. Unfortunately, I discovered that the examples arising from design codes did not work well with the bit-flipping algorithm; these unsuccessful trials were omitted from the thesis.

The theory of error-correcting codes originated in the late 1940's with Richard Hamming, a mathematician who worked for Bell Telephone. Hamming's immediate motivation was to program a computer to correct ``bugs'' which arose in punch-card programs; his overall motivation behind the theory of error-correcting codes was to enable reliable digital communication.

LDPC codes were first developed in a doctoral dissertation in 1963 by R.G. Gallager. Gallager's work was largely ignored for approximately 30 years until connections were drawn between the iterative methods used for decoding both LDPC codes and Turbo codes. Today, LDPC codes are becoming paramount in digital decoding and reliable digital communication over a noisy channel. Applications range from cellular telephones to worldwide computer communication.

Although LDPC codes can be defined over any finite field, they are mostly considered over the field GF(2) - the binary case. For simplicity, when referring to LDPC codes we consider them in the binary case throughout.
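To make the binary setting concrete, here is a minimal sketch (not from the thesis) of GF(2) arithmetic, where addition is the mod-2 sum (XOR) and multiplication is AND:

```python
# GF(2) arithmetic: addition is XOR (sum mod 2), multiplication is AND.
def gf2_add(a, b):
    return (a + b) % 2

def gf2_mul(a, b):
    return (a * b) % 2

# Over GF(2) every element is its own additive inverse:
# adding a vector to itself gives the zero vector.
v = [1, 0, 1, 1]
zero = [gf2_add(x, x) for x in v]
print(zero)  # [0, 0, 0, 0]
```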

Low Density Parity Check codes are linear codes defined by a parity check matrix $H$ which always has the following properties: [G, p. 7]

- The matrix is of low density. That is, it contains mostly 0's and very few 1's.
- The code has block length $n$. That is, both the generator matrix and the parity check matrix have $n$ columns.
- Each row in the parity check matrix has exactly $k$ 1's.
- Each column in the parity check matrix has exactly $j$ 1's.
- $j$ and $k$ are `small' compared to $n$ (this is to satisfy the concept of the check matrix being of `low density'). In general, $j, k \ll n$.
- The linear binary code is defined by $C = \{ c \in GF(2)^n \mid Hc^T = 0 \}$.
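As an illustration of the row and column weight conditions, the regularity of a small check matrix can be verified directly (the matrix below is hypothetical, chosen only for this sketch; it is not one of the thesis's examples):

```python
# Hypothetical check matrix over GF(2) with row weight k = 4 and
# column weight j = 2: every row sums to 4, every column sums to 2.
H = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

row_weights = [sum(row) for row in H]
col_weights = [sum(col) for col in zip(*H)]
print(row_weights)  # [4, 4, 4, 4]
print(col_weights)  # [2, 2, 2, 2, 2, 2, 2, 2]
```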

Before we can construct an LDPC matrix, it is important first to prove the following lemma (see for example [JH, Chapter 13]): an LDPC code whose check matrix has row weight $k$ and column weight $j$ has dimension satisfying $\dim(C) \geq n(1 - j/k)$.

Proof: If the check matrix has $m$ rows, the total number of 1's is $mk = nj$ (obtained by first counting the number of 1's in each row and then by counting the number of 1's in each column). Because not all the rows are necessarily linearly independent, we know $\dim(C) \geq n - m$. By substituting $m = nj/k$, we can infer that the dimension of the code satisfies $\dim(C) \geq n - nj/k = n(1 - j/k)$.
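The counting argument can be checked numerically. The sketch below uses hypothetical parameters (Gallager-style $n = 20$, $j = 3$, $k = 4$, chosen only for illustration) to confirm that counting the 1's two ways forces $mk = nj$ and hence $\dim(C) \geq n - m$:

```python
# Hypothetical parameters: n = 20 columns, j = 3 ones per column,
# k = 4 ones per row.
n, j, k = 20, 3, 4
m = n * j // k            # number of rows forced by the double count
assert m * k == n * j     # both sides count the total number of 1's
dim_lower_bound = n - m   # rows need not be independent, so dim >= n - m
print(m, dim_lower_bound)  # 15 5
```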

Let us consider an LDPC parity check matrix with given parameters $n$, $j$, and $k$. Following Gallager's construction, divide the matrix into $j$ horizontal submatrices, each with a single 1 in each column. The resulting parity check matrix will look like the example below.

Let $v$ be a received `word' and $C$ be a code with parity check matrix $H$. Remember that $s = Hv^T$ is the syndrome of the received word. Simply speaking, majority decision seeks to flip a bit in $v$ which will minimize the number of syndrome `failures' (i.e., nonzero coordinates of $s$). It's easiest to explain majority decision in terms of a specific situation. Let us assume that any two distinct columns of $H$ have at most one row in which both contain a 1. Let us also assume that $s \neq 0$ and, for simplicity, that exactly 1 bit in $v$ is in error, call it the $i$th. By definition, the parity checks containing position $i$ (the rows in the support of the $i$th column of $H$) all contribute to the failing syndrome coordinates.
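Majority decision can be made concrete with a small sketch (the check matrix and received word below are hypothetical, chosen only for illustration): compute the syndrome, then count for each position how many failed parity checks involve it.

```python
# For each position j, count how many nonzero syndrome coordinates
# (failed parity checks) involve bit j.  Arithmetic is over GF(2).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]
v = [1, 0, 0, 0, 0, 0]  # all-zero codeword with bit 0 flipped

syndrome = [sum(h[j] * v[j] for j in range(len(v))) % 2 for h in H]
failures = [
    sum(syndrome[i] for i in range(len(H)) if H[i][j] == 1)
    for j in range(len(v))
]
print(syndrome)  # [1, 0, 1, 0]: checks 0 and 2 fail
print(failures)  # [2, 1, 0, 1, 1, 1]: bit 0 is in both failed checks
```

Bit 0 contributes to the most failures, so majority decision flips it.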

An iterative decoding method for LDPC codes is called bit-flipping. This technique's premise depends on decoding single positions based upon majority decision. Let the parity check matrix be indexed as $H = (h_{ij})$. We also define the set of row indices in column $j$ such that $h_{ij} = 1$ as $I_j = \{ i \mid h_{ij} = 1 \}$.

Suppose that a vector $v$ is received. In the syndrome computation of $s = Hv^T$, each coordinate is a parity check $s_i = \sum_j h_{ij} v_j$, which contains at most $k$ terms, where $k$ is the number of 1's in each row.

Consider the parity checks in $I_j$. Each involves the $k - 1$ other received symbols in that check, which are assumed to be distinct. If $p$ is the probability of bit error due to noise and $p$ is small, then most sets of $k - 1$ symbols will not include errors, and most of the received word is correct. Therefore, for each position $j$, the symbol in position $j$ is changed if a majority of the parity checks with indices in $I_j$ are not satisfied.

In [JH, Chapter 13] they also assume that $I_j \cap I_{j'}$ has at most 1 element for all $j \neq j'$, and similarly that any two distinct rows of $H$ have at most 1 position in common.

Proof: If the position $j$ is correct, fewer than half of the parity checks in $I_j$ can contain errors. Therefore, most of those parity checks are satisfied and the position is left unchanged. If there happens to be an error in position $j$, fewer than half of the parity checks in $I_j$ contain an additional error, and thus most fail the test.

- (1) Set $q := v$.
- (2) Calculate $s = Hq^T$.
- (3) If $|\{ i \in I_j \mid s_i \neq 0 \}| > |I_j|/2$ for some position $j$, set $q_j := q_j + 1$.
- (4) Repeat from (2) until $q$ is unchanged.
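The four steps above can be sketched in Python (a language-neutral illustration, not the thesis's GAP implementation; the check matrix and received word are hypothetical):

```python
# Bit-flipping: flip any bit for which more than half of the parity
# checks containing it are unsatisfied; repeat until q is unchanged.
def bit_flipping(H, v):
    q = list(v)                                           # step (1)
    while True:
        s = [sum(h[j] * q[j] for j in range(len(q))) % 2
             for h in H]                                  # step (2)
        flipped = False
        for j in range(len(q)):
            I_j = [i for i in range(len(H)) if H[i][j] == 1]
            if sum(s[i] for i in I_j) > len(I_j) / 2:     # step (3)
                q[j] ^= 1
                flipped = True
                break   # recompute the syndrome, as in step (2)
        if not flipped:                                   # step (4)
            return q

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]
print(bit_flipping(H, [1, 0, 0, 0, 0, 0]))  # [0, 0, 0, 0, 0, 0]
```

A single error in bit 0 is corrected in one flip, recovering the all-zero codeword.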

The Gallager hard decision decoding algorithm is very similar to the bit-flipping algorithm. It too is based upon the premise of majority decision. Suppose a codeword $c$ is transmitted using an $(n, j, k)$ binary LDPC code $C$, and suppose the vector $y$ is received. In the syndrome $s = Hy^T$, each received bit affects at most $j$ components of that syndrome, because the $i$th bit occurs in $j$ parity checks. If, among all the bits involved in these parity checks, only the $i$th is in error, then these $j$ components of $s$ will equal 1. This indicates that the parity check equations are not satisfied and the $i$th bit should be flipped. If there is more than one error among those bits, then several of the components of $s$ will equal 1, indicating that multiple bits must be flipped.

- (I) Compute $s = Hy^T$ and determine the unsatisfied parity checks, that is, the parity checks whose coordinate of $s$ equals 1.
- (II) For each of the $n$ bits, compute the number of unsatisfied parity checks involving that bit.
- (III) Change the bits of $y$ that are involved in the largest number of unsatisfied parity checks; call the resulting vector $y$ again.
- (IV) Iteratively repeat I, II, and III until either $s = 0$, in which case the received vector is decoded as the latest $y$, or until a certain number of iterations is reached, in which case the received vector is not decoded.
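Steps I-IV can likewise be sketched in Python (again a hypothetical matrix for illustration only; the thesis's implementation is the GAP code given later):

```python
# Gallager hard decision: flip all bits involved in the maximum
# number of unsatisfied parity checks; iterate until s = 0 or a cap.
def gallager_decode(H, y, max_iters=20):
    y = list(y)
    for _ in range(max_iters):
        s = [sum(h[j] * y[j] for j in range(len(y))) % 2
             for h in H]                       # step I
        if not any(s):
            return y                           # decoded: s = 0
        counts = [
            sum(s[i] for i in range(len(H)) if H[i][j] == 1)
            for j in range(len(y))
        ]                                      # step II
        worst = max(counts)
        for j in range(len(y)):
            if counts[j] == worst:
                y[j] ^= 1                      # step III
    return None                                # step IV: not decoded

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]
print(gallager_decode(H, [0, 0, 0, 0, 1, 0]))  # [0, 0, 0, 0, 0, 0]
```

Unlike the single-bit variant above, this version flips every worst offender in each pass, which is what distinguishes Gallager's hard decision scheme.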

The Gallager Hard Decision Decoding algorithm can be better understood through several examples.

*If you look at the columns of $H$ and compare, it is easily
seen that column 1 of $H$ is involved in 3 of the 3 unsatisfied
parity checks (column 1 has 1's in rows 1, 5 and 9). All other columns in $H$ are involved in two or fewer
of the unsatisfied parity check rows. Thus, flip bit 1 of $y$
to obtain a new $y$.*

*That makes $s = Hy^T = 0$, so decoding halts.*

*Example 3 required one more iteration than example 2 did, showing
how bit-flipping works in the iterative sense. But there are cases
when the sent codeword becomes so corrupted that it is un-decodable.
Example 4 provides just such an example.*


```
random_ldpc_code2:=function(n,r,a)
## n = length
## r = number of checks
## a must divide gcd(n,r)
## creates an (r/a) x (n/a) block matrix of
## random a x a permutation matrices
local C,i,j,H,P,G,p;
 G:=SymmetricGroup(a);
 if GcdInt(n,r)/a <> Int(Gcd(n,r)/a) then
   Print("Wrong data\n");
   return [n,r,a];
 fi;
 p:=[];
 for i in [1..Int(r/a)] do
   p[i]:=[];
   for j in [1..Int(n/a)] do
     p[i][j]:=PermutationMat(Random(G),a);
   od;
 od;
 P:=List([1..Int(r/a)],i->List([1..Int(n/a)],j->[i,j,p[i][j]]));
 ## flatten the list of block rows so every block is specified
 H:=BlockMatrix(Concatenation(P),Int(r/a),Int(n/a));
 C:=CheckMatCode(H,GF(2));
 return C;
end;
```

The following are helper functions used in the Bit-Flipping algorithm.

```
## checksetJ(H,w): the column indices i with H[w][i] <> 0,
## i.e. the support of row w of H
checksetJ:=function(H,w)
 local i,I,n;
 I:=[];
 n:=Length(H[1]);
 for i in [1..n] do
   if H[w][i]<>0*Z(2) then
     I:=Concatenation(I,[i]);
   fi;
 od;
 return I;
end;

## checksetI(H,u): the support of column u of H
checksetI:=function(H,u)
 return checksetJ(TransposedMat(H),u);
end;

## counter(I,s): the number of nonzero syndrome coordinates s_i, i in I
counter:=function(I,s)
 local S;
 S:=Codeword(List(I, i -> List(s)[i]));
 return WeightCodeword(S);
end;
```

The following is the GAP code created to implement the Bit-Flipping algorithm.

```
## flips at most one bit of v, chosen by majority decision
bit_flip:=function(H,v)
 local j,s,q,n,gamma,I_j,tH;
 tH:=TransposedMat(H);
 q:=ShallowCopy(v);
 s:=H*q;
 n:=Length(H[1]);
 for j in [1..n] do
   gamma:=Length(Support(Codeword(tH[j])));  ## column weight
   I_j:=checksetI(H,j);
   if counter(I_j,s)>gamma/2 then
     q[j]:=q[j]+Z(2);
     break;
   fi;
 od;
 return Codeword(q);
end;

## iterates bit_flip while the syndrome weight keeps decreasing
bit_flipping:=function(H,v)
 local s,q,rho_new,rho_old;
 q:=ShallowCopy(v);
 s:=H*q;
 rho_old:=Length(Support(Codeword(s)));
 q:=bit_flip(H,q);
 s:=H*q;
 rho_new:=Length(Support(Codeword(s)));
 while rho_new < rho_old do
   rho_old:=rho_new;
   q:=bit_flip(H,q);
   s:=H*q;
   rho_new:=Length(Support(Codeword(s)));
 od;
 return Codeword(q);
end;
```

The following is the GAP code created to implement the Gallager Hard Decision decoding algorithm.

```
GallagerDecoder:=function(H,y)
 local y0,s,syns,Helper;
 ## Helper flips every bit involved in the maximum number
 ## of unsatisfied parity checks
 Helper:=function(H1,y1)
  local s1,supS,supH,meet,u,I,i,n;
  n:=Length(H1[1]);
  s1:=H1*y1;
  supS:=Support(Codeword(s1));
  supH:=List([1..n],i->Support(Codeword(TransposedMat(H1)[i])));
  meet:=List([1..n],i->Length(Intersection(supS,supH[i])));
  u:=Maximum(meet);
  I:=Filtered([1..n],i->meet[i]=u);
  for i in I do
    y1[i]:=y1[i]+Z(2);
  od;
  return y1;
 end;
 y0:=ShallowCopy(y);
 syns:=[];
 s:=H*y0;
 ## iterate until the syndrome is zero or a syndrome repeats
 while not (s in syns) do
   syns:=Concatenation(syns,[s]);
   y0:=Helper(H,y0);
   s:=H*y0;
   if Length(Support(Codeword(s)))=0 then
     return Codeword(y0);
   fi;
 od;
 return fail; ## a syndrome repeated: the received vector is not decoded
end;
```

The three examples involving the Gallager Hard Decision decoding algorithm illustrate the capabilities and limitations
of the algorithm. They show that if the received vector
contains only one error, it takes only one iteration
to decode it. The examples also show that the further the received
vector is corrupted from the original codeword, the more iterations it takes
to decode. Lastly, the examples illustrate that sometimes the received vector
is so corrupted that no number of iterations will yield a decoded
codeword. All of this reaffirms the necessity of well constructed LDPC codes, which allow more errors to occur without losing
the original message. This concludes a brief walkthrough of the construction
of LDPC codes and their major decoding processes. When dealing with codewords thousands
of bits long, fast, reliable decoding is desired. LDPC codes' simple,
low weight design is what makes them fast and reliable, which in turn
is what makes LDPC codes so important in today's world.^{2}

- DSV
- I.B. Djordjevic, S. Sankaranarayanan, and B.V. Vasic, Projective-Plane Iteratively Decodable Block Codes for WDM High-Speed Long-Haul Transmission Systems. (IEEE J of Lightwave Technology, Vol 22., No. 3, March 2004), 695-702.
- G
- Gallager, Robert G., ``Low Density Parity Check Codes'' Ph.D.
diss., Massachusetts Institute of Technology, 1963.

`http://www.ldpc-codes.com/papers/Robert_Gallager_LDPC_1963.pdf`

- GU
- Guava Webpage,
`http://cadigweb.ew.usna.edu/~wdj/gap/GUAVA/`

- H
- Hill, Raymond. A First Course in Coding Theory. (New York: Oxford
University Press, 2004).
- HP
- W. Cary Huffman and Vera Pless, Fundamentals of Error Correction
Codes (Cambridge: Cambridge University Press, 2003), 598-602.
- JH
- Justesen, Jørn, and Tom Høholdt, A Course in Error-Correcting
Codes (Zurich: European Mathematical Society Publishing House, 2004),
137-145.
- M
- Mackay, D.J.C., Good Error Correcting Codes Based on Very Sparse Matrices. (IEEE Trans. Inform. Theory IT-45, 1999), 399-431.
- S
- Sonata Webpage,
`http://www.gap-system.org/Packages/sonata.html`


The translation was initiated by David Joyner on 2006-04-21

- ... McDonald
^{1} - Department of Mathematics, United States Naval Academy, Honors Thesis 2005-2006, Advisor Prof. Joyner, wdj@usna.edu
- ... theory.
^{1} - As a good general reference for this material, see [HP, pp. 1-52]
- ... world.
^{2} - After most of the thesis was written, I discovered an article, [DSV], which explains how well the Bit-Flipping error correcting algorithm works for certain types of LDPC codes arising from finite geometry. The search for this paper was inspired by comments made by Professor T.S. Michael. This paper contains several elegant constructions of LDPC matrices. The article also provides a class of examples which meet an assumption made by [JH] (no two rows and no two columns in a check matrix for an LDPC code may have more than one '1' in common).
