The goal of the project is to write a program to perform list-decoding of Reed-Solomon codes using the Sudan Algorithm [S] and to incorporate this program into the GAP coding-theory package known as GUAVA [GUA]. The Sudan Algorithm is Algorithm 12.1.1 in Justesen and Høholdt's recent book [JH]. GAP is a computer algebra package whose open source kernel is written in the C programming language [GAP]. However, the GUAVA package is written in GAP's own interpreted language. Neither GAP nor the GUAVA package contains a program for list-decoding of generalized Reed-Solomon codes. When implemented, this program will greatly increase GAP's speed in decoding Reed-Solomon codes. We will also discuss generalizations, both to the higher rate case (Algorithm 2) and the multivariate case (Algorithm 3).
Let $q$ denote a prime power. A finite field is a finite set of elements with operations of addition and multiplication which satisfy the properties of a field. Let $F = GF(q)$ denote a finite field with $q$ elements.
A linear code $C$ is simply a finite dimensional vector space over a finite field, and its elements are called codewords. If $C \subset F^n$, then we say $C$ has length $n$. Moreover, if $\dim(C) = k$, then we call $C$ an $[n,k]$ code. A $k \times n$ matrix whose rows form a basis of a linear $[n,k]$ code is called a generator matrix of the code.
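As a small illustration of these definitions, the following GUAVA session constructs a binary linear code directly from a generator matrix. (This is only a sketch; the particular matrix $G$ is an illustrative choice of ours, and GeneratorMatCode, WordLength and Dimension are standard GUAVA/GAP commands.)
gap> F:=GF(2);;
gap> G:=[[1,0,0,1,1],[0,1,0,1,0],[0,0,1,0,1]]*One(F);;  ## rows form a basis
gap> C:=GeneratorMatCode(G,F);;                         ## the [5,3] code generated by G
gap> WordLength(C); Dimension(C);
5
3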
Let $C$ be a linear code of length $n$ over $F = GF(q)$ with $k \times n$ generator matrix $G$, where $q$ is a power of a prime $p$. If $q = 2$ then the code is called binary. We assume that $F^n$ has been given the standard basis $e_1 = (1, 0, \dots, 0)$, $e_2 = (0, 1, 0, \dots, 0)$, ..., $e_n = (0, \dots, 0, 1)$.
If the dimension of $C$ is $k$, then the number of elements of $C$ is equal to $q^k$. The quantity $R = k/n$ is called the rate of the code and measures the amount of information which the code can transmit. For instance, the $[15,3]$ Reed-Solomon code used in the examples below has rate $3/15 = 1/5$.
Another important parameter associated to the code is the number of errors which it can, in principle, correct. For this notion, we need to introduce the Hamming metric. For any two $u, v \in F^n$, let $d(u, v)$ denote the number of coordinates where these two vectors differ:

$$d(u, v) = \#\{\, 1 \leq i \leq n \mid u_i \neq v_i \,\}. \qquad (1)$$

We call the minimum distance of a code $C$, denoted $d(C)$, the smallest distance between distinct codewords in $C$.
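For example, the Hamming distance between two vectors can be computed in GUAVA using commands that appear later in this paper (Codeword and WeightCodeword); the small check below over $GF(2)$ is our own illustration:
gap> u:=Codeword([1,1,0,1,0],GF(2));; v:=Codeword([1,0,0,1,1],GF(2));;
gap> WeightCodeword(u-v);   ## the Hamming distance d(u,v)
2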
There exist codewords $c_1$ and $c_2$ in $C$ such that $d(C) = d(c_1, c_2)$. Note that $d(c_1, c_2) = d(c_1 - c_2, 0) = wt(c_1 - c_2) \geq w$, where $w$ is the minimum weight of a non-zero codeword in $C$ and $wt$ denotes the number of non-zero coordinates. Also, for some other codeword $c \in C$ of minimum weight, $wt(c) = w$. Therefore, $d(C) \leq d(c, 0) = wt(c) = w$. Now, we see that the minimum distance of $C$ satisfies

$$d(C) = w. \qquad (2)$$
It is easy to check that the generalized Reed-Solomon code $C$ of Definition 2 is a linear code. Since $k \leq n$ by the assumption in Definition 2, a non-zero polynomial of degree less than $k$ cannot vanish at all $n$ evaluation points, so the evaluation map is injective and the dimension of the Reed-Solomon code $C$ is equal to the same $k$ used in Definition 2 ([JH], p50).
A code is MDS if its parameters $[n, k, d]$ satisfy the Singleton bound $d \leq n - k + 1$ with equality ([JH], p49). Generalized Reed-Solomon codes are MDS codes, so $d = n - k + 1$ is easily computed in terms of the other parameters. (Please see [McG] for more details on this.)
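This MDS property is easy to check in GUAVA. The following short session uses the same commands as the examples later in this paper; the particular parameters $n = 15$, $k = 5$ are an illustrative choice of ours.
gap> F:=GF(16);; a:=PrimitiveRoot(F);;
gap> Pts:=List([0..14],i->a^i);;              ## 15 distinct evaluation points
gap> R1:=PolynomialRing(F,1);;
gap> C:=GeneralizedReedSolomonCode(Pts,5,R1);;
gap> MinimumDistance(C) = 15 - 5 + 1;         ## Singleton bound met with equality
true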
Reed-Solomon codes were discovered in 1959
and have applications in CDs, DVDs, and
satellite communications among other things
([JH], p49). The GUAVA package does not contain
any programs which provide fast list-decoding of Reed-Solomon codes.
In fact, to our knowledge, this list decoder has not yet
been implemented in any computer algebra
system. A fast decoder for generalized Reed-Solomon codes has been recently
implemented by J. McGowan [McG].
However, McGowan's program does not perform list-decoding.
A more efficient method of decoding these codes uses list decoding with the Sudan Algorithm. Instead of using brute force to examine all codewords to find the ones closest to the received vector, list decoding uses systems of linear equations and polynomial interpolation. The brute force method is known as nearest neighbor decoding, in which the received vector $r$ is decoded as the codeword $c$ for which $d(r, c)$ is smallest ([HILL], p5). List decoding, by contrast, returns a list of all codewords within some fixed distance of the received vector.
Let $C$ be a generalized Reed-Solomon code with parameters $[n, k, d]$. Let $\ell \geq 1$ be an integer (the list size). Now we discuss a generalization of the algorithm implemented in [McG]. See his discussion for further details of the case $\ell = 1$.
We use the notation in Definition 2. Let $r = c + e$ be a received word, where $c$ is a codeword in $C$. Assume that the weight of the error vector $e$ is less than or equal to $\tau$. In other words, $\tau$ represents the maximum number of errors which the algorithm below can correct. The idea is to determine a non-zero polynomial

$$Q(x, y) = Q_0(x) + Q_1(x)\,y + \cdots + Q_\ell(x)\,y^\ell, \qquad \deg Q_j \leq n - \tau - 1 - j(k-1),$$

satisfying $Q(x_i, r_i) = 0$ for $i = 1, \dots, n$, where $x_1, \dots, x_n$ are the evaluation points of Definition 2. Such a polynomial $Q$ (depending on $r$) is called an interpolating polynomial.
Here $\ell$ represents the number of codewords near $r$ which the algorithm below returns. (When $\ell = 1$, the codeword closest to $r$ is returned. When $\ell = 2$ the two codewords closest to $r$ are returned, and so on.) Our next aim will be to show that such a non-zero polynomial $Q$ exists under certain conditions (to be made explicit below).
First, we record the key property of an interpolating polynomial: if $c = (f(x_1), \dots, f(x_n))$ is a codeword of $C$ with $d(c, r) \leq \tau$, where $\deg f \leq k - 1$, then $y - f(x)$ divides $Q(x, y)$.

proof:
By definition of $Q$ and the fact that $\deg f \leq k - 1$, the polynomial $Q(x, f(x))$ has degree at most $n - \tau - 1$. Since $r_i = f(x_i)$, except in at most $\tau$ cases, we have that $Q(x_i, f(x_i)) = Q(x_i, r_i) = 0$, for at least $n - \tau$ of the $x_i$ in $P$. This forces $Q(x, f(x)) = 0$ for all $x$, since a non-zero polynomial of degree at most $n - \tau - 1$ can have at most $n - \tau - 1 < n - \tau$ zeroes. Therefore, $y = f(x)$ is a root of the polynomial $Q(x, y)$. If we consider the polynomial $Q(x, y)$ as a polynomial in $y$ over $F[x]$, the division algorithm over the ring $F[x]$ implies that $y - f(x)$ divides $Q(x, y)$.
This lemma means that any codeword $c = (f(x_1), \dots, f(x_n))$ as above is associated with a factor $y - f(x)$ of $Q(x, y)$.
Under what conditions on $n$, $k$, $\tau$ and $\ell$ does such an interpolating polynomial exist? Let us regard all the coefficients of $Q(x, y)$ as unknowns. The definition of $Q$ tells us that the number of unknowns is determined by the degree conditions above: $Q_j(x)$ has at most $n - \tau - j(k-1)$ coefficients. Therefore, the number of unknowns is

$$\sum_{j=0}^{\ell} \bigl( n - \tau - j(k-1) \bigr) = (\ell+1)(n - \tau) - \frac{\ell(\ell+1)}{2}(k-1).$$
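Concretely, the $n$ interpolation conditions $Q(x_i, r_i) = 0$ form a homogeneous linear system in these unknowns. Writing $D = \mathrm{diag}(r_1, \dots, r_n)$ and letting $V_a$ denote the $n \times (a+1)$ matrix with $(i, j)$ entry $x_i^{j-1}$ (the notation $D$, $V_a$ is ours), the system is

$$\begin{pmatrix} V_{L_0} & D\,V_{L_1} & D^2 V_{L_2} & \cdots & D^{\ell} V_{L_\ell} \end{pmatrix} \begin{pmatrix} q_0 \\ q_1 \\ \vdots \\ q_\ell \end{pmatrix} = 0, \qquad L_j = n - \tau - 1 - j(k-1),$$

where $q_j$ is the column vector of coefficients of $Q_j(x)$. This is exactly the block matrix constructed by the function LocatorMat in the GAP code below, and a non-zero vector in its kernel exists whenever the number of unknowns exceeds $n$.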
The most interesting case is when the number of correctable errors, $\tau$, is greater than $\lfloor (d-1)/2 \rfloor$, since otherwise the Nearest Neighbor Algorithm applies. (Please see [I] for a discussion of this.) We now determine when the number of unknowns is greater than $n$, the number of equations, so that a non-zero solution $Q$ is guaranteed to exist. This condition forces $(\ell+1)(n - \tau) - \frac{\ell(\ell+1)}{2}(k-1) > n$, so

$$\tau < \frac{\ell}{\ell+1}\, n - \frac{\ell(k-1)}{2}.$$

Another condition arises from the degree bound on $Q_\ell$ (the case $j = \ell$ above), namely $n - \tau - 1 - \ell(k-1) \geq 0$. This implies $\tau \leq n - 1 - \ell(k-1)$.
Summarizing the above, list decoding applies only when $C$ has ``low rate'' and moreover beats the Nearest Neighbor Algorithm when

$$\left\lfloor \frac{d-1}{2} \right\rfloor < \tau < \frac{\ell}{\ell+1}\, n - \frac{\ell(k-1)}{2}. \qquad (3)$$
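For reference in the examples that follow, the steps of Algorithm 1 may be summarized as follows. (This outline is our paraphrase of [JH], Algorithm 12.1.1, written to match the GAP implementation given below; the numbering agrees with the references to steps 2-5 in the next section.)

1. From $n$, $k$, $\tau$ and the list size $\ell$, compute the degree bounds $L_j = n - \tau - 1 - j(k-1)$, $j = 0, \dots, \ell$.
2. Solve the homogeneous linear system $Q(x_i, r_i) = 0$, $i = 1, \dots, n$, for the unknown coefficients of $Q_0, \dots, Q_\ell$.
3. Form the interpolating polynomial $Q(x, y) = Q_0(x) + Q_1(x)y + \cdots + Q_\ell(x)y^\ell$.
4. Find all factors of $Q(x, y)$ of the form $y - f(x)$ with $\deg f \leq k - 1$.
5. For each such $f$, output the codeword $(f(x_1), \dots, f(x_n))$.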
The GAP implementation is given below. (The factorization routine it relies on, DivisorsMultivariatePolynomial, is not the optimal one.) Examples of this algorithm are given in the next section.
In this section we give examples of a GUAVA implementation of Sudan's Algorithm 1. Let $\alpha$ be a primitive element of $GF(16)$ satisfying $\alpha^4 + \alpha^3 + 1 = 0$, and consider the $[15, 3]$ Reed-Solomon code $C$ obtained by evaluating polynomials of degree at most 2 at the powers of $\alpha$. The code has minimum distance 13 and thus is 6-error correcting. With $\ell = 2$ it is possible to decode up to seven errors with list size at most 2. Suppose

$$r = (0, 0, 0, 0, 0, 0, 0, 0, \alpha^{6}, \alpha^{2}, \alpha^{5}, \alpha^{14}, \alpha, \alpha^{7}, \alpha^{11})$$

is the received vector (this is Example 12.1.1 of [JH], p129).
Solving the system in step 2 and then plugging the results into the polynomial in step 3 gives the interpolating polynomial $Q(x, y)$. Its linear factors in $y$ are $y$ and $y + x + 1$, so the functions in step 4 are $f_1(x) = 0$ and $f_2(x) = x + 1$. Therefore, by step 5, the corresponding two codewords, each differing from $r$ in 7 positions, are the zero codeword and $(f_2(1), f_2(\alpha), \dots, f_2(\alpha^{14}))$; both appear explicitly in the GAP session below.
We used our GAP code to verify the above example; the following GAP session shows the commands and parameters used.
gap> F:=GF(16);
GF(2^4)
gap> a:=PrimitiveRoot(F);; b:=a^7; b^4+b^3+1; ## alpha in JH Ex 12.1.1, pg 129
Z(2^4)^7
0*Z(2)
gap> Pts:=List([0..14],i->b^i);
[ Z(2)^0, Z(2^4)^7, Z(2^4)^14, Z(2^4)^6, Z(2^4)^13, Z(2^2), Z(2^4)^12,
Z(2^4)^4, Z(2^4)^11,
Z(2^4)^3, Z(2^2)^2, Z(2^4)^2, Z(2^4)^9, Z(2^4), Z(2^4)^8 ]
gap> R1:=PolynomialRing(F,1);;
gap> vars:=IndeterminatesOfPolynomialRing(R1);;
gap> x:=vars[1];
x_1
gap> y:=Indeterminate(F,vars);;
gap> R2:=PolynomialRing(F,[x,y]);;
gap> C:=GeneralizedReedSolomonCode(Pts,3,R1); MinimumDistance(C);
a linear [15,3,1..13]10..12 generalized Reed-Solomon code over GF(16)
13
gap> z:=Zero(F);
0*Z(2)
gap> r:=[z,z,z,z,z,z,z,z,b^6,b^2,b^5,b^14,b,b^7,b^11];; ## as in JH Ex 12.1.1
gap> r:=Codeword(r);
[ 0 0 0 0 0 0 0 0 a^12 a^14 a^5 a^8 a^7 a^4 a^2 ]
gap> cs1:=NearestNeighborGRSDecodewords(C,r,7); time;
[ [ [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ], 0*Z(2) ],
[ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
x_1+Z(2)^0 ] ]
1556
gap> cs2:=GeneralizedReedSolomonListDecoder(C,r,2); time;
[ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ] ]
151
This verifies the above example.
For the next example we use a slightly different received vector. One of the codewords in the returned list is actually at distance 8 from the received vector, that is, it corresponds to correcting 8 errors.
gap> F:=GF(16);
GF(2^4)
gap> a:=PrimitiveRoot(F);; b:=a^7; b^4+b^3+1; ## alpha in JH Ex 12.1.1, pg 129
Z(2^4)^7
0*Z(2)
gap> Pts:=List([0..14],i->b^i);
[ Z(2)^0, Z(2^4)^7, Z(2^4)^14, Z(2^4)^6, Z(2^4)^13, Z(2^2), Z(2^4)^12,
  Z(2^4)^4, Z(2^4)^11, Z(2^4)^3, Z(2^2)^2, Z(2^4)^2, Z(2^4)^9, Z(2^4), Z(2^4)^8 ]
gap> R1:=PolynomialRing(F,1);;
gap> vars:=IndeterminatesOfPolynomialRing(R1);;
gap> x:=vars[1];
x_1
gap> y:=Indeterminate(F,vars);;
gap> R2:=PolynomialRing(F,[x,y]);;
gap> C:=GeneralizedReedSolomonCode(Pts,3,R1); MinimumDistance(C);
a linear [15,3,1..13]10..12 generalized Reed-Solomon code over GF(16)
13
gap> z:=Zero(F);
0*Z(2)
gap> r:=[z,z,z,z,z,z,z,b^13,b^6,b^2,b^5,b^14,b,b^7,b^11];;
gap> r:=Codeword(r);
[ 0 0 0 0 0 0 0 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ]
gap> cs1:=NearestNeighborGRSDecodewords(C,r,7); time;
[ [ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
      x_1+Z(2)^0 ] ]
1570
gap> cs2:=GeneralizedReedSolomonListDecoder(C,r,2); time;
[ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
  [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ] ]
147
gap> c1:=cs2[1]; c1 in C;
[ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ]
true
gap> c2:=cs2[2]; c2 in C;
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
true
gap> WeightCodeword(c1-r);
6
gap> WeightCodeword(c2-r);
8
The time to run the program that finds the nearest neighbor codewords by brute force (1570 ``gapstones'') is much longer than the time to run the program which uses Sudan's algorithm (147 ``gapstones'') for this example.
Now let us try to use the standard GUAVA command Decodeword to decode the received vector $r$. We type into GUAVA the following command:
gap> Decodeword(C,r); time;
Error, Denominator evaluates as zero called from
Value( rf, inds, vals, One( CoefficientsFamily( FamilyObj( rf ) ) ) ) called from
Value( f, [ x ], [ s ] ) called from
func( elm ) called from
List( P, function ( s )
return Value( f, [ x ], [ s ] );
end ) called from
SpecialDecoder( C )( C, c ) called from
...
Entering break read-eval-print loop ...
you can 'quit;' to quit to outer loop, or
you can 'return;' to continue
brk>
The error that GUAVA gives us here indicates that it is not possible to use ordinary decoding for a received word with this many errors, because the number of errors exceeds the error-correcting capability $\lfloor (d-1)/2 \rfloor = 6$ of the code. However, list decoding does work in this case.
This section contains related algorithms which will be briefly discussed but not implemented in this project.
The Guruswami-Sudan Algorithm (Algorithm 2) is an improvement on Algorithm 1, since it works for codes of any rate, whereas the Sudan Algorithm only works for RS codes with low rate ([JH], p130).
In the remainder of this section we speculate on Sudan's generalization to polynomials in more than one variable. No proofs will be given. For details please see §5 of [S].
In general terms, the idea of the higher-dimensional generalization is the following:
Input: a finite field $F$, a subset $P = \{P_1, \dots, P_n\} \subset F^m$, the dimension $m$, a function $r : P \to F$ representing the received vector, the number $t$ of coordinates where the received vector is correct, and the ``weighted degree'' $w$ of the code $C$.
Output: All multivariate polynomials $f \in F[x_1, \dots, x_m]$ with $\deg_w(f) \leq w$ such that $f(P_i) = r(P_i)$ for at least $t$ of the points $P_i \in P$, where $\deg_w$ denotes a weighted degree.
A more precise version is stated below.
Consider the following type of evaluation code:

$$C = \{\, (f(P_1), \dots, f(P_n)) \mid f \in F[x_1, \dots, x_m],\ \deg_w(f) \leq w \,\}.$$
Here is the more precise version of Sudan's Algorithm, as it applies to this case.
Let $L$ denote the list of all polynomials $f \in F[x_1, \dots, x_m]$ with weighted degree $\deg_w(f) \leq w$ such that $y - f(x_1, \dots, x_m)$ divides $Q(x_1, \dots, x_m, y)$.
Output: The list of codewords $c_f = (f(P_1), \dots, f(P_n))$ for $f \in L$. If $r$ represents the received word, then each $f \in L$ satisfies $f(P_i) = r(P_i)$ for at least $t$ values of $i$.
Finally, note that the command SolveLinearEquations in the curves GAP package [Crv] can be used to solve for the coefficients of $Q$ in the system of equations $Q(P_i, r(P_i)) = 0$, $i = 1, \dots, n$.
# List decoder for RS codes using Sudan-Guruswami algorithm
# (this implementation only works for low rate codes)
#
########################################################
#Input: coeffs = list of coefficients, R = F[x]
#Output: the polynomial coeffs[1]+coeffs[2]*x+...+coeffs[d+1]*x^d
#
CoefficientToPolynomial:=function(coeffs,R)
local p,i,j, lengths, F,xx;
xx:=IndeterminatesOfPolynomialRing(R)[1];
F:=Field(coeffs);
p:=Zero(F);
# lengths:=List([1..Length(coeffs)],i->Sum(List([1..i],j->1+coeffs[j])));
for i in [1..Length(coeffs)] do
p:=p+coeffs[i]*xx^(i-1);
od;
return p;
end;
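#
## For example, CoefficientToPolynomial([a0,a1,a2],R) returns the
## polynomial a0 + a1*x + a2*x^2, where x is the indeterminate of R.
#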
#Input: Pts=[x1,..,xn], a = element of L
#Output: Vandermonde matrix (xi^j)
#
VandermondeMat:=function(Pts,a)
## returns an nx(a+1) matrix
local V,n,i,j;
n:=Length(Pts);
V:=List([1..(a+1)],j->List([1..n],i->Pts[i]^(j-1)));
return TransposedMat(V);
end;
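#
## For example, VandermondeMat([x1,x2,x3],2) returns the 3x3 matrix
##   [ [ 1, x1, x1^2 ], [ 1, x2, x2^2 ], [ 1, x3, x3^2 ] ]
## (rows indexed by the points, columns by the powers 0..a).
#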
#Input: r=[r1,...,rn] is the received vector,
#       Pts=[x1,..,xn] are the evaluation points,
#       L=[L_0,...,L_ell] is the list of degree bounds for Q_0,...,Q_ell
#Output: Computes the matrix described in Algor. 12.1.1 in [JH]
#
LocatorMat:=function(r,Pts,L)
## returns an nx(ell+sum(L)) matrix
local a,j,b,ell,add_col_mat,add_row_mat,block_matrix,diagonal_power;
add_col_mat:=function(M,N) ## "AddColumnsToMatrix"
#N is a matrix with same rowdim as M
#the fcn adjoins N to the end of M
local i,j,S,col,NT;
col:=MutableTransposedMat(M); #preserves M
NT:=MutableTransposedMat(N); #preserves N
for j in [1..DimensionsMat(N)[2]] do
Add(col,NT[j]);
od;
return MutableTransposedMat(col);
end;
add_row_mat:=function(M,N) ## "AddRowsToMatrix"
#N is a matrix with same coldim as M
#the fcn adjoins N to the bottom of M
local i,j,S,row;
row:=ShallowCopy(M);#to preserve M;
for j in [1..DimensionsMat(N)[1]] do
Add(row,N[j]);
od;
return row;
end;
block_matrix:=function(L) ## "MakeBlockMatrix"
#L is an array of matrices of the form
#[[M1,...,Ma],[N1,...,Na],...,[P1,...,Pa]]
#returns the associated block matrix
local A,B,i,j,m,n;
n:=Length(L[1]);
m:=Length(L);
A:=[];
if n=1 then
if m=1 then return L[1][1]; fi;
A:=L[1][1];
for i in [2..m] do
A:=add_row_mat(A,L[i][1]);
od;
return A;
fi;
for j in [1..m] do
A[j]:=L[j][1];
od;
for j in [1..m] do
for i in [2..n] do
A[j]:=add_col_mat(A[j],L[j][i]);
od;
od;
B:=A[1];
for j in [2..m] do
B:= add_row_mat(B,A[j]);
od;
return B;
end;
diagonal_power:=function(r,j)
## returns an nxn matrix
local A,n,i;
n:=Length(r);
A:=DiagonalMat(List([1..n],i->r[i]^j));
return A;
end;
ell:=Length(L);
a:=List([1..ell],j->diagonal_power(r,(j-1))*VandermondeMat(Pts,L[j]));
b:=List([1..ell],j->[1,j,a[j]]);
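 ## a[j] is the block D^(j-1)*VandermondeMat(Pts,L[j]), where D = diag(r);
 ## the returned block matrix is [ V_{L[1]} | D*V_{L[2]} | ... | D^(ell-1)*V_{L[ell]} ],
 ## and a vector in its right kernel packs the coefficients of Q_0,...,Q_(ell-1).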
return block_matrix([a]);
end;
# Compute kernel of matrix in alg 12.1.1 in [JH].
# Choose a basis vector in kernel.
# Construct the polynomial Q(x,y) in alg 12.1.1.
#
ErrorLocatorCoeffs:=function(r,Pts,L)
local a,j,b,vec,e,QC,i,lengths,ker,ell;
ell:=Length(L);
e:=LocatorMat(r,Pts,L);
ker:=TriangulizedNullspaceMat(TransposedMat(e));
if ker=[] then Print("Decoding fails.\n"); return []; fi;
vec:=ker[Length(ker)];
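 ## vec packs the coefficients of Q_0,...,Q_(ell-1); the slices below
 ## separate them according to the degree bounds in L.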
QC:=[];
lengths:=List([1..ell],i->Sum(List([1..i],j->1+L[j])));
QC[1]:=List([1..lengths[1]],j->vec[j]);
for i in [2..ell] do
QC[i]:=List([(lengths[i-1]+1)..lengths[i]],j->vec[j]);
od;
return QC;
end;
#Input: r = received vector, Pts=[x1,..,xn] evaluation points,
#       L = list of degree bounds, R = a polynomial ring whose first indeterminate is x
#Output: list of polynomials Qi as in Algor. 12.1.1 in [JH]
#
ErrorLocatorPolynomials:=function(r,Pts,L,R)
 local q,p,i;
q:=ErrorLocatorCoeffs(r,Pts,L);
if q=[] then Print("Decoding fails.\n"); return []; fi;
p:=[];
for i in [1..Length(q)] do
p:=Concatenation(p,[CoefficientToPolynomial(q[i],R)]);
od;
return p;
end;
#Input: r = received vector, Pts=[x1,..,xn] evaluation points,
#       L = list of degree bounds, R = F[x,y] (a bivariate polynomial ring)
#Output: interpolating polynomial Q as in Algor. 12.1.1 in [JH]
#
InterpolatingPolynomialGRS:=function(r,Pts,L,R)
local poly,i,Ry,F,y,Q,ell;
ell:=Length(L)+1;
Q:=ErrorLocatorPolynomials(r,Pts,L,R);
if Q=[] then Print("Decoding fails.\n"); return 0; fi;
F:=CoefficientsRing(R);
y:=IndeterminatesOfPolynomialRing(R)[2];
# Ry:=PolynomialRing(F,[y]);
# poly:=CoefficientToPolynomial(Q,Ry);
poly:=Sum(List([1..Length(Q)],i->Q[i]*y^(i-1)));
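 ## poly = Q(x,y) = Q_0(x) + Q_1(x)*y + ... , an element of the
 ## bivariate ring R, built from the computed kernel vector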
return poly;
end;
GeneralizedReedSolomonListDecoder:=function(C,v,ell)
#
# v is a received vector (a GUAVA codeword)
# C is a GRS code
# ell>0 is the length of the decoded list (should be at least
# 2 to beat GeneralizedReedSolomonDecoder
# or Decoder with the special method of interpolation
# decoding)
#
local f,h,g,x,R,R2,L,F,t,i,c,Pts,k,n,tau,Q,divisorsf,div,
CodewordList,p,vars,y,degy, divisorsdeg1;
R:=C!.ring;
F:=CoefficientsRing(R);
vars:=IndeterminatesOfPolynomialRing(R);
x:=vars[1];
Pts:=C!.points;
n:=Length(Pts);
k:=C!.degree;
tau:=Int((n-k)/2);
L:=List([0..ell],i->n-tau-1-i*(k-1));
y:=X(F,vars);;
R2:=PolynomialRing(F,[x,y]);
vars:=IndeterminatesOfPolynomialRing(R2);
Q:=InterpolatingPolynomialGRS(v,Pts,L,R2);
divisorsf:=DivisorsMultivariatePolynomial(Q,R2);
divisorsdeg1:=[];
CodewordList:=[];
for div in divisorsf do
degy:=DegreeIndeterminate(div,y);
if degy=1 then ######### div=h*y+g
g:=Value(div,vars,[x,Zero(F)]);
h:=Derivative(div,y);
# h:=(div-g)/y;
if DegreeIndeterminate(h,x)=0 then
f:= -h^(-1)*g*y^0;
divisorsdeg1:=Concatenation(divisorsdeg1,[f]);
   if g=Zero(F)*x then
     c:=List(Pts,p->Zero(F));
   else
     c:=List(Pts,p->Value(f,[x,y],[p,Zero(F)]));
   fi;
   CodewordList:=Concatenation(CodewordList,[Codeword(c,C)]);
  fi;  ## closes: if DegreeIndeterminate(h,x)=0
 fi;   ## closes: if degy=1
od;
return CodewordList;
end;
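## Usage of GeneralizedReedSolomonListDecoder (cf. the GAP sessions in the
## examples section above):
##   gap> cs:=GeneralizedReedSolomonListDecoder(C,r,2);
## returns the list of codewords of C corresponding to the degree-one
## (in y) factors of the interpolating polynomial Q(x,y).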
######################################################
NearestNeighborGRSDecodewords:=function(C,r,dist)
# "brute force" decoder
local k,F,Pts,v,p,x,f,NearbyWords,c,a;
k:=C!.degree;
Pts:=C!.points;
F:=LeftActingDomain(C);
NearbyWords:=[];
for v in F^k do
a := Codeword(v);
f:=PolyCodeword(a);
x:=IndeterminateOfLaurentPolynomial(f);
c:=Codeword(List(Pts,p->Value(f,[x],[p])));
if WeightCodeword(r-c) <= dist then
NearbyWords:=Concatenation(NearbyWords,[[c,f]]);
fi;
od;
return NearbyWords;
end;
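## NearestNeighborGRSDecodewords(C,r,7) returns all pairs [c,f], where c is a
## codeword of the GRS code C within Hamming distance 7 of r and f is the
## polynomial of degree < k whose evaluations at the points of C give c.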
NearestNeighborDecodewords:=function(C,r,dist)
 # "brute force" decoder for a general linear code C:
 # returns all codewords of C within distance dist of the received word r
 local k,F,G,v,NearbyWords,c;
 k:=Dimension(C);
 G:=GeneratorMat(C);
 F:=LeftActingDomain(C);
 NearbyWords:=[];
 for v in F^k do
   c := Codeword(v*G);
   if WeightCodeword(r-c) <= dist then
     NearbyWords:=Concatenation(NearbyWords,[c]);
   fi;
 od;
 return NearbyWords;
end;