List-Decoding of Generalized Reed-Solomon Codes Using Sudan's Algorithm
Clifton Lennon
4-6-2005


Abstract

The goal of this project is to write a program which performs list-decoding of Reed-Solomon codes using the Sudan Algorithm [S], and to incorporate this program into GUAVA [GUA], the coding-theory package for the computer algebra system GAP. The Sudan Algorithm is Algorithm 12.1.1 in Justesen and Høholdt's recent book [JH]. GAP is a computer algebra package whose open source kernel is written in the C programming language [GAP]; the GUAVA package, however, is written in GAP's own interpreted language. Neither GAP nor the GUAVA package contains a program for list-decoding of generalized Reed-Solomon codes. Once implemented, this program will greatly increase GAP's speed in decoding Reed-Solomon codes. We will also discuss generalizations, both to the higher rate case (Algorithm 2) and the multivariate case (Algorithm 3).

Introduction

Let $ q$ denote a prime power. A finite field is a finite set of elements with operations of addition and multiplication which satisfy the properties of a field. Let $ \mathbb{F}= GF(q)$ denote a finite field with $ q$ elements. A linear code $ C$ is simply a finite dimensional vector space over a finite field, and its elements are called codewords. If $ C
\subset GF(q)^n$, then we say $ C$ has length $ n$. Moreover, if $ k=\dim(C)$, then we call $ C$ an $ [n,k]$ code. A $ k \times n$ matrix whose rows form a basis of a linear $ [n,k]$ code is called a generator matrix of the code.

Example 1   We give an example of a generator matrix for a [10,5] code over $ \mathbb{F}_{11}$. The generator matrix will be a $ 5 \times 10$ matrix that has linearly independent rows. We can build this generator matrix in standard form by using a $ 5 \times 5$ identity matrix and filling the remaining five columns with elements from $ \mathbb{F}_{11}$. Since we are using the identity matrix and building the generator matrix in standard form, we can put any elements of the field in the remaining five columns and the rows will still be linearly independent.

\begin{displaymath}
\left[
\begin{array}{cccccccccc}
1 & 0 & 0 & 0 & 0 & 6 & 8 & \cdots & & \\
\vdots & & & \ddots & & & & & & \vdots \\
0 & 0 & 0 & 0 & 1 & 10 & 7 & 3 & 0 & 1 \\
\end{array}\right]
\end{displaymath}
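
A matrix of this shape can be produced directly in GUAVA. The following is a small sketch (it assumes the GUAVA package is loaded; the entries of the right-hand block are arbitrary and are chosen at random here, so they will differ from the matrix displayed above):

gap> F:=GF(11);;
gap> A:=List([1..5],i->List([1..5],j->Random(F)));;   # arbitrary entries for the non-identity block
gap> G:=List([1..5],i->Concatenation(IdentityMat(5,F)[i],A[i]));;  # rows of [ I_5 | A ]
gap> C:=GeneratorMatCode(G,F);;   # the [10,5] code generated by the rows of G
gap> Dimension(C);
5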

Let $ C$ be a linear code of length $ n$ over $ \mathbb{F}$ with generator matrix $ G$, where $ q$ is a power of a prime $ p$. If $ p=2$ then the code is called binary. We assume that $ {\mathbb{F}}^{n}$ has been given the standard basis $ \mathbf{e}_{1}=(1,0,...,0)\in {\mathbb{F}}^{n}$, $ \mathbf{e}_{2}=(0,1,0,...,0)\in {\mathbb{F}}^{n}$, ..., $ \mathbf{e}_{n}=(0,0,...,0,1)\in {\mathbb{F}}^{n}$. If the dimension of $ C$ is $ k$, then the number of elements of $ C$ is equal to $ q^{k}$. The quantity $ R=k/n$ is called the rate of the code and measures the amount of information which the code can transmit. For instance, the code in the above example has rate $ 1/2$.

Another important parameter associated to the code is the number of errors which it can, in principle, correct. For this notion, we need to introduce the Hamming metric. For any two $ \mathbf{x},\mathbf{y}\in {\mathbb{F}}^n$, let $ d(\mathbf{x},\mathbf{y})$ denote the number of coordinates where these two vectors differ:

$\displaystyle d(\mathbf{x},\mathbf{y})=\vert\{1\leq i\leq n \,\vert\, x_{i}\not=y_{i}\}\vert.$ (1)

Define the weight of $ \mathbf{v}$, denoted $ w(\mathbf{v})$, to be the number of non-zero entries of $ \mathbf{v}$. Note, $ d(\mathbf{x},\mathbf{y})=w(\mathbf{x}-\mathbf{y})$ because the vector $ \mathbf{x}-\mathbf{y}$ has non-zero entries only at locations where $ \mathbf{x}$ and $ \mathbf{y}$ differ.
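
In GUAVA this identity can be checked directly on small examples; a quick sketch (the vectors here are chosen only for illustration):

gap> x:=Codeword([1,0,1,1,0],GF(2));;
gap> y:=Codeword([1,1,1,0,0],GF(2));;
gap> WeightCodeword(x-y);   # the words differ in positions 2 and 4, so d(x,y)=2
2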

We call the minimum distance of a code $ C$, denoted $ d(C)$, the smallest distance between distinct codewords in $ C$. There exist distinct codewords $ \mathbf{x}$ and $ \mathbf{y}$ such that $ d(C)=d(\mathbf{x},\mathbf{y})$. Since $ C$ is linear, $ \mathbf{x}-\mathbf{y}$ is a non-zero codeword, so $ d(C)=w(\mathbf{x}-\mathbf{y}) \geq w(C)$, where $ w(C)$ is the minimum weight of a non-zero codeword in $ C$. Also, for some non-zero codeword $ \mathbf{z}$, $ w(C)=w(\mathbf{z})=d(\mathbf{0},\mathbf{z}) \geq d(C)$. Therefore, $ w(C)=d(C)$. Now, we see that the minimum distance of $ C$ satisfies

$\displaystyle d(C)=\min_{\mathbf{c}\in C,\ \mathbf{c}\not= \mathbf{0}}d(\mathbf{0},\mathbf{c}).$ (2)

In general, this parameter $ d=d(C)$ is very difficult to determine efficiently. (In fact, computing it in general is known to be NP-complete [BMT].) The parameter $ d(C)$ is very important because in principle it is always possible to correct $ [(d-1)/2]$ errors. (Please see Irons [I] for more details on the Nearest Neighbor Algorithm.)
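
For example, the $ [15,3]$ Reed-Solomon code used in the examples below has minimum distance $ d=13$, so nearest neighbor decoding can correct up to $ [(13-1)/2]=6$ errors.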

Definition 2 ([JH], p50)   Let $ x_{1},...,x_{n}$ be different elements of a finite field $ \mathbb{F}$. For $ k \leq n$ consider the vector space $ \mathbb{P}_{k}$ of polynomials in $ \mathbb{F}[x]$ of degree $ < k$. A (generalized) Reed-Solomon code $ RS(k,q)$ is a code in $ \mathbb{F}^n$ whose codewords are of the form
$ {(f(x_{1})},{f(x_{2})},...,{f(x_{n})})$ where $ f \in \mathbb{P}_{k}$.

It is easy to check that $ RS(k,q)$ is a linear code.

Example 3   We now give an explicit example of a [10,5] Reed-Solomon code over $ \mathbb{F}=GF(11)$. Using the definition, we let $ \mathbb{P}_{5}=\{\mbox{polynomials of degree} \leq 4\}$. Because our code is over $ GF(11)$, we choose the evaluation points $ \{x_{1}=1,x_{2}=2, ..., x_{10}=10\} \subset \mathbb{F}$. We take the basis $ \{b_{1}=1,b_{2}=x,b_{3}=x^2,b_{4}=x^3,b_{5}=x^4\}$ of the vector space $ \mathbb{P}_5$ over $ \mathbb{F}$. From this we can determine a generator matrix for $ RS(5,11)$:

\begin{displaymath}
G=
\left[
\begin{array}{ccc}
b_{1}(x_{1}) & \hdots & b_{1}(x_{10}) \\
\vdots & & \vdots \\
b_{5}(x_{1}) & \hdots & b_{5}(x_{10}) \\
\end{array}\right]
=
\left[
\begin{array}{cccccccccc}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
1 & 4 & 9 & 5 & 3 & 3 & 5 & 9 & 4 & 1 \\
1 & 8 & 5 & 9 & 4 & 7 & 2 & 6 & 3 & 10 \\
1 & 5 & 4 & 3 & 9 & 9 & 3 & 4 & 5 & 1 \\
\end{array}\right]
\end{displaymath}
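
This code can also be constructed with GUAVA's GeneralizedReedSolomonCode command, the same command used in the decoding sessions below. A short sketch (assuming GUAVA and the implementation of [McG] are loaded):

gap> F:=GF(11);;
gap> R1:=PolynomialRing(F,1);;
gap> Pts:=List([1..10],i->i*One(F));;             # the evaluation points x_1,...,x_10
gap> C:=GeneralizedReedSolomonCode(Pts,5,R1);;    # k = 5
gap> MinimumDistance(C);                          # MDS, so d = n-k+1 = 6
6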

Since $ n\leq q$, by the assumption in Definition 2, the dimension of the Reed-Solomon code $ C=RS(k,q)$ is equal to the same $ k$ used in $ \mathbb{P}_{k}$ ([JH], p50). A code is MDS if its parameters satisfy the Singleton bound $ d\leq n-k+1$ with equality ([JH], p49). Generalized Reed-Solomon codes are MDS codes, so $ d=n-k+1$ is easily computed in terms of the other parameters. (Please see [McG] for more details on this.)

Reed-Solomon codes were discovered in 1959 and have applications in CDs, DVDs, and satellite communications, among other things ([JH], p49). The GUAVA package does not contain any programs which provide fast list-decoding of Reed-Solomon codes. In fact, to our knowledge, this list decoder has not yet been implemented in any computer algebra system. A fast decoder for generalized Reed-Solomon codes has recently been implemented by J. McGowan [McG]; however, McGowan's program does not perform list-decoding. A powerful method of decoding these codes is list decoding using the Sudan Algorithm: instead of using brute force to examine all codewords and find the ones closest to the received vector, it uses systems of linear equations and polynomial interpolation. The brute-force method is known as nearest neighbor decoding, in which the received vector $ \mathbf{r}$ is decoded as a codeword $ \mathbf{c}$ for which $ d(\mathbf{c},\mathbf{r})$ is minimal ([HILL], p5). List decoding, by contrast, returns a list of all codewords within some fixed distance of the received vector.

Sudan's algorithm

Let $ C$ be a generalized Reed-Solomon code with parameters $ [n,k,n-k+1]$. Let $ \tau=[(d-1)/2]$.

Now we discuss a generalization of the algorithm implemented in [McG]. See his discussion for further details of the case $ \ell=1$.

We use the notation in Definition 2. Let $ \mathbf{r}=\mathbf{c}+\mathbf{e}=(r_1,...,r_n) \in \mathbb{F}^n$ be a received word where $ \mathbf{c}=(f(x_1),...,f(x_n))$ is a codeword in $ C$. Assume that the weight of the error vector $ \mathbf{e}$ is less than or equal to $ \tau$, $ w(\mathbf{e}) \leq \tau$. In other words, $ \tau$ represents the maximum number of errors which the algorithm below can correct. The idea is to determine a non-zero polynomial

$\displaystyle Q(x,y)=Q_{0}(x)+Q_{1}(x)y+ \dots +Q_\ell(x)y^\ell
$

satisfying
  1. $ Q(x_i,r_i)=0$ for all $ i$

  2. $ \deg(Q_j(x)) \leq n- \tau -1-j(k-1)$, for $ j=0,..., \ell$

Such a polynomial $ Q$ (depending on $ \mathbf{r}$) is called an interpolating polynomial.

Here $ \ell$ represents the maximum number of codewords near $ \mathbf{r}$ which the algorithm below returns. (When $ \ell=1$, the codeword closest to $ \mathbf{r}$ is returned; when $ \ell=2$, the two codewords closest to $ \mathbf{r}$ are returned, and so on.)

Our next aim will be to show that such a non-zero polynomial $ Q$ exists under certain conditions (to be made explicit below).

Lemma 4 ([JH], Ch 12, p127)   If $ Q(x,y)$ satisfies the above conditions and if $ \mathbf{c}=(f(x_1),f(x_2), ...,f(x_n))$ with $ \deg(f(x))<k$, then $ y-f(x)$ must divide $ Q(x,y)$.

Proof: By definition of $ Q$ and the fact that $ \deg(f(x))<k$, the polynomial $ Q(x,f(x))$ has degree at most $ n- \tau-1$. Since $ r_i=f(x_i)$ except in at most $ \tau$ cases, we have that $ Q(x_i,f(x_i))=0$ for at least $ n-\tau$ of the $ i$ in $ 1\leq i\leq n$. This forces $ Q(x,f(x))=0$ identically, since a non-zero polynomial of degree at most $ n-\tau-1$ can have at most $ n-\tau-1$ zeroes. Therefore, $ y=f(x)$ is a root of the polynomial $ Q(x,y)$. If we consider $ Q(x,y)$ as a polynomial in $ y$ over the ring $ \mathbb{F}[x]$, division with remainder by the monic polynomial $ y-f(x)$ implies that $ y-f(x)$ divides $ Q(x,y)$. $ \Box$
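
For instance, in the simplest case $ \ell=1$, write $ Q(x,y)=Q_0(x)+Q_1(x)y$; then $ Q(x,f(x))=0$ gives $ Q_0(x)=-Q_1(x)f(x)$, so $ Q(x,y)=Q_1(x)(y-f(x))$, exactly as the lemma asserts.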

This lemma means that any codeword $ \mathbf{c} \in RS(k,q)$ as above is associated with a factor of $ Q(x,y)$.

Under what conditions on $ \tau$ and $ \ell$ does such an interpolating polynomial exist? Let us regard all the coefficients of $ Q(x,y)$ as unknowns. The definition of $ Q(x,y)$ tells us that the number of unknowns is determined by the conditions (2). Therefore, the number of unknowns is

\begin{displaymath}
\begin{array}{c}
(n-\tau)+(n-\tau-(k-1))+(n-\tau-2(k-1))+\cdots+(n-\tau-\ell(k-1))\\
=(\ell+1)(n-\tau)-\frac{1}{2} \ell(\ell+1)(k-1).
\end{array}\end{displaymath}

By condition (1) in the definition of $ Q(x,y)$, there are $ n$ linear equations constraining these unknown coefficients. Therefore, there are more unknowns than constraining equations provided

$\displaystyle (\ell+1)(n-\tau)-\frac{1}{2} \ell(\ell+1)(k-1)>n.
$

This is equivalent to saying that $ \tau$ satisfies the inequality

$\displaystyle \tau<\frac{n\ell}{\ell+1}-\frac{\ell(k-1)}{2}.
$

Since, without loss of generality, $ \tau>0$, if $ \ell \geq 2$ we must have $ n>\frac{(\ell+1)(k-1)}{2}\geq \frac{3}{2}(k-1)$. In particular, this algorithm does not apply when $ R=\frac{k}{n}>\frac{2}{3}+\frac{1}{n}$.
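
For instance, for the $ [15,3]$ Reed-Solomon code used in the Examples section, taking $ \ell=2$ and $ \tau=7$ gives $ (\ell+1)(n-\tau)-\frac{1}{2}\ell(\ell+1)(k-1)=24-6=18$ unknowns but only $ n=15$ equations, so a non-zero interpolating polynomial $ Q$ is guaranteed to exist.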

The most interesting case is when the number of correctable errors, $ \tau$, is greater than $ \frac{d-1}{2}=\frac{n-k}{2}$, since otherwise the Nearest Neighbor Algorithm applies. (Please see [I] for a discussion of this.)

We now determine when $ \tau>\frac{n-k}{2}$. This condition forces

$\displaystyle \ell \frac{2n-(\ell+1)(k-1)}{2(\ell+1)}>\frac{n-k}{2},
$

which forces $ n(\ell-1)>(\ell+1)((\ell-1)k-\ell)$, so $ n>(\ell+1)(k-\frac{\ell}{\ell-1})
\geq (\ell+1)(k-2)$. From this condition we determine that $ \frac{k-\frac{\ell}{\ell-1}}{n}<\frac{1}{\ell+1}$, so

$\displaystyle R<\frac{1}{\ell+1}+\frac{\frac{\ell}{\ell-1}}{n}
=\frac{1}{\ell+1}+\frac{\ell}{(\ell-1)n}.
$

In particular, this algorithm applies to ``low rate" RS codes. Also, it says $ \ell<\frac{n}{k-2}$, giving us a crude bound on the maximum number of codewords ``near" $ \mathbf{r}$ returned by Sudan's Algorithm.

Another condition arises from condition (2) with $ j=\ell$, namely $ 0\leq \deg Q_{\ell}(x)\leq
n-\tau-1-\ell(k-1)$, which implies $ \tau \leq n-1-\ell(k-1)$.

Summarizing the above, list decoding applies only to ``low rate'' codes, and it beats the Nearest Neighbor Algorithm when $ \frac{d-1}{2}<\tau\leq n(1-R\ell)+\ell-1$.
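
As a concrete check: for the $ [15,3]$ Reed-Solomon code over $ GF(16)$ used in the next section, with $ \ell=2$ we have $ \frac{d-1}{2}=6$, $ \frac{n\ell}{\ell+1}-\frac{\ell(k-1)}{2}=10-2=8$, and $ n(1-R\ell)+\ell-1=15-6+1=10$. The only integer value of $ \tau$ satisfying all of these constraints is $ \tau=7$, which is exactly the number of errors corrected in the examples below.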

Algorithm 1 ([JH], Ch 12, p129)   List decoding of RS codes using Sudan algorithm

  1. Input: A received word $ \mathbf{r}=(r_{1},r_{2}...,r_{n})$ and a natural number $ \ell$.

  2. Solve the system of linear equations

    \begin{displaymath}
    \sum_{j=0}^\ell
    \left( \begin{array}{cccc}
    r_{1}^j & 0 & \hdots & 0 \\
    0 & r_{2}^j & \hdots & 0 \\
    \vdots & \vdots & \ddots & \vdots \\
    0 & 0 & \hdots & r_{n}^j
    \end{array} \right)
    \left( \begin{array}{cccc}
    1 & x_{1} & \hdots & x_{1}^{\ell_{j}} \\
    1 & x_{2} & \hdots & x_{2}^{\ell_{j}} \\
    \vdots & \vdots & & \vdots \\
    1 & x_{n} & \hdots & x_{n}^{\ell_{j}}
    \end{array} \right)
    \left( \begin{array}{c}
    Q_{j,0} \\ Q_{j,1} \\ \vdots \\ Q_{j,\ell_{j}}
    \end{array} \right)
    =
    \left( \begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \end{array} \right) \qquad (3)
    \end{displaymath}

    Here $ \ell_{j}=n-\tau-1-j(k-1)$.
  3. Put the result in

    $\displaystyle Q_{j}(x)=\sum_{r=0}^{\ell_{j}}
Q_{j,r}x^r,
$

    and

    $\displaystyle Q(x,y)=\sum_{j=0}^{\ell}
Q_{j}(x)y^j.
$

  4. Find all factors of $ Q(x,y)$ of the form $ (y-f(x))$ with $ \deg(f(x)) < k$.

  5. Output: A list of at most $ \ell$ codewords $ \mathbf{c}=(f(x_{1}),...,f(x_{n}))$, obtained from the factors $ f(x)$ above, that satisfy

    $\displaystyle d(\mathbf{c},\mathbf{r})\leq \tau.
$

The GAP implementation is given in the appendix. The factorization routine used there is not the optimal one.

Examples of this algorithm are given in the next section.

Examples

In this section we give examples of a GUAVA implementation of Sudan's Algorithm 1.

Let $ \alpha$ be a primitive element of $ \mathbb{F}_{16}$ where $ \alpha^4+
\alpha^3+1=0$ and consider the $ [15,3]$ Reed-Solomon code obtained by evaluating polynomials of degree at most 2 in the powers of $ \alpha$. The code has minimum distance 13 and thus is 6-error correcting. With $ \ell=2$ it is possible to decode up to seven errors with list size at most 2. Suppose $ \mathbf{r}=(0,0,0,0,0,0,0,0,\alpha^6,\alpha^2,\alpha^5,
\alpha^{14},\alpha,\alpha^7,\alpha^{11})$ is the received vector. Solving the system in step 2 and then plugging the results into the polynomial in step 3 gives $ Q(x,y)=(1+x)y+y^2=(y-0)(y-(1+x))$. Since the linear factors of $ Q(x,y)$ are $ y-0$ and $ y-(1+x)$, the functions $ f$ in step 4 are $ f(x)=0$ and $ f(x)=x+1$. Therefore, by step 5, the corresponding two codewords, each obtained by correcting 7 errors, are:

$\displaystyle \mathbf{c_{1}}=(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),
$

$\displaystyle \mathbf{c_{2}}=(0,\alpha^{12},\alpha^9,\alpha^4,\alpha^3,\alpha^{10},\alpha^8,\alpha^{13},
\alpha^6,\alpha^2,\alpha^5,
\alpha^{14},\alpha,\alpha^7,\alpha^{11}).
$

We used our GAP code to verify the above example. The following session shows the commands and output.

gap> F:=GF(16);
GF(2^4)
gap> a:=PrimitiveRoot(F);; b:=a^7; b^4+b^3+1; ## alpha in JH Ex 12.1.1, pg 129
Z(2^4)^7
0*Z(2)
gap> Pts:=List([0..14],i->b^i);
[ Z(2)^0, Z(2^4)^7, Z(2^4)^14, Z(2^4)^6, Z(2^4)^13, Z(2^2), Z(2^4)^12,
  Z(2^4)^4, Z(2^4)^11,
 Z(2^4)^3, Z(2^2)^2, Z(2^4)^2, Z(2^4)^9, Z(2^4), Z(2^4)^8 ]
gap> R1:=PolynomialRing(F,1);;
gap> vars:=IndeterminatesOfPolynomialRing(R1);;
gap> x:=vars[1];
x_1
gap> y:=Indeterminate(F,vars);;
gap> R2:=PolynomialRing(F,[x,y]);;
gap> C:=GeneralizedReedSolomonCode(Pts,3,R1); MinimumDistance(C);
a linear [15,3,1..13]10..12  generalized Reed-Solomon code over GF(16)
13
gap> z:=Zero(F);
0*Z(2)
gap> r:=[z,z,z,z,z,z,z,z,b^6,b^2,b^5,b^14,b,b^7,b^11];; ## as in JH Ex 12.1.1
gap> r:=Codeword(r);
[ 0 0 0 0 0 0 0 0 a^12 a^14 a^5 a^8 a^7 a^4 a^2 ]
gap> cs1:=NearestNeighborGRSDecodewords(C,r,7); time;
[ [ [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ], 0*Z(2) ],
  [ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
      x_1+Z(2)^0 ] ]
1556
gap>  cs2:=GeneralizedReedSolomonListDecoder(C,r,2); time;
[ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
  [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ] ]
151

This verifies the above example.

For the next example we use a slightly different received vector. One of the codewords listed will actually correct 8 errors.

gap> F:=GF(16);
GF(2^4)
gap> a:=PrimitiveRoot(F);; b:=a^7; b^4+b^3+1; ## alpha in JH Ex 12.1.1, pg 129
Z(2^4)^7
0*Z(2)
gap> Pts:=List([0..14],i->b^i);
[ Z(2)^0, Z(2^4)^7, Z(2^4)^14, Z(2^4)^6, Z(2^4)^13, Z(2^2), Z(2^4)^12,
  Z(2^4)^4, Z(2^4)^11,
 Z(2^4)^3, Z(2^2)^2, Z(2^4)^2, Z(2^4)^9, Z(2^4), Z(2^4)^8 ]
gap> R1:=PolynomialRing(F,1);;
gap> vars:=IndeterminatesOfPolynomialRing(R1);;
gap> x:=vars[1];
x_1
gap> y:=Indeterminate(F,vars);;
gap> R2:=PolynomialRing(F,[x,y]);;
gap> C:=GeneralizedReedSolomonCode(Pts,3,R1); MinimumDistance(C);
a linear [15,3,1..13]10..12  generalized Reed-Solomon code over GF(16)
13
gap> z:=Zero(F);
0*Z(2)
gap> r:=[z,z,z,z,z,z,z,b^13,b^6,b^2,b^5,b^14,b,b^7,b^11];;
gap> r:=Codeword(r);
[ 0 0 0 0 0 0 0 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ]
gap> cs1:=NearestNeighborGRSDecodewords(C,r,7); time;
[ [ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ], x_1+Z(2)^0 ] ]
1570
gap> cs2:=GeneralizedReedSolomonListDecoder(C,r,2); time;
[ [ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ],
 [ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ] ]
147
gap> c1:=cs2[1]; c1 in C;
[ 0 a^9 a^3 a^13 a^6 a^10 a^11 a a^12 a^14 a^5 a^8 a^7 a^4 a^2 ]
true
gap> c2:=cs2[2]; c2 in C;
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ]
true
gap> WeightCodeword(c1-r);
6
gap> WeightCodeword(c2-r);
8

The time to run the program that finds the nearest neighbor codewords by brute force (1570 ``gapstones'') is much longer than the time to run the program which uses Sudan's algorithm (147 ``gapstones'') for this example.

Now let us try to use the standard command Decodeword to decode the received vector $ \mathbf{r}$. We type into GUAVA the following command:

gap> Decodeword(C,r); time;
Error, Denominator evaluates as zero called from
Value( rf, inds, vals, One( CoefficientsFamily( FamilyObj( rf ) ) ) ) called from
Value( f, [ x ], [ s ] ) called from
func( elm ) called from
List( P, function ( s )
      return Value( f, [ x ], [ s ] );
  end ) called from
SpecialDecoder( C )( C, c ) called from
...
Entering break read-eval-print loop ...
you can 'quit;' to quit to outer loop, or
you can 'return;' to continue
brk>

The error that GUAVA gives us here indicates that it is not possible to use ordinary decoding with a received word with $ 7$ errors, because $ 7 >
[(13-1)/2]$. However, list decoding does work in this case.

Generalizations

This section contains related algorithms which will be briefly discussed but not implemented in this project.

Higher rate codes

Algorithm 2 ([JH], Ch 12, p131)   List decoding of RS codes using the Guruswami-Sudan algorithm [GS].

  1. Input: A received word $ \mathbf{r}=(r_{1},r_{2},...,r_{n})$ and natural numbers $ \tau$ and $ s$.

  2. Solve for $ Q_{a,b}$ the system of linear equations, for all $ h+r<s$ and $ i=1,2,...,n$:

    $\displaystyle \sum_{a \geq h,\; b\geq r} {a \choose h}{b \choose r}\, Q_{a,b}\, x_{i}^{a-h}r_{i}^{b-r}=0$ (4)

    with $ Q_{a,b}=0$ if $ a>\ell$ or $ b> \ell_{a}$, where $ \ell_{a}=s(n-\tau)-1-a(k-1)$.

  3. Put

    $\displaystyle Q_{j}(x)= \sum_{r=0}^{\ell_{j}} Q_{j,r}x^{r} \quad {\rm and} \quad Q(x,y)=\sum_{j=0}^{\ell} Q_{j}(x)y^{j}.$ (5)

  4. Find all factors of $ Q(x,y)$ of the form $ (y-f(x))$ with $ \deg(f(x))<k$.

  5. Output: A list of factors $ f(x)$ that satisfy

    $\displaystyle d((f(x_{1}),f(x_{2}),..., f(x_{n})),(r_{1},r_{2},...,r_{n}))< \tau.
$

This is an improvement on Algorithm 1 since the Guruswami-Sudan Algorithm works for codes with any rate, but the Sudan Algorithm only works for RS codes with low rates ([JH], p130).

Higher dimensions

In the remainder of this section we speculate on Sudan's generalization to polynomials in more than one variable. No proofs will be given. For details please see §5 of [S].

In general terms, the idea of the higher-dimensional generalization is the following:

Input: a finite field $ \mathbb{F}= GF(q)$, a subset $ S \subset \mathbb{F}$, the dimension $ t \geq 1$, a function representing the received vector $ g:S^t\rightarrow \mathbb{F}$, the number of coordinates where the received vector is correct $ s \geq 1$, and the ``weighted degree'' of the code $ r\geq 1$.

Output: All multivariate polynomials $ f:\mathbb{F}^t\rightarrow \mathbb{F}$, $ \deg_{wt}(f)<r$, such that $ \vert\{ \mathbf{x} \in S^t  \vert f(\mathbf{x})=g(\mathbf{x})\}\vert\geq s$ where $ \deg_{wt}$ denotes a weighted degree.

A more precise version is stated below.

Consider the following type of evaluation code

$\displaystyle C=\{ (f(\mathbf{p}_1),...,f(\mathbf{p}_n))  \vert \deg_{wt}(f) < r\},
$

where $ n=\vert S\vert^t$, $ k=\dim(\mathbb{P}_r)$, where

$\displaystyle \mathbb{P}_r=\{\mbox{polynomials } f \mbox{ in } t \mbox{ variables } x_1, ..., x_t \mbox{ with } \deg_{wt}(f)<r\},
$
$

and $ S^t=\{\mathbf{p}_1,...,\mathbf{p}_n\} \subset \mathbb{F}^t$.

Here is the more precise version of Sudan's Algorithm, as it applies to this case.

Algorithm 3  
  1. Let $ S, \mathbb{F}$, $ r$, $ s$, $ t$, $ k$, $ n$ be as above.

  2. Choose new parameters $ \ell, m$ such that

    $\displaystyle m+\ell r \geq (t+1)(r+1)^{\frac{1}{t+1}}n^{\frac{1}{t+1}},
     s>(m+\ell r)\vert S\vert^{t-1}.
$

  3. Let $ \deg_{wt}(x_1^{e_1}...x_t^{e_t}) =e_1+...+e_{t-1}+re_t$

  4. Find any non-zero function $ Q:\mathbb{F}^{t+1}\rightarrow \mathbb{F}$ satisfying
    • $ \deg_{wt}(Q(\mathbf{x},y)) < m+r \ell$,

    • for all $ \mathbf{x} \in S^t$, $ Q(\mathbf{x},g(\mathbf{x})) = 0$,

    • factor $ Q(\mathbf{x},y)$ into irreducibles.

    Let L denote the list of all polynomials $ f(\mathbf{x})$ with weighted degree $ < r$ such that $ y-f(\mathbf{x})$ divides $ Q(\mathbf{x},y)$.

    Output: The list of codewords $ T = \{ \mathbf{c}= (f(\mathbf{p}_1),...,f(\mathbf{p}_n)) \ \vert\ f \in L\}$. If $ \mathbf{r} = (g(\mathbf{p}_1),...,g(\mathbf{p}_n))$ represents the received word then each $ \mathbf{c}\in T$ satisfies $ d(\mathbf{c},\mathbf{r})<n-s$.

Example 5   Certain ``toric codes'' constructed in [J] meet the criteria above. For such codes, the method sketched in the above algorithm appears to be new.

Finally, note that the command SolveLinearEquations in the curves GAP package [Crv] can be used to solve for the coefficients of $ Q$ in the system of equations $ Q(\mathbf{x},g(\mathbf{x})) = 0$.

Appendix: GAP Code

#  List decoder for RS codes using Sudan's algorithm (Algorithm 1)
#     (this implementation only works for low rate codes)
#
########################################################


#Input: List coeffs of coefficients, R = F[x]
#Output: polynomial L[0]+L[1]x+...+L[d]x^d
#
CoefficientToPolynomial:=function(coeffs,R)
  local p,i,j, lengths, F,xx;
  xx:=IndeterminatesOfPolynomialRing(R)[1];
  F:=Field(coeffs);
  p:=Zero(F);
# lengths:=List([1..Length(coeffs)],i->Sum(List([1..i],j->1+coeffs[j])));
  for i in [1..Length(coeffs)] do 
   p:=p+coeffs[i]*xx^(i-1); 
  od;
  return p;
end;


#Input: Pts=[x1,..,xn], a = element of L
#Output: Vandermonde matrix (xi^j)
#
VandermondeMat:=function(Pts,a)
## returns an nx(a+1) matrix
 local V,n,i,j;
 n:=Length(Pts);
 V:=List([1..(a+1)],j->List([1..n],i->Pts[i]^(j-1)));
 return TransposedMat(V);
 end;


#Input: r=[r1,...,rn] is the received vector,
#       Pts=[x1,...,xn] are the evaluation points,
#       L=[l_0,...,l_ell] is the list of degree bounds l_j = n-tau-1-j(k-1)
#Output: coefficient matrix of the linear system in Algor. 12.1.1 in [JH]
#
LocatorMat:=function(r,Pts,L)
## returns an nx(ell+sum(L)) matrix
  local a,j,b,ell,add_col_mat,add_row_mat,block_matrix,diagonal_power;

 add_col_mat:=function(M,N) ## "AddColumnsToMatrix"
  #N is a matrix with same rowdim as M 
  #the fcn adjoins N to the end of M
  local i,j,S,col,NT;
  col:=MutableTransposedMat(M);  #preserves M
  NT:=MutableTransposedMat(N);   #preserves N
  for j in [1..DimensionsMat(N)[2]] do
      Add(col,NT[j]);
  od;
  return MutableTransposedMat(col);
 end; 

 add_row_mat:=function(M,N) ## "AddRowsToMatrix"
  #N is a matrix with same coldim as M 
  #the fcn adjoins N to the bottom of M
  local i,j,S,row;
  row:=ShallowCopy(M);#to preserve M;
  for j in [1..DimensionsMat(N)[1]] do
    Add(row,N[j]);
  od;
  return row;
 end;

 block_matrix:=function(L) ## "MakeBlockMatrix"
  #L is an array of matrices of the form
  #[[M1,...,Ma],[N1,...,Na],...,[P1,...,Pa]]
  #returns the associated block matrix 
 local A,B,i,j,m,n;
  n:=Length(L[1]);
  m:=Length(L);
  A:=[];
  if n=1 then
     if m=1 then return L[1][1]; fi;
     A:=L[1][1];
     for i in [2..m] do
         A:=add_row_mat(A,L[i][1]);
     od;
     return A;
  fi;
  for j in [1..m] do
   A[j]:=L[j][1];
  od;
  for j in [1..m] do
   for i in [2..n] do
    A[j]:=add_col_mat(A[j],L[j][i]);
   od;
  od;
  B:=A[1];
  for j in [2..m] do
   B:= add_row_mat(B,A[j]);
  od;
  return B;
 end;

 diagonal_power:=function(r,j)
 ## returns an nxn matrix
  local A,n,i;
  n:=Length(r);
  A:=DiagonalMat(List([1..n],i->r[i]^j));
  return A;
 end;

  ell:=Length(L); 
  a:=List([1..ell],j->diagonal_power(r,(j-1))*VandermondeMat(Pts,L[j]));
  b:=List([1..ell],j->[1,j,a[j]]);
  return block_matrix([a]);  
end;


# Compute kernel of matrix in alg 12.1.1 in [JH].
# Choose a basis vector in kernel.  
# Construct the polynomial Q(x,y) in alg 12.1.1.  
#
ErrorLocatorCoeffs:=function(r,Pts,L)
  local a,j,b,vec,e,QC,i,lengths,ker,ell;
  ell:=Length(L); 
  e:=LocatorMat(r,Pts,L);
  ker:=TriangulizedNullspaceMat(TransposedMat(e));
  if ker=[] then Print("Decoding fails.\n"); return []; fi;
  vec:=ker[Length(ker)];
  QC:=[];
  lengths:=List([1..ell],i->Sum(List([1..i],j->1+L[j])));
  QC[1]:=List([1..lengths[1]],j->vec[j]);
  for i in [2..ell] do
  QC[i]:=List([(lengths[i-1]+1)..lengths[i]],j->vec[j]);
  od;
  return QC;
end;



#Input: received word r, Pts=[x1,..,xn],
#       L = list of degree bounds, R = a polynomial ring whose first variable is x
#Output: list of polynomials Q_j as in Algor. 12.1.1 in [JH]
# 
ErrorLocatorPolynomials:=function(r,Pts,L,R)
  local q,p,i,ell;
  ell:=Length(L)+1; ##  ?? Length(L) instead ??
  q:=ErrorLocatorCoeffs(r,Pts,L);
  if q=[] then Print("Decoding fails.\n"); return []; fi;
   p:=[];
  for i in [1..Length(q)] do 
    p:=Concatenation(p,[CoefficientToPolynomial(q[i],R)]);
  od;
  return p;
end;


#Input: received word r, Pts=[x1,..,xn],
#       L = list of degree bounds, R = F[x,y]
#Output: interpolating polynomial Q(x,y) as in Algor. 12.1.1 in [JH]
# 
InterpolatingPolynomialGRS:=function(r,Pts,L,R)
  local poly,i,Ry,F,y,Q,ell;
  ell:=Length(L)+1; 
Q:=ErrorLocatorPolynomials(r,Pts,L,R);
  if Q=[] then Print("Decoding fails.\n"); return 0; fi;
  F:=CoefficientsRing(R);
  y:=IndeterminatesOfPolynomialRing(R)[2];
# Ry:=PolynomialRing(F,[y]);
# poly:=CoefficientToPolynomial(Q,Ry);
  poly:=Sum(List([1..Length(Q)],i->Q[i]*y^(i-1)));
  return poly;
end;


GeneralizedReedSolomonListDecoder:=function(C,v,ell)
#
# v is a received vector (a GUAVA codeword)
# C is a GRS code
# ell>0 is the length of the decoded list (should be at least
#  2 to beat GeneralizedReedSolomonDecoder
#  or Decoder with the special method of interpolation
#  decoding)
#
local f,h,g,x,R,R2,L,F,t,i,c,Pts,k,n,tau,Q,divisorsf,div,
      CodewordList,p,vars,y,degy, divisorsdeg1;
 R:=C!.ring;
 F:=CoefficientsRing(R);
 vars:=IndeterminatesOfPolynomialRing(R);
 x:=vars[1]; 
 Pts:=C!.points;
 n:=Length(Pts);
 k:=C!.degree; 
 tau:=Int((n-k)/2);
 L:=List([0..ell],i->n-tau-1-i*(k-1));
 y:=X(F,vars);;
 R2:=PolynomialRing(F,[x,y]);
 vars:=IndeterminatesOfPolynomialRing(R2);
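 # Steps 2 and 3 of Algorithm 1: solve the linear system for the
 # coefficients of Q and assemble the interpolating polynomial Q(x,y).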
 Q:=InterpolatingPolynomialGRS(v,Pts,L,R2); 
 divisorsf:=DivisorsMultivariatePolynomial(Q,R2);
 divisorsdeg1:=[];
 CodewordList:=[];
 for div in divisorsf do
  degy:=DegreeIndeterminate(div,y);
  if degy=1 then ######### div=h*y+g
    g:=Value(div,vars,[x,Zero(F)]);
    h:=Derivative(div,y);
#    h:=(div-g)/y;
   if DegreeIndeterminate(h,x)=0 then
      f:= -h^(-1)*g*y^0;
      divisorsdeg1:=Concatenation(divisorsdeg1,[f]);
    if g=Zero(F)*x then
       c:=List(Pts,p->Zero(F));
     else
       c:=List(Pts,p->Value(f,[x,y],[p,Zero(F)]));
    fi;
    CodewordList:=Concatenation(CodewordList,[Codeword(c,C)]);
   fi;
  fi;
 od;
 return CodewordList;
end;

######################################################

NearestNeighborGRSDecodewords:=function(C,r,dist)
# "brute force" decoder
local k,F,Pts,v,p,x,f,NearbyWords,c,a;
 k:=C!.degree;
 Pts:=C!.points;
 F:=LeftActingDomain(C);
 NearbyWords:=[];
 for v in F^k do
   a := Codeword(v); 
   f:=PolyCodeword(a);
   x:=IndeterminateOfLaurentPolynomial(f);
   c:=Codeword(List(Pts,p->Value(f,[x],[p])));
   if WeightCodeword(r-c) <= dist then
   NearbyWords:=Concatenation(NearbyWords,[[c,f]]); 
 fi;
od;
return NearbyWords;
end;

NearestNeighborDecodewords:=function(C,r,dist)
# "brute force" decoder for an arbitrary linear code C
local k,F,G,v,NearbyWords,c;
 k:=Dimension(C);
 F:=LeftActingDomain(C);
 G:=GeneratorMat(C);
 NearbyWords:=[];
 for v in F^k do
   c := Codeword(v*G);
   if WeightCodeword(r-c) <= dist then
     NearbyWords:=Concatenation(NearbyWords,[c]);
   fi;
 od;
 return NearbyWords;
end;

Bibliography

[BMT] E. R. Berlekamp, R. J. McEliece, and H. C. A. Van Tilborg. On the inherent intractability of certain coding problems. IEEE Trans. Inform. Theory. 24 (1978) 384-386.

[GAP] GAP: Groups, Algorithms, Programming.

[GS] V. Guruswami and M. Sudan. Improved decoding of Reed-Solomon codes and algebraic geometry codes. IEEE Trans. Inform. Theory, Vol. 45, 1999, 1757-1767.

[GUA] GAP GUAVA web page.

[HILL] R. Hill. A first course in coding theory. Oxford University Press, 1986.

[HP] W. Huffman and V. Pless. Fundamentals of error-correcting codes. Cambridge University Press, 2003.

[JH] J. Justesen and T. Høholdt. A course in error-correcting codes. European Mathematical Society, 2004.

[I] J. W. Irons. A polynomial-time probabilistic algorithm for the minimum distance of a non-binary linear error-correcting code. USNA Math Honors project, Advisor: Prof Joyner, 2005.

[J] D. Joyner. Toric codes over finite fields. Appl. Alg. Eng. Commun. and Comp., vol. 15, no. 1 (2004), 63-79.

[McG] J. McGowan. Implementing Generalized Reed-Solomon Codes and a Cyclic Code Decoder in GUAVA. USNA Math Honors project, Advisor: Prof Joyner, 2005.

[MS] F. J. MacWilliams and N. J. A. Sloane. The theory of error-correcting codes. North-Holland. (1983)

[S] M. Sudan. Decoding of Reed-Solomon codes beyond the error-correction bound. Journal of Complexity. Vol. 13, 1997, 180-193.

