

Volume 24, N. 2, pp. 231–244, 2005 Copyright © 2005 SBMAC ISSN 0101-8205 www.scielo.br/cam

Goppa and Srivastava codes over finite rings

ANTONIO APARECIDO DE ANDRADE1 and REGINALDO PALAZZO JR.2∗

1 Department of , Ibilce, Unesp, 15054-000 São José do Rio Preto, SP, Brazil
2 Department of Telematics, Feec, Unicamp, 13083-852 Campinas, SP, Brazil
E-mails: [email protected] / [email protected]

Abstract. Goppa and Srivastava codes over arbitrary local finite commutative rings with identity are constructed in terms of parity-check matrices. An efficient decoding procedure, based on the modified Berlekamp-Massey algorithm, is proposed for Goppa codes.

Mathematical subject classification: 11T71, 94B05, 94B40.

Key words: Goppa code, Srivastava code.

1 Introduction

Goppa codes form a subclass of alternant codes and they are described in terms of a polynomial called the Goppa polynomial. The most famous subclasses of alternant codes are BCH codes and Goppa codes, the former for their simple and easily instrumented decoding algorithm, and the latter for meeting the Gilbert-Varshamov bound. However, most of the work regarding the construction and decoding of Goppa codes has been done considering codes over finite fields. On the other hand, linear codes over rings have recently generated a great deal of interest. Linear codes over local finite commutative rings with identity have been discussed in papers by Andrade [1], [2], [3], where the notions of Hamming, Reed-Solomon, BCH and alternant codes were extended to these rings.

In this paper we describe a construction technique of Goppa and Srivastava codes over local finite commutative rings. The core of the construction technique mimics that of Goppa codes over a finite field, and is addressed, in this paper, from the point of view of specifying a cyclic subgroup of the group of units of an extension of finite rings. The decoding algorithm for Goppa codes consists of four major steps: (1) calculation of the syndromes, (2) calculation of the elementary symmetric functions by the modified Berlekamp-Massey algorithm, (3) calculation of the error-location numbers, and (4) calculation of the error magnitudes.

This paper is organized as follows. In Section 2, we describe a construction of Goppa codes over local finite commutative rings and an efficient decoding procedure. In Section 3, we describe a construction of Srivastava codes over local finite commutative rings. Finally, in Section 4, the concluding remarks are drawn.

2 Goppa Codes

In this section we describe a construction technique of Goppa codes over arbitrary local finite commutative rings in terms of parity-check matrices, which is very similar to the one proposed by Goppa [4] over finite fields. First, we review basic facts from the Galois theory of local finite commutative rings. Throughout this paper, $A$ denotes a local finite commutative ring with identity, $M$ its maximal ideal and $K = A/M \equiv GF(p^m)$ its residue field, where $p$ is a prime and $m$ is a positive integer; $A[x]$ denotes the ring of polynomials in the variable $x$ over $A$. The natural projection $A[x] \to K[x]$ is denoted by $\mu$, where $\mu(a(x)) = \bar{a}(x)$.

Let $f(x)$ be a monic polynomial of degree $h$ in $A[x]$ such that $\mu(f(x))$ is irreducible in $K[x]$. Then $f(x)$ is also irreducible in $A[x]$ [5, Theorem XIII.7]. Let $R$ be the ring $A[x]/\langle f(x)\rangle$. Then $R$ is a local finite commutative ring with identity, called a Galois extension of $A$ of degree $h$. Its residue field is $K_1 = R/M_1 \equiv GF(p^{mh})$, where $M_1$ is the unique maximal ideal of $R$, and $K_1^*$ is the multiplicative group of $K_1$, whose order is $p^{mh} - 1$.

Let $R^*$ denote the multiplicative group of units of $R$. It follows that $R^*$ is an abelian group, and therefore it can be expressed as a direct product of cyclic groups. We are interested in the maximal cyclic subgroup of $R^*$, hereafter denoted by $G_s$, whose elements are the roots of $x^s - 1$ for some positive integer $s$ such that $\gcd(s, p) = 1$. There is only one maximal cyclic subgroup of $R^*$ having order relatively prime to $p$ [5, Theorem XVIII.2]. This cyclic group has order $s = p^{mh} - 1$.

The Goppa codes are specified in terms of a polynomial $g(z)$ called the Goppa polynomial. In contrast to cyclic codes, where it is difficult to estimate the minimum Hamming distance $d$ from the generator polynomial, Goppa codes have the property that $d \geq \deg(g(z)) + 1$.

Let $g(z) = g_0 + g_1 z + \cdots + g_r z^r$ be a polynomial with coefficients in $R$ and $g_r \neq 0$, and let $\eta = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ be a vector consisting of distinct elements of $G_s$ such that $g(\alpha_i)$ is a unit of $R$ for $i = 1, 2, \ldots, n$.

Definition 2.1. A shortened Goppa code $C(\eta, \omega, g)$ of length $n \leq s$ over $A$ has parity-check matrix

$$
H = \begin{pmatrix}
g(\alpha_1)^{-1} & \cdots & g(\alpha_n)^{-1} \\
\alpha_1 g(\alpha_1)^{-1} & \cdots & \alpha_n g(\alpha_n)^{-1} \\
\vdots & \ddots & \vdots \\
\alpha_1^{r-1} g(\alpha_1)^{-1} & \cdots & \alpha_n^{r-1} g(\alpha_n)^{-1}
\end{pmatrix}, \tag{1}
$$

where $\omega = (g(\alpha_1)^{-1}, g(\alpha_2)^{-1}, \ldots, g(\alpha_n)^{-1})$. The polynomial $g(z)$ is called the Goppa polynomial.
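As an illustration, the following is a minimal Python sketch of how a matrix of the form (1) can be assembled from the locator vector $\eta$ and the Goppa polynomial. The ring elements, the evaluation function `g_eval` and the `.inverse()` method are assumed helpers, not part of the paper.

```python
# Minimal sketch of matrix (1): row i holds the entries alpha_j^i * g(alpha_j)^{-1}.
# Ring elements are hypothetical objects supporting * and an .inverse() method;
# g_eval(a) is assumed to evaluate the Goppa polynomial g at a in R.

def goppa_parity_check(eta, g_eval, r):
    weights = [g_eval(a).inverse() for a in eta]   # g(alpha_j)^{-1}; these must be units
    rows = [list(weights)]
    for _ in range(1, r):
        # each new row is the previous row multiplied entrywise by the locators alpha_j
        rows.append([a * w for a, w in zip(eta, rows[-1])])
    return rows
```

Row $i$ is obtained from row $i-1$ by an entrywise multiplication by the locators, mirroring the structure of (1).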

Definition 2.2. Let $C(\eta, \omega, g)$ be a Goppa code.

• If $g(z)$ is irreducible, then $C(\eta, \omega, g)$ is called an irreducible Goppa code.

• If, for all $c = (c_1, c_2, \ldots, c_n) \in C(\eta, \omega, g)$, it is true that $c' = (c_n, c_{n-1}, \ldots, c_1) \in C(\eta, \omega, g)$, then $C(\eta, \omega, g)$ is called a reversible Goppa code.

• If $g(z) = (z - \alpha)^r$, then $C(\eta, \omega, g)$ is called a commutative Goppa code.

• If $g(z)$ has no multiple zeros, then $C(\eta, \omega, g)$ is called a separable Goppa code.



Remark 2.1. Let $C(\eta, \omega, g)$ be a Goppa code.

1. $C(\eta, \omega, g)$ is a linear code.

2. A parity-check matrix with elements from $A$ is obtained by replacing each entry of $H$ by the corresponding column vector of length $h$ over $A$.

3. For a Goppa code with Goppa polynomial $g_l(z) = (z - \beta_l)^{r_l}$, where $\beta_l \in G_s$, we have

$$
H_l = \begin{pmatrix}
(\alpha_1 - \beta_l)^{-r_l} & (\alpha_2 - \beta_l)^{-r_l} & \cdots & (\alpha_n - \beta_l)^{-r_l} \\
\alpha_1(\alpha_1 - \beta_l)^{-r_l} & \alpha_2(\alpha_2 - \beta_l)^{-r_l} & \cdots & \alpha_n(\alpha_n - \beta_l)^{-r_l} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_1^{r_l-1}(\alpha_1 - \beta_l)^{-r_l} & \alpha_2^{r_l-1}(\alpha_2 - \beta_l)^{-r_l} & \cdots & \alpha_n^{r_l-1}(\alpha_n - \beta_l)^{-r_l}
\end{pmatrix},
$$

which is row-equivalent to

$$
H_l = \begin{pmatrix}
\frac{1}{(\alpha_1 - \beta_l)^{r_l}} & \frac{1}{(\alpha_2 - \beta_l)^{r_l}} & \cdots & \frac{1}{(\alpha_n - \beta_l)^{r_l}} \\
\frac{\alpha_1 - \beta_l}{(\alpha_1 - \beta_l)^{r_l}} & \frac{\alpha_2 - \beta_l}{(\alpha_2 - \beta_l)^{r_l}} & \cdots & \frac{\alpha_n - \beta_l}{(\alpha_n - \beta_l)^{r_l}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{(\alpha_1 - \beta_l)^{r_l-1}}{(\alpha_1 - \beta_l)^{r_l}} & \frac{(\alpha_2 - \beta_l)^{r_l-1}}{(\alpha_2 - \beta_l)^{r_l}} & \cdots & \frac{(\alpha_n - \beta_l)^{r_l-1}}{(\alpha_n - \beta_l)^{r_l}}
\end{pmatrix}
= \begin{pmatrix}
\frac{1}{(\alpha_1 - \beta_l)^{r_l}} & \frac{1}{(\alpha_2 - \beta_l)^{r_l}} & \cdots & \frac{1}{(\alpha_n - \beta_l)^{r_l}} \\
\frac{1}{(\alpha_1 - \beta_l)^{r_l-1}} & \frac{1}{(\alpha_2 - \beta_l)^{r_l-1}} & \cdots & \frac{1}{(\alpha_n - \beta_l)^{r_l-1}} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{1}{\alpha_1 - \beta_l} & \frac{1}{\alpha_2 - \beta_l} & \cdots & \frac{1}{\alpha_n - \beta_l}
\end{pmatrix}.
$$

Consequently, if $g(z) = \prod_{l=1}^{k}(z - \beta_l)^{r_l} = \prod_{l=1}^{k} g_l(z)$, then the Goppa code is the intersection of the Goppa codes with Goppa polynomials $g_l(z) = (z - \beta_l)^{r_l}$, for $l = 1, 2, \ldots, k$, and its parity-check matrix is given by

$$
H = \begin{pmatrix} H_1 \\ H_2 \\ \vdots \\ H_k \end{pmatrix}.
$$


4. Goppa codes are a special case of alternant codes [3, Definition 2.1].

It is possible to obtain an estimate of the minimum Hamming distance $d$ of $C(\eta, \omega, g)$ directly from the Goppa polynomial $g(z)$. The next theorem provides such an estimate.

Theorem 2.1. The code $C(\eta, \omega, g)$ has minimum Hamming distance $d \geq r + 1$.

Proof. The code $C(\eta, \omega, g)$ is an alternant code $C(n, \eta, \omega)$ with $\eta = (\alpha_1, \alpha_2, \ldots, \alpha_n)$ and $\omega = (g(\alpha_1)^{-1}, g(\alpha_2)^{-1}, \ldots, g(\alpha_n)^{-1})$ [3, Definition 2.1]. By [3, Theorem 2.1] it follows that $C(\eta, \omega, g)$ has minimum distance $d \geq r + 1$.

Example 2.1. Let $A = \mathbb{Z}_2[i]$ and $R = A[x]/\langle f(x)\rangle$, where $f(x) = x^3 + x + 1$ is irreducible over $A$ and $i^2 = -1$. If $\alpha$ is a root of $f(x)$, then $\alpha$ generates a cyclic group $G_s$ of order $s = 2^3 - 1 = 7$. Let $\eta = (\alpha, \alpha^4, 1, \alpha^2)$, $g(z) = z^3 + z^2 + 1$ and $\omega = (g(\alpha)^{-1}, g(\alpha^4)^{-1}, g(1)^{-1}, g(\alpha^2)^{-1}) = (\alpha^3, \alpha^5, 1, \alpha^6)$. Since $\deg(g(z)) = 3$, it follows that

$$
H = \begin{pmatrix}
\alpha^3 & \alpha^5 & 1 & \alpha^6 \\
\alpha^4 & \alpha^2 & 1 & \alpha \\
\alpha^5 & \alpha^6 & 1 & \alpha^3
\end{pmatrix}
$$

is the parity-check matrix of a Goppa code $C(\eta, \omega, g)$ over $A$ with length 4 and minimum Hamming distance at least 4.
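To make the example concrete, here is a minimal Python sketch (all helper names are ours, not from the paper) of arithmetic in $R = \mathbb{Z}_2[i][x]/\langle x^3 + x + 1\rangle$; it recomputes the weight vector $\omega = (\alpha^3, \alpha^5, 1, \alpha^6)$ by brute-force inversion of the $g(\alpha_j)$ over the 64 elements of $R$.

```python
# Sketch of arithmetic in R = Z2[i][x]/(x^3 + x + 1).  An element of A = Z2[i] is a
# pair (a0, a1) meaning a0 + a1*i (with i^2 = -1 = 1 in characteristic 2); an element
# of R is a 3-tuple of A-elements (polynomial coefficients of 1, x, x^2).
from itertools import product

def a_add(u, v):                       # addition in A
    return (u[0] ^ v[0], u[1] ^ v[1])

def a_mul(u, v):                       # (a0 + a1*i)(b0 + b1*i), using i^2 = 1
    return ((u[0] & v[0]) ^ (u[1] & v[1]), (u[0] & v[1]) ^ (u[1] & v[0]))

ZERO, ONE = (0, 0), (1, 0)

def r_add(p, q):                       # coefficient-wise addition in R
    return tuple(a_add(a, b) for a, b in zip(p, q))

def r_mul(p, q):                       # multiplication in R, reducing with x^3 = x + 1
    prod = [ZERO] * 5
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] = a_add(prod[i + j], a_mul(a, b))
    for k in (4, 3):                   # x^k -> x^(k-2) + x^(k-3)
        prod[k - 2] = a_add(prod[k - 2], prod[k])
        prod[k - 3] = a_add(prod[k - 3], prod[k])
    return tuple(prod[:3])

ALPHA = (ZERO, ONE, ZERO)              # the root alpha of f(x)

def r_pow(p, n):
    out = (ONE, ZERO, ZERO)
    for _ in range(n):
        out = r_mul(out, p)
    return out

def g_eval(z):                         # g(z) = z^3 + z^2 + 1, the Goppa polynomial
    return r_add(r_add(r_pow(z, 3), r_pow(z, 2)), (ONE, ZERO, ZERO))

def r_inv(p):                          # brute-force inverse over the 64 elements of R
    for cand in product(product((0, 1), repeat=2), repeat=3):
        if r_mul(p, cand) == (ONE, ZERO, ZERO):
            return cand
    raise ValueError("not a unit")

eta = [r_pow(ALPHA, k) for k in (1, 4, 0, 2)]        # (alpha, alpha^4, 1, alpha^2)
omega = [r_inv(g_eval(a)) for a in eta]
expected = [r_pow(ALPHA, k) for k in (3, 5, 0, 6)]   # (alpha^3, alpha^5, 1, alpha^6)
print(omega == expected)               # expected output: True
```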

Example 2.2. Let $A = \mathbb{Z}_2[i]$ and $R = A[x]/\langle f(x)\rangle$, where $f(x) = x^4 + x + 1$ is irreducible over $A$. Thus $s = 15$ and $G_{15}$ is generated by $\alpha$, where $\alpha^4 = \alpha + 1$. Let $g(z) = z^4 + z^3 + 1$, $\eta = (1, \alpha, \alpha^2, \alpha^3, \alpha^4, \alpha^5, \alpha^6, \alpha^8, \alpha^9, \alpha^{10}, \alpha^{12})$ and $\omega = (1, \alpha^6, \alpha^{12}, \alpha^{13}, \alpha^9, \alpha^{10}, \alpha^{11}, \alpha^3, \alpha^{14}, \alpha^5, \alpha^7)$. Since $\deg(g(z)) = 4$, it follows that

$$
H = \begin{pmatrix}
1 & \alpha^6 & \alpha^{12} & \alpha^{13} & \alpha^9 & \alpha^{10} & \alpha^{11} & \alpha^3 & \alpha^{14} & \alpha^5 & \alpha^7 \\
1 & \alpha^7 & \alpha^{14} & \alpha & \alpha^{13} & 1 & \alpha^2 & \alpha^{11} & \alpha^8 & 1 & \alpha^4 \\
1 & \alpha^8 & \alpha & \alpha^4 & \alpha^2 & \alpha^5 & \alpha^8 & \alpha^4 & \alpha^2 & \alpha^{10} & \alpha \\
1 & \alpha^9 & \alpha^3 & \alpha^7 & \alpha^6 & \alpha^{10} & \alpha^{14} & \alpha^{12} & \alpha^{11} & \alpha^5 & \alpha^{13}
\end{pmatrix}
$$

is the parity-check matrix of a Goppa code $C(\eta, \omega, g)$ over $\mathbb{Z}_2[i]$ with length 11 and minimum Hamming distance at least 5.

2.1 Decoding procedure

In this subsection we present a decoding algorithm for the Goppa codes $C(\eta, \omega, g)$. This algorithm is based on the modified Berlekamp-Massey algorithm [6] and corrects all error patterns of Hamming weight $t \leq r/2$, i.e., it applies to codes whose minimum Hamming distance is $r + 1$.

We first establish some notation. Let $R$ be a local finite commutative ring with identity as defined in Section 2 and let $\alpha$ be a primitive element of the cyclic group $G_s$, where $s = p^{mh} - 1$. Let $c = (c_1, c_2, \ldots, c_n)$ be a transmitted codeword and $b = (b_1, b_2, \ldots, b_n)$ be the received vector. Thus the error vector is given by $e = (e_1, e_2, \ldots, e_n) = b - c$.

Given a vector $\eta = (\alpha_1, \alpha_2, \ldots, \alpha_n) = (\alpha^{k_1}, \alpha^{k_2}, \ldots, \alpha^{k_n})$ in $G_s^n$, we define the syndrome values $s_l$ of an error vector $e = (e_1, e_2, \ldots, e_n)$ as

$$ s_l = \sum_{j=1}^{n} e_j\, g(\alpha_j)^{-1} \alpha_j^l, \quad l \geq 0. $$

Suppose that $\nu \leq t$ is the number of errors, which occurred at locations $x_1 = \alpha^{i_1}$, $x_2 = \alpha^{i_2}, \ldots, x_\nu = \alpha^{i_\nu}$, with values $y_1 = e_{i_1}, y_2 = e_{i_2}, \ldots, y_\nu = e_{i_\nu}$. Since $s = (s_0, s_1, \ldots, s_{r-1}) = bH^t = eH^t$, the first $r$ syndrome values $s_l$ can be calculated from the received vector $b$ as

$$ s_l = \sum_{j=1}^{n} e_j\, g(\alpha_j)^{-1} \alpha_j^l = \sum_{j=1}^{n} b_j\, g(\alpha_j)^{-1} \alpha_j^l, \quad l = 0, 1, 2, \ldots, r-1. $$
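A minimal sketch of this computation, assuming hypothetical ring elements that support `+` and `*` (for instance, thin wrappers around the `r_add`/`r_mul` helpers sketched after Example 2.1):

```python
# Sketch of s_l = sum_j b_j * g(alpha_j)^{-1} * alpha_j^l for l = 0, ..., r-1.
# `weights[j]` holds g(alpha_j)^{-1} and `zero` is the ring's zero element.

def syndrome_vector(b, eta, weights, r, zero):
    syndromes = []
    for l in range(r):
        acc = zero
        for b_j, a_j, w_j in zip(b, eta, weights):
            term = b_j * w_j
            for _ in range(l):         # multiply by alpha_j^l
                term = term * a_j
            acc = acc + term
        syndromes.append(acc)
    return syndromes
```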

The elementary symmetric functions $\sigma_1, \sigma_2, \ldots, \sigma_\nu$ of the error-location numbers $x_1, x_2, \ldots, x_\nu$ are defined as the coefficients of the polynomial

$$ \sigma(x) = \prod_{i=1}^{\nu} (x - x_i) = \sum_{i=0}^{\nu} \sigma_i x^{\nu - i}, $$

where $\sigma_0 = 1$. Thus, the decoding algorithm being proposed consists of four major steps:



Step 1 – Calculation of the syndrome vector s from the received vector.

Step 2 – Calculation of the elementary symmetric functions $\sigma_1, \sigma_2, \ldots, \sigma_\nu$ from $s$, using the modified Berlekamp-Massey algorithm [6].

Step 3 – Calculation of the error-location numbers $x_1, x_2, \ldots, x_\nu$, which are the roots of $\sigma(x)$, from $\sigma_1, \sigma_2, \ldots, \sigma_\nu$.

Step 4 – Calculation of the error magnitudes $y_1, y_2, \ldots, y_\nu$ from the $x_i$ and $s$, using Forney's procedure [7].

Now, each step of the decoding algorithm is analyzed. There is no need to comment on Step 1, since the calculation of the syndrome vector is straightforward. The set of possible error-location numbers is a subset of $\{\alpha^0, \alpha^1, \ldots, \alpha^{s-1}\}$. In Step 2, the calculation of the elementary symmetric functions is equivalent to finding a solution $\sigma_1, \sigma_2, \ldots, \sigma_\nu$, with minimum possible $\nu$, to the following set of linear recurrent equations over $R$:

$$ s_{j+\nu} + s_{j+\nu-1}\sigma_1 + \cdots + s_{j+1}\sigma_{\nu-1} + s_j\sigma_\nu = 0, \quad j = 0, 1, 2, \ldots, (r-1) - \nu, \tag{2} $$

where $s_0, s_1, \ldots, s_{r-1}$ are the components of the syndrome vector. We make use of the modified Berlekamp-Massey algorithm to find the solutions of Equation (2). The algorithm is iterative, in the sense that the following $n - l_n$ equations (called power sums)

$$
\begin{cases}
s_n\sigma_0^{(n)} + s_{n-1}\sigma_1^{(n)} + \cdots + s_{n-l_n}\sigma_{l_n}^{(n)} = 0 \\
s_{n-1}\sigma_0^{(n)} + s_{n-2}\sigma_1^{(n)} + \cdots + s_{n-l_n-1}\sigma_{l_n}^{(n)} = 0 \\
\qquad\vdots \\
s_{l_n+1}\sigma_0^{(n)} + s_{l_n}\sigma_1^{(n)} + \cdots + s_1\sigma_{l_n}^{(n)} = 0
\end{cases}
$$

are satisfied with $l_n$ as small as possible and $\sigma_0^{(n)} = 1$. The polynomial $\sigma^{(n)}(x) = \sigma_0^{(n)} + \sigma_1^{(n)}x + \cdots + \sigma_{l_n}^{(n)}x^{l_n}$ represents the solution at the $n$-th stage. The $n$-th discrepancy is denoted by $d_n$ and defined by $d_n = s_n\sigma_0^{(n)} + s_{n-1}\sigma_1^{(n)} + \cdots + s_{n-l_n}\sigma_{l_n}^{(n)}$.

The modified Berlekamp-Massey algorithm for commutative rings with identity is formulated as follows. The inputs to the algorithm are the syndromes $s_0, s_1, \ldots, s_{r-1}$, which belong to $R$. The output of the algorithm is a set of values $\sigma_i$, $i = 1, 2, \ldots, \nu$, such that Equation (2) holds with minimum $\nu$. Let $\sigma^{(-1)}(x) = 1$, $l_{-1} = 0$, $d_{-1} = 1$, $\sigma^{(0)}(x) = 1$, $l_0 = 0$ and $d_0 = s_0$ be the set of initial conditions to start the algorithm, as in Peterson [8]. The steps of the algorithm are:

1. $n \leftarrow 0$.

2. If $d_n = 0$, then $\sigma^{(n+1)}(x) \leftarrow \sigma^{(n)}(x)$ and $l_{n+1} \leftarrow l_n$, and go to step 5.

3. If $d_n \neq 0$, then find $m \leq n - 1$ such that $d_n - y d_m = 0$ has a solution $y$ and $m - l_m$ has the largest value. Then $\sigma^{(n+1)}(x) \leftarrow \sigma^{(n)}(x) - y x^{n-m}\sigma^{(m)}(x)$ and $l_{n+1} \leftarrow \max\{l_n,\, l_m + n - m\}$.

4. If $l_{n+1} = \max\{l_n,\, n + 1 - l_n\}$, then go to step 5; else search for a solution $D^{(n+1)}(x)$ with minimum degree $l$ in the range $\max\{l_n,\, n + 1 - l_n\} \leq l < l_{n+1}$ such that $\sigma^{(m)}(x)$, defined by $D^{(n+1)}(x) = \sigma^{(n)}(x) - x^{n-m}\sigma^{(m)}(x)$, is a solution of the first $m$ power sums, $d_m = d_n$, with $\sigma_0^{(m)}$ a zero divisor in $R$. If such a solution is found, $\sigma^{(n+1)}(x) \leftarrow D^{(n+1)}(x)$ and $l_{n+1} \leftarrow l$.

5. If $n < r - 1$, then $d_n = s_n + s_{n-1}\sigma_1^{(n)} + \cdots + s_{n-l_n}\sigma_{l_n}^{(n)}$.

6. $n \leftarrow n + 1$; if $n < r - 1$, go to step 2; else stop.

The coefficients $\sigma_1^{(r)}, \sigma_2^{(r)}, \ldots, \sigma_\nu^{(r)}$ satisfy Equation (2). At Step 3, the solution to Equation (2) is generally not unique, and the reciprocal polynomial $\rho(z)$ of the polynomial $\sigma^{(r)}(z)$ (output by the modified Berlekamp-Massey algorithm) may not be the correct error-locator polynomial

$$ (z - x_1)(z - x_2)\cdots(z - x_\nu), $$

where $x_j = \alpha^{k_i}$, for $j = 1, 2, \ldots, \nu$ and $i = 1, 2, \ldots, n$, are the correct error-location numbers. Thus, the procedure for the calculation of the correct error-location numbers is the following:

• compute the roots $z_1, z_2, \ldots, z_\nu$ of $\rho(z)$;



• among the $x_i = \alpha^{k_j}$, $j = 1, 2, \ldots, n$, select those $x_i$'s such that $x_i - z_i$ are zero divisors in $R$. The selected $x_i$'s will be the correct error-location numbers, and each $k_j$, for $j = 1, 2, \ldots, n$, indicates the position $j$ of the error in the codeword.
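Before turning to Step 4, the following Python sketch illustrates Steps 2 and 3 in the simplified situation where every nonzero discrepancy is a unit (as in the finite-field case), so the zero-divisor branch of step 4 of the modified algorithm is omitted; it is not the full ring version of [6]. The ring-element interface (`+`, `-`, `*`, `==`, `.inverse()`) is hypothetical.

```python
# Simplified sketch of Steps 2 and 3, assuming all nonzero discrepancies are units.
# `zero` and `one` are the ring's additive and multiplicative identities.

def berlekamp_massey(s, zero, one):
    """Return sigma(x) as a coefficient list [sigma_0, sigma_1, ...] from syndromes s."""
    sigma, length = [one], 0           # current solution sigma^(n)(x) and its length l_n
    prev, prev_d, gap = [one], one, 1  # last solution with nonzero discrepancy, its d, n - m
    for n in range(len(s)):
        d = zero                       # n-th discrepancy d_n = s_n*sigma_0 + s_{n-1}*sigma_1 + ...
        for i, c in enumerate(sigma):
            if n - i >= 0:
                d = d + s[n - i] * c
        if d == zero:
            gap += 1
            continue
        # sigma^(n+1)(x) = sigma^(n)(x) - (d_n / d_m) x^(n-m) sigma^(m)(x)
        scale = d * prev_d.inverse()
        update = [zero] * gap + [scale * c for c in prev]
        width = max(len(sigma), len(update))
        new_sigma = [(sigma[i] if i < len(sigma) else zero) -
                     (update[i] if i < len(update) else zero) for i in range(width)]
        if 2 * length <= n:            # the length of the recurrence must grow
            prev, prev_d, length, gap = sigma, d, n + 1 - length, 1
        else:
            gap += 1
        sigma = new_sigma
    return sigma

def error_locations(sigma, group, zero):
    """Step 3 (field case): error-location numbers are the roots, among the candidates
    in `group` = [alpha^0, ..., alpha^(s-1)], of the reciprocal polynomial
    rho(z) = sigma_0 z^nu + sigma_1 z^(nu-1) + ... + sigma_nu."""
    rho = list(reversed(sigma))        # coefficients of rho(z), lowest degree first
    roots = []
    for x in group:
        acc = rho[-1]                  # Horner evaluation of rho at x
        for c in reversed(rho[:-1]):
            acc = acc * x + c
        if acc == zero:
            roots.append(x)
    return roots
```

On the data of Example 2.3 below, this simplified sketch reproduces $\sigma^{(3)}(z) = 1 + \alpha^4 z$ and the single root $\alpha^4$; in the general ring case the zero-divisor selection described in the bullet above is still required.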

At Step 4, the calculation of the error magnitudes is based on Forney's procedure [7]. The error magnitude is given by

$$ y_j = \frac{\displaystyle\sum_{l=0}^{\nu-1} \sigma_{jl}\, s_{\nu-1-l}}{E_j \displaystyle\sum_{l=0}^{\nu-1} \sigma_{jl}\, x_j^{\nu-1-l}}, \tag{3} $$

for $j = 1, 2, \ldots, \nu$, where the coefficients $\sigma_{jl}$ are recursively defined by

$$ \sigma_{j,i} = \sigma_i + x_j\,\sigma_{j,i-1}, \quad i = 0, 1, \ldots, \nu - 1, $$

starting with $\sigma_{j,0} = \sigma_0 = 1$. The $E_i = g(x_i)^{-1}$, for $i = 1, 2, \ldots, \nu$, are the coordinates of the vector $\omega$ corresponding to the error locations. It follows from [9, Theorem 7] that the denominator in Equation (3) is always a unit in $R$.
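A minimal sketch of Step 4, i.e. of Equation (3), under the same hypothetical element interface used in the previous sketches:

```python
# Sketch of Forney's procedure (Equation (3)).  sigma = [sigma_0, ..., sigma_nu],
# s = syndromes, x = error-location numbers, weights[j] = E_j = g(x_j)^{-1},
# and `one` is the ring's multiplicative identity.

def ring_pow(a, n, one):
    out = one
    for _ in range(n):
        out = out * a
    return out

def forney(sigma, s, x, weights, one):
    nu = len(sigma) - 1
    magnitudes = []
    for j in range(nu):
        # sigma_{j,i} = sigma_i + x_j * sigma_{j,i-1}, starting from sigma_{j,0} = 1
        sig_j = [one]
        for i in range(1, nu):
            sig_j.append(sigma[i] + x[j] * sig_j[i - 1])
        num = sig_j[0] * s[nu - 1]
        den = sig_j[0] * ring_pow(x[j], nu - 1, one)
        for l in range(1, nu):
            num = num + sig_j[l] * s[nu - 1 - l]
            den = den + sig_j[l] * ring_pow(x[j], nu - 1 - l, one)
        den = weights[j] * den                   # multiply by E_j = g(x_j)^{-1}
        magnitudes.append(num * den.inverse())   # denominator is a unit [9, Theorem 7]
    return magnitudes
```

On the data of Examples 2.3 and 2.4 below, this sketch yields $y_1 = i$ and $(y_1, y_2) = (1, i)$, respectively, in agreement with Equation (3).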

Example 2.3. As in Example 2.1, if the received vector is given by $b = (0, i, 0, 0)$, then the syndrome vector is given by $s = bH^t = (i\alpha^5, i\alpha^2, i\alpha^6)$. Applying the modified Berlekamp-Massey algorithm, we obtain the following table:

\begin{tabular}{c|c|c|c|c}
$n$ & $\sigma^{(n)}(z)$ & $d_n$ & $l_n$ & $n - l_n$ \\ \hline
$-1$ & $1$ & $1$ & $0$ & $-1$ \\
$0$ & $1$ & $i\alpha^5$ & $0$ & $0$ \\
$1$ & $1 + i\alpha^5 z$ & $i\alpha^2 + \alpha^3$ & $1$ & $0$ \\
$2$ & $1 + \alpha^4 z$ & $0$ & $1$ & $1$ \\
$3$ & $1 + \alpha^4 z$ & -- & $1$ & $2$
\end{tabular}

Thus $\sigma^{(3)}(z) = 1 + \alpha^4 z$. The root of $\rho(z) = z + \alpha^4$ (the reciprocal of $\sigma^{(3)}(z)$) is $z_1 = \alpha^4$. Among the elements $1, \alpha, \ldots, \alpha^6$ we have that $x_1 = \alpha^4$ is such that $x_1 - z_1 = 0$ is a zero divisor in $R$. Therefore, $x_1$ is the correct error-location number, and $k_2 = 4$ indicates that one error has occurred in the second coordinate of the codeword. The correct elementary symmetric function $\sigma_1 = \alpha^4$ is obtained from $x - x_1 = x - \sigma_1 = x - \alpha^4$. Finally, applying Forney's method to $s$ and $\sigma_1$ gives $y_1 = i$. Therefore, the error pattern is given by $e = (0, i, 0, 0)$.

Example 2.4. As in Example 2.2, if the received vector is $b = (0, 0, 1, 0, 0, 0, 0, 0, i, 0, 0)$, then the syndrome vector is given by

$$ s = bH^t = (\alpha^{12} + i\alpha^{14},\ \alpha^{14} + i\alpha^{8},\ \alpha + i\alpha^{2},\ \alpha^{3} + i\alpha^{11}). $$

Applying the modified Berlekamp-Massey algorithm, the following table is obtained:

\begin{tabular}{c|c|c|c|c}
$n$ & $\sigma^{(n)}(z)$ & $d_n$ & $l_n$ & $n - l_n$ \\ \hline
$-1$ & $1$ & $1$ & $0$ & $-1$ \\
$0$ & $1$ & $\alpha^{12} + i\alpha^{14}$ & $0$ & $0$ \\
$1$ & $1 + (\alpha^{12} + i\alpha^{14})z$ & $\alpha^{11} + i\alpha^{8}$ & $1$ & $0$ \\
$2$ & $1 + (\alpha^{13} + i\alpha^{12})z$ & $\alpha^{7} + i\alpha^{5}$ & $1$ & $1$ \\
$3$ & $1 + (\alpha^{10} + i\alpha^{14})z + (\alpha^{12} + i)z^2$ & $\alpha^{12} + i\alpha^{12}$ & $2$ & $1$ \\
$4$ & $1 + \alpha^{11}z + \alpha^{11}z^2$ & -- & $2$ & $2$
\end{tabular}

Thus $\sigma^{(4)}(z) = 1 + \alpha^{11}z + \alpha^{11}z^2$. The roots of $\rho(z) = z^2 + \alpha^{11}z + \alpha^{11}$ (the reciprocal of $\sigma^{(4)}(z)$) are $z_1 = \alpha^2$ and $z_2 = \alpha^9$. Among the elements $1, \alpha, \alpha^2, \ldots, \alpha^{14}$, we have that $x_1 = \alpha^2$ and $x_2 = \alpha^9$ are such that $x_1 - z_1 = x_2 - z_2 = 0$ are zero divisors in $R$. Therefore, $x_1$ and $x_2$ are the correct error-location numbers, and $k_3 = 2$ and $k_9 = 9$ indicate that two errors have occurred, one in position 3 and the other in position 9 of the codeword. The correct elementary symmetric functions $\sigma_1$ and $\sigma_2$ are obtained from $(x - x_1)(x - x_2) = x^2 + \sigma_1 x + \sigma_2$. Thus, $\sigma_1 = \sigma_2 = \alpha^{11}$. Finally, Forney's method applied to $s$, $\sigma_1$ and $\sigma_2$ gives $\sigma_{11} = \sigma_1 + x_1\sigma_{10} = \alpha^{11} + \alpha^2 = \alpha^9$ and $\sigma_{21} = \sigma_1 + x_2\sigma_{20} = \alpha^{11} + \alpha^9 = \alpha^2$. Thus, by Equation (3), we obtain $y_1 = 1$ and $y_2 = i$. Therefore, the error pattern is given by $e = (0, 0, 1, 0, 0, 0, 0, 0, i, 0, 0)$.



3 Srivastava codes

In this section we define another subclass of alternant codes over local finite commutative rings, which is very similar to the class proposed by J. N. Srivastava in 1967 in an unpublished paper [10], called Srivastava codes. These codes over finite fields are defined by parity-check matrices of the form

$$ H = \left[\frac{\alpha_i^l}{1 - \alpha_i\beta_j}\right], \quad 1 \leq i \leq r,\ 1 \leq j \leq n, $$

where $\alpha_1, \alpha_2, \ldots, \alpha_r$ are distinct elements from $GF(q^m)$, $\beta_1, \beta_2, \ldots, \beta_n$ are all the elements in $GF(q^m)$ except $0, \alpha_1^{-1}, \alpha_2^{-1}, \ldots, \alpha_r^{-1}$, and $l \geq 0$.

Definition 3.1. A shortened Srivastava code of length $n \leq s$ over $A$ has parity-check matrix

$$
H = \begin{pmatrix}
\frac{\alpha_1^l}{\alpha_1 - \beta_1} & \frac{\alpha_2^l}{\alpha_2 - \beta_1} & \cdots & \frac{\alpha_n^l}{\alpha_n - \beta_1} \\
\frac{\alpha_1^l}{\alpha_1 - \beta_2} & \frac{\alpha_2^l}{\alpha_2 - \beta_2} & \cdots & \frac{\alpha_n^l}{\alpha_n - \beta_2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\alpha_1^l}{\alpha_1 - \beta_r} & \frac{\alpha_2^l}{\alpha_2 - \beta_r} & \cdots & \frac{\alpha_n^l}{\alpha_n - \beta_r}
\end{pmatrix}, \tag{4}
$$

where $\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_r$ are $n + r$ distinct elements of $G_s$ and $l \geq 0$.
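For illustration, a minimal sketch of how a matrix of the form (4) can be assembled; the element interface (`*`, `-`, `.inverse()`) is hypothetical, as in the earlier sketches.

```python
# Sketch of matrix (4): row i (one per beta_i) has entries alpha_j^l / (alpha_j - beta_i).
# The differences alpha_j - beta_i must be units so that the entries are defined.

def srivastava_parity_check(alphas, betas, l, one):
    H = []
    for beta in betas:
        row = []
        for a in alphas:
            num = one
            for _ in range(l):             # alpha_j^l
                num = num * a
            row.append(num * (a - beta).inverse())
        H.append(row)
    return H
```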

Theorem 3.1. The Srivastava code has minimum Hamming distance $d \geq r + 1$.

Proof. The minimum Hamming distance of this code is at least $r + 1$ if and only if every combination of $r$ or fewer columns of $H$ is linearly independent over $R$, or equivalently, the submatrix

$$
H_1 = \begin{pmatrix}
\frac{\alpha_{i_1}^l}{\alpha_{i_1} - \beta_1} & \frac{\alpha_{i_2}^l}{\alpha_{i_2} - \beta_1} & \cdots & \frac{\alpha_{i_r}^l}{\alpha_{i_r} - \beta_1} \\
\frac{\alpha_{i_1}^l}{\alpha_{i_1} - \beta_2} & \frac{\alpha_{i_2}^l}{\alpha_{i_2} - \beta_2} & \cdots & \frac{\alpha_{i_r}^l}{\alpha_{i_r} - \beta_2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\alpha_{i_1}^l}{\alpha_{i_1} - \beta_r} & \frac{\alpha_{i_2}^l}{\alpha_{i_2} - \beta_r} & \cdots & \frac{\alpha_{i_r}^l}{\alpha_{i_r} - \beta_r}
\end{pmatrix} \tag{5}
$$

is nonsingular for any subset $\{i_1, \ldots, i_r\}$ of $\{1, 2, \ldots, n\}$. The determinant of this matrix can be expressed as

$$ \det(H_1) = (\alpha_{i_1}\alpha_{i_2}\cdots\alpha_{i_r})^l \det(H_2), \tag{6} $$

where the matrix $H_2$ is given by

$$
H_2 = \begin{pmatrix}
\frac{1}{\alpha_{i_1} - \beta_1} & \frac{1}{\alpha_{i_2} - \beta_1} & \cdots & \frac{1}{\alpha_{i_r} - \beta_1} \\
\frac{1}{\alpha_{i_1} - \beta_2} & \frac{1}{\alpha_{i_2} - \beta_2} & \cdots & \frac{1}{\alpha_{i_r} - \beta_2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{1}{\alpha_{i_1} - \beta_r} & \frac{1}{\alpha_{i_2} - \beta_r} & \cdots & \frac{1}{\alpha_{i_r} - \beta_r}
\end{pmatrix}. \tag{7}
$$

Note that $\det(H_2)$ is a Cauchy determinant of order $r$, and therefore we conclude that the determinant of the matrix $H_1$ is given by

$$ \det(H_1) = (\alpha_{i_1}\alpha_{i_2}\cdots\alpha_{i_r})^l\, k\, \frac{\phi(\alpha_{i_1}, \alpha_{i_2}, \ldots, \alpha_{i_r})\,\phi(\beta_1, \beta_2, \ldots, \beta_r)}{\mu(\alpha_{i_1})\mu(\alpha_{i_2})\cdots\mu(\alpha_{i_r})}, \tag{8} $$

where $k = (-1)^m$ with $m = \binom{r}{2}$, $\phi(a_1, a_2, \ldots, a_r) = \prod_{i < j}(a_i - a_j)$ and $\mu(\alpha_{i_j}) = \prod_{t=1}^{r}(\alpha_{i_j} - \beta_t)$. Since the $\alpha$'s and $\beta$'s are distinct elements of $G_s$, every factor appearing in $\phi(\alpha_{i_1}, \ldots, \alpha_{i_r})$, $\phi(\beta_1, \ldots, \beta_r)$ and $\mu(\alpha_{i_j})$ is a unit of $R$. Therefore $\det(H_1)$ is a unit in $R$, so $H_1$ is nonsingular and the result follows.

Definition 3.2. Suppose $r = kl$ and let $\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_k$ be $n + k$ distinct elements of $G_s$, and let $w_1, \ldots, w_n$ be elements of $G_s$. A generalized Srivastava code of length $n \leq s$ over $A$ has parity-check matrix

$$
H = \begin{pmatrix} H_1 \\ H_2 \\ \vdots \\ H_k \end{pmatrix}, \tag{9}
$$

where

$$
H_j = \begin{pmatrix}
\frac{w_1}{\alpha_1 - \beta_j} & \frac{w_2}{\alpha_2 - \beta_j} & \cdots & \frac{w_n}{\alpha_n - \beta_j} \\
\frac{w_1}{(\alpha_1 - \beta_j)^2} & \frac{w_2}{(\alpha_2 - \beta_j)^2} & \cdots & \frac{w_n}{(\alpha_n - \beta_j)^2} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{w_1}{(\alpha_1 - \beta_j)^l} & \frac{w_2}{(\alpha_2 - \beta_j)^l} & \cdots & \frac{w_n}{(\alpha_n - \beta_j)^l}
\end{pmatrix}, \tag{10}
$$

for $j = 1, 2, \ldots, k$.
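For illustration, a minimal sketch of one block $H_j$ of Equation (10), under the same hypothetical element interface as the earlier sketches:

```python
# Sketch of block H_j of Equation (10): row k (k = 1, ..., l) has entries
# w_i / (alpha_i - beta_j)^k.  The differences alpha_i - beta_j must be units.

def generalized_srivastava_block(alphas, ws, beta_j, l):
    inv = [(a - beta_j).inverse() for a in alphas]        # (alpha_i - beta_j)^{-1}
    rows, current = [], list(ws)
    for _ in range(l):
        current = [c * d for c, d in zip(current, inv)]   # one more factor of the inverse
        rows.append(list(current))
    return rows
```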

Theorem 3.2. The generalized Srivastava code has minimum Hamming distance $d \geq kl + 1$.

Proof. The proof of this theorem requires nothing more than the application of Remark 2.1(3) and of Theorem 3.1, since the matrices (1) and (9) are equivalent, with $g(z) = \prod_{i=1}^{k}(z - \beta_i)^l$.

Example 3.1. As in Example 2.2, if

$$ n = 8, \quad r = 6, \quad k = 2, \quad l = 3, $$
$$ \{\alpha_1, \alpha_2, \ldots, \alpha_8\} = \{\alpha^4, \alpha^3, \alpha^5, \alpha, \alpha^7, \alpha^{12}, \alpha^{10}, \alpha^2\}, $$
$$ \{\beta_1, \beta_2\} = \{\alpha^9, \alpha^6\} \quad\text{and}\quad \{w_1, \ldots, w_8\} = \{\alpha, \alpha, \alpha^2, \alpha^4, \alpha^7, \alpha^{10}, \alpha^9, \alpha^3\}, $$

then the matrix

$$
H = \begin{pmatrix}
\frac{\alpha}{\alpha^4 - \alpha^9} & \frac{\alpha}{\alpha^3 - \alpha^9} & \frac{\alpha^2}{\alpha^5 - \alpha^9} & \frac{\alpha^2}{\alpha - \alpha^9} & \frac{\alpha^5}{\alpha^7 - \alpha^9} & \frac{\alpha^{10}}{\alpha^{12} - \alpha^9} & \frac{\alpha^9}{\alpha^{10} - \alpha^9} & \frac{\alpha^3}{\alpha^2 - \alpha^9} \\
\frac{\alpha}{(\alpha^4 - \alpha^9)^2} & \frac{\alpha}{(\alpha^3 - \alpha^9)^2} & \frac{\alpha^2}{(\alpha^5 - \alpha^9)^2} & \frac{\alpha^2}{(\alpha - \alpha^9)^2} & \frac{\alpha^5}{(\alpha^7 - \alpha^9)^2} & \frac{\alpha^{10}}{(\alpha^{12} - \alpha^9)^2} & \frac{\alpha^9}{(\alpha^{10} - \alpha^9)^2} & \frac{\alpha^3}{(\alpha^2 - \alpha^9)^2} \\
\frac{\alpha}{(\alpha^4 - \alpha^9)^3} & \frac{\alpha}{(\alpha^3 - \alpha^9)^3} & \frac{\alpha^2}{(\alpha^5 - \alpha^9)^3} & \frac{\alpha^2}{(\alpha - \alpha^9)^3} & \frac{\alpha^5}{(\alpha^7 - \alpha^9)^3} & \frac{\alpha^{10}}{(\alpha^{12} - \alpha^9)^3} & \frac{\alpha^9}{(\alpha^{10} - \alpha^9)^3} & \frac{\alpha^3}{(\alpha^2 - \alpha^9)^3} \\
\frac{\alpha}{\alpha^4 - \alpha^6} & \frac{\alpha}{\alpha^3 - \alpha^6} & \frac{\alpha^2}{\alpha^5 - \alpha^6} & \frac{\alpha^2}{\alpha - \alpha^6} & \frac{\alpha^5}{\alpha^7 - \alpha^6} & \frac{\alpha^{10}}{\alpha^{12} - \alpha^6} & \frac{\alpha^9}{\alpha^{10} - \alpha^6} & \frac{\alpha^3}{\alpha^2 - \alpha^6} \\
\frac{\alpha}{(\alpha^4 - \alpha^6)^2} & \frac{\alpha}{(\alpha^3 - \alpha^6)^2} & \frac{\alpha^2}{(\alpha^5 - \alpha^6)^2} & \frac{\alpha^2}{(\alpha - \alpha^6)^2} & \frac{\alpha^5}{(\alpha^7 - \alpha^6)^2} & \frac{\alpha^{10}}{(\alpha^{12} - \alpha^6)^2} & \frac{\alpha^9}{(\alpha^{10} - \alpha^6)^2} & \frac{\alpha^3}{(\alpha^2 - \alpha^6)^2} \\
\frac{\alpha}{(\alpha^4 - \alpha^6)^3} & \frac{\alpha}{(\alpha^3 - \alpha^6)^3} & \frac{\alpha^2}{(\alpha^5 - \alpha^6)^3} & \frac{\alpha^2}{(\alpha - \alpha^6)^3} & \frac{\alpha^5}{(\alpha^7 - \alpha^6)^3} & \frac{\alpha^{10}}{(\alpha^{12} - \alpha^6)^3} & \frac{\alpha^9}{(\alpha^{10} - \alpha^6)^3} & \frac{\alpha^3}{(\alpha^2 - \alpha^6)^3}
\end{pmatrix}
$$

is the parity-check matrix of a generalized Srivastava code over $\mathbb{Z}_2[i]$ of length 8 and minimum distance at least 7.



4 Conclusions

In this paper we presented a construction and a decoding procedure for Goppa codes over local finite commutative rings with identity. The decoding procedure is based on the modified Berlekamp-Massey algorithm, and its complexity is essentially the same as that of decoding Goppa codes over finite fields. Furthermore, we presented a construction of Srivastava codes over local finite commutative rings with identity.

5 Acknowledgments

This paper has been supported by Fundação de Amparo à Pesquisa do Estado de São Paulo – FAPESP – 02/07473-7.

REFERENCES

[1] A.A. Andrade and R. Palazzo Jr., Hamming and Reed-Solomon codes over certain rings, Computational and Applied Mathematics, 20 (3) (2001), 289–306.

[2] A.A. Andrade and R. Palazzo Jr., Construction and decoding of BCH codes over finite commutative rings, Linear Algebra and its Applications, 286 (1999), 69–85.

[3] A.A. Andrade, J.C. Interlando and R. Palazzo Jr., Alternant and BCH codes over certain rings, Computational and Applied Mathematics, 22 (2) (2003), 233–247.

[4] V.D. Goppa, A new class of linear error-correcting codes, Probl. Peredach. Inform., 6 (3) (1970), 24–30.

[5] B.R. McDonald, Finite Rings with Identity, Marcel Dekker, Inc., New York (1974).

[6] J.C. Interlando, R. Palazzo Jr. and M. Elia, On the decoding of Reed-Solomon and BCH codes over integer residue rings, IEEE Trans. Inform. Theory, IT-43 (1997), 1013–1021.

[7] G.D. Forney Jr., On decoding BCH codes, IEEE Trans. Inform. Theory, IT-11 (1965), 549–557.

[8] W.W. Peterson and E.J. Weldon Jr., Error Correcting Codes, MIT Press, Cambridge, Mass. (1972).

[9] A.A. Andrade and R. Palazzo Jr., A note on units of a local finite ring, Revista de Matemática e Estatística, 18 (2000), 213–222.

[10] H.J. Helgert, Srivastava codes, IEEE Trans. Inform. Theory, IT-18 (2) (1972).
