Algebraic Geometric Codes on Curves and Surfaces
Paolo Zampolini
Master Program in Mathematics, Faculty of Science, University of Padova, Italy.
Supervisor: Prof. Luca Barbieri Viale, Department of Pure and Applied Mathematics, University of Padova, Italy.
Padova, 22nd March 2007

"The main application of Pure Mathematics is to make you happy"
H.W. Lenstra, Leiden, November 2005

Contents

Introduction
1 Basic notions of coding theory
2 Algebraic geometric codes on curves
  2.1 AG codes
  2.2 Dual AG codes
  2.3 Rational points on curves and lower bounds
3 Decoding methods
  3.1 SV-algorithm
  3.2 A cubic curve
  3.3 The Duursma algorithm
  3.4 Example
  3.5 Another example: the Klein quartic
4 Codes over higher dimensional varieties
  4.1 Sheaves of modules
  4.2 Divisors
  4.3 Definition of AG codes on varieties
5 Codes on surfaces
  5.1 Ample and very ample invertible sheaves
  5.2 Arithmetic and geometric genus of varieties
  5.3 Basic properties of surfaces
  5.4 Parameters of codes on surfaces
6 Codes on ruled surfaces
  6.1 The projective space bundle
  6.2 Ruled surfaces
  6.3 Parameters of codes on ruled surfaces
  6.4 Codes on rational ruled surfaces
  6.5 Computing the dimension in the decomposable case
  6.6 Computing the dimension in the general case
  6.7 Ruled surfaces over elliptic curves
  6.8 Conclusion and open problems

Introduction

One of the main developments in coding theory in the 1980s was the introduction of algebraic geometric codes, due to the Russian mathematician and engineer Goppa. His main idea was to associate linear codes to divisors on curves. In 1982 Tsfasman, Vladut and Zink used this new class of codes to prove the existence of codes with better parameters than those guaranteed by the Gilbert-Varshamov bound.

The theory of algebraic geometric codes on curves was studied in depth during the 1980s and 1990s and can now be considered well known to the mathematical community. Nowadays most of the work on algebraic geometric codes on curves concerns the search for faster decoding algorithms; indeed, at present, other codes with more efficient decoding algorithms are preferred to algebraic geometric codes in applications.

In 1991 Tsfasman and Vladut generalized Goppa's idea and defined algebraic geometric codes on varieties. Nevertheless, the first publication about codes on higher dimensional varieties is dated 2001: Hansen's PhD thesis opened an interesting new research field, combining coding theory and the geometry of varieties.

The aim of this work is to introduce the theory of algebraic geometric codes on curves and to present the recent progress on codes on higher dimensional varieties. In the first section we recall the elementary notions of coding theory. The second section treats algebraic geometric codes on curves, with particular attention to their connection with the Gilbert-Varshamov bound, while the third contains two decoding algorithms for codes on curves. In the fourth section we introduce codes on higher dimensional varieties, and then follow the works of Hansen and Zarzar to study codes on surfaces. Lastly we focus on Lomont's PhD thesis about codes on ruled surfaces and try to solve a problem it leaves open. In sections 2 and 3 the language of classical algebraic geometry is assumed to be known, while in the last three sections we also recall the required notions of scheme theory.
Proofs are often omitted, but we try to define almost every geometric object we use.

1 Basic notions of coding theory

Definition 1.1 Let $\mathbb{F}_q$ be a field with $q$ elements. An error correcting code is a subset $C$ of $\mathbb{F}_q^n$. $C$ is said to be linear if it is a vector subspace of $\mathbb{F}_q^n$.

In practice most codes in use are linear; they serve to transform words of length $k < n$ (with letters in the alphabet $\mathbb{F}_q$) into words of length $n$, in order to detect and correct errors that occur during transmission. The elements of the code are called words.

Let $x, y \in \mathbb{F}_q^n$. The Hamming distance (or, simply, distance) is defined as $d(x, y) := \#\{i \mid 1 \le i \le n,\ x_i \ne y_i\}$; one checks that $d$ really defines a metric on $\mathbb{F}_q^n$. The minimum distance of a code $C$ is $d = d(C) := \min\{d(x, y) \mid x, y \in C,\ x \ne y\}$. It is immediate to see that the minimum distance of a linear code equals the minimum weight $w = w(C) := \min\{\mathrm{wt}(x) \mid x \in C,\ x \ne 0\}$, where $\mathrm{wt}(x) := \#\{i \mid x_i \ne 0\}$.

We write $(n, M, d)$ for a code of length $n$ with $M$ words and distance $d$, while an $[n, k, d]$-code is a linear code of length $n$, dimension $k$ and distance $d$.

Decoding follows the so-called maximum-likelihood-decoding principle: we assume that the smallest possible number of errors occurred during transmission, so the received vector $y \in \mathbb{F}_q^n$ is interpreted as the code word $x \in C$ for which $\mathrm{wt}(x - y)$ is as small as possible. Thus a code with minimum distance $d$ can detect every error of weight $e < d$ and correct every error of weight at most $e = \lfloor (d-1)/2 \rfloor$. It follows from the triangle inequality that spheres of radius $e$ centered at code words are disjoint. A code with distance $d = 2e + 1$ is said to be perfect if such spheres cover $\mathbb{F}_q^n$; hence, for a perfect code, every $y \in \mathbb{F}_q^n$ has distance at most $e$ from a unique code word.

The most immediate relation between the parameters of a code is the Singleton bound.

Proposition 1.1 (Singleton bound) Let $C$ be an $(n, M, d)$-code. Then $M q^{d-1} \le q^n$.

The proposition can be proved by deleting $d - 1$ fixed coordinates from every code word: since distinct code words differ in at least $d$ positions, the truncated words of length $n - d + 1$ are still pairwise distinct, whence $M \le q^{n-d+1}$. A code whose parameters attain equality in the previous proposition is called MDS (maximum distance separable). We remark that, for an $[n, k, d]$-code, the Singleton bound can be written as $d + k \le n + 1$.

A fundamental role in the development of coding theory is played by Shannon's work ([20]) and, in particular, by his famous theorem. Suppose we transmit a message through a channel in which each digit is received wrongly with probability $p$, the same for every symbol (such a channel is called uniform). Suppose moreover that we use a code $C$ formed by $M$ words $x_1, \dots, x_M$ occurring with the same probability, and let $P_i$ be the probability of failing to decode $x_i$ correctly under the maximum-likelihood-decoding principle. The average probability of wrongly decoding a word of $C$ is then
\[ P_C = \frac{1}{M} \sum_{i=1}^{M} P_i. \]
A code with good properties is one for which $P_C$ is small. Let $P^*(M, n, p)$ be the minimum of $P_C$ as $C$ runs over all $(M, n)$-codes, and let $R = \frac{\log_q M}{n}$ be the so-called information rate.

Theorem 1.1 (Shannon) There exists a function $c$, called the channel capacity, depending only on $q$ and $p$, such that, if $0 < R < c$ and $M_n = q^{\lfloor Rn \rfloor}$, then $\lim_{n \to +\infty} P^*(M_n, n, p) = 0$.
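To make these notions concrete, here is a small Python sketch (our illustration, not part of the thesis; all function names are ours) of the Hamming distance, a brute-force minimum distance, and maximum-likelihood decoding, checked on the binary repetition code of length 5 together with the Singleton bound.

    # Illustrative sketch (ours, not from the thesis): basic coding-theory notions.
    def hamming_distance(x, y):
        # Number of coordinates in which x and y differ.
        return sum(1 for xi, yi in zip(x, y) if xi != yi)

    def minimum_distance(code):
        # Brute-force d(C) = min d(x, y) over pairs of distinct words; for a
        # linear code this equals the minimum weight of a nonzero word.
        words = list(code)
        return min(hamming_distance(x, y)
                   for i, x in enumerate(words) for y in words[i + 1:])

    def ml_decode(code, received):
        # Maximum-likelihood decoding on a uniform channel: return the code
        # word closest to the received vector in the Hamming metric.
        return min(code, key=lambda x: hamming_distance(x, received))

    # Example: the binary repetition code of length 5, a [5, 1, 5]-code.
    code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]
    q, n, M = 2, 5, len(code)
    d = minimum_distance(code)               # d = 5
    e = (d - 1) // 2                         # corrects up to e = 2 errors
    print(ml_decode(code, (1, 0, 1, 1, 0)))  # two errors -> (1, 1, 1, 1, 1)

    # Singleton bound: M * q^(d-1) <= q^n; equality means the code is MDS.
    assert M * q ** (d - 1) <= q ** n

Here the bound is attained with equality, so the repetition code is MDS, in agreement with the linear form $d + k \le n + 1$ (here $5 + 1 = 5 + 1$).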
By Shannon's theorem, for $R$ in an interval depending only on the transmission channel, among the codes with the same information rate the good ones are those with $n$ big. This is a good motivation to look for long codes and to study the asymptotic behavior of codes in the same class as $n$ varies.

During the transmission of a word of length $n$ through a channel with error probability $p$, we expect an average of $pn$ errors; hence we need a code with minimum distance $d > 2pn$, i.e. we require that $d$ grow at least proportionally to $n$.

Let $\delta = \frac{d}{n}$ and $A_q(n, d) = \max\{M \mid \text{there exists an } (n, M, d)\text{-code over } \mathbb{F}_q\}$. We now define the function $\alpha$:
\[ \alpha(\delta) = \limsup_{n \to +\infty} \frac{\log_q A_q(n, \lfloor \delta n \rfloor)}{n}, \]
an important and widely studied object of coding theory. Thus we are interested in studying, for a fixed $\delta$, the best possible information rate for a code of length $n$, and then letting $n$ go to infinity, as suggested by Shannon's theorem. The easiest (and least precise) of the upper bounds for $\alpha$ is a direct consequence of the Singleton bound.

Theorem 1.2 For $\delta \in [0, 1]$, we have $\alpha(\delta) \le 1 - \delta$.

Proof: From Proposition 1.1 we have $M \le q^{n+1-d}$ for every $(n, M, d)$-code, so $\log_q A_q(n, \lfloor \delta n \rfloor) \le n + 1 - \delta n$. Dividing by $n$ and taking the limsup, we conclude that $\alpha(\delta) \le 1 - \delta$.

A much more interesting result is Gilbert's (or the Gilbert-Varshamov) lower bound. Define on $[0, \frac{q-1}{q}]$ the entropy function as
\[ H_q(x) := \begin{cases} 0 & \text{if } x = 0,\\ x \log_q(q-1) - x \log_q x - (1-x) \log_q(1-x) & \text{if } 0 < x \le \frac{q-1}{q}. \end{cases} \]
$H_q$ is an increasing function and its value at $\frac{q-1}{q}$ is 1.
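As a quick numerical illustration (again our own sketch, not from the thesis), the following Python code implements the entropy function $H_q$ and checks the two properties just stated: $H_q$ is increasing on $[0, \frac{q-1}{q}]$ and its value at $\frac{q-1}{q}$ is 1; for comparison it also prints the asymptotic Singleton bound $1 - \delta$ of Theorem 1.2.

    # Illustrative sketch (ours, not from the thesis): the q-ary entropy function.
    import math

    def H(q, x):
        # H_q(x) on [0, (q-1)/q]; H_q(0) = 0 by definition.
        if x == 0:
            return 0.0
        return (x * math.log(q - 1, q)
                - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q))

    q = 2
    print(H(q, (q - 1) / q))   # -> 1.0, i.e. H_q((q-1)/q) = 1

    # Monotonicity: sample H_q on a grid of [0, (q-1)/q] and compare neighbors.
    xs = [i / 100 * (q - 1) / q for i in range(101)]
    vals = [H(q, x) for x in xs]
    assert all(a <= b for a, b in zip(vals, vals[1:]))

    # Compare with the asymptotic Singleton upper bound 1 - delta.
    for delta in (0.1, 0.25, 0.5):
        print(delta, 1 - delta, H(q, delta))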