Linear Programming Bounds for Tree Codes (Corresp.)


Linear Programming Bounds

Peter Boyvalenkov [email protected]
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria; Technical Faculty, Southwestern University, Blagoevgrad, Bulgaria

Danyo Danev [email protected]
Department of Electrical Engineering and Department of Mathematics, Linköping University, SE-581 83 Linköping, Sweden

arXiv:1903.02255v1 [cs.IT] 6 Mar 2019

Abstract. This chapter is written for the forthcoming book "A Concise Encyclopedia of Coding Theory" (CRC Press), edited by W. Cary Huffman, Jon-Lark Kim, and Patrick Solé. This book will collect short but foundational articles, emphasizing definitions, examples, exhaustive references, and basic facts. The target audience of the Encyclopedia is upper-level undergraduates and graduate students.

1 Preliminaries – Krawtchouk polynomials, codes, and designs

Definition 1.1 (Krawtchouk polynomials). Let $n \ge 1$ and $q \ge 2$ be integers. The Krawtchouk polynomials are defined as
$$K_i^{(n,q)}(z) := \sum_{j=0}^{i} (-1)^j (q-1)^{i-j} \binom{z}{j} \binom{n-z}{i-j}, \quad i = 0, 1, 2, \ldots,$$
where $\binom{z}{j} := z(z-1)\cdots(z-j+1)/j!$, $z \in \mathbb{R}$.

Theorem 1.2 ([57]). The polynomials $K_i^{(n,q)}(z)$ satisfy the three-term recurrence relation
$$(i+1)K_{i+1}^{(n,q)}(z) = [i + (q-1)(n-i) - qz]\,K_i^{(n,q)}(z) - (q-1)(n-i+1)\,K_{i-1}^{(n,q)}(z)$$
with initial conditions $K_0^{(n,q)}(z) = 1$ and $K_1^{(n,q)}(z) = n(q-1) - qz$.

Theorem 1.3 ([57]). The discrete measure
$$d\mu_n(t) := q^{-n} \sum_{i=0}^{n} \binom{n}{i} (q-1)^i \, \delta(t-i)\, dt \qquad (1)$$
where $\delta(t-i)$ is the Dirac delta measure at $i \in \{0, 1, \ldots, n\}$, and the form
$$\langle f, g \rangle = \int f(t) g(t) \, d\mu_n(t) \qquad (2)$$
define an inner product over the class $\mathcal{P}_n$ of real polynomials of degree less than or equal to $n$.

Theorem 1.4 (Orthogonality relations, [57]). Under the inner product (2) the Krawtchouk polynomials satisfy
$$\sum_{u=0}^{n} \binom{n}{u} K_i^{(n,q)}(u) K_j^{(n,q)}(u) (q-1)^u = \delta_{i,j} \, q^n (q-1)^i \binom{n}{i}$$
for any $i, j \in \{0, 1, \ldots, n\}$. Moreover,
$$\sum_{u=0}^{n} K_i^{(n,q)}(u) K_u^{(n,q)}(j) = \delta_{i,j} \, q^n.$$
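As a quick numerical check of these facts (an illustrative sketch, not part of the chapter; the helper names `gbinom` and `krawtchouk` are my own), the defining sum, the three-term recurrence of Theorem 1.2, and the first orthogonality relation of Theorem 1.4 can all be verified with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def gbinom(x, j):
    """Generalized binomial coefficient C(x, j) = x(x-1)...(x-j+1)/j!."""
    num = Fraction(1)
    for t in range(j):
        num *= x - t
    return num / factorial(j)

def krawtchouk(i, z, n, q):
    """K_i^{(n,q)}(z) computed directly from the defining sum of Definition 1.1."""
    return sum((-1) ** j * (q - 1) ** (i - j) * gbinom(z, j) * gbinom(n - z, i - j)
               for j in range(i + 1))

n, q = 7, 3
K = lambda i, z: krawtchouk(i, z, n, q)

# Initial conditions: K_0(z) = 1 and K_1(z) = n(q-1) - qz.
assert all(K(0, z) == 1 and K(1, z) == n * (q - 1) - q * z for z in range(n + 1))

# Three-term recurrence of Theorem 1.2, checked at the integer points 0..n.
for i in range(1, n):
    for z in range(n + 1):
        lhs = (i + 1) * K(i + 1, z)
        rhs = (i + (q - 1) * (n - i) - q * z) * K(i, z) \
              - (q - 1) * (n - i + 1) * K(i - 1, z)
        assert lhs == rhs

# First orthogonality relation of Theorem 1.4.
for i in range(n + 1):
    for j in range(n + 1):
        s = sum(comb(n, u) * K(i, u) * K(j, u) * (q - 1) ** u for u in range(n + 1))
        assert s == (q ** n * (q - 1) ** i * comb(n, i) if i == j else 0)
```

The same loop structure verifies the second orthogonality relation, $\sum_u K_i^{(n,q)}(u) K_u^{(n,q)}(j) = \delta_{i,j} q^n$, which underlies the expansion formula in Theorem 1.5.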
Theorem 1.5 (Expansion, [57]). If $f(z) = \sum_{i=0}^{n} f_i K_i^{(n,q)}(z)$, then
$$f_i = \left[ q^n (q-1)^i \binom{n}{i} \right]^{-1} \sum_{u=0}^{n} f(u) K_i^{(n,q)}(u) (q-1)^u \binom{n}{u} = q^{-n} \sum_{u=0}^{n} f(u) K_u^{(n,q)}(i).$$

Definition 1.6. Define $K_{n+1}^{(n,q)}(z) := \frac{q^{n+1}}{(n+1)!} \prod_{u=0}^{n} (u-z)$. Note that $K_{n+1}^{(n,q)}(z)$ is orthogonal to any polynomial $K_i^{(n,q)}(z)$, $i = 0, 1, \ldots, n$, with respect to the measure (1).

Theorem 1.7 (Krein condition, [57]). For any $i, j \in \{0, 1, \ldots, n\}$,
$$K_i^{(n,q)}(z) K_j^{(n,q)}(z) = \sum_{u=0}^{n} p_{i,j}^u K_u^{(n,q)}(z) \pmod{K_{n+1}^{(n,q)}(z)}$$
with $p_{i,j}^u = 0$ if $i + j < u$, $p_{i,j}^u > 0$ if $i + j = u \le n$, and $p_{i,j}^u \ge 0$ otherwise.

Definition 1.8. Denote
$$T_i^{n,q}(z, w) := \sum_{j=0}^{i} \left[ \binom{n}{j} (q-1)^j \right]^{-1} K_j^{(n,q)}(z) K_j^{(n,q)}(w).$$

Theorem 1.9 (Christoffel–Darboux formula, [57]). For any $i = 0, 1, \ldots, n$ and any real $z$ and $w$,
$$(w - z)\, T_i^{n,q}(z, w) = \frac{i+1}{q(q-1)^i \binom{n}{i}} \left( K_{i+1}^{(n,q)}(z) K_i^{(n,q)}(w) - K_{i+1}^{(n,q)}(w) K_i^{(n,q)}(z) \right).$$

Let $\mathcal{C} \subseteq F_q^n$ be a code, where $F_q = \{0, 1, \ldots, q-1\}$ is the alphabet of $q$ symbols (so $q$ is not necessarily a power of a prime). For $x, y \in F_q^n$, recall that $d(x, y)$ is the number of coordinates where $x$ and $y$ disagree.

Definition 1.10. The vector $B(\mathcal{C}) = (B_0, B_1, \ldots, B_n)$, where
$$B_i = \frac{\left| \{ (x, y) \in \mathcal{C}^2 : d(x, y) = i \} \right|}{|\mathcal{C}|}, \quad i = 0, 1, \ldots, n,$$
is called the distance distribution of $\mathcal{C}$. Clearly, $B_0 = 1$ and $B_i = 0$ for $i = 1, 2, \ldots, d-1$, where $d$ is the minimum distance of $\mathcal{C}$.

Definition 1.11. The vector $B'(\mathcal{C}) = (B_0', B_1', \ldots, B_n')$, where
$$B_i' = \frac{1}{|\mathcal{C}|} \sum_{j=0}^{n} B_j K_i^{(n,q)}(j), \quad i = 0, 1, \ldots, n,$$
is called the dual distance distribution of $\mathcal{C}$ or the MacWilliams transform of $B(\mathcal{C})$. Obviously $B_0' = 1$.

Theorem 1.12 ([36, 37]). The dual distance distribution of $\mathcal{C}$ satisfies
$$B_i' \ge 0, \quad i = 1, 2, \ldots, n.$$

Theorem 1.13 ([60]). If $q$ is a power of a prime and $\mathcal{C}$ is a linear code in $\mathbb{F}_q^n$, then $B'(\mathcal{C})$ is the distance distribution of the dual code $\mathcal{C}^\perp$.

Definition 1.14. The smallest positive integer $i$ such that $B_i' \ne 0$ is called the dual distance of $\mathcal{C}$ and is denoted by $d' = d'(\mathcal{C})$.
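For a concrete worked example (my own illustration, not from the chapter): the binary even-weight code of length 4 has distance distribution $(1, 0, 6, 0, 1)$; its MacWilliams transform per Definition 1.11 is $(1, 0, 0, 0, 1)$, so every $B_i'$ is nonnegative as Theorem 1.12 requires, and the dual distance is $d' = 4$:

```python
from fractions import Fraction
from itertools import product
from math import comb

def krawtchouk(i, z, n):
    """Binary Krawtchouk polynomial K_i^{(n,2)}(z) at an integer point z."""
    return sum((-1) ** j * comb(z, j) * comb(n - z, i - j) for j in range(i + 1))

n, q = 4, 2
# The even-weight (parity-check) code: all binary 4-tuples of even Hamming weight.
code = [c for c in product(range(q), repeat=n) if sum(c) % 2 == 0]
M = len(code)  # 8 codewords

def d(x, y):
    """Hamming distance."""
    return sum(a != b for a, b in zip(x, y))

# Distance distribution (Definition 1.10): B_i = |{(x,y) : d(x,y) = i}| / |C|.
B = [Fraction(sum(d(x, y) == i for x in code for y in code), M) for i in range(n + 1)]
assert B == [1, 0, 6, 0, 1]

# Dual distance distribution (Definition 1.11): the MacWilliams transform of B.
Bp = [sum(B[j] * krawtchouk(i, j, n) for j in range(n + 1)) / M for i in range(n + 1)]
assert Bp == [1, 0, 0, 0, 1]
assert all(b >= 0 for b in Bp)  # Theorem 1.12

# Dual distance d' = 4.
dual_distance = min(i for i in range(1, n + 1) if Bp[i] != 0)
assert dual_distance == 4
```

This is consistent with Theorem 1.13: the dual of the even-weight code is the repetition code $\{0000, 1111\}$, whose distance distribution is indeed $(1, 0, 0, 0, 1)$; and by Theorem 1.17 below, $d' = 4$ makes the code a 3-design.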
Denote by $s = s(\mathcal{C})$ (resp., $s' = s'(\mathcal{C})$) the number of nonzero $B_i$'s (resp., $B_i'$'s), $i \in \{1, 2, \ldots, n\}$, i.e.,
$$s = |\{ i : B_i \ne 0,\ i > 0 \}|, \quad s' = |\{ i : B_i' \ne 0,\ i > 0 \}|.$$
The number $s'$ is called the external distance of $\mathcal{C}$. Define $\delta = 0$ if $B_n = 0$ and $\delta = 1$ otherwise (respectively, $\delta' = 0$ if $B_n' = 0$ and $\delta' = 1$ otherwise).

Definition 1.15. Let $\mathcal{C} \subseteq F_q^n$ be a code and $M$ be a codeword matrix consisting of all vectors of $\mathcal{C}$ as rows. Then $\mathcal{C}$ is called a $\tau$-design if any set of $\tau$ columns of $M$ contains any $\tau$-tuple of $F_q^\tau$ the same number of times (namely, $\lambda := |\mathcal{C}|/q^\tau$). The largest positive integer $\tau$ such that $\mathcal{C}$ is a $\tau$-design is called the strength of $\mathcal{C}$ and is denoted by $\tau(\mathcal{C})$. The number $\lambda$ is called the index of $\mathcal{C}$.

Remark 1.16. A $\tau$-design in $F_q^n$ is also called an orthogonal array of strength $\tau$ [37] or a $\tau$-wise independent set [4] (see also [49]).

Theorem 1.17 ([36, 37]). If $\mathcal{C} \subseteq F_q^n$ has dual distance $d' = d'(\mathcal{C})$, then $\mathcal{C}$ is a $\tau$-design, where $\tau = d' - 1$.

Definition 1.18. For a real polynomial $f(z) = \sum_{i=0}^{n} f_i K_i^{(n,q)}(z)$, the polynomial
$$\widehat{f}(z) = q^{-n/2} \sum_{j=0}^{n} f(j) K_j^{(n,q)}(z)$$
is called the dual to $f(z)$. Note that the dual to $\widehat{f}(z)$ is $f(z)$ and also that $\widehat{f}(i) = q^{n/2} f_i$ for any $i \in \{0, 1, \ldots, n\}$.

2 General linear programming theorems

Theorem 2.1 ([36, 37]). For any code $\mathcal{C} \subseteq F_q^n$ with distance distribution $(B_0, B_1, \ldots, B_n)$ and dual distance distribution $(B_0', B_1', \ldots, B_n')$, and any real polynomial $f(z) = \sum_{i=0}^{n} f_i K_i^{(n,q)}(z)$, it is valid that
$$f(0) + \sum_{i=1}^{n} B_i f(i) = |\mathcal{C}| \left( f_0 + \sum_{i=1}^{n} f_i B_i' \right).$$

In Definition 1.9.1 in Chapter 1, $A_q(n, d)$ is defined for codes over $\mathbb{F}_q$. We now extend that definition to codes over $F_q$, as only the alphabet size is important and not its structure.

Definition 2.2. For fixed $q$, $n$, and $d \in \{1, 2, \ldots, n\}$ denote
$$A_q(n, d) := \max \{ |\mathcal{C}| : \mathcal{C} \subseteq F_q^n,\ d(\mathcal{C}) = d \}.$$

Definition 2.3. For fixed $q$, $n$, and $\tau \in \{1, 2, \ldots, n\}$ denote
$$B_q(n, \tau) := \min \{ |\mathcal{C}| : \mathcal{C} \subseteq F_q^n,\ \tau(\mathcal{C}) = \tau \}.$$

Theorem 2.4 (Linear Programming Bound for codes [36, 37]).
Let the real polynomial $f(z) = \sum_{i=0}^{n} f_i K_i^{(n,q)}(z)$ satisfy the conditions

(A1) $f_0 > 0$, $f_i \ge 0$ for $i = 1, 2, \ldots, n$;
(A2) $f(0) > 0$, $f(i) \le 0$ for $i = d, d+1, \ldots, n$.

Then $A_q(n, d) \le f(0)/f_0$. Equality holds for codes $\mathcal{C} \subseteq F_q^n$ with $d(\mathcal{C}) = d$, distance distribution $(B_0, B_1, \ldots, B_n)$, dual distance distribution $(B_0', B_1', \ldots, B_n')$, and polynomials $f(z)$ such that $B_i f(i) = 0$ and $B_i' f_i = 0$ for every $i = 1, 2, \ldots, n$.

Remark 2.5. The polynomial $f$ from Theorem 2.4 is implicit in Theorem 1.9.23 a) from Chapter 1 as $\sum_{i=0}^{n} B_i K_i^{(n,q)}(z)$, and it is normalized for $f_0$ (or $B_0$) to be equal to 1; note also the normalization of the Krawtchouk polynomials. Thus the bound $f(0)/f_0$ appears as $\sum_{i=0}^{n} B_i$ in Chapter 1.

Theorem 2.6 (Linear Programming Bound for designs [36, 37]). Let the real polynomial $f(z) = \sum_{i=0}^{n} f_i K_i^{(n,q)}(z)$ satisfy the conditions

(B1) $f_0 > 0$, $f_i \le 0$ for $i = \tau+1, \tau+2, \ldots, n$;
(B2) $f(0) > 0$, $f(i) \ge 0$ for $i = 1, 2, \ldots, n$.

Then $B_q(n, \tau) \ge f(0)/f_0$. Equality holds for designs $\mathcal{C} \subseteq F_q^n$ with $\tau(\mathcal{C}) = \tau$, distance distribution $(B_0, B_1, \ldots, B_n)$, dual distance distribution $(B_0', B_1', \ldots, B_n')$, and polynomials $f(z)$ such that $B_i f(i) = 0$ and $B_i' f_i = 0$ for every $i = 1, 2, \ldots, n$.

Remark 2.7. The Rao Bound in Theorem 3.3 and the Levenshtein Bound in Theorem 3.10 below are obtained by suitable polynomials in Theorems 2.6 and 2.4, respectively.
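As a worked instance of Theorem 2.4 (my own illustrative choice of polynomial, not taken from the chapter): for $q = 2$ and $d > n/2$, the degree-one polynomial $f(z) = d - z$ satisfies (A1) and (A2) and yields a Plotkin-type bound. The sketch below verifies this for $n = 6$, $d = 4$, computing the Krawtchouk expansion coefficients numerically via the second formula of Theorem 1.5:

```python
from fractions import Fraction
from math import comb

def krawtchouk(i, z, n, q):
    """K_i^{(n,q)}(z) at an integer point z, 0 <= z <= n."""
    return sum((-1) ** j * (q - 1) ** (i - j) * comb(z, j) * comb(n - z, i - j)
               for j in range(i + 1))

n, q, dmin = 6, 2, 4
f = lambda z: dmin - z   # candidate polynomial f(z) = d - z

# Expansion coefficients via Theorem 1.5: f_i = q^{-n} sum_u f(u) K_u^{(n,q)}(i).
fc = [Fraction(sum(f(u) * krawtchouk(u, i, n, q) for u in range(n + 1)), q ** n)
      for i in range(n + 1)]

# For q = 2, f(z) = (d - n/2) K_0(z) + (1/2) K_1(z), so f_0 = 1 and f_1 = 1/2 here.
assert fc[0] == 1 and fc[1] == Fraction(1, 2) and all(c == 0 for c in fc[2:])

# Conditions (A1) and (A2) of Theorem 2.4.
assert fc[0] > 0 and all(c >= 0 for c in fc[1:])                 # (A1)
assert f(0) > 0 and all(f(i) <= 0 for i in range(dmin, n + 1))   # (A2)

bound = Fraction(f(0)) / fc[0]
assert bound == 4   # A_2(6, 4) <= 4
```

The resulting bound $A_2(6,4) \le f(0)/f_0 = 4$ is the Plotkin bound $2d/(2d-n)$, and it is attained, e.g. by the code $\{000000, 111100, 001111, 110011\}$, whose pairwise distances all equal 4.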