Sparse Polynomial Interpolation Codes and Their Decoding Beyond Half the Minimum Distance *
Erich L. Kaltofen
Dept. of Mathematics, NCSU
Raleigh, NC 27695, USA
[email protected]
www4.ncsu.edu/~kaltofen

Clément Pernet
U. J. Fourier, LIP-AriC, CNRS, Inria, UCBL, ÉNS de Lyon
46 Allée d'Italie, 69364 Lyon Cedex 7, France
[email protected]
http://membres-liglab.imag.fr/pernet/

ABSTRACT
We present algorithms performing sparse univariate polynomial interpolation with errors in the evaluations of the polynomial. Based on the initial work by Comer, Kaltofen and Pernet [Proc. ISSAC 2012], we define the sparse polynomial interpolation codes and state that their minimal distance is precisely the code-word length divided by twice the sparsity. At ISSAC 2012, we gave a decoding algorithm for as much as half the minimal distance and a list decoding algorithm up to the minimal distance.

Our new polynomial-time list decoding algorithm uses sub-sequences of the received evaluations indexed by an arithmetic progression, allowing the decoding for a larger radius, that is, more errors in the evaluations, while returning a list of candidate sparse polynomials. We quantify this improvement for all typically small values of the number of terms and the number of errors, and provide a worst-case asymptotic analysis of this improvement. For instance, for sparsity T = 5 with ≤ 10 errors we can list decode in polynomial time from 74 values of the polynomial with unknown terms, whereas our earlier algorithm required 2T(E + 1) = 110 evaluations.

We then propose two variations of these codes in characteristic zero, where appropriate choices of values for the variable yield a much larger minimal distance: the code-word length minus twice the sparsity.

Categories and Subject Descriptors: I.1.2 [Symbolic and Algebraic Manipulation]: Algorithms; G.1.1 [Numerical Analysis]: Interpolation–smoothing; E.4 [Coding and Information Theory]: Error control codes.

General Terms: Algorithms, Reliability

Keywords: sparse polynomial interpolation, Blahut's algorithm, Prony's algorithm, exact polynomial fitting with errors

1. INTRODUCTION
Evaluation-interpolation schemes are a key ingredient in many of today's computations. Model fitting for empirical data sets is a well-known one, where additional information on the model helps improve the fit. In particular, models of natural phenomena often happen to be sparse, which has motivated a wide range of research including compressive sensing [4] and sparse interpolation of polynomials [23, 1, 15, 13, 9, 11]. Most algorithms for the latter problem rely on the connection between linear complexity and sparsity, often referred to as Blahut's theorem (Theorem 1; [3, 19]), though already used in the 18th century by Prony [23]. The Berlekamp/Massey algorithm [20] makes this connection effective. These exact sparse interpolation techniques have been very successfully applied to numeric computations [10, 16, 6, 17].

Computer algebra also widely uses evaluation-interpolation schemes as a key computational tool: reducing operations on polynomials to base ring operations, integer and rational operations to finite field operations, operations on multivariate polynomials to operations on univariate polynomials, etc. With the rise of large-scale parallel computers, their ability to convert a large sequential computation into numerous smaller independent tasks is of high importance.

Evaluation-interpolation schemes are also at the core of the famous Reed-Solomon error correcting codes [25, 22].

∗ This material is based on work supported in part by the National Science Foundation under Grant CCF-1115772 (Kaltofen),
and the Agence Nationale de la Recherche under Grant HPAC ANR-11-BS02-013 and the Inria Associate Teams Grant QOLAPS (Pernet).

ISSAC'14, July 23–25, 2014, Kobe, Japan. Copyright is held by the owners/authors. Publication rights licensed to ACM. ACM 978-1-4503-2501-1/14/07. http://dx.doi.org/10.1145/2608628.2608660

There, a block of information, viewed as a dense polynomial over a finite field, is encoded by its evaluations at n points. Decoding is achieved by an interpolation resilient to errors. Blahut's theorem [3, 19] originates from the decoding of Reed-Solomon codes: the interpolation of the error vector of sparsity t is a sequence of linear complexity t whose generator, computed by the Berlekamp/Massey algorithm, carries in its roots the information of the error locations. Beyond the fields of digital communication and data storage, error correcting codes have found more recent applications in fault-tolerant distributed computations [14, 18, 7]. In particular, parallelization based on evaluation-interpolation can be made fault tolerant if interpolation with errors is performed. This is achieved by Reed-Solomon codes for dense polynomial interpolation and by CRT codes for residue number systems [18]. The problem of sparse polynomial interpolation with errors arises naturally in this context. We gave algorithms for the solution of the problem in [6]. Our approach is naturally related to the k-error linear complexity problem [21] from stream cipher theory. A major concern in our previous results is that in order to correct E errors, the number of evaluations has to be increased by a multiplicative factor linear in E. In comparison, dense interpolation with errors only requires an additive term linear in E.

In this paper we further investigate this problem from a coding theory viewpoint. In Section 2 we define the sparse polynomial interpolation codes. We then focus on the case where the evaluation points are consecutive powers of a primitive root of unity, whose order is divisible by twice the sparsity, in order to benefit from the Blahut/Ben-Or/Tiwari interpolation algorithm. We show that in this setting the minimal distance is precisely the length divided by twice the sparsity. The algorithms of [6] can be viewed as a unique decoding algorithm for as much as half the minimal distance and a list decoding algorithm up to the minimal distance. In Section 3, we propose a new polynomial-time list decoding algorithm that uses sub-sequences of the received evaluations indexed by an arithmetic progression, reaching a larger decoding radius. We quantify this improvement on average by experiments and in the worst case for all typically small values of the number of terms and the number of errors, and make connections between the asymptotic decoding capacity and the famous Erdős–Turán problem of additive combinatorics. We then propose in Section 5 two variations of these codes in characteristic zero, where appropriate choices of values for the variable yield a much larger minimal distance: the length minus twice the sparsity.

Linear recurring sequences. We recall that a sequence (a_0, a_1, …) is linearly recurring if there exist λ_0, λ_1, …, λ_{t−1} such that a_{j+t} = ∑_{i=0}^{t−1} λ_i a_{j+i} for all j ≥ 0. The monic polynomial Λ(z) = z^t − ∑_{i=0}^{t−1} λ_i z^i is called a generating polynomial of the sequence; the generating polynomial with least degree is called the minimal generating polynomial.

We now recall the Blahut/Ben-Or/Tiwari algorithm, which works in the setting of univariate sparse polynomial interpolation. Let f be a univariate polynomial with t terms m_j and let c_j be the corresponding non-zero coefficients:

  f(x) = ∑_{j=1}^{t} c_j x^{e_j} = ∑_{j=1}^{t} c_j m_j,  c_j ≠ 0, e_j ∈ ℤ.

Theorem 2 [1]. Let b_j = α^{e_j}, where α is a value from the coefficient domain to be specified later, let a_i = f(α^i) = ∑_{j=1}^{t} c_j b_j^i, and let Λ(z) = ∏_{j=1}^{t} (z − b_j) = z^t + λ_{t−1} z^{t−1} + ⋯ + λ_0. The sequence (a_0, a_1, …) is linearly generated by the minimal polynomial Λ(z).

The Blahut/Ben-Or/Tiwari algorithm then proceeds in the four following steps:
1. Find the minimal-degree generating polynomial Λ for (a_0, a_1, …), using the Berlekamp/Massey algorithm.
2. Compute the roots b_j of Λ, using univariate polynomial factorization.
3. Recover the exponents e_j of f, by repeatedly dividing b_j by α.
4. Recover the coefficients c_j of f, by solving the transposed t × t Vandermonde system

   [ 1          1          …   1          ] [ c_1 ]   [ a_0     ]
   [ b_1        b_2        …   b_t        ] [ c_2 ]   [ a_1     ]
   [ ⋮          ⋮              ⋮          ] [ ⋮   ] = [ ⋮       ]
   [ b_1^{t−1}  b_2^{t−1}  …   b_t^{t−1}  ] [ c_t ]   [ a_{t−1} ]

By Blahut's theorem, the sequence (a_i)_{i≥0} has linear complexity t, hence only 2t coefficients suffice for the Berlekamp/Massey algorithm to recover the minimal polynomial Λ. In the presence of errors in some of the evaluations, this fails.

2. SPARSE INTERPOLATION CODES
Definition 1. Let K be a field, let 0 < n ≤ m be two integers, and let x_0, …, x_{n−1} be n distinct elements of K. A sparse polynomial evaluation code of parameters (n, T) over K is defined as the set

  C(n, T) = {(f(x_0), f(x_1), …, f(x_{n−1})) : f ∈ K[z] is t-sparse with t ≤ T and deg f < m}.

In order to benefit from the Blahut/Ben-Or/Tiwari algorithm for error-free interpolation, we will consider, until Section 5, the special case where the evaluation points are consecutive powers of a primitive m-th root of unity α ∈ K: x_i = α^i.
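The linear-recurrence property behind Theorem 2 can be checked numerically. The sketch below is not from the paper: the field GF(101), the root α = 2, and the example polynomial are illustrative assumptions chosen so that the arithmetic stays small.

```python
# Numerical check of the linear-recurrence property of Theorem 2,
# over GF(101) with alpha = 2 (illustrative choices, not from the paper).
# For f(x) = 3x^5 + 7x^30, the sequence a_i = f(alpha^i) satisfies the
# length-2 recurrence whose generator is Lambda(z) = (z - b_1)(z - b_2).

p, alpha = 101, 2
exps, coeffs = [5, 30], [3, 7]
b = [pow(alpha, e, p) for e in exps]          # b_j = alpha^{e_j}
a = [sum(c * pow(bj, i, p) for bj, c in zip(b, coeffs)) % p
     for i in range(10)]

# Lambda(z) = z^2 - (b_1 + b_2) z + b_1 b_2, hence
# a_{i+2} = (b_1 + b_2) a_{i+1} - b_1 b_2 a_i  (mod p).
s1, s0 = (b[0] + b[1]) % p, b[0] * b[1] % p
for i in range(8):
    assert a[i + 2] == (s1 * a[i + 1] - s0 * a[i]) % p
print("length-2 recurrence holds: linear complexity equals the sparsity")
```

Ten sequence values are used here only to exercise the recurrence several times; as stated above, 2t values already determine the generator.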
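The four decoding steps can likewise be sketched end to end. The prime field GF(101), the primitive root α = 2 (of order m = 100), and the example polynomial are again illustrative assumptions. For brevity, step 1 solves a t × t Hankel system for the generator Λ (the Berlekamp/Massey algorithm computes the same minimal generator without assuming t is known), steps 2–3 find the roots by brute-force search over the powers of α, and step 4 solves the transposed Vandermonde system by Gaussian elimination.

```python
# Sketch of the error-free Blahut/Ben-Or/Tiwari pipeline over GF(101)
# with alpha = 2, a primitive root, so m = 100.  Field, alpha, and the
# example polynomial are illustrative choices, not taken from the paper.

p, alpha, m = 101, 2, 100          # GF(p), primitive m-th root of unity

def solve_mod(A, v, p):
    """Solve A x = v over GF(p) by Gauss-Jordan (A assumed invertible)."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)       # Fermat inverse of the pivot
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def sparse_interpolate(a, t):
    """Recover [(e_j, c_j)] of a t-sparse f from a_i = f(alpha^i), i < 2t."""
    # Step 1: generator of the linearly recurring sequence, via the
    # Hankel system  a_{j+t} = lam_0 a_j + ... + lam_{t-1} a_{j+t-1}.
    H = [[a[j + i] for i in range(t)] for j in range(t)]
    lam = solve_mod(H, [a[j + t] for j in range(t)], p)
    def Lam(z):                     # Lambda(z) = z^t - sum_i lam_i z^i
        return (pow(z, t, p)
                - sum(l * pow(z, i, p) for i, l in enumerate(lam))) % p
    # Steps 2-3: the roots of Lambda among the powers of alpha directly
    # give the exponents e_j (a brute-force discrete-logarithm search).
    exps = [e for e in range(m) if Lam(pow(alpha, e, p)) == 0]
    # Step 4: transposed t x t Vandermonde system for the coefficients.
    b = [pow(alpha, e, p) for e in exps]
    V = [[pow(bj, i, p) for bj in b] for i in range(t)]
    return list(zip(exps, solve_mod(V, a[:t], p)))

# Example: f(x) = 3x^5 + 7x^30 + 10x^72, so t = 3 and 2t = 6 values suffice.
f_terms = [(5, 3), (30, 7), (72, 10)]
a = [sum(c * pow(alpha, e * i, p) for e, c in f_terms) % p for i in range(6)]
print(sparse_interpolate(a, 3))    # -> [(5, 3), (30, 7), (72, 10)]
```

Note how the sketch consumes exactly 2t = 6 evaluations, matching the error-free bound quoted above; a single corrupted a_i would derail the Hankel (or Berlekamp/Massey) step, which is precisely the failure mode the codes of Section 2 address.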