Intra- Multiplication of Polynomials Given in Various Polynomial Bases

S. Karamia, M. Ahmadnasabb, M. Hadizadehd, A. Amiraslanic,d

aDepartment of Mathematics, Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran bDepartment of Mathematics, University of Kurdistan, Sanandaj, Iran cSchool of STEM, Department of Mathematics, Capilano University, North Vancouver, BC, Canada dFaculty of Mathematics, K. N. Toosi University of Technology, Tehran, Iran

Abstract

Multiplication of polynomials is among key operations in computer algebra which plays important roles in developing techniques for other commonly used polynomial operations such as division, evaluation/interpolation, and factorization. In this work, we present formulas and techniques for polynomial multiplications expressed in a variety of well-known polynomial bases without any change of basis. In particular, we take into consideration degree-graded polynomial bases including, but not limited to orthogonal polynomial bases and non-degree-graded polynomial bases including the Bernstein and Lagrange bases. All of the described polynomial multiplication formulas and tech- niques in this work, which are mostly presented in matrix-vector forms, preserve the basis in which the polynomials are given. Furthermore, using the results of direct multiplication of polynomials, we devise techniques for intra-basis polynomial division in the polynomial bases. A generalization of the well-known “long division” algorithm to any degree-graded polynomial basis is also given. The proposed framework deals with matrix-vector computations which often leads to well-structured matrices. Finally, an application of the presented techniques in constructing the Galerkin repre- sentation of polynomial multiplication operators is illustrated for discretization of a linear elliptic problem with stochastic coefficients. Keywords: Polynomial arithmetic, Degree-graded & non degree-graded polynomial bases, Polynomial multiplication, Stochastic Galerkin matrices, Polynomial division 2010 MSC: 15B99, 08A40, 33C47, 80M10 arXiv:2108.01558v1 [math.NA] 3 Aug 2021 1. Introduction

There is an important and ongoing thread of research into polynomial computation using polyno- mial bases other than the standard power basis. Such problems have interesting applica- tions in several areas including approximation theory [10, 20, 21, 22], cryptography [26], coding [37] and computer science [18]. The investigations in [1, 14, 36] specifically involve computation with polynomials expressed in the Lagrange basis, or, in other words, polynomial computation directly by values.

∗Corresponding author Email address: [email protected] (A. Amiraslani)

Preprint submitted to Linear Algebra and its Applications August 4, 2021 A key motivation for this interest in polynomial computation using alternatives to the power basis is that conversion between bases can be unstable (more commonly so from other bases to the ) and the instability increases with the degree. (See e.g. [23]). The complications arising from such computations motivate explorations of hybrid symbolic-numeric techniques for polynomial computation that are relevant to researchers interested in computer algebra and numerical analysis. For instance, investigations in [8] and [24] show that working directly in the Lagrange polynomial basis is both numerically stable and efficient, more stable and efficient than had been heretofore credited widely in the numerical analysis community. These and similar results strengthen the motivation for examining algorithms for direct manipulation of polynomials given in polynomial bases other than the standard monomial power basis. Figure 1 summarizes this idea which we intend to use here as well. This work attempts the dashed road. The symbols P , S, PM and

SM stand for the “Problem” to be solved in an arbitrary basis, the “Solution” in the same basis, the “Polynomial after conversion to Monomial basis”, and the “Solution in the Monomial basis”, respectively. P 99K S ⇓ ⇑

PM ⇒ SM

Figure 1 This work is mainly geared toward implementing methods for direct multiplication of polynomials given in polynomial bases. To this end, we look at two important families of polynomial bases: degree-graded polynomial bases such as orthogonal polynomial bases and Newton basis, and non- degree-graded polynomial bases including the Lagrange and Bernstein polynomial bases. From there and as an important application, we devise techniques for direct division of polynomials given in polynomial bases. That includes a generalization of the well-known “long division” method to degree-graded polynomial bases. Most of the techniques implemented in this work are based on matrix-vector representations. This enables us to use some known and straightforward techniques from linear algebra which make things a lot easier to follow and, if necessary, extend. Due to the importance and applications of Chebyshev basis in various disciplines such as Electri- cal Engineering, direct multiplication of polynomials in that orthogonal polynomial basis has been studied fairly exhaustively. The paper [6] deals with the operation of direct multiplication of polyno- mials in Chebyshev form using two approaches: a direct multiplication of polynomials in Chebyshev form as well as another approach based on the discrete cosine transform (DCT). In [28], the same problem has been also addressed through the well-known Karatsuba’s algorithm. This result is inter- esting since it provides a more practicable algorithm, avoiding conversions between bases. Notably, [20] completely reduces the multiplication in Chebyshev form to the one in monomial basis. More generally, in [34], the authors present a general method that allows the derivation of the expansion coefficients of the product of two orthogonal functions provided the generating function is known.

Besides, writing the product of two polynomials Pn(x) and Pm(x) of degrees m and n, respectively, given in an orthogonal polynomial basis as the linear expansion of polynomials in the same basis is a well-known problem called ”the linearization of the product”. The goal is to find the coefficients Pn+m gmnk such that Pn(x)Pm(x) = k=0 gnmkPk(x). Some methods and approaches for computing the gnmk have been developed so far. For classical families of orthogonal polynomial bases, explicit expressions have been obtained, usually in terms of generalized hypergeometric series, using impor-

2 tant characterizing properties: recurrence relations, generating functions, orthogonality weights, etc. (see. e.g. [34, 4, 29, 35] and references therein). We intend to find direct multiplication of polynomials in degree-graded as well as non-degree- graded polynomial bases. In particular and in comparison with [34], all we need to know here about an orthogonal polynomial basis such as but not limited to the Chebyshev polynomial basis (and other degree-graded polynomial bases for that matter) are the coefficients of its “three-term recurrence relation”. However, to the best of our knowledge, there has not been any comprehensive study in the literature on direct multiplication of polynomials in various polynomial bases. As such, it is important to implement algorithms to handle direct multiplication on polynomials in such bases. This is for instance of great interest to rigorous computing which aims to guarantee numerical approximation of functions through certified bounds on the results. There are also some prominent works on direct division of polynomials given in polynomial bases which we briefly mention here. In particular, [5] uses the “Comrade matrix” to simultaneously determine the greatest common divisor as well as the quotient and remainder of the division of two polynomials given in a degree-graded polynomial basis (the author only mentions orthogonal bases, but the idea can actually be extended to other degree-graded polynomial bases as well). The paper [11], extends the classical Euclidean division algorithm to the Bernstein polynomial basis using an algorithm based on a simple isomorphism that converts the division problems from the Bernstein polynomial basis to an equivalent problem in the monomial basis. Moreover, [30] devises algorithms similar to the classical long division method for direct division of polynomials in non-degree-graded bases (i.e., the Lagrange and Bernstein polynomial bases). Note that the aim of the current study is to provide a direct division approach that does not require basis conversion, therefore our proposed techniques here are all based on the presented direct multiplication methods in matrix-vector forms. The structure of this paper is as follows. We discuss direct multiplication of polynomials in degree-graded polynomial bases in Section 2. To this end, the multiplication of a basis element of a degree-graded polynomial basis by a basis vector of the same polynomial basis with a given degree is represented in a matrix-vector form. Section 3 concerns the direct polynomial multiplication formulas in more famous non-degree-graded polynomial bases, i.e. the Bernstein and Lagrange polynomials bases. To derive those formulas, we show how to rewrite a polynomial given in a non-degree-graded polynomial basis of certain degree as a polynomial in the same basis with a higher (or possibly lower) degree using “lifting” (or possibly “dropping”) matrices. In Section 4, using the obtained results in previous sections, we present intra-basis polynomial division techniques in degree-graded and non-degree-graded polynomial bases. We also extend the well-known “long division algorithm” from monomial basis to all degree-graded polynomial bases and give an algorithm for that. Finally, we illustrate in Section 5 an application of the presented techniques which arises in the discretization of linear differential equations with random coefficient functions.

2. Direct Multiplication of Polynomials in Degree-Graded Bases

∞ Consider a family of real polynomials {φj(x)}j=0 with φj(x) of degree j which satisfy a three-term recurrence relation:

xφj(x) = αjφj+1(x) + βjφj(x) + γjφj−1(x), j = 0, 1, 2,..., (2.1)

3 kj where the αj, βj, γj are real, φ−1(x) = 0, φ0(x) = 1, and αj = 6= 0, with kj is the leading kj+1 coefficient of φj(x). ∞ Generally, any sequence of polynomials {φj(x)}j=0, with φj(x) of degree j is degree-graded, sat- isfies (2.1) and obviously forms a linearly independent set, but is not necessarily orthogonal [2]. For instance, one can easily observe that the standard basis and Newton basis also satisfy (2.1) with

αj = 1, βj = 0, γj = 0 and αj = 1, βj = τj, γj = 0, respectively, where the τj are the nodes when the function values are given (see e.g. [3] for more details).

2.1. Multiplication of Basis Elements

T Take Φn(x) = [φ0(x), ··· , φn(x)] , the vector form of a degree-graded polynomial basis of degree n, with recurrence coefficients αi, βi and γi (i = 0, ··· , n). The following important lemma defines a matrix operator that is used in determining the result of the multiplication of a basis element (i.e., a degree-graded polynomial) of a certain degree by the basis vector of a given degree.

Lemma 2.1. For every non-negative integers k and n,

φk(x)Φn(x) = Hn,kΦn+k(x), (2.2) such that Hn,k is a size (n + 1) × (n + k + 1) matrix that has the following entries:  1, i = 1, j = k + 1,  Hn,k[i, j] = 0, j ≤ |k + 1 − i| or j ≥ k + 1 + i,  1  (αj−2Hn,k[i − 1, j − 1] + (βj−1 − βi−2)Hn,k[i − 1, j] + γj Hn,k[i − 1, j + 1] − γi−2Hn,k[i − 2, j]), elsewhere, αi−2 (2.3) where the entries Hn,k[i, j] are set to 0 for any negative or zero index or when j > n + k + 1.

Proof. For a basis element φk(x) of a fixed degree k of a degree-graded polynomial basis, we have

φ0(x)φk(x) = φk(x). (2.4)

Multiplying (2.4) by x, we get

xφ0(x)φk(x) = xφk(x).

From the recurrence relation, we have   α0 φ1(x) + β0 φ0(x) φk(x) = αk φk+1(x) + βk φk(x) + γk φk−1(x),

which gives 1   φ1(x)φk(x) = αk φk+1(x) + (βk − β0 )φk(x) + γk φk−1(x) . (2.5) α0

Similarly, multiplying (2.5) by x yields 1  φ2(x)φk(x) = αk+1 αk φk+2(x) + αk (βk+1 + βk − β1 − β0 )φk+1(x) α0 α1 (2.6) + ((βk − β0 )(βk − β1 ) − γ1 α0 + γk αk−1 + γk+1 αk )φk(x)  + γk (βk + βk−1 − β1 − β0 )φk−1(x) + γk γk−1 φk−2(x) .

4 0 (a) k ≤ n: The only nonzeronz = 411 entry of the first column appears in the (k + 1)-st row.

(b) k ≥ n: The first (k − nnz) = columns 256 of the matrix are zero vectors.

Figure 2: Non-zero structure of the matrices Hn,k for a general degree-graded polynomial bases.

Continuing this process, we can obtain φ3(x)φk(x), ..., φn(x)φk(x) and write the results in a matrix- vector form. Then the system (2.2) is obtained, where Hn,k is given by (2.3) and Φn+k(x) = T [φ0(x), φ1(x), ··· , φn+k(x)] .

The operational matrix Hn,k enables us to determine and show the result of the multiplication of a degree-graded polynomial in a given basis by a basis vector of a given degree using only the parameters of the three-term recurrence relation in that basis.

Figure 2 displays the sparsity layout (non-zero structure) of the matrix Hn,k for any k ≤ n and k ≥ n, respectively. The existing recursion relations as well as the pattern in the number of nonzero entries of each Hn,k that must be independently generated enable us to save in computational costs.

The number of entries of Hn,k that need to be independently computed is shown in Table 1. This can be summarized in the following proposition:

Proposition 1. Let n and k be two non-negative integers. Then

(a) the structure of each H0,k is as follows

h i h i H0,k = 01×k 1 and H0,0 = 1

and each Hk,0 is a (k + 1) × (k + 1) identity matrix Ik+1;

5 φ0 φ1 φ2 φ3 ··· φn Φ0 H0,0(0) H0,1(0) H0,2(0) H0,3(0) ... H0,n(0) Φ1 H1,0(0) H1,1(3) H1,2(0) H1,3(0) ... H1,n(0) Φ2 H2,0(0) H2,1(3) H2,2(5) H2,3(0) ... .. Φ3 H3,0(0) H3,1(3) H3,2(5) H3,3(7) ...... Hn−2,n(0) .. .. Φn−1 Hn−1,0(0) Hn−1,1(3) Hn−1,2(5) Hn−1,3(7) . . Hn−1,n(0) Φn Hn,0(0) Hn,1(3) Hn,2(5) Hn,3(7) ...... Hn,n(2n + 1)

Table 1: The number of nonzero entries of each Hi,j , (i, j = 0, . . . , n) that need to be independently computed is shown inside the parentheses in front of each matrix.

(b) the matrix Hn−1,k is a submatrix of Hn,k, in the form of " # Hn−1,k 0n×1 Hn,k = (2.7) Hn,k[n + 1, 1 : n + k] Hn,k[n + 1, n + k + 1]

n using MATLAB colon notation and the operational cost for computing the sequence {Hj,k}j=1 is the same as the one for Hn,k;

(c) the last rows of Hi,j and Hj,i, for i 6= j, are the same. There is no need to separately calculate

the nonzero entries of Hi,j, when j > i, which leads to operational savings in computing the

matrices Hi,j.

For further clarification, the following example helps better illustrates the structure of the matrices

Hn,k. Example 2.2. In this example, we consider the alternative orthogonal Legendre polynomial basis also known as the Chelyshkov polynomial basis of second kind (see e.g. [12] ). It is a degree-graded polynomial basis on the interval [0, 1] with weight function w(x) = x, and it can be expressed in terms of Jacobi polynomials. Chelyshkov polynomials of second kind are also related to hypergeometric functions and orthogonal exponential polynomials. In recent years, Chelyshkov polynomials have found applications in various fields of approximation theory and numerical analysis, see for example i+2 [13, 32]. According to [31], the recurrence coefficients for this polynomials are αi = 4i+6 , βi = 2(i+1)2 i (2i+3)(2i+1) and γi = − 4i+2 , for i = 0, ..., n.

In what follows, we construct the sample matrices Hn,k, (n, k = 0, 1, 2) for the Chelyshkov poly- nomial basis. Note that H0,0 = I1, H1,0 = I2, H2,0 = I3 and h i h i H0,1 = 0 1 , H0,2 = 0 0 1 ,

" # " # " # " # 1 H0,1 01×1 1 H0,2 01×1 H1,1 = 1 2 9 = 1 2 9 , H1,2 = 3 16 6 = , 2 5 10 2 5 10 5 35 7 H2,1[3, 1 : 4]

" # " # H1,1 02×1 H1,2 02×1 H2,1 = 3 16 6 , H2,2 = 1 32 24 32 50 . 0 5 35 7 3 105 35 63 63

6 2.2. Multiplication of Polynomials h i h i Suppose that Ξ(x) = ξ0 ξ1 ··· ξn−1 ξn Φn(x) and Ψ(x) = ψ0 ψ1 ··· ψm−1 ψm Φm(x) are two polynomials of degrees n and m, respectively, in a degree-graded basis Φ(x). The following lemma shows the result of direct multiplication of these two degree-graded polynomials in terms of the basis vector of degree m + n and in a matrix-vector form. Lemma 2.3. For the degree-graded polynomials Ξ(x) and Ψ(x) given above, h i Ξ(x)Ψ(x) = s0 s1 ··· sm+n Φm+n(x), where m+1 X sj = ψi−1Hm+n[i, j], j = 0, . . . , m + n i=1 in which h i h i s0 s1 ··· sm+n = ψ0 ψ1 ··· ψm−1 ψm Hm+n, (2.8) where n X Hm+n = ξiH˜ m+n,i, (2.9) i=0 and the H˜ m+n,k are (m + 1) × (m + n + 1) matrices with ˜ h i Hm+n,k = Hm,k 0(m+1)×(n−k) , k = 0, 1, ··· , n, (2.10)

where Hm,k is given by (2.3).

Proof. We can write n h i X Ξ(x)Ψ(x) = ψ0 ψ1 ··· ψm−1 ψm ξiφi(x)Φm(x). i=0

From Lemma 2.1, we have φi(x)Φm(x) = Hm+iΦm+i, for i = 0, 1, ··· , n. Moreover, we add

enough zero columns to each Hm+i to make them all of the same size, (m + 1) × (m + n + 1), and

call them H˜ m+n,i. We can now write the multiplication of the polynomials in the form of:   H˜ m+n,0  H˜  h i h i  m+n,1  Ξ(x)Ψ(x) = ψ0 ψ1 ··· ψm−1 ψm ξ0Im+1 ξ1Im+1 ... ξnIm+1  .  Φm+n(x),  .   .  H˜ m+n,n or equivalently h i Ξ(x)Ψ(x) = ψ0 ψ1 ··· ψm−1 ψm Hm+nΦm+n(x), Pn ˜ where Hm+n = i=0 ξiHm+n,i and the proof is complete.

Remark 2.4. An important observation here is that the last (n+1) columns of Hm+n in (2.9) form a

lower triangular matrix which we use later in Section 4.1. To construct H˜ m+n,0, H˜ m+n,1,..., H˜ m+n,n in (2.9), we can take advantage of the operational cost savings explained in Table 1. Remark 2.5. Note that (2.8) is symmetric with respect to the coefficients of Ξ(x) and Ψ(x), there- fore we can rewrite the right hand side of the equation by switching their coefficients and changing the matrix-vector sizes accordingly.

7 2.2.1. Powers of a Polynomial An important and fairly straightforward result of Lemma 2.3 is a formula for integer powers of a polynomial in a degree-graded basis: h i Corollary 2.6. If Ξ(x) = ξ0 ξ1 ··· ξn−1 ξn Φn(x), then for a positive integer ρ(> 1):

ρ ρ h i Y Ξ (x) = ξ0 ξ1 ··· ξn−1 ξn ( H(j×n))Φ(ρ×n)(x), (2.11) j=2

n X ˜ ˜ where H(j×n) = ξiH(j×n),i, (j = 2, ··· , ρ), and H(j×n),i is given by (2.10). i=0

(k) h i (k) h i Proof. Let s = s0 s1 ··· sk−1 sk and ξ = ξ0 ξ1 ··· ξk−1 ξk . We proceed by induction in ρ. When ρ = 2, we use the statement of Lemma 2.3 to get

n 2 (2n) (n) X ˜ (n) Ξ (x) = s Φ2n(x) = ξ ( ξiH2n,i)Φ2n(x) = ξ (H(2×n))Φ2n(x). i=0

Now, the inductive hypothesis for Ξρ−1(x) becomes

ρ−1 ρ ρ−1 (n) (n) Y Ξ (x) = Ξ(x)Ξ (x) = ξ Φn(x)ξ ( H(j×n))Φ((ρ−1)×n)(x). j=2

Using Lemma 2.3 and the notation of H(j×n), we arrive at

ρ ρ ρ (n) Y (n) Y Ξ (x) = ξ ( H(j×n))Φ((ρ−1)×n+n)(x) = ξ ( H(j×n))Φ(ρ×n)(x). j=2 j=2

2.2.2. Lower Degree Polynomial Approximation As an application of the vector-matrix representation of direct multiplication of polynomials in degree-graded polynomial bases, we intend to present a method for finding lower degree polynomial approximations of a polynomial that is given in a degree-graded polynomial basis. To this end, we need to use operational differentiation matrices. The idea of operational matrices in various polynomial bases is well-known and commonly used in numerical analysis community. An important family of such operational matrices is operational differentiation matrices in polynomial bases. The

following lemma from [2] gives the structure of the differentiation operational matrix Dφ,n, for a degree-graded polynomials basis Φn(x): h i Lemma 2.7. (From [2] ) If P (x) = a0 a1 ··· an−1 an Φn(x), then

  φ0(x) h i (k) k  .  P (x) = a0 a1 ··· an−1 an Dφ,n  .  , (2.12)   φn(x)

8 where   0 ··· 0  .  Dφ,n =  Q .  , (2.13)  φ  0 and Qφ is a size n lower triangular matrix that has the following structure for i = 1, ··· , n: ( i , i = j αi−1 Qφ[i, j] = 1 ((βj−1 − βi−1)Qφ[i − 1, j] + αj−2Qφ[i − 1, j − 1] + γjQφ[i − 1, j + 1] − γi−1Qφ[i − 2, j]) , i > j αi−1 (2.14)

Any entry, Qφ[i, j], with a negative or zero index is not considered in the above formula.

Among other things, this operational differentiation matrix enables us to find lower degree polyno- mial approximations of a polynomial given in a degree-graded polynomial basis. To make the case, we first look at the following example given in the Chebyshev polynomial basis:

P3 Example 2.8. Take P (x) = i=0 aiTi(x), where Ti(x) is the i-th degree Chebyshev polynomial 2 of first kind and the ai are undetermined coefficients. Suppose Q(x) = P (x). Our goal is to find P3 coefficients bi, in terms of the ai, for i = 0, ..., 3, such that Q(x) ' Y (x) = i=0 biTi(x) under the following constraints:

Y (k) = Q(k),Y 0(k) = Q0(k), k = −1, 1.

h i T Clearly, Y (x) = b0 b1 b2 b3 T3(x) = b T3(x). Let us assume for now that

h i T Q(x) = q0 q1 q2 q3 q4 q5 q6 T6(x) = q T6(x),

where Tn(x) shows the vector form of the Chebyshev polynomial basis of first kind of degree n.

We find and replace the coefficients qi using (2.11) later. For now, the first two constraints can be rewritten as T T T3 (k)b = T6 (k)q, k = −1, 1, and the next two constraints can be written as

T T (DT,3T3(k)) b = (DT,6T6(k)) q, k = −1, 1,

where DT,3 and DT,6 are the differentiation matrices for Chebyshev polynomials of degrees 3 and 6, respectively, with the following structures from (2.13):

     1         1   0 4      DT,3 =  0 4  , DT,6 =  3 0 6  .      3 0 6   0 8 0 8        0 8 0 8  5 0 10 0 10  0 12 0 12 0 12

9 Those four equations can be written as the following system of equations for b:

 T   T  T3 (−1) T6 (−1)  TT (1)   TT (1)   3  b =  6  q. (2.15)  T   T   (DT,3T3(−1))   (DT,6T6(−1))  T T (DT,3T3(1)) (DT,6T6(1))

It is known that, the recurrence coefficients for the Chebyshev polynomial basis of first kind are 1 1 α = 1, α = , β = 0 and γ = , (j = 1, 2, ··· ). 0 j 2 j j 2 The system (2.15) now becomes

 1 −1 1 −1   1 −1 1 −1 1 −1 1   1 1 1 1   1 1 1 1 1 1 1        b =   q,  0 1 −4 9   0 1 −4 9 −16 25 −36  0 1 4 9 0 1 4 9 16 25 36 which yields  1 0 0 0 −3 0 −8   1 0 0 0 −2 0    b =   q.  1 0 4 0 9  1 0 3 0

We can replace q using (2.11). From (2.9), one can obtain   a0 a1 a2 a3  1 a a + 1 a 1 a + 1 a 1 a 1 a  H =  2 1 0 2 2 2 1 2 3 2 2 2 3  , 6  1 1 1 1 1 1   2 a2 2 a1 + 2 a3 a0 2 a1 2 a2 2 a3  1 1 1 1 1 1 2 a3 2 a2 2 a1 a0 2 a1 2 a2 2 a3 and T h i q = a0 a1 a2 a3 H6.

That gives  2 1 2 2 2  a0 + 2 ( a1 + a2 + a3 )  2 a a + a a + a a   0 1 1 2 3 2   1 2   2 a0a2 + 2 a1 + a3a1    q =  2 a0a3 + a1a2 ,    a a + 1 a 2   3 1 2 2     a3a2  1 2 2 a3 and finally,  2 1 2 2 −7 2  a0 + 2 a1 − a2 2 a3 − 3 a3a1  2 a a + a a − a a  b =  0 1 1 2 3 2 .  1 2 2 9 2   2 a0a2 + 2 a1 + 5 a3a1 + 2 a2 + 2 a3  2 a0a3 + a1a2 + 3 a3a2

The above example is used to simply illustrate the process for setting up a system such as (2.15) in terms of the unknown vector of coefficients b, for a certain degree-reduction problem. Depending

10 on one’s strategy to approximate a given polynomial in a polynomial basis by another polynomial of a lower degree in the same basis, which is besides the point here, one will come up with constraint equations that lead to a system similar to (2.15). Obviously, the number of constraints dictates the form of this system (i.e., square or rectangular). What we would like to highlight and emphasize here is the flexibility and ease of use of this approach regardless of what is deemed the best approximation strategy.

Remark 2.9. A similar polynomial degree reduction scheme can be implemented for non-degree- graded polynomial bases as well. We refrain from discussing it in this work, but it is fairly straight- forward to extend the method from the degree-graded case to non-degree-graded bases using corre- sponding operational differentiation matrices (see Section 3.2.2) as well as formulas for direct mul- tiplications of polynomials (see Sections 3.3.1 and 3.3.2 for the Bernstein and Lagrange polynomials bases, respectively).

3. Direct Multiplication of Polynomials in Non-Degree-Graded Bases

In this section, we find direct multiplication formulas in more commonly used non-degree-graded n polynomial bases. In a non-degree-graded polynomial basis of degree n, i.e., {υj}j=0, each basis Pn element υj is itself of degree n. Moreover, j=0 υj = 1. In particular, we are interested in two of the most important non-degree-graded polynomial bases, namely the Bernstein polynomial basis and the Lagrange polynomial basis.

3.1. Two Important Non-Degree-Graded Bases First, we recall some classical definitions from [17, 8] (see these references for further details and properties of these polynomials). We start with the Bernstein polynomial basis: A Bernstein polynomial (also called B´ezierpolynomial) defined over the interval [a, b] has the form

n(x − a)j(b − x)n−j b (x) = , j = 0, . . . , n. (3.1) j,n j (b − a)n Note that this is not a typical scaling of the Bernstein polynomials, however this scaling makes matrix notations related to this basis slightly easier to write. Observe that the Bernstein polynomials

are nonnegative in [a, b], i.e., bj,n(x) ≥ 0, for all x ∈ [a, b]. For a function f(x) ∈ C[a, b], a polynomial approximation P (x) of degree n written using Bernstein basis is of the form n X T P (x) = cjbj,n(x) = c bn, (3.2) j=0

T T (b−a)j where bn(x) = [b0,n(x), ··· , bn,n(x)] , c = [c0, ··· , cn] with cj = f(a + n ). The next important non-degree-graded basis which we are interested in is the Lagrange polynomial basis:

Suppose that a function f(x) is sampled at n+1 distinct points τ0, τ1, . . . , τn, and write pj := f(τj). n This may also be shown as {(τj, pj)}j=0. The Lagrange basis polynomials of degree n are then defined by `n(x)wn,j Lj,n(x) = , j = 0, 1, . . . , n (3.3) x − τj

11 Qn 1 Qn where the “weights” wn,j are wn,j = , and `n(x) = (x − τm). m=0, m6=j τj −τm m=0 Then P (x) can be expressed in terms of its samples in the following form

T P (x) = p Ln(x), (3.4)

T T where p = [p0, ··· , pn] and Ln(x) = [L0,n(x), ··· ,Ln,n(x)] .

3.2. Lifting and Dropping Matrices

It is known that the elements of a non-degree-graded polynomial basis do not form three-term recurrence relations similar to (2.1). However, they have other important and interesting properties that enable us to link the basis elements of non-degree-graded polynomial bases of different degrees. An important property of non-degree-graded polynomial bases is that one can write a given poly- nomial in a non-degree-graded polynomial basis of degree n as a polynomial in the same basis of degree m(> n). This seems trivial in degree-graded polynomial bases, but as shown in this section, it is quite essential in finding formulas for direct multiplication of polynomials in non-degree-graded polynomial bases. We refer to this property as “lifting”. Sometimes and under certain circum- stances, we can also reverse the lifting process and write a polynomial given in a non-degree-graded polynomial basis of a higher degree, m, as a polynomial in the same basis of a lower degree, n(< m). This property is known as “dropping”. In particular, we use the dropping property in Section 4.4 to devise a technique for direct division of polynomials in the Lagrange polynomial basis.

3.2.1. Lifting Matrices The following auxiliary lemma, which is basically a summarization of [11], describes an important property of the Bernstein polynomial basis:

Lemma 3.1. (From [11]) If bj,n(x) is the j-th Bernstein polynomial basis of a degree n, then for j = 0, . . . , n  j + 1  n + 1 − j  b (x) = b (x) + b (x). j,n n + 1 j+1,n+1 n + 1 j,n+1

In other words, we can define an (n + 1) × (n + 2) “lifting matrix” Tn,n+1 as

 n+2−i  n+1 , i = j  i Tn,n+1[i, j] = n+1 , i = j − 1 (3.5)  0, elsewhere, for i = 1, ··· , n + 1 and j = 1, ··· , n + 2, such that

bn(x) = Tn,n+1bn+1(x). (3.6)

T It is clear from (3.6) that if we want to write a polynomial, P (x) = c bn(x), given in the Bernstein polynomial basis of degree n in terms of a polynomial in the same basis of degree m(> n), we have T T T P (x) = d bm(x), where d is a vector of size (m + 1) given by d = Tn,mc with

Tn,m = Tn,n+1Tn+1,n+2 ··· Tm−2,m−1Tm−1,m, (3.7) and Tn,m is the lifting matrix of size (n + 1) × (m + 1).

12 Lemma 3.2. The lifting matrix Tn,m of the Bernstein polynomial basis is an upper triangular matrix with the entries:  n m−n (i−1)( j−i )  m , i ≤ j Tn,m[i, j] = (j−1) (3.8)  0, elsewhere.

Proof. For m = n + 1, using (3.5), we can easily write

 n 1 (i−1)(j−i)  n+1 , i ≤ j Tn,n+1[i, j] = (j−1)  0, elsewhere,

therefore, (3.8) holds for m = n + 1. Note that in this case, the entries are nonzero only when i = j or i = j − 1. Now, we proceed by induction. We assume the validity of the lemma for m = k(> n) and transit to m = k + 1. A little computation including matrix multiplications shows that

 n k−n+1 (i−1)( j−i )  k+1 , i ≤ j Tk,k+1[i, j] = Tn,k Tk,k+1 = (j−1)  0, elsewhere,

i.e., the formula holds for k + 1 and the proof is complete.

Corollary 3.3. The entries of Tn,m satisfy the following relations for i = 1, ··· , n + 1 and j = 1, ··· , m + 1:

Tn,m[i, j] = Tn,m[n − i + 2, m − j + 2]. (3.9)

The lifting matrices in the Lagrange polynomial basis can also be obtained similarly through the following property from [8]:

Lemma 3.4. If we eliminate a node, say τk, of a Lagrange polynomial basis of degree n with nodes

τi, we have `n−1(x)wn−1,i Li,n−1(x) = , i = 0, 1, . . . , n; i 6= k x − τi

`n(x) where `n−1(x) = and wn−1,i = wn,i(τi − τk), then x−τk

wn,i Li,n−1(x) = − Lk,n(x) + Li,n(x). wn,k

That means we can define an n × (n + 1) “lifting matrix” Rn−1,n as

 1, i = j 6= k  R [i, j] = − wn,i−1 , j = k + 1 (3.10) n−1,n wn,k  0, elsewhere, for i = 1, ··· , n, and j = 1, ··· , n + 1, such that

Ln−1(x) = Rn−1,nLn(x). (3.11)

Remark 3.5. Without loss of generality and to make things easier to implement and follow, from this point on, we assume that the added (or eliminated) node is the last one (i.e., k = n). Obviously, we can always rearrange the nodes as we wish to make that happen.

13 T Now, it is clear from (3.11) that if we want to write a polynomial, P (x) = p Ln(x), given in the Lagrange polynomial basis of degree n as a polynomial in the Lagrange polynomial basis of degree T T T m(> n), we have P (x) = q Lm(x), where q is a vector of size (m + 1) given by q = Rn,mc with

Rn,m = Rn,n+1Rn+1,n+2 ··· Rm−2,m−1Rm−1,m,

and Rn,m is the lifting matrix of size (n + 1) × (m + 1). Similarly, the lifting matrix, Rn,m, can be constructed as follows:

Lemma 3.6. The lifting matrix Rn,m of the Lagrange polynomial basis is given by h i Rn,m = In+1 K , (3.12) where In+1 is the identity matrix of size n + 1 and K is an (n + 1) × (m − n) matrix as:

j−1 wj+n,i−1 1 X K[i, j] = − − w K[i, j − r], w w j+n,j+n−r j+n,j+n j+n,j+n r=1 for i = 1, ··· , n + 1 and j = 1, ··· , m − n.

Proof. We can use proof by induction and the process is quite similar to the proof of Lemma 3.2.

Remark 3.7. The lifting process for degree-graded polynomial bases is trivial and consists of simply adding higher degree basis elements with zero coefficients.

3.2.2. Dropping Matrices To find formulas for dropping matrices in the Bernstein and Lagrange polynomial bases, we require the operational differentiation matrices in those bases. The next useful lemmas from [2] give the

structure of the operational differentiation matrices DB,n and DL,n for the Bernstein and Lagrange polynomial bases, respectively.

Lemma 3.8. (From [2]) If P (x) is given by (3.2), then the k-th order derivative of P (x) in terms

of the k-th power of differentiation matrix DB,n is given by

(k) T k P (x) = c DB,nbn,

where DB,n is a size n + 1 tridiagonal matrix that has the following structure for i = 1, ··· , n + 1.

 n−2(i−1)  a−b , i = j  i DB,n[i, j] = a−b , i = j − 1 (3.13)  (n−i+2) − a−b , i = j + 1 Lemma 3.9. (From [2]) If P (x) is given by (3.4), then the k-th order derivative of P (x) is given by

(k) T k P (x) = p DL,nLn(x), where the differentiation matrix DL,n is a size n + 1 matrix with the elements:

( n P 1 , i = j k=0,k6=i−1 τi−1−τk DL,n[i, j] = (3.14) wi−1 , i 6= j wj−1(τj−1−τi−1)

14 T Now, let P (x) = c bm given in the Bernstein polynomial basis of degree m with an undetermined coefficients vector c. We want to find c such that P (x) is actually of degree n(< m). To this end, we must impose the following degree constraints on P (x):

(k) T k P (x) = c DB,nbm(x) = 0, (3.15)

T k for k = n + 1, n + 2, ··· , m, where DB,n is given by (3.13). From (3.15), we have (DB,n) c = 0, T k where 0 is of size m + 1. This means c must be an eigenvector of the matrices (DB,n) , k = n + 1, n + 2, ··· , m. In other words, we can write

c = Sm,nv, where v is an undetermined constant vector of size n + 1 and Sn,m is an (m + 1) × (n + 1) “dropping matrix” given by the following lemma:

Lemma 3.10. The Bernstein dropping matrix from degree m to degree n(< m) is given by m − i + 1 S [i, j] = , m,n j − 1 for i = 1, ··· , m + 1 and j = 1, ··· , n + 1.

T Proof. Let sk denote the k−th column of Sm,n, for k = 1, ··· , n + 1. It yields DB,ns1 = 0 and (m − k + 2) DT s = s , k = 2, ··· , n + 1. B,n k a − b k−1

This in turns implies that ( T 2 0, k = 1, 2 (DB,n) sk = (m−k+2)(m−k+1) (a−b)2 sk−1, k = 3, ··· , n + 1

We can continue like this to get ( T n 0, k = 1, 2, ··· , n (DB,n) sk = (m−k+2)(m−k+1)···(m−k−n+3) (a−b)n sk−1, k = n + 1

T n+1 Eventually, (DB,n) sk = 0, for k = 1, 2, ··· , n + 1. This means each sk is an eigenvector of T n+1 T ` (DB,n) associated with its 0 eigenvalue. Obviously, (DB,n) sk = 0, for any ` > n + 1 and k = 1, 2, ··· , n + 1.

T To illustrate this, if we want P (x) = c b6(x) to be a degree 3 polynomial, we must have c = S6,3v, where v is an undetermined vector of size 4, and

 1 6 15 20   1 5 10 10       1 4 6 4    S6,3 =  1 3 3 1 .      1 2 1     1 1  1

15 Suppose that P (x) is a polynomial that can be written in the Bernstein polynomial basis of degrees T T n and m(> n) as P (x) = c bn(x) and P (x) = d bm(x), respectively. We can lift the first one T and turn it into P (x) = c Tn,mbm(x). We can also drop the second one since we know it can be T T written in a basis whose degree is less than m. It becomes P (x) = v Sm,nbm(x). Since these two polynomials are the same, we can find v in terms of c or vice-versa. In fact we have

T Sm,nv = Tn,mc,

† T T † T which yields v = Sm,nTn,mc, or equivalently c = (Tm,n) Sn,mv depending on the given and un- known vectors. Here † is used to represent the generalized inverse matrix.

T Similarly, take P (x) = p Lm(x) given in the Lagrange polynomial basis of degree n with an undetermined coefficients vector p. We now want to find p such that P (x) is actually of degree n(< m). To this end, we must impose the following degree constraints on P (x):

(k) T k P (x) = p DL,nLm(x) = 0, (3.16)

T k for k = n + 1, n + 2, ··· , m, where DL,n is given by (3.14). From (3.16), we conclude (DL,n) p = T k 0m+1. This means p must be an eigenvector of the matrices (DL,n) , k = n + 1, n + 2, ··· , m. In other words, we can write

p = Zm,nv,

where v is an undetermined constant vector of size n + 1 and Zm,n is an (m + 1) × (n + 1) “dropping matrix” given by the following lemma: Lemma 3.11. The Lagrange dropping matrix from degree m to degree n(< m) is given by Qm k=m−j+2(τi−1 − τk) Zm,n[i, j] = Qm , (3.17) k=m−j+2(τm−j+1 − τk) for i = 1, ··· , m + 1 and j = 1, ··· , n + 1. Any term with a negative index is set to zero.

Proof. The proof is quite similar to that of Lemma 3.10.

T For instance, if we want P (x) = p L4(x) to be a degree 2 polynomial, we must have p = Z4,2v, where v is an undetermined vector of size 3, and

 τ −τ (τ0−τ3)(τ0−τ4)  1 0 4 τ3−τ4 (τ2−τ3)(τ2−τ4)  1 τ1−τ4 (τ1−τ3)(τ1−τ4)   τ3−τ4 (τ2−τ3)(τ2−τ4)   τ −τ  Z4,2 =  1 2 4 1 .  τ3−τ4     1 1  1

Now suppose that P (x) is a polynomial that can be written in the Lagrange polynomial basis of T T degrees n and m(> n) as P (x) = p Ln(x) and P (x) = q Lm(x), respectively. We can lift the first T one and turn it into P (x) = p Rn,mLm(x). We can also drop the second one since we know it can T T be written in a basis whose degree is less than m. It becomes P (x) = v Zm,nLm(x). Since these two polynomials are the same, we can find v in terms of p or vice-versa. In fact we have

T Zm,nv = Rn,mp,

† T T † T which yields v = Zm,nRm,np or p = (Rm,n) Zn,mv depending on the given and unknown vectors.

16 Remark 3.12. Similar to the lifting process, the dropping process for degree-graded polynomial bases is also trivial. We can drop a degree m degree-graded polynomial with undetermined coefficients to a degree n(< m) polynomial in the same basis by simply setting the coefficients ai = 0, for i =

n, n + 1, ··· , m. It is easy to verify that the dropping matrix in this case is Im+1(1 : m + 1, 1 : n + 1), i.e., the first n + 1 columns of the identity matrix of size m + 1.

3.3. Multiplication of Non-Degree-Graded Polynomials

We are now ready to derive the formulas for direct multiplications of polynomials for the two important non-degree-graded polynomial bases.

3.3.1. Bernstein Basis Multiplication

If bi,m(x) is the i-th basis element of the Bernstein polynomial basis of degree m and bj,n(x) is the j-th basis element of the Bernstein polynomial basis of degree n, then it fairly straightforward to observe that mn i j bi,m(x)bj,n(x) = m+n bi+j,m+n(x), i+j

where bi+j,m+n(x) is the (i + j)-th basis element of the Bernstein polynomial basis of degree m + n. The following lemma is an important result from Lemma 3.2 that is used in devising a polynomial multiplication formula in the Bernstein polynomial basis:

Lemma 3.13. The multiplication of bi,m(x) by bj,n(x) for i = 0, 1, ··· , m and j = 0, 1, ··· , n can be written as

bi,m(x)bj,n(x) = Tm,m+n[i + 1, i + j + 1]bi+j,m+n(x),

where Tm,m+n is the lifting matrix given by (3.8).

Proof. Suppose that h i Ξ(x) = ξ0 ξ1 ··· ξm−1 ξm bm(x), and h i Ψ(x) = ψ0 ψ1 ··· ψn−1 ψn bn(x), are two polynomials given in the Bernstein polynomial basis of degrees m and n over [a, b], re-

spectively. Using (3.7), the lifting matrix for Ξ(x) is Tm,m+n and the lifting matrix for Ψ(x) is

Tn,m+n.

In this position, using Lemma 3.2 and (3.8), we can state the following Lemma on the multiplication of two polynomials given in the Bernstein polynomial basis:

Lemma 3.14. For the Bernstein polynomials Ξ(x) and Ψ(x) given above,

Ξ(x)Ψ(x) = cbm+n(x), (3.18) h i where c, a size m+n+1 row vector, can be found either in the form of c = ξ0 ξ1 ··· ξm−1 ξm Γψ,m+n with

Γψ,m+n[i, j] = ψj−iTm,m+n[i, j],

17 h i and Tm,m+n is obtained from (3.8) and ψj−i = 0, wherever j < i, or as c = ψ0 ψ1 ··· ψn−1 ψn Γξ,m+n where

Γξ,m+n[i, j] = ξj−iTn,m+n[i, j], (3.19) with Tn,m+n from (3.8) and ξj−i = 0, wherever j < i.

h i h i Example 3.15. Let us assume P1(x) = ξ0 ξ1 ξ2 ξ3 b3(x) and P2(x) = ψ0 ψ1 ψ2 b2(x) are two polynomials given in the Bernstein polynomial basis over a certain interval [a, b] of degrees h i 3 and 2, respectively, we want to find c = c0 .... c5 so that P3(x) = P1(x)P2(x) = cb5(x).

 2 1   2 1  1 5 10 ψ0 5 ψ1 10 ψ2  3 3 3   3 ψ 3 ψ 3 ψ  From (3.8), we find T =  5 5 10  and Γ =  5 0 5 1 10 2  , 3,5  3 3 3  ψ,5  3 3 3   10 5 5   10 ψ0 5 ψ1 5 ψ2  1 2 1 2 10 5 1 10 ψ0 5 ψ1 ψ2 which yields h i c = ξ0 ξ1 ξ2 ξ3 Γψ,5 =

h 2 3 1 3 3 3 3 1 3 2 i ξ0ψ0 5 ξ0ψ1 + 5 ξ1ψ0 10 ξ0ψ2 + 5 ξ1ψ1 + 10 ξ2ψ0 10 ξ1ψ2 + 5 ξ2ψ1 + 10 ξ3ψ0 5 ξ2ψ2 + 5 ξ3ψ1 ξ3ψ2 .

 3 3 1   3 3 1  1 5 10 10 ξ0 5 ξ1 10 ξ2 10 ξ3 Alternatively, T =  2 3 3 2  and Γ =  2 ξ 3 ξ 3 ξ 2 ξ  , 2,5  5 5 5 5  ξ,5  5 0 5 1 5 2 5 3  1 3 3 1 3 3 10 10 5 1 10 ξ0 10 ξ1 5 ξ2 ξ3 which gives h i c = ψ0 ψ1 ψ2 Γξ,5 =

h 2 3 1 3 3 3 3 1 3 2 i ξ0ψ0 5 ξ0ψ1 + 5 ξ1ψ0 10 ξ0ψ2 + 5 ξ1ψ1 + 10 ξ2ψ0 10 ξ1ψ2 + 5 ξ2ψ1 + 10 ξ3ψ0 5 ξ2ψ2 + 5 ξ3ψ1 ξ3ψ2 .

Consequently, the integer-powers of a polynomial in the Bernstein polynomial basis can be obtained as follows:

h i ρ Corollary 3.16. If Ξ(x) = ξ0 ξ1 ··· ξn−1 ξn bn(x), then Ξ (x) for a positive integer ρ(> 1) is ρ ρ h i Y Ξ (x) = ξ0 ξ1 ··· ξn−1 ξn ( Γξ,(j×n))b(ρ×n)(x), j=2

where each Γξ,(j×n) for j = 2, 3, ··· , ρ can be found through (3.14).

3.3.2. Lagrange Basis Multiplication Recall that if the Lagrange nodes and values of two polynomials P (x) and Q(x) given in the n n Lagrange polynomial basis of the same degree are {(τi, pi)}i=0 and {(τi, qi)}i=0, respectively, then n the Lagrange values of their multiplication P (x)Q(x) at the same nodes are {(τi, piqi)}i=0. This important property is the key to finding a multiplication formula for polynomials given in Lagrange polynomial basis. Suppose that h i Ξ(x) = ξ0 ξ1 ··· ξm−1 ξm Lm(x),

18 m defined by the set of nodes {τi}i=0 and h i Ψ(x) = ψ0 ψ1 ··· ψn−1 ψn Ln(x),

n defined by the set of nodes {τi}i=0 are two polynomials given in the Lagrange polynomial basis of m+n degrees n and m(≥ n), respectively. Adding additional nodes {τi}i=m+1 and using (3.12), we can write the polynomials as h i Ξ(x) = s0 s1 ··· sn+m−1 sn+m Lm+n(x),

where h i h i s0 s1 ··· sn+m−1 sn+m = ξ0 ξ1 ··· ξm−1 ξm Rm,m+n, and h i Ψ(x) = t0 t1 ··· tn+m−1 tn+m Lm+n(x), with h i h i t0 t1 ··· tn+m−1 tn+m = ψ0 ψ1 ··· ψn−1 ψn Rn,m+n.

Lemma 3.17. For the Lagrange polynomials Ξ(x) and Ψ(x) given above, h i Ξ(x)Ψ(x) = s0t0 s1t1 ··· sn+m−1tn+m−1 sn+mtn+m Lm+n(x).

In particular for the integer-powers of a polynomial in the Lagrange polynomial basis, we have

h i ρ Corollary 3.18. If Ξ(x) = ξ0 ξ1 ··· ξn−1 ξn Ln(x), then Ξ (x) for a positive integer (ρ×n) ρ(> 1) and with additional nodes {τi}i=n+1 is

ρ h ρ ρ ρ ρ i Ξ (x) = s0 s1 ··· s((ρ×n)−1) s(ρ×n) L(ρ×n)(x), (3.20) h i h i where s0 s1 ··· s((ρ×n)−1) s(ρ×n) = ξ0 ξ1 ··· ξn−1 ξn Rn,(ρ×n).

This is illustrated in the following example: h i Example 3.19. If P (x) = p0 p1 p2 L2(x) is a polynomial given in the Lagrange polynomial 2 h i basis of degree 2 at the nodes {τk}k=0, our aim is to find q = q0 q1 q2 q3 q4 so that 2 4 P (x) = qL4(x) at the nodes {τk}k=0.

We first need to find R2,4 using (3.12):

 1 0 0 − w3,0 w3,0w4,3−w4,0w3,3  w3,3 w3,3w4,4 w3,1 w3,1w4,3−w4,1w3,3 R2,4 =  0 1 0 −  .  w3,3 w3,3w4,4  0 0 1 − w3,2 w3,2w4,3−w4,2w3,3 w3,3 w3,3w4,4

Next, we find h i h i s0 s1 s2 s3 s4 = p0 p1 p2 R2,4, which yields:

19  s = p ,  0 0   s1 = p1,  s2 = p2, p0w3,0 p1w3,1 p2w3,2  s3 = − − − ,  w3,3 w3,3 w3,3         w4,0 w3,0w4,3 w4,1 w3,1w4,3 w4,2 w3,2w4,3  s4 = p0 − + + p1 − + + p2 − + . w4,4 w3,3w4,4 w4,4 w3,3w4,4 w4,4 w3,3w4,4 2 Finally, from (3.20) and for i = 0, 1, 2, 3, 4, we have qi = si .

4. Direct Polynomial Division

An important application of the above results on intra-basis polynomial multiplication in various polynomial bases is in devising intra-basis polynomial division techniques in those bases. Take a polynomial, A(x), given in a certain polynomial basis of degree m and another polynomial, B(x), given in the same polynomial basis of degree n(≤ m). We have A(x) R(x) = Q(x) + , B(x) B(x) where Q(x) is the quotient of degree m − n and R(x) is the remainder of degree (at most) n − 1. This can also be written as A(x) = Q(x)B(x) + R(x). (4.1)

Our goal here is to find Q(x) and R(x) directly in the same basis as A(x) and B(x) are already given in. We first consider the degree-graded polynomial bases and devise two methods for direct division of polynomials in such bases. The first approach leads to solving a system of linear equations. The second approach is a generalization of the well-know “long division” technique, traditionally used in the standard polynomial basis, to other degree-graded polynomial bases. Next, we propose a method for direct polynomial division in non-degree-graded polynomial bases which is based on solving a system of linear equations for finding the quotient and remainder (somehow similar to our first method for degree-graded polynomial division).

4.1. Generalization to Degree-Graded Bases

T T T Take two polynomials A(x) = a Φm(x) and B(x) = b Φn(x), where a = [a0, ··· , am] , b = T [b0, ··· , bn] and Φk(x) is a degree-graded basis with k + 1 elements. Then we set Q(x) = T T T T q Φm−n(x), with q = [q0, ··· , qm−n] and R(x) = r Φn−1(x), where r = [r0, ··· , rn−1] . We intend to plug these polynomials into (4.1), simplify, and find the unknown vectors q and r from there. We first need to replace Q(x)B(x) with a polynomial of degree m. To this end, we use Lemma 2.3. From (2.8), we can write

T Q(x)B(x) = q HmΦm(x), where n X Hm = biH˜ m,i. (4.2) i=0 Now (4.1) becomes T T T a Φm(x) = q HmΦm(x) + r Φn−1(x),

20 which can be written as

T T T h i a Φm(x) = q HmΦm(x) + r In 0r Φm(x), (4.3)

where In is the identity matrix of size n and 0r is the zero matrix of size n × (m − n + 1). We can

now come up with the following linear system using the coefficient vectors of Φm(x) in (4.3) " #" # I q HT n = a, m T 0r r

or equivalently " #" # " # HT I q a m,1 n = 1 , (4.4) T T Hm,2 0r r a2 T T such that Hm,1 is of size n × (m − n + 1), Hm,2 is a square matrix of size m − n + 1, a1 is a vector of size n and a2 is a vector of size m − n + 1. T Importantly, as mentioned in Remark 2.4, Hm,2 is lower triangular and therefore Hm,2 is an upper triangular matrix. This helps us solve the system (4.4) fast and efficiently as follows

( T Hm,2q = a2, T (4.5) r = a1 − Hm,1q.

For further clarification, the following example is given on direct polynomial division in the Cheby- shev polynomial basis of first kind (see also Example 2.8).

Example 4.1. We want to divide two polynomials given in the Chebyshev polynomial basis of first

kind. Take A(x) = 2T4(x) + T2(x) + 3T1(x) − 2T0(x) by B(x) = −2T2(x) + T1(x). The constants 1 for the three term recurrence relation in this basis are α0 = 1, β0 = γ0 = 0 and αj = γj = 2 and βj = 0, for j = 1, 2,....   1 −2 The matrix H in (4.2) becomes H =  1 −1 1 −1  , and the coefficient matrix 4 4  2 2  1 1 −1 2 2 −1  1  2 −1 1  1   1 −1 2 1   in (4.4) is: −2 1  .  2   −1 1   2  −1 −3 T −7 15 T Solving the linear system (4.4) through (4.5), we get q = [ 4 , − 1, − 2] and r = [ 2 , 4 ] . The quotient and remainder of this division are therefore as follows: 3 15 7 Q(x) = −2T (x) − T (x) − T (x),R(x) = T (x) − T (x). 2 1 4 0 4 1 2 0

4.2. A Long-Division Algorithm for Degree-Graded Bases

As before, take a polynomial, A(x) = amφm + ... + a1φ1 + a0φ0, of degree m given in a certain

degree-graded basis and another polynomial, B(x) = bnφn + ... + b1φ1 + b0φ0, of degree n(≤ m) given in the same basis. We present an algorithm for finding the quotient Q(x) and the remainder R(x) directly in the same degree-graded polynomial basis.

21 Similar to the traditional long-division approach, we first divide the leading term of A(x) by the

leading term of B(x). Consider the leading term of the quotient R(x) as ηφm−n, where η is a scalar

to be determined. We multiply B(x) by ηφm−n and subtract the result from A(x). The expression

ηφm−n(x)B(x) must be expanded in terms of the basis elements, φj, j = 0, 1, . . . , m. For that, we

need to compute φm−nφi, i = 0, . . . , n. The (n + 1) × (m + 1) matrix H := Hn,m−n, defined in Lemma 2.1 is used to accomplish this task:

φm−nΦn = HΦm.

Now let k := m − n, that gives

φk(x)B(x) = b0 (H[1, k + 1]φk) + b1 (H[2, k]φk−1 + H[2, k + 1]φk + H[2, k + 2]φk+1) 2k−1 2k m X X X + ··· + bk−1 H[k, j + 1]φj + bk H[k + 1, j + 1]φj + ··· + bn H[n + 1, j + 1]φj, j=1 j=0 j=|n−k| or

k+2 ! min{n+1,2k+1}  X X φk(x)B(x) = (bkH[k + 1, 1]) φ0 + bi−1H[i, 2] φ1 + ... +  bi−1H[i, k + 1] φk i=k i=1

+ ... + (bn−1H[n, m] + bnH[n + 1, m]) φm−1 + (bnH[n + 1, m + 1]) φm

m min{n+1,(k+1)+j}  X X =  bi−1H[i, j + 1] φj. j=0 i=|j−k|+1 (4.6) It should be noted that when k = m − n > n (i.e. m > 2n), the columns 1, 2, ··· , m − 2n are all zero column vectors. Consequently, if j ≤ m − 2n − 1, then the coefficients of φ0, φ1, . . . , φj in (4.6) are zero. Pm (2) Now we set A2(x) := A(x) − ηφm−nB(x) = j=0 aj φj, where  aj, j < m − 2n, a(2) = j Pmin{n+1,(m−n+1)+j}  aj − η i=|j−(m−n)|+1 bi−1H[i, j + 1] , j ≥ m − 2n.

(2) am In particular, am = 0, if and only if η = . bnH[n+1,m+1] (2) For the next step, we divide the leading term of A2(x), i.e. am−1φm−1, by bnφn, and then update (2) the coefficients of φ0, . . . , φm−2 using the matrix H := Hn,(m−1)−n. In this step, η2φ(m−1)−n is (2) am−1 added to the quotient R(x), where η2 = (2) . This process keeps going like this until at a H [n+1,m]bn certain iteration, k, the degree of Ak(x) becomes less than n. Once that happens, Ak(x) is in fact the remainder of the division. We summarize this procedure in Algorithm 1 which provides a pseudo-code for dividing a polyno- mial A(x) given in a degree-graded polynomial basis by another polynomial B(x) given in the same basis. T The inputs are the coefficients of A(x) and B(x) in the forms of a = [a0 a1 ··· am] and b = T [b0 b1 ··· bn] , respectively, while the outputs are the coefficients of the quotient Q(x) and the T T remainder R(x) given in the forms of q = [q0 q1 ··· qm−n] and r = [r0 r1 ··· rn−1] . To construct

the operational matrices Hn,m−n, we can take advantage of the operational cost savings explained in Table 1.

22 Algorithm 1 Division in degree-graded polynomial bases T T 1: Inputs: The vectors a = [a0 a1 ··· am] and b = [b0 b1 ··· bn] , with m ≥ n. 2: Outputs: The vectors q and r of the sizes m − n + 1 and n. 3: Let q := [0, ··· , 0]T , the zero vector of the size m − n + 1. 4: while m ≥ n do

5: Construct the (n+1)×(m+1) matrix H := Hn,m−n in (2.3) s.t. φm−n(x)Φn(x) = HΦm(x). 6: Set η := am . H[n+1,m+1]bn 7: qm−n+1 := η. 8: if η = 0 then 9: set m := m − 1 and go to line 4. 10: end if 11: for j = 0 to m − 1 do 12: if j ≥ m − 2n then Pmin{n+1,(m−n+1)+j}  13: set aj = aj − η i=|j−(m−n)|+1 bi−1H[i, j + 1] , 14: end if 15: end for 16: Set m := m − 1. 17: end while T 18: Set r = [a0 a1 ··· am] (in this line we have m < n).

4.3. Direct Division in Bernstein Basis

T T T T If A(x) = a bm(x) and B(x) = b bn(x), where a = [a0, ··· , am] , b = [b0, ··· , bn] with T T T m ≤ n, then we set Q(x) = q bm−n(x) with q = [q0, ··· , qm−n] and R(x) = r bn−1(x) with T r = [r0, ··· , rn−1] . We intend to plug these polynomials into (4.1), simplify, and find q and r from there. We first need to replace Q(x)B(x) with a polynomial of degree m. To this end, we use Lemmas 3.14 which yields T Q(x)B(x) = q Γb,mbm(x).

Now (4.1) becomes T T T a bm(x) = q Γb,mbm(x) + r bn−1(x), which through lifting can be written as

T T T a bm(x) = q Γb,mbm(x) + r Tn−1,mbm(x). (4.7)

We can now come up with the following linear system using the coefficient vectors of bm(x) in (4.7) " # h i q ΓT TT = a, b,m n−1,m r or " #" # " # ΓT TT q a b,m,1 n−1,m,1 = 1 , (4.8) T T Γb,m,2 Tn−1,m,2 r a2 T T T where Γb,m,1 is a square matrix of size m − n + 1, Γb,m,2 is a matrix of size n × (m − n + 1), Tn−1,m,1 T is a matrix of size (m − n + 1) × n and Tn−1,m,2 is a square matrix of size n. Moreover, a1 is a vector of size m − n + 1 and a2 is a vector of size n.

23 T T It can be easily verified that Γb,m,1 is lower triangular and Tn−1,m,2 is upper triangular, so that T Tn−1,m,2 is always invertible. This helps us solve the system (4.8) fast and fairly efficiently as follows

  T T T −1 T  T T −1  Γb,m,1 − Tn−1,m,1(Tn−1,m,2) Γb,m,2 q = a1 − Tn−1,m,1(Tn−1,m,2) a2, T −1  T   r = (Tn−1,m,2) a2 − Γb,m,2q . h i Example 4.2. Assume that we want to divide A(x) = 2 3 4 −2 1 7 −3 b6(x) by h i B(x) = 1 4 −2 5 b3(x). To this end, we have

 2 1  1 2 −   5 4 1 2 2 1 1  1 12 − 9 1  3 5 5 15 Γ =  2 5 10 , T =  1 8 3 8 1 . b,6  1 9 6 5  2,6  3 15 5 15 3   5 5 − 5 2  1 1 2 2 1 4 15 5 5 3 1 20 5 −1 5

Consequently, the coefficient matrix in (4.8) becomes

 1 1   2 1 2 1   2 3 3   2 12 1 2 8 1   − 5 5 5 5 15 15     1 − 9 9 1 1 3 1 .  4 10 5 20 5 5 5   1 − 6 4 1 8 2   5 5 15 15 5   5 1 2   2 −1 3 3  5 1

T h 161489 290827 58617 247645 i Solving the linear system (4.8) through (4.3) yields q = 117128 117128 117128 − 117128 and T h 72767 90283 886841 i T T r = 117128 − 21296 117128 , such that Q(x) = q b3(x) and R(x) = r b2(x).

4.4. Division in Lagrange Basis

m n Let us assume A(x) is given by {(τi, ai)}i=0 and B(x) is given by {(τi, bi)}i=0 with m ≥ n. Then m m we set Q(x) as {(τi, qi)}i=0, and R(x) as {(τi, ri)}i=0. It is in fact easier here to write both the quotient and the remainder as polynomials in the Lagrange polynomial basis of degree m and then drop their degrees to m − n and n − 1, respectively.

We first lift B(x) from degree n to degree m using Rn,m given by (3.6). That is h i h i s0 s1 ··· sm = b0 b1 ··· bn Rn,m,

m therefore it is given as {(τi, si)}i=0. Using the properties of the Lagrange polynomial basis, we can turn (4.1) into the following system of linear equations:

qisi + ri = ai, i = 0, ··· , m or " # h i q SI = a, m+1 r

24 where Im+1 is the identity matrix of size m + 1 and S is a diagonal matrix of size m + 1 in the form of   s0    s1  S =   .  ..   .  sm

We should also drop Q(x) to degree m − n and R(x) to degree n − 1. To this end, we let q = Zm,m−nv1 and r = Zm,n−1v2, where v1 and v2 are unknown vectors of sizes m − n + 1 and n, respectively. Moreover, Zm,m−n and Zm,n−1 are dropping matrices given by (3.17). Then the linear system of equations become " # h i v1 SZm,m−n Zm,n−1 = a. (4.9) v2

Upon finding v1 and v2 from (4.9), we are able to determine q and r. n o Example 4.3. Suppose A(x) is given by (−2, 15), (−1, 0), (0, −1), (1, 0), (2, 15) and B(x) is given n o by (−2, 9), (−1, 6), (0, 5) . The lifting matrix

  1 0 0 1 3   R2,4 =  0 1 0 −3 −8  0 0 1 3 6 n o turns B(x) into (−2, 9), (−1, 6), (0, 5), (1, 6), (2, 9) . That means s0 = 9, s1 = 6, s2 = 5, s3 = 6 and s = 9, or 4 h i S = Diag 9, 6, 5, 6, 9 .

    1 4 6 1 4      1 3 3   1 3      The dropping matrices associated with q and r are, respectively, Z4,2 =  1 2 1 , and Z4,1 =  1 2 .      1 1   1 1      1 1   9 36 54 1 4    6 18 18 1 3    The coefficient matrix in (4.9) is therefore  5 10 5 1 2 ,    6 6 1 1    9 1 T h i T h i T h i which yields v1 = −1 −3 2 and v2 = 24 0 . That gives q = −1 −4 −5 −4 −1 h i and rT = 24 24 24 24 24 . n o n Then Q(x) and R(x) can be written as (−2, −1), (−1, −4), (0, −5), (1, −4), (2, −1) and (−2, 24), o (−1, 24), (0, 24), (1, 24), (2, 24) , respectively.

25 5. An Application in Stochastic Galerkin Schemes

Stochastic finite element method [19] is a well-known approach for disseminating data uncertainty through the numerical solution of partial differential equations. The main characteristic of such stochastic Galerkin methods is a variational formulation in which the projection spaces consist of random fields rather than deterministic functions. In this section, we are interested in stochastic Galerkin matrices that arise in the discretization of linear differential equations with random coefficient functions as they are characterized with the Galerkin representation of polynomial multiplication operators. These matrices possess attractive structural and sparsity properties (see e.g. [19, 15, 38] and references therein). Besides being useful in the numerical analysis of stochastic Galerkin schemes, such results are also crucial in developing some efficient iterative methods for related linear equations. We now intend to use the obtained results on direct multiplication of polynomials in polynomial bases to derive the stochastic Galerkin matrices when tensor product polynomials are considered as bases for the space of multivariate polynomials in random variables (see [15]). M According to [38], for an appropriate M and a finite multi-index set F ⊆ N0 , the discretized linear system related to the stochastic Galerkin finite element method has the form Aˆ u = f, where X Aˆ = Kα ⊗ Gα, α∈F is the sum of Kronecker products of matrices in which Kα and Gα are associated with the deter- ministic/stochastic function spaces.

For α = (α1, . . . , αM ) ∈ F, the stochastic Galerkin matrix Gα is given by Gα = UαM ,pM ⊗ ... ⊗

Uα2,p2 ⊗ Uα1,p1 , where for each m = 1,...,M, Uαm,pm is a (pm + 1) × (pm + 1) matrix with entries in the form of D E Uαm,pm [i, j] = ψαm (ξm)ψi(ξm)ψj(ξm) , i, j = 0, . . . , pm.

Here h . i denotes the expectation with respect to a specified probability density ρm and {ψj(ξm)} are univariate orthonormal polynomials with respect to the weight function ρm appearing in poly- nomial chaos (PC) expansion of the coefficient term as a random field.

As such, in general we need to compute the typical (p + 1) × (p + 1) matrix U_{k,p} whose entries are given by
\[
U_{k,p}[i, j] = \langle \psi_k \psi_i \psi_j \rangle, \qquad i, j = 0, \ldots, p, \qquad (5.1)
\]
where {ψ_j(y)} are univariate orthonormal polynomials with respect to a specified weight function ρ(y):
\[
\langle \psi_i \psi_j \rangle = \int \psi_i(y)\, \psi_j(y)\, \rho(y)\, dy = \delta_{ij}.
\]

The following lemma shows that the univariate stochastic Galerkin matrix Uk,p is a principal submatrix of the corresponding operational matrix Hp,k given in Lemma 2.1:

Lemma 5.1. For any k ≥ 1, the (p + 1) × (p + 1) univariate Galerkin matrix Uk,p is built up from the first p + 1 rows and columns of the operational matrix Hp,k, i.e.

U_{k,p} = H_{p,k}(1 : p + 1, 1 : p + 1).
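Lemma 5.1 also suggests a purely recurrence-based way of evaluating U_{k,p} numerically. The sketch below is one such construction of ours (it is not necessarily the H_{p,k} of Lemma 2.1): for an orthonormal family with recurrence x ψ_n = β_{n+1} ψ_{n+1} + α_n ψ_n + β_n ψ_{n−1}, the entry ⟨ψ_k ψ_i ψ_j⟩ equals the (i, j) entry of ψ_k evaluated at the Jacobi matrix of the recurrence, and a truncation of size p + k + 1 is assumed to be large enough to keep the leading (p + 1) × (p + 1) block exact. The check at the end uses the orthonormal Hermite coefficients α_n = 0, β_n = √n, which reappear in Case 1 below.

```python
import numpy as np

def galerkin_matrix(alpha, beta, k, p):
    """U_{k,p}[i, j] = <psi_k psi_i psi_j> for the orthonormal family with
       x psi_n = beta[n+1] psi_{n+1} + alpha[n] psi_n + beta[n] psi_{n-1},
       psi_{-1} = 0, psi_0 = 1.  alpha and beta must cover indices 0..p+k."""
    N = p + k + 1                                   # truncation keeping i, j <= p exact
    J = np.diag(alpha[:N]) + np.diag(beta[1:N], 1) + np.diag(beta[1:N], -1)
    Pm1, P = np.zeros((N, N)), np.eye(N)            # psi_{-1}(J), psi_0(J)
    for n in range(k):                              # run the recurrence with J in place of x
        Pm1, P = P, (J @ P - alpha[n] * P - beta[n] * Pm1) / beta[n + 1]
    return P[:p + 1, :p + 1]                        # principal submatrix, cf. Lemma 5.1

# Orthonormal Hermite family (standard Gaussian weight): alpha_n = 0, beta_n = sqrt(n)
k, p = 3, 5
idx = np.arange(p + k + 1, dtype=float)
U = galerkin_matrix(np.zeros_like(idx), np.sqrt(idx), k, p)
assert np.isclose(U[2, 3], 3 * np.sqrt(2)) and np.isclose(U[5, 4], 2 * np.sqrt(30))
```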

Lemma 5.1 enables us to use our matrices H_{p,k} to construct the univariate Galerkin matrices U_{k,p} for any type of degree-graded (especially orthogonal) polynomial basis that may appear in the PC expansion of the random coefficient term. For ease of exposition, we work under the same conditions as outlined in [15] to obtain the univariate Galerkin matrices U_{k,p} for two classes of orthonormal polynomial bases:

Case 1. When the univariate or multivariate random variables have normal distributions, the PC expansion basis consists of Hermite orthogonal polynomials. Let H_n(x) be the n-th basis element of the Hermite polynomial basis. It is known that for these polynomials, the three-term recurrence relation is
\[
x H_n(x) = \tfrac{1}{2} H_{n+1}(x) + n H_{n-1}(x), \qquad n = 0, 1, \ldots,
\]
with H_{−1}(x) = 0 and H_0(x) = 1 (see e.g. [4]). Let ψ_n(x) = \frac{1}{\sqrt{2^n n!}} H_n(x/\sqrt{2}) represent the normalization of H_n such that ‖ψ_n‖_2 = 1 with respect to the standard Gaussian weight function ρ(x) = e^{−x^2/2}/\sqrt{2\pi}. Under these conditions, the three-term recurrence relation of the orthonormal polynomials ψ_n can be described by
\[
x \psi_n = \sqrt{n+1}\, \psi_{n+1} + \sqrt{n}\, \psi_{n-1}, \qquad n = 0, 1, \ldots.
\]

Using the results of Lemma 2.1 and Proposition 1 to construct the operational matrix Hp,k for the Hermite polynomial basis with k = 3 and p = 5, we have

 1  √    3 2   √ √ √   3 3 2 10  H5,3 =  √ √ √  , 1 3 2 3 6 2 5   √ √ √   2 3 6 0 2 30 35   √ √ √  10 2 30 15 2 14 and from there the univariate Galerkin matrix, U3,5, for the Hermite polynomial basis becomes

 1  √    3 2   √ √ √   3 3 2 10  U3,5 =  √ √  . 1 3 2 3 6   √ √   2 3 6 2 30  √ √  10 2 30

The exact same result is obtained from the explicit formula for the entries ⟨ψ_3 ψ_i ψ_j⟩, i, j = 0, . . . , 5, of the matrix U_{k,p} with k = 3 and p = 5 reported in [15, Appendix A].

Case 2. When the univariate or multivariate random variables have uniform distributions, the PC expansion basis consists of Legendre orthogonal polynomials. In order to illustrate the theoretical results, we use some well-known properties of Legendre polynomials from [4].

The operational matrix H_{p,k} for the orthonormal Legendre polynomial basis can likewise be used for obtaining the Galerkin matrix U_{k,p}. For example, with k = 4 and p = 3, we get
\[
U_{4,3} =
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{4}{\sqrt{21}} \\
0 & 0 & \frac{6}{7} & 0 \\
0 & \frac{4}{\sqrt{21}} & 0 & \frac{6}{11}
\end{bmatrix},
\]


which agrees with the result of the explicit formula for computing the entries ⟨ψ_k ψ_i ψ_j⟩, i, j = 0, . . . , p, with k = 4 and p = 3 given in [15, Appendix A].
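The same quadrature check as in Case 1 works here, assuming the uniform density 1/2 on [−1, 1] and the normalization ψ_n = √(2n+1) P_n with P_n the classical Legendre polynomials.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gauss-Legendre nodes/weights on [-1, 1]; rescale to the uniform density 1/2
x, w = L.leggauss(20)
w = w / 2.0

def psi(n, x):
    # orthonormal Legendre w.r.t. the density 1/2: psi_n = sqrt(2n+1) * P_n
    return np.sqrt(2 * n + 1) * L.legval(x, [0.0] * n + [1.0])

k, p = 4, 3
U = np.array([[np.sum(w * psi(k, x) * psi(i, x) * psi(j, x))
               for j in range(p + 1)] for i in range(p + 1)])
print(np.round(U, 6))   # nonzeros: U[1,3] = U[3,1] = 4/sqrt(21), U[2,2] = 6/7, U[3,3] = 6/11
```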

6. Concluding Remarks

Formulas and techniques for intra-basis polynomial multiplication are given in this work with emphasis on degree-graded and non-degree-graded polynomial bases. From there, intra-basis polynomial division methods in those bases are derived, including an extension of the well-known long division algorithm to any degree-graded polynomial basis. Certain applications, such as intra-basis lower-degree approximation of polynomials, are presented and discussed as well. Note that this work does not study the computational complexity or the numerical analysis of the presented techniques. One can devise methods for making these polynomial multiplication and division algorithms faster and more accurate.

It is shown in this work that the matrices appearing in the stochastic Galerkin discretization of linear PDEs with random coefficients are submatrices of the matrices derived for intra-basis multiplication in degree-graded polynomial bases (i.e., H_{p,k}). To that end, all one needs are the three-term recurrence coefficients of the degree-graded polynomial basis under consideration. Not only are our results useful in constructing those stochastic Galerkin matrices, but we also hope that, due to the recursive structure of H_{p,k}, they can be used in devising efficient iterative techniques for solving linear systems arising from the stochastic Galerkin discretization. More importantly, we expect these intra-basis techniques to find applications in real-world problems where direct and reliable multiplication and division of functions approximated by special polynomials are of utmost importance.

References

[1] A. Amiraslani, Dividing polynomials when you only know their values, in Proceedings EACA, Gonzalez-Vega L. and Recio T., Eds., (2004), 5-10.

[2] A. Amiraslani, Differentiation matrices in polynomial bases, Math. Sci. 5 (2016), 45-55.

[3] A. Amiraslani, R.M. Corless and M. Gunasingham, Differentiation matrices for univariate polynomials, Numer. Algor. 83 (2019), 1-31.

[4] G.E. Andrews, R. Askey and R. Roy, Special Functions, Cambridge University Press, (1999).

[5] S. Barnett, Division of generalized polynomials using the comrade matrix, Linear Algebra Appl., 60 (1984), 159-175.

[6] G. Baszenski and M. Tasche, Fast polynomial multiplication and convolutions related to the discrete cosine transform, Linear Algebra Appl., 252(1-3) (1997) 1-25.

[7] Z. Battles and L. Trefethen, An extension of MATLAB to continuous functions and operators, SIAM J. Sci. Comput., 25 (2004), 1743-1770.

[8] J.P. Berrut and L.N. Trefethen, Barycentric Lagrange interpolation, SIAM Review, 46(3) (2004), 501- 517.

[9] J.P. Boyd, Chebyshev and Fourier Spectral Methods, Dover Publication, N.Y., 2001.

[10] N. Brisebarre and M. Joldeş, Chebyshev Interpolation Polynomial based Tools for Rigorous Computing, in Proceedings ISSAC '10, ACM, (2010), 147-154.

[11] L. Busé and R. Goldman, Division algorithms for Bernstein polynomials, Comput. Aided Geom. Design, 25(9) (2008), 850-865.

[12] V.S. Chelyshkov, Alternative orthogonal polynomials and quadratures, Electron. Trans. Numer. Anal., 25 (2006), 17-26.

[13] V.S. Chelyshkov, Alternative Jacobi polynomials and orthogonal exponentials, arXiv preprint arXiv:1105.1838 (2011).

[14] R.M. Corless, Generalized companion matrices in the Lagrange basis, in Proceedings EACA, Gonzalez-Vega L. and Recio T., Eds., (2004), 317-322.

[15] O.G. Ernst and E. Ullmann, Stochastic Galerkin matrices, SIAM J. Matrix Anal. Appl., 31(4) (2010), 1848-1872.

[16] R.T. Farouki, The Bernstein polynomial basis: A centennial retrospective, Comput. Aided Geom. Design, 29(6) (2012), 379-419.

[17] R.T. Farouki and V.T. Rajan, Algorithms for polynomials in Bernstein form, Comput. Aided Geom. Design, 5(1) (1988), 1-26.

[18] J. von zur Gathen and J. Gerhard, Modern Computer Algebra, (Third Edition), Cambridge University Press, 2013.

[19] R.G. Ghanem and P.D. Spanos, Stochastic Finite Elements: A Spectral Approach, Dover Publication, N.Y, 2003.

[20] P. Giorgi, On polynomial multiplication in Chebyshev basis, IEEE Trans. Comput., 61(6) (2012), 780- 789.

[21] P. Giorgi, Efficient algorithms and implementation in exact linear algebra, Ph.D. Thesis, Université de Montpellier, 2019.

[22] P. Giorgi, B. Grenet and D.S. Roche, Generic reductions for in-place polynomial multiplication, in Proceedings ISSAC ’19, (2019) 187-194.

[23] T. Hermann, On the stability of polynomial transformations between Taylor, Bézier, and Hermite forms, Numer. Algor., 13 (1996), 307-320.

[24] N.J. Higham, The numerical stability of barycentric Lagrange interpolation, IMA J. Numer. Anal., 24 (2004), 547-556.

[25] H. Kleindienst and A. Lüchow, Multiplication theorems for orthogonal polynomials, Int. J. Quantum Chem., 48 (1993), 239-247.

[26] N. Koblitz, A. Menezes and S. Vanstone, The state of elliptic curve cryptography, Des. Codes Cryptogr., 19(2-3) (2000), 173-193.

[27] B.G. Lee, Y. Park and J. Yoo, Application of Legendre-Bernstein basis transformations to degree elevation and degree reduction, Comput. Aided Geom. Design, 19 (2002), 709-718.

[28] J.B. Lima, D. Panario and Q. Wang, A Karatsuba-based algorithm for polynomial multiplication in Chebyshev form, IEEE Trans. Comput., 59 (2010), 835-841.

[29] C. Markett, Linearization of the product of symmetric orthogonal polynomials, Constr. Approx., 10 (1994), 317-338.

[30] M. Minimair, Basis-Independent Polynomial Division Algorithm Applied to Division in Lagrange and Bernstein Basis, in Proceedings ASCM, Kapur D., Ed., Lecture Notes in Computer Science, 5081 (2008), 72-86.

[31] L. Naserizadeh, M. Hadizadeh and A. Amiraslani, Cubature rules based on bivariate alternative degree- graded orthogonal polynomials and their applications, (Submitted).

[32] C. Oğuz and M. Sezer, Chelyshkov collocation method for a class of mixed functional integro-differential equations, Appl. Math. Comput., 259 (2015), 943-954.

[33] A. Rababah, Transformation of Chebyshev-Bernstein polynomial basis, Comput. Method Appl. Math., 3(4) (2003), 608-622.

[34] A. Ronveaux, A. Zarzo and E. Godoy, Recurrence relation for connection coefficients between two families of orthogonal polynomials, J. Comput. Appl. Math., 62 (1995), 67-73.

[35] J. Sánchez-Ruiz, P.L. Artés, A. Martínez-Finkelshtein and J.S. Dehesa, General linearization formulae for products of continuous hypergeometric-type polynomials, J. Phys. A, 32(42) (1999), 7345-7366.

[36] A. Shakoori, The Bézout matrix in the Lagrange basis, in Proceedings EACA, Gonzalez-Vega L. and Recio T., Eds., (2004), 295-299.

[37] M. D. Shieh, J.H. Chen, W.C. Lin and H.H. Wu, A new algorithm for high-speed modular multiplication design, IEEE Trans. Circuits Syst. I., 55(11) (2009), 3430-3437.

[38] E. Ullmann, H.C. Elman and O.G. Ernst, Efficient iterative solvers for stochastic Galerkin discretiza- tions of log-transformed random diffusion problems, SIAM J. Sci. Comput., 34(2) (2012), 659-682.
