
Appl. Math. Inf. Sci. 10, No. 4, 1303-1311 (2016)
Applied Mathematics & Information Sciences, An International Journal

http://dx.doi.org/10.18576/amis/100409

Transformations between Two-Variable Bases with Applications

Dimitris Varsamis¹*, Nicholas Karampetakis² and Paris Mastorocostas¹

¹ Department of Informatics & Communications, Technological Educational Institute of Central Macedonia, 62124 Serres, Greece
² Department of Mathematics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece

Received: 22 Mar. 2016, Revised: 8 May 2016, Accepted: 9 May 2016, Published online: 1 Jul. 2016

Abstract: It is well known that any interpolating polynomial $p(x,y)$ in the vector space $P_{n,m}$ of two-variable polynomials with degree less than $n$ in terms of $x$ and less than $m$ in terms of $y$ has various representations that depend on the basis of $P_{n,m}$ that we select, i.e. the monomial, Newton and Lagrange bases etc. The aim of this short note is twofold: a) to present transformations between the coordinates of the polynomial $p(x,y)$ in the aforementioned bases, and b) to present transformations between these bases. Additionally, a computational numerical application that illustrates the usefulness of the transformations between two-variable polynomial bases is presented.

Keywords: Bivariate polynomial, Basis change, Transformations

1 Introduction

Interpolation is the problem of approximating a function $f$ with another, more usable function $p$, when the values of $f$ at distinct points are known. When the function $p$ is a polynomial, the method is called polynomial interpolation. In the case where the interpolating polynomial $p(x)$ belongs to the vector space $P_n$ of single-variable polynomials with degree less than $n$, $p(x)$ has various representations that depend on the basis of $P_n$ that we select, i.e. the monomial, Newton and Lagrange bases etc. We can use coordinates relative to a basis to reveal the relationships between various forms of the interpolating polynomial. [1] shows how to change the form of the interpolating polynomial by transforming coordinates via a change-of-basis matrix. Moreover, [2] presents the transformations between the basis functions which map a specific representation to another. Additional work on this topic, from the numerical point of view, can be found in [3], [4], [5].

In Sections 2 and 3, we extend the results of [1] and [2] to the case of two-variable interpolating polynomials with specific upper bounds in each variable. In Section 4, we make use of the transformations described in the previous sections for the computation of the determinant of a two-variable polynomial matrix. The proposed algorithm is based on the evaluation-interpolation technique. A numerical approach for the computation of these transforming matrices is also given.

2 Representations of the interpolating two-variable polynomial

Although the one-variable interpolation problem always has a solution for given distinct points, the multivariate interpolation problem through arbitrarily given points may or may not have a solution when the number of unknown polynomial coefficients agrees with the number of points. An interpolation problem is defined to be poised if it has a unique solution. Unlike the one-variable interpolation problem, the Hermite, Lagrange and Newton-form multivariate interpolation problems are not always poised.

Let the set of interpolation points be

$$S_\Delta^{(n,m)} = \left\{ (x_i, y_j) \mid i = 0,1,\dots,n,\ j = 0,1,\dots,m \right\}$$

where $x_i \neq x_j$ and $y_i \neq y_j$ for $i \neq j$, with function values on these points given by $f_{i,j} := f(x_i, y_j)$. Consider also the matrix

∗ Corresponding author e-mail: [email protected]

© 2016 NSP Natural Sciences Publishing Cor.

$F \in \mathbb{R}^{(n+1)\times(m+1)}$ that is constructed from these values, i.e.

$$F = \begin{bmatrix} f_{0,0} & f_{0,1} & \cdots & f_{0,m} \\ f_{1,0} & f_{1,1} & \cdots & f_{1,m} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n,0} & f_{n,1} & \cdots & f_{n,m} \end{bmatrix} \quad (1)$$

It is well known [6] that for the specific selection of points $S_\Delta^{(n,m)}$ there exists a unique two-variable polynomial $p_{n,m}(x,y)$ in $P_{n,m}$ which interpolates these values, i.e. $p_{n,m}(x_i,y_j) \equiv f(x_i,y_j) =: f_{i,j}$, and thus the interpolation problem is poised. This polynomial can be represented as a matrix product, i.e.

$$p_{n,m}(x,y) = X^T \cdot A \cdot Y \quad (2)$$

where $X \in \mathbb{R}[x]^{(n+1)\times 1}$ (resp. $Y \in \mathbb{R}[y]^{(m+1)\times 1}$) are vectors of basis polynomials that depend on the basis that we use (monomial, Lagrange, Newton) in terms of $x$ (resp. in terms of $y$), and $A \in \mathbb{R}^{(n+1)\times(m+1)}$ is a two-dimensional matrix whose elements are the coefficients, or otherwise the coordinates, of the terms in the respective two-variable basis. (2) can be written as a Kronecker product, i.e.

$$p_{n,m}(x,y) = \left(Y^T \otimes X^T\right) \cdot \mathrm{vec}(A) = (Y \otimes X)^T \cdot \mathrm{vec}(A) = \mathrm{vec}(A)^T \cdot (Y \otimes X) = \mathrm{vec}(A)^T \cdot g(x,y) \quad (3)$$

where $\otimes$ is the Kronecker product and $\mathrm{vec}(A)$ is the vectorization of a matrix, namely the linear transformation which converts the matrix into a column vector.

2.1 Monomial basis

The interpolating polynomial $p_{n,m}(x,y)$ in terms of the monomial basis is written as

$$p_{n,m}(x,y) = \sum_{i=0}^{n} \sum_{j=0}^{m} a_{i,j}\, x^i y^j = X^T \cdot A \cdot Y \quad (4)$$

where $X = \begin{bmatrix} 1 & x & \cdots & x^n \end{bmatrix}^T$, $Y = \begin{bmatrix} 1 & y & \cdots & y^m \end{bmatrix}^T$ and $A \in \mathbb{R}^{(n+1)\times(m+1)}$. By taking the relation $p_{n,m}(x_i,y_j) \equiv f(x_i,y_j)$ at all the interpolation points we can easily get the following relation

$$F = V_x \cdot A \cdot V_y^T \quad (5)$$

where

$$V_x = \begin{bmatrix} 1 & x_0 & \cdots & x_0^n \\ 1 & x_1 & \cdots & x_1^n \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & \cdots & x_n^n \end{bmatrix} \ ;\quad V_y = \begin{bmatrix} 1 & y_0 & \cdots & y_0^m \\ 1 & y_1 & \cdots & y_1^m \\ \vdots & \vdots & \ddots & \vdots \\ 1 & y_m & \cdots & y_m^m \end{bmatrix}$$

with $V_x$ (resp. $V_y$) the Vandermonde matrix with respect to $x$ (resp. to $y$). It is easily seen that the matrix $A$ is unique and is easily computed in the case where the Vandermonde matrices are nonsingular, or equivalently the interpolation points $x_i$ (resp. $y_j$) are pairwise distinct, and the solution is given by

$$A = V_x^{-1} \cdot F \cdot V_y^{-T}$$

However, the computation of the inverse of a Vandermonde matrix is ill conditioned, and standard numerically stable methods in general fail to accurately compute the entries of the inverse [7], [8], [9]. For this reason we may split (5) into the following system of Vandermonde equations, i.e.

$$V_x \cdot A_1 = F \ ;\quad A \cdot V_y^T = A_1$$

with unknowns $A_1$, $A$, and solve it by using LU or QR decomposition. According to (3), the polynomial $p_{n,m}(x,y)$ is written as

$$p_{n,m}(x,y) = X^T \cdot A \cdot Y = \mathrm{vec}(A)^T \cdot (Y \otimes X) = \mathrm{vec}(A)^T \cdot m(x,y)$$

where

$$m(x,y) = Y \otimes X = \begin{bmatrix} 1 & x & x^2 & \cdots & x^n & y & xy & x^2 y & \cdots & x^n y & \cdots & y^m & x y^m & \cdots & x^n y^m \end{bmatrix}^T$$

is the two-variable monomial basis and

$$\mathrm{vec}(A) = \begin{bmatrix} a_{0,0} & a_{1,0} & \cdots & a_{n,0} & a_{0,1} & a_{1,1} & \cdots & a_{n,1} & \cdots & a_{0,m} & a_{1,m} & \cdots & a_{n,m} \end{bmatrix}^T$$
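The split Vandermonde systems above can be reproduced with a few lines of numerical linear algebra. The following sketch is not from the paper: the sampled function $f(x,y) = 1 + x^2 + 2xy$ and the node sets are arbitrary illustrations, and a linear solver is called twice instead of forming $V_x^{-1}$ and $V_y^{-T}$ explicitly:

```python
import numpy as np

# Illustrative data: sample f(x, y) = 1 + x^2 + 2xy on a rectangular
# grid with n = 2, m = 1 (degree <= 2 in x and <= 1 in y).
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0])
F = np.array([[1 + xi**2 + 2*xi*yj for yj in y] for xi in x])  # eq. (1)

# Vandermonde matrices with rows [1, x_i, ..., x_i^n] as in eq. (5).
Vx = np.vander(x, increasing=True)
Vy = np.vander(y, increasing=True)

# Solve V_x A_1 = F and then A V_y^T = A_1 rather than computing the
# ill-conditioned inverses explicitly.
A1 = np.linalg.solve(Vx, F)
A = np.linalg.solve(Vy, A1.T).T      # A = V_x^{-1} F V_y^{-T}

print(A)   # a_{0,0} = 1, a_{2,0} = 1, a_{1,1} = 2, rest 0
```

Here `np.linalg.solve` uses an LU factorization internally; a QR-based solve would serve equally well, as the text suggests.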


Additionally, (5) can be rewritten as

$$\mathrm{vec}(F) = (V_y \otimes V_x) \cdot \mathrm{vec}(A) \quad (6)$$

Note that $\mathrm{vec}(A)$ are the coordinates of $p_{n,m}(x,y)$ in terms of the monomial basis, whereas, as we shall see below, $\mathrm{vec}(F)$ are the coordinates of $p_{n,m}(x,y)$ in terms of the Lagrange basis.

2.2 Lagrange basis

Similar results to those for the monomial basis also apply to the Lagrange basis. The interpolating polynomial $p_{n,m}(x,y)$ in terms of the Lagrange basis is written as

$$p_{n,m}(x,y) = \sum_{i=0}^{n} \sum_{j=0}^{m} f_{i,j}\, L_{i,n}(x) L_{m,j}(y) = X_L^T \cdot F \cdot Y_L \quad (7)$$

where

$$X_L^T = \begin{bmatrix} L_{0,n}(x) & L_{1,n}(x) & \cdots & L_{n,n}(x) \end{bmatrix}, \qquad Y_L^T = \begin{bmatrix} L_{m,0}(y) & L_{m,1}(y) & \cdots & L_{m,m}(y) \end{bmatrix}$$

with

$$L_{i,n}(x) = \prod_{\substack{k=0 \\ k \neq i}}^{n} \frac{x - x_k}{x_i - x_k} \ \text{ for } i = 0,1,\dots,n, \qquad L_{m,j}(y) = \prod_{\substack{k=0 \\ k \neq j}}^{m} \frac{y - y_k}{y_j - y_k} \ \text{ for } j = 0,1,\dots,m$$

and $F$ defined in (1). For the Lagrange basis in two-variable polynomials see [6], [10], [11] and the references therein. According to (3), $p_{n,m}(x,y)$ can be written as

$$p_{n,m}(x,y) = X_L^T \cdot F \cdot Y_L = \mathrm{vec}(F)^T \cdot (Y_L \otimes X_L) = \mathrm{vec}(F)^T \cdot \ell(x,y)$$

where

$$\ell(x,y) = Y_L \otimes X_L = \begin{bmatrix} L_{0,n}(x) L_{m,0}(y) & L_{1,n}(x) L_{m,0}(y) & \cdots & L_{n,n}(x) L_{m,0}(y) & L_{0,n}(x) L_{m,1}(y) & L_{1,n}(x) L_{m,1}(y) & \cdots & L_{n,n}(x) L_{m,m}(y) \end{bmatrix}^T$$

is the two-variable Lagrange basis.

2.3 Newton basis

Another representation of $p_{n,m}(x,y)$, in terms of the Newton basis [6], [12], is the following:

$$p_{n,m}(x,y) = \sum_{i=0}^{n} \sum_{j=0}^{m} d_{i,j} \prod_{k=1}^{i}(x - x_{k-1}) \prod_{\ell=1}^{j}(y - y_{\ell-1}) = X_N^T \cdot D \cdot Y_N \quad (8)$$

where

$$\prod_{k=1}^{0}(x - x_{k-1}) := 1 \ \text{ and } \ \prod_{\ell=1}^{0}(y - y_{\ell-1}) := 1,$$

$$X_N = \begin{bmatrix} 1 \\ x - x_0 \\ (x - x_0)(x - x_1) \\ \vdots \\ (x - x_0)(x - x_1)\cdots(x - x_{n-1}) \end{bmatrix}, \qquad Y_N = \begin{bmatrix} 1 \\ y - y_0 \\ (y - y_0)(y - y_1) \\ \vdots \\ (y - y_0)(y - y_1)\cdots(y - y_{m-1}) \end{bmatrix}$$

and $D$ is the coefficient matrix of the Newton basis, given by

$$D = \begin{bmatrix} d_{0,0} & d_{0,1} & \cdots & d_{0,m-1} & d_{0,m} \\ d_{1,0} & d_{1,1} & \cdots & d_{1,m-1} & d_{1,m} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ d_{n-1,0} & d_{n-1,1} & \cdots & d_{n-1,m-1} & d_{n-1,m} \\ d_{n,0} & d_{n,1} & \cdots & d_{n,m-1} & d_{n,m} \end{bmatrix}$$

By taking the relation $p_{n,m}(x_i,y_j) \equiv f(x_i,y_j)$ at all the interpolation points we get

$$F = N_x \cdot D \cdot N_y^T \quad (9)$$

where

$$N_z = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & z_1 - z_0 & 0 & \cdots & 0 \\ 1 & z_2 - z_0 & (z_2 - z_0)(z_2 - z_1) & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & z_n - z_0 & (z_n - z_0)(z_n - z_1) & \cdots & \prod_{j=0}^{n-1}(z_n - z_j) \end{bmatrix}$$

and $z \in \{x, y\}$, so that row $i$ of $N_z$ evaluates the Newton basis at the node $z_i$. The matrix $D$ is unique, since the matrices $N_x$ and $N_y$ are nonsingular ($x_i \neq x_j$ and $y_i \neq y_j$), and can be easily computed by

$$D = N_x^{-1} \cdot F \cdot N_y^{-T}$$

or, similarly to the monomial case, by solving the system of equations $N_x \cdot D_1 = F$ and $D \cdot N_y^T = D_1$ with unknowns $D_1$ and $D$ respectively.
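The two-solve strategy of the monomial case also yields the Newton coefficient matrix $D$ of (9), and the systems are now triangular. A minimal sketch, again on an illustrative sample $f(x,y) = 1 + x^2 + 2xy$; the helper name `newton_matrix` is ours, not the paper's:

```python
import numpy as np

def newton_matrix(pts):
    """N[i, k] = prod_{r<k} (pts[i] - pts[r]): unit first column,
    lower triangular, nonsingular for pairwise distinct points."""
    N = np.ones((len(pts), len(pts)))
    for k in range(1, len(pts)):
        N[:, k] = N[:, k - 1] * (pts - pts[k - 1])
    return N

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0])
F = np.array([[1 + xi**2 + 2*xi*yj for yj in y] for xi in x])

Nx, Ny = newton_matrix(x), newton_matrix(y)

# F = N_x D N_y^T (eq. (9))  =>  two triangular solves.
D1 = np.linalg.solve(Nx, F)
D = np.linalg.solve(Ny, D1.T).T      # D = N_x^{-1} F N_y^{-T}

print(D)   # Newton coefficients d_{i,j}
```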


An alternative way to compute the coefficients of $D$ is by means of the divided differences

$$d_{i,j}^{(k)} := \begin{cases} \dfrac{d_{i,j}^{(k-1)} - d_{i-1,j}^{(k-1)}}{x_i - x_{i-k}} & \text{if } (j < k \wedge i \geq k) \\[2mm] \dfrac{d_{i,j}^{(k-1)} - d_{i,j-1}^{(k-1)}}{y_j - y_{j-k}} & \text{if } (i < k \wedge j \geq k) \\[2mm] \dfrac{d_{i,j}^{(k-1)} + d_{i-1,j-1}^{(k-1)} - d_{i-1,j}^{(k-1)} - d_{i,j-1}^{(k-1)}}{(x_i - x_{i-k})(y_j - y_{j-k})} & \text{if } (i \geq k \wedge j \geq k) \\[2mm] d_{i,j}^{(k-1)} & \text{if } (i < k \wedge j < k) \end{cases}$$

which are defined in [12]. The polynomial $p_{n,m}(x,y)$ is written as

$$p_{n,m}(x,y) = X_N^T \cdot D \cdot Y_N = \mathrm{vec}(D)^T \cdot (Y_N \otimes X_N) = \mathrm{vec}(D)^T \cdot n(x,y)$$

where

$$n(x,y) = Y_N \otimes X_N = \begin{bmatrix} 1 \\ x - x_0 \\ \vdots \\ \prod_{i=0}^{n-1}(x - x_i) \\ y - y_0 \\ (y - y_0)(x - x_0) \\ \vdots \\ \prod_{i=0}^{n-1}(x - x_i) \prod_{j=0}^{m-1}(y - y_j) \end{bmatrix}$$

is the two-variable Newton basis. By using (9), we conclude that the coordinates $\mathrm{vec}(D)$ of $p_{n,m}(x,y)$ in terms of the Newton basis are connected with the respective coordinates $\mathrm{vec}(F)$ in terms of the Lagrange basis by

$$\mathrm{vec}(F) = (N_y \otimes N_x) \cdot \mathrm{vec}(D) \quad (10)$$

3 Change of basis in polynomial interpolation

As we showed in the previous section, the interpolating polynomial $p_{n,m}(x,y)$ can be represented in one of the following ways

$$p_{n,m}(x,y) = X^T \cdot A \cdot Y = X_L^T \cdot F \cdot Y_L = X_N^T \cdot D \cdot Y_N \quad (11)$$

or equivalently

$$p_{n,m}(x,y) = \mathrm{vec}(A)^T \cdot m(x,y) = \mathrm{vec}(F)^T \cdot \ell(x,y) = \mathrm{vec}(D)^T \cdot n(x,y) \quad (12)$$

From (12) we have

$$\begin{aligned} \mathrm{vec}(A)^T \cdot m(x,y) = \mathrm{vec}(F)^T \cdot \ell(x,y) &\implies \mathrm{vec}(A)^T \cdot m(x,y) = \left((V_y \otimes V_x) \cdot \mathrm{vec}(A)\right)^T \cdot \ell(x,y) \\ &\implies \mathrm{vec}(A)^T \cdot m(x,y) = \mathrm{vec}(A)^T \cdot (V_y \otimes V_x)^T \cdot \ell(x,y) \\ &\implies m(x,y) = (V_y \otimes V_x)^T \cdot \ell(x,y) = V_{xy}^T \cdot \ell(x,y) \end{aligned} \quad (13)$$

where $V_{xy} := V_y \otimes V_x$ is the transforming matrix between the coordinates of $p_{n,m}(x,y)$ in the monomial and Lagrange bases. Similarly, from (12) we have

$$\mathrm{vec}(F)^T \cdot \ell(x,y) = \mathrm{vec}(D)^T \cdot n(x,y) \implies \left((N_y \otimes N_x) \cdot \mathrm{vec}(D)\right)^T \cdot \ell(x,y) = \mathrm{vec}(D)^T \cdot n(x,y) \implies n(x,y) = (N_y \otimes N_x)^T \cdot \ell(x,y) = N_{xy} \cdot \ell(x,y) \quad (14)$$

where $N_{xy} := (N_y \otimes N_x)^T$ is the transforming matrix between the coordinates of $p_{n,m}(x,y)$ in the Newton and Lagrange bases. Since the $i$-th element of the monomial basis $m(x,y)$ has the same degree as the respective element of the Newton basis $n(x,y)$, there exists a lower triangular matrix $L$ such that

$$L \cdot n(x,y) = m(x,y)$$

From (13) and (14) we get

$$\left. \begin{aligned} m(x,y) &= V_{xy}^T \cdot \ell(x,y) \\ n(x,y) &= N_{xy} \cdot \ell(x,y) \end{aligned} \right\} \implies m(x,y) = V_{xy}^T \cdot N_{xy}^{-1} \cdot n(x,y) \equiv L \cdot n(x,y)$$

and therefore $L = V_{xy}^T \cdot N_{xy}^{-1}$ or equivalently

$$V_{xy}^T = L \cdot N_{xy} \quad (15)$$

Since $N_{xy}$ is upper triangular and $L$ is lower triangular, (15) is a LU-decomposition of $V_{xy}^T$. Note also that

$$L = V_{xy}^T \cdot N_{xy}^{-1} = (V_y \otimes V_x)^T \cdot \left((N_y \otimes N_x)^T\right)^{-1} = \left(V_y^T \otimes V_x^T\right) \cdot \left(N_y^{-T} \otimes N_x^{-T}\right) = \left(V_y^T \cdot N_y^{-T}\right) \otimes \left(V_x^T \cdot N_x^{-T}\right) \quad (16)$$

According to [2],

$$L_y = V_y^T N_y^{-T} \ \text{ and } \ L_x = V_x^T N_x^{-T} \quad (17)$$

where

$$L_z := \begin{bmatrix} 1 & & & & \\ H_1(z_0) & 1 & & & \\ H_2(z_0) & H_1(z_0,z_1) & 1 & & \\ \vdots & \vdots & \ddots & \ddots & \\ H_n(z_0) & H_{n-1}(z_0,z_1) & \cdots & H_1(z_0,\dots,z_{n-1}) & 1 \end{bmatrix}$$

with $H_p(z_0,\dots,z_k)$ being the sum of all homogeneous products of degree $p$ of the variables $z_0,\dots,z_k$, and $z \in \{x,y\}$. From (16) and (17) we conclude that

$$L = L_y \otimes L_x$$
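The factorization (15)-(16) is easy to check numerically. The sketch below is our own illustration (node sets and helper names are assumptions, not the paper's code); it builds $V_{xy}$ and $N_{xy}$ by Kronecker products and verifies that $L = V_{xy}^T N_{xy}^{-1}$ is unit lower triangular and equals $L_y \otimes L_x$:

```python
import numpy as np

def newton_matrix(pts):
    # N[i, k] = prod_{r<k} (pts[i] - pts[r]), lower triangular.
    N = np.ones((len(pts), len(pts)))
    for k in range(1, len(pts)):
        N[:, k] = N[:, k - 1] * (pts - pts[k - 1])
    return N

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0])
Vx, Vy = np.vander(x, increasing=True), np.vander(y, increasing=True)
Nx, Ny = newton_matrix(x), newton_matrix(y)

Vxy = np.kron(Vy, Vx)            # V_xy = V_y (x) V_x
Nxy = np.kron(Ny, Nx).T          # N_xy = (N_y (x) N_x)^T, upper triangular
L = Vxy.T @ np.linalg.inv(Nxy)   # eq. (15): V_xy^T = L N_xy

Lx = Vx.T @ np.linalg.inv(Nx.T)  # eq. (17)
Ly = Vy.T @ np.linalg.inv(Ny.T)

print(np.allclose(L, np.kron(Ly, Lx)))   # L = L_y (x) L_x
print(np.allclose(np.diag(L), 1.0))      # unit diagonal
print(np.allclose(L, np.tril(L)))        # lower triangular
```

All three checks print `True`, in agreement with Theorem 1 below.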


Since the diagonal elements of $L$ are equal to 1, (15) is the standard LU-decomposition of $V_{xy}^T$. The above results give rise to the following theorem.

Theorem 1. Let $V_{xy}^T = L_{xy} \cdot N_{xy}$ be the standard LU-decomposition of the transposed Kronecker product of the matrices $V_y$, $V_x$, i.e. $V_{xy} = V_y \otimes V_x$. Then $L_{xy} = L_y \otimes L_x$ maps the Newton polynomials to the monomial polynomials, and $N_{xy} = (N_y \otimes N_x)^T$ maps the Lagrange polynomials to the Newton polynomials.

Theorem 1 extends the results presented in [2] for the one-variable case. All the transformations described above are summarized in Table 1.

Table 1: Transformation matrices

Map                  | Basis transform                      | Coefficients transform
Lagrange to Monomial | $m(x,y) = V_{xy}^T \cdot \ell(x,y)$  | $V_x \cdot A \cdot V_y^T = F$
Lagrange to Newton   | $n(x,y) = N_{xy} \cdot \ell(x,y)$    | $N_x \cdot D \cdot N_y^T = F$
Newton to Monomial   | $m(x,y) = L_{xy} \cdot n(x,y)$       | $L_x^T \cdot A \cdot L_y = D$

4 On the computation of the determinant of a polynomial matrix

The computation of the determinant and the inverse of a two-variable polynomial matrix are problems in linear algebra which are usually solved with symbolic operations. However, their main disadvantage is their complexity. In order to overcome these difficulties we may use other techniques, such as interpolation methods. Some remarkable examples, but not the only ones, of the use of interpolation techniques in linear algebra problems are the computation of the inverse by [13] and [14] and the calculation of the determinant by [15] and [12].

In this section we present the computation of the determinant of a two-variable polynomial matrix by using Lagrange interpolation instead of the Newton interpolation presented in [12]. For the computation of the determinant we use the evaluation-interpolation technique. According to this technique, we first compute the determinants of the polynomial matrix at specific values, and afterwards we interpolate these values to find the determinant, which is a two-variable polynomial. The interpolation step can be done with the bivariate Lagrange interpolation method. An application of the proposed transformations is to replace the Lagrange interpolation method with the corresponding transformation. The transformation helps us to avoid the computational cost and the symbolic operations of the Lagrange method by using numerical linear algebra operations. The computation of the determinant is described in the following algorithm.

Algorithm 1. Let the two-variable polynomial matrix $B(x,y) \in \mathbb{R}[x,y]^{\ell \times \ell}$ where $B = [b_{i,j}(x,y)]$.

Step 1: Calculate the upper bound of the degree of the determinant in terms of each variable with the following formulas:

$$n = \min\left\{ \sum_{i=1}^{\ell} \max_{1 \leq j \leq \ell} \deg_x[b_{i,j}(x,y)],\ \sum_{j=1}^{\ell} \max_{1 \leq i \leq \ell} \deg_x[b_{i,j}(x,y)] \right\}$$

and

$$m = \min\left\{ \sum_{i=1}^{\ell} \max_{1 \leq j \leq \ell} \deg_y[b_{i,j}(x,y)],\ \sum_{j=1}^{\ell} \max_{1 \leq i \leq \ell} \deg_y[b_{i,j}(x,y)] \right\}$$

Step 2: Evaluate the determinant at the specific set of points $\tilde{S}_\Delta^{(n,m)} = \{(x_i, y_j) \mid i = 0,1,\dots,n,\ j = 0,1,\dots,m\}$, which lie on a rectangular grid. In this step we create the matrix $F$, which contains the Lagrange coefficients.

Step 3: Interpolate the values at the set $\tilde{S}_\Delta^{(n,m)}$. The interpolating polynomial is the determinant of $B(x,y)$. In this step we transform the matrix $F$ to the matrix $A$, where $A$ is the monomial coefficient matrix; the matrices $V_x$ and $V_y$ must be calculated. For the calculation of $V_x$ and $V_y$ we select the same sequence of values for $x_i$ and $y_j$, i.e. $x_i = i$ for $i = 0,1,\dots,n$ and $y_j = j$ for $j = 0,1,\dots,m$. With this approach we need fewer operations, because the matrices $V_x$ and $V_y$ have common elements. The computation of the matrix $A$ is given by

$$A = V_x^{-1} \cdot F \cdot V_y^{-T} \quad (18)$$

Thus, we do not interpolate with the Lagrange method; instead we perform matrix manipulations for the computation of the determinant of $B(x,y)$.
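Algorithm 1 amounts to one determinant evaluation per grid point followed by two Vandermonde solves. The following numpy sketch is ours, not the paper's code; it uses the matrix $B(x,y)$ of Example 1, with the Step 1 bounds $n = 2$, $m = 1$ hard-coded rather than derived symbolically:

```python
import numpy as np

def B(x, y):
    # The 3x3 polynomial matrix of Example 1, evaluated at (x, y).
    return np.array([[-1.0, 0.0, x],
                     [5.0, 1.0, -1.0],
                     [2.0, 3 * x * y, 2.0]])

n, m = 2, 1                                # Step 1 degree bounds
xs = np.arange(n + 1, dtype=float)         # x_i = i
ys = np.arange(m + 1, dtype=float)         # y_j = j

# Step 2: matrix F of determinant values (the Lagrange coefficients).
F = np.array([[np.linalg.det(B(xi, yj)) for yj in ys] for xi in xs])

# Step 3: transform F to the monomial coefficient matrix A, eq. (18).
Vx = np.vander(xs, increasing=True)
Vy = np.vander(ys, increasing=True)
A = np.linalg.solve(Vy, np.linalg.solve(Vx, F).T).T

print(np.round(A))   # [[-2, 0], [-2, -3], [0, 15]]
```

Row $i$, column $j$ of $A$ is the coefficient of $x^i y^j$, so $\det B(x,y) = -2 - 2x - 3xy + 15x^2 y$, matching the hand computation of Example 1.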


Example 1. Let the polynomial matrix

$$B(x,y) = \begin{bmatrix} -1 & 0 & x \\ 5 & 1 & -1 \\ 2 & 3xy & 2 \end{bmatrix}$$

Step 1: We compute the upper bound $n$ (resp. $m$) on the degree of $x$ (resp. $y$) in $p(x,y) = \det B(x,y)$:

$$n = \min\left\{ \sum_{i=1}^{3} \max_{1 \leq j \leq 3} \deg_x[b_{i,j}(x,y)],\ \sum_{j=1}^{3} \max_{1 \leq i \leq 3} \deg_x[b_{i,j}(x,y)] \right\} = \min\{1 + 0 + 1,\ 0 + 1 + 1\} = 2$$

and

$$m = \min\left\{ \sum_{i=1}^{3} \max_{1 \leq j \leq 3} \deg_y[b_{i,j}(x,y)],\ \sum_{j=1}^{3} \max_{1 \leq i \leq 3} \deg_y[b_{i,j}(x,y)] \right\} = \min\{0 + 0 + 1,\ 0 + 1 + 0\} = 1$$

Step 2: We evaluate the determinants at the set

$$\tilde{S}_\Delta^{(2,1)} = \{(x_i = i, y_j = j) \mid i = 0,1,2,\ j = 0,1\}$$

i.e.

$$\det(B(x_0,y_0)) = -2 \qquad \det(B(x_0,y_1)) = -2$$
$$\det(B(x_1,y_0)) = -4 \qquad \det(B(x_1,y_1)) = 8$$
$$\det(B(x_2,y_0)) = -6 \qquad \det(B(x_2,y_1)) = 48$$

and we create the matrix $F$, which is given by

$$F = \begin{bmatrix} -2 & -2 \\ -4 & 8 \\ -6 & 48 \end{bmatrix}$$

Step 3: Since $n > m$, we compute the matrix $V_x$,

$$V_x = \begin{bmatrix} 1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{bmatrix}$$

and the matrix $V_y$ is a sub-matrix of the matrix $V_x$, that is

$$V_y = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$$

Then, the matrix $A$ is given by

$$A = V_x^{-1} \cdot F \cdot V_y^{-T} = \begin{bmatrix} -2 & 0 \\ -2 & -3 \\ 0 & 15 \end{bmatrix}$$

Thus, the interpolating polynomial is

$$p(x,y) = -2 - 2x - 3xy + 15x^2 y$$

which is the determinant of the polynomial matrix $B(x,y)$.

The matrices $V_x^{-1}$ and $V_y^{-T}$ in (18) can be calculated in an alternative way. As in the papers [16], [3], we will use the wrapped convolution

$$a \star b = \begin{bmatrix} c_0 & c_1 & \dots & c_{n+m-2} \end{bmatrix}$$

where

$$a = \begin{bmatrix} a_0 & a_1 & \dots & a_{n-1} \end{bmatrix}, \qquad b = \begin{bmatrix} b_0 & b_1 & \dots & b_{m-1} \end{bmatrix}$$

with coordinates equal to

$$c_i = \sum_{k=0}^{i} a_k b_{i-k}, \quad i = 0,1,\dots,n+m-2$$

where $a_k = 0$ if $k < 0$ or $k \geq n$, and $b_k = 0$ if $k < 0$ or $k \geq m$. Let the vectors

$$X_k = \begin{bmatrix} -x_k & 1 \end{bmatrix} \ \text{ where } k = 0,1,\dots,n$$

and

$$Y_k = \begin{bmatrix} -y_k & 1 \end{bmatrix} \ \text{ where } k = 0,1,\dots,m$$

which are the corresponding coefficient vectors of the terms $(x - x_k)$ and $(y - y_k)$ respectively. We define the coefficient transformation matrices

$$T_{L_x} = \begin{bmatrix} \dfrac{X_1 \star X_2 \star X_3 \star \cdots \star X_{n-1} \star X_n}{\Pi_0^x} \\[2mm] \dfrac{X_0 \star X_2 \star X_3 \star \cdots \star X_{n-1} \star X_n}{\Pi_1^x} \\[2mm] \dfrac{X_0 \star X_1 \star X_3 \star \cdots \star X_{n-1} \star X_n}{\Pi_2^x} \\ \vdots \\ \dfrac{X_0 \star X_1 \star X_2 \star \cdots \star X_{n-2} \star X_n}{\Pi_{n-1}^x} \\[2mm] \dfrac{X_0 \star X_1 \star X_2 \star \cdots \star X_{n-2} \star X_{n-1}}{\Pi_n^x} \end{bmatrix}, \qquad \Pi_i^x = \prod_{\substack{k=0 \\ k \neq i}}^{n} (x_i - x_k)$$

and

$$T_{L_y} = \begin{bmatrix} \dfrac{Y_1 \star Y_2 \star Y_3 \star \cdots \star Y_{m-1} \star Y_m}{\Pi_0^y} \\ \vdots \\ \dfrac{Y_0 \star Y_1 \star Y_2 \star \cdots \star Y_{m-2} \star Y_{m-1}}{\Pi_m^y} \end{bmatrix}, \qquad \Pi_j^y = \prod_{\substack{k=0 \\ k \neq j}}^{m} (y_j - y_k)$$

In the case where $x_k = x_0 + k \cdot h_x$ and $y_k = y_0 + k \cdot h_y$ with $x_0 = y_0 = 0$ and $h_x = h_y = 1$, we have $x_k = y_k = k$. Thus, the above products are given by

$$\Pi_i^x = \prod_{\substack{k=0 \\ k \neq i}}^{n} (i - k) \ \text{ and } \ \Pi_j^y = \prod_{\substack{k=0 \\ k \neq j}}^{m} (j - k)$$

or equivalently

$$\Pi_i^x = (-1)^{n-i} \cdot (n-i)! \cdot i! \ \text{ and } \ \Pi_j^y = (-1)^{m-j} \cdot (m-j)! \cdot j!$$
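The construction of $T_{L_x}$ above can be sketched with numpy, where `np.convolve` plays the role of the convolution $\star$ and the factorial formula supplies the denominators for the equidistant nodes $x_k = k$. The code and names are our own illustration; the final line checks the result against $V_x^{-1}$, which is the identity this transformation is meant to realize:

```python
import numpy as np
from functools import reduce
from math import factorial

n = 2
xs = np.arange(n + 1)                      # equidistant nodes x_k = k

rows = []
for i in range(n + 1):
    # Coefficients of prod_{k != i} (x - x_k): repeated convolution of
    # the factors X_k = [-x_k, 1].
    factors = [np.array([-xk, 1.0]) for k, xk in enumerate(xs) if k != i]
    num = reduce(np.convolve, factors)
    # Denominator Pi_i^x = (-1)^(n-i) (n-i)! i! for x_k = k.
    rows.append(num / ((-1) ** (n - i) * factorial(n - i) * factorial(i)))
T_Lx = np.array(rows)

Vx = np.vander(xs, increasing=True)
print(np.allclose(T_Lx.T, np.linalg.inv(Vx)))   # T_Lx^T = V_x^{-1}: True
```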


For the matrices $X_L$, $X$ and $T_{L_x}$ it holds that

$$X_L = T_{L_x} \cdot X$$

Similarly, for the respective matrices $Y_L$, $Y$ and $T_{L_y}$ it holds that

$$Y_L = T_{L_y} \cdot Y$$

Thus, the polynomial can be written as

$$p(x,y) = X_L^T \cdot F \cdot Y_L = X^T \cdot T_{L_x}^T \cdot F \cdot T_{L_y} \cdot Y$$

and we have that

$$A = T_{L_x}^T \cdot F \cdot T_{L_y}$$

Therefore, we have shown that

$$T_{L_x}^T \equiv V_x^{-1} \ \text{ and } \ T_{L_y} \equiv V_y^{-T}$$

Example 2. According to Example 1 we have the set

$$\tilde{S}_\Delta^{(2,1)} = \{(x_i = i, y_j = j) \mid i = 0,1,2,\ j = 0,1\}$$

and the matrix $F$, which is given by

$$F = \begin{bmatrix} -2 & -2 \\ -4 & 8 \\ -6 & 48 \end{bmatrix}$$

We compute the elements of the matrices $T_{L_x}$ and $T_{L_y}$:

$$X_1 \star X_2 = [2, -3, 1] \qquad X_0 \star X_2 = [0, -2, 1] \qquad X_0 \star X_1 = [0, -1, 1]$$

$$\Pi_0^x = (-1)^{n-i} \cdot (n-i)! \cdot i! = (-1)^{2-0} \cdot (2-0)! \cdot 0! = 2$$
$$\Pi_1^x = (-1)^{1} \cdot 1! \cdot 1! = -1$$
$$\Pi_2^x = (-1)^{0} \cdot 0! \cdot 2! = 2$$

and

$$Y_1 = [-1, 1] \qquad Y_0 = [0, 1]$$

$$\Pi_0^y = (-1)^{m-j} \cdot (m-j)! \cdot j! = (-1)^{1-0} \cdot (1-0)! \cdot 0! = -1$$
$$\Pi_1^y = (-1)^{0} \cdot 0! \cdot 1! = 1$$

Thus, we have

$$T_{L_x} = \begin{bmatrix} \dfrac{X_1 \star X_2}{\Pi_0^x} \\[1mm] \dfrac{X_0 \star X_2}{\Pi_1^x} \\[1mm] \dfrac{X_0 \star X_1}{\Pi_2^x} \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac{3}{2} & \tfrac{1}{2} \\ 0 & 2 & -1 \\ 0 & -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix}$$

and

$$T_{L_y} = \begin{bmatrix} \dfrac{Y_1}{\Pi_0^y} \\[1mm] \dfrac{Y_0}{\Pi_1^y} \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}$$

Then, the matrix $A$ is given by

$$A = T_{L_x}^T \cdot F \cdot T_{L_y} = \begin{bmatrix} -2 & 0 \\ -2 & -3 \\ 0 & 15 \end{bmatrix}$$

Thus, the interpolating polynomial is

$$p(x,y) = -2 - 2x - 3xy + 15x^2 y$$

which is the determinant of the polynomial matrix $B(x,y)$.

5 Conclusions

The first result that comes directly from this short note is that, in the case where we select interpolation points that belong to $S_\Delta^{(n,m)}$, the interpolating polynomial problem is poised, since in that case the transforming matrices that we use become nonsingular and a unique solution for the coordinate vectors exists. The second result is that any interpolating polynomial is easily expressed in the Lagrange basis, since in that case all we need are the values of the function that we want to interpolate. Then, by using the transformations that we have presented in this work, we can always express the interpolating polynomial in other bases, like the monomial and the Newton bases. Additionally, transformations between the monomial, Lagrange and Newton bases have been provided, and the results in [1] and [2] have been extended to bivariate polynomials. Finally, an illustrative example has been presented which uses the transformations from the Lagrange to the monomial basis. A numerical algorithm for the computation of the transforming matrices used in this application is also given.

Acknowledgement

The authors wish to acknowledge financial support provided by the Research Committee of the Serres Institute of Education and Technology under grant SAT/IC/07022013-15/5.

References

[1] D.R. Hill. Interpolating polynomials and their coordinates relative to a basis. The College Mathematics Journal, 23(4):329–333, 1992.
[2] W. Gander. Change of basis in polynomial interpolation. Numerical Linear Algebra with Applications, 12(8):769–778, 2005.
[3] J. Kapusta and R. Smarzewski. Fast algorithms for multivariate interpolation and evaluation at special points. Journal of Complexity, 25(4):332–338, 2009.


[4] J. Kapusta. An efficient algorithm for multivariate Maclaurin–Newton transformation. 8(2):5–14, 2008.
[5] R. Smarzewski and J. Kapusta. Fast Lagrange–Newton transformations. Journal of Complexity, 23(3):336–345, 2007.
[6] G. M. Phillips. Interpolation and Approximation by Polynomials. Springer-Verlag, 2003.
[7] E.E. Tyrtyshnikov. How bad are Hankel matrices? Numerische Mathematik, 67(2):261–269, 1994.
[8] W. Gautschi and G. Inglese. Lower bounds for the condition number of Vandermonde matrices. Numerische Mathematik, 52(3):241–250, 1987.
[9] A. Eisinberg and G. Fedele. On the inversion of the Vandermonde matrix. Applied Mathematics and Computation, 174(2):1384–1397, 2006.
[10] T. Sauer and Y. Xu. On multivariate Lagrange interpolation. Mathematics of Computation, 64:1147–1170, 1995.
[11] M. Gasca and T. Sauer. Polynomial interpolation in several variables. Advances in Computational Mathematics, 12:377–410, 2000.
[12] D. N. Varsamis and N. P. Karampetakis. On the Newton bivariate polynomial interpolation with applications. Multidimensional Systems and Signal Processing, 25(1):179–209, 2014.
[13] A. Schuster and P. Hippe. Inversion of polynomial matrices by interpolation. IEEE Transactions on Automatic Control, 37:363–365, 1992.
[14] S. Vologiannidis and N. P. Karampetakis. Inverses of multivariable polynomial matrices by discrete Fourier transforms. Multidimensional Systems and Signal Processing, 15:341–361, 2004.
[15] L. E. Paccagnella and G. L. Pierobon. FFT calculation of a determinantal polynomial. IEEE Transactions on Automatic Control, 29(3):401–402, 1976.
[16] A. Bostan and É. Schost. Polynomial evaluation and interpolation on special sets of points. Journal of Complexity, 21(4):420–446, 2005.

Dimitris N. Varsamis was born in Serres, Greece, in 1976. He received the bachelor degree in Mathematics from the Department of Mathematics, University of Ioannina, Greece, and the MSc in "Theoretical computer science, control and systems theory" from the Department of Mathematics, Aristotle University of Thessaloniki, Greece. He was awarded the Ph.D. at the Department of Mathematics (Aristotle University of Thessaloniki, Greece) in 2012, under the supervision of Associate Professor Nicholas Karampetakis. The subject of his doctoral thesis is "On the Development of Computational Methods for the solution of Control Theory problems". From 2001 to 2009 he was a scientific associate with the Department of Informatics & Communications, Technological Educational Institute of Serres, Greece. Since 2009, he has been with the Technological Educational Institute of Central Macedonia (Serres, Greece), where he is currently an Assistant Professor in the Dept. of Informatics Engineering. Dr. Varsamis is the co-author of 25 articles in international journals and conference proceedings. His research interests include Scientific Programming, Computational Methods and Control System Theory.

Nicholas P. Karampetakis was born in Drama, Greece, in 1967. He received the bachelor degree in Mathematics and the Ph.D. in Mathematics from the Department of Mathematics, Aristotle University of Thessaloniki, Greece, in 1989 and 1993 respectively. From November 1994 to September 1995 he was a research associate with the Department of Mathematical Sciences, Loughborough University of Technology, England. From September 1995 to March 1999 he was a Research Associate with the Department of Mathematics, Aristotle University of Thessaloniki, Greece. From August 2000 to July 2009 he was an Assistant Professor in the same Department, and since July 2009 he has been Associate Professor. During the above periods he received many fellowships from the Greek Government, the British Council and the Engineering and Physical Sciences Research Council of England, and contributed to many research projects sponsored by the Greek Government and the European Union. His present research interests lie mainly in algebraic methods for computer aided design of control systems (CACSD), numerical and symbolic algorithms for CACSD, polynomial matrix theory and issues related to mathematical systems theory. Dr. Karampetakis is a frequent contributor to the field in the form of journal articles, conference presentations and reviews. He is a Senior Member of IEEE and a vice-Chair of the IEEE Action Group on Symbolic Methods for CACSD. He is also an Associate Editor a) of the Journal of Multidimensional Systems and Signal Processing and b) of the International Journal of Applied Mathematics and Computer Science.

Paris Mastorocostas was born in Thessaloniki, Greece, in 1971. He received the Diploma and Ph.D. degrees in Electrical & Computer Engineering from Aristotle University of Thessaloniki, Greece. Presently he serves as Professor at the Dept. of Informatics & Communications, Technological Education Institute of Serres, Greece. His research interests include fuzzy and neural systems with applications to identification and classification processes, data mining and scientific programming.