
ITM Web of Conferences 24, 01008 (2019) https://doi.org/10.1051/itmconf/20192401008 AMCSE 2018

A particular block Vandermonde matrix

Malika Yaici1,*, and Kamel Hariche2

1 Laboratoire LTII, University of Bejaia, Algeria
2 IGEE Institute, University of Boumerdes, Algeria

Abstract. The Vandermonde matrix is ubiquitous in mathematics and engineering. Both the Vandermonde matrix and its inverse are often encountered in control theory, in the derivation of numerical formulas, and in systems theory. In some cases block Vandermonde matrices are used. Block Vandermonde matrices, considered in this paper, are constructed from a full set of solvents of a corresponding matrix polynomial. These solvents represent block poles and block zeros of a linear multivariable dynamical time-invariant system described in matrix fractions. Control techniques for such systems deal with the inverse or the determinant of block Vandermonde matrices. Methods to compute the inverse of a block Vandermonde matrix have not been studied, whereas the inversion of block matrices (or partitioned matrices) is very well studied. In this paper, properties of these matrices and iterative algorithms to compute the determinant and the inverse of a block Vandermonde matrix are given. A parallelization of these algorithms is also presented. The proposed algorithms are validated by a comparison based on algorithmic complexity.

1 Introduction

The Vandermonde matrix is very important and its uses include interpolation, coding theory, signal processing, etc. Literature on the Vandermonde matrix goes back to 1965 and even before, where many papers deal with the study of its properties, its inverse and its determinant [1-5]. In [6], the author proved that every generic n×n matrix is a product of a Vandermonde matrix and the transpose of a Vandermonde matrix, and in [7] a Vandermonde matrix is decomposed to obtain variants of the Lagrange interpolation polynomial of degree ≤ n that passes through n+1 points.

One of the first papers where the term block Vandermonde matrix (BVM) is used, to my knowledge, is [8], but it is in [9] and [10] that the concept is fully studied; the block Vandermonde matrix is defined and its properties are explored. A method, based on Gaussian elimination, to compute the determinant is also proposed. In [11], the author gives a method to determine the biggest integer n = v(q,t) for which there exist t×t matrices {A_1, …, A_n} with highest power q such that the BVM V = (A_j^{i-1}), i, j ≤ n, is invertible. In [12], linear diffusion layers achieving maximal branch numbers, called MDS (maximal distance separable) matrices, are constructed from block Vandermonde matrices and their transposes. Under some conditions these MDS matrices are involutory (each matrix is its own inverse), which is of great value in cryptography.

Methods to compute the inverse of a block Vandermonde matrix have not been studied, but the inversion of block matrices (or partitioned matrices) is very well studied. The method to compute the inverse of a 2×2 block matrix is known, under the condition that at least one of the two diagonal blocks must be non-singular. In [13], this condition is overcome by using three new types of symbolic block matrix inversion. In [14], the properties of block matrices with block banded inverses are investigated to derive efficient matrix inversion algorithms for such matrices. In particular, a recursive algorithm to invert a full matrix whose inverse is structured as a block matrix, and a recursive algorithm to compute the inverse of a structured block tridiagonal matrix, are proposed.

Block Vandermonde matrices constructed using matrix polynomial solvents are very useful in control engineering, for example in the control of multi-variable dynamic systems described in matrix fractions (see [15]). It is in these particular BVM that we are interested.

Parallelization may be a solution to problems where large size matrices, such as BVM, are used. Large scale matrix inversion has been used in many domains, and the block-based Gauss-Jordan (G-J) algorithm, as a classical method of large matrix inversion, has become the focus of many researchers. But the large parallel granularity in existing algorithms restricts the performance of the parallel block-based G-J algorithm, especially in a cluster environment consisting of PCs or workstations. The author of [16] presents a fine-grained parallel G-J algorithm to settle this problem.

In this paper a new algorithm to compute the inverse and the determinant of a block Vandermonde matrix constructed from solvents is given. An implementation using Matlab has been undertaken, in order to obtain the speed-up of the proposed algorithms compared to Matlab built-in functions.

* Corresponding author: [email protected]

© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/).

After this introduction, some needed mathematical preliminaries are presented in section 2, and then the main results come in section 3. A parallelization of the proposed algorithms is given in section 4. A conclusion finishes the paper.

2 Mathematical preliminaries

For a set of n m×m matrices {A_1, A_2, …, A_n}, the corresponding block Vandermonde matrix (BVM) of order t is defined as follows:

V = \begin{bmatrix} I & I & \cdots & I \\ A_1 & A_2 & \cdots & A_n \\ \vdots & \vdots & & \vdots \\ A_1^{t-1} & A_2^{t-1} & \cdots & A_n^{t-1} \end{bmatrix}   (1)

The block Vandermonde matrices we will be dealing with are constructed from solvents of matrix polynomials. In this section a recall on matrix polynomials, solvents and their properties will be given.

2.1 Matrix polynomials

Definition 1: An mth order, rth degree matrix polynomial (also called λ-matrix) is given by [15]:

A(t) = A_r t^r + A_{r-1} t^{r-1} + \cdots + A_1 t + A_0   (2)

where the A_i are m×m real matrices and t is a complex number.

Definition 2: Let X be an m×m complex matrix. A right matrix polynomial is defined by:

A_R(X) = A_r X^r + A_{r-1} X^{r-1} + \cdots + A_1 X + A_0   (3)

and a left matrix polynomial is defined by:

A_L(X) = X^r A_r + X^{r-1} A_{r-1} + \cdots + X A_1 + A_0   (4)

Definition 3: A complex number λ_i is called a latent value of A(t) if it is a solution of the scalar polynomial equation det(A(t)) = 0. The non-trivial vector v_i, solution of the equation A(λ_i) v_i = 0, is called a primary right latent vector associated to the latent value λ_i. Similarly, the non-trivial row vector w_i, solution of the equation w_i A(λ_i) = 0, is called a primary left latent vector associated with λ_i [15].

2.2 Solvents

Definition 4: A right solvent (or block root) R of a polynomial matrix A(t) is defined by:

A(R) = A_r R^r + A_{r-1} R^{r-1} + \cdots + A_1 R + A_0 = 0_m   (5)

and the left solvent L of a polynomial matrix A(t) is defined by:

A(L) = L^r A_r + L^{r-1} A_{r-1} + \cdots + L A_1 + A_0 = 0_m   (6)

Theorem 1: If the latent roots of A(t) are distinct, then A(X) has a complete set of solvents.

Proof 1: see [9].

A solvent is automatically non-singular: the determinant is non-null because its latent values (eigenvalues) are distinct, and the latent vectors (eigenvectors) must be linearly independent.

2.3 Block Vandermonde matrices

As for an eigenvalue system, a block Vandermonde matrix can be defined for solvents with particular properties [15].

Let {R_1, …, R_r} be a set of r right solvents (m×m matrices) of a corresponding matrix polynomial A(t). A row block Vandermonde matrix of order r is an rm×rm matrix defined as:

V = \begin{bmatrix} I_m & I_m & \cdots & I_m \\ R_1 & R_2 & \cdots & R_r \\ \vdots & \vdots & & \vdots \\ R_1^{r-1} & R_2^{r-1} & \cdots & R_r^{r-1} \end{bmatrix}   (7)

and, given a set of r left solvents L_i (m×m matrices) of A(t), a column block Vandermonde matrix of order r is defined as:

V = \begin{bmatrix} I_m & L_1 & \cdots & L_1^{r-1} \\ I_m & L_2 & \cdots & L_2^{r-1} \\ \vdots & \vdots & & \vdots \\ I_m & L_r & \cdots & L_r^{r-1} \end{bmatrix}   (8)

Theorem 2: If A(t) has distinct latent roots, then there exists a complete set of right solvents of A(X), R_1, …, R_r, and for any such set of solvents the BVM V is nonsingular.

Proof 2: see [9].

Remark 1: In [17] the general right (left) block Vandermonde matrix constructed from solvents, where a right (left) solvent R_i (L_i) with multiplicity m_i exists, is given.

2.4 Non-singularity

Definition 5: If we let σ[A(t)] denote the set of all latent roots of A(t) and σ[R_i] the set of eigenvalues of the right solvent R_i, then a complete set of right solvents is obtained if we can find r right solvents such that [10]:

\bigcup_{i=1}^{r} \sigma[R_i] = \sigma[A(t)], \qquad \sigma[R_i] \cap \sigma[R_j] = \emptyset \ \text{for} \ i \neq j   (9)
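As an illustration, the condition of equation (9) can be checked numerically. The sketch below (NumPy assumed; the solvents R_1, R_2 are hypothetical 2×2 blocks, not taken from a specific matrix polynomial) builds the row BVM of equation (7) from blocks with disjoint spectra and confirms that it is nonsingular:

```python
import numpy as np

def block_vandermonde(solvents):
    """Row block Vandermonde matrix of equation (7): block (i, j) holds R_j^i."""
    r = len(solvents)
    m = solvents[0].shape[0]
    V = np.empty((r * m, r * m))
    for j, R in enumerate(solvents):
        P = np.eye(m)                                  # R_j^0 = I_m
        for i in range(r):
            V[i*m:(i+1)*m, j*m:(j+1)*m] = P
            P = P @ R                                  # next block power R_j^(i+1)
    return V

# Hypothetical right solvents: sigma[R1] = {1, 2} and sigma[R2] = {3, 4},
# so the spectra are disjoint, as equation (9) requires.
R1 = np.array([[1.0, 1.0],
               [0.0, 2.0]])
R2 = np.array([[3.0, 0.0],
               [1.0, 4.0]])
V = block_vandermonde([R1, R2])
assert np.linalg.matrix_rank(V) == V.shape[0]          # the BVM is nonsingular
```

Repeating the check with overlapping spectra (e.g. replacing R_2 by R_1) makes V rank deficient, in line with the nonsingularity conditions above.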


The block Vandermonde matrix thus constructed is nonsingular.

Just as for the right solvents, the existence of a left block root depends on the existence of a set of m linearly independent left latent vectors. A complete set of r left block roots is obtained if we can find r left block roots where each block root involves a distinct set of m latent roots of A(t). This in turn requires that, for each such distinct set, we can find a corresponding set of linearly independent left latent vectors.

Remark 2: A complete set of right or left solvents will then describe completely the latent (eigen) structure of A(t).

3 Main results

The following results are mainly given for a row-BVM constructed from right solvents. The same procedures can be applied to a column-BVM constructed from left solvents.

3.1 Iterative construction of a BVM

Let V_1 = I_m, where I_m is the m×m identity matrix. Then the BVM of order 2 constructed from two solvents is as follows:

V_2 = \begin{bmatrix} I_m & I_m \\ R_1 & R_2 \end{bmatrix} = \begin{bmatrix} V_1 & B_1 \\ C_1 & D_1 \end{bmatrix}   (10)

if we define the matrices B_1 = I_m, C_1 = R_1 and D_1 = R_2. A BVM of order three (3 solvents) is then as follows:

V_3 = \begin{bmatrix} I_m & I_m & I_m \\ R_1 & R_2 & R_3 \\ R_1^2 & R_2^2 & R_3^2 \end{bmatrix} = \begin{bmatrix} V_2 & B_2 \\ C_2 & D_2 \end{bmatrix}   (11)

where B_2 = \begin{bmatrix} I_m \\ R_3 \end{bmatrix}, C_2 = \begin{bmatrix} R_1^2 & R_2^2 \end{bmatrix} and D_2 = R_3^2.

The following theorem is a deduction from the previous results.

Theorem 3: A BVM of order r, constructed from r solvents, is as follows:

V_r = \begin{bmatrix} V_{r-1} & B_{r-1} \\ C_{r-1} & D_{r-1} \end{bmatrix}   (12)

where B_{r-1} = \begin{bmatrix} I_m \\ R_r \\ \vdots \\ R_r^{r-2} \end{bmatrix}, C_{r-1} = \begin{bmatrix} R_1^{r-1} & R_2^{r-1} & \cdots & R_{r-1}^{r-1} \end{bmatrix} and D_{r-1} = R_r^{r-1}.

Proof 3: It is straightforward from the block partitioning of the BVM V_r as follows:

V_r = \left[\begin{array}{ccc|c} I_m & \cdots & I_m & I_m \\ R_1 & \cdots & R_{r-1} & R_r \\ \vdots & & \vdots & \vdots \\ R_1^{r-2} & \cdots & R_{r-1}^{r-2} & R_r^{r-2} \\ \hline R_1^{r-1} & \cdots & R_{r-1}^{r-1} & R_r^{r-1} \end{array}\right]   (13)

3.2 Inverse of a BVM

From [18], the inverse of a block partitioned matrix is given as follows:

\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + E S_A^{-1} F & -E S_A^{-1} \\ -S_A^{-1} F & S_A^{-1} \end{bmatrix}   (14)

where E = A^{-1} B, F = C A^{-1} and S_A = D - C A^{-1} B, provided that A^{-1} exists and S_A, the Schur complement of matrix A, is nonsingular. A similar inverse formula exists if D^{-1} exists and S_D, the Schur complement of matrix D, is nonsingular.

Let us compute the inverse of a BVM of order r, as given in equation (7), by using equation (14); in our case both diagonal entries are non-singular:

V_r^{-1} = \begin{bmatrix} V_{r-1}^{-1} + E_{r-1} S_{r-1}^{-1} F_{r-1} & -E_{r-1} S_{r-1}^{-1} \\ -S_{r-1}^{-1} F_{r-1} & S_{r-1}^{-1} \end{bmatrix}   (15)

where S_{r-1} is the Schur complement of the matrix V_{r-1} (computed at iteration r−1):

S_{r-1} = D_{r-1} - C_{r-1} V_{r-1}^{-1} B_{r-1}   (16)

with E_{r-1} = V_{r-1}^{-1} B_{r-1} and F_{r-1} = C_{r-1} V_{r-1}^{-1}, and D_{r-1}, C_{r-1}, B_{r-1} as given in equation (12). The same procedure is used to determine the inverse of the BVM V_{r-1}, so the algorithm is an iterative procedure.

Algorithm: Let a complete set of solvents {R_1, …, R_r} and the corresponding BVM V_r, as given in equation (7), be given. From the matrix V_r, all sub-matrices (B_i, C_i, D_i and S_i) are first constructed, and then the inverse is computed. The algorithm uses a function which computes the inverse of the Schur complement.


Step 1: Let INV = I_m
Step 2:
for i = 2*m to r*m with step = m
    B_{i-1} = V_r(1:i-m, i-m+1:i);
    C_{i-1} = V_r(i-m+1:i, 1:i-m);
    D_{i-1} = V_r(i-m+1:i, i-m+1:i);
    E_{i-1} = INV * B_{i-1};
    F_{i-1} = C_{i-1} * INV;
    S_{i-1} = D_{i-1} - C_{i-1} * INV * B_{i-1};
    INV = [INV + E_{i-1} * S_{i-1}^{-1} * F_{i-1},  -E_{i-1} * S_{i-1}^{-1};
           -S_{i-1}^{-1} * F_{i-1},  S_{i-1}^{-1}];
endfor

Algorithmic Complexity: The iterative algorithm requires r−2 iterations. The procedure consists of a set of assignments and the computation of the inverse of S_r. The size of S_r is m×m, and Matlab uses the Gaussian elimination method rather than an inverse algorithm; Gaussian elimination has a complexity of O(m³). The overall complexity of our algorithm is O((r−2)·m³).
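A minimal NumPy sketch of this iterative procedure (an illustration, not the authors' Matlab code): the leading inverse INV grows by one block row and column per step via equations (15)-(16), and the determinants of the successive Schur complements are accumulated as a running product, which also yields det(V_r) (a standard property of block partitioned matrices [18]). The diagonal solvents used in the demo are hypothetical.

```python
import numpy as np

def bvm_inverse_det(V, m):
    """Iterative Schur-complement inversion of an (r*m) x (r*m) block
    Vandermonde matrix V built from m x m blocks; also returns det(V)
    accumulated as the product of the Schur-complement determinants."""
    n = V.shape[0]
    INV = np.linalg.inv(V[:m, :m])        # V_1 (equal to I_m for equation (7))
    det = np.linalg.det(V[:m, :m])
    for i in range(2 * m, n + 1, m):      # grow by one block row/column per step
        B = V[:i-m, i-m:i]
        C = V[i-m:i, :i-m]
        D = V[i-m:i, i-m:i]
        E = INV @ B
        F = C @ INV
        S = D - C @ INV @ B               # Schur complement of the leading block
        iS = np.linalg.inv(S)
        INV = np.block([[INV + E @ iS @ F, -E @ iS],
                        [-iS @ F,          iS]])
        det *= np.linalg.det(S)
    return INV, det

# Demo: hypothetical diagonal solvents with pairwise disjoint spectra.
Rs = [np.diag([1.0, 2.0]), np.diag([3.0, 4.0]), np.diag([5.0, 6.0])]
V = np.vstack([np.hstack([np.linalg.matrix_power(R, i) for R in Rs])
               for i in range(3)])
INV, det = bvm_inverse_det(V, 2)
assert np.allclose(INV, np.linalg.inv(V))
assert np.isclose(det, np.linalg.det(V))
```

Only the m×m Schur complement is factorized at each step, consistent with the complexity count of the sequential algorithm.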

3.3 Determinant of a BVM

From [18], the determinant of a block partitioned matrix is as follows, if A^{-1} exists:

\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A) \cdot \det(D - C A^{-1} B)   (17)

A similar determinant formula exists if D^{-1} exists.

Using this equation and the block decompositions given in the previous sections, we deduce the determinant of a BVM of order r, as given in equation (7), by using equation (17):

\det V_r = \det V_{r-1} \cdot \det S_{r-1}   (18)

where S_{r-1} is the Schur complement of the matrix V_{r-1} (defined in equation (16)).

Remark 3: The determinant is, in general, needed together with the inverse, so the inverse of V_{r-1} is computed using the previous algorithm.

Algorithm: Let a complete set of solvents {R_1, …, R_r} and the corresponding BVM V_r, as given in equation (7), be given. From the matrix V_r all sub-matrices (V_i, B_i, C_i, D_i and S_i) can be constructed and the determinant computed. The algorithm uses a function which computes the determinant of the Schur complement.

Step 1: Let Det = 1
Step 2:
for i = 2*m to r*m with step = m
    V_{i-1} = V_r(1:i-m, 1:i-m);
    B_{i-1} = V_r(1:i-m, i-m+1:i);
    C_{i-1} = V_r(i-m+1:i, 1:i-m);
    D_{i-1} = V_r(i-m+1:i, i-m+1:i);
    S_{i-1} = D_{i-1} - C_{i-1} * V_{i-1}^{-1} * B_{i-1};
    Det = Det * determinant(S_{i-1});
endfor

Algorithmic Complexity: As for the inverse algorithm, the determinant iterative algorithm needs r−2 iterations. The procedure consists of a set of assignments, the computation of the inverse of a BVM and the determinant of S. The size of S is m×m, and Matlab uses the triangular factors of the Gaussian elimination method to compute the determinant and the inverse of a square matrix; Gaussian elimination has a complexity of O(m³), so the complexity also depends on the size of S_r. The overall complexity of our algorithm is O((r−2)·m³).

4 Parallelization

For both the determinant and the inverse, a parallelization is possible because of the decomposition step in the proposed algorithms, even though the iterative approach is difficult to parallelize optimally. The above decomposition is useful only if a parallel execution is possible, otherwise the benefits are negligible.

There exist two kinds of parallelization of matrix calculus: data decomposition or task (calculus) decomposition. Because the data decomposition is already performed, a task decomposition is proposed.

4.1 Parallel inverse of BVM

From the data decomposition, a master-slave task decomposition was performed on the sequential algorithm. The master task will execute the data scattering and gathering and the sequential instructions, and at least three (3) slave tasks will execute the parallel blocks.

Algorithm:
Step 1: Let INV = I_m
Step 2:
for i = 2*m to r*m with step = m
    Parallel block
        B_{i-1} = V_r(1:i-m, i-m+1:i);
        C_{i-1} = V_r(i-m+1:i, 1:i-m);
        D_{i-1} = V_r(i-m+1:i, i-m+1:i);
    End Parallel block
    E_{i-1} = INV * B_{i-1};
    F_{i-1} = C_{i-1} * INV;
    S_{i-1} = D_{i-1} - C_{i-1} * INV * B_{i-1};
    iS = S_{i-1}^{-1};
    Parallel block
        INV = [INV + E_{i-1} * iS * F_{i-1},  -E_{i-1} * iS;
               -iS * F_{i-1},  iS];
    End Parallel block
endfor

4.2 Parallel determinant of BVM

As for the preceding algorithm, a master-slave task decomposition has been performed.
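As a sketch of such a master-slave decomposition (a Python thread-pool analogue of the Matlab parfor blocks used in the paper; the helper names are illustrative, not from the paper), the slave tasks extract the sub-matrices of each iteration in parallel while the master performs the sequential Schur step:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def get_block(M, r0, r1, c0, c1):
    """Slave task: extract one sub-matrix."""
    return M[r0:r1, c0:c1]

def parallel_bvm_det(V, m):
    """Master task: per iteration, scatter the four block extractions to
    slave tasks (the 'parallel block'), then do the sequential Schur step."""
    n = V.shape[0]
    det = np.linalg.det(V[:m, :m])
    with ThreadPoolExecutor(max_workers=4) as pool:
        for i in range(2 * m, n + 1, m):
            # Parallel block: the four extractions are independent.
            fV = pool.submit(get_block, V, 0, i - m, 0, i - m)
            fB = pool.submit(get_block, V, 0, i - m, i - m, i)
            fC = pool.submit(get_block, V, i - m, i, 0, i - m)
            fD = pool.submit(get_block, V, i - m, i, i - m, i)
            Vp, B, C, D = (f.result() for f in (fV, fB, fC, fD))  # gather
            # Sequential part (master): Schur complement and running product.
            S = D - C @ np.linalg.inv(Vp) @ B
            det *= np.linalg.det(S)
    return det
```

With NumPy slices the extracted tasks are very cheap, which mirrors the observation reported in the conclusion: the communication and coordination overhead can outweigh the parallel gain.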


The master task will execute the data scattering and gathering and the sequential parts, and at least four (4) slave tasks will execute the parallel blocks.

Algorithm:
Step 1: Let Det = 1
Step 2:
for i = 2*m to r*m with step = m
    Parallel block
        V_{i-1} = V_r(1:i-m, 1:i-m);
        B_{i-1} = V_r(1:i-m, i-m+1:i);
        C_{i-1} = V_r(i-m+1:i, 1:i-m);
        D_{i-1} = V_r(i-m+1:i, i-m+1:i);
    End Parallel block
    S_{i-1} = D_{i-1} - C_{i-1} * V_{i-1}^{-1} * B_{i-1};
    Det = Det * determinant(S_{i-1});
endfor

4.3 Algorithmic complexity

The overall time complexity of the two algorithms is the same as before, O((r−2)·m³), but the detailed complexity is slightly better.

An implementation using Matlab has been done. Matlab (classical) offers parallel execution of a set of instructions (parallel block) using parfor, and uses the number of cores available on the computer through matlabpool.

5 Conclusions

In this paper new results on the computation of the inverse and the determinant of a particular block Vandermonde matrix are given. Efficient algorithms are proposed with their algorithmic complexities.

We used the "tic/toc" functions of Matlab to determine the execution time of our algorithms, to be compared to the execution time of the Matlab built-in functions: the proposed inverse algorithm is found 10 times quicker, and the determinant algorithm 14 times quicker. The parallel execution time was greater than the sequential execution time, because of the large amount of data flowing between the cores at each iteration.

These new computation techniques are very useful in control theory, where systems are described in matrix fraction descriptions and their properties are deduced from solvents. In this case block Vandermonde matrices constructed from solvents, their inverses and determinants are needed. Future work is within this axis and consists in proposing a parallel algorithm to solve the compensator (Diophantine) equation, where block Vandermonde matrices constructed using matrix polynomial solvents are involved.

References

1. A. Klinger, The Vandermonde matrix. The American Mathematical Monthly 74 (5), pp. 571-574 (1967)
2. V. Pless, Introduction to the Theory of Error-Correcting Codes. John Wiley, New York (1982)
3. R. E. Blahut, Theory and Practice of Error Control Codes. Addison Wesley, Reading, Mass., USA (1983)
4. G. H. Golub, C. F. Van Loan, Matrix Computation. Johns Hopkins Univ. Press, Baltimore, pp. 119-124 (1983)
5. J. J. Rushanan, On the Vandermonde matrix. The American Mathematical Monthly 96 (10), pp. 921-924 (1989)
6. K. Ye, New classes of matrix decompositions. Lin. Algeb. and its App. 514, pp. 47-81 (2017)
7. I.-P. Kim, A. R. Kräuter, Decompositions of a matrix by means of its dual matrices with applications. Lin. Algeb. and its App. 537, pp. 100-117 (2018)
8. J. E. Dennis, J. F. Traub, R. P. Weber, On the matrix polynomial, lambda-matrix and block eigenvalue problems. Computer Science Department Tech. Rep., Carnegie-Mellon Univ., Pittsburgh, Pa., USA (1971)
9. J. E. Dennis, J. F. Traub, R. P. Weber, The algebraic theory of matrix polynomials. SIAM J. Numer. Anal. 13 (6), pp. 831-845 (1976)
10. J. E. Dennis, J. F. Traub, R. P. Weber, Algorithms for solvents of matrix polynomials. SIAM J. Numer. Anal. 15 (3), pp. 523-533 (1978)
11. D. R. Richman, A result about block Vandermonde matrices. Lin. and Multilin. Alg. 21, pp. 181-189 (1987)
12. Q. Li, B. Wu, Z. Liu, Direct constructions of (involutory) MDS matrices from block Vandermonde and Cauchy-like matrices. Int. Workshop on the Arithmetic of Finite Fields (WAIFI), Bergen, Norway, June 14-16 (2018)
13. Y. Choi, J. Cheong, New expressions of 2×2 block matrix inversion and their application. IEEE Trans. Auto. Cont. 54 (11), pp. 2648-2653 (2009)
14. A. Asif, J. M. F. Moura, Inversion of block matrices with block banded inverses: application to Kalman-Bucy filtering. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, 1, pp. 608-611 (2000)
15. M. Yaici, K. Hariche, On eigenstructure assignment using block poles placement. Euro. J. Cont. 20, pp. 217-226 (2014)
16. L. Shang, Z. Wang, S. G. Petiton, F. Xu, Solution of large scale matrix inversion on cluster and grid. 7th Int. Conf. on Grid and Cooperative Computing, Shenzhen, China, pp. 33-40 (2008)
17. K. Hariche, E. D. Denman, Interpolation theory and Lambda matrices. J. Math. Anal. Appl. 143, pp. 530-547 (1989)
18. T. Kailath, Linear Systems. Prentice Hall, New Jersey (1980)
