On the Application of Different Numerical Methods to Obtain Null-Spaces of Polynomial Matrices

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms

J.C. Zúñiga and D. Henrion

J.C. Zúñiga and D. Henrion are with LAAS-CNRS, 7 Avenue du Colonel Roche, 31077 Toulouse, France. D. Henrion is also with the Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod vodárenskou věží 4, 18208 Prague, Czech Republic, and with the Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Technická 2, 16627 Prague, Czech Republic.

Abstract— Four different algorithms are designed to obtain the null-space of a polynomial matrix. In this first part we present two algorithms. These algorithms are based on classical methods of numerical linear algebra, namely the reduction into the column echelon form and the LQ factorization. Both algorithms take advantage of the block Toeplitz structure of the Sylvester matrix associated with the polynomial matrix. We present a full comparative analysis of the performance of both algorithms and also a brief discussion on their numerical stability.

I. INTRODUCTION

By analogy with scalar rational transfer functions, matrix transfer functions of multivariable linear systems can be written in polynomial matrix fraction form as N(s)D^{-1}(s), where D(s) is a non-singular polynomial matrix [1], [8], [11]. Therefore, the eigenstructure of the polynomial matrices N(s) or D(s) contains the structural information on the represented linear system, and it can be used to solve several control problems.

In this paper we are interested in obtaining the null-spaces of polynomial matrices. The null-space of a polynomial matrix A(s) is the part of its eigenstructure which contains all the non-zero vectors v(s) that are non-trivial solutions of the polynomial equation A(s)v(s) = 0.

By analogy with the constant case, obtaining the null-space of a polynomial matrix A(s) yields important information such as the rank of A(s) and the relation between its linearly dependent columns or rows.

Applications of computational algorithms to obtain the null-space of a polynomial matrix can be found in the polynomial approach to control system design [11], but also in other related disciplines. For example, in fault diagnostics the residual generator problem can be transformed into the problem of finding a basis of the null-space of a polynomial matrix [12].

Pursuing along the lines of our previous work [15], where we showed how the Toeplitz approach can be an alternative to the classical pencil approach [14] to obtain infinite structural indices of polynomial matrices, and [16], where we showed how we can obtain all the eigenstructure of a polynomial matrix using these Toeplitz methods, in the present work we develop four Toeplitz algorithms to obtain null-spaces of polynomial matrices.

First, we develop two algorithms based on classical numerical linear algebra methods, and in the second part of this work [17] we present two other algorithms based on the displacement structure theory [9], [10]. The motivation for the two algorithms presented in [17] is to apply them to large-dimension problems for which the classical methods could be inefficient.

In section 2 we define the null-space structure of a polynomial matrix. In section 4 we present the two algorithms to obtain the null-space of A(s), based on the numerical methods described in section 3. In section 5 we analyze the complexity and the stability of the algorithms.

II. NULL-SPACE STRUCTURE

Consider a k × l polynomial matrix A(s) of degree d. We define the null-space of A(s) as the set of non-zero polynomial vectors v(s) such that

A(s)v(s) = 0.    (1)

Let ρ = rank A(s). A basis of the null-space of A(s) is formed by any set of l − ρ linearly independent vectors satisfying (1). Let δ_i be the degree of each vector in the basis. If the sum of all the degrees δ_i is minimal, then we have a minimal null-space basis in the sense of Forney [2]. The set of degrees δ_i and the corresponding polynomial vectors are called the right null-space structure of A(s).
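As a small worked example (ours, not taken from the paper), consider a 1 × 2 polynomial matrix and a polynomial vector annihilated by it:

A(s) = \begin{bmatrix} s+1 & s^2-1 \end{bmatrix}, \qquad
v(s) = \begin{bmatrix} s-1 \\ -1 \end{bmatrix}, \qquad
A(s)v(s) = (s+1)(s-1) - (s^2-1) = 0.

Here ρ = rank A(s) = 1, so a minimal null-space basis contains l − ρ = 1 vector; its degree δ_1 = 1 is minimal, and the right null-space structure of A(s) is {δ_1} = {1}.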
In order to find each vector of a minimal basis of the null-space of A(s), we have to solve equation (1) for vectors v(s) with minimal degree. Using the Sylvester matrix approach, we can write the equivalent system of equations

\begin{bmatrix}
A_0    &        &        & 0      \\
\vdots & A_0    &        &        \\
A_d    & \vdots & \ddots &        \\
       & A_d    & \ddots & A_0    \\
       &        & \ddots & \vdots \\
0      &        &        & A_d
\end{bmatrix}
\begin{bmatrix} v_0 \\ v_1 \\ \vdots \\ v_\delta \end{bmatrix} = 0.    (2)

Notice that we can also consider the transposed linear system

\begin{bmatrix} v_0^T & v_1^T & \cdots & v_\delta^T \end{bmatrix}
\begin{bmatrix}
A_0^T & \cdots & A_d^T  &        &        & 0      \\
      & A_0^T  & \cdots & A_d^T  &        &        \\
      &        & \ddots &        & \ddots &        \\
0     &        &        & A_0^T  & \cdots & A_d^T
\end{bmatrix} = 0.    (3)

If we are interested in the left null-space structure of A(s), i.e. the set of vectors w(s) such that w(s)A(s) = 0, then we can consider the transposed relation A^T(s)w^T(s) = 0. Because of this duality, in the remainder of this work we only explain in detail how to obtain the right null-space structure of A(s), which we simply call the null-space of A(s).
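To make the structure of equation (2) concrete, the following Python/NumPy sketch (a hypothetical helper, not the authors' implementation) assembles the block Toeplitz matrix from the coefficient matrices A_0, ..., A_d of A(s) and a candidate null-space degree δ:

```python
import numpy as np

def block_toeplitz(A_coeffs, delta):
    """Assemble the block Toeplitz (Sylvester-type) matrix of equation (2).

    A_coeffs : list [A_0, A_1, ..., A_d] of k-by-l coefficient matrices of
               A(s) = A_0 + A_1 s + ... + A_d s^d.
    delta    : candidate degree of the null-space vectors v(s).
    Returns the ((d + delta + 1) k) x ((delta + 1) l) matrix whose null
    vectors stack the coefficients v_0, ..., v_delta of v(s).
    """
    d = len(A_coeffs) - 1
    k, l = A_coeffs[0].shape
    T = np.zeros(((d + delta + 1) * k, (delta + 1) * l))
    for j in range(delta + 1):              # block column j carries A_0 ... A_d
        for i, Ai in enumerate(A_coeffs):
            T[(i + j) * k:(i + j + 1) * k, j * l:(j + 1) * l] = Ai
    return T
```

For the 1 × 2 example above, with A_0 = [1 −1], A_1 = [1 0] and A_2 = [0 1], block_toeplitz([A_0, A_1, A_2], delta=1) returns a 4 × 4 matrix of rank 3 whose null-space is spanned by the stacked coefficient vector [v_0; v_1] = [−1, −1, 1, 0]^T of v(s) = [s − 1, −1]^T.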
III. LQ FACTORIZATION AND COLUMN ECHELON FORM

From equations (2) or (3) we can see that the null-space of A(s) can be recovered from the null-spaces of some suitable block Toeplitz matrices as soon as the degree δ of the null-space of A(s) is known.

Let us first assume that degree δ is given. The methods that we propose to obtain the constant null-spaces of the corresponding Toeplitz matrices consist in the application of successive Householder transformations in order to reveal their ranks. We consider two methods: the LQ factorization and the Column Echelon Form.

A. LQ factorization

The LQ factorization is the dual of the QR factorization, see [3]. Consider a k × l constant matrix M. We define the LQ factorization of M as M = LQ, where Q is an orthogonal l × l matrix and L = [L_r  0] is a k × l matrix that we can always put in a lower trapezoidal form with a row permutation P:

PL = \begin{bmatrix}
w_{11} &        &        &   \\
\vdots & \ddots &        &   \\
w_{r1} & \cdots & w_{rr} & 0 \\
\vdots &        & \vdots &   \\
w_{k1} & \cdots & w_{kr} &
\end{bmatrix} = \begin{bmatrix} PL_r & 0 \end{bmatrix},

where r ≤ min(k, l) is the rank of M.

Notice that L has l − r zero columns. Therefore, an orthogonal basis of the null-space of M is given by the last l − r rows of Q, namely MQ(r+1:l, :)^T = 0, and moreover we can check that M = L_r Q(1:r, :).

We apply the LQ factorization to solve system (2). Here the number of columns of the Toeplitz matrix is not very large, so we obtain the orthogonal matrix Q and, as a by-product, the vectors of the null-space. This method is a cheap version of the one presented in [15], where we used the singular value decomposition (SVD) to obtain the rank and the null-spaces of the Toeplitz matrices. The SVD is the most widely used method to reveal the rank of a matrix because of its robustness [3], [7]. See section 5 for a brief discussion about the stability of the algorithms presented in this paper.

B. Column Echelon Form (CEF)

For the definition of the reduced echelon form of a matrix see [3]. Let L be the column echelon form of M. The particular form that L can take depends on the position of the r linearly independent rows of M. For example, let us take a 5 × 4 matrix M. Consider that after the Householder transformations H_i, applied onto the rows of M in order to obtain its LQ factorization, we obtain the column echelon form

L = MQ = MH_1H_2H_3 = \begin{bmatrix}
\times_{11} & 0           & 0           & 0 \\
\times_{21} & 0           & 0           & 0 \\
\times_{31} & \times_{32} & 0           & 0 \\
\times_{41} & \times_{42} & \times_{43} & 0 \\
\times_{51} & \times_{52} & \times_{53} & 0
\end{bmatrix}.    (4)

This means that the second and the fifth rows of M are linearly dependent.

The column echelon form L allows us to obtain a basis of the left null-space of M. From equation (4) we can see that the coefficients of the linear combination for the fifth row of M can be recovered by solving the triangular linear system

\begin{bmatrix} k_1 & k_3 & k_4 \end{bmatrix}
\begin{bmatrix}
\times_{11} & 0           & 0           \\
\times_{31} & \times_{32} & 0           \\
\times_{41} & \times_{42} & \times_{43}
\end{bmatrix}
= -\begin{bmatrix} \times_{51} & \times_{52} & \times_{53} \end{bmatrix},    (5)

and then [k_1  0  k_3  k_4  1] M = 0.

Also notice that when obtaining the basis of the left null-space of M from matrix L, it is not necessary to store Q = H_1 H_2 · · ·. This fact can be used to reduce the algorithmic complexity.

We apply the CEF method to solve the system of equations (3). Since the number of columns of the Toeplitz matrix can be large, we do not store the orthogonal matrix Q. We only obtain the CEF and we solve the corresponding triangular systems as in (5) to compute the vectors of the null-space. For more details about the CEF method described above see [6].

IV. BLOCK TOEPLITZ ALGORITHMS

For the algorithms presented here we consider that the rank ρ of A(s) is given. There are several documented
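As a simplified illustration of the rank-revealing step these algorithms rely on (our own sketch, under our own assumptions: a dense pivoted QR factorization of the transpose and a simple tolerance-based rank test, rather than the structure-exploiting Householder scheme of the paper), the LQ-based null-space computation of subsection III-A can be written as follows:

```python
import numpy as np
from scipy.linalg import qr

def toeplitz_nullspace_lq(T, tol=1e-10):
    """Orthogonal basis of the right null-space of a constant matrix T.

    An LQ factorization T = L Q is obtained as a pivoted QR factorization
    of T^T; the last l - r rows of the orthogonal factor Q then span the
    null-space of T, i.e. T Q(r+1:l, :)^T = 0, with r the numerical rank.
    """
    Qt, R, _ = qr(T.T, mode="full", pivoting=True)      # QR of T^T with column pivoting
    diag = np.abs(np.diag(R))
    r = int(np.sum(diag > tol * max(diag.max(), 1.0)))  # numerical rank r of T
    return Qt[:, r:].T                                  # last l - r rows of Q
```

Applied to the 4 × 4 block Toeplitz matrix of the 1 × 2 example above, this returns a single row proportional to [−1, −1, 1, 0], i.e. the stacked coefficients of the null-space vector v(s) = [s − 1, −1]^T.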