Polynomial Matrix Decompositions

Total Pages: 16

File Type: pdf, Size: 1020 KB

Polynomial Matrix Decompositions. UPTEC F10 059, Examensarbete 20 p (degree project, 20 credits), November 2010. Polynomial Matrix Decompositions: Evaluation of Algorithms with an Application to Wideband MIMO Communications. Rasmus Brandt. Teknisk-naturvetenskaplig fakultet, UTH-enheten (Faculty of Science and Technology).

Abstract

The interest in wireless communications among consumers has exploded since the introduction of the "3G" cell phone standards. One reason for their success is the increasingly higher data rates achievable through the networks. A further increase in data rates is possible through the use of multiple antennas at either or both sides of the wireless links.

Precoding and receive filtering using matrices obtained from a singular value decomposition (SVD) of the channel matrix is a transmission strategy for achieving the channel capacity of a deterministic narrowband multiple-input multiple-output (MIMO) communications channel. When signalling over wideband channels using orthogonal frequency-division multiplexing (OFDM), an SVD must be performed for every sub-carrier. As the number of sub-carriers of this traditional approach grows large, so does the computational load. It is therefore interesting to study alternative means for obtaining the decomposition.

A wideband MIMO channel can be modeled as a matrix filter with a finite impulse response, represented by a polynomial matrix. This thesis is concerned with investigating algorithms which decompose the polynomial channel matrix directly. The resulting decomposition factors can then be used to obtain the sub-carrier based precoding and receive filtering matrices. Existing approximative polynomial matrix QR and singular value decomposition algorithms were modified and studied in terms of decomposition quality and computational complexity. The decomposition algorithms were shown to give decompositions of good quality, but if the goal is to obtain precoding and receive filtering matrices, the computational load is prohibitive for channels with long impulse responses. Two algorithms for performing exact rational decompositions (QRD/SVD) of polynomial matrices were proposed and analyzed. Although they produced excellent decompositions for simple cases, numerical stability issues in a spectral factorization step render the algorithms unusable in their current form.

For a MIMO channel with an exponentially decaying power-delay profile, the sum rates achieved by employing the filters given by the approximative polynomial SVD algorithm were compared to the channel capacity. It was shown that if the symbol streams were decoded independently, as done in the traditional approach, the sum rates were sensitive to errors in the decomposition. A receiver with a spatially joint detector achieved sum rates close to the channel capacity, but with such a receiver the low-complexity detector set-up of the traditional approach is lost. Summarizing, this thesis has shown that a wideband MIMO channel can be diagonalized in space and frequency using OFDM in conjunction with an approximative polynomial SVD algorithm.
However, in order to reach sum rates close to the capacity of a simple channel, the computational load becomes prohibitive compared to the traditional approach for channels with long impulse responses.

Handledare (supervisor): Mats Bengtsson. Ämnesgranskare (subject reader): Mikael Sternad. Examinator (examiner): Tomas Nyberg. ISSN: 1401-5757, UPTEC F10 059.

Popular Science Summary (originally in Swedish)

Wireless communication is a field whose popularity has grown over recent years. One reason for the success of "3G internet" is the high data rates it makes possible. The data rate of a wireless link depends on the signal bandwidth and the transmit power, and increasing either of them yields higher data rates. Both bandwidth and transmit power are expensive resources, however, since their use is often regulated by national and international authorities.

Another way to increase the data rate of a wireless link is to add more antennas at the transmitter and receiver sides, a so-called MIMO system. Such a system can be viewed as a set of mutually interfering single-antenna links and can be described by a matrix. The data rate of the multi-antenna link can be maximized by sending several parallel data streams over the MIMO channel. Since the transmitted signals share the radio channel, they will mix, and each receive antenna therefore picks up a combination of the signals sent from the different transmit antennas. To avoid the signals mixing, they must be coded. It turns out that by coding the transmitted signals with one particular matrix, and decoding the received signals with another matrix, the channel is transformed into a set of parallel virtual channels, over which independent data streams can then be sent. The coding matrices are given by a so-called singular value decomposition of the original channel matrix.

For a single-antenna system with high bandwidth, the radio channel affects the different frequency components of the signal differently. If the system does not take this effect into account, its performance suffers. One way to avoid this frequency selectivity is to signal over the channel using so-called OFDM. The OFDM system splits the original signal into several low-bandwidth signals. By sending these narrowband signals on different parts of the frequency band, they do not affect each other. The frequency-selective channel has thus been divided into a number of non-frequency-selective parallel subchannels.

By sending a wideband signal over an OFDM-based MIMO system, even higher data rates can be achieved. However, the coding matrices must then be computed for every parallel subchannel in the frequency band, which requires many computational operations. This thesis has investigated a new set of algorithms for obtaining approximations of the required coding matrices. The quality of the approximate coding matrices was compared with that of the exact matrices, and the number of required computational operations was measured. It turned out that the new algorithms can produce coding matrices of good quality, but at the cost of more computational operations than the traditional way of obtaining the coding matrices.

The coding matrices from the new algorithms were also simulated in a communication system. With the new matrices, data rates close to the theoretical maximum capacity of a simple radio channel can be reached, provided an advanced decoder is used at the receiver side.
If instead a set of simple decoders is used, as in the traditional system, the system performance suffers. In summary, this thesis has shown that the coding matrices obtained from the new algorithms can be used in a wideband MIMO system to maximize the data rate. However, they require more computational operations, and a more advanced decoder, than the traditional system. The new algorithms are therefore not competitive with the traditional system.

Acknowledgements

This diploma work was performed at the Signal Processing Laboratory at the School of Electrical Engineering at Kungliga Tekniska Högskolan in Stockholm, and will lead to a degree of Master of Science in Engineering Physics from Uppsala University. First and foremost, I would like to thank my supervisor Mats Bengtsson for proposing the thesis topic and taking me on as an MSc thesis worker. His advice and guidance have helped me considerably during the course of this work. My ämnesgranskare (subject reader) Mikael Sternad at the Division for Signals and Systems at Uppsala University also deserves my gratitude; his comments have been very valuable to the final version of this thesis. My family has always supported my endeavours, and for that I am endlessly grateful. Finally, thank you Melissa for being so lovely and cheerful, and for moving to Sweden to be with me.

Contents

1 Introduction
  1.1 Wireless Communications
  1.2 Multiple Antennas and Wideband Channels
  1.3 Problem Formulation and Contributions
  1.4 Thesis Outline
2 Preliminaries
  2.1 Complex Polynomials
    2.1.1 Addition and Subtraction
    2.1.2 Multiplication
  2.2 Polynomial Matrices
    2.2.1 Givens Rotations
    2.2.2 Decompositions
    2.2.3 Coefficient Truncation
  2.3 Computational Complexity
3 MIMO Channels and Multipath Propagation
  3.1 Propagation and Modeling
    3.1.1 Propagation
    3.1.2 Channel Modeling
    3.1.3 MIMO Channels
  3.2 Channel Capacity and Achievable Rate
  3.3 Equalization Techniques
  3.4 Summary
4 Polynomial Decomposition Algorithms: Coefficient Nulling
  4.1 Performance Criteria
  4.2 PQRD-BC: Polynomial QR Decomposition
    4.2.1 Convergence and Complexity
    4.2.2 Discussion
  4.3 MPQRD-BC: Modified PQRD-BC
    4.3.1 Convergence and Complexity
    4.3.2 Simulations
    4.3.3 Discussion
  4.4 PSVD by PQRD-BC: Polynomial Singular Value Decomposition
    4.4.1 Convergence and Complexity
    4.4.2 Discussion
  4.5 MPSVD by MPQRD-BC: Modified PSVD
    4.5.1 Convergence and Complexity
    4.5.2 Simulations
    4.5.3 Discussion
  4.6 Sampled PSVD vs. SVD in DFT Domain
    4.6.1 Frequency Domain Comparison
    4.6.2 Computational Load Comparison, Set-Up Phase
    4.6.3 Computational Load, Online Phase
    4.6.4 Discussion
  4.7 Summary
5 Rational Decomposition Algorithms: Polynomial Nulling
  5.1 Rational Givens Rotation
  5.2 PQRD-R: Rational QR Decomposition
    5.2.1 Simulations
    5.2.2 Discussion
  5.3 PSVD-R by PQRD-R: Rational Singular Value Decomposition
    5.3.1 Simulations
    5.3.2 Discussion
  5.4 Summary
6 Polynomial SVD for Wideband Spatial Multiplexing
  6.1 Generic System Model
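As a concrete illustration of the traditional per-sub-carrier approach described in the abstract, the sketch below models a wideband MIMO channel as an FIR (polynomial) matrix, evaluates it on each OFDM sub-carrier with a DFT, and computes SVD-based precoding and receive filtering matrices for every sub-carrier. This is a minimal NumPy sketch written for this summary, not code from the thesis; the antenna counts, number of taps, and number of sub-carriers are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(1)

n_rx, n_tx = 2, 2      # antennas (arbitrary example sizes)
n_taps = 4             # FIR channel length: H(z) = sum_l H_l z^{-l}
n_sub = 64             # OFDM sub-carriers

# Polynomial (FIR) MIMO channel: one complex matrix coefficient per tap.
H_taps = (rng.standard_normal((n_taps, n_rx, n_tx))
          + 1j * rng.standard_normal((n_taps, n_rx, n_tx))) / np.sqrt(2)

# Traditional approach: evaluate the channel on each sub-carrier via a DFT
# over the tap index, then take one SVD per sub-carrier.
H_f = np.fft.fft(H_taps, n=n_sub, axis=0)          # shape (n_sub, n_rx, n_tx)

precoders, rx_filters, gains = [], [], []
for k in range(n_sub):
    U, s, Vh = np.linalg.svd(H_f[k])
    precoders.append(Vh.conj().T)   # transmit precoder V_k
    rx_filters.append(U.conj().T)   # receive filter U_k^H
    gains.append(s)                 # per-stream sub-channel gains

# Sanity check: the filtered channel on each sub-carrier is diagonal,
# i.e. the wideband channel is diagonalized in space and frequency.
for k in range(n_sub):
    D = rx_filters[k] @ H_f[k] @ precoders[k]
    assert np.allclose(D, np.diag(gains[k]), atol=1e-10)
```

The polynomial SVD algorithms studied in the thesis aim to produce the same per-sub-carrier factors from a single decomposition of the polynomial matrix itself, rather than from one SVD per sub-carrier as above.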
Recommended publications
  • Polynomial Flows in the Plane
    ADVANCES IN MATHEMATICS 55, 173-208 (1985). Polynomial Flows in the Plane. HYMAN BASS* (Department of Mathematics, Columbia University, New York, New York 10027) and GARY MEISTERS (Department of Mathematics, University of Nebraska, Lincoln, Nebraska 68588). Contents. 1. Introduction. I. Polynomial flows are global, of bounded degree. 2. Vector fields and local flows. 3. Change of coordinates; the group GA_2(K). 4. Polynomial flows; statement of the main results. 5. Continuous families of polynomials. 6. Locally polynomial flows are global, of bounded degree. II. One-parameter subgroups of GA_2(K). 7. Introduction. 8. Amalgamated free products. 9. GA_2(K) as amalgamated free product. 10. One-parameter subgroups of GA_2(K). 11. One-parameter subgroups of BA_2(K). 12. One-parameter subgroups of BA_2(K). 1. Introduction. Let f: R^n → R^n be a C^1 vector field, and consider the (autonomous) system of differential equations ẋ = f(x) (1a) with initial condition x(0) = x_0 (1b). The solution, x = φ(t, x_0), depends on t and x_0. For which f as above does the flow φ depend polynomially on the initial condition x_0? This question was discussed in [M2], and in [M1], Section 6. We present here a definitive solution of this problem for n = 2, over both R and C. (See Theorems (4.1) and (4.3) below.) The main tool is the theorem of Jung [J] and van der Kulk [vdK]. * This material is based upon work partially supported by the National Science Foundation under Grant NSF MCS 82-02633.
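As a worked illustration of the question posed in the excerpt (my own example, not taken from the paper): the planar vector field f(x_1, x_2) = (x_2, 0) generates a flow that is polynomial in the initial condition, whereas a quadratic field on the line does not.

```latex
\[
  \dot{x}_1 = x_2,\quad \dot{x}_2 = 0,\quad x(0) = (x_{01}, x_{02})
  \;\Longrightarrow\;
  \varphi\bigl(t,(x_{01},x_{02})\bigr) = \bigl(x_{01} + t\,x_{02},\; x_{02}\bigr),
\]
which is polynomial in the initial condition (and in $t$), so $f$ generates a
polynomial flow. By contrast, $f(x) = x^{2}$ on $\mathbb{R}$ gives
$\varphi(t, x_0) = x_0/(1 - t\,x_0)$, which is neither polynomial in $x_0$ nor
globally defined in $t$.
```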
  • On Multivariate Interpolation
    On Multivariate Interpolation. Peter J. Olver†, School of Mathematics, University of Minnesota, Minneapolis, MN 55455, U.S.A. [email protected], http://www.math.umn.edu/∼olver. Abstract. A new approach to interpolation theory for functions of several variables is proposed. We develop a multivariate divided difference calculus based on the theory of non-commutative quasi-determinants. In addition, intriguing explicit formulae that connect the classical finite difference interpolation coefficients for univariate curves with multivariate interpolation coefficients for higher dimensional submanifolds are established. † Supported in part by NSF Grant DMS 11–08894. April 6, 2016. 1. Introduction. Interpolation theory for functions of a single variable has a long and distinguished history, dating back to Newton's fundamental interpolation formula and the classical calculus of finite differences, [7, 47, 58, 64]. Standard numerical approximations to derivatives and many numerical integration methods for differential equations are based on the finite difference calculus. However, historically, no comparable calculus was developed for functions of more than one variable. If one looks up multivariate interpolation in the classical books, one is essentially restricted to rectangular, or, slightly more generally, separable grids, over which the formulae are a simple adaptation of the univariate divided difference calculus. See [19] for historical details. Starting with G. Birkhoff, [2] (who was, coincidentally, my thesis advisor), recent years have seen a renewed level of interest in multivariate interpolation among both pure and applied researchers; see [18] for a fairly recent survey containing an extensive bibliography. De Boor and Ron, [8, 12, 13], and Sauer and Xu, [61, 10, 65], have systematically studied the polynomial case.
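The univariate divided difference calculus that the excerpt takes as its starting point is easy to state in code. The sketch below is an illustration written for this listing, not code from the paper; the function names are my own.

```python
import numpy as np

def divided_differences(x, y):
    """Newton divided-difference coefficients [f[x0], f[x0,x1], ...]."""
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # coef[i] becomes f[x_{i-j}, ..., x_i] for every i >= j.
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (np.array(x[j:]) - np.array(x[:-j]))
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form interpolant at t with Horner-like nesting."""
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xk) + c
    return result

x = [0.0, 1.0, 2.0, 4.0]
y = [1.0, 3.0, 2.0, 5.0]
coef = divided_differences(x, y)
# The interpolant reproduces the data at the nodes.
assert all(abs(newton_eval(coef, x, xi) - yi) < 1e-12 for xi, yi in zip(x, y))
```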
  • Stabilization, Estimation and Control of Linear Dynamical Systems with Positivity and Symmetry Constraints
    Stabilization, Estimation and Control of Linear Dynamical Systems with Positivity and Symmetry Constraints. A Dissertation Presented by Amirreza Oghbaee to The Department of Electrical and Computer Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering, Northeastern University, Boston, Massachusetts, April 2018. To my parents for their endless love and support.
    Contents: List of Figures; Acknowledgments; Abstract of the Dissertation.
    1 Introduction
    2 Matrices with Special Structures
      2.1 Nonnegative (Positive) and Metzler Matrices
        2.1.1 Nonnegative Matrices and Eigenvalue Characterization
        2.1.2 Metzler Matrices
        2.1.3 Z-Matrices
        2.1.4 M-Matrices
        2.1.5 Totally Nonnegative (Positive) Matrices and Strictly Metzler Matrices
      2.2 Symmetric Matrices
        2.2.1 Properties of Symmetric Matrices
        2.2.2 Symmetrizer and Symmetrization
        2.2.3 Quadratic Form and Eigenvalues Characterization of Symmetric Matrices
      2.3 Nonnegative and Metzler Symmetric Matrices
    3 Positive and Symmetric Systems
      3.1 Positive Systems
        3.1.1 Externally Positive Systems
        3.1.2 Internally Positive Systems
        3.1.3 Asymptotic Stability
        3.1.4 Bounded-Input Bounded-Output (BIBO) Stability
        3.1.5 Asymptotic Stability using Lyapunov Equation
        3.1.6 Robust Stability of Perturbed Systems
        3.1.7 Stability Radius
      3.2 Symmetric Systems
      3.3 Positive Symmetric Systems
    4 Positive Stabilization of Dynamic Systems
      4.1 Metzlerian Stabilization
      4.2 Maximizing the stability radius by state feedback
  • Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations
    Learning Fast Algorithms for Linear Transforms Using Butterfly Factorizations. Tri Dao¹, Albert Gu¹, Matthew Eichhorn², Atri Rudra², Christopher Ré¹. Abstract: Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform. Motivated by a characterization of matrices with fast matrix-vector multiplication as factoring into products of sparse matrices, we introduce a parameterization of divide-and-conquer methods that is capable of representing a large class of transforms.
    …ture generation, and kernel approximation, to image and language modeling (convolutions). To date, these transformations rely on carefully designed algorithms, such as the famous fast Fourier transform (FFT) algorithm, and on specialized implementations (e.g., FFTW and cuFFT). Moreover, each specific transform requires hand-crafted implementations for every platform (e.g., Tensorflow and PyTorch lack the fast Hadamard transform), and it can be difficult to know when they are useful. Ideally, these barriers would be addressed by automatically learning the most effective transform for a given task and dataset, along with an efficient implementation of it. Such a method should be capable of recovering a range of fast transforms with high accuracy and realistic sizes given limited prior knowledge. It is also preferably composed of differentiable primitives and basic operations common to linear algebra/machine learning libraries, that allow it to run on any platform and
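The excerpt's central observation, that structured transforms admit subquadratic divide-and-conquer (butterfly-style) algorithms instead of dense matrix-vector products, can be illustrated with the Hadamard transform it mentions. The sketch below is my own illustration, not code from the paper.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform via divide-and-conquer (O(n log n) adds).
    Length of x must be a power of two; no normalization is applied."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x

def hadamard_dense(n):
    """Dense n x n Hadamard matrix (n a power of two), built recursively."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

x = np.arange(8.0)
# Same linear transform: the butterfly recursion matches the dense
# matrix-vector product but uses O(n log n) operations instead of O(n^2).
assert np.allclose(fwht(x), hadamard_dense(8) @ x)
```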
  • Polynomial Parametrization for the Solutions of Diophantine Equations and Arithmetic Groups
    ANNALS OF MATHEMATICS, Second Series, Vol. 171, No. 2, March 2010. Polynomial parametrization for the solutions of Diophantine equations and arithmetic groups. By Leonid Vaserstein. Annals of Mathematics, 171 (2010), 979–1009. Abstract: A polynomial parametrization for the group of integer two-by-two matrices with determinant one is given, solving an old open problem of Skolem and Beukers. It follows that, for many Diophantine equations, the integer solutions and the primitive solutions admit polynomial parametrizations. Introduction. This paper was motivated by an open problem from [8, p. 390]: CNTA 5.15 (Frits Beukers). Prove or disprove the following statement: There exist four polynomials A, B, C, D with integer coefficients (in any number of variables) such that AD − BC = 1 and all integer solutions of ad − bc = 1 can be obtained from A, B, C, D by specialization of the variables to integer values. Actually, the problem goes back to Skolem [14, p. 23]. Zannier [22] showed that three variables are not sufficient to parametrize the group SL_2(Z), the set of all integer solutions to the equation x_1 x_2 − x_3 x_4 = 1. Apparently Beukers posed the question because SL_2(Z) (more precisely, a congruence subgroup of SL_2(Z)) is related with the solution set X of the equation x_1^2 + x_2^2 = x_3^2 + 3, and he (like Skolem) expected the negative answer to CNTA 5.15, as indicated by his remark [8, p. 389] on the set X: I have begun to believe that it is not possible to cover all solutions by a finite number of polynomials simply because I have never. The paper was conceived in July of 2004 while the author enjoyed the hospitality of Tata Institute for Fundamental Research, India.
  • The Images of Non-Commutative Polynomials Evaluated on 2 × 2 Matrices
    PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 140, Number 2, February 2012, Pages 465–478. S 0002-9939(2011)10963-8. Article electronically published on June 16, 2011. THE IMAGES OF NON-COMMUTATIVE POLYNOMIALS EVALUATED ON 2 × 2 MATRICES. ALEXEY KANEL-BELOV, SERGEY MALEV, AND LOUIS ROWEN (Communicated by Birge Huisgen-Zimmermann). Abstract. Let p be a multilinear polynomial in several non-commuting variables with coefficients in a quadratically closed field K of any characteristic. It has been conjectured that for any n, the image of p evaluated on the set M_n(K) of n by n matrices is either zero, or the set of scalar matrices, or the set sl_n(K) of matrices of trace 0, or all of M_n(K). We prove the conjecture for n = 2, and show that although the analogous assertion fails for completely homogeneous polynomials, one can salvage the conjecture in this case by including the set of all non-nilpotent matrices of trace zero and also permitting dense subsets of M_n(K). 1. Introduction. Images of polynomials evaluated on algebras play an important role in non-commutative algebra. In particular, various important problems related to the theory of polynomial identities have been settled after the construction of central polynomials by Formanek [F1] and Razmyslov [Ra1]. The parallel topic in group theory (the images of words in groups) also has been studied extensively, particularly in recent years. Investigation of the image sets of words in pro-p-groups is related to the investigation of Lie polynomials and helped Zelmanov [Ze] to prove that the free pro-p-group cannot be embedded in the algebra of n × n matrices when p n.
  • The Computation of the Inverse of a Square Polynomial Matrix ∗
    The Computation of the Inverse of a Square Polynomial Matrix∗. Ky M. Vu, PhD, AuLac Technologies Inc., © 2008. Email: [email protected]. Abstract: An approach to calculate the inverse of a square polynomial matrix is suggested. The approach consists of two similar algorithms: One calculates the determinant polynomial and the other calculates the adjoint polynomial. The algorithm to calculate the determinant polynomial gives the coefficients of the polynomial in a recursive manner from a recurrence formula. Similarly, the algorithm to calculate the adjoint polynomial also gives the coefficient matrices recursively from a similar recurrence formula. Together, they give the inverse as a ratio of the adjoint polynomial and the determinant polynomial. Keywords: Adjoint, Cayley-Hamilton theorem, determinant, Faddeev's algorithm, inverse matrix. 1 Introduction. Modern control theory consists of a large part of matrix theory. 2 The Inverse Polynomial. The inverse of a square nonsingular matrix is a square matrix, which by premultiplying or postmultiplying with the matrix gives an identity matrix. An inverse matrix can be expressed as a ratio of the adjoint and determinant of the matrix. A singular matrix has no inverse because its determinant is zero; we cannot calculate its inverse. We, however, can always calculate its adjoint and determinant. It is, therefore, always better to calculate the inverse of a square polynomial matrix by calculating its adjoint and determinant polynomials separately and obtain the inverse as a ratio of these polynomials. In the following discussion, we will present algorithms to calculate the adjoint and determinant polynomials. 2.1 The Determinant Polynomial. To begin our discussion, we consider the following equation that is the determinant of a square matrix of dimension n, viz |λI − C| = λ^n − p_1(C) λ^(n−1) + p_2(C) λ^(n−2) − ··· + (−1)^n p_n(C).
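The excerpt's strategy of building the determinant and adjoint from a recurrence based on the Cayley-Hamilton theorem is, for a constant matrix, the classical Faddeev-LeVerrier recursion. The sketch below shows only that scalar-coefficient case, not the paper's polynomial-matrix algorithm, as a hedged illustration of the recurrence.

```python
import numpy as np

def faddeev_leverrier(C):
    """Faddeev-LeVerrier recursion for a square numeric matrix C.

    Returns (a, adj, det) with det(lam*I - C) = lam**n + a[0]*lam**(n-1) + ... + a[n-1],
    adj the adjugate of C, and det the determinant of C.
    """
    n = C.shape[0]
    I = np.eye(n)
    N = I.copy()                              # N_1 = I
    a = np.zeros(n)
    for k in range(1, n + 1):
        a[k - 1] = -np.trace(C @ N) / k       # a_k = -tr(C N_k) / k
        if k < n:
            N = C @ N + a[k - 1] * I          # N_{k+1} = C N_k + a_k I
    adj = (-1) ** (n + 1) * N                 # adjugate(C) = (-1)^(n+1) N_n
    det = (-1) ** n * a[-1]                   # det(C) = (-1)^n a_n
    return a, adj, det

C = np.array([[1.0, 2.0], [3.0, 4.0]])
a, adj, det = faddeev_leverrier(C)
# Inverse as the ratio of adjugate and determinant, as in the excerpt.
assert np.allclose(adj / det, np.linalg.inv(C))
```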
  • Block Circulant Matrix with Circulant Polynomial Matrices as Its Blocks
    Science, Technology and Development, ISSN: 0950-0707. Block Circulant Matrix with Circulant Polynomial Matrices as its Blocks. G. Ramesh* and R. Muthamilselvam**. *Associate Professor of Mathematics, Govt. Arts College (Autonomous), Kumbakonam ([email protected]). **Assistant Professor of Mathematics, Arasu Engineering College, Kumbakonam ([email protected]). Abstract: The characterization of block circulant matrices with circulant polynomial matrices as their blocks is derived as a generalization of the block circulant matrices with circulant block matrices. Keywords: Circulant polynomial matrix, Block circulant polynomial matrix, Circulant block polynomial matrix. AMS Classification: 15A09, 15A15, 15A57. I. Introduction. Let a_1, a_2, ..., a_n be an ordered n-tuple of polynomials with complex coefficients, and let them generate the circulant polynomial matrix of order n [5]:

    A = [ a_1 a_2 ... a_n ; a_n a_1 ... a_{n-1} ; ... ; a_2 a_3 ... a_1 ]   (1.1)

    We shall often denote this circulant polynomial matrix as A = circ(a_1, a_2, ..., a_n) (1.2). It is well known that all circulant polynomial matrices of order n are simultaneously diagonalizable by the polynomial matrix F associated with the finite Fourier transform. Specifically, let ω = exp(2πi/n), i = √−1 (1.3), and set

    F = (1/√n) [ 1 1 1 ... 1 ; 1 ω ω^2 ... ω^(n−1) ; 1 ω^2 ω^4 ... ω^(2(n−1)) ; ... ; 1 ω^(n−1) ω^(2(n−1)) ... ω^((n−1)^2) ]   (1.4)

    The Fourier polynomial matrix F depends only on n. This matrix is also symmetric and unitary, FF* = F*F = I, and we have A = FΩF* (1.5), where Ω = diag(Ω_1, Ω_2, ..., Ω_n) (1.6). The symbol * designates the conjugate transpose.
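The diagonalization property stated in the excerpt can be checked numerically with a small sketch (scalar entries only, ignoring the polynomial structure). Note this illustration, written for this listing rather than taken from the paper, defines the circulant by its first column and uses NumPy's FFT sign convention, which may differ from the paper's first-row convention.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
c = rng.standard_normal(n)                      # first column of the circulant

# Circulant matrix C[j, k] = c[(j - k) % n].
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

# Unitary DFT matrix F[j, k] = exp(-2*pi*1j*j*k/n) / sqrt(n).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# Diagonalization: C = F^H diag(fft(c)) F, i.e. the DFT of the defining
# vector yields the eigenvalues of the circulant matrix.
Lam = np.diag(np.fft.fft(c))
assert np.allclose(C, F.conj().T @ Lam @ F)
```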
  • Matrix Algebra and Control
    Appendix A. Matrix Algebra and Control. Boldface lower case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix. Containing m elements (entries) it is referred to as an m-vector. The number of rows and columns of a matrix A is n and m, respectively. Then, A is an (n, m)-matrix or n × m-matrix (dimension n × m). The matrix A is called positive or non-negative if A > 0 or A ≥ 0, respectively, i.e., if the elements are real, positive and non-negative, respectively. A.1 Matrix Multiplication. Two matrices A and B may only be multiplied, C = AB, if they are conformable. A has size n × m, B m × r, C n × r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. Kronecker matrix products do not require conformable multiplicands. The elements or entries of the matrices are related as follows: C_ij = Σ_{ν=1}^{m} A_iν B_νj for all i = 1 ... n, j = 1 ... r. The jth column vector C_·j of the matrix C as denoted in Eq. (A.13) can be calculated from the columns A_·ν and the entries B_νj by the following relation; the jth row C_j· from the rows B_ν· and the entries A_jν:

    column C_·j = Σ_{ν=1}^{m} A_·ν B_νj,    row C_j· = Σ_{ν=1}^{m} A_jν B_ν·    (A.1)

    A matrix product, e.g., AB = 0, may be zero although neither the multiplicand A nor the multiplicator B is zero.
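A tiny numerical illustration of the two remarks above (conformability of the factors and the fact that a product of nonzero matrices can be zero); the specific matrices are my own choice, since the excerpt's example did not survive extraction.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
B = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])

# Conformable: A is 2x2 and B is 2x2, so C = AB is defined and is 2x2,
# with C[i, j] = sum over v of A[i, v] * B[v, j].
C = A @ B
assert np.allclose(C, np.zeros((2, 2)))   # AB = 0 although A != 0 and B != 0
```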
  • Quasi-Rational Canonical Forms of a Matrix Over a Number Field
    Advances in Linear Algebra & Matrix Theory, 2018, 8, 1-10. http://www.scirp.org/journal/alamt. ISSN Online: 2165-3348, ISSN Print: 2165-333X. Quasi-Rational Canonical Forms of a Matrix over a Number Field. Zhudeng Wang*, Qing Wang, Nan Qin. School of Mathematics and Statistics, Yancheng Teachers University, Yancheng, China. How to cite this paper: Wang, Z.D., Wang, Q. and Qin, N. (2018) Quasi-Rational Canonical Forms of a Matrix over a Number Field. Advances in Linear Algebra & Matrix Theory, 8, 1-10. https://doi.org/10.4236/alamt.2018.81001. Received: October 28, 2017. Accepted: January 7, 2018. Published: January 10, 2018. Abstract: A matrix is similar to Jordan canonical form over the complex field and the rational canonical form over a number field, respectively. In this paper, we further study the rational canonical form of a matrix over any number field. We firstly discuss the elementary divisors of a matrix over a number field. Then, we give the quasi-rational canonical forms of a matrix by combining Jordan and the rational canonical forms. Finally, we show that a matrix is similar to its quasi-rational canonical forms over a number field. Keywords: Matrix, Jordan Canonical Form, Rational Canonical Form, Quasi-Rational Canonical Form. 1. Introduction. A matrix is similar to Jordan canonical form over the complex field and the rational canonical form over a number field, respectively.
  • On the Covariance Completion Problem Under a Circulant Structure
    On the Covariance Completion Problem under a Circulant Structure. Francesca P. Carli and Tryphon T. Georgiou. Abstract — Covariance matrices with a circulant structure arise in the context of discrete-time periodic processes and their significance stems also partly from the fact that they can be diagonalized via a Fourier transformation. This note deals with the problem of completion of partially specified circulant covariance matrices. The particular completion that has maximal determinant (i.e., the so-called maximum entropy completion) was considered in Carli et al. [2] where it was shown that if a single band is unspecified and to be completed, the algebraic restriction that enforces the circulant structure is automatically satisfied and that the inverse of the maximizer has a band of zero values that corresponds to the unspecified band in the data, i.e., it has a banded inverse with zero values corresponding to the location of unspecified elements, cf. [4]. The purpose of the present note is to develop a simple independent argument that explains this result and, at the same time, shows that the algebraic constraint for the completion to be circulant is automatically satisfied in all cases, i.e., for any number of missing bands as well as for any number of arbitrary missing elements in a block-circulant structure. More specifically, the proof of the key result relies on the observation that circulant and block-circulant matrices are stable points of a certain group. The group action preserves the value of the determinant. Hence, the maximizer of the determinant, which is unique and has the Dempster property [4], will generate an orbit under the group-action that preserves the specified elements in their original locations (since these are compatible with the circulant structure).
  • Computing the Nearest Rank-Deficient Matrix Polynomial
    Computing Lower Rank Approximations of Matrix Polynomials†. Mark Giesbrecht¹, Joseph Haraldson¹, George Labahn¹. Abstract. Given an input matrix polynomial whose coefficients are floating point numbers, we consider the problem of finding the nearest matrix polynomial which has rank at most a specified value. This generalizes the problem of finding a nearest matrix polynomial that is algebraically singular with a prescribed lower bound on the dimension given in a previous paper by the authors. In this paper we prove that such lower rank matrices at minimal distance always exist, satisfy regularity conditions, and are all isolated and surrounded by a basin of attraction of non-minimal solutions. In addition, we present an iterative algorithm which, on given input sufficiently close to a rank-at-most matrix, produces that matrix. The algorithm is efficient and is proven to converge quadratically given a sufficiently good starting point. An implementation demonstrates the effectiveness and numerical robustness of our algorithm in practice. Keywords: Matrix polynomials; symbolic-numeric computing; low-rank approximation. 1. Introduction. Matrix polynomials appear in many areas of computational algebra, control systems theory, differential equations, and mechanics. The algebra of matrix polynomials is typically described assuming that the individual polynomial coefficients come from an exact arithmetic domain. However, in the case of applications these coefficients typically have numeric coefficients, usually real or complex numbers. As such, arithmetic can have numerical errors and algorithms are prone to numerical instability. Numerical errors have an impact, for example, in determining the rank of a matrix polynomial with floating point coefficients. In an exact setting determining the rank or determinant of a matrix polynomial is straightforward, and efficient procedures are available, for example from Storjohann and Villard (2005).
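To illustrate the numerical rank issue raised in the excerpt: the toy matrix polynomial below is rank deficient for every evaluation point, but a tiny perturbation of its coefficients makes an evaluated matrix look full rank unless a tolerance is supplied. This is an illustration constructed for this listing, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# A(z) = [[1, z], [z, z^2]] has rank 1 for every z (row 2 = z * row 1).
# Coefficient representation: A(z) = A0 + A1*z + A2*z^2.
A0 = np.array([[1.0, 0.0], [0.0, 0.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])

def eval_poly(z):
    return A0 + A1 * z + A2 * z**2

z0 = rng.standard_normal()
print(np.linalg.matrix_rank(eval_poly(z0)))                   # 1 (rank deficient)

# Perturb the coefficients slightly (e.g. rounding or measurement noise):
# the evaluated matrix is numerically full rank unless a tolerance is used.
noise = 1e-9 * rng.standard_normal((2, 2))
print(np.linalg.matrix_rank(eval_poly(z0) + noise))           # typically 2
print(np.linalg.matrix_rank(eval_poly(z0) + noise, tol=1e-6)) # 1 again
```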