Realm of Matrices: Exponential and Logarithm Functions


Debapriya Biswas

In this article, we discuss the exponential and the logarithmic functions in the realm of matrices. These notions are very useful in the mathematical and the physical sciences [1,2]. We discuss some important results, including the connections established between skew-symmetric and orthogonal matrices through the exponential map.

(Debapriya Biswas is an Assistant Professor at the Department of Mathematics, IIT Kharagpur, West Bengal, India. Her areas of interest are Lie groups and Lie algebras and their representation theory, harmonic analysis and complex analysis, in particular, Clifford analysis. She is also interested in teaching and enjoys discussing her research interests with others.)

Keywords: matrix exponential, matrix logarithm, orthogonal, nilpotent, unipotent, skew-symmetric, Jordan matrix.

1. Introduction

The term 'matrix' was coined by Sylvester in 1850. Cardano, Leibniz, Seki, Cayley, Jordan, Gauss, Cramer and others have made deep contributions to matrix theory. The theory of matrices is a fundamental tool widely used in different branches of science and engineering, such as classical mechanics, optics, electromagnetism, quantum mechanics, motion of rigid bodies, astrophysics, probability theory, and computer graphics [3–5]. The standard way that matrix theory gets applied is through its role as a representation of linear transformations and in finding solutions to systems of linear equations [6]. Matrix algebra describes not only the study of linear transformations and operators, but also gives an insight into the geometry of linear transformations [7]. Matrix calculus generalizes classical analytical notions like derivatives to higher dimensions [8]. Also, infinite matrices (which may have an infinite number of rows or columns) occur in planetary theory and atomic theory. Further, the classification of matrices into different types, such as skew-symmetric, orthogonal, nilpotent, or unipotent matrices, is essential in dealing with complicated practical problems. In this article, we will discuss how to compute the exponential of an arbitrary real or complex matrix, and discuss some of its important properties [9,10].

2. Jordan Form of Matrices

A Jordan block, named in honour of Camille Jordan, is a matrix of the form

$$
J = \begin{pmatrix}
\lambda & 1 & 0 & \cdots & 0 \\
0 & \lambda & 1 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda & 1 \\
0 & 0 & \cdots & 0 & \lambda
\end{pmatrix}.
$$

Every Jordan block is described by its dimension $n$ and its eigenvalue $\lambda$, and is denoted by $J_{\lambda,n}$.

DEFINITION 2.1. If $M_n$ denotes the set of all $n \times n$ complex matrices, then a matrix $A \in M_n$ of the form

$$
A = \begin{pmatrix}
A_{11} & & & 0 \\
& A_{22} & & \\
& & \ddots & \\
0 & & & A_{kk}
\end{pmatrix},
$$

in which $A_{ii} \in M_{n_i}$, $i = 1, 2, \ldots, k$, and $n_1 + n_2 + \cdots + n_k = n$, is called block diagonal. Notationally, such a matrix is often indicated as $A = A_{11} \oplus A_{22} \oplus \cdots \oplus A_{kk}$; this is called the direct sum of the matrices $A_{11}, A_{22}, \ldots, A_{kk}$ [7].

A block diagonal matrix whose blocks are Jordan blocks is called a Jordan matrix, and is denoted using either the $\oplus$ or the $\operatorname{diag}$ notation. The $(m+s+p) \times (m+s+p)$ block diagonal square matrix having first, second, and third diagonal blocks $J_{a,m}$, $J_{b,s}$ and $J_{c,p}$ is compactly indicated as $J_{a,m} \oplus J_{b,s} \oplus J_{c,p}$ or $\operatorname{diag}(J_{a,m}, J_{b,s}, J_{c,p})$, respectively [4,7]. For example, the square matrix

$$
J = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & i & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & i & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & i & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & i & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 5
\end{pmatrix}
$$

is a $10 \times 10$ Jordan matrix with one $3 \times 3$ block with eigenvalue $0$, two $2 \times 2$ blocks with eigenvalue the imaginary unit $i$, and one $3 \times 3$ block with eigenvalue $5$. Its Jordan block structure can be expressed as either $J_{0,3} \oplus J_{i,2} \oplus J_{i,2} \oplus J_{5,3}$ or $\operatorname{diag}(J_{0,3}, J_{i,2}, J_{i,2}, J_{5,3})$.
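The direct-sum structure above is easy to experiment with numerically. The following Python sketch (illustrative only, assuming numpy and scipy are available; the helper `jordan_block` is our own, not a library function) assembles the $10 \times 10$ example as $J_{0,3} \oplus J_{i,2} \oplus J_{i,2} \oplus J_{5,3}$.

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, n):
    """Return the n x n Jordan block J_{lam,n}: lam on the main diagonal,
    ones on the superdiagonal, zeros elsewhere."""
    return lam * np.eye(n, dtype=complex) + np.eye(n, k=1, dtype=complex)

# The 10 x 10 Jordan matrix from the text: J_{0,3} + J_{i,2} + J_{i,2} + J_{5,3}.
J = block_diag(jordan_block(0, 3),
               jordan_block(1j, 2),
               jordan_block(1j, 2),
               jordan_block(5, 3))

print(J.shape)     # (10, 10)
print(np.diag(J))  # diagonal entries: 0, 0, 0, i, i, i, i, 5, 5, 5
```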
3. Nilpotent and Unipotent Matrices

DEFINITION 3.1. A square matrix $X$ is said to be nilpotent if $X^r = 0$ for some positive integer $r$. The least such positive integer is called the index (or degree) of nilpotency. If $X$ is an $n \times n$ nilpotent matrix, then $X^m = 0$ for all $m \geq n$ [9].

For example, the $2 \times 2$ matrix $A = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}$ is nilpotent of degree 2, since $A^2 = 0$. In general, any triangular matrix with zeros along the main diagonal is nilpotent. For example, the $4 \times 4$ matrix

$$
A = \begin{pmatrix}
0 & 1 & 2 & 4 \\
0 & 0 & 2 & 1 \\
0 & 0 & 0 & 5 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$

is nilpotent of degree 4, as $A^4 = 0$ and $A^3 \neq 0$. In the above examples, several entries are zero. However, this need not be so in a typical nilpotent matrix. For instance, the $3 \times 3$ matrix

$$
A = \begin{pmatrix}
5 & -3 & 2 \\
15 & -9 & 6 \\
10 & -6 & 4
\end{pmatrix}
$$

squares to zero, i.e., $A^2 = 0$, though the matrix has no zero entries.

For $A \in M_n$, the following equivalent characterizations of nilpotency may be worth mentioning:

• The matrix $A$ is nilpotent of degree $r \leq n$, i.e., $A^r = 0$.
• The characteristic polynomial $\chi_A(\lambda) = \det(\lambda I_n - A)$ of $A$ is $\lambda^n$.
• The minimal polynomial of $A$ is $\lambda^r$.
• $\operatorname{tr}(A^m) = 0$ for all $m > 0$, i.e., the sum of the diagonal entries of every positive power of $A$ vanishes.
• The only (complex) eigenvalue of $A$ is $0$.

Further, from the above, the following observations can be added:

• The degree of an $n \times n$ nilpotent matrix is always less than or equal to $n$.
• The determinant and trace of a nilpotent matrix are always zero.
• The only nilpotent diagonalizable matrix is the zero matrix.

3.2 Canonical Nilpotent Matrix

We consider the $n \times n$ shift matrix

$$
A = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
0 & 0 & 0 & \cdots & 0
\end{pmatrix},
$$

which has ones along the superdiagonal and zeros at other places. As a linear transformation, this shift matrix shifts the components of a vector one slot to the left: $A(a_1, a_2, \ldots, a_n) = (a_2, a_3, \ldots, a_n, 0)$. As $A^n = 0$ but $A^{n-1} \neq 0$, this matrix $A$ is nilpotent of degree $n$ and is called the canonical nilpotent matrix. Further, if $A$ is any nilpotent matrix, then $A$ is similar to a block diagonal matrix of the form

$$
\begin{pmatrix}
A_1 & O & \cdots & O \\
O & A_2 & \cdots & O \\
\vdots & & \ddots & O \\
O & O & \cdots & A_r
\end{pmatrix},
$$

where each of the blocks $A_1, A_2, \ldots, A_r$ is a shift matrix (possibly of different sizes). This result is a special case of the Jordan canonical form of matrices. For example, any non-zero nilpotent $2 \times 2$ matrix $A$ is similar to the matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. That is, if $A$ is any non-zero nilpotent $2 \times 2$ matrix, then there exists a basis $\{b_1, b_2\}$ such that $Ab_1 = 0$ and $Ab_2 = b_1$. For example, if $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, $b_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $b_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, then $Ab_1 = 0$ and $Ab_2 = b_1$.

3.3 Properties

(i) If $A$ is a nilpotent matrix of degree $n$, then $I + A$ is invertible. Moreover, $(I + A)^{-1} = I - A + A^2 - A^3 + \cdots + (-1)^{n-1} A^{n-1}$.

(ii) If $A$ is nilpotent, then $\det(I + A) = 1$. For example, if $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, then $A^2 = O$ and $\det(I + A) = 1$. Conversely, if $A$ is a matrix and $\det(I + tA) = 1$ for all values of the scalar $t$, then $A$ is nilpotent.

(iii) Every singular matrix can be expressed as a product of nilpotent matrices.
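These characterizations can be checked numerically. The sketch below (illustrative only, assuming numpy; the helper `nilpotency_degree` is our own) verifies that the $3 \times 3$ matrix above with no zero entries is nilpotent of degree 2, and that its trace, determinant and eigenvalues all vanish, as the observations predict.

```python
import numpy as np

def nilpotency_degree(X):
    """Return the least r with X^r = 0, or None if X is not nilpotent.
    For an n x n nilpotent matrix the degree never exceeds n,
    so checking n powers suffices."""
    n = X.shape[0]
    P = np.eye(n)
    for r in range(1, n + 1):
        P = P @ X
        if np.allclose(P, 0):
            return r
    return None

# The 3 x 3 matrix with no zero entries that squares to zero.
A = np.array([[ 5., -3., 2.],
              [15., -9., 6.],
              [10., -6., 4.]])
print(nilpotency_degree(A))   # 2
print(np.trace(A))            # 0.0
print(np.linalg.det(A))       # 0.0 (up to rounding)
print(np.linalg.eigvals(A))   # all eigenvalues (numerically) zero
```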
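Properties (i) and (ii) can also be verified directly on the $4 \times 4$ nilpotent matrix of degree 4 introduced earlier. The following sketch (assuming numpy) compares the finite alternating series with the numerically computed inverse.

```python
import numpy as np
from numpy.linalg import matrix_power, inv, det

# The 4 x 4 nilpotent matrix of degree 4 from the earlier example.
A = np.array([[0., 1., 2., 4.],
              [0., 0., 2., 1.],
              [0., 0., 0., 5.],
              [0., 0., 0., 0.]])
I = np.eye(4)

# Property (i): (I + A)^{-1} = I - A + A^2 - A^3, since A^4 = 0.
series = I - A + matrix_power(A, 2) - matrix_power(A, 3)
print(np.allclose(series, inv(I + A)))  # True

# Property (ii): det(I + A) = 1 for nilpotent A.
print(det(I + A))                       # 1.0
```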
DEFINITION 3.4. An $n \times n$ matrix $A$ is said to be unipotent if the matrix $A - I$ is nilpotent. The degree of nilpotency of $A - I$ is also called the degree of unipotency of $A$. For example,

$$
A = \begin{pmatrix}
1 & 1 & 2 & 4 \\
0 & 1 & 2 & 1 \\
0 & 0 & 1 & 5 \\
0 & 0 & 0 & 1
\end{pmatrix}, \quad
B = \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad
C = \begin{pmatrix}
6 & -3 & 2 \\
15 & -8 & 6 \\
10 & -6 & 5
\end{pmatrix}
$$

are unipotent matrices of degree 4, 2 and 2 respectively, because $(A - I)^4 = O$, $(B - I)^2 = O$ and $(C - I)^2 = O$.

We know that every complex matrix $X$ is similar to an upper triangular matrix. Thus, there exists a non-singular matrix $P$ such that $X = PAP^{-1}$, where

$$
A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
0 & a_{22} & \cdots & a_{2n} \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & a_{nn}
\end{pmatrix}.
$$

Therefore, the characteristic polynomial of $X$ is $(\lambda - a_{11})(\lambda - a_{22}) \cdots (\lambda - a_{nn})$, as similar matrices have the same characteristic polynomial. Two cases may then arise:

Case I. The eigenvalues $a_{11}, a_{22}, \ldots, a_{nn}$ are all distinct.

Case II. Not all of $a_{11}, a_{22}, \ldots, a_{nn}$ are distinct.

For example, consider the matrix $A = \begin{pmatrix} 3 & 2 \\ 1 & 4 \end{pmatrix}$. Then $A - \lambda I = \begin{pmatrix} 3-\lambda & 2 \\ 1 & 4-\lambda \end{pmatrix}$ and $\det(A - \lambda I) = (\lambda - 5)(\lambda - 2)$. The determinant vanishes if $\lambda = 5$ or $\lambda = 2$, which are the distinct eigenvalues of $A$. To find the eigenvectors from the matrix equation $AX = \lambda X$, we solve the two systems of linear equations $(A - 5I)X = 0$ and $(A - 2I)X = 0$, from which the eigenvectors are obtained as $v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $v_2 = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$.

These eigenvectors form a basis $B = (v_1, v_2)$ of $\mathbb{R}^2$, and the matrix relating the standard basis $E$ to the basis $B$ is

$$
P = B^{-1} = \begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix}^{-1} = -\frac{1}{3}\begin{pmatrix} -1 & -2 \\ -1 & 1 \end{pmatrix},
$$

and $PAP^{-1}$ is diagonal:

$$
PAP^{-1} = -\frac{1}{3}\begin{pmatrix} -1 & -2 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 3 & 2 \\ 1 & 4 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix},
$$

which is the Jordan canonical form of $A$; its characteristic polynomial is $(\lambda - 5)(\lambda - 2)$, and the two distinct eigenvalues are 5 and 2.
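The eigenvalue computation above is a convenient target for a numerical cross-check. The sketch below (assuming numpy; the ordering and scaling of the computed eigenvectors may differ from the hand computation) recovers the eigenvalues 5 and 2 and the diagonal form.

```python
import numpy as np

A = np.array([[3., 2.],
              [1., 4.]])
print(np.linalg.eigvals(A))   # [5., 2.] (possibly in another order)

# Columns of B are the eigenvectors v1 = (1, 1) and v2 = (2, -1); P = B^{-1}.
B = np.array([[1., 2.],
              [1., -1.]])
P = np.linalg.inv(B)
print(P @ A @ B)              # P A P^{-1} = B^{-1} A B = diag(5, 2)
```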
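Similarly, the degrees of unipotency claimed in Definition 3.4 can be confirmed by checking powers of $A - I$; a minimal sketch for $B$ and $C$, again assuming numpy:

```python
import numpy as np
from numpy.linalg import matrix_power

# B and C from Definition 3.4; both should be unipotent of degree 2.
B = np.array([[1., 3.],
              [0., 1.]])
C = np.array([[ 6., -3., 2.],
              [15., -8., 6.],
              [10., -6., 5.]])
for M in (B, C):
    N = M - np.eye(M.shape[0])                 # N = M - I must be nilpotent
    print(np.allclose(matrix_power(N, 2), 0))  # True for both
```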
4. Exponential of a Matrix

Recall that the exponential function $e^z = \sum_{n \geq 0} \frac{z^n}{n!}$ is a convergent series for each $z \in \mathbb{C}$.
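The same series, with powers of $z$ replaced by powers of a matrix, defines the matrix exponential. As a numerical preview (a sketch, assuming numpy and scipy; the truncation length 30 is an arbitrary illustrative choice), the block below compares a truncated power series with scipy.linalg.expm, which computes $e^X$ by a more robust Pade-approximation algorithm.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(X, terms=30):
    """Truncated exponential series: sum_{k=0}^{terms-1} X^k / k!.
    Illustration only; not how production routines compute e^X."""
    result = np.zeros_like(X)
    term = np.eye(X.shape[0])        # X^0 / 0!
    for k in range(terms):
        result = result + term
        term = term @ X / (k + 1)    # next term: X^{k+1} / (k+1)!
    return result

A = np.array([[3., 2.],
              [1., 4.]])
print(np.allclose(expm_series(A), expm(A)))   # True
```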