Computing the Matrix Exponential Through Spectral Decomposition

S. S. Chhokar, 140284179

Abstract

There seems to be a lack of easy-to-follow material which introduces the matrix exponential without overloading the reader with an unnerving amount of information. There may indeed be "Nineteen Dubious Ways to Compute the Exponential of a Matrix" [6]; however, this project will focus on a single way, namely, through the use of spectral decomposition. Working over the algebraically closed complex field, we look at two classes of matrices, diagonalizable and defective, and find an effective method for each. This requires the introduction of new concepts such as generalized eigenvectors and the Jordan canonical form.

1 What is e?

First we start with a very brief history of the number e itself. It is thought to have arisen naturally through mathematical experiments with compound interest, before the development of calculus. In fact it was already referred to in Edward Wright's English translation of John Napier's work on logarithms, published in 1618. The number itself was not denoted e until Leonhard Euler's work in the first half of the eighteenth century, which gave it its more familiar role in calculus. It is now known as Euler's number, $e = 2.71828182845904\ldots$ [5]. (If one wishes to learn more about this number, one could read "e: The Story of a Number" by Eli Maor.)

We move swiftly on to the natural exponential function itself.

Definition 1.1. The natural exponential function $\exp(x)$ is defined for all $x \in \mathbb{R}$ as the following Taylor series:
$$f(x) = e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}, \qquad (1)$$
where the base $e$ is Euler's transcendental constant. [8]

[Figure 1: Graph of $y = e^x$]

Note: We shall refer to the natural exponential function simply as the exponential function from now on.

For completeness we shall now state some properties and applications of the exponential function.

Proposition 1.2 (Properties of the exponential function). [8]

1. The exponential function (1) is continuous with domain $\mathbb{R}$ and range $(0, \infty)$. This means that $e^x > 0$ for all $x$. Consequently we get the limits
$$\lim_{x \to -\infty} e^x = 0 \quad \text{and} \quad \lim_{x \to \infty} e^x = \infty, \qquad (2)$$
hence the $x$-axis is a horizontal asymptote of the exponential function, as can be seen in Figure 1.

2. $\exp(x)$ is the inverse of the natural logarithm $\ln(x)$.

3. The $n$-th derivative of the exponential function with respect to $x$, for any $k, x \in \mathbb{R}$, is
$$\frac{d^n}{dx^n} e^{kx} = k^n e^{kx}.$$

4. For all $a, b \in (-\infty, \infty)$,
$$e^{a+b} = e^a e^b \quad \text{and} \quad e^{a-b} = \frac{e^a}{e^b}.$$

5. $(e^a)^b = e^{ab}$.

Proof. Proofs of these properties can be found in Thomas' Calculus. [8]

The exponential function is an important notion with applications ranging across mathematics, statistics, the natural sciences, and economics. In general, e is the base rate of growth shared by all continually growing processes: it lets you take a simple growth rate (where all change happens at the end of the year) and find the impact of continuously compounded growth. The exponential function is found in all continuously growing systems: population, radioactive decay, interest calculations, and more. It is the exponent x that determines the scale by which a process grows.

To illustrate the broad reach of the exponential function, we can use an example from the political economist Thomas Malthus. In layman's terms, Malthus stated that, if left unchecked, the human population would grow exponentially at a rate $\lambda$ for time $t$ until a natural disaster occurred. [4] This has come to be known as the Malthusian catastrophe. The initial exponential growth in population can be modelled as
$$P = P_0 e^{\lambda t},$$
where $P_0$ is the initial population size and $P$ is the population size after time $t$.
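To make Definition 1.1 and the Malthusian growth model concrete, the following minimal Python sketch truncates the Taylor series for $e^x$ and evaluates $P = P_0 e^{\lambda t}$. The function name exp_taylor, the number of terms and the population figures are our own illustrative choices and are not part of the original text.

```python
import math

def exp_taylor(x, terms=30):
    """Approximate e^x by truncating the Taylor series of Definition 1.1."""
    return sum(x**k / math.factorial(k) for k in range(terms))

# The truncated series rapidly approaches the library value of e^x.
print(exp_taylor(1.0))    # ~2.718281828459045
print(math.exp(1.0))      # 2.718281828459045

# Malthusian growth P = P0 * e^(lambda * t), with purely illustrative values.
P0, lam, t = 1000, 0.02, 10          # initial population, growth rate, years
print(P0 * math.exp(lam * t))        # population after t years (~1221.4)
```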
2 Meet the Matrices

Before delving straight into the matrix exponential, it is essential that we discuss the two forms of matrices we will be dealing with and how they differ, since each will require different tools for the computation of its exponential.

2.1 Diagonalizable Matrix

Traditionally, one may have been introduced to the notion of a diagonalizable matrix through the following definition.

Definition 2.1. A matrix $A \in \mathbb{C}^{n \times n}$ is said to be diagonalizable if and only if there exists an invertible matrix $P$ such that
$$D = P^{-1} A P, \qquad (3)$$
where $D$ is a diagonal matrix. [3]

However, we shall now give a more applicable definition in terms of the multiplicities of the eigenvalues belonging to a matrix. First we define the algebraic and geometric multiplicities of an eigenvalue.

Definition 2.2. Let $A$ be a complex $n \times n$ matrix with eigenvalue $\lambda$.

1. The algebraic multiplicity of $\lambda$ is the number of times it is repeated as a root of the characteristic polynomial $p_\lambda(x)$. [3] Let us denote the algebraic multiplicity of $\lambda$ as
$$h_\lambda(A) = \max\{h : p_\lambda(x) = (x - \lambda)^h k(x)\}.$$

2. The geometric multiplicity of $\lambda$ is the dimension of the eigenspace of $\lambda$, i.e. the dimension of the nullspace of $(A - \lambda I)$. [3] Let us denote the geometric multiplicity of $\lambda$ as
$$g_\lambda(A) = \dim N(A - \lambda I).$$

Definition 2.3. Let $A$ be a complex $n \times n$ matrix. $A$ is said to be diagonalizable if and only if each eigenvalue of $A$ has an algebraic multiplicity equal to its geometric multiplicity. [1]

[Note that for the following examples we have omitted the explicit calculation of the characteristic polynomials, eigenspaces and eigenvectors. If one would like to familiarize oneself with these notions, one can look at chapter 9 of Schaum's Outline of Linear Algebra. [3]]

Example 2.4. Let
$$A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}.$$
Since
$$p_\lambda(x) = (\lambda + 1)(\lambda + 2),$$
the eigenvalues of $A$ are $\lambda_1 = -1$, $\lambda_2 = -2$, corresponding to the eigenspaces
$$E_{-1} = \operatorname{span}\left\{\begin{pmatrix} -1 \\ 1 \end{pmatrix}\right\} \quad \text{and} \quad E_{-2} = \operatorname{span}\left\{\begin{pmatrix} -1 \\ 2 \end{pmatrix}\right\}.$$
Each eigenvalue has $h_\lambda = g_\lambda = 1$, which means $A$ is diagonalizable (by Definition 2.3).

Example 2.5. Let
$$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 4 \end{pmatrix}.$$
We get
$$p_\lambda(x) = (1 - \lambda)(2 - \lambda)(4 - \lambda).$$
Thus the eigenvalues of $A$ are $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 4$, corresponding to the eigenspaces
$$E_1 = \operatorname{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\right\}, \quad E_2 = \operatorname{span}\left\{\begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix}\right\} \quad \text{and} \quad E_4 = \operatorname{span}\left\{\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\right\},$$
respectively. Hence $A$ is diagonalizable since each eigenvalue has $h = g = 1$.

Remark: For simplicity we shall use the notation $h$ and $g$ for the algebraic and geometric multiplicity of an eigenvalue, respectively.

2.2 Defective Matrix

In order to define a non-diagonalizable (defective) matrix in a similar fashion, we must give the following proposition.

Proposition 2.6. The following is true:
$$h_\lambda(A) \geq g_\lambda(A),$$
i.e. the algebraic multiplicity of $\lambda$ is at least as large as its geometric multiplicity. [1]

Proof. A proof of this result can be found in chapter 8 of the Advanced Linear Algebra textbook. [7]

Definition 2.7. Let $A$ be a complex $n \times n$ matrix. $A$ is said to be non-diagonalizable if there exists at least one eigenvalue $\lambda$ for which
$$h_\lambda(A) > g_\lambda(A),$$
i.e. there is at least one eigenvalue with an algebraic multiplicity greater than its geometric multiplicity. [1]

Example 2.8. Let
$$A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$
We have
$$p_\lambda(x) = (1 - \lambda)(1 - \lambda).$$
Thus there is only one distinct eigenvalue of $A$, namely $\lambda_1 = 1$, with algebraic multiplicity $h = 2$ and geometric multiplicity $g = \dim E_1 = 1$. Now since $h > g$, $A$ is non-diagonalizable (by Definition 2.7).
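Before turning to further defective examples, here is a short Python sketch of how Definitions 2.2 and 2.3 can be checked numerically. The helper names multiplicities and is_diagonalizable are our own, and the rank-and-tolerance test is a rough numerical heuristic suited to the small integer matrices of Examples 2.5 and 2.8 rather than a robust general-purpose test.

```python
import numpy as np

def multiplicities(A, tol=1e-6):
    """Return {eigenvalue: (algebraic, geometric)} for a square matrix A.

    Algebraic multiplicity: how often the eigenvalue recurs among the roots
    of the characteristic polynomial (Definition 2.2, part 1).
    Geometric multiplicity: dim N(A - lambda*I) = n - rank(A - lambda*I)
    (Definition 2.2, part 2).
    """
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    out = {}
    for lam in eigvals:
        key = complex(round(float(lam.real), 6), round(float(lam.imag), 6))
        if key in out:
            continue  # eigenvalue already handled
        alg = int(np.sum(np.abs(eigvals - lam) < tol))
        geo = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        out[key] = (alg, int(geo))
    return out

def is_diagonalizable(A):
    """Definition 2.3: diagonalizable iff h = g for every eigenvalue."""
    return all(h == g for h, g in multiplicities(A).values())

# Example 2.5: three simple eigenvalues (1, 2, 4), hence diagonalizable.
A_25 = np.array([[1., 1., 0.], [0., 2., 0.], [0., -1., 4.]])
print(is_diagonalizable(A_25))   # True

# Example 2.8: eigenvalue 1 with h = 2 > g = 1, hence defective.
A_28 = np.array([[1., 0.], [1., 1.]])
print(is_diagonalizable(A_28))   # False
```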
Example 2.9. Let
$$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 1 & 2 \end{pmatrix},$$
where
$$p_\lambda(x) = (1 - \lambda)(2 - \lambda)^2.$$
The characteristic polynomial gives us the eigenvalue $\lambda_1 = 1$ and the defective eigenvalue $\lambda_2 = 2$ with $h = 2$ and $g = \dim E_2 = 1$; therefore $h > g$ and $A$ is non-diagonalizable.

Example 2.10. Let
$$A = \begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 3 & 1 & 0 \\ 0 & 0 & 3 & 0 \\ 1 & 0 & 0 & 3 \end{pmatrix},$$
where
$$p_\lambda(x) = (\lambda - 3)^4.$$
Hence our only eigenvalue is $\lambda = 3$ with $h = 4 > g = 2$, so $A$ is non-diagonalizable.

3 The Matrix Exponential

Now we arrive at the matrix exponential itself. The exponential function of a matrix is analogous to the ordinary exponential function; the difference is simply that the exponent is a square matrix instead of a real number.

Definition 3.1. Given a square matrix $A \in \mathbb{C}^{n \times n}$, the exponential of $A$, denoted $e^{At}$, is the $n \times n$ matrix given by the power series
$$e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!},$$
where $t$ is a constant. [1]

Proposition 3.2 (Properties of the matrix exponential function). [2] Given that $A$ and $B$ are square matrices, $P$ is a non-singular matrix and $t$ is a real number, we have:

1. If $AB = BA$ then $e^{At} e^{Bt} = e^{(A+B)t}$;

2. If $AB = BA$ then $e^{At} B = B e^{At}$;

3. $(e^{At})^{-1} = e^{-At}$;

4. $e^{PAP^{-1}t} = P e^{At} P^{-1}$.

Proof. The proofs of these properties can be found in Hall (2003). [2]

Definition 3.3. If $A \in \mathbb{C}^{n \times n}$ is diagonalizable, that is, each of its eigenvalues satisfies $h_\lambda(A) = g_\lambda(A)$, then we let the matrix $S$ be formed from a basis of $n$ linearly independent eigenvectors of $A$: [3]
$$S = \begin{pmatrix} | & | & & | \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \\ | & | & & | \end{pmatrix},$$
i.e. $S$ has columns constructed from the eigenvectors of $A$.

Proposition 3.4. Given a diagonalizable matrix $A \in \mathbb{C}^{n \times n}$, with its basis of linearly independent eigenvectors assembled in $S$, then
$$(S^{-1} A S)^k = S^{-1} A^k S$$
holds for all $k \in \mathbb{N}$. [2]

Proof. Writing out the power and cancelling each interior product $S S^{-1} = I$ gives
$$(S^{-1} A S)^k = (S^{-1} A S)(S^{-1} A S) \cdots (S^{-1} A S) = S^{-1} A^k S.$$
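As an illustration of the spectral-decomposition approach that Definition 3.3 and Proposition 3.4 are building towards, here is a brief Python sketch: for a diagonalizable $A = S D S^{-1}$, property 4 of Proposition 3.2 gives $e^{At} = S e^{Dt} S^{-1}$, where $e^{Dt}$ is diagonal with entries $e^{\lambda_i t}$. The helper name expm_spectral is our own choice, the sketch assumes $A$ is diagonalizable with a well-conditioned eigenvector basis, and scipy.linalg.expm is used only as an independent numerical check.

```python
import numpy as np
from scipy.linalg import expm

def expm_spectral(A, t=1.0):
    """e^{At} for a diagonalizable A via spectral decomposition.

    Writing A = S D S^{-1}, where the columns of S are eigenvectors of A
    (Definition 3.3), property 4 of Proposition 3.2 gives
    e^{At} = S e^{Dt} S^{-1}, with e^{Dt} diagonal with entries e^{lambda_i t}.
    """
    eigvals, S = np.linalg.eig(A)           # columns of S are eigenvectors
    exp_Dt = np.diag(np.exp(eigvals * t))   # exponential of the diagonal part
    return S @ exp_Dt @ np.linalg.inv(S)

# The diagonalizable matrix of Example 2.5.
A = np.array([[1., 1., 0.], [0., 2., 0.], [0., -1., 4.]])
t = 0.5

# The spectral formula agrees with SciPy's general-purpose expm.
print(np.allclose(expm_spectral(A, t), expm(A * t)))   # True
```

For the defective matrices of Examples 2.8 to 2.10 the eigenvector matrix returned by np.linalg.eig is (numerically) singular, so this formula breaks down; this is precisely why the abstract points to generalized eigenvectors and the Jordan canonical form for the defective case.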
