The Exact Methods to Compute the Matrix Exponential


IOSR Journal of Mathematics (IOSR-JM), e-ISSN: 2278-5728, p-ISSN: 2319-765X. Volume 12, Issue 1 Ver. IV (Jan.–Feb. 2016), PP 72–86. www.iosrjournals.org

Mohammed Abdullah Salman¹, V. C. Borkar²

¹²(Department of Mathematics & Statistics, Yeshwant Mahavidyalaya, Swami Ramanand Teerth Marathwada University, Nanded, India)

Abstract: In this paper we present several methods that allow us to compute the matrix exponential $e^{tA}$ exactly. The methods based on eigenvalues and on the Laplace transform are well known and are mentioned here for completeness. Other methods, not well known in the literature, do not involve the calculation of eigenvectors and provide general formulas applicable to any matrix.

Keywords: Exponential matrix, functions of a matrix, Lagrange–Sylvester interpolation, Putzer spectral formula, Laplace transform, commuting matrix, non-commuting matrix.

I. Introduction

The matrix exponential is a very useful tool for solving linear systems of first order. It provides a formula for closed-form solutions, with whose help the controllability and observability of a linear system can be analyzed [1]. There are several methods for calculating the matrix exponential, none of them computationally efficient [7,8,9,10]. However, from a theoretical point of view it is important to know the properties of this matrix function. Formulas involving the Laplace transform and the calculation of eigenvectors have been used in a large number of textbooks, and for this reason the purpose of this work is to provide alternative methods that are less well known but didactically friendly. There are other interesting methods [4] that are not included in our list of cases because of their impracticality in implementation. We develop eight cases, or methods, to calculate the matrix exponential, provide examples of how to apply the lesser-known methods in specific cases, and for the best-known cases cite the respective bibliography.

II. Definitions and Results

The exponential of an $n \times n$ complex matrix $A$, denoted by $e^{tA}$, is defined by

$$\Phi(t) = e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = I + At + \frac{(At)^2}{2!} + \cdots + \frac{(At)^{n-1}}{(n-1)!} + \cdots$$

To establish the convergence of this series, we first define the Frobenius norm of a matrix of size $m \times n$ as

$$\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2}.$$

If $A(:,j)$ denotes the $j$-th column of $A$ and $A(i,:)$ the $i$-th row, it is easy to see that

$$\|A\|_F^2 = \sum_{j=1}^{n} \|A(:,j)\|_2^2 = \sum_{i=1}^{m} \|A(i,:)\|_2^2.$$

We use this norm for convenience, because in a finite-dimensional vector space all norms are equivalent. An important property is how the Frobenius norm bounds a matrix product. Given matrices $A$ and $B$ of sizes $m \times p$ and $p \times n$, their product $C = AB$ has entries $c_{ij} = A(i,:)\,B(:,j)$ (if $A$ has complex entries, the conjugate is applied to the row). Recalling the Cauchy–Schwarz inequality

$$|c_{ij}|^2 \le \|A(i,:)\|_2^2 \, \|B(:,j)\|_2^2,$$

we then have

$$\|AB\|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} |c_{ij}|^2 \le \sum_{i=1}^{m} \sum_{j=1}^{n} \|A(i,:)\|_2^2 \, \|B(:,j)\|_2^2 = \left( \sum_{i=1}^{m} \|A(i,:)\|_2^2 \right) \left( \sum_{j=1}^{n} \|B(:,j)\|_2^2 \right) = \|A\|_F^2 \, \|B\|_F^2.$$

Applying this inequality to a square matrix $A$, it is easy to deduce that

$$\|A^n\|_F \le \|A\|_F^n \quad \text{for all } n = 1, 2, 3, \ldots$$

Formally we must examine the convergence of the limit

$$\lim_{n \to \infty} \sum_{k=0}^{n} \frac{A^k}{k!}.$$

It is sufficient to observe that

$$\left\| \sum_{k=0}^{n} \frac{A^k}{k!} \right\|_F \le \sum_{k=0}^{n} \frac{\|A^k\|_F}{k!} \le \sum_{k=0}^{\infty} \frac{\|A\|_F^k}{k!} = e^{\|A\|_F},$$

and thus $e^A$ is well defined for any square matrix with constant entries.
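The convergence argument above is easy to check numerically. The following is a minimal sketch, assuming Python with NumPy and SciPy (tools the paper itself does not use); the helper name expm_series is ours, and scipy.linalg.expm serves only as a reference value.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t=1.0, n_terms=30):
    """Truncated defining series for e^{tA}; for illustration only.

    The tail is controlled by the Frobenius-norm bound
    ||A^k||_F <= ||A||_F^k, so the partial sums converge for any A.
    """
    n = A.shape[0]
    result = np.zeros((n, n))
    term = np.eye(n)                      # (tA)^0 / 0!
    for k in range(n_terms):
        result += term
        term = term @ (t * A) / (k + 1)   # next term: (tA)^{k+1} / (k+1)!
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.allclose(expm_series(A), expm(A)))   # True
```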
It is useful to remember how the matrix exponential behaves under differentiation. Differentiating the series term by term,

$$\frac{d}{dt} e^{tA} = \frac{d}{dt} \left( I + \frac{tA}{1!} + \frac{t^2 A^2}{2!} + \frac{t^3 A^3}{3!} + \cdots \right) = A + \frac{2t A^2}{2!} + \frac{3t^2 A^3}{3!} + \cdots + \frac{k t^{k-1} A^k}{k!} + \cdots = A \left( I + tA + \frac{t^2 A^2}{2!} + \cdots \right) = A e^{tA} = e^{tA} A.$$

Using induction and the fact that $A$ commutes with $\Phi(t)$, the formula

$$\Phi^{(k)}(t) = \frac{d^k}{dt^k} e^{tA} = A^k e^{tA} = e^{tA} A^k, \qquad k \in \mathbb{Z}_0^+ \tag{2.1}$$

follows. Note that the formula for the first derivative implies that the function $x(t) = e^{tA} x_0$ is the solution of the initial value problem for the first-order system

$$x'(t) = Ax, \qquad x(0) = x_0.$$

Two results from linear algebra that we use below are the following theorems.

Theorem (2.1) (Schur triangularization theorem). For any $n \times n$ square matrix $A$, there is a unitary matrix $U$ such that $A = UTU^{-1}$ with $T$ upper triangular. In addition, the diagonal entries of $T$ are the eigenvalues of $A$.

Theorem (2.2) (Cayley–Hamilton theorem). Let $A$ be a square matrix and $\phi(\lambda) = |\lambda I - A|$ its characteristic polynomial; then $\phi(A) = 0$.

Now we discuss in detail several methods to calculate the matrix exponential.

III. Diagonalizable Matrix

Given an $n \times n$ diagonal matrix $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, the identity $D^k = \mathrm{diag}(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k)$ holds for all $k \in \mathbb{Z}^+$, so we have

$$e^{tD} = \sum_{k=0}^{\infty} \frac{t^k}{k!} D^k = \mathrm{diag}\left( \sum_{k=0}^{\infty} \frac{(\lambda_1 t)^k}{k!}, \ldots, \sum_{k=0}^{\infty} \frac{(\lambda_n t)^k}{k!} \right) = \mathrm{diag}\left( e^{\lambda_1 t}, \ldots, e^{\lambda_n t} \right),$$

which is also a diagonal matrix. Now, in the case that $A$ is a diagonalizable matrix, it is known that there exist an invertible matrix $P$, formed by the eigenvectors of $A$, and a diagonal matrix $D$, formed by the corresponding eigenvalues, such that $A = PDP^{-1}$. It is easy to verify the identity $A^k = P D^k P^{-1}$ for all $k \in \mathbb{Z}^+$. Then we have

$$e^{tA} = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k = \sum_{k=0}^{\infty} \frac{t^k}{k!} \left( P D^k P^{-1} \right) = P \left( \sum_{k=0}^{\infty} \frac{t^k}{k!} D^k \right) P^{-1} = P e^{tD} P^{-1}.$$

Accordingly, it is trivial to find the matrix exponential of a diagonalizable matrix, provided that we first find all the eigenvalues of $A$ with corresponding eigenvectors. This case is well known.

IV. Non-Diagonalizable Matrix

Suppose $A$ is not diagonalizable, so that it is not possible to find $n$ linearly independent eigenvectors of $A$. In this case we can use the Jordan form of $A$. Suppose $J$ is the Jordan form of $A$, with $P$ the transition matrix. Then

$$e^{tA} = P e^{tJ} P^{-1}, \qquad J = \mathrm{diag}\left( J_1(\lambda_1), J_2(\lambda_2), \ldots, J_k(\lambda_k) \right),$$

and consequently

$$e^{J} = \mathrm{diag}\left( e^{J_1}, e^{J_2}, \ldots, e^{J_k} \right).$$

Thus the problem reduces to finding the matrix exponential of a Jordan block, which has the form

$$J_k(\lambda) = \lambda I_k + N_k,$$

where $N_k$ has ones on the first superdiagonal and zeros elsewhere; in general $N^k$ has ones on the $k$-th upper diagonal and is the null matrix when $k \ge n$, the dimension of the block. Using this expression we have

$$e^{J(\lambda)} = \sum_{k=0}^{\infty} \frac{1}{k!} J(\lambda)^k = \sum_{k=0}^{\infty} \frac{1}{k!} (\lambda I + N)^k = e^{\lambda} \sum_{j=0}^{n-1} \frac{N^j}{j!}.$$

This can be written as

$$e^{J} = e^{\lambda} \left( I + N + \frac{N^2}{2!} + \cdots + \frac{N^{n-1}}{(n-1)!} \right).$$

Finally, we get the matrix exponential as

$$e^{tA} = P \, \mathrm{diag}\left( e^{tJ_1}, \ldots, e^{tJ_m} \right) P^{-1}.$$

This case is also well known.
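As a quick illustration of the diagonalizable case of Section III, here is a minimal sketch, again assuming NumPy/SciPy, that builds $e^{tA} = P e^{tD} P^{-1}$ from an eigendecomposition. The function name expm_diagonalizable is ours; scipy.linalg.expm is used only as a check.

```python
import numpy as np
from scipy.linalg import expm

def expm_diagonalizable(A, t=1.0):
    """e^{tA} via A = P D P^{-1}; assumes A is diagonalizable."""
    w, P = np.linalg.eig(A)          # eigenvalues w, eigenvectors as columns of P
    e_tD = np.diag(np.exp(w * t))    # exponential of the diagonal factor
    # .real strips the zero imaginary part that np.linalg.eig can
    # introduce for a real input matrix.
    return (P @ e_tD @ np.linalg.inv(P)).real

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # eigenvalues 5 and 2
print(np.allclose(expm_diagonalizable(A), expm(A)))   # True
```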
One way to demonstrate this formula is to consider the first-order system $x' = Jx$ with initial condition $x(0) = x_0$. On the one hand, we know that the solution of this system is given by $x(t) = e^{tJ} x_0$. On the other hand, this system is easy to solve directly: starting from the last equation, which is decoupled, each linear first-order equation is solved one by one via the method of the integrating factor.

V. Triangular Matrix

Let $S$ be an upper triangular matrix (a similar development works for a lower triangular one) and write it as the sum of a diagonal matrix and a nilpotent matrix:

$$S = D + N.$$

Recall that a matrix $N$ is called nilpotent if there is a positive integer $r$ such that $N^r = 0$; the smallest positive integer for which this equality holds is called the index of nilpotency of the matrix. Assuming the known property of matrix exponentials (see [1])

$$e^{A+B} = e^A e^B, \quad \text{if } AB = BA,$$

we can use it to calculate

$$e^{tS} = e^{t(D+N)} = e^{tD} e^{tN},$$

provided that $D$ and $N$ commute (this holds, for instance, when $D$ is a scalar matrix, as in a Jordan block). We know how to calculate the matrix exponential of a diagonal matrix, so we discuss how to get $e^{tN}$. It suffices to note that when $N$ is nilpotent, the series for this matrix becomes finite, since the number of terms is bounded by the index of nilpotency of $N$.
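The splitting is easy to verify numerically in the commuting case. Below is a minimal sketch, assuming NumPy/SciPy, applied to a Jordan block $S = \lambda I + N$, where $D = \lambda I$ commutes with $N$ and the nilpotent series terminates after at most $n$ terms; the helper expm_nilpotent is our own name.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def expm_nilpotent(N, t=1.0):
    """Finite series e^{tN} = sum_{j=0}^{n-1} (tN)^j / j! for nilpotent N."""
    n = N.shape[0]
    return sum(np.linalg.matrix_power(t * N, j) / factorial(j)
               for j in range(n))

# A 3x3 Jordan block S = lam*I + N; D = lam*I commutes with N, so
# e^{tS} = e^{tD} e^{tN} holds exactly.
lam, n = 2.0, 3
D = lam * np.eye(n)
N = np.diag([1.0, 1.0], k=1)              # ones on the first superdiagonal; N^3 = 0
S = D + N
e_S = np.diag(np.exp(np.diag(D))) @ expm_nilpotent(N)   # e^{D} e^{N} at t = 1
print(np.allclose(e_S, expm(S)))          # True
```

For a general triangular matrix the diagonal and nilpotent parts need not commute, which is why the sketch restricts itself to the scalar-diagonal case.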