Jordan Vectors and the Jordan Form

A useful basis for defective matrices: Jordan vectors and the Jordan form

S. G. Johnson, MIT 18.06
Created Spring 2009; updated May 8, 2017

Abstract

Many textbooks and lecture notes can be found online for the existence of something called a "Jordan form" of a matrix based on "generalized eigenvectors" (or "Jordan vectors" and "Jordan chains"). In these notes, instead, I omit most of the formal derivations and instead focus on the consequences of the Jordan vectors for how we understand matrices. What happens to our traditional eigenvector-based pictures of things like A^n or e^{At} when diagonalization of A fails? The answer, for any matrix function f(A), turns out to involve the derivative of f!

1 Introduction

So far in the eigenproblem portion of 18.06, our strategy has been simple: find the eigenvalues λ_i and the corresponding eigenvectors x_i of a square matrix A, expand any vector of interest u in the basis of these eigenvectors (u = c_1 x_1 + ··· + c_n x_n), and then any operation with A can be turned into the corresponding operation with λ_i acting on each eigenvector. So, A^k becomes λ_i^k, e^{At} becomes e^{λ_i t}, and so on. But this relied on one key assumption: we require the n × n matrix A to have a basis of n independent eigenvectors. We call such a matrix A diagonalizable.

Many important cases are always diagonalizable: matrices with n distinct eigenvalues λ_i, real symmetric or orthogonal matrices, and complex Hermitian or unitary matrices. But there are rare cases where A does not have a complete basis of n eigenvectors: such matrices are called defective. For example, consider the matrix

    A = [ 1  1
          0  1 ].

This matrix has a characteristic polynomial λ^2 − 2λ + 1, with a repeated root (a single eigenvalue) λ_1 = 1. (Equivalently, since A is upper triangular, we can read the determinant of A − λI, and hence the eigenvalues, off the diagonal.) However, it only has a single independent eigenvector, because

    A − I = [ 0  1
              0  0 ]

is obviously rank 1, and has a one-dimensional nullspace spanned by x_1 = (1, 0).

Defective matrices arise rarely in practice, and usually only when one arranges for them intentionally, so we have not worried about them up to now. But it is important to have some idea of what happens when you have a defective matrix, partially for computational purposes, but also to understand conceptually what is possible. For example, what will be the result of

    A^k (1, 2)   or   e^{At} (1, 2)

for the defective matrix A above, since (1, 2) is not in the span of the (single) eigenvector of A? For diagonalizable matrices, this would grow as λ^k or e^{λt}, respectively, but what about defective matrices? Although matrices in real applications are rarely exactly defective, it sometimes happens (often by design!) that they are nearly defective, and we can think of exactly defective matrices as a limiting case. (The book Spectra and Pseudospectra by Trefethen & Embree is a much more detailed dive into the fascinating world of nearly defective matrices.)

The textbook (Intro. to Linear Algebra, 5th ed. by Strang) covers the defective case only briefly, in section 8.3, with something called the Jordan form of the matrix, a generalization of diagonalization, but in this section we will focus more on the "Jordan vectors" than on the Jordan factorization. For a diagonalizable matrix, the fundamental vectors are the eigenvectors, which are useful in their own right and give the diagonalization of the matrix as a side-effect. For a defective matrix, to get a complete basis we need to supplement the eigenvectors with something called Jordan vectors or generalized eigenvectors. Jordan vectors are useful in their own right, just like eigenvectors, and also give the Jordan form. Here, we'll focus mainly on the consequences of the Jordan vectors for how matrix problems behave.

2 Defining Jordan vectors

In the example above, we had a 2 × 2 matrix A but only a single eigenvector x_1 = (1, 0). We need another vector to get a basis for R^2. Of course, we could pick another vector at random, as long as it is independent of x_1, but we'd like it to have something to do with A, in order to help us with computations just like eigenvectors. The key thing is to look at A − I above, and to notice that (A − I)^2 = 0. (A matrix is called nilpotent if some power is the zero matrix.) So, the nullspace of (A − I)^2 must give us an "extra" basis vector beyond the eigenvector. But this extra vector must still be related to the eigenvector! If y ∈ N[(A − I)^2], then (A − I)y must be in N(A − I), which means that (A − I)y is a multiple of x_1! We just need to find a new "Jordan vector" or "generalized eigenvector" j_1 satisfying

    (A − I) j_1 = x_1,   j_1 ⊥ x_1.

Notice that, since x_1 ∈ N(A − I), we can add any multiple of x_1 to j_1 and still have a solution, so we can use Gram-Schmidt to get a unique solution j_1 perpendicular to x_1. This particular 2 × 2 equation is easy enough for us to solve by inspection, obtaining j_1 = (0, 1). Now we have a nice orthonormal basis for R^2, and our basis has some simple relationship to A!
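As a minimal numerical sketch of this 2 × 2 example (assuming Python with NumPy, which is not part of the original notes), we can check that A − I has rank 1, that (A − I)^2 = 0, and that solving (A − I) j_1 = x_1 for the solution orthogonal to x_1 recovers j_1 = (0, 1):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - np.eye(2)

print(np.linalg.matrix_rank(N))   # 1: only one independent eigenvector
print(N @ N)                      # zero matrix: (A - I)^2 = 0, so A - I is nilpotent

x1 = np.array([1.0, 0.0])         # eigenvector spanning N(A - I)
# Solve (A - I) j1 = x1 by least squares; for this consistent, rank-deficient
# system lstsq returns the minimum-norm solution, which is exactly the
# solution orthogonal to the nullspace vector x1.
j1, *_ = np.linalg.lstsq(N, x1, rcond=None)
print(j1)                         # [0. 1.], the Jordan vector found by inspection
```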
Before we talk about how to use these Jordan vectors, let's give a more general definition. Suppose that λ_i is an eigenvalue of A corresponding to a repeated root of det(A − λI), but with only a single (ordinary) eigenvector x_i, satisfying, as usual:

    (A − λ_i I) x_i = 0.

If λ_i is a double root, we will need a second vector to complete our basis. Remarkably,[1] it turns out to always be the case for a double root λ_i that N([A − λ_i I]^2) is two-dimensional, just as for the 2 × 2 example above. Hence, we can always find a unique second solution j_i satisfying:

    (A − λ_i I) j_i = x_i,   j_i ⊥ x_i.

Again, we can choose j_i to be perpendicular to x_i via Gram-Schmidt; this is not strictly necessary, but gives a convenient orthogonal basis. (That is, the complete solution is always of the form x_p + c x_i, a particular solution x_p plus any multiple of the nullspace basis x_i. If we choose c = −x_i^T x_p / x_i^T x_i we get the unique orthogonal solution j_i.) We call j_i a Jordan vector or generalized eigenvector of A. The relationship between x_i and j_i is also called a Jordan chain.

[1] This fact is proved in any number of advanced textbooks on linear algebra; I like Linear Algebra by P. D. Lax.

2.1 More than double roots

A more general notation is to use x_i^(1) instead of x_i and x_i^(2) instead of j_i. If λ_i is a triple root, we would find a third vector x_i^(3) perpendicular to x_i^(1,2) by requiring (A − λ_i I) x_i^(3) = x_i^(2), and so on. In general, if λ_i is an m-times repeated root, then N([A − λ_i I]^m) is m-dimensional and we will always be able to find an orthogonal sequence (a Jordan chain) of Jordan vectors x_i^(j) for j = 2, ..., m satisfying (A − λ_i I) x_i^(j) = x_i^(j−1) and (A − λ_i I) x_i^(1) = 0. Even more generally, you might have cases with e.g. a triple root and two ordinary eigenvectors, where you need only one generalized eigenvector, or an m-times repeated root with ℓ > 1 eigenvectors and m − ℓ Jordan vectors. However, cases with more than a double root are extremely rare in practice. Defective matrices are rare enough to begin with, so here we'll stick with the most common defective matrix, one with a double root λ_i: hence, one ordinary eigenvector x_i and one Jordan vector j_i.
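The recipe above (a particular solution x_p plus the correction c = −x_i^T x_p / x_i^T x_i) can also be sketched numerically. The following is an illustrative sketch, again assuming NumPy; the 3 × 3 matrix and the helper name jordan_vector are my own choices, not from the notes:

```python
import numpy as np

def jordan_vector(A, lam, x):
    """Return j solving (A - lam*I) j = x with j orthogonal to the eigenvector x."""
    N = A - lam * np.eye(A.shape[0])
    xp, *_ = np.linalg.lstsq(N, x, rcond=None)  # a particular solution x_p
    c = -(x @ xp) / (x @ x)                     # Gram-Schmidt coefficient from the notes
    return xp + c * x                           # unique solution orthogonal to x

# Illustrative defective matrix: lambda = 2 is a double root with only one eigenvector.
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam = 2.0
x = np.array([1.0, 0.0, 0.0])                   # ordinary eigenvector for lam

j = jordan_vector(A, lam, x)
print(j)                                        # [0. 1. 0.]
print((A - lam * np.eye(3)) @ j - x)            # ~0: (A - lam*I) j = x holds
print(x @ j)                                    # ~0: j is orthogonal to x
```

Here lstsq happens to return a solution already orthogonal to the one-dimensional nullspace, so c comes out to zero; the projection step simply mirrors the formula in the notes.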
3 Using Jordan vectors

Using an eigenvector x_i is easy: multiplying by A is just like multiplying by λ_i. But how do we use a Jordan vector j_i? The key is in the definition above. It immediately tells us that

    A j_i = λ_i j_i + x_i.

It will turn out that this has a simple consequence for more complicated expressions like A^k or e^{At}, but that's probably not obvious yet. Let's try multiplying by A^2:

    A^2 j_i = A(A j_i) = A(λ_i j_i + x_i) = λ_i (λ_i j_i + x_i) + λ_i x_i = λ_i^2 j_i + 2 λ_i x_i

and then try A^3:

    A^3 j_i = A(A^2 j_i) = A(λ_i^2 j_i + 2 λ_i x_i) = λ_i^2 (λ_i j_i + x_i) + 2 λ_i^2 x_i = λ_i^3 j_i + 3 λ_i^2 x_i.

From this, it's not hard to see the general pattern (which can be formally proved by induction):

    A^k j_i = λ_i^k j_i + k λ_i^{k−1} x_i.

3.1 More than double roots

In the rare case of two Jordan vectors from a triple root, you will have a Jordan vector x_i^(3) and get f(A) x_i^(3) = f(λ) x_i^(3) + f'(λ) j_i + f''(λ) x_i, where the f'' term will give you k(k − 1) λ_i^{k−2} and t^2 e^{λ_i t} for A^k and e^{At} respectively. A quadruple root with one eigenvector and three Jordan vectors will give you f''' terms (that is, k^3 and t^3 terms), and so on. The theory is quite pretty, but doesn't arise often in practice so I will skip it; it is straightforward to work it out on your own if you are interested.

3.2 Example

Let's try this for our example 2 × 2 matrix A = [ 1 1; 0 1 ] from above, which has an eigenvector x_1 = (1, 0) and a Jordan vector j_1 = (0, 1) for an eigenvalue λ_1 = 1.
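A quick numerical check of the pattern A^k j_i = λ_i^k j_i + k λ_i^{k−1} x_i on this example, assuming NumPy; the exponent k = 10 is an arbitrary choice:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x1 = np.array([1.0, 0.0])     # eigenvector
j1 = np.array([0.0, 1.0])     # Jordan vector
lam = 1.0                     # the repeated eigenvalue

k = 10
lhs = np.linalg.matrix_power(A, k) @ j1
rhs = lam**k * j1 + k * lam**(k - 1) * x1
print(lhs, rhs)               # both [10.  1.]
print(np.allclose(lhs, rhs))  # True
```

Because λ_1 = 1 here, the eigenvector component of A^k j_1 grows linearly in k rather than staying fixed, which is exactly the new kind of behavior that defectiveness introduces.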