Down with Determinants!


Sheldon Axler
21 December 1994

1. Introduction

Ask anyone why a square matrix of complex numbers has an eigenvalue, and you'll probably get the wrong answer, which goes something like this: the characteristic polynomial of the matrix (which is defined via determinants) has a root (by the fundamental theorem of algebra); this root is an eigenvalue of the matrix. What's wrong with that answer? It depends upon determinants, that's what. Determinants are difficult, non-intuitive, and often defined without motivation. As we'll see, there is a better proof, one that is simpler, clearer, provides more insight, and avoids determinants.

This paper will show how linear algebra can be done better without determinants. Without using determinants, we will define the multiplicity of an eigenvalue and prove that the number of eigenvalues, counting multiplicities, equals the dimension of the underlying space. Without determinants, we'll define the characteristic and minimal polynomials and then prove that they behave as expected. Next, we will easily prove that every matrix is similar to a nice upper-triangular one. Turning to inner product spaces, and still without mentioning determinants, we'll have a simple proof of the finite-dimensional Spectral Theorem.

Determinants are needed in one place in the undergraduate mathematics curriculum: the change of variables formula for multi-variable integrals. Thus at the end of this paper we'll revive determinants, but not with any of the usual abstruse definitions. We'll define the determinant of a matrix to be the product of its eigenvalues (counting multiplicities). This easy-to-remember definition leads to the usual formulas for computing determinants. We'll derive the change of variables formula for multi-variable integrals in a fashion that makes the appearance of the determinant there seem natural.

This work was partially supported by the National Science Foundation. Many people made comments that helped improve this paper. I especially thank Marilyn Brouwer, William Brown, Jonathan Hall, Paul Halmos, Richard Hill, Ben Lotto, and Wade Ramey.

A few friends who use determinants in their research have expressed unease at the title of this paper. I know that determinants play an honorable role in some areas of research, and I do not mean to belittle their importance when they are indispensable. But most mathematicians and most students of mathematics will have a clearer understanding of linear algebra if they use the determinant-free approach to the basic structure theorems.

The theorems in this paper are not new; they will already be familiar to most readers. Some of the proofs and definitions are new, although many parts of this approach have been around in bits and pieces, but without the attention they deserved. For example, at a recent annual meeting of the AMS and MAA, I looked through every linear algebra text on display. Out of over fifty linear algebra texts offered for sale, only one obscure book gave a determinant-free proof that eigenvalues exist, and that book did not manage to develop other key parts of linear algebra without determinants. The anti-determinant philosophy advocated in this paper is an attempt to counter the undeserved dominance of determinant-dependent methods.

This paper focuses on showing that determinants should be banished from much of the theoretical part of linear algebra.
Determinants are also useless in the computational part of linear algebra. For example, Cramer's rule for solving systems of linear equations is already worthless for $10 \times 10$ systems, not to mention the much larger systems often encountered in the real world (a sketch quantifying this cost appears at the end of this section). Many computer programs efficiently calculate eigenvalues numerically; none of them uses determinants. To emphasize the point, let me quote a numerical analyst. Henry Thacher, in a review (SIAM News, September 1988) of the Turbo Pascal Numerical Methods Toolbox, writes,

I find it hard to conceive of a situation in which the numerical value of a determinant is needed: Cramer's rule, because of its inefficiency, is completely impractical, while the magnitude of the determinant is an indication of neither the condition of the matrix nor the accuracy of the solution.

2. Eigenvalues and Eigenvectors

The basic objects of study in linear algebra can be thought of as either linear transformations or matrices. Because a basis-free approach seems more natural, this paper will mostly use the language of linear transformations; readers who prefer the language of matrices should have no trouble making the appropriate translation. The term linear operator will mean a linear transformation from a vector space to itself; thus a linear operator corresponds to a square matrix (assuming some choice of basis).

Notation used throughout the paper: $n$ denotes a positive integer, $V$ denotes an $n$-dimensional complex vector space, $T$ denotes a linear operator on $V$, and $I$ denotes the identity operator.

A complex number $\lambda$ is called an eigenvalue of $T$ if $T - \lambda I$ is not injective. Here is the central result about eigenvalues, with a simple proof that avoids determinants.

Theorem 2.1. Every linear operator on a finite-dimensional complex vector space has an eigenvalue.

Proof. To show that $T$ (our linear operator on $V$) has an eigenvalue, fix any non-zero vector $v \in V$. The vectors $v, Tv, T^2 v, \dots, T^n v$ cannot be linearly independent, because $V$ has dimension $n$ and we have $n + 1$ vectors. Thus there exist complex numbers $a_0, \dots, a_n$, not all 0, such that

$$a_0 v + a_1 Tv + \cdots + a_n T^n v = 0.$$

Make the $a$'s the coefficients of a polynomial, which can be written in factored form as

$$a_0 + a_1 z + \cdots + a_n z^n = c(z - r_1) \cdots (z - r_m),$$

where $c$ is a non-zero complex number, each $r_j$ is complex, and the equation holds for all complex $z$. We then have

$$0 = (a_0 I + a_1 T + \cdots + a_n T^n)v = c(T - r_1 I) \cdots (T - r_m I)v,$$

which means that $T - r_j I$ is not injective for at least one $j$. In other words, $T$ has an eigenvalue.

Recall that a vector $v \in V$ is called an eigenvector of $T$ if $Tv = \lambda v$ for some eigenvalue $\lambda$. The next proposition, which has a simple, determinant-free proof, obviously implies that the number of distinct eigenvalues of $T$ cannot exceed the dimension of $V$.

Proposition 2.2. Non-zero eigenvectors corresponding to distinct eigenvalues of $T$ are linearly independent.

Proof. Suppose that $v_1, \dots, v_m$ are non-zero eigenvectors of $T$ corresponding to distinct eigenvalues $\lambda_1, \dots, \lambda_m$. We need to prove that $v_1, \dots, v_m$ are linearly independent. To do this, suppose $a_1, \dots, a_m$ are complex numbers such that

$$a_1 v_1 + \cdots + a_m v_m = 0.$$

Apply the linear operator $(T - \lambda_2 I)(T - \lambda_3 I) \cdots (T - \lambda_m I)$ to both sides of the equation above, getting

$$a_1 (\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3) \cdots (\lambda_1 - \lambda_m) v_1 = 0.$$

Thus $a_1 = 0$. In a similar fashion, $a_j = 0$ for each $j$, as desired.
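The proof of Theorem 2.1 is constructive enough to run. The following is a minimal numerical sketch in Python with NumPy, not anything from the paper itself (the function name and the tolerance handling are my own, and floating-point linear algebra stands in for the exact arguments in the text): find a linear dependence among $v, Tv, \dots, T^n v$, factor the resulting polynomial, and return a root $r_j$ for which $T - r_j I$ fails to be injective.

```python
import numpy as np

def eigenvalue_without_determinants(T, tol=1e-9):
    """Locate an eigenvalue of T by following the proof of Theorem 2.1."""
    n = T.shape[0]
    v = np.random.default_rng(0).standard_normal(n)

    # The n + 1 vectors v, Tv, T^2 v, ..., T^n v live in an
    # n-dimensional space, so they must be linearly dependent.
    K = np.empty((n, n + 1), dtype=complex)
    K[:, 0] = v
    for j in range(1, n + 1):
        K[:, j] = T @ K[:, j - 1]

    # A null vector of K supplies coefficients a_0, ..., a_n with
    # a_0 v + a_1 T v + ... + a_n T^n v = 0.
    a = np.linalg.svd(K)[2][-1]

    # Factor a_0 + a_1 z + ... + a_n z^n = c (z - r_1) ... (z - r_m);
    # at least one factor T - r_j I must fail to be injective, which
    # we detect numerically through a near-zero smallest singular value.
    for r in np.roots(a[::-1]):  # np.roots wants highest degree first
        smallest_sv = np.linalg.svd(T - r * np.eye(n), compute_uv=False)[-1]
        if smallest_sv < tol * max(1.0, np.linalg.norm(T)):
            return r
    raise RuntimeError("no root annihilated v within tolerance")

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(eigenvalue_without_determinants(T))  # 2.0 or 3.0, up to rounding
```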
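The claim at the start of this section, that Cramer's rule is worthless for $10 \times 10$ systems, can also be made concrete. Here is a hedged sketch (Python with NumPy; the helper names are hypothetical, and the determinants are computed by cofactor expansion, the textbook method): solving an $n \times n$ system this way needs $n + 1$ determinants at roughly $n!$ operations each, while Gaussian elimination, which is what practical software actually uses, needs about $n^3$.

```python
import numpy as np

def det_cofactor(A):
    """Determinant via recursive cofactor expansion: about n! operations."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    return sum((-1) ** j * A[0, j]
               * det_cofactor(np.delete(A[1:], j, axis=1))
               for j in range(n))

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: n + 1 factorial-cost determinants."""
    d = det_cofactor(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i by the right-hand side
        x[i] = det_cofactor(Ai) / d
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((7, 7))
b = rng.standard_normal(7)

# Agreement with Gaussian elimination (np.linalg.solve, ~n^3 work):
print(np.allclose(cramer_solve(A, b), np.linalg.solve(A, b)))  # True

# Each extra dimension multiplies the cofactor cost by about n: the
# 7 x 7 run above already takes on the order of a second in pure
# Python, and a 10 x 10 system would take roughly 720 times longer.
```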
3. Generalized Eigenvectors

Unfortunately, the eigenvectors of $T$ need not span $V$. For example, the linear operator on $\mathbf{C}^2$ whose matrix is

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

has only one eigenvalue, namely 0, and its eigenvectors form a one-dimensional subspace of $\mathbf{C}^2$. We will see, however, that the generalized eigenvectors (defined below) of $T$ always span $V$.

A vector $v \in V$ is called a generalized eigenvector of $T$ if

$$(T - \lambda I)^k v = 0$$

for some eigenvalue $\lambda$ of $T$ and some positive integer $k$. Obviously, the set of generalized eigenvectors of $T$ corresponding to an eigenvalue $\lambda$ is a subspace of $V$. The following lemma shows that in the definition of generalized eigenvector, instead of allowing an arbitrary power of $T - \lambda I$ to annihilate $v$, we could have restricted attention to the $n$th power, where $n$ equals the dimension of $V$. As usual, $\ker$ is an abbreviation for kernel (the set of vectors that get mapped to 0).

Lemma 3.1. The set of generalized eigenvectors of $T$ corresponding to an eigenvalue $\lambda$ equals $\ker(T - \lambda I)^n$.

Proof. Obviously, every element of $\ker(T - \lambda I)^n$ is a generalized eigenvector of $T$ corresponding to $\lambda$. To prove the inclusion in the other direction, let $v$ be a generalized eigenvector of $T$ corresponding to $\lambda$. We need to prove that $(T - \lambda I)^n v = 0$. Clearly, we can assume that $v \neq 0$, so there is a smallest non-negative integer $k$ such that $(T - \lambda I)^k v = 0$. We will be done if we show that $k \le n$. This will be proved by showing that

$$v,\ (T - \lambda I)v,\ (T - \lambda I)^2 v,\ \dots,\ (T - \lambda I)^{k-1} v \tag{3.2}$$

are linearly independent vectors; we will then have $k$ linearly independent elements in an $n$-dimensional space, which implies that $k \le n$.

To prove the vectors in (3.2) are linearly independent, suppose $a_0, \dots, a_{k-1}$ are complex numbers such that

$$a_0 v + a_1 (T - \lambda I)v + \cdots + a_{k-1} (T - \lambda I)^{k-1} v = 0. \tag{3.3}$$

Apply $(T - \lambda I)^{k-1}$ to both sides of the equation above, getting $a_0 (T - \lambda I)^{k-1} v = 0$, which implies that $a_0 = 0$. Now apply $(T - \lambda I)^{k-2}$ to both sides of (3.3), getting $a_1 (T - \lambda I)^{k-1} v = 0$, which implies that $a_1 = 0$. Continuing in this fashion, we see that $a_j = 0$ for each $j$, as desired.
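Lemma 3.1 and the example at the start of this section are easy to check numerically. In this minimal sketch (Python with NumPy; the kernel_dim helper is mine, not the paper's), the eigenvectors of the $2 \times 2$ operator above form a one-dimensional kernel, while $\ker(T - \lambda I)^n$, which by Lemma 3.1 is exactly the space of generalized eigenvectors, is all of $\mathbf{C}^2$.

```python
import numpy as np

def kernel_dim(M, tol=1e-9):
    """dim ker(M), counted as the number of near-zero singular values."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) < tol))

# The operator from the example above, with its single eigenvalue 0.
T = np.array([[0.0, 1.0],
              [0.0, 0.0]])
n = T.shape[0]
lam = 0.0
M = T - lam * np.eye(n)

print(kernel_dim(M))  # 1: the ordinary eigenvectors span one dimension
print(kernel_dim(np.linalg.matrix_power(M, n)))  # 2: ker((T - lam I)^n) = C^2
```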