
Section 8.2: Homogeneous Linear Systems

Review: Eigenvalues and Eigenvectors

Let A be an n × n matrix with constant real components aij.

An eigenvector of A is a nonzero n × 1 column vector v such that Av = λv for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution v of Av = λv. Any such v ≠ 0 is called an eigenvector of A corresponding to λ.

Eigenvalue Problem: Find values of the scalar λ and nonzero n × 1 column vectors v such that the matrix equation

    Av = λv                                         (1)

is satisfied. Equation (1) can be written equivalently as the homogeneous matrix equation

    (A − λI) v = 0                                  (2)

where I is the n × n identity matrix.

Fact: The homogeneous matrix system (2) has nontrivial solutions if and only if the determinant of the coefficient matrix A − λI vanishes, that is, if and only if

    det(A − λI) = | a11−λ   a12     ...   a1n   |
                  | a21     a22−λ   ...   a2n   |  =  0.      (3)
                  |  ...     ...    ...   ...   |
                  | an1     an2     ...   ann−λ |

det(A − λI) is a polynomial of degree n in λ, called the characteristic polynomial of A. The equation det(A − λI) = 0 is called the characteristic equation of A. The eigenvalues of A are the roots λ of the characteristic equation. Given the eigenvalues of A, the eigenvectors can be determined by finding all nontrivial solutions to Eq. (2).

Homogeneous Linear System of ODEs with Constant Coefficients

We are interested in solving

    X′ = AX                                         (4)

where A is an n × n matrix with real constant entries, and X = X(t) is an n × 1 column vector.

For n = 1: Equation (4) becomes the scalar equation x1′(t) = a11 x1(t), and the general solution is

    x1(t) = c1 e^{a11 t}.

For n ≥ 2: Look for a solution vector X = X(t) of the form

    X = V e^{λt}                                    (5)

where V ≠ 0 is a constant n × 1 column vector and λ is a constant. Substituting Eq. (5) into system (4) yields

    V λ e^{λt} = A V e^{λt}
    A V e^{λt} − V λ e^{λt} = 0
    (A V − λ V) e^{λt} = 0,

so that AV = λV, since e^{λt} > 0. Therefore, in order to find solutions of system (4), we need to find eigenvalues and eigenvectors of the matrix A.

8.2.1: Distinct Real Eigenvalues

Theorem: Let A be a real, constant, n × n matrix. If A has k ≤ n distinct real eigenvalues λ1, λ2, ..., λk with corresponding eigenvectors V1, V2, ..., Vk, then the functions

    X1(t) = V1 e^{λ1 t},  X2(t) = V2 e^{λ2 t},  ...,  Xk(t) = Vk e^{λk t}

are linearly independent on (−∞, ∞). Here we consider the case k = n.

Theorem 8.2.1: General Solution of Homogeneous Systems

Consider the homogeneous system of differential equations

    X′ = AX                                         (4)

where the coefficient matrix A is a real, constant, n × n matrix. Let λ1, λ2, ..., λn be n distinct real eigenvalues of A with corresponding eigenvectors V1, V2, ..., Vn. Then

    Xi(t) = Vi e^{λi t},   i = 1, ..., n,

form a fundamental set of solutions of system (4) on (−∞, ∞), and the general solution on (−∞, ∞) is

    X = c1 V1 e^{λ1 t} + c2 V2 e^{λ2 t} + ... + cn Vn e^{λn t}      (6)

where ci, i = 1, ..., n, are arbitrary constants.

Example: Find the general solution of

    dx/dt = 9x − 3y
    dy/dt = 16x − 7y
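As a numerical cross-check of this example, here is a short NumPy sketch, assuming the system above is dx/dt = 9x − 3y, dy/dt = 16x − 7y, so that A = [[9, −3], [16, −7]]. Note that np.linalg.eig returns eigenvectors scaled to unit length, so they are scalar multiples of any hand-computed ones.

```python
import numpy as np

# Coefficient matrix of the example system (as written above)
A = np.array([[9.0, -3.0],
              [16.0, -7.0]])

# Eigenvalues and eigenvectors: A @ V[:, i] = lam[i] * V[:, i]
lam, V = np.linalg.eig(A)
print("eigenvalues:", lam)          # expect 5 and -3 (distinct, real)

for i in range(len(lam)):
    # Verify each eigenpair numerically
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])
    print("lambda =", lam[i], " eigenvector:", V[:, i])

# By Theorem 8.2.1 the general solution is
#   X(t) = c1 * V1 * e^{lam1 t} + c2 * V2 * e^{lam2 t}
def X(t, c1, c2):
    return c1 * V[:, 0] * np.exp(lam[0] * t) + c2 * V[:, 1] * np.exp(lam[1] * t)

# Finite-difference check that this X(t) satisfies X' = AX at a sample point
t0, h, c = 0.3, 1e-6, (1.0, 2.0)
dX = (X(t0 + h, *c) - X(t0 - h, *c)) / (2 * h)
assert np.allclose(dX, A @ X(t0, *c), atol=1e-4)
```

The check at the end confirms numerically that the linear combination (6) built from these eigenpairs satisfies X′ = AX.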
8.2.2: Repeated Eigenvalues

Definition: We say that λ1 is an eigenvalue of (algebraic) multiplicity k, where k is a positive integer, if (λ − λ1)^k is a factor of the characteristic polynomial det(A − λI), but (λ − λ1)^{k+1} is not.

Fact: An eigenvalue of multiplicity k can have anywhere between 1 and k (inclusive) linearly independent eigenvectors corresponding to it.

Let λ1 be an eigenvalue of A of multiplicity k ≤ n. Different cases can occur for the eigenvectors corresponding to λ1:

1. There are k linearly independent eigenvectors V1, V2, ..., Vk corresponding to λ1.
2. There is only one linearly independent eigenvector corresponding to λ1.
3. There are m linearly independent eigenvectors corresponding to λ1, where 1 < m < k.

Note: We only consider cases 1 and 2.

Case 1: Eigenvalue of multiplicity k with k linearly independent eigenvectors

Theorem: Consider the homogeneous system of differential equations

    X′ = AX                                         (4)

where the coefficient matrix A is a real, constant, n × n matrix. If V1, V2, ..., Vk are k linearly independent eigenvectors corresponding to an eigenvalue λ of multiplicity k ≤ n, then

    X1(t) = V1 e^{λt},  X2(t) = V2 e^{λt},  ...,  Xk(t) = Vk e^{λt}

are linearly independent solutions of system (4) on (−∞, ∞). In this case, the general solution to system (4) contains the linear combination

    c1 V1 e^{λt} + c2 V2 e^{λt} + ... + ck Vk e^{λt}

where ci, i = 1, ..., k, are arbitrary constants.

Note: If k = n, that is, if λ is the only eigenvalue of A, then the functions Xi(t), i = 1, ..., n, given above form a fundamental set of solutions to system (4) on (−∞, ∞), and the linear combination above represents the entire general solution on (−∞, ∞).

Example: Find the general solution of X′ = AX, where

    A = [ −13   12   9 ]
        [ −15   14   9 ]
        [   0    0   2 ]

Case 2: Eigenvalue of multiplicity k with one linearly independent eigenvector

Let λ be an eigenvalue of multiplicity k with one linearly independent eigenvector.

(A) Suppose k = 2: We are looking for 2 linearly independent solutions to system (4) from this eigenvalue λ.

One solution: X1 = V1 e^{λt}, where V1 is an eigenvector corresponding to λ.

To find a second solution, try a solution of the form

    X2 = W1 t e^{λt} + W2 e^{λt}                    (7)

Substituting this X2 into system (4), X′ = AX, simplifying, and rearranging terms yields

    (A W1 − λ W1) t e^{λt} + (A W2 − λ W2 − W1) e^{λt} = 0.

This equation holds for all t if and only if each term in parentheses is 0. This gives

    (A − λI) W1 = 0                                 (8)
    (A − λI) W2 = W1                                (9)

Therefore, solving Eq. (8) for W1 and then solving Eq. (9) for W2 yields the second solution X2 in Eq. (7).

Note: From Eq. (8), W1 is an eigenvector of A corresponding to the eigenvalue λ. Therefore, set W1 = V1. For consistency, also set W2 = V2. We can now write the second solution X2 as

    X2 = V1 t e^{λt} + V2 e^{λt}                    (10)

where

    (A − λI) V1 = 0                                 (11)
    (A − λI) V2 = V1                                (12)

That is, V1 is an eigenvector of A corresponding to the eigenvalue λ, and V2, which solves Eq. (12), is called a generalized eigenvector.

(B) Suppose k = 3: We are looking for 3 linearly independent solutions to system (4) from this eigenvalue λ. We know two solutions:

    X1 = V1 e^{λt}
    X2 = V1 t e^{λt} + V2 e^{λt}

where V1 and V2 solve Eqs. (11) and (12).

To find a third solution, try a solution of the form

    X3 = W1 (t²/2) e^{λt} + W2 t e^{λt} + W3 e^{λt}          (13)

Substituting this X3 into system (4), X′ = AX, simplifying, and rearranging terms, we find that the vectors W1, W2 and W3 must satisfy

    (A − λI) W1 = 0                                 (14)
    (A − λI) W2 = W1                                (15)
    (A − λI) W3 = W2                                (16)

Note: Equations (14) and (15) are the same as Eqs. (8) and (9), respectively. Therefore, set W1 = V1 and W2 = V2. For consistency, also set W3 = V3. We can now write the third solution X3 as

    X3 = V1 (t²/2) e^{λt} + V2 t e^{λt} + V3 e^{λt}          (17)

where

    (A − λI) V1 = 0                                 (18)
    (A − λI) V2 = V1                                (19)
    (A − λI) V3 = V2                                (20)

That is, V1 is an eigenvector of A corresponding to the eigenvalue λ, and V2 and V3, which solve Eqs. (19) and (20), are generalized eigenvectors.

(C) In general, for multiplicity k: We are looking for k linearly independent solutions to system (4) from this eigenvalue λ. They are:

    X1 = V1 e^{λt}
    X2 = V1 t e^{λt} + V2 e^{λt}
    X3 = V1 (t²/2) e^{λt} + V2 t e^{λt} + V3 e^{λt}
    ...
    Xk = V1 (t^{k−1}/(k−1)!) e^{λt} + V2 (t^{k−2}/(k−2)!) e^{λt} + ... + Vk e^{λt}

where V1, V2, ..., Vk is a chain of generalized eigenvectors satisfying

    (A − λI) V1 = 0
    (A − λI) V2 = V1
    (A − λI) V3 = V2
    ...
    (A − λI) Vk = Vk−1
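The chain equations above translate directly into linear solves. The following sketch uses an assumed 2 × 2 matrix A = [[3, 1], [0, 3]] (not one of the examples in these notes), whose only eigenvalue λ = 3 has multiplicity 2 but a single independent eigenvector; the generalized eigenvector V2 is obtained from Eq. (12) with a least-squares solve, since A − λI is singular.

```python
import numpy as np

# Assumed illustrative matrix (not from the notes): single eigenvalue
# lam = 3 of multiplicity 2 with only one independent eigenvector.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
lam = 3.0
M = A - lam * np.eye(2)              # A - lam*I, a singular matrix

# Eigenvector V1: any nonzero solution of (A - lam*I) V1 = 0
V1 = np.array([1.0, 0.0])
assert np.allclose(M @ V1, 0)

# Generalized eigenvector V2: solve (A - lam*I) V2 = V1, i.e. Eq. (12).
# The system is singular, so use a least-squares solve; any particular
# solution works (V2 is only determined up to adding multiples of V1).
V2, *_ = np.linalg.lstsq(M, V1, rcond=None)
assert np.allclose(M @ V2, V1)

# The two independent solutions of X' = AX coming from this eigenvalue:
def X1(t):
    return V1 * np.exp(lam * t)

def X2(t):
    return (V1 * t + V2) * np.exp(lam * t)   # Eq. (10)

# Finite-difference check that X2 really satisfies X' = AX
t0, h = 0.7, 1e-6
dX2 = (X2(t0 + h) - X2(t0 - h)) / (2 * h)
assert np.allclose(dX2, A @ X2(t0), atol=1e-4)
print("V1 =", V1, " V2 =", V2)
```

For a longer chain (multiplicity k > 2) the same pattern repeats: each Vi+1 comes from one more solve of (A − λI) Vi+1 = Vi.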
Example: Find the general solution of X′ = AX, where

    A = [  4   1 ]
        [ −1   2 ]

Example: Suppose a 3 × 3 matrix A has the eigenvalue λ = −2 of multiplicity 3 with only one linearly independent eigenvector V1. Let

    V1 = [ 1 ]      V2 = [  2 ]      V3 = [ 15 ]
         [ 2 ]           [ −1 ]           [  2 ]
         [ 7 ]           [  9 ]           [  7 ]

be a chain of generalized eigenvectors. Find the general solution to X′ = AX.

8.2.3: Complex Eigenvalues

Notes:

1. Given a complex number z = α + iβ:
   • The conjugate of z is z̄ = α − iβ.
   • The real part of z is Re(z) = α.
   • The imaginary part of z is Im(z) = β.
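As a small illustration of these definitions, here is a NumPy sketch using an assumed matrix A = [[0, −1], [1, 0]] (not an example from the notes): for a real matrix, complex eigenvalues and their eigenvectors occur in conjugate pairs, and their real and imaginary parts are the quantities typically used to build real-valued solutions of X′ = AX.

```python
import numpy as np

# Assumed illustrative matrix (not from the notes): a real 2x2 matrix
# whose characteristic equation lam^2 + 1 = 0 has the complex roots +/- i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

lam, V = np.linalg.eig(A)
print("eigenvalues:", lam)              # 1j and -1j, a conjugate pair

z = lam[0]
v = V[:, 0]
# The definitions above, applied with NumPy:
print("conjugate  :", np.conj(z))       # alpha - i*beta
print("real part  :", z.real)           # alpha
print("imag part  :", z.imag)           # beta

# For a real matrix, conjugating an eigenpair gives another eigenpair:
assert np.isclose(np.conj(z), lam[1])
assert np.allclose(A @ np.conj(v), np.conj(z) * np.conj(v))
```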