Section 7.3. Systems of Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors
Note. In this section, we review material from sophomore Linear Algebra (MATH 2010). In addition to the topics mentioned in the title of this section, we cover adjoints, similar matrices, and diagonal matrices.

Note. Notice that the system of linear equations
$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n$$
can be written as a matrix equation $A\vec{x} = \vec{b}$. If $A$ is nonsingular, then the system has a unique solution given by $\vec{x} = A^{-1}\vec{b}$. If $A$ is singular, then the system either has no solution (and is "inconsistent") or has an infinite number of solutions.

Note. As opposed to finding $A^{-1}$, there is a computational shortcut: we can create the augmented matrix $[A \mid \vec{b}]$ and row reduce it.

Example. If
$$x_1 - x_2 + x_3 = 2$$
$$2x_2 - x_3 = 1$$
$$2x_1 + 3x_2 = 8,$$
then find $x_1, x_2, x_3$. Use $A^{-1}$ from the example of the previous section. (This system has a unique solution.)

Example. If
$$x_1 + x_2 + x_3 = 3$$
$$2x_1 - x_2 - 3x_3 = -2$$
$$3x_1 - 2x_3 = 0,$$
then find $x_1, x_2, x_3$. (This system has no solution.)

Example. If
$$x_1 + x_2 + x_3 = 3$$
$$2x_1 - x_2 - 3x_3 = -2$$
$$3x_1 - 2x_3 = 1,$$
then find $x_1, x_2, x_3$. (This system has multiple solutions.)

Note. Recall the computation of the determinants for $2\times 2$ and $3\times 3$ matrices:
$$\det\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21},$$
$$\det\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = a_{11}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}.$$

Definition. As with a set of functions, a set of vectors $\{\vec{x}^{(1)}, \vec{x}^{(2)}, \ldots, \vec{x}^{(k)}\}$ is linearly dependent if
$$c_1\vec{x}^{(1)} + c_2\vec{x}^{(2)} + \cdots + c_k\vec{x}^{(k)} = \vec{0}$$
for some collection of scalars, at least one of which is nonzero. Otherwise, the vectors are linearly independent.
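The three example systems above can be checked numerically. As a sketch (not part of the original notes, assuming NumPy is available), the classification "unique / no / infinitely many solutions" follows from comparing the rank of $A$ with the rank of the augmented matrix $[A \mid \vec{b}]$:

```python
# Hypothetical numerical check of the three example systems above.
# A x = b has a unique solution exactly when rank(A) = rank([A|b]) = n;
# rank(A) < rank([A|b]) means "no solution"; otherwise infinitely many.
import numpy as np

def classify(A, b):
    """Return 'unique', 'none', or 'infinite' for the system A x = b."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA == rAb == A.shape[1]:
        return "unique"
    return "none" if rA < rAb else "infinite"

A1 = np.array([[1.0, -1, 1], [0, 2, -1], [2, 3, 0]])   # first example (nonsingular)
A2 = np.array([[1.0, 1, 1], [2, -1, -3], [3, 0, -2]])  # second/third examples (singular)
b1 = np.array([2.0, 1, 8])
b2 = np.array([3.0, -2, 0])   # inconsistent right-hand side
b3 = np.array([3.0, -2, 1])   # consistent right-hand side

print(classify(A1, b1))                 # unique
print(np.linalg.solve(A1, b1))          # the solution x1 = 1, x2 = 2, x3 = 3
print(classify(A2, b2))                 # none
print(classify(A2, b3))                 # infinite
```

Note that in the singular matrix $A_2$, the third row is the sum of the first two, so the system is consistent only when the right-hand side satisfies the same relation.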
Note. If we use the above vectors $\vec{x}^{(i)}$ to build a matrix $X$ whose $i$th column is $\vec{x}^{(i)}$, we can write the vector equation as the matrix equation $X\vec{c} = \vec{0}$. Certainly $\vec{c} = \vec{0}$ is a solution of this matrix equation, and if $\det(X) \neq 0$ then this solution is unique. Therefore, the $\vec{x}^{(i)}$ are linearly independent if and only if $\det(X) \neq 0$. We can also potentially use this method to check for the linear independence of functions.

Example. Page 343 Number 7. Determine whether the vectors $\vec{x}^{(1)} = [2, 1, 0]$, $\vec{x}^{(2)} = [0, 1, 0]$, $\vec{x}^{(3)} = [-1, 2, 0]$ are linearly independent. If they are linearly dependent, find the linear relation among them.

Solution. We consider the equation $c_1\vec{x}^{(1)} + c_2\vec{x}^{(2)} + c_3\vec{x}^{(3)} = \vec{0}$. This gives
$$c_1[2,1,0] + c_2[0,1,0] + c_3[-1,2,0] = [2c_1 - c_3,\ c_1 + c_2 + 2c_3,\ 0] = [0,0,0],$$
so we consider the system of equations
$$2c_1 - c_3 = 0$$
$$c_1 + c_2 + 2c_3 = 0$$
$$0 = 0.$$
This leads to the augmented matrix which we row reduce:
$$\begin{bmatrix} 2 & 0 & -1 & 0 \\ 1 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\overset{R_1 \leftrightarrow R_2}{\longrightarrow}
\begin{bmatrix} 1 & 1 & 2 & 0 \\ 2 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\overset{R_2 \to R_2 - 2R_1}{\longrightarrow}
\begin{bmatrix} 1 & 1 & 2 & 0 \\ 0 & -2 & -5 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\overset{R_1 \to R_1 + (1/2)R_2}{\longrightarrow}
\begin{bmatrix} 1 & 0 & -1/2 & 0 \\ 0 & -2 & -5 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\overset{R_2 \to -(1/2)R_2}{\longrightarrow}
\begin{bmatrix} 1 & 0 & -1/2 & 0 \\ 0 & 1 & 5/2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
so we need
$$c_1 - (1/2)c_3 = 0, \quad c_2 + (5/2)c_3 = 0, \quad 0 = 0,$$
or $c_1 = (1/2)c_3$ and $c_2 = -(5/2)c_3$; or, with $t = c_3$ as a free variable, $c_1 = (1/2)t$, $c_2 = -(5/2)t$, $c_3 = t$. So the vectors are linearly dependent. We can take (with $t = 2$) $c_1 = 1$, $c_2 = -5$, and $c_3 = 2$ to get
$$\vec{x}^{(1)} - 5\vec{x}^{(2)} + 2\vec{x}^{(3)} = [2,1,0] - 5[0,1,0] + 2[-1,2,0] = [0,0,0].$$

Note. An $n\times n$ matrix $A$ can be viewed as a linear transformation (rotation, reflection, and "stretching") of an $n$-dimensional vector: in $A\vec{x} = \vec{y}$, the vector $\vec{x}$ is transformed to $\vec{y}$ by $A$. Vectors that are transformed to multiples of themselves are of particular interest; that is, $A\vec{x} - \lambda\vec{x} = \vec{0}$, or $(A - \lambda\mathcal{I})\vec{x} = \vec{0}$. As above, this equation has nonzero solutions only if $\det(A - \lambda\mathcal{I}) = 0$.
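The determinant test and the linear relation from Page 343 Number 7 above can be verified numerically. A sketch (not part of the original notes, assuming NumPy): stack the vectors as the columns of $X$; they are linearly independent if and only if $\det(X) \neq 0$, and when $\det(X) = 0$ a null vector of $X$ gives the relation.

```python
# Hypothetical check of Page 343 Number 7 with NumPy.
import numpy as np

x1, x2, x3 = [2.0, 1, 0], [0.0, 1, 0], [-1.0, 2, 0]
X = np.column_stack([x1, x2, x3])   # vectors as columns of X

print(np.linalg.det(X))             # 0, so the vectors are linearly dependent

# A nonzero solution of X c = 0 is the right-singular vector belonging to
# the zero singular value (the last row of Vt in the SVD of X).
_, s, Vt = np.linalg.svd(X)
c = Vt[-1]
c = c / c[2] * 2                    # rescale so that c3 = 2, as in the notes
print(np.round(c))                  # [ 1. -5.  2.]
print(X @ c)                        # the zero vector, confirming the relation
```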
Definition. For an $n\times n$ matrix $A$ and the $n\times n$ identity matrix $\mathcal{I}$, values of $\lambda$ satisfying the equation $\det(A - \lambda\mathcal{I}) = 0$ are called eigenvalues of $A$. For an eigenvalue $\lambda$ of $A$, a nonzero vector $\vec{x}$ satisfying $A\vec{x} = \lambda\vec{x}$ is called an eigenvector of $\lambda$.

Definition/Note. Eigenvectors of an eigenvalue are only determined up to a constant multiple. If an eigenvector is chosen such that its length is one, it is said to be normalized.

Example. Page 344 Number 21. Find the eigenvalues and all eigenvectors of
$$A = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & -2 \\ 3 & 2 & 1 \end{bmatrix}.$$

Solution. We consider
$$\det(A - \lambda\mathcal{I}) = \begin{vmatrix} 1-\lambda & 0 & 0 \\ 2 & 1-\lambda & -2 \\ 3 & 2 & 1-\lambda \end{vmatrix} = (1-\lambda)\begin{vmatrix} 1-\lambda & -2 \\ 2 & 1-\lambda \end{vmatrix} = (1-\lambda)\big((1-\lambda)^2 + 4\big) = (1-\lambda)(\lambda^2 - 2\lambda + 5).$$
Setting $\det(A - \lambda\mathcal{I}) = 0$, we have the eigenvalues $\lambda_1 = 1$, $\lambda_2 = (2 + \sqrt{4-20})/2 = 1 + 2i$, and $\lambda_3 = (2 - \sqrt{4-20})/2 = 1 - 2i$. We now find the corresponding eigenvectors.

$\lambda_1 = 1$. We solve for the components of $\vec{x} = [x_1, x_2, x_3]$ such that $(A - 1\mathcal{I})\vec{x} = \vec{0}$, which gives the augmented matrix
$$\begin{bmatrix} 1-(1) & 0 & 0 & 0 \\ 2 & 1-(1) & -2 & 0 \\ 3 & 2 & 1-(1) & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 2 & 0 & -2 & 0 \\ 3 & 2 & 0 & 0 \end{bmatrix}
\overset{R_3 \to R_3 - R_2}{\longrightarrow}
\begin{bmatrix} 0 & 0 & 0 & 0 \\ 2 & 0 & -2 & 0 \\ 1 & 2 & 2 & 0 \end{bmatrix}$$
$$\overset{R_1 \leftrightarrow R_3}{\longrightarrow}
\begin{bmatrix} 1 & 2 & 2 & 0 \\ 2 & 0 & -2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\overset{R_2 \to R_2 - 2R_1}{\longrightarrow}
\begin{bmatrix} 1 & 2 & 2 & 0 \\ 0 & -4 & -6 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\overset{R_2 \to R_2/(-4)}{\longrightarrow}
\begin{bmatrix} 1 & 2 & 2 & 0 \\ 0 & 1 & 3/2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\overset{R_1 \to R_1 - 2R_2}{\longrightarrow}
\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 3/2 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
so we need
$$x_1 - x_3 = 0, \quad x_2 + (3/2)x_3 = 0, \quad 0 = 0,$$
or $x_1 = x_3$ and $x_2 = -(3/2)x_3$; or, with $t = x_3/2$ as a free variable, $x_1 = 2t$, $x_2 = -3t$, $x_3 = 2t$. So the set of eigenvectors associated with $\lambda_1 = 1$ is
$$\left\{ t\begin{bmatrix} 2 \\ -3 \\ 2 \end{bmatrix} : t \in \mathbb{R},\ t \neq 0 \right\}.$$
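The eigenvalue computation above can be spot-checked numerically. A sketch (not part of the original notes, assuming NumPy): the eigenvalues of $A$ should come out as $1$ and $1 \pm 2i$, and a hand-computed eigenvector $\vec{v}$ should satisfy $A\vec{v} = \lambda\vec{v}$.

```python
# Hypothetical verification of Page 344 Number 21 with NumPy.
import numpy as np

A = np.array([[1.0, 0, 0], [2, 1, -2], [3, 2, 1]])

lams = np.linalg.eigvals(A)
print(np.sort_complex(lams))                 # 1 - 2i, 1, and 1 + 2i

# Eigenvectors computed by hand in the notes:
v1 = np.array([2, -3, 2])                    # for lambda = 1
v2 = np.array([0, 1j, 1])                    # for lambda = 1 + 2i
print(np.allclose(A @ v1, 1 * v1))           # True
print(np.allclose(A @ v2, (1 + 2j) * v2))    # True
```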
$\lambda_2 = 1 + 2i$. We solve for the components of $\vec{x} = [x_1, x_2, x_3]$ such that $(A - (1+2i)\mathcal{I})\vec{x} = \vec{0}$, which gives the augmented matrix
$$\begin{bmatrix} 1-(1+2i) & 0 & 0 & 0 \\ 2 & 1-(1+2i) & -2 & 0 \\ 3 & 2 & 1-(1+2i) & 0 \end{bmatrix} = \begin{bmatrix} -2i & 0 & 0 & 0 \\ 2 & -2i & -2 & 0 \\ 3 & 2 & -2i & 0 \end{bmatrix}
\overset{R_1 \to R_1/(-2i)}{\longrightarrow}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 2 & -2i & -2 & 0 \\ 3 & 2 & -2i & 0 \end{bmatrix}$$
$$\overset{\substack{R_2 \to R_2 - 2R_1 \\ R_3 \to R_3 - 3R_1}}{\longrightarrow}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -2i & -2 & 0 \\ 0 & 2 & -2i & 0 \end{bmatrix}
\overset{R_2 \to R_2/(-2i)}{\longrightarrow}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -i & 0 \\ 0 & 2 & -2i & 0 \end{bmatrix}
\overset{R_3 \to R_3 - 2R_2}{\longrightarrow}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & -i & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
so we need
$$x_1 = 0, \quad x_2 - ix_3 = 0, \quad 0 = 0,$$
or $x_1 = 0$ and $x_2 = ix_3$; or, with $t = x_3$ as a free variable, $x_1 = 0$, $x_2 = it$, $x_3 = t$. So the set of eigenvectors associated with $\lambda_2 = 1 + 2i$ is
$$\left\{ t\begin{bmatrix} 0 \\ i \\ 1 \end{bmatrix} : t \in \mathbb{R},\ t \neq 0 \right\}.$$

$\lambda_3 = 1 - 2i$. We solve for the components of $\vec{x} = [x_1, x_2, x_3]$ such that $(A - (1-2i)\mathcal{I})\vec{x} = \vec{0}$. The technique is similar to the above, and we find the set of eigenvectors
$$\left\{ t\begin{bmatrix} 0 \\ -i \\ 1 \end{bmatrix} : t \in \mathbb{R},\ t \neq 0 \right\}.$$

Definition. A matrix $A$ is self-adjoint (or Hermitian) if it satisfies $A^* = A$. If $A$ contains only real entries, this simply means that $A$ is symmetric: $A^T = A$.

Theorem. If $A$ is an $n\times n$ Hermitian matrix, then:

1. all the eigenvalues of $A$ are real;

2. $A$ has an associated set of $n$ eigenvectors which are linearly independent;

3. eigenvectors corresponding to different eigenvalues of $A$ are orthogonal; and

4. the set of $n$ eigenvectors can be chosen to be mutually orthogonal as well as linearly independent.

Example. Page 344 Numbers 27 and 32. Prove that if $A$ is Hermitian, then $(A\vec{x}, \vec{y}) = (\vec{x}, A\vec{y})$.
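The identity in this last example, and part 1 of the theorem above, can be illustrated numerically (this is an illustration, not a proof, and is not part of the original notes; it assumes NumPy). Here $(\vec{u}, \vec{v}) = \sum_i u_i \overline{v_i}$ is the complex inner product, and a random Hermitian matrix is built as $B + B^*$.

```python
# Hypothetical numerical illustration of Page 344 Numbers 27 and 32:
# for a Hermitian matrix A, (A x, y) = (x, A y) for all x and y.
import numpy as np

rng = np.random.default_rng(0)

def inner(u, v):
    """Complex inner product (u, v) = sum over i of u_i * conj(v_i)."""
    return np.vdot(v, u)          # np.vdot conjugates its FIRST argument

# B + B* is always Hermitian, for any square complex matrix B.
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T
print(np.allclose(A, A.conj().T))                     # True: A* = A

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(inner(A @ x, y), inner(x, A @ y)))   # True

# Part 1 of the theorem: the eigenvalues of A are real.
print(np.allclose(np.linalg.eigvals(A).imag, 0))      # True
```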