Eigenvalue Problems

Outline

1 Eigenvalue Problems
2 Existence, Uniqueness, and Conditioning
3 Computing Eigenvalues and Eigenvectors

(Michael T. Heath, Scientific Computing)

Eigenvalue Problems

• Eigenvalue problems occur in many areas of science and engineering, such as structural analysis.
• Eigenvalues are also important in analyzing numerical methods.
• Theory and algorithms apply to complex matrices as well as real matrices.
• With complex matrices, we use the conjugate transpose A^H instead of the usual transpose A^T.

Eigenvalues and Eigenvectors

• Standard eigenvalue problem: given an n × n matrix A, find a scalar λ and a nonzero vector x such that

      Ax = λx

• λ is an eigenvalue, and x is a corresponding eigenvector.
• λ may be complex even if A is real.
• Spectrum = λ(A) = set of eigenvalues of A.
• Spectral radius = ρ(A) = max{ |λ| : λ ∈ λ(A) }.

Classic Eigenvalue Problem

• Consider the coupled pair of differential equations

      dv/dt = 4v − 5w,   v = 8 at t = 0,
      dw/dt = 2v − 3w,   w = 5 at t = 0.

  This is an initial-value problem.

• With the coefficient matrix

      A = [ 4  −5 ; 2  −3 ],

  we can write this as

      d/dt [ v(t) ; w(t) ] = [ 4  −5 ; 2  −3 ] [ v(t) ; w(t) ].

• Introducing the vector unknown u(t) := [v(t), w(t)]^T with u(0) = [8, 5]^T, we can write the system in vector form:

      du/dt = Au, with u = u(0) at t = 0.

• How do we find u(t)?

• If we had a 1 × 1 matrix A = a, we would have a scalar equation:

      du/dt = au, with u = u(0) at t = 0.
• The solution to this equation is a pure exponential, u(t) = e^{at} u(0), which satisfies the initial condition because e^0 = 1.
• The derivative with respect to t is a e^{at} u(0) = au, so it satisfies the scalar initial-value problem.
• The constant a is critical to how this system behaves:
  – If a > 0, then the solution grows in time.
  – If a < 0, then the solution decays.
  – If a is imaginary, then the solution is oscillatory.
  (More on this later...)

• Coming back to our system, suppose we again look for solutions that are pure exponentials in time, e.g.,

      v(t) = e^{λt} y,   w(t) = e^{λt} z.

• If this is to be a solution to our initial-value problem, we require

      dv/dt = λ e^{λt} y = 4 e^{λt} y − 5 e^{λt} z,
      dw/dt = λ e^{λt} z = 2 e^{λt} y − 3 e^{λt} z.

• The e^{λt} cancels out from each side, leaving

      λy = 4y − 5z,
      λz = 2y − 3z,

  which is the eigenvalue problem.

• In vector form, u(t) = e^{λt} x yields

      du/dt = Au  ⟺  λ e^{λt} x = A (e^{λt} x),

  which gives the eigenvalue problem in matrix form: λx = Ax, or Ax = λx.

• As in the scalar case, the solution behavior depends on whether λ has
  – positive real part → a growing solution,
  – negative real part → a decaying solution,
  – an imaginary part → an oscillating solution.

• Note that here we have two unknowns: λ and x.
• We refer to (λ, x) as an eigenpair, with eigenvalue λ and eigenvector x.

Solving the Eigenvalue Problem

• The eigenpair satisfies

      (A − λI) x = 0,

  which is to say,
  – x is in the null space of A − λI,
  – λ is chosen so that A − λI has a nontrivial null space.

• We thus seek λ such that A − λI is singular.
• Singularity implies det(A − λI) = 0.
• For our example,

      0 = det [ 4−λ  −5 ; 2  −3−λ ] = (4 − λ)(−3 − λ) − (−5)(2),

  or λ^2 − λ − 2 = 0, which has roots λ = −1 or λ = 2.

Finding the Eigenvectors

• For the case λ = λ_1 = −1, (A − λ_1 I) x = 0 reads

      [ 5  −5 ; 2  −2 ] [ y ; z ] = [ 0 ; 0 ],

  which gives us the eigenvector

      x_1 = [ y ; z ] = [ 1 ; 1 ].

• Note that any nonzero multiple of x_1 is also an eigenvector.
• Thus, x_1 defines a subspace that is invariant under multiplication by A.
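As a sanity check on the numbers above, the eigenpairs of this 2 × 2 example can be verified numerically. This is a small sketch using NumPy (not part of the original notes): `np.roots` finds the roots of the characteristic polynomial λ² − λ − 2, and `np.linalg.eig` computes the same eigenvalues directly from A.

```python
import numpy as np

# Coefficient matrix of the model problem du/dt = Au.
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])

# Roots of the characteristic polynomial lambda^2 - lambda - 2 = 0
# (np.roots takes coefficients from highest to lowest degree).
char_roots = np.sort(np.roots([1.0, -1.0, -2.0]))
print(char_roots)            # [-1.  2.]

# NumPy computes the same eigenvalues (and eigenvectors) directly.
lam, X = np.linalg.eig(A)
print(np.sort(lam.real))

# Each column of X is a unit-norm eigenvector; check A x = lambda x.
for i in range(2):
    assert np.allclose(A @ X[:, i], lam[i] * X[:, i])
```

The eigenvectors returned by `eig` are normalized to unit length, so they appear as scalar multiples of [1, 1] and [5, 2] rather than those vectors themselves.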
• For the case λ = λ_2 = 2, (A − λ_2 I) x = 0 reads

      [ 2  −5 ; 2  −5 ] [ y ; z ] = [ 0 ; 0 ],

  which gives us the second eigenvector as any multiple of

      x_2 = [ y ; z ] = [ 5 ; 2 ].

Return to the Model Problem

• Note that our model problem du/dt = Au is linear in the unknown u.
• Thus, if we have two solutions u_1(t) and u_2(t) satisfying the differential equation, their sum u := u_1 + u_2 also satisfies the equation:

      du_1/dt = Au_1
      du_2/dt = Au_2
      ⟹ d(u_1 + u_2)/dt = A(u_1 + u_2)
      ⟹ du/dt = Au.

• Take u_1 = c_1 e^{λ_1 t} x_1:

      du_1/dt = c_1 λ_1 e^{λ_1 t} x_1,
      Au_1 = A (c_1 e^{λ_1 t} x_1) = c_1 e^{λ_1 t} A x_1 = c_1 e^{λ_1 t} λ_1 x_1 = du_1/dt.

• Similarly, for u_2 = c_2 e^{λ_2 t} x_2: du_2/dt = Au_2.
• Thus du/dt = d(u_1 + u_2)/dt = A(u_1 + u_2), with

      u = c_1 e^{λ_1 t} x_1 + c_2 e^{λ_2 t} x_2.

• The only remaining part is to find the coefficients c_1 and c_2 such that u = u(0) at time t = 0.
• This initial condition yields a 2 × 2 system,

      [ x_1  x_2 ] [ c_1 ; c_2 ] = [ 8 ; 5 ].

• Solving for c_1 and c_2 via Gaussian elimination:

      [ 1  5 ; 1  2 ] [ c_1 ; c_2 ] = [ 8 ; 5 ]
      [ 1  5 ; 0  −3 ] [ c_1 ; c_2 ] = [ 8 ; −3 ]
      ⟹ c_2 = 1,  c_1 = 8 − 5 c_2 = 3.

• So our solution is

      u(t) = c_1 e^{λ_1 t} x_1 + c_2 e^{λ_2 t} x_2 = 3 e^{−t} [ 1 ; 1 ] + e^{2t} [ 5 ; 2 ].

• Clearly, after a long time the solution is going to look like a multiple of x_2 = [5, 2]^T, because the component of the solution parallel to x_1 will decay.
• (More precisely, the component parallel to x_1 will not grow as fast as the component parallel to x_2.)

Growing / Decaying Modes

• Our model problem,

      du/dt = Au  →  u(t) = c_1 e^{λ_1 t} x_1 + c_2 e^{λ_2 t} x_2,

  leads to growth/decay of components.
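The coefficient solve and the closed-form solution above can also be checked numerically. This sketch (NumPy assumed; not from the original notes) solves the 2 × 2 system for c_1 and c_2, then confirms that u(t) reproduces the initial condition and satisfies du/dt = Au via a finite-difference test:

```python
import numpy as np

A  = np.array([[4.0, -5.0], [2.0, -3.0]])
x1 = np.array([1.0, 1.0])   # eigenvector for lambda_1 = -1
x2 = np.array([5.0, 2.0])   # eigenvector for lambda_2 =  2
u0 = np.array([8.0, 5.0])   # initial condition u(0)

# Solve [x1 x2] [c1; c2] = u(0), the 2x2 system from the notes.
c1, c2 = np.linalg.solve(np.column_stack([x1, x2]), u0)
print(c1, c2)               # c1 ~ 3, c2 ~ 1

def u(t):
    """u(t) = c1 e^{-t} x1 + c2 e^{2t} x2."""
    return c1 * np.exp(-t) * x1 + c2 * np.exp(2.0 * t) * x2

# The initial condition is reproduced ...
assert np.allclose(u(0.0), u0)

# ... and du/dt = Au holds (central-difference check at t = 1).
t, h = 1.0, 1e-6
assert np.allclose((u(t + h) - u(t - h)) / (2 * h), A @ u(t), rtol=1e-5)
```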
• We also get growth/decay through matrix-vector products.
• Consider u = c_1 x_1 + c_2 x_2:

      Au = c_1 A x_1 + c_2 A x_2 = c_1 λ_1 x_1 + c_2 λ_2 x_2,
      A^k u = c_1 λ_1^k x_1 + c_2 λ_2^k x_2 = λ_2^k [ c_1 (λ_1/λ_2)^k x_1 + c_2 x_2 ].

• Since |λ_1/λ_2| < 1, as k → ∞,

      A^k u → λ_2^k [ c_1 · 0 · x_1 + c_2 x_2 ] = c_2 λ_2^k x_2.

• So, repeated matrix-vector products lead to emergence of the eigenvector associated with the eigenvalue λ that has largest modulus.
• This is the main idea behind the power method, which is a common way to find the eigenvector associated with max |λ|.

Geometric Interpretation

• A matrix expands or shrinks any vector lying in the direction of an eigenvector by a scalar factor.
• The expansion or contraction factor is given by the corresponding eigenvalue λ.
• Eigenvalues and eigenvectors decompose the complicated behavior of a general linear transformation into simpler actions.

Examples: Eigenvalues and Eigenvectors

      A = [ 1  0 ; 0  2 ]:        λ_1 = 1, x_1 = [1 ; 0];   λ_2 = 2, x_2 = [0 ; 1]
      A = [ 1  1 ; 0  2 ]:        λ_1 = 1, x_1 = [1 ; 0];   λ_2 = 2, x_2 = [1 ; 1]
      A = [ 3  −1 ; −1  3 ]:      λ_1 = 2, x_1 = [1 ; 1];   λ_2 = 4, x_2 = [1 ; −1]
      A = [ 1.5  0.5 ; 0.5  1.5 ]: λ_1 = 2, x_1 = [1 ; 1];  λ_2 = 1, x_2 = [−1 ; 1]
      A = [ 0  1 ; −1  0 ]:       λ_1 = i, x_1 = [1 ; i];   λ_2 = −i, x_2 = [i ; 1]

  where i = √−1.
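The repeated matrix-vector products above are exactly the power method. A minimal sketch (NumPy assumed; the per-step normalization, which avoids overflow, and the Rayleigh-quotient eigenvalue estimate are standard additions, not from the slides):

```python
import numpy as np

def power_method(A, x, iters=100):
    """Repeated matrix-vector products x <- A x / ||A x||.
    Converges to the eigenvector of the largest-|lambda| eigenvalue,
    provided that eigenvalue is unique in modulus and the starting
    vector has a nonzero component along its eigenvector."""
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    lam = x @ (A @ x)          # Rayleigh quotient estimate of lambda
    return lam, x

# Example matrix from the slides: eigenvalues 2 and 4.
A = np.array([[3.0, -1.0],
              [-1.0, 3.0]])
lam, x = power_method(A, np.array([1.0, 0.0]))
print(lam)                     # ~ 4.0
print(x)                       # ~ multiple of [1, -1] / sqrt(2)
```

After 100 iterations the subdominant component is damped by (2/4)^100, so the iterate is the dominant eigenvector to machine precision.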
Characteristic Polynomial

• The equation Ax = λx is equivalent to

      (A − λI) x = 0,

  which has a nonzero solution x if, and only if, its matrix is singular.
• The eigenvalues of A are the roots λ_i of the characteristic polynomial

      det(A − λI) = 0,

  a polynomial of degree n in λ.
• The Fundamental Theorem of Algebra implies that an n × n matrix A always has n eigenvalues, but they may be neither real nor distinct.
• Complex eigenvalues of a real matrix occur in complex conjugate pairs: if α + iβ is an eigenvalue of a real matrix, then so is α − iβ, where i = √−1.

Example: Characteristic Polynomial

• The characteristic polynomial of the previous example matrix is

      det( [ 3  −1 ; −1  3 ] − λ [ 1  0 ; 0  1 ] ) = det [ 3−λ  −1 ; −1  3−λ ]
        = (3 − λ)(3 − λ) − (−1)(−1) = λ^2 − 6λ + 8 = 0,

  so the eigenvalues are given by

      λ = (6 ± √(36 − 32)) / 2,  or  λ_1 = 2, λ_2 = 4.

Companion Matrix

• The monic polynomial

      p(λ) = c_0 + c_1 λ + ··· + c_{n−1} λ^{n−1} + λ^n

  is the characteristic polynomial of the companion matrix

      C_n = [ 0  0  ···  0  −c_0 ;
              1  0  ···  0  −c_1 ;
              0  1  ···  0  −c_2 ;
              ⋮  ⋮  ⋱   ⋮    ⋮  ;
              0  0  ···  1  −c_{n−1} ]
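To illustrate the correspondence, a small sketch (NumPy assumed; not part of the original slides) builds the companion matrix of the earlier polynomial p(λ) = λ² − λ − 2, i.e. c_0 = −2 and c_1 = −1, and confirms that its eigenvalues are the roots λ = −1 and λ = 2:

```python
import numpy as np

def companion(c):
    """Companion matrix of the monic polynomial
    p(lam) = c[0] + c[1]*lam + ... + c[n-1]*lam**(n-1) + lam**n."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    C[:, -1] = -np.asarray(c)         # last column: -c_0, ..., -c_{n-1}
    return C

# p(lam) = lam^2 - lam - 2, the characteristic polynomial of the
# model-problem matrix A = [4 -5; 2 -3]: c_0 = -2, c_1 = -1.
C = companion([-2.0, -1.0])
print(C)                              # [[0. 2.]
                                      #  [1. 1.]]
print(np.sort(np.linalg.eigvals(C).real))   # [-1.  2.]
```

This is also how `np.roots` works internally: it forms the companion matrix of the given polynomial and returns its eigenvalues.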
