Chapter 7. The Eigenvalue Problem. Section 7.1. The general case
§ 1 Introduction. The eigenvalue equation of an operator $\hat{O}$ is

\hat{O}\,|x\rangle = \lambda\,|x\rangle    (1)

where $\lambda$ is an unknown number and $|x\rangle$ is an unknown ket. This equation has many solutions $|x_\alpha\rangle$, which are called eigenstates, or eigenkets, or eigenvectors of $\hat{O}$. For each solution $|x_\alpha\rangle$ there is a number $\lambda_\alpha$, called the eigenvalue corresponding to the eigenket $|x_\alpha\rangle$.

I remind you that if $\hat{A}$ represents an observable, then the eigenstates of $\hat{A}$ are the pure states of that observable. In addition, the spectrum of $A$ (the values that the observable $A$ can take) consists of the eigenvalues of $\hat{A}$. Solving the eigenvalue equation for the operators representing observables is a matter of the greatest importance in quantum mechanics. In most cases encountered in practice, you will know the operator and will want to calculate some of its eigenvalues and eigenkets.

If one works with the Schrödinger representation (which is the coordinate representation), the operators of interest are differential operators acting on wave functions that are elements of $L^2$. For example, the energy operator of a harmonic oscillator is

\hat{H} = -\frac{\hbar^2}{2\mu}\frac{d^2}{dx^2} + \frac{k x^2}{2}    (2)

and the eigenvalue equation is

\hat{H}\psi(x) \equiv -\frac{\hbar^2}{2\mu}\frac{d^2\psi(x)}{dx^2} + \frac{k x^2}{2}\,\psi(x) = E\,\psi(x)    (3)

The eigenvalue equation for the energy of the harmonic oscillator has an analytic solution (see Metiu, Quantum Mechanics), and so do the corresponding equations for the particle in a box and for the hydrogen atom. This situation is not general.

One of the most frequently used methods for solving such equations numerically is to turn them into a matrix equation. This is why we dedicate a chapter to studying the eigenvalue problem for matrices. In practice, we use computers and software provided by experts to solve the eigenvalue problem for a given matrix. Because of this, we focus here on understanding various aspects of the eigenvalue problem for matrices and do not discuss numerical methods for solving it. When we need a numerical solution, we use Mathematica. For demanding calculations, we must use Fortran, C, or C++, since Mathematica is less efficient. These computer languages have libraries of programs designed to give you the eigenvalues and the eigenvectors when you give them the matrix.
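To make the "turn it into a matrix equation" strategy concrete, here is a minimal sketch (mine, not part of the original notes): Eq. 3 is discretized on a uniform grid, the second derivative is replaced by a three-point finite difference, and the resulting real symmetric matrix is handed to a library eigensolver (numpy.linalg.eigh in Python). The grid size, the box length, and the units $\hbar = \mu = k = 1$ are assumptions made for the illustration; in these units the exact eigenvalues are $E_n = (n + \tfrac{1}{2})\,\omega$ with $\omega = \sqrt{k/\mu} = 1$.

```python
# Minimal sketch (mine, not from the notes): solve Eq. 3 numerically by
# discretizing psi(x) on a uniform grid, in units where hbar = mu = k = 1,
# so the exact eigenvalues are E_n = (n + 1/2)*omega with omega = 1.
import numpy as np

N, L = 1000, 20.0                 # number of grid points and box length (assumed)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Kinetic energy: three-point finite difference for -(1/2) d^2/dx^2
diag = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# Potential energy k x^2 / 2 on the diagonal
V = np.diag(0.5 * x**2)

H = T + V                         # Hamiltonian matrix: real and symmetric
E, psi = np.linalg.eigh(H)        # eigenvalues ascending, eigenvectors in columns

print(E[:4])                      # ~ [0.5, 1.5, 2.5, 3.5]
```

The grid version implicitly imposes $\psi = 0$ at the ends of the interval, which is harmless here because the low-lying oscillator states decay long before the walls; § 5 develops the same idea with a basis-set expansion instead of a grid.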
§ 2 Some general considerations. It turns out that the eigenvalue problem $\hat{A}\,|\psi\rangle = \lambda\,|\psi\rangle$ has many solutions, and not all of them are physically meaningful. We are interested in this eigenvalue problem because, if it arises from an observable, the eigenvalues are the spectrum of the observable and some of the eigenfunctions are the pure states of the observable. In addition, the operator representing the observable is Hermitian, because its spectrum, which consists of all the values the observable can take, contains only real numbers. Before we try to solve the eigenvalue problem, we need to decide how we recognize which eigenstates correspond to pure states: these are the only eigenstates of interest to us.

In Chapter 2, we examined the definition of the probability and concluded that kets $|a_i\rangle$ and $|a_j\rangle$ representing pure states must satisfy the orthonormalization condition

\langle a_i | a_j \rangle = \delta_{ij}    (4)

§ 3 Normalization. Let us assume that we are interested in solving the eigenvalue equation

\hat{A}\,|\psi\rangle = \lambda\,|\psi\rangle    (5)

It is easy to see that if $\alpha$ is a complex number and $|\psi\rangle$ satisfies Eq. 5, then $|\eta\rangle = \alpha\,|\psi\rangle$ satisfies the same equation. For every given eigenvalue we therefore have as many eigenstates as there are complex numbers. The one that is a pure state must be normalized. We can therefore determine the value of $\alpha$ by requiring that $|\eta\rangle$ satisfies

\langle \eta | \eta \rangle = 1    (6)

Using $|\eta\rangle = \alpha\,|\psi\rangle$ in Eq. 6 (and the properties of the scalar product) leads to

\alpha^*\alpha\,\langle \psi | \psi \rangle = 1    (7)

or

\alpha^*\alpha = \frac{1}{\langle \psi | \psi \rangle}    (8)

This gives

\alpha = \frac{e^{i\phi}}{\sqrt{\langle \psi | \psi \rangle}}    (9)

where $\phi$ is a real number that cannot be determined by solving Eq. 8. The ket

|\eta\rangle = \frac{e^{i\phi}}{\sqrt{\langle \psi | \psi \rangle}}\,|\psi\rangle    (10)

is normalized and therefore is a pure state of $\hat{A}$ corresponding to the eigenvalue $\lambda$. If the system is in the state $|\eta\rangle$ and we measure $A$, we are certain that the result is $\lambda$.

In practice, most computer programs that calculate eigenstates do not normalize them. If the computer produces $|\psi\rangle$ as the eigenket corresponding to eigenvalue $\lambda$, you must use Eq. 10 to find the eigenstate $|\eta\rangle$ that is a pure state. Since $e^{i\phi}$ in Eq. 10 is an irrelevant phase factor, we can drop it (i.e. take $\phi = 0$); the pure state corresponding to $\lambda$ is

|\eta\rangle = \frac{|\psi\rangle}{\sqrt{\langle \psi | \psi \rangle}}    (11)

§ 4 Orthogonality. The pure states must also be orthogonal. Because an operator $\hat{A}$ representing an observable $A$ is always Hermitian, one can prove that eigenstates corresponding to different eigenvalues must be orthogonal. That is, if

\hat{A}\,|\eta_i\rangle = \lambda_i\,|\eta_i\rangle \quad \text{and} \quad \hat{A}\,|\eta_j\rangle = \lambda_j\,|\eta_j\rangle    (12)

and

\lambda_i \neq \lambda_j    (13)

then

\langle \eta_i | \eta_j \rangle = 0    (14)

The proof is easy. We have

\langle \hat{A}\eta_i | \eta_j \rangle - \langle \eta_i | \hat{A}\eta_j \rangle = 0    (15)

because $\hat{A}$ is Hermitian. Using Eqs. 12 in Eq. 15, together with the fact that the eigenvalues of a Hermitian operator are real (so $\lambda_i^* = \lambda_i$), gives

(\lambda_i - \lambda_j)\,\langle \eta_i | \eta_j \rangle = 0

When $\lambda_i \neq \lambda_j$, we must have $\langle \eta_i | \eta_j \rangle = 0$, which is what we wanted to prove.

So, when you solve an eigenvalue equation for an observable, the eigenstates corresponding to different eigenvalues must be orthogonal. If they are not, there is something wrong with the program calculating the eigenvectors.

What happens if some eigenvalues are equal to each other? Physicists call such eigenvalues "degenerate"; mathematicians call them "multiple". If $|\eta_i\rangle$ and $|\eta_j\rangle$ belong to the same eigenvalue (i.e. if $\hat{A}\,|\eta_i\rangle = \lambda\,|\eta_i\rangle$ and $\hat{A}\,|\eta_j\rangle = \lambda\,|\eta_j\rangle$), then $|\eta_i\rangle$ and $|\eta_j\rangle$ are under no obligation to be orthogonal to each other; according to the result proved above, they are only guaranteed to be orthogonal to the eigenvectors belonging to eigenvalues that differ from $\lambda$. Most numerical procedures for solving eigenvalue problems will provide non-orthogonal degenerate eigenvectors. These can be orthogonalized with the Gram-Schmidt procedure and then normalized. The resulting eigenvectors are now pure states. You will see how this is done in an example given later in this chapter.

It is important to keep in mind this distinction between eigenstates and pure states. All pure states are eigenstates, but not all eigenstates are pure states.
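The notes work out such an example later in the chapter; in the meantime, here is a minimal sketch (mine, not from the text) of the Gram-Schmidt step for a set of linearly independent eigenvectors that belong to the same eigenvalue. The vectors are stored as columns of a NumPy array; the function name and the small test vectors are invented for the illustration, and complex scalar products are used so the sketch also applies to complex eigenvectors.

```python
# Minimal sketch (mine, not from the notes): Gram-Schmidt orthonormalization of
# linearly independent eigenvectors belonging to the same (degenerate)
# eigenvalue.  The vectors are the columns of `vecs`.
import numpy as np

def gram_schmidt(vecs):
    """Return an array whose columns are orthonormal and span
    the same space as the columns of `vecs`."""
    ortho = []
    for k in range(vecs.shape[1]):
        v = vecs[:, k].astype(complex)
        for u in ortho:                      # subtract the projections on the
            v = v - np.vdot(u, v) * u        # previously accepted vectors
        v = v / np.sqrt(np.vdot(v, v).real)  # normalize, as in Eq. 11
        ortho.append(v)
    return np.column_stack(ortho)

# Two non-orthogonal vectors spanning a doubly degenerate eigenspace
vecs = np.array([[1.0, 1.0],
                 [1.0, 0.0],
                 [0.0, 1.0]])
eta = gram_schmidt(vecs)
print(np.round(eta.conj().T @ eta, 10))      # identity matrix: orthonormal
```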
§ 5 The eigenvalue problem in matrix representation: a review. Let $|a_1\rangle, |a_2\rangle, \ldots, |a_n\rangle, \ldots$ be a complete orthonormal basis set. This set need not be a set of pure states of an observable; we can use any complete basis set that a mathematician can construct. Orthonormality means that

\langle a_i | a_j \rangle = \delta_{ij} \quad \text{for all } i, j    (16)

Because the set is complete, we have

\hat{I} \simeq \sum_{n=1}^{N} |a_n\rangle\langle a_n|    (17)

Eq. 17 ignores the continuous spectrum and truncates the infinite sum in the completeness relation. We can act with each $\langle a_m|$, $m = 1, 2, \ldots, N$, on the eigenvalue equation to turn it into the following $N$ equations:

\langle a_m | \hat{A} | \psi \rangle = \lambda\,\langle a_m | \psi \rangle, \quad m = 1, 2, \ldots, N    (18)

Now insert $\hat{I}$, as given by Eq. 17, between $\hat{A}$ and $|\psi\rangle$ to obtain

\sum_{n=1}^{N} \langle a_m | \hat{A} | a_n \rangle \langle a_n | \psi \rangle = \lambda\,\langle a_m | \psi \rangle, \quad m = 1, 2, \ldots, N    (19)

The complex numbers $\langle a_n | \psi \rangle$ are the coordinates of $|\psi\rangle$ in the $\{|a_n\rangle\}_{n=1}^{N}$ representation. If we know them, we can write $|\psi\rangle$ as

|\psi\rangle = \sum_{n=1}^{N} |a_n\rangle \langle a_n | \psi \rangle    (20)

It is easy to rewrite Eq. 19 in a form familiar from linear matrix algebra. Let us denote

\psi_i \equiv \langle a_i | \psi \rangle, \quad i = 1, 2, \ldots, N    (21)

and

A_{mn} \equiv \langle a_m | \hat{A} | a_n \rangle    (22)

With this notation, Eq. 19 becomes

\sum_{n=1}^{N} A_{mn}\,\psi_n = \lambda\,\psi_m, \quad m = 1, 2, \ldots, N    (23)

and Eq. 20 becomes

|\psi\rangle = \sum_{n=1}^{N} |a_n\rangle\,\psi_n    (24)

The sum in Eq. 23 is the rule by which the matrix $A$, having the elements $A_{mn}$, acts on the vector $\psi$, having the coordinates $\psi_n$. This equation is called the eigenvalue problem for the matrix $A$, and it is often written as

A\psi = \lambda\psi    (25)

(which resembles the operator equation $\hat{A}\,|\psi\rangle = \lambda\,|\psi\rangle$) or as

\begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1N} \\
A_{21} & A_{22} & \cdots & A_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
A_{N1} & A_{N2} & \cdots & A_{NN}
\end{pmatrix}
\begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_N \end{pmatrix}
= \lambda
\begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_N \end{pmatrix}    (26)

Eqs. 25 and 26 are different ways of writing Eq. 23, which, in turn, is the representation of the equation $\hat{A}\,|\psi\rangle = \lambda\,|\psi\rangle$ in the basis set $\{|a_1\rangle, \ldots, |a_N\rangle\}$.

Normally in quantum mechanics we choose the basis set $\{|a_1\rangle, \ldots, |a_N\rangle\}$ and know the operator $\hat{A}$. This means that we can calculate $A_{mn} = \langle a_m | \hat{A} | a_n \rangle$. We do not know $|\psi\rangle$ or $\lambda$, and our mission is to find them from $\hat{A}\,|\psi\rangle = \lambda\,|\psi\rangle$. We have converted this operator equation, by using the orthonormal basis set $\{|a_1\rangle, \ldots, |a_N\rangle\}$, into the matrix equation Eq. 23 (or Eq. 25 or Eq. 26). In Eq. 23 we know $A_{mn}$, but we do not know $\psi = \{\psi_1, \psi_2, \ldots, \psi_N\}$ or $\lambda$. We must calculate them from Eq. 23. Once we know $\psi_1, \psi_2, \ldots, \psi_N$, we can calculate $|\psi\rangle$ from

|\psi\rangle = \sum_{n=1}^{N} |a_n\rangle \langle a_n | \psi \rangle \equiv \sum_{n=1}^{N} \psi_n\,|a_n\rangle    (27)

§ 6 Use a computer. Calculating eigenvalues and eigenvectors of a matrix "by hand" is extremely tedious, especially since the dimension $N$ is often very large. Most computer languages (including Fortran, C, C++, Mathematica, Maple, Mathcad, and Basic) have libraries of functions that, given a matrix, will return its eigenvalues and eigenvectors. I will not discuss the numerical algorithms used for finding eigenvalues. Quantum mechanics could leave that to experts without suffering much harm.
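As a concrete illustration of § 5 and § 6 (mine, not part of the original notes), the sketch below builds the matrix $A_{mn} = \langle a_m|\hat{H}|a_n\rangle$ of Eq. 22 for the harmonic oscillator of Eq. 2, using particle-in-a-box sine functions as the orthonormal basis, and hands the matrix to a library eigensolver (numpy.linalg.eigh in Python). The basis choice, the box size, the number of basis functions, the quadrature grid, and the units $\hbar = \mu = k = 1$ are assumptions made for the example; the lowest eigenvalues should again approach $\hbar\omega(n + \tfrac{1}{2})$.

```python
# Minimal sketch (mine, not from the notes): build A_mn = <a_m|H|a_n> (Eq. 22)
# for the harmonic oscillator of Eq. 2, in units hbar = mu = k = 1, using
# particle-in-a-box sine functions on [-L/2, L/2] as the orthonormal basis.
import numpy as np

L, Nbasis = 10.0, 40                  # box length and number of basis kets (assumed)
x = np.linspace(-L / 2, L / 2, 2001)  # quadrature grid for the integrals
dx = x[1] - x[0]

def basis(n, xx):
    """Particle-in-a-box function n = 1, 2, ..., normalized on [-L/2, L/2]."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * (xx + L / 2) / L)

A = np.zeros((Nbasis, Nbasis))
for m in range(1, Nbasis + 1):
    for n in range(1, Nbasis + 1):
        # kinetic part <a_m| -(1/2) d^2/dx^2 |a_n>: diagonal and analytic
        kin = 0.5 * (n * np.pi / L) ** 2 if m == n else 0.0
        # potential part <a_m| x^2/2 |a_n>: simple quadrature on the grid
        # (the basis functions vanish at the box edges)
        pot = np.sum(basis(m, x) * 0.5 * x**2 * basis(n, x)) * dx
        A[m - 1, n - 1] = kin + pot

lam, psi = np.linalg.eigh(A)          # Eqs. 25-26 solved by a library routine
print(lam[:4])                        # ~ [0.5, 1.5, 2.5, 3.5]
# The columns of psi hold the coordinates psi_n of Eq. 21; eigh returns them
# already orthonormal, so psi.T @ psi is (numerically) the identity matrix.
```

The design point of the example is the division of labor described in § 6: we supply the basis set and the matrix elements $A_{mn}$, and the library routine supplies the eigenvalues $\lambda$ and the coordinates $\psi_n$, from which Eq. 27 reconstructs $|\psi\rangle$.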