A List of All the Errata Known As of 19 December 2013
14 Linear Algebra

that is nonnegative when the matrices are the same

(A, A) = \mathrm{Tr}\, A^\dagger A = \sum_{i=1}^{N} \sum_{j=1}^{L} A_{ij}^* A_{ij} = \sum_{i=1}^{N} \sum_{j=1}^{L} |A_{ij}|^2 \ge 0 \qquad (1.87)

which is zero only when A = 0. So this inner product is positive definite. A vector space with a positive-definite inner product (1.73–1.76) is called an inner-product space, a metric space, or a pre-Hilbert space.

A sequence of vectors f_n is a Cauchy sequence if for every ε > 0 there is an integer N(ε) such that ‖f_n − f_m‖ < ε whenever both n and m exceed N(ε). A sequence of vectors f_n converges to a vector f if for every ε > 0 there is an integer N(ε) such that ‖f − f_n‖ < ε whenever n exceeds N(ε). An inner-product space with a norm defined as in (1.80) is complete if each of its Cauchy sequences converges to a vector in that space. A Hilbert space is a complete inner-product space. Every finite-dimensional inner-product space is complete and so is a Hilbert space. But the term Hilbert space more often is used to describe infinite-dimensional complete inner-product spaces, such as the space of all square-integrable functions (David Hilbert, 1862–1943).

Example 1.17 (The Hilbert Space of Square-Integrable Functions) For the vector space of functions (1.55), a natural inner product is

(f, g) = \int_a^b dx\, f^*(x)\, g(x). \qquad (1.88)

The squared norm ‖f‖² of a function f(x) is

\| f \|^2 = \int_a^b dx\, |f(x)|^2. \qquad (1.89)

A function is square integrable if its norm is finite. The space of all square-integrable functions is an inner-product space; it also is complete and so is a Hilbert space.

Example 1.18 (Minkowski Inner Product) The Minkowski or Lorentz inner product (p, x) of two 4-vectors p = (E/c, p¹, p², p³) and x = (ct, x¹, x², x³) is \boldsymbol{p} \cdot \boldsymbol{x} − Et, in which \boldsymbol{p} and \boldsymbol{x} are the spatial 3-vectors. It is indefinite, nondegenerate (1.79), and invariant under Lorentz transformations, and often is written as p · x or as px. If p is the 4-momentum of a freely moving physical particle of mass m, then

p \cdot p = \boldsymbol{p} \cdot \boldsymbol{p} - E^2/c^2 = -c^2 m^2 \le 0. \qquad (1.90)

The Minkowski inner product satisfies the rules (1.73, 1.74, and 1.79), but

22 Linear Algebra

and the formula (1.118) for the kth orthonormal linear combination of the vectors |V_ℓ⟩ is

|U_k\rangle = \frac{|u_k\rangle}{\sqrt{\langle u_k | u_k \rangle}}. \qquad (1.139)

The vectors |U_k⟩ are not unique; they vary with the order of the |V_k⟩.

Vectors and linear operators are abstract. The numbers we compute with are inner products like ⟨g|f⟩ and ⟨g|A|f⟩. In terms of N orthonormal basis vectors |n⟩ with f_n = ⟨n|f⟩ and g_n^* = ⟨g|n⟩, we can use the expansion (1.131) to write these inner products as

\langle g|f\rangle = \langle g|I|f\rangle = \sum_{n=1}^{N} \langle g|n\rangle \langle n|f\rangle = \sum_{n=1}^{N} g_n^* f_n
\langle g|A|f\rangle = \langle g|IAI|f\rangle = \sum_{n,\ell=1}^{N} \langle g|n\rangle \langle n|A|\ell\rangle \langle \ell|f\rangle = \sum_{n,\ell=1}^{N} g_n^* A_{n\ell} f_\ell \qquad (1.140)

in which A_{nℓ} = ⟨n|A|ℓ⟩. We often gather the inner products f_ℓ = ⟨ℓ|f⟩ into a column vector f with components f_ℓ = ⟨ℓ|f⟩

f = \begin{pmatrix} \langle 1|f\rangle \\ \langle 2|f\rangle \\ \vdots \\ \langle N|f\rangle \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_N \end{pmatrix} \qquad (1.141)

and the ⟨n|A|ℓ⟩ into a matrix A with matrix elements A_{nℓ} = ⟨n|A|ℓ⟩. If we also line up the inner products ⟨g|n⟩ = ⟨n|g⟩* in a row vector that is the transpose of the complex conjugate of the column vector g

g^\dagger = (\langle 1|g\rangle^*, \langle 2|g\rangle^*, \ldots, \langle N|g\rangle^*) = (g_1^*, g_2^*, \ldots, g_N^*) \qquad (1.142)

then we can write inner products in matrix notation as ⟨g|f⟩ = g†f and as ⟨g|A|f⟩ = g†Af.
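A quick numerical check of this dictionary between Dirac notation and matrices may help. The following sketch is not from the text; it assumes numpy, and the names f, g, and A are illustrative stand-ins filled with arbitrary complex numbers. It verifies ⟨g|f⟩ = g†f and ⟨g|A|f⟩ = g†Af against the explicit sums in (1.140), and the positivity of the trace inner product (1.87).

    import numpy as np

    rng = np.random.default_rng(0)
    N = 4

    # Arbitrary complex column vectors |f>, |g> and an N x N complex matrix A.
    f = rng.normal(size=N) + 1j * rng.normal(size=N)
    g = rng.normal(size=N) + 1j * rng.normal(size=N)
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

    # <g|f> = g† f and <g|A|f> = g† A f, with g† the conjugate transpose of g.
    gf = g.conj() @ f
    gAf = g.conj() @ A @ f

    # Compare with the explicit component sums of (1.140).
    assert np.isclose(gf, np.sum(g.conj() * f))
    assert np.isclose(gAf, sum(g[n].conj() * A[n, l] * f[l]
                               for n in range(N) for l in range(N)))

    # The trace inner product (1.87): (A, A) = Tr A†A = sum_ij |A_ij|^2 >= 0.
    AA = np.trace(A.conj().T @ A)
    assert np.isclose(AA, np.sum(np.abs(A) ** 2))
    assert AA.real >= 0 and np.isclose(AA.imag, 0)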
If we switch to a different basis, say from |n⟩'s to |α_n⟩'s, then the components of the column vectors change from f_n = ⟨n|f⟩ to f′_n = ⟨α_n|f⟩, and similarly those of the row vectors g† and of the matrix A change, but the bras, the kets, the linear operators, and the inner products ⟨g|f⟩ and ⟨g|A|f⟩

28 Linear Algebra

Remarkably, this translation operator is an exponential of the momentum operator, U(a) = exp(−ipa/ℏ), in which ℏ = h/2π = 1.054 × 10⁻³⁴ J s is Planck's constant divided by 2π.

In two dimensions, with basis states |x, y⟩ that are orthonormal in Dirac's sense, ⟨x, y|x′, y′⟩ = δ(x − x′) δ(y − y′), the unitary operator

U(\theta) = \int |x \cos\theta - y \sin\theta,\; x \sin\theta + y \cos\theta\rangle \langle x, y|\, dx\, dy \qquad (1.171)

rotates a system in space by the angle θ. This rotation operator is the exponential U(θ) = exp(−iθL_z/ℏ) in which the z component of the angular momentum is L_z = x p_y − y p_x.

We may carry most of our intuition about matrices over to these unitary transformations that change from one infinite basis to another. But we must use common sense and keep in mind that infinite sums and integrals do not always converge.

1.18 Antiunitary, Antilinear Operators

Certain maps on states |ψ⟩ → |ψ′⟩, such as those involving time reversal, are implemented by operators K that are antilinear

K(z\psi + w\varphi) = K(z|\psi\rangle + w|\varphi\rangle) = z^* K|\psi\rangle + w^* K|\varphi\rangle = z^* K\psi + w^* K\varphi \qquad (1.172)

and antiunitary

(K\varphi, K\psi) = \langle K\varphi | K\psi \rangle = (\varphi, \psi)^* = \langle \varphi | \psi \rangle^* = \langle \psi | \varphi \rangle = (\psi, \varphi). \qquad (1.173)

In Dirac notation, these rules are K(z|ψ⟩) = z*⟨ψ| and K(w⟨φ|) = w*|φ⟩.

1.19 Symmetry in Quantum Mechanics

In quantum mechanics, a symmetry is a one-to-one map of states |ψ⟩ ↔ |ψ′⟩ and |φ⟩ ↔ |φ′⟩ that preserves probabilities

|\langle \varphi' | \psi' \rangle|^2 = |\langle \varphi | \psi \rangle|^2. \qquad (1.174)

Eugene Wigner (1902–1995) showed that every symmetry in quantum mechanics can be represented either by an operator U that is linear and unitary or by an operator K that is antilinear and antiunitary. The antilinear, antiunitary case seems to occur only when the symmetry involves time reversal. Most symmetries are represented by operators that are linear and unitary. Unitary operators are of great importance in quantum mechanics.

34 Linear Algebra

shows that the operations (1.189) on columns that don't change the value of the determinant can be written as matrix multiplication from the right by a matrix that has unity on its main diagonal and zeros below. Now consider the matrix product

\begin{pmatrix} A & 0 \\ -I & B \end{pmatrix} \begin{pmatrix} I & B \\ 0 & I \end{pmatrix} = \begin{pmatrix} A & AB \\ -I & 0 \end{pmatrix} \qquad (1.202)

in which A and B are N × N matrices, I is the N × N identity matrix, and 0 is the N × N matrix of all zeros. The second matrix on the left-hand side has unity on its main diagonal and zeros below, and so it does not change the value of the determinant of the matrix to its left, which then must equal that of the matrix on the right-hand side:

\det \begin{pmatrix} A & 0 \\ -I & B \end{pmatrix} = \det \begin{pmatrix} A & AB \\ -I & 0 \end{pmatrix}. \qquad (1.203)

By using Laplace's expansion (1.183) along the first column to evaluate the determinant on the left-hand side and his expansion along the last row to compute the determinant on the right-hand side, one finds that the determinant of the product of two matrices is the product of the determinants

\det A \det B = \det AB. \qquad (1.204)

Example 1.27 (Two 2 × 2 Matrices) When the matrices A and B are both 2 × 2, the two sides of (1.203) are

\det \begin{pmatrix} A & 0 \\ -I & B \end{pmatrix} = \det \begin{pmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 \\ -1 & 0 & b_{11} & b_{12} \\ 0 & -1 & b_{21} & b_{22} \end{pmatrix} = a_{11} a_{22} \det B - a_{21} a_{12} \det B = \det A \det B \qquad (1.205)

and

\det \begin{pmatrix} A & AB \\ -I & 0 \end{pmatrix} = \det \begin{pmatrix} a_{11} & a_{12} & (ab)_{11} & (ab)_{12} \\ a_{21} & a_{22} & (ab)_{21} & (ab)_{22} \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix} = (-1) C_{42} = (-1)(-1) \det AB = \det AB \qquad (1.206)

and so they give the product rule \det A \det B = \det AB.
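The block-matrix argument lends itself to a direct numerical test. The following sketch is not from the text; it assumes numpy, and the matrices A and B are arbitrary. It builds the two block matrices of (1.203), confirms the factorization (1.202), and checks the product rule (1.204).

    import numpy as np

    rng = np.random.default_rng(1)
    N = 3
    A = rng.normal(size=(N, N))
    B = rng.normal(size=(N, N))
    I = np.eye(N)
    Z = np.zeros((N, N))

    # The two sides of (1.203) as 2N x 2N block matrices.
    lhs = np.block([[A, Z], [-I, B]])
    rhs = np.block([[A, A @ B], [-I, Z]])

    # Multiplying lhs by [[I, B], [0, I]] gives rhs, as in (1.202).
    assert np.allclose(lhs @ np.block([[I, B], [Z, I]]), rhs)

    # Their determinants agree, and both equal det A det B = det AB (1.204).
    assert np.isclose(np.linalg.det(lhs), np.linalg.det(rhs))
    assert np.isclose(np.linalg.det(lhs), np.linalg.det(A) * np.linalg.det(B))
    assert np.isclose(np.linalg.det(rhs), np.linalg.det(A @ B))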
44 Linear Algebra

remains true when the matrix A replaces the variable λ

P(A, A) = \sum_{k=0}^{N} p_k A^k = 0. \qquad (1.258)

To see why, we use the formula (1.196)

\delta_{k\ell} \det A = \sum_{i=1}^{N} A_{ik} C_{i\ell} \qquad (1.259)

to write the determinant |A − λI| = P(λ, A) as the product of the matrix A − λI and the transpose of its matrix of cofactors

(A - \lambda I)\, C(\lambda, A)^T = |A - \lambda I|\, I = P(\lambda, A)\, I. \qquad (1.260)

The transpose of the matrix of cofactors of the matrix A − λI is a polynomial in λ with matrix coefficients

C(\lambda, A)^T = C_0 + C_1 \lambda + \cdots + C_{N-1} \lambda^{N-1}. \qquad (1.261)

The left-hand side of equation (1.260) is then

(A - \lambda I)\, C(\lambda, A)^T = A C_0 + (A C_1 - C_0)\lambda + (A C_2 - C_1)\lambda^2 + \cdots + (A C_{N-1} - C_{N-2})\lambda^{N-1} - C_{N-1}\lambda^N. \qquad (1.262)

Equating equal powers of λ on both sides of (1.260), we have, using (1.257) and (1.262),

A C_0 = p_0 I
A C_1 - C_0 = p_1 I
A C_2 - C_1 = p_2 I
\cdots \qquad (1.263)
A C_{N-1} - C_{N-2} = p_{N-1} I
- C_{N-1} = p_N I.

We now multiply from the left the first of these equations by I, the second by A, the third by A², ..., and the last by A^N, and then add the resulting equations. All the terms on the left-hand sides cancel, while the sum of those on the right gives P(A, A). Thus a square matrix A obeys its characteristic equation 0 = P(A, A), or

0 = \sum_{k=0}^{N} p_k A^k = |A|\, I + p_1 A + \cdots + (-1)^{N-1} (\mathrm{Tr}\, A)\, A^{N-1} + (-1)^N A^N, \qquad (1.264)

a result known as the Cayley–Hamilton theorem (Arthur Cayley, 1821–1895, and William Hamilton, 1805–1865).
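A minimal numerical check of the Cayley–Hamilton theorem, not from the text and assuming numpy: np.poly(A) returns the coefficients of det(λI − A), which differs from P(λ, A) = det(A − λI) only by the overall factor (−1)^N, so A annihilates both polynomials.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 4
    A = rng.normal(size=(N, N))

    # Coefficients [1, c_1, ..., c_N] of the characteristic polynomial
    # det(lambda I - A), highest power first.
    c = np.poly(A)

    # Evaluate the polynomial at the matrix A itself by Horner's rule:
    # P = c_0 A^N + c_1 A^(N-1) + ... + c_N I.
    P = np.zeros((N, N))
    for coeff in c:
        P = P @ A + coeff * np.eye(N)

    # Cayley-Hamilton: a square matrix obeys its characteristic equation.
    assert np.allclose(P, np.zeros((N, N)), atol=1e-8)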