Introduction to Quantum Mechanics Unit 0. Review of Linear Algebra
A. Linear Space

1. Definition: A linear space is a collection S of vectors |a>, |b>, |c>, … (usually an infinite number of them) on which vector addition + is defined so that

(i) (S, +) forms an Abelian group, i.e.,
    S is closed under +
    + is associative: (|a>+|b>)+|c> = |a>+(|b>+|c>)
    + is commutative: |a>+|b> = |b>+|a>
    Existence of identity: |a> + |0> = |a> for all |a>
    Existence of inverse: |a> + |−a> = |0>

and scalar multiplication * by a scalar (usually a complex number) is defined so that
(ii) S is closed under *
(iii) * is compatible with complex-number multiplication: α(β|a>) = (αβ)|a>
(iv) 1|a> = |a> (and hence 0|a> = |0> and (−1)|a> = |−a>)
(v) * and + are distributive: (α+β)|a> = α|a> + β|a> and α(|a>+|b>) = α|a> + α|b>

2. Basis and dimension
(i) Definition of a basis: A basis is a collection of vectors B = {|a1>, |a2>, |a3>, …, |aN>} such that any vector in S can be written as a linear combination of |a1>, |a2>, |a3>, …, |aN>:
    |a> = α1|a1> + α2|a2> + α3|a3> + … + αN|aN>
where α1, α2, …, αN are complex numbers. Furthermore, the vectors of B = {|a1>, |a2>, |a3>, …, |aN>} are independent of each other, i.e.
    |0> = α1|a1> + α2|a2> + α3|a3> + … + αN|aN> if and only if α1 = α2 = … = αN = 0
(ii) The choice of a basis is not unique.
(iii) However, all bases have the same number of elements. This number (say, N) depends only on the linear space S, and it is known as the dimension of the linear space S.
(iv) If the basis has too few elements (<N), it is not enough to generate all vectors in the linear space.
(v) If the basis has too many elements (>N), they will not be independent of each other.
(vi) Note that S can have an infinite number of elements even though its dimension is finite.

3. Coordinate and column matrix
(i) For a particular basis B = {|a1>, |a2>, |a3>, …, |aN>}, we can express any vector as a linear combination of the basis vectors:
    |a> = α1|a1> + α2|a2> + α3|a3> + … + αN|aN>
where α1, α2, …, αN are complex numbers. α1, α2, α3, …, αN are called the coordinates of the vector |a>.
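As a concrete illustration of finding coordinates in a basis, here is a short NumPy sketch (the basis vectors and the vector |a> are made-up examples, not taken from the notes): the coordinates are obtained by solving the linear system that expresses |a> as a combination of the basis vectors.

```python
import numpy as np

# A made-up basis B = {|a1>, |a2>} of a 2-dimensional complex space,
# written as column vectors in some fixed reference coordinates.
a1 = np.array([1.0 + 0j, 1.0 + 0j])
a2 = np.array([1.0 + 0j, -1.0 + 0j])
B = np.column_stack([a1, a2])   # basis vectors as matrix columns

# An arbitrary vector |a> in the same reference coordinates.
a = np.array([2.0 + 1j, 0.0 + 1j])

# Its coordinates (alpha_1, alpha_2) in the basis B satisfy
# |a> = alpha_1 |a1> + alpha_2 |a2>, i.e. B @ alpha = a.
alpha = np.linalg.solve(B, a)

# Reconstructing |a> from the coordinates recovers the original vector.
assert np.allclose(alpha[0] * a1 + alpha[1] * a2, a)
```

Because the basis vectors are independent, the matrix B is invertible and the coordinates are unique, matching point 2(i) above.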
(ii) Note that α1, α2, α3, …, αN are complex numbers.
(iii) For an N-dimensional space, each vector within the space can be expressed by N coordinates.
(iv) We can write the coordinates as a column matrix:

          [ α1 ]
    |a> = [ α2 ]
          [ α3 ]
          [ ⋮  ]
          [ αN ]

(v) A column matrix corresponds to a contravariant rank-1 tensor.
(vi) The above notation is somewhat confusing, because the coordinates of a vector depend on the choice of the basis. In other words, a column matrix of complex numbers has no meaning unless we know the basis it corresponds to.
(vii) If necessary, we will write the name of the basis next to the column like this:

            [ α1 ]
    |a> = B [ α2 ]
            [ α3 ]
            [ ⋮  ]
            [ αN ]

We say the vector |a> is presented under the "B-representation".
(viii) If we use another basis C, the column matrix will be different for the same vector |a>.

B. Linear Transformation

1. Definition
(i) A linear transformation T is a mapping of vectors from a linear space S1 to another linear space S2 (T: S1 → S2) such that, for all |a1>, |a2> ∈ S1 and complex numbers α1, α2,
    T(α1|a1> + α2|a2>) = α1 T(|a1>) + α2 T(|a2>)
(ii) Note that in general S1 and S2 can be two completely different spaces of different dimensions.
(iii) We do not need to define a linear transformation over all vectors within the space. Once we know how the vectors in a particular basis of S1 are transformed, the linear transformation is defined. For example, if B1 = {|a1>, |a2>, |a3>, …, |aN>} is a particular basis of S1 and we know T(|a1>), T(|a2>), …, T(|aN>), then for any vector |a> in S1:
    |a> = α1|a1> + α2|a2> + α3|a3> + … + αN|aN>
    ⇒ T(|a>) = α1 T(|a1>) + α2 T(|a2>) + α3 T(|a3>) + … + αN T(|aN>)
(iv) T can be written in matrix form, with respect to certain bases in S1 and S2. Let B1 = {|a1>, |a2>, |a3>, …, |aN>} be a basis in S1 and B2 = {|c1>, |c2>, |c3>, …, |cM>} be a basis in S2.
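To make point (viii) concrete, the following NumPy sketch (with made-up bases B and C) shows that the same vector |a> has different column matrices in different bases, even though both columns describe the same object:

```python
import numpy as np

# Two made-up bases of the same 2-dimensional space, as matrix columns.
B = np.column_stack([[1, 0], [0, 1]]).astype(complex)   # basis B
C = np.column_stack([[1, 1], [1, -1]]).astype(complex)  # basis C

# One fixed vector |a>, written in reference coordinates.
a = np.array([3.0 + 0j, 1.0 + 0j])

coords_B = np.linalg.solve(B, a)  # B-representation of |a>
coords_C = np.linalg.solve(C, a)  # C-representation of |a>

# The column matrices differ...
assert not np.allclose(coords_B, coords_C)
# ...but both reconstruct the same vector |a>.
assert np.allclose(B @ coords_B, a) and np.allclose(C @ coords_C, a)
```

This is exactly why a bare column of numbers is meaningless without naming its basis.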
If
    T(|a1>) = α11|c1> + α21|c2> + α31|c3> + … + αM1|cM>
    T(|a2>) = α12|c1> + α22|c2> + α32|c3> + … + αM2|cM>
    …
    T(|aN>) = α1N|c1> + α2N|c2> + α3N|c3> + … + αMN|cM>
we can write the linear transformation as a matrix:

        [ α11 α12 α13 … α1N ]
        [ α21 α22 α23 … α2N ]
    T = [ α31 α32 α33 … α3N ]
        [  ⋮    ⋮    ⋮    ⋮ ]
        [ αM1 αM2 αM3 … αMN ]

(v) The above notation is somewhat confusing, because the matrix representation of T depends on the choice of the two bases in S1 and S2. We may want to write the names of the bases at the side of the matrix as a reminder. However, we must be more careful this time because a matrix has two "directions": up/down and left/right. In the above matrix, note that the horizontal index (the second subscript, running 1 to N) runs through the basis B1 of S1 and the vertical index (the first subscript, running 1 to M) runs through the basis B2 of S2. Hence, we should write it this way:

              B1
        [ α11 α12 α13 … α1N ]
        [ α21 α22 α23 … α2N ]
 T = B2 [ α31 α32 α33 … α3N ]
        [  ⋮    ⋮    ⋮    ⋮ ]
        [ αM1 αM2 αM3 … αMN ]

We will discuss this procedure more carefully when we discuss the N×N matrix.
(vi) Now the linear transformation of any other vector can be calculated easily with matrix multiplication. If

             [ β1 ]
    |a> = B1 [ β2 ]
             [ β3 ]
             [ ⋮  ]
             [ βN ]

then

                  B1
    T(|a>) = B2 [ α11 α12 α13 … α1N ]    [ β1 ]       [ γ1 ]
                [ α21 α22 α23 … α2N ] B1 [ β2 ]  = B2 [ γ2 ]
                [ α31 α32 α33 … α3N ]    [ β3 ]       [ γ3 ]
                [  ⋮    ⋮    ⋮    ⋮ ]    [ ⋮  ]       [ ⋮  ]
                [ αM1 αM2 αM3 … αMN ]    [ βN ]       [ γM ]

Note how the name B1 disappears during the matrix multiplication. We start with an S1-vector in the B1 representation, and end up with an S2-vector in the B2 representation.
Rule: For a matrix multiplication to be meaningful, the multiplying row and column must have the same basis, and that basis name disappears after the multiplication. This corresponds to the Einstein summation convention in tensor multiplication.

2.
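The construction above, column j of the matrix holding the B2-coordinates of T(|aj>), can be sketched in NumPy (the specific coordinates of T(|a1>) and T(|a2>) below are invented for illustration):

```python
import numpy as np

# A made-up linear map T from a 2-dimensional space S1 (basis B1) to a
# 3-dimensional space S2 (basis B2). Column j of the matrix holds the
# B2-coordinates of T(|aj>).
T_a1 = np.array([1, 0, 2], dtype=complex)   # B2-coordinates of T(|a1>)
T_a2 = np.array([0, 1, 1j], dtype=complex)  # B2-coordinates of T(|a2>)
T = np.column_stack([T_a1, T_a2])           # M x N = 3 x 2 matrix

# A vector |a> = 2|a1> + 3|a2>, i.e. B1-coordinates (2, 3).
beta = np.array([2, 3], dtype=complex)

# T(|a>) in B2-coordinates, by matrix multiplication...
gamma = T @ beta
# ...agrees with linearity: T(|a>) = 2 T(|a1>) + 3 T(|a2>).
assert np.allclose(gamma, 2 * T_a1 + 3 * T_a2)
```

Note that the input column has N = 2 entries and the output has M = 3, matching the rule that T maps B1-columns to B2-columns.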
Linear functional – one important kind of linear transformation
(i) Consider a linear transformation T mapping vectors in the linear space S to complex numbers, T: S → C.
(ii) Define addition and scalar multiplication on these transformations according to the operations in the linear space S:
    (T1 + T2)|a> = T1(|a>) + T2(|a>)
    (αT)|a> = T(α|a>)
for all vectors |a> in S. In this way, we can write linear functionals as linear combinations of one another.
(iii) All linear functionals form a linear space S*, with the same dimension (N) as the original linear space S (when the dimension is finite). S* is called the dual space of S.
Example: Reciprocal space is the dual space of real space. A first-rank covariant tensor is dual to a first-rank contravariant tensor.
(iv) We will use the notation <a| to represent an element of S*. Hence <a|b> means a linear functional <a| in S* is acting on a vector |b> in S, and the result <a|b> is a complex number. In quantum mechanics, this is known as the Dirac notation: <a| is called a bra, and |b> is called a ket. Since <a| and |b> are well-defined, unique objects in S* and S, the value <a|b> is independent of the choice of basis.
(v) Since S and S* have the same dimension, we can easily set up a one-to-one relationship between the elements of these two spaces. For example, we can pick A = {|a1>, |a2>, |a3>, …, |aN>} from S and B = {<b1|, <b2|, <b3|, …, <bN|} from S* and set up a correspondence like:
    |a1> ↔ <b1|,  |a2> ↔ <b2|,  |a3> ↔ <b3|,  …,  |aN> ↔ <bN|
We say a metric has been defined for the vector space S. Obviously there are infinitely many ways to set up this relationship.
(vi) In the above relationship, we say <b1| is the dual vector, or dual image, of |a1>, <b2| is the dual vector of |a2>, and so on. Under this metric definition, every vector in S can find a dual image in S* according to the following rule:
    If |y> = β1|a1> + β2|a2> + β3|a3> + … + βN|aN>, then its dual vector is
    <z| = β1*<b1| + β2*<b2| + β3*<b3| + … + βN*<bN|
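The conjugation rule for dual vectors can be illustrated numerically. In the sketch below we assume the standard metric in which the dual basis pairs with the original basis as <bi|aj> = δij, so the bra dual to a ket is simply the conjugate of its column of coordinates (the coordinates themselves are made up):

```python
import numpy as np

# Made-up coordinates beta_i of a ket |y> in some basis {|a_i>}.
y = np.array([1 + 2j, 3 - 1j])

# Under the metric <b_i|a_j> = delta_ij, the dual bra <z| has the
# conjugated coordinates beta_i*, i.e. it acts as a conjugated row vector.
z = y.conj()

# Acting with <z| on |y> gives sum_i beta_i* beta_i = sum_i |beta_i|^2.
braket = z @ y
assert np.isreal(braket) and braket.real >= 0
```

The complex conjugate is exactly what forces <z|y> to come out real and non-negative, which is the point of the next paragraph.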
Note the complex conjugate in the above expression.
(vii) Because of the complex conjugate in the above expression, we have the following nice properties:
    <x|x> is real and ≥ 0
    [<x|x>]^(1/2) is called the norm, or simply the length, of the vector |x>
    |x> is normalized if <x|x> = 1
    <x|y> = <y|x>*
Also note that scalars pass through the bracket, (α<a|)|b> = <a|(α|b>) = α<a|b>, while the dual of α|a> is α*<a|.
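The properties in (vii) can be checked directly with a short NumPy sketch (again assuming the standard metric, where <u|v> is the conjugate-transpose dot product; the vectors are made-up examples):

```python
import numpy as np

# Made-up coordinates for two kets |x> and |y>.
x = np.array([1 + 1j, 2 - 1j])
y = np.array([0 + 1j, 1 + 0j])

def braket(u, v):
    # <u|v> = sum_i u_i* v_i ; np.vdot conjugates its first argument.
    return np.vdot(u, v)

# <x|x> is real and >= 0, and its square root is the norm of |x>.
norm_x = np.sqrt(braket(x, x).real)
assert braket(x, x).imag == 0 and norm_x >= 0

# Dividing by the norm gives a normalized vector: <x'|x'> = 1.
x_normalized = x / norm_x
assert np.isclose(braket(x_normalized, x_normalized).real, 1.0)

# Conjugate symmetry: <x|y> = <y|x>*.
assert np.isclose(braket(x, y), np.conj(braket(y, x)))
```

Each assertion mirrors one bullet of (vii): reality and non-negativity of <x|x>, normalization, and conjugate symmetry.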