
Dirac Notation

We will introduce the notion of a (finite-dimensional) linear vector space, V, informally. For a rigorous discussion see, for example, P. R. Halmos, "Finite-Dimensional Vector Spaces". A linear vector space, V, is a set of vectors, with an abstract vector denoted by |v⟩ (and read "ket vee"). This notation, introduced by Paul Adrien Maurice Dirac (1902-1984), is elegant and extremely useful, and it is imperative that you master it.¹

The space is endowed with the operation of addition (+) for each pair of vectors and multiplication by "scalars" (which in our case belong to the field of complex numbers, C): if |u⟩ and |v⟩ are vectors, so are |u⟩ + |v⟩ and c|v⟩, where c is a complex number (c ∈ C).

Vector addition is commutative: |u⟩ + |v⟩ = |v⟩ + |u⟩.

Vector addition is associative: |u⟩ + ( |v⟩ + |w⟩ ) = ( |u⟩ + |v⟩ ) + |w⟩.

Scalar multiplication is distributive in (a) the scalars: ( c1 + c2 )|v⟩ = c1|v⟩ + c2|v⟩ for c1, c2 ∈ C, and (b) the vectors: c( |u⟩ + |v⟩ ) = c|u⟩ + c|v⟩ for c ∈ C.

Scalar multiplication is associative: c1( c2|v⟩ ) = (c1 c2)|v⟩.

There exists a null vector, denoted by |0⟩, such that |v⟩ + |0⟩ = |v⟩. One can show the uniqueness of the null vector, and that |0⟩ = 0|v⟩ for any vector |v⟩.

For every ket |v⟩ there exists a vector, denoted by |−v⟩, such that |v⟩ + |−v⟩ = |0⟩. One can show that the inverse is unique; we also have |v⟩ + (−1)|v⟩ = 0|v⟩ = |0⟩, so we have denoted (−1)|v⟩ by |−v⟩.

Roughly speaking, we can summarize the axioms by saying that all the normal operations with which you are familiar from ordinary vectors and scalars are legal.

One can endow the linear vector space with an inner product (the generalization of the dot product) to make it an inner product space. The inner product is a complex number denoted by ⟨u|v⟩. This is represented by the bracket symbol, hence the terms "bra" for ⟨u| and "ket" for |v⟩. The inner product has the following properties:

(P1) ⟨u|v⟩ = ⟨v|u⟩*.
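Property (P1) is easy to check numerically. The following is an illustrative sketch of our own (not part of the notes): kets are represented as Python lists of built-in complex numbers, and the inner product is computed from its component formula ⟨u|v⟩ = Σ_j u_j* v_j (derived later in these notes); the helper name `inner` is our own choice.

```python
# Sketch (our own illustration): kets as lists of complex numbers,
# inner product <u|v> = sum_j conj(u_j) * v_j.

def inner(u, v):
    """Inner product <u|v>: conjugate the bra's components, then sum."""
    return sum(uj.conjugate() * vj for uj, vj in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 + 0j, 1 + 1j]

# Property (P1): <u|v> = <v|u>*
print(inner(u, v) == inner(v, u).conjugate())   # True
```

Note that conjugate symmetry is what makes ⟨v|v⟩ real, as property (P3) below requires.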
Since we have defined the vector space over complex numbers, this is different from the more familiar case of a vector space over real numbers, in which case the dot product is real and the order is immaterial.

¹ The seminal book by Dirac, The Principles of Quantum Mechanics, published in 1930, introduced the formalism of quantum mechanics in general and this notation in particular. This is not an easy book to read!

(P2) Let |u′⟩ = c1|v′⟩ + c2|w′⟩; then ⟨u|u′⟩ = c1⟨u|v′⟩ + c2⟨u|w′⟩, i.e., the inner product is linear in the kets.

(P3) Clearly ⟨v|v⟩ is real, and it is defined to be non-negative: ⟨v|v⟩ ≥ 0, with equality if and only if |v⟩ = |0⟩.

From properties P1 and P2 we also have ⟨u′|u⟩ = c1*⟨v′|u⟩ + c2*⟨w′|u⟩. Show this! Note therefore that if |u′⟩ = c1|v′⟩ + c2|w′⟩, then ⟨u′| = c1*⟨v′| + c2*⟨w′|. Observe the complex conjugation.

Two vectors, |u⟩ and |v⟩, are said to be orthogonal if ⟨u|v⟩ = 0, just as with ordinary vectors. The norm or length of a vector |v⟩ is defined to be the non-negative real number √⟨v|v⟩; it is sometimes denoted by ‖v‖.

Our definition of the inner product has tacitly used the definition of a bra, ⟨u|. The bra vectors are in one-to-one correspondence with the ket vectors: with every |v⟩ we associate a unique ⟨v|, so the bras share the same linear vector space structure as the kets. The one key property which makes the definition of the scalar product well defined, as observed above, is that the bra associated with the ket |w⟩ = c1|u⟩ + c2|v⟩ is ⟨w| = c1*⟨u| + c2*⟨v|.

A set of l vectors |v1⟩, |v2⟩, …, |vl⟩ is said to be linearly independent if

    Σ_{j=1}^{l} c_j |v_j⟩ = 0   implies   c_j = 0 for every j.    (1)

Exercise: In ordinary three-dimensional space, write down three vectors which are mutually orthogonal. Are they linearly independent? Give a set of three linearly independent vectors which are not orthogonal.
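The exercise above can be checked numerically. The following sketch is our own illustration (helper names `dot` and `det3` are ours): it uses the ordinary real dot product for orthogonality and the 3×3 determinant as a linear-independence test.

```python
# Sketch (our own): mutual orthogonality vs. linear independence in 3-space.

def dot(a, b):
    """Ordinary real dot product."""
    return sum(x * y for x, y in zip(a, b))

def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c;
    nonzero iff the three vectors are linearly independent."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# Mutually orthogonal vectors -- and indeed linearly independent:
x, y, z = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(dot(x, y), dot(y, z), dot(z, x))   # 0 0 0
print(det3(x, y, z))                     # 1  (nonzero: independent)

# Linearly independent but NOT orthogonal:
a, b, c = [1, 0, 0], [1, 1, 0], [1, 1, 1]
print(dot(a, b))                         # 1  (not orthogonal)
print(det3(a, b, c))                     # 1  (nonzero: still independent)
```

This illustrates that orthogonality of nonzero vectors implies linear independence, while the converse fails.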
A set of vectors is said to constitute a basis if it is linearly independent and spans the space, i.e., if every vector |v⟩ in the space can be expressed as a linear combination of the elements of the basis set (with complex coefficients). A linear space is said to be n-dimensional if and only if it has a basis of n vectors. Clearly, in an n-dimensional vector space one can express any vector |v⟩ as a linear combination of a set of n linearly independent vectors; if this were not possible we would have n + 1 linearly independent vectors, contradicting our assumption.

The basis vectors can be made mutually orthogonal and normalized to unity. We will use the notation |e_j⟩, for j = 1, 2, …, n, for one such set of n orthonormal vectors:

    ⟨e_i|e_j⟩ = δ_ij ,    (2)

where δ_ij is the Kronecker delta, which is one if the two indices are the same and zero otherwise.

Every vector (ket) |v⟩ can be expanded in terms of the orthonormal basis as

    |v⟩ = Σ_{j=1}^{n} v_j |e_j⟩ .    (3)

Take the inner product of the above equation with ⟨e_i|; upon using the orthonormality of the basis vectors one obtains the useful result v_i = ⟨e_i|v⟩.² Similarly, the corresponding bra vector can be expressed in terms of the dual basis:

    ⟨v| = Σ_{j=1}^{n} ⟨e_j| v_j* = Σ_{j=1}^{n} v_j* ⟨e_j| .

Note that sometimes the complex number v_j* is placed before the bra, as in the last expression. It represents the same linear combination of the basis bras, multiplied by the complex numbers v_j*.

One concrete realization of this formalism is obtained by thinking of the abstract vectors in a specific basis and associating with each ket a column vector whose elements are complex numbers. Clearly we can add such column vectors (assuming all of them have the same number of components) and multiply them by complex numbers, such that they obey the axioms.
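Equations (2) and (3) can be verified concretely. The sketch below is our own illustration: it builds the standard orthonormal basis as lists of complex numbers, checks ⟨e_i|e_j⟩ = δ_ij, and recovers the expansion coefficients of a ket by projection, v_j = ⟨e_j|v⟩.

```python
# Sketch (our own illustration) of Eqs. (2)-(3): orthonormality of the
# standard basis, and components recovered by projection v_j = <e_j|v>.

def inner(u, v):
    """Inner product <u|v> = sum_j conj(u_j) * v_j."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

n = 3
# Standard basis kets |e_1>, |e_2>, |e_3>: the j-th entry is 1, the rest 0.
basis = [[1 + 0j if i == j else 0j for i in range(n)] for j in range(n)]

# Orthonormality, Eq. (2): <e_i|e_j> = delta_ij
assert all(inner(basis[i], basis[j]) == (1 if i == j else 0)
           for i in range(n) for j in range(n))

v = [2 + 1j, 0 - 1j, 3 + 0j]

# Components by projection: v_j = <e_j|v>
components = [inner(e, v) for e in basis]
print(components == v)   # True
```

In this (standard) basis the projections simply read off the entries; in a different orthonormal basis the same formula v_j = ⟨e_j|v⟩ still holds, but the components change.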
Now one can think of the bra associated with each ket as simply the complex-conjugated transpose (sometimes referred to as the adjoint) of the column vector: it is a row vector, with each element being the complex conjugate of the corresponding element of the column. Explicitly, with |v⟩ in an n-dimensional space we associate a column vector (in a particular basis), written here as the transpose of a row,

    |v⟩ → ( v1, v2, v3, …, vn )ᵀ ,    (4)

where the v_j are complex numbers. Note that if one chooses a different basis, the same abstract vector |v⟩ is represented by a column vector with different entries. The corresponding bra is given by the row vector

    ⟨v| → ( v1*, v2*, v3*, …, vn* ) .    (5)

This concrete identification is helpful in keeping track of the concepts of bras and kets. In more mathematical parlance, the space of bras and the space of kets are dual to each other. The dot product is given by the row vector for ⟨u| multiplying the column vector for |v⟩:

    ⟨u|v⟩ = ( u1*, u2*, u3*, …, un* ) ( v1, v2, v3, …, vn )ᵀ = Σ_{j=1}^{n} u_j* v_j .    (6)

It is obvious, for example, that ⟨u|v⟩ = ⟨v|u⟩*. The various properties of the inner product are also clear in this representation.

Note that |v⟩⟨u| is a very different beast from ⟨u|v⟩. It is clear from the representation that we have employed that it is an n × n matrix, i.e., it is an operator! Please write out the operator explicitly. Once more, the advantage of the formal notation is that many general results can be proved compactly, without explicitly writing out vectors and operators in a basis.

² Apart from the unfamiliar notation, this is nothing more than what you know from vector algebra: given a⃗ = a_x î + a_y ĵ + a_z k̂, then a_y = ĵ · a⃗, generalized to n dimensions and complex vectors.

The basis vectors (of the chosen basis) are then given by

    |e1⟩ → ( 1, 0, 0, …, 0 )ᵀ ,   |e2⟩ → ( 0, 1, 0, …, 0 )ᵀ ,   etc.    (7)

The jth element of |e_j⟩ is 1 while all the other elements are 0.
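The remark that |v⟩⟨u| is an n × n matrix can be written out concretely. The sketch below is our own illustration (helper names are ours): the (i, j) entry of |v⟩⟨u| is v_i u_j*, and acting on any ket |w⟩ this operator returns ⟨u|w⟩ |v⟩.

```python
# Sketch (our own illustration): the outer product |v><u| as an n x n
# matrix with entries v_i * conj(u_j); acting on |w> it gives <u|w> |v>.

def inner(u, v):
    """Inner product <u|v> = sum_j conj(u_j) * v_j."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def outer(v, u):
    """Matrix of |v><u| in the chosen basis: entry (i, j) = v_i * conj(u_j)."""
    return [[vi * uj.conjugate() for uj in u] for vi in v]

def apply(M, w):
    """Matrix M acting on the column vector w."""
    return [sum(M[i][j] * w[j] for j in range(len(w))) for i in range(len(M))]

v = [1 + 0j, 0 + 2j]
u = [3 + 0j, 1 - 1j]
w = [1 + 1j, 2 + 0j]

P = outer(v, u)                          # a 2 x 2 matrix, i.e., an operator
lhs = apply(P, w)                        # (|v><u|) |w>
rhs = [inner(u, w) * vi for vi in v]     # <u|w> |v>
print(lhs == rhs)   # True
```

Note the contrast: ⟨u|v⟩ is a single complex number, while |v⟩⟨u| maps kets to kets.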
It is easy to check that these n unit vectors constitute an orthonormal basis. The representation of any complex vector in terms of the basis vectors is obvious.

Linear Operators in Dirac notation

We define an operator Â as a map that associates with each vector |u⟩ belonging to the linear vector space V a vector |w⟩; this is represented by Â|u⟩ = |w⟩. An operator is said to be linear if it obeys

    Â [ c1|u⟩ + c2|v⟩ ] = c1 Â|u⟩ + c2 Â|v⟩

for any pair of vectors |u⟩ and |v⟩ and any pair of complex numbers c1 and c2. The linear operators themselves form a linear space, in that the sum of two operators Â and B̂ is defined by

    ( Â + B̂ )|u⟩ = Â|u⟩ + B̂|u⟩

and multiplication by a complex scalar is defined by the action of cÂ on any ket as follows:

    (cÂ)|v⟩ = c( Â|v⟩ ) ,    (8)

so that we have

    Ĉ = c1Â + c2B̂  ⟹  Ĉ|v⟩ = c1 Â|v⟩ + c2 B̂|v⟩ for all |v⟩ .    (9)
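In the column-vector representation a linear operator is just an n × n complex matrix, and the linearity condition becomes ordinary matrix arithmetic. A minimal sketch of our own (the component-swap matrix A is purely illustrative):

```python
# Sketch (our own illustration): a linear operator as an n x n complex
# matrix, checking A[c1|u> + c2|v>] = c1 A|u> + c2 A|v>.

def apply(A, v):
    """Matrix A acting on the ket (column vector) v."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

A = [[0 + 0j, 1 + 0j],
     [1 + 0j, 0 + 0j]]          # illustrative operator: swap the components

u = [1 + 1j, 2 + 0j]
v = [0 + 1j, 1 - 1j]
c1, c2 = 2 + 0j, 0 + 3j

combo = [c1 * a + c2 * b for a, b in zip(u, v)]        # c1|u> + c2|v>
lhs = apply(A, combo)                                  # A acting on the sum
rhs = [c1 * a + c2 * b for a, b in zip(apply(A, u), apply(A, v))]
print(lhs == rhs)   # True
```

The same check works for the sum ( Â + B̂ ) and the scalar multiple (cÂ) defined in Eqs. (8) and (9): entrywise matrix addition and scaling realize those definitions.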