Operator and Matrix Analysis

2.1 INTRODUCTION

In quantum mechanics and quantum statistics (quantum statistical mechanics), as well as in certain other areas of physics, physical quantities are represented by linear operators on a vector (Hilbert) space. While this approach may seem far removed from an experimental process, it makes possible the development of these subject areas by use of rigorous mathematical procedures. Moreover, agreement between calculated results obtained by use of this formal (or abstract) approach and experimentally measured values has given credence to this formulation. Mathematical operations involving linear operators are often carried out by use of matrices. Hence a knowledge of vector spaces, linear operators, and matrix analysis is required in many areas of physics. We therefore begin this chapter with a brief discussion of vector spaces and linear operators; the remainder of the chapter is devoted to a development of matrix analysis.


2.2 RUDIMENTS OF VECTOR SPACES

2.2.1 Definition of a Vector Space

A vector space (linear space or linear manifold) $V_n$ is a set of elements, $\psi_1, \psi_2, \ldots, \psi_n = \{\psi\}$, called vectors, for which the operations of addition and multiplication by a scalar defined below are valid. The term "vector," in this context, is used in an abstract mathematical sense; it is a generalization of the definition given in Chapter 1 to cases of arbitrary dimension. In a strict mathematical sense, a space is a set of elements (vectors, points, functions, or any abstract quantities) for which certain defined mathematical operations are valid. We will call the set of elements vectors and require that they satisfy the following relations.

A. Addition of Vectors

1. For any two vectors $\psi_i$ and $\psi_j$ in $V_n$, there exists the sum $\psi_i + \psi_j$ in $V_n$ such that $\psi_i + \psi_j = \psi_j + \psi_i$.

2. For vectors $\psi_i$, $\psi_j$, and $\psi_k$ in $V_n$, there exists $\psi_i + (\psi_j + \psi_k)$ in $V_n$ such that

$$\psi_i + (\psi_j + \psi_k) = (\psi_i + \psi_j) + \psi_k.$$

3. There exists a unique vector 0 (zero or null vector) in $V_n$ such that $\psi_i + 0 = \psi_i$ for any $\psi_i$ in $V_n$.

4. For each vector $\psi$ in $V_n$, there exists a unique vector $-\psi$ in $V_n$ such that $\psi + (-\psi) = 0$.

B. Multiplication of Vectors by Scalars

5. For vectors $\psi_i$ and $\psi_j$ in $V_n$, there exist vectors $\alpha(\psi_i + \psi_j)$, $(\alpha + \beta)\psi_i$, and $\alpha(\beta\psi_i)$ in $V_n$ such that

$$\alpha(\psi_i + \psi_j) = \alpha\psi_i + \alpha\psi_j$$
$$(\alpha + \beta)\psi_i = \alpha\psi_i + \beta\psi_i$$
$$\alpha(\beta\psi_i) = (\alpha\beta)\psi_i$$

where $\alpha$ and $\beta$ are two scalars (real or complex numbers).

6. For the scalars 0 and 1, the following respective products exist:

$$0\cdot\psi = 0 \quad\text{and}\quad 1\cdot\psi = \psi.$$

2.2.2 Linear Dependence

A set of vectors $\{\psi_i\}$ is said to be linearly dependent if there exists a corresponding set of scalars $\{a_i\}$, not all zero, such that

$$\sum_{i=1}^{n} a_i\psi_i = 0. \tag{2.1}$$

If $\sum_{i=1}^{n} \beta_i\psi_i = 0$ implies that $\beta_i = 0$ for all $i$, then the set of vectors $\{\psi_i\}$ is said to be linearly independent. Here $\{\beta_i\}$ is a set of scalars.
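This criterion is easy to check numerically: stack the vectors as columns of a matrix and compare its rank with the number of vectors. A minimal NumPy sketch (the sample vectors are illustrative, not from the text):

```python
import numpy as np

# Columns are candidate vectors psi_1, psi_2, psi_3 in a 3-dimensional space.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])   # third column = first + second

# Rank < number of vectors means some nontrivial combination sums to zero.
independent = np.linalg.matrix_rank(V) == V.shape[1]
print(independent)   # False: the set is linearly dependent
```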

2.2.3 Dimensionality of a Vector Space

A vector space is said to be n-dimensional if it contains precisely $n$ linearly independent vectors. A vector space is called infinite-dimensional if there exists an arbitrarily large (but countable) number of linearly independent vectors in the space. If an arbitrary vector $\phi$ in $V_n$ can be represented as a linear combination of the vectors $\{\psi_i\}$ in $V_n$ and scalars $\{a_i\}$,

$$\phi = \sum_{i=1}^{n} a_i\psi_i \tag{2.2}$$

then $\{\psi_i\}$ is said to span the vector space $V_n$. A linearly independent set of vectors $\{\psi_i\}$ that spans a vector space $V_n$ is called a basis for $V_n$. For example, the i, j, k unit vectors described in Chapter 1 form a basis for three-dimensional Cartesian space (a three-dimensional vector space).

2.2.4 Inner Product

A Euclidean (Euclid, about 300 B.C.) space $E_n$ is a vector space on which an inner (scalar) product is defined. The inner product of two vectors $\psi$ and $\phi$ is denoted by

$$(\psi, \phi). \tag{2.3}$$

The following properties are valid for the inner product:

$$(\psi, \phi + \xi) = (\psi, \phi) + (\psi, \xi)$$
$$(\psi + \phi, \xi) = (\psi, \xi) + (\phi, \xi)$$

and $(\psi, \psi) > 0$ (unless $\psi = 0$).

The norm (length), $\|\psi\|$, of a vector is defined as

$$\|\psi\| = (\psi, \psi)^{1/2}. \tag{2.4}$$

If the inner product of two vectors equals zero,

$$(\psi_i, \psi_j) = 0 \quad\text{for } i \neq j\ (i, j = 1, 2, \ldots, n),\ \psi_i \neq 0 \text{ and } \psi_j \neq 0,$$

the vectors are said to form an orthogonal set. If the norm of each vector within an orthogonal set is unity,

$$\|\psi_i\| = 1,$$

the set is called orthonormal.
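An orthonormal set can be manufactured from any linearly independent set by the Gram-Schmidt procedure, which repeatedly subtracts projections and normalizes. A short sketch, assuming the complex inner product $(\psi, \phi) = \sum_i \psi_i^*\phi_i$ (the function name and sample vectors are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal set spanning the same space as `vectors`,
    using the inner product (psi, phi) = sum(conj(psi) * phi)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, w) * e                     # remove component along e
        basis.append(w / np.sqrt(np.vdot(w, w).real))     # normalize to unit norm
    return basis

e1, e2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
print(abs(np.vdot(e1, e2)))      # ~0  (orthogonal)
print(np.vdot(e2, e2).real)      # ~1  (unit norm)
```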

2.2.5 Hilbert (1862-1943) Space

For a vector space $V_n$ to be complete, it is required that each Cauchy sequence in $V_n$, $\|\psi_i - \psi_k\| \to 0$ (for $i$ and $k \to \infty$), converges to a limit in $V_n$. A complete and infinite-dimensional complex Euclidean space is called a Hilbert space. The notations used thus far in this section are those preferred by mathematicians. In physics (quantum mechanics), a vector in Hilbert space is denoted by $|\psi\rangle$, and the inner product $(\psi, \phi)$ is written as $\langle\psi|\phi\rangle$, where $|\phi\rangle$ and $\langle\psi|$ are called ket and bra vectors, respectively. This latter notation is due to Dirac (1902- ).

2.2.6 Linear Operators

A linear operator on a vector space $V_n$ is a procedure for obtaining a unique vector, $\phi_i$, in $V_n$ for each $\psi_i$ in $V_n$. For example,

$$\phi_i = A\psi_i \tag{2.5a}$$

where $A$ is a linear operator. Using the Dirac notation, we write

$$|\phi\rangle = A|\psi\rangle. \tag{2.5b}$$

For linear operators $A$ and $B$, it is required that

$$A(|\psi\rangle + |\phi\rangle) = A|\psi\rangle + A|\phi\rangle$$
$$(A + B)|\psi\rangle = A|\psi\rangle + B|\psi\rangle$$
$$(AB)|\psi\rangle = A(B|\psi\rangle)$$
$$A\alpha|\psi\rangle = \alpha A|\psi\rangle$$

where $\alpha$ is a scalar. The vectors $|\psi\rangle$ and $|\phi\rangle$ are two arbitrary vectors in the vector space.

Example 2.1  A Quantum Mechanical Illustration of a Linear Operator

Linear operators, in contrast to ordinary numbers and functions, do not always commute; that is, $AB$ is not always equal to $BA$. The difference $AB - BA$, which is symbolically written as $[A, B]$, is called the commutator of $A$ and $B$. In quantum mechanics, linear operators play a central role, and it is understood that simultaneous specification of the physical quantities represented by two noncommuting operators, $[A, B] \neq 0$, cannot be made. Consider, for example, $[x, p_x]$ in the x-representation. Here $x$ and $p_x$, with $p_x = -i\hbar\,\partial/\partial x$ for $i = \sqrt{-1}$ and $\hbar = h/2\pi$, represent the position and x-component of the momentum of a particle, respectively. The value of the commutator $[x, p_x]$ is obtained by operating on some $\psi(x)$; we obtain

$$[x, p_x]\psi(x) = (xp_x - p_x x)\psi(x) = -i\hbar\left(x\frac{\partial\psi}{\partial x} - \frac{\partial(x\psi)}{\partial x}\right) = i\hbar\,\psi(x)$$

or $[x, p_x] = i\hbar$.
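The commutation relation can be checked numerically by discretizing: represent $x$ as a diagonal matrix on a grid and $p_x$ as $-i\hbar$ times a central-difference derivative matrix. Applied to a smooth sampled $\psi$, $(xp_x - p_x x)\psi$ then approximates $i\hbar\psi$ away from the grid boundaries. A sketch with $\hbar = 1$ (the grid and test function are illustrative):

```python
import numpy as np

hbar = 1.0
n, h = 801, 0.02
x = (np.arange(n) - n // 2) * h                # uniform grid

# Central-difference first-derivative matrix.
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
X = np.diag(x)                                  # position operator
P = -1j * hbar * D                              # momentum operator

psi = np.exp(-x**2)                             # smooth test function
lhs = X @ (P @ psi) - P @ (X @ psi)             # [x, p_x] psi
rhs = 1j * hbar * psi

err = np.max(np.abs(lhs[1:-1] - rhs[1:-1]))     # ignore boundary rows
print(err)    # small (O(h^2)): [x, p_x] acts as i*hbar
```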

The matrix representation of a linear operator is consistent with the fact that matrix multiplication is not in general commutative, as can be seen from the definition in Eq. (2.17).

2.3 MATRIX ANALYSIS AND NOTATIONS

The word "matrix" was introduced in 1850 by Sylvester, and matrix theory was developed by Hamilton (1805-1865) and Cayley (1821-1895) in the study of simultaneous linear equations.

Matrices were rarely used by physicists prior to 1925. Today, they are used in most areas of physics. A matrix is a rectangular array of quantities,

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \tag{2.6}$$

where the $a_{ij}$ are called elements (of the $i$th row and $j$th column); they may be real (or complex) numbers or functions. The matrix $A$ has $m$ rows and $n$ columns and is called a matrix of order $m \times n$ ($m$ by $n$). If $m = n$, the matrix is called a square matrix. The main diagonal of a square matrix consists of the elements $a_{11}, a_{22}, \ldots, a_{nn}$. The row matrix

$$A = (a_{11}\ \ a_{12}\ \ \cdots\ \ a_{1n}) \tag{2.7}$$

is called a row vector, and the column matrix

$$A = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} \tag{2.8}$$

is called a column vector. Two matrices of the same order ($A$ and $B$) are said to be equal if and only if $a_{ij} = b_{ij}$ for all $i$ and $j$. For example,

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \quad\text{requires } a_{ij} = b_{ij} \text{ for all } i \text{ and } j. \tag{2.9}$$

If $a_{ij} = 0$ for all $i$ and $j$, then $A$ is called a null matrix. For example,

$$A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \tag{2.10}$$

The multiplication of a matrix, A, by a scalar, k, is given by

$$kA = Ak \tag{2.11}$$

where the elements of $kA$ are $ka_{ij}$ for all $i$ and $j$. For example,

$$2\begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ 6 \end{pmatrix}. \tag{2.12}$$

2.4 MATRIX OPERATIONS

In this section, we discuss the important operations for matrices.

2.4.1 Addition (Subtraction)

The operation of addition (or subtraction) for two $m \times n$ matrices is defined as

$$C = A \pm B \tag{2.13}$$

where $c_{ij} = a_{ij} \pm b_{ij}$ for all $i$ and $j$. For example,

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}. \tag{2.14}$$

The following laws are also valid for addition of matrices of the same order:

$$A + B = B + A \quad\text{(commutative)} \tag{2.15}$$
$$(A + B) + C = A + (B + C) \quad\text{(associative)} \tag{2.16}$$

2.4.2 Multiplication

For two matrices $A$ and $B$, two kinds of products are defined; they are called the matrix product, $AB$, and the direct product, $A \otimes B$. The matrix product $C = AB$ is obtained by use of the following definition:

$$c_{ij} = \sum_{k=1}^{s} a_{ik}b_{kj} \tag{2.17}$$

where the orders of $A$, $B$, and $C$ are $n \times s$, $s \times m$, and $n \times m$, respectively. Note that the matrix product is defined for conformable matrices only. This means that the number of columns of $A$ must equal the number of rows of $B$. Consider

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}.$$

Here $AB$ becomes

$$AB = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{pmatrix} \tag{2.18}$$

For example,

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}. \tag{2.19}$$
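The product rule of Eq. (2.17) is what NumPy's `@` operator computes, and a quick numerical check also exhibits the noncommutativity of the matrix product (the specific matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

AB = A @ B        # rows of A against columns of B, Eq. (2.17)
BA = B @ A

print(AB)                       # [[19 22]
                                #  [43 50]]
print(np.array_equal(AB, BA))   # False: AB != BA in general
```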

Consider the following system of equations:

$$3x + y + 2z = 3$$
$$2x - 3y - z = -3 \tag{2.20}$$
$$x + 2y + z = 4.$$

In matrix form, we write Eq. (2.20) as

$$\begin{pmatrix} 3 & 1 & 2 \\ 2 & -3 & -1 \\ 1 & 2 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 \\ -3 \\ 4 \end{pmatrix}. \tag{2.21}$$

As can be shown by use of the definition of the matrix product, the commutative law of multiplication is not, in general, valid for the matrix product, $AB \neq BA$. However, the associative law of multiplication is valid for the matrix product, $A(BC) = (AB)C$. The direct product $A \otimes B$ is defined for general matrices. If $A$ is an $n \times n$ matrix and $B$ is an $m \times m$ matrix, then the direct product $A \otimes B$ is an $nm \times nm$ matrix and is defined by

$$C = A \otimes B \tag{2.22}$$

with elements

$$C_{ik,jl} = a_{ij}b_{kl}. \tag{2.23}$$

If

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$$

then

$$A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B \\ a_{21}B & a_{22}B \end{pmatrix} = \begin{pmatrix} a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12} \\ a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22} \\ a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12} \\ a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22} \end{pmatrix} \tag{2.24}$$

For

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$

the direct product $\sigma_1 \otimes \sigma_3$ is given by

$$\sigma_1 \otimes \sigma_3 = \begin{pmatrix} 0\cdot\sigma_3 & 1\cdot\sigma_3 \\ 1\cdot\sigma_3 & 0\cdot\sigma_3 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}. \tag{2.25}$$
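NumPy's `kron` implements exactly this block construction of the direct product; a sketch using two 2 x 2 matrices, giving a 4 x 4 result:

```python
import numpy as np

s1 = np.array([[0, 1],
               [1, 0]])
s3 = np.array([[1, 0],
               [0, -1]])

C = np.kron(s1, s3)    # direct (Kronecker) product: blocks a_ij * s3
print(C)
# [[ 0  0  1  0]
#  [ 0  0  0 -1]
#  [ 1  0  0  0]
#  [ 0 -1  0  0]]
print(C.shape)         # (4, 4): an nm x nm matrix from n x n and m x m
```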

2.4.3 Division

Division by a general matrix is not uniquely defined.

2.4.4 The Derivative of a Matrix

The derivative of a matrix with respect to a variable $x$ is equal to the derivative of each element with respect to $x$ separately. For example,

$$\frac{d}{dx}\begin{pmatrix} x^3 & 2x \\ e^{-x} & 1 \end{pmatrix} = \begin{pmatrix} 3x^2 & 2 \\ -e^{-x} & 0 \end{pmatrix}. \tag{2.26}$$

2.4.5 The Integral of a Matrix

The integral of a matrix with respect to a variable $x$ is equal to the integral of each element with respect to $x$ separately. For example,

$$\int \begin{pmatrix} 2x & 1 \\ \cos x & 0 \end{pmatrix} dx = \begin{pmatrix} x^2 & x \\ \sin x & 0 \end{pmatrix} + \text{constant matrix}. \tag{2.27}$$

2.4.6 Partitioned Matrices

Thus far, we have assumed that the elements of matrices are numbers or functions. However, the elements may themselves be matrices; that is to say, a matrix may be partitioned. The following is one way of partitioning a 3 x 3 matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \tag{2.28}$$

where

$$b_{11} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},\quad b_{12} = \begin{pmatrix} a_{13} \\ a_{23} \end{pmatrix},\quad b_{21} = (a_{31}\ \ a_{32}),\quad\text{and}\quad b_{22} = a_{33}.$$

The b's are called submatrices. For partitioned matrices, the usual operations are valid.

2.5 PROPERTIES OF ARBITRARY MATRICES

2.5.1 Transpose Matrix: $A^T$

The transpose of an arbitrary matrix $A$ is written as $A^T$ and is obtained by interchanging corresponding rows and columns of $A$; that is, if $A$ has elements $a_{ij}$, then $A^T$ has elements $a_{ji}$. For example,

$$A = \begin{pmatrix} 1 & 2 \\ 0 & -1 \\ 3 & 1 \end{pmatrix} \quad\text{and}\quad A^T = \begin{pmatrix} 1 & 0 & 3 \\ 2 & -1 & 1 \end{pmatrix}. \tag{2.29}$$

2.5.2 Complex-Conjugate Matrix: $A^*$

The complex conjugate of an arbitrary matrix $A$ is formed by taking the complex conjugate of each element. Hence we have

$$(A^*)_{ij} = a_{ij}^* \quad\text{(for all } i \text{ and } j\text{)}. \tag{2.30}$$

For example,

$$A = \begin{pmatrix} 2+3i & 4-5i \\ 3 & 4i \end{pmatrix},\quad A^* = \begin{pmatrix} 2-3i & 4+5i \\ 3 & -4i \end{pmatrix}. \tag{2.31}$$

If $A^* = A$, then $A$ is a real matrix.

2.5.3 Hermitian Conjugate: $A^\dagger$

The Hermitian conjugate of an arbitrary matrix $A$ is obtained by taking the complex conjugate of the matrix and then the transpose of the complex-conjugate matrix. For example,

$$A = \begin{pmatrix} 2+3i & 4-5i \\ 3 & 4i \end{pmatrix} \tag{2.32}$$

$$A^\dagger = (A^*)^T = \begin{pmatrix} 2-3i & 4+5i \\ 3 & -4i \end{pmatrix}^T = \begin{pmatrix} 2-3i & 3 \\ 4+5i & -4i \end{pmatrix}. \tag{2.33}$$
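In NumPy the three operations of Secs. 2.5.1-2.5.3 are the attribute `.T`, the method `.conj()`, and their composition. A sketch using the matrix of Eq. (2.32):

```python
import numpy as np

A = np.array([[2 + 3j, 4 - 5j],
              [3 + 0j, 0 + 4j]])

A_T = A.T                 # transpose:          interchange rows and columns
A_conj = A.conj()         # complex conjugate:  conjugate each element
A_dag = A.conj().T        # Hermitian conjugate (A-dagger)

print(A_dag)
# [[2.-3.j 3.-0.j]
#  [4.+5.j 0.-4.j]]
```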

2.6 SPECIAL SQUARE MATRICES

Here we define certain important square matrices.

2.6.1 Unit Matrix: $I$

The unit matrix is given by

$$(I)_{ij} = \delta_{ij} \tag{2.34}$$

where

$$IA = AI = A. \tag{2.35}$$

The Kronecker (1823-1891) delta, $\delta_{ij}$, has the following property:

$$\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases} \tag{2.36}$$

The 3 x 3 unit matrix is given by

$$I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{2.37}$$

2.6.2 Diagonal Matrix

Here we write

$$(D)_{ij} = D_{ii}\,\delta_{ij}. \tag{2.38}$$

The following is an example of a 3 x 3 diagonal matrix:

$$D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 4 \end{pmatrix}. \tag{2.39}$$

2.6.3 Singular Matrix†

If $|A| = 0$, then $A$ is said to be a singular matrix. For example,

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix},\quad\text{for which } |A| = 0. \tag{2.40}$$

2.6.4 Cofactor Matrix†

The cofactor matrix is written as $A^c$ and is defined by

$$(A^c)_{ij} = A_{ij} \tag{2.41}$$

where $A_{ij}$ is the cofactor of $a_{ij}$. For example,

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},\quad A^c = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} \tag{2.42}$$

where

$$A_{11} = (-1)^{1+1}\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix},\quad A_{12} = (-1)^{1+2}\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix},\quad A_{13} = (-1)^{1+3}\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix},$$

$$A_{21} = (-1)^{2+1}\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix},\quad A_{22} = (-1)^{2+2}\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix},\quad A_{23} = (-1)^{2+3}\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix},$$

$$A_{31} = (-1)^{3+1}\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix},\quad A_{32} = (-1)^{3+2}\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix},\quad A_{33} = (-1)^{3+3}\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}.$$

†See the appendix at the end of this chapter for a discussion of determinants.

2.6.5 Adjoint of a Matrix

The adjoint of a matrix is written as adj $A$; it is defined as the cofactor transpose, that is, $\operatorname{adj} A = A^{cT}$.

For example,

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},\quad A^c = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix},\quad\text{and}\quad \operatorname{adj} A = A^{cT} = \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}. \tag{2.44}$$

2.6.6 Self-Adjoint Matrix

If $\operatorname{adj} A = A$, $A$ is said to be self-adjoint. For example, for

$$A = \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix},\quad A^c = \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix}$$

so that

$$\operatorname{adj} A = A^{cT} = \begin{pmatrix} -2 & 0 \\ 0 & -2 \end{pmatrix} = A. \tag{2.45}$$
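The cofactor matrix and the adjoint can be built directly from minors, and the construction can be verified through the identity $A(\operatorname{adj} A) = |A|\,I$ developed in Sec. 2.6.13. A sketch (the helper function and the 3 x 3 test matrix are illustrative):

```python
import numpy as np

def cofactor_matrix(A):
    """(A^c)_ij = (-1)^(i+j) times the minor of a_ij."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

A = np.array([[1.0, 2.0, 0.0],
              [2.0, -1.0, 0.0],
              [0.0, 0.0, 1.0]])

adjA = cofactor_matrix(A).T                 # adj A = (A^c)^T
print(np.allclose(A @ adjA, np.linalg.det(A) * np.eye(3)))   # True
```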

2.6.7 Symmetric Matrix

If $A^T = A$, $A$ is said to be a symmetric matrix. For example,

$$A = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix},\quad A^T = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix} = A. \tag{2.46}$$

2.6.8 Antisymmetric (Skew) Matrix

If $A^T = -A$, $A$ is said to be an antisymmetric (skew) matrix. For example,

$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$$

hence

$$\sigma_2^T = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix} = -\sigma_2. \tag{2.47}$$

2.6.9 Hermitian Matrix

If $A^\dagger = A$, $A$ is said to be a Hermitian matrix. For example,

$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \quad\text{and}\quad \sigma_2^* = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}$$

hence

$$\sigma_2^\dagger = (\sigma_2^*)^T = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \sigma_2. \tag{2.48}$$

In quantum mechanics, all physical observables are represented by Hermitian operators (matrices).

2.6.10 Unitary Matrix

If $AA^\dagger = I$, $A$ is said to be a unitary matrix. For example,

$$\sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad \sigma_2^\dagger = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},$$

and

$$\sigma_2\sigma_2^\dagger = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I. \tag{2.49}$$

2.6.11 Orthogonal Matrix

If $AA^T = I$, $A$ is said to be an orthogonal matrix. For example,

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad \sigma_1^T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

and

$$\sigma_1\sigma_1^T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I. \tag{2.50}$$

2.6.12 The Trace of a Matrix

The trace of a matrix is given by

$$\operatorname{Tr} A = \sum_{k} a_{kk}. \tag{2.51}$$

For example,

$$A = \begin{pmatrix} 2 & 4 \\ 3 & 7 \end{pmatrix} \quad\text{and}\quad \operatorname{Tr} A = 2 + 7 = 9. \tag{2.52}$$

2.6.13 The Inverse Matrix

For the inverse matrix, $A^{-1}$, we require that

$$AA^{-1} = I. \tag{2.53}$$

We now develop the explicit expression for $A^{-1}$. In the Laplace development (see the appendix) for the value of a determinant, we have

$$|A| = \sum_{j=1}^{n} a_{ij}A_{ij}$$

and, more generally,

$$|A|\,\delta_{ik} = \sum_{j=1}^{n} a_{ij}A_{kj}. \tag{2.54}$$

Let $b_{jk} = A_{kj}$; that is, $B = A^{cT}$. Equation (2.54) now becomes

$$|A|\,\delta_{ik} = \sum_{j=1}^{n} a_{ij}b_{jk}$$

or

$$I|A| = AB = AA^{cT} \tag{2.55}$$

where $(I)_{ik} = \delta_{ik}$. On dividing Eq. (2.55) by $|A|$, we obtain

$$I = A\left[\frac{A^{cT}}{|A|}\right]. \tag{2.56}$$

The quantity in brackets must be $A^{-1}$ because of Eq. (2.53).

Example 2.2  Find the inverse of

$$A = \begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix}.$$

Solution  For $A$, we have $|A| = 1 - 6 = -5$,

$$A^c = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ -3 & 1 \end{pmatrix}$$

and

$$A^{cT} = \begin{pmatrix} 1 & -3 \\ -2 & 1 \end{pmatrix}.$$

Hence the inverse of $A$ is

$$A^{-1} = \frac{A^{cT}}{|A|} = -\frac{1}{5}\begin{pmatrix} 1 & -3 \\ -2 & 1 \end{pmatrix}.$$

Check:

$$AA^{-1} = -\frac{1}{5}\begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & -3 \\ -2 & 1 \end{pmatrix} = -\frac{1}{5}\begin{pmatrix} -5 & 0 \\ 0 & -5 \end{pmatrix} = I.$$

2.7 SOLUTION OF A SYSTEM OF LINEAR EQUATIONS

The matrix method may be used to solve a system of linear equations. Consider the following system of equations to illustrate the method:

$$x + 3y = 2$$
$$2x + y = 3. \tag{2.57}$$

In matrix form, we write

$$AX = C \tag{2.58}$$

where

$$A = \begin{pmatrix} 1 & 3 \\ 2 & 1 \end{pmatrix},\quad X = \begin{pmatrix} x \\ y \end{pmatrix},\quad\text{and}\quad C = \begin{pmatrix} 2 \\ 3 \end{pmatrix}.$$

The solution of Eq. (2.58) is

$$X = A^{-1}C = \frac{A^{cT}}{|A|}C$$

where

$$|A| = -5 \quad\text{and}\quad A^{cT} = \begin{pmatrix} 1 & -3 \\ -2 & 1 \end{pmatrix}.$$

On substituting the value of $|A|$ into the expression for $X$, we obtain

$$X = \begin{pmatrix} x \\ y \end{pmatrix} = -\frac{1}{5}\begin{pmatrix} 1 & -3 \\ -2 & 1 \end{pmatrix}\begin{pmatrix} 2 \\ 3 \end{pmatrix} = -\frac{1}{5}\begin{pmatrix} -7 \\ -1 \end{pmatrix} = \begin{pmatrix} 7/5 \\ 1/5 \end{pmatrix}.$$

Or $x = \frac{7}{5}$ and $y = \frac{1}{5}$.
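In practice the inverse is rarely formed explicitly; `numpy.linalg.solve` solves $AX = C$ directly by factorization. A sketch applied to the system of Eq. (2.57), with the adjoint-based answer computed alongside for comparison:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
C = np.array([2.0, 3.0])

X = np.linalg.solve(A, C)              # solves A X = C
print(X)                               # [1.4 0.2], i.e. x = 7/5, y = 1/5

# The same result from X = adj(A) C / |A|:
adjA = np.array([[1.0, -3.0],
                 [-2.0, 1.0]])
print(adjA @ C / np.linalg.det(A))     # [1.4 0.2]
```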

2.8 THE EIGENVALUE PROBLEM

The importance of eigenvalue problems in mathematical physics cannot be overemphasized. In general, it is assumed that associated with each linear operator is a set of functions and corresponding numbers such that

$$Au_i = \lambda_i u_i \tag{2.59}$$

where $A$ is a linear operator, the $u_i$ are called eigenfunctions, and the $\lambda_i$ are known as eigenvalues. The matrix form of an eigenvalue problem is

$$AX = \lambda IX \quad\text{or}\quad (A - \lambda I)X = 0 \tag{2.60}$$

where $A$ is represented by a square matrix. In solving the general matrix equation $BX = C$ for $X$, we obtain

$$X = \frac{B^{cT}}{|B|}C.$$

If $C$ is a null (zero) matrix and $|B| \neq 0$, then the solution, $X$, is the trivial one, $X = 0$. For $C$ equal to a null matrix, a necessary condition for a nontrivial solution of $BX = 0$ is that $|B| = 0$. Hence the condition for a nontrivial solution of Eq. (2.60) is that

$$|A - \lambda I| = 0. \tag{2.61}$$

Equation (2.61) is called the secular (or characteristic) equation of $A$. The eigenvalues are just the roots of the equation obtained by expanding the determinant in Eq. (2.61). That is,

$$|A - \lambda I| = \begin{vmatrix} a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda \end{vmatrix} = 0. \tag{2.62}$$

Example 2.3  Find the eigenvalues of

$$A = \begin{pmatrix} 3 & 1 \\ 2 & 2 \end{pmatrix}.$$

Solution  In this case, the secular equation reduces to

$$|A - \lambda I| = \begin{vmatrix} 3-\lambda & 1 \\ 2 & 2-\lambda \end{vmatrix} = (3-\lambda)(2-\lambda) - 2 = (\lambda - 1)(\lambda - 4) = 0.$$

Therefore the eigenvalues are $\lambda = 1$ and $\lambda = 4$.
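The roots of the secular equation can be checked against a direct numerical computation; a sketch for the matrix of Example 2.3:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 2.0]])

eigvals = np.sort(np.linalg.eigvals(A).real)
print(eigvals)    # [1. 4.]: the roots of (3 - l)(2 - l) - 2 = 0
```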

Equation (2.61) may be written as

$$|A - \lambda I| = 0 = \phi(\lambda) \tag{2.63}$$

where $\phi(\lambda)$ is an $n$th-degree polynomial which can be represented as

$$\phi(\lambda) = \phi_0 I + \phi_1\lambda + \cdots + \phi_{n-1}\lambda^{n-1} + \phi_n\lambda^n. \tag{2.64}$$

The Cayley-Hamilton theorem states that $\lambda = A$ satisfies the above $n$th-degree polynomial. Loosely stated, we say that a matrix satisfies its own characteristic equation, $\phi(A) = 0$.

Example 2.4  Illustrate the Cayley-Hamilton theorem for the matrix $A$ where

$$A = \begin{pmatrix} 1 & 2 & 0 \\ 2 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Solution  For this matrix, we have

$$\phi(\lambda) = \begin{vmatrix} 1-\lambda & 2 & 0 \\ 2 & -1-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{vmatrix} = -5 + 5\lambda + \lambda^2 - \lambda^3.$$

The corresponding third-degree polynomial (matrix) is

$$\phi(\lambda) = \phi_0 I + \phi_1\lambda + \phi_2\lambda^2 + \phi_3\lambda^3$$

where $\phi_0 = -5$, $\phi_1 = 5$, $\phi_2 = 1$, and $\phi_3 = -1$. By use of the Cayley-Hamilton theorem, we may write

$$\phi(A) = \phi_0 I + \phi_1 A + \phi_2 A^2 + \phi_3 A^3 = \begin{pmatrix} -5 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & -5 \end{pmatrix} + \begin{pmatrix} 5 & 10 & 0 \\ 10 & -5 & 0 \\ 0 & 0 & 5 \end{pmatrix} + \begin{pmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix} + \begin{pmatrix} -5 & -10 & 0 \\ -10 & 5 & 0 \\ 0 & 0 & -1 \end{pmatrix} = 0.$$
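The Cayley-Hamilton statement $\phi(A) = 0$ from Example 2.4 can be confirmed numerically by substituting the matrix into its own secular polynomial:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, -1.0, 0.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)

# phi(lambda) = -5 + 5*lambda + lambda^2 - lambda^3, with lambda -> A:
phiA = -5 * I + 5 * A + A @ A - A @ A @ A
print(np.allclose(phiA, 0))    # True: A satisfies its secular equation
```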

2.9 COORDINATE TRANSFORMATIONS

2.9.1 Rotation in Two Dimensions

Consider the rotation of the two-dimensional Cartesian system illustrated in Fig. 2.1. In the figure, $\beta = \theta$ since $\theta + \gamma = \pi/2$ and $\gamma + \beta = \pi/2$. The relations between the primed and unprimed axes are

$$x' = x\cos\theta + l_1\sin\theta + l_2\sin\theta = x\cos\theta + (l_1 + l_2)\sin\theta = x\cos\theta + y\sin\theta \tag{2.65}$$

and

$$y' = l_1\cos\theta = (y - l_2)\cos\theta = y\cos\theta - l_2\cos\theta = y\cos\theta - x\sin\theta. \tag{2.66}$$

Figure 2.1

In matrix form, Eqs. (2.65) and (2.66) become

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$

or

$$X' = R_2 X \tag{2.67}$$

where

$$X' = \begin{pmatrix} x' \\ y' \end{pmatrix},\quad R_2 = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},\quad\text{and}\quad X = \begin{pmatrix} x \\ y \end{pmatrix}.$$

The matrix $R_2$ is called the 2 x 2 rotation matrix. Equations (2.65) and (2.66) may be put in the following form:

$$x_1' = x_1\cos\theta + x_2\sin\theta$$
$$x_2' = -x_1\sin\theta + x_2\cos\theta$$

or

$$x_1' = x_1\lambda_{11} + x_2\lambda_{12}$$
$$x_2' = x_1\lambda_{21} + x_2\lambda_{22}$$

or

$$x_i' = \sum_{j=1}^{2}\lambda_{ij}x_j \quad (i = 1, 2) \tag{2.68}$$

where $x \equiv x_1$, $y \equiv x_2$, $x' \equiv x_1'$, $y' \equiv x_2'$, and $\lambda_{ij} = \cos(x_i', x_j)$, the cosine of the angle between the $x_i'$- and $x_j$-axes.
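The rotation matrix $R_2$ of Eq. (2.67) is easy to exercise numerically: it is orthogonal ($R_2R_2^T = I$) and has unit determinant. A sketch (the helper function name is illustrative):

```python
import numpy as np

def R2(theta):
    """2 x 2 rotation matrix of Eq. (2.67)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s],
                     [-s, c]])

R = R2(np.pi / 2)                            # 90-degree rotation
print(np.allclose(R @ R.T, np.eye(2)))       # True: orthogonal
print(np.isclose(np.linalg.det(R), 1.0))     # True: det R = 1
print(R @ np.array([1.0, 0.0]))              # ~(0, -1): coordinates of the
                                             # point (1, 0) in the rotated axes
```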

2.9.2 Rotation in Three Dimensions

In three dimensions, we have

$$x_i' = \sum_{j=1}^{3}\lambda_{ij}x_j \quad (i = 1, 2, 3) \tag{2.69}$$

where

$$\lambda_{ij} = \cos(x_i', x_j) \quad\text{(for } i, j = 1, 2, 3\text{)}.$$

In general, a rotation is defined as a transformation whose matrix $R$ is determined by a set of parameters, $\alpha, \beta, \ldots, \gamma$, such that the following conditions are satisfied:

1. $R(\alpha, \beta, \ldots, \gamma)$ is a continuous function of $\alpha, \beta, \ldots, \gamma$.
2. $R(0, 0, \ldots, 0) = I$ (the unit matrix).
3. $\det(R) = 1$.

Example 2.5  Find the transformation equations for a 90-degree rotation about the $x_3$-axis (see Fig. 2.2).

Figure 2.2

Solution  Here the elements, $\lambda_{ij}$, of the rotation matrix are given by

$$\lambda = \begin{pmatrix} \lambda_{11} & \lambda_{12} & \lambda_{13} \\ \lambda_{21} & \lambda_{22} & \lambda_{23} \\ \lambda_{31} & \lambda_{32} & \lambda_{33} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

since

$$\lambda_{11} = \cos(x_1', x_1) = \cos\tfrac{\pi}{2} = 0 \qquad \lambda_{12} = \cos(x_1', x_2) = \cos 0 = 1$$
$$\lambda_{13} = \cos(x_1', x_3) = \cos\tfrac{\pi}{2} = 0 \qquad \lambda_{21} = \cos(x_2', x_1) = \cos\pi = -1$$
$$\lambda_{22} = \cos(x_2', x_2) = \cos\tfrac{\pi}{2} = 0 \qquad \lambda_{23} = \cos(x_2', x_3) = \cos\tfrac{\pi}{2} = 0$$
$$\lambda_{31} = \cos(x_3', x_1) = \cos\tfrac{\pi}{2} = 0 \qquad \lambda_{32} = \cos(x_3', x_2) = \cos\tfrac{\pi}{2} = 0$$
$$\lambda_{33} = \cos(x_3', x_3) = \cos 0 = 1.$$

By use of Eq. (2.69), we obtain

$$\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_2 \\ -x_1 \\ x_3 \end{pmatrix}.$$

Hence $x_1' = x_2$, $x_2' = -x_1$, and $x_3' = x_3$. A more extensive treatment of coordinate transformation properties is given in Chapter 9.

2.10 PROBLEMS

2.1 For two given square matrices $A$ and $B$, find: (a) $A + B$, (b) $B - A$, (c) $AB$, (d) $BA$.

2.2 Does $AB = AC$ imply that $B = C$? Illustrate by means of an example.

2.3 Show that $AB \neq BA$ for general $A$ and $B$.

2.4 For the Pauli (1900-1958) spin matrices

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad\text{and}\quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

show that (a) $\sigma_1^2 = \sigma_2^2 = \sigma_3^2 = I$ and (b) $[\sigma_1, \sigma_2] = 2i\sigma_3$ (cyclic).

2.5 Show that

$$\frac{d}{dx}[A(x)B(x)] = \frac{dA(x)}{dx}B(x) + A(x)\frac{dB(x)}{dx}$$

where $A(x)$ and $B(x)$ are two matrices.

2.6 Write the two coupled (dependent) differential equations (equations involving derivatives) indicated by

$$\frac{d}{dx}\begin{pmatrix} y_1(x) \\ y_2(x) \end{pmatrix} = \begin{pmatrix} a_{11}(x) & a_{12}(x) \\ a_{21}(x) & a_{22}(x) \end{pmatrix}\begin{pmatrix} y_1(x) \\ y_2(x) \end{pmatrix}.$$

2.7 The rank of a matrix is equal to the order of the largest nonvanishing determinant contained within the matrix. Find the rank of each of the given matrices.

2.8 Calculate the trace (where it exists) of the matrices in Problem 2.7.

2.9 Find the transpose of: (a) a given column vector, (b) $(3\ \ 4\ \ {-2})$, and (c) a given 3 x 3 matrix.

2.10 Determine which of the given matrices are symmetric and which are antisymmetric.

2.11 Show that the given matrix is a unitary matrix.

2.12 Find $A^{-1}$ for the given matrix $A$.

2.13 For a given matrix $A$ with complex elements, find $A^\dagger$.

2.14 Given

$$a_{11}x_1 + a_{12}x_2 = k_1$$
$$a_{21}x_1 + a_{22}x_2 = k_2.$$

Using the determinant method, solve for $x_1$ and $x_2$.

2.15 Show that

$$\begin{vmatrix} 2\cos\theta & 1 & 0 \\ 1 & 2\cos\theta & 1 \\ 0 & 1 & 2\cos\theta \end{vmatrix} = \frac{\sin 4\theta}{\sin\theta}.$$

2.16 If $A$ is antisymmetric, prove that: (a) $AA^T = A^TA$ and (b) $A^2$ is symmetric.

2.17 Prove that: (a) $\operatorname{Tr}(AB) = \operatorname{Tr}(BA)$ and (b) $\operatorname{Tr}(A + B) = \operatorname{Tr} A + \operatorname{Tr} B$.

2.18 If $C = AA^T$, prove that $C$ is symmetric.

2.19 Prove the following: (a) $(A^T)^T = A$, (b) $(AB)^T = B^TA^T$, (c) $(A + B)^T = A^T + B^T$, and (d) $(cA)^T = cA^T$, where $c$ is a scalar.

2.20 Prove that (a) $A + A^\dagger$, (b) $i(A - A^\dagger)$, and (c) $AA^\dagger$ are all Hermitian for any $A$.

2.21 If $A$ and $B$ are two Hermitian matrices, prove that $AB$ is Hermitian only if $A$ and $B$ commute.

2.22 If $A$ is a Hermitian matrix, show that $e^{iA}$ is unitary.

2.23 Show that the eigenvalues of a Hermitian matrix are all real.

2.24 By use of the matrix method, solve the given systems of linear equations.

2.25 Find the eigenvalues of the given matrices.

2.26 Verify the Cayley-Hamilton theorem for a given 2 x 2 matrix.

2.27 By use of the Cayley-Hamilton theorem, compute the inverse of the matrix in Problem 2.26.

2.28 Solve $X' = R_2X$ for $x$ and $y$, where $R_2$ is the 2 x 2 rotation matrix.

2.29 By use of the matrix method, find (a) the inversion matrix $R_I$, (b) $|R_I|$, and (c) the required transformation equations for the inversion transformation indicated in Fig. 2.3.

Figure 2.3

2.30 By use of the matrix method, find (a) the reflection matrix $R_R$, (b) $|R_R|$, and (c) the required transformation equations for the reflection transformation indicated in Fig. 2.4.

APPENDIX: RUDIMENTS OF DETERMINANTS

This appendix contains a summary of the essential properties of determinants.

A2.1 Introduction

A determinant is a square array of quantities called elements which may be combined according to the rules given below. In symbolic form, we write

$$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 & \cdots & r_1 \\ a_2 & b_2 & c_2 & \cdots & r_2 \\ \vdots & \vdots & \vdots & & \vdots \\ a_n & b_n & c_n & \cdots & r_n \end{vmatrix} \tag{A2.1}$$

Here $n$ is called the order of the determinant. The value of the determinant in terms of the elements $a_i, b_i, \ldots, r_i$ is defined as

$$\Delta = \sum_{i,j,\ldots,l} \epsilon_{ij\cdots l}\, a_i b_j \cdots r_l \tag{A2.2}$$

where the Levi-Civita symbol, $\epsilon_{ij\cdots l}$, has the following property:

$$\epsilon_{ij\cdots l} = \begin{cases} +1 & \text{for an even permutation of } (i, j, \ldots, l) \\ -1 & \text{for an odd permutation of } (i, j, \ldots, l) \\ 0 & \text{if an index is repeated.} \end{cases} \tag{A2.3}$$

For example,

$$\epsilon_{ijk} = \begin{cases} +1 & \text{if } ijk = 123, 231, 312 \\ -1 & \text{if } ijk = 321, 213, 132 \\ 0 & \text{otherwise.} \end{cases}$$

On applying Eq. (A2.2) to the third-order determinant,

$$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$$

we obtain

$$\Delta = \sum_{i,j,k=1}^{3} \epsilon_{ijk}\, a_i b_j c_k = \sum_{j,k=1}^{3} [\epsilon_{1jk}a_1b_jc_k + \epsilon_{2jk}a_2b_jc_k + \epsilon_{3jk}a_3b_jc_k]. \tag{A2.4}$$

Equation (A2.4) reduces to

$$\Delta = \sum_{k=1}^{3} [\epsilon_{11k}a_1b_1c_k + \epsilon_{12k}a_1b_2c_k + \epsilon_{13k}a_1b_3c_k + \epsilon_{21k}a_2b_1c_k + \epsilon_{22k}a_2b_2c_k + \epsilon_{23k}a_2b_3c_k + \epsilon_{31k}a_3b_1c_k + \epsilon_{32k}a_3b_2c_k + \epsilon_{33k}a_3b_3c_k]. \tag{A2.5}$$

Since $\epsilon_{11k} = \epsilon_{22k} = \epsilon_{33k} = 0$, Eq. (A2.5) becomes

$$\Delta = \epsilon_{121}a_1b_2c_1 + \epsilon_{122}a_1b_2c_2 + \epsilon_{123}a_1b_2c_3 + \epsilon_{131}a_1b_3c_1 + \epsilon_{132}a_1b_3c_2 + \epsilon_{133}a_1b_3c_3$$
$$+\ \epsilon_{211}a_2b_1c_1 + \epsilon_{212}a_2b_1c_2 + \epsilon_{213}a_2b_1c_3 + \epsilon_{231}a_2b_3c_1 + \epsilon_{232}a_2b_3c_2 + \epsilon_{233}a_2b_3c_3$$
$$+\ \epsilon_{311}a_3b_1c_1 + \epsilon_{312}a_3b_1c_2 + \epsilon_{313}a_3b_1c_3 + \epsilon_{321}a_3b_2c_1 + \epsilon_{322}a_3b_2c_2 + \epsilon_{323}a_3b_2c_3. \tag{A2.6}$$

With the aid of Eq. (A2.3), Eq. (A2.6) reduces to

$$\Delta = a_1b_2c_3 - a_1b_3c_2 - a_2b_1c_3 + a_2b_3c_1 + a_3b_1c_2 - a_3b_2c_1. \tag{A2.7}$$

The familiar diagonal scheme, Eq. (A2.8), in which the signed products $+a_1b_2c_3$, $+a_2b_3c_1$, $+a_3b_1c_2$ and $-a_3b_2c_1$, $-a_1b_3c_2$, $-a_2b_1c_3$ are read off along the diagonals of the array, may also be used to obtain the result in Eq. (A2.7); that is,

$$\Delta = a_1b_2c_3 + a_2b_3c_1 + a_3b_1c_2 - (a_3b_2c_1 + a_1b_3c_2 + a_2b_1c_3). \tag{A2.9}$$
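The Levi-Civita definition (A2.2) can be implemented literally: sum the signed products over all permutations of the row indices. The result agrees with a library determinant. A sketch (brute force, for checking rather than for computation; the test matrix is illustrative):

```python
import numpy as np
from itertools import permutations

def levi_civita_det(M):
    """Determinant via Eq. (A2.2): sum of eps * (one element per column)."""
    n = M.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        sign = 1                      # parity of perm = Levi-Civita symbol
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = sign
        for col, row in enumerate(perm):
            prod *= M[row, col]       # one element from each column
        total += prod
    return total

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(levi_civita_det(M), np.linalg.det(M))   # both ~ -3.0
```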

A2.2 Laplace Development by Minors

The result in Eqs. (A2.7) and (A2.9) may be written in the form

$$\Delta = a_1(b_2c_3 - b_3c_2) - a_2(b_1c_3 - b_3c_1) + a_3(b_1c_2 - b_2c_1) = a_1\begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - a_2\begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix} + a_3\begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix} \tag{A2.10}$$

where

$$\begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} = b_2c_3 - b_3c_2,\ \text{and so on}.$$

The procedure of expressing $\Delta$ in the form given in Eq. (A2.10) may be generalized to obtain the value of an $n$th-order determinant. In Eq. (A2.10), we see that the expansion of a third-order determinant is expressed as a linear combination of the product of an element and a second-order determinant. Careful examination of Eq. (A2.10) reveals that the second-order determinant is the determinant obtained by omitting the elements in the row and column in which the multiplying element (the element in front of the second-order determinant) appears in the original determinant. The resulting second-order determinant is called a minor. Thus the minor of $a_1$ is obtained by striking out the row and column containing $a_1$:

$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} \longrightarrow \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix}.$$

The minor of $a_1$ is therefore

$$\begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix}.$$

In the general $n$th-order determinant, the sign $(-1)^{i+j}$ is associated with the minor of the element in the $i$th row and the $j$th column. The minor with its sign $(-1)^{i+j}$ is called the cofactor. For the general determinant

$$|A| = \det A = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

the value in terms of cofactors is given by

$$|A| = \det A = \sum_{j=1}^{n} a_{ij}A_{ij} \quad\text{for any } i. \tag{A2.11}$$

The relation in Eq. (A2.11) is called the Laplace development. Expanding along the first row corresponds to $i = 1$. For example,

$$|A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$$

becomes

$$|A| = \sum_{j=1}^{2} a_{1j}A_{1j} = a_{11}A_{11} + a_{12}A_{12}$$

where

$$A_{11} = (-1)^{1+1}|a_{22}| = a_{22} \quad\text{and}\quad A_{12} = (-1)^{1+2}|a_{21}| = -a_{21}. \tag{A2.12}$$

On substituting Eq. (A2.12) into the expression for $|A|$, we obtain

$$|A| = a_{11}a_{22} - a_{12}a_{21}.$$

Unlike matrices, determinants may be evaluated to yield a number.
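The Laplace development of Eq. (A2.11) translates directly into a recursive routine: expand along the first row, striking out one row and column at each step. A sketch (exponential cost, so suitable for small orders only):

```python
import numpy as np

def laplace_det(M):
    """|A| = sum_j a_1j * (-1)^(1+j) * minor_1j, applied recursively (Eq. A2.11)."""
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(M[1:, :], j, axis=1)   # strike first row, column j
        total += (-1) ** j * M[0, j] * laplace_det(minor)
    return total

M = np.array([[1.0, 3.0],
              [2.0, 1.0]])
print(laplace_det(M))    # -5.0, i.e. a11*a22 - a12*a21
```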

A2.3 Summary of the Properties of Determinants

The following are properties of determinants which may be readily proved:

1. The value of a determinant is not changed if corresponding rows and columns are interchanged.

$$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} \tag{A2.13}$$

2. If a multiple of one column is added (row by row) to another column, or if a multiple of one row is added (column by column) to another row, the value of the determinant is unchanged.

$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 + kb_1 & b_1 & c_1 \\ a_2 + kb_2 & b_2 & c_2 \\ a_3 + kb_3 & b_3 & c_3 \end{vmatrix} \tag{A2.14}$$

3. If each element of a column or row is zero, the value of the determinant is zero.

$$\begin{vmatrix} a_1 & b_1 & c_1 \\ 0 & 0 & 0 \\ a_3 & b_3 & c_3 \end{vmatrix} = 0 \tag{A2.15}$$

4. If two columns or rows are identical, the value of the determinant is zero.

$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_1 & b_1 & c_1 \end{vmatrix} = 0 \tag{A2.16}$$

5. If two columns or rows are proportional, the value of the determinant is zero.

$$\begin{vmatrix} 2 & 0 & 1 \\ 3 & 6 & -2 \\ 6 & 0 & 3 \end{vmatrix} = 0 \tag{A2.17}$$

6. If two columns or rows are interchanged, the sign of the determinant is changed.

$$\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = -\begin{vmatrix} c_1 & b_1 & a_1 \\ c_2 & b_2 & a_2 \\ c_3 & b_3 & a_3 \end{vmatrix} \tag{A2.18}$$

7. If each element of a column or row is multiplied by the same number $k$, the resulting determinant is multiplied by that same number.

$$\begin{vmatrix} a_1 & b_1 & kc_1 \\ a_2 & b_2 & kc_2 \\ a_3 & b_3 & kc_3 \end{vmatrix} = k\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} \tag{A2.19}$$

A2.4 Problems A2.l Prove property 4, Eq. (42.16), for determinants by use of theLevi- Civita symbol, e,.,*.