Vector Spaces
A vector space $V$ over the field of complex numbers $\mathbb{C}$ is a set of elements $|a\rangle, |b\rangle, \ldots$ satisfying the following axioms.

• For each two vectors $|a\rangle, |b\rangle \in V$ there exists a summation procedure: $|a\rangle + |b\rangle = |c\rangle$, where $|c\rangle \in V$. The summation obeys the following laws:

$|a\rangle + |b\rangle = |b\rangle + |a\rangle$ (commutative),   (1)

$|a\rangle + (|b\rangle + |c\rangle) = (|a\rangle + |b\rangle) + |c\rangle$ (associative).   (2)

• There exists a zero vector $|0\rangle$, such that $\forall\,|a\rangle$:

$|a\rangle + |0\rangle = |a\rangle$.   (3)

• $\forall\,|a\rangle\ \exists\,|{-a}\rangle$ (additive inverse) such that

$|a\rangle + |{-a}\rangle = |0\rangle$.   (4)

[Here we start using the symbol $\forall$, which means 'for all' ('for each', 'for any'), and the symbol $\exists$, which means 'there exists'.]

• There exists a procedure of multiplication by a scalar $\alpha \in \mathbb{C}$. That is, $\forall\,|a\rangle \in V$, $\forall\,\alpha \in \mathbb{C}$: $\exists\ \alpha|a\rangle \in V$. Multiplication by a scalar obeys the following laws:

$\alpha(\beta|a\rangle) = (\alpha\beta)|a\rangle$,   (5)

$1 \cdot |a\rangle = |a\rangle$,   (6)

$\alpha(|a\rangle + |b\rangle) = \alpha|a\rangle + \alpha|b\rangle$,   (7)

$(\alpha + \beta)|a\rangle = \alpha|a\rangle + \beta|a\rangle$.   (8)

From the above axioms it follows that $\forall\,|a\rangle$:

$0 \cdot |a\rangle = |0\rangle$,   (9)

$(-1) \cdot |a\rangle = |{-a}\rangle$.   (10)

Problem 8. On the basis of the axioms: (a) Show that the zero element is unique. (b) Show that for any vector $|a\rangle$ there exists only one additive inverse. (c) Show that for any vector $|a\rangle$ the relation $|a\rangle + |x\rangle = |a\rangle$ implies that $|x\rangle = |0\rangle$. (d) Derive (9)-(10). Important (!): Here you are not allowed to use the subtraction procedure, which is not defined yet, and cannot be defined prior to establishing the uniqueness of the additive inverse.

Once the uniqueness of the additive inverse is established (Problem 8), it is convenient to define subtraction as simply adding the additive inverse:

$|a\rangle - |b\rangle \equiv |a\rangle + |{-b}\rangle$.   (11)

Example. As a particular example of a complex vector space, consider the space $\mathbb{C}^n$ of $n$-rows of complex numbers:

$|x\rangle = (x_1, x_2, x_3, \ldots, x_n)$.   (12)

In the space $\mathbb{C}^n$, the addition and multiplication by a complex number are defined componentwise.
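As an aside, the componentwise rules just mentioned can be sketched in a few lines of Python (an illustration of ours, not part of the notes; the helper names `add` and `scale` are our own):

```python
# Componentwise operations in C^n, with vectors modeled as Python tuples
# of complex numbers. A minimal sketch for illustration only.

def add(x, y):
    """Vector addition in C^n: (x1 + y1, ..., xn + yn)."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(alpha, x):
    """Multiplication by a scalar alpha in C: (alpha*x1, ..., alpha*xn)."""
    return tuple(alpha * xi for xi in x)

a = (1 + 2j, 3 - 1j)
b = (0.5j, 2 + 0j)
zero = (0j, 0j)

# Commutativity (1) and the zero vector (3):
assert add(a, b) == add(b, a)
assert add(a, zero) == a
# Consequences (9) and (10): 0*|a> = |0> and |a> + (-1)*|a> = |0>.
assert scale(0, a) == zero
assert add(a, scale(-1, a)) == zero
```

Checking the remaining axioms (associativity, distributivity) works the same way, one component at a time.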
For $|y\rangle = (y_1, y_2, y_3, \ldots, y_n)$ we define

$|x\rangle + |y\rangle = (x_1 + y_1,\ x_2 + y_2,\ x_3 + y_3,\ \ldots,\ x_n + y_n)$.   (13)

For $\alpha \in \mathbb{C}$ we define

$\alpha|x\rangle = (\alpha x_1,\ \alpha x_2,\ \alpha x_3,\ \ldots,\ \alpha x_n)$.   (14)

It is easy to make sure that all the axioms of the vector space are satisfied, and thus $\mathbb{C}^n$ is indeed a vector space.

Inner product. Now let us formally introduce the inner-product vector space as a vector space in which for any two vectors $|a\rangle$ and $|b\rangle$ there exists the inner product, $\langle b|a\rangle$, a complex-valued function of the two vectors satisfying the following properties:

$\langle b|a\rangle = \overline{\langle a|b\rangle}$.   (15)

Here the bar denotes complex conjugation. From Eq. (15) it directly follows that the inner product of a vector with itself is always real. The next axiom of the inner product also requires that $\langle a|a\rangle$ be positive if $|a\rangle \neq |0\rangle$. Finally, we require that for any three vectors $|a\rangle$, $|u\rangle$, $|v\rangle$, and any two numbers $\alpha$ and $\beta$, the following relation, termed linearity of the inner product, holds true:

$\langle a|\alpha u + \beta v\rangle = \alpha\langle a|u\rangle + \beta\langle a|v\rangle$.   (16)

Here we use the convenient notation $|u + v\rangle \equiv |u\rangle + |v\rangle$ and $|\alpha u\rangle \equiv \alpha|u\rangle$. From Eqs. (15) and (16) it follows that

$\langle \alpha u + \beta v|a\rangle = \alpha^*\langle u|a\rangle + \beta^*\langle v|a\rangle$.   (17)

From the axioms of the inner product it is also easy to deduce that

$\langle a|{-a}\rangle < 0, \quad \text{if}\ |a\rangle \neq |0\rangle$,   (18)

and that

$\langle a|0\rangle = 0$.   (19)

Problem 9. Derive Eqs. (17)-(19) from the axioms of the inner product.

Example 1. In the above-discussed vector space $\mathbb{C}^n$, the inner product of two vectors, $|a\rangle = (a_1, a_2, \ldots, a_n)$ and $|b\rangle = (b_1, b_2, \ldots, b_n)$, can be defined as

$\langle a|b\rangle = \sum_{j=1}^{n} a_j^*\, b_j$.   (20)

More generally, one can define

$\langle a|b\rangle = \sum_{j=1}^{n} a_j^*\, b_j\, w_j$,   (21)

where the $w_j$'s are some fixed real positive coefficients.

Example 2.
In the vector space of integrable complex-valued functions $f(x)$ defined on the interval $x \in [a, b]$, the inner product can be defined as

$\langle f|g\rangle = \int_a^b f^*(x)\, g(x)\, dx$.   (22)

More generally, one can define

$\langle f|g\rangle = \int_a^b f^*(x)\, g(x)\, w(x)\, dx$,   (23)

where $w(x)$ is some fixed positive real function.

Finite-dimensional inner-product vector space

Linear combination. Span. Suppose we have a set of $n$ vectors $\{|\phi_j\rangle\}$, $(j = 1, 2, \ldots, n)$, of a vector space $V$. The following vector,

$|a\rangle = \sum_{j=1}^{n} c_j\, |\phi_j\rangle$,   (24)

where the $c_j$'s are some complex numbers, is called a linear combination of the vectors $\{|\phi_j\rangle\}$. As is directly seen from the axioms of the vector space, all the linear combinations of the given set $\{|\phi_j\rangle\}$ form some vector space $\tilde{V}$; for such a space we will be using the term subspace, to stress the fact that $\tilde{V} \subseteq V$. The vector space $\tilde{V}$ is referred to as the span of the vectors $\{|\phi_j\rangle\}$. The same idea is also expressed by saying that the vectors $\{|\phi_j\rangle\}$ span the subspace $\tilde{V}$.

Basis. If $\tilde{V} = V$, that is, if any vector of the space $V$ can be represented as a linear combination of the vectors $\{|\phi_j\rangle\}$, then the set $\{|\phi_j\rangle\}$ is called a basis and the space $V$ is a finite-dimensional space. [We can also put this definition as follows: a finite-dimensional space is a space spanned by a finite number of vectors.] For any finite-dimensional vector space there is a minimal possible number $n$ of vectors forming a basis. This number is called the dimensionality of the space, and, correspondingly, the space is called an $n$-dimensional space.

Linear independence of a system of vectors is a very important notion, directly relevant to the notion of the dimensionality of the vector space. There are two equivalent definitions of linear independence of a system of vectors $\{|\phi_j\rangle\}$. The first definition says that the system is linearly independent if none of its vectors can be expressed as a linear combination of the others.
The second de¯nition says that the system is linear independent if the equality Xn cjjÁji = j0i (25) j=1 is only possible if all numbers cj are equal to zero. Problem 10. Prove the equivalence of the two de¯nitions. In an n-dimensional vector space, all the vectors of an n-vector basis fjÁjig are linear independent. Indeed, if the vectors fjÁjig were linear dependent (= not linear independent ), then|in accordance with one of the de¯nitions of linear independence|some vector vector jÁj0 i could be expressed as a linear combination of the other vectors of the basis, and we would get a basis of (n ¡ 1) vectors|all the vectors of the original basis, but the vector jÁj0 i. But this is impossible by de¯nition of the dimensionality saying that n is the minimal possible number for the basis elements! Without loss of generality we may deal only with a linear-independent basis, since we can always eliminate linear dependent vectors. In what follows, we assume that our basis is linear independent. Problem 11. Show that if the basis fjÁjig is linear independent, then for any vector jxi the coe±cients cj of the expansion Xn jxi = cjjÁji (26) j=1 are unique. Orthonormal basis. With the inner-product structure, we can introduce a very convenient for applications type of basis. The two vectors are called orthogonal if their inner product equals zero. Correspondingly, a basis is called orthogonal if all its vectors are orthogonal to each other. A basis is called normal if for each its vector the product of this vector with itself equals unity. An orthonormal basis (ONB) is a basis which is both orthogonal and 5 normal. Summarizing, the basis fjejig is orthonormal if h ei j ej i = ±ij ; (27) where ±ij is the Kronecker delta symbol, equal to one for i = j, and zero for i 6= j. 
An extremely important feature of ONBs is that $\forall\,|x\rangle \in V$ the coefficients $x_j$ (referred to as components of the vector $|x\rangle$ with respect to the given basis) in the expansion

$|x\rangle = \sum_{j=1}^{n} x_j\, |e_j\rangle$   (28)

are given by the inner products of the vector $|x\rangle$ with the basis vectors:

$x_j = \langle e_j|x\rangle$.   (29)

This relation is readily seen by constructing the inner products of the r.h.s. of (28) with the basis vectors.

Gram-Schmidt orthonormalization procedure. Starting from an arbitrary basis $\{|\phi_j\rangle\}$, one can always construct an orthonormal basis $\{|e_j\rangle\}$. This is done by the following procedure:

$|\tilde{e}_1\rangle = |\phi_1\rangle$,   $|e_1\rangle = |\tilde{e}_1\rangle / \sqrt{\langle \tilde{e}_1|\tilde{e}_1\rangle}$,

$|\tilde{e}_2\rangle = |\phi_2\rangle - \langle e_1|\phi_2\rangle\, |e_1\rangle$,   $|e_2\rangle = |\tilde{e}_2\rangle / \sqrt{\langle \tilde{e}_2|\tilde{e}_2\rangle}$,

$|\tilde{e}_3\rangle = |\phi_3\rangle - \langle e_1|\phi_3\rangle\, |e_1\rangle - \langle e_2|\phi_3\rangle\, |e_2\rangle$,   $|e_3\rangle = |\tilde{e}_3\rangle / \sqrt{\langle \tilde{e}_3|\tilde{e}_3\rangle}$,

$\cdots$

$|\tilde{e}_n\rangle = |\phi_n\rangle - \langle e_1|\phi_n\rangle\, |e_1\rangle - \ldots - \langle e_{n-1}|\phi_n\rangle\, |e_{n-1}\rangle$,   $|e_n\rangle = |\tilde{e}_n\rangle / \sqrt{\langle \tilde{e}_n|\tilde{e}_n\rangle}$.

By construction, each successive vector $|\tilde{e}_{j_0}\rangle$ is orthogonal to all previous vectors $|e_j\rangle$, and then it is just properly normalized to yield $|e_{j_0}\rangle$.

Isomorphism. With the tool of the orthonormal basis, it is easy to show that each $n$-dimensional inner-product space is isomorphic (the precise meaning of this word will become clear later) to the above-introduced vector space $\mathbb{C}^n$.
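The Gram-Schmidt steps translate directly into code. Below is a minimal Python sketch of ours for vectors in $\mathbb{C}^n$ represented as tuples; it assumes the input vectors are linearly independent, so no intermediate norm vanishes:

```python
import math

# Gram-Schmidt orthonormalization in C^n, following the procedure above:
# subtract projections <e_j|phi> |e_j> onto previously built vectors,
# then normalize by sqrt(<e~|e~>).

def inner(a, b):
    """<a|b> = sum_j conj(a_j) * b_j, Eq. (20)."""
    return sum(aj.conjugate() * bj for aj, bj in zip(a, b))

def gram_schmidt(phis):
    """Return an orthonormal basis spanning the same subspace as phis."""
    es = []
    for phi in phis:
        tilde = list(phi)
        for e in es:
            c = inner(e, phi)  # projection coefficient <e|phi>
            tilde = [t - c * ej for t, ej in zip(tilde, e)]
        norm = math.sqrt(inner(tilde, tilde).real)
        es.append(tuple(t / norm for t in tilde))
    return es

e1, e2 = gram_schmidt([(1 + 0j, 1 + 0j), (0j, 2 + 0j)])
# Orthonormality, Eq. (27): <e_i|e_j> = delta_ij (up to rounding).
assert abs(inner(e1, e1) - 1) < 1e-12
assert abs(inner(e2, e2) - 1) < 1e-12
assert abs(inner(e1, e2)) < 1e-12
```

Note the order of arguments in `inner(e, phi)`: by Eq. (16) the inner product is linear in its second slot, which is exactly what the projection coefficients $\langle e_j|\phi\rangle$ in the procedure require.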