
Physics 116A Winter 2011

Vector coordinates, matrix elements and changes of basis

1. Coordinates of vectors and matrix elements of linear operators

Let V be an n-dimensional real (or complex) vector space. Vectors that live in V are usually represented by a single column of n real (or complex) numbers. A linear transformation (also called a linear operator) acting on V is a "machine" that acts on a vector and produces another vector. Linear operators are represented by square n × n real (or complex) matrices.∗

If it is not specified, the representations of vectors and matrices described above implicitly assume that the standard basis has been chosen. That is, all vectors in V can be expressed as linear combinations of basis vectors:†

$$\mathcal{B}_s = \{\hat{e}_1, \hat{e}_2, \hat{e}_3, \ldots, \hat{e}_n\} = \{(1,0,0,\ldots,0)^T,\ (0,1,0,\ldots,0)^T,\ (0,0,1,\ldots,0)^T,\ \ldots,\ (0,0,0,\ldots,1)^T\}.$$

The subscript s indicates that this is the standard basis. The superscript T (which stands for transpose) turns the row vectors into column vectors. Thus,

$$\vec{v} = \begin{pmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_n \end{pmatrix} = v_1\begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + v_2\begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} + v_3\begin{pmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} + \cdots + v_n\begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}.$$

The $v_i$ are the components of the vector $\vec{v}$. However, it is more precise to say that the $v_i$ are the coordinates of the abstract vector $\vec{v}$ with respect to the standard basis.

Consider a linear operator A. The corresponding matrix representation is given by $A = [a_{ij}]$. For example, if $\vec{w} = A\vec{v}$, then

$$w_i = \sum_{j=1}^n a_{ij} v_j \,, \qquad\qquad (1)$$

where $v_i$ and $w_i$ are the coordinates of $\vec{v}$ and $\vec{w}$ with respect to the standard basis and $a_{ij}$ are the matrix elements of A with respect to the standard basis.

∗We can generalize this slightly by viewing a linear operator as a function whose input is taken from vectors that live in V and whose output is a vector that lives in another vector space W. If V is n-dimensional and W is m-dimensional, then a linear operator is represented by an m × n real (or complex) matrix. In these notes, we will simplify the discussion by always taking W = V.
†If $V = \mathbb{R}^3$ (i.e., three-dimensional Euclidean space), then it is traditional to designate $\hat{e}_1 = \hat{i}$, $\hat{e}_2 = \hat{j}$ and $\hat{e}_3 = \hat{k}$.

If we express $\vec{v}$ and $\vec{w}$ as linear combinations of basis vectors,

$$\vec{v} = \sum_{j=1}^n v_j \hat{e}_j \,, \qquad\qquad \vec{w} = \sum_{i=1}^n w_i \hat{e}_i \,,$$

then $\vec{w} = A\vec{v}$ implies that

$$\sum_{i=1}^n \sum_{j=1}^n a_{ij} v_j \hat{e}_i = A \sum_{j=1}^n v_j \hat{e}_j \,,$$

where we have used eq. (1) to substitute for $w_i$. It follows that:

$$\sum_{j=1}^n \left( A\hat{e}_j - \sum_{i=1}^n a_{ij}\hat{e}_i \right) v_j = 0 \,. \qquad\qquad (2)$$

Eq. (2) must be true for any vector $\vec{v} \in V$; that is, for any choice of coordinates $v_j$. Thus, the coefficient inside the parentheses in eq. (2) must vanish. We conclude that:

$$A\hat{e}_j = \sum_{i=1}^n a_{ij} \hat{e}_i \,. \qquad\qquad (3)$$

Eq. (3) can be used as the definition of the matrix elements $a_{ij}$ of a linear operator A with respect to the standard basis.

There is nothing sacrosanct about the choice of the standard basis. One can expand a vector as a linear combination of any set of n linearly independent vectors. Thus, we generalize the above discussion by introducing a basis

$$\mathcal{B} = \{\vec{b}_1, \vec{b}_2, \vec{b}_3, \ldots, \vec{b}_n\}.$$

For any vector $\vec{v} \in V$, we can find a unique set of coefficients $v_i$ such that

$$\vec{v} = \sum_{j=1}^n v_j \vec{b}_j \,. \qquad\qquad (4)$$

The $v_i$ are the coordinates of $\vec{v}$ with respect to the basis $\mathcal{B}$. Likewise, for any linear operator A,

$$A\vec{b}_j = \sum_{i=1}^n a_{ij} \vec{b}_i \qquad\qquad (5)$$

defines the matrix elements of the linear operator A with respect to the basis $\mathcal{B}$. Clearly, these more general definitions reduce to the previous ones given in the case of the standard basis.

Moreover, we can easily compute $A\vec{v} \equiv \vec{w}$ using the results of eqs. (4) and (5):

$$A\vec{v} = \sum_{i=1}^n \sum_{j=1}^n a_{ij} v_j \vec{b}_i = \sum_{i=1}^n \left( \sum_{j=1}^n a_{ij} v_j \right) \vec{b}_i = \sum_{i=1}^n w_i \vec{b}_i = \vec{w} \,,$$

which implies that the coordinates of the vector $\vec{w} = A\vec{v}$ with respect to the basis $\mathcal{B}$ are given by:

$$w_i = \sum_{j=1}^n a_{ij} v_j \,.$$

Thus, the relation between the coordinates of $\vec{v}$ and $\vec{w}$ with respect to the basis $\mathcal{B}$ is the same as the relation obtained with respect to the standard basis [see eq. (1)]. One must simply be consistent and always employ the same basis for defining the vector coordinates and the matrix elements of a linear operator.
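Remark: eq. (1) is exactly what numerical linear algebra libraries implement. Once a basis has been fixed, applying a linear operator to a vector reduces to multiplying the coordinate column by the matrix of the operator. The following minimal numpy sketch illustrates this; the particular matrix and vector are arbitrary examples, not taken from the text above.

```python
import numpy as np

# Matrix elements a_ij of some operator A and coordinates v_j of some vector,
# both with respect to the same (fixed) basis.  Arbitrary example values.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, -1.0],
              [4.0, 0.0, 1.0]])
v = np.array([1.0, -2.0, 5.0])

# Eq. (1):  w_i = sum_j a_ij v_j  -- ordinary matrix-vector multiplication.
w = A @ v
print(w)   # the coordinates of w = A v with respect to the same basis
```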

2. Change of basis and its effects on coordinates and matrix elements

The choice of basis is arbitrary. The existence of vectors and linear operators does not depend on the choice of basis. However, a choice of basis is very convenient, since it permits explicit calculations involving vectors and matrices. Suppose we start with some basis choice $\mathcal{B}$ and then later decide to employ a different basis choice $\mathcal{C}$:

$$\mathcal{C} = \{\vec{c}_1, \vec{c}_2, \vec{c}_3, \ldots, \vec{c}_n\}.$$

In particular, suppose $\mathcal{B} = \mathcal{B}_s$ is the standard basis. Then changing from $\mathcal{B}_s$ to $\mathcal{C}$ is geometrically equivalent to starting with a definition of the x, y and z axes, and then defining a new set of axes. Note that we have not yet introduced the concept of an inner product or norm, so there is no concept of orthogonality or unit vectors. The new set of axes may be quite skewed (although such a concept also requires an inner product).

Thus, we pose the following question. If the coordinates of a vector $\vec{v}$ and the matrix elements of a linear operator A are known with respect to a basis $\mathcal{B}$ (which need not be the standard basis), what are the coordinates of the vector $\vec{v}$ and the matrix elements of the linear operator A with respect to a basis $\mathcal{C}$? To answer this question, we must describe the relation between $\mathcal{B}$ and $\mathcal{C}$. We do this as follows. The basis vectors of $\mathcal{C}$ can be expressed as linear combinations of the basis vectors $\vec{b}_i$, since the latter span the vector space V. We shall denote these coefficients as follows:

$$\vec{c}_j = \sum_{i=1}^n P_{ij} \vec{b}_i \,, \qquad j = 1, 2, 3, \ldots, n. \qquad\qquad (6)$$

Note that eq. (6) is a shorthand for n separate equations, and provides the coefficients $P_{i1}, P_{i2}, \ldots, P_{in}$ needed to expand $\vec{c}_1, \vec{c}_2, \ldots, \vec{c}_n$, respectively, as linear combinations of the $\vec{b}_i$. We can assemble the $P_{ij}$ into a matrix. A crucial observation is that this matrix P is invertible. This must be true, since one can reverse the process and express the basis vectors of $\mathcal{B}$ as linear combinations of the basis vectors $\vec{c}_i$ (which again follows from the fact that the latter span the vector space V). Explicitly,

$$\vec{b}_k = \sum_{j=1}^n (P^{-1})_{jk} \vec{c}_j \,, \qquad k = 1, 2, 3, \ldots, n. \qquad\qquad (7)$$

We are now in a position to determine the coordinates of a vector $\vec{v}$ and the matrix elements of a linear operator A with respect to a basis $\mathcal{C}$. Assume that the coordinates of $\vec{v}$ with respect to $\mathcal{B}$ are given by $v_i$ and the matrix elements of A with respect to $\mathcal{B}$ are given by $a_{ij}$. With respect to $\mathcal{C}$, we shall denote the vector coordinates by $v_i'$ and the matrix elements by $a_{ij}'$. Then, using the definition of vector coordinates [eq. (4)] and matrix elements [eq. (5)],

$$\vec{v} = \sum_{j=1}^n v_j' \vec{c}_j = \sum_{j=1}^n v_j' \sum_{i=1}^n P_{ij} \vec{b}_i = \sum_{i=1}^n \left( \sum_{j=1}^n P_{ij} v_j' \right) \vec{b}_i = \sum_{i=1}^n v_i \vec{b}_i \,, \qquad\qquad (8)$$

where we have used eq. (6) to express the $\vec{c}_j$ in terms of the $\vec{b}_i$. The last step in eq. (8) can be rewritten as:

$$\sum_{i=1}^n \left( v_i - \sum_{j=1}^n P_{ij} v_j' \right) \vec{b}_i = 0 \,. \qquad\qquad (9)$$

Since the $\vec{b}_i$ are linearly independent, the coefficient inside the parentheses in eq. (9) must vanish. Hence,

$$v_i = \sum_{j=1}^n P_{ij} v_j' \,, \qquad \text{or equivalently} \qquad [\vec{v}]_{\mathcal{B}} = P\,[\vec{v}]_{\mathcal{C}} \,. \qquad\qquad (10)$$

Here we have introduced the notation $[\vec{v}]_{\mathcal{B}}$ to indicate the vector $\vec{v}$ represented in terms of its coordinates with respect to the basis $\mathcal{B}$. Inverting this result yields:

$$v_j' = \sum_{k=1}^n (P^{-1})_{jk} v_k \,, \qquad \text{or equivalently} \qquad [\vec{v}]_{\mathcal{C}} = P^{-1}[\vec{v}]_{\mathcal{B}} \,. \qquad\qquad (11)$$

Thus, we have determined the relation between the coordinates of $\vec{v}$ with respect to the bases $\mathcal{B}$ and $\mathcal{C}$.

A similar computation can determine the relation between the matrix elements of A with respect to the basis $\mathcal{B}$, which we denote by $a_{ij}$ [see eq. (5)], and the matrix elements of A with respect to the basis $\mathcal{C}$, which we denote by $a_{ij}'$:

$$A\vec{c}_j = \sum_{i=1}^n a_{ij}' \vec{c}_i \,. \qquad\qquad (12)$$

The desired relation can be obtained by evaluating $A\vec{b}_\ell$:

$$A\vec{b}_\ell = A\sum_{j=1}^n (P^{-1})_{j\ell}\,\vec{c}_j = \sum_{j=1}^n (P^{-1})_{j\ell}\, A\vec{c}_j = \sum_{j=1}^n (P^{-1})_{j\ell} \sum_{i=1}^n a_{ij}' \vec{c}_i$$
$$= \sum_{j=1}^n \sum_{i=1}^n (P^{-1})_{j\ell}\, a_{ij}' \sum_{k=1}^n P_{ki}\,\vec{b}_k = \sum_{k=1}^n \left( \sum_{i=1}^n \sum_{j=1}^n P_{ki}\, a_{ij}' (P^{-1})_{j\ell} \right) \vec{b}_k \,,$$

where we have used eqs. (6) and (7) and the definition of the matrix elements of A with respect to the basis $\mathcal{C}$ [eq. (12)]. Comparing this result with eq. (5), it follows that

$$\sum_{k=1}^n \left( a_{k\ell} - \sum_{i=1}^n \sum_{j=1}^n P_{ki}\, a_{ij}' (P^{-1})_{j\ell} \right) \vec{b}_k = 0 \,.$$

Since the $\vec{b}_k$ are linearly independent, we conclude that

$$a_{k\ell} = \sum_{i=1}^n \sum_{j=1}^n P_{ki}\, a_{ij}' (P^{-1})_{j\ell} \,.$$

The double sum above corresponds to the product of three matrices, so it is convenient to write this result symbolically as:

$$[A]_{\mathcal{B}} = P\,[A]_{\mathcal{C}}\,P^{-1} \,. \qquad\qquad (13)$$

The meaning of this equation is that the matrix formed by the matrix elements of A with respect to the basis $\mathcal{B}$ is related to the matrix formed by the matrix elements of A with respect to the basis $\mathcal{C}$ by the similarity transformation given by eq. (13). We can invert eq. (13) to obtain:

$$[A]_{\mathcal{C}} = P^{-1}[A]_{\mathcal{B}}\,P \,. \qquad\qquad (14)$$

In fact, there is a much faster method to derive eqs. (13) and (14). Consider the equation $\vec{w} = A\vec{v}$ evaluated with respect to the bases $\mathcal{B}$ and $\mathcal{C}$, respectively:

$$[\vec{w}]_{\mathcal{B}} = [A]_{\mathcal{B}}\,[\vec{v}]_{\mathcal{B}} \,, \qquad\qquad [\vec{w}]_{\mathcal{C}} = [A]_{\mathcal{C}}\,[\vec{v}]_{\mathcal{C}} \,.$$

Using eq. (10), $[\vec{w}]_{\mathcal{B}} = [A]_{\mathcal{B}}\,[\vec{v}]_{\mathcal{B}}$ can be rewritten as:

$$P\,[\vec{w}]_{\mathcal{C}} = [A]_{\mathcal{B}}\,P\,[\vec{v}]_{\mathcal{C}} \,.$$

Hence,

$$[\vec{w}]_{\mathcal{C}} = [A]_{\mathcal{C}}\,[\vec{v}]_{\mathcal{C}} = P^{-1}[A]_{\mathcal{B}}\,P\,[\vec{v}]_{\mathcal{C}} \,.$$

It then follows that

$$\left( [A]_{\mathcal{C}} - P^{-1}[A]_{\mathcal{B}}\,P \right) [\vec{v}]_{\mathcal{C}} = 0 \,. \qquad\qquad (15)$$

Since this equation must be true for all $\vec{v} \in V$ (and thus for any choice of $[\vec{v}]_{\mathcal{C}}$), it follows that the quantity inside the parentheses in eq. (15) must vanish. This yields eq. (14).

The significance of eq. (14) is as follows. If two matrices are related by a similarity transformation, then these matrices may represent the same linear operator with respect to two different choices of basis. These two choices are related by eq. (6). However, it would not be correct to conclude that two matrices that are related by a similarity transformation cannot represent different linear operators. In fact, one could also interpret these two matrices as representing (with respect to the same basis) two different linear operators that are related by a similarity transformation. That is, given two linear operators A and B and an invertible linear operator P, it is clear that if $B = P^{-1}AP$ then the matrix elements of A and B with respect to a fixed basis are related by the same similarity transformation.
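Before turning to an explicit example, here is a brief numerical sketch of eqs. (10), (11) and (14) using numpy. The operator, the change-of-basis matrix P and the vector are random, hypothetical examples; the point is only that computing w = A v in either basis gives coordinates related by P, as derived above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

A_B = rng.standard_normal((n, n))   # [A]_B : matrix elements w.r.t. the basis B
P = rng.standard_normal((n, n))     # change of basis, eq. (6); invertible with probability 1
v_C = rng.standard_normal(n)        # coordinates [v]_C of some vector w.r.t. the basis C

v_B = P @ v_C                        # eq. (10):  [v]_B = P [v]_C
A_C = np.linalg.inv(P) @ A_B @ P     # eq. (14):  [A]_C = P^{-1} [A]_B P

# Consistency check [eq. (11)]: the coordinates of w = A v computed in either
# basis are related by the same change-of-basis matrix.
w_B = A_B @ v_B
w_C = A_C @ v_C
print(np.allclose(np.linalg.inv(P) @ w_B, w_C))   # True
```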

Example: Let $\mathcal{B}$ be the standard basis and let $\mathcal{C} = \{(1,0,0),\ (1,1,0),\ (1,1,1)\}$. Given a linear operator A whose matrix elements with respect to the basis $\mathcal{B}$ are:

$$[A]_{\mathcal{B}} = \begin{pmatrix} 1 & 2 & -1 \\ 0 & -1 & 0 \\ 1 & 0 & 7 \end{pmatrix},$$

we shall determine $[A]_{\mathcal{C}}$. First, we need to work out P. Noting that:

$$\vec{c}_1 = \vec{b}_1 \,, \qquad \vec{c}_2 = \vec{b}_1 + \vec{b}_2 \,, \qquad \vec{c}_3 = \vec{b}_1 + \vec{b}_2 + \vec{b}_3 \,,$$

it follows from eq. (6) that

$$P = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$

Inverting, $\vec{b}_1 = \vec{c}_1$, $\vec{b}_2 = \vec{c}_2 - \vec{c}_1$, and $\vec{b}_3 = \vec{c}_3 - \vec{c}_2$, so that eq. (7) yields:

$$P^{-1} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}.$$

Thus, using eq. (14), we obtain:

$$[A]_{\mathcal{C}} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 & -1 \\ 0 & -1 & 0 \\ 1 & 0 & 7 \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 4 & 3 \\ -1 & -2 & -9 \\ 1 & 1 & 8 \end{pmatrix}.$$
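A quick numerical check of this example with numpy, simply re-evaluating eq. (14) with the matrices written above (this is only a sanity check, not part of the original computation):

```python
import numpy as np

A_B = np.array([[1,  2, -1],
                [0, -1,  0],
                [1,  0,  7]])
P = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])

A_C = np.linalg.inv(P) @ A_B @ P   # eq. (14)
print(A_C)
# [[ 1.  4.  3.]
#  [-1. -2. -9.]
#  [ 1.  1.  8.]]
```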

3. Application to matrix diagonalization

Consider a matrix $A \equiv [A]_{\mathcal{B}_s}$, whose matrix elements are defined with respect to the standard basis, $\mathcal{B}_s = \{\hat{e}_1, \hat{e}_2, \hat{e}_3, \ldots, \hat{e}_n\}$. The eigenvalue problem for the matrix A consists of finding all complex $\lambda_i$ such that

$$A\vec{v}_j = \lambda_j \vec{v}_j \,, \qquad \vec{v}_j \neq 0 \quad \text{for } j = 1, 2, \ldots, n. \qquad\qquad (16)$$

The $\lambda_i$ are the roots of the characteristic equation $\det(A - \lambda I) = 0$. This is an nth order polynomial equation which has n (possibly complex) roots, although some of the roots could be degenerate. If the roots are non-degenerate, then A is called simple. In this case, the n eigenvectors are linearly independent and span the vector space V.‡ If some of the roots are degenerate, then the corresponding n eigenvectors may or may not be linearly independent. In general, if A possesses n linearly independent eigenvectors, then A is called semi-simple.§ If some of the eigenvalues of A are degenerate and its eigenvectors do not span the vector space V, then we say that A is defective. A is diagonalizable if and only if it is semi-simple.

Since the eigenvectors of a semi-simple matrix A span the vector space V, we may define a new basis made up of the eigenvectors of A, which we shall denote by $\mathcal{C} = \{\vec{v}_1, \vec{v}_2, \vec{v}_3, \ldots, \vec{v}_n\}$. The matrix elements of A with respect to the basis $\mathcal{C}$, denoted by $[A]_{\mathcal{C}}$, are obtained by employing eq. (12):

$$A\vec{v}_j = \sum_{i=1}^n a_{ij}' \vec{v}_i \,.$$

But, eq. (16) implies that $a_{ij}' = \lambda_j \delta_{ij}$. That is,

$$[A]_{\mathcal{C}} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$

The relation between A and $[A]_{\mathcal{C}}$ can be obtained from eq. (14). Thus, we must determine the matrix P that governs the relation between $\mathcal{B}_s$ and $\mathcal{C}$ [eq. (6)]. Consider the coordinates of $\vec{v}_j$ with respect to the standard basis $\mathcal{B}_s$:

$$\vec{v}_j = \sum_{i=1}^n (\vec{v}_j)_i\, \hat{e}_i = \sum_{i=1}^n P_{ij}\, \hat{e}_i \,, \qquad\qquad (17)$$

where $(\vec{v}_j)_i$ is the ith coordinate of the jth eigenvector. Using eq. (17), we identify $P_{ij} = (\vec{v}_j)_i$. In matrix form,

$$P = \begin{pmatrix} (v_1)_1 & (v_2)_1 & \cdots & (v_n)_1 \\ (v_1)_2 & (v_2)_2 & \cdots & (v_n)_2 \\ \vdots & \vdots & \ddots & \vdots \\ (v_1)_n & (v_2)_n & \cdots & (v_n)_n \end{pmatrix}.$$

‡This result is proved in the appendix to these notes.
§Note that if A is semi-simple, then A is also simple only if the eigenvalues of A are distinct.

Finally, we use eq. (14) to conclude that $[A]_{\mathcal{C}} = P^{-1}[A]_{\mathcal{B}_s}P$. If we denote the diagonalized matrix by $D \equiv [A]_{\mathcal{C}}$ and the matrix A with respect to the standard basis by $A \equiv [A]_{\mathcal{B}_s}$, then

$$P^{-1} A P = D \,, \qquad\qquad (18)$$

where P is the matrix whose columns are the eigenvectors of A and D is the diagonal matrix whose diagonal elements are the eigenvalues of A. Thus, we have succeeded in diagonalizing an arbitrary semi-simple matrix.

If the eigenvectors of A do not span the vector space V (i.e., A is defective), then A is not diagonalizable.¶ That is, there does not exist a matrix P and a diagonal matrix D such that eq. (18) is satisfied.
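As a concrete illustration of eq. (18), the sketch below diagonalizes the matrix $[A]_{\mathcal{B}}$ from the example of section 2. One can check that its eigenvalues, $-1$ and $4 \pm 2\sqrt{2}$, are distinct, so the matrix is simple (hence semi-simple) and diagonalizable. The eigenvector matrix returned by numpy plays the role of P in eq. (18).

```python
import numpy as np

A = np.array([[1.0,  2.0, -1.0],
              [0.0, -1.0,  0.0],
              [1.0,  0.0,  7.0]])

eigvals, P = np.linalg.eig(A)    # columns of P are the eigenvectors of A
D = np.linalg.inv(P) @ A @ P     # eq. (18):  P^{-1} A P = D

print(np.round(D, 10))           # diagonal matrix with the eigenvalues on the diagonal
print(eigvals)                   # -1 and 4 +/- 2*sqrt(2), in some order
```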

4. Implications of the inner product

Nothing in sections 1–3 requires the existence of an inner product. However, if an inner product is defined, then the vector space V is promoted to an inner product space. In this case, we can define the concepts of orthogonality and orthonormality. In particular, given an arbitrary basis $\mathcal{B}$, we can use the Gram-Schmidt process to construct an orthonormal basis. Thus, when considering inner product spaces, it is convenient to always choose an orthonormal basis.

Even with the restriction of an orthonormal basis, one can examine the effect of changing basis from one orthonormal basis to another. All the considerations of section 2 apply, with the constraint that the matrix P is now a unitary matrix.‖ Namely, the transformation between any two orthonormal bases is always unitary.

The following question naturally arises: which matrices have the property that their eigenvectors comprise an orthonormal basis that spans the inner product space V? This question is answered by eq. (11.28) on p. 154 of Boas, which states that:

A matrix can be diagonalized by a unitary similarity transformation if and only if it is normal, i.e. if the matrix commutes with its hermitian conjugate.∗∗

Then, following the arguments of section 3, it follows that for any normal matrix A (which satisfies $AA^\dagger = A^\dagger A$), there exists a diagonalizing matrix U such that

$$U^\dagger A U = D \,,$$

where U is the unitary matrix ($U^\dagger = U^{-1}$) whose columns are the orthonormal eigenvectors of A and D is the diagonal matrix whose diagonal elements are the eigenvalues of A.
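The sketch below checks this statement numerically for a real normal (but not symmetric) matrix. Because its eigenvalues are distinct, the unit-norm eigenvectors returned by numpy are automatically orthogonal, so the eigenvector matrix is unitary; this is an illustrative check for that special case, not a general-purpose routine for degenerate normal matrices.

```python
import numpy as np

# A real normal matrix that is not symmetric: rotation by 90 degrees in the plane.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)    # A commutes with its hermitian conjugate

eigvals, U = np.linalg.eig(A)                         # eigenvalues +i and -i (distinct)
assert np.allclose(U.conj().T @ U, np.eye(2))         # U is unitary in this case

D = U.conj().T @ A @ U                                # U^dagger A U = D
print(np.round(D, 10))                                # diag(+i, -i), up to ordering
```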

¶The simplest example of a defective matrix is $B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. One can quickly check that the eigenvalues of B are given by the double root $\lambda = 0$ of the characteristic equation. However, solving the eigenvalue equation, $B\vec{v} = 0$, yields only one linearly independent eigenvector, $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. One can verify explicitly that no matrix P exists such that $P^{-1}BP$ is diagonal.
‖In a real inner product space, a unitary transformation is real and hence is an orthogonal transformation.
∗∗A proof of this statement can be found in Philip A. Macklin, "Normal Matrices for Physicists," American Journal of Physics 52, 513–515 (1984). A link to this article can be found in Section VIII of the course website.

5. Application to three-dimensional rotation matrices

Let $R(\hat{n}, \theta)$ represent an active rotation by an angle θ (in the counterclockwise direction) about a fixed axis that points along the unit vector $\hat{n}$. Given a 3 × 3 orthogonal matrix R with det R = 1, we know that it must correspond to $R(\hat{n}, \theta)$ for some $\hat{n}$ and θ. How can we determine $\hat{n}$ and θ? On p. 156 of Boas, the following strategy is suggested. First, we note that

$$R(\hat{n}, \theta)\,\hat{n} = \hat{n} \,,$$

since any vector pointing along the axis of rotation is not rotated under the application of the rotation $R(\hat{n}, \theta)$. Thus, $\hat{n}$ is the normalized eigenvector of $R(\hat{n}, \theta)$ corresponding to the eigenvalue +1.

In order to determine θ, Boas proposes the following procedure. The matrix R provides the matrix elements of the rotation operator with respect to the standard basis $\mathcal{B}_s$. If one defines a new basis such that $\hat{n}$ points along the new y-axis,†† then the matrix elements of the rotation operator with respect to the new basis will have a simple form in which θ can be determined by inspection.

To illustrate this strategy with a concrete example, I shall solve problem 3.11–54 on p. 161 of Boas, which poses the following question. Show that the matrix,

$$R = \frac{1}{2}\begin{pmatrix} 1 & \sqrt{2} & -1 \\ \sqrt{2} & 0 & \sqrt{2} \\ 1 & -\sqrt{2} & -1 \end{pmatrix}, \qquad\qquad (19)$$

is orthogonal and find the rotation and/or reflection it produces as an operator acting on vectors. If a rotation, find the axis and angle.

First we check that R is orthogonal:

$$R^T R = \frac{1}{4}\begin{pmatrix} 1 & \sqrt{2} & 1 \\ \sqrt{2} & 0 & -\sqrt{2} \\ -1 & \sqrt{2} & -1 \end{pmatrix}\begin{pmatrix} 1 & \sqrt{2} & -1 \\ \sqrt{2} & 0 & \sqrt{2} \\ 1 & -\sqrt{2} & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I \,.$$

Second, we compute the determinant by the expansion of the second row by cofactors:

$$\det R = \frac{1}{8}\begin{vmatrix} 1 & \sqrt{2} & -1 \\ \sqrt{2} & 0 & \sqrt{2} \\ 1 & -\sqrt{2} & -1 \end{vmatrix} = -\frac{\sqrt{2}}{8}\begin{vmatrix} \sqrt{2} & -1 \\ -\sqrt{2} & -1 \end{vmatrix} - \frac{\sqrt{2}}{8}\begin{vmatrix} 1 & \sqrt{2} \\ 1 & -\sqrt{2} \end{vmatrix} = -\frac{\sqrt{2}}{8}\left[-2\sqrt{2} - 2\sqrt{2}\right] = \frac{\sqrt{2}}{8}\left[4\sqrt{2}\right] = +1 \,.$$

††Alternatively, one could choose a basis in which $\hat{n}$ lies along either the new x-axis or the new z-axis, with obvious modifications to the subsequent computations.

Since det R = +1, we conclude that R is a proper rotation. To determine the axis of rotation, we solve the eigenvalue problem and identify the eigenvector corresponding to the eigenvalue +1. We determine the eigenvector by the usual method:

$$\frac{1}{2}\begin{pmatrix} 1 & \sqrt{2} & -1 \\ \sqrt{2} & 0 & \sqrt{2} \\ 1 & -\sqrt{2} & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \end{pmatrix}.$$

The resulting equations are:

$$\tfrac{1}{2}(-x + \sqrt{2}\,y - z) = 0 \,,$$
$$\tfrac{1}{2}(\sqrt{2}\,x - 2y + \sqrt{2}\,z) = 0 \,,$$
$$\tfrac{1}{2}(x - \sqrt{2}\,y - 3z) = 0 \,.$$

The first and second equations are proportional, so there are only two independent equations. Adding the first and third equations yields z = 0. Substituting back into the first equation yields $x = \sqrt{2}\,y$. Normalizing the eigenvector, which we identify as the axis of rotation $\hat{n}$, we find

$$\hat{n} = \frac{1}{\sqrt{3}}\begin{pmatrix} \sqrt{2} \\ 1 \\ 0 \end{pmatrix}, \qquad\qquad (20)$$

up to an overall sign that is not fixed by the eigenvalue equation. That is, the sign choice employed in eq. (20) is a matter of convention. With the choice of overall sign specified in eq. (20), the axis of rotation lies along the unit vector

$$\hat{n} = \frac{1}{\sqrt{3}}\left[\sqrt{2}\,\hat{i} + \hat{j}\right].$$

To obtain the angle of rotation, we proceed as follows. The orthonormal standard basis is:

$$\mathcal{B}_s = \{\hat{i}, \hat{j}, \hat{k}\}. \qquad\qquad (21)$$

We can consider a new basis obtained from the old basis by a rotation about the z-axis, such that $\hat{n}$ lies along the new y-axis. In performing this rotation, the z-axis is unchanged, whereas the new x-axis will point in a direction along a unit vector perpendicular to $\hat{n}$ in the x–y plane. Call the new rotated orthonormal basis $\mathcal{B}'$. Explicitly,

$$\mathcal{B}' = \left\{ \frac{1}{\sqrt{3}}\left[\hat{i} - \sqrt{2}\,\hat{j}\right],\ \frac{1}{\sqrt{3}}\left[\sqrt{2}\,\hat{i} + \hat{j}\right],\ \hat{k} \right\}. \qquad\qquad (22)$$

Note that the sign of the first unit vector has been chosen to preserve the right-handed coordinate system. This is easily checked, since the cross product of the first two vectors of $\mathcal{B}'$ yields the third vector of $\mathcal{B}'$. That is,

$$\frac{1}{\sqrt{3}}\left[\hat{i} - \sqrt{2}\,\hat{j}\right] \times \frac{1}{\sqrt{3}}\left[\sqrt{2}\,\hat{i} + \hat{j}\right] = \tfrac{1}{3}\,\hat{i}\times\hat{j} - \tfrac{2}{3}\,\hat{j}\times\hat{i} = \hat{i}\times\hat{j} = \hat{k} \,.$$

Indeed, the new y-axis (the second element of $\mathcal{B}'$) now points along $\hat{n}$. We now propose to evaluate the matrix elements of the rotation operator with respect to the new basis $\mathcal{B}'$. According to eq. (14),

$$[R]_{\mathcal{B}'} = P^{-1}[R]_{\mathcal{B}_s}P \,, \qquad\qquad (23)$$

where $[R]_{\mathcal{B}_s}$ is the rotation matrix with respect to the standard basis, and $[R]_{\mathcal{B}'}$ is the rotation matrix with respect to the new basis (in which $\hat{n}$ points along the new y-axis). To compute $[R]_{\mathcal{B}'}$, we must first determine the matrix P, whose matrix elements are defined by [cf. eq. (6)]:

$$\vec{b}_j' = \sum_{i=1}^n P_{ij}\, \hat{e}_i \,, \qquad\qquad (24)$$

where the $\hat{e}_i$ are the basis vectors of $\mathcal{B}_s$ and the $\vec{b}_j'$ are the basis vectors of $\mathcal{B}'$. In particular, the columns of P are the coefficients of the expansion of the new basis vectors in terms of the old basis vectors. Thus, using eqs. (22) and (24), we obtain:

$$P = \begin{pmatrix} \frac{1}{\sqrt{3}} & \sqrt{\frac{2}{3}} & 0 \\[4pt] -\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\[4pt] 0 & 0 & 1 \end{pmatrix}. \qquad\qquad (25)$$

Hence, eqs. (23) and (25) yield:

$$[R]_{\mathcal{B}'} = \begin{pmatrix} \frac{1}{\sqrt{3}} & -\sqrt{\frac{2}{3}} & 0 \\[4pt] \sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\[4pt] 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \frac{1}{2} & \frac{1}{\sqrt{2}} & -\frac{1}{2} \\[4pt] \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\[4pt] \frac{1}{2} & -\frac{1}{\sqrt{2}} & -\frac{1}{2} \end{pmatrix}\begin{pmatrix} \frac{1}{\sqrt{3}} & \sqrt{\frac{2}{3}} & 0 \\[4pt] -\sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\[4pt] 0 & 0 & 1 \end{pmatrix}$$

$$= \begin{pmatrix} \frac{1}{\sqrt{3}} & -\sqrt{\frac{2}{3}} & 0 \\[4pt] \sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} & 0 \\[4pt] 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} -\frac{1}{2\sqrt{3}} & \sqrt{\frac{2}{3}} & -\frac{1}{2} \\[4pt] \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} \\[4pt] \frac{\sqrt{3}}{2} & 0 & -\frac{1}{2} \end{pmatrix} = \begin{pmatrix} -\frac{1}{2} & 0 & -\frac{\sqrt{3}}{2} \\[4pt] 0 & 1 & 0 \\[4pt] \frac{\sqrt{3}}{2} & 0 & -\frac{1}{2} \end{pmatrix}.$$

Indeed, the matrix $[R]_{\mathcal{B}'}$ represents a rotation by some angle about the new y-axis. Comparing this result with the matrix for an active counterclockwise rotation about the y-axis by an angle θ [given by eq. (7.20) on p. 129 of Boas],

$$\begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix},$$

it follows that

$$\cos\theta = -\frac{1}{2} \quad\text{and}\quad \sin\theta = -\frac{\sqrt{3}}{2} \quad\Longrightarrow\quad \theta = 240^\circ \,.$$

Thus, we have deduced that the rotation matrix R given by eq. (19) transforms vectors via an active counterclockwise rotation by $240^\circ$ about the axis $\hat{n} = \frac{1}{\sqrt{3}}\left[\sqrt{2}\,\hat{i} + \hat{j}\right]$. As noted below eq. (20), the overall sign of $\hat{n}$ is not determined by the eigenvalue equation. In particular, it is easy to see geometrically that any rotation matrix must satisfy $R(\hat{n}, \theta) = R(-\hat{n}, -\theta) = R(-\hat{n}, 2\pi - \theta)$. Hence, one can equivalently state that the rotation matrix R given by eq. (19) transforms vectors via an active counterclockwise rotation by $120^\circ$ about the axis $\hat{n} = -\frac{1}{\sqrt{3}}\left[\sqrt{2}\,\hat{i} + \hat{j}\right]$.

For further details on three-dimensional rotation matrices, along with a simpler method for determining $\hat{n}$ and θ, see the class handout entitled, Three-dimensional proper and improper rotation matrices.
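The axis and angle obtained above are easily checked numerically. The sketch below verifies that R is orthogonal with det R = +1, extracts the eigenvector belonging to the eigenvalue +1, and recovers cos θ from the trace via the identity Tr R = 1 + 2 cos θ (a standard property of 3×3 proper rotation matrices that is not derived in these notes).

```python
import numpy as np

s = np.sqrt(2.0)
R = 0.5 * np.array([[1.0,   s, -1.0],
                    [  s, 0.0,    s],
                    [1.0,  -s, -1.0]])

print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))   # True True

# Axis of rotation: the eigenvector of R with eigenvalue +1.
eigvals, eigvecs = np.linalg.eig(R)
k = np.argmin(np.abs(eigvals - 1.0))
n_hat = np.real(eigvecs[:, k])
n_hat /= np.linalg.norm(n_hat)
print(n_hat)        # proportional to (sqrt(2), 1, 0)/sqrt(3), up to an overall sign

# Angle of rotation from the trace:  Tr R = 1 + 2 cos(theta)  =>  cos(theta) = -1/2.
cos_theta = (np.trace(R) - 1.0) / 2.0
print(np.degrees(np.arccos(cos_theta)))   # 120.0 ; whether this is 120 or 240 degrees
                                          # depends on the sign convention chosen for n_hat
```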

Appendix: Proof that the eigenvectors corresponding to distinct eigenvalues are linearly independent

The statement that ~vi is an eigenvector of A with eigenvalue λi means that

A~vi = λi~vi .

We can rewrite this condition as:

$$(A - \lambda_i I)\vec{v}_i = 0 \,.$$

To prove that the eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ corresponding to distinct eigenvalues are linearly independent, we must show that

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n = 0 \quad\Longleftrightarrow\quad c_i = 0 \ \text{ for all } i = 1, 2, \ldots, n. \qquad\qquad (26)$$

We prove this result by assuming the contrary and arriving at a contradiction. That is, we will assume that one of the coefficients is nonzero. Without loss of generality, we shall assume that $c_1 \neq 0$ [this can always be arranged by reordering the $\vec{v}_i$]. Multiplying both sides of eq. (26) by $A - \lambda_2 I$, and using the fact that

$$(A - \lambda_2 I)\vec{v}_i = (\lambda_i - \lambda_2)\vec{v}_i \,,$$

we obtain:

$$c_1(\lambda_1 - \lambda_2)\vec{v}_1 + c_3(\lambda_3 - \lambda_2)\vec{v}_3 + \cdots + c_n(\lambda_n - \lambda_2)\vec{v}_n = 0 \,. \qquad\qquad (27)$$

Note that the term $c_2\vec{v}_2$ that appears in eq. (26) has been removed from the sum. Next, multiply both sides of eq. (27) by $A - \lambda_3 I$. A similar computation yields:

$$c_1(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\vec{v}_1 + c_4(\lambda_4 - \lambda_2)(\lambda_4 - \lambda_3)\vec{v}_4 + \cdots + c_n(\lambda_n - \lambda_2)(\lambda_n - \lambda_3)\vec{v}_n = 0 \,.$$

Note that the term $c_3\vec{v}_3$ that originally appeared in eq. (27) has been removed from the sum. We now continue the process of multiplying on the left successively by $A - \lambda_4 I$, $A - \lambda_5 I$, ..., $A - \lambda_n I$. As a result, all the terms involving $c_i\vec{v}_i$ [for $i = 2, 3, \ldots, n$] will be removed, leaving only one term remaining:

$$c_1(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)(\lambda_1 - \lambda_4)\cdots(\lambda_1 - \lambda_n)\vec{v}_1 = 0 \,. \qquad\qquad (28)$$

By assumption, all the eigenvalues are distinct. Moreover, $\vec{v}_1 \neq 0$ since $\vec{v}_1$ is an eigenvector of A. Thus, eq. (28) implies that $c_1 = 0$, which contradicts our original assumption. We conclude that our assumption that at least one of the $c_i$ is nonzero is incorrect. Hence, if all the eigenvalues are distinct, then $c_i = 0$ for all $i = 1, 2, \ldots, n$. That is, the n eigenvectors $\vec{v}_i$ are linearly independent.
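A quick numerical illustration of this result (not a proof; the matrix is an arbitrary example): when the eigenvalues are distinct, the matrix whose columns are the eigenvectors has full rank, i.e. the eigenvectors are linearly independent.

```python
import numpy as np

# Arbitrary non-symmetric example with distinct eigenvalues 2, 3 and 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigvals, V = np.linalg.eig(A)      # columns of V are the eigenvectors of A
print(eigvals)                     # 2, 3, 5
print(np.linalg.matrix_rank(V))    # 3  -> the eigenvectors are linearly independent
```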

An alternative proof of the linear independence of the $\{\vec{v}_i\}$

There is a more elegant way to prove that if $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are eigenvectors corresponding to the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of A, then $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent. Starting from $A\vec{v} = \lambda\vec{v}$, we multiply on the left by A to get

$$A^2\vec{v} = A\cdot A\vec{v} = A(\lambda\vec{v}) = \lambda A\vec{v} = \lambda^2\vec{v} \,.$$

Continuing this process of multiplication on the left by A, we conclude that:

$$A^k\vec{v} = A\cdot A^{k-1}\vec{v} = A\,\lambda^{k-1}\vec{v} = \lambda^{k-1} A\vec{v} = \lambda^k\vec{v} \,, \qquad\qquad (29)$$

for $k = 2, 3, \ldots, n$. Thus, if we multiply eq. (26) on the left by $A^k$, then we obtain n separate equations by choosing $k = 0, 1, 2, \ldots, n-1$, given by:

$$c_1\lambda_1^k\vec{v}_1 + c_2\lambda_2^k\vec{v}_2 + \cdots + c_n\lambda_n^k\vec{v}_n = 0 \,, \qquad k = 0, 1, 2, \ldots, n-1 \,.$$

In matrix form,

$$\begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 & \cdots & \lambda_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \lambda_3^{n-1} & \cdots & \lambda_n^{n-1} \end{pmatrix}\begin{pmatrix} c_1\vec{v}_1 \\ c_2\vec{v}_2 \\ c_3\vec{v}_3 \\ \vdots \\ c_n\vec{v}_n \end{pmatrix} = 0 \,. \qquad\qquad (30)$$

The matrix appearing above is equal to the transpose of a well known matrix called the Vandermonde matrix. There is a beautiful formula for its determinant:

$$\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \lambda_3 & \cdots & \lambda_n \\ \lambda_1^2 & \lambda_2^2 & \lambda_3^2 & \cdots & \lambda_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \lambda_3^{n-1} & \cdots & \lambda_n^{n-1} \end{vmatrix} = \prod_{i<j} (\lambda_i - \lambda_j) \,. \qquad\qquad (31)$$

Here, I am using the symbol $\prod$ to symbolize multiplication, so that

$$\prod_{i<j} (\lambda_i - \lambda_j) = (\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1 - \lambda_n)(\lambda_2 - \lambda_3)\cdots(\lambda_2 - \lambda_n)\cdots(\lambda_{n-1} - \lambda_n) \,.$$

Since the eigenvalues are assumed to be distinct, none of the factors above vanish, so the determinant in eq. (31) is nonzero and the matrix appearing in eq. (30) is invertible. Multiplying eq. (30) by this inverse then yields $c_i\vec{v}_i = 0$ for each i, and since $\vec{v}_i \neq 0$, it follows that $c_i = 0$ for all $i = 1, 2, \ldots, n$. That is, the eigenvectors $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are linearly independent.
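As a small numerical check of eq. (31) (not part of the proof), one can compare the determinant of the matrix in eq. (30) with the product formula for a few arbitrary, distinct values of the λ's; the four values below are hypothetical examples.

```python
import numpy as np
from itertools import combinations

lam = np.array([1.0, 2.5, -3.0, 4.0])   # arbitrary distinct eigenvalues
n = len(lam)

# The n x n matrix of eq. (30): row k contains lam_j**k for k = 0, ..., n-1.
V = np.array([lam**k for k in range(n)])

det_V = np.linalg.det(V)
product = np.prod([lam[i] - lam[j] for i, j in combinations(range(n), 2)])
print(det_V, product, np.isclose(det_V, product))   # for these four values the two agree
```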
