
5 Change of Basis

In many applications, we may need to switch between two or more different bases for a vector space. So it would be helpful to have formulas for converting the components of a vector with respect to one basis into the corresponding components of that vector with respect to the other basis. The theory and tools for quickly determining these “change of basis formulas” will be developed in these notes.

5.1 Unitary and Orthogonal Matrices

Definitions

Unitary and orthogonal matrices will naturally arise in change of basis formulas. They are defined as follows:

    A is a unitary matrix  ⟺  A is an invertible matrix with A⁻¹ = A† ,

and

    A is an orthogonal matrix  ⟺  A is an invertible real matrix with A⁻¹ = Aᵀ
                               ⟺  A is a real unitary matrix .

Because an orthogonal matrix is simply a unitary matrix with real-valued entries, we will mainly consider unitary matrices (keeping in mind that anything derived for unitary matrices will also hold for orthogonal matrices after replacing A† with Aᵀ). The basic test for determining if a matrix is unitary is to simply compute A† and see if it is the inverse of A; that is, see if AA† = I.

!◮Example 5.1: Let

    A = [ (3/5)i   4/5    ]
        [  4/5    (3/5)i  ]

Then

    A† = [ −(3/5)i    4/5     ]
         [   4/5     −(3/5)i  ]

9/22/2013 Chapter & Page: 5–2 Change of Basis

and

    AA† = [ (3/5)i   4/5   ] [ −(3/5)i    4/5     ]  =  ⋯  =  I .
          [  4/5    (3/5)i ] [   4/5     −(3/5)i  ]

So A is unitary.
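If you’d like to check Example 5.1 with a computer, here is a small numpy sketch (numpy and this particular check are my own additions, not part of the original notes; the matrix is the one from the example):

```python
import numpy as np

# The matrix A from Example 5.1.
A = np.array([[3j/5, 4/5],
              [4/5, 3j/5]])

A_dagger = A.conj().T   # the adjoint: conjugate transpose

print(np.allclose(A @ A_dagger, np.eye(2)))    # True, so A is unitary
print(np.isclose(abs(np.linalg.det(A)), 1.0))  # True: |det A| = 1, as unitarity requires
```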

Obviously, for a matrix to be unitary, it must be square. It should also be fairly clear that, if A is a unitary (or orthogonal) matrix, then so are A∗, Aᵀ, A† and A⁻¹.

?◮Exercise 5.1: Prove that, if A is a unitary (or orthogonal) matrix, then so are A∗, Aᵀ, A† and A⁻¹.

The term “unitary” comes from the value of the determinant. To see this, first observe that, if A is unitary, then

    I = AA⁻¹ = AA† .

Using this and already discussed properties of determinants, we have

    det(I) = det(AA†) = det(A) det(A†) = det(A) det(A)∗ = |det(A)|² .

Thus,

    A is unitary  ⟹  |det A| = 1 .

And since there are only two real numbers which have magnitude 1, it immediately follows that

    A is orthogonal  ⟹  det A = ±1 .

An immediate consequence of this is that if the magnitude of the determinant of a matrix is not 1, then that matrix cannot be unitary. Why the term “orthogonal” is appropriate will become obvious later.

Rows and Columns of Unitary Matrices

Let

    U = [ u_11  u_12  u_13  ⋯  u_1N ]
        [ u_21  u_22  u_23  ⋯  u_2N ]
        [  ⋮     ⋮     ⋮    ⋱   ⋮   ]
        [ u_N1  u_N2  u_N3  ⋯  u_NN ]

be a square matrix. By definition, then,

    U† = [ u_11∗  u_21∗  ⋯  u_N1∗ ]
         [ u_12∗  u_22∗  ⋯  u_N2∗ ]
         [ u_13∗  u_23∗  ⋯  u_N3∗ ]
         [  ⋮      ⋮     ⋱    ⋮   ]
         [ u_1N∗  u_2N∗  ⋯  u_NN∗ ]

More concisely,

    [U]_jk = u_jk   and   [U†]_jk = [U]_kj∗ = u_kj∗ .

Now observe:

    U is unitary
      ⟺  U† = U⁻¹
      ⟺  U†U = I
      ⟺  [U†U]_jk = [I]_jk
      ⟺  Σ_m [U†]_jm [U]_mk = δ_jk   “for all (j,k)”
      ⟺  Σ_m u_mj∗ u_mk = δ_jk      “for all (j,k)”

The righthand side of the last equation is simply the formula for computing the standard matrix inner product of the column matrices

    [ u_1j ]         [ u_1k ]
    [ u_2j ]   and   [ u_2k ]  ,
    [  ⋮   ]         [  ⋮   ]
    [ u_Nj ]         [ u_Nk ]

and the last line tells us that

    this inner product = { 1 if j = k
                         { 0 if j ≠ k .

In other words, that line states that

    { [ u_11 ]   [ u_12 ]   [ u_13 ]        [ u_1N ] }
    { [ u_21 ] , [ u_22 ] , [ u_23 ] , …, [ u_2N ] }
    { [  ⋮   ]   [  ⋮   ]   [  ⋮   ]        [  ⋮   ] }
    { [ u_N1 ]   [ u_N2 ]   [ u_N3 ]        [ u_NN ] }

is an orthonormal set of column matrices. But these column matrices are simply the columns of our original matrix U. Consequently, the above set of observations (starting with “U is unitary”) reduces to

    A square matrix U is unitary if and only if its columns form an orthonormal set of column matrices (using the standard column matrix inner product).

You can verify that a similar statement holds using the rows of U instead of its columns.
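The column test above is easy to carry out numerically. Here is a small numpy sketch (my own illustration; the test matrix is the one from Example 5.1):

```python
import numpy as np

def is_unitary_by_columns(U, tol=1e-12):
    """Check unitarity via the column test: the columns of U must be
    orthonormal under the standard inner product <x|y> = x† y."""
    N = U.shape[0]
    for j in range(N):
        for k in range(N):
            ip = np.vdot(U[:, j], U[:, k])   # sum_m u_mj* u_mk
            expected = 1.0 if j == k else 0.0
            if abs(ip - expected) > tol:
                return False
    return True

U = np.array([[3j/5, 4/5],
              [4/5, 3j/5]])
print(is_unitary_by_columns(U))   # True
```

Note that `np.vdot` conjugates its first argument, which is exactly the Σ_m u_mj∗ u_mk sum above.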

?◮Exercise 5.2: Show that a square matrix U is unitary if and only if its rows form an orthonormal set of row matrices (using the row matrix inner product). (Hints: Either consider how the above derivation would have changed if we had used UU† instead of U†U, or use the fact just derived along with the fact that U is unitary if and only if Uᵀ is unitary.)

In summary, we have just proven:

Theorem 5.1 (The Big Theorem on Unitary Matrices)
Let

    U = [ u_11  u_12  u_13  ⋯  u_1N ]
        [ u_21  u_22  u_23  ⋯  u_2N ]
        [  ⋮     ⋮     ⋮    ⋱   ⋮   ]
        [ u_N1  u_N2  u_N3  ⋯  u_NN ]

be a square matrix. Then the following statements are equivalent (that is, if any one statement is true, then all the statements are true).

1. U is unitary.

2. The columns of U form an orthonormal set of column matrices (with respect to the usual matrix inner product). That is,

       Σ_m u_mj∗ u_mk = { 1 if j = k
                        { 0 if j ≠ k .

3. The rows of U form an orthonormal set of row matrices (with respect to the usual matrix inner product). That is,

       Σ_m u_jm∗ u_km = { 1 if j = k
                        { 0 if j ≠ k .

?◮Exercise 5.3: What is the corresponding “Big Theorem on Orthogonal Matrices”?

An Important Consequence and Exercise

You can now verify a result that will be important in our change of basis formulas involving orthonormal bases.

?◮Exercise 5.4: Let

    U = [ u_11  u_12  u_13  ⋯  u_1N ]
        [ u_21  u_22  u_23  ⋯  u_2N ]
        [  ⋮     ⋮     ⋮    ⋱   ⋮   ]
        [ u_N1  u_N2  u_N3  ⋯  u_NN ]

be a square matrix, and let

    S = { e_1, e_2, …, e_N }   and   B = { b_1, b_2, …, b_N }

be two sets of vectors (in some vector space) related by

    [ b_1 b_2 … b_N ] = [ e_1 e_2 … e_N ] U .

(I.e., b_j = Σ_k e_k u_kj for j = 1, 2, …, N.)

a: Show that

    S is orthonormal and U is a unitary matrix  ⟹  B is also orthonormal .

b: Show that

    S and B are both orthonormal sets  ⟹  U is a unitary matrix .
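Part (a) of this exercise is easy to spot-check numerically. Below is a sketch of my own (the random-unitary-via-QR construction is a standard trick, but the specific code and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """A random n-by-n unitary matrix via QR factorization of a random complex matrix."""
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, _ = np.linalg.qr(Z)
    return Q

# Columns of S form an orthonormal set e_1, e_2, e_3; U is unitary.
S = random_unitary(3)
U = random_unitary(3)

# b_j = sum_k e_k u_kj  <=>  the b_j's are the columns of S @ U.
B = S @ U

# Per part (a), B should again be orthonormal: B† B = I.
print(np.allclose(B.conj().T @ B, np.eye(3)))   # True
```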

5.2 Change of Basis for Vector Components: The General Case

Given the tools and theory we’ve developed, finding and describing the “most general formulas for changing the basis of a vector space” is disgustingly easy (assuming the space is finite dimensional). So let’s assume we have a vector space V of finite dimension N and with an inner product ⟨·|·⟩. Let

    A = { a_1, a_2, …, a_N }   and   B = { b_1, b_2, …, b_N }

be two bases for V, and define the corresponding four N×N matrices

    M_AA , M_AB , M_BB and M_BA

by

    [M_AA]_jk = ⟨a_j | a_k⟩ ,

    [M_AB]_jk = ⟨a_j | b_k⟩ ,

    [M_BB]_jk = ⟨b_j | b_k⟩ ,

and

    [M_BA]_jk = ⟨b_j | a_k⟩ .

(See the pattern?) These matrices describe how the vectors in A and B are related to each other. Two quick observations should be made about these matrices:

1. The first concerns the relation between M_AB and M_BA. Observe that

       [M_AB]_jk = ⟨a_j | b_k⟩ = ⟨b_k | a_j⟩∗ = ([M_BA]_kj)∗ = [M_BA†]_jk .

   So

       M_AB = M_BA† .                                                (5.1a)

   Likewise, of course,

       M_BA = M_AB† .                                                (5.1b)


2. The second observation is that M_AA and M_BB greatly simplify if the bases are orthonormal. If A is orthonormal, then

       [M_AA]_jk = ⟨a_j | a_k⟩ = δ_jk = [I]_jk .

   So

       M_AA = I   if A is orthonormal .                              (5.2a)

   By exactly the same reasoning, it should be clear that

       M_BB = I   if B is orthonormal .                              (5.2b)

Now let v be any vector in V and, for convenience, let us denote the components of v with respect to A and B using α_j’s and β_j’s respectively,

    |v⟩_A = [ α_1 ]         |v⟩_B = [ β_1 ]
            [ α_2 ]   and           [ β_2 ]
            [  ⋮  ]                 [  ⋮  ]
            [ α_N ]                 [ β_N ]

Remember, this means

    v = Σ_k α_k a_k = Σ_k β_k b_k .                                  (5.3)

Our goal is to find the relations between the α_j’s and the β_j’s so that we can find one set given the other. One set of relations can be found by taking the inner product of v with each a_j and using equations (5.3). Doing so:

    ⟨a_j | v⟩ = ⟨a_j | Σ_k α_k a_k⟩ = ⟨a_j | Σ_k β_k b_k⟩

⟹  ⟨a_j | v⟩ = Σ_k α_k ⟨a_j | a_k⟩ = Σ_k β_k ⟨a_j | b_k⟩

⟹  ⟨a_j | v⟩ = Σ_k ⟨a_j | a_k⟩ α_k = Σ_k ⟨a_j | b_k⟩ β_k

⟹  ⟨a_j | v⟩ = Σ_k [M_AA]_jk α_k = Σ_k [M_AB]_jk β_k

The formulas in the second equation in the last line are simply formulas for the jᵗʰ entry in the products of M_AA and M_AB with the column matrices of α_k’s and β_k’s. So that equation tells us that

    [ ⟨a_1|v⟩ ]          [ α_1 ]          [ β_1 ]
    [ ⟨a_2|v⟩ ]  =  M_AA [ α_2 ]  =  M_AB [ β_2 ]
    [    ⋮    ]          [  ⋮  ]          [  ⋮  ]
    [ ⟨a_N|v⟩ ]          [ α_N ]          [ β_N ]

Recalling what the column matrices of α_k’s and β_k’s are, we see that this reduces to

    [ ⟨a_1|v⟩ ]
    [ ⟨a_2|v⟩ ]  =  M_AA |v⟩_A  =  M_AB |v⟩_B .                      (5.4)
    [    ⋮    ]
    [ ⟨a_N|v⟩ ]

?◮Exercise 5.5 (semi-optional): Using the same assumptions as were used to derive (5.4), derive that

    [ ⟨b_1|v⟩ ]
    [ ⟨b_2|v⟩ ]  =  M_BA |v⟩_A  =  M_BB |v⟩_B .                      (5.5)
    [    ⋮    ]
    [ ⟨b_N|v⟩ ]

Equations (5.4) and (5.5) give the relations between the components of a vector with respect to the two different bases, as well as the relations between these components and the inner products of the vector with each basis vector. Our current main interest is in the relations between the different components (i.e., the rightmost two-thirds of (5.4) and (5.5)). For future reference, let us summarize what the above tells us in that regard.

Lemma 5.2 (The Big Lemma on General Change of Bases)
Let

    A = { a_1, a_2, …, a_N }   and   B = { b_1, b_2, …, b_N }

be any two bases for an N-dimensional vector space V. Then, for any v in V,

    M_AA |v⟩_A = M_AB |v⟩_B                                          (5.6a)

and

    M_BA |v⟩_A = M_BB |v⟩_B                                          (5.6b)

where M_AA, M_AB, M_BB and M_BA are the four N×N matrices given by

    [M_AA]_jk = ⟨a_j | a_k⟩ ,   [M_AB]_jk = ⟨a_j | b_k⟩ ,

    [M_BB]_jk = ⟨b_j | b_k⟩   and   [M_BA]_jk = ⟨b_j | a_k⟩ .

Naturally, the above formulas simplify considerably when the two bases are orthonormal. That will be of particular interest to us. Before going there, however, let us observe a number of “little results” that immediately follow from the above lemma and its derivation, and which apply whether or not either basis is orthonormal:

1. Solving equations (5.6a) and (5.6b) for |v⟩_B yields the change of basis formulas

       |v⟩_B = M_AB⁻¹ M_AA |v⟩_A   and   |v⟩_B = M_BB⁻¹ M_BA |v⟩_A .

2. Solving equations (5.6a) and (5.6b) for |v⟩_A yields the change of basis formulas

       |v⟩_A = M_AA⁻¹ M_AB |v⟩_B   and   |v⟩_A = M_BA⁻¹ M_BB |v⟩_B .

3. From equation (5.4) we have that

       |v⟩_A = M_AA⁻¹ [ ⟨a_1|v⟩ ]
                      [ ⟨a_2|v⟩ ]
                      [    ⋮    ]
                      [ ⟨a_N|v⟩ ]

Don’t memorize these formulas. They are nice formulas and can save a little work when dealing with arbitrary bases. However, since we will attempt to restrict ourselves to orthonormal (or at least orthogonal) bases, we won’t have a great need for these formulas as written. If you must memorize formulas, memorize those in the Big Lemma — they are much more easily memorized. And besides, the above change of basis formulas are easily derived from the formulas in that lemma.
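To see these general formulas in action, here is a numpy sketch for a pair of non-orthonormal bases (the bases, the vector, and numpy itself are my own illustrative choices):

```python
import numpy as np

# Two non-orthonormal bases for R^2, stored one basis vector per column.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# The four matrices of the Big Lemma: [M_XY]_jk = <x_j | y_k>
# (here the inner product is the ordinary dot product).
M_AA = A.T @ A
M_AB = A.T @ B
M_BA = B.T @ A
M_BB = B.T @ B

# A vector with known components alpha with respect to A ...
alpha = np.array([3.0, -1.0])
v = A @ alpha

# ... converted to components with respect to B two ways:
beta = np.linalg.solve(M_AB, M_AA @ alpha)      # |v>_B = M_AB^{-1} M_AA |v>_A
beta_alt = np.linalg.solve(M_BB, M_BA @ alpha)  # |v>_B = M_BB^{-1} M_BA |v>_A

print(np.allclose(B @ beta, v))        # True: same vector, new components
print(np.allclose(beta, beta_alt))     # True: both formulas agree
```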

5.3 Change of Basis for Vector Components: When the Bases Are Orthonormal

The Main Result

Now consider how the formulas in the Big Lemma (lemma 5.2 on page 5–7) simplify when the bases

    A = { a_1, a_2, …, a_N }   and   B = { b_1, b_2, …, b_N }

are orthonormal. Then, as noted earlier (equation set (5.2)),

    M_AA = I   and   M_BB = I .

Thus, equation (5.6a) in the Big Lemma reduces to

    M_AB |v⟩_B = |v⟩_A ,                                             (5.7a)

and equation (5.6b) in the Big Lemma reduces to

    |v⟩_B = M_BA |v⟩_A .                                             (5.7b)

Equations (5.7a) and (5.7b) are certainly nicer than the original equations in the Big Lemma, but look at what we get when they are combined:

    |v⟩_B = M_BA |v⟩_A = M_BA (M_AB |v⟩_B) .

That is,

    |v⟩_B = [M_BA M_AB] |v⟩_B   for each v ∈ V ,

which is only possible if

    M_BA M_AB = I ,

and which, in turn, means that

    M_BA⁻¹ = M_AB   and   M_AB⁻¹ = M_BA .

But remember (equation set (5.1) on page 5–5) that

    M_AB = M_BA†   and   M_BA = M_AB† .

Combining this with the previous line gives us

    M_BA† = M_BA⁻¹   and   M_AB† = M_AB⁻¹ .

So M_BA and its adjoint M_AB are unitary matrices.

What the above tells us is that all the matrices involved are easily computed via the adjoint once we know one of the matrices. We will make one more small set of observations regarding matrices M_AB and M_BA and the components of the vectors in basis B with respect to basis A. Remember, since A is an orthonormal basis, the kᵗʰ component of b_j with respect to A is given by ⟨a_k | b_j⟩. That is,

    b_j = Σ_k ⟨a_k | b_j⟩ a_k = Σ_k a_k ⟨a_k | b_j⟩ = Σ_k a_k [M_AB]_kj .

This is just the formula for the matrix product

    [ b_1 b_2 ⋯ b_N ] = [ a_1 a_2 ⋯ a_N ] M_AB .

In summary, we have the following:

Theorem 5.3 (The Big Theorem on Change of Orthonormal Bases)
Let V be an N-dimensional vector space with orthonormal bases

    A = { a_1, a_2, …, a_N }   and   B = { b_1, b_2, …, b_N } .

Let M_AB and M_BA be a pair of N×N matrices which are adjoints of each other and which satisfy any one of the following sets of conditions:

1. [M_AB]_jk = ⟨a_j | b_k⟩ .

2. [M_BA]_jk = ⟨b_j | a_k⟩ .

3. [ b_1 b_2 ⋯ b_N ] = [ a_1 a_2 ⋯ a_N ] M_AB .

4. [ a_1 a_2 ⋯ a_N ] = [ b_1 b_2 ⋯ b_N ] M_BA .

5. |v⟩_A = M_AB |v⟩_B for each v ∈ V .

6. |v⟩_B = M_BA |v⟩_A for each v ∈ V .

Then M_AB and M_BA are unitary matrices satisfying all of the above conditions, as well as

    M_AB = M_BA† = M_BA⁻¹   and   M_BA = M_AB† = M_AB⁻¹ .


Finding the Matrices

If you keep in mind the equation

    [ b_1 b_2 ⋯ b_N ] = [ a_1 a_2 ⋯ a_N ] M_AB

from the Big Theorem (theorem 5.3, above), and have the formulas for the b_k’s in terms of the a_j’s,

    b_k = β_k1 a_1 + β_k2 a_2 + ⋯ + β_kN a_N   for k = 1, 2, …, N ,

then you can obtain the matrix M_AB easily by simply noting that each of these equations can be written as

    b_k = [ a_1 a_2 ⋯ a_N ] [ β_k1 ]
                            [ β_k2 ]
                            [  ⋮   ]
                            [ β_kN ]

Comparing the last equation with the one a few lines earlier for [ b_1 b_2 ⋯ b_N ], it should be clear that the above column matrix on the right must be the kᵗʰ column in M_AB.

See, no computations are needed (provided the bases are orthonormal).

!◮Example 5.2: Let V be a three-dimensional space of traditional vectors with orthonormal basis

    A = { i, j, k } .

Let

    B = { b_1, b_2, b_3 }

where

    b_1 = (1/√2)[i + k] ,   b_2 = (1/√2)[i − k]   and   b_3 = j .

“By inspection”, it should be clear that B is also an orthonormal basis for V. Now observe that the formulas for the b_k’s can be rewritten as

    b_1 = [ i j k ] [ 1/√2 ]        b_2 = [ i j k ] [  1/√2 ]
                    [  0   ]  ,                     [   0   ]
                    [ 1/√2 ]                        [ −1/√2 ]

and

    b_3 = [ i j k ] [ 0 ]
                    [ 1 ]  ,
                    [ 0 ]

which can be written even more concisely as

    [ b_1 b_2 b_3 ] = [ i j k ] [ 1/√2    1/√2   0 ]
                                [  0       0     1 ]
                                [ 1/√2   −1/√2   0 ]

As noted above, we then must have

    M_AB = [ 1/√2    1/√2   0 ]
           [  0       0     1 ]
           [ 1/√2   −1/√2   0 ]
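As a quick numerical check of Example 5.2 (a numpy sketch of my own, not part of the original notes), the matrix read off this way is indeed unitary (orthogonal, since it is real):

```python
import numpy as np

s = 1 / np.sqrt(2)

# M_AB from Example 5.2: column k holds the components of b_k
# with respect to the basis A = {i, j, k}.
M_AB = np.array([[s,  s, 0],
                 [0,  0, 1],
                 [s, -s, 0]])

# Both bases are orthonormal, so M_AB must be orthogonal: M_AB^T M_AB = I.
print(np.allclose(M_AB.T @ M_AB, np.eye(3)))   # True
```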

Multiple Change of Basis

Suppose, now, we have three orthonormal bases

    A = { a_1, a_2, …, a_N } ,   B = { b_1, b_2, …, b_N }

and

    C = { c_1, c_2, …, c_N } .

Then, in addition to M_BA and M_AB we also have M_AC, M_BC, M_CA and M_CB, all defined analogously to the way we defined M_BA and M_AB. Applying the above theorem we see that, for every v in V,

    M_CA |v⟩_A = |v⟩_C = M_CB |v⟩_B = M_CB [ M_BA |v⟩_A ] .

Thus,

    M_CA |v⟩_A = M_CB M_BA |v⟩_A   for every v ∈ V .

From this, it immediately follows that

    M_CA = M_CB M_BA .                                               (5.8)

Remember, this is assuming the bases are all orthonormal.
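Equation (5.8) is easy to spot-check with random orthonormal bases. In this numpy sketch (my own illustration; the QR construction and seed are arbitrary choices), the bases are stored as the columns of orthogonal matrices:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_orthonormal_basis(n):
    """Columns of a random orthogonal matrix serve as an orthonormal basis of R^n."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return Q

A = random_orthonormal_basis(3)
B = random_orthonormal_basis(3)
C = random_orthonormal_basis(3)

# [M_XY]_jk = <x_j | y_k>, i.e. M_XY = X^T Y for real orthonormal columns.
M_CA = C.T @ A
M_CB = C.T @ B
M_BA = B.T @ A

print(np.allclose(M_CA, M_CB @ M_BA))   # True: equation (5.8)
```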

5.4 Traditional Rotated and Flipped Bases

Let us briefly restrict ourselves to a two- or three-dimensional traditional vector space V, with orthonormal bases

    A = { a_1, a_2 }   and   B = { b_1, b_2 }

or

    A = { a_1, a_2, a_3 }   and   B = { b_1, b_2, b_3 } .


Direction Cosines

Since we are assuming a space of traditional vectors, the scalars are real numbers, the inner product is the traditional dot product, and the change of bases matrices M_AB and M_BA are orthogonal and are transposes of each other. In this case,

    [M_AB]_jk = ⟨a_j | b_k⟩ = a_j · b_k = ‖a_j‖ ‖b_k‖ cos θ(a_j, b_k) .

And because the a_j’s and b_k’s are unit vectors, this reduces to

    [M_AB]_jk = cos θ_jk   where   θ_jk = angle between a_j and b_k .

These are called the direction cosines relating the two bases.
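As a small numerical illustration of direction cosines (a numpy sketch of my own, reusing the matrix from Example 5.2):

```python
import numpy as np

s = 1 / np.sqrt(2)

# A = {i, j, k} (the identity) and the basis from Example 5.2,
# whose columns are b_1, b_2, b_3 expressed in terms of i, j, k.
A = np.eye(3)
B = np.array([[s,  s, 0],
              [0,  0, 1],
              [s, -s, 0]])

# For unit vectors, [M_AB]_jk = a_j . b_k = cos(theta_jk).
M_AB = A.T @ B
angles = np.degrees(np.arccos(np.clip(M_AB, -1, 1)))
print(round(angles[0, 0], 6))   # 45.0 : the angle between a_1 = i and b_1
```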

Two-Dimensional Case

If V is two dimensional, then the possible geometric relationships between

    A = { a_1, a_2 }   and   B = { b_1, b_2 }

are easily sketched:

[Figure: two sketches. In both, b_1 is obtained by rotating a_1 through an angle ψ. In the first sketch, b_2 is the same rotation of a_2; in the second, b_2 points in the opposite (flipped) direction.]

Clearly, we can only have one of the following two situations:

1. { b_1, b_2 } are the vectors that can be obtained by “rotating” { a_1, a_2 } through some angle ψ, in which case the matrix relating [ b_1 b_2 ] to [ a_1 a_2 ] is a “rotation matrix” R_ψ (see problem K in Homework Handout IV).

or

2. { b_1, b_2 } are the vectors that can be obtained by “rotating” { a_1, a_2 } through some angle ψ, and then “flipping” the direction of the second vector. In this case, the matrix relating [ b_1 b_2 ] to [ a_1 a_2 ] is given by a “rotation matrix” R_ψ followed by a “flip the direction of the second vector matrix”. (Equivalently, we could first rotate by a slightly different angle and then flip the direction of the first rotated vector — or do the “flipping” first to either a_1 or a_2 and then rotate.)

With a little thought, it should be clear that the matrices for the “flips” in the second case are simply

    F_1 = [ −1  0 ]   and   F_2 = [ 1   0 ] .
          [  0  1 ]               [ 0  −1 ]

Clearly, then, the matrix M_BA can be written as either a single rotation matrix, or as the product of a rotation matrix with one of these “flip” matrices. It’s not at all hard to show that

    det(any rotation matrix) = 1   and   det(either flip matrix) = −1 .

Consequently, at least if we are considering orthonormal bases A and B for a two-dimensional, traditional vector space,

    A and B are rotated images of each other  ⟺  det(M_BA) = +1 .
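The determinant criterion is easy to confirm numerically. Here is a numpy sketch of my own (the angle 0.7 is an arbitrary choice):

```python
import numpy as np

def R(psi):
    """Standard 2D rotation matrix through angle psi."""
    return np.array([[np.cos(psi), -np.sin(psi)],
                     [np.sin(psi),  np.cos(psi)]])

# Flip the direction of the second vector.
F2 = np.array([[1.0,  0.0],
               [0.0, -1.0]])

psi = 0.7   # an arbitrary angle
print(round(np.linalg.det(R(psi))))        # 1  : pure rotation
print(round(np.linalg.det(R(psi) @ F2)))   # -1 : rotation followed by a flip
```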

Three-Dimensional Case

Likewise, if

    A = { a_1, a_2, a_3 }   and   B = { b_1, b_2, b_3 }

are any two orthonormal bases for a traditional three-dimensional space, then M_BA will either be a matrix for a “rotation by some angle about some vector”, or the product of such a rotation matrix with one of the flip matrices

    F_1 = [ −1 0 0 ]       F_2 = [ 1  0 0 ]             F_3 = [ 1 0  0 ]
          [  0 1 0 ]  ,          [ 0 −1 0 ]   and             [ 0 1  0 ] .
          [  0 0 1 ]             [ 0  0 1 ]                   [ 0 0 −1 ]

And, again, it can be shown that (still assuming A and B are orthonormal bases)

    A and B are rotated images of each other  ⟺  det(M_BA) = +1 .

Consequently,

    A and B are both right-handed, or are both left-handed  ⟺  det(M_BA) = +1 .

It can be shown that the rotation matrix, itself, can be written as a product of three “simple rotation matrices”

    R_{3,γ} R_{2,β} R_{3,α}

where R_{j,φ} corresponds to a rotation of angle φ about the jᵗʰ basis vector. The angles α, β and γ are the infamous “Euler angles”. Now turn to pages 139, 140 and 141 in Arfken, Weber and Harris and skim their discussion of rotations and Euler angles.

Warnings:

1. There are subtle differences between a matrix for a rotation operator (to be discussed in the next chapter) and the change of basis matrix for “rotated bases”. Both may be called rotation matrices, but may differ in the signs of some of the corresponding entries.

2. I have not carefully checked Arfken, Weber and Harris’s discussion, so I will not guarantee whether their product

    R_{3,γ} R_{2,β} R_{3,α}

is actually what we are calling M_BA or if the signs of some of the entries in some of the matrices need to be switched. In other words, I have not verified whether each of Arfken, Weber and Harris’s “rotation matrices” is describing the operation of rotating a vector in space, or describing a change of basis when one basis is a rotation of the other.

3. Different authors use different conventions for the Euler angles — different axes of rotation and different order of operations. Be careful; the computations based on two different conventions may well lead to two different and incompatible results.


5.5 Sidenote: Vectors Defined by a Transformation Law

Suppose we have a huge collection of bases

    B_1, B_2, B_3, … ,   with   B_m = { b_1ᵐ, b_2ᵐ, …, b_Nᵐ } ,

for our N-dimensional vector space V (for example, this might be the collection of all “rotations” of some favorite orthonormal basis). Then each v in V has components with respect to each of these B_m’s,

    |v⟩_{B_m} = [ v_1ᵐ ]
                [ v_2ᵐ ]        where   v = Σ_{j=1}^{N} v_jᵐ b_jᵐ .
                [  ⋮   ]
                [ v_Nᵐ ]

Also, for each pair of bases B_m and B_n, there will be a corresponding “change of basis” matrix M_{B_m B_n} such that

    |v⟩_{B_m} = M_{B_m B_n} |v⟩_{B_n}   for each v ∈ V .             (5.9)

This last expression is sometimes called a “transformation law”, even though we are not really dealing with a true transform of vectors here.

Now suppose that, in the course of tedious calculations and/or drinking, we obtain, corresponding to each different basis B_m, a corresponding N-tuple of scalars (w_1ᵐ, w_2ᵐ, …, w_Nᵐ). The question naturally arises as to whether

    (w_1¹, w_2¹, …, w_N¹) , (w_1², w_2², …, w_N²) , (w_1³, w_2³, …, w_N³) , …

all describe the same vector w but in the various corresponding bases. Obviously (to us at least) the answer is “yes” if and only if

    [ w_1ᵐ ]                [ w_1ⁿ ]
    [ w_2ᵐ ]  =  M_{B_m B_n} [ w_2ⁿ ]   for every pair of bases B_m and B_n .   (5.10)
    [  ⋮   ]                [  ⋮   ]
    [ w_Nᵐ ]                [ w_Nⁿ ]

This is because we can set

    w = Σ_{j=1}^{N} w_j¹ b_j¹

and use the fact that (5.10) holds to verify that the change of basis formulas (5.9) will give us, for each B_m, the originally given N-tuple of scalars (w_1ᵐ, w_2ᵐ, …, w_Nᵐ) as the components of w with respect to that basis.

This w is called the vector defined by the transformation law (5.9).
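Here is a small numpy sketch of the idea (my own illustration, with arbitrary orthonormal bases and an arbitrary vector): component tuples of a genuine vector obey the transformation law (5.9), while unrelated tuples generally do not.

```python
import numpy as np

rng = np.random.default_rng(2)

def orthonormal_basis(n):
    """Columns of a random orthogonal matrix: an orthonormal basis of R^n."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return Q

B1, B2 = orthonormal_basis(3), orthonormal_basis(3)

# [M_{B1 B2}]_jk = <b_j^1 | b_k^2>
M_12 = B1.T @ B2

# Component tuples of one genuine vector w:
w = np.array([1.0, 2.0, -1.0])
w1 = B1.T @ w          # components of w with respect to B1
w2 = B2.T @ w          # components of w with respect to B2
print(np.allclose(w1, M_12 @ w2))   # True: both tuples describe the same w

# An unrelated tuple will almost surely fail the law, so it does not
# describe the same vector (no expected output asserted here).
print(np.allclose(w1, M_12 @ np.array([5.0, 0.0, 0.0])))
```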

5.6 “Volumes” of N-Dimensional Hyper-Parallelepipeds (Part II)

Let us now continue our discussion from section 3.6 on “volumes of hyper-parallelepipeds”.

Recollections from Section 3.6

Recall the situation and notation: In an N-dimensional space of traditional vectors we have a “hyper-parallelepiped” P_N generated by a linearly independent set of N vectors

    { v_1, v_2, …, v_N } .

Our goal is to find a way to compute the N-dimensional volume of this object — denoted by V_N(P_N) — using the components of the v_k’s with respect to any basis

    B = { b_1, b_2, …, b_N } .

For our computations, we are letting V_B be the “matrix of components of the v_j’s with respect to basis B” given by

    V_B = “[ |v_1⟩_B |v_2⟩_B |v_3⟩_B ⋯ |v_N⟩_B ]” = [ v_{1,1} v_{2,1} v_{3,1} ⋯ v_{N,1} ]
                                                    [ v_{1,2} v_{2,2} v_{3,2} ⋯ v_{N,2} ]
                                                    [ v_{1,3} v_{2,3} v_{3,3} ⋯ v_{N,3} ]
                                                    [    ⋮       ⋮       ⋮    ⋱    ⋮    ]
                                                    [ v_{1,N} v_{2,N} v_{3,N} ⋯ v_{N,N} ]

where, for j = 1, 2, …, N,

    v_j = Σ_{k=1}^{N} v_{j,k} b_k .

Recall, also, that we derived one formula for the volume; namely,

    V_N(P_N) = det(V_𝒩)                                              (5.11)

where 𝒩 is the orthonormal set generated from { v_1, v_2, …, v_N } via the Gram–Schmidt procedure (see theorem 3.4 on page 3–27).

General Component Formulas

From lemma 5.2 on page 5–7 on general change of bases formulas, we know

    M_𝒩𝒩 |v_j⟩_𝒩 = M_𝒩B |v_j⟩_B .

Since 𝒩 is orthonormal, however, M_𝒩𝒩 is just the identity matrix, and the above reduces to

    |v_j⟩_𝒩 = M_𝒩B |v_j⟩_B .


Thus,

    V_𝒩 = [ |v_1⟩_𝒩 |v_2⟩_𝒩 |v_3⟩_𝒩 ⋯ |v_N⟩_𝒩 ]
         = M_𝒩B [ |v_1⟩_B |v_2⟩_B |v_3⟩_B ⋯ |v_N⟩_B ]  =  M_𝒩B V_B .

Combining this with formula (5.11) and a property of determinants (equation (4.3) on page 4–11) then yields

    V_N(P_N) = det(V_𝒩) = det(M_𝒩B V_B) = (det M_𝒩B)(det V_B) .

Let us simplify matters a little more. Let 𝒰 be any orthonormal basis. As noted on page 5–11 (equation (5.8)),

    M_𝒩B = M_𝒩𝒰 M_𝒰B .

But, since 𝒰 and 𝒩 are orthonormal bases of “traditional” vectors, M_𝒩𝒰 must be an orthogonal matrix (and, thus, have det(M_𝒩𝒰) = ±1). Consequently,

    det(M_𝒩B) = det(M_𝒩𝒰 M_𝒰B) = det(M_𝒩𝒰) det(M_𝒰B) = ±det(M_𝒰B) .

Combining this with our last formula for V_N(P_N) gives us our most general component formula for V_N(P_N) (and the next theorem).

Theorem 5.4 (general component formula for the volume of a hyper-parallelepiped)
Let P_N be the hyper-parallelepiped generated by a linearly independent set { v_1, v_2, …, v_N } of traditional vectors. Also, using any basis B for the space spanned by the v_k’s, let V_B be the matrix of components of the v_j’s with respect to basis B,

    V_B = [ v_{1,1} v_{2,1} v_{3,1} ⋯ v_{N,1} ]                     [ v_{k,1} ]
          [ v_{1,2} v_{2,2} v_{3,2} ⋯ v_{N,2} ]                     [ v_{k,2} ]
          [ v_{1,3} v_{2,3} v_{3,3} ⋯ v_{N,3} ]   with   |v_k⟩_B =  [ v_{k,3} ] .
          [    ⋮       ⋮       ⋮    ⋱    ⋮    ]                     [    ⋮    ]
          [ v_{1,N} v_{2,N} v_{3,N} ⋯ v_{N,N} ]                     [ v_{k,N} ]

Then, letting V_N(P_N) denote the N-dimensional volume of P_N,

    V_N(P_N) = |det(M_𝒰B) det(V_B)|                                  (5.12)

where 𝒰 is any orthonormal basis.

Special Cases

B is orthonormal

In particular, if B is an orthonormal basis, then we can let 𝒰 = B in formula (5.12). Because of the orthonormality of B we have

    det(M_𝒰B) = det(M_BB) = det(I) = 1 ,

and formula (5.12) reduces to

    V_N(P_N) = |det(V_B)| = | det [ v_{1,1} v_{2,1} v_{3,1} ⋯ v_{N,1} ] | .      (5.13)
                            |     [ v_{1,2} v_{2,2} v_{3,2} ⋯ v_{N,2} ] |
                            |     [ v_{1,3} v_{2,3} v_{3,3} ⋯ v_{N,3} ] |
                            |     [    ⋮       ⋮       ⋮    ⋱    ⋮    ] |
                            |     [ v_{1,N} v_{2,N} v_{3,N} ⋯ v_{N,N} ] |
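Formula (5.13) is a one-liner on a computer. Here is a numpy sketch with an illustrative example of my own (deliberately not the vectors from the exercise below):

```python
import numpy as np

# Components of three edge vectors with respect to an orthonormal basis,
# one vector per column.
V_B = np.array([[2.0, 1.0, 0.0],
                [0.0, 3.0, 1.0],
                [0.0, 0.0, 4.0]])

# Formula (5.13): the volume is the absolute value of the determinant.
volume = abs(np.linalg.det(V_B))
print(round(volume, 6))   # 24.0
```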

?◮Exercise 5.6: Let V be a four-dimensional space of traditional vectors with orthonormal basis

    { i, j, k, l }

and let

    v_1 = 3i ,
    v_2 = 2i + 4j ,
    v_3 = 8i − 3j + 5k

and

    v_4 = 4i + 7j − 2k + 2l .

Using formula (5.13), compute the “four-dimensional volume” of the hyper-parallelepiped generated by v_1, v_2, v_3 and v_4. (Compare the work done here to that done for the same problem in exercise set 3.15 on page 3–24.)

B and P_N are “Parallel”

Let’s now assume each b_j is parallel to v_j and “pointing in the same direction”. That is,

    v_{j,k} = 0 if j ≠ k   and   v_{j,j} ≥ 0 .

For “convenience”, let

    Δv_k = v_{k,k}

so that we can write

    v_j = Δv_j b_j   for j = 1, 2, …, N .

Then

    det(V_B) = det [ Δv_1   0     0    ⋯   0    ]
                   [  0    Δv_2   0    ⋯   0    ]
                   [  0     0    Δv_3  ⋯   0    ]  =  Δv_1 Δv_2 Δv_3 ⋯ Δv_N ,
                   [  ⋮     ⋮     ⋮    ⋱   ⋮    ]
                   [  0     0     0    ⋯  Δv_N  ]

and formula (5.12) becomes

    V_N(P_N) = |det(M_𝒰B)| Δv_1 Δv_2 Δv_3 ⋯ Δv_N                      (5.14)

where 𝒰 is any orthonormal basis.

Of course, if there is any orthogonality, then the above simplifies. In particular, if B is orthogonal, then you can easily show that

    V_N(P_N) = ‖b_1‖ ‖b_2‖ ⋯ ‖b_N‖ Δv_1 Δv_2 Δv_3 ⋯ Δv_N .

If you compare formula (5.14) with the formulas starting with (3.11) on page 3–28, it should be clear that we’ve already derived geometric formulas for the above |det M_𝒰B| when N = 1, N = 2 and N = 3. Since “cut and paste” is so easy, we’ll rewrite those geometric formulas:

    V_1(P_1) = ‖b_1‖ Δv_1 ,

    V_2(P_2) = √( ‖b_1‖² ‖b_2‖² − (b_1·b_2)² ) Δv_1 Δv_2 ,

and

    V_3(P_3) = √(A − B + C) Δv_1 Δv_2 Δv_3

where

    A = ‖b_1‖² ‖b_2‖² ‖b_3‖² ,

    B = ‖b_1‖² (b_2·b_3)² + ‖b_2‖² (b_1·b_3)² + ‖b_3‖² (b_1·b_2)² ,

and

    C = 2 (b_1·b_2) (b_1·b_3) (b_2·b_3) .

As we noted on page 3–29, these formulas reduce even further if the b_k’s are unit vectors, and even further if the set of b_k’s is orthonormal. If you’ve not already done so, then do the next exercise (which is the same as exercise 3.17):

?◮Exercise 5.7: To what do the above formulas for V_1(P_1), V_2(P_2) and V_3(P_3) reduce

a: when the b_k’s are unit vectors?

b: when the set of b_k’s is orthonormal?
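One last numerical sanity check (my own sketch, not part of the original notes). The V_3 formula above was reconstructed from garbled text by matching it to the Gram-determinant expansion, so it is worth verifying that A − B + C, with C = 2(b_1·b_2)(b_1·b_3)(b_2·b_3), really equals the Gram determinant det[ b_i·b_j ]:

```python
import numpy as np

rng = np.random.default_rng(3)
b1, b2, b3 = rng.normal(size=(3, 3))   # three generic vectors in R^3

def nsq(x):
    """Squared norm ||x||^2."""
    return float(x @ x)

A = nsq(b1) * nsq(b2) * nsq(b3)
B = (nsq(b1) * (b2 @ b3)**2
     + nsq(b2) * (b1 @ b3)**2
     + nsq(b3) * (b1 @ b2)**2)
C = 2 * (b1 @ b2) * (b1 @ b3) * (b2 @ b3)

# A - B + C should be the Gram determinant det[ b_i . b_j ], whose square
# root is the (unscaled) volume factor appearing in the V_3 formula.
G = np.array([[u @ v for v in (b1, b2, b3)] for u in (b1, b2, b3)])
print(np.isclose(A - B + C, np.linalg.det(G)))   # True
```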