Change of Basis

5 Change of Basis

In many applications, we may need to switch between two or more different bases for a vector space. So it would be helpful to have formulas for converting the components of a vector with respect to one basis into the corresponding components of the vector (or the matrix of an operator) with respect to the other basis. The theory and tools for quickly determining these "change of basis formulas" will be developed in these notes.

5.1 Unitary and Orthogonal Matrices

Definitions

Unitary and orthogonal matrices will naturally arise in change of basis formulas. They are defined as follows:

$$ A \text{ is a unitary matrix} \iff A \text{ is an invertible matrix with } A^{-1} = A^\dagger , $$

and

$$ A \text{ is an orthogonal matrix} \iff A \text{ is an invertible real matrix with } A^{-1} = A^T \iff A \text{ is a real unitary matrix} . $$

Because an orthogonal matrix is simply a unitary matrix with real-valued entries, we will mainly consider unitary matrices (keeping in mind that anything derived for unitary matrices will also hold for orthogonal matrices after replacing $A^\dagger$ with $A^T$).

The basic test for determining if a square matrix is unitary is to simply compute $A^\dagger$ and see if it is the inverse of $A$; that is, see if $AA^\dagger = I$.

Example 5.1: Let

$$ A = \begin{bmatrix} \frac{3}{5} & \frac{4}{5}i \\ \frac{4}{5}i & \frac{3}{5} \end{bmatrix} . $$

Then

$$ A^\dagger = \begin{bmatrix} \frac{3}{5} & -\frac{4}{5}i \\ -\frac{4}{5}i & \frac{3}{5} \end{bmatrix} $$

and

$$ AA^\dagger = \begin{bmatrix} \frac{3}{5} & \frac{4}{5}i \\ \frac{4}{5}i & \frac{3}{5} \end{bmatrix} \begin{bmatrix} \frac{3}{5} & -\frac{4}{5}i \\ -\frac{4}{5}i & \frac{3}{5} \end{bmatrix} = \cdots = I . $$

So $A$ is unitary.

Obviously, for a matrix to be unitary, it must be square. It should also be fairly clear that, if $A$ is a unitary (or orthogonal) matrix, then so are $A^*$, $A^T$, $A^\dagger$ and $A^{-1}$.

Exercise 5.1: Prove that, if $A$ is a unitary (or orthogonal) matrix, then so are $A^*$, $A^T$, $A^\dagger$ and $A^{-1}$.

The term "unitary" comes from the value of the determinant. To see this, first observe that, if $A$ is unitary, then

$$ I = AA^{-1} = AA^\dagger . $$

Using this and already discussed properties of determinants, we have

$$ 1 = \det(I) = \det(AA^\dagger) = \det(A)\,\det(A^\dagger) = \det(A)\,\det(A)^* = |\det(A)|^2 . $$

Thus,

$$ A \text{ is unitary} \implies |\det A| = 1 . $$

And since there are only two real numbers which have magnitude $1$, it immediately follows that

$$ A \text{ is orthogonal} \implies \det A = \pm 1 . $$

An immediate consequence of this is that if the absolute value of the determinant of a matrix is not $1$, then that matrix cannot be unitary. Why the term "orthogonal" is appropriate will become obvious later.

Rows and Columns of Unitary Matrices

Let

$$ U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1N} \\ u_{21} & u_{22} & u_{23} & \cdots & u_{2N} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ u_{N1} & u_{N2} & u_{N3} & \cdots & u_{NN} \end{bmatrix} $$

be a square matrix. By definition, then,

$$ U^\dagger = \begin{bmatrix} u_{11}^* & u_{21}^* & \cdots & u_{N1}^* \\ u_{12}^* & u_{22}^* & \cdots & u_{N2}^* \\ u_{13}^* & u_{23}^* & \cdots & u_{N3}^* \\ \vdots & \vdots & \ddots & \vdots \\ u_{1N}^* & u_{2N}^* & \cdots & u_{NN}^* \end{bmatrix} . $$

More concisely,

$$ [U]_{jk} = u_{jk} \qquad\text{and}\qquad [U^\dagger]_{jk} = u_{kj}^* . $$

Now observe:

$$ \begin{aligned} U \text{ is unitary} &\iff U^\dagger = U^{-1} \\ &\iff U^\dagger U = I \\ &\iff [U^\dagger U]_{jk} = [I]_{jk} \\ &\iff \sum_m [U^\dagger]_{jm}[U]_{mk} = \delta_{jk} \quad\text{for all } (j,k) \\ &\iff \sum_m u_{mj}^*\, u_{mk} = \delta_{jk} \quad\text{for all } (j,k) . \end{aligned} $$

The right-hand side of the last equation is simply the formula for computing the standard matrix inner product of the column matrices

$$ \begin{bmatrix} u_{1j} \\ u_{2j} \\ \vdots \\ u_{Nj} \end{bmatrix} \qquad\text{and}\qquad \begin{bmatrix} u_{1k} \\ u_{2k} \\ \vdots \\ u_{Nk} \end{bmatrix} , $$

and the last line tells us that

$$ \text{this inner product} = \begin{cases} 1 & \text{if } j = k \\ 0 & \text{if } j \neq k \end{cases} . $$

In other words, that line states that

$$ \left\{ \begin{bmatrix} u_{11} \\ u_{21} \\ \vdots \\ u_{N1} \end{bmatrix} , \begin{bmatrix} u_{12} \\ u_{22} \\ \vdots \\ u_{N2} \end{bmatrix} , \begin{bmatrix} u_{13} \\ u_{23} \\ \vdots \\ u_{N3} \end{bmatrix} , \ldots , \begin{bmatrix} u_{1N} \\ u_{2N} \\ \vdots \\ u_{NN} \end{bmatrix} \right\} $$

is an orthonormal set of column matrices. But these column matrices are simply the columns of our original matrix $U$.
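As a numerical aside (not part of the original notes), both the basic test $AA^\dagger = I$ and the column-orthonormality condition just derived are easy to check by machine. Here is a minimal sketch in Python with NumPy, applied to the matrix of Example 5.1; the variable names are our own.

```python
import numpy as np

# The matrix of Example 5.1: A = [[3/5, (4/5)i], [(4/5)i, 3/5]].
A = np.array([[3/5, 4j/5],
              [4j/5, 3/5]])

# Basic unitarity test: compute A-dagger and check that A @ A-dagger = I.
A_dag = A.conj().T
print(np.allclose(A @ A_dag, np.eye(2)))        # True, so A is unitary

# Consequence derived above: a unitary matrix satisfies |det A| = 1.
print(np.isclose(abs(np.linalg.det(A)), 1.0))   # True

# Equivalent condition: the columns of A form an orthonormal set,
# i.e. sum_m u_mj^* u_mk = delta_jk, which is A-dagger @ A = I.
print(np.allclose(A_dag @ A, np.eye(2)))        # True
```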
Consequently, the above set of observations (starting with "$U$ is unitary") reduces to:

A square matrix $U$ is unitary if and only if its columns form an orthonormal set of column matrices (using the standard column matrix inner product).

You can verify that a similar statement holds using the rows of $U$ instead of its columns.

Exercise 5.2: Show that a square matrix $U$ is unitary if and only if its rows form an orthonormal set of row matrices (using the row matrix inner product). (Hints: Either consider how the above derivation would have changed if we had used $UU^\dagger$ instead of $U^\dagger U$, or use the fact just derived along with the fact that $U$ is unitary if and only if $U^T$ is unitary.)

In summary, we have just proven:

Theorem 5.1 (The Big Theorem on Unitary Matrices)
Let

$$ U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1N} \\ u_{21} & u_{22} & u_{23} & \cdots & u_{2N} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ u_{N1} & u_{N2} & u_{N3} & \cdots & u_{NN} \end{bmatrix} $$

be a square matrix. Then the following statements are equivalent (that is, if any one statement is true, then all the statements are true):

1. $U$ is unitary.

2. The columns of $U$ form an orthonormal set of column matrices (with respect to the usual matrix inner product). That is,

$$ \sum_m u_{mj}^*\, u_{mk} = \begin{cases} 1 & \text{if } j = k \\ 0 & \text{if } j \neq k \end{cases} . $$

3. The rows of $U$ form an orthonormal set of row matrices (with respect to the usual matrix inner product). That is,

$$ \sum_m u_{jm}^*\, u_{km} = \begin{cases} 1 & \text{if } j = k \\ 0 & \text{if } j \neq k \end{cases} . $$

Exercise 5.3: What is the corresponding "Big Theorem on Orthogonal Matrices"?

An Important Consequence and Exercise

You can now verify a result that will be important in our change of basis formulas involving orthonormal bases.

Exercise 5.4: Let

$$ U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1N} \\ u_{21} & u_{22} & u_{23} & \cdots & u_{2N} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ u_{N1} & u_{N2} & u_{N3} & \cdots & u_{NN} \end{bmatrix} $$

be a square matrix, and let

$$ S = \{ e_1, e_2, \ldots, e_N \} \qquad\text{and}\qquad B = \{ b_1, b_2, \ldots, b_N \} $$

be two sets of vectors (in some vector space) related by

$$ \begin{bmatrix} b_1 & b_2 & \cdots & b_N \end{bmatrix} = \begin{bmatrix} e_1 & e_2 & \cdots & e_N \end{bmatrix} U . $$

(I.e., $b_j = \sum_k e_k u_{kj}$ for $j = 1, 2, \ldots, N$.)

a: Show that

$$ S \text{ is orthonormal and } U \text{ is a unitary matrix} \implies B \text{ is also orthonormal} . $$

b: Show that

$$ S \text{ and } B \text{ are both orthonormal sets} \implies U \text{ is a unitary matrix} . $$

5.2 Change of Basis for Vector Components: The General Case

Given the tools and theory we've developed, finding and describing the "most general formulas for changing the basis of a vector space" is disgustingly easy (assuming the space is finite dimensional). So let's assume we have a vector space $V$ of finite dimension $N$ and with an inner product $\langle\,\cdot\,|\,\cdot\,\rangle$. Let

$$ \mathcal{A} = \{ a_1, a_2, \ldots, a_N \} \qquad\text{and}\qquad \mathcal{B} = \{ b_1, b_2, \ldots, b_N \} $$

be two bases for $V$, and define the corresponding four $N \times N$ matrices $M_{AA}$, $M_{AB}$, $M_{BB}$ and $M_{BA}$ by

$$ [M_{AA}]_{jk} = \langle a_j | a_k \rangle , \qquad [M_{AB}]_{jk} = \langle a_j | b_k \rangle , \qquad [M_{BB}]_{jk} = \langle b_j | b_k \rangle , $$

and

$$ [M_{BA}]_{jk} = \langle b_j | a_k \rangle . $$

(See the pattern?) These matrices describe how the vectors in $\mathcal{A}$ and $\mathcal{B}$ are related to each other. Two quick observations should be made about these matrices:

1. The first concerns the relation between $M_{AB}$ and $M_{BA}$. Observe that

$$ [M_{AB}]_{jk} = \langle a_j | b_k \rangle = \langle b_k | a_j \rangle^* = [M_{BA}]_{kj}^* = [M_{BA}^\dagger]_{jk} . $$

So

$$ M_{AB} = M_{BA}^\dagger . \tag{5.1a} $$

Likewise, of course,

$$ M_{BA} = M_{AB}^\dagger . \tag{5.1b} $$

2. The second observation is that $M_{AA}$ and $M_{BB}$ greatly simplify if the bases are orthonormal. If $\mathcal{A}$ is orthonormal, then

$$ [M_{AA}]_{jk} = \langle a_j | a_k \rangle = \delta_{jk} = [I]_{jk} . $$

So

$$ M_{AA} = I \quad\text{if } \mathcal{A} \text{ is orthonormal} . \tag{5.2a} $$

By exactly the same reasoning, it should be clear that

$$ M_{BB} = I \quad\text{if } \mathcal{B} \text{ is orthonormal} . \tag{5.2b} $$
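Both observations are easy to confirm numerically. The following is a small sketch (not from the original notes), assuming the standard inner product on $\mathbb{C}^N$, $\langle x | y \rangle = \sum_m x_m^* y_m$, with basis vectors stored as matrix columns; the helper `gram` and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

# Basis vectors stored as matrix columns: the A-basis is an arbitrary
# (invertible) complex basis; the B-basis is made orthonormal via QR.
A_vecs = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B_vecs, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

def gram(X, Y):
    # [M_XY]_jk = <x_j | y_k> for the standard inner product on C^N,
    # conjugate-linear in the first slot: <x|y> = sum_m x_m^* y_m.
    return X.conj().T @ Y

M_AB = gram(A_vecs, B_vecs)
M_BA = gram(B_vecs, A_vecs)
M_BB = gram(B_vecs, B_vecs)

print(np.allclose(M_AB, M_BA.conj().T))   # (5.1a): M_AB = M_BA^dagger
print(np.allclose(M_BB, np.eye(N)))       # (5.2b): M_BB = I since B is orthonormal
```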
Now let $v$ be any vector in $V$ and, for convenience, let us denote the components of $v$ with respect to $\mathcal{A}$ and $\mathcal{B}$ using $\alpha_j$'s and $\beta_j$'s respectively,

$$ |v\rangle_A = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_N \end{bmatrix} \qquad\text{and}\qquad |v\rangle_B = \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_N \end{bmatrix} . $$

Remember, this means

$$ v = \sum_k \alpha_k a_k = \sum_k \beta_k b_k . \tag{5.3} $$

Our goal is to find the relations between the $\alpha_j$'s and the $\beta_j$'s so that we can find one set given the other.

One set of relations can be found by taking the inner product of $v$ with each $a_j$ and using equations (5.3). Doing so:

$$ \begin{aligned} \langle a_j | v \rangle = \Big\langle a_j \,\Big|\, \sum_k \alpha_k a_k \Big\rangle &= \Big\langle a_j \,\Big|\, \sum_k \beta_k b_k \Big\rangle \\ \implies\quad \langle a_j | v \rangle = \sum_k \alpha_k \langle a_j | a_k \rangle &= \sum_k \beta_k \langle a_j | b_k \rangle \\ \implies\quad \langle a_j | v \rangle = \sum_k \langle a_j | a_k \rangle\, \alpha_k &= \sum_k \langle a_j | b_k \rangle\, \beta_k \\ \implies\quad \langle a_j | v \rangle = \sum_k [M_{AA}]_{jk}\, \alpha_k &= \sum_k [M_{AB}]_{jk}\, \beta_k . \end{aligned} $$

The formulas in the second equation of the last line are simply the formulas for the $j$-th entry in the products of $M_{AA}$ and $M_{AB}$ with the column matrices of $\alpha_k$'s and $\beta_k$'s. So that equation tells us that

$$ \begin{bmatrix} \langle a_1 | v \rangle \\ \langle a_2 | v \rangle \\ \vdots \\ \langle a_N | v \rangle \end{bmatrix} = M_{AA} \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_N \end{bmatrix} = M_{AB} \begin{bmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_N \end{bmatrix} . $$

Recalling what the column matrices of $\alpha_k$'s and $\beta_k$'s are, we see that this reduces to

$$ \begin{bmatrix} \langle a_1 | v \rangle \\ \langle a_2 | v \rangle \\ \vdots \\ \langle a_N | v \rangle \end{bmatrix} = M_{AA} |v\rangle_A = M_{AB} |v\rangle_B . \tag{5.4} $$

Exercise 5.5 (semi-optional): Using the same assumptions as were used to derive (5.4), derive that

$$ \begin{bmatrix} \langle b_1 | v \rangle \\ \langle b_2 | v \rangle \\ \vdots \\ \langle b_N | v \rangle \end{bmatrix} = M_{BA} |v\rangle_A = M_{BB} |v\rangle_B . $$
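Equation (5.4) can also be verified numerically, and it already contains the change of basis formula in embryo: given $|v\rangle_B$, one can solve $M_{AA} |v\rangle_A = M_{AB} |v\rangle_B$ for $|v\rangle_A$. Below is a minimal sketch (not from the original notes), again assuming the standard inner product on $\mathbb{C}^N$ with basis vectors stored as matrix columns; the variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3

# Two arbitrary bases of C^3, stored as the columns a_j and b_j.
A_vecs = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B_vecs = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# A vector v and its components: v = sum_k alpha_k a_k = sum_k beta_k b_k.
v = rng.normal(size=N) + 1j * rng.normal(size=N)
alpha = np.linalg.solve(A_vecs, v)   # |v>_A
beta = np.linalg.solve(B_vecs, v)    # |v>_B

# Gram matrices [M_AA]_jk = <a_j|a_k> and [M_AB]_jk = <a_j|b_k>.
M_AA = A_vecs.conj().T @ A_vecs
M_AB = A_vecs.conj().T @ B_vecs

# Equation (5.4): the column of inner products <a_j|v> equals both
# M_AA |v>_A and M_AB |v>_B.
a_dot_v = A_vecs.conj().T @ v
print(np.allclose(a_dot_v, M_AA @ alpha))   # True
print(np.allclose(a_dot_v, M_AB @ beta))    # True

# Change of basis: recover |v>_A from |v>_B by solving M_AA x = M_AB |v>_B.
alpha_rec = np.linalg.solve(M_AA, M_AB @ beta)
print(np.allclose(alpha_rec, alpha))        # True
```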