Real and Complex Inner Products


We discuss inner products on finite dimensional real and complex vector spaces. Although we are mainly interested in complex vector spaces, we begin with the more familiar case of the usual inner product.

1 Real inner products

Let $v = (v_1, \ldots, v_n)$ and $w = (w_1, \ldots, w_n) \in \mathbb{R}^n$. We define the inner product (or dot product or scalar product) of $v$ and $w$ by the following formula:
$$\langle v, w \rangle = v_1 w_1 + \cdots + v_n w_n.$$
Define the length or norm of $v$ by the formula
$$\|v\| = \sqrt{\langle v, v \rangle} = \sqrt{v_1^2 + \cdots + v_n^2}.$$
Note that we can define $\langle v, w \rangle$ for the vector space $k^n$, where $k$ is any field, but $\|v\|$ only makes sense for $k = \mathbb{R}$. We have the following properties for the inner product:

1. (Bilinearity) For all $v, u, w \in \mathbb{R}^n$, $\langle v + u, w \rangle = \langle v, w \rangle + \langle u, w \rangle$ and $\langle v, u + w \rangle = \langle v, u \rangle + \langle v, w \rangle$. For all $v, w \in \mathbb{R}^n$ and $t \in \mathbb{R}$, $\langle tv, w \rangle = \langle v, tw \rangle = t\langle v, w \rangle$.

2. (Symmetry) For all $v, w \in \mathbb{R}^n$, $\langle v, w \rangle = \langle w, v \rangle$.

3. (Positive definiteness) For all $v \in \mathbb{R}^n$, $\langle v, v \rangle = \|v\|^2 \geq 0$, and $\langle v, v \rangle = 0$ if and only if $v = 0$.

The inner product and norm satisfy the familiar inequalities:

1. (Cauchy-Schwarz) For all $v, w \in \mathbb{R}^n$, $|\langle v, w \rangle| \leq \|v\| \|w\|$, with equality $\iff$ $v$ and $w$ are linearly dependent.

2. (Triangle) For all $v, w \in \mathbb{R}^n$, $\|v + w\| \leq \|v\| + \|w\|$, with equality $\iff$ $v$ is a positive scalar multiple of $w$ or vice versa.

3. For all $v, w \in \mathbb{R}^n$, $\bigl|\|v\| - \|w\|\bigr| \leq \|v - w\|$.

Recall that the standard basis $e_1, \ldots, e_n$ is orthonormal:
$$\langle e_i, e_j \rangle = \delta_{ij} = \begin{cases} 1, & \text{if } i = j; \\ 0, & \text{if } i \neq j. \end{cases}$$
More generally, vectors $u_1, \ldots, u_n \in \mathbb{R}^n$ are orthonormal if, for all $i, j$, $\langle u_i, u_j \rangle = \delta_{ij}$, i.e. $\langle u_i, u_i \rangle = \|u_i\|^2 = 1$, and $\langle u_i, u_j \rangle = 0$ for $i \neq j$. In this case, $u_1, \ldots, u_n$ are linearly independent and hence automatically a basis of $\mathbb{R}^n$. One advantage of working with an orthonormal basis $u_1, \ldots, u_n$ is that, for an arbitrary vector $v$, it is easy to read off the coefficients of $v$ with respect to the basis $u_1, \ldots, u_n$: if $v = \sum_{i=1}^n t_i u_i$ is written as a linear combination of the $u_i$, then clearly
$$\langle v, u_i \rangle = \sum_{j=1}^n t_j \langle u_j, u_i \rangle = t_i.$$
Equivalently, for all $v \in \mathbb{R}^n$,
$$v = \sum_{i=1}^n \langle v, u_i \rangle u_i.$$
We have the following:

Proposition 1.1 (Gram-Schmidt). Let $v_1, \ldots, v_n$ be a basis of $\mathbb{R}^n$. Then there exists an orthonormal basis $u_1, \ldots, u_n$ of $\mathbb{R}^n$ such that, for all $i$, $1 \leq i \leq n$,
$$\mathrm{span}\{v_1, \ldots, v_i\} = \mathrm{span}\{u_1, \ldots, u_i\}.$$
In particular, for every subspace $W$ of $\mathbb{R}^n$, there exists an orthonormal basis $u_1, \ldots, u_n$ of $\mathbb{R}^n$ such that $u_1, \ldots, u_a$ is a basis of $W$.

Proof. Given the basis $v_1, \ldots, v_n$, we define the $u_i$ inductively as follows. Since $v_1 \neq 0$, $\|v_1\| \neq 0$. Set $u_1 = \frac{1}{\|v_1\|} v_1$, a unit vector (i.e. $\|u_1\| = 1$). Now suppose inductively that we have found $u_1, \ldots, u_i$ such that, for all $k, \ell \leq i$, $\langle u_k, u_\ell \rangle = \delta_{k\ell}$, and such that $\mathrm{span}\{v_1, \ldots, v_i\} = \mathrm{span}\{u_1, \ldots, u_i\}$. Define
$$v_{i+1}' = v_{i+1} - \sum_{j=1}^i \langle v_{i+1}, u_j \rangle u_j.$$
Clearly
$$\mathrm{span}\{v_1, \ldots, v_{i+1}\} = \mathrm{span}\{u_1, \ldots, u_i, v_{i+1}'\}.$$
Thus $v_{i+1}' \neq 0$, since otherwise $\dim \mathrm{span}\{v_1, \ldots, v_{i+1}\}$ would be less than $i + 1$. Also, for $k \leq i$,
$$\langle v_{i+1}', u_k \rangle = \langle v_{i+1}, u_k \rangle - \sum_{j=1}^i \langle v_{i+1}, u_j \rangle \langle u_j, u_k \rangle = \langle v_{i+1}, u_k \rangle - \langle v_{i+1}, u_k \rangle = 0.$$
Set $u_{i+1} = \frac{1}{\|v_{i+1}'\|} v_{i+1}'$. Then $u_{i+1}$ is a unit vector and (since $u_{i+1}$ is a scalar multiple of $v_{i+1}'$) $\langle u_{i+1}, u_k \rangle = 0$ for all $k \leq i$. This completes the inductive definition of the basis $u_1, \ldots, u_n$, which has the desired properties. The final statement is then clear, by starting with a basis $v_1, \ldots, v_n$ of $\mathbb{R}^n$ such that $v_1, \ldots, v_a$ is a basis of $W$.
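The inductive construction in the proof of Proposition 1.1 is exactly the classical Gram-Schmidt algorithm and can be carried out numerically. The following Python/NumPy sketch is one possible illustration and is not part of the original notes; the function name `gram_schmidt` and the dependence tolerance are arbitrary choices. It takes the rows of a matrix as the basis $v_1, \ldots, v_n$ and returns the orthonormal basis $u_1, \ldots, u_n$.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthonormalize the rows of `vectors`, assumed to be a basis of R^n.

    Returns u_1, ..., u_n (as rows) with span{v_1,...,v_i} = span{u_1,...,u_i}.
    """
    us = []
    for v in np.asarray(vectors, dtype=float):
        # v' = v - sum_j <v, u_j> u_j: subtract the projections onto the earlier u_j.
        w = v - sum(np.dot(v, u) * u for u in us)
        norm = np.linalg.norm(w)
        if norm < tol:
            raise ValueError("input vectors are linearly dependent")
        us.append(w / norm)  # u_{i+1} = v' / ||v'||, a unit vector
    return np.array(us)

# Example: an arbitrary basis of R^3.
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(V)
print(np.round(U @ U.T, 10))  # Gram matrix <u_i, u_j>: the 3x3 identity
```

Running the example prints (up to rounding) the identity matrix, confirming that $\langle u_i, u_j \rangle = \delta_{ij}$.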
The construction in the proof above also yields orthogonal projections. If $W$ is a subspace of $\mathbb{R}^n$, then there are many different complements to $W$, i.e. subspaces $W'$ such that $\mathbb{R}^n$ is the direct sum $W \oplus W'$. Given the inner product, there is a natural choice:

Definition 1.2. Let $X \subseteq \mathbb{R}^n$. Then
$$X^\perp = \{v \in \mathbb{R}^n : \langle v, x \rangle = 0 \text{ for all } x \in X\}.$$
It is easy to see from the definitions that $X^\perp$ is a subspace of $\mathbb{R}^n$ and that $X^\perp = W^\perp$, where $W$ is the smallest subspace of $\mathbb{R}^n$ containing $X$, which we can take to be the set of all linear combinations of elements of $X$. In particular, if $W = \mathrm{span}\{w_1, \ldots, w_a\}$, then
$$W^\perp = \{v \in \mathbb{R}^n : \langle v, w_i \rangle = 0, \ 1 \leq i \leq a\}.$$

Proposition 1.3. If $W$ is a vector subspace of $\mathbb{R}^n$, then $\mathbb{R}^n$ is the direct sum of $W$ and $W^\perp$. In this case, the projection $p \colon \mathbb{R}^n \to W$ with kernel $W^\perp$ is called the orthogonal projection onto $W$.

Proof. We begin by giving a formula for the orthogonal projection. Let $u_1, \ldots, u_n$ be an orthonormal basis of $\mathbb{R}^n$ such that $u_1, \ldots, u_a$ is a basis of $W$, and define
$$p_W(v) = \sum_{i=1}^a \langle v, u_i \rangle u_i.$$
Clearly $\mathrm{Im}\, p_W \subseteq W$. Moreover, if $w \in W$, then there exist $t_i \in \mathbb{R}$ with $w = \sum_{i=1}^a t_i u_i$, and in fact $t_i = \langle w, u_i \rangle$. Thus, for all $w \in W$,
$$w = \sum_{i=1}^a \langle w, u_i \rangle u_i = p_W(w).$$
Finally, $v \in \mathrm{Ker}\, p_W \iff \langle v, u_i \rangle = 0$ for $1 \leq i \leq a \iff v \in W^\perp$. It follows that $\mathbb{R}^n = W \oplus W^\perp$ and that $p_W$ is the corresponding projection.
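The formula $p_W(v) = \sum_{i=1}^a \langle v, u_i \rangle u_i$ from the proof of Proposition 1.3 is easy to evaluate once an orthonormal basis of $W$ is available, for instance from Gram-Schmidt. A minimal sketch, again in Python/NumPy and not part of the original notes (the helper name `orthogonal_projection` is hypothetical):

```python
import numpy as np

def orthogonal_projection(v, ortho_basis_W):
    """Compute p_W(v) = sum_i <v, u_i> u_i, where the rows of `ortho_basis_W`
    are an orthonormal basis u_1, ..., u_a of the subspace W."""
    v = np.asarray(v, dtype=float)
    return sum(np.dot(v, u) * u for u in np.asarray(ortho_basis_W, dtype=float))

# W = the xy-plane in R^3, with orthonormal basis e_1, e_2.
U_W = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
v = np.array([2.0, -3.0, 5.0])
p = orthogonal_projection(v, U_W)
print(p)      # [ 2. -3.  0.]  the component of v in W
print(v - p)  # [0. 0. 5.]     lies in W-perp: orthogonal to both rows of U_W
```

In the example, $v - p_W(v)$ lies in $W^\perp$, as the decomposition $\mathbb{R}^n = W \oplus W^\perp$ predicts.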
2 Symmetric and orthogonal matrices

Let $A$ be an $m \times n$ matrix with real coefficients, corresponding to a linear map $\mathbb{R}^n \to \mathbb{R}^m$ which we will also denote by $A$. If $A = (a_{ij})$, we define the transpose ${}^t\!A$ to be the $n \times m$ matrix $(a_{ji})$; in case $A$ is a square matrix, ${}^t\!A$ is the reflection of $A$ about the diagonal going from upper left to lower right. Since ${}^t\!A$ is an $n \times m$ matrix, it corresponds to a linear map (also denoted by ${}^t\!A$) from $\mathbb{R}^m$ to $\mathbb{R}^n$. Since $Ae_i = \sum_{j=1}^m a_{ji} e_j$, it is easy to see that one has the formula: for all $v \in \mathbb{R}^n$ and $w \in \mathbb{R}^m$,
$$\langle Av, w \rangle = \langle v, {}^t\!Aw \rangle,$$
where the first inner product is of two vectors in $\mathbb{R}^m$ and the second is of two vectors in $\mathbb{R}^n$. In fact, using bilinearity of the inner product, it is enough to check that $\langle Ae_i, e_j \rangle = \langle e_i, {}^t\!Ae_j \rangle$ for $1 \leq i \leq n$ and $1 \leq j \leq m$, which follows immediately. From this formula, or directly, it is easy to check that ${}^t(BA) = {}^t\!A\,{}^t\!B$ whenever the product is defined. In other words, taking transpose reverses the order of multiplication. Finally, we leave it as an exercise to check that, if $m = n$ and $A$ is invertible, then so is ${}^t\!A$, and in fact
$$({}^t\!A)^{-1} = {}^t(A^{-1}).$$
Note that all of the above formulas make sense when we replace $\mathbb{R}$ by an arbitrary field $k$. If $A$ is a square matrix, then ${}^t\!A$ is also a square matrix, and we can compare $A$ and ${}^t\!A$.

Definition 2.1. Let $A \in M_n(\mathbb{R})$, or more generally let $A \in M_n(k)$. Then $A$ is symmetric if ${}^t\!A = A$. Equivalently, for all $v, w \in \mathbb{R}^n$,
$$\langle Av, w \rangle = \langle v, Aw \rangle.$$

Definition 2.2. Let $A \in M_n(\mathbb{R})$. Then $A$ is an orthogonal matrix if, for all $v, w \in \mathbb{R}^n$, $\langle Av, Aw \rangle = \langle v, w \rangle$. In other words, $A$ preserves the inner product.

Lemma 2.3. Let $A \in M_n(\mathbb{R})$. Then the following are equivalent:

(i) $A$ is orthogonal, i.e. for all $v, w \in \mathbb{R}^n$, $\langle Av, Aw \rangle = \langle v, w \rangle$.

(ii) For all $v \in \mathbb{R}^n$, $\|Av\| = \|v\|$, i.e. $A$ preserves length.

(iii) $A$ is invertible, and $A^{-1} = {}^t\!A$.

(iv) The columns of $A$ are an orthonormal basis of $\mathbb{R}^n$.

(v) The rows of $A$ are an orthonormal basis of $\mathbb{R}^n$.

Proof. (i) $\implies$ (ii): Clear, since we can take $w = v$. (ii) $\implies$ (i): Follows from the polarization identity: for all $v, w \in \mathbb{R}^n$,
$$2\langle v, w \rangle = \|v + w\|^2 - \|v\|^2 - \|w\|^2.$$
(i) $\implies$ (iii): Suppose that, for all $v, w \in \mathbb{R}^n$, $\langle Av, Aw \rangle = \langle v, w \rangle$. Now $\langle Av, Aw \rangle = \langle v, {}^t\!AAw \rangle$, and hence, for all $w \in \mathbb{R}^n$, $\langle v, w \rangle = \langle v, {}^t\!AAw \rangle$ for all $v \in \mathbb{R}^n$. It follows that
$$\langle v, w - {}^t\!AAw \rangle = 0.$$
In other words, for every $w \in \mathbb{R}^n$, $w - {}^t\!AAw$ is orthogonal to every $v \in \mathbb{R}^n$, hence $w - {}^t\!AAw = 0$, i.e. $w = {}^t\!AAw$, and so ${}^t\!AA = \mathrm{Id}$. Thus $A^{-1} = {}^t\!A$.
(iii) $\implies$ (i): If $A^{-1} = {}^t\!A$, then, for all $v, w \in \mathbb{R}^n$,
$$\langle Av, Aw \rangle = \langle v, {}^t\!AAw \rangle = \langle v, A^{-1}Aw \rangle = \langle v, w \rangle.$$
(iii) $\iff$ (iv): In general, the entries of ${}^t\!AA$ are the inner products $\langle c_i, c_j \rangle$, where $c_1, \ldots, c_n$ are the columns of $A$. Thus, the columns of $A$ are an orthonormal basis of $\mathbb{R}^n$ $\iff$ ${}^t\!AA = \mathrm{Id}$ $\iff$ $A^{-1} = {}^t\!A$. (iii) $\iff$ (v): Similar, using the fact that the entries of $A\,{}^t\!A$ are the inner products $\langle r_i, r_j \rangle$, where $r_1, \ldots, r_n$ are the rows of $A$.
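The equivalences in Lemma 2.3 are easy to verify numerically on a concrete example. The sketch below (again Python/NumPy, an illustration rather than part of the notes) checks conditions (ii), (iii), and (iv) for a $2 \times 2$ rotation matrix, the standard example of an orthogonal matrix.

```python
import numpy as np

# A rotation of R^2 through the angle theta; rotations preserve the inner product.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# (iii) A is invertible with A^{-1} = tA, i.e. tA A = Id.
print(np.allclose(A.T @ A, np.eye(2)))            # True

# (iv) The columns of A are an orthonormal basis: the Gram matrix of the
# columns (entries <c_i, c_j>) is the identity.
cols = A.T  # the rows of A.T are the columns of A
gram = np.array([[np.dot(ci, cj) for cj in cols] for ci in cols])
print(np.allclose(gram, np.eye(2)))               # True

# (ii) A preserves length.
v = np.array([3.0, -4.0])
print(np.linalg.norm(A @ v), np.linalg.norm(v))   # both approximately 5.0
```

Rescaling one column of $A$ so that it no longer has norm 1 makes all three checks fail together, as the lemma predicts.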