Multivector Differentiation and Linear Algebra


17th Santaló Summer School 2016, Santander

Joan Lasenby
Signal Processing Group, Engineering Department, Cambridge, UK and Trinity College Cambridge
[email protected], www-sigproc.eng.cam.ac.uk/~jl

23 August 2016

Overview

• The Multivector Derivative.
• Examples of differentiation wrt multivectors.
• Linear Algebra: matrices and tensors as linear functions mapping between elements of the algebra.
• Functional Differentiation: very briefly...
• Summary

The Multivector Derivative

Recall our definition of the directional derivative in the a direction:

    a·∇F(x) = lim_{ε→0} [F(x + εa) − F(x)] / ε

We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction' – where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X).

Let us use ∗ to denote taking the scalar part, i.e. P ∗ Q ≡ ⟨PQ⟩. Then, provided A has the same grades as X, it makes sense to define:

    A ∗ ∂_X F(X) = lim_{t→0} [F(X + tA) − F(X)] / t
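As a quick numerical illustration of this definition, here is a minimal sketch (my own, not part of the lecture), assuming the Python `clifford` geometric algebra package. It approximates A ∗ ∂_X F(X) by the difference quotient [F(X + tA) − F(X)] / t for a small t, taking F(X) = ⟨XX̃⟩, and compares it with 2⟨AX̃⟩, which is what expanding F(X + tA) to first order in t gives.

```python
# Finite-difference check of  A * d_X F(X) = lim_{t->0} [F(X + tA) - F(X)] / t
# for F(X) = <X X~>.  Sketch assuming the `clifford` package (pip install clifford);
# the particular X and A below are arbitrary test values, not lecture data.
import clifford as cf

layout, blades = cf.Cl(3)                    # Euclidean geometric algebra G(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

def scalar(M):
    return float(M.value[0])                 # <M>, the scalar part of a multivector

def F(X):
    return scalar(X * ~X)                    # F(X) = <X X~>   (~ is reversion)

X = 1.0 + 2*(e1^e2) - (e1^e3)                # a mixed-grade multivector (scalar + bivector)
A = 0.5 - (e1^e2) + 3*(e2^e3)                # same grades as X, as the definition requires

t = 1e-6
numeric  = (F(X + t*A) - F(X)) / t           # A * d_X F(X) via the limit definition
analytic = 2 * scalar(A * ~X)                # first-order term of <(X + tA)(X + tA)~>

print(numeric, analytic)                     # agree to about 1e-5 (the O(t) truncation error)
```

Shrinking t tightens the agreement until floating-point cancellation takes over, exactly as expected of a one-sided difference quotient.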
The Multivector Derivative cont...

Let {e_J} be a basis for X – i.e. if X is a bivector, then {e_J} will be the basis bivectors. With the definition on the previous slide, e_J ∗ ∂_X is therefore the partial derivative in the e_J direction, giving

    ∂_X ≡ ∑_J e^J (e_J ∗ ∂_X)

[since e_J ∗ ∂_X ≡ e_J ∗ {e^I (e_I ∗ ∂_X)}].

Key to using these definitions of multivector differentiation are several important results.

If P_X(B) is the projection of B onto the grades of X (i.e. P_X(B) ≡ e^J ⟨e_J B⟩), then our first important result is

    ∂_X⟨XB⟩ = P_X(B)

We can see this by going back to our definitions:

    e_J ∗ ∂_X⟨XB⟩ = lim_{t→0} [⟨(X + t e_J)B⟩ − ⟨XB⟩] / t
                  = lim_{t→0} [⟨XB⟩ + t⟨e_J B⟩ − ⟨XB⟩] / t
                  = lim_{t→0} t⟨e_J B⟩ / t = ⟨e_J B⟩

therefore giving us

    ∂_X⟨XB⟩ = e^J (e_J ∗ ∂_X)⟨XB⟩ = e^J ⟨e_J B⟩ ≡ P_X(B)

Other Key Results

Some other useful results are listed here (proofs are similar to that on the previous slide and are left as exercises):

    ∂_X⟨XB⟩ = P_X(B)
    ∂_X⟨X̃B⟩ = P_X(B̃)
    ∂_X̃⟨X̃B⟩ = P_X̃(B) = P_X(B)
    ∂_y⟨My⁻¹⟩ = −y⁻¹ P_y(M) y⁻¹

where X, B, M, y are all general multivectors.

Exercises

1. By noting that ⟨XB⟩ = ⟨(XB)˜⟩, show the second key result ∂_X⟨X̃B⟩ = P_X(B̃).
2. Key result 1 tells us that ∂_X̃⟨X̃B⟩ = P_X̃(B). Verify that P_X̃(B) = P_X(B), to give the 3rd key result.
3. To show the 4th key result, ∂_y⟨My⁻¹⟩ = −y⁻¹ P_y(M) y⁻¹, use the fact that ∂_y⟨Myy⁻¹⟩ = ∂_y⟨M⟩ = 0. Hint: recall that XAX has the same grades as A.
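Before turning to an example, here is a quick numerical sanity check of the first key result (again a sketch of my own assuming the `clifford` package, not lecture material). It builds ∂_X = ∑_J e^J (e_J ∗ ∂_X) explicitly over a bivector basis, applies it to F(X) = ⟨XB⟩ by finite differences, and compares the answer with the grade-2 part of B, i.e. P_X(B) for a bivector X:

```python
# Numerical check of  d_X <XB> = P_X(B)  for a bivector X in G(3).
# Sketch assuming the `clifford` package; B below is an arbitrary test multivector.
import clifford as cf

layout, blades = cf.Cl(3)
e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

eJ_basis = [e1^e2, e1^e3, e2^e3]       # {e_J}: the basis bivectors
# For unit orthonormal blades in a Euclidean algebra, the reciprocal e^J is just ~e_J.

def scalar(M):
    return float(M.value[0])           # <M>, the scalar part

B = 0.5 + 2*e1 - (e1^e2) + 3*(e2^e3) + (e1^e2^e3)   # a general multivector
X = (e1^e2) - 2*(e2^e3)                # any bivector will do: <XB> is linear in X

def F(X):
    return scalar(X * B)               # F(X) = <XB>

h = 1e-6
dXF = 0
for eJ in eJ_basis:
    partial = (F(X + h*eJ) - F(X)) / h      # e_J * d_X F, by finite differences
    dXF = dXF + (~eJ) * partial             # accumulate e^J (e_J * d_X) F

print(dXF)     # matches ...
print(B(2))    # ... the projection of B onto the grades of X (here grade 2), i.e. P_X(B)
```

Because ⟨XB⟩ is linear in X, the difference quotients are exact apart from floating-point cancellation, so the two printed multivectors agree to many significant figures.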
A Simple Example

Suppose we wish to fit a set of points {X_i} to a plane Φ – where the X_i and Φ are conformal representations (vector and 4-vector respectively). One possible way forward is to find the plane that minimises the sum of the squared perpendicular distances of the points from the plane.

[Figure: three points X_1, X_2, X_3 and their perpendicular projections X′_1, X′_2, X′_3 onto the plane.]

Plane fitting example, cont...

Recall that ΦXΦ is the reflection of X in Φ, so that −X·(ΦXΦ) is the distance between the point and the plane. Thus we could take as our cost function:

    S = −∑_i X_i·(ΦX_iΦ)

Now use the result ∂_X⟨XB⟩ = P_X(B) to differentiate this expression wrt Φ:

    ∂_Φ S = −∑_i ∂_Φ⟨X_iΦX_iΦ⟩ = −∑_i [ ∂̇_Φ⟨X_iΦ̇X_iΦ⟩ + ∂̇_Φ⟨X_iΦX_iΦ̇⟩ ]
          = −2∑_i P_Φ(X_iΦX_i) = −2∑_i X_iΦX_i

⟹ solve (via linear algebra techniques) ∑_i X_iΦX_i = 0.
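To make the final "solve via linear algebra techniques" step concrete, here is a rough numerical sketch (my own, not from the lecture) using the conformal model G(4,1) that ships with the Python `clifford` package. Restricted to grade 4, the map Φ ↦ ∑_i X_i Φ X_i is an ordinary linear map on the five coefficients of Φ, so we can assemble its 5×5 matrix and take the right singular vector with the smallest singular value: for points lying exactly on a plane the sum vanishes for the true Φ, and with noisy points this smallest singular vector is a convenient surrogate. The helper name `fit_plane` and the synthetic test data are assumptions of mine.

```python
# Plane fitting in conformal GA via a small linear-algebra solve.
# Sketch assuming the `clifford` package and its G(4,1) conformal model.
import numpy as np
from clifford.g3c import *      # noqa: F403 -- provides e1..e5 and the up() embedding

# Basis for the grade-4 blades of G(4,1): the grade a conformal plane Phi lives in.
basis4 = [e1^e2^e3^e4, e1^e2^e3^e5, e1^e2^e4^e5, e1^e3^e4^e5, e2^e3^e4^e5]
idx4 = [int(np.flatnonzero(b.value)[0]) for b in basis4]   # their slots in the coefficient array

def fit_plane(Xs):
    """Return a grade-4 Phi making  sum_i P_Phi(X_i Phi X_i)  as close to zero as possible."""
    cols = []
    for b in basis4:
        acc = 0
        for X in Xs:
            acc = acc + X * b * X          # X_i b X_i for this basis 4-blade
        cols.append(acc.value[idx4])       # keep only the grade-4 coefficients (the P_Phi part)
    M = np.column_stack(cols)              # M @ phi  <->  sum_i P_Phi(X_i Phi X_i)
    _, _, vt = np.linalg.svd(M)
    phi = vt[-1]                           # coefficients of the smallest singular vector
    Phi = 0
    for c, b in zip(phi, basis4):
        Phi = Phi + float(c) * b
    return Phi

# Synthetic test: noisy points near the plane z = 1 (assumed data, not from the lecture).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, (20, 2))
pts = [(x, y, 1.0 + 0.01 * rng.standard_normal()) for x, y in xy]
Xs = [up(float(x)*e1 + float(y)*e2 + float(z)*e3) for x, y, z in pts]

Phi = fit_plane(Xs)
print(Phi)   # approximately a multiple of the conformal 4-vector for the plane z = 1
```

The matrix-building trick used here (evaluate a linear multivector function on each basis blade and stack the coefficient vectors as columns) is the same "matrices and tensors as linear functions mapping between elements of the algebra" viewpoint listed in the overview.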