Geometric Algebra in Linear Algebra and Geometry


José María Pozo, Departament de Física Fonamental, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona, Spain, jpozo@ffn.ub.es

Garret Sobczyk, Departamento de Física y Matemáticas, Universidad de las Américas - Puebla, Mexico, 72820 Cholula, México, [email protected]

January 10, 2000, Revised April 15, 2001

Abstract. This article explores the use of geometric algebra in linear and multilinear algebra, and in affine, projective and conformal geometries. Our principal objective is to show how the rich algebraic tools of geometric algebra are fully compatible with, and augment, the more traditional tools of matrix algebra. The novel concept of an h-twistor makes possible a simple new proof of the striking relationship between conformal transformations in a pseudoeuclidean space and isometries in a pseudoeuclidean space of two higher dimensions. The utility of the h-twistor concept, which is a generalization of the idea of a Penrose twistor to a pseudoeuclidean space of arbitrary signature, is amply demonstrated in a new treatment of the Schwarzian derivative.

AMS subject classification: 15A09, 15A66, 15A75, 17Bxx, 41A10, 51A05, 51A45.

Keywords: affine geometry, Clifford algebra, conformal group, euclidean geometry, geometric algebra, Grassmann algebra, horosphere, Lie algebra, linear algebra, Möbius transformation, non-Euclidean geometry, null cone, projective geometry, spectral decomposition, Schwarzian derivative, twistor.

Contents
1. Introduction
2. Geometric Algebra and Matrices: nondegenerate geometric algebras; spinor basis; symmetric and hermitian inner products; linear transformations; outermorphism and generalized traces; characteristic polynomial
3. Geometric Algebra and Non-Euclidean Geometry: the meet and join operations; affine and projective geometries; examples
4. Conformal Geometry: the horosphere; the null cone; h-twistors; conformal transformations and isometries; isometries in $\mathcal{N}^0$; matrix representation; h-twistors and Möbius transformations; the relative matrix representation; conformal transformations in dimension 2

1. Introduction

Almost 125 years after the discovery of "geometric algebra" by William Kingdon Clifford in 1878, the discipline still languishes off the center-stage of mathematics. Whereas Clifford's geometric algebra has gained currency among an increasing number of scientists in different "special interest" groups, the authors of the present work contend that geometric algebra should be known by all mathematicians and other scientists for what it really is: the natural algebraic completion of the real number system to include the concept of direction. Whereas, evidently, most mathematicians and other scientists are either unfamiliar with or reject this point of view, we will try to prevail by showing that Clifford algebra has really already been universally recognized in the guise of linear algebra. Since linear algebra is fully compatible with Clifford algebra, it follows that in learning linear algebra, every scientist has really learned Clifford algebra but is generally unaware of this fact! What is lacking in the standard treatments of linear algebra is the recognition of the natural graded structure of linear algebra and, therefore, the geometric interpretation that goes along with the definition of geometric algebra.
As has been often repeated by Hestenes and others, geometric algebra should be seen as a great unifier of the geometric ideas of mathematics (Hestenes, 1991). The purpose of the present article is to develop the ideas of geometric algebra alongside the more traditional tools of linear algebra by taking full advantage of their fully compatible structures. There are many advantages to such an approach. First, everybody knows matrix algebra, but not everybody is aware that exactly the same algebraic rules apply to the multivectors in a geometric algebra. Because of this fact, it is natural to consider matrices whose elements are taken from a geometric algebra. At the same time, by developing geometric algebra in such a way that any problem can be easily changed into an equivalent problem in matrix algebra, it becomes possible to utilize the powerful and extensive computer software that has been developed for working with matrices. Whereas CLICAL has proven itself to be a powerful computer aid in checking tedious Clifford algebra calculations, it lacks symbolic capabilities (Lounesto, 1994). Geometric algebra offers not only a comprehensive geometric interpretation but also a whole new set of algebraic tools for dealing with problems in linear algebra. We show that matrices, which are rectangular blocks of numbers, represent geometric numbers in a rather special spinor basis of a geometric algebra with neutral signature.

This work consists of four main chapters. This introductory chapter lays down the rationale for this article and gives a brief summary of its main ideas and content. Chapter 2 is primarily concerned with the development of the basic ideas of linear and multilinear algebra on an n-dimensional real vector space $\mathcal{N}$ we call the null space, since we are assuming that all vectors in $\mathcal{N}$ are null vectors (the square of each vector is zero). Taking all linear combinations of sums of products of vectors in $\mathcal{N}$ generates the $2^n$-dimensional associative Grassmann algebra $\mathcal{G}(\mathcal{N})$. This structure is sufficiently rich to efficiently develop many of the basic notions of linear algebra, such as the matrix of a linear operator and the theory of determinants and their properties.

Recently, there has been much interest in the application of geometric algebra to affine, projective and other non-euclidean geometries, (Maks, 1989), (Hestenes, 1991), (Hestenes and Ziegler, 1991), (Porteous, 1995) and (Havel, 1995). These noneuclidean models offer new computational tools for doing pseudoeuclidean and affine geometry using geometric algebra. Chapter 3 undertakes a systematic study of some of these models, and shows how the tools of geometric algebra make it possible to move freely between them, bringing a unification to the subject that is otherwise impossible. One of the key ideas is to define the meet and join operations on equivalence classes of blades of a geometric algebra which represent subspaces. Since a nonzero r-blade characterizes only the direction of a subspace, the magnitude of the blade is unimportant. Basic formulas for incidence relationships between points, lines, planes, and higher dimensional objects are compactly formulated. Examples of calculations are given in the affine plane which are just plain fun!

Chapter 4 explores the deep relationships which exist between projective geometry and the conformal group.
The conformal geometry of a pseudoeuclidean space can be linearized by considering the horosphere in a pseudoeuclidean space of two dimensions higher. The introduction of the novel concept of an h-twistor makes possible a simple new proof of the striking relationship between conformal transformations in a pseudoeuclidean space and isometries in a pseudoeuclidean space of two higher dimensions. The concept of an h-twistor greatly simplifies calculations and is in many ways a generalization of the successful spinor/twistor formalisms to pseudoeuclidean spaces of arbitrary signatures. The utility of the h-twistor concept is amply demonstrated in a new derivation of the Schwarzian derivative (Davis, 1974, p. 46), (Nehari, 1952, p. 199).

2. Geometric Algebra and Matrices

Let $\mathcal{N}$ be an n-dimensional vector space over a given field $\mathbb{K}$, and let

$$\{e\} = (\,e_1\ \ e_2\ \ \cdots\ \ e_n\,) \tag{1}$$

be a basis of $\mathcal{N}$. In this work we only consider real ($\mathbb{K} = \mathbb{R}$) or complex ($\mathbb{K} = \mathbb{C}$) vector spaces, although other fields $\mathbb{K}$ could be chosen. By interpreting each of the vectors in $\{e\}$ to be the column vectors of the standard basis of the identity matrix $id(n)$ of the $n \times n$ matrix algebra $\mathcal{M}(\mathbb{K})$ over the field $\mathbb{K}$, we are free to make the identification $\{e\} = id(n)$. We wish to emphasize that we are interpreting the basis vectors $e_i$ to be elements of the $1 \times n$ row matrix (1), and not the elements of a set. Thus, in what follows, we are assuming and often will apply the rules of matrix multiplication when dealing with the (generalized) row vector of basis vectors $\{e\}$.

Now let $\mathcal{N}^*$ be the dual vector space of 1-forms over the field $\mathbb{K}$, and let $\{e^*\}$ be the dual basis of $\mathcal{N}^*$ with respect to the basis $\{e\}$ of $\mathcal{N}$. If we now interpret each of the vectors in $\{e^*\}$ to be the row vectors of the standard basis of the identity matrix $id(n)$ of the $n \times n$ matrix algebra $\mathcal{M}(\mathbb{K})$, we can again make the identification $\{e^*\} = id(n)$. Because we wish to be able to interpret the elements of $\{e^*\}$ as row vectors, we will always write the vectors in $\{e^*\}$ in the column vector form

$$\{e^*\} = \begin{pmatrix} e^1 \\ e^2 \\ \vdots \\ e^n \end{pmatrix} \tag{2}$$

We also assume that the column vector $\{e^*\}$ obeys all the rules of matrix addition and multiplication of an $n \times 1$ column vector.

In terms of these bases, any vector or point $x \in \mathcal{N}$ can be written

$$x = \{e\}\, x_{\{e\}} = (\,e_1\ \ e_2\ \ \cdots\ \ e_n\,) \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \sum_{i=1}^{n} x_i e_i \tag{3}$$

for $x_i \in \mathbb{R}$, where

$$x_{\{e\}} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$

is the column vector of components of the vector $x$ with respect to the basis $\{e\}$. Since vectors in $\mathcal{N}$ are represented by column vectors, and vectors $y \in \mathcal{N}^*$ by row vectors, we define the operation of transpose of the vector $x \in \mathcal{N}$ by

$$x^t = \left(\{e\}\, x_{\{e\}}\right)^t = x_{\{e\}}^t\, \{e\}^t = (\,x_1\ \ x_2\ \ \dots\ \ x_n\,) \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix} \tag{4}$$

In the case of the complex field $\mathbb{K} = \mathbb{C}$, we have

$$x^{*t}_{\{e\}} = (\,\bar{x}_1\ \ \bar{x}_2\ \ \cdots\ \ \bar{x}_n\,) = \overline{\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}}^{\,t} \tag{5}$$

The transpose and Hermitian transpose operations allow us to move between the reciprocal vector spaces $\mathcal{N}$ and $\mathcal{N}^*$.
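The identifications above are easy to see concretely. The following is a small numerical sketch (not part of the paper) of equations (1)-(5) using numpy: the basis $\{e\}$ is taken to be the columns of the identity matrix, the dual basis its rows, and the transpose and Hermitian transpose move components between column and row form. The array names are my own.

```python
import numpy as np

n = 3
id_n = np.eye(n)                              # id(n), the n x n identity matrix

# Basis {e}: the columns of id(n); dual basis: the rows of id(n).
e_cols = [id_n[:, [i]] for i in range(n)]     # each e_i is an n x 1 column
e_rows = [id_n[[i], :] for i in range(n)]     # each dual basis vector is a 1 x n row

x_comp = np.array([[2.0], [-1.0], [5.0]])     # x_{e}, column of components
x = sum(x_comp[i, 0] * e_cols[i] for i in range(n))   # x = sum_i x_i e_i, eq. (3)

x_t = x_comp.T @ id_n.T                       # x^t = x_{e}^t {e}^t, eq. (4): a 1 x n row
assert np.allclose(x.T, x_t)

z_comp = np.array([[1 + 2j], [3.0], [-1j]])   # components over K = C
z_star = z_comp.conj().T                      # Hermitian transpose, eq. (5): row of conjugates
print(x_t, z_star)
```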
Recommended publications
  • Geometric Algebra for Vector Fields Analysis and Visualization: Mathematical Settings, Overview and Applications Chantal Oberson Ausoni, Pascal Frey
Geometric algebra for vector fields analysis and visualization: mathematical settings, overview and applications. Chantal Oberson Ausoni, Pascal Frey. Preprint hal-00920544v2, https://hal.sorbonne-universite.fr/hal-00920544v2, submitted 18 Sep 2014.

Abstract: The formal language of Clifford's algebras is attracting an increasingly large community of mathematicians, physicists and software developers seduced by the conciseness and the efficiency of this compelling system of mathematics. This contribution will suggest how these concepts can be used to serve the purpose of scientific visualization and more specifically to reveal the general structure of complex vector fields. We will emphasize the elegance and the ubiquitous nature of the geometric algebra approach, as well as point out the computational issues at stake.

1 Introduction. Nowadays, complex numerical simulations (e.g. in climate modelling, weather forecast, aeronautics, genomics, etc.) produce very large data sets, often several terabytes, that become almost impossible to process in a reasonable amount of time.
  • Exploring Physics with Geometric Algebra, Book II, © December 2016
Peeter Joot, [email protected]. Exploring Physics with Geometric Algebra, Book II. December 2016, version v.1.3.

Copyright © 2016 Peeter Joot, All Rights Reserved. This book may be reproduced and distributed in whole or in part, without fee, subject to the following conditions:
• The copyright notice above and this permission notice must be preserved complete on all complete or partial copies.
• Any translation or derived work must be approved by the author in writing before distribution.
• If you distribute this work in part, instructions for obtaining the complete version of this document must be included, and a means for obtaining a complete version provided.
• Small portions may be reproduced as illustrations for reviews or quotes in other works without this permission notice if proper citation is given.
Exceptions to these rules may be granted for academic purposes: write to the author and ask. Disclaimer: I confess to violating somebody's copyright when I copied this copyright statement.

Document version 0.6465. Sources for this notes compilation can be found in the github repository https://github.com/peeterjoot/physicsplay. The last commit (Dec/5/2016) associated with this pdf was 595cc0ba1748328b765c9dea0767b85311a26b3d.

Dedicated to: Aurora and Lance, my awesome kids, and Sofia, who not only tolerates and encourages my studies, but is also awesome enough to think that math is sexy.

Preface. This is an exploratory collection of notes containing worked examples of more advanced applications of Geometric Algebra (GA), also known as Clifford Algebra.
  • What's in a Name? The Matrix as an Introduction to Mathematics
St. John Fisher College, Fisher Digital Publications, Mathematical and Computing Sciences Faculty/Staff Publications, 9-2008. What's in a Name? The Matrix as an Introduction to Mathematics. Kris H. Green, St. John Fisher College, [email protected]. Follow this and additional works at: https://fisherpub.sjfc.edu/math_facpub (Part of the Mathematics Commons). Publication Information: Green, Kris H. (2008). "What's in a Name? The Matrix as an Introduction to Mathematics." Math Horizons 16.1, 18-21. This document is posted at https://fisherpub.sjfc.edu/math_facpub/12 and is brought to you for free and open access by Fisher Digital Publications at St. John Fisher College. For more information, please contact [email protected].

Abstract: In lieu of an abstract, here is the article's first paragraph: In my classes on the nature of scientific thought, I have often used the movie The Matrix to illustrate the nature of evidence and how it shapes the reality we perceive (or think we perceive). As a mathematician, I usually field questions related to the movie whenever the subject of linear algebra arises, since this field is the study of matrices and their properties. So it is natural to ask, why does the movie title reference a mathematical object? Disciplines: Mathematics. Comments: Article copyright 2008 by Math Horizons.
  • Multivector Differentiation and Linear Algebra, 17th Santaló Summer School 2016
Multivector Differentiation and Linear Algebra. 17th Santaló Summer School 2016, Santander. Joan Lasenby, Signal Processing Group, Engineering Department, Cambridge, UK, and Trinity College Cambridge. [email protected], www-sigproc.eng.cam.ac.uk/~jl. 23 August 2016.

Overview: the multivector derivative; examples of differentiation wrt multivectors; linear algebra: matrices and tensors as linear functions mapping between elements of the algebra; functional differentiation (very briefly); summary.

The Multivector Derivative. Recall our definition of the directional derivative in the $a$ direction,

$$a \cdot \nabla F(x) = \lim_{\epsilon \to 0} \frac{F(x + \epsilon a) - F(x)}{\epsilon}$$

We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction', where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use $*$ to denote taking the scalar part, i.e. $P * Q \equiv \langle PQ \rangle$. Then, provided A has the same grades as X, it makes sense to define:

$$A * \partial_X F(X) = \lim_{t \to 0} \frac{F(X + tA) - F(X)}{t}$$
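For readers coming from standard vector calculus, here is a tiny numerical sketch (my own, not from the slides) of the directional-derivative limit quoted above, approximated with a small finite step; the function F and the vectors x, a are arbitrary choices.

```python
import numpy as np

def directional_derivative(F, x, a, eps=1e-6):
    # Finite-difference approximation of a . grad F(x) = lim_{e->0} (F(x + e a) - F(x)) / e
    return (F(x + eps * a) - F(x)) / eps

F = lambda x: np.dot(x, x)                 # F(x) = |x|^2, so a . grad F(x) = 2 a . x exactly
x = np.array([1.0, 2.0, -1.0])
a = np.array([0.0, 1.0, 1.0])

print(directional_derivative(F, x, a))     # approximately 2 * (a . x) = 2.0
```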
  • Appendix A: Spinors in Four Dimensions
Appendix A: Spinors in Four Dimensions. In this appendix we collect the conventions used for spinors in both Minkowski and Euclidean spaces. In Minkowski space the flat metric has the form $\eta_{\mu\nu} = \mathrm{diag}(-1, 1, 1, 1)$, and the coordinates are labelled $(x^0, x^1, x^2, x^3)$. The analytic continuation into Euclidean space is made through the replacement $x^0 = ix^4$ (and in momentum space, $p^0 = -ip^4$), the coordinates in this case being labelled $(x^1, x^2, x^3, x^4)$. The Lorentz group in four dimensions, SO(3, 1), is not simply connected and therefore, strictly speaking, has no spinorial representations. To deal with these types of representations one must consider its double covering, the spin group Spin(3, 1), which is isomorphic to SL(2, C). The group SL(2, C) possesses a natural complex two-dimensional representation. Let us denote this representation by S and let us consider an element $\psi \in S$ with components $\psi_\alpha = (\psi_1, \psi_2)$ relative to some basis. The action of an element $M \in SL(2, \mathbb{C})$ is

$$(M\psi)_\alpha = M_\alpha{}^\beta \psi_\beta. \tag{A.1}$$

This is not the only action of SL(2, C) which one could choose. Instead of M we could have used its complex conjugate $\bar{M}$, its inverse transpose $(M^T)^{-1}$, or its inverse adjoint $(M^\dagger)^{-1}$. All of them satisfy the same group multiplication law. These choices would correspond to the complex conjugate representation $\bar{S}$, the dual representation $S^*$, and the dual complex conjugate representation $\bar{S}^*$. We will use the following conventions for elements of these representations: $\psi_\alpha \in S$, $\bar{\psi}_{\dot{\alpha}} \in \bar{S}$, $\psi^\alpha \in S^*$, $\bar{\psi}^{\dot{\alpha}} \in \bar{S}^*$.
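As a concrete illustration (mine, not from the appendix), the fundamental action (A.1) is just a complex 2×2 matrix of unit determinant multiplying a two-component column; the particular M and ψ below are arbitrary.

```python
import numpy as np

# An element M of SL(2, C): a complex 2 x 2 matrix with det M = 1.
M = np.array([[2.0, 1.0],
              [1.0, 1.0]], dtype=complex)
assert np.isclose(np.linalg.det(M), 1.0)

psi = np.array([1.0 + 1.0j, 2.0], dtype=complex)    # spinor components (psi_1, psi_2)

psi_new = M @ psi                     # (M psi)_alpha = M_alpha^beta psi_beta, eq. (A.1)
psi_bar_new = M.conj() @ psi.conj()   # the same spinor acted on in the conjugate representation
print(psi_new, psi_bar_new)
```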
  • Handout 9 More Matrix Properties; the Transpose
Handout 9: More matrix properties; the transpose.

Square matrix properties. These properties only apply to a square matrix, i.e. n × n.
• The leading diagonal is the diagonal line consisting of the entries $a_{11}, a_{22}, a_{33}, \dots, a_{nn}$.
• A diagonal matrix has zeros everywhere except the leading diagonal.
• The identity matrix I has zeros off the leading diagonal, and 1 for each entry on the diagonal. It is a special case of a diagonal matrix, and A I = I A = A for any n × n matrix A.
• An upper triangular matrix has all its non-zero entries on or above the leading diagonal.
• A lower triangular matrix has all its non-zero entries on or below the leading diagonal.
• A symmetric matrix has the same entries below and above the diagonal: $a_{ij} = a_{ji}$ for any values of i and j between 1 and n.
• An antisymmetric or skew-symmetric matrix has the opposite entries below and above the diagonal: $a_{ij} = -a_{ji}$ for any values of i and j between 1 and n. This automatically means the diagonal entries must all be zero.

Transpose. To transpose a matrix, we reflect it across the line given by the leading diagonal $a_{11}, a_{22}$, etc. In general the result is a different shape to the original matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}, \qquad A^{\top} = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ a_{13} & a_{23} \end{pmatrix}, \qquad [A^{\top}]_{ij} = A_{ji}.$$

• If A is m × n then $A^{\top}$ is n × m.
• The transpose of a symmetric matrix is itself: $A^{\top} = A$ (recalling that only square matrices can be symmetric).
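A quick numpy check (added here, not part of the handout) of the listed properties: the transpose swaps shape, $[A^\top]_{ij} = A_{ji}$, and symmetric/antisymmetric matrices satisfy $A^\top = A$ and $A^\top = -A$ respectively.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])            # a 2 x 3 matrix

At = A.T                                   # reflect across the leading diagonal
assert At.shape == (3, 2)                  # if A is m x n then A^T is n x m
assert all(At[i, j] == A[j, i] for i in range(3) for j in range(2))

S = np.array([[1.0, 7.0], [7.0, 2.0]])     # symmetric: a_ij = a_ji
K = np.array([[0.0, 3.0], [-3.0, 0.0]])    # antisymmetric: a_ij = -a_ji, zero diagonal
assert np.array_equal(S, S.T) and np.array_equal(K, -K.T)
print("transpose properties verified")
```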
  • Multilinear Algebra and Applications July 15, 2014
Multilinear Algebra and Applications, July 15, 2014. Contents: Chapter 1, Introduction. Chapter 2, Review of Linear Algebra: 2.1 Vector Spaces and Subspaces; 2.2 Bases; 2.3 The Einstein convention (2.3.1 Change of bases, revisited; 2.3.2 The Kronecker delta symbol); 2.4 Linear Transformations (2.4.1 Similar matrices); 2.5 Eigenbases. Chapter 3, Multilinear Forms: 3.1 Linear Forms (3.1.1 Definition, Examples, Dual and Dual Basis; 3.1.2 Transformation of Linear Forms under a Change of Basis); 3.2 Bilinear Forms (3.2.1 Definition, Examples and Basis; 3.2.2 Tensor product of two linear forms on V; 3.2.3 Transformation of Bilinear Forms under a Change of Basis); 3.3 Multilinear forms; 3.4 Examples (3.4.1 A Bilinear Form; 3.4.2 A Trilinear Form); 3.5 Basic Operation on Multilinear Forms. Chapter 4, Inner Products: 4.1 Definitions and First Properties (4.1.1 Correspondence Between Inner Products and Symmetric Positive Definite Matrices; 4.1.1.1 From Inner Products to Symmetric Positive Definite Matrices; 4.1.1.2 From Symmetric Positive Definite Matrices to Inner Products; 4.1.2 Orthonormal Basis); 4.2 Reciprocal Basis (4.2.1 Properties of Reciprocal Bases; 4.2.2 Change of basis from a basis B to its reciprocal basis B^g; 4.2.3 ...)
  • A Clifford Dyadic Superfield from Bilateral Interactions of Geometric Multispin Dirac Theory
A Clifford Dyadic Superfield from Bilateral Interactions of Geometric Multispin Dirac Theory. William M. Pezzaglia Jr., Department of Physics, Santa Clara University, Santa Clara, CA 95053, U.S.A., [email protected], and Alfred W. Differ, Department of Physics, American River College, Sacramento, CA 95841, U.S.A. (Received: November 5, 1993).

Abstract. Multivector quantum mechanics utilizes wavefunctions which are Clifford aggregates (e.g. sum of scalar, vector, bivector). This is equivalent to multispinors constructed of Dirac matrices, with the representation-independent form of the generators geometrically interpreted as the basis vectors of spacetime. Multiple generations of particles appear as left ideals of the algebra, coupled only by now-allowed right-side applied (dextral) operations. A generalized bilateral (two-sided operation) coupling is proposed which includes the above mentioned dextrad field, and the spin-gauge interaction as particular cases. This leads to a new principle of poly-dimensional covariance, in which physical laws are invariant under the reshuffling of coordinate geometry. Such a multigeometric superfield equation is proposed, which is sourced by a bilateral current. In order to express the superfield in representation and coordinate free form, we introduce Eddington E-F double-frame numbers. Symmetric tensors can now be represented as 4D "dyads", which actually are elements of a global 8D Clifford algebra. As a restricted example, the dyadic field created by the Greider-Ross multivector current (of a Dirac electron) describes both electromagnetic and Morris-Greider gravitational interactions. Key words: spin-gauge, multivector, clifford, dyadic.

1. Introduction. Multivector physics is a grand scheme in which we attempt to describe all basic physical structure and phenomena by a single geometrically interpretable Algebra.
  • CausalX: Causal eXplanations and Block Multilinear Factor Analysis
To appear: Proc. of the 2020 25th International Conference on Pattern Recognition (ICPR 2020), Milan, Italy, Jan. 10-15, 2021. CausalX: Causal eXplanations and Block Multilinear Factor Analysis. M. Alex O. Vasilescu (1,2), Eric Kim (2,1), Xiao S. Zeng (2); (1) Tensor Vision Technologies, Los Angeles, California; (2) Department of Computer Science, University of California, Los Angeles. [email protected], [email protected], [email protected].

Abstract—By adhering to the dictum, "No causation without manipulation (treatment, intervention)", cause and effect data analysis represents changes in observed data in terms of changes in the causal factors. When causal factors are not amenable for active manipulation in the real world due to current technological limitations or ethical considerations, a counterfactual approach performs an intervention on the model of data formation. In the case of object representation or activity (temporal object) representation, varying object parts is generally unfeasible whether they be spatial and/or temporal. Multilinear algebra, the algebra of higher order tensors, is a suitable and transparent framework for disentangling the causal factors of data formation. Learning a part-based intrinsic causal factor representations in a multilinear framework requires applying a set of interventions on a part- ...

I. Introduction: Problem Definition. Developing causal explanations for correct results or for failures from mathematical equations and data is important in developing a trustworthy artificial intelligence, and retaining public trust. Causal explanations are germane to the "right to an explanation" statute [15], [13], i.e., to data driven decisions, such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions [40], [42] consistent with Donald Rubin's quantitative definition of causality, where "A causes B" means "the effect of A is B", a measurable and experimentally repeatable quantity [14], [17].
  • Geometric-Algebra Adaptive Filters, Wilder B. Lopes
Geometric-Algebra Adaptive Filters. Wilder B. Lopes, Member, IEEE, Cassio G. Lopes, Senior Member, IEEE.

Abstract—This paper presents a new class of adaptive filters, namely Geometric-Algebra Adaptive Filters (GAAFs). They are generated by formulating the underlying minimization problem (a deterministic cost function) from the perspective of Geometric Algebra (GA), a comprehensive mathematical language well-suited for the description of geometric transformations. Also, differently from standard adaptive-filtering theory, Geometric Calculus (the extension of GA to differential calculus) allows for applying the same derivation techniques regardless of the type (subalgebra) of the data, i.e., real, complex numbers, quaternions, etc. Relying on those characteristics (among others), a deterministic quadratic cost function is posed, from which the GAAFs are devised, providing a generalization of regular adaptive filters to subalgebras of GA. From the obtained update rule, it is shown how to recover the following least-mean squares (LMS) adaptive filter variants: real-entries LMS, complex LMS, and quaternions LMS. Mean-square analysis and simulations in a system identification scenario are provided, showing very good agreement for different levels of measurement noise. Index Terms—Adaptive filtering, geometric algebra, quaternions.

Fig. 1. A polyhedron (3-dimensional polytope) can be completely described by the geometric multiplication of its edges (oriented lines, vectors), which generate the faces and hypersurfaces (in the case of a general n-dimensional polytope).

... perform calculus with hypercomplex quantities, i.e., elements that generalize complex numbers for higher dimensions [2]-[10]. GA-based AFs were first introduced in [11], [12], where they were successfully employed to estimate the geometric transformation (rotation and translation) that aligns a pair of ...
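For context, here is a minimal real-entries LMS sketch in a system-identification setting (standard textbook LMS, my own illustration; it is not the GAAF update derived in the paper). The step size, filter length, and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])               # unknown system to identify
mu, n_taps, n_iter = 0.05, 3, 2000                # step size, filter length, iterations

w = np.zeros(n_taps)                              # adaptive filter weights
for _ in range(n_iter):
    u = rng.standard_normal(n_taps)               # input regressor
    d = w_true @ u + 0.01 * rng.standard_normal() # desired signal plus measurement noise
    e = d - w @ u                                 # a-priori estimation error
    w = w + mu * e * u                            # real-entries LMS update

print(w)                                          # close to w_true
```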
  • Determinants in Geometric Algebra
Determinants in Geometric Algebra. Eckhard Hitzer. 16 June 2003, recovered+expanded May 2020.

1 Definition. Let f be a linear map of a real linear vector space $\mathbb{R}^n$ into itself, an endomorphism

$$f : a \in \mathbb{R}^n \to a' \in \mathbb{R}^n. \tag{1}$$

This map is extended by outermorphism (symbol $\underline{f}$) to act linearly on multivectors

$$\underline{f}(a_1 \wedge a_2 \wedge \dots \wedge a_k) = f(a_1) \wedge f(a_2) \wedge \dots \wedge f(a_k), \quad k \le n. \tag{2}$$

By definition $\underline{f}$ is grade-preserving and linear, mapping multivectors to multivectors. Examples are the reflections, rotations and translations described earlier. The outermorphism of a product of two linear maps fg is the product of the outermorphisms $\underline{f}\,\underline{g}$

$$\underline{f}[\underline{g}(a_1)] \wedge \underline{f}[\underline{g}(a_2)] \wedge \dots \wedge \underline{f}[\underline{g}(a_k)] = \underline{f}[\underline{g}(a_1) \wedge \underline{g}(a_2) \wedge \dots \wedge \underline{g}(a_k)] = \underline{f}[\underline{g}(a_1 \wedge a_2 \wedge \dots \wedge a_k)], \tag{3}$$

with $k \le n$. The square brackets can safely be omitted. The n-grade pseudoscalars of a geometric algebra are unique up to a scalar factor. This can be used to define the determinant of a linear map as

$$\det(f) = \underline{f}(I)\, I^{-1} = \underline{f}(I) * I^{-1}, \quad\text{and therefore}\quad \underline{f}(I) = \det(f)\, I. \tag{4}$$

For an orthonormal basis $\{e_1, e_2, \dots, e_n\}$ the unit pseudoscalar is $I = e_1 e_2 \cdots e_n$ with inverse $I^{-1} = (-1)^q\, e_n e_{n-1} \cdots e_1 = (-1)^q (-1)^{n(n-1)/2}\, I$, where q gives the number of basis vectors that square to $-1$ (the linear space is then $\mathbb{R}^{p,q}$). According to Grassmann, n-grade vectors represent oriented volume elements of dimension n. The determinant therefore shows how these volumes change under linear maps.
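Definition (4) can be checked numerically. The sketch below (my own, in plain numpy rather than a geometric-algebra package) works in $\mathbb{R}^3$ with an orthonormal basis, where the coefficient of $\underline{f}(I) = f(e_1) \wedge f(e_2) \wedge f(e_3)$ on $I$ is the scalar triple product $f(e_1) \cdot (f(e_2) \times f(e_3))$, and it agrees with the matrix determinant.

```python
import numpy as np

F = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])                  # matrix of a linear map f on R^3

# f(e1) ^ f(e2) ^ f(e3) = det(f) I; in R^3 the coefficient of I is the triple product.
f_e1, f_e2, f_e3 = F[:, 0], F[:, 1], F[:, 2]
det_via_pseudoscalar = np.dot(f_e1, np.cross(f_e2, f_e3))

assert np.isclose(det_via_pseudoscalar, np.linalg.det(F))
print(det_via_pseudoscalar)                      # 7.0 for this F
```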
  • 28. Exterior Powers
28. Exterior powers. 28.1 Desiderata; 28.2 Definitions, uniqueness, existence; 28.3 Some elementary facts; 28.4 Exterior powers $\wedge^i f$ of maps; 28.5 Exterior powers of free modules; 28.6 Determinants revisited; 28.7 Minors of matrices; 28.8 Uniqueness in the structure theorem; 28.9 Cartan's lemma; 28.10 Cayley-Hamilton Theorem; 28.11 Worked examples.

While many of the arguments here have analogues for tensor products, it is worthwhile to repeat these arguments with the relevant variations, both for practice, and to be sensitive to the differences.

1. Desiderata. Again, we review missing items in our development of linear algebra. We are missing a development of determinants of matrices whose entries may be in commutative rings, rather than fields. We would like an intrinsic definition of determinants of endomorphisms, rather than one that depends upon a choice of coordinates, even if we eventually prove that the determinant is independent of the coordinates. We anticipate that Artin's axiomatization of determinants of matrices should be mirrored in much of what we do here. We want a direct and natural proof of the Cayley-Hamilton theorem. Linear algebra over fields is insufficient, since the introduction of the indeterminate x in the definition of the characteristic polynomial takes us outside the class of vector spaces over fields. We want to give a conceptual proof for the uniqueness part of the structure theorem for finitely-generated modules over principal ideal domains. Multi-linear algebra over fields is surely insufficient for this.

2. Definitions, uniqueness, existence. Let R be a commutative ring with 1.
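A small computational sketch (mine, not from the notes) of the k-th exterior power of a map on $\mathbb{R}^n$: its matrix entries are the k×k minors, the top power recovers the determinant, and the construction is functorial, $\wedge^k(AB) = (\wedge^k A)(\wedge^k B)$.

```python
import numpy as np
from itertools import combinations

def exterior_power(A, k):
    # Matrix of the k-th exterior power of A: entries are the k x k minors of A.
    n = A.shape[0]
    idx = list(combinations(range(n), k))        # k-element row/column index sets
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])

# Functoriality (Cauchy-Binet): the exterior power of a product is the product of exterior powers.
assert np.allclose(exterior_power(A @ B, 2), exterior_power(A, 2) @ exterior_power(B, 2))
# The top exterior power is 1 x 1 and recovers the determinant.
assert np.isclose(exterior_power(A, 3)[0, 0], np.linalg.det(A))
print(exterior_power(A, 2))
```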