Multilinear Algebra in Data Analysis: Tensors, Symmetric Tensors, Nonnegative Tensors


Lek-Heng Lim
Stanford University

Workshop on Algorithms for Modern Massive Datasets
Stanford, CA, June 21–24, 2006

Thanks: G. Carlsson, L. De Lathauwer, J.M. Landsberg, M. Mahoney, L. Qi, B. Sturmfels. Collaborators: P. Comon, V. de Silva, P. Drineas, G. Golub.

References

http://www-sccm.stanford.edu/nf-publications-tech.html

[CGLM2] P. Comon, G. Golub, L.-H. Lim, and B. Mourrain, “Symmetric tensors and symmetric tensor rank,” SCCM Tech. Rep., 06-02, 2006.
[CGLM1] P. Comon, B. Mourrain, L.-H. Lim, and G.H. Golub, “Genericity and rank deficiency of high order symmetric tensors,” Proc. IEEE Int. Conference on Acoustics, Speech, and Signal Processing (ICASSP), 31 (2006), no. 3, pp. 125–128.
[dSL] V. de Silva and L.-H. Lim, “Tensor rank and the ill-posedness of the best low-rank approximation problem,” SCCM Tech. Rep., 06-06 (2006).
[GL] G. Golub and L.-H. Lim, “Nonnegative decomposition and approximation of nonnegative matrices and tensors,” SCCM Tech. Rep., 06-01 (2006), forthcoming.
[L] L.-H. Lim, “Singular values and eigenvalues of tensors: a variational approach,” Proc. IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 1 (2005), pp. 129–132.

What is not a tensor, I

• What is a vector?
  – Mathematician: an element of a vector space.
  – Physicist: “What kinds of physical quantities can be represented by vectors?” Answer: once a basis is chosen, an n-dimensional vector is represented by n real numbers, but only if those numbers transform as expected (i.e., according to the change-of-basis theorem) when the basis is changed.

• What is a tensor?
  – Mathematician: an element of a tensor product of vector spaces.
  – Physicist: “What kinds of physical quantities can be represented by tensors?” Answer: see the change-of-basis theorem for tensors below.
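The physicist's criterion can be checked numerically. The following sketch (variable names are my own, assuming NumPy) shows that when the basis changes by an invertible matrix B, the coordinate array of a fixed vector changes by B⁻¹, so both arrays represent the same geometric object:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([1.0, 2.0, 3.0])       # coordinates of v in the standard basis
B = rng.normal(size=(3, 3))         # columns of B: a new (generically invertible) basis

# Change-of-basis rule for vectors: coords_new = B^{-1} @ coords_old.
coords_new = np.linalg.solve(B, v)

# Reconstructing the vector from the new coordinates recovers v exactly,
# which is what makes the n numbers "a vector" rather than a mere list.
assert np.allclose(B @ coords_new, v)
```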
What is not a tensor, II

By a tensor, physicists and geometers often mean a tensor field (roughly, a tensor-valued function on a manifold):

• stress tensor
• moment-of-inertia tensor
• gravitational field tensor
• metric tensor
• curvature tensor

Tensor product of vector spaces

Let U, V, W be vector spaces. Think of U ⊗ V ⊗ W as the vector space of all formal linear combinations of terms of the form αu ⊗ v ⊗ w, where α ∈ R, u ∈ U, v ∈ V, w ∈ W, subject to one condition: ⊗ is decreed to have the multilinear property

(αu_1 + βu_2) ⊗ v ⊗ w = α u_1 ⊗ v ⊗ w + β u_2 ⊗ v ⊗ w,
u ⊗ (αv_1 + βv_2) ⊗ w = α u ⊗ v_1 ⊗ w + β u ⊗ v_2 ⊗ w,
u ⊗ v ⊗ (αw_1 + βw_2) = α u ⊗ v ⊗ w_1 + β u ⊗ v ⊗ w_2.

Tensors and multiway arrays

Up to a choice of bases on U, V, W, a tensor A ∈ U ⊗ V ⊗ W can be represented by a 3-way array A = [a_ijk] ∈ R^{l×m×n}, on which the following algebraic operations are defined:

1. Addition/scalar multiplication: for [b_ijk] ∈ R^{l×m×n} and λ ∈ R,

   [a_ijk] + [b_ijk] := [a_ijk + b_ijk]  and  λ[a_ijk] := [λa_ijk] ∈ R^{l×m×n}.

2. Multilinear matrix multiplication: for matrices L = [λ_{i′i}] ∈ R^{p×l}, M = [μ_{j′j}] ∈ R^{q×m}, N = [ν_{k′k}] ∈ R^{r×n},

   (L, M, N) · A := [c_{i′j′k′}] ∈ R^{p×q×r},  where  c_{i′j′k′} := Σ_{i=1}^{l} Σ_{j=1}^{m} Σ_{k=1}^{n} λ_{i′i} μ_{j′j} ν_{k′k} a_{ijk}.

Change-of-basis theorem for tensors

Two representations A, A′ of A in different bases are related by (L, M, N) · A = A′, where L, M, N are the respective (non-singular) change-of-basis matrices. Henceforth we will not distinguish between an order-k tensor and a k-way array that represents it (with respect to some implicit choice of basis).

Segre outer product

If U = R^l, V = R^m, W = R^n, then R^l ⊗ R^m ⊗ R^n may be identified with R^{l×m×n} if we define ⊗ by

u ⊗ v ⊗ w = [u_i v_j w_k]_{i,j,k=1}^{l,m,n}.

A tensor A ∈ R^{l×m×n} is said to be decomposable if it can be written in the form A = u ⊗ v ⊗ w for some u ∈ R^l, v ∈ R^m, w ∈ R^n. The set of all decomposable tensors is known as the Segre variety in algebraic geometry.
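The multilinear matrix multiplication (L, M, N) · A is a one-line `np.einsum` call. The sketch below (my own code, assuming NumPy) implements the definition and checks the change-of-basis theorem for tensors: transforming by (L, M, N) and then by the inverse matrices recovers the original array.

```python
import numpy as np

def mlmult(L, M, N, A):
    # c_{i'j'k'} = sum_{ijk} L[i',i] * M[j',j] * N[k',k] * A[i,j,k]
    return np.einsum('ai,bj,ck,ijk->abc', L, M, N, A)

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4, 5))
L, M, N = (rng.normal(size=(d, d)) for d in (3, 4, 5))  # non-singular with prob. 1

# Change-of-basis theorem: A' = (L, M, N) · A, and applying the inverse
# change-of-basis matrices takes A' back to A.
A2 = mlmult(L, M, N, A)
A3 = mlmult(np.linalg.inv(L), np.linalg.inv(M), np.linalg.inv(N), A2)
assert np.allclose(A, A3)
```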
It is a closed set (in both the Euclidean and the Zariski sense), as it can be described algebraically:

Seg(R^l, R^m, R^n) = {A ∈ R^{l×m×n} | a_{i1 i2 i3} a_{j1 j2 j3} = a_{k1 k2 k3} a_{l1 l2 l3} whenever {i_α, j_α} = {k_α, l_α}}.

Tensor ranks

Matrix rank. For A ∈ R^{m×n},

rank(A) = dim(span_R{A_{•1}, …, A_{•n}})   (column rank)
        = dim(span_R{A_{1•}, …, A_{m•}})   (row rank)
        = min{r | A = Σ_{i=1}^{r} u_i v_i^T}  (outer product rank).

Multilinear rank. For A ∈ R^{l×m×n}, rank_⊞(A) = (r_1(A), r_2(A), r_3(A)), where

r_1(A) = dim(span_R{A_{1••}, …, A_{l••}}),
r_2(A) = dim(span_R{A_{•1•}, …, A_{•m•}}),
r_3(A) = dim(span_R{A_{••1}, …, A_{••n}}).

Outer product rank. For A ∈ R^{l×m×n},

rank_⊗(A) = min{r | A = Σ_{i=1}^{r} u_i ⊗ v_i ⊗ w_i}.

In general, rank_⊗(A) ≠ r_1(A) ≠ r_2(A) ≠ r_3(A).

Credit

Both notions of tensor rank (and the corresponding decompositions) are due to Frank L. Hitchcock in 1927. Multilinear rank is a special case (the uniplex case) of his more general multiplex rank:

F.L. Hitchcock, “The expression of a tensor or a polyadic as a sum of products,” J. Math. Phys., 6 (1927), no. 1, pp. 164–189.
F.L. Hitchcock, “Multiple invariants and generalized rank of a p-way matrix or tensor,” J. Math. Phys., 7 (1927), no. 1, pp. 39–79.

These notions are often wrongly attributed; they predate CANDECOMP/PARAFAC and Tucker by 40 years.

Outer product rank

Theorem (Håstad). Computing rank_⊗(A) for A ∈ R^{l×m×n} is an NP-hard problem.

Matrix case: for A ∈ R^{m×n} ⊂ C^{m×n}, rank(A) is the same whether we regard A as a real matrix or a complex matrix.

Theorem (Bergman). For A ∈ R^{l×m×n} ⊂ C^{l×m×n}, rank_⊗(A) is base-field dependent.

Example. Let x, y ∈ R^n be linearly independent and let z = x + iy. Then

x ⊗ x ⊗ x − x ⊗ y ⊗ y + y ⊗ x ⊗ y + y ⊗ y ⊗ x = (1/2)(z ⊗ z̄ ⊗ z̄ + z̄ ⊗ z ⊗ z),

so this tensor A has rank_⊗(A) = 3 over R but rank_⊗(A) = 2 over C.

Fundamental problem of multiway data analysis

Let A be a tensor, symmetric tensor, or nonnegative tensor. Solve

argmin_{rank(B) ≤ r} ‖A − B‖,

where rank may be the outer product rank, the multilinear rank, the symmetric rank (for symmetric tensors), or the nonnegative rank (for nonnegative tensors).

Example.
Given A ∈ R^{d1×d2×d3}, find u_i, v_i, w_i, i = 1, …, r, that minimize

‖A − u_1 ⊗ v_1 ⊗ w_1 − u_2 ⊗ v_2 ⊗ w_2 − ⋯ − u_r ⊗ v_r ⊗ w_r‖,

or find C ∈ R^{r1×r2×r3} and L_i ∈ R^{d_i×r_i} that minimize

‖A − (L_1, L_2, L_3) · C‖.

Example. Given A ∈ S^k(C^n), find u_i, i = 1, …, r, that minimize

‖A − u_1^{⊗k} − u_2^{⊗k} − ⋯ − u_r^{⊗k}‖.

Harmonic analytic approach to data analysis

More generally, take F = C, R, R_+, R_max (the max-plus algebra), R[x_1, …, x_n] (polynomial rings), etc. A dictionary is a subset D ⊂ F^N not contained in any hyperplane. Let D_2 = union of bisecants to D, D_3 = union of trisecants to D, …, D_r = union of r-secants to D. Define the D-rank of A ∈ F^N to be min{r | A ∈ D_r}.

If φ : F^N × F^N → R is some measure of ‘nearness’ between pairs of points (e.g. norms, Bregman divergences, etc.), we want to find a best low-rank approximation to A:

argmin{φ(A, B) | D-rank(B) ≤ r}.

Feature revelation

Get a low-rank approximation

A ≈ α_1 · B_1 + ⋯ + α_r · B_r ∈ D_r.

The B_i ∈ D reveal features of the dataset A. Note that another way to say ‘best low-rank’ is ‘sparsest possible’.

Example. D = {A | rank_⊗(A) ≤ 1}, φ(A, B) = ‖A − B‖_F: this gives the usual CANDECOMP/PARAFAC.

Example. D = {A | rank_⊞(A) ≤ (r_1, r_2, r_3)} (an algebraic set), φ(A, B) = ‖A − B‖_F: this gives the De Lathauwer decomposition.

Simple lemma

Lemma (de Silva, L.). Let r ≥ 2 and k ≥ 3. Given the norm topology on R^{d1×⋯×dk}, the following statements are equivalent:

(a) The set S_r(d_1, …, d_k) := {A | rank_⊗(A) ≤ r} is not closed.
(b) There exists a sequence A_n, rank_⊗(A_n) ≤ r, n ∈ N, converging to B with rank_⊗(B) > r.
(c) There exists B, rank_⊗(B) > r, that may be approximated arbitrarily closely by tensors of strictly lower rank, i.e. inf{‖B − A‖ | rank_⊗(A) ≤ r} = 0.
(d) There exists C, rank_⊗(C) > r, that does not have a best rank-r approximation, i.e. inf{‖C − A‖ | rank_⊗(A) ≤ r} is not attained (by any A with rank_⊗(A) ≤ r).

Non-existence of best low-rank approximation

D. Bini, M. Capovani, F. Romani, and G. Lotti, “O(n^{2.7799}) complexity for n × n approximate matrix multiplication,” Inform. Process. Lett., 8 (1979), no. 5, pp. 234–235.

Let x, y, z, w be linearly independent. Define

A := x⊗x⊗x + x⊗y⊗z + y⊗z⊗x + y⊗w⊗z + z⊗x⊗y + z⊗y⊗w

and, for ε > 0,

B_ε := (y + εx) ⊗ (y + εw) ⊗ ε^{−1}z + (z + εx) ⊗ ε^{−1}x ⊗ (x + εy)
       − ε^{−1}y ⊗ y ⊗ (x + z + εw) − ε^{−1}z ⊗ (x + y + εz) ⊗ x
       + ε^{−1}(y + z) ⊗ (y + εz) ⊗ (x + εw).

Then rank_⊗(B_ε) ≤ 5, rank_⊗(A) = 6, and ‖B_ε − A‖ → 0 as ε → 0. Hence A has no optimal approximation by tensors of rank ≤ 5.
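The Bini–Capovani–Romani–Lotti construction can be verified numerically. In the sketch below (my own code, assuming NumPy; any four linearly independent vectors work, here the standard basis of R^4), expanding B_ε shows that the ε^{−1} terms cancel and B_ε = A + ε(x⊗w⊗z + x⊗x⊗y + y⊗z⊗w + z⊗z⊗w), so for orthonormal x, y, z, w the Frobenius error is exactly 2ε:

```python
import numpy as np

def outer(a, b, c):
    # Segre outer product a ⊗ b ⊗ c as a 3-way array
    return np.einsum('i,j,k->ijk', a, b, c)

x, y, z, w = np.eye(4)  # four linearly independent (here orthonormal) vectors

# The rank-6 tensor A of Bini et al.
A = (outer(x, x, x) + outer(x, y, z) + outer(y, z, x)
     + outer(y, w, z) + outer(z, x, y) + outer(z, y, w))

def B(eps):
    # A sum of 5 decomposable terms, so rank_⊗(B(eps)) ≤ 5 for every eps > 0.
    return (outer(y + eps*x, y + eps*w, z/eps)
            + outer(z + eps*x, x/eps, x + eps*y)
            - outer(y/eps, y, x + z + eps*w)
            - outer(z/eps, x + y + eps*z, x)
            + outer((y + z)/eps, y + eps*z, x + eps*w))

for eps in (1e-1, 1e-2, 1e-3):
    # B(eps) - A = eps*(x⊗w⊗z + x⊗x⊗y + y⊗z⊗w + z⊗z⊗w): four orthonormal
    # rank-1 terms, so the Frobenius norm of the error is 2*eps.
    err = np.linalg.norm(B(eps) - A)
    assert abs(err - 2*eps) < 1e-8
```

So a sequence of tensors of rank at most 5 converges to a tensor of rank 6, which is exactly statement (b) of the lemma above with r = 5.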