Some Multilinear Algebra


Bernhard Leeb

January 25, 2020

Contents

1 Multilinear maps
  1.1 General multilinear maps
  1.2 The sign of a permutation
  1.3 Symmetric multilinear maps
  1.4 Alternating multilinear maps
    1.4.1 The determinant of a matrix
    1.4.2 The determinant of an endomorphism
    1.4.3 Orientation
    1.4.4 Determinant and volume
2 Tensors
  2.1 The tensor product of vector spaces
  2.2 The tensor algebra of a vector space
  2.3 The exterior algebra of a vector space

See also: Lang, Algebra; van der Waerden, Algebra I.

1 Multilinear maps

1.1 General multilinear maps

We work with vector spaces over a fixed field $K$. Let $V_1, \dots, V_k, W$ be vector spaces. A map
$$\mu \colon V_1 \times \dots \times V_k \to W$$
is called multilinear if it is linear in each variable, that is, if all maps $\mu(v_1, \dots, v_{i-1}, \cdot\,, v_{i+1}, \dots, v_k) \colon V_i \to W$ are linear. The multilinear maps form a vector space $\operatorname{Mult}(V_1, \dots, V_k; W)$.

Multilinear maps in $k$ variables are also called $k$-linear. The $1$-linear maps are the linear maps, the $2$-linear maps are the bilinear maps.

Multilinear maps are determined by their values on bases, and these values are independent of each other. More precisely, if $(e^{(i)}_{j_i} \mid j_i \in J_i)$ are bases of the $V_i$, then a multilinear map $\mu$ is determined by the values $\mu(e^{(1)}_{j_1}, \dots, e^{(k)}_{j_k}) \in W$ for $(j_1, \dots, j_k) \in J_1 \times \dots \times J_k$, and these values can be arbitrary, i.e. for any vectors $w_{j_1 \dots j_k} \in W$ there is a unique multilinear map $\mu$ with $\mu(e^{(1)}_{j_1}, \dots, e^{(k)}_{j_k}) = w_{j_1 \dots j_k}$. Indeed, for the values on general vectors $v_i = \sum_{j_i \in J_i} a_{i j_i} e^{(i)}_{j_i} \in V_i$ we obtain the representation
$$\mu(v_1, \dots, v_k) = \sum_{j_1, \dots, j_k} \Bigl( \prod_{i=1}^{k} a_{i j_i} \Bigr) \cdot \mu\bigl(e^{(1)}_{j_1}, \dots, e^{(k)}_{j_k}\bigr). \tag{1.1}$$

In particular, if the dimensions of the vector spaces are finite, then
$$\dim \operatorname{Mult}(V_1, \dots, V_k; W) = \Bigl( \prod_j \dim V_j \Bigr) \cdot \dim W.$$

Examples: Products in algebras. Composition $\operatorname{Hom}(U, V) \times \operatorname{Hom}(V, W) \to \operatorname{Hom}(U, W)$. Scalar products, symplectic forms, volume forms, determinant.
Natural pairing $V^* \times V \to K$.

1.2 The sign of a permutation

Let $S_k$ denote the symmetric group on $k$ symbols, realized as the group of bijective self-maps of the set $\{1, \dots, k\}$. The sign of a permutation $\pi \in S_k$ is defined as
$$\operatorname{sgn}(\pi) = \prod_{1 \leq i < j \leq k} \frac{\pi(i) - \pi(j)}{i - j} \in \{\pm 1\}.$$
It counts the parity of the number of inversions of $\pi$, i.e. of pairs $(i, j)$ such that $i < j$ and $\pi(i) > \pi(j)$. The sign is positive if and only if the number of inversions is even, and such permutations are called even. Transpositions are odd, and a permutation is even if and only if it can be written as the product of an even number of transpositions.

The sign is multiplicative,
$$\operatorname{sgn}(\pi\pi') = \operatorname{sgn}(\pi) \operatorname{sgn}(\pi'),$$
i.e. the sign map
$$\operatorname{sgn} \colon S_k \to \{\pm 1\}$$
is a homomorphism of groups. Its kernel $A_k$ is called the alternating group. Briefly, the sign map is characterized as the unique homomorphism $S_k \to \{\pm 1\}$ which maps transpositions to $-1$. The remarkable fact is that such a homomorphism exists at all. A consequence is that, when representing a permutation as a product of transpositions, the parity of the number of factors is well-defined.

Remark. For $n \geq 5$ the alternating group $A_n$ is non-abelian and simple.

1.3 Symmetric multilinear maps

We now consider multilinear maps whose variables take their values in the same vector space $V$. One also calls such maps multilinear maps on $V$ (instead of on $V^k$). We abbreviate
$$\operatorname{Mult}_k(V; W) := \operatorname{Mult}(\underbrace{V, \dots, V}_{k}; W)$$
and denote by $\operatorname{Mult}_k(V) := \operatorname{Mult}_k(V; K)$ the space of $k$-linear forms on $V$. Then $\operatorname{Mult}_1(V) = V^*$ and, by convention, $\operatorname{Mult}_0(V) = K$.

A $k$-linear map
$$\mu \colon V^k = V \times \dots \times V \to W$$
is called symmetric if
$$\mu(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = \mu(v_1, \dots, v_k) \tag{1.2}$$
for all permutations $\sigma \in S_k$ and all $v_i \in V$.

Remark. There is a natural action
$$S_k \curvearrowright \operatorname{Mult}_k(V; W) \tag{1.3}$$
of permutations on multilinear maps given by
$$(\sigma\mu)(v_1, \dots, v_k) = \mu(v_{\sigma(1)}, \dots, v_{\sigma(k)}).$$
Indeed,
$$(\tau(\sigma\mu))(v_1, \dots) = (\sigma\mu)(v_{\tau(1)}, \dots) = \mu(v_{\tau(\sigma(1))}, \dots) = ((\tau\sigma)\mu)(v_1, \dots)$$
for $\sigma, \tau \in S_k$. Thus condition (1.2) can be rewritten as
$$\sigma\mu = \mu,$$
i.e. $\mu$ is symmetric if and only if it is a fixed point for the natural $S_k$-action (1.3).

We describe the data necessary to determine a symmetric multilinear map. If $(e_i \mid i \in I)$ is a basis of $V$, then $\mu \in \operatorname{Mult}_k(V)$ is symmetric if and only if the values on basis vectors satisfy $\mu(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}}) = \mu(e_{i_1}, \dots, e_{i_k})$ for all $i_1, \dots, i_k \in I$ and $\sigma \in S_k$; compare the representation (1.1) of the values on general vectors. Hence, if $I$ is equipped with a total ordering "$\leq$", then $\mu$ is determined by the values $\mu(e_{i_1}, \dots, e_{i_k})$ for $i_1 \leq \dots \leq i_k$, and these values can be arbitrary.

If dimensions are finite, we conclude that
$$\dim \operatorname{Mult}^{\operatorname{sym}}_k(V; W) = \binom{\dim V + k - 1}{k} \cdot \dim W.$$

Further discussion: Polynomials and symmetric multilinear forms.

1.4 Alternating multilinear maps

We now consider a modified symmetry condition for multilinear maps, namely one which is "twisted" by the sign homomorphism on permutations:

Definition (Antisymmetric). A map $\mu \in \operatorname{Mult}_k(V; W)$ is called anti- or skew-symmetric if
$$\mu(v_{\sigma(1)}, \dots, v_{\sigma(k)}) = \operatorname{sgn}(\sigma) \cdot \mu(v_1, \dots, v_k) \tag{1.4}$$
for all $v_i \in V$ and permutations $\sigma \in S_k$.

In terms of the natural action $S_k \curvearrowright \operatorname{Mult}_k(V; W)$ we can rewrite (1.4) as
$$\sigma\mu = \operatorname{sgn}(\sigma) \cdot \mu \tag{1.5}$$
for all $\sigma \in S_k$.

A closely related family of conditions will turn out to be more natural to work with:

Lemma 1.6. The following three conditions on a $k$-linear map $\mu \in \operatorname{Mult}_k(V; W)$ are equivalent:

(i) $\mu(v_1, \dots, v_k) = 0$ whenever $v_i = v_{i+1}$ for some $1 \leq i < k$.

(ii) $\mu(v_1, \dots, v_k) = 0$ whenever $v_i = v_j$ for some $1 \leq i < j \leq k$.

(iii) $\mu(v_1, \dots, v_k) = 0$ whenever the $v_i$ are linearly dependent.

They imply that $\mu$ is antisymmetric.

Proof. Obviously (iii) $\Rightarrow$ (ii) $\Rightarrow$ (i).

(i) $\Rightarrow$ antisymmetric and (ii): Suppose that (i) holds.
The computation
$$\beta(u, v) + \beta(v, u) = \beta(u + v, u + v) - \beta(u, u) - \beta(v, v)$$
for bilinear maps $\beta$, whose right-hand side vanishes by (i), shows that then (1.4) holds in the general $k$-linear case for the transpositions of pairs $(i, i+1)$ of adjacent numbers. Since these transpositions generate the group $S_k$, it follows that (1.4) holds for all permutations $\sigma \in S_k$ [1], that is, $\mu$ is antisymmetric. The antisymmetry together with (i) implies (ii).

(ii) $\Rightarrow$ (iii): Suppose that the $v_i$ are linearly dependent. In view of the antisymmetry (implied already by (i), as we just saw), we may assume that $v_k$ is a linear combination of the other $v_i$, that is, $v_k = \sum_{i < k} a_i v_i$. Then $\mu(v_1, \dots, v_k) = \sum_{i < k} a_i \mu(v_1, \dots, v_{k-1}, v_i) = 0$ because of (ii). $\square$

Definition (Alternating). A map $\mu \in \operatorname{Mult}_k(V; W)$ is called alternating if it satisfies (one of) the equivalent conditions (i)-(iii) of the lemma.

According to the lemma, alternating multilinear maps are antisymmetric. If $\operatorname{char} K \neq 2$, then also the converse holds: [2]

[1] The permutations $\sigma \in S_k$ for which (1.4) holds form a subgroup of $S_k$.

[2] The field $K$ has characteristic $\neq 2$ if $2 := 1 + 1 \neq 0$ in $K$. In this case $2$ has a multiplicative inverse in $K$, i.e. one can divide by $2$ in $K$. On the other hand, if the field $K$ has characteristic $2$, i.e. if $2 = 0$ in $K$, then $1 = -1$ and hence $a = -a$ for all $a \in K$, i.e. there are no signs in $K$.

Lemma. If $\operatorname{char} K \neq 2$, then antisymmetric multilinear maps are alternating.

Proof. It suffices to treat the bilinear case. An antisymmetric bilinear map $\beta$ satisfies $\beta(v, v) = -\beta(v, v)$ for all $v \in V$, and hence $2\beta(v, v) = 0$. Dividing by $2$ yields that $\beta$ is alternating. $\square$

Moreover, in characteristic $\neq 2$ antisymmetry and symmetry are "transverse" conditions; the only multilinear maps which are both symmetric and skew-symmetric are the null maps.

Remark. If $\operatorname{char} K \neq 2$, then bilinear forms can be uniquely decomposed as sums of symmetric and alternating ones, since
$$\beta(u, v) = \underbrace{\frac{\beta(u, v) + \beta(v, u)}{2}}_{\text{symmetric}} + \underbrace{\frac{\beta(u, v) - \beta(v, u)}{2}}_{\text{anti-symmetric}}.$$

If $\operatorname{char} K = 2$, then the relations between the conditions are different. Since there are no signs, skew-symmetry is the same as symmetry, whereas alternation is more restrictive if $k \geq 2$. Since antisymmetry coincides with either alternation or symmetry, depending on the characteristic, alternation is the more interesting condition to consider.

We denote by
$$\operatorname{Alt}_k(V; W) \subset \operatorname{Mult}_k(V; W)$$
the $K$-vector space of alternating multilinear maps, and we write $\operatorname{Alt}_k(V) := \operatorname{Alt}_k(V; K)$.

We now describe the data determining an alternating multilinear map.

Lemma 1.7. If $(e_i \mid i \in I)$ is a basis of $V$, then $\alpha \in \operatorname{Mult}_k(V; W)$ is alternating if and only if

(i) $\alpha(e_{i_1}, \dots, e_{i_k}) = 0$ if some of the $e_{i_j}$ agree, and

(ii) $\alpha(e_{i_{\sigma(1)}}, \dots, e_{i_{\sigma(k)}}) = \operatorname{sgn}(\sigma) \cdot \alpha(e_{i_1}, \dots, e_{i_k})$ for all $\sigma \in S_k$ if the $e_{i_j}$ are pairwise different.

Proof. The conditions are obviously necessary. To see that they are also sufficient, we first treat the case $k = 2$ of a bilinear form $\beta$. For a vector $v = \sum_i v_i e_i$, assumptions (i)+(ii) imply
$$\beta(v, v) = \sum_i v_i^2 \underbrace{\beta(e_i, e_i)}_{=0} + \sum_{i < j} v_i v_j \underbrace{\bigl(\beta(e_i, e_j) + \beta(e_j, e_i)\bigr)}_{=0} = 0,$$
where we assume that $I$ is equipped with a total ordering "$\leq$". Thus, $\beta$ is alternating.

In the general $k$-linear case, it follows that the bilinear forms $\alpha(e_{i_1}, \dots, e_{i_{j-1}}, \cdot\,, \cdot\,, e_{i_{j+2}}, \dots, e_{i_k})$ for $1 \leq j < k$ are alternating, and consequently so are all bilinear forms $\alpha(v_1, \dots, v_{j-1}, \cdot\,, \cdot\,, v_{j+2}, \dots, v_k)$, since they are linear combinations of the former.
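The expansion (1.1) of §1.1 can be made concrete in coordinates. Below is a minimal numerical sketch (Python; assuming $V_i = \mathbb{R}^{n_i}$ with standard bases and $W = K = \mathbb{R}$; the helper name `multilinear_from_basis_values` is ours, not from the text) that builds the unique multilinear map from a prescribed table of basis values:

```python
import itertools
import numpy as np

def multilinear_from_basis_values(values):
    """Return the unique multilinear map mu with prescribed basis values
    values[j1, ..., jk] = mu(e_{j1}^(1), ..., e_{jk}^(k)), scalar-valued
    (W = R), evaluated via the expansion (1.1)."""
    k = values.ndim
    def mu(*vectors):
        assert len(vectors) == k
        total = 0.0
        # Sum over all index tuples (j1, ..., jk), weighting each basis
        # value by the product of the corresponding coordinates a_{i j_i}.
        for js in itertools.product(*(range(d) for d in values.shape)):
            coeff = 1.0
            for i, j in enumerate(js):
                coeff *= vectors[i][j]
            total += coeff * values[js]
        return total
    return mu

# The identity table on R^3 x R^3 yields the standard scalar product:
mu = multilinear_from_basis_values(np.eye(3))
u, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(mu(u, v))  # 32.0, the dot product of u and v
```

Choosing other value tables gives other bilinear forms (e.g. any symplectic form on $\mathbb{R}^2$ from an antisymmetric table), which is exactly the "values can be arbitrary" statement.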
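The inversion-counting definition of the sign in §1.2 and its multiplicativity can be checked by brute force; a small sketch (Python; permutations of $\{0, \dots, k-1\}$ in tuple form rather than $\{1, \dots, k\}$; the helper names are ours):

```python
import itertools

def sgn(pi):
    """Sign of a permutation pi of {0, ..., k-1} (tuple form):
    (-1)**(number of inversions), an inversion being a pair i < j
    with pi[i] > pi[j]."""
    inversions = sum(1 for i, j in itertools.combinations(range(len(pi)), 2)
                     if pi[i] > pi[j])
    return -1 if inversions % 2 else 1

def compose(pi, rho):
    """(pi . rho)(i) = pi[rho[i]]."""
    return tuple(pi[r] for r in rho)

# sgn is a group homomorphism S_4 -> {+1, -1}: check all 24 * 24 pairs.
for pi in itertools.permutations(range(4)):
    for rho in itertools.permutations(range(4)):
        assert sgn(compose(pi, rho)) == sgn(pi) * sgn(rho)

print(sgn((1, 0, 2, 3)))  # -1: a transposition is odd
```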
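The count of determining values for a symmetric $k$-linear form in §1.3 is a multiset count, which the binomial formula $\binom{\dim V + k - 1}{k}$ expresses. A quick check (Python; illustrative values $n = \dim V = 4$, $k = 3$, our choice):

```python
import itertools
from math import comb

# A symmetric k-linear form is determined by its values on ordered index
# tuples i1 <= ... <= ik (Section 1.3); these are the k-multisets of an
# n-element index set, counted by C(n + k - 1, k).
n, k = 4, 3
ordered = list(itertools.combinations_with_replacement(range(n), k))
print(len(ordered))        # 20
print(comb(n + k - 1, k))  # 20 = C(6, 3)
```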
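The determinant, viewed as a $k$-linear form in the rows of a $k \times k$ matrix, illustrates the conditions of Lemma 1.6 and the antisymmetry (1.4). A numerical sketch (Python with NumPy; $K = \mathbb{R}$, $k = 3$; tolerances because of floating point):

```python
import numpy as np

rng = np.random.default_rng(0)

def det_form(*rows):
    """The determinant as a 3-linear form in the rows of a 3x3 real matrix."""
    return float(np.linalg.det(np.stack(rows)))

u, v, w = (rng.standard_normal(3) for _ in range(3))

# Condition (ii) of Lemma 1.6: vanishing on a repeated argument.
assert abs(det_form(u, u, w)) < 1e-9
# Condition (iii): vanishing on linearly dependent arguments.
assert abs(det_form(u, v, 2*u - 3*v)) < 1e-9
# Antisymmetry (1.4): a transposition of arguments flips the sign.
assert abs(det_form(u, v, w) + det_form(v, u, w)) < 1e-9
```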
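In matrix terms the symmetric/alternating decomposition of the remark in §1.4 reads $B = \tfrac{1}{2}(B + B^{\mathsf T}) + \tfrac{1}{2}(B - B^{\mathsf T})$; here we represent a bilinear form on $\mathbb{R}^4$ by a Gram matrix $B$ via $\beta(u, v) = u^{\mathsf T} B v$, an identification not made explicit in the text. A sketch (Python with NumPy, $K = \mathbb{R}$):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))  # Gram matrix: beta(u, v) = u^T B v
S = (B + B.T) / 2                # symmetric part
A = (B - B.T) / 2                # alternating (skew-symmetric) part

assert np.allclose(B, S + A)     # beta = beta_sym + beta_alt
assert np.allclose(S, S.T)
assert np.allclose(A, -A.T)
v = rng.standard_normal(4)
assert abs(float(v @ A @ v)) < 1e-12  # beta_alt(v, v) = 0: A is alternating
```

Over $\mathbb{R}$ (characteristic $0 \neq 2$) the division by $2$ is harmless; over a field of characteristic $2$ this decomposition is unavailable, as the remark explains.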