Multilinear Forms


Linear Algebra, Fall 2013
Multilinear Excursions

1 Multilinear forms

In these notes $V$ is a finite dimensional vector space of dimension $n \ge 1$ over a field $F$. We write $V'$ to denote the dual vector space. I will use a semi-standard notation, namely $T^k(V')$, to denote the space of all $k$-forms (to be defined in a second!) on $V$.

• A 0-form is just an element of $F$; that is, $T^0(V') = F$.
• A 1-form is a linear functional on $V$; that is, $T^1(V') = V'$.
• If $k \ge 2$, then a $k$-form is a multilinear map from $V^k$ to $F$. That is, $\omega \in T^k(V')$ iff $\omega : \underbrace{V \times \cdots \times V}_{k\ \text{times}} \to F$ and, for each choice of $x_1, \dots, x_{k-1} \in V$, the maps
$$x \mapsto \omega(x, x_1, \dots, x_{k-1}),\qquad x \mapsto \omega(x_1, x, x_2, \dots, x_{k-1}),\qquad \dots,\qquad x \mapsto \omega(x_1, \dots, x_{k-1}, x)$$
are linear.

One makes each $T^k(V')$ into a vector space over $F$ in the obvious way.

Let $\omega_1, \dots, \omega_k \in V'$. We define $\omega_1 \otimes \cdots \otimes \omega_k \in T^k(V')$ by
$$\omega_1 \otimes \cdots \otimes \omega_k(x_1, \dots, x_k) = \omega_1(x_1) \cdots \omega_k(x_k) = \prod_{j=1}^k [x_j, \omega_j].$$
So, for example, $\omega_1 \otimes \omega_2(x_1, x_2) = \omega_1(x_1)\omega_2(x_2)$, which can also be written as $\omega_1 \otimes \omega_2(x_1, x_2) = [x_1, \omega_1][x_2, \omega_2]$.

Some notation could now be of some use. Let $J_n = \{1, \dots, n\}$. We will fix a basis $\{\beta_1, \dots, \beta_n\}$ of $V'$; we may suppose that it is dual to a basis $\{e_1, \dots, e_n\}$ of $V$. If $i = (i_1, \dots, i_k) \in J_n^k$ with $k \ge 1$ we define
$$\beta^i = \beta_{i_1} \otimes \cdots \otimes \beta_{i_k}.$$
This makes sense also if $k = 1$: then $i \in J_n^1 = J_n$ simply means that $i$ is a number in the range $1$ to $n$, and $\beta^i = \beta_i$. For the sake of completeness we also define $J_n^0 = \{0\}$ and $\beta^0 = 1 \in F$.

Theorem 1 Let $k \in \mathbb{N}_0 = \mathbb{N} \cup \{0\}$. Then $\{\beta^i : i \in J_n^k\}$ is a basis of $T^k(V')$.

Proof. The cases $k = 0, 1$ are obvious; assume $k \ge 2$. Notice first that if $i = (i_1, \dots, i_k)$, $j = (j_1, \dots, j_k) \in J_n^k$, then
$$\beta^i(e_{j_1}, \dots, e_{j_k}) = \prod_{\nu=1}^k \delta_{i_\nu j_\nu} = \begin{cases} 1, & \text{if } i_\nu = j_\nu \text{ for } \nu = 1, \dots, k, \\ 0, & \text{otherwise.} \end{cases}$$

Linear independence. Let $c_i \in F$ for $i \in J_n^k$ and $\sum_{i \in J_n^k} c_i \beta^i = 0$. That means that as a map from $V^k$ to $F$ this form is $0$; in particular
$$0 = \sum_{i \in J_n^k} c_i \beta^i(e_{j_1}, \dots, e_{j_k}) = c_j$$
for all $j = (j_1, \dots, j_k) \in J_n^k$.

Spanning property. Assume $\omega \in T^k(V')$. If $x_1, \dots, x_k \in V$ we can write
$$x_j = \sum_{i=1}^n \xi_{ji} e_i$$
for elements $\xi_{ji} \in F$, $1 \le j \le k$, $1 \le i \le n$. Then multilinearity implies
$$\omega(x_1, \dots, x_k) = \sum_{i_1, \dots, i_k = 1}^n \xi_{1,i_1} \cdots \xi_{k,i_k}\, \omega(e_{i_1}, \dots, e_{i_k}). \qquad (1)$$
One sees that
$$\omega = \sum_{i = (i_1, \dots, i_k) \in J_n^k} \omega(e_{i_1}, \dots, e_{i_k})\, \beta^i.$$
In particular the dimension of $T^k(V')$ is $n^k$.

2 Alternating linear forms

We keep assuming $V$ is an $n$-dimensional vector space over the field $F$, with $\{\beta_1, \dots, \beta_n\}$ the basis of $V'$ dual to the basis $\{e_1, \dots, e_n\}$ of $V$.

Let $\omega \in T^k(V')$, $k \ge 2$, and assume $\sigma \in S_k$. We define $\sigma \cdot \omega : V^k \to F$ by
$$\sigma \cdot \omega(x_1, \dots, x_k) = \omega(x_{\sigma(1)}, \dots, x_{\sigma(k)}).$$
It is easy to see (one just has to look) that $\omega \mapsto \sigma \cdot \omega$ is a linear map from $T^k(V')$ onto itself.

Exercise 1 Prove: If $\sigma, \tau \in S_k$, then
$$\sigma \cdot (\tau \cdot \omega) = (\tau\sigma) \cdot \omega \qquad (2)$$
for all $\omega \in T^k(V')$.

Definition 1 Let $k \ge 2$. We say that a $k$-form $\omega$ is symmetric iff $\sigma \cdot \omega = \omega$ for all $\sigma \in S_k$. It is skew symmetric iff $\sigma \cdot \omega = \epsilon(\sigma)\omega$ for all $\sigma \in S_k$, where $\epsilon(\sigma)$ is the sign of $\sigma$: $\epsilon(\sigma) = 1$ if $\sigma$ is even, $-1$ if $\sigma$ is odd.

It is an immediate consequence of Exercise 1 (and the fact that all permutations are products of transpositions) that a $k$-form $\omega$ is symmetric (respectively, skew symmetric) if and only if $\sigma \cdot \omega = \omega$ (respectively, $\sigma \cdot \omega = -\omega$) for all transpositions $\sigma \in S_k$.

Definition 2 Let $k \ge 2$. A $k$-form $\omega$ is alternating iff $\omega(x_1, \dots, x_k) = 0$ whenever $x_1, \dots, x_k \in V$ and there exist $1 \le i \ne j \le k$ such that $x_i = x_j$.

Here is the silly situation one encounters when working with fields of characteristic 2. As someone who works in analysis, I have had no encounter with these fields outside of algebra courses. But finite fields in general play an important role in algebra, and fields of characteristic 2 play important roles in coding theory and cryptography, so we must either include them or point out why we exclude them. Suppose $F$ has characteristic 2 and $V$ is a vector space over $F$.
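Before pursuing characteristic 2, here is a quick computational sanity check of Theorem 1 and formula (1). It is a minimal sketch, not part of the notes: vectors are coordinate lists, 1-forms act on coordinates, and all the names (`tensor`, `beta`, `omega`, the matrix `M`) are mine, chosen for illustration only.

```python
import itertools
from math import prod

n = 3                                                            # dim V
e = [[1 if c == r else 0 for c in range(n)] for r in range(n)]   # basis e_1, ..., e_n
beta = [lambda x, i=i: x[i] for i in range(n)]                   # dual basis beta_1, ..., beta_n

def tensor(omegas):
    """omega_1 x ... x omega_k, acting by (x_1,...,x_k) -> omega_1(x_1)...omega_k(x_k)."""
    return lambda xs: prod(w(x) for w, x in zip(omegas, xs))

# beta^i(e_{j_1}, ..., e_{j_k}) = 1 iff the multi-indices i and j agree (here k = 2)
k = 2
for i in itertools.product(range(n), repeat=k):
    for j in itertools.product(range(n), repeat=k):
        val = tensor([beta[a] for a in i])([e[b] for b in j])
        assert val == (1 if i == j else 0)

# Spanning property: an arbitrary bilinear form omega(x, y) = x^T M y ...
M = [[1, 2, 0], [0, -1, 3], [5, 0, 2]]
def omega(xs):
    x, y = xs
    return sum(x[a] * M[a][b] * y[b] for a in range(n) for b in range(n))

# ... equals the combination sum_i omega(e_{i_1}, e_{i_2}) * beta^i, as in formula (1)
x, y = [2, -1, 3], [4, 0, -2]
recon = sum(omega([e[a], e[b]]) * tensor([beta[a], beta[b]])([x, y])
            for a in range(n) for b in range(n))
assert recon == omega([x, y])
```

The $n^k$ coefficients $\omega(e_{i_1}, \dots, e_{i_k})$ determine the form completely, matching $\dim T^k(V') = n^k$. Now back to the standing supposition that $F$ has characteristic 2.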
If $x \in V$, then $x + x = 1 \cdot x + 1 \cdot x = (1 + 1)x = 0 \cdot x = 0$; that is, in such a vector space $x + x = 0$ for all $x$ in the space. Another way of phrasing this is $-x = x$ for all $x$ in the space. In conclusion, all $k$-forms are symmetric and all are skew-symmetric; there is no difference between the two. But not all $k$-forms are alternating. The relation between the two concepts is given by the following simple lemma.

Lemma 2 Let $k \ge 2$ and let $\omega \in T^k(V')$. If $\omega$ is alternating, then $\omega$ is skew symmetric. The converse is also true if the characteristic of $F$ is different from 2.

Proof. Assume $\omega$ is alternating and let $\sigma = (ij) \in S_k$ (the transposition exchanging $i, j$, leaving all other numbers fixed). Assume, as we may, that $i < j$. I am going to assume a bit more to avoid too messy notation; it should be clear that what I do works in general. That is, I will assume that $i = 1$, $j = 2$. Let $x_1, \dots, x_k \in V$. Then, by multilinearity,
$$\begin{aligned} 0 &= \omega(x_1 + x_2, x_1 + x_2, x_3, \dots, x_k) \\ &= \omega(x_1, x_1, x_3, \dots, x_k) + \omega(x_1, x_2, x_3, \dots, x_k) + \omega(x_2, x_1, x_3, \dots, x_k) + \omega(x_2, x_2, x_3, \dots, x_k) \\ &= 0 + \omega(x_1, x_2, x_3, \dots, x_k) + \omega(x_2, x_1, x_3, \dots, x_k) + 0; \end{aligned}$$
that is, $\omega(x_1, x_2, x_3, \dots, x_k) = -\omega(x_2, x_1, x_3, \dots, x_k)$. For general $i < j$ one uses the same idea. One applies $\omega$ to the vectors $y_1, \dots, y_k$ where $y_\ell = x_\ell$ if $\ell \ne i, j$ and $y_i = y_j = x_i + x_j$. By multilinearity one gets $\sigma \cdot \omega(x_1, \dots, x_k) = -\omega(x_1, \dots, x_k)$.

Conversely, assume that $\omega$ is skew symmetric. If $x_1, \dots, x_k \in V$ and $x_i = x_j$ for some $j \ne i$, then, taking $\sigma = (ij)$, it is clear that for every $k$-form one has $\omega(x_1, \dots, x_k) = \sigma \cdot \omega(x_1, \dots, x_k)$. Since $\omega$ is skew symmetric, $\sigma \cdot \omega(x_1, \dots, x_k) = -\omega(x_1, \dots, x_k)$, hence $\omega(x_1, \dots, x_k) = -\omega(x_1, \dots, x_k)$. If the characteristic of the field is 2, this implies nothing. But if it is different from 2, it implies $\omega(x_1, \dots, x_k) = 0$. In fact, if $F$ is a field of characteristic different from 2, then $2 = 1 + 1 \in F$, $2 \ne 0$, so that $2^{-1}$ exists in $F$.
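The determinant on $F^3$ is the prototypical alternating 3-form, and Lemma 2 can be checked concretely for it. The sketch below (plain Python over the integers, hence characteristic 0; the function names are mine, not from the notes) verifies that a repeated argument gives 0 and that every permutation of the arguments then scales the value by the sign, exactly as the lemma predicts.

```python
from itertools import permutations

def sign(p):
    """Parity of a permutation of {0, ..., len(p)-1}, given as a tuple."""
    s, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:          # sort by transpositions, flipping the sign each time
            j = q[i]
            q[i], q[j] = q[j], q[i]
            s = -s
    return s

def det3(x, y, z):
    """The alternating 3-form det on F^3, via the Leibniz formula."""
    m = [x, y, z]
    return sum(sign(p) * m[0][p[0]] * m[1][p[1]] * m[2][p[2]]
               for p in permutations(range(3)))

x, y, z = [1, 2, 0], [0, 1, 3], [2, -1, 1]
# alternating: a repeated argument kills the form
assert det3(x, x, z) == 0
# hence skew symmetric: sigma . omega = eps(sigma) omega for every permutation sigma
args = (x, y, z)
for p in permutations(range(3)):
    assert det3(*(args[i] for i in p)) == sign(p) * det3(x, y, z)
```

The converse direction of the lemma used the invertibility of 2 in any field of characteristic different from 2.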
If $W$ is a vector space over such a field, and if $x \in W$ and $x + x = 0$, then $0 = 1 \cdot x + 1 \cdot x = (1 + 1)x = 2 \cdot x$, and we can multiply by $2^{-1}$ to get $x = 0$.

Definition 3 (or notation) If $k \ge 2$, the subset of all alternating $k$-forms is denoted by $\Lambda^k(V')$. We supplement this by defining $\Lambda^1(V') = V'$ and $\Lambda^0(V') = F$. It is easy to see that $\Lambda^k(V')$ is a subspace of $T^k(V')$.

In a previous version of these notes I had an alternation map that allowed one to proceed in a very elegant fashion. Unfortunately, that map doesn't work for fields of characteristic $p \ne 0$. That is, it doesn't work well. So let us bite the bullet and proceed in a generally valid way.

Up to a point, I do want to use an analog of this alternation map. Let $\omega_1, \dots, \omega_k \in V'$ with $k \ge 2$. I will define $\omega_1 \wedge \cdots \wedge \omega_k$ by
$$\omega_1 \wedge \cdots \wedge \omega_k = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \sigma \cdot (\omega_1 \otimes \cdots \otimes \omega_k).$$
Notice that
$$\sigma \cdot (\omega_1 \otimes \cdots \otimes \omega_k)(x_1, \dots, x_k) = \omega_1 \otimes \cdots \otimes \omega_k(x_{\sigma(1)}, \dots, x_{\sigma(k)}) = \prod_{i=1}^k \omega_i(x_{\sigma(i)}) = \prod_{i=1}^k \omega_{\sigma^{-1}(i)}(x_i) = \omega_{\sigma^{-1}(1)} \otimes \cdots \otimes \omega_{\sigma^{-1}(k)}(x_1, \dots, x_k);$$
that is,
$$\sigma \cdot (\omega_1 \otimes \cdots \otimes \omega_k) = \omega_{\sigma^{-1}(1)} \otimes \cdots \otimes \omega_{\sigma^{-1}(k)}.$$
Notice also that as $\sigma$ ranges through $S_k$, so does $\sigma^{-1}$, and $\epsilon(\sigma) = \epsilon(\sigma^{-1})$. So we can also define
$$\omega_1 \wedge \cdots \wedge \omega_k = \sum_{\sigma \in S_k} \epsilon(\sigma)\, \omega_{\sigma(1)} \otimes \cdots \otimes \omega_{\sigma(k)}. \qquad (3)$$

Let us see a few examples. If $k = 2$, then $\omega_1 \wedge \omega_2 = \omega_1 \otimes \omega_2 - \omega_2 \otimes \omega_1$, so
$$\omega_1 \wedge \omega_2(x_1, x_2) = \omega_1(x_1)\omega_2(x_2) - \omega_1(x_2)\omega_2(x_1).$$
If $k = 3$,
$$\omega_1 \wedge \omega_2 \wedge \omega_3 = \omega_1 \otimes \omega_2 \otimes \omega_3 + \omega_2 \otimes \omega_3 \otimes \omega_1 + \omega_3 \otimes \omega_1 \otimes \omega_2 - \omega_1 \otimes \omega_3 \otimes \omega_2 - \omega_3 \otimes \omega_2 \otimes \omega_1 - \omega_2 \otimes \omega_1 \otimes \omega_3.$$
We will need the following theorem.

Theorem 3 Let $k \ge 2$ and let $\omega_1, \dots, \omega_k \in V'$.
1. Let $\tau \in S_k$. Then $\omega_{\tau(1)} \wedge \cdots \wedge \omega_{\tau(k)} = \epsilon(\tau)\, \omega_1 \wedge \cdots \wedge \omega_k$.
2. If there exist $i, j$, $1 \le i \ne j \le k$, such that $\omega_i = \omega_j$, then $\omega_1 \wedge \cdots \wedge \omega_k = 0$.

Proof. 1. Let $v_i = \omega_{\tau(i)}$ for $i = 1, \dots, k$. Then
$$\omega_{\tau(1)} \wedge \cdots \wedge \omega_{\tau(k)} = \sum_{\sigma \in S_k} \epsilon(\sigma)\, v_{\sigma(1)} \otimes \cdots \otimes v_{\sigma(k)}.$$
Now $v_i = \omega_{\tau(i)}$ implies $v_{\sigma(i)} = \omega_{\tau(\sigma(i))} = \omega_{\tau\sigma(i)}$.
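Formula (3) and both statements of Theorem 3 can be checked numerically for covectors on $\mathbb{R}^3$. The following is a minimal Python sketch, not part of the notes: covectors act by the dot product, and the wedge is computed literally as the signed sum over $S_k$ from (3).

```python
from itertools import permutations
from math import prod

def sign(p):
    """Parity of a permutation of {0, ..., len(p)-1}, given as a tuple."""
    s, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            s = -s
    return s

def covector(w):
    """A 1-form on R^n acting by the dot product with the coefficient list w."""
    return lambda x: sum(a * b for a, b in zip(w, x))

def wedge(omegas):
    """Formula (3): (w_1 ^ ... ^ w_k)(x_1,...,x_k) = sum_s eps(s) prod_i w_{s(i)}(x_i)."""
    k = len(omegas)
    return lambda xs: sum(sign(p) * prod(omegas[p[i]](xs[i]) for i in range(k))
                          for p in permutations(range(k)))

w1, w2, w3 = covector([1, 0, 2]), covector([0, 3, -1]), covector([4, 1, 1])
x1, x2, x3 = [1, 2, 3], [0, 1, 0], [2, 0, 1]

# the k = 2 example: (w1 ^ w2)(x1, x2) = w1(x1) w2(x2) - w1(x2) w2(x1)
assert wedge([w1, w2])([x1, x2]) == w1(x1) * w2(x2) - w1(x2) * w2(x1)
# Theorem 3, part 1: permuting the factors multiplies the wedge by the sign
assert wedge([w2, w1, w3])([x1, x2, x3]) == -wedge([w1, w2, w3])([x1, x2, x3])
# Theorem 3, part 2: a repeated factor makes the wedge vanish
assert wedge([w1, w1, w3])([x1, x2, x3]) == 0
```

Evaluated on vectors, $\omega_1 \wedge \cdots \wedge \omega_k(x_1, \dots, x_k)$ is the Leibniz expansion of $\det[\omega_i(x_j)]$, which is what the sum over $S_k$ above computes.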