Appendix A: Multilinear Algebra and Index Notation

Contents

A.1  Vector spaces
A.2  Bases, indices and the summation convention
A.3  Dual spaces
A.4  Inner products
A.5  Direct sums
A.6  Tensors and multilinear maps
A.7  The tensor product
A.8  Symmetric and exterior algebras
A.9  Duality and the Hodge star
A.10 Tensors on manifolds

If linear algebra is the study of vector spaces and linear maps, then multilinear algebra is the study of tensor products and the natural generalizations of linear maps that arise from this construction. Such concepts are extremely useful in differential geometry but are essentially algebraic rather than geometric; we shall thus introduce them in this appendix using only algebraic notions. We'll see finally in §A.10 how to apply them to tangent spaces on manifolds and thus recover the usual formalism of tensor fields and differential forms. Along the way, we will explain the conventions of "upper" and "lower" index notation and the Einstein summation convention, which are standard among physicists but less familiar in general to mathematicians.

A.1 Vector spaces and linear maps

We assume the reader is somewhat familiar with linear algebra, so at least most of this section should be review; its main purpose is to establish notation that is used in the rest of the notes, as well as to clarify the relationship between real and complex vector spaces.

Throughout this appendix, let $\mathbb{F}$ denote either of the fields $\mathbb{R}$ or $\mathbb{C}$; we will refer to elements of this field as scalars. Recall that a vector space over $\mathbb{F}$ (or simply a real/complex vector space) is a set $V$ together with two algebraic operations:

• (vector addition) $V \times V \to V : (v, w) \mapsto v + w$
• (scalar multiplication) $\mathbb{F} \times V \to V : (\lambda, v) \mapsto \lambda v$

One should always keep in mind the standard examples $\mathbb{F}^n$ for $n \geq 0$; as we will recall in a moment, every finite dimensional vector space is isomorphic to one of these. The operations are required to satisfy the following properties:

• (associativity) $(u + v) + w = u + (v + w)$.
• (commutativity) $v + w = w + v$.
• (additive identity) There exists a zero vector $0 \in V$ such that $0 + v = v$ for all $v \in V$.
• (additive inverse) For each $v \in V$ there is an inverse element $-v \in V$ such that $v + (-v) = 0$. (This is of course abbreviated $v - v = 0$.)
• (distributivity) For scalar multiplication, $(\lambda + \mu)v = \lambda v + \mu v$ and $\lambda(v + w) = \lambda v + \lambda w$.
• (scalar associativity) $\lambda(\mu v) = (\lambda\mu)v$.
• (scalar identity) $1v = v$ for all $v \in V$.

Observe that every complex vector space can also be considered a real vector space, though the reverse is not true. That is, in a complex vector space there is automatically a well defined notion of multiplication by real scalars, but in a real vector space one has no notion of "multiplication by $i$". As is also discussed in Chapter 2, such a notion can sometimes (though not always) be defined as an extra piece of structure on a real vector space.

For two vector spaces $V$ and $W$ over the same field $\mathbb{F}$, a map

    $A : V \to W : v \mapsto Av$

is called linear if it respects both vector addition and scalar multiplication, meaning it satisfies the relations $A(v + w) = Av + Aw$ and $A(\lambda v) = \lambda(Av)$ for all $v, w \in V$ and $\lambda \in \mathbb{F}$. Linear maps are also sometimes called vector space homomorphisms, and we therefore use the notation

    $\operatorname{Hom}(V, W) := \{ A : V \to W \mid A \text{ is linear} \}.$

The symbols $L(V, W)$ and $\mathcal{L}(V, W)$ are also quite common but are not used in these notes. When $\mathbb{F} = \mathbb{C}$, we may sometimes want to specify that we mean the set of real or complex linear maps by defining

    $\operatorname{Hom}_{\mathbb{R}}(V, W) := \{ A : V \to W \mid A \text{ is real linear} \},$
    $\operatorname{Hom}_{\mathbb{C}}(V, W) := \operatorname{Hom}(V, W).$

The first definition treats both $V$ and $W$ as real vector spaces, reducing the set of scalars from $\mathbb{C}$ to $\mathbb{R}$. The distinction is that a real linear map on a complex vector space need not satisfy $A(\lambda v) = \lambda(Av)$ for all $\lambda \in \mathbb{C}$, but rather for $\lambda \in \mathbb{R}$. Thus every complex linear map is also real linear, but the reverse is not true: there are many more real linear maps in general. An example is the operation of complex conjugation

    $\mathbb{C} \to \mathbb{C} : x + iy \mapsto \overline{x + iy} = x - iy.$

Indeed, we can consider $\mathbb{C}$ as a real vector space via the one-to-one correspondence

    $\mathbb{C} \to \mathbb{R}^2 : x + iy \mapsto (x, y).$

Then the map $z \mapsto \bar{z}$ is equivalent to the linear map $(x, y) \mapsto (x, -y)$ on $\mathbb{R}^2$; it is therefore real linear, but it does not respect multiplication by complex scalars in general, e.g. $\overline{iz} \neq i\bar{z}$. It does however have another nice property that deserves a name: for two complex vector spaces $V$ and $W$, a map $A : V \to W$ is called antilinear (or complex antilinear) if it is real linear and also satisfies

    $A(iv) = -i(Av).$

Equivalently, such maps satisfy $A(\lambda v) = \bar{\lambda}(Av)$ for all $\lambda \in \mathbb{C}$. The canonical example is complex conjugation in $n$ dimensions,

    $\mathbb{C}^n \to \mathbb{C}^n : (z_1, \ldots, z_n) \mapsto (\bar{z}_1, \ldots, \bar{z}_n),$

and one obtains many more examples by composing this conjugation with any complex linear map. We denote the set of complex antilinear maps from $V$ to $W$ by $\overline{\operatorname{Hom}}_{\mathbb{C}}(V, W)$.
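To make the real-linear/complex-linear distinction concrete, here is a short NumPy sketch. It is our own illustration rather than part of the original notes: it checks numerically that componentwise conjugation on $\mathbb{C}^n$ is additive and homogeneous over real scalars, fails complex homogeneity, and instead satisfies the antilinear relation $A(\lambda v) = \bar{\lambda}(Av)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def conj_map(v):
    """Componentwise complex conjugation, viewed as a map C^n -> C^n."""
    return np.conj(v)

v = rng.normal(size=4) + 1j * rng.normal(size=4)
w = rng.normal(size=4) + 1j * rng.normal(size=4)
lam = 2.0 + 3.0j   # a complex scalar
t = -1.7           # a real scalar

# Real linearity: additivity and homogeneity over real scalars hold.
print(np.allclose(conj_map(v + w), conj_map(v) + conj_map(w)))    # True
print(np.allclose(conj_map(t * v), t * conj_map(v)))              # True

# Complex linearity fails ...
print(np.allclose(conj_map(lam * v), lam * conj_map(v)))          # False

# ... but the antilinear relation A(lam*v) = conj(lam)*(Av) holds.
print(np.allclose(conj_map(lam * v), np.conj(lam) * conj_map(v))) # True
```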
When the domain and target space are the same, a linear map $V \to V$ is sometimes called a vector space endomorphism, and we therefore use the notation

    $\operatorname{End}(V) := \operatorname{Hom}(V, V),$

with corresponding definitions for $\operatorname{End}_{\mathbb{R}}(V)$, $\operatorname{End}_{\mathbb{C}}(V)$ and $\overline{\operatorname{End}}_{\mathbb{C}}(V)$. Observe that all these sets of linear maps are themselves also vector spaces in a natural way: simply define $(A + B)v := Av + Bv$ and $(\lambda A)v := \lambda(Av)$.

Given a vector space $V$, a subspace $V' \subset V$ is a subset which is closed under both vector addition and scalar multiplication, i.e. $v + w \in V'$ and $\lambda v \in V'$ for all $v, w \in V'$ and $\lambda \in \mathbb{F}$. Every linear map $A \in \operatorname{Hom}(V, W)$ gives rise to important subspaces of $V$ and $W$: the kernel

    $\ker A = \{ v \in V \mid Av = 0 \} \subset V$

and image

    $\operatorname{im} A = \{ w \in W \mid w = Av \text{ for some } v \in V \} \subset W.$

We say that $A \in \operatorname{Hom}(V, W)$ is injective (or one-to-one) if $Av = Aw$ always implies $v = w$, and surjective (or onto) if every $w \in W$ can be written as $Av$ for some $v \in V$. It is useful to recall the basic algebraic fact that $A$ is injective if and only if its kernel is the trivial subspace $\{0\} \subset V$. (Prove it!)

An isomorphism between $V$ and $W$ is a linear map $A \in \operatorname{Hom}(V, W)$ that is both injective and surjective: in this case it is invertible, i.e. there is another map $A^{-1} \in \operatorname{Hom}(W, V)$ so that the compositions $A^{-1}A$ and $AA^{-1}$ are the identity map on $V$ and $W$ respectively. Two vector spaces are isomorphic if there exists an isomorphism between them. When $V = W$, isomorphisms $V \to V$ are also called automorphisms, and the space of these is denoted by

    $\operatorname{Aut}(V) = \{ A \in \operatorname{End}(V) \mid A \text{ is invertible} \}.$

This is not a vector space, since the sum of two invertible maps need not be invertible. It is however a group, with the natural "multiplication" operation defined by composition of linear maps:

    $AB := A \circ B.$

As a special case, for $V = \mathbb{F}^n$ one has the general linear group $\operatorname{GL}(n, \mathbb{F}) := \operatorname{Aut}(\mathbb{F}^n)$. This and its subgroups are discussed in some detail in Appendix B.
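For matrices these subspaces are easy to explore numerically. The sketch below is our own illustration with hypothetical helper names: it computes the dimensions of the kernel and image of a matrix from its rank and applies the criterion above, namely that the map is injective exactly when its kernel is trivial.

```python
import numpy as np

def kernel_and_image_dims(A):
    """For A viewed as a linear map F^n -> F^m, return (dim ker A, dim im A)."""
    m, n = A.shape
    rank = np.linalg.matrix_rank(A)     # dim im A
    return n - rank, rank               # rank-nullity: dim ker A + dim im A = n

def is_injective(A):
    """Injective iff the kernel is the trivial subspace {0}."""
    return kernel_and_image_dims(A)[0] == 0

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])         # a map R^3 -> R^2
print(kernel_and_image_dims(A))         # (1, 2): one-dimensional kernel, so not injective
print(is_injective(A))                  # False

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 5.0]])              # a map R^2 -> R^3
print(is_injective(B))                  # True: trivial kernel, though not surjective onto R^3
```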
A.2 Bases, indices and the summation convention

A basis of a vector space $V$ is a set of vectors $e_{(1)}, \ldots, e_{(n)} \in V$ such that every $v \in V$ can be expressed as

    $v = \sum_{j=1}^{n} c_j e_{(j)}$

for some unique set of scalars $c_1, \ldots, c_n \in \mathbb{F}$. If a basis of $n$ vectors exists, then the vector space $V$ is called $n$-dimensional. Observe that the map

    $\mathbb{F}^n \to V : (c_1, \ldots, c_n) \mapsto \sum_{j=1}^{n} c_j e_{(j)}$

is then an isomorphism, so every $n$-dimensional vector space over $\mathbb{F}$ is isomorphic to $\mathbb{F}^n$. Not every vector space is $n$-dimensional for some $n \geq 0$: there are also infinite dimensional vector spaces, e.g. the set of continuous functions $f : [0, 1] \to \mathbb{R}$, with addition and scalar multiplication defined by $(f + g)(x) := f(x) + g(x)$ and $(\lambda f)(x) := \lambda f(x)$.

Now let $A \in \operatorname{Hom}(V, W)$ and choose bases $e_{(1)}, \ldots, e_{(n)}$ of $V$ and $f_{(1)}, \ldots, f_{(m)}$ of $W$. The fact that $f_{(1)}, \ldots, f_{(m)}$ is a basis of $W$ implies there exist unique scalars $A^i_j \in \mathbb{F}$ such that

    $A e_{(j)} = A^i_j f_{(i)},$

where again summation over $i$ is implied on the right hand side. Then for any $v = v^j e_{(j)} \in V$, we exploit the properties of linearity and find

    $(A^i_j\, a_{(i)}^{(j)})\, v = (A^i_j\, a_{(i)}^{(j)})\, v^k e_{(k)} = A^i_j v^k\, a_{(i)}^{(j)} e_{(k)} = (A^i_j v^j)\, f_{(i)} = v^j A^i_j f_{(i)} = v^j A e_{(j)} = A(v^j e_{(j)}) = Av,$    (A.2)

where $a_{(i)}^{(j)} \in \operatorname{Hom}(V, W)$ denotes the homomorphism that sends $e_{(j)}$ to $f_{(i)}$ and every other basis vector of $V$ to $0$. Thus $A = A^i_j\, a_{(i)}^{(j)}$, and we've also derived the standard formula for matrix-vector multiplication, $(Av)^i = A^i_j v^j$.
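The implied summation in $(Av)^i = A^i_j v^j$ is exactly what NumPy's einsum evaluates. The short sketch below is ours, for illustration only; it compares the summation-convention expression with an explicit loop over the repeated index.

```python
import numpy as np

rng = np.random.default_rng(2)

m, n = 3, 4
A = rng.normal(size=(m, n))   # components A^i_j of a linear map V -> W in chosen bases
v = rng.normal(size=n)        # components v^j of a vector in V

# Einstein summation convention: (Av)^i = A^i_j v^j, summing over the repeated index j.
Av_einsum = np.einsum('ij,j->i', A, v)

# The same sum written out explicitly.
Av_loop = np.array([sum(A[i, j] * v[j] for j in range(n)) for i in range(m)])

print(np.allclose(Av_einsum, Av_loop))   # True
print(np.allclose(Av_einsum, A @ v))     # True: it is just matrix-vector multiplication
```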