LECTURE 16: MULTILINEAR ALGEBRA
1. Tensors

Let $V$ be an $n$-dimensional vector space, and $V^*$ its dual space.

Definition 1.1. A function $T : V^k \to \mathbb{R}$ is a $k$-tensor on $V$ if it is multilinear, i.e. if for each $i$ and each $v_1, \cdots, v_{i-1}, v_{i+1}, \cdots, v_k \in V$, the map
$$T_i : V \to \mathbb{R}, \quad v_i \mapsto T(v_1, \cdots, v_i, \cdots, v_k)$$
is linear.

We will denote the space of $k$-tensors on $V$ by $\otimes^k V^*$. One can think of $\otimes^k V^*$ as a generalization of the dual space $V^*$, since $\otimes^1 V^* = V^*$. We will denote $\otimes^0 V^* = \mathbb{R}$. Similarly the space $\otimes^k V$ can be viewed as $\otimes^k (V^*)^*$.

One can "multiply" tensors in a simple way.

Definition 1.2. Let $T$ be a $k$-tensor on $V$ and $S$ an $l$-tensor on $V$. Their tensor product $T \otimes S$ is the $(k+l)$-tensor on $V$ defined by
$$(T \otimes S)(v_1, \cdots, v_{k+l}) = T(v_1, \cdots, v_k)\, S(v_{k+1}, \cdots, v_{k+l}).$$

Remark. Obviously the tensor product operation $\otimes : \otimes^k V^* \times \otimes^l V^* \to \otimes^{k+l} V^*$ is a bilinear map, and it is associative:
$$(T \otimes S) \otimes R = T \otimes (S \otimes R).$$
So it makes sense to talk about tensor products of many tensors. However, the tensor product operation is not commutative in general: $T \otimes S \ne S \otimes T$.

Example. For any $f^1, \cdots, f^k \in V^*$, the tensor $T = f^1 \otimes \cdots \otimes f^k$ is a $k$-tensor with
$$T(v_1, \cdots, v_k) = f^1(v_1) \cdots f^k(v_k).$$
Such a tensor is called a decomposable $k$-tensor. (Note that by multilinearity, $2e \otimes f = e \otimes 2f$.)

The following theorem clarifies how $\otimes^k V^*$ generalizes $V^*$:

Theorem 1.3. Let $\{e^1, \cdots, e^n\}$ be a basis of $V^*$. Then the set of $k$-tensors
$$\{e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k} \mid 1 \le i_1, \cdots, i_k \le n\}$$
forms a basis of $\otimes^k V^*$. In particular, $\dim \otimes^k V^* = n^k$.

Proof. We will denote by $\{e_1, \cdots, e_n\}$ the dual basis in $V$. For any multi-index $I = (i_1, \cdots, i_k)$, we will write $E^I = e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_k}$. Then the fact
$$E^I(e_{j_1}, \cdots, e_{j_k}) = \delta^{i_1, \cdots, i_k}_{j_1, \cdots, j_k}$$
implies that the tensors $E^I$ are linearly independent.
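The dimension count in Theorem 1.3 and the basic properties of $\otimes$ can be checked numerically. Below is a minimal sketch (not part of the lecture), assuming NumPy is available: a $k$-tensor is represented by the $k$-dimensional array of its coefficients $T_I = T(e_{i_1}, \cdots, e_{i_k})$, and the tensor product becomes the outer product of arrays.

```python
import numpy as np
from itertools import product

# A k-tensor on an n-dimensional V is determined by its n^k coefficients
# T_I = T(e_{i_1}, ..., e_{i_k}); we store them in a k-dimensional array.
n, k = 3, 2
E = np.eye(n)   # row i holds the coordinates of the basis covector e^{i+1}

# The decomposable basis tensors E^I = e^{i_1} ⊗ e^{i_2}, as outer products:
basis = [np.multiply.outer(E[i1], E[i2]).ravel()
         for i1, i2 in product(range(n), repeat=k)]

# Theorem 1.3: the E^I are linearly independent, so dim ⊗^k V* = n^k.
assert np.linalg.matrix_rank(np.array(basis)) == n ** k   # 9 for n = 3, k = 2

# The tensor product is bilinear and associative, but not commutative:
f, g = np.array([1., 2., 0.]), np.array([0., 1., 1.])
assert not np.allclose(np.multiply.outer(f, g), np.multiply.outer(g, f))
```

The choice of `n, k = 3, 2` is arbitrary; any small values illustrate the same point.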
Moreover, for any $T \in \otimes^k V^*$, if we let $T_I = T(e_{i_1}, \cdots, e_{i_k})$ and consider the $k$-tensor
$$S = T - \sum_I T_I E^I,$$
then $S(e_{j_1}, \cdots, e_{j_k}) = 0$ for any multi-index $J = (j_1, \cdots, j_k)$. It follows from multilinearity that $S \equiv 0$. In other words, $T = \sum_I T_I E^I$ is a linear combination of the $E^I$'s. $\square$

Remark. So any $k$-tensor is of the form $T = \sum_I T_I\, e^{i_1} \otimes \cdots \otimes e^{i_k}$. It is easy to check: a nonzero 2-tensor $T = \sum_{i,j} a_{ij}\, e^i \otimes e^j$ is decomposable if and only if the coefficient matrix $(a_{ij})$ has rank 1. However, in general it is very hard to tell whether a $k$-tensor, $k \ge 3$, is decomposable or not. This is related to quantum entanglement.

More generally, for any vector spaces $V_1, \cdots, V_k$ of dimensions $m_1, \cdots, m_k$ respectively, one can define the tensor product $V_1 \otimes \cdots \otimes V_k$ to be the $m_1 \cdots m_k$-dimensional linear space with basis
$$\{v^1_{i_1} \otimes \cdots \otimes v^k_{i_k} \mid 1 \le i_1 \le m_1, \cdots, 1 \le i_k \le m_k\},$$
where $\{v^j_1, \cdots, v^j_{m_j}\}$ is a basis of $V_j$. In particular, we will call
$$\otimes^{l,k} V := (\otimes^l V) \otimes (\otimes^k V^*)$$
the space of $(l, k)$-tensors on $V$. In other words, $T \in \otimes^{l,k} V$ if and only if
$$T = T(\beta^1, \cdots, \beta^l, v_1, \cdots, v_k)$$
is multilinear with respect to $\beta^i \in V^*$ and $v_i \in V$.

Remark. For any finite dimensional vector spaces $V$ and $W$, one has a natural linear isomorphism $V \otimes W^* \simeq L(W, V)$, where $L(W, V)$ is the set of all linear maps from $W$ to $V$.

Definition 1.4. Let $T$ be an $(l, k)$-tensor on $V$. For any $1 \le r \le l$ and $1 \le s \le k$, the $(r, s)$-contraction of $T$ is the $(l-1, k-1)$-tensor
$$C^r_s(T)(\beta^1, \cdots, \beta^{l-1}, v_1, \cdots, v_{k-1}) = \sum_i T(\beta^1, \cdots, \beta^{r-1}, e^i, \beta^r, \cdots, \beta^{l-1}, v_1, \cdots, v_{s-1}, e_i, v_s, \cdots, v_{k-1}),$$
where $\{e_1, \cdots, e_n\}$ is a basis of $V$, and $\{e^1, \cdots, e^n\}$ the dual basis. One can check that this definition is independent of the choice of the basis $\{e_i\}$ of $V$. In fact, $C^r_s(T)$ is the $(l-1, k-1)$-tensor obtained from $T$ by pairing the $r$-th vector in $T$ with the $s$-th co-vector in $T$.
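On coefficient arrays, a contraction is a sum over a paired pair of axes, which `numpy.einsum` expresses directly with a repeated index. The sketch below is an illustration with arbitrarily chosen vectors, not part of the lecture; it assumes the array layout "$l$ vector axes first, then $k$ covector axes".

```python
import numpy as np

# For a (1,1)-tensor v ⊗ α the (1,1)-contraction C(v ⊗ α) = α(v) is the
# trace of the coefficient matrix A[i,j] = v[i] * α[j]:
v     = np.array([1., 2., 3.])
alpha = np.array([4., 0., 1.])
assert np.isclose(np.trace(np.multiply.outer(v, alpha)), alpha @ v)  # α(v) = 7

def outer(*xs):
    # Iterated outer product: coefficients of x1 ⊗ x2 ⊗ ... ⊗ xm.
    T = xs[0]
    for x in xs[1:]:
        T = np.multiply.outer(T, x)
    return T

# For a (2,3)-tensor v ⊗ w ⊗ α ⊗ β ⊗ γ, the (1,2)-contraction pairs the
# first vector slot with the second covector slot and yields β(v) w ⊗ α ⊗ γ:
w, beta, gamma = np.array([0., 1., 1.]), np.array([2., 1., 0.]), np.array([1., 1., 0.])
C = np.einsum('ibcid->bcd', outer(v, w, alpha, beta, gamma))
assert np.allclose(C, (beta @ v) * outer(w, alpha, gamma))
```

The repeated label `i` in `'ibcid->bcd'` is what performs the sum $\sum_i T(\cdots, e^i, \cdots, e_i, \cdots)$ from Definition 1.4.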
For example, if $v, w \in V$ and $\alpha, \beta, \gamma \in V^*$, one has
$$C^1_2(v \otimes w \otimes \alpha \otimes \beta \otimes \gamma) = \beta(v)\, w \otimes \alpha \otimes \gamma.$$

2. Linear k-forms

Now let's fix a vector space $V$.

Definition 2.1. A $k$-tensor $T$ on $V$ is called symmetric if
$$T(v_1, \cdots, v_k) = T(v_{\sigma(1)}, \cdots, v_{\sigma(k)})$$
for all permutations $\sigma$ of $(1, 2, \cdots, k)$.

Example. An inner product on $V$ is a positive symmetric 2-tensor.

Definition 2.2. A $k$-tensor $T$ on $V$ is alternating (or a linear $k$-form) if it is skew-symmetric, i.e.
$$T(v_1, \cdots, v_i, \cdots, v_j, \cdots, v_k) = -T(v_1, \cdots, v_j, \cdots, v_i, \cdots, v_k)$$
for all $v_1, \cdots, v_k \in V$ and any $1 \le i \ne j \le k$.

We will denote the vector space of $k$-forms by $\Lambda^k V^*$. Note that $\Lambda^k V^*$ is a linear subspace of $\otimes^k V^*$, and $\Lambda^1 V^* = \otimes^1 V^* = V^*$. Again we set $\Lambda^0 V^* = \mathbb{R}$.

Example. $\det$ is an $n$-form on $\mathbb{R}^n$.

Recall that a permutation $\sigma \in S_k$ is called even or odd, depending on whether it is expressible as a product of an even or odd number of simple transpositions. For any $k$-tensor $T$ and any $\sigma \in S_k$, we define another $k$-tensor $T^\sigma$ by
$$T^\sigma(v_1, \cdots, v_k) = T(v_{\sigma(1)}, \cdots, v_{\sigma(k)}).$$
Clearly
- For every $k$-tensor $T$, $(T^\sigma)^\pi = T^{\sigma \circ \pi}$ for all $\sigma, \pi \in S_k$.
- A $k$-tensor $T$ is symmetric if and only if $T^\sigma = T$ for all $\sigma \in S_k$.
- A $k$-tensor $T$ is a $k$-form if and only if $T^\sigma = (-1)^\sigma T$ for all $\sigma \in S_k$, where $(-1)^\sigma = 1$ if $\sigma$ is even, and $(-1)^\sigma = -1$ if $\sigma$ is odd.

For any $k$-tensor $T$ on $V$, we consider the anti-symmetrization map
$$\mathrm{Alt}(T) = \frac{1}{k!} \sum_{\pi \in S_k} (-1)^\pi T^\pi.$$

Lemma 2.3. The map $\mathrm{Alt}$ is a projection from $\otimes^k V^*$ to $\Lambda^k V^*$, i.e. it satisfies
(1) For any $T \in \otimes^k V^*$, $\mathrm{Alt}(T) \in \Lambda^k V^*$.
(2) For any $T \in \Lambda^k V^*$, $\mathrm{Alt}(T) = T$.

Proof. (1) For any $T \in \otimes^k V^*$ and any $\sigma \in S_k$,
$$[\mathrm{Alt}(T)]^\sigma = \frac{1}{k!} \sum_{\pi \in S_k} (-1)^\pi (T^\pi)^\sigma = (-1)^\sigma \frac{1}{k!} \sum_{\pi \in S_k} (-1)^{\pi \circ \sigma} T^{\pi \circ \sigma} = (-1)^\sigma\, \mathrm{Alt}(T).$$
(2) If $T \in \Lambda^k V^*$, then each summand $(-1)^\pi T^\pi$ equals $T$. So $\mathrm{Alt}(T) = T$ since $|S_k| = k!$. $\square$

We will need

Lemma 2.4. Let $T, S, R$ be $k$-, $l$-, and $m$-forms respectively. Then
(1) $\mathrm{Alt}(T \otimes S) = (-1)^{kl}\, \mathrm{Alt}(S \otimes T)$.
(2) $\mathrm{Alt}(\mathrm{Alt}(T \otimes S) \otimes R) = \mathrm{Alt}(T \otimes S \otimes R) = \mathrm{Alt}(T \otimes \mathrm{Alt}(S \otimes R))$.

Proof. Exercise. $\square$

Now we can define a "product operation" for forms:

Definition 2.5. The wedge product of $T \in \Lambda^k V^*$ and $S \in \Lambda^l V^*$ is the $(k+l)$-form
$$T \wedge S = \frac{(k+l)!}{k!\, l!}\, \mathrm{Alt}(T \otimes S).$$

The wedge product operation satisfies

Proposition 2.6. The wedge product operation $\wedge : (\Lambda^k V^*) \times (\Lambda^l V^*) \to \Lambda^{k+l} V^*$ is
(1) Bilinear: $(T, S) \mapsto T \wedge S$ is linear in $T$ and in $S$.
(2) Anti-commutative: $T \wedge S = (-1)^{kl} S \wedge T$.
(3) Associative: $(T \wedge S) \wedge R = T \wedge (S \wedge R)$.

Proof. (1) follows from the definition. (2) follows from Lemma 2.4(1). (3) follows from the definition and Lemma 2.4(2). $\square$

So it makes sense to talk about wedge products of three or more forms. For example, we have
$$T \wedge S \wedge R = \frac{(k+l+m)!}{k!\, l!\, m!}\, \mathrm{Alt}(T \otimes S \otimes R).$$
One can easily extend this to wedge products of more than three forms. In particular, by definition we have: if $f^1, \cdots, f^k \in V^*$, then $f^1 \wedge \cdots \wedge f^k = k!\, \mathrm{Alt}(f^1 \otimes \cdots \otimes f^k)$. As a consequence,

Proposition 2.7. For any $f^1, \cdots, f^k \in V^*$ and $v_1, \cdots, v_k \in V$,
$$f^1 \wedge \cdots \wedge f^k (v_1, \cdots, v_k) = \det(f^i(v_j)).$$

Proof. We have
$$f^1 \wedge \cdots \wedge f^k (v_1, \cdots, v_k) = k!\, \mathrm{Alt}(f^1 \otimes \cdots \otimes f^k)(v_1, \cdots, v_k) = \sum_{\sigma \in S_k} (-1)^\sigma f^1(v_{\sigma(1)}) \cdots f^k(v_{\sigma(k)}) = \det\big((f^i(v_j))\big). \qquad \square$$

Now we are ready to prove

Theorem 2.8. Let $\{e^1, \cdots, e^n\}$ be a basis of $V^*$. Then the set of $k$-forms
$$\{e^{i_1} \wedge e^{i_2} \wedge \cdots \wedge e^{i_k} \mid 1 \le i_1 < i_2 < \cdots < i_k \le n\}$$
forms a basis of $\Lambda^k V^*$. In particular, $\dim \Lambda^k V^* = \binom{n}{k}$.

Proof. Again we denote by $\{e_1, \cdots, e_n\}$ the dual basis in $V$. For any multi-index $I = (i_1, \cdots, i_k)$ with $i_1 < \cdots < i_k$, we let $\Omega^I = e^{i_1} \wedge \cdots \wedge e^{i_k}$. Then for any multi-index $J = (j_1, \cdots, j_k)$ with $j_1 < \cdots < j_k$,
$$\Omega^I(e_{j_1}, \cdots, e_{j_k}) = \delta^{i_1, \cdots, i_k}_{j_1, \cdots, j_k}.$$
It follows that these $\Omega^I$'s are linearly independent.
Moreover, since any $T \in \Lambda^k V^*$ is a $k$-tensor, we can write $T = \sum_I T_I E^I$, where $I = (i_1, \cdots, i_k)$ runs over all $1 \le i_1, \cdots, i_k \le n$, and $E^I$ is as in the proof of Theorem 1.3.
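The anti-symmetrization map, the wedge product, and the statements of Lemma 2.3, Proposition 2.6(2), Proposition 2.7, and Theorem 2.8 can all be checked numerically on coefficient arrays. Below is a minimal sketch (not part of the lecture), assuming NumPy; `sign`, `alt`, and `wedge` are ad hoc helper names.

```python
import math
import numpy as np
from itertools import permutations

def sign(p):
    # Parity of a permutation of (0, ..., k-1), given as a tuple.
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def alt(T):
    # Alt(T) = (1/k!) Σ_π (-1)^π T^π, acting on a k-dimensional array:
    # T^π permutes the axes of the coefficient array.
    k = T.ndim
    return sum(sign(p) * np.transpose(T, p)
               for p in permutations(range(k))) / math.factorial(k)

def wedge(T, S):
    # T ∧ S = ((k+l)!/(k! l!)) Alt(T ⊗ S); the coefficient is C(k+l, k).
    return math.comb(T.ndim + S.ndim, T.ndim) * alt(np.multiply.outer(T, S))

f1, f2 = np.array([1., 2., 0.]), np.array([0., 1., 1.])
v1, v2 = np.array([1., 0., 1.]), np.array([0., 2., 0.])

# Lemma 2.3: Alt is a projection onto the alternating tensors.
T = np.multiply.outer(f1, f2)
assert np.allclose(alt(alt(T)), alt(T))

# Proposition 2.6(2): anti-commutativity, here with k = l = 1.
assert np.allclose(wedge(f1, f2), -wedge(f2, f1))

# Proposition 2.7: (f^1 ∧ f^2)(v1, v2) = det(f^i(v_j)).
lhs = np.einsum('ij,i,j->', wedge(f1, f2), v1, v2)
M = np.array([[f1 @ v1, f1 @ v2], [f2 @ v1, f2 @ v2]])
assert np.isclose(lhs, np.linalg.det(M))

# Theorem 2.8: Alt-projecting all n^2 basis 2-tensors e^i ⊗ e^j spans a
# space of dimension C(n, 2).
n = 4
E = np.eye(n)
projected = [alt(np.multiply.outer(E[i], E[j])).ravel()
             for i in range(n) for j in range(n)]
assert np.linalg.matrix_rank(np.array(projected)) == math.comb(n, 2)
```

The specific covectors and vectors are arbitrary; only the identities they verify come from the lecture.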