Multilinear Algebra, Differential Forms and Stokes' Theorem

Total pages: 16

File type: PDF, size: 1020 KB

Multilinear Algebra, Differential Forms and Stokes' Theorem
Yakov Eliashberg
April 2018

Contents

Part I. Multilinear Algebra
1 Linear and multilinear functions
  1.1 Dual space
  1.2 Canonical isomorphism between (V*)* and V
  1.3 The map A*
  1.4 Multilinear functions
  1.5 More about bilinear functions
  1.6 Symmetric bilinear functions and quadratic forms
2 Tensor and exterior products
  2.1 Tensor product
  2.2 Spaces of multilinear functions
  2.3 Symmetric and skew-symmetric tensors
  2.4 Symmetrization and anti-symmetrization
  2.5 Exterior product
  2.6 Spaces of symmetric and skew-symmetric tensors
  2.7 Operator A* on spaces of tensors
3 More about tensors
  3.1 Quotient space
  3.2 Tensor product of spaces
  3.3 Tensor square and bilinear functions
4 Orientation and Volume
  4.1 Orientation
  4.2 Orthogonal transformations
  4.3 Determinant and Volume
  4.4 Volume and Gram matrix
5 Dualities
  5.1 Duality between k-forms and (n − k)-forms on an n-dimensional Euclidean space V
  5.2 Euclidean structure on the space of exterior forms
  5.3 Contraction
6 Complex vector spaces
  6.1 Complex numbers
  6.2 Complex vector space
  6.3 Complex linear maps

Part II. Calculus of differential forms
7 Topological preliminaries
  7.1 Elements of topology in a vector space
  7.2 Everywhere and nowhere dense sets
  7.3 Compactness and connectedness
  7.4 Connected and path-connected components
  7.5 Continuous maps and functions
8 Vector fields and differential forms
  8.1 Differential and gradient
  8.2 Smooth functions
  8.3 Gradient vector field
  8.4 Vector fields
    8.4.1 Gradient vector field
  8.5 Differential forms
  8.6 Coordinate description of differential forms
  8.7 Smooth maps and their differentials
  8.8 Operator f*
  8.9 Coordinate description of the operator f*
  8.10 Examples
  8.11 Pfaffian equations
9 Exterior differential
  9.1 Coordinate definition of the exterior differential
  9.2 Properties of the operator d
  9.3 Curvilinear coordinate systems
  9.4 Geometric definition of the exterior differential
  9.5 More about vector fields
  9.6 Case n = 3. Summary of isomorphisms
  9.7 Gradient, curl and divergence of a vector field
  9.8 Example: expressing vector analysis operations in spherical coordinates
  9.9 Complex-valued differential k-forms
10 Integration of differential forms and functions
  10.1 Useful technical tools: partition of unity and cut-off functions
  10.2 One-dimensional Riemann integral for functions and differential 1-forms
  10.3 Integration of differential 1-forms along curves
  10.4 Integrals of closed and exact differential 1-forms
  10.5 Integration of functions over domains in high-dimensional spaces
  10.6 Fubini's Theorem
  10.7 Integration of n-forms over domains in n-dimensional space
  10.8 Manifolds and submanifolds
    10.8.1 Manifolds
    10.8.2 Gluing construction
    10.8.3 Examples of manifolds
    10.8.4 Submanifolds of an n-dimensional vector space
    10.8.5 Submanifolds with boundary
  10.9 Tangent spaces and differential
  10.10 Vector bundles and their homomorphisms
  10.11 Orientation
  10.12 Integration of differential k-forms over k-dimensional submanifolds

Part III. Stokes' theorem and its applications
11 Stokes' theorem
  11.1 Statement of Stokes' theorem
  11.2 Proof of Stokes' theorem
  11.3 Integration of functions over submanifolds
  11.4 Work and Flux
  11.5 Integral formulas of vector analysis
  11.6 Expressing div and curl in curvilinear coordinates
12 Topological applications of Stokes' formula
  12.1 Integration of closed and exact forms
  12.2 Approximation of continuous functions by smooth ones
  12.3 Homotopy
  12.4 Winding and linking numbers
  12.5 Properties of k-forms on k-dimensional manifolds

Part I. Multilinear Algebra

Chapter 1. Linear and multilinear functions

1.1 Dual space

Let V be a finite-dimensional real vector space. The set of all linear functions on V will be denoted by V*.

Proposition 1.1. V* is a vector space of the same dimension as V.

Proof. One can add linear functions and multiply them by real numbers:

  (l1 + l2)(x) = l1(x) + l2(x),   (λl)(x) = λ l(x)   for l, l1, l2 ∈ V*, x ∈ V, λ ∈ ℝ.

It is straightforward to check that all axioms of a vector space are satisfied for V*. Let us now check that dim V = dim V*.

Choose a basis v1, ..., vn of V. For any x ∈ V let (x1, ..., xn) be its coordinates in the basis v1, ..., vn. It is important to observe that each of the coordinates x1, ..., xn can be viewed as a linear function on V. Indeed,

1) the coordinates of the sum of two vectors are equal to the sum of the corresponding coordinates;

2) when a vector is multiplied by a number, its coordinates are being multiplied by the same number.

Thus x1, ..., xn are vectors from the space V*. Let us show now that they form a basis of V*. Indeed, any linear function l ∈ V* can be written in the form l(x) = a1 x1 + ... + an xn, which means that l is a linear combination of x1, ..., xn with coefficients a1, ..., an. Thus x1, ..., xn generate V*. On the other hand, if a1 x1 + ... + an xn is the 0-function, then all the coefficients must be equal to 0, i.e. the functions x1, ..., xn are linearly independent. Hence x1, ..., xn form a basis of V* and therefore dim V* = n = dim V.

The space V* is called dual to V, and the basis x1, ..., xn dual to v1, ..., vn. (It is sometimes customary to denote dual bases in V and V* by the same letters, using lower indices for V and upper indices for V*, e.g. v1, ..., vn and v^1, ..., v^n. However, in these notes we do not follow this convention.)

Exercise 1.2. Prove the converse: given any basis l1, ..., ln of V*, we can construct a dual basis w1, ..., wn of V so that the functions l1, ..., ln serve as coordinate functions for this basis.

Recall that vector spaces of the same dimension are isomorphic. For instance, if we fix bases in both spaces, we can map vectors of the first basis into the corresponding vectors of the second basis, and extend this map by linearity to an isomorphism between the spaces. In particular, sending a basis S = {v1, ..., vn} of a space V into the dual basis S* := {x1, ..., xn} of the dual space V*, we can establish an isomorphism iS : V → V*. However, this isomorphism is not canonical, i.e. it depends on the choice of the basis v1, ..., vn.

If V is a Euclidean space, i.e. a space with a scalar product ⟨x, y⟩, then this allows us to define another isomorphism V → V*, different from the one described above. This isomorphism associates with a vector v ∈ V a linear function lv(x) = ⟨v, x⟩. We will denote the corresponding map V → V* by D. Thus we have D(v) = lv for any vector v ∈ V.

Exercise 1.3. Prove that D : V → V* is an isomorphism. Show that D = iS for any orthonormal basis S.
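The notes contain no code, but the dual-basis construction above is easy to check numerically. The following sketch is not part of Eliashberg's text; the matrix names B and X and the sample numbers are made up for illustration. It represents the coordinate functions x1, ..., xn as the rows of the inverse of the matrix whose columns are v1, ..., vn, checks xi(vj) = δij, and illustrates why D = iS for an orthonormal basis.

import numpy as np

# Columns of B are the basis vectors v1, v2 of R^2 (deliberately not orthonormal).
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# The coordinate functions x1, x2 are the rows of B^{-1}:
# x_i(v) = (B^{-1} v)_i, hence x_i(v_j) = delta_ij.
X = np.linalg.inv(B)
print(np.round(X @ B, 10))          # identity matrix

# Any linear function l = a1*x1 + a2*x2 acts as the row vector a @ X,
# so x1, x2 span V*:
a = np.array([3.0, -1.0])
v = np.array([2.0, 5.0])
print((a @ X) @ v, a @ (X @ v))     # same value (-4.0)

# For the standard orthonormal basis S (B = I) the dual basis consists of the
# usual coordinates, and D(v) = <v, .> is represented by the row v itself,
# which is exactly i_S(v).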
The isomorphism D is independent of the choice of an orthonormal basis, but it is still not completely canonical: it depends on a choice of a scalar product. However, when talking about Euclidean spaces, this isomorphism is canonical.

Remark 1.4. The definition of the dual space V* also works in the infinite-dimensional case.

Exercise 1.5. Show that both maps iS and D are injective in the infinite-dimensional case as well. However, neither one of them is surjective if V is infinite-dimensional.

1.2 Canonical isomorphism between (V*)* and V

The space (V*)*, dual to the dual space V*, is canonically isomorphic in the finite-dimensional case to V. The word canonically means that the isomorphism is "god-given", i.e. it is independent of any additional choices. When we write f(x) we usually mean that the function f is fixed but the argument x can vary. However, we can also take.
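The preview breaks off mid-sentence here. For orientation only, here is a standard sketch of the construction this section is leading to (fix the vector and let the functional vary); it is not the notes' own wording:

\[
\Phi \colon V \to (V^*)^*, \qquad \Phi(v)(l) := l(v) \quad \text{for } v \in V,\ l \in V^*.
\]

The map Φ is linear and injective, and since dim (V*)* = dim V* = dim V in the finite-dimensional case, it is an isomorphism; unlike iS and D, it requires no choice of basis or scalar product.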
Recommended publications
  • Topology and Physics 2019 - Lecture 2
    Topology and Physics 2019 - lecture 2. Marcel Vonk, February 12, 2019. 2.1 Maxwell theory in differential form notation. Maxwell's theory of electrodynamics is a great example of the usefulness of differential forms. A nice reference on this topic, though somewhat outdated when it comes to notation, is [1]. For notational simplicity, we will work in units where the speed of light, the vacuum permittivity and the vacuum permeability are all equal to 1: c = ε₀ = μ₀ = 1. 2.1.1 The dual field strength. In three-dimensional space, Maxwell's electrodynamics describes the physics of the electric and magnetic fields E and B. These are three-dimensional vector fields, but the beauty of the theory becomes much more obvious if we (a) use a four-dimensional relativistic formulation, and (b) write it in terms of differential forms. For example, let us look at Maxwell's two source-free, homogeneous equations: ∇ · B = 0, ∂_t B + ∇ × E = 0. (2.1) That these equations have a relativistic flavor becomes clear if we write them out in components and organize the terms somewhat suggestively: 0 + ∂_x B^x + ∂_y B^y + ∂_z B^z = 0; −∂_t B^x + 0 − ∂_y E^z + ∂_z E^y = 0; −∂_t B^y + ∂_x E^z + 0 − ∂_z E^x = 0; −∂_t B^z − ∂_x E^y + ∂_y E^x + 0 = 0. (2.2) Note that we also multiplied the last three equations by −1 to clarify the structure. All in all, we see that we have four equations (one for each space-time coordinate) which each contain terms in which the four coordinate derivatives act. Therefore, we may be tempted to write our set of equations in more "relativistic" notation as ∂_μ F̂^{μν} = 0 (2.3) with F̂^{μν} the coordinates of an antisymmetric two-tensor (i.
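    A compact way to see where this excerpt is heading (a sketch in one common convention, which may differ in signs and index placement from the lecture's): package E and B into a single 2-form F on space-time; the two homogeneous equations (2.1) then become the single statement dF = 0.

    \[
    F = E_x\,dx\wedge dt + E_y\,dy\wedge dt + E_z\,dz\wedge dt
      + B_x\,dy\wedge dz + B_y\,dz\wedge dx + B_z\,dx\wedge dy,
    \]
    \[
    dF = (\partial_x B_x + \partial_y B_y + \partial_z B_z)\,dx\wedge dy\wedge dz
       + (\partial_t B_x + \partial_y E_z - \partial_z E_y)\,dt\wedge dy\wedge dz
       + (\partial_t B_y + \partial_z E_x - \partial_x E_z)\,dt\wedge dz\wedge dx
       + (\partial_t B_z + \partial_x E_y - \partial_y E_x)\,dt\wedge dx\wedge dy,
    \]

    so dF = 0 reproduces ∇ · B = 0 (the purely spatial component) and ∂_t B + ∇ × E = 0 (the three mixed components).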
  • Lp Theory of Differential Forms on Manifolds
    TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 347, Number 6, June 1995. L^p THEORY OF DIFFERENTIAL FORMS ON MANIFOLDS. CHAD SCOTT. Abstract. In this paper, we establish a Hodge-type decomposition for the L^p space of differential forms on closed (i.e., compact, oriented, smooth) Riemannian manifolds. Critical to the proof of this result is establishing an L^p estimate which contains, as a special case, the L^2 result referred to by Morrey as Gaffney's inequality. This inequality helps us show the equivalence of the usual definition of Sobolev space with a more geometric formulation which we provide in the case of differential forms on manifolds. We also prove the L^p boundedness of Green's operator which we use in developing the L^p theory of the Hodge decomposition. For the calculus of variations, we rigorously verify that the spaces of exact and coexact forms are closed in the L^p norm. For nonlinear analysis, we demonstrate the existence and uniqueness of a solution to the A-harmonic equation. 1. Introduction. This paper contributes primarily to the development of the L^p theory of differential forms on manifolds. The reader should be aware that for the duration of this paper, manifold will refer only to those which are Riemannian, compact, oriented, C∞ smooth and without boundary. For p = 2, the L^p theory is well understood and the L^2-Hodge decomposition can be found in [M]. However, in the case p ≠ 2, the L^p theory has yet to be fully developed. Recent applications of the L^p theory of differential forms on R^n to both quasiconformal mappings and nonlinear elasticity continue to motivate interest in this subject.
  • Notes on Differential Forms. Part 3: Tensors
    NOTES ON DIFFERENTIAL FORMS. PART 3: TENSORS. 1. What is a tensor? Let V be a finite-dimensional vector space. It could be R^n, it could be the tangent space to a manifold at a point, or it could just be an abstract vector space. A k-tensor is a map T : V × · · · × V → R (where there are k factors of V) that is linear in each factor. That is, for fixed v2, ..., vk, T(v1, v2, ..., vk−1, vk) is a linear function of v1, and for fixed v1, v3, ..., vk, T(v1, ..., vk) is a linear function of v2, and so on. The space of k-tensors on V is denoted T^k(V*). Examples: • If V = R^n, then the inner product P(v, w) = v · w is a 2-tensor. For fixed v it's linear in w, and for fixed w it's linear in v. • If V = R^n, D(v1, ..., vn) = det [v1 ··· vn] is an n-tensor. • If V = R^n, Three(v) = "the 3rd entry of v" is a 1-tensor. • A 0-tensor is just a number. It requires no inputs at all to generate an output. Note that the definition of tensor says nothing about how things behave when you rotate vectors or permute their order. The inner product P stays the same when you swap the two vectors, but the determinant D changes sign when you swap two vectors. Both are tensors. For a 1-tensor like Three, permuting the order of entries doesn't even make sense! Let {b1, ..., bn} be a basis for V.
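    A quick numerical check of the multilinearity claims in the two examples above (a sketch, not from the notes; the helper names D and P and the sample vectors are ad hoc):

    import numpy as np

    # D(v1, ..., vn) = det[v1 ... vn] is an n-tensor on R^n: linear in each column.
    def D(*vecs):
        return np.linalg.det(np.column_stack(vecs))

    v1, v2, w = np.array([1., 2.]), np.array([0., 3.]), np.array([4., -1.])
    a, b = 2.0, -0.5

    # Linearity in the first slot, with the second vector held fixed:
    print(np.isclose(D(a * v1 + b * w, v2), a * D(v1, v2) + b * D(w, v2)))  # True

    # The inner product P(v, w) = v . w is a 2-tensor: linear in each slot.
    P = lambda v, u: v @ u
    print(np.isclose(P(a * v1 + b * w, v2), a * P(v1, v2) + b * P(w, v2)))  # True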
  • Killing Spinor-Valued Forms and the Cone Construction
    ARCHIVUM MATHEMATICUM (BRNO), Tomus 52 (2016), 341–355. KILLING SPINOR-VALUED FORMS AND THE CONE CONSTRUCTION. Petr Somberg and Petr Zima. Abstract. On a pseudo-Riemannian manifold M we introduce a system of partial differential Killing type equations for spinor-valued differential forms, and study their basic properties. We discuss the relationship between solutions of Killing equations on M and parallel fields on the metric cone over M for spinor-valued forms. 1. Introduction. The subject of the present article is the systems of over-determined partial differential equations for spinor-valued differential forms, classified as a type of Killing equations. The solution spaces of these systems of PDE's are termed Killing spinor-valued differential forms. A central question in geometry asks for pseudo-Riemannian manifolds admitting non-trivial solutions of Killing type equations, namely how the properties of Killing spinor-valued forms relate to the underlying geometric structure for which they can occur. Killing spinor-valued forms are closely related to Killing spinors and Killing forms with Killing vectors as a special example. Killing spinors are both twistor spinors and eigenspinors for the Dirac operator, and real Killing spinors realize the limit case in the eigenvalue estimates for the Dirac operator on compact Riemannian spin manifolds of positive scalar curvature. There is a classification of complete simply connected Riemannian manifolds equipped with real Killing spinors, leading to the construction of manifolds with the exceptional holonomy groups G2 and Spin(7), see [8], [1]. Killing vector fields on a pseudo-Riemannian manifold are the infinitesimal generators of isometries, hence they influence its geometrical properties.
  • Multilinear Algebra and Applications July 15, 2014
    Multilinear Algebra and Applications, July 15, 2014. Contents:
    Chapter 1. Introduction
    Chapter 2. Review of Linear Algebra
      2.1. Vector Spaces and Subspaces
      2.2. Bases
      2.3. The Einstein convention
        2.3.1. Change of bases, revisited
        2.3.2. The Kronecker delta symbol
      2.4. Linear Transformations
        2.4.1. Similar matrices
      2.5. Eigenbases
    Chapter 3. Multilinear Forms
      3.1. Linear Forms
        3.1.1. Definition, Examples, Dual and Dual Basis
        3.1.2. Transformation of Linear Forms under a Change of Basis
      3.2. Bilinear Forms
        3.2.1. Definition, Examples and Basis
        3.2.2. Tensor product of two linear forms on V
        3.2.3. Transformation of Bilinear Forms under a Change of Basis
      3.3. Multilinear forms
      3.4. Examples
        3.4.1. A Bilinear Form
        3.4.2. A Trilinear Form
      3.5. Basic Operation on Multilinear Forms
    Chapter 4. Inner Products
      4.1. Definitions and First Properties
        4.1.1. Correspondence Between Inner Products and Symmetric Positive Definite Matrices
          4.1.1.1. From Inner Products to Symmetric Positive Definite Matrices
          4.1.1.2. From Symmetric Positive Definite Matrices to Inner Products
        4.1.2. Orthonormal Basis
      4.2. Reciprocal Basis
        4.2.1. Properties of Reciprocal Bases
        4.2.2. Change of basis from a basis to its reciprocal basis
        4.2.3.
  • CausalX: Causal Explanations and Block Multilinear Factor Analysis
    To appear: Proc. of the 2020 25th International Conference on Pattern Recognition (ICPR 2020), Milan, Italy, Jan. 10-15, 2021. CausalX: Causal eXplanations and Block Multilinear Factor Analysis. M. Alex O. Vasilescu (1,2), Eric Kim (2,1), Xiao S. Zeng (2); [email protected], [email protected], [email protected]; (1) Tensor Vision Technologies, Los Angeles, California; (2) Department of Computer Science, University of California, Los Angeles. Abstract—By adhering to the dictum, "No causation without manipulation (treatment, intervention)", cause and effect data analysis represents changes in observed data in terms of changes in the causal factors. When causal factors are not amenable for active manipulation in the real world due to current technological limitations or ethical considerations, a counterfactual approach performs an intervention on the model of data formation. In the case of object representation or activity (temporal object) representation, varying object parts is generally unfeasible whether they be spatial and/or temporal. Multilinear algebra, the algebra of higher order tensors, is a suitable and transparent framework for disentangling the causal factors of data formation. Learning a part-based intrinsic causal factor representations in a multilinear framework requires applying a set of interventions on a part- I. INTRODUCTION: PROBLEM DEFINITION. Developing causal explanations for correct results or for failures from mathematical equations and data is important in developing a trustworthy artificial intelligence, and retaining public trust. Causal explanations are germane to the "right to an explanation" statute [15], [13], i.e., to data driven decisions, such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions [40], [42] consistent with Donald Rubin's quantitative definition of causality, where "A causes B" means "the effect of A is B", a measurable and experimentally repeatable quantity [14], [17].
  • 28. Exterior Powers
    28. Exterior powers
      28.1 Desiderata
      28.2 Definitions, uniqueness, existence
      28.3 Some elementary facts
      28.4 Exterior powers ∧^i f of maps
      28.5 Exterior powers of free modules
      28.6 Determinants revisited
      28.7 Minors of matrices
      28.8 Uniqueness in the structure theorem
      28.9 Cartan's lemma
      28.10 Cayley-Hamilton Theorem
      28.11 Worked examples
    While many of the arguments here have analogues for tensor products, it is worthwhile to repeat these arguments with the relevant variations, both for practice, and to be sensitive to the differences. 1. Desiderata. Again, we review missing items in our development of linear algebra. We are missing a development of determinants of matrices whose entries may be in commutative rings, rather than fields. We would like an intrinsic definition of determinants of endomorphisms, rather than one that depends upon a choice of coordinates, even if we eventually prove that the determinant is independent of the coordinates. We anticipate that Artin's axiomatization of determinants of matrices should be mirrored in much of what we do here. We want a direct and natural proof of the Cayley-Hamilton theorem. Linear algebra over fields is insufficient, since the introduction of the indeterminate x in the definition of the characteristic polynomial takes us outside the class of vector spaces over fields. We want to give a conceptual proof for the uniqueness part of the structure theorem for finitely-generated modules over principal ideal domains. Multi-linear algebra over fields is surely insufficient for this. 2. Definitions, uniqueness, existence. Let R be a commutative ring with 1.
  • Multilinear Algebra in Data Analysis: Tensors, Symmetric Tensors, Nonnegative Tensors
    Multilinear Algebra in Data Analysis: tensors, symmetric tensors, nonnegative tensors. Lek-Heng Lim, Stanford University. Workshop on Algorithms for Modern Massive Datasets, Stanford, CA, June 21–24, 2006. Thanks: G. Carlsson, L. De Lathauwer, J.M. Landsberg, M. Mahoney, L. Qi, B. Sturmfels; Collaborators: P. Comon, V. de Silva, P. Drineas, G. Golub. References: http://www-sccm.stanford.edu/nf-publications-tech.html [CGLM2] P. Comon, G. Golub, L.-H. Lim, and B. Mourrain, "Symmetric tensors and symmetric tensor rank," SCCM Tech. Rep., 06-02, 2006. [CGLM1] P. Comon, B. Mourrain, L.-H. Lim, and G.H. Golub, "Genericity and rank deficiency of high order symmetric tensors," Proc. IEEE Int. Conference on Acoustics, Speech, and Signal Processing (ICASSP), 31 (2006), no. 3, pp. 125–128. [dSL] V. de Silva and L.-H. Lim, "Tensor rank and the ill-posedness of the best low-rank approximation problem," SCCM Tech. Rep., 06-06 (2006). [GL] G. Golub and L.-H. Lim, "Nonnegative decomposition and approximation of nonnegative matrices and tensors," SCCM Tech. Rep., 06-01 (2006), forthcoming. [L] L.-H. Lim, "Singular values and eigenvalues of tensors: a variational approach," Proc. IEEE Int. Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 1 (2005), pp. 129–132. What is not a tensor, I. • What is a vector? – Mathematician: An element of a vector space. – Physicist: "What kind of physical quantities can be represented by vectors?" Answer: Once a basis is chosen, an n-dimensional vector is something that is represented by n real numbers only if those real numbers transform themselves as expected (i.e.
  • Math 865, Topics in Riemannian Geometry
    Math 865, Topics in Riemannian Geometry. Jeff A. Viaclovsky, Fall 2007. Contents:
    1 Introduction
    2 Lecture 1: September 4, 2007
      2.1 Metrics, vectors, and one-forms
      2.2 The musical isomorphisms
      2.3 Inner product on tensor bundles
      2.4 Connections on vector bundles
      2.5 Covariant derivatives of tensor fields
      2.6 Gradient and Hessian
    3 Lecture 2: September 6, 2007
      3.1 Curvature in vector bundles
      3.2 Curvature in the tangent bundle
      3.3 Sectional curvature, Ricci tensor, and scalar curvature
    4 Lecture 3: September 11, 2007
      4.1 Differential Bianchi Identity
      4.2 Algebraic study of the curvature tensor
    5 Lecture 4: September 13, 2007
      5.1 Orthogonal decomposition of the curvature tensor
      5.2 The curvature operator
      5.3 Curvature in dimension three
    6 Lecture 5: September 18, 2007
      6.1 Covariant derivatives redux
      6.2 Commuting covariant derivatives
      6.3 Rough Laplacian and gradient
    7 Lecture 6: September 20, 2007
      7.1 Commuting Laplacian and Hessian
      7.2 An application to PDE
    8 Lecture 7: Tuesday, September 25
      8.1 Integration and adjoints
    9 Lecture 8: September 23, 2007
      9.1 Bochner and Weitzenböck formulas
    10 Lecture 9: October 2, 2007
      10.1 Manifolds with positive curvature operator
    11 Lecture 10: October 4, 2007
      11.1 Killing vector fields
      11.2 Isometries
    12 Lecture 11: October 9, 2007
      12.1 Linearization of Ricci tensor
  • Multilinear Algebra
    Appendix A. Multilinear Algebra. This chapter presents concepts from multilinear algebra based on the basic properties of finite dimensional vector spaces and linear maps. The primary aim of the chapter is to give a concise introduction to alternating tensors which are necessary to define differential forms on manifolds. Many of the stated definitions and propositions can be found in Lee [1], Chaps. 11, 12 and 14. Some definitions and propositions are complemented by short and simple examples. First, in Sect. A.1 dual and bidual vector spaces are discussed. Subsequently, in Sects. A.2–A.4, tensors and alternating tensors together with operations such as the tensor and wedge product are introduced. Lastly, in Sect. A.5, the concepts which are necessary to introduce the wedge product are summarized in eight steps. A.1 The Dual Space. Let V be a real vector space of finite dimension dim V = n. Let (e1, ..., en) be a basis of V. Then every v ∈ V can be uniquely represented as a linear combination v = v^i e_i, (A.1) where summation convention over repeated indices is applied. The coefficients v^i ∈ R are referred to as components of the vector v. Throughout the whole chapter, only finite dimensional real vector spaces, typically denoted by V, are treated. When not stated differently, summation convention is applied. Definition A.1 (Dual Space) The dual space of V is the set of real-valued linear functionals V* := {ω : V → R : ω linear}. (A.2) The elements of the dual space V* are called linear forms on V.
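    The summation convention in (A.1) can be made concrete with a short sketch (not part of the appendix; the basis matrix and components below are made-up numbers): the repeated index i is summed over.

    import numpy as np

    # Columns of E are the basis vectors e_1, e_2; comp holds the components v^i.
    E = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
    comp = np.array([3.0, -1.0])

    # v = v^i e_i : the repeated index i is summed, which einsum makes explicit.
    v = np.einsum('ji,i->j', E, comp)
    print(v)                            # [ 2. -2.]
    print(np.allclose(v, E @ comp))     # True: same as the matrix-vector product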
  • The Language of Differential Forms
    Appendix A. The Language of Differential Forms. This appendix—with the only exception of Sect. A.4.2—does not contain any new physical notions with respect to the previous chapters, but has the purpose of deriving and rewriting some of the previous results using a different language: the language of the so-called differential (or exterior) forms. Thanks to this language we can rewrite all equations in a more compact form, where all tensor indices referred to the diffeomorphisms of the curved space–time are "hidden" inside the variables, with great formal simplifications and benefits (especially in the context of the variational computations). The matter of this appendix is not intended to provide a complete nor a rigorous introduction to this formalism: it should be regarded only as a first, intuitive and operational approach to the calculus of differential forms (also called exterior calculus, or "Cartan calculus"). The main purpose is to quickly put the reader in the position of understanding, and also independently performing, various computations typical of a geometric model of gravity. The readers interested in a more rigorous discussion of differential forms are referred, for instance, to the book [22] of the bibliography. Let us finally notice that in this appendix we will follow the conventions introduced in Chap. 12, Sect. 12.1: Latin letters a, b, c, ... will denote Lorentz indices in the flat tangent space, Greek letters μ, ν, α, ... tensor indices in the curved manifold. For the matter fields we will always use natural units ℏ = c = 1. Also, unless otherwise stated, in the first three Sects.
  • Multilinear Algebra for Visual Geometry
    multilinear algebra for visual geometry. Alberto Ruiz, http://dis.um.es/~alberto, DIS, University of Murcia. MVIGRO workshop, Berlin, June 28th 2013. Contents: 1. linear spaces, 2. tensor diagrams, 3. Grassmann algebra, 4. multiview geometry, 5. tensor equations. Linear algebra is the most important computational tool: linear = solvable; nonlinear problems: linearization + iteration; multilinear problems are easy (≃ linear); visual geometry is multilinear (≃ solved). Linear spaces: objects are represented by arrays of coordinates (depending on chosen basis, with proper rules of transformation); linear transformations are extremely simple (just multiply and add), and invertible in closed form (in least squares sense); duality: linear transformations are also a linear space. Objects are linear combinations of other elements: vector x has coordinates x^i in basis B = {e_i}, x = Σ_i x^i e_i; with Einstein's convention, x = x^i e_i. Change of basis: [e'_1, e'_2] = [e_1, e_2] C with C = (c^i_j) = [[3, 1], [1, 2]], i.e. e'_j = c^i_j e_i, and x = x^1 e_1 + x^2 e_2 = [e_1, e_2] (x^1, x^2)^T = [e_1, e_2] C C^{-1} (x^1, x^2)^T = [e'_1, e'_2] (x'^1, x'^2)^T. Covariance and contravariance: if the basis transforms according to C, [e'_1, e'_2] = [e_1, e_2] C, then vector coordinates transform according to C^{-1}: (x'^1, x'^2)^T = C^{-1} (x^1, x^2)^T. In the previous example: (7, 4)^T = [[3, 1], [1, 2]] (2, 1)^T and (2, 1)^T = [[0.4, -0.2], [-0.2, 0.6]] (7, 4)^T.
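    The worked example above (change-of-basis matrix C, old coordinates (7, 4), new coordinates (2, 1)) can be verified directly. This sketch is not part of the slides; it just checks the contravariant rule x' = C^{-1} x with NumPy.

    import numpy as np

    # Change-of-basis matrix from the slides: new basis vectors in terms of the old ones.
    C = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

    x_old = np.array([7.0, 4.0])            # coordinates in the old basis

    # Coordinates are contravariant: they transform with the inverse of C.
    x_new = np.linalg.solve(C, x_old)
    print(x_new)                            # [2. 1.]

    print(np.round(np.linalg.inv(C), 2))    # [[ 0.4 -0.2] [-0.2  0.6]], as in the slides
    print(C @ x_new)                        # [7. 4.]: same vector, two coordinate systems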