Introduction to Vectors and Tensors Volume 1

Total Pages: 16

File Type: PDF, Size: 1020 KB

INTRODUCTION TO VECTORS AND TENSORS
Linear and Multilinear Algebra
Volume 1

Ray M. Bowen, Mechanical Engineering, Texas A&M University, College Station, Texas
and
C.-C. Wang, Mathematical Sciences, Rice University, Houston, Texas

Copyright Ray M. Bowen and C.-C. Wang (ISBN 0-306-37508-7 (v. 1)). Updated 2010.

____________________________________________________________________________

PREFACE To Volume 1

This work represents our effort to present the basic concepts of vector and tensor analysis. Volume I begins with a brief discussion of algebraic structures followed by a rather detailed discussion of the algebra of vectors and tensors. Volume II begins with a discussion of Euclidean Manifolds, which leads to a development of the analytical and geometrical aspects of vector and tensor fields. We have not included a discussion of general differentiable manifolds. However, we have included a chapter on vector and tensor fields defined on Hypersurfaces in a Euclidean Manifold.

In preparing this two-volume work our intention is to present to Engineering and Science students a modern introduction to vectors and tensors. Traditional courses on applied mathematics have emphasized problem-solving techniques rather than the systematic development of concepts. As a result, it is possible for such courses to become terminal mathematics courses rather than courses which equip the student to develop his or her understanding further.

As Engineering students our courses on vectors and tensors were taught in the traditional way. We learned to identify vectors and tensors by formal transformation rules rather than by their common mathematical structure. The subject seemed to consist of nothing but a collection of mathematical manipulations of long equations decorated by a multitude of subscripts and superscripts.
Prior to our applying vector and tensor analysis to our research area of modern continuum mechanics, we almost had to relearn the subject. Therefore, one of our objectives in writing this book is to make available a modern introductory textbook suitable for the first in-depth exposure to vectors and tensors. Because of our interest in applications, it is our hope that this book will aid students in their efforts to use vectors and tensors in applied areas.

The presentation of the basic mathematical concepts is, we hope, as clear and brief as possible without being overly abstract. Since we have written an introductory text, no attempt has been made to include every possible topic. The topics we have included tend to reflect our personal bias. We make no claim that there are not other introductory topics which could have been included.

Basically the text was designed in order that each volume could be used in a one-semester course. We feel Volume I is suitable for an introductory linear algebra course of one semester. Given this course, or an equivalent, Volume II is suitable for a one-semester course on vector and tensor analysis. Many exercises are included in each volume. However, it is likely that teachers will wish to generate additional exercises.

Several times during the preparation of this book we taught a one-semester course to students with a very limited background in linear algebra and no background in tensor analysis. Typically these students were majoring in Engineering or one of the Physical Sciences. However, we occasionally had students from the Social Sciences. For this one-semester course, we covered the material in Chapters 0, 3, 4, 5, 7 and 8 from Volume I and selected topics from Chapters 9, 10, and 11 from Volume II. As to level, our classes have contained juniors, seniors and graduate students. These students seemed to experience no unusual difficulty with the material.
It is a pleasure to acknowledge our indebtedness to our students for their help and forbearance. Also, we wish to thank the U.S. National Science Foundation for its support during the preparation of this work. We especially wish to express our appreciation for the patience and understanding of our wives and children during the extended period this work was in preparation.

Houston, Texas
R.M.B.
C.-C.W.

______________________________________________________________________________

CONTENTS, Vol. 1: Linear and Multilinear Algebra

Contents of Volume 2 … vii

PART I. BASIC MATHEMATICS
Selected Readings for Part I … 2
CHAPTER 0. Elementary Matrix Theory … 3
CHAPTER 1. Sets, Relations, and Functions … 13
  Section 1. Sets and Set Algebra … 13
  Section 2. Ordered Pairs, Cartesian Products, and Relations … 16
  Section 3. Functions … 18
CHAPTER 2. Groups, Rings and Fields … 23
  Section 4. The Axioms for a Group … 23
  Section 5. Properties of a Group … 26
  Section 6. Group Homomorphisms … 29
  Section 7. Rings and Fields … 33

PART II. VECTOR AND TENSOR ALGEBRA
Selected Readings for Part II … 40
CHAPTER 3. Vector Spaces … 41
  Section 8. The Axioms for a Vector Space … 41
  Section 9. Linear Independence, Dimension and Basis … 46
  Section 10. Intersection, Sum and Direct Sum of Subspaces … 55
  Section 11. Factor Spaces … 59
  Section 12. Inner Product Spaces … 62
  Section 13. Orthogonal Bases and Orthogonal Complements … 69
  Section 14. Reciprocal Basis and Change of Basis … 75
CHAPTER 4. Linear Transformations … 85
  Section 15. Definition of a Linear Transformation … 85
  Section 16. Sums and Products of Linear Transformations … 93
  Section 17. Special Types of Linear Transformations … 97
  Section 18. The Adjoint of a Linear Transformation … 105
  Section 19. Component Formulas … 118
CHAPTER 5. Determinants and Matrices … 125
  Section 20. The Generalized Kronecker Deltas and the Summation Convention … 125
  Section 21. Determinants … 130
  Section 22. The Matrix of a Linear Transformation … 136
  Section 23. Solution of Systems of Linear Equations … 142
CHAPTER 6. Spectral Decompositions … 145
  Section 24. Direct Sum of Endomorphisms … 145
  Section 25. Eigenvectors and Eigenvalues … 148
  Section 26. The Characteristic Polynomial … 151
  Section 27. Spectral Decomposition for Hermitian Endomorphisms … 158
  Section 28. Illustrative Examples … 171
  Section 29. The Minimal Polynomial … 176
  Section 30. Spectral Decomposition for Arbitrary Endomorphisms … 182
CHAPTER 7. Tensor Algebra … 203
  Section 31. Linear Functions, the Dual Space … 203
  Section 32. The Second Dual Space, Canonical Isomorphisms … 213
  Section 33. Multilinear Functions, Tensors … 218
  Section 34. Contractions … 229
  Section 35. Tensors on Inner Product Spaces … 235
CHAPTER 8. Exterior Algebra … 247
  Section 36. Skew-Symmetric Tensors and Symmetric Tensors … 247
  Section 37. The Skew-Symmetric Operator … 250
  Section 38. The Wedge Product … 256
  Section 39. Product Bases and Strict Components … 263
  Section 40. Determinants and Orientations … 271
  Section 41. Duality … 280
  Section 42. Transformation to Contravariant Representation … 287

__________________________________________________________________________

CONTENTS, Vol. 2: Vector and Tensor Analysis

PART III. VECTOR AND TENSOR ANALYSIS
Selected Readings for Part III … 296
CHAPTER 9. Euclidean Manifolds … 297
  Section 43. Euclidean Point Spaces … 297
  Section 44. Coordinate Systems … 306
  Section 45. Transformation Rules for Vector and Tensor Fields … 324
  Section 46. Anholonomic and Physical Components of Tensors … 332
  Section 47. Christoffel Symbols and Covariant Differentiation … 339
  Section 48. Covariant Derivatives along Curves … 353
CHAPTER 10. Vector Fields and Differential Forms … 359
  Section 49. Lie Derivatives … 359
  Section 50. Frobenius Theorem … 368
  Section 51. Differential Forms and Exterior Derivative … 373
  Section 52. The Dual Form of Frobenius Theorem: the Poincaré Lemma … 381
  Section 53. Vector Fields in a Three-Dimensional Euclidean Manifold, I. Invariants and Intrinsic Equations … 389
  Section 54. Vector Fields in a Three-Dimensional Euclidean Manifold, II. Representations for Special Class of Vector Fields … 399
CHAPTER 11. Hypersurfaces in a Euclidean Manifold
  Section 55. Normal Vector, Tangent Plane, and Surface Metric … 407
  Section 56. Surface Covariant Derivatives … 416
  Section 57. Surface Geodesics and the Exponential Map … 425
  Section 58. Surface Curvature, I. The Formulas of Weingarten and Gauss … 433
  Section 59. Surface Curvature, II. The Riemann-Christoffel Tensor and the Ricci Identities … 443
  Section 60. Surface Curvature, III. The Equations of Gauss and Codazzi … 449
  Section 61. Surface Area, Minimal Surface … 454
  Section 62. Surfaces in a Three-Dimensional Euclidean Manifold … 457
CHAPTER 12. Elements of Classical Continuous Groups
  Section 63. The General Linear Group and Its Subgroups … 463
  Section 64. The Parallelism of Cartan … 469
  Section 65. One-Parameter Groups and the Exponential Map
Recommended publications
  • Tensors and Differential Forms on Vector Spaces
APPENDIX A. TENSORS AND DIFFERENTIAL FORMS ON VECTOR SPACES

Since only so much of the vast and growing field of differential forms and differentiable manifolds will be actually used in this survey, we shall attempt to briefly review how the calculus of exterior differential forms on vector spaces can serve as a replacement for the more conventional vector calculus, and then introduce only the most elementary notions regarding more topologically general differentiable manifolds, which will mostly be used as the basis for the discussion of Lie groups in the following appendix. Since exterior differential forms are special kinds of tensor fields – namely, completely antisymmetric covariant ones – and tensors are important to physics in their own right, we shall first review the basic notions concerning tensors and multilinear algebra. Presumably, the reader is familiar with linear algebra as it is usually taught to physicists, but for the "basis-free" approach to linear and multilinear algebra (which we shall not always adhere to fanatically), it would also help to have some familiarity with the more "abstract-algebraic" approach to linear algebra, such as one might learn from Hoffman and Kunze [1], for instance.

1. Tensor algebra. – A tensor algebra is a type of algebra in which multiplication takes the form of the tensor product.

a. Tensor product. – Although the tensor product of vector spaces can be given a rigorous definition in a more abstract-algebraic context (see Greub [2], for instance), for the purposes of actual calculations with tensors and tensor fields, it is usually sufficient to say that if V and W are vector spaces of dimensions n and m, respectively, then the tensor product V ⊗ W will be a vector space of dimension nm whose elements are finite linear combinations of elements of the form v ⊗ w, where v is a vector in V and w is a vector in W.
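As an illustrative sketch of the dimension count above (the vectors and sizes here are made up, not from the text): for concrete coordinate vectors, v ⊗ w can be realized as the Kronecker product, whose entries are all products v_i w_j, and dim(V ⊗ W) = nm.

```python
import numpy as np

# Sketch: if dim V = n and dim W = m, then dim(V (x) W) = n*m.
# For coordinate vectors, v (x) w can be realized as np.kron(v, w).
n, m = 3, 2
v = np.array([1.0, 2.0, 3.0])   # a vector in V
w = np.array([4.0, 5.0])        # a vector in W

vw = np.kron(v, w)              # realization of v (x) w
print(vw.shape)                 # (6,)  i.e. (n*m,)

# The tensor product is bilinear in each slot:
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 2.0, 0.0])
a = 3.0
lhs = np.kron(a * v1 + v2, w)
rhs = a * np.kron(v1, w) + np.kron(v2, w)
print(np.allclose(lhs, rhs))    # True
```

The same Kronecker-product realization extends to the "finite linear combinations" in the definition, since any element of V ⊗ W is a sum of such products.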
  • Equivalence of A-Approximate Continuity for Self-Adjoint Expansive Linear Maps
Equivalence of A-Approximate Continuity for Self-Adjoint Expansive Linear Maps
Sz. Gy. Révész (A. Rényi Institute of Mathematics, Hungarian Academy of Sciences, Budapest, P.O.B. 127, 1364 Hungary) and A. San Antolín (Departamento de Matemáticas, Universidad Autónoma de Madrid, 28049 Madrid, Spain)

Abstract. Let A : R^d → R^d, d ≥ 1, be an expansive linear map. The notion of A-approximate continuity was recently used to give a characterization of scaling functions in a multiresolution analysis (MRA). The definition of A-approximate continuity at a point x – or, equivalently, the definition of the family of sets having x as a point of A-density – depends on the expansive linear map A. The aim of the present paper is to characterize those self-adjoint expansive linear maps A_1, A_2 : R^d → R^d for which the respective concepts of A_µ-approximate continuity (µ = 1, 2) coincide. These we apply to analyze the equivalence among dilation matrices for a construction of systems of MRA. In particular, we give a full description of the equivalence class of the dyadic dilation matrix among all self-adjoint expansive maps. If the so-called "four exponentials conjecture" of algebraic number theory holds true, then a similar full description follows even for general self-adjoint expansive linear maps, too.

arXiv:math/0703349v2 [math.CA] 7 Sep 2007
Key words: A-approximate continuity, multiresolution analysis, point of A-density, self-adjoint expansive linear map.
Supported in part in the framework of the Hungarian-Spanish Scientific and Technological Governmental Cooperation, Project # E-38/04.
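The standing assumption above can be made concrete with a small sketch (my own names and matrices, not from the paper): a linear map A : R^d → R^d is expansive when every eigenvalue satisfies |λ| > 1, which the dyadic dilation matrix 2I satisfies and a unipotent shear does not.

```python
import numpy as np

# Hypothetical helper: a map is expansive iff all eigenvalues have modulus > 1.
def is_expansive(A):
    return bool(np.all(np.abs(np.linalg.eigvals(A)) > 1.0))

dyadic = 2.0 * np.eye(2)           # the dyadic dilation matrix 2I, eigenvalues 2, 2
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])     # eigenvalues 1, 1 -- not expansive

print(is_expansive(dyadic))  # True
print(is_expansive(shear))   # False
```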
  • Tensors: Notation
Tensors

Notation:
• contravariant, denoted by superscript A^i (took a vector and gave us a vector)
• covariant, denoted by subscript A_i (took a scalar and gave us a vector)

Review for vector calculus:
• Vectors
• Summation representation of an n-by-n array
• Gradient, Divergence and Curl
• Spherical Harmonics (maybe)

To avoid confusion, in Cartesian coordinates both types are the same, so we just opt for the subscript. Thus a vector x would be (x_1, x_2, x_3) ∈ R^3. As it turns out, in a Cartesian space and other rectilinear coordinate systems there is no difference between contravariant and covariant vectors. This will not be the case for other coordinate systems, such as curvilinear coordinate systems, or in 4 dimensions. These definitions are closely related to the Jacobian.

Motivation. If you tape a book shut and try to spin it in the air on each independent axis, you will notice that it spins fine on two axes but not on the third. That's the inertia tensor in your hands. Similar are the polarization tensor, index-of-refraction tensor and stress tensor. But tensors also show up in all sorts of places that don't connect to an anisotropic material property; in fact, even spherical harmonics are tensors. What are the similarities and differences between such a plethora of tensors?

Definitions for Tensors of Rank 2. Rank-2 tensors can be written as a square array. They have contravariant, mixed, and covariant forms. As we might expect, in Cartesian coordinates these are the same.

The mathematics of tensors is particularly useful for describing properties of substances which vary in direction. Tensor analysis extends deep into coordinate transformations of all kinds of spaces and coordinate systems.
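The contravariant/covariant distinction above can be sketched numerically (a hedged illustration with my own numbers): the metric g_ij lowers an index, A_i = g_ij A^j, so in Cartesian coordinates (g = I) the two component types coincide, while in polar coordinates (g = diag(1, r^2)) they do not.

```python
import numpy as np

# Contravariant components of a vector field at a point
A_contra = np.array([2.0, 3.0])          # (A^1, A^2)

g_cart = np.eye(2)                       # Cartesian metric
g_polar = np.diag([1.0, 4.0])            # polar metric at r = 2: diag(1, r^2)

# Lowering the index: A_i = g_ij A^j
A_cov_cart = g_cart @ A_contra           # identical to A_contra
A_cov_polar = g_polar @ A_contra         # differs from A_contra

print(np.allclose(A_cov_cart, A_contra))   # True
print(A_cov_polar)                          # [ 2. 12.]
```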
  • Lectures on Pure Spinors and Moment Maps
LECTURES ON PURE SPINORS AND MOMENT MAPS
E. MEINRENKEN

Contents: 1. Introduction; 2. Volume forms on conjugacy classes; 3. Clifford algebras and spinors; 4. Linear Dirac geometry; 5. The Cartan-Dirac structure; 6. Dirac structures; 7. Group-valued moment maps; References.

1. Introduction. This article is an expanded version of notes for my lectures at the summer school on 'Poisson geometry in mathematics and physics' at Keio University, Yokohama, June 5-9, 2006. The plan of these lectures was to give an elementary introduction to the theory of Dirac structures, with applications to Lie group valued moment maps. Special emphasis was given to the pure spinor approach to Dirac structures, developed in Alekseev-Xu [7] and Gualtieri [20]. (See [11, 12, 16] for the more standard approach.) The connection to moment maps was made in the work of Bursztyn-Crainic [10]. Parts of these lecture notes are based on a forthcoming joint paper [1] with Anton Alekseev and Henrique Bursztyn. I would like to thank the organizers of the school, Yoshi Maeda and Giuseppe Dito, for the opportunity to deliver these lectures, and for a greatly enjoyable meeting. I also thank Yvette Kosmann-Schwarzbach and the referee for a number of helpful comments.

2. Volume forms on conjugacy classes. We will begin with the following FACT, which at first sight may seem quite unrelated to the theme of these lectures: FACT. Let G be a simply connected semi-simple real Lie group. Then every conjugacy class in G carries a canonical invariant volume form.
  • 21. Orthonormal Bases
21. Orthonormal Bases

The canonical/standard basis e_1 = (1, 0, ..., 0)^T, e_2 = (0, 1, ..., 0)^T, ..., e_n = (0, 0, ..., 1)^T has many useful properties.
• Each of the standard basis vectors has unit length: ||e_i|| = sqrt(e_i · e_i) = sqrt(e_i^T e_i) = 1.
• The standard basis vectors are orthogonal (in other words, at right angles or perpendicular): e_i · e_j = e_i^T e_j = 0 when i ≠ j.
This is summarized by e_i^T e_j = δ_ij, where δ_ij is the Kronecker delta (δ_ij = 1 if i = j, and 0 if i ≠ j). Notice that the Kronecker delta gives the entries of the identity matrix.

Given column vectors v and w, we have seen that the dot product v · w is the same as the matrix multiplication v^T w. This is the inner product on R^n. We can also form the outer product vw^T, which gives a square matrix.

The outer product on the standard basis vectors is interesting. Set Π_1 = e_1 e_1^T (the matrix with a 1 in the (1, 1) position and zeros everywhere else), ..., Π_n = e_n e_n^T (the matrix with a 1 in the (n, n) position and zeros everywhere else). In short, Π_i is the diagonal square matrix with a 1 in the ith diagonal position and zeros everywhere else.
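The properties above are easy to check numerically; a small sketch for n = 3 (sizes mine): the columns of the identity matrix are the standard basis vectors, their inner products reproduce δ_ij, and the outer products Π_i sum to the identity.

```python
import numpy as np

n = 3
E = np.eye(n)                                   # columns are e_1, ..., e_n

# Inner products give the Kronecker delta, i.e. the identity matrix
print(np.allclose(E.T @ E, np.eye(n)))          # True

# Outer products Pi_i = e_i e_i^T: diagonal matrices resolving the identity
Pis = [np.outer(E[:, i], E[:, i]) for i in range(n)]
print(Pis[0])                                   # 1 in the (0, 0) slot, zeros elsewhere
print(np.allclose(sum(Pis), np.eye(n)))         # True
```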
  • Abstract Tensor Systems and Diagrammatic Representations
Abstract tensor systems and diagrammatic representations
Jānis Lazovskis
September 28, 2012

Abstract. The diagrammatic tensor calculus used by Roger Penrose (most notably in [7]) is introduced without a solid mathematical grounding. We will attempt to derive the tools of such a system, but in a broader setting. We show that Penrose's work comes from the diagrammisation of the symmetric algebra. Lie algebra representations and their extensions to knot theory are also discussed.

Contents: 1. Abstract tensors and derived structures (abstract tensor notation; some basic operations; tensor diagrams); 2. A diagrammised abstract tensor system (generation; tensor concepts); 3. Representations of algebras (the symmetric algebra; Lie algebras; the tensor algebra T(g); the symmetric Lie algebra S(g); the universal enveloping algebra U(g); the metrized Lie algebra: diagrammisation with a bilinear form, with a symmetric bilinear form, with a symmetric bilinear form and an orthonormal basis, and under ad-invariance; the universal enveloping algebra U(g) for a metrized Lie algebra g); 4. Ensuing connections; A. Appendix.

Note: This work relies heavily upon the text of Chapter 12 of a draft of "An Introduction to Quantum and Vassiliev Invariants of Knots," by David M.R. Jackson and Iain Moffatt, a yet-unpublished book at the time of writing.
  • Multivector Differentiation and Linear Algebra (17th Santaló Summer School)
Multivector differentiation and Linear Algebra
17th Santaló Summer School 2016, Santander
Joan Lasenby
Signal Processing Group, Engineering Department, Cambridge, UK and Trinity College Cambridge
[email protected], www-sigproc.eng.cam.ac.uk
23 August 2016

Overview: The Multivector Derivative; examples of differentiation wrt multivectors; Linear Algebra: matrices and tensors as linear functions mapping between elements of the algebra; Functional Differentiation: very briefly; Summary.

The Multivector Derivative. Recall our definition of the directional derivative in the a direction:

a·∇F(x) = lim_{ε→0} [F(x + εa) − F(x)] / ε

We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction', where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use ∗ to denote taking the scalar part, i.e. P ∗ Q ≡ ⟨PQ⟩. Then, provided A has the same grades as X, it makes sense to define:

A ∗ ∂_X F(X) = lim_{t→0} [F(X + tA) − F(X)] / t
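The ordinary vector case of the directional-derivative definition quoted above can be checked numerically; a hedged sketch with a function of my own choosing: for F(x) = x·x the limit a·∇F(x) equals 2 a·x, and a finite-difference quotient with small ε agrees with that.

```python
import numpy as np

def F(x):
    # Example scalar function F(x) = x . x; its gradient is 2x
    return float(x @ x)

def directional_derivative(F, x, a, eps=1e-6):
    # Finite-difference approximation of lim_{eps->0} [F(x + eps*a) - F(x)] / eps
    return (F(x + eps * a) - F(x)) / eps

x = np.array([1.0, 2.0, 3.0])
a = np.array([0.0, 1.0, 0.0])

numeric = directional_derivative(F, x, a)
exact = 2.0 * float(a @ x)          # a . grad F(x) = 2 a . x = 4
print(abs(numeric - exact) < 1e-3)  # True
```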
  • Tensor Products
Tensor products
Joel Kamnitzer
April 5, 2011

1. The definition. Let V, W, X be three vector spaces. A bilinear map from V × W to X is a function H : V × W → X such that
H(av1 + v2, w) = aH(v1, w) + H(v2, w) for v1, v2 ∈ V, w ∈ W, a ∈ F
H(v, aw1 + w2) = aH(v, w1) + H(v, w2) for v ∈ V, w1, w2 ∈ W, a ∈ F
Let V and W be vector spaces. A tensor product of V and W is a vector space V ⊗ W along with a bilinear map φ : V × W → V ⊗ W, such that for every vector space X and every bilinear map H : V × W → X, there exists a unique linear map T : V ⊗ W → X such that H = T ◦ φ. In other words, giving a linear map from V ⊗ W to X is the same thing as giving a bilinear map from V × W to X. If V ⊗ W is a tensor product, then we write v ⊗ w := φ(v, w). Note that there are two pieces of data in a tensor product: a vector space V ⊗ W and a bilinear map φ : V × W → V ⊗ W. Here are the main results about tensor products summarized in one theorem.
Theorem 1.1. (i) Any two tensor products of V, W are isomorphic. (ii) V, W has a tensor product. (iii) If v1, ..., vn is a basis for V and w1, ..., wm is a basis for W, then {vi ⊗ wj} (1 ≤ i ≤ n, 1 ≤ j ≤ m) is a basis for V ⊗ W.
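Part (iii) of the theorem above can be verified numerically in a small case (a sketch; the bases and dimensions are my own choices): realizing v ⊗ w as the Kronecker product, the nm products of basis vectors are linearly independent, hence a basis of the nm-dimensional space V ⊗ W.

```python
import numpy as np

n, m = 2, 3
V_basis = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]   # a (non-standard) basis of V
W_basis = [np.eye(3)[:, j] for j in range(m)]            # the standard basis of W

# All products v_i (x) w_j, realized as Kronecker products
products = [np.kron(v, w) for v in V_basis for w in W_basis]
M = np.column_stack(products)        # 6x6 matrix whose columns are the v_i (x) w_j

print(M.shape)                        # (6, 6)
print(np.linalg.matrix_rank(M))       # 6: full rank, so the products form a basis
```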
  • Multilinear Algebra and Applications July 15, 2014
Multilinear Algebra and Applications
July 15, 2014

Contents:
Chapter 1. Introduction
Chapter 2. Review of Linear Algebra (vector spaces and subspaces; bases; the Einstein convention: change of bases revisited, the Kronecker delta symbol; linear transformations: similar matrices; eigenbases)
Chapter 3. Multilinear Forms (linear forms: definition, examples, dual and dual basis, transformation of linear forms under a change of basis; bilinear forms: definition, examples and basis, tensor product of two linear forms on V, transformation of bilinear forms under a change of basis; multilinear forms; examples: a bilinear form, a trilinear form; basic operations on multilinear forms)
Chapter 4. Inner Products (definitions and first properties; correspondence between inner products and symmetric positive definite matrices; orthonormal basis; reciprocal basis: properties of reciprocal bases, change of basis from a basis to its reciprocal basis)
  • On Manifolds of Tensors of Fixed TT-Rank
ON MANIFOLDS OF TENSORS OF FIXED TT-RANK
SEBASTIAN HOLTZ, THORSTEN ROHWEDDER, AND REINHOLD SCHNEIDER

Abstract. Recently, the format of TT tensors [19, 38, 34, 39] has turned out to be a promising new format for the approximation of solutions of high dimensional problems. In this paper, we prove some new results for the TT representation of a tensor U ∈ R^(n1 × ... × nd) and for the manifold of tensors of TT-rank r. As a first result, we prove that the TT (or compression) ranks ri of a tensor U are unique and equal to the respective separation ranks of U if the components of the TT decomposition are required to fulfil a certain maximal rank condition. We then show that the set T of TT tensors of fixed rank r forms an embedded manifold in R^(n^d), therefore preserving the essential theoretical properties of the Tucker format, but often showing an improved scaling behaviour. Extending a similar approach for matrices [7], we introduce certain gauge conditions to obtain a unique representation of the tangent space T_U T of T and deduce a local parametrization of the TT manifold. The parametrisation of T_U T is often crucial for an algorithmic treatment of high-dimensional time-dependent PDEs and minimisation problems [33]. We conclude with remarks on those applications and present some numerical examples.

1. Introduction. The treatment of high-dimensional problems, typically of problems involving quantities from R^d for larger dimensions d, is still a challenging task for numerical approximation. This is owed to the principal problem that classical approaches for their treatment normally scale exponentially in the dimension d in both needed storage and computational time and thus quickly become computationally infeasible for sensible discretizations of problems of interest.
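The idea of a TT representation can be sketched on a toy tensor (this is an illustrative sequential-SVD construction under my own choices of sizes and tolerances, not the paper's algorithm): each reshape-then-SVD step separates one mode, the retained singular vectors become a TT core, and the ranks r_i arise as matrix ranks of the unfoldings.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 3, 4, 5
U = rng.standard_normal((n1, n2, n3))

# First unfolding: n1 x (n2*n3), then SVD
M1 = U.reshape(n1, n2 * n3)
Q1, S1, Vt1 = np.linalg.svd(M1, full_matrices=False)
r1 = int(np.sum(S1 > 1e-12))
G1 = Q1[:, :r1]                                  # first TT core, shape (n1, r1)

# Second unfolding of the remainder: (r1*n2) x n3, then SVD
rest = (np.diag(S1[:r1]) @ Vt1[:r1]).reshape(r1 * n2, n3)
Q2, S2, Vt2 = np.linalg.svd(rest, full_matrices=False)
r2 = int(np.sum(S2 > 1e-12))
G2 = Q2[:, :r2].reshape(r1, n2, r2)              # second TT core
G3 = np.diag(S2[:r2]) @ Vt2[:r2]                 # third core, shape (r2, n3)

# Contract the cores back together; full-rank truncation is exact
U_rec = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)
print(np.allclose(U, U_rec))                     # True
```

Truncating the singular values (rather than keeping all of them, as here) would give a low-TT-rank approximation instead of an exact reconstruction.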
  • CausalX: Causal Explanations and Block Multilinear Factor Analysis
To appear: Proc. of the 2020 25th International Conference on Pattern Recognition (ICPR 2020), Milan, Italy, Jan. 10-15, 2021.

CausalX: Causal eXplanations and Block Multilinear Factor Analysis
M. Alex O. Vasilescu (1, 2), Eric Kim (2, 1), Xiao S. Zeng (2)
[email protected], [email protected], [email protected]
1 Tensor Vision Technologies, Los Angeles, California
2 Department of Computer Science, University of California, Los Angeles

Abstract. By adhering to the dictum, "No causation without manipulation (treatment, intervention)", cause and effect data analysis represents changes in observed data in terms of changes in the causal factors. When causal factors are not amenable to active manipulation in the real world due to current technological limitations or ethical considerations, a counterfactual approach performs an intervention on the model of data formation. In the case of object representation or activity (temporal object) representation, varying object parts is generally unfeasible, whether they be spatial and/or temporal. Multilinear algebra, the algebra of higher order tensors, is a suitable and transparent framework for disentangling the causal factors of data formation. Learning part-based intrinsic causal factor representations in a multilinear framework requires applying a set of interventions on a part- …

I. INTRODUCTION: PROBLEM DEFINITION. Developing causal explanations for correct results or for failures from mathematical equations and data is important in developing a trustworthy artificial intelligence, and retaining public trust. Causal explanations are germane to the "right to an explanation" statute [15], [13], i.e., to data driven decisions, such as those that rely on images. Computer graphics and computer vision problems, also known as forward and inverse imaging problems, have been cast as causal inference questions [40], [42] consistent with Donald Rubin's quantitative definition of causality, where "A causes B" means "the effect of A is B", a measurable and experimentally repeatable quantity [14], [17].
  • Matrices and Tensors
APPENDIX: MATRICES AND TENSORS

A.1. INTRODUCTION AND RATIONALE
The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus represent these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. This is described in §A.11, along with the rationale for this approach.

A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES
An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns.