Matrices and Tensors

APPENDIX: MATRICES AND TENSORS

A.1. INTRODUCTION AND RATIONALE

The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus to represent these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects of the text of the book, but a minor one. It is described in §A.11, along with the rationale for this approach.

A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES

An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns:

$$\mathbf{M} = \begin{bmatrix} M_{11} & M_{12} & \cdots & M_{1c} \\ M_{21} & M_{22} & \cdots & M_{2c} \\ \vdots & & & \vdots \\ M_{r1} & \cdots & \cdots & M_{rc} \end{bmatrix}. \tag{A1}$$

The typical element of the array, $M_{ij}$, is the element in the ith row and the jth column; in this text the elements $M_{ij}$ will be real numbers or functions whose values are real numbers. The transpose of the matrix M is denoted by M^T and is obtained from M by interchanging the rows and columns:

$$\mathbf{M}^T = \begin{bmatrix} M_{11} & M_{21} & \cdots & M_{r1} \\ M_{12} & M_{22} & \cdots & M_{r2} \\ \vdots & & & \vdots \\ M_{1c} & \cdots & \cdots & M_{rc} \end{bmatrix}. \tag{A2}$$

The operation of obtaining M^T from M is called transposition. In this text we are interested in special cases of the r-by-c matrix M. These special cases are those of the square matrix, r = c = n, the row matrix, r = 1, c = n, and the column matrix, r = n, c = 1. Further, the special subcases of interest are n = 2, n = 3, and n = 6; the subcase n = 1 reduces all three special cases to the trivial situation of a single number or scalar. The square matrix A has the form

$$\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & & & \vdots \\ A_{n1} & \cdots & \cdots & A_{nn} \end{bmatrix}, \tag{A3}$$

while the row and column matrices r and c have the forms

$$\mathbf{r} = [\,r_1 \;\; r_2 \;\; \ldots \;\; r_n\,], \qquad \mathbf{c} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}, \tag{A4}$$

respectively. The transpose of a column matrix is a row matrix, and thus

$$\mathbf{c}^T = [\,c_1 \;\; c_2 \;\; \ldots \;\; c_n\,]. \tag{A5}$$

To save space in books and papers, the form of c in (A5) is used more frequently than the form in the second of (A4). Wherever possible, square matrices will be denoted by upper-case boldface Latin letters, while row and column matrices will be denoted by lower-case boldface Latin letters, as is the case in eqs. (A3) and (A4).
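Since the notation is chosen to transfer directly into computational programs, a short numeric sketch may help fix the definitions (A1)–(A5). The sketch below uses Python with NumPy rather than the symbolic packages named above, and the particular array values are illustrative only, not taken from the text:

```python
import numpy as np

# An r-by-c matrix (r = 2, c = 3), as in (A1); the entries are arbitrary.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Transposition, as in (A2): rows and columns interchange.
Mt = M.T
assert Mt.shape == (3, 2)
assert Mt[2, 0] == M[0, 2]           # (M^T)_{ij} = M_{ji}

# A row matrix r and a column matrix c, as in (A4).
r = np.array([[1.0, 2.0, 3.0]])      # shape (1, n)
c = np.array([[1.0], [2.0], [3.0]])  # shape (n, 1)

# The transpose of a column matrix is a row matrix, as in (A5).
assert np.array_equal(c.T, r)
```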
A.3. THE TYPES AND ALGEBRA OF SQUARE MATRICES

The elements of the square matrix A given by (A3) for which the row and column indices are equal, namely the elements $A_{11}, A_{22}, \ldots, A_{nn}$, are called diagonal elements. A matrix with only diagonal elements is called a diagonal matrix:

$$\mathbf{A} = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & \cdots & \cdots & A_{nn} \end{bmatrix}. \tag{A6}$$

The sum of the diagonal elements of a matrix is a scalar called the trace of the matrix and, for the matrix A, it is denoted by trA:

$$\mathrm{tr}\,\mathbf{A} = A_{11} + A_{22} + \cdots + A_{nn}. \tag{A7}$$

If the trace of a matrix is zero, the matrix is said to be traceless. Note also that $\mathrm{tr}\,\mathbf{A} = \mathrm{tr}\,\mathbf{A}^T$.

The zero and the unit matrix, 0 and 1, respectively, constitute the null element, the 0, and the unit element, the 1, in the algebra of square matrices. The zero matrix is a matrix whose every element is zero, and the unit matrix is a diagonal matrix whose diagonal elements are all one:

$$\mathbf{0} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & \cdots & \cdots & 0 \end{bmatrix}, \qquad \mathbf{1} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots \\ 0 & \cdots & \cdots & 1 \end{bmatrix}. \tag{A8}$$

A special symbol, the Kronecker delta $\delta_{ij}$, is introduced to represent the components of the unit matrix. When i = j the value of the Kronecker delta is 1, $\delta_{11} = \delta_{22} = \cdots = \delta_{nn} = 1$, and when i ≠ j the value of the Kronecker delta is 0, $\delta_{12} = \delta_{21} = \cdots = \delta_{n1} = \delta_{1n} = 0$.

Multiplication of the matrix A by a scalar is defined as multiplication of every element of the matrix A by the scalar α; thus,

$$\alpha\mathbf{A} \equiv \begin{bmatrix} \alpha A_{11} & \alpha A_{12} & \cdots & \alpha A_{1n} \\ \alpha A_{21} & \alpha A_{22} & \cdots & \alpha A_{2n} \\ \vdots & & & \vdots \\ \alpha A_{n1} & \cdots & \cdots & \alpha A_{nn} \end{bmatrix}. \tag{A9}$$

It is then easy to show that 1A = A, (–1)A = –A, 0A = 0, and α0 = 0. The addition of square matrices is defined only for matrices with the same number of rows (or columns). The sum of two matrices, A and B, is denoted by A + B, where

$$\mathbf{A} + \mathbf{B} \equiv \begin{bmatrix} A_{11}+B_{11} & A_{12}+B_{12} & \cdots & A_{1n}+B_{1n} \\ A_{21}+B_{21} & A_{22}+B_{22} & \cdots & A_{2n}+B_{2n} \\ \vdots & & & \vdots \\ A_{n1}+B_{n1} & \cdots & \cdots & A_{nn}+B_{nn} \end{bmatrix}. \tag{A10}$$

Matrix addition is commutative and associative,

$$\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \quad \text{and} \quad \mathbf{A} + (\mathbf{B} + \mathbf{C}) = (\mathbf{A} + \mathbf{B}) + \mathbf{C}, \tag{A11}$$

respectively. The following distributive laws connect matrix addition and matrix multiplication by scalars:

$$\alpha(\mathbf{A} + \mathbf{B}) = \alpha\mathbf{A} + \alpha\mathbf{B} \quad \text{and} \quad (\alpha + \beta)\mathbf{A} = \alpha\mathbf{A} + \beta\mathbf{A}, \tag{A12}$$

where α and β are scalars. Negative square matrices may be created by employing the definition of matrix multiplication by a scalar (A9) in the special case α = –1. In this case the definition of addition of square matrices (A10) can be extended to include subtraction of square matrices, A – B.

A matrix B for which B = B^T is said to be a symmetric matrix, while a matrix C for which C = –C^T is said to be a skew-symmetric or anti-symmetric matrix. The symmetric and skew-symmetric parts of a matrix, say A, are constructed from A as follows:

$$\text{symmetric part of } \mathbf{A} \equiv \tfrac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right), \text{ and} \tag{A13}$$

$$\text{skew-symmetric part of } \mathbf{A} \equiv \tfrac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \tag{A14}$$

It is easy to verify that the symmetric part of A is a symmetric matrix and that the skew-symmetric part of A is a skew-symmetric matrix. The sum of the symmetric part of A and the skew-symmetric part of A is A:

$$\mathbf{A} = \tfrac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) + \tfrac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \tag{A15}$$

This result shows that any square matrix can be decomposed into the sum of a symmetric and a skew-symmetric matrix. Using the trace operation introduced above, the representation (A15) can be extended to a three-way decomposition of the matrix A:

$$\mathbf{A} = \frac{(\mathrm{tr}\,\mathbf{A})}{n}\mathbf{1} + \left[\tfrac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) - \frac{(\mathrm{tr}\,\mathbf{A})}{n}\mathbf{1}\right] + \tfrac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \tag{A16}$$

The last term in this decomposition is still the skew-symmetric part of the matrix. The second term is the traceless symmetric part of the matrix, and the first term is simply the trace of the matrix, divided by n, multiplied by the unit matrix.
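Before the worked example, here is a minimal numeric sketch of the three-way decomposition (A16), again in Python/NumPy. The function name `three_way_decomposition` is ours, not the text's, and the test matrix is the one used in Example A.3.1 below, so the computed parts can be checked against the worked solution:

```python
import numpy as np

def three_way_decomposition(A):
    """Split a square matrix A into hydrostatic, traceless-symmetric,
    and skew-symmetric parts, following (A16): A = H + D + S."""
    n = A.shape[0]
    I = np.eye(n)
    H = (np.trace(A) / n) * I    # hydrostatic part, (tr A / n) 1
    D = 0.5 * (A + A.T) - H      # traceless symmetric part
    S = 0.5 * (A - A.T)          # skew-symmetric part
    return H, D, S

# The matrix of Example A.3.1 below.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
H, D, S = three_way_decomposition(A)
assert np.allclose(H + D + S, A)     # the three parts sum back to A
assert np.isclose(np.trace(D), 0.0)  # D is traceless (cf. Example A.3.2)
assert np.allclose(S, -S.T)          # S is skew-symmetric
```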
Example A.3.1
Construct the three-way decomposition of the matrix A given by

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$$

Solution: The symmetric and skew-symmetric parts of A, as well as the trace of A, are calculated:

$$\tfrac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) = \begin{bmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{bmatrix}, \quad \tfrac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right) = \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}, \quad \mathrm{tr}\,\mathbf{A} = 15;$$

then, since n = 3, it follows from (A16) that

$$\mathbf{A} = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix} + \begin{bmatrix} -4 & 3 & 5 \\ 3 & 0 & 7 \\ 5 & 7 & 4 \end{bmatrix} + \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}.$$

Introducing the notation devA for the deviatoric part of the n-by-n square matrix A,

$$\mathrm{dev}\,\mathbf{A} = \mathbf{A} - \frac{(\mathrm{tr}\,\mathbf{A})}{n}\mathbf{1}, \tag{A17}$$

the representation of the matrix A given by (A16) may be rewritten as

$$\mathbf{A} = \mathbf{H} + \mathbf{D} + \mathbf{S}, \tag{A18}$$

where H is called the hydrostatic component, D is called the deviatoric component, and S is the skew-symmetric component:

$$\mathbf{H} = \frac{(\mathrm{tr}\,\mathbf{A})}{n}\mathbf{1}, \quad \mathbf{D} = \tfrac{1}{2}\left(\mathrm{dev}\,\mathbf{A} + \mathrm{dev}\,\mathbf{A}^T\right), \quad \mathbf{S} = \tfrac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \tag{A19}$$

Example A.3.2
Show that tr(devA) = 0.
Solution: Applying the trace operation to both sides of (A17), one obtains $\mathrm{tr}(\mathrm{dev}\,\mathbf{A}) = \mathrm{tr}\,\mathbf{A} - \frac{1}{n}(\mathrm{tr}\,\mathbf{A})(\mathrm{tr}\,\mathbf{1})$; then, since tr1 = n, it follows that tr(devA) = 0.

The product of two square matrices, A and B, with equal numbers of rows (columns) is a square matrix with the same number of rows (columns). The matrix product is written as A⋅B, where A⋅B is defined by

$$(\mathbf{A}\cdot\mathbf{B})_{ij} = \sum_{k=1}^{k=n} A_{ik}B_{kj}; \tag{A20}$$

thus, for example, the element in the rth row and cth column of the product A⋅B is given by

$$(\mathbf{A}\cdot\mathbf{B})_{rc} = A_{r1}B_{1c} + A_{r2}B_{2c} + \cdots + A_{rn}B_{nc}.$$

The dot inside the matrix product A⋅B indicates that one index from A and one index from B are to be summed over. The positioning of the summation index on the two matrices involved in a matrix product is critical and is reflected in the matrix notation by the transpose. In the three equations below, (A21), study carefully how the positions of the summation indices within the summation sign change in relation to the position of the transpose on the matrices in the associated matrix product:

$$(\mathbf{A}\cdot\mathbf{B}^T)_{ij} = \sum_{k=1}^{k=n} A_{ik}B_{jk}, \quad (\mathbf{A}^T\cdot\mathbf{B})_{ij} = \sum_{k=1}^{k=n} A_{ki}B_{kj}, \quad (\mathbf{A}^T\cdot\mathbf{B}^T)_{ij} = \sum_{k=1}^{k=n} A_{ki}B_{jk}. \tag{A21}$$

A widely used notational convention, called the Einstein summation convention, drops the summation symbol in (A20) and writes

$$(\mathbf{A}\cdot\mathbf{B})_{ij} = A_{ik}B_{kj}, \tag{A22}$$

where the convention is the understanding that the repeated index, k, is to be summed over its range of admissible values from 1 to n.
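The index bookkeeping in (A20)–(A22) can also be checked numerically. NumPy's `einsum` takes the Einstein summation convention literally, summing over any repeated index; the sketch below, using random matrices as illustrative stand-ins, verifies each of the three placements in (A21) against its transpose form in matrix notation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (A.B)_ij = A_ik B_kj, eqs. (A20)/(A22); the repeated index k is summed.
AB = np.einsum('ik,kj->ij', A, B)
assert np.allclose(AB, A @ B)

# The three index placements of (A21):
assert np.allclose(np.einsum('ik,jk->ij', A, B), A @ B.T)    # (A.B^T)_ij = A_ik B_jk
assert np.allclose(np.einsum('ki,kj->ij', A, B), A.T @ B)    # (A^T.B)_ij = A_ki B_kj
assert np.allclose(np.einsum('ki,jk->ij', A, B), A.T @ B.T)  # (A^T.B^T)_ij = A_ki B_jk
```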