7.1 Vectors, Tensors and the Index Notation

The equations governing three-dimensional mechanics problems can be quite lengthy. For this reason, it is essential to use a short-hand notation called the index notation (also known as indicial, subscript or suffix notation). Consider first the notation used for vectors.

7.1.1 Vectors

Vectors are used to describe physical quantities which have both a magnitude and a direction associated with them. Geometrically, a vector is represented by an arrow; the arrow defines the direction of the vector and the magnitude of the vector is represented by the length of the arrow. Analytically, in what follows, vectors will be represented by lowercase bold-face Latin letters, e.g. $\mathbf{a}$, $\mathbf{b}$.

The dot product of two vectors $\mathbf{a}$ and $\mathbf{b}$ is denoted by $\mathbf{a} \cdot \mathbf{b}$ and is a scalar defined by

$$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\theta . \qquad (7.1.1)$$

Here $\theta$ is the angle between the vectors when their initial points coincide and is restricted to the range $0 \le \theta \le \pi$.

Cartesian Coordinate System

So far the short discussion has been in symbolic notation (also known as absolute, invariant, direct or vector notation); that is, no reference to 'axes' or 'components' or 'coordinates' is made, implied or required. Vectors exist independently of any coordinate system. The symbolic notation is very useful, but there are many circumstances in which use of the component forms of vectors is more helpful – or essential. To this end, introduce the vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ having the properties

$$\mathbf{e}_1 \cdot \mathbf{e}_2 = \mathbf{e}_2 \cdot \mathbf{e}_3 = \mathbf{e}_3 \cdot \mathbf{e}_1 = 0, \qquad (7.1.2)$$

so that they are mutually perpendicular, and

$$\mathbf{e}_1 \cdot \mathbf{e}_1 = \mathbf{e}_2 \cdot \mathbf{e}_2 = \mathbf{e}_3 \cdot \mathbf{e}_3 = 1, \qquad (7.1.3)$$

so that they are unit vectors. Such a set of orthogonal unit vectors is called an orthonormal set, Fig. 7.1.1. This set of vectors forms a basis, by which is meant that any other vector can be written as a linear combination of these vectors, i.e. in the form

$$\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3 \qquad (7.1.4)$$

where $a_1$, $a_2$ and $a_3$ are scalars, called the Cartesian components or coordinates of $\mathbf{a}$ along the given three directions. The unit vectors are called base vectors when used for this purpose. The components $a_1$, $a_2$ and $a_3$ are measured along lines called the $x_1$, $x_2$ and $x_3$ axes, drawn through the base vectors.

[Figure 7.1.1: an orthonormal set of base vectors and Cartesian coordinates]

Note further that this orthonormal system $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ is right-handed, by which is meant $\mathbf{e}_1 \times \mathbf{e}_2 = \mathbf{e}_3$ (or $\mathbf{e}_2 \times \mathbf{e}_3 = \mathbf{e}_1$ or $\mathbf{e}_3 \times \mathbf{e}_1 = \mathbf{e}_2$).

In the index notation, the expression for the vector $\mathbf{a}$ in terms of the components $a_1, a_2, a_3$ and the corresponding basis vectors $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ is written as

$$\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3 = \sum_{i=1}^{3} a_i \mathbf{e}_i \qquad (7.1.5)$$

This can be simplified further by using Einstein's summation convention, whereby the summation sign is dropped and it is understood that for a repeated index ($i$ in this case) a summation over the range of the index (3 in this case; 2 in the case of a two-dimensional space/analysis) is implied. Thus one writes $\mathbf{a} = a_i\mathbf{e}_i$. This can be further shortened to, simply, $a_i$.
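To make the summation convention concrete, here is a minimal numerical sketch (assuming Python with NumPy, which the original text does not use; the component values are made-up examples). The subscript string passed to `np.einsum` reads just like the index notation, with the repeated index $i$ summed over its range:

```python
import numpy as np

# Orthonormal base vectors e_1, e_2, e_3, stored as the rows of a 3x3 array
e = np.eye(3)

# Example components a_1, a_2, a_3
a_comp = np.array([2.0, -1.0, 3.0])

# a = a_i e_i  (Eqn. 7.1.5): the repeated index i is summed over 1..3
a = np.einsum('i,ij->j', a_comp, e)

# The explicit linear combination gives the same vector
a_explicit = a_comp[0] * e[0] + a_comp[1] * e[1] + a_comp[2] * e[2]
assert np.allclose(a, a_explicit)
```

Because the dummy index is summed away, renaming it (e.g. `'k,kj->j'`) leaves the result unchanged.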
The dot product of two vectors $\mathbf{u}$ and $\mathbf{v}$, referred to this coordinate system, is

$$
\begin{aligned}
\mathbf{u} \cdot \mathbf{v} &= (u_1\mathbf{e}_1 + u_2\mathbf{e}_2 + u_3\mathbf{e}_3) \cdot (v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3) \\
&= u_1 v_1 (\mathbf{e}_1 \cdot \mathbf{e}_1) + u_1 v_2 (\mathbf{e}_1 \cdot \mathbf{e}_2) + u_1 v_3 (\mathbf{e}_1 \cdot \mathbf{e}_3) \\
&\quad + u_2 v_1 (\mathbf{e}_2 \cdot \mathbf{e}_1) + u_2 v_2 (\mathbf{e}_2 \cdot \mathbf{e}_2) + u_2 v_3 (\mathbf{e}_2 \cdot \mathbf{e}_3) \\
&\quad + u_3 v_1 (\mathbf{e}_3 \cdot \mathbf{e}_1) + u_3 v_2 (\mathbf{e}_3 \cdot \mathbf{e}_2) + u_3 v_3 (\mathbf{e}_3 \cdot \mathbf{e}_3) \\
&= u_1 v_1 + u_2 v_2 + u_3 v_3
\end{aligned}
\qquad (7.1.6)
$$

The dot product of two vectors written in the index notation reads

$$\mathbf{u} \cdot \mathbf{v} = u_i v_i \qquad \text{Dot Product} \qquad (7.1.7)$$

The repeated index $i$ is called a dummy index, because it can be replaced with any other letter and the sum is the same; for example, this could equally well be written as $\mathbf{u} \cdot \mathbf{v} = u_j v_j$ or $u_k v_k$.

Introduce next the Kronecker delta symbol $\delta_{ij}$, defined by

$$\delta_{ij} = \begin{cases} 0, & i \ne j \\ 1, & i = j \end{cases} \qquad (7.1.8)$$

Note that $\delta_{11} = 1$ but, using the index notation, $\delta_{ii} = 3$. The Kronecker delta allows one to write the expressions defining the orthonormal basis vectors (7.1.2, 7.1.3) in the compact form

$$\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij} \qquad \text{Orthonormal Basis Rule} \qquad (7.1.9)$$

Example

Recall the equations of motion, Eqns. 1.1.9, which in full read

$$
\begin{aligned}
\frac{\partial \sigma_{11}}{\partial x_1} + \frac{\partial \sigma_{12}}{\partial x_2} + \frac{\partial \sigma_{13}}{\partial x_3} + b_1 &= \rho a_1 \\
\frac{\partial \sigma_{21}}{\partial x_1} + \frac{\partial \sigma_{22}}{\partial x_2} + \frac{\partial \sigma_{23}}{\partial x_3} + b_2 &= \rho a_2 \\
\frac{\partial \sigma_{31}}{\partial x_1} + \frac{\partial \sigma_{32}}{\partial x_2} + \frac{\partial \sigma_{33}}{\partial x_3} + b_3 &= \rho a_3
\end{aligned}
\qquad (7.1.10)
$$

The index notation for these equations is

$$\frac{\partial \sigma_{ij}}{\partial x_j} + b_i = \rho a_i \qquad (7.1.11)$$

Note the dummy index $j$. The index $i$ is called a free index; if one term has a free index $i$, then, to be consistent, all terms must have it. One free index, as here, indicates three separate equations.

7.1.2 Matrix Notation

The symbolic notation $\mathbf{v}$ and index notation $v_i\mathbf{e}_i$ (or simply $v_i$) can be used to denote a vector. Another notation is the matrix notation: the vector $\mathbf{v}$ can be represented by a $3\times1$ matrix (a column vector):

$$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$

Matrices will be denoted by square brackets, so a shorthand notation for this matrix/vector would be $[\mathbf{v}]$. The elements of the matrix $[\mathbf{v}]$ can be written in the index notation $v_i$. Note the distinction between a vector and a $3\times1$ matrix: the former is a mathematical object independent of any coordinate system, the latter is a representation of the vector in a particular coordinate system – matrix notation, as with the index notation, relies on a particular coordinate system.

As an example, the dot product can be written in the matrix notation as

$$[\mathbf{u}^{\mathrm{T}}][\mathbf{v}] = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$

the left-hand side being the "short" matrix notation and the right-hand side the "full" matrix notation. Here, the notation $[\mathbf{u}^{\mathrm{T}}]$ denotes the $1\times3$ matrix (the row vector). The result is a $1\times1$ matrix, $u_i v_i$.

The matrix notation for the Kronecker delta $\delta_{ij}$ is the identity matrix

$$[\mathbf{I}] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Then, for example, in both index and matrix notation:

$$\delta_{ij} u_j = u_i \qquad [\mathbf{I}][\mathbf{u}] = [\mathbf{u}] \qquad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \qquad (7.1.12)$$

Matrix – Matrix Multiplication

When discussing vector transformation equations further below, it will be necessary to multiply various matrices with each other (of sizes $3\times1$, $1\times3$ and $3\times3$). It will be helpful to write these matrix multiplications in the short-hand notation.

First, it has been seen that the dot product of two vectors can be represented by $[\mathbf{u}^{\mathrm{T}}][\mathbf{v}]$ or $u_i v_i$. Similarly, the matrix multiplication $[\mathbf{u}][\mathbf{v}^{\mathrm{T}}]$ gives a $3\times3$ matrix with element form $u_i v_j$ or, in full,

$$\begin{bmatrix} u_1 v_1 & u_1 v_2 & u_1 v_3 \\ u_2 v_1 & u_2 v_2 & u_2 v_3 \\ u_3 v_1 & u_3 v_2 & u_3 v_3 \end{bmatrix}$$

This operation is called the tensor product of two vectors, written in symbolic notation as $\mathbf{u} \otimes \mathbf{v}$ (or simply $\mathbf{u}\mathbf{v}$).
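The dot product, the Kronecker delta and the tensor product can all be checked numerically. Below is a minimal sketch (again assuming NumPy; $\mathbf{u}$ and $\mathbf{v}$ are made-up example vectors) mirroring Eqns. 7.1.7, 7.1.8 and 7.1.12 and the outer-product matrix above:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# u . v = u_i v_i : the repeated (dummy) index i is summed (Eqn. 7.1.7)
dot = np.einsum('i,i->', u, v)
assert np.isclose(dot, u @ v)            # 32.0, same as the built-in dot

# Kronecker delta as the identity matrix; delta_ii sums to 3 (the trace)
delta = np.eye(3)
assert np.einsum('ii->', delta) == 3.0   # delta_ii = 3, not 1

# delta_ij u_j = u_i  (Eqn. 7.1.12)
assert np.allclose(np.einsum('ij,j->i', delta, u), u)

# Tensor product (u ⊗ v)_ij = u_i v_j : two free indices give a 3x3 matrix
outer = np.einsum('i,j->ij', u, v)
assert np.allclose(outer, np.outer(u, v))
```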
Next, the matrix multiplication

$$[\mathbf{Q}][\mathbf{u}] \equiv \begin{bmatrix} Q_{11} & Q_{12} & Q_{13} \\ Q_{21} & Q_{22} & Q_{23} \\ Q_{31} & Q_{32} & Q_{33} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}$$

is a $3\times1$ matrix with elements $([\mathbf{Q}][\mathbf{u}])_i \equiv Q_{ij} u_j$. The elements of $[\mathbf{Q}][\mathbf{u}]$ are the same as those of $[\mathbf{u}^{\mathrm{T}}][\mathbf{Q}^{\mathrm{T}}]$, which can be expressed as $([\mathbf{u}^{\mathrm{T}}][\mathbf{Q}^{\mathrm{T}}])_i \equiv u_j Q_{ij}$.

The expression $[\mathbf{u}][\mathbf{Q}]$ is meaningless, but $[\mathbf{u}^{\mathrm{T}}][\mathbf{Q}]$ {▲Problem 4} is a $1\times3$ matrix with elements $([\mathbf{u}^{\mathrm{T}}][\mathbf{Q}])_i \equiv u_j Q_{ji}$. This leads to the following rules:

1. if a vector pre-multiplies a matrix $[\mathbf{Q}]$ → the vector is the transpose, $[\mathbf{u}^{\mathrm{T}}]$
2. if a matrix $[\mathbf{Q}]$ pre-multiplies the vector → the vector is $[\mathbf{u}]$
3. if summed indices are "beside each other", as the $j$ in $u_j Q_{ji}$ or $Q_{ij} u_j$ → the matrix is $[\mathbf{Q}]$
4. if summed indices are not beside each other, as the $j$ in $u_j Q_{ij}$ → the matrix is the transpose, $[\mathbf{Q}^{\mathrm{T}}]$

Finally, consider the multiplication of $3\times3$ matrices. Again, this follows the "beside each other" rule for the summed index. For example, $[\mathbf{A}][\mathbf{B}]$ gives the $3\times3$ matrix {▲Problem 8} $([\mathbf{A}][\mathbf{B}])_{ij} = A_{ik} B_{kj}$, and the multiplication $[\mathbf{A}^{\mathrm{T}}][\mathbf{B}]$ is written as $([\mathbf{A}^{\mathrm{T}}][\mathbf{B}])_{ij} = A_{ki} B_{kj}$.

There is also the important identity

$$([\mathbf{A}][\mathbf{B}])^{\mathrm{T}} = [\mathbf{B}^{\mathrm{T}}][\mathbf{A}^{\mathrm{T}}] \qquad (7.1.13)$$

Note also the following:
(i) if there is no free index, as in $u_i v_i$, there is one element
(ii) if there is one free index, as in $u_j Q_{ji}$, it is a $3\times1$ (or $1\times3$) matrix
(iii) if there are two free indices, as in $A_{ki} B_{kj}$, it is a $3\times3$ matrix

7.1.3 Vector Transformation Rule

Introduce two Cartesian coordinate systems with base vectors $\mathbf{e}_i$ and $\mathbf{e}'_i$ and common origin $o$, Fig. 7.1.2. The vector $\mathbf{u}$ can then be expressed in two ways:

$$\mathbf{u} = u_i \mathbf{e}_i = u'_i \mathbf{e}'_i \qquad (7.1.14)$$

[Figure 7.1.2: a vector represented using two different coordinate systems]

Note that the $x'_i$ coordinate system is obtained from the $x_i$ system by a rotation of the base vectors.
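As a numerical check of the index-placement rules and identity (7.1.13) above, and of Eqn. 7.1.14, here is a minimal sketch (assuming NumPy; the rotation angle, components and matrices are made-up examples, and building $\mathbf{Q}$ from the rotated base vectors anticipates the transformation rule developed later in the section). The rotated basis mirrors Fig. 7.1.2, with the base vectors rotated about the $x_3$ axis:

```python
import numpy as np

theta = np.pi / 6                        # example rotation angle
c, s = np.cos(theta), np.sin(theta)

e = np.eye(3)                            # base vectors e_i as rows
e_prime = np.array([[  c,   s, 0.0],     # rotated base vectors e'_i as rows
                    [ -s,   c, 0.0],
                    [0.0, 0.0, 1.0]])

u = np.array([2.0, 1.0, 0.5])            # components u_i in the e_i basis

# Since e'_i . e'_j = delta_ij, the new components are u'_i = e'_i . u.
# With Q_ij = (e_prime)_ij this is Q_ij u_j: the summed index j sits
# "beside each other", so the matrix form is [Q][u] (rule 3).
Q = e_prime
u_prime = np.einsum('ij,j->i', Q, u)
assert np.allclose(u_prime, Q @ u)

# u_j Q_ji: j again adjacent, so the vector pre-multiplies as [u^T][Q] (rule 1)
assert np.allclose(np.einsum('j,ji->i', u, Q), u @ Q)

# Identity (7.1.13): ([A][B])^T = [B^T][A^T]
A, B = Q, np.diag([1.0, 2.0, 3.0])
assert np.allclose((A @ B).T, B.T @ A.T)

# Eqn. 7.1.14: u_i e_i and u'_i e'_i represent the same vector u
assert np.allclose(np.einsum('i,ij->j', u, e),
                   np.einsum('i,ij->j', u_prime, e_prime))
```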