
Dyadic Analysis [1]
EECS 730, Winter 2009
K. Sarabandi

In this section a brief review of dyadic analysis is presented. Dyadic operations and theorems provide an effective tool for the manipulation of field quantities.[2] Dyadic notation was first introduced by Gibbs in 1884 and later appeared in the literature.[3]

Consider a vector function $\bar{F}$ having three scalar components $f_i$ ($i = 1, 2, 3$) in a Cartesian system, that is,

$$\bar{F} = \sum_{i=1}^{3} f_i\,\hat{x}_i.$$

Now consider three different vector functions $\bar{F}_j$, given by

$$\bar{F}_j = \sum_{i=1}^{3} f_{ij}\,\hat{x}_i, \qquad j = 1, 2, 3, \tag{1}$$

which constitute a dyad $\bar{\bar{F}}$ given by

$$\bar{\bar{F}} = \sum_{j=1}^{3} \bar{F}_j\,\hat{x}_j = \sum_{i=1}^{3}\sum_{j=1}^{3} f_{ij}\,\hat{x}_i\hat{x}_j. \tag{2}$$

The doublets $\hat{x}_i\hat{x}_j$ ($i, j = 1, 2, 3$) form the basis of nine unit dyads. In matrix notation

$$\bar{\bar{F}} = \left[\bar{F}_1, \bar{F}_2, \bar{F}_3\right] = \begin{bmatrix} f_{11} & f_{12} & f_{13} \\ f_{21} & f_{22} & f_{23} \\ f_{31} & f_{32} & f_{33} \end{bmatrix}.$$

It should be emphasized that $\hat{x}_i\hat{x}_j \neq \hat{x}_j\hat{x}_i$ for $i \neq j$.

In general a dyad can be formed from the product of two arbitrary vectors $\bar{a}$ and $\bar{b}$, giving $\bar{\bar{F}} = \bar{a}\bar{b}$. The components of $\bar{\bar{F}}$ are obtained from the matrix product of $\bar{a}$, written as a $3 \times 1$ matrix, with $\bar{b}$ written as a $1 \times 3$ matrix. Note that the converse is not necessarily true: a general dyad may not be expressible as a product of two vectors like $\bar{a}\bar{b}$.

The transpose of $\bar{\bar{F}}$, denoted $[\bar{\bar{F}}]^T$, is defined by

$$[\bar{\bar{F}}]^T = \sum_{j=1}^{3} \hat{x}_j\,\bar{F}_j = \sum_{i=1}^{3}\sum_{j=1}^{3} f_{ij}\,\hat{x}_j\hat{x}_i.$$

In matrix form $[\bar{\bar{F}}]^T$ is simply the transpose of the matrix $\bar{\bar{F}}$. A symmetric dyad is a dyad for which

$$[\bar{\bar{F}}_s]^T = \bar{\bar{F}}_s.$$

A particular case of a symmetric dyad is the "idemfactor" $\bar{\bar{I}}$, described by $f_{ij} = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. The idemfactor is explicitly expressed by

$$\bar{\bar{I}} = \sum_{i=1}^{3} \hat{x}_i\hat{x}_i.$$

An antisymmetric dyad is defined by

$$[\bar{\bar{F}}_a]^T = -\bar{\bar{F}}_a.$$

It is easy to show that any dyad can be decomposed into a symmetric and an antisymmetric component:

$$\bar{\bar{F}} = \frac{1}{2}\left[\bar{\bar{F}} + \bar{\bar{F}}^T\right] + \frac{1}{2}\left[\bar{\bar{F}} - \bar{\bar{F}}^T\right].$$

1 The Scalar Products

The "anterior scalar product" of a vector and a dyad is defined by

$$\bar{a}\cdot\bar{\bar{F}} = \sum_{j=1}^{3}\left(\bar{a}\cdot\bar{F}_j\right)\hat{x}_j = \sum_{i=1}^{3}\sum_{j=1}^{3} a_i f_{ij}\,\hat{x}_j, \tag{3}$$

which is a vector. The "posterior scalar product" is defined by

$$\bar{\bar{F}}\cdot\bar{a} = \sum_{j=1}^{3}\bar{F}_j\left(\hat{x}_j\cdot\bar{a}\right) = \sum_{i=1}^{3}\sum_{j=1}^{3} a_j f_{ij}\,\hat{x}_i, \tag{4}$$

which is also a vector. It is obvious that for a symmetric dyad

$$\bar{a}\cdot\bar{\bar{F}}_s = \bar{\bar{F}}_s\cdot\bar{a},$$

and that

$$\bar{a}\cdot\bar{\bar{I}} = \bar{\bar{I}}\cdot\bar{a} = \bar{a}.$$

2 The Vector Products

As with the scalar products, there are two different vector products between a vector and a dyad. The "anterior vector product" is defined by

$$\bar{a}\times\bar{\bar{F}} = \sum_{j=1}^{3}\left(\bar{a}\times\bar{F}_j\right)\hat{x}_j, \tag{5}$$

and the "posterior vector product" is defined by

$$\bar{\bar{F}}\times\bar{a} = \sum_{j=1}^{3}\bar{F}_j\left(\hat{x}_j\times\bar{a}\right). \tag{6}$$

3 Vector-Dyadic Products

Consider two vectors $\bar{a}$ and $\bar{b}$ and a dyad $\bar{\bar{C}}$ with vector components $\bar{C}_j$. Since

$$\bar{a}\cdot\left(\bar{b}\times\bar{C}_j\right) = -\bar{b}\cdot\left(\bar{a}\times\bar{C}_j\right) = \left(\bar{a}\times\bar{b}\right)\cdot\bar{C}_j, \qquad j = 1, 2, 3, \tag{7}$$

it can easily be shown that

$$\bar{a}\cdot\left(\bar{b}\times\bar{\bar{C}}\right) = -\bar{b}\cdot\left(\bar{a}\times\bar{\bar{C}}\right) = \left(\bar{a}\times\bar{b}\right)\cdot\bar{\bar{C}}. \tag{8}$$

Equation (8) is obtained from the three equations given by (7) by juxtaposing $\hat{x}_j$ at the posterior position of each equation and adding them up. A numerical check of these identities is sketched in the code example below.

[1] Copyright K. Sarabandi, 1997.
[2] Tai, C.T., "Dyadic Green's Functions in Electromagnetic Theory," 2nd ed., New York: IEEE Press, 1993.
[3] Gibbs, J.W., "The Scientific Papers of J. Willard Gibbs," Vol. 2, pp. 84-90, New York: Dover, 1961.
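The algebraic identities above are easy to check numerically. Below is a minimal sketch (an addition, not part of the original notes) using NumPy, with a dyad stored as a 3x3 array whose entry [i, j] is $f_{ij}$, so that column $j$ is the constituent vector $\bar{F}_j$, matching the matrix form given in the text; the variable names and the library choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(3)
b = rng.standard_normal(3)
F = rng.standard_normal((3, 3))  # generic dyad; column j is F_j

# Anterior scalar product, eq. (3): (a . F)_j = sum_i a_i f_ij
anterior = a @ F
assert np.allclose(anterior, np.einsum('i,ij->j', a, F))

# Posterior scalar product, eq. (4): (F . a)_i = sum_j f_ij a_j
posterior = F @ a
assert np.allclose(posterior, np.einsum('ij,j->i', F, a))

# For a symmetric dyad the two coincide: a . F_s = F_s . a
Fs = 0.5 * (F + F.T)
assert np.allclose(a @ Fs, Fs @ a)

# The idemfactor acts as the identity: a . I = I . a = a
I = np.eye(3)
assert np.allclose(a @ I, a) and np.allclose(I @ a, a)

# Anterior vector product, eq. (5): column j of (b x C) is b x C_j
C = rng.standard_normal((3, 3))
bxC = np.stack([np.cross(b, C[:, j]) for j in range(3)], axis=1)
axC = np.stack([np.cross(a, C[:, j]) for j in range(3)], axis=1)

# Vector-dyadic identity, eq. (8): a.(b x C) = -b.(a x C) = (a x b).C
lhs = a @ bxC
mid = -(b @ axC)
rhs = np.cross(a, b) @ C
assert np.allclose(lhs, mid) and np.allclose(lhs, rhs)
print("identities (3), (4), (8) hold for random a, b, C")
```

In this representation the anterior products act from the left ($\bar{a}^T F$) and the posterior products act from the right ($F\bar{a}$), which is exactly why a symmetric matrix makes the two coincide.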
4 Differential Operations on Dyadic Functions

Differential operators such as the divergence, gradient, and curl are often encountered in the analysis of electromagnetic field quantities, so it is useful to define these operators as applied to dyads. The divergence of a dyadic function is defined by

$$\nabla\cdot\bar{\bar{F}} = \sum_{j=1}^{3}\left(\nabla\cdot\bar{F}_j\right)\hat{x}_j. \tag{9}$$

The resulting quantity is a vector. In a manner similar to the anterior vector product, the curl of a dyadic is defined by

$$\nabla\times\bar{\bar{F}} = \sum_{j=1}^{3}\left(\nabla\times\bar{F}_j\right)\hat{x}_j, \tag{10}$$

which forms a dyad. The gradient of a vector function forms a dyadic function, given by

$$\nabla\bar{F} = \sum_{j=1}^{3}\left(\nabla f_j\right)\hat{x}_j = \sum_{i=1}^{3}\sum_{j=1}^{3}\frac{\partial f_j}{\partial x_i}\,\hat{x}_i\hat{x}_j. \tag{11}$$

A symbolic check of definitions (9)-(11) is sketched at the end of these notes.

5 Vector-Dyadic Green's Second Identity

Green's second identity for vector functions can be used to develop the vector-dyadic version of the theorem. For any two vector functions $\bar{P}$ and $\bar{Q}_j$ which, together with their first and second derivatives, are continuous, it can be shown that[4]

$$\iiint_v \left[\bar{P}\cdot\nabla\times\nabla\times\bar{Q}_j - \left(\nabla\times\nabla\times\bar{P}\right)\cdot\bar{Q}_j\right] dv = \iint_s \left[\bar{Q}_j\times\nabla\times\bar{P} - \bar{P}\times\nabla\times\bar{Q}_j\right]\cdot\hat{n}\, ds \\
= \iint_s \left[\left(\nabla\times\bar{P}\times\hat{n}\right)\cdot\bar{Q}_j + \bar{P}\cdot\left(\hat{n}\times\nabla\times\bar{Q}_j\right)\right] ds. \tag{12}$$

Suppose $\bar{Q}_j$ is one of the constituent vectors of a dyad $\bar{\bar{Q}}$. By juxtaposing $\hat{x}_j$ at the posterior position of (12) and adding the resulting equations, it can easily be shown that

$$\iiint_v \left[\bar{P}\cdot\nabla\times\nabla\times\bar{\bar{Q}} - \left(\nabla\times\nabla\times\bar{P}\right)\cdot\bar{\bar{Q}}\right] dv = \iint_s \left[\bar{P}\cdot\left(\hat{n}\times\nabla\times\bar{\bar{Q}}\right) - \left(\hat{n}\times\nabla\times\bar{P}\right)\cdot\bar{\bar{Q}}\right] ds, \tag{13}$$

which is the statement of the vector-dyadic Green's second identity.

6 Dyadic-Dyadic Green's Second Identity

Another useful form of Green's second identity is its dyadic-dyadic form, which can be obtained directly from (13). Consider three vectors $\bar{P}_j$ for $j = 1, 2, 3$. Noting that

$$\bar{P}_j\cdot\left(\hat{n}\times\nabla\times\bar{\bar{Q}}\right) = -\left(\hat{n}\times\bar{P}_j\right)\cdot\nabla\times\bar{\bar{Q}},$$

using $\bar{P}_j$ in (13), and then transposing the resulting equation, we get

$$\iiint_v \left\{\left[\nabla\times\nabla\times\bar{\bar{Q}}\right]^T\cdot\bar{P}_j - \left[\bar{\bar{Q}}\right]^T\cdot\nabla\times\nabla\times\bar{P}_j\right\} dv = \iint_s \left\{-\left[\nabla\times\bar{\bar{Q}}\right]^T\cdot\left(\hat{n}\times\bar{P}_j\right) + \left[\bar{\bar{Q}}\right]^T\cdot\left(\hat{n}\times\nabla\times\bar{P}_j\right)\right\} ds. \tag{14}$$

Juxtaposing $\hat{x}_j$ at the posterior position of (14) and summing the results, the dyadic-dyadic Green's second identity is obtained:

$$\iiint_v \left\{\left[\nabla\times\nabla\times\bar{\bar{Q}}\right]^T\cdot\bar{\bar{P}} - \left[\bar{\bar{Q}}\right]^T\cdot\nabla\times\nabla\times\bar{\bar{P}}\right\} dv = \iint_s \left\{-\left[\nabla\times\bar{\bar{Q}}\right]^T\cdot\left(\hat{n}\times\bar{\bar{P}}\right) + \left[\bar{\bar{Q}}\right]^T\cdot\left(\hat{n}\times\nabla\times\bar{\bar{P}}\right)\right\} ds. \tag{15}$$

[4] Stratton, J.A., "Electromagnetic Theory," p. 250, New York: McGraw-Hill, 1941.
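As a closing check, the differential definitions (9)-(11) can be verified symbolically on a test field. The sketch below (again an addition, not part of the original notes) uses SymPy with the same column convention as before; the test field, helper names, and library choice are illustrative assumptions.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

# An arbitrary smooth test field f = (f1, f2, f3); any choice works.
f = sp.Matrix([x1**2 * x2, x1 * sp.sin(x3), x2 * x3])

# Gradient of a vector field, eq. (11): entry (i, j) is d f_j / d x_i,
# so column j of grad_f is the ordinary gradient of f_j.
grad_f = sp.Matrix(3, 3, lambda i, j: sp.diff(f[j], X[i]))

# Divergence of a dyad, eq. (9): component j is the divergence of column j.
def dyad_div(F):
    return sp.Matrix([sum(sp.diff(F[i, j], X[i]) for i in range(3))
                      for j in range(3)])

# Curl of a dyad, eq. (10): column j is the curl of column j.
def dyad_curl(F):
    def curl(v):
        return sp.Matrix([sp.diff(v[2], X[1]) - sp.diff(v[1], X[2]),
                          sp.diff(v[0], X[2]) - sp.diff(v[2], X[0]),
                          sp.diff(v[1], X[0]) - sp.diff(v[0], X[1])])
    return sp.Matrix.hstack(*[curl(F[:, j]) for j in range(3)])

# div(grad f) reduces to the componentwise Laplacian of f ...
laplacian = sp.Matrix([sum(sp.diff(fj, xi, 2) for xi in X) for fj in f])
assert sp.simplify(dyad_div(grad_f) - laplacian) == sp.zeros(3, 1)

# ... and curl(grad f) vanishes, since each column is the curl of a gradient.
assert sp.simplify(dyad_curl(grad_f)) == sp.zeros(3, 3)
print("definitions (9)-(11) are consistent on the test field")
```

Both checks follow directly from the column-wise definitions: the dyadic operators simply apply the familiar vector operators to each constituent vector $\bar{F}_j$ in turn.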