Modeling Performance of Tensor Transpose Using Regression Techniques


A Thesis Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the Graduate School of The Ohio State University

By Rohit Kumar Srivastava, B.E.
Graduate Program in Computer Science and Engineering
The Ohio State University
2018

Master's Examination Committee:
Dr. P. Sadayappan, Advisor
Dr. Radu Teodorescu

© Copyright by Rohit Kumar Srivastava 2018

Abstract

Tensor transposition is an important primitive in many tensor algebra libraries. For example, tensor contractions are implemented using the TTGT (Transpose-Transpose-GEMM-Transpose) approach. Performing an efficient transpose of an arbitrary tensor requires different optimization techniques depending on the required permutation. Exhaustive evaluation of all parameter choices, such as slice size and blocking, is prohibitively expensive. We present an approach to model the performance of the different kernels inside TTLG, a Tensor Transpose Library for GPUs, for different parameters such as slice size, blocking, and resultant warp efficiency. Predictions made by this model are then used to guide kernel and parameter selection.

Dedication

To my mother, father and brother, for their unconditional love and support.

Acknowledgments

This thesis wouldn't have been possible without the guidance and support of many people. First of all, I would like to express my gratitude to my advisor, Prof. P. Sadayappan, for his guidance, feedback, patience, and critical discussions throughout the process. I'm grateful to him for providing me the opportunity to work with him on this project.

A special thanks to Aravind. Regular discussions with him helped me gain deeper insight into the problem and develop a fundamentally better understanding of the domain. This improved my technical abilities. I would like to thank my lab mates Jinsung, Vineeth, Kunal, Emre, Rui, Changwan, Prashant, Israt, Wenlei and Gordon for making my past year eventful and memorable at HPCRL. I am thankful to Akshay Mehra, Aaditya Chauhan, Akhil Guliyani, Anhad Mohananey and Dushyanta Dhyani for the constant support, trust and faith they have put in me, and for always being there during tough times of grad school life. I thank all my friends Prithvi, Sankeerth, Deepankar, Piyush, Pravar, Anant, Ajit, Sayam, Pragya and Anu for making Columbus my home and helping my transition to the United States much easier.

Finally, all of this wouldn't be possible without the sacrifices and hard work of my parents and my brother. Without their support, love and encouragement, I wouldn't have made it this far in life. I am nothing without them.

Vita

August 2008 – May 2012: Bachelor of Engineering, Computer Engineering, Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India
July 2012 – August 2015: SDE, Infibeam.com, Gurugram, Haryana
September 2015 – August 2016: SDE-1, Expedia.inc, Gurugram, Haryana
January 2017 – May 2017: Graduate Teaching Associate, The Ohio State University, Columbus, Ohio
May 2017 – August 2017: Software Developer Intern, Amazon Web Services, Seattle, Washington
August 2017 – present: Graduate Research Associate, The Ohio State University, Columbus, Ohio

Publications

Research Publications

Jyothi Vedurada, Arjun Suresh, Aravind Sukumaran Rajam, Jinsung Kim, Changwan Hong, Sriram Krishnamoorthy, Ajay Panyala, V. Krishna Nandivada, Rohit Kumar Srivastava, P. Sadayappan. Efficient Tensor Transpose Library for GPUs. IPDPS, May 2018.

Fields of Study

Major Field: Computer Science and Engineering
Table of Contents

Abstract
Dedication
Acknowledgments
Vita
List of Tables
List of Figures

1. Introduction
   1.1 TTLG: Tensor Transpose Library for GPUs
   1.2 Regression Analysis
       1.2.1 External Model
       1.2.2 Internal Model
   1.3 Contribution
   1.4 Organization of the Thesis
2. Background
   2.1 GPU Architecture and CUDA Programming
   2.2 Kernel Selection inside TTLG
   2.3 TTLG Kernels
       2.3.1 FVINoMatchG32: Fastest varying indices do not match and their sizes are greater than 32
       2.3.2 FVIMatchL32: Fastest varying indices match and size less than 32
       2.3.3 FVIMatchG32: Fastest varying indices match and their sizes are greater than 32
       2.3.4 FVINoMatchGeneral: Fastest varying indices do not match and there is no overlap between indices of input and output slice
       2.3.5 FVINoMatchOverlap: General scheme that can handle both matching and non-matching fvi of input and output tensors; indices mapped to a slice from the input and output tensor can overlap
   2.4 Regression Models
       2.4.1 Linear Regression
       2.4.2 Random Forest Regression
3. Challenges
4. Linear Regression Model
   4.1 Efficiency Calculation
   4.2 Feature Engineering
       4.2.1 Features
   4.3 Derived Features
   4.4 Data Collection
   4.5 Utilizing the Regression Model to Improve Performance of Tensor Transposition
5. Experiments and Results
6. Conclusion and Future Work
Bibliography

List of Tables

5.1 Hardware Configuration
5.2 Mean of Absolute Error Percentage for Linear and Random Forest Regression

List of Figures

1.1 Model using Volume as input feature
2.1 TTLG Kernel selection flowchart
2.2 FVINoMatchG32 Scheme
2.3 FVIMatchL32 Scheme
2.4 FVINoMatchGeneral Scheme
2.5 FVINoMatchOverlap Scheme
2.6 Linear Regression
2.7 Decision Tree
2.8 Random Forest
4.1 Read slice from Global Memory
4.2 Write slice to Global Memory
4.3 Type of sub-slices in a Slice
4.4 Types of slices in Tensor
4.5 Regression Model for TTLG
5.1 Performance Comparison of previous versus new implementation of FVIMatchG32
5.2 MAEP during training phase and prediction phase
5.3 TTLG Performance on All 15
5.4 Kernel Prediction by Model for All 15
5.5 Error Frequencies for All 15
5.6 Performance on All 16
5.7 Kernel Prediction by Model for All 16
5.8 Error Frequencies for All 16
5.9 Performance on All 17
5.10 Kernel Prediction by Model for All 17
5.11 Error Frequencies for All 17

Chapter 1: Introduction

Tensor transposition is an important layout transformation primitive for many domains, such as machine learning, tensor contraction (TTGT [1]), and computational chemistry, that use tensors as a core data structure. It involves a permutation of the indices of an input tensor:

B_{\rho(i_0, i_1, \ldots, i_{d-1})} = A_{i_0, i_1, \ldots, i_{d-1}}

where A and B are the input and output tensors, respectively, and \rho denotes a permutation function that maps output indices to input indices.

An arbitrary transposition of a d-dimensional tensor can be achieved with d nested loops, or by computing the memory offset of each element and using a single loop over the complete volume of the tensor in a 1D fashion. Both approaches can be inefficient on a GPU. For example, when transposing a 2D tensor, successive elements read from a column of the input tensor are separated by large strides, which leads to uncoalesced memory accesses.
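To make the two naive strategies concrete, here is a minimal Python/NumPy sketch of both: the d nested loops expressed as a single iterator over all output index tuples, and the flat 1D loop that computes each source offset from strides. The function names and the row-major layout are assumptions for illustration only; TTLG's actual kernels are CUDA, and this sketch captures just the indexing logic.

```python
import itertools
import numpy as np

def transpose_nested(A, perm):
    # B[i_0, ..., i_{d-1}] = A[j_0, ..., j_{d-1}] with j[perm[k]] = i_k,
    # i.e. output axis k corresponds to input axis perm[k] (NumPy convention).
    out_shape = tuple(A.shape[p] for p in perm)
    B = np.empty(out_shape, dtype=A.dtype)
    # The "d nested loops", written as one loop over all index tuples.
    for out_idx in itertools.product(*map(range, out_shape)):
        in_idx = [0] * A.ndim
        for k, p in enumerate(perm):
            in_idx[p] = out_idx[k]
        B[out_idx] = A[tuple(in_idx)]
    return B

def transpose_flat(A, perm):
    # Single 1D loop over the whole volume: decompose each destination
    # offset into output coordinates, then fold them back into a source
    # offset via the input tensor's strides (row-major layout assumed).
    out_shape = tuple(A.shape[p] for p in perm)
    in_strides = [s // A.itemsize for s in A.strides]  # strides in elements
    out_strides, acc = [0] * A.ndim, 1
    for d in reversed(range(A.ndim)):
        out_strides[d] = acc
        acc *= out_shape[d]
    src, dst = A.ravel(), np.empty(A.size, dtype=A.dtype)
    for o in range(A.size):
        rem, in_off = o, 0
        for d in range(A.ndim):
            coord, rem = divmod(rem, out_strides[d])
            in_off += coord * in_strides[perm[d]]
        # For a 2D (1, 0) permutation, consecutive values of o make src
        # jump by a whole row: exactly the large-stride reads that defeat
        # coalescing on a GPU.
        dst[o] = src[in_off]
    return dst.reshape(out_shape)

A = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
for perm in itertools.permutations(range(3)):
    assert np.array_equal(transpose_nested(A, perm), A.transpose(perm))
    assert np.array_equal(transpose_flat(A, perm), A.transpose(perm))
```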
1.1 TTLG: Tensor Transpose Library for GPUs

TTLG [4] is a library developed to perform tensor transposition efficiently on a GPU. It divides the total work into slices, each of which is transposed by a thread block and written to its appropriate position in global memory. The transpose is performed in GPU memory, which requires the GPU to have enough memory for both the tensor and its transpose. We will refer to the tensor being transposed as the input tensor and the transposed tensor as the output tensor. TTLG uses techniques such as thread coarsening to improve thread occupancy and shared-memory padding to provide conflict-free access to shared memory. To achieve coalesced memory reads and writes, indirection arrays are used to read elements from the input tensor and write them to the output tensor.

TTLG uses different GPU kernels to perform tensor transposition. For certain tensor sizes and output permutations, the choice of kernel is simple and based on a few conditional checks. But some output permutations can be handled by multiple kernels; these are the cases where the fastest varying index (fvi) of the input and output tensors do not match, and TTLG contains several kernel types that optimize the transpose operation using different techniques. One way to find the best-performing kernel for a given input is to evaluate all possible kernels and then select the best one, as TTC [8] does. That approach pays off when the use case requires repeated transposition of the same tensor size and output permutation; for single use, it can consume a significant amount of time and slow down the library. Another approach is to use heuristics to prune the parameter search space, as cuTT [3] does, but this may not deliver the best achievable bandwidth for the given input. TTLG instead uses efficiency-based calculations to predict the performance of the candidate kernels and selects the one it expects to perform best.

1.2 Regression Analysis

Regression analysis is a statistical technique used to find the relationship between independent variables (such as shared memory, input and output slice sizes, stride, warp efficiency, and the fvi of the input and output tensors) and dependent variables (performance metrics such as operations per second and bandwidth). It helps in understanding how the dependent variable varies with changes to the independent variables.
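As a sketch of how such a performance model can be built, the snippet below fits both model families the thesis later compares (linear regression and random forest regression, Section 2.4) and scores them with the mean of absolute error percentage (MAEP), the metric reported in Chapter 5. The feature names and the synthetic bandwidth function are invented stand-ins for illustration, not TTLG's actual feature set or training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for collected training data: one row of kernel
# parameters per sample, achieved bandwidth (GB/s) as the target.
n = 2000
X = np.column_stack([
    rng.integers(8, 129, n).astype(float),  # input slice size
    rng.integers(8, 129, n).astype(float),  # output slice size
    rng.uniform(0.1, 1.0, n),               # warp efficiency
    rng.integers(1, 9, n).astype(float),    # blocking factor
])
# Invented relationship: bandwidth grows with warp efficiency and with
# the smaller of the two slice sizes, plus measurement noise.
y = 400.0 * X[:, 2] + 2.0 * np.minimum(X[:, 0], X[:, 1]) \
    + rng.normal(0.0, 10.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=100, random_state=0)):
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    maep = 100.0 * np.mean(np.abs((pred - y_te) / y_te))
    print(f"{type(model).__name__}: MAEP = {maep:.2f}%")
```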
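And as a sketch of the selection strategy described in Section 1.1, combined with models like the ones above: predict each candidate kernel's performance and run the predicted winner, instead of timing every kernel. The `featurize` helper and the one-regressor-per-kernel layout are hypothetical stand-ins for TTLG internals, introduced only for this illustration.

```python
def select_kernel(models, featurize, shape, perm):
    """Return the kernel name with the highest predicted bandwidth.

    models:    dict mapping kernel names (e.g. "FVIMatchG32") to fitted
               regressors -- hypothetical stand-ins for TTLG internals.
    featurize: callable building the feature row that a given kernel's
               model was trained on -- also hypothetical.
    """
    best_name, best_bw = None, float("-inf")
    for name, model in models.items():
        bw = model.predict([featurize(shape, perm, name)])[0]
        if bw > best_bw:
            best_name, best_bw = name, bw
    return best_name, best_bw
```

Unlike exhaustive evaluation in the style of TTC, this costs only one model prediction per candidate kernel, which is why prediction-guided selection stays cheap even for one-shot transposes.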