Appendix A: Tensors, Tensor Products and Tensor Operations in Three Dimensions


A.1 Vectors and vector operations

The purpose of this section is to summarize some basic properties of vectors and vector operations. Complete descriptions of linear vector spaces can be found in standard texts on linear algebra (Noble, 1969), and a more complete summary of the use of vectors in continuum mechanics can be found in (Sokolnikoff, 1964). For the present purpose it is sufficient to recall that a vector in three-dimensional space is usually identified with an arrow connecting two points in space. This arrow has a magnitude and a specific direction.

[Fig. A.1.1 Parallelogram rule of vector addition.]

Some basic vector operations can be summarized as follows. If a, b are two vectors, then the quantity

    c = a + b ,   (A.1.1)

is also a vector, defined by the parallelogram law of addition (see Fig. A.1.1). Furthermore, the operations

    a + b = b + a   (commutative law) ,
    (a + b) + c = a + (b + c)   (associative law) ,
    αa = aα   (multiplication by a real number) ,
    a · b = b · a   (commutative law) ,
    a · (b + c) = a · b + a · c   (distributive law) ,
    α(a · b) = (αa) · b   (associative law) ,
    a × b = −b × a   (lack of commutativity) ,
    a × (b + c) = a × b + a × c   (distributive law) ,
    α(a × b) = (αa) × b   (associative law) ,   (A.1.2)

are satisfied for all vectors a, b, c and all real numbers α, where a · b denotes the dot product (or scalar product) and a × b denotes the cross product (or vector product) between the vectors a and b.

A.2 Tensors as linear operators

Scalars (or real numbers) are referred to as tensors of order zero, and vectors are referred to as tensors of order one. Here, higher order tensors are defined inductively, starting with the notion of a vector.

Tensor of Order M: The quantity T is called a tensor of order two (or a second order tensor) if it is a linear operator whose domain is the space of all vectors v and whose range Tv or vT is a vector. Similarly, the quantity T is called a tensor of order three if it is a linear operator whose domain is the space of all vectors v and whose range Tv or vT is a tensor of order two. Consequently, by induction, the quantity T is called a tensor of order M (M ≥ 1) if it is a linear operator whose domain is the space of all vectors v and whose range Tv or vT is a tensor of order (M − 1). Since T is a linear operator, it satisfies the following rules

    T(v + w) = Tv + Tw ,   α(Tv) = (αT)v = T(αv) ,
    (v + w)T = vT + wT ,   α(vT) = (αv)T = (vT)α ,   (A.2.1)

where v, w are arbitrary vectors and α is an arbitrary scalar. Notice that the tensor T can be operated upon on its right [e.g. (A.2.1)_{1,2}] or on its left [e.g. (A.2.1)_{3,4}], and that, in general, operation on the right and on the left is not commutative

    Tv ≠ vT   (lack of commutativity in general) .   (A.2.2)

Zero Tensor of Order M: The zero tensor of order M is denoted by 0(M) and is a linear operator whose domain is the space of all vectors v and whose range 0(M−1) is the zero tensor of order M − 1:

    0(M) v = v 0(M) = 0(M−1) .   (A.2.3)

Notice that these tensors are defined inductively, starting with the known properties of the real number 0, which is the zero tensor 0(0) of order 0. Often, for simplicity in writing a tensor equation, the zero tensor of any order is denoted by the symbol 0.

Addition and Subtraction: The usual rules of addition and subtraction of two tensors A and B apply only when the two tensors have the same order. It should be emphasized that tensors of different orders cannot be added or subtracted.
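The appendix states these rules in direct (coordinate-free) notation. As a numerical cross-check, the following is a minimal Python/NumPy sketch (not part of the original text), assuming an orthonormal basis so that a second order tensor is represented by a 3×3 array of components, the right operation Tv by T @ v, and the left operation vT by v @ T; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.random((3, 3))   # components of a generic second order tensor
v = rng.random(3)
w = rng.random(3)
alpha = 2.5

# Linearity rules (A.2.1)
assert np.allclose(T @ (v + w), T @ v + T @ w)
assert np.allclose(alpha * (T @ v), (alpha * T) @ v)
assert np.allclose(alpha * (T @ v), T @ (alpha * v))
assert np.allclose((v + w) @ T, v @ T + w @ T)

# Lack of commutativity (A.2.2): Tv and vT differ for a generic
# (non-symmetric) tensor T
print(np.allclose(T @ v, v @ T))   # False
```

For a symmetric T the two operations coincide, so the distinction between Tv and vT only becomes visible for general, non-symmetric tensors.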
A.3 Tensor products (special case)

In order to define the operations of tensor product, dot product, and juxtaposition for general tensors, it is convenient to first consider the definitions of these operations for special tensors. Also, the operations of left and right transpose of the tensor product of a string of vectors will be defined. It will be seen later that the operation of dot product is defined to be consistent with the usual notion of the dot product as an inner product, because it is a positive definite operation. Consequently, the dot product of a tensor with itself yields the square of the magnitude of the tensor. Also, the operation of juxtaposition is defined to be consistent with the usual procedures for matrix multiplication when two second order tensors are juxtaposed.

Tensor Product (Special Case): The tensor product operation is denoted by the symbol ⊗ and is defined so that, for an arbitrary vector v, the quantity (a_1 ⊗ a_2) is a second order tensor satisfying the relations

    (a_1 ⊗ a_2) v = a_1 (a_2 · v) ,   v (a_1 ⊗ a_2) = (v · a_1) a_2 .   (A.3.1)

Similarly, the quantity (a_1 ⊗ a_2 ⊗ a_3) is a third order tensor satisfying the relations

    (a_1 ⊗ a_2 ⊗ a_3) v = (a_1 ⊗ a_2)(a_3 · v) ,   v (a_1 ⊗ a_2 ⊗ a_3) = (v · a_1)(a_2 ⊗ a_3) .   (A.3.2)

For convenience in generalizing these ideas, let A be a special tensor of order M formed by the tensor product of a string of M (M ≥ 2) vectors (a_1, a_2, a_3, ..., a_M), and let B be a special tensor of order N formed by the tensor product of a string of N (N ≥ 2) vectors (b_1, b_2, b_3, ..., b_N), so that

    A = a_1 ⊗ a_2 ⊗ a_3 ⊗ ... ⊗ a_M ,   B = b_1 ⊗ b_2 ⊗ b_3 ⊗ ... ⊗ b_N ,   (A.3.3)

and for definiteness take

    M ≤ N .   (A.3.4)

It then follows that A satisfies the relations

    A v = (a_1 ⊗ a_2 ⊗ ... ⊗ a_M) v = (a_1 ⊗ a_2 ⊗ ... ⊗ a_{M−1})(a_M · v) ,
    v A = v (a_1 ⊗ a_2 ⊗ ... ⊗ a_M) = (v · a_1)(a_2 ⊗ a_3 ⊗ ... ⊗ a_M) .   (A.3.5)

Notice that when v operates on either the right or left side of A, it has the effect of forming the scalar product with the vector in the string closest to it, and it causes the result Av or vA to have order one less than the order of A, since one of the tensor products is removed. The remaining vectors in the string of tensor products are unaltered.

Dot Product (Special Case): The dot product operation between two vectors can be generalized to an operation between two tensors of any orders. For example, the dot product between two second order tensors can be written as

    (a_1 ⊗ a_2) · (b_1 ⊗ b_2) = (a_1 · b_1)(a_2 · b_2) ,
    (b_1 ⊗ b_2) · (a_1 ⊗ a_2) = (b_1 · a_1)(b_2 · a_2) ,   (A.3.6)

and the dot product between a second order tensor and a third order tensor can be written as

    (a_1 ⊗ a_2) · (b_1 ⊗ b_2 ⊗ b_3) = (a_1 · b_1)(a_2 · b_2) b_3 ,
    (b_1 ⊗ b_2 ⊗ b_3) · (a_1 ⊗ a_2) = b_1 (b_2 · a_1)(b_3 · a_2) .   (A.3.7)

Also, the dot product between two third order tensors can be written as

    (a_1 ⊗ a_2 ⊗ a_3) · (b_1 ⊗ b_2 ⊗ b_3) = (a_1 · b_1)(a_2 · b_2)(a_3 · b_3) ,
    (b_1 ⊗ b_2 ⊗ b_3) · (a_1 ⊗ a_2 ⊗ a_3) = (b_1 · a_1)(b_2 · a_2)(b_3 · a_3) ,   (A.3.8)

and the dot product between a second order tensor and a fourth order tensor can be written as

    (a_1 ⊗ a_2) · (b_1 ⊗ b_2 ⊗ b_3 ⊗ b_4) = (a_1 · b_1)(a_2 · b_2)(b_3 ⊗ b_4) ,
    (b_1 ⊗ b_2 ⊗ b_3 ⊗ b_4) · (a_1 ⊗ a_2) = (b_1 ⊗ b_2)(b_3 · a_1)(b_4 · a_2) .   (A.3.9)

This dot product operation can be generalized for special tensors of any orders, like A and B, by carefully examining examples (A.3.7) and (A.3.9).
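Before drawing the general conclusions, here is a short NumPy sketch (again outside the original text) of the special-tensor relations, with the string a_1 ⊗ a_2 ⊗ a_3 stored as a 3×3×3 component array built with np.einsum; this component-level representation and the variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
a1, a2, a3, v = (rng.random(3) for _ in range(4))

# Components of the third order tensor A = a1 (x) a2 (x) a3
A = np.einsum('i,j,k->ijk', a1, a2, a3)

# (A.3.2)_1: operating on the right, v dots with the nearest vector a3,
# leaving the second order tensor (a1 (x) a2)(a3 . v)
Av = np.einsum('ijk,k->ij', A, v)
assert np.allclose(Av, np.outer(a1, a2) * (a3 @ v))

# (A.3.2)_2: operating on the left, v dots with a1 instead,
# leaving (v . a1)(a2 (x) a3)
vA = np.einsum('i,ijk->jk', v, A)
assert np.allclose(vA, (v @ a1) * np.outer(a2, a3))
```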
These examples indicate that the dot product between tensors of different orders does not necessarily commute, whereas the dot product of two tensors of the same order does commute:

    A · B ≠ B · A for M < N ,   A · B = B · A for M = N .   (A.3.10)

Moreover, it can be seen that the tensor of smallest order controls the outcome of the dot product operation. Specifically, in (A.3.7)_1, the second order tensor (a_1 ⊗ a_2) appears on the left of the third order tensor (b_1 ⊗ b_2 ⊗ b_3). Since (a_1 ⊗ a_2) is a second order tensor, this causes only the first two vectors on the left of (b_1 ⊗ b_2 ⊗ b_3) to form inner products with a_1 and a_2. Similarly, since (a_1 ⊗ a_2) appears on the right side of (b_1 ⊗ b_2 ⊗ b_3) in (A.3.7)_2, this causes only the first two vectors on the right of (b_1 ⊗ b_2 ⊗ b_3) to form inner products with a_1 and a_2. In general, the dot product A · B of the special tensors defined in (A.3.3) is a tensor of order |M − N|. Furthermore, in view of the restriction (A.3.4), only the first M vectors of B closest to A will form inner products with the vectors of A. It is also important to note that the order of the strings of vectors in the inner products remains the same as that in the tensors. Specifically, for the dot product A · B, the first vector on the left side of A forms an inner product with the first vector on the left side of B, and so forth.
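To make the order bookkeeping of (A.3.7) and (A.3.10) concrete, a final NumPy sketch under the same assumed component representation (illustrative names, not the book's notation):

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2, b1, b2, b3 = (rng.random(3) for _ in range(5))

A = np.einsum('i,j->ij', a1, a2)            # order M = 2
B = np.einsum('i,j,k->ijk', b1, b2, b3)     # order N = 3

# (A.3.7)_1: A . B contracts the M leftmost vectors of B with A,
# leaving a tensor of order |M - N| = 1 (a vector)
AdotB = np.einsum('ij,ijk->k', A, B)
assert np.allclose(AdotB, (a1 @ b1) * (a2 @ b2) * b3)

# (A.3.7)_2: B . A contracts the M vectors of B closest to A on the right
BdotA = np.einsum('ijk,jk->i', B, A)
assert np.allclose(BdotA, b1 * (b2 @ a1) * (b3 @ a2))

# (A.3.10): for M < N the dot product does not commute ...
print(np.allclose(AdotB, BdotA))   # False in general

# ... but for equal orders (M = N) it commutes and yields a scalar
C = np.einsum('i,j->ij', b1, b2)
assert np.isclose(np.einsum('ij,ij->', A, C), np.einsum('ij,ij->', C, A))
```

Note that both dot products of the second order tensor with the third order tensor leave a vector, |M − N| = 1, consistent with the statement that the tensor of smallest order controls the outcome.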