Low-Level Image Processing with the Structure Multivector

Total Pages: 16

File Type: PDF, Size: 1020 KB

Low-Level Image Processing with the Structure Multivector

Michael Felsberg. Bericht Nr. 0202, Institut für Informatik und Praktische Mathematik der Christian-Albrechts-Universität zu Kiel, Olshausenstr. 40, D-24098 Kiel. E-mail: [email protected]. 12 March 2002. This report contains the author's dissertation. First reviewer: Prof. G. Sommer (Kiel); second reviewer: Prof. U. Heute (Kiel); third reviewer: Prof. J. J. Koenderink (Utrecht). Date of the oral examination: 12 February 2002.

To Regina

ABSTRACT

The present thesis deals with two-dimensional signal processing for computer vision. The main topic is the development of a sophisticated generalization of the one-dimensional analytic signal to two dimensions. Motivated by the fundamental property of the latter, the invariance-equivariance constraint, and by its relation to complex analysis and potential theory, a two-dimensional approach is derived. This method is called the monogenic signal, and it is based on the Riesz transform instead of the Hilbert transform. By means of this linear approach it is possible to estimate the local orientation and the local phase of signals which are projections of one-dimensional functions to two dimensions. For general two-dimensional signals, however, the monogenic signal has to be further extended, yielding the structure multivector. The latter approach combines the ideas of the structure tensor and the quaternionic analytic signal. A rich feature set can be extracted from the structure multivector, containing measures for local amplitudes, the local anisotropy, the local orientation, and two local phases. Both the monogenic signal and the structure multivector are combined with an appropriate scale-space approach, resulting in generalized quadrature filters. Like the monogenic signal itself, the applied scale-space approach is derived from the three-dimensional Laplace equation instead of the diffusion equation. Hence, the two-dimensional generalization of the analytic signal turns out to provide a whole new framework for low-level vision. Several applications are presented to show the efficiency and power of the theoretic considerations. Among these are methods for orientation estimation, edge and corner detection, stereo correspondence and disparity estimation, and adaptive smoothing.

ACKNOWLEDGMENT

First of all, I thank my supervisors Prof. Gerald Sommer and Prof. Ulrich Heute for guiding me through the interesting fields of computer vision and signal theory. They allowed me to work as independently as was necessary to obtain substantial new results. However, without the countless, intensive discussions with Prof. Gerald Sommer and without his scientific intuition I would not have been able to develop the presented ideas and to write this thesis as it is. He supported me whenever it was necessary. Furthermore, I acknowledge the support and help of all former and present members of the cognitive systems group in Kiel. Especially the fruitful discussions with Norbert Krüger, Thomas Bülow, and Hongbo Li have had a great influence on my work. I thank Gerd Diesner and Henrik Schmidt for their technical support and Françoise Maillard for her help in administrative matters.
I am grateful for all the discussions with other scientists and for the references they have given to me, especially Kieran Larkin, Robin Owens, Terence Tao, Dieter Betten, Joachim Weickert, Christoph Schnörr, Bernd Jähne, Hanno Scharr, Wolfgang Förstner, Gösta Granlund, Josef Bigün, Moshe Porat, Nir Sochen, Tony Lindeberg, and Jan Koenderink. I acknowledge Hongbo Li, Jim Byrnes, Norbert Krüger, and Christian Perwass for proofreading parts of this thesis. I am also grateful to the German National Merit Foundation (Studienstiftung des deutschen Volkes) and the DFG Graduiertenkolleg No. 357 for their financial support during my PhD studies. Last, but not least, I thank my family for their moral support: my parents Hans and Beate, my sister Susanne, my parents-in-law Otfried and Rosemarie, my brother-in-law Sebastian, my daughter Lisa-Marie, and especially my wife Regina.

CONTENTS

1. Introduction
   1.1 Terms and Motivation
   1.2 Known Approaches for the 2D Analytic Signal
   1.3 Outline of this Thesis
2. Geometric Algebra
   2.1 Geometric Algebra of ℝ²
       2.1.1 Definition of the Algebra
       2.1.2 Group Representation in ℝ²
   2.2 Geometric Algebra of ℝ³
       2.2.1 Definition of the Algebra
       2.2.2 Group Representation in ℝ³
   2.3 Complex Analysis and 2D Harmonic Fields
       2.3.1 The Cauchy-Riemann Equations
       2.3.2 Solutions of the 2D Laplace Equation
   2.4 3D Harmonic Fields
       2.4.1 The Generalized Cauchy-Riemann Equations
       2.4.2 The 3D Laplace Equation and its Solutions
   2.5 Summary of Chapter 2
3. Signal Theoretic Fundamentals
   3.1 The 1D Fourier Transform and LSI Operators
       3.1.1 Definitions
       3.1.2 Basic Theorems
   3.2 The Phase in 1D
       3.2.1 The Analytic Signal and the Local Phase
       3.2.2 Properties of the 1D Analytic Signal
   3.3 The 2D Fourier Transform and Linear Operators
       3.3.1 Definitions
       3.3.2 Basic Theorems
       3.3.3 Plancherel Theorem and 2D Uncertainty
   3.4 The Structure Tensor
       3.4.1 The Structure Tensor and the Intrinsic Dimension
       3.4.2 Interpretation of the Structure Tensor
   3.5 Spherical Harmonics and Steerable Filters
       3.5.1 Fourier Series
       3.5.2 Applying Steerable Filters
   3.6 Summary of Chapter 3
4. The Isotropic 2D Generalization of the Analytic Signal
   4.1 Derivation of the Hilbert Transform
       4.1.1 The 1D Signal Model
       4.1.2 Relation to the Laplace Transform
   4.2 The Riesz Transform
       4.2.1 The 2D Signal Model
       4.2.2 Relation to the Fourier Kernel and a Short Survey
   4.3 The Monogenic Signal
       4.3.1 Definition and Properties
       4.3.2 The Definition of the Local Phase
   4.4 The Interpretation of the Local Phase
       4.4.1 Transferring the 1D Phase to 2D
       4.4.2 Decomposition of the Phase Vector
   4.5 A New Scale-Space Approach
       4.5.1 The Scale-Space Axiomatic of Iijima
       4.5.2 Comparison to Gaussian Scale-Space
   4.6 The Spherical Quadrature Filter
       4.6.1 The DOP Wavelet
       4.6.2 The Definition of Spherical Quadrature Filter
       4.6.3 The Intrinsic Scale
   4.7 Summary of Chapter 4
5. Adaption of 2D Quadrature Filters to the Local Intrinsic Dimension
   5.1 Introducing Local Anisotropies
       5.1.1 The Metric Tensor
       5.1.2 The Laplace Equation in the Deformed Space
       5.1.3 Interpretation in the Fourier Domain
   5.2 Approximation of the Local Metric
       5.2.1 Taylor Expansion of the Poisson Kernel
       5.2.2 Taylor Expansion of the Riesz Transform
       5.2.3 Isotropy and Coherence
   5.3 Steering of the Approximation
       5.3.1 Steering the Poisson Kernel Approximation
       5.3.2 Steering the Conjugate Poisson Kernel Approximation
   5.4 The Isotropic Analytic Signal for i2D Signals
       5.4.1 The 2D Signal Model for i2D Signals
       5.4.2 The Error of the Monogenic Signal for i2D Signals
       5.4.3 Spectral Decomposition According to Symmetries
       5.4.4 Rotation Invariant 2D Decomposition
   5.5 The Structure Multivector
       5.5.1 Steering by the Local Orientation
       5.5.2 Amplitude and Phase Decomposition
       5.5.3 The Properties of the Structure Multivector
   5.6 Summary of Chapter 5
6. Applications
   6.1 Orientation Estimation
       6.1.1 Optimal Local Orientation Estimation
       6.1.2 Orientation Estimation by the Riesz Transform
       6.1.3 Orientation Estimation with Spherical Harmonics
   6.2 Edge Detection
       6.2.1 Phase Congruency
       6.2.2 Derivation of the Filter Set
       6.2.3 Experiments and Comparisons
   6.3 Stereo Correspondence and Disparity Estimation
       6.3.1 Stereo Correspondence by Matching on the Epipolar Line
       6.3.2 Disparity from Monogenic Phase
   6.4 Adaptive Smoothing
       6.4.1 The Adaptive Smoothing Algorithm
       6.4.2 Experiments and Comparison
   6.5 Corner Detection and Curvature Estimation
       6.5.1 The Local Isotropy and Curvature
       6.5.2 Corner Detection Experiments and Comparisons
   6.6 Summary of Chapter 6
7. Conclusion
   7.1 Summary of the Results
   7.2 Further Extensions, Open Problems, and Future Work
A. Fourier Transform Pairs and Uncertainties
   A.1 1D Fourier Transform Pairs
   A.2 2D Fourier Transform Pairs
   A.3 Uncertainties
   A.4 Simplified Transfer Function of the Monogenic Signal
Bibliography
Index

Chapter 1: INTRODUCTION

'This question is best answered by making use of one quite singular improper rotation, namely reflection in O; it carries any point P into its antipode P′ ...' Hermann Weyl (1885-1955)

The quotation above, which states that the answer is a reflection in the origin, can be considered the central point of my thesis. However, what is the question to this answer? The question could be: 'Is there a sophisticated 2D generalization of the Hilbert transform which can be used for image processing and, if there is, what does it look like?' As can be concluded from this question, the thesis is about a topic related to three different fields: mathematics (harmonic analysis), engineering (signal theory), and computer science (image processing).
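The construction named in the abstract can be summarized in standard notation (a compact restatement of the commonly published definitions under one common sign convention, not quoted from the report): the 1D analytic signal pairs a signal with its Hilbert transform, and the monogenic signal replaces the Hilbert transform by the two-component Riesz transform.

```latex
% 1D: analytic signal f_A built from the Hilbert transform h * f
f_A(x) = f(x) + i\,(h * f)(x),
\qquad
\widehat{(h * f)}(u) = -i\,\operatorname{sgn}(u)\,\hat{f}(u)

% 2D: monogenic signal f_M built from the Riesz transform r_j * f, j = 1, 2
f_M(\mathbf{x}) = \bigl(f,\; r_1 * f,\; r_2 * f\bigr)(\mathbf{x}),
\qquad
\widehat{(r_j * f)}(\mathbf{u}) = -\,\frac{i\,u_j}{|\mathbf{u}|}\,\hat{f}(\mathbf{u})

% Local features: amplitude, phase, and orientation
A = \sqrt{f^2 + (r_1 * f)^2 + (r_2 * f)^2},
\quad
\varphi = \operatorname{atan2}\!\Bigl(\sqrt{(r_1 * f)^2 + (r_2 * f)^2},\, f\Bigr),
\quad
\theta = \operatorname{atan2}(r_2 * f,\; r_1 * f)
```

These are the local amplitude, phase, and orientation measures referred to in the abstract.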
Recommended publications
  • Topology and Physics 2019 - Lecture 2
Marcel Vonk, February 12, 2019.

2.1 Maxwell theory in differential form notation. Maxwell's theory of electrodynamics is a great example of the usefulness of differential forms. A nice reference on this topic, though somewhat outdated when it comes to notation, is [1]. For notational simplicity, we will work in units where the speed of light, the vacuum permittivity and the vacuum permeability are all equal to 1: c = ε₀ = µ₀ = 1.

2.1.1 The dual field strength. In three-dimensional space, Maxwell's electrodynamics describes the physics of the electric and magnetic fields E and B. These are three-dimensional vector fields, but the beauty of the theory becomes much more obvious if we (a) use a four-dimensional relativistic formulation, and (b) write it in terms of differential forms. For example, let us look at Maxwell's two source-free, homogeneous equations:

∇ · B = 0,   ∂_t B + ∇ × E = 0.   (2.1)

That these equations have a relativistic flavor becomes clear if we write them out in components and organize the terms somewhat suggestively:

   0        + ∂_x B^x + ∂_y B^y + ∂_z B^z = 0
−∂_t B^x +    0       − ∂_y E^z + ∂_z E^y = 0
−∂_t B^y + ∂_x E^z +    0       − ∂_z E^x = 0   (2.2)
−∂_t B^z − ∂_x E^y + ∂_y E^x +    0       = 0

Note that we also multiplied the last three equations by −1 to clarify the structure. All in all, we see that we have four equations (one for each space-time coordinate) which each contain terms in which the four coordinate derivatives act. Therefore, we may be tempted to write our set of equations in more "relativistic" notation as

∂_µ F̂^{µν} = 0   (2.3)

with F̂^{µν} the coordinates of an antisymmetric two-tensor (i.
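The punchline the excerpt is heading toward can be stated in one line of differential-form notation (standard material, supplied here for context rather than quoted from the lecture):

```latex
F = \tfrac{1}{2}\,F_{\mu\nu}\,dx^{\mu}\wedge dx^{\nu},
\qquad
dF = 0
\;\Longleftrightarrow\;
\partial_{[\lambda}F_{\mu\nu]} = 0
\;\Longleftrightarrow\;
\nabla\cdot\vec{B} = 0
\ \text{and}\
\partial_{t}\vec{B} + \nabla\times\vec{E} = 0 .
```

The four equations of (2.2) are exactly the four independent components of the single three-form dF in four dimensions.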
  • Linear Algebra I
Martin Otto, Winter Term 2013/14.

Contents:
1. Introduction
   1.1 Motivating Examples
       1.1.1 The two-dimensional real plane
       1.1.2 Three-dimensional real space
       1.1.3 Systems of linear equations over ℝⁿ
       1.1.4 Linear spaces over ℤ₂
   1.2 Basics, Notation and Conventions
       1.2.1 Sets
       1.2.2 Functions
       1.2.3 Relations
       1.2.4 Summations
       1.2.5 Propositional logic
       1.2.6 Some common proof patterns
   1.3 Algebraic Structures
       1.3.1 Binary operations on a set
       1.3.2 Groups
       1.3.3 Rings and fields
       1.3.4 Aside: isomorphisms of algebraic structures
2. Vector Spaces
   2.1 Vector spaces over arbitrary fields
       2.1.1 The axioms
       2.1.2 Examples old and new
   2.2 Subspaces
       2.2.1 Linear subspaces
       2.2.2 Affine subspaces
   2.3 Aside: affine and linear spaces
   2.4 Linear dependence and independence
       2.4.1 Linear combinations and spans
       2.4.2 Linear (in)dependence
   2.5 Bases and dimension
       2.5.1 Bases
       2.5.2 Finite-dimensional vector spaces
       2.5.3 Dimensions of linear and affine subspaces
       2.5.4 Existence of bases
   2.6 Products, sums and quotients of spaces
       2.6.1 Direct products
       2.6.2 Direct sums of subspaces
  • Vector Spaces in Physics
San Francisco State University, Department of Physics and Astronomy, August 6, 2015. Vector Spaces in Physics: Notes for Ph 385, Introduction to Theoretical Physics I. R. Bland.

TABLE OF CONTENTS

Chapter 1. Vectors: A. The displacement vector. B. Vector addition. C. Vector products (the scalar product; the vector product). D. Vectors in terms of components. E. Algebraic properties of vectors (equality; vector addition; multiplication of a vector by a scalar; the zero vector; the negative of a vector; subtraction of vectors; algebraic properties of vector addition). F. Properties of a vector space. G. Metric spaces and the scalar product (the scalar product; definition of a metric space). H. The vector product. I. Dimensionality of a vector space and linear independence. J. Components in a rotated coordinate system. K. Other vector quantities.

Chapter 2. The special symbols δᵢⱼ and εᵢⱼₖ, the Einstein summation convention, and some group theory: A. The Kronecker delta symbol, δᵢⱼ. B. The Einstein summation convention. C. The Levi-Civita totally antisymmetric tensor (groups; the permutation group; the Levi-Civita symbol). D. The cross product. E. The triple scalar product. F. The triple vector product. The epsilon killer.

Chapter 3. Linear equations and matrices: A. Linear independence of vectors. B. Definition of a matrix. C. The transpose of a matrix. D. The trace of a matrix. E. Addition of matrices and multiplication of a matrix by a scalar. F. Matrix multiplication. G. Properties of matrix multiplication. H. The unit matrix. I. Square matrices as members of a group.
  • 21. Orthonormal Bases
The canonical/standard basis

e₁ = (1, 0, …, 0)ᵀ, e₂ = (0, 1, …, 0)ᵀ, …, eₙ = (0, 0, …, 1)ᵀ

has many useful properties.

• Each of the standard basis vectors has unit length: ‖eᵢ‖ = √(eᵢ · eᵢ) = √(eᵢᵀeᵢ) = 1.

• The standard basis vectors are orthogonal (in other words, at right angles or perpendicular): eᵢ · eⱼ = eᵢᵀeⱼ = 0 when i ≠ j.

This is summarized by

eᵢᵀeⱼ = δᵢⱼ = 1 if i = j, and 0 if i ≠ j,

where δᵢⱼ is the Kronecker delta. Notice that the Kronecker delta gives the entries of the identity matrix.

Given column vectors v and w, we have seen that the dot product v · w is the same as the matrix multiplication vᵀw. This is the inner product on ℝⁿ. We can also form the outer product vwᵀ, which gives a square matrix. The outer product on the standard basis vectors is interesting. Set

Π₁ = e₁e₁ᵀ (the n × n matrix with a 1 in the (1,1) entry and zeros everywhere else), …, Πₙ = eₙeₙᵀ (the matrix with a 1 in the (n,n) entry and zeros everywhere else).

In short, Πᵢ is the diagonal square matrix with a 1 in the i-th diagonal position and zeros everywhere else.
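These identities are easy to check numerically; a minimal NumPy sketch (variable names are ours, not the note's):

```python
import numpy as np

n = 4
I = np.eye(n)
basis = [I[:, i] for i in range(n)]  # standard basis e_1 ... e_n as columns of the identity

# Unit length and pairwise orthogonality: e_i . e_j = delta_ij
for i, ei in enumerate(basis):
    for j, ej in enumerate(basis):
        assert np.isclose(ei @ ej, 1.0 if i == j else 0.0)

# Outer products Pi_i = e_i e_i^T: a single 1 on the i-th diagonal entry ...
projectors = [np.outer(e, e) for e in basis]
for i, P in enumerate(projectors):
    assert P[i, i] == 1 and P.sum() == 1

# ... and they sum to the identity matrix
assert np.allclose(sum(projectors), I)
print("all identities verified")
```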
  • Abstract Tensor Systems and Diagrammatic Representations
Jānis Lazovskis, September 28, 2012.

Abstract: The diagrammatic tensor calculus used by Roger Penrose (most notably in [7]) is introduced without a solid mathematical grounding. We will attempt to derive the tools of such a system, but in a broader setting. We show that Penrose's work comes from the diagrammisation of the symmetric algebra. Lie algebra representations and their extensions to knot theory are also discussed.

Contents:
1. Abstract tensors and derived structures
   1.1 Abstract tensor notation
   1.2 Some basic operations
   1.3 Tensor diagrams
2. A diagrammised abstract tensor system
   2.1 Generation
   2.2 Tensor concepts
3. Representations of algebras
   3.1 The symmetric algebra
   3.2 Lie algebras
   3.3 The tensor algebra T(g)
   3.4 The symmetric Lie algebra S(g)
   3.5 The universal enveloping algebra U(g)
   3.6 The metrized Lie algebra
       3.6.1 Diagrammisation with a bilinear form
       3.6.2 Diagrammisation with a symmetric bilinear form
       3.6.3 Diagrammisation with a symmetric bilinear form and an orthonormal basis
       3.6.4 Diagrammisation under ad-invariance
   3.7 The universal enveloping algebra U(g) for a metrized Lie algebra g
4. Ensuing connections
A. Appendix

Note: This work relies heavily upon the text of Chapter 12 of a draft of "An Introduction to Quantum and Vassiliev Invariants of Knots," by David M.R. Jackson and Iain Moffatt, a yet-unpublished book at the time of writing.
  • Multivector Differentiation and Linear Algebra, 17th Santaló Summer School
17th Santaló Summer School 2016, Santander. Joan Lasenby, Signal Processing Group, Engineering Department, Cambridge, UK, and Trinity College Cambridge. [email protected], www-sigproc.eng.cam.ac.uk. 23 August 2016.

Overview:
• The multivector derivative.
• Examples of differentiation with respect to multivectors.
• Linear algebra: matrices and tensors as linear functions mapping between elements of the algebra.
• Functional differentiation: very briefly.
• Summary.

The Multivector Derivative. Recall our definition of the directional derivative in the a direction:

a · ∇F(x) = lim_{ε→0} [F(x + εa) − F(x)] / ε

We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction', where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use ∗ to denote taking the scalar part, i.e. P ∗ Q ≡ ⟨PQ⟩. Then, provided A has the same grades as X, it makes sense to define:

A ∗ ∂_X F(X) = lim_{t→0} [F(X + tA) − F(X)] / t
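As a concrete instance of the directional derivative recalled above (our worked example, not taken from the slides), take the scalar-valued function F(x) = x · x of a vector x:

```latex
a \cdot \nabla F(x)
= \lim_{\epsilon \to 0}\frac{(x + \epsilon a)\cdot(x + \epsilon a) - x \cdot x}{\epsilon}
= \lim_{\epsilon \to 0}\frac{2\epsilon\, a \cdot x + \epsilon^{2}\, a \cdot a}{\epsilon}
= 2\, a \cdot x ,
```

which recovers the familiar gradient ∇(x · x) = 2x; the multivector derivative ∂_X is defined so that the same limit works when the vector x is replaced by a mixed-grade multivector X.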
  • Tensor Products
Joel Kamnitzer, April 5, 2011.

1. The definition. Let V, W, X be three vector spaces. A bilinear map from V × W to X is a function H : V × W → X such that

H(av₁ + v₂, w) = aH(v₁, w) + H(v₂, w) for v₁, v₂ ∈ V, w ∈ W, a ∈ F
H(v, aw₁ + w₂) = aH(v, w₁) + H(v, w₂) for v ∈ V, w₁, w₂ ∈ W, a ∈ F

Let V and W be vector spaces. A tensor product of V and W is a vector space V ⊗ W along with a bilinear map φ : V × W → V ⊗ W, such that for every vector space X and every bilinear map H : V × W → X, there exists a unique linear map T : V ⊗ W → X such that H = T ∘ φ. In other words, giving a linear map from V ⊗ W to X is the same thing as giving a bilinear map from V × W to X. If V ⊗ W is a tensor product, then we write v ⊗ w := φ(v, w). Note that there are two pieces of data in a tensor product: a vector space V ⊗ W and a bilinear map φ : V × W → V ⊗ W. Here are the main results about tensor products summarized in one theorem.

Theorem 1.1. (i) Any two tensor products of V, W are isomorphic. (ii) V, W has a tensor product. (iii) If v₁, …, vₙ is a basis for V and w₁, …, wₘ is a basis for W, then {vᵢ ⊗ wⱼ : 1 ≤ i ≤ n, 1 ≤ j ≤ m} is a basis for V ⊗ W.
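In coordinates, part (iii) is what outer/Kronecker products realize: identifying V ⊗ W with the nm-dimensional coordinate space, v ⊗ w becomes the outer product of coordinate vectors. A NumPy sketch of ours (not from the note):

```python
import numpy as np

# Coordinates of v in V (n = 2) and w in W (m = 3)
v = np.array([2.0, 1.0])
w = np.array([1.0, 0.0, 3.0])

# v (x) w realized as the outer product, flattened to a vector of length n*m
vw = np.outer(v, w).reshape(-1)  # same values as np.kron(v, w)
assert np.allclose(vw, np.kron(v, w))

# Bilinearity of (v, w) -> v (x) w in the first argument
v2 = np.array([0.5, -1.0])
a = 4.0
assert np.allclose(np.outer(a * v + v2, w),
                   a * np.outer(v, w) + np.outer(v2, w))

# Basis property (iii): the e_i (x) f_j flatten to the standard basis of R^(n*m)
E = [np.outer(np.eye(2)[i], np.eye(3)[j]).reshape(-1)
     for i in range(2) for j in range(3)]
assert np.allclose(np.stack(E), np.eye(6))
```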
  • Tensor Manipulation in GPL Maxima
Viktor Toth, http://www.vttoth.com/, February 1, 2008.

Abstract: GPL Maxima is an open-source computer algebra system based on DOE-MACSYMA. GPL Maxima included two tensor manipulation packages from DOE-MACSYMA, but these were in various states of disrepair. One of the two packages, CTENSOR, implemented component-based tensor manipulation; the other, ITENSOR, treated tensor symbols as opaque, manipulating them based on their index properties. The present paper describes the state in which these packages were found, the steps that were needed to make the packages fully functional again, and the new functionality that was implemented to make them more versatile. A third package, ATENSOR, was also implemented; fully compatible with the identically named package in the commercial version of MACSYMA, ATENSOR implements abstract tensor algebras.

1. Introduction. GPL Maxima (GPL stands for the GNU Public License, the most widely used open source license construct) is the descendant of one of the world's first comprehensive computer algebra systems (CAS), DOE-MACSYMA, developed by the United States Department of Energy in the 1960s and the 1970s. It is currently maintained by 18 volunteer developers, and can be obtained in source or object code form from http://maxima.sourceforge.net/. Like other computer algebra systems, Maxima has tensor manipulation capability. This capability was developed in the late 1970s. Documentation is scarce regarding these packages' origins, but a select collection of e-mail messages by various authors survives, dating back to 1979-1982, when these packages were actively maintained at M.I.T. When this author first came across GPL Maxima, the tensor packages were effectively non-functional.
  • Multilinear Algebra and Applications July 15, 2014
Contents:
1. Introduction
2. Review of Linear Algebra
   2.1 Vector Spaces and Subspaces
   2.2 Bases
   2.3 The Einstein convention
       2.3.1 Change of bases, revisited
       2.3.2 The Kronecker delta symbol
   2.4 Linear Transformations
       2.4.1 Similar matrices
   2.5 Eigenbases
3. Multilinear Forms
   3.1 Linear Forms
       3.1.1 Definition, Examples, Dual and Dual Basis
       3.1.2 Transformation of Linear Forms under a Change of Basis
   3.2 Bilinear Forms
       3.2.1 Definition, Examples and Basis
       3.2.2 Tensor product of two linear forms on V
       3.2.3 Transformation of Bilinear Forms under a Change of Basis
   3.3 Multilinear forms
   3.4 Examples
       3.4.1 A Bilinear Form
       3.4.2 A Trilinear Form
   3.5 Basic Operation on Multilinear Forms
4. Inner Products
   4.1 Definitions and First Properties
       4.1.1 Correspondence Between Inner Products and Symmetric Positive Definite Matrices
       4.1.2 Orthonormal Basis
   4.2 Reciprocal Basis
       4.2.1 Properties of Reciprocal Bases
       4.2.2 Change of basis from a basis to its reciprocal basis
       4.2.3.
  • A Clifford Dyadic Superfield from Bilateral Interactions of Geometric Multispin Dirac Theory
William M. Pezzaglia Jr., Department of Physics, Santa Clara University, Santa Clara, CA 95053, U.S.A., [email protected]; and Alfred W. Differ, Department of Physics, American River College, Sacramento, CA 95841, U.S.A. (Received: November 5, 1993)

Abstract. Multivector quantum mechanics utilizes wavefunctions which are Clifford aggregates (e.g. sum of scalar, vector, bivector). This is equivalent to multispinors constructed of Dirac matrices, with the representation-independent form of the generators geometrically interpreted as the basis vectors of spacetime. Multiple generations of particles appear as left ideals of the algebra, coupled only by now-allowed right-side applied (dextral) operations. A generalized bilateral (two-sided operation) coupling is proposed which includes the above-mentioned dextrad field, and the spin-gauge interaction as particular cases. This leads to a new principle of poly-dimensional covariance, in which physical laws are invariant under the reshuffling of coordinate geometry. Such a multigeometric superfield equation is proposed, which is sourced by a bilateral current. In order to express the superfield in representation- and coordinate-free form, we introduce Eddington E-F double-frame numbers. Symmetric tensors can now be represented as 4D "dyads", which actually are elements of a global 8D Clifford algebra. As a restricted example, the dyadic field created by the Greider-Ross multivector current (of a Dirac electron) describes both electromagnetic and Morris-Greider gravitational interactions.

Key words: spin-gauge, multivector, Clifford, dyadic

1. Introduction. Multivector physics is a grand scheme in which we attempt to describe all basic physical structure and phenomena by a single geometrically interpretable algebra.
  • On Manifolds of Tensors of Fixed Tt-Rank
Sebastian Holtz, Thorsten Rohwedder, and Reinhold Schneider.

Abstract. Recently, the format of TT tensors [19, 38, 34, 39] has turned out to be a promising new format for the approximation of solutions of high-dimensional problems. In this paper, we prove some new results for the TT representation of a tensor U ∈ ℝ^{n₁ × … × n_d} and for the manifold of tensors of TT-rank r. As a first result, we prove that the TT (or compression) ranks rᵢ of a tensor U are unique and equal to the respective separation ranks of U if the components of the TT decomposition are required to fulfil a certain maximal rank condition. We then show that the set 𝕋 of TT tensors of fixed rank r forms an embedded manifold in ℝ^{n₁ × … × n_d}, therefore preserving the essential theoretical properties of the Tucker format, but often showing an improved scaling behaviour. Extending a similar approach for matrices [7], we introduce certain gauge conditions to obtain a unique representation of the tangent space T_U 𝕋 of 𝕋 and deduce a local parametrization of the TT manifold. The parametrisation of T_U 𝕋 is often crucial for an algorithmic treatment of high-dimensional time-dependent PDEs and minimisation problems [33]. We conclude with remarks on those applications and present some numerical examples.

1. Introduction. The treatment of high-dimensional problems, typically of problems involving quantities from ℝ^d for larger dimensions d, is still a challenging task for numerical approximation. This is owed to the principal problem that classical approaches for their treatment normally scale exponentially in the dimension d in both needed storage and computational time and thus quickly become computationally infeasible for sensible discretizations of problems of interest.
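To make the TT format concrete, here is a minimal sequential-SVD (TT-SVD style) decomposition with exact reconstruction; this is our illustrative sketch of the standard construction, not the paper's algorithm, and `eps` is a hypothetical truncation threshold:

```python
import numpy as np

def tt_svd(A, eps=1e-12):
    """Split a d-way array A into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, cores, r = A.shape, [], 1
    C = A.reshape(1, -1)
    for k in range(len(dims) - 1):
        C = C.reshape(r * dims[k], -1)              # unfold: current mode vs. the rest
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int((s > eps * s[0]).sum()))    # numerical separation rank
        cores.append(U[:, :rk].reshape(r, dims[k], rk))
        C, r = s[:rk, None] * Vt[:rk], rk           # carry the remainder to the right
    cores.append(C.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full array."""
    T = cores[0].reshape(-1, cores[0].shape[-1])
    for G in cores[1:]:
        r, n, r2 = G.shape
        T = (T @ G.reshape(r, n * r2)).reshape(-1, r2)
    return T.reshape([G.shape[1] for G in cores])

A = np.random.rand(3, 4, 5, 2)
cores = tt_svd(A)
print([G.shape for G in cores])                     # the TT ranks appear in the core shapes
assert np.allclose(tt_reconstruct(cores), A)        # lossless at near-machine-precision eps
```

Storage drops from the full n^d entries to a sum of r² n-sized cores, which is the scaling advantage the abstract refers to.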
  • Math 217: Multilinearity of Determinants Professor Karen Smith (C)2015 UM Math Dept Licensed Under a Creative Commons By-NC-SA 4.0 International License
A. Let T : V → V be a linear transformation where V has dimension n.

1. What is meant by the determinant of T? Why is this well-defined?
Solution note: The determinant of T is the determinant of the B-matrix of T, for any basis B of V. Since all B-matrices of T are similar, and similar matrices have the same determinant, this is well-defined; it doesn't depend on which basis we pick.

2. Define the rank of T.
Solution note: The rank of T is the dimension of the image.

3. Explain why T is an isomorphism if and only if det T is not zero.
Solution note: T is an isomorphism if and only if [T]_B is invertible (for any choice of basis B), which happens if and only if det T ≠ 0.

4. Now let V = ℝ³ and let T be rotation around the axis L (a line through the origin) by an angle θ. Find a basis for ℝ³ in which the matrix of T is

[ 1    0       0     ]
[ 0    cos θ   −sin θ ]
[ 0    sin θ   cos θ  ]

Use this to compute the determinant of T. Is T orthogonal?
Solution note: Let v be any vector spanning L and let u₁, u₂ be an orthonormal basis for the plane L⊥. Rotation fixes v, which means the B-matrix in the basis (v, u₁, u₂) has first column (1, 0, 0)ᵀ.
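Completing the determinant computation from that block form (our worked step, expanding along the first column):

```latex
\det T =
\det\begin{pmatrix}
1 & 0 & 0\\
0 & \cos\theta & -\sin\theta\\
0 & \sin\theta & \cos\theta
\end{pmatrix}
= 1\cdot\bigl(\cos^{2}\theta + \sin^{2}\theta\bigr) = 1 .
```

So rotations have determinant 1; if v is chosen as a unit vector, the basis (v, u₁, u₂) is orthonormal and the matrix above has orthonormal columns, consistent with T being orthogonal.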