A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty

Valentin Peretroukhin,¹,³ Matthew Giamou,¹ David M. Rosen,² W. Nicholas Greene,³ Nicholas Roy,³ and Jonathan Kelly¹
¹Institute for Aerospace Studies, University of Toronto; ²Laboratory for Information and Decision Systems, ³Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology

arXiv:2006.01031v4 [cs.CV] 17 Jan 2021

Abstract—Accurate rotation estimation is at the heart of robot perception tasks such as visual odometry and object pose estimation. Deep neural networks have provided a new way to perform these tasks, and the choice of rotation representation is an important part of network design. In this work, we present a novel symmetric matrix representation of the 3D rotation group, SO(3), with two important properties that make it particularly suitable for learned models: (1) it satisfies a smoothness property that improves convergence and generalization when regressing large rotation targets, and (2) it encodes a symmetric Bingham belief over the space of unit quaternions, permitting the training of uncertainty-aware models. We empirically validate the benefits of our formulation by training deep neural rotation regressors on two data modalities. First, we use synthetic point-cloud data to show that our representation leads to superior predictive accuracy over existing representations for arbitrary rotation targets. Second, we use image data collected onboard ground and aerial vehicles to demonstrate that our representation is amenable to an effective out-of-distribution (OOD) rejection technique that significantly improves the robustness of rotation estimates to unseen environmental effects and corrupted input images, without requiring the use of an explicit likelihood loss, stochastic sampling, or an auxiliary classifier. This capability is key for safety-critical applications where detecting novel inputs can prevent catastrophic failure of learned models.

Fig. 1: We represent rotations through a symmetric matrix, A, that defines a Bingham distribution over unit quaternions. To apply this representation to deep rotation regression, we present a differentiable layer parameterized by A and show how we can extract a notion of uncertainty from the spectrum of A. (The figure depicts a pipeline: input data → neural network → our symmetric matrix representation → differentiable QCQP layer solving min_q q^T A(θ) q subject to q^T q = 1 → unit quaternion q = q*/||q*||.)
I. INTRODUCTION

Rotation estimation constitutes one of the core challenges in robotic state estimation. Given the broad interest in applying deep learning to state estimation tasks involving rotations [4, 7, 25–27, 30, 34, 36, 39], we consider the suitability of different rotation representations in this domain. The question of which rotation parameterization to use for estimation and control problems has a long history in aerospace engineering and robotics [9]. In learning, unit quaternions (also known as Euler parameters) are a popular choice for their numerical efficiency, lack of singularities, and simple algebraic and geometric structure. Nevertheless, a standard unit quaternion parameterization does not satisfy an important continuity property that is essential for learning arbitrary rotation targets, as recently detailed in [41]. To address this deficiency, the authors of [41] derived two alternative rotation representations that satisfy this property and lead to better network performance. Both of these representations, however, are point representations, and do not quantify network uncertainty—an important capability in safety-critical applications.

In this work, we introduce a novel representation of SO(3) based on a symmetric matrix that combines these two important properties. Namely, it
1) admits a smooth global section from SO(3) to the representation space (satisfying the continuity property identified by the authors of [41]);
2) defines a Bingham distribution over unit quaternions; and
3) is amenable to a novel out-of-distribution (OOD) detection method without any additional stochastic sampling, or auxiliary classifiers.

Figure 1 visually summarizes our approach. Our experiments use synthetic and real datasets to highlight the key advantages of our approach. We provide open source Python code¹ of our method and experiments. Finally, we note that our representation can be implemented in only a few lines of code in modern deep learning libraries such as PyTorch, and has marginal computational overhead for typical learning pipelines.

This material is based upon work that was supported in part by the Army Research Laboratory under Cooperative Agreement Number W911NF-17-2-0181. Their support is gratefully acknowledged.

¹Code available at https://github.com/utiasSTARS/bingham-rotation-learning.
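The abstract and Fig. 1 describe the core mechanism: a network outputs parameters θ of a symmetric 4×4 matrix A(θ), and the rotation is recovered by the QCQP min_q q^T A(θ) q subject to q^T q = 1, whose minimizer is the unit eigenvector of A associated with its smallest eigenvalue. The paper notes this takes only a few lines of PyTorch; the snippet below is a minimal sketch of that idea under my own assumptions (the 10-entry vectorization of A and the eigenvalue-gap uncertainty proxy are illustrative, not the authors' released code, which lives in the linked repository).

```python
import torch

def theta_to_symmetric(theta):
    """Map a 10-vector of parameters to a 4x4 symmetric matrix A(theta).
    (Illustrative vectorization; any map onto symmetric matrices would do.)"""
    A = theta.new_zeros(theta.shape[:-1] + (4, 4))
    idx = torch.triu_indices(4, 4)
    A[..., idx[0], idx[1]] = theta
    return 0.5 * (A + A.transpose(-1, -2))

def qcqp_layer(A):
    """Differentiable 'QCQP layer': the minimizer of q^T A q over unit q
    is the eigenvector of A with the smallest eigenvalue."""
    eigvals, eigvecs = torch.linalg.eigh(A)     # eigenvalues in ascending order
    q = eigvecs[..., :, 0]                      # unit quaternion (up to sign)
    gap = eigvals[..., 1] - eigvals[..., 0]     # spectral gap: one possible dispersion proxy
    return q, gap

# Toy usage: regress theta with any network, then extract a rotation.
theta = torch.randn(8, 10, requires_grad=True)  # e.g. a batch of network outputs
q, gap = qcqp_layer(theta_to_symmetric(theta))
q.sum().backward()                              # gradients flow through the eigendecomposition
```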
II. RELATED WORK

Estimating rotations has a long and rich history in computer vision and robotics [17, 31, 35]. An in-depth survey of rotation averaging problem formulations and solution methods that deal directly with multiple rotation measurements is presented in [16]. In this section, we briefly survey techniques that estimate rotations from raw sensor data, with a particular focus on prior work that incorporates machine learning into the rotation estimation pipeline. We also review recent work on differentiable optimization problems and convex relaxation-based solutions to rotation estimation problems that inspired our work.

A. Rotation Parameterization

In robotics, it is common to parameterize rotation states as elements of the matrix Lie group SO(3) [2, 32]. This approach facilitates the application of Gauss-Newton-based optimization and a local uncertainty quantification through small perturbations defined in a tangent space about an operating point in the group. In other state estimation contexts, applications may eschew full 3×3 orthogonal matrices with positive determinant (i.e., elements of SO(3)) in favour of lower-dimensional representations with desirable properties [9]. For example, Euler angles [33] are particularly well-suited to analyzing small perturbations in the steady-state flight of conventional aircraft because reaching a singularity is practically impossible [10]. In contrast, spacecraft control often requires large-angle maneuvers [...]

[...] comparison of SE(3) representations for learning complex forward kinematics is conducted in [15]. Although full SE(3) pose estimation is important in most robotics applications, we limit our analysis to SO(3) representations as most pose estimation tasks can be decoupled into rotation and translation components, and the rotation component of SE(3) constitutes the main challenge.

C. Learning Rotations with Uncertainty

Common ways to extract uncertainty from neural networks include approximate variational inference through Monte Carlo dropout [11] and bootstrap-based uncertainty through an ensemble of models [20] or with multiple network heads [24]. In prior work [26], we have proposed a mechanism that extends these methods to SO(3) targets through differentiable quaternion averaging and a local notion of uncertainty in the tangent space of the mean.

Additionally, learned methods can be equipped with novelty detection mechanisms (often referred to as out-of-distribution or OOD detection) to help account for epistemic uncertainty. An autoencoder-based approach to OOD detection was used on a visual navigation task in [28] to ensure that a safe control policy was used in novel environments. A similar approach was applied in [1], where a single variational autoencoder was used for novelty detection and control policy learning. See [23] for a recent summary of OOD methods commonly applied to classification tasks.
Recommended publications
  • Bearing Rigidity Theory in SE(3) Giulia Michieletto, Angelo Cenedese, Antonio Franchi
    Bearing Rigidity Theory in SE(3). Giulia Michieletto, Angelo Cenedese, Antonio Franchi. To cite this version: Giulia Michieletto, Angelo Cenedese, Antonio Franchi. Bearing Rigidity Theory in SE(3). 55th IEEE Conference on Decision and Control, Dec 2016, Las Vegas, United States. HAL Id: hal-01371084, https://hal.archives-ouvertes.fr/hal-01371084, submitted on 23 Sep 2016. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Preprint version; final version at http://ieeexplore.ieee.org/ (55th IEEE Conference on Decision and Control, Las Vegas, NV, 2016).
    Abstract—Rigidity theory has recently emerged as an efficient tool in the control field of coordinated multi-agent systems, such as multi-robot formations and UAVs swarms, that are characterized by sensing, communication and movement capabilities. This work aims at describing the rigidity properties for frameworks embedded in the three-dimensional Special Euclidean space SE(3) wherein each agent has 6DoF. [...] determines the infinitesimal rigidity properties of the system, providing a necessary and sufficient condition. In such a context, a framework is generally represented by means of the bar-and-joint model where agents are represented as points joined by bars whose fixed lengths enforce the inter- [...]
  • Chapter 5 ANGULAR MOMENTUM and ROTATIONS
    Chapter 5 ANGULAR MOMENTUM AND ROTATIONS. In classical mechanics the total angular momentum L of an isolated system about any fixed point is conserved. The existence of a conserved vector L associated with such a system is itself a consequence of the fact that the associated Hamiltonian (or Lagrangian) is invariant under rotations, i.e., if the coordinates and momenta of the entire system are rotated "rigidly" about some point, the energy of the system is unchanged and, more importantly, is the same function of the dynamical variables as it was before the rotation. Such a circumstance would not apply, e.g., to a system lying in an externally imposed gravitational field pointing in some specific direction. Thus, the invariance of an isolated system under rotations ultimately arises from the fact that, in the absence of external fields of this sort, space is isotropic; it behaves the same way in all directions. Not surprisingly, therefore, in quantum mechanics the individual Cartesian components Li of the total angular momentum operator L of an isolated system are also constants of the motion. The different components of L are not, however, compatible quantum observables. Indeed, as we will see the operators representing the components of angular momentum along different directions do not generally commute with one another. Thus, the vector operator L is not, strictly speaking, an observable, since it does not have a complete basis of eigenstates (which would have to be simultaneous eigenstates of all of its non-commuting components). This lack of commutativity often seems, at first encounter, as somewhat of a nuisance but, in fact, it intimately reflects the underlying structure of the three dimensional space in which we are immersed, and has its source in the fact that rotations in three dimensions about different axes do not commute with one another.
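The "lack of commutativity" referred to above is captured by the standard angular-momentum commutation relations. Stating them explicitly (a textbook fact, not part of this excerpt) makes the claim concrete:

```latex
% Cartesian components of the angular momentum operator satisfy
\[
  [L_i, L_j] = i\hbar\,\epsilon_{ijk}\,L_k , \qquad i, j, k \in \{x, y, z\},
\]
% so, e.g., [L_x, L_y] = i\hbar L_z, and a simultaneous eigenstate of all three
% components can exist only when every eigenvalue vanishes.
```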
  • Introduction to SU(N) Group Theory in the Context of The
    215c, 1/30/21 Lecture outline. © Kenneth Intriligator 2021. ⋆ Week 1 recommended reading: Schwartz sections 25.1, 28.2.2, 28.2.3. You can find more about group theory (and some of my notation) also here https://keni.ucsd.edu/s10/ • At the end of 215b, I discussed spontaneous breaking of global symmetries. Some students might not have taken 215b last quarter, so I will briefly review some of that material. Also, this is an opportunity to introduce or review some group theory. • Physics students first learn about continuous, non-Abelian Lie groups in the context of the 3d rotation group. Let v⃗ and w⃗ be N-component vectors (we'll set N = 3 soon). We can rotate the vectors, or leave the vectors alone and rotate our coordinate system backwards – these are active vs passive equivalent perspectives – and scalar quantities like v⃗ · w⃗ are invariant. This is because the inner product is via δ_ij and the rotation is a similarity transformation that preserves this. Any rotation matrix R is an orthogonal N × N matrix, Rᵀ = R⁻¹, and the space of all such matrices is the O(N) group manifold. Note that det R = ±1 has two components, and we can restrict to the component with det R = 1, which is connected to the identity and called SO(N). The case SO(2), rotations in a plane, is parameterized by an angle θ ≅ θ + 2π, so the group manifold is a circle. If we use z = x + iy, we see that SO(2) ≅ U(1), the group of unitary 1 × 1 matrices e^{iθ}.
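To make the last statement concrete, here is a small NumPy check (mine, not part of the lecture notes) that a 2D rotation matrix is orthogonal with unit determinant and that composing rotations matches multiplying the unit complex numbers e^{iθ}:

```python
import numpy as np

def R(theta):
    """2D rotation matrix: an element of SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.7, 1.9
Ra, Rb = R(a), R(b)

# Orthogonality and unit determinant: R^T = R^{-1}, det R = +1.
assert np.allclose(Ra.T @ Ra, np.eye(2)) and np.isclose(np.linalg.det(Ra), 1.0)

# SO(2) corresponds to U(1): composing rotations adds angles, like multiplying e^{i*theta}.
assert np.allclose(Ra @ Rb, R(a + b))
assert np.isclose(np.exp(1j * a) * np.exp(1j * b), np.exp(1j * (a + b)))
```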
  • Quaternion Product Units for Deep Learning on 3D Rotation Groups
    Quaternion Product Units for Deep Learning on 3D Rotation Groups. Xuan Zhang¹, Shaofei Qin¹, Yi Xu*¹, and Hongteng Xu². ¹MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; ²Infinia ML, Inc. / Duke University, Durham, NC, USA. {floatlazer, 1105042987, [email protected]}, [email protected]
    Abstract: We propose a novel quaternion product unit (QPU) to represent data on 3D rotation groups. The QPU leverages quaternion algebra and the law of 3D rotation group, representing 3D rotation data as quaternions and merging them via a weighted chain of Hamilton products. We prove that the representations derived by the proposed QPU can be disentangled into "rotation-invariant" features and "rotation-equivariant" features, respectively, which supports the rationality and the efficiency of the QPU in theory. We design quaternion neural networks based on our QPUs and make our models compatible with existing deep learning models. [...] methods have made efforts to enhance their robustness to rotations [4, 6, 29, 3, 26]. However, the architectures of their models are tailored for the data in the Euclidean space, rather than 3D rotation data. In particular, the 3D rotation data are not closed under the algebraic operations used in their models (i.e., additions and multiplications). The mismatching between data and model makes these methods difficult to analyze the influence of rotations on their outputs quantitatively. Although this mismatching problem can be mitigated by augmenting training data with additional rotations [18], this solution gives us no theoretical guarantee. To overcome the challenge mentioned above, we proposed a novel quaternion product unit (QPU) for 3D ro- [...]
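The abstract describes merging quaternions via a weighted chain of Hamilton products. The snippet below is a rough sketch of mine of what that could look like, with the assumption that each weight scales a unit quaternion's rotation angle before chaining; it is not the authors' QPU definition.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def qpow(q, w):
    """Scale a unit quaternion's rotation angle by w (q assumed unit, w scalar)."""
    angle = 2.0 * np.arccos(np.clip(q[0], -1.0, 1.0))
    axis = q[1:] / (np.linalg.norm(q[1:]) + 1e-12)
    half = 0.5 * w * angle
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

def weighted_chain(quats, weights):
    """Chain weighted unit quaternions with Hamilton products (illustrative only)."""
    out = np.array([1.0, 0.0, 0.0, 0.0])   # identity rotation
    for q, w in zip(quats, weights):
        out = hamilton(out, qpow(q, w))
    return out
```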
  • Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning
    The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21). Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning. Feng Lin,¹ Haohang Xu,² Houqiang Li,¹,⁴ Hongkai Xiong,² Guo-Jun Qi³,*. ¹CAS Key Laboratory of GIPAS, EEIS Department, University of Science and Technology of China; ²Department of Electronic Engineering, Shanghai Jiao Tong University; ³Laboratory for MAPLE, Futurewei Technologies; ⁴Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
    Abstract: Unsupervised training of deep representations has demonstrated remarkable potentials in mitigating the prohibitive expenses on annotating labeled data recently. Among them is predicting transformations as a pretext task to self-train representations, which has shown great potentials for unsupervised learning. However, existing approaches in this category learn representations by either treating a discrete set of transformations as separate classes, or using the Euclidean distance as the metric to minimize the errors between transformations. None of them has been dedicated to revealing the vital role of the geometry of transformation groups in learning representations. Indeed, an image must continuously transform along the curved manifold of a transformation group rather than through a straight line in the forbidden ambient Euclidean space. This suggests the use of geodesic distance to minimize the errors between the estimated and groundtruth transformations. Particularly, we focus on homographies, a general group of planar transformations containing the Euclidean, similarity and affine transformations as its special cases. [...]
    Figure 1: The deviation between two transformations should be measured along the curved manifold (Lie group) of transformations rather than through the forbidden Euclidean space of transformations.
    [...] works to explore the possibility of pretraining representa- [...]
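The paper itself focuses on homographies; purely as an illustration of the "curved manifold vs. straight line" point, the sketch below (mine, not from the paper) compares a naive difference of rotation parameters with the geodesic distance on the rotation group SO(3):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def geodesic_distance(R1, R2):
    """Geodesic distance on SO(3): the rotation angle of the relative rotation R1^T R2."""
    cos_angle = np.clip((np.trace(R1.T @ R2) - 1.0) / 2.0, -1.0, 1.0)  # tr(R) = 1 + 2 cos(angle)
    return np.arccos(cos_angle)

# Two rotations about the z-axis whose parameters look far apart numerically.
a, b = np.deg2rad(170.0), np.deg2rad(-170.0)
R1 = Rotation.from_euler("z", a).as_matrix()
R2 = Rotation.from_euler("z", b).as_matrix()

print(abs(a - b))                 # naive "straight line" between parameters: ~5.93 rad
print(geodesic_distance(R1, R2))  # distance along the group manifold: ~0.35 rad (20 degrees)
```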
  • Finite Subgroups of the Group of 3D Rotations
    FINITE SUBGROUPS OF THE 3D ROTATION GROUP Student : Nathan Hayes Mentor : Jacky Chong SYMMETRIES OF THE SPHERE • Let’s think about the rigid rotations of the sphere. • Rotations can be composed together to form new rotations. • Every rotation has an inverse rotation. • There is a “non-rotation,” or an identity rotation. • For us, the distinct rotational symmetries of the sphere form a group, where each rotation is an element and our operation is composing our rotations. • This group happens to be infinite, since you can continuously rotate the sphere. • What if we wanted to study a smaller collection of symmetries, or in some sense a subgroup of our rotations? THEOREM • A finite subgroup of SO3 (the group of special rotations in 3 dimensions, or rotations in 3D space) is isomorphic to either a cyclic group, a dihedral group, or a rotational symmetry group of one of the platonic solids. • These can be represented by the following solids : Cyclic Dihedral Tetrahedron Cube Dodecahedron • The rotational symmetries of the cube and octahedron are the same, as are those of the dodecahedron and icosahedron. GROUP ACTIONS • The interesting portion of study in group theory is not the study of groups, but the study of how they act on things. • A group action is a form of mapping, where every element of a group G represents some permutation of a set X. • For example, the group of permutations of the integers 1,2, 3 acting on the numbers 1, 2, 3, 4. So for example, the permutation (1,2,3) -> (3,2,1) will swap 1 and 3.
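As a small illustration of the theorem's simplest case (my own example, not from the slides): the rotations by multiples of 2π/n about a fixed axis form a finite cyclic subgroup of SO(3), which we can verify numerically.

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta about the z-axis, an element of SO(3)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

n = 5
cyclic_group = [rot_z(2 * np.pi * k / n) for k in range(n)]   # C_n, the cyclic group of order n

# Closure: composing any two elements gives another element of the set.
for A in cyclic_group:
    for B in cyclic_group:
        assert any(np.allclose(A @ B, C) for C in cyclic_group)

# Every element is a proper rotation: orthogonal with determinant +1.
assert all(np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
           for R in cyclic_group)
```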
  • 3D Rotations Matrices
    Chapter 3: 3D rotation matrices. The goal of this chapter is to exemplify, on a simple and useful manifold, part of the general methods developed previously on Riemannian geometry and Lie groups. This document also provides numerically stable formulas to implement all the basic operations needed to work with rotations in 3D.
    3.1 Vector rotations of R³. 3.1.1 Definition. Let {i, j, k} be a right handed orthonormal basis of the vector space R³ (we do not consider it here as a Euclidean space), and B = {e₁, e₂, e₃} be a set of three vectors. The linear mapping from {i, j, k} to B is given by the matrix
    R = [e_1, e_2, e_3] = \begin{bmatrix} e_{11} & e_{21} & e_{31} \\ e_{12} & e_{22} & e_{32} \\ e_{13} & e_{23} & e_{33} \end{bmatrix}   (3.1)
    If we now want B to be an orthonormal basis, we have that ⟨eᵢ | eⱼ⟩ = δᵢⱼ, which means that the matrix R verifies
    R Rᵀ = Rᵀ R = I₃   (3.2)
    This equation implies in turn that det(R Rᵀ) = 1 (i.e. det(R) = ±1) and gives rise to the subset O₃ of orthogonal matrices of M₃ₓ₃. If we want B to be also right handed, we have to impose det(R) = 1:
    SO₃ = { R ∈ M₃ₓ₃ | R Rᵀ = Id and det(R) = +1 }   (3.3)
    Such matrices are called (proper) rotations. Each right handed orthonormal basis can then be represented by a unique rotation and conversely each rotation maps {i, j, k} onto a unique right handed orthonormal basis.
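A quick numerical check of Definition 3.1.1 (my own snippet, not from the chapter): build R column-wise from a right-handed orthonormal basis and verify equations (3.2) and (3.3).

```python
import numpy as np

# A right-handed orthonormal basis {e1, e2, e3}, written in {i, j, k} coordinates.
e1 = np.array([0.0, 0.0, 1.0])
e2 = np.array([1.0, 0.0, 0.0])
e3 = np.cross(e1, e2)                      # right-handedness: e3 = e1 x e2

R = np.column_stack([e1, e2, e3])          # R = [e1, e2, e3], as in Eq. (3.1)

assert np.allclose(R @ R.T, np.eye(3))     # Eq. (3.2): R R^T = R^T R = I3
assert np.isclose(np.linalg.det(R), 1.0)   # Eq. (3.3): det(R) = +1, so R is a proper rotation

# R maps the canonical basis onto {e1, e2, e3}: R applied to i gives e1, etc.
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), e1)
```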
  • Mathematical Basics of Motion and Deformation in Computer Graphics Second Edition
    Mathematical Basics of Motion and Deformation in Computer Graphics, Second Edition. Synthesis Lectures on Visual Computing: Computer Graphics, Animation, Computational Photography, and Imaging. Editor: Brian A. Barsky, University of California, Berkeley. This series presents lectures on research and development in visual computing for an audience of professional developers, researchers, and advanced students. Topics of interest include computational photography, animation, visualization, special effects, game design, image techniques, computational geometry, modeling, rendering, and others of interest to the visual computing system developer or researcher. Titles in the series include:
    Mathematical Basics of Motion and Deformation in Computer Graphics: Second Edition. Ken Anjyo and Hiroyuki Ochiai, 2017
    Digital Heritage Reconstruction Using Super-resolution and Inpainting. Milind G. Padalkar, Manjunath V. Joshi, and Nilay L. Khatri, 2016
    Geometric Continuity of Curves and Surfaces. Przemyslaw Kiciak, 2016
    Heterogeneous Spatial Data: Fusion, Modeling, and Analysis for GIS Applications. Giuseppe Patanè and Michela Spagnuolo, 2016
    Geometric and Discrete Path Planning for Interactive Virtual Worlds. Marcelo Kallmann and Mubbasir Kapadia, 2016
    An Introduction to Verification of Visualization Techniques. Tiago Etiene, Robert M. Kirby, and Cláudio T. Silva, 2015
    Virtual Crowds: Steps Toward Behavioral Realism. Mubbasir Kapadia, Nuria Pelechano, Jan Allbeck, and Norm Badler, 2015
    Finite Element Method Simulation of 3D Deformable Solids. Eftychios Sifakis and Jernej Barbic, 2015
    Efficient Quadrature [...]
  • The Geometry of Eye Rotations and Listing's Law
    The Geometry of Eye Rotations and Listing's Law. Amir A. Handzel*, Tamar Flash†. Department of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, 76100 Israel
    Abstract: We analyse the geometry of eye rotations, and in particular saccades, using basic Lie group theory and differential geometry. Various parameterizations of rotations are related through a unifying mathematical treatment, and transformations between co-ordinate systems are computed using the Campbell-Baker-Hausdorff formula. Next, we describe Listing's law by means of the Lie algebra so(3). This enables us to demonstrate a direct connection to Donders' law, by showing that eye orientations are restricted to the quotient space SO(3)/SO(2). The latter is equivalent to the sphere S², which is exactly the space of gaze directions. Our analysis provides a mathematical framework for studying the oculomotor system and could also be extended to investigate the geometry of multi-joint arm movements.
    1 INTRODUCTION. 1.1 SACCADES AND LISTING'S LAW. Saccades are fast eye movements, bringing objects of interest into the center of the visual field. It is known that eye positions are restricted to a subset of those which are anatomically possible, both during saccades and fixation (Tweed & Vilis, 1990). According to Donders' law, the eye's gaze direction determines its orientation uniquely, and moreover, the orientation does not depend on the history of eye motion which has led to the given gaze direction. A precise specification of the "allowed" subspace of position is given by Listing's law: the observed orientations of the eye are those which can be reached from the distinguished orientation called primary [...]
    *[email protected], †[email protected]
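To illustrate the quotient-space statement above (an illustrative snippet of mine, not from the paper): rotating a fixed primary gaze vector by R gives a point on the sphere S², and composing R on the right with any rotation about that primary axis leaves the gaze direction unchanged, which is exactly the SO(2) fiber that the quotient SO(3)/SO(2) identifies away.

```python
import numpy as np
from scipy.spatial.transform import Rotation

g0 = np.array([0.0, 0.0, 1.0])                    # primary gaze direction

R = Rotation.from_euler("xyz", [0.3, -0.5, 0.8])  # some eye orientation in SO(3)
gaze = R.as_matrix() @ g0                         # the corresponding point on the sphere S^2

# The stabilizer of g0 is the SO(2) of rotations about g0: composing with it
# changes the orientation but not the gaze direction.
for psi in np.linspace(0.0, 2 * np.pi, 7):
    R_twist = Rotation.from_rotvec(psi * g0)
    assert np.allclose((R * R_twist).as_matrix() @ g0, gaze)
```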
  • On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry,By John H
    BULLETIN (New Series) OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 42, Number 2, Pages 229–243, S 0273-0979(05)01043-8, article electronically published on January 26, 2005. On quaternions and octonions: Their geometry, arithmetic, and symmetry, by John H. Conway and Derek A. Smith, A K Peters, Ltd., Natick, MA, 2003, xii + 159 pp., $29.00, ISBN 1-56881-134-9.
    Conway and Smith's book is a wonderful introduction to the normed division algebras: the real numbers (R), the complex numbers (C), the quaternions (H), and the octonions (O). The first two are well-known to every mathematician. In contrast, the quaternions and especially the octonions are sadly neglected, so the authors rightly concentrate on these. They develop these number systems from scratch, explore their connections to geometry, and even study number theory in quaternionic and octonionic versions of the integers. Conway and Smith warm up by studying two famous subrings of C: the Gaussian integers and Eisenstein integers. The Gaussian integers are the complex numbers x + iy for which x and y are integers. They form a square lattice: [figure of the lattice with 0, 1, and i marked]. Any Gaussian integer can be uniquely factored into 'prime' Gaussian integers — at least if we count differently ordered factorizations as the same and ignore the ambiguity introduced by the invertible Gaussian integers, namely ±1 and ±i. To show this, we can use a straightforward generalization of Euclid's proof for ordinary integers. The key step is to show that given nonzero Gaussian integers a and b, we can always write a = nb + r for some Gaussian integers n and r, where the 'remainder' r has |r| < |b|.
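The "key step" quoted above (division with a small remainder) can be made concrete by rounding the exact complex quotient to the nearest Gaussian integer. A short sketch of mine, not from the review:

```python
def gaussian_divmod(a: complex, b: complex):
    """Division with remainder for Gaussian integers: a = n*b + r with |r| < |b|."""
    exact = a / b
    # Round the exact quotient to the nearest Gaussian integer n.
    n = complex(round(exact.real), round(exact.imag))
    r = a - n * b
    return n, r

a, b = complex(27, 23), complex(4, 7)
n, r = gaussian_divmod(a, b)
assert a == n * b + r
assert abs(r) < abs(b)      # rounding guarantees |r| <= |b|/sqrt(2) < |b|
print(n, r)                 # (4-1j) (4-1j) in this example
```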
  • B-Spline CNNs on Lie Groups, Erik J. Bekkers (arXiv:1909.12057v4 [cs.LG], 22 Mar 2021)
    Published as a conference paper at ICLR 2020. B-SPLINE CNNS ON LIE GROUPS. Erik J. Bekkers, Centre for Analysis and Scientific Computing, Department of Applied Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, the Netherlands. [email protected]
    ABSTRACT: Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups. Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory. Currently, however, the practical implementations of G-CNNs are limited to either discrete groups (that leave the grid intact) or continuous compact groups such as rotations (that enable the use of Fourier theory). In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups. In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra. This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions. The impact and potential of our approach is studied on two benchmark datasets: cancer detection in histopathology slides in which rotation equivariance plays a key role and facial landmark localization in which scale equivariance is important.
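The abstract's central idea is expanding convolution kernels in a B-spline basis defined on the Lie algebra. The snippet below is only a schematic of that expansion idea (my sketch, not the paper's implementation): a kernel over a one-dimensional Lie-algebra coordinate, such as a rotation angle, written as a weighted sum of shifted B-splines built with the Cox-de Boor recursion.

```python
import numpy as np

def bspline_basis(x, knots, i, degree):
    """Cox-de Boor recursion for the i-th B-spline basis function of a given degree."""
    if degree == 0:
        return np.where((knots[i] <= x) & (x < knots[i + 1]), 1.0, 0.0)
    left = 0.0
    if knots[i + degree] != knots[i]:
        left = (x - knots[i]) / (knots[i + degree] - knots[i]) \
               * bspline_basis(x, knots, i, degree - 1)
    right = 0.0
    if knots[i + degree + 1] != knots[i + 1]:
        right = (knots[i + degree + 1] - x) / (knots[i + degree + 1] - knots[i + 1]) \
                * bspline_basis(x, knots, i + 1, degree - 1)
    return left + right

# A kernel on a 1D Lie-algebra coordinate (e.g. rotation angle in [-pi, pi]),
# expanded in n_basis shifted B-splines with learnable weights.
degree, n_basis = 2, 8
knots = np.linspace(-np.pi, np.pi, n_basis + degree + 1)
weights = np.random.randn(n_basis)                  # these would be the learned parameters

theta = np.linspace(-np.pi, np.pi, 200, endpoint=False)
kernel = sum(w * bspline_basis(theta, knots, i, degree) for i, w in enumerate(weights))
print(kernel.shape)                                 # (200,): kernel values over the angle grid
```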
  • Dirac Equation: L
    11/16 Lecture outline
    ⋆ Reading for today's lecture: Coleman lecture notes pages 211-248.
    • Last time: The basic spinor reps are (1/2, 0) and (0, 1/2), denoted $u_\pm$; these are called left and right handed Weyl spinors. They both have $D = e^{-i\vec\sigma\cdot\hat e\,\theta/2}$ for a rotation by $\theta$ around the $\hat e$ axis, but they have $D_\pm = e^{\pm\vec\sigma\cdot\hat e\,\phi/2}$ for a boost along the $\hat e$ axis, where $v = \tanh\phi$. These 2-component Weyl spinor representations individually play an important role in non-parity invariant theories, like the weak interactions. Parity ($(t, \vec x) \to (t, -\vec x)$) exchanges them. So, in parity invariant theories, like QED, they are combined into a 4-component Dirac spinor, (1/2, 0) + (0, 1/2):
    $$\psi = \begin{pmatrix} u_+ \\ u_- \end{pmatrix}.$$
    The 4-component spinor rep starts with the Clifford algebra $\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}\,\mathbb{1}$, e.g.
    $$\gamma^0 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix}.$$
    There are other choices of reps of the Clifford algebra. The $\mathcal{M}^{\mu\nu}$ rotation/boost generators become (the extension/analog of spin $\vec S$ for the 3d rotation group) $S^{\mu\nu} = \tfrac{1}{4}[\gamma^\mu, \gamma^\nu] = \tfrac{1}{2}\gamma^\mu\gamma^\nu - \tfrac{1}{2}\eta^{\mu\nu}$, which satisfies the Lorentz Lie algebra relation. Under a rotation, $S^{ij} = -\tfrac{i}{2}\epsilon_{ijk}\begin{pmatrix} \sigma^k & 0 \\ 0 & \sigma^k \end{pmatrix}$, so taking $\Omega_{ij} = -\epsilon_{ijk}\varphi^k$ we get under rotations
    $$S[\vec\varphi] = \begin{pmatrix} e^{i\vec\varphi\cdot\vec\sigma/2} & 0 \\ 0 & e^{i\vec\varphi\cdot\vec\sigma/2} \end{pmatrix}.$$
    Under boosts, $\Omega_{i,0} = \phi_i$,
    $$S[\Lambda] = \begin{pmatrix} e^{\vec\phi\cdot\vec\sigma/2} & 0 \\ 0 & e^{-\vec\phi\cdot\vec\sigma/2} \end{pmatrix}.$$
    This exhibits the 2-component reps that we described above.
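A quick numerical cross-check of the Clifford algebra quoted above (my own snippet, not part of the notes), using the Weyl-representation gamma matrices with metric signature (+, -, -, -):

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Weyl (chiral) representation: gamma^0 = [[0, 1], [1, 0]], gamma^i = [[0, sigma^i], [-sigma^i, 0]].
gamma = [np.block([[np.zeros((2, 2)), I2], [I2, np.zeros((2, 2))]])]
gamma += [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+, -, -, -)

# Verify the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
```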