Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning


Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning

The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)

Feng Lin,1 Haohang Xu,2 Houqiang Li,1,4 Hongkai Xiong,2 Guo-Jun Qi3,*
1 CAS Key Laboratory of GIPAS, EEIS Department, University of Science and Technology of China
2 Department of Electronic Engineering, Shanghai Jiao Tong University
3 Laboratory for MAPLE, Futurewei Technologies
4 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
* corresponding author. † The work of Houqiang Li was supported by NSFC under Contracts 61836011 and 62021001.

Abstract

Unsupervised training of deep representations has recently demonstrated remarkable potential in mitigating the prohibitive expense of annotating labeled data. Among these methods is predicting transformations as a pretext task to self-train representations, which has shown great promise for unsupervised learning. However, existing approaches in this category learn representations by either treating a discrete set of transformations as separate classes, or using the Euclidean distance as the metric to minimize the errors between transformations. None of them has been dedicated to revealing the vital role of the geometry of transformation groups in learning representations. Indeed, an image must continuously transform along the curved manifold of a transformation group rather than through a straight line in the forbidden ambient Euclidean space. This suggests the use of geodesic distance to minimize the errors between the estimated and groundtruth transformations. Particularly, we focus on homographies, a general group of planar transformations containing the Euclidean, similarity and affine transformations as its special cases. To avoid explicitly computing the intractable Riemannian logarithm, we project homographies onto an alternative group of rotation transformations SR(3) with a tractable form of geodesic distance. Experiments demonstrate that the proposed approach to Auto-Encoding Transformations exhibits superior performances on a variety of recognition problems.

Figure 1: The deviation between two transformations should be measured along the curved manifold (Lie group) of transformations rather than through the forbidden Euclidean space of transformations.

Introduction

Representation learning plays a vital role in many machine learning problems. Among them is supervised pretraining from large-scale labeled datasets such as ImageNet and MS COCO for various recognition tasks ranging from image classification and object detection to segmentation. Previous results have shown that representations learned through supervised training on labeled data are critical to competitive performances before being finetuned on downstream tasks. However, it becomes prohibitively expensive to collect labeled data for the purpose of pre-training representation networks, which prevents the application of deep networks in many practical settings. This inspires many works to explore the possibility of pretraining representations in an unsupervised fashion by removing the reliance on labeled data.

Specifically, self-supervised representation learning from transformed images has shown great potential. The idea can be traced back to data augmentation in the supervised learning of convolutional neural networks in AlexNet, where labeled data are augmented with variously transformed copies. The fine structure of transformations has also been explored in training deep network representations. For example, the celebrated convolutional neural networks are trained to transform equivariantly against translations to capture translated visual structures. This idea has been extended to train deep convolutional networks that are equivariant to a family of transformations beyond translations.
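The translation-equivariance claim above can be sanity-checked numerically. The following is a minimal sketch of ours, not from the paper: with periodic ("wrap") boundary handling, correlating a shifted image with a kernel gives exactly the shifted correlation of the original image.

```python
import numpy as np
from scipy.ndimage import correlate

# Minimal numerical check (not from the paper): plain correlation commutes
# with integer translations, i.e. conv(shift(x)) == shift(conv(x)),
# exactly so under periodic boundary conditions.
rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
kernel = rng.standard_normal((3, 3))

dy, dx = 5, -3  # an integer translation
conv_then_shift = np.roll(correlate(image, kernel, mode='wrap'), (dy, dx), axis=(0, 1))
shift_then_conv = correlate(np.roll(image, (dy, dx), axis=(0, 1)), kernel, mode='wrap')

assert np.allclose(conv_then_shift, shift_then_conv)  # equivariance holds exactly
```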
However, these models still rely on supervised pretraining of representations on labeled data.

Recently, unsupervised pretraining of representations was proposed in a self-supervised fashion. The deep networks are trained by decoding pretext signals without labeled data (Kolesnikov, Zhai, and Beyer 2019). State-of-the-art performances have been achieved by representations learned with various transformations as the self-supervisory signals. Among them is the representative paradigm of Auto-Encoding Transformations (AET) (Qi 2019), which extends conventional autoencoders from reconstructing data to reconstructing transformations. The AET seeks to explore the variations of data under various transformations, and to capture the intrinsic patterns of data variations by pretraining representations without the need to annotate data. It relies on a simple criterion: if a representation network could successfully model the change of visual structures before and after a transformation, we should be able to decode the transformation by comparing the representations of the original and transformed images. In other words, transformations are used as a proxy to reveal the patterns of visual structures that are supposed to equivary against the applied transformations. Both deterministic and probabilistic AET models have been presented in the literature, which minimize the Mean-Square Errors (MSE) between the matrix representations of parametric transformations (Zhang et al. 2019) and maximize the mutual information between the learned representation and transformations (Qi 2019), respectively.

Despite the promising performances of both AET models, they are subject to a questionable assumption: they use the Euclidean distance as the metric to minimize the errors between the estimated and the groundtruth transformations. This assumption is problematic. Indeed, transformations have a Lie group structure restricted to a curved manifold of matrix representations rather than filling the ambient Euclidean space. Moreover, a group of transformations can be decomposed into various components of atomic transformations. For example, a homography, one of the most general planar transformations, can be decomposed hierarchically into affine, similarity and Euclidean transformations. A direct summation of the Euclidean distances over these components could yield incomparable results, with some components overweighing the others due to their various scales.

From the perspective of transformation geometry in Lie groups, a more natural way to measure the difference between transformations is the shortest curved path (i.e., the geodesic) connecting two transformations along the manifold, instead of a straight segment in the ambient Euclidean space. As illustrated in Figure 1, given a pair of original and transformed images, one cannot find a valid path of transformations in the Euclidean space that continuously transforms the original image into its transformed counterpart. On the contrary, such a path of transformations can only reside on the curved manifold of a transformation group. Therefore, one should use the geodesic along the manifold as the metric to measure the errors between transformations.

To this end, we will leverage the theory of Lie groups to explore the geometry of transformations, which raises two challenges. First, computing geodesic distances requires the Riemannian logarithm of transformations, which can be intractable for the homography group with no closed form. Instead, we project the transformations onto another group with a tractable expression of geodesic distances. Particularly, we choose the scaled rotation group S+(1)×SO(3) to approximate PG(2), in which the geodesic distance has a tractable closed form that can be efficiently computed without taking an explicit group logarithm. This can greatly reduce the computing overhead in self-training the proposed model in the Lie group of homographies. Second, to train the AET, we need a reasonable parameterization of homographies that can be computed from the deep network. Although a 3 × 3 matrix representation in homogeneous coordinates is available to parameterize homographies directly, it mixes up various components of transformations, such as rotations, translations, and scales, that have incomparable scales of values. The direct matrix parameterization is thus not optimal and may result in unstable numeric results.
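To make the geodesic-versus-Euclidean contrast concrete, here is a minimal numpy sketch of ours (not the authors' code). The geodesic distance on SO(3) is the rotation angle of R1ᵀR2; the `scaled_rotation_dist` form below treats S+(1)×SO(3) as a Riemannian product, combining the log-scale gap and the rotation angle as a root-sum-of-squares, which is our illustrative assumption rather than the paper's exact formula.

```python
import numpy as np

def rot_z(theta):
    """3D rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euclidean_dist(R1, R2):
    """Straight-line (Frobenius) distance in the ambient space R^{3x3}:
    the chord between two points of the manifold."""
    return np.linalg.norm(R1 - R2)

def geodesic_dist(R1, R2):
    """Geodesic distance on SO(3): the rotation angle of R1^T R2."""
    cos_angle = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def scaled_rotation_dist(s1, R1, s2, R2):
    """Assumed product-metric distance on S+(1) x SO(3):
    combine the log-scale gap and the rotation angle (illustrative only)."""
    return np.hypot(np.log(s1 / s2), geodesic_dist(R1, R2))

R1, R2 = rot_z(0.1), rot_z(3.0)
print(euclidean_dist(R1, R2))   # ~2.81: the chord through the ambient space
print(geodesic_dist(R1, R2))    # 2.9: the true angular separation along the arc
print(scaled_rotation_dist(1.0, R1, 1.5, R2))
```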
Thus, inspired by deep homography estimation (DeTone, Malisiewicz, and Rabinovich 2016; Erlik Nowruzi, Laganiere, and Japkowicz 2017; Nguyen et al. 2018; Le et al. 2020), we choose a reparameterization approach by making the network output an implicit expression of homographies: the four-point correspondences between the original and transformed images. The matrix representation is then computed from the output correspondences through an SVD operation. Such a reparameterization of homographies is differentiable, thereby enabling us to minimize the geodesic errors between transformations and train the deep network in an end-to-end fashion.

In summary, our major contributions are as follows:
• We leverage the theory of Lie groups to explore the geometry of transformations, using geodesic distance to minimize the errors between the estimated and groundtruth transformations in AET.
• Due to the intractable Riemannian logarithm, we propose an approximation of PG(2) in a closed form by projecting the transformations onto the scaled rotation group.
• Experiments demonstrate that the proposed method exhibits superior performances on various recognition problems.

Background & Methodology

AET with Euclidean Distances

First, let us review the Auto-Encoding Transformations
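The four-point reparameterization described above is, in classical vision terms, the Direct Linear Transform (DLT). The sketch below is our illustration, not the paper's code: it recovers the 3 × 3 homography from four correspondences via an SVD, which is the differentiable step that enables end-to-end training.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Direct Linear Transform: recover H (up to scale) with dst ~ H @ src
    in homogeneous coordinates, from four point correspondences.
    src, dst: (4, 2) arrays of (x, y) points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)               # (8, 9) homogeneous system A h = 0
    _, _, Vt = np.linalg.svd(A)        # SVD is differentiable, so gradients
    H = Vt[-1].reshape(3, 3)           # can flow through this solve
    return H / H[2, 2]                 # fix the scale ambiguity (assumes H[2,2] != 0)

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[0.1, -0.05], [1.2, 0.1], [0.9, 1.1], [-0.1, 0.95]])
H = homography_from_4pts(src, dst)
p = H @ np.array([1.0, 0.0, 1.0])
print(p[:2] / p[2])                    # ~ [1.2, 0.1], the second dst point
```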
Recommended publications
  • Bearing Rigidity Theory in SE(3) Giulia Michieletto, Angelo Cenedese, Antonio Franchi
Bearing Rigidity Theory in SE(3). Giulia Michieletto, Angelo Cenedese, Antonio Franchi. 55th IEEE Conference on Decision and Control, Dec 2016, Las Vegas, NV, United States. HAL Id: hal-01371084, https://hal.archives-ouvertes.fr/hal-01371084, submitted on 23 Sep 2016. Preprint version; final version at http://ieeexplore.ieee.org/.
Abstract — Rigidity theory has recently emerged as an efficient tool in the control field of coordinated multi-agent systems, such as multi-robot formations and UAV swarms, that are characterized by sensing, communication and movement capabilities. This work aims at describing the rigidity properties for frameworks embedded in the three-dimensional Special Euclidean space SE(3), wherein each agent has 6 DoF.
… determines the infinitesimal rigidity properties of the system, providing a necessary and sufficient condition. In such a context, a framework is generally represented by means of the bar-and-joint model, where agents are represented as points joined by bars whose fixed lengths enforce the inter-…
  • Chapter 5 ANGULAR MOMENTUM and ROTATIONS
Chapter 5 ANGULAR MOMENTUM AND ROTATIONS. In classical mechanics the total angular momentum L⃗ of an isolated system about any fixed point is conserved. The existence of a conserved vector L⃗ associated with such a system is itself a consequence of the fact that the associated Hamiltonian (or Lagrangian) is invariant under rotations, i.e., if the coordinates and momenta of the entire system are rotated "rigidly" about some point, the energy of the system is unchanged and, more importantly, is the same function of the dynamical variables as it was before the rotation. Such a circumstance would not apply, e.g., to a system lying in an externally imposed gravitational field pointing in some specific direction. Thus, the invariance of an isolated system under rotations ultimately arises from the fact that, in the absence of external fields of this sort, space is isotropic; it behaves the same way in all directions. Not surprisingly, therefore, in quantum mechanics the individual Cartesian components Li of the total angular momentum operator L⃗ of an isolated system are also constants of the motion. The different components of L⃗ are not, however, compatible quantum observables. Indeed, as we will see, the operators representing the components of angular momentum along different directions do not generally commute with one another. Thus, the vector operator L⃗ is not, strictly speaking, an observable, since it does not have a complete basis of eigenstates (which would have to be simultaneous eigenstates of all of its non-commuting components). This lack of commutativity often seems, at first encounter, somewhat of a nuisance but, in fact, it intimately reflects the underlying structure of the three-dimensional space in which we are immersed, and has its source in the fact that rotations in three dimensions about different axes do not commute with one another.
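The non-commutativity the chapter points to can be verified directly with the standard spin-1 matrix representation of the angular momentum operators. A quick numpy check of ours (ħ = 1), for illustration:

```python
import numpy as np

# Spin-1 representation of the angular momentum operators (hbar = 1).
s = 1.0 / np.sqrt(2.0)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def comm(A, B):
    return A @ B - B @ A

# The components do not commute; instead [Lx, Ly] = i Lz, cyclically.
assert np.allclose(comm(Lx, Ly), 1j * Lz)
assert np.allclose(comm(Ly, Lz), 1j * Lx)
assert np.allclose(comm(Lz, Lx), 1j * Ly)
print("angular momentum components fail to commute, as claimed")
```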
  • A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty
A Smooth Representation of Belief over SO(3) for Deep Rotation Learning with Uncertainty. Valentin Peretroukhin,1,3 Matthew Giamou,1 David M. Rosen,2 W. Nicholas Greene,3 Nicholas Roy,3 and Jonathan Kelly1. 1 Institute for Aerospace Studies, University of Toronto; 2 Laboratory for Information and Decision Systems, 3 Computer Science & Artificial Intelligence Laboratory, Massachusetts Institute of Technology.
Abstract—Accurate rotation estimation is at the heart of robot perception tasks such as visual odometry and object pose estimation. Deep neural networks have provided a new way to …
  • Introduction to SU(N) Group Theory in the Context of The
215c, 1/30/21 Lecture outline. © Kenneth Intriligator 2021. ⋆ Week 1 recommended reading: Schwartz sections 25.1, 28.2.2, 28.2.3. You can find more about group theory (and some of my notation) here: https://keni.ucsd.edu/s10/
• At the end of 215b, I discussed spontaneous breaking of global symmetries. Some students might not have taken 215b last quarter, so I will briefly review some of that material. Also, this is an opportunity to introduce or review some group theory.
• Physics students first learn about continuous, non-Abelian Lie groups in the context of the 3d rotation group. Let v⃗ and w⃗ be N-component vectors (we'll set N = 3 soon). We can rotate the vectors, or leave the vectors alone and rotate our coordinate system backwards (these are equivalent active vs. passive perspectives), and scalar quantities like v⃗ · w⃗ are invariant. This is because the inner product is via δ_ij and the rotation is a similarity transformation that preserves it. Any rotation matrix R is an orthogonal N × N matrix, Rᵀ = R⁻¹, and the space of all such matrices is the O(N) group manifold. Note that det R = ±1, so the manifold has two components, and we can restrict to the component with det R = 1, which is connected to the identity and called SO(N). The case SO(2), rotations in a plane, is parameterized by an angle θ ≅ θ + 2π, so the group manifold is a circle. If we use z = x + iy, we see that SO(2) ≅ U(1), the group of unitary 1 × 1 matrices e^{iθ}.
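The SO(2) ≅ U(1) identification is easy to see numerically: rotating (x, y) by θ gives the same point as multiplying z = x + iy by e^{iθ}. A small sketch of ours, for illustration:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # an element of SO(2)

x, y = 1.3, -0.4
rotated = R @ np.array([x, y])          # rotate in the plane...
z = (x + 1j * y) * np.exp(1j * theta)   # ...or multiply by e^{i theta} in U(1)

assert np.allclose(rotated, [z.real, z.imag])  # same answer: SO(2) = U(1)
# R is orthogonal with det +1, as required of a rotation
assert np.allclose(R.T @ R, np.eye(2)) and np.isclose(np.linalg.det(R), 1.0)
```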
  • Quaternion Product Units for Deep Learning on 3D Rotation Groups
Quaternion Product Units for Deep Learning on 3D Rotation Groups. Xuan Zhang,1 Shaofei Qin,1 Yi Xu,1,* and Hongteng Xu2. 1 MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; 2 Infinia ML, Inc. / Duke University, Durham, NC, USA.
Abstract — We propose a novel quaternion product unit (QPU) to represent data on 3D rotation groups. The QPU leverages quaternion algebra and the law of the 3D rotation group, representing 3D rotation data as quaternions and merging them via a weighted chain of Hamilton products. We prove that the representations derived by the proposed QPU can be disentangled into "rotation-invariant" features and "rotation-equivariant" features, respectively, which supports the rationality and the efficiency of the QPU in theory. We design quaternion neural networks based on our QPUs and make our models compatible with existing deep learning models.
… methods have made efforts to enhance their robustness to rotations [4, 6, 29, 3, 26]. However, the architectures of their models are tailored for data in the Euclidean space, rather than 3D rotation data. In particular, the 3D rotation data are not closed under the algebraic operations used in their models (i.e., additions and multiplications). The mismatching between data and model makes it difficult for these methods to analyze the influence of rotations on their outputs quantitatively. Although this mismatching problem can be mitigated by augmenting training data with additional rotations [18], this solution gives us no theoretical guarantee. To overcome the challenge mentioned above, we proposed a novel quaternion product unit (QPU) for 3D rotation…
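A "weighted chain of Hamilton products" can be sketched concretely. In the sketch below (ours, reading "weighting" as raising a unit quaternion to a fractional power, i.e. scaling its rotation angle, which is our assumption for illustration), the output stays on the rotation group, which is exactly the closure property the excerpt says ordinary additions and multiplications lack.

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of quaternions q = (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qpow(q, w):
    """Fractional power of a unit quaternion: scale its rotation angle by w."""
    theta = np.arccos(np.clip(q[0], -1.0, 1.0))
    n = np.linalg.norm(q[1:])
    axis = q[1:] / n if n > 1e-12 else np.zeros(3)
    return np.concatenate([[np.cos(w * theta)], np.sin(w * theta) * axis])

def qpu(quats, weights):
    """Weighted chain of Hamilton products (our reading of the QPU idea)."""
    out = np.array([1.0, 0.0, 0.0, 0.0])  # identity quaternion
    for q, w in zip(quats, weights):
        out = hamilton(out, qpow(q, w))
    return out  # still a unit quaternion: output stays on the rotation group

q1 = np.array([np.cos(0.3), np.sin(0.3), 0, 0])  # rotation about x
q2 = np.array([np.cos(0.5), 0, np.sin(0.5), 0])  # rotation about y
out = qpu([q1, q2], weights=[0.7, 0.2])
print(out, np.linalg.norm(out))  # norm ~ 1.0
```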
  • Finite Subgroups of the Group of 3D Rotations
FINITE SUBGROUPS OF THE 3D ROTATION GROUP. Student: Nathan Hayes. Mentor: Jacky Chong.
SYMMETRIES OF THE SPHERE • Let's think about the rigid rotations of the sphere. • Rotations can be composed together to form new rotations. • Every rotation has an inverse rotation. • There is a "non-rotation," or an identity rotation. • For us, the distinct rotational symmetries of the sphere form a group, where each rotation is an element and our operation is composing rotations. • This group happens to be infinite, since you can continuously rotate the sphere. • What if we wanted to study a smaller collection of symmetries, or in some sense a subgroup of our rotations?
THEOREM • A finite subgroup of SO3 (the group of special rotations in 3 dimensions, i.e., rotations in 3D space) is isomorphic to either a cyclic group, a dihedral group, or the rotational symmetry group of one of the Platonic solids. • These can be represented by the following solids: cyclic, dihedral, tetrahedron, cube, dodecahedron. • The rotational symmetries of the cube and octahedron are the same, as are those of the dodecahedron and icosahedron.
GROUP ACTIONS • The interesting portion of study in group theory is not the study of groups themselves, but the study of how they act on things. • A group action is a form of mapping, where every element of a group G represents some permutation of a set X. • For example, consider the group of permutations of the integers 1, 2, 3 acting on the numbers 1, 2, 3, 4. The permutation (1,2,3) → (3,2,1) will swap 1 and 3.
  • 3D Rotations Matrices
Chapter 3: 3D rotation matrices. The goal of this chapter is to exemplify, on a simple and useful manifold, part of the general methods developed previously on Riemannian geometry and Lie groups. This document also provides numerically stable formulas to implement all the basic operations needed to work with rotations in 3D.
3.1 Vector rotations of R³. 3.1.1 Definition. Let {i, j, k} be a right-handed orthonormal basis of the vector space R³ (we do not consider it here as a Euclidean space), and B = {e₁, e₂, e₃} be a set of three vectors. The linear mapping from {i, j, k} to B is given by the matrix
$$R = [e_1, e_2, e_3] = \begin{pmatrix} e_{11} & e_{21} & e_{31} \\ e_{12} & e_{22} & e_{32} \\ e_{13} & e_{23} & e_{33} \end{pmatrix} \qquad (3.1)$$
If we now want B to be an orthonormal basis, we have $\langle e_i \mid e_j \rangle = \delta_{ij}$, which means that the matrix R verifies
$$R R^{\mathsf T} = R^{\mathsf T} R = I_3 \qquad (3.2)$$
This equation implies in turn that $\det(R R^{\mathsf T}) = 1$ (i.e. $\det(R) = \pm 1$) and gives rise to the subset O3 of orthogonal matrices of $M_{3\times 3}$. If we want B to be also right-handed, we have to impose $\det(R) = 1$:
$$SO_3 = \{ R \in M_{3\times 3} : R R^{\mathsf T} = \mathrm{Id} \ \text{and} \ \det(R) = +1 \} \qquad (3.3)$$
Such matrices are called (proper) rotations. Each right-handed orthonormal basis can then be represented by a unique rotation, and conversely each rotation maps {i, j, k} onto a unique right-handed orthonormal basis.
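As a quick companion to equations (3.2) and (3.3), the sketch below (ours, not from the chapter) builds a rotation from an axis and angle via Rodrigues' formula, a standard construction not spelled out in the excerpt, and checks the two defining properties numerically.

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle via Rodrigues' formula:
    R = I + sin(angle) K + (1 - cos(angle)) K^2, K the cross-product matrix."""
    u = np.asarray(axis, float)
    u = u / np.linalg.norm(u)
    K = np.array([[0, -u[2], u[1]],
                  [u[2], 0, -u[0]],
                  [-u[1], u[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = rodrigues([1.0, 2.0, 0.5], 0.9)
assert np.allclose(R @ R.T, np.eye(3))     # orthogonality, eq. (3.2)
assert np.isclose(np.linalg.det(R), 1.0)   # proper rotation, eq. (3.3)
# The columns of R form a right-handed orthonormal basis, as in the definition.
assert np.allclose(np.cross(R[:, 0], R[:, 1]), R[:, 2])
```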
  • Mathematical Basics of Motion and Deformation in Computer Graphics Second Edition
Mathematical Basics of Motion and Deformation in Computer Graphics, Second Edition. Synthesis Lectures on Visual Computing: Computer Graphics, Animation, Computational Photography, and Imaging. Editor: Brian A. Barsky, University of California, Berkeley. This series presents lectures on research and development in visual computing for an audience of professional developers, researchers, and advanced students. Topics of interest include computational photography, animation, visualization, special effects, game design, image techniques, computational geometry, modeling, rendering, and others of interest to the visual computing system developer or researcher.
Titles in the series:
Mathematical Basics of Motion and Deformation in Computer Graphics: Second Edition, Ken Anjyo and Hiroyuki Ochiai, 2017
Digital Heritage Reconstruction Using Super-resolution and Inpainting, Milind G. Padalkar, Manjunath V. Joshi, and Nilay L. Khatri, 2016
Geometric Continuity of Curves and Surfaces, Przemyslaw Kiciak, 2016
Heterogeneous Spatial Data: Fusion, Modeling, and Analysis for GIS Applications, Giuseppe Patanè and Michela Spagnuolo, 2016
Geometric and Discrete Path Planning for Interactive Virtual Worlds, Marcelo Kallmann and Mubbasir Kapadia, 2016
An Introduction to Verification of Visualization Techniques, Tiago Etiene, Robert M. Kirby, and Cláudio T. Silva, 2015
Virtual Crowds: Steps Toward Behavioral Realism, Mubbasir Kapadia, Nuria Pelechano, Jan Allbeck, and Norm Badler, 2015
Finite Element Method Simulation of 3D Deformable Solids, Eftychios Sifakis and Jernej Barbic, 2015
Efficient Quadrature…
  • The Geometry of Eye Rotations and Listing's Law
The Geometry of Eye Rotations and Listing's Law. Amir A. Handzel and Tamar Flash. Department of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot, 76100 Israel.
Abstract — We analyse the geometry of eye rotations, and in particular saccades, using basic Lie group theory and differential geometry. Various parameterizations of rotations are related through a unifying mathematical treatment, and transformations between co-ordinate systems are computed using the Campbell-Baker-Hausdorff formula. Next, we describe Listing's law by means of the Lie algebra so(3). This enables us to demonstrate a direct connection to Donders' law, by showing that eye orientations are restricted to the quotient space SO(3)/SO(2). The latter is equivalent to the sphere S², which is exactly the space of gaze directions. Our analysis provides a mathematical framework for studying the oculomotor system and could also be extended to investigate the geometry of multi-joint arm movements.
1 INTRODUCTION. 1.1 SACCADES AND LISTING'S LAW. Saccades are fast eye movements, bringing objects of interest into the center of the visual field. It is known that eye positions are restricted to a subset of those which are anatomically possible, both during saccades and fixation (Tweed & Vilis, 1990). According to Donders' law, the eye's gaze direction determines its orientation uniquely, and moreover, the orientation does not depend on the history of eye motion which has led to the given gaze direction. A precise specification of the "allowed" subspace of positions is given by Listing's law: the observed orientations of the eye are those which can be reached from the distinguished orientation called primary…
  • On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry, by John H. Conway and Derek A. Smith
BULLETIN (New Series) OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 42, Number 2, Pages 229-243. S 0273-0979(05)01043-8. Article electronically published on January 26, 2005.
On quaternions and octonions: Their geometry, arithmetic, and symmetry, by John H. Conway and Derek A. Smith, A K Peters, Ltd., Natick, MA, 2003, xii + 159 pp., $29.00, ISBN 1-56881-134-9.
Conway and Smith's book is a wonderful introduction to the normed division algebras: the real numbers (R), the complex numbers (C), the quaternions (H), and the octonions (O). The first two are well-known to every mathematician. In contrast, the quaternions and especially the octonions are sadly neglected, so the authors rightly concentrate on these. They develop these number systems from scratch, explore their connections to geometry, and even study number theory in quaternionic and octonionic versions of the integers. Conway and Smith warm up by studying two famous subrings of C: the Gaussian integers and Eisenstein integers. The Gaussian integers are the complex numbers x + iy for which x and y are integers. They form a square lattice. [Figure: the lattice points 0, 1, i in the plane.] Any Gaussian integer can be uniquely factored into 'prime' Gaussian integers, at least if we count differently ordered factorizations as the same and ignore the ambiguity introduced by the invertible Gaussian integers, namely ±1 and ±i. To show this, we can use a straightforward generalization of Euclid's proof for ordinary integers. The key step is to show that given nonzero Gaussian integers a and b, we can always write a = nb + r for some Gaussian integers n and r, where the 'remainder' r has |r| < |b|.
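The key step quoted above is short to implement: divide in C, round each coordinate to the nearest integer, and the remainder automatically satisfies |r| < |b|. A sketch of ours, for illustration:

```python
# Division with remainder in the Gaussian integers Z[i]:
# given nonzero b, find n, r with a = n*b + r and |r| < |b|.
def gauss_divmod(a: complex, b: complex) -> tuple[complex, complex]:
    q = a / b                                   # exact quotient in C
    n = complex(round(q.real), round(q.imag))   # nearest lattice point
    r = a - n * b
    return n, r

a, b = 27 + 23j, 8 + 1j
n, r = gauss_divmod(a, b)
assert a == n * b + r
assert abs(r) < abs(b)   # rounding puts q within distance sqrt(2)/2 of n,
                         # so |r| = |q - n| * |b| <= (sqrt(2)/2)|b| < |b|
print(n, r)              # (4+2j) (-3+3j)
```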
  • B-Spline CNNs on Lie Groups (arXiv:1909.12057v4 [cs.LG], 22 Mar 2021)
Published as a conference paper at ICLR 2020. B-SPLINE CNNS ON LIE GROUPS. Erik J. Bekkers, Centre for Analysis and Scientific Computing, Department of Applied Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, the Netherlands. [email protected]
ABSTRACT. Group convolutional neural networks (G-CNNs) can be used to improve classical CNNs by equipping them with the geometric structure of groups. Central in the success of G-CNNs is the lifting of feature maps to higher dimensional disentangled representations in which data characteristics are effectively learned, geometric data-augmentations are made obsolete, and predictable behavior under geometric transformations (equivariance) is guaranteed via group theory. Currently, however, the practical implementations of G-CNNs are limited to either discrete groups (that leave the grid intact) or continuous compact groups such as rotations (that enable the use of Fourier theory). In this paper we lift these limitations and propose a modular framework for the design and implementation of G-CNNs for arbitrary Lie groups. In our approach the differential structure of Lie groups is used to expand convolution kernels in a generic basis of B-splines that is defined on the Lie algebra. This leads to a flexible framework that enables localized, atrous, and deformable convolutions in G-CNNs by means of respectively localized, sparse and non-uniform B-spline expansions. The impact and potential of our approach is studied on two benchmark datasets: cancer detection in histopathology slides, in which rotation equivariance plays a key role, and facial landmark localization, in which scale equivariance is important.
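To make the "lifting of feature maps" concrete, here is a minimal sketch of ours (not the paper's framework, which handles arbitrary Lie groups): a lifting correlation for the discrete rotation group C4, where the image is correlated with all four 90°-rotated copies of the kernel, producing a feature map with an extra group dimension.

```python
import numpy as np
from scipy.ndimage import correlate

def lift_c4(image, kernel):
    """Lifting layer for the 4-fold rotation group C4: correlate the image
    with each rotated copy of the kernel, stacking along a group axis."""
    return np.stack([correlate(image, np.rot90(kernel, k), mode='wrap')
                     for k in range(4)])

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
kernel = rng.standard_normal((3, 3))

lifted = lift_c4(image, kernel)            # shape (4, 16, 16)

# Equivariance check: rotating the input rotates the spatial axes of the
# output and permutes its group channels, so the representation transforms
# predictably (the exact channel permutation depends on conventions).
lifted_rot = lift_c4(np.rot90(image), kernel)
for k in range(4):
    assert any(np.allclose(lifted_rot[k], np.rot90(lifted[j])) for j in range(4))
```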
  • Dirac Equation: L
    11/16 Lecture outline ⋆ Reading for today’s lecture: Coleman lecture notes pages 211-248. Last time: The basic spinor reps are (1/2, 0) and (0, 1/2), denoted u±; these are • called left and right handed Weyl spinors. They both have D = e−i~σ·eθ/ˆ 2 for a rotation by ±~σ·eφ/ˆ 2 θ around the e axis, but they have D± = e for a boost along the e axis, where v = tanh φ. Theseb 2-component Weyl spinor representations individually playb an important role in non-parity invariant theories, like the weak interactions. Parity ((t, ~x) (t, ~x)) → − exchances them. So, in parity invariant theories, like QED, they are combined into a 4-component Dirac spinor, (1/2, 0)+(0, 1/2): u ψ = + . u− The 4-component spinor rep starts with the clifford algebra γµ,γν =2ηµν1, e.g. { } 0 1 0 σi γ0 = , γi = . 1 0 σi 0 − There are other choices of reps of the clifford algebra. The µν rotation / boost gen- M erators become (the extension / analog of spin S~ for the 3d rotation group) Sµν = 1 [γµ,γν] = 1 γµγν 1 ηµν , satisfies the Lorentz Lie algebra relation. Under a rotation, 4 2 − 2 σk 0 Sij = i ǫ , so taking Ω = ǫ ϕk get under rotations − 2 ijk 0 σk ij − ijk ei~ϕ·~σ/2 0 S[~ϕ] = . 0 ei~ϕ·~σ/2 Under boosts, Ωi,0 = φi, φ~·~σ/2 S[Λ] = e 0 . 0 e−φ~·~σ/2 This exhibits the 2-component reps that we described above.