Group Equivariant Convolutional Networks

Taco S. Cohen ([email protected])
University of Amsterdam

Max Welling ([email protected])
University of Amsterdam; University of California Irvine; Canadian Institute for Advanced Research

Abstract

We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.

1. Introduction

Deep convolutional neural networks (CNNs, convnets) have proven to be very powerful models of sensory data such as images, video, and audio. Although a strong theory of neural network design is currently lacking, a large amount of empirical evidence supports the notion that both convolutional weight sharing and depth (among other factors) are important for good predictive performance.

Convolutional weight sharing is effective because there is a translation symmetry in most perception tasks: the label function and data distribution are both approximately invariant to shifts. By using the same weights to analyze or model each part of the image, a convolution layer uses far fewer parameters than a fully connected one, while preserving the capacity to learn many useful transformations.

Convolution layers can be used effectively in a deep network because all the layers in such a network are translation equivariant: shifting the image and then feeding it through a number of layers is the same as feeding the original image through the same layers and then shifting the resulting feature maps (at least up to edge effects). In other words, the symmetry (translation) is preserved by each layer, which makes it possible to exploit it not just in the first, but also in higher layers of the network.

In this paper we show how convolutional networks can be generalized to exploit larger groups of symmetries, including rotations and reflections. The notion of equivariance is key to this generalization, so in section 2 we will discuss this concept and its role in deep representation learning. After discussing related work in section 3, we recall a number of mathematical concepts in section 4 that allow us to define and analyze the G-convolution in a generic manner.

In section 5, we analyze the equivariance properties of standard CNNs, and show that they are equivariant to translations but may fail to equivary with more general transformations. Using the mathematical framework from section 4, we can define G-CNNs (section 6) by analogy to standard CNNs (the latter being the G-CNN for the translation group). We show that G-convolutions, as well as various kinds of layers used in modern CNNs, such as pooling, arbitrary pointwise nonlinearities, batch normalization and residual blocks, are all equivariant, and thus compatible with G-CNNs. In section 7 we provide concrete implementation details for group convolutions.

In section 8 we report experimental results on MNIST-rot and CIFAR10, where G-CNNs achieve state of the art results (2.28% error on MNIST-rot, and 4.19% resp. 6.46% on augmented and plain CIFAR10). We show that replacing planar convolutions with G-convolutions consistently improves results without additional tuning. In section 9 we provide a discussion of these results and consider several extensions of the method, before concluding in section 10.
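The translation equivariance of convolution layers described above is easy to verify numerically. The following is a minimal sketch (illustrative code, not from the paper) that checks that filtering and then shifting gives the same feature map as shifting and then filtering; it assumes circular shifts and 'wrap' boundary handling so that the identity holds exactly, with no edge effects.

    # Check translation equivariance of a correlation layer:
    # shift-then-filter equals filter-then-shift (exactly, with circular padding).
    import numpy as np
    from scipy.ndimage import correlate

    rng = np.random.default_rng(0)
    image = rng.standard_normal((16, 16))   # a random single-channel "image"
    filt = rng.standard_normal((3, 3))      # a random 3x3 filter

    shift = (2, 5)  # translate by 2 rows and 5 columns

    # Filter the image, then shift the resulting feature map.
    a = np.roll(correlate(image, filt, mode='wrap'), shift, axis=(0, 1))
    # Shift the image, then filter it.
    b = correlate(np.roll(image, shift, axis=(0, 1)), filt, mode='wrap')

    assert np.allclose(a, b)  # both orders agree: the layer is translation equivariant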
2. Structured & Equivariant Representations

Deep neural networks produce a sequence of progressively more abstract representations by mapping the input through a series of parameterized functions (LeCun et al., 2015). In the current generation of neural networks, the representation spaces are usually endowed with very minimal internal structure, such as that of a linear space $\mathbb{R}^n$.

In this paper we construct representations that have the structure of a linear G-space, for some chosen group G. This means that each vector in the representation space has a pose associated with it, which can be transformed by the elements of some group of transformations G. This additional structure allows us to model data more efficiently: a filter in a G-CNN detects co-occurrences of features that have the preferred relative pose, and can match such a feature constellation in every global pose through an operation called the G-convolution.

A representation space can obtain its structure from other representation spaces to which it is connected. For this to work, the network or layer Φ that maps one representation to another should be structure preserving. For G-spaces this means that Φ has to be equivariant:

    \Phi(T_g x) = T'_g \Phi(x),    (1)

That is, transforming an input x by a transformation g (forming T_g x) and then passing it through the learned map Φ should give the same result as first mapping x through Φ and then transforming the representation.

Equivariance can be realized in many ways, and in particular the operators T and T' need not be the same. The only requirement for T and T' is that for any two transformations g and h, we have T(gh) = T(g) T(h) (i.e. T is a linear representation of G).

From equation 1 we see that the familiar concept of invariance is a special kind of equivariance where T'_g is the identity transformation for all g. In deep learning, general equivariance is more useful than invariance because it is impossible to determine if features are in the right spatial configuration if they are invariant.

Besides improving statistical efficiency and facilitating geometrical reasoning, equivariance to symmetry transformations constrains the network in a way that can aid generalization. A network Φ can be non-injective, meaning that non-identical vectors x and y in the input space become identical in the output space (for example, two instances of a face may be mapped onto a single vector indicating the presence of any face). If Φ is equivariant, then the G-transformed inputs T_g x and T_g y must also be mapped to the same output. Their "sameness" (as judged by the network) is preserved under symmetry transformations.
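The two requirements of this section, the homomorphism property T(gh) = T(g)T(h) and invariance as the special case where T'_g is the identity, can be illustrated with planar rotations. The sketch below is our own example (not code from the paper): T maps a rotation angle to its 2x2 rotation matrix, and the Euclidean norm plays the role of an invariant map Φ.

    # Illustrate that T is a linear representation: composing group elements
    # (adding rotation angles) corresponds to multiplying the operators T(g).
    import numpy as np

    def T(theta):
        """The 2x2 rotation matrix representing a rotation by theta."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    g, h = 0.7, 1.9  # two group elements, given as rotation angles

    # Homomorphism property: T(gh) = T(g) T(h).
    assert np.allclose(T(g + h), T(g) @ T(h))

    # Invariance as a special case of equivariance: Phi(x) = ||x|| satisfies
    # Phi(T_g x) = Phi(x), i.e. T'_g is the identity for every g.
    x = np.array([1.0, 2.0])
    assert np.isclose(np.linalg.norm(T(g) @ x), np.linalg.norm(x))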
3. Related Work

There is a large body of literature on invariant representations. Invariance can be achieved by pose normalization using an equivariant detector (Lowe, 2004; Jaderberg et al., 2015) or by averaging a possibly nonlinear function over a group (Reisert, 2008; Skibbe, 2013; Manay et al., 2006; Kondor, 2007).

Scattering convolution networks use wavelet convolutions, nonlinearities and group averaging to produce stable invariants (Bruna & Mallat, 2013). Scattering networks have been extended to use convolutions on the group of translations, rotations and scalings, and have been applied to object and texture recognition (Sifre & Mallat, 2013; Oyallon & Mallat, 2015).

A number of recent works have addressed the problem of learning or constructing equivariant representations. This includes work on transforming autoencoders (Hinton et al., 2011), equivariant Boltzmann machines (Kivinen & Williams, 2011; Sohn & Lee, 2012), equivariant descriptors (Schmidt & Roth, 2012), and equivariant filtering (Skibbe, 2013).

Lenc & Vedaldi (2015) show that the AlexNet CNN (Krizhevsky et al., 2012) trained on ImageNet spontaneously learns representations that are equivariant to flips, scaling and rotation. This supports the idea that equivariance is a good inductive bias for deep convolutional networks. Agrawal et al. (2015) show that useful representations can be learned in an unsupervised manner by training a convolutional network to be equivariant to ego-motion.

Anselmi et al. (2014; 2015) use the theory of locally compact topological groups to develop a theory of statistically efficient learning in sensory cortex. This theory was implemented for the commutative group consisting of time- and vocal-tract-length shifts for an application to speech recognition by Zhang et al. (2015).

Gens & Domingos (2014) proposed an approximately equivariant convolutional architecture that uses sparse, high-dimensional feature maps to deal with high-dimensional groups of transformations. Dieleman et al. (2015) showed that rotation symmetry can be exploited in convolutional networks for the problem of galaxy morphology prediction by rotating feature maps, effectively learning an equivariant representation. This work was later extended (Dieleman et al., 2016) and evaluated on various computer vision problems that have cyclic symmetry.

Cohen & Welling (2014) showed that the concept of disentangling can be understood as a reduction of the operators T_g in an equivariant representation, and later related this notion of disentangling to the more familiar statistical notion of decorrelation (Cohen & Welling, 2015).

4. Mathematical Framework

In this section we present a mathematical framework that enables a simple and generic definition and analysis of G-CNNs for various groups G. We begin by defining symmetry groups, and study in particular two groups that are used in the G-CNNs we have built so far. Writing out the composition of two group elements directly in terms of their integer-tuple parameterizations yields equations that are cumbersome; hence, our preferred method of composing two group elements represented by integer tuples is to convert them to matrices, multiply these matrices, and then convert the resulting matrix back to a tuple of integers (using the atan2 function to obtain the rotation coordinate r).
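This matrix-based composition is straightforward to implement. The sketch below is our illustration, not the paper's released code; the helper names and the exact homogeneous-matrix parameterization are our assumptions. It composes two elements of the group generated by translations and 90-degree rotations, each given as an integer tuple (r, u, v) with rotation coordinate r in {0, 1, 2, 3} and translation (u, v), by converting to 3x3 matrices, multiplying, and converting back with atan2.

    # Compose group elements given as integer tuples (r, u, v) by going
    # through their homogeneous 3x3 matrix form, as described in the text.
    import numpy as np

    def to_matrix(r, u, v):
        """Matrix for a rotation by r * 90 degrees followed by translation by (u, v)."""
        t = r * np.pi / 2
        return np.array([[np.cos(t), -np.sin(t), u],
                         [np.sin(t),  np.cos(t), v],
                         [0.0,        0.0,       1.0]])

    def to_tuple(m):
        """Recover the integer tuple (r, u, v), using atan2 to obtain r."""
        r = int(round(np.arctan2(m[1, 0], m[0, 0]) / (np.pi / 2))) % 4
        return r, int(round(m[0, 2])), int(round(m[1, 2]))

    def compose(g1, g2):
        """Compose two group elements by multiplying their matrices."""
        return to_tuple(to_matrix(*g1) @ to_matrix(*g2))

    print(compose((1, 2, 0), (1, 0, 3)))  # -> (2, -1, 0): rotations add mod 4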
