Deep Image Homography Estimation


Deep Image Homography Estimation

Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
Magic Leap, Inc., Mountain View, CA
[email protected] | [email protected] | [email protected]

Abstract—We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8 degree of freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners from one image into the second image. Our networks are trained in an end-to-end fashion using warped MS-COCO images. Our approach works without the need for separate local feature detection and transformation estimation stages. Our deep models are compared to a traditional homography estimator based on ORB features and we highlight the scenarios where HomographyNet outperforms the traditional technique. We also describe a variety of applications powered by deep homography estimation, thus showcasing the flexibility of a deep learning approach.

I. INTRODUCTION

Sparse 2D feature points are the basis of most modern Structure from Motion and SLAM techniques [9]. These sparse 2D features are typically known as corners, and in all geometric computer vision tasks one must balance the errors in corner detection methods with geometric estimation errors. Even the simplest geometric methods, like estimating the homography between two images, rely on the error-prone corner-detection method.

Estimating a 2D homography (or projective transformation) from a pair of images is a fundamental task in computer vision. The homography is an essential part of monocular SLAM systems in scenarios such as:

• Rotation only movements
• Planar scenes
• Scenes in which objects are very far from the viewer

It is well-known that the transformation relating two images undergoing a rotation about the camera center is a homography, and it is not surprising that homographies are essential for creating panoramas [3]. To deal with planar and mostly-planar scenes, the popular SLAM algorithm ORB-SLAM [14] uses a combination of homography estimation and fundamental matrix estimation. Augmented Reality applications based on planar structures and homographies have been well-studied [16]. Camera calibration techniques using planar structures [20] also rely on homographies.

The traditional homography estimation pipeline is composed of two stages: corner estimation and robust homography estimation. Robustness is introduced into the corner detection stage by returning a large and over-complete set of points, while robustness in the homography estimation step shows up as heavy use of RANSAC or robustification of the squared loss function. Since corners are not as reliable as man-made linear structures, the research community has put considerable effort into adding line features [18] and more complicated geometries [8] into the feature detection step. What we really want is a single robust algorithm that, given a pair of images, simply returns the homography relating the pair. Instead of manually engineering corner-ish features, line-ish features, etc., is it possible for the algorithm to learn its own set of primitives? We want to go even further, and add the transformation estimation step as the last part of a deep learning pipeline, thus giving us the ability to learn the entire homography estimation pipeline in an end-to-end fashion.

Recent research in dense or direct featureless SLAM algorithms such as LSD-SLAM [6] indicates promise in using a full image for geometric computer vision tasks. Concurrently, deep convolutional networks are setting state-of-the-art benchmarks in semantic tasks such as image classification, semantic segmentation and human pose estimation. Additionally, recent works such as FlowNet [7], Deep Semantic Matching [1] and Eigen et al.'s Multi-Scale Deep Network [5] present promising results for dense geometric computer vision tasks like optical flow and depth estimation. Even robotic tasks like visual odometry are being tackled with convolutional neural networks [4].

In this paper, we show that the entire homography estimation problem can be solved by a deep convolutional neural network (see Figure 1). Our contributions are as follows: we present a new VGG-style [17] network for the homography estimation task. We show how to use the 4-point parameterization [2] to get a well-behaved deep estimation problem. Because deep networks require a lot of data to be trained from scratch, we share our recipe for creating a seemingly infinite dataset of (I_A, I_B, H^AB) training triplets from an existing dataset of real images like the MS-COCO dataset. We present an additional formulation of the homography estimation problem as classification, which produces a distribution over homographies and can be used to determine the confidence of an estimated homography.

[Fig. 1: Deep Image Homography Estimation using ConvNets. HomographyNet is a Deep Convolutional Neural Network which directly produces the Homography relating two images. Our method does not require separate corner detection and homography estimation steps and all parameters are trained in an end-to-end fashion using a large dataset of labeled images. The diagram shows the 128x128x2 stacked input passing through eight 3x3 convolutional layers (two at 128x128x64, two at 64x64x64, two at 32x32x128, two at 16x16x128, with max pooling between pairs), a 1024-unit fully-connected layer, a second fully-connected layer, and a softmax output of size 8x21.]

II. THE 4-POINT HOMOGRAPHY PARAMETERIZATION

The simplest way to parameterize a homography is with a 3x3 matrix and a fixed scale. The homography maps [u, v], the pixels in the left image, to [u', v'], the pixels in the right image, and is defined up to scale (see Equation 1):

$$\begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix} \sim \begin{pmatrix} H_{11} & H_{12} & H_{13} \\ H_{21} & H_{22} & H_{23} \\ H_{31} & H_{32} & H_{33} \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \quad (1)$$

However, if we unroll the 8 (or 9) parameters of the homography into a single vector, we'll quickly realize that we are mixing both rotational and translational terms. For example, the submatrix [H11 H12; H21 H22] represents the rotational terms in the homography, while the vector [H13 H23] is the translational offset. Balancing the rotational and translational terms as part of an optimization problem is difficult.

We found that an alternate parameterization, one based on a single kind of location variable, namely the corner location, is more suitable for our deep homography estimation task. The 4-point parameterization has been used in traditional …

… applying random projective transformations to a large dataset of natural images. The process is illustrated in Figure 3 and described below.

To generate a single training example, we first randomly crop a square patch from the larger image I at position p (we avoid the borders to prevent bordering artifacts later in the data generation pipeline). This random crop is I_p. Then, the four corners of Patch A are randomly perturbed by values within the range [-ρ, ρ]. The four correspondences define a homography H^AB. Then, the inverse of this homography H^BA = (H^AB)^-1 is applied to the large image to produce image I'. A second patch I'_p is cropped from I' at position p. The two grayscale patches, I_p and I'_p, are then stacked channel-wise to create the 2-channel image which is fed directly into our ConvNet. The 4-point parameterization of H^AB is then used as the associated ground-truth training label.

Managing the training image generation pipeline gives us full control over the kinds of visual effects we want to model. For example, to make our method more robust to motion blur, we can apply such blurs to the image in our training set. If we want the method to be robust to occlusions, we can …
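The data-generation recipe above is easy to reproduce. Below is a minimal sketch in Python with OpenCV and NumPy; it is not the authors' released code, and the function name and default values are our own illustrative choices (the figure's 128x128 patch size is kept):

```python
import cv2
import numpy as np

def make_training_example(image, patch_size=128, rho=32, rng=None):
    """Generate one (2-channel input, 4-point label) pair as described above."""
    rng = rng or np.random.default_rng()
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape

    # Choose the patch position p, keeping a rho-wide margin so perturbed
    # corners never leave the image (avoids bordering artifacts).
    x = int(rng.integers(rho, w - patch_size - rho))
    y = int(rng.integers(rho, h - patch_size - rho))
    corners_a = np.float32([[x, y], [x + patch_size, y],
                            [x + patch_size, y + patch_size],
                            [x, y + patch_size]])

    # Perturb each corner independently by values in [-rho, rho]; the four
    # correspondences define the homography H_AB.
    offsets = rng.integers(-rho, rho + 1, size=(4, 2)).astype(np.float32)
    corners_b = corners_a + offsets
    h_ab = cv2.getPerspectiveTransform(corners_a, corners_b)

    # Warp the full image with the inverse homography and crop the second
    # patch at the same position p.
    warped = cv2.warpPerspective(gray, np.linalg.inv(h_ab), (w, h))
    patch_a = gray[y:y + patch_size, x:x + patch_size]
    patch_b = warped[y:y + patch_size, x:x + patch_size]

    # Stack the two grayscale patches channel-wise; the eight corner offsets
    # (the 4-point parameterization of H_AB) are the ground-truth label.
    net_input = np.stack([patch_a, patch_b], axis=-1)  # patch_size x patch_size x 2
    label = offsets.flatten()                          # 8 real values, in pixels
    return net_input, label
```

Note how the label stays in a single unit, pixels of corner displacement, which is exactly the motivation given in Section II for preferring the 4-point parameterization over the mixed rotational and translational entries of the 3x3 matrix.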
Recommended publications
  • Globally Optimal Affine and Metric Upgrades in Stratified Autocalibration
Globally Optimal Affine and Metric Upgrades in Stratified Autocalibration. Manmohan Chandraker†, Sameer Agarwal‡, David Kriegman†, Serge Belongie†. [email protected], [email protected], [email protected], [email protected]. † University of California, San Diego. ‡ University of Washington, Seattle.

Abstract. We present a practical, stratified autocalibration algorithm with theoretical guarantees of global optimality. Given a projective reconstruction, the first stage of the algorithm upgrades it to affine by estimating the position of the plane at infinity. The plane at infinity is computed by globally minimizing a least squares formulation of the modulus constraints. In the second stage, the algorithm upgrades this affine reconstruction to a metric one by globally minimizing the infinite homography relation to compute the dual image of the absolute conic (DIAC). The positive semidefiniteness of the DIAC is explicitly enforced as part of the optimization.

[From the introduction:] ... parameters of the cameras, which is commonly approached by estimating the dual image of the absolute conic (DIAC). A variety of linear methods exist towards this end, however, they are known to perform poorly in the presence of noise [10]. Perhaps more significantly, most methods a posteriori impose the positive semidefiniteness of the DIAC, which might lead to a spurious calibration. Thus, it is important to impose the positive semidefiniteness of the DIAC within the optimization, not as a post-processing step. This paper proposes global minimization algorithms for both stages of stratified autocalibration that furnish theoretical certificates of optimality. That is, they return a solution at most ε away from the global minimum, for arbitrarily small ε.
  • Projective Geometry: a Short Introduction
Projective Geometry: A Short Introduction. Lecture Notes, Edmond Boyer. Master MOSIG, Introduction to Projective Geometry, Grenoble Universities.

Contents: 1 Introduction (1.1 Objective; 1.2 Historical Background; 1.3 Bibliography). 2 Projective Spaces (2.1 Definitions; 2.2 Properties; 2.3 The hyperplane at infinity). 3 The projective line (3.1 Introduction; 3.2 Projective transformation of P^1; 3.3 The cross-ratio). 4 The projective plane (4.1 Points and lines; 4.2 Line at infinity; 4.3 Homographies; 4.4 Conics; 4.5 Affine transformations; 4.6 Euclidean transformations; 4.7 Particular transformations; 4.8 Transformation hierarchy).

Chapter 1: Introduction. 1.1 Objective. The objective of this course is to give basic notions and intuitions on projective geometry. The interest of projective geometry arises in several visual computing domains, in particular computer vision modelling and computer graphics. It provides a mathematical formalism to describe the geometry of cameras and the associated transformations, hence enabling the design of computational approaches that manipulate 2D projections of 3D objects. In that respect, a fundamental aspect is the fact that objects at infinity can be represented and manipulated with projective geometry, in contrast to Euclidean geometry. This allows perspective deformations to be represented as projective transformations. (Figure 1.1: Example of perspective deformation or 2D projective transformation.) Another argument is that Euclidean geometry is sometimes difficult to use in algorithms, with particular cases arising from non-generic situations (e.g. …
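Among the topics listed above, the cross-ratio (Section 3.3) is the basic invariant of the projective line, and its invariance under homographies is easy to check numerically. A small self-contained Python sketch (our own illustration, not from the notes; the matrix entries are arbitrary):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points, given as scalars."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_p1(M, x):
    """A projective transformation of P^1: x -> (m11*x + m12) / (m21*x + m22)."""
    return (M[0, 0] * x + M[0, 1]) / (M[1, 0] * x + M[1, 1])

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])            # any invertible 2x2 matrix works
pts = [0.5, 1.0, 2.0, 4.0]
images = [homography_p1(M, x) for x in pts]

print(cross_ratio(*pts))              # 1.2857...
print(cross_ratio(*images))           # same value: the cross-ratio is invariant
```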
  • Robot Vision: Projective Geometry
Robot Vision: Projective Geometry. Ass.Prof. Friedrich Fraundorfer, SS 2018.

Learning goals:
• Understand homogeneous coordinates
• Understand points, line, plane parameters and interpret them geometrically
• Understand point, line, plane interactions geometrically
• Analytical calculations with lines, points and planes
• Understand the difference between Euclidean and projective space
• Understand the properties of parallel lines and planes in projective space
• Understand the concept of the line and plane at infinity

Outline:
• 1D projective geometry
• 2D projective geometry ▫ Homogeneous coordinates ▫ Points, lines ▫ Duality
• 3D projective geometry ▫ Points, lines, planes ▫ Duality ▫ Plane at infinity

Literature:
• Multiple View Geometry in Computer Vision. Richard Hartley and Andrew Zisserman. Cambridge University Press, March 2004.
• Mundy, J.L. and Zisserman, A., Geometric Invariance in Computer Vision, Appendix: Projective Geometry for Machine Vision, MIT Press, Cambridge, MA, 1992. Available online: www.cs.cmu.edu/~ph/869/papers/zisser-mundy.pdf

Motivation: image formation [Source: Charles Gunn]; parallel lines [Source: Flickr]; epipolar constraint: a world point X, the image points x and x', and the camera centres C and C' (related by R, T) lie in a common epipolar plane, giving the constraint x'ᵀEx = 0 (a numerical check follows below).

Euclidean geometry vs. projective geometry. Definitions:
• Geometry is the teaching of points, lines, planes and their relationships and properties (angles)
• Geometries are defined based on invariances (what is changing if you transform a configuration of points, lines etc.)
• Geometric transformations …
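The epipolar constraint from the motivation slide can be verified in a few lines: with relative pose (R, t) and essential matrix E = [t]ₓR, any world point projected into both views satisfies x'ᵀEx = 0. A Python sketch with an arbitrary made-up pose (not from the course material):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Made-up relative pose between the two cameras: P2 = R @ P1 + t.
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R                      # essential matrix

X1 = np.array([0.3, -0.2, 5.0])      # a world point in camera-1 coordinates
X2 = R @ X1 + t                      # the same point in camera-2 coordinates
x1 = X1 / X1[2]                      # normalized image point in view 1
x2 = X2 / X2[2]                      # normalized image point in view 2

print(x2 @ E @ x1)                   # ~0: the epipolar constraint holds
```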
  • Feature Matching and Heat Flow in Centro-Affine Geometry
Symmetry, Integrability and Geometry: Methods and Applications, SIGMA 16 (2020), 093, 22 pages.

Feature Matching and Heat Flow in Centro-Affine Geometry. Peter J. Olver†, Changzheng Qu‡ and Yun Yang§.
† School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA. E-mail: [email protected]. URL: http://www.math.umn.edu/~olver/
‡ School of Mathematics and Statistics, Ningbo University, Ningbo 315211, P.R. China. E-mail: [email protected]
§ Department of Mathematics, Northeastern University, Shenyang, 110819, P.R. China. E-mail: [email protected]

Received April 02, 2020, in final form September 14, 2020; Published online September 29, 2020. https://doi.org/10.3842/SIGMA.2020.093

Abstract. In this paper, we study the differential invariants and the invariant heat flow in centro-affine geometry, proving that the latter is equivalent to the inviscid Burgers' equation. Furthermore, we apply the centro-affine invariants to develop an invariant algorithm to match features of objects appearing in images. We show that the resulting algorithm compares favorably with the widely applied scale-invariant feature transform (SIFT), speeded up robust features (SURF), and affine-SIFT (ASIFT) methods.

Key words: centro-affine geometry; equivariant moving frames; heat flow; inviscid Burgers' equation; differential invariant; edge matching. 2020 Mathematics Subject Classification: 53A15; 53A55.

1 Introduction. The main objective in this paper is to study differential invariants and invariant curve flows, in particular the heat flow, in centro-affine geometry. In addition, we will present some basic applications to feature matching in camera images of three-dimensional objects, comparing our method with other popular algorithms.
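The paper's centro-affine matching algorithm is not reproduced in this excerpt; for orientation, this is roughly what the SIFT baseline it is compared against looks like in OpenCV (our own sketch; the image paths are placeholders, and 0.75 is the conventional Lowe ratio threshold):

```python
import cv2

# Two views of the same object; the file names are placeholders.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep 2-nearest-neighbour matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher()
pairs = [p for p in matcher.knnMatch(des1, des2, k=2) if len(p) == 2]
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative SIFT matches")
```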
  • Estimating Projective Transformation Matrix (Collineation, Homography)
Estimating Projective Transformation Matrix (Collineation, Homography). Zhengyou Zhang, Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA. E-mail: [email protected]. November 1993; Updated May 29, 2010. Microsoft Research Technical Report MSR-TR-2010-63.

Note: The original version of this report was written in November 1993 while I was at INRIA. It was circulated among very few people, and never published. I am now publishing it as a tech report, adding Section 7, with the hope that it could be useful to more people.

Abstract. In many applications, one is required to estimate the projective transformation between two sets of points, which is also known as collineation or homography. This report presents a number of techniques for this purpose.

Contents: 1 Introduction; 2 Method 1: Five-correspondences case; 3 Method 2: Compute P together with the scalar factors; 4 Method 3: Compute P only (a batch approach); 5 Method 4: Compute P only (an iterative approach); 6 Method 5: Compute P through normalization; 7 Method 6: Maximum Likelihood Estimation.

1 Introduction. Projective Transformation is a concept used in projective geometry to describe how a set of geometric objects maps to another set of geometric objects in projective space. The basic intuition behind projective space is to add extra points (points at infinity) to Euclidean space, and the geometric transformation allows one to move those extra points to traditional points, and vice versa. Homogeneous coordinates are used in projective space much as Cartesian coordinates are used in Euclidean space. A point in two dimensions is described by a 3D vector. A condensed sketch of a batch estimation approach follows below.
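As a concrete companion to the report's batch methods, here is a minimal SVD-based (DLT-style) homography estimator in Python/NumPy. This is our own condensed illustration in the spirit of the "batch approach", not code from the report:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 projective transformation mapping src[i] -> dst[i].

    src, dst: (N, 2) arrays with N >= 4 point correspondences. Each
    correspondence contributes two rows to a homogeneous system A h = 0,
    solved in the least-squares sense by SVD: the solution is the right
    singular vector with the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free scale

# Quick self-check with a known homography (values arbitrary).
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [37, 62]], float)
pts_h = np.column_stack([src, np.ones(len(src))]) @ H_true.T
dst = pts_h[:, :2] / pts_h[:, 2:]
print(np.allclose(estimate_homography(src, dst), H_true))  # True
```

For noisy data one would additionally normalize the point sets before building A (the report's Method 5 is a normalization-based variant) and refine with a maximum-likelihood step.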
  • Rotations • Camera Calibration • Homography • RANSAC
Agenda: • Rotations • Camera calibration • Homography • RANSAC

Geometric Transformations (from Computer Vision: Algorithms and Applications, September 3, 2010 draft):

Transformation      | Matrix    | # DoF | Preserves
translation         | [ I  t ]  |   2   | orientation
rigid (Euclidean)   | [ R  t ]  |   3   | lengths
similarity          | [ sR t ]  |   4   | angles
affine              | [ A ]     |   6   | parallelism
projective          | [ H̃ ]    |   8   | straight lines

Table 3.5: Hierarchy of 2D coordinate transformations. Each transformation also preserves the properties listed in the rows below it, i.e., similarity preserves not only angles but also parallelism and straight lines. The 2x3 matrices are extended with a third [0ᵀ 1] row to form a full 3x3 matrix for homogeneous coordinate transformations.

Let's define the families of transformations by the properties that they preserve. The table gives examples of such transformations, which are based on the 2D geometric transformations shown in Figure 2.4. The formulas for these transformations were originally given in Table 2.1 and are reproduced here in Table 3.5 for ease of reference.

In general, given a transformation specified by a formula x' = h(x) and a source image f(x), how do we compute the values of the pixels in the new image g(x), as given in (3.88)? Think about this for a minute before proceeding and see if you can figure it out. If you are like most people, you will come up with an algorithm that looks something like Algorithm 3.1. This process is called forward warping or forward mapping and is shown in Figure 3.46a; a sketch of it follows below.
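A minimal Python/NumPy sketch of that forward-warping idea (our own illustration of the algorithm the excerpt alludes to, assuming a grayscale source image and a 3x3 homography H):

```python
import numpy as np

def forward_warp(src, H, out_shape):
    """Naive forward warping: send every source pixel x to x' = H(x).

    Demonstrates the algorithm the text describes, including its weakness:
    destination pixels that no source pixel lands on stay empty.
    """
    dst = np.zeros(out_shape, dtype=src.dtype)
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous coords
    warped = H @ pts
    u = np.round(warped[0] / warped[2]).astype(int)           # dehomogenize
    v = np.round(warped[1] / warped[2]).astype(int)
    ok = (0 <= u) & (u < out_shape[1]) & (0 <= v) & (v < out_shape[0])
    dst[v[ok], u[ok]] = src[ys.ravel()[ok], xs.ravel()[ok]]
    return dst
```

The holes left where no rounded destination pixel is hit are the classic artifact of forward mapping, which is why practical implementations invert H and sample the source image instead (inverse warping).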
  • Automorphisms of Classical Geometries in the Sense of Klein
Automorphisms of classical geometries in the sense of Klein. Navarro, A.∗ Navarro, J.† November 11, 2018.

Abstract. In this note, we compute the group of automorphisms of Projective, Affine and Euclidean Geometries in the sense of Klein. As an application, we give a simple construction of the outer automorphism of S6.

1 Introduction. Let Pn be the set of 1-dimensional subspaces of an (n+1)-dimensional vector space E over a (commutative) field k. This standard definition does not capture the "structure" of the projective space although it does point out its automorphisms: they are projectivizations of semilinear automorphisms of E, also known as Staudt projectivities. A different approach (v. gr. [1]) defines the projective space as a lattice (the lattice of all linear subvarieties) satisfying certain axioms. Then, the so named Fundamental Theorem of Projective Geometry ([1], Thm 2.26) states that, when n > 1, collineations of Pn (i.e., bijections preserving alignment, which are the automorphisms of this lattice structure) are precisely Staudt projectivities. In this note we are concerned with geometries in the sense of Klein: a geometry is a pair (X, G) where X is a set and G is a subgroup of the group Biy(X) of all bijections of X. In Klein's view, Projective Geometry is the pair (Pn, PGLn), where PGLn is the group of projectivizations of k-linear automorphisms of the vector space E (see Example 2.2). The main result of this note is a computation, analogous to the aforementioned theorem for collineations, but in the realm of Klein geometries: Theorem 3.4. The group of automorphisms of the Projective Geometry (Pn, PGLn) is the group of Staudt projectivities, for any n ≥ 1.
  • Coset Geometries with Trialities and Their Reduced Incidence Graphs
Acta Math. Univ. Comenianae, Vol. LXXXVIII, 3 (2019), pp. 911–916. COSET GEOMETRIES WITH TRIALITIES AND THEIR REDUCED INCIDENCE GRAPHS. D. LEEMANS and K. STOKES.

Abstract. In this article we explore combinatorial trialities of incidence geometries. We give a construction that uses coset geometries to construct examples of incidence geometries with trialities and prescribed automorphism group. We define the reduced incidence graph of the geometry to be the oriented graph obtained as the quotient of the geometry under the triality. Our chosen examples exhibit interesting features relating the automorphism group of the geometry and the automorphism group of the reduced incidence graphs.

1. Introduction. The projective space of dimension n over a field F is an incidence geometry PG(n, F). It has n types of elements; the projective subspaces: the points, the lines, the planes, and so on. The elements are related by incidence, defined by inclusion. A collinearity of PG(n, F) is an automorphism preserving incidence and type. According to the Fundamental theorem of projective geometry, every collinearity is composed by a homography and a field automorphism. A duality of PG(n, F) is an automorphism preserving incidence that maps elements of type k to elements of type n − k − 1. Dualities are also called correlations or reciprocities. Geometric dualities, that is, dualities in projective spaces, correspond to sesquilinear forms. Therefore the classification of the sesquilinear forms also gives a classification of the geometric dualities. A polarity is a duality δ that is an involution, that is, δ² = Id. A duality can always be expressed as a composition of a polarity and a collinearity.
  • Lecture 16: Planar Homographies
Lecture 16: Planar Homographies. Robert Collins, CSE486, Penn State. Motivation: points on a planar surface.

Review, forward projection: a world point maps through world, camera, film, and pixel coordinates in turn, (U, V, W)ᵀ = M_aff M_proj M_ext (X, Y, Z, 1)ᵀ; the composed projection is a single 3x4 matrix with entries m11 ... m34, defined up to scale.

World-to-camera transformation: rotate to align axes and translate by −C to align origins, P_C = R(P_W − C) = R P_W + T.

Perspective matrix equation (camera coordinates): x = f X/Z, y = f Y/Z, or in homogeneous form

$$\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad p = M_{int} \cdot P_C.$$

Film-to-pixel coordinates: a 2D affine transformation M_aff maps film coords (x, y) to pixel coordinates (u, v), so u = M_int P_C = M_aff M_proj P_C.

Projection of points on a planar surface: composing the perspective projection with the rotation + translation of the plane yields a homography H (planar projective transformation). Punchline: for planar surfaces, 3D to 2D perspective projection reduces to a 2D to 2D transformation.
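That punchline can be verified in a few lines: for world points on the plane Z = 0, the full projection K[R | t] collapses to the 3x3 homography H = K[r1 r2 t]. A Python sketch (our own illustration; intrinsics and pose are arbitrary placeholder values):

```python
import numpy as np

# Placeholder intrinsics and relative pose (arbitrary illustrative values).
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
theta = 0.4
R = np.array([[1, 0, 0],
              [0, np.cos(theta), -np.sin(theta)],
              [0, np.sin(theta),  np.cos(theta)]])  # tilt about the x-axis
t = np.array([0.1, -0.2, 5.0])

# A world point on the plane Z = 0, projected the long way ...
Xw = np.array([0.7, -0.3, 0.0])
x_full = K @ (R @ Xw + t)
x_full /= x_full[2]

# ... and via the 2D-to-2D homography H = K [r1 r2 t] applied to (X, Y, 1).
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
x_homog = H @ np.array([Xw[0], Xw[1], 1.0])
x_homog /= x_homog[2]

print(np.allclose(x_full, x_homog))  # True: projection reduced to a homography
```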
  • Euclidean Versus Projective Geometry
Projective Geometry: Euclidean versus Projective Geometry

• Euclidean geometry describes shapes "as they are": properties of objects that are unchanged by rigid motions (lengths, angles, parallelism).
• Projective geometry describes objects "as they appear": lengths, angles and parallelism become "distorted" when we look at objects. It is a mathematical model for how images of the 3D world are formed.

Overview: tools of algebraic geometry; informal description of projective geometry in a plane; descriptions of lines and points; points at infinity and line at infinity; projective transformations, projectivity matrix; example of application; special projectivities (affine transforms, similarities, Euclidean transforms); cross-ratio invariance for points, lines, planes.

Tools of algebraic geometry:
1. A plane passing through the origin and perpendicular to the vector n = (a, b, c) is the locus of points x = (x1, x2, x3) such that n · x = 0, i.e. a x1 + b x2 + c x3 = 0; a plane through the origin is completely defined by (a, b, c).
2. A vector parallel to the intersection of two planes (a, b, c) and (a', b', c') is obtained by the cross-product: (a'', b'', c'') = (a, b, c) × (a', b', c').
3. The plane passing through two points x and x' is defined by (a, b, c) = x × x'.
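Read projectively, these three tools become the familiar point/line cross-product recipes of the projective plane. A short NumPy check (the coordinates are our own example values):

```python
import numpy as np

# Two points of the projective plane in homogeneous coordinates.
p = np.array([1.0, 2.0, 1.0])
q = np.array([4.0, 1.0, 1.0])

# Tool 3: the line through two points is their cross product.
line = np.cross(p, q)              # (a, b, c) with a*x1 + b*x2 + c*x3 = 0
print(line @ p, line @ q)          # 0.0 0.0 -- both points lie on the line

# Tool 2: dually, the intersection of two lines is their cross product.
vertical = np.array([1.0, 0.0, -2.0])   # the line x = 2
meet = np.cross(vertical, line)
print(meet / meet[2])              # [2. 1.666... 1.]: the intersection point
```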
  • Non-Injective Representations of a Closed Surface Group into PSL(2, R) (arXiv:math/0502585v1 [math.GT] 28 Feb 2005)
NON-INJECTIVE REPRESENTATIONS OF A CLOSED SURFACE GROUP INTO PSL(2, R). LOUIS FUNAR AND MAXIME WOLFF.

Abstract. Let e denote the Euler class on the space Hom(Γg, PSL(2, R)) of representations of the fundamental group Γg of the closed surface Σg. Goldman showed that the connected components of Hom(Γg, PSL(2, R)) are precisely the inverse images e⁻¹(k), for 2 − 2g ≤ k ≤ 2g − 2, and that the components of Euler class 2 − 2g and 2g − 2 consist of the injective representations whose image is a discrete subgroup of PSL(2, R). We prove that non-faithful representations are dense in all the other components. We show that the image of a discrete representation essentially determines its Euler class. Moreover, we show that for every genus and possible corresponding Euler class, there exist discrete representations.

1. Introduction. Let Σg be the closed oriented surface of genus g ≥ 2. Let Γg denote its fundamental group, and Rg the representation space Hom(Γg, PSL(2, R)). Elements of Rg are determined by the images of the 2g generators of Γg, subject to the single relation defining Γg. It follows that Rg has a real algebraic structure (see e.g. [1]). Furthermore, being a subset of (PSL(2, R))^2g, it is naturally equipped with a Hausdorff topology. We can define an invariant e : Rg → Z, called the Euler class, as an obstruction class or as the index of circle bundles associated to representations in Rg (see [13, 5, 9]). In [9], which may be considered to be the starting point of the subject, Goldman showed that the connected components of Rg are exactly the fibers e⁻¹(k), for 2 − 2g ≤ k ≤ 2g − 2.
  • Homographic Wavelet Analysis in Identification of Characteristic Image Features
Optica Applicata, Vol. XXX, No. 2–3, 2000. Homographic wavelet analysis in identification of characteristic image features. Tadeusz Niedziela, Artur Stankiewicz, Institute of Applied Physics, Military University of Technology, ul. Kaliskiego 2, 00-908 Warszawa, Poland. Mirosław Świętochowski, Department of Physics, Warsaw University, ul. Hoża 69, 00-681 Warszawa, Poland.

Wavelet transformations connected with subgroups of SL(2, C), performed as homographic transformations of a plane, have been applied to identification of characteristic features of two-dimensional images. It has been proven that wavelet transformations exist for the symmetry groups SU(1,1) and SL(2,R).

1. Introduction. In the present work, the problem of an analysis, processing and recognition of a two-dimensional image has been studied by means of wavelet analysis connected with subgroups of the group GL(2, C), acting in a plane by homographies h_A(z) = (az + b)/(cz + d), where A = (a b; c d) ∈ GL(2, C). The existence of reversible wavelet transformations has been proven for individual subgroups in the group of homographic transformations. The kind of wavelet analysis most often used in technical applications is connected with the affine subgroup h(z) = az + b, the case A = (a b; 0 1) of the symmetry of the plane maintaining points at infinity [1], [2]. Adoption of the wider symmetry group means rejection of the invariability of certain image features, which is reasonable if the problem has a certain symmetry or lacks affine symmetry. The application of wavelet analysis connected with a wider symmetry group is by no means the loss of information. On the contrary, the information is duplicated for additional symmetries or coded by other means.
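The homographic action h_A(z) reconstructed above is the standard Möbius action on the complex plane and is straightforward to compute directly. A small Python sketch (our own illustration, not from the paper; the SU(1,1) entries are arbitrary values satisfying |a|² − |b|² = 1, the subgroup that preserves the unit disk):

```python
import numpy as np

def homography(A, z):
    """Apply h_A(z) = (a*z + b) / (c*z + d) for A = [[a, b], [c, d]]."""
    a, b = A[0]
    c, d = A[1]
    return (a * z + b) / (c * z + d)

# An SU(1,1) element: [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1.
a, b = 1.25, 0.75j                   # 1.25^2 - 0.75^2 = 1
A = np.array([[a, b],
              [np.conj(b), np.conj(a)]])

z = 0.3 + 0.4j                       # a point inside the unit disk
w = homography(A, z)
print(abs(z) < 1, abs(w) < 1)        # True True: the disk is preserved
```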