Camera Projection Matrix in Homogeneous Coordinates


Camera Projection Matrix in Homogeneous Coordinates
Geometry of a single view (a single camera case)

Václav Hlaváč
Czech Technical University in Prague, Czech Institute of Informatics, Robotics and Cybernetics
160 00 Prague 6, Jugoslávských partyzánů 1580/3, Czech Republic
http://people.ciirc.cvut.cz/hlavac, [email protected]
also Center for Machine Perception, http://cmp.felk.cvut.cz
Courtesy: T. Pajdla

Outline of the talk: projectivity, homography, camera calibration, projective space, projective camera, radial distortion.

Perspective transformation, motivation (2/53)
Parallel lines do not look like parallel lines under the perspective projection.

Pin-hole camera model (3/53)
[Figure: a straight-line ray from the three-dimensional scene passes through the pinhole onto the image plane.]

Image function (4/53)
The image function is abstracted mathematically as f(x, y) or f(x, y, t). It is the result of the perspective projection and encompasses its geometric aspects.
[Figure: a 3D point X = [x, y, z]^T is projected to the image point u = [u, v]^T; the image plane and the reflected image plane lie at distances f and -f from the projection center along the optical axis Z.]
Considering similar triangles: u = x f / z, v = y f / z.
Instead of our derived 2D image function f(u, v), it is usually denoted f(x, y). The value of the image function is the color/intensity of the 3D point in the scene (the red dot in the figure) that is projected.

Basics of projective geometry (5/53)
The pinhole model is the simplest geometrical model of the human eye and of photographic and TV cameras.
Perspective projection is also called central projection.
Parallel lines in the world do not remain parallel in the image (e.g., the view along a straight section of a railroad).
[Figure: image plane, horizon, optical axis, focal point, base (ground) plane; examples with one, two, and three vanishing points.]

Multiple view geometry (6/53)
Studies 3D points in the scene (and, more generally, lines and other simple geometric objects), their camera projections, and relations among multiple camera projections of a 3D scene.

Projective space (7/53)
Consider the (d+1)-dimensional vector space without its origin, R^{d+1} − {(0, ..., 0)}.
Define the equivalence relation
[x_1, ..., x_{d+1}]^T ≡ [x'_1, ..., x'_{d+1}]^T  iff  ∃ α ≠ 0 : [x_1, ..., x_{d+1}]^T = α [x'_1, ..., x'_{d+1}]^T.
The projective space P^d is the quotient space of this equivalence relation. Points of the projective space are expressed in homogeneous coordinates (also called projective coordinates) x = [x_1, ..., x_d, 1]^T.

Relation between Euclidean and projective spaces (8/53)
Consider the Euclidean space R^d. Non-homogeneous coordinates represent a point of R^d, occupying the plane with the equation x_{d+1} = 1 in R^{d+1}.
There is a one-to-one mapping from R^d into P^d, i.e. [x_1, ..., x_d]^T → [x_1, ..., x_d, 1]^T.
Projective points [x_1, ..., x_d, 0]^T have no Euclidean counterpart and represent points at infinity in a particular direction.
Consider [x_1, ..., x_d, 0]^T as the limiting case of [x_1, ..., x_d, α]^T, which is projectively equivalent to [x_1/α, ..., x_d/α, 1]^T, and let α → 0. This corresponds to a point of R^d going to infinity in the direction of the radius vector [x_1/α, ..., x_d/α] ∈ R^d.
Example of a finite line in the 2D (image) plane with coordinates (u, v): a u + b v + c = 0. The line corresponds to a (homogeneous) vector l (pronounced "el"), l ≃ (a, b, c). There is an equivalence class for α ∈ R, α ≠ 0: (α a, α b, α c) ≃ (a, b, c).
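The conversion between Euclidean and homogeneous coordinates just described, including the behaviour of points at infinity, is easy to exercise numerically. The following is a minimal NumPy sketch, not part of the slides; the helper names to_homogeneous and to_euclidean are illustrative only.

```python
# A minimal sketch (not part of the slides) of the Euclidean <-> homogeneous conversion;
# the helper names to_homogeneous / to_euclidean are illustrative only.
import numpy as np

def to_homogeneous(x):
    """Map a Euclidean point [x_1, ..., x_d] to the homogeneous [x_1, ..., x_d, 1]."""
    return np.append(np.asarray(x, dtype=float), 1.0)

def to_euclidean(x_h):
    """Map a homogeneous point back to R^d; points at infinity (last coordinate 0) have no image."""
    x_h = np.asarray(x_h, dtype=float)
    if np.isclose(x_h[-1], 0.0):
        raise ValueError("point at infinity has no Euclidean counterpart")
    return x_h[:-1] / x_h[-1]

p = to_homogeneous([2.0, 3.0])                         # -> [2., 3., 1.]
assert np.allclose(to_euclidean(5.0 * p), [2.0, 3.0])  # the scale alpha != 0 is irrelevant

# [x_1, ..., x_d, alpha] with alpha -> 0 heads towards the point at infinity [x_1, ..., x_d, 0]:
for alpha in (1.0, 0.1, 0.001):
    print(alpha, to_euclidean([2.0, 3.0, alpha]))      # the Euclidean point runs off to infinity
```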
Homogeneous coordinates of hyperplanes in P^d (9/53)
A hyperplane in P^d is represented by the (d+1)-vector a = [a_1, ..., a_{d+1}]^T such that all points x lying on the hyperplane satisfy a^T x = 0 (where a^T x denotes the scalar product).
Considering points in the form x = [x_1, ..., x_d, 1]^T yields the familiar formula a_1 x_1 + ··· + a_d x_d + a_{d+1} = 0.
The hyperplane defined by d distinct points represented by vectors x_1, ..., x_d lying on it is represented by a vector a orthogonal to the vectors x_1, ..., x_d. This vector a can be computed, e.g., by SVD.
Symmetrically, the point of intersection of d distinct hyperplanes a_1, ..., a_d is the vector x orthogonal to them.

The first useful hyperplane in computer vision: the projective plane P^2 (10/53)
We will denote points in P^2 by u = [u, v, w]^T and lines (a special case of hyperplanes) in P^2 by l (pronounced "el"). The symbol × stands for the vector product here.
The line passing through two points x, y (also called the point join) is l = x × y.
The point of intersection of two lines l, m is x = l × m.
Points and lines in P^2 are represented by rays and planes, respectively, which pass through the origin in the corresponding Euclidean space R^3.
[Figure: illustration of the projective space P^2 with the origin O and axes x_1, x_2, x_3.]

The second useful hyperplane in computer vision: P^3 (11/53)
We will denote points in P^3 by X = [X, Y, Z, W]^T.
In P^3, hyperplanes become planes, and one more entity occurs that has no counterpart in the projective plane: the 3D line.
The elegant homogeneous representation by 4-vectors, available for points and planes in P^3, does not exist for lines. A 3D line can be represented either by a pair of points lying on it (but this representation is not unique) or by a (Grassmann-)Plücker matrix.

Homography (12/53)
In computer vision, any two (pinhole camera) images of the same planar surface in 3D are related by a homography.
Said more generally, from the projective-geometry standpoint: a homography is an isomorphism of projective spaces that maps lines to lines. It is also called a projective transformation or collineation.
A collineation is any mapping P^d → P^d that is linear in the embedding space R^{d+1}.
A collineation is defined up to an unknown scale as u' ≃ H u, where H is a (d+1) × (d+1) matrix.
The transformation maps any triplet of collinear points to a triplet of collinear points (hence one of its names, collineation). If H is regular, then distinct points are mapped to distinct points.
In P^2, homography is the most general transformation which maps lines to lines: u' ≃ H u, where H is a regular 3 × 3 matrix. The matrix H has 9 parameters; 8 of them are independent, since the scale is arbitrary.

Example of two images mapped by a 2D homography (13/53)
[Figure: two photographs (views of the Prague Castle) related by a 2D homography.]

Different form of homography for hyperplanes and points (14/53)
It can be derived from the fact that if the original point u and a hyperplane a are incident, then a^T u = 0. The point u and the hyperplane a have to remain incident after the transformation too, a'^T u' = 0.
Using the equation u' ≃ H u, we obtain a' ≃ H^{-T} a, where H^{-T} denotes the transposed inverse of H.

Two simple homographies useful in computer vision (15/53)
1. A planar scene and its projection by one pinhole camera are related by a 2D homography. This can be used to rectify images of planar scenes (e.g., building facades) to a frontoparallel view.
2. Two images of a 3D scene (planar or non-planar) taken by two pinhole cameras sharing a single center of projection are related by a 2D homography. This can be used for stitching panoramic images from photographs.
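As a quick numerical illustration of the transformation rules u' ≃ H u for points and a' ≃ H^{-T} a for lines, the following NumPy sketch (not from the slides; the matrix H is an arbitrary regular example) transforms two points and the line joining them, and checks that incidence is preserved.

```python
# A small sketch (not from the slides) of the transformation rules for points and lines:
# u' ~ H u and a' ~ H^{-T} a; the matrix H below is just an arbitrary regular example.
import numpy as np

H = np.array([[1.0,   0.2,   3.0],
              [0.1,   0.9,  -2.0],
              [0.001, 0.002, 1.0]])   # regular 3x3 homography (8 independent parameters)

x = np.array([1.0,  2.0, 1.0])        # two points of P^2 in homogeneous coordinates
y = np.array([4.0, -1.0, 1.0])

l = np.cross(x, y)                    # line through the two points (point join), l = x x y
assert np.isclose(l @ x, 0.0) and np.isclose(l @ y, 0.0)

x2, y2 = H @ x, H @ y                 # transformed points,  u' ~ H u
l2 = np.linalg.inv(H).T @ l           # transformed line,    a' ~ H^{-T} a

print(l2 @ x2, l2 @ y2)               # incidence a'^T u' = 0 is preserved (up to round-off)
```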
Homography vs. non-homography (1) (16/53)
Let us illustrate how the non-homogeneous 2D point [u, v]^T (e.g., a point in an image) is actually mapped to the non-homogeneous image point [u', v']^T by H using u' ≃ H u.
With the components and the scale written explicitly, the equation reads
α [u', v', 1]^T = [[h_11, h_12, h_13], [h_21, h_22, h_23], [h_31, h_32, h_33]] [u, v, 1]^T,  α ≠ 0.

Homography vs. non-homography (2) (17/53)
Writing 1 in the third coordinate of u', we tacitly assume that u' is not a point at infinity, that is, α ≠ 0.
To compute [u', v']^T, we need to eliminate the scale α. This yields the expressions
u' = (h_11 u + h_12 v + h_13) / (h_31 u + h_32 v + h_33),  v' = (h_21 u + h_22 v + h_23) / (h_31 u + h_32 v + h_33),
familiar to people who do not use homogeneous coordinates.
Note that, compared to this, the expression u' ≃ H u is simpler, linear, and can handle the case when u' is a point at infinity. These are the practical advantages of homogeneous coordinates.

Cross ratio (18/53)
[Figure: four collinear points A, B, C, D and their perspective images A', B', C', D'; the cross ratio formed from the mutual distances a, b, c, d equals the one formed from a', b', c', d', i.e., the cross ratio is invariant under the projection.]

Subgroups of homographies (19/53)

Decomposition of homographies (20/53)
Any homography can be uniquely decomposed as H = H_P H_A H_S, where
H_P = [[I, 0], [a^T, b]],  H_A = [[K, 0], [0^T, 1]],  H_S = [[R, -R t], [0^T, 1]].
The matrix K is upper triangular.
Matrices of the form of H_S represent Euclidean transformations.
Matrices H_A H_S represent affine transformations; thus matrices H_A represent the 'purely affine' subgroup of affine transformations, i.e., what is left of the affine group after removing from it (more exactly, factorizing it by) the Euclidean group.
Matrices H_P H_A H_S represent the whole group of projective transformations; thus matrices H_P represent the 'purely projective' subgroup of projective transformations.

2D homography (21/53)
A 2D homography maps a plane to a plane. Recall the two views of the Prague Castle on slide 13.
α [u', v', 1]^T = H [u, v, 1]^T,  α ≠ 0,
where H is the [3 × 3] homography matrix.
[Figure: a scene plane with frame i, j, k and a camera; a point [u, v, 1]^T in the plane maps through the camera center to image coordinates [u', v', w']^T via the homography H.]

Example: Distance measurement in a plane (22/53)
We know the coordinates of four points u_i, i = 1, ..., 4, in a plane π in which we intend to measure distances.
[Figure: the plane π observed by a camera; the four reference points appear in the image as u'_1, ..., u'_4.]
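The slide above only states the measurement setup. One standard way to carry it out, shown here as an illustrative sketch rather than the slides' own method, is to estimate the plane-to-image homography H from the four known correspondences by the direct linear transform (DLT) and then map image points back to the plane through H^{-1}. The helper names and the simulated matrix H_true below are assumptions made for the example.

```python
# An illustrative sketch (assumption, not the slides' own method) for slide 22:
# estimate the plane-to-image homography from the four known points by the direct
# linear transform (DLT), then measure plane distances by mapping image points via H^{-1}.
import numpy as np

def estimate_homography(plane_pts, image_pts):
    """DLT: solve A h = 0 for the 3x3 H with image ~ H @ plane (homogeneous, up to scale)."""
    A = []
    for (u, v), (up, vp) in zip(plane_pts, image_pts):
        A.append([u, v, 1, 0, 0, 0, -up * u, -up * v, -up])
        A.append([0, 0, 0, u, v, 1, -vp * u, -vp * v, -vp])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)                      # null vector of A, reshaped to H

def apply_h(H, pt):
    """Apply a homography to a 2D point and dehomogenize."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Four reference points with known coordinates in the plane (e.g., an A4 sheet, in mm).
plane = [(0.0, 0.0), (210.0, 0.0), (210.0, 297.0), (0.0, 297.0)]

# Simulated camera: a ground-truth homography produces the observed image points.
H_true = np.array([[0.9, 0.05, 120.0], [-0.02, 1.1, 40.0], [1e-4, 2e-4, 1.0]])
image = [tuple(apply_h(H_true, pt)) for pt in plane]

H = estimate_homography(plane, image)                # recovered up to an arbitrary scale
a = apply_h(np.linalg.inv(H), image[0])              # back to plane coordinates, approx. (0, 0)
b = apply_h(np.linalg.inv(H), image[1])              # approx. (210, 0)
print(np.linalg.norm(a - b))                         # measured distance in the plane, approx. 210 mm
```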