Linear and Affine Transformations

• Linear Algebra Review
  ▪ Matrices
  ▪ Transformations
• Affine transformations in Euclidean space
• Tricky examples of nonlinear transformations (YouTube)

Geometric transformations
• Geometric transformations map points in one space to points in another: (x', y', z') = f(x, y, z), i.e. in vector form X' = f(X).
• These transformations can be very simple, such as scaling each coordinate, or complex, such as non-linear twists and bends.
• We'll focus on transformations that can be represented easily with matrix operations.
• We'll start in 2D...

2D Affine Transformations
• An affine transformation is any transformation that preserves collinearity (i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after transformation).

Four classes of geometric transformations
1) Isometry preserves lengths (congruence): after any of these transformations (turn, flip or slide), the shape still has the same size, area, angles and line lengths.
2) Similarity preserves angles (resizing): when you resize a shape it gets bigger or smaller, but it still looks similar — all angles are the same, and the face and body are still in proportion.
3) Affinity preserves parallel lines and dividing proportion: an affine transformation preserves parallelism, dividing proportion, linearity and incidence. Similarity and congruence can be viewed as special cases of the affine transformation.
4) Projective transformation (homography) does not preserve parallelism, length and angle, but it still preserves collinearity and incidence.

Properties preserved by each class:

                 Linearity   Parallelism   Angle   Length
  Isometry          yes          yes        yes     yes
  Similarity        yes          yes        yes     no
  Affinity          yes          yes        no      no
  Projective        yes          no         no      no

Matrix Multiplication
• If A is an n × m matrix and B is an m × p matrix, their matrix product AB is an n × p matrix, in which the m entries across a row of A are multiplied with the m entries down a column of B and summed to produce an entry of AB.

Matrix multiplication is not commutative
  [1 2; 3 4] [2 0; 1 2] = [4 4; 10 8]
  [2 0; 1 2] [1 2; 3 4] = [2 4; 7 10]

Expand vector notation for linear equations
• Solution: with A = [2 1; 0 -1], the equation x' = Ax expands to
  [x'; y'] = [2 1; 0 -1] [x; y],  i.e.  x' = 2x + y,  y' = 0·x - y = -y.

The linear transformation given by a matrix
• Let A be a 2 × 2 matrix. The function T defined by T(v) = Av is a linear transformation from R2 into R2.
• Note: x' = Ax means
  [x'; y'] = [a b; c d] [x; y],  i.e.  x' = ax + by,  y' = cx + dy.
• Let A be an m × n matrix. The function T defined by T(v) = Av is a linear transformation from Rn into Rm (an Rn vector maps to an Rm vector):
  Av = [a11 a12 ... a1n; a21 a22 ... a2n; ...; am1 am2 ... amn] [v1; v2; ...; vn]
     = [a11·v1 + a12·v2 + ... + a1n·vn; a21·v1 + a22·v2 + ... + a2n·vn; ...; am1·v1 + am2·v2 + ... + amn·vn],
  so T(v) = Av, T: Rn → Rm.

Two representations of the linear transformation T: R3 → R3
(1) T(x1, x2, x3) = (2x1 + x2 - x3, -x1 + 3x2 - 2x3, 3x2 + 4x3)
(2) T(x) = Ax = [2 1 -1; -1 3 -2; 0 3 4] [x1; x2; x3]

Three reasons for the matrix representation of a linear transformation:
• It is simpler to write.
• It is simpler to read.
• It is more easily adapted for computer use.

Representation of a 2D linear map
• We can represent a 2-D transformation M by a matrix M = [a b; c d].
• If x is a column vector, M goes on the left: x' = Mx, i.e. [x'; y'] = [a b; c d] [x; y].
• If x is a row vector, M^T goes on the right: x' = x M^T, i.e. [x' y'] = [x y] [a c; b d].
• We will use column vectors.

Properties of linear transforms
• Basis vectors map to the columns of the matrix.
• The origin (0, 0) is always a fixed point.
• Composition of M and M^-1 gives the identity.
• The determinant det(M) is the scaling factor of the linear transformation described by the matrix M.

In symbols: x' = Mx, with
  Mx = [a b; c d] [x; y] = [ax + by; cx + dy],
and the images of the basis vectors are the columns of M:
  [a b; c d] [1; 0] = [a; c],   [a b; c d] [0; 1] = [b; d].
A small numerical check of these properties follows below.
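The following is a minimal sketch (Python with NumPy; an illustration added here, not part of the original slides). It multiplies the two matrices from the non-commutativity example above and checks that basis vectors map to the columns of the matrix, that the origin stays fixed, and that M composed with M^-1 gives the identity.

```python
import numpy as np

# Non-commutativity example from the text: AB != BA.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[2, 0],
              [1, 2]])
print(A @ B)    # [[ 4  4]
                #  [10  8]]
print(B @ A)    # [[ 2  4]
                #  [ 7 10]]

# The linear map x' = Mx from the worked example (x' = 2x + y, y' = -y).
M = np.array([[2.0, 1.0],
              [0.0, -1.0]])

# Basis vectors map to the columns of the matrix.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(M @ e1, M @ e2)          # [2. 0.] [ 1. -1.]  -- the columns of M

# The origin is always a fixed point.
print(M @ np.zeros(2))         # [0. 0.]

# Composition of M and M^-1 gives the identity.
print(np.allclose(np.linalg.inv(M) @ M, np.eye(2)))   # True
```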
Scaling by 0.5
• x' = Mx with M = [0.5 0; 0 0.5]: (1, 0) maps to (0.5, 0) and (0, 1) maps to (0, 0.5).
• det(M) = 1/4, so areas are scaled by 1/4.
• The inverse mapping is scaling by 2: M^-1 = [2 0; 0 2], det(M^-1) = 4.
• Composition of M and M^-1 gives the identity; the determinant is the scaling factor of the linear transformation described by the matrix.

Homothety – Scaling
Describe the transformation represented by the matrix S = [2 0; 0 2]. Find all fixed points and directions. List all invariants.
• Fixed points: X' = X and X' = SX, so X = SX:
  x = 2x, y = 2y  ⟹  x = 0, y = 0.
  Only one fixed point, (0, 0).
• Fixed directions: v' = λv and v' = Sv, so λv = Sv:
  λx = 2x  ⟹  x(λ - 2) = 0,
  λy = 2y  ⟹  y(λ - 2) = 0,
  so λ = 2 with x ∈ R, y ∈ R. All directions are fixed.

General Scaling
• x' = scale(sx, sy) x, with scale(sx, sy) = [sx 0; 0 sy].
• Fixed directions: v' = λv and v' = Sv, so λv = Sv.

Example: describe the transformation represented by the matrix S = [2 0; 0 1]. Find all fixed points and directions. List all invariants.
• Fixed points: X = SX gives x = 2x, y = y, so x = 0 and y ∈ R: FP = (0, t), t ∈ R — the whole y-axis.
• Fixed directions: λx = 2x ⟹ x(λ - 2) = 0, and λy = y ⟹ y(λ - 1) = 0.
  λ = 2: x ∈ R, y = 0, fixed direction (t, 0);
  λ = 1: x = 0, y ∈ R, fixed direction (0, t).
  Two fixed directions: [(1, 0)] and [(0, 1)].

Scaling of a circle
• The unit circle x² + y² = 1 is mapped by x' = Mx with M = [a 0; 0 b] onto the ellipse x'²/a² + y'²/b² = 1 (substitute x = M^-1 x', where M^-1 = [1/a 0; 0 1/b], into the circle equation).
(Figure: real image? — driver's eye, 1 m / 5 m.)

Shear-x
• shear_x(s) = [1 s; 0 1].

Rotation
• x' = R(t) x = Mx with rot(t) = [cos(t) -sin(t); sin(t) cos(t)].

Trajectory of the point A = (r, 0) in a revolution
• x' = R(α) x with R(α) = [cos(α) -sin(α); sin(α) cos(α)], det(R) = 1.
• [x'; y'] = [cos(α) -sin(α); sin(α) cos(α)] [r; 0], so x' = r·cos(α), y' = r·sin(α).
(GeoGebra book 2.1, Rotace.)

Exercise: Rotation
Estimate the parameter a so that the matrix B = [√2/2 -√2/2; a √2/2] represents a revolution about the origin. Find all fixed points and directions.
• Method 1: compare the elements of R(α) = [cos(α) -sin(α); sin(α) cos(α)] and B: sin(α) = a = cos(α) = √2/2.
• Method 2: det(B) = 1, i.e. a·√2/2 + 1/2 = 1, so a = √2/2.

Rotation B = [√2/2 -√2/2; √2/2 √2/2]: find all fixed points.
• x' = Bx and x' = x give (B - E)x = o.
• The GeoGebra tool ReducedRowEchelonForm(M) eliminates the non-diagonal elements by row operations (Gaussian elimination): (B - E) reduces to [1 0; 0 1].
• Using back-substitution, the unknowns x, y can be solved for. The solution x = 0 and y = 0 gives only one fixed point, FP = (0, 0).

Rotation B = [√2/2 -√2/2; √2/2 √2/2]: find all fixed directions.
• x' = Bx and x' = λx give (B - λE)x = o.
• The matrix (B - λE) must be singular for a non-trivial solution x, but det(B - λE) = 0 has no real solution:
  det [√2/2-λ  -√2/2; √2/2  √2/2-λ] = (√2/2 - λ)² + 1/2 > 0.
• A general rotation has no fixed directions.

Reflection in the y-axis
• x' = ref_y x with ref_y = [-1 0; 0 1].

Line reflection
• Reflection in the line y = x swaps the coordinates: its matrix is [0 1; 1 0].
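The fixed-point and fixed-direction computations above are exactly an eigenvalue/eigenvector problem: v' = λv together with v' = Mv means (M - λE)v = o. The sketch below (NumPy; the helper name fixed_directions is mine, not from the slides) reproduces the results for the homothety, the general scaling, the 45° rotation and the reflection in the y-axis.

```python
import numpy as np

def fixed_directions(M, name):
    """Fixed directions v' = lambda * v are the real eigenvectors of M."""
    vals, vecs = np.linalg.eig(M)
    print(name)
    found_real = False
    for lam, v in zip(vals, vecs.T):
        if np.isreal(lam):
            found_real = True
            print(f"  lambda = {lam.real:+.3f}, direction = {np.real(v)}")
    if not found_real:
        print("  no real eigenvalues -> no fixed directions")

s = np.sqrt(2) / 2
fixed_directions(np.array([[2.0, 0.0], [0.0, 2.0]]), "homothety S = diag(2, 2)")
#   lambda = +2 (twice): every direction is fixed
fixed_directions(np.array([[2.0, 0.0], [0.0, 1.0]]), "scaling S = diag(2, 1)")
#   lambda = +2 -> (1, 0) and lambda = +1 -> (0, 1)
fixed_directions(np.array([[s, -s], [s, s]]), "rotation by 45 degrees")
#   complex eigenvalues only -> no fixed directions
fixed_directions(np.array([[-1.0, 0.0], [0.0, 1.0]]), "reflection in the y-axis")
#   lambda = -1 -> (1, 0) and lambda = +1 -> (0, 1)
```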
Composing Linear Transformations
• If T1 and T2 are transformations with T1(v) = M1 v and T2(v) = M2 v:
  ▪ T2 ∘ T1(v) =def T2(T1(v))
• If T1 and T2 are represented by matrices M1 and M2:
  ▪ T2 ∘ T1 is represented by M2 M1
  ▪ T2 ∘ T1(v) = T2(T1(v)) = (M2 M1)(v)
• Order is important!
  reflect(x) ∘ rot(O, α): A → A' → A''  (A' = Rot·A, A'' = Ref·A')
  rot(O, α) ∘ reflect(x): A → Ā → Ā'  — in general a different image.
  With Rot = [cos α  -sin α; sin α  cos α] and Ref = [1 0; 0 -1]:
  Rot·Ref = [cos α  sin α; sin α  -cos α],
  Ref·Rot = [cos α  -sin α; -sin α  -cos α].

Decomposing Linear Transformations
• Any 2D linear transformation can be decomposed into the product of a rotation, a scale (or a line reflection), and a rotation: M = R1 S R2.
• Any 2D congruence can be decomposed into the product of at most three line reflections.

Isometry (congruent transformation)
• An isometry preserves length; a direct isometry also preserves orientation, an opposite isometry does not.
• Direct isometry: |R| = 1 (rotation).
• Opposite isometry: |R| = -1 (line reflection).

Linear Transformations
• Scale, reflection, rotation, and shear are all linear transformations.
• They satisfy T(au + bv) = aT(u) + bT(v), where u and v are vectors and a and b are scalars.
• If T is a linear transformation, then T((0, 0)) = (0, 0).
• What important operation does that leave out? Translation — the step from a linear transformation to an affine transformation.

Rotation about an Arbitrary Point
• Rotation about a point other than the origin is not a linear transformation: it does not keep the origin fixed, so it cannot be written as x' = Mx alone. It is an affine transformation (a rotation combined with a translation), as sketched below.
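Both closing points — composition order matters, and rotation about an arbitrary point is affine rather than linear — can be checked with a short NumPy sketch; the helper rotate_about is an illustration added here, not part of the slides.

```python
import numpy as np

a = np.radians(30)
Rot = np.array([[np.cos(a), -np.sin(a)],
                [np.sin(a),  np.cos(a)]])   # rotation about the origin O
Ref = np.array([[1.0, 0.0],
                [0.0, -1.0]])               # reflection in the x-axis

A = np.array([2.0, 1.0])
print(Ref @ Rot @ A)    # reflect(x) after rot(O, a): A -> A' -> A''
print(Rot @ Ref @ A)    # rot(O, a) after reflect(x): a different image

def rotate_about(p, c, angle):
    """Rotate point p about centre c: translate c to the origin, rotate, translate back."""
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    return R @ (p - c) + c

c = np.array([1.0, 1.0])
print(rotate_about(np.zeros(2), c, a))
# The origin does not map to itself, so this map is affine, not linear.
```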