Math 217: Orthogonal Transformations (Professor Karen Smith)

Definition: A linear transformation $T: \mathbb{R}^n \to \mathbb{R}^n$ is orthogonal if it preserves dot products, that is, if $\vec{x} \cdot \vec{y} = T(\vec{x}) \cdot T(\vec{y})$ for all vectors $\vec{x}$ and $\vec{y}$ in $\mathbb{R}^n$.

Theorem: Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation. Then $T$ is orthogonal if and only if it preserves the length of every vector, that is, $\|T(\vec{x})\| = \|\vec{x}\|$ for all $\vec{x} \in \mathbb{R}^n$.

************************************************************

A. Discuss the definition and theorem with your table: What, intuitively and geometrically, really is an orthogonal transformation? What does an orthogonal transformation of $\mathbb{R}^3$ do to the unit cube? How does this differ from a general (non-orthogonal) linear transformation?

Solution note: The unit cube is taken to another unit cube, also with one vertex at the origin. In general, a linear transformation can stretch the cube to a parallelepiped (if $\operatorname{rank} T = 3$), smash it to a two-dimensional parallelogram (if $\operatorname{rank} T = 2$), or even to a line segment (if $\operatorname{rank} T = 1$).

B. Which of the following maps are orthogonal transformations? Short, geometric justifications are preferred.

1. The identity map $\mathbb{R}^3 \to \mathbb{R}^3$.
2. $\mathbb{R}^2 \to \mathbb{R}^2$ given by counterclockwise rotation through $\theta$.
3. The reflection $\mathbb{R}^3 \to \mathbb{R}^3$ over a plane (through the origin).
4. The projection $\mathbb{R}^3 \to \mathbb{R}^3$ onto a subspace $V$ of dimension 2.
5. Dilation $\mathbb{R}^3 \to \mathbb{R}^3$ by a factor of 3.
6. Multiplication by $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$.
7. Multiplication by $\begin{bmatrix} 3 & 1 \\ -2 & 5 \end{bmatrix}$.

Solution note:
1. Yes; the dot product is obviously the same before and after doing nothing.
2. Yes; rotation obviously preserves the LENGTH of every vector, so by the Theorem, rotation is an orthogonal transformation.
3. Yes; reflection obviously preserves the LENGTH of every vector, so by the Theorem, reflection is an orthogonal transformation.
4. NO; projection will typically shorten the length of vectors.
5. NO; dilation by a factor of 3 obviously takes each vector to a vector three times as long.
6. Yes; this is rotation again.
7. No. The vector $\vec{e}_1$ is sent to $\begin{bmatrix} 3 \\ -2 \end{bmatrix}$, which has length $\sqrt{13}$, not 1 like $\vec{e}_1$.
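These claims are easy to spot-check numerically. Below is a minimal sketch in NumPy (the angle $\theta = 0.7$ and the test vector are arbitrary choices, not part of the problem set): the rotation matrix in item 6 preserves every length, while the matrix in item 7 stretches $\vec{e}_1$.

```python
import numpy as np

theta = 0.7  # arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # item 6: rotation
M = np.array([[ 3.0, 1.0],
              [-2.0, 5.0]])                      # item 7

x = np.array([2.0, -1.0])  # arbitrary test vector
print(np.linalg.norm(x), np.linalg.norm(R @ x))  # equal: R preserves lengths

e1 = np.array([1.0, 0.0])
print(np.linalg.norm(M @ e1))  # sqrt(13) ~ 3.606, not 1: M is not orthogonal
```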
C. Prove that orthogonal transformations preserve lengths: if $T: \mathbb{R}^n \to \mathbb{R}^n$ is an orthogonal transformation, then $\|\vec{x}\| = \|T(\vec{x})\|$ for all $\vec{x} \in \mathbb{R}^n$. Is the converse true? What is the book's definition of an orthogonal transformation, and why is it equivalent to ours?

Solution note: Say that $T$ is orthogonal. Take an arbitrary $\vec{x} \in \mathbb{R}^n$. We want to show $\|\vec{x}\| = \|T(\vec{x})\|$. By definition, $\|\vec{x}\| = (\vec{x} \cdot \vec{x})^{1/2}$ and $\|T(\vec{x})\| = (T(\vec{x}) \cdot T(\vec{x}))^{1/2}$. Since $T$ is orthogonal, we know that $\vec{x} \cdot \vec{x} = T(\vec{x}) \cdot T(\vec{x})$, so the result follows. The converse is also true: this is the Theorem at the top! The book DEFINES an orthogonal transformation as one which preserves lengths. This is equivalent to ours by the Theorem above.

D. Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be an orthogonal transformation.

1. Prove that $T$ is injective. [Hint: consider the kernel.]
2. Prove that $T$ is an isomorphism.
3. Prove that the matrix of $T$ (in standard coordinates) has columns that are orthonormal.
4. Is the composition of orthogonal transformations also orthogonal? Prove it!

Solution note:
1. Let $\vec{v} \in \ker T$. We want to show that $\vec{v} = \vec{0}$. We know that $T(\vec{v}) = \vec{0}$, so $\|T(\vec{v})\| = 0$. Since $T$ is orthogonal, it preserves lengths (by the Theorem), so also $\|\vec{v}\| = 0$. This means $\vec{v} = \vec{0}$, since no other vector has length 0. So the kernel of $T$ is trivial and $T$ is injective.
2. We already know $T$ is injective, so we just need to check it is surjective. By rank-nullity, $n = \dim \ker T + \operatorname{rank} T$. So by (1), we have $\operatorname{rank} T = n$, and $T$ is surjective. Thus $T$ is a bijective linear transformation, that is, an isomorphism.
3. Let $A$ be the matrix of $T$. The columns of $A$ are $T(\vec{e}_1), \dots, T(\vec{e}_n)$ by our old friend the "Crucial Theorem." We need to show these are orthonormal. Compute for each $i$: $T(\vec{e}_i) \cdot T(\vec{e}_i) = \vec{e}_i \cdot \vec{e}_i = 1$ (where we have used the fact that $T$ preserves dot products). Also compute for each pair $i \neq j$: $T(\vec{e}_i) \cdot T(\vec{e}_j) = \vec{e}_i \cdot \vec{e}_j = 0$. This means that $T(\vec{e}_1), \dots, T(\vec{e}_n)$ are orthonormal.
4. Yes! Assume $T$ and $S$ are orthogonal transformations with source and target $\mathbb{R}^n$. To show that $S \circ T$ is orthogonal, we can show it preserves LENGTHS. Take any vector $\vec{x} \in \mathbb{R}^n$. Then $\|\vec{x}\| = \|T(\vec{x})\|$ since $T$ is orthogonal (by the Theorem, or by the book's definition). So also $\|T(\vec{x})\| = \|S(T(\vec{x}))\|$ since $S$ is orthogonal. Putting these together, we have $\|\vec{x}\| = \|S(T(\vec{x}))\|$. Since $\vec{x}$ was arbitrary, we see that $S \circ T$ preserves every length. QED. Note: this is a different proof than the one I did in class, using the Theorem instead of the definition.

E. Recall that if $A$ is a $d \times n$ matrix with columns $\vec{v}_1, \dots, \vec{v}_n$, then the transpose of $A$ is the $n \times d$ matrix $A^T$ whose rows are $\vec{v}_1^T, \dots, \vec{v}_n^T$.

Definition: An $n \times n$ matrix $A$ is orthogonal if $A^T A = I_n$.

1. Let $A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$. Compute $A^T A$. Is $A$ orthogonal? Is $\begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$ orthogonal?
2. Suppose that $A = [\vec{v}_1\ \vec{v}_2\ \vec{v}_3]$ is a $3 \times 3$ matrix with columns $\vec{v}_1, \vec{v}_2, \vec{v}_3 \in \mathbb{R}^3$. Use block multiplication to show that
$$A^T A = \begin{bmatrix} \vec{v}_1 \cdot \vec{v}_1 & \vec{v}_1 \cdot \vec{v}_2 & \vec{v}_1 \cdot \vec{v}_3 \\ \vec{v}_2 \cdot \vec{v}_1 & \vec{v}_2 \cdot \vec{v}_2 & \vec{v}_2 \cdot \vec{v}_3 \\ \vec{v}_3 \cdot \vec{v}_1 & \vec{v}_3 \cdot \vec{v}_2 & \vec{v}_3 \cdot \vec{v}_3 \end{bmatrix}.$$
3. Prove that a square matrix $A$ is orthogonal if and only if its columns are orthonormal.¹
4. Is $B = \begin{bmatrix} 3/5 & 4/5 & 0 \\ -4/5 & 3/5 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ orthogonal? Find $B^{-1}$. [Be clever! There is an easy way!]
5. Prove the Theorem: Let $T: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation with (standard) matrix $A$. Then $T$ is orthogonal if and only if $A$ is orthogonal.

¹ You can write out just the $3 \times 3$ case if you want. But be sure to think about what happens for $n \times n$ matrices.

Solution note:
1. Yes. It is easy to check that $A^T A$ is the identity matrix in both cases (we use $\sin^2\theta + \cos^2\theta = 1$ for the first).
2. Multiply
$$A^T A = \begin{bmatrix} \vec{v}_1^T \\ \vec{v}_2^T \\ \vec{v}_3^T \end{bmatrix} [\vec{v}_1\ \vec{v}_2\ \vec{v}_3] = \begin{bmatrix} \vec{v}_1^T \vec{v}_1 & \vec{v}_1^T \vec{v}_2 & \vec{v}_1^T \vec{v}_3 \\ \vec{v}_2^T \vec{v}_1 & \vec{v}_2^T \vec{v}_2 & \vec{v}_2^T \vec{v}_3 \\ \vec{v}_3^T \vec{v}_1 & \vec{v}_3^T \vec{v}_2 & \vec{v}_3^T \vec{v}_3 \end{bmatrix},$$
which produces the desired matrix, since for any $n \times 1$ matrices (column vectors) $\vec{w}$ and $\vec{v}$, the matrix product $\vec{w}^T \vec{v}$ is the same as the dot product $\vec{w} \cdot \vec{v}$.
3. This follows immediately from the $n$-dimensional version of (2). Let $A$ have columns $\vec{v}_1, \dots, \vec{v}_n$. Then the $ij$-th entry of $A^T A$ is $\vec{v}_i^T \vec{v}_j = \vec{v}_i \cdot \vec{v}_j$. So the columns are orthonormal if and only if $A^T A$ is the identity matrix.
4. Yes. The inverse is the transpose, which is $\begin{bmatrix} 3/5 & -4/5 & 0 \\ 4/5 & 3/5 & 0 \\ 0 & 0 & 1 \end{bmatrix}$.
5. We need to show two things: 1) if $T$ is orthogonal, then $A$ has orthonormal columns; and 2) if $A$ has orthonormal columns, then $T$ is orthogonal.
1) Assume $T$ is orthogonal. The columns of $A$ are $[T(\vec{e}_1)\ T(\vec{e}_2)\ \cdots\ T(\vec{e}_n)]$. To check they are orthonormal, we need two things: $T(\vec{e}_i) \cdot T(\vec{e}_i) = 1$ for all $i$, and $T(\vec{e}_i) \cdot T(\vec{e}_j) = 0$ if $i \neq j$. For the first, note
$$T(\vec{e}_i) \cdot T(\vec{e}_i) = \|T(\vec{e}_i)\|^2 = \|\vec{e}_i\|^2 = 1,$$
with the first equality coming from the definition of the length of a vector and the second coming from the definition of $T$ being orthogonal. For the second, $T(\vec{e}_i) \cdot T(\vec{e}_j) = \vec{e}_i \cdot \vec{e}_j$ by the Theorem above. This is zero since the standard basis is orthonormal.
2) Assume $A$ has orthonormal columns. We need to show that for any $\vec{x} \in \mathbb{R}^n$, $\|T(\vec{x})\| = \|\vec{x}\|$. Since length is always a non-negative number, it suffices to show $\|T(\vec{x})\|^2 = \|\vec{x}\|^2$, that is, $T(\vec{x}) \cdot T(\vec{x}) = \vec{x} \cdot \vec{x}$. For this, take an arbitrary $\vec{x}$ and write it in the basis $\{\vec{e}_1, \dots, \vec{e}_n\}$. Note that
$$\vec{x} \cdot \vec{x} = (x_1\vec{e}_1 + \cdots + x_n\vec{e}_n) \cdot (x_1\vec{e}_1 + \cdots + x_n\vec{e}_n) = \sum_{i,j} (x_i\vec{e}_i) \cdot (x_j\vec{e}_j) = x_1^2 + \cdots + x_n^2.$$
Here the second equality uses basic properties of the dot product (like "FOIL"), and the third uses the fact that the $\vec{e}_i$ are orthonormal, so that $(x_i\vec{e}_i) \cdot (x_j\vec{e}_j) = 0$ if $i \neq j$. On the other hand, we also have
$$T(\vec{x}) \cdot T(\vec{x}) = T(x_1\vec{e}_1 + \cdots + x_n\vec{e}_n) \cdot T(x_1\vec{e}_1 + \cdots + x_n\vec{e}_n) = \sum_{i,j} (x_i T(\vec{e}_i)) \cdot (x_j T(\vec{e}_j)) = \sum_{i,j} x_i x_j\, T(\vec{e}_i) \cdot T(\vec{e}_j) = x_1^2 + \cdots + x_n^2,$$
with the last equality coming from the fact that the $T(\vec{e}_i)$'s are the columns of $A$ and hence orthonormal. QED.
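Both computations in E.2 and E.4 can be verified numerically. A minimal sketch in NumPy (the random matrix is an arbitrary stand-in for a general $A$, not taken from the worksheet):

```python
import numpy as np

# E.2: the (i, j) entry of A^T A is the dot product of columns v_i and v_j.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # arbitrary 3x3 matrix
gram = np.array([[A[:, i] @ A[:, j] for j in range(3)] for i in range(3)])
assert np.allclose(A.T @ A, gram)

# E.4: B has orthonormal columns, so B^T B = I_3 and hence B^{-1} = B^T.
B = np.array([[ 3/5, 4/5, 0],
              [-4/5, 3/5, 0],
              [   0,   0, 1]])
assert np.allclose(B.T @ B, np.eye(3))
assert np.allclose(np.linalg.inv(B), B.T)
```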
F. Prove the very important Theorem at the top of the first page. [Hint: consider $\vec{x} + \vec{y}$.]

Solution note: We need to show two things: 1) if $T$ is orthogonal, then $T$ preserves lengths; and 2) if $T$ preserves lengths, then $T$ is orthogonal. We already did (1) in C. We need to do (2). Assume that for all $\vec{x}$, $\|T(\vec{x})\| = \|\vec{x}\|$.
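Following the hint, here is a sketch of how the argument concludes (the standard polarization trick): apply the hypothesis to $\vec{x} + \vec{y}$ and expand both squared lengths, using the linearity of $T$.

$$\|T(\vec{x} + \vec{y})\|^2 = \|T(\vec{x}) + T(\vec{y})\|^2 = \|T(\vec{x})\|^2 + 2\, T(\vec{x}) \cdot T(\vec{y}) + \|T(\vec{y})\|^2,$$
$$\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + 2\, \vec{x} \cdot \vec{y} + \|\vec{y}\|^2.$$

The left-hand sides are equal because $T$ preserves lengths, and the squared-length terms on the right agree for the same reason. Subtracting, $T(\vec{x}) \cdot T(\vec{y}) = \vec{x} \cdot \vec{y}$ for all $\vec{x}$ and $\vec{y}$, so $T$ preserves dot products and is orthogonal. QED.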