Multi-View Metric Learning in Vector-Valued Kernel Spaces


Riikka Huusari, Hachem Kadri, Cécile Capponi
Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France

arXiv:1803.07821v1 [cs.LG] 21 Mar 2018. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS) 2018, Lanzarote, Spain. PMLR: Volume 84. Copyright 2018 by the author(s).

Abstract

We consider the problem of metric learning for multi-view data and present a novel method for learning within-view as well as between-view metrics in vector-valued kernel spaces, as a way to capture the multi-modal structure of the data. We formulate two convex optimization problems to jointly learn the metric and the classifier or regressor in kernel feature spaces. An iterative three-step multi-view metric learning algorithm is derived from the optimization problems. In order to scale the computation to large training sets, a block-wise Nyström approximation of the multi-view kernel matrix is introduced. We justify our approach theoretically and experimentally, and show its performance on real-world datasets against relevant state-of-the-art methods.

1 Introduction

In this paper we tackle the problem of supervised multi-view learning, where each labeled example is observed under several views. These views might be not only correlated, but also complementary, redundant or contradictory. Thus, learning over all the views is expected to produce a final classifier (or regressor) that is better than each individual one. Multi-view learning is well known in the semi-supervised setting, where the agreement among views is usually optimized [4, 28]. Yet the supervised setting has proven to be interesting as well, independently of any agreement condition on the views.

Co-regularization and multiple kernel learning (MKL) are two well-known kernel-based frameworks for learning in the presence of multiple views of data [31]. The former attempts to optimize measures of agreement and smoothness between the views over labeled and unlabeled examples [26]; the latter tries to efficiently combine multiple kernels defined on each view to exploit information coming from different representations [11]. More recently, vector-valued reproducing kernel Hilbert spaces (RKHSs) have been introduced to the field of multi-view learning as a way to go further than MKL by incorporating both within-view and between-view dependencies into the learning model [20, 14]. It turns out that these kernels and their associated vector-valued RKHSs provide a unifying framework for a number of previous multi-view kernel methods, such as co-regularized multi-view learning and manifold regularization, and naturally allow encoding within-view as well as between-view similarities [21].

Kernels of vector-valued RKHSs are positive semidefinite matrix-valued functions. They have been applied with success in various machine learning problems, such as multi-task learning [10], functional regression [15] and structured output prediction [5]. The main advantage of matrix-valued kernels is that they offer a higher degree of flexibility in encoding similarities between data points. However, finding the optimal matrix-valued kernel for a given application is difficult, as is the question of how to build such kernels. In order to overcome the need for choosing a kernel before the learning process, we propose a supervised metric learning approach that learns a matrix-valued multi-view kernel jointly with the decision function. We refer the reader to [3] for a review of metric learning. It is worth mentioning that algorithms for learning matrix-valued kernels have been proposed in the literature; see for example [9, 8, 17]. However, these methods mainly consider separable kernels, which are not suited to the multi-view setting, as will be illustrated later in this paper.

The main contributions of this paper are: 1) we introduce and learn a new class of matrix-valued kernels designed to handle multi-view data; 2) we give an iterative algorithm that learns simultaneously a vector-valued multi-view function and a block-structured metric between views; 3) we provide a generalization analysis of our algorithm with a Rademacher bound; and 4) we show how matrix-valued kernels can be efficiently computed via a block-wise Nyström approximation in order to significantly reduce their high computational cost.

2 Preliminaries

We start by briefly reviewing the basics of vector-valued RKHSs and their associated matrix-valued kernels. We then describe how they can be used for learning from multi-view data.

2.1 Vector-valued RKHSs

Vector-valued RKHSs were introduced to the machine learning community by Micchelli and Pontil [19] as a way to extend kernel machines from scalar to vector outputs. In this setting, given a random training sample $\{(x_i, y_i)\}_{i=1}^{n}$ on $\mathcal{X} \times \mathcal{Y}$, the optimization problem

$$\arg\min_{f \in \mathcal{H}} \sum_{i=1}^{n} V(f, x_i, y_i) + \lambda \|f\|_{\mathcal{H}}^2, \qquad (1)$$

where $f$ is a vector-valued function and $V$ is a loss function, can be solved in a vector-valued RKHS $\mathcal{H}$ by means of a vector-valued extension of the representer theorem. To see this more clearly, we recall some fundamentals of vector-valued RKHSs.

Definition 1 (vector-valued RKHS). A Hilbert space $\mathcal{H}$ of functions from $\mathcal{X}$ to $\mathbb{R}^v$ is called a reproducing kernel Hilbert space if there is a positive definite $\mathbb{R}^{v \times v}$-valued kernel $K$ on $\mathcal{X} \times \mathcal{X}$ such that:

i. the function $z \mapsto K(x, z)y$ belongs to $\mathcal{H}$, $\forall z, x \in \mathcal{X}$, $y \in \mathbb{R}^v$;

ii. $\forall f \in \mathcal{H}$, $x \in \mathcal{X}$, $y \in \mathbb{R}^v$: $\langle f, K(x, \cdot)y \rangle_{\mathcal{H}} = \langle f(x), y \rangle_{\mathbb{R}^v}$ (reproducing property).

Definition 2 (matrix-valued kernel). An $\mathbb{R}^{v \times v}$-valued kernel $K$ on $\mathcal{X} \times \mathcal{X}$ is a function $K(\cdot, \cdot) : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^{v \times v}$; it is positive semidefinite if:

i. $K(x, z) = K(z, x)^\top$, where $\top$ denotes the transpose of a matrix;

ii. for every $r \in \mathbb{N}$ and all $\{(x_i, y_i)\}_{i=1}^{r} \subset \mathcal{X} \times \mathbb{R}^v$, $\sum_{i,j} \langle y_i, K(x_i, x_j) y_j \rangle_{\mathbb{R}^v} \ge 0$.

Important results for matrix-valued kernels include the positive semidefiniteness of the kernel $K$ and the fact that the solution of the regularized optimization problem (1) can be obtained via a representer theorem. It states that the solution $\hat{f} \in \mathcal{H}$ of a learning problem can be written as

$$\hat{f}(x) = \sum_{i=1}^{n} K(x, x_i)\, c_i, \quad \text{with } c_i \in \mathbb{R}^v.$$
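To make the representer-theorem expansion concrete, here is a minimal sketch (ours, not code from the paper) of how a prediction $\hat{f}(x) = \sum_{i=1}^{n} K(x, x_i) c_i$ could be evaluated; the toy kernel `K_identity`, its parameters, and all variable names are illustrative assumptions.

```python
import numpy as np

def predict(x, X_train, C, K):
    """Evaluate f_hat(x) = sum_i K(x, x_i) @ c_i
    (vector-valued representer theorem).

    X_train : (n, d) array of training inputs x_i
    C       : (n, v) array of coefficient vectors c_i
    K       : callable returning the (v, v) matrix K(x, z)
    """
    n, v = C.shape
    f = np.zeros(v)
    for i in range(n):
        f += K(x, X_train[i]) @ C[i]
    return f  # one output component per view

# Toy matrix-valued kernel: a scalar Gaussian kernel times the identity,
# i.e. the views are treated independently (no between-view coupling).
def K_identity(x, z, gamma=1.0, v=2):
    return np.exp(-gamma * np.sum((x - z) ** 2)) * np.eye(v)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5, 3))  # n = 5 examples, d = 3 features
C = rng.normal(size=(5, 2))        # v = 2 views
print(predict(X_train[0], X_train, C, K_identity))  # vector in R^2
```

In practice the coefficients $c_i$ come from solving problem (1); the sketch only shows how a fitted vector-valued function is evaluated.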
Some well-known classes of matrix-valued kernels include separable and transformable kernels. Separable kernels are defined by

$$K(x, z) = k(x, z)\, T,$$

where $T$ is a matrix in $\mathbb{R}^{v \times v}$. This class of kernels is very attractive in terms of computational time, as it is easily decomposable. However, the matrix $T$ acts only on the outputs, independently of the input data, which makes it difficult for these kernels to encode the necessary similarities in the multi-view setting. Transformable kernels are defined by

$$[K(x, z)]_{lm} = k(S_m x, S_l z).$$

Here $m$ and $l$ are indices of the output matrix (views, in the multi-view setting), and the operators $\{S_t\}_{t=1}^{v}$ are used to transform the data. In contrast to separable kernels, here the $S_t$ operate on the input data; however, choosing them is a difficult task. For further reading on matrix-valued reproducing kernels see, e.g., [1, 6, 7, 15].
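As an illustration (ours, not from the paper), the two classes could be instantiated as follows; the scalar Gaussian kernel and the particular choices of `T` and `S` are assumptions made for the sketch.

```python
import numpy as np

def scalar_gaussian(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def separable_kernel(x, z, T, gamma=1.0):
    """K(x, z) = k(x, z) * T: the v x v matrix T couples the outputs
    (views) but is fixed, independent of the inputs."""
    return scalar_gaussian(x, z, gamma) * T

def transformable_kernel(x, z, S, gamma=1.0):
    """[K(x, z)]_{lm} = k(S_m x, S_l z): the operators S_t act on the
    inputs, so between-view similarity depends on the data itself."""
    v = len(S)
    K = np.empty((v, v))
    for l in range(v):
        for m in range(v):
            K[l, m] = scalar_gaussian(S[m] @ x, S[l] @ z, gamma)
    return K

rng = np.random.default_rng(0)
x, z = rng.normal(size=3), rng.normal(size=3)
T = np.array([[1.0, 0.3], [0.3, 1.0]])           # fixed output coupling
S = [rng.normal(size=(3, 3)) for _ in range(2)]  # one transform per view
print(separable_kernel(x, z, T))
print(transformable_kernel(x, z, S))
```

The contrast is visible directly in the code: `T` rescales the same scalar similarity for every pair of views, whereas each entry of the transformable kernel is computed from view-specific transformations of the inputs.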
2.2 Vector-valued multi-view learning

This section reviews the setup for supervised multi-view learning in vector-valued RKHSs [14, 21]. The main idea is to consider a kernel that measures not only the similarities between examples of the same view, but also those coming from different views. Reproducing kernels of vector-valued Hilbert spaces allow encoding these similarities in a natural way and taking into account both within-view and between-view dependencies. Indeed, a kernel function $K$ in this setting outputs a matrix in $\mathbb{R}^{v \times v}$, with $v$ the number of views, so that $K(x_i, x_j)_{lm}$, $l, m = 1, \ldots, v$, is the similarity measure between examples $x_i$ and $x_j$ from views $l$ and $m$.

More formally, consider a set of $n$ labeled data $\{(x_i, y_i) \in \mathcal{X} \times \mathcal{Y},\ i = 1, \ldots, n\}$, where $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y} = \{-1, 1\}$ for classification or $\mathcal{Y} \subset \mathbb{R}$ for regression. Also assume that each input instance $x_i = (x_i^1, \ldots, x_i^v)$ is seen in $v$ views, where $x_i^l \in \mathbb{R}^{d_l}$ and $\sum_{l=1}^{v} d_l = d$. The supervised multi-view learning problem can be thought of as trying to find the vector-valued function $\hat{f}(\cdot) = (\hat{f}^1(\cdot), \ldots, \hat{f}^v(\cdot))$, with $\hat{f}^l(x) \in \mathcal{Y}$, solution of

$$\arg\min_{f \in \mathcal{H},\, W} \sum_{i=1}^{n} V(y_i, W(f(x_i))) + \lambda \|f\|^2. \qquad (2)$$

Here $f$ is a vector-valued function that groups the $v$ learning functions, each corresponding to one view, and $W : \mathbb{R}^v \to \mathbb{R}$ is a combination operator that combines the results of the learning functions.

While the vector-valued extension of the representer theorem provides an algorithmic way of computing the solution of the multi-view learning problem (2), the question of choosing the multi-view kernel $K$ remains crucial for taking full advantage of the vector-valued learning framework. In [14], a matrix-valued kernel based on cross-covariance operators on RKHSs, which allows modeling variables of multiple types and modalities, was proposed.

[…] where we have written $k_l(x_i) = (k_l(x_t, x_i))_{t=1}^{n}$. We note that this class is not in general separable or transformable. However, in the special case where it is possible to write $A_{ml} = A_m A_l$, the kernel is transformable.
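Returning to the block structure of the multi-view kernel, here is a small sketch (ours, not from the paper) of how the $nv \times nv$ Gram matrix of a matrix-valued kernel could be assembled, with block $(l, m)$ holding the between-view similarities. The per-view-pair scalar kernel and all names are assumptions, and an arbitrary choice like this one is not guaranteed to yield a positive semidefinite matrix.

```python
import numpy as np

def multiview_gram(X_views, kernel_lm):
    """Assemble the block Gram matrix of a matrix-valued kernel.

    X_views   : list of v arrays; X_views[l] has shape (n, d_l)
    kernel_lm : callable (l, m, x, z) -> scalar similarity between
                x from view l and z from view m
    Returns an (n*v, n*v) matrix whose (l, m) block is the n x n
    within-view (l == m) or between-view (l != m) similarity matrix.
    """
    v = len(X_views)
    n = X_views[0].shape[0]
    G = np.empty((n * v, n * v))
    for l in range(v):
        for m in range(v):
            for i in range(n):
                for j in range(n):
                    G[l * n + i, m * n + j] = kernel_lm(
                        l, m, X_views[l][i], X_views[m][j])
    return G

# Toy per-view-pair kernel: Gaussian similarity after zero-padding the
# two view representations to a common length (an illustrative choice).
def toy_kernel(l, m, x, z, gamma=1.0):
    p = max(len(x), len(z))
    x = np.pad(x, (0, p - len(x)))
    z = np.pad(z, (0, p - len(z)))
    return np.exp(-gamma * np.sum((x - z) ** 2))

rng = np.random.default_rng(0)
X_views = [rng.normal(size=(4, 3)), rng.normal(size=(4, 2))]  # v = 2
G = multiview_gram(X_views, toy_kernel)
print(G.shape)  # (8, 8): a 2 x 2 grid of 4 x 4 blocks
```

The quadratic growth of this $nv \times nv$ matrix with the training-set size is what motivates the block-wise Nyström approximation announced in the contributions; a simple choice for the combination operator $W$ would be the average of the $v$ per-view outputs.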