LECTURE 2: THE RIEMANNIAN DISTANCE. 1. The Existence of …
Recommended publications
- Metric Geometry in a Tame Setting
University of California, Los Angeles. Metric Geometry in a Tame Setting. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Mathematics, by Erik Walsberg, 2015. © Copyright by Erik Walsberg 2015.

Abstract of the Dissertation: We prove basic results about the topology and metric geometry of metric spaces which are definable in o-minimal expansions of ordered fields.

The dissertation of Erik Walsberg is approved: Yiannis N. Moschovakis; Chandrashekhar Khare; David Kaplan; Matthias J. Aschenbrenner, Committee Chair. University of California, Los Angeles, 2015. To Sam.

Table of Contents:
1 Introduction (p. 1)
2 Conventions (p. 5)
3 Metric Geometry (p. 7)
  3.1 Metric Spaces (p. 7)
  3.2 Maps Between Metric Spaces (p. 8)
  3.3 Covers and Packing Inequalities (p. 9)
    3.3.1 The 5r-covering Lemma (p. 9)
    3.3.2 Doubling Metrics (p. 10)
  3.4 Hausdorff Measures and Dimension (p. 11)
    3.4.1 Hausdorff Measures (p. 11)
    3.4.2 Hausdorff Dimension (p. 13)
  3.5 Topological Dimension (p. 15)
  3.6 Left-Invariant Metrics on Groups (p. 15)
  3.7 Reductions, Ultralimits and Limits of Metric Spaces (p. 16)
    3.7.1 Reductions of Λ-valued Metric Spaces (p. 16)
    3.7.2 Ultralimits (p. 17)
    3.7.3 GH-Convergence and GH-Ultralimits (p. 18)
    3.7.4 Asymptotic Cones (p. 19)
    3.7.5 Tangent Cones (p. 22)
    3.7.6 Conical Metric Spaces (p. 22)
  3.8 Normed Spaces (p. 23)
4 T-Convexity (p. 24)
  4.1 T-convex Structures …
- Analysis in Metric Spaces, by Mario Bonk, Luca Capogna, Piotr Hajłasz, Nageswari Shanmugalingam, and Jeremy Tyson
[Sidebar:] The authors of this piece are organizers of the AMS 2020 Mathematics Research Communities summer conference Analysis in Metric Spaces, one of five topical research conferences offered this year that are focused on collaborative research and professional development for early-career mathematicians. Additional information can be found at https://www.ams.org/programs/research-communities/2020MRC-MetSpace. Applications are open until February 15, 2020.

The subject of analysis, more specifically, first-order calculus, in metric measure spaces provides a unifying framework for ideas and questions from many different fields of mathematics. One of the earliest motivations and applications of this theory arose in Mostow's work [Mos73], … [the] study of quasiconformal maps on such boundaries motivated Heinonen and Koskela [HK98] to axiomatize several aspects of Euclidean quasiconformal geometry in the setting of metric measure spaces and thereby extend Mostow's work beyond the sub-Riemannian setting. The groundbreaking work [HK98] initiated the modern theory of analysis on metric spaces. Analysis on metric spaces is nowadays an active and independent field, bringing together researchers from different parts of the mathematical spectrum. It has far-reaching applications to areas as diverse as geometric group theory, nonlinear PDEs, and even theoretical computer science. As a further sign of recognition, analysis on metric spaces has been included in the 2010 MSC classification as a category (30L: Analysis on metric spaces). In this short survey, we can discuss only a small fraction of areas into which analysis on metric spaces has expanded.
- Math 61CM – Metric Spaces
Mathematics Department, Stanford University. Math 61CM – Metric spaces.

We have talked about the notion of convergence in $\mathbb{R}$:

Definition 1. A sequence $\{a_n\}_{n=1}^{\infty}$ of reals converges to $\ell \in \mathbb{R}$ if for all $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that $n \in \mathbb{N}$, $n \geq N$ implies $|a_n - \ell| < \varepsilon$. One writes $\lim a_n = \ell$.

With $\|\cdot\|$ the standard norm in $\mathbb{R}^n$, one makes the analogous definition:

Definition 2. A sequence $\{x_n\}_{n=1}^{\infty}$ of points in $\mathbb{R}^n$ converges to $x \in \mathbb{R}^n$ if for all $\varepsilon > 0$ there exists $N \in \mathbb{N}$ such that $n \in \mathbb{N}$, $n \geq N$ implies $\|x_n - x\| < \varepsilon$. One writes $\lim x_n = x$.

One important consequence of the definition in either case is that limits are unique:

Lemma 1. Suppose $\lim x_n = x$ and $\lim x_n = y$. Then $x = y$.

Proof: Suppose $x \neq y$. Then $\|x - y\| > 0$; let $\varepsilon = \frac{1}{2}\|x - y\|$. Thus there exists $N_1$ such that $n \geq N_1$ implies $\|x_n - x\| < \varepsilon$, and $N_2$ such that $n \geq N_2$ implies $\|x_n - y\| < \varepsilon$. Let $n = \max(N_1, N_2)$. Then
$$\|x - y\| \leq \|x - x_n\| + \|x_n - y\| < 2\varepsilon = \|x - y\|,$$
which is a contradiction. Thus, $x = y$. □

Note that the properties of $\|\cdot\|$ were not fully used. What we needed is that the function $d(x, y) = \|x - y\|$ was non-negative, equal to $0$ only if $x = y$, symmetric ($d(x, y) = d(y, x)$), and satisfied the triangle inequality.
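The four properties singled out at the end are exactly the metric axioms. As a quick numerical illustration (added here, not part of the Stanford notes), the sketch below samples random points and checks each property for $d(x, y) = \|x - y\|$:

```python
import numpy as np

# Illustrative sanity check: d(x, y) = ||x - y|| satisfies the metric axioms.
rng = np.random.default_rng(0)

def d(x, y):
    return np.linalg.norm(x - y)

for _ in range(1000):
    x, y, z = rng.normal(size=(3, 4))             # three random points in R^4
    assert d(x, y) >= 0                           # non-negativity
    assert np.isclose(d(x, x), 0.0)               # d(x, x) = 0
    assert np.isclose(d(x, y), d(y, x))           # symmetry
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12   # triangle inequality (fp slack)

print("all metric axioms hold on the sampled points")
```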
- Chapter 13: Curvature in Riemannian Manifolds
13.1 The Curvature Tensor. If $(M, \langle -, - \rangle)$ is a Riemannian manifold and $\nabla$ is a connection on $M$ (that is, a connection on $TM$), we saw in Section 11.2 (Proposition 11.8) that the curvature induced by $\nabla$ is given by
$$R(X, Y) = \nabla_X \circ \nabla_Y - \nabla_Y \circ \nabla_X - \nabla_{[X,Y]}$$
for all $X, Y \in \mathfrak{X}(M)$, with $R(X, Y) \in \Gamma(\mathcal{H}om(TM, TM)) \cong \mathrm{Hom}_{C^\infty(M)}(\Gamma(TM), \Gamma(TM))$. Since sections of the tangent bundle are vector fields ($\Gamma(TM) = \mathfrak{X}(M)$), $R$ defines a map
$$R \colon \mathfrak{X}(M) \times \mathfrak{X}(M) \times \mathfrak{X}(M) \longrightarrow \mathfrak{X}(M),$$
and, as we observed just after stating Proposition 11.8, $R(X, Y)Z$ is $C^\infty(M)$-linear in $X, Y, Z$ and skew-symmetric in $X$ and $Y$. It follows that $R$ defines a $(1, 3)$-tensor, also denoted $R$, with
$$R_p \colon T_pM \times T_pM \times T_pM \longrightarrow T_pM.$$
Experience shows that it is useful to consider the $(0, 4)$-tensor, also denoted $R$, given by
$$R_p(x, y, z, w) = \langle R_p(x, y)z, w \rangle_p$$
as well as the expression $R(x, y, y, x)$, which, for an orthonormal pair of vectors $(x, y)$, is known as the sectional curvature $K(x, y)$. This last expression brings up a dilemma regarding the choice for the sign of $R$. With our present choice, the sectional curvature $K(x, y)$ is given by $K(x, y) = R(x, y, y, x)$, but many authors define $K$ as $K(x, y) = R(x, y, x, y)$. Since $R(x, y)$ is skew-symmetric in $x, y$, the latter choice corresponds to using $-R(x, y)$ instead of $R(x, y)$, that is, to define $R(X, Y)$ by …
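As a quick worked illustration of the sign convention just discussed (added here, not from Gallier's chapter): on a manifold of constant sectional curvature $\kappa$, the curvature tensor takes the standard form
$$R(x, y)z = \kappa \big( \langle y, z \rangle\, x - \langle x, z \rangle\, y \big),$$
so that, with the convention $K(x, y) = R(x, y, y, x)$ used above,
$$K(x, y) = \langle R(x, y)y, x \rangle = \kappa \big( \langle y, y \rangle \langle x, x \rangle - \langle x, y \rangle^2 \big) = \kappa$$
for any orthonormal pair $(x, y)$, as expected.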
Distance Metric Learning, with Application to Clustering with Side-Information
Distance metric learning, with application to clustering with side-information. Eric P. Xing, Andrew Y. Ng, Michael I. Jordan and Stuart Russell, University of California, Berkeley, Berkeley, CA 94720. {epxing,ang,jordan,russell}@cs.berkeley.edu

Abstract: Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many "plausible" ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider "similar." For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in $\mathbb{R}^n$, learns a distance metric over $\mathbb{R}^n$ that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance.

1 Introduction. The performance of many learning and datamining algorithms depends critically on their being given a good metric over the input space. For instance, K-means, nearest-neighbors classifiers and kernel algorithms such as SVMs all need to be given good metrics that …
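To make the idea concrete, here is a toy sketch in the spirit of the abstract above, not the paper's actual convex program: it fits a diagonal Mahalanobis metric $d_w(x, y)^2 = \sum_k w_k (x_k - y_k)^2$ from similar/dissimilar pairs by shrinking distances between similar pairs and pushing violated dissimilar pairs apart. All data and constants are made up for illustration.

```python
import numpy as np

# Toy data: similar pairs differ mostly along coordinate 0, so a good diagonal
# metric should down-weight coordinate 0. Entirely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
X[:20, 1] *= 0.1
similar = [(i, i + 1) for i in range(0, 20, 2)]
dissimilar = list(zip(range(0, 10), range(30, 40)))

def dist2(w, a, b):
    return np.sum(w * (a - b) ** 2)      # squared diagonal Mahalanobis distance

w = np.ones(2)
lr = 0.05
for _ in range(500):
    grad = np.zeros_like(w)
    for i, j in similar:                 # pull similar pairs together
        grad += (X[i] - X[j]) ** 2
    for i, j in dissimilar:              # hinge: keep dissimilar pairs apart
        if dist2(w, X[i], X[j]) < 4.0:
            grad -= (X[i] - X[j]) ** 2
    w = np.maximum(w - lr * grad, 1e-6)  # project onto w >= 0 (PSD for diagonal A)

print("learned diagonal metric weights:", w)
```

The projection onto nonnegative weights plays the role of the positive-semidefiniteness constraint that the full (non-diagonal) formulation requires.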
- Hodge Theory
HODGE THEORY. Peter S. Park.

Abstract. This exposition of Hodge theory is a slightly retooled version of the author's Harvard minor thesis, advised by Professor Joe Harris.

Contents:
1. Introduction (p. 1)
2. Hodge Theory of Compact Oriented Riemannian Manifolds (p. 2)
  2.1. Hodge star operator (p. 2)
  2.2. The main theorem (p. 3)
  2.3. Sobolev spaces (p. 5)
  2.4. Elliptic theory (p. 11)
  2.5. Proof of the main theorem (p. 14)
3. Hodge Theory of Compact Kähler Manifolds (p. 17)
  3.1. Differential operators on complex manifolds (p. 17)
  3.2. Differential operators on Kähler manifolds (p. 20)
  3.3. Bott–Chern cohomology and the $\partial\bar\partial$-Lemma (p. 25)
  3.4. Lefschetz decomposition and the Hodge index theorem (p. 26)
Acknowledgments (p. 30)
References (p. 30)

1. Introduction. Our objective in this exposition is to state and prove the main theorems of Hodge theory. In Section 2, we first describe a key motivation behind the Hodge theory for compact, closed, oriented Riemannian manifolds: the observation that the differential forms that satisfy certain partial differential equations depending on the choice of Riemannian metric (forms in the kernel of the associated Laplacian operator, or harmonic forms) turn out to be precisely the norm-minimizing representatives of the de Rham cohomology classes. This naturally leads to the statement of our first main theorem, the Hodge decomposition (for a given compact, closed, oriented Riemannian manifold) of the space of smooth $k$-forms into the image of the Laplacian and its kernel, the subspace of harmonic forms. We then develop the analytic machinery (specifically, Sobolev spaces and the theory of elliptic differential operators) that we use to prove the aforementioned decomposition, which immediately yields as a corollary the phenomenon of Poincaré duality.
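For reference, the decomposition described above can be stated compactly (standard formulation under the usual conventions, added here rather than quoted from the thesis): with $\Delta = dd^* + d^*d$ the Hodge Laplacian and $\mathcal{H}^k(M) = \ker\big(\Delta \colon \Omega^k(M) \to \Omega^k(M)\big)$ the harmonic $k$-forms,
$$\Omega^k(M) = \Delta\big(\Omega^k(M)\big) \oplus \mathcal{H}^k(M), \qquad H^k_{\mathrm{dR}}(M) \cong \mathcal{H}^k(M),$$
so every de Rham class contains a unique harmonic representative.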
- Curvature of Riemannian Manifolds
Curvature of Riemannian Manifolds. Seminar Riemannian Geometry, Summer Term 2015. Prof. Dr. Anna Wienhard and Dr. Gye-Seon Lee. Soeren Nolting. July 16, 2015.

Figure 1: A vector parallel transported along a closed curve on a curved manifold. [1]

1 Motivation. The aim of this talk is to define the curvature of Riemannian manifolds and to meet some important simplifications of it, such as the sectional, Ricci, and scalar curvatures. We have already noticed that a vector transported parallel along a closed curve on a Riemannian manifold $M$ may change its orientation. Thus, we can determine whether a Riemannian manifold is curved or not by transporting a vector around a loop and measuring the difference in orientation between the start and the end of the transport. As an example take Figure 1, which depicts a parallel transport of a vector on a two-sphere. Note that in a non-curved space the orientation of the vector would be preserved along the transport.

2 Curvature. In the following we will use the Einstein summation convention and make use of the notation: $\mathfrak{X}(M)$ for the space of smooth vector fields on $M$, and $\mathcal{D}(M)$ for the space of smooth functions on $M$.

2.1 Defining curvature and finding important properties. This rather geometrical approach motivates the following definition:

Definition 2.1 (Curvature). The curvature of a Riemannian manifold is a correspondence that to each pair of vector fields $X, Y \in \mathfrak{X}(M)$ associates the map $R(X, Y) \colon \mathfrak{X}(M) \to \mathfrak{X}(M)$ defined by
$$R(X, Y)Z = \nabla_Y \nabla_X Z - \nabla_X \nabla_Y Z + \nabla_{[X,Y]} Z \tag{1}$$
where $\nabla$ is the Riemannian connection of $M$.
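The "important simplifications" mentioned above can be written down concisely. In one common normalization (do Carmo's; added here for orientation, not part of the seminar handout): for an orthonormal pair $x, y \in T_pM$ and an orthonormal basis $\{e_1, \dots, e_{n-1}, e_n = x\}$ of $T_pM$,
$$K(x, y) = \langle R(x, y)x, y \rangle, \qquad \mathrm{Ric}_p(x) = \frac{1}{n-1} \sum_{i=1}^{n-1} \langle R(x, e_i)x, e_i \rangle, \qquad K(p) = \frac{1}{n} \sum_{j=1}^{n} \mathrm{Ric}_p(e_j),$$
that is, the Ricci curvature averages sectional curvatures of planes containing $x$, and the scalar curvature averages the Ricci curvatures.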
- 5.2. Inner Product Spaces
5.2. Inner Product Spaces.

Note. In this section, we introduce an inner product on a vector space. This will allow us to bring much of the geometry of $\mathbb{R}^n$ into the infinite dimensional setting.

Definition 5.2.1. A vector space with complex scalars $\langle V, \mathbb{C} \rangle$ is an inner product space (also called a Euclidean space or a pre-Hilbert space) if there is a function $\langle \cdot, \cdot \rangle \colon V \times V \to \mathbb{C}$ such that for all $u, v, w \in V$ and $a \in \mathbb{C}$ we have:
(a) $\langle v, v \rangle \in \mathbb{R}$ and $\langle v, v \rangle \geq 0$, with $\langle v, v \rangle = 0$ if and only if $v = 0$;
(b) $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$;
(c) $\langle u, av \rangle = a \langle u, v \rangle$; and
(d) $\langle u, v \rangle = \overline{\langle v, u \rangle}$, where the overline represents the operation of complex conjugation.
The function $\langle \cdot, \cdot \rangle$ is called an inner product.

Note. Notice that properties (b), (c), and (d) of Definition 5.2.1 combine to imply that $\langle u, av + bw \rangle = a \langle u, v \rangle + b \langle u, w \rangle$ and $\langle au + bv, w \rangle = \overline{a} \langle u, w \rangle + \overline{b} \langle v, w \rangle$ for all relevant vectors and scalars. That is, $\langle \cdot, \cdot \rangle$ is linear in the second position and "conjugate-linear" in the first position.

Note. We can also define an inner product on a vector space with real scalars by requiring that $\langle \cdot, \cdot \rangle \colon V \times V \to \mathbb{R}$ and by replacing property (d) in Definition 5.2.1 with the requirement that the inner product is symmetric: $\langle u, v \rangle = \langle v, u \rangle$. Then $\mathbb{R}^n$ with the usual dot product is an example of a real inner product space.
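A concrete example in line with these conventions (added for illustration): on $\mathbb{C}^n$, define
$$\langle u, v \rangle = \sum_{k=1}^{n} \overline{u_k}\, v_k.$$
Then (a) holds because $\langle v, v \rangle = \sum_{k=1}^{n} |v_k|^2 \geq 0$, with equality only for $v = 0$; (b) and (c) hold because each summand is linear in the second slot, matching the convention of Definition 5.2.1; and (d) holds because
$$\langle u, v \rangle = \sum_{k=1}^{n} \overline{u_k}\, v_k = \overline{\sum_{k=1}^{n} \overline{v_k}\, u_k} = \overline{\langle v, u \rangle}.$$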
- Semi-Riemannian Manifold Optimization
Semi-Riemannian Manifold Optimization. Tingran Gao, William H. Kruskal Instructor, Committee on Computational and Applied Mathematics (CCAM), Department of Statistics, The University of Chicago. 2018 China-Korea International Conference on Matrix Theory with Applications, Shanghai University, Shanghai, China, December 19, 2018.

Outline: Motivation (Riemannian manifold optimization; barrier functions and interior point methods); Semi-Riemannian manifold optimization (optimality conditions; descent direction; semi-Riemannian first order methods; metric independence of second order methods); Optimization on semi-Riemannian submanifolds. Joint work with Lek-Heng Lim (The University of Chicago) and Ke Ye (Chinese Academy of Sciences).

Riemannian Manifold Optimization. Optimization on manifolds: $\min_{x \in M} f(x)$, where $(M, g)$ is a (typically nonlinear, non-convex) Riemannian manifold and $f \colon M \to \mathbb{R}$ is a smooth function on $M$. This is a difficult constrained optimization, handled as unconstrained optimization from the perspective of manifold optimization. Key ingredients: tangent spaces, geodesics, the exponential map, parallel transport, the gradient $\nabla f$, and the Hessian $\nabla^2 f$.

Riemannian steepest descent: $x_{k+1} = \mathrm{Exp}_{x_k}(-t \nabla f(x_k))$, $t \geq 0$. Riemannian Newton's method: solve $\nabla^2 f(x_k)\, \eta_k = -\nabla f(x_k)$, then set $x_{k+1} = \mathrm{Exp}_{x_k}(t \eta_k)$, $t \geq 0$. Also: Riemannian trust region, Riemannian conjugate gradient, and more. Gabay (1982), Smith (1994), Edelman et al. (1998), Absil et al. (2008), Adler et al. (2002), Ring and Wirth (2012), Huang et al. (2015), Sato (2016), etc.

Applications: optimization problems on matrix manifolds. Stiefel manifolds $V_k(\mathbb{R}^n) = O(n)/O(n-k)$; Grassmannian manifolds $\mathrm{Gr}(k, n) = O(n)/(O(k) \times O(n-k))$; flag manifolds: for $n_1 + \cdots + n_d = n$, $\mathrm{Flag}_{n_1, \dots, n_d} = O(n)/(O(n_1) \times \cdots \times O(n_d))$; shape spaces $\left(\mathbb{R}^{k \times n} \setminus \{0\}\right)/O(n)$; …
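A minimal runnable sketch of the Riemannian steepest descent recalled above, on the simplest matrix manifold of all, the unit sphere $S^{n-1}$ (illustrative only; the test function, step size, and iteration count are arbitrary choices, not from the slides). Minimizing $f(x) = x^\top A x$ over the sphere recovers an eigenvector for the smallest eigenvalue of $A$, which gives an easy correctness check:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = A + A.T                              # symmetric test matrix

def exp_map(x, v):
    # Exponential map on the unit sphere: follow the great circle from x along v.
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

x = rng.normal(size=5)
x /= np.linalg.norm(x)                   # start on the sphere
t = 0.01                                 # fixed step size, chosen for simplicity
for _ in range(1000):
    egrad = 2 * A @ x                    # Euclidean gradient of f(x) = x^T A x
    rgrad = egrad - (x @ egrad) * x      # Riemannian gradient: project onto T_x S
    x = exp_map(x, -t * rgrad)           # steepest-descent update x_{k+1}

print("f(x) =", x @ A @ x)
print("smallest eigenvalue:", np.linalg.eigvalsh(A)[0])
```

The two printed numbers should agree to good accuracy, illustrating how the tangent-space projection and the exponential map replace the usual unconstrained gradient step.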
- Learning a Distance Metric from Relative Comparisons
Learning a Distance Metric from Relative Comparisons. Matthew Schultz and Thorsten Joachims, Department of Computer Science, Cornell University, Ithaca, NY 14853. {schultz,tj}@cs.cornell.edu

Abstract: This paper presents a method for learning a distance metric from relative comparisons such as "A is closer to B than A is to C." Taking a Support Vector Machine (SVM) approach, we develop an algorithm that provides a flexible way of describing qualitative training data as a set of constraints. We show that such constraints lead to a convex quadratic programming problem that can be solved by adapting standard methods for SVM training. We empirically evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents.

1 Introduction. Distance metrics are an essential component in many applications ranging from supervised learning and clustering to product recommendations and document browsing. Since designing such metrics by hand is difficult, we explore the problem of learning a metric from examples. In particular, we consider relative and qualitative examples of the form "A is closer to B than A is to C." We believe that feedback of this type is more easily available in many application settings than quantitative examples (e.g. "the distance between A and B is 7.35") as considered in metric Multidimensional Scaling (MDS) (see [4]), or absolute qualitative feedback (e.g. "A and B are similar," "A and C are not similar") as considered in [11]. Building on the study in [7], search-engine query logs are one example where feedback of the form "A is closer to B than A is to C" is readily available for learning a (more semantic) similarity metric on documents.
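The constraint "A is closer to B than A is to C" translates into a margin requirement $d(A, C)^2 - d(A, B)^2 \geq 1$. Below is a toy sketch of that idea (not the paper's exact QP; it uses subgradient descent on a hinge loss over a diagonal metric, and all data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
w_true = np.array([0.0, 1.0, 1.0, 1.0])   # hidden "true" metric ignores feature 0

def d2(w, a, b):
    return np.sum(w * (a - b) ** 2)        # squared diagonal-weighted distance

# Generate triples (a, b, c) meaning "a is closer to b than a is to c".
triples = []
while len(triples) < 300:
    a, b, c = rng.integers(0, 50, size=3)
    if d2(w_true, X[a], X[b]) + 1.0 < d2(w_true, X[a], X[c]):
        triples.append((a, b, c))

# Hinge loss: we want d2(w, a, c) - d2(w, a, b) >= 1 for every triple.
w = np.ones(4)
lr = 0.01
for _ in range(300):
    grad = np.zeros(4)
    for a, b, c in triples:
        margin = d2(w, X[a], X[c]) - d2(w, X[a], X[b])
        if margin < 1.0:                   # violated constraint: hinge subgradient
            grad -= (X[a] - X[c]) ** 2 - (X[a] - X[b]) ** 2
    w = np.maximum(w - lr * grad / len(triples), 0.0)   # keep weights nonnegative

print("learned weights:", w)               # weight 0 typically comes out smallest
```

The paper replaces this heuristic loop with a proper convex quadratic program with slack variables, solved by adapted SVM training methods.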
- Deep Riemannian Manifold Learning
Deep Riemannian Manifold Learning. Aaron Lou (Cornell University), Maximilian Nickel, Brandon Amos (Facebook AI).

Abstract: We present a new class of learnable Riemannian manifolds with a metric parameterized by a deep neural network. The core manifold operations, specifically the Riemannian exponential and logarithmic maps, are solved using approximate numerical techniques. Input and parameter gradients are computed with an adjoint sensitivity analysis. This enables us to fit geodesics and distances with gradient-based optimization of both on-manifold values and the manifold itself. We demonstrate our method's capability to model smooth, flexible metric structures in graph embedding tasks.

1 Introduction. Geometric domain knowledge is important for machine learning systems to interact, process, and produce data from inherently geometric spaces and phenomena [BBL+17, MBM+17]. Geometric knowledge often empowers the model with explicit operations and components that reflect the geometric properties of the domain. One commonly used construct is a Riemannian manifold, the generalization of standard Euclidean space. These structures enable models to represent non-trivial geometric properties of data [NK17, DFDC+18, GDM+18, NK18, GBH18]. However, despite this capability, the current set of usable Riemannian manifolds is surprisingly small, as the core manifold operations must be provided in a closed form to enable gradient-based optimization. Worse still, these manifolds are often set a priori, as the techniques for interpolating between manifolds are extremely restricted [GSGR19, SGB20]. In this work, we study how to integrate a general class of Riemannian manifolds into learning systems. Specifically, we parameterize a class of Riemannian manifolds with a deep neural network and develop techniques to optimize through the manifold operations.
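To see why the exponential map needs a numerical solver when the metric is a black box, here is a toy sketch (illustrative only, and not the paper's method: a hand-written metric stands in for the neural network, Christoffel symbols come from finite differences, and plain Euler integration replaces the paper's solver and adjoint machinery). It integrates the geodesic ODE $\ddot{x}^k = -\Gamma^k_{ij}\,\dot{x}^i \dot{x}^j$ to approximate $\mathrm{Exp}_x(v)$:

```python
import numpy as np

def g(x):
    # Position-dependent metric tensor at x (2x2, SPD by construction).
    # A smooth hand-written function standing in for a neural network.
    return np.diag([1.0 + x[0] ** 2, 1.0 + x[1] ** 2])

def christoffel(x, h=1e-5):
    # Gamma^k_ij = 1/2 g^{kl} (d_i g_jl + d_j g_il - d_l g_ij), via central differences.
    dg = np.stack([(g(x + h * np.eye(2)[l]) - g(x - h * np.eye(2)[l])) / (2 * h)
                   for l in range(2)])          # dg[l] = d g / d x_l
    ginv = np.linalg.inv(g(x))
    Gamma = np.zeros((2, 2, 2))
    for k in range(2):
        for i in range(2):
            for j in range(2):
                Gamma[k, i, j] = 0.5 * sum(
                    ginv[k, l] * (dg[i, j, l] + dg[j, i, l] - dg[l, i, j])
                    for l in range(2))
    return Gamma

def exp_map(x, v, steps=100):
    # Euler integration of x'' = -Gamma(x)(x', x') from (x, v), evaluated at t = 1.
    dt = 1.0 / steps
    for _ in range(steps):
        Gamma = christoffel(x)
        acc = -np.einsum("kij,i,j->k", Gamma, v, v)
        x = x + dt * v
        v = v + dt * acc
    return x

print(exp_map(np.array([0.5, 0.5]), np.array([1.0, 0.0])))
```

In the paper's setting the metric's parameters are trained, so gradients must flow through this integration, which is what the adjoint sensitivity analysis provides.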
- Section 20. The Metric Topology
Section 20. The Metric Topology.

Note. The topological concepts you encounter in Analysis 1 are based on the metric on $\mathbb{R}$ which gives the distance between $x$ and $y$ as $|x - y|$. More generally, any space with a metric on it can have a topology defined in terms of the metric (which is ultimately based on an $\varepsilon$ definition of open sets). This is done in our Complex Analysis 1 (MATH 5510) class (see the class notes for Chapter 2 at http://faculty.etsu.edu/gardnerr/5510/notes.htm). In this section we define a metric and use it to create the "metric topology" on a set.

Definition. A metric on a set $X$ is a function $d \colon X \times X \to \mathbb{R}$ having the following properties:
(1) $d(x, y) \geq 0$ for all $x, y \in X$, and $d(x, y) = 0$ if and only if $x = y$;
(2) $d(x, y) = d(y, x)$ for all $x, y \in X$;
(3) (the triangle inequality) $d(x, y) + d(y, z) \geq d(x, z)$ for all $x, y, z \in X$.
The nonnegative real number $d(x, y)$ is the distance between $x$ and $y$. The set $X$ together with the metric $d$ forms a metric space. Define the set $B_d(x, \varepsilon) = \{y \mid d(x, y) < \varepsilon\}$, called the $\varepsilon$-ball centered at $x$ (often simply denoted $B(x, \varepsilon)$).

Definition. If $d$ is a metric on $X$, then the collection of all $\varepsilon$-balls $B_d(x, \varepsilon)$ for $x \in X$ and $\varepsilon > 0$ is a basis for a topology on $X$, called the metric topology induced by $d$.
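A small sketch (added for illustration, not part of the notes above) showing that different metrics on the same set give different $\varepsilon$-balls, here the Euclidean and taxicab metrics on $\mathbb{R}^2$; the sample point below lies inside the round unit ball but outside the diamond-shaped one:

```python
import math

def d_euclid(x, y):
    # Euclidean metric on R^2
    return math.hypot(x[0] - y[0], x[1] - y[1])

def d_taxi(x, y):
    # Taxicab metric on R^2
    return abs(x[0] - y[0]) + abs(x[1] - y[1])

def in_ball(d, center, eps, p):
    # Membership test for the eps-ball B_d(center, eps)
    return d(center, p) < eps

p = (0.6, 0.7)
print(in_ball(d_euclid, (0, 0), 1.0, p))   # True: sqrt(0.85) < 1
print(in_ball(d_taxi, (0, 0), 1.0, p))     # False: 1.3 >= 1
```

Although the balls differ, these two metrics induce the same topology on $\mathbb{R}^2$, since every ball of one kind contains a ball of the other kind around each of its points.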