Analysis in Metric Spaces
Mario Bonk, Luca Capogna, Piotr Hajłasz, Nageswari Shanmugalingam, and Jeremy Tyson


The authors of this piece are organizers of the AMS 2020 Mathematics Research Communities summer conference Analysis in Metric Spaces, one of five topical research conferences offered this year that are focused on collaborative research and professional development for early-career mathematicians. Additional information can be found at https://www.ams.org/programs/research-communities/2020MRC-MetSpace. Applications are open until February 15, 2020.

Mario Bonk is a professor of mathematics at the University of California, Los Angeles. His email address is [email protected]. Luca Capogna is a professor of mathematical sciences at Worcester Polytechnic Institute. His email address is [email protected]. Piotr Hajłasz is a professor of mathematics at the University of Pittsburgh. His email address is [email protected]. Nageswari Shanmugalingam is a professor of mathematical sciences at the University of Cincinnati. Her email address is [email protected]. Jeremy Tyson is a professor of mathematics at the University of Illinois, Urbana–Champaign. His email address is [email protected]. For permission to reprint this article, please contact: [email protected].

The subject of analysis, more specifically, first-order calculus, in metric measure spaces provides a unifying framework for ideas and questions from many different fields of mathematics. One of the earliest motivations and applications of this theory arose in Mostow's work [Mos73], in which he extended his celebrated rigidity theorem for hyperbolic manifolds to the more general framework of manifolds locally modeled on negatively curved symmetric spaces of rank one. In his proof, Mostow used the theory of quasiconformal mappings on the visual boundaries of rank-one symmetric spaces. These visual boundaries are equipped with a sub-Riemannian structure that is locally non-Euclidean and has a fractal nature. Mostow's study of quasiconformal maps on such boundaries motivated Heinonen and Koskela [HK98] to axiomatize several aspects of Euclidean quasiconformal geometry in the setting of metric measure spaces and thereby extend Mostow's work beyond the sub-Riemannian setting. The groundbreaking work [HK98] initiated the modern theory of analysis on metric spaces.

Analysis on metric spaces is nowadays an active and independent field, bringing together researchers from different parts of the mathematical spectrum. It has far-reaching applications to areas as diverse as geometric group theory, nonlinear PDEs, and even theoretical computer science. As a further sign of recognition, analysis on metric spaces has been included in the 2010 MSC classification as a category (30L: Analysis on metric spaces). In this short survey, we can discuss only a small fraction of areas into which analysis on metric spaces has expanded. For more comprehensive introductions to various aspects of the subject, we invite the reader to consult the monographs [Hei01, HK00, HKST15, BB11, MT10, AGS08, BS07, Hei07].

Poincaré inequalities in metric spaces. Inspired by the fundamental theorem of calculus, Heinonen and Koskela proposed the notion of upper gradient as a substitute for the derivative of a function on a metric measure space (푋, 푑, 휇). More precisely, 푔 ≥ 0 is an upper gradient for a real-valued function 푢 on 푋 if

|푢(훾(1)) − 푢(훾(0))| ≤ ∫_훾 푔 푑푠

for each path 훾∶ [0, 1] → 푋 of finite length. Upper gradients are not unique, but if a function 푢 has an upper gradient 푔 ∈ 퐿^푝(휇), then there is a unique 푝-weak upper gradient 푔ᵤ with minimal 퐿^푝-norm for which the preceding inequality holds for "almost every" curve 훾.
The metric measure space 푋 is said to support a 푝-Poincaré inequality for some 푝 ≥ 1 if constants 퐶 > 0 and 휆 ≥ 1 exist so that for every ball 퐵 = 퐵(푥, 푅) ⊂ 푋, the inequality

⨍_퐵 |푢 − 푢_퐵| 푑휇 ≤ 퐶푅 (⨍_휆퐵 푔ᵤ^푝 푑휇)^{1/푝}

holds for all function–upper gradient pairs (푢, 푔ᵤ). Here 푢_퐵 = ⨍_퐵 푢 푑휇 and 휆퐵 = 퐵(푥, 휆푅).

Over the past twenty years, many aspects of first-order calculus have been systematically developed in the setting of PI spaces, that is, metric measure spaces equipped with a doubling measure and supporting a Poincaré inequality. For example, for PI spaces we now have a rich theory of Sobolev functions, which in turn lies at the foundation of the theory of quasiconformal mappings and nonlinear potential theory.

A wealth of interesting and important examples of non-Euclidean PI spaces exist, including sub-Riemannian manifolds such as the Heisenberg group, Gromov–Hausdorff limits of manifolds with lower Ricci curvature bounds, visual boundaries of certain hyperbolic buildings, and fractal spaces that are homeomorphic to the Menger curve. The scope of the theory, however, is not fully explored.

Quasiconformal maps and nonlinear potential theory in metric spaces. A homeomorphism between metric spaces is said to be quasiconformal if it distorts the geometry of infinitesimal balls in a controlled fashion. Conformal maps form a special subclass for which infinitesimal balls are mapped to infinitesimal balls. Since the only conformal maps between higher-dimensional Euclidean spaces are Möbius transformations, quasiconformal homeomorphisms form a more flexible class for geometric mapping problems. For quasiconformal maps on PI spaces, we now have a well-developed theory that features many of the aspects of the Euclidean theory, such as Sobolev regularity, preservation of sets of measure zero, and global distortion estimates, among other things.

A function 푢 on a domain Ω in a metric measure space (푋, 푑, 휇) is said to be 푝-quasiharmonic for 푝 ≥ 1 if a constant 푄 ≥ 1 exists so that the inequality

∫_{spt 휑} 푔ᵤ^푝 푑휇 ≤ 푄 ∫_{spt 휑} 푔_{푢+휑}^푝 푑휇

holds whenever 휑 is a Lipschitz function with compact support spt 휑 in Ω. In case 푄 = 1, we say that 푢 is 푝-harmonic; this coincides with the classical Euclidean notion of a 푝-harmonic function, defined as a weak solution to the 푝-Laplace equation

div(|∇푢|^{푝−2}∇푢) = 0.

Quasiharmonic functions are useful in the study of quasiconformal mappings. For example, one can characterize quasiconformal homeomorphisms between 푛-dimensional Euclidean domains as those homeomorphisms that preserve the class of 푛-quasiharmonic functions. A similar statement is also true for PI spaces. This generalizes the well-known fact that planar conformal mappings are precisely the orientation-preserving homeomorphisms that preserve harmonic functions under pullback.

The further development of potential theory in the setting of metric measure spaces leads to a classification of spaces as either 푝-parabolic or 푝-hyperbolic. This dichotomy can be seen as a nonlinear analog of the recurrence/transience dichotomy in the theory of Brownian motion. This classification is helpful in the development of a quasiconformal uniformization theory or for a deeper understanding of the links between the geometry of hyperbolic spaces and the analysis on their boundaries at infinity.

Differentiability of Lipschitz functions. The notion of upper gradient generalizes to metric spaces the norm of the gradient of a 퐶¹-function. It is a priori unclear how to formulate a notion of the gradient itself (or of the differential of a function) in the absence of a linear structure. Cheeger [Che99] introduced a linear differential structure for real-valued functions on metric measure spaces and established a version of Rademacher's theorem for Lipschitz functions defined on PI spaces. This differential structure gives rise to a finite-dimensional measurable vector bundle, the generalized cotangent bundle, over the metric space: to each real-valued Lipschitz function 푢 corresponds an 퐿^∞-section 푑푢 of this bundle. Moreover, the pointwise Euclidean norm |푑푢| is comparable to the minimal upper gradient 푔ᵤ almost everywhere. This structure can be used in turn to investigate second-order PDEs in divergence form, as a basis for a theory of differential currents in metric spaces, and for many other purposes.

Bi-Lipschitz embedding theorems. An earlier version of Rademacher's differentiation theorem for Lipschitz maps between Carnot groups was proved by Pansu [Pan89]. Semmes observed that the Pansu–Rademacher theorem implies that nonabelian Carnot groups do not admit bi-Lipschitz copies in finite-dimensional Euclidean spaces. Moreover, such spaces do not bi-Lipschitz embed into Hilbert space or even into any Banach space with the Radon–Nikodým property (RNP). Indeed, the algebraic features of sub-Riemannian geometry have direct implications for metric questions such as bi-Lipschitz equivalence or embeddability.

The bi-Lipschitz embedding problem is intimately related to the existence of suitable differentiation theories for Lipschitz functions and maps. Roughly speaking, this relationship proceeds via incompatibility between the geometry of the cotangent bundles of the source and target spaces. In view of Cheeger's differentiation theorem, one can allow arbitrary PI spaces as source spaces here and take RNP Banach spaces as targets, for example. On the other hand, there is no effective differentiation theory for maps into ℓ^∞, because according to the Fréchet embedding theorem, every separable metric space embeds isometrically into ℓ^∞.

… 2-sphere. While the conjecture is still open, one can show that the desired conclusion is true if 휕_∞퐺 (equipped with a visual metric) has good analytic properties, say, if it is quasisymmetrically equivalent to a PI space. For more information see the ICM lectures [Bon06] and [Kle06]. The problem of deciding when a metric space is quasisymmetrically equivalent to a space with "better" analytic …

Notices of the American Mathematical Society, February 2020, Volume 67, Number 2. DOI: https://doi.org/10.1090/noti2030.
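In one dimension the 푝-Laplace equation reduces to (|푢′|^{푝−2}푢′)′ = 0, so 푝-harmonic functions on an interval are linear for every 푝. The following sketch, our own illustration and not from the article (NumPy is assumed), recovers this by minimizing a discrete 푝-Dirichlet energy with gradient descent:

```python
import numpy as np

# Discrete model: minimize the p-Dirichlet energy  h * sum_i |(u_{i+1}-u_i)/h|^p
# over grid functions on [0, 1] with fixed boundary values u(0)=0, u(1)=1.
# Minimizers are discrete p-harmonic functions; in one dimension the continuous
# solution is linear for every p, and the iteration below recovers that.

n, p = 8, 4
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

def p_energy(u):
    """h * sum |u'|^p, the discrete p-Dirichlet energy."""
    return h * np.sum(np.abs(np.diff(u) / h) ** p)

u = x + 0.1 * np.sin(np.pi * x)       # perturbed start; boundary values unchanged
for _ in range(20000):                # gradient descent on the interior nodes
    du = np.diff(u) / h
    flux = np.abs(du) ** (p - 2) * du              # the "stress" |u'|^{p-2} u'
    u[1:-1] -= 1e-3 * p * (flux[:-1] - flux[1:])   # discrete divergence of the flux

print(np.max(np.abs(u - x)))   # near 0: the minimizer is the linear function
```

At the minimum the flux |푢′|^{푝−2}푢′ is constant across cells, which is exactly the discrete form of the 푝-Laplace equation above; the quasiharmonic inequality with 푄 = 1 expresses the same minimization property.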