The Nash Embedding Theorem


Khang Manh Huynh

March 13, 2018

Abstract. This is an attempt to present an elementary exposition of the Nash embedding theorem for the graduate student who at least knows what a vector field is. We mainly rely on [Tao16] and [How99].

1 Preliminary definitions

Definition 1. A Riemannian manifold $(M, g)$ is a smooth manifold $M$ equipped with a smooth Riemannian metric $g$. In other words, for any vector fields $X, Y$ (i.e. $X, Y \in \mathfrak{X}(M)$), $g(X, Y)$ is a smooth function on $M$, and for any $p \in M$ there is a positive-definite inner product $g_p : T_pM \times T_pM \to \mathbb{R}$ such that $g(X, Y)(p) = g_p(X_p, Y_p)$. As a consequence, for any $f \in C^\infty(M)$: $g(fX, Y) = f\,g(X, Y)$.

An isometric embedding $\phi$ from $(M_1, g_1)$ to $(M_2, g_2)$ (both Riemannian manifolds) is a smooth embedding $\phi : M_1 \to M_2$ that preserves the metric, i.e. $\forall X, Y \in \mathfrak{X}(M_1)$, $\forall p \in M_1$:
$$(g_1)_p(X_p, Y_p) = (g_2)_{\phi(p)}(d\phi \cdot X_p,\ d\phi \cdot Y_p).$$

We write $e = \langle \cdot, \cdot \rangle$ for the Euclidean metric on $\mathbb{R}^n$, formed by the usual Euclidean dot product. Unless indicated otherwise, $\mathbb{R}^n$ is always equipped with $e$, and we write $\mathbb{R}^n$ for $(\mathbb{R}^n, e)$.

Throughout this note, everything we work with is assumed to be smooth unless indicated otherwise. Every metric is a Riemannian metric unless indicated otherwise. Now we can state the main theorem:

Theorem 2 (Nash embedding). Any compact Riemannian manifold $(M, g)$ without boundary can be isometrically embedded into $\mathbb{R}^n$ for some $n$.

2 Sketch of proof

We will first give a sketch of the proof, leaving the technical details as lemmas to be proven later. The first tool we require is Whitney embedding. For any $u \in C^\infty(M, \mathbb{R}^n)$ and $v \in C^\infty(M, \mathbb{R}^k)$, we can define $u \oplus v : p \mapsto (u(p), v(p))$. Gluing chart functions (properly cut off) leads to Whitney's theorem.

Lemma 3 (Baby Whitney). Any compact smooth manifold can be smoothly embedded into $\mathbb{R}^n$ for some $n$.

Because of this, and because $M$ is compact, we can embed $M$ into a torus, which will simplify our calculations greatly.

Lemma 4 (Torus embedding).
WLOG, we can assume $M = \mathbb{T}^m = (\mathbb{R}/\mathbb{Z})^m$, i.e. the $m$-dimensional torus, equipped with an arbitrary Riemannian metric $g$.

Then, we introduce some definitions:

Definition 5. Let $\mathrm{Sym}$ denote the set of symmetric tensors on $\mathbb{T}^m$. In other words, $h \in \mathrm{Sym}$ when $h = (h_{ab})_{1 \le a, b \le m}$ is a smooth function from $\mathbb{T}^m$ into $\mathrm{Sym}_m(\mathbb{R})$ (the set of symmetric $m \times m$ matrices). Any metric $g$ of $\mathbb{T}^m$ can be considered a symmetric tensor, with $g_{ab} = g(\partial_a, \partial_b)$ where $(\partial_a)$ are the standard coordinate vector fields on $\mathbb{T}^m$. Conversely, if $h \in \mathrm{Sym}$ and $h > 0$ (i.e. $h(p)$ is a positive matrix $\forall p$), then $h$ is a metric.

Let $\mathrm{Map} = \bigcup_{n \in \mathbb{N}} C^\infty(\mathbb{T}^m, \mathbb{R}^n)$ be the set of all maps from $\mathbb{T}^m$ into some Euclidean space $\mathbb{R}^n$. We define the function $Q : \mathrm{Map} \to \mathrm{Sym}$ such that for any $u \in \mathrm{Map}$:
$$Q(u)_{ab} = \langle \partial_a u, \partial_b u \rangle$$
where $\langle \cdot, \cdot \rangle$ is the usual Euclidean dot product. We note that $Q(u) \ge 0$.

A metric $g$ on $\mathbb{T}^m$ is called good when we can find $u \in \mathrm{Map}$ such that $Q(u) = g$ (and such $u$ would have to be an immersion, though not necessarily an embedding). We write $\mathrm{Good}$ for the set of good metrics. Because $Q(u \oplus v) = Q(u) + Q(v)$, $\mathrm{Good}$ is closed under addition.

Let $\mathrm{Emb} \subset \mathrm{Map}$ be the set of maps that are embeddings. Nash's theorem says every metric is in $Q(\mathrm{Emb})$. However, if every metric is good, Nash's theorem is proven. Indeed, let $g$ be a metric and $W \in \mathrm{Emb}$ be a Whitney embedding. By rescaling $W$, WLOG $Q(W) < g$. Then $g = Q(W) + g_0$ where $g_0$ is a metric, therefore good, and $g_0 = Q(v)$ for some $v \in \mathrm{Map}$. Then $W \oplus v \in \mathrm{Emb}$ and $Q(W \oplus v) = g$. We're done.

Before we can prove every metric is good, we can prove it is approximately good, by finding a very clever symmetric tensor.

Lemma 6 (Approximation). For any metric $g$, there is $h \in \mathrm{Sym}$ such that $g + \varepsilon^2 h \in \mathrm{Good}$ $\forall \varepsilon > 0$.

This reduces Nash's theorem to a local perturbation problem, where Nash made his fundamental contribution. We need another definition:

Definition 7 (Gromov–Rokhlin). An injective smooth map $\phi : \mathbb{T}^m \to \mathbb{R}^n$ is called free when $\{\partial_a \phi(p)\}_a \cup \{\partial_a \partial_b \phi(p)\}_{a \le b}$ is linearly independent for any $p \in \mathbb{T}^m$.
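The map $Q$ is easy to compute numerically. As a sketch (not part of the note; the map `u` and step size below are illustrative choices): for the standard flat embedding of $\mathbb{T}^2$ into $\mathbb{R}^4$, the pullback tensor $Q(u)$ is the identity matrix, so this $u$ realizes the flat metric as a good metric.

```python
import numpy as np

def Q(u, p, h=1e-6):
    """Pullback tensor Q(u)_ab = <d_a u, d_b u> at p, via central differences."""
    m = len(p)
    du = []
    for a in range(m):
        e = np.zeros(m); e[a] = h
        du.append((u(p + e) - u(p - e)) / (2 * h))   # partial derivative d_a u(p)
    du = np.array(du)            # shape (m, n)
    return du @ du.T             # Gram matrix of the partial derivatives

# Standard flat embedding of T^2 = (R/Z)^2 into R^4. Each circle factor is
# traversed at unit speed, so Q(u) should be the 2x2 identity matrix.
def u(p):
    x, y = 2 * np.pi * p
    return np.array([np.cos(x), np.sin(x), np.cos(y), np.sin(y)]) / (2 * np.pi)

print(Q(u, np.array([0.3, 0.7])))   # approximately the identity matrix
```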
Note that injective immersions on a compact manifold are automatically embeddings. Let $\mathrm{Free} \subset \mathrm{Emb}$ be the set of embeddings that are free. We can also define free maps on open submanifolds of $\mathbb{T}^m$ and $\mathbb{R}^m$.

Now we can state the perturbation lemma. Recall that $h \xrightarrow{C^\infty} 0$ means $h \xrightarrow{C^k} 0$ $\forall k$.

Lemma 8 (Perturbation). Let $u \in \mathrm{Free}$. Then for any $h \in \mathrm{Sym}$ small enough (in the $C^\infty$ topology), $Q(u) + h \in \mathrm{Good}$.

We use the flexibility of free maps to make the symmetric tensor good. Now let us prove Nash's theorem.

Proof. Let $g$ be any metric on $\mathbb{T}^m$. We first find a free map.

Because $\mathbb{T}^m$ is just like $\mathbb{R}^m$ locally, we can find a finite open cover $(U_i)$ of $\mathbb{T}^m$ with cutoffs $\psi_i \in C_c^\infty(U_i)$, $0 \le \psi_i \le 1$, such that $V_i = \mathrm{int}\{\psi_i = 1\}$ also form an open cover of $\mathbb{T}^m$, and there are diffeomorphisms $\phi_i : U_i \to B_{\mathbb{R}^m}(0, 2)$ such that $\phi_i(V_i) = B_{\mathbb{R}^m}(0, 1)$ and $d\phi_i \cdot \partial_a = \partial_a$ $\forall a$.

Obviously $\psi_i$ and $\psi_i \phi_i$ can be defined on $\mathbb{T}^m$ by zero extension. Then we define $F = \bigoplus_i (\psi_i \oplus \psi_i \phi_i)$. So $F$ is an injective immersion, therefore an embedding from $\mathbb{T}^m$ into $\mathbb{R}^k$ for some $k$.

Then define the Veronese embedding $\iota_k : \mathbb{R}^k \to \mathbb{R}^{k + \frac{k(k+1)}{2}}$, $(x^a)_{a=1}^k \mapsto (x^a, x^a x^b)_{1 \le a \le b \le k}$, and let $u = \iota_k \circ F$. By looking at $u|_{V_i}$, it's easy to see $u$ is a free embedding.

By rescaling $u$, and by the compactness of $\mathbb{T}^m$, we can WLOG assume $Q(u)(p) < g_p$ $\forall p$ (as positive matrices). Then $g = g_0 + Q(u)$ where $g_0$ is a metric.

By approximation, there is $h \in \mathrm{Sym}$ such that $g_0 + \varepsilon^2 h \in \mathrm{Good}$ $\forall \varepsilon > 0$. By perturbation, for $\varepsilon$ small enough, $Q(u) - \varepsilon^2 h \in \mathrm{Good}$. So $g = (g_0 + \varepsilon^2 h) + (Q(u) - \varepsilon^2 h) \in \mathrm{Good}$. So every metric is good and Nash's theorem is proven.

So we now only need to prove our lemmas.

3 Embedding into the torus

Proof of baby Whitney. Let $M$ be any compact manifold. We can find a finite open cover $(U_i)$ of $M$ with cutoffs $\psi_i \in C_c^\infty(U_i)$, $0 \le \psi_i \le 1$, such that $V_i = \mathrm{int}\{\psi_i = 1\}$ also form an open cover of $M$, and there are diffeomorphisms $\phi_i : U_i \to B_{\mathbb{R}^m}(0, 2)$ such that $\phi_i(V_i) = B_{\mathbb{R}^m}(0, 1)$. Obviously $\psi_i$ and $\psi_i \phi_i$ can be defined on $M$ by zero extension.
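Freeness is a finite-dimensional rank condition, so it can be checked numerically. A sketch (the point $p$ and step size are arbitrary illustrative choices): for the Veronese map $\iota_2 : \mathbb{R}^2 \to \mathbb{R}^5$, the $2 + 3 = 5$ vectors $\{\partial_a \iota_2(p)\} \cup \{\partial_a \partial_b \iota_2(p)\}_{a \le b}$ form a $5 \times 5$ matrix of full rank, confirming freeness at $p$.

```python
import numpy as np

def veronese(x):
    """iota_k : R^k -> R^{k + k(k+1)/2}, x -> (x^a, x^a x^b) for a <= b."""
    k = len(x)
    quad = [x[a] * x[b] for a in range(k) for b in range(a, k)]
    return np.concatenate([x, quad])

def freeness_matrix(f, p, h=1e-4):
    """Stack first partials d_a f and second partials d_a d_b f (a <= b) at p."""
    m = len(p)
    def d(a, q):  # central difference of f in direction a, at q
        e = np.zeros(m); e[a] = h
        return (f(q + e) - f(q - e)) / (2 * h)
    rows = [d(a, p) for a in range(m)]
    for a in range(m):
        for b in range(a, m):
            e = np.zeros(m); e[b] = h
            rows.append((d(a, p + e) - d(a, p - e)) / (2 * h))
    return np.array(rows)        # shape (m + m(m+1)/2, target dimension)

p = np.array([0.4, -1.3])
A = freeness_matrix(veronese, p)
print(np.linalg.matrix_rank(A))  # 5 = 2 + 3: all five vectors are independent
```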
Then we define $W = \bigoplus_i (\psi_i \oplus \psi_i \phi_i)$. So $W$ is an injective immersion, therefore an embedding from $M$ into $\mathbb{R}^k$ for some $k$.

Proof of torus embedding. Let $(M, g)$ be any compact Riemannian manifold. By Whitney, there is an embedding $W : M \to \mathbb{R}^m$ for some $m$. Because $M$ is compact, by translation and rescaling, WLOG assume $W(M) \subset (0, 1)^m$. So WLOG, $M \subset \mathbb{T}^m$.

We want to extend $g$ from $M$ to $\mathbb{T}^m$. We first do this locally. Let $p \in M$. As $M$ is a regular submanifold of $\mathbb{T}^m$, there is a neighborhood $\overline{U}$ containing $p$, open in $\mathbb{T}^m$, with a diffeomorphism $\Phi : \overline{U} \to B_{\mathbb{R}^m}(0, 1)$ such that for $U = \overline{U} \cap M$, $\Phi(U) = B_{\mathbb{R}^l}(0, 1) \times \{0\}^{m-l}$ where $l = \dim M$. Parametrize $\mathbb{R}^m = \{(y, z) : y \in \mathbb{R}^l, z \in \mathbb{R}^{m-l}\}$; then WLOG, via the diffeomorphism, $\overline{U} = \{(y, z) : |y|^2 + |z|^2 < 1\}$ and $U = \{(y, 0) : |y|^2 < 1\}$, where $U$ is equipped with a metric $g$. Then simply define $\overline{g}$ on $\overline{U}$:
$$\overline{g}_{(y,z)}((a_1, b_1), (a_2, b_2)) = g_y(a_1, a_2) + \langle b_1, b_2 \rangle$$
where $\langle \cdot, \cdot \rangle$ is the usual Euclidean dot product. Then going back via the diffeomorphism we get $\overline{g}$ on the original $\overline{U}$.

Finally we use a partition of unity to extend $g$ globally. We can find a finite collection of Riemannian manifolds $(U_i, g_i)_{i=1}^N$ such that the $U_i$ are open in $\mathbb{T}^m$ and cover $M$, while the $g_i$ are extensions of $g|_{M \cap U_i}$. Let $U_0 = \mathbb{T}^m \setminus M$. Then $U_0$ is open and $(U_0, g_0)$ is a Riemannian manifold where $g_0$ is the usual Euclidean metric. Then we can find a partition of unity $(\psi_i)_{i=0}^N$ subordinate to $(U_i)_{i=0}^N$ and we define $\overline{g} = \sum_{i=0}^N \psi_i g_i$. So we can find an isometric embedding $(M, g) \to (\mathbb{T}^m, \overline{g})$. So we just need to prove Nash's theorem for the torus.

4 Approximating the metric

Firstly, by the Gram–Schmidt process, for any $p$ we can find an orthonormal basis for the inner product space $(T_p\mathbb{T}^m, g_p)$. Identify $T_p\mathbb{T}^m$ with $\mathbb{R}^m$. Let $v = (v^a) \in \mathbb{R}^m$ be any vector; then define the rank-1 tensor $v \otimes v = vv^T = (v^a v^b)_{ab} \in \mathrm{Sym}_m(\mathbb{R})$. So there are vectors $(v_i)_{i=1}^m$ such that $g_p = \sum_i v_i \otimes v_i$.
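The rank-1 decomposition above can be computed concretely. A sketch (not from the note; it uses a Cholesky factorization, which packages the same Gram–Schmidt data): if $L$ is the Cholesky factor of a positive-definite $g_p$, then $g_p = LL^T = \sum_i v_i \otimes v_i$ where the $v_i$ are the columns of $L$.

```python
import numpy as np

# Build a random positive-definite matrix to stand in for g_p.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
g_p = A @ A.T + 3 * np.eye(3)

# g_p = L L^T, so g_p is the sum of the rank-1 tensors v_i (x) v_i
# with v_i the columns of the Cholesky factor L.
L = np.linalg.cholesky(g_p)
vs = [L[:, i] for i in range(3)]
recon = sum(np.outer(v, v) for v in vs)
print(np.allclose(recon, g_p))   # True
```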