Chapter 11 Riemannian Metrics, Riemannian Manifolds

11.1 Frames

Fortunately, the rich theory of vector spaces endowed with a Euclidean inner product can, to a great extent, be lifted to the tangent bundle of a manifold. The idea is to equip the tangent space $T_pM$ at $p$ to the manifold $M$ with an inner product $\langle -,-\rangle_p$, in such a way that these inner products vary smoothly as $p$ varies on $M$. It is then possible to define the length of a curve segment on $M$ and to define the distance between two points on $M$.

The notion of local (and global) frame plays an important technical role.

Definition 11.1. Let $M$ be an $n$-dimensional smooth manifold. For any open subset $U \subseteq M$, an $n$-tuple of vector fields $(X_1, \ldots, X_n)$ over $U$ is called a frame over $U$ iff $(X_1(p), \ldots, X_n(p))$ is a basis of the tangent space $T_pM$ for every $p \in U$. If $U = M$, then the $X_i$ are global sections and $(X_1, \ldots, X_n)$ is called a frame (of $M$).

The notion of a frame is due to Élie Cartan who (after Darboux) made extensive use of them under the name of moving frame (and the moving frame method). Cartan's terminology is intuitively clear: as a point $p$ moves in $U$, the frame $(X_1(p), \ldots, X_n(p))$ moves from fibre to fibre. Physicists refer to a frame as a choice of local gauge.

If $\dim(M) = n$, then for every chart $(U, \varphi)$, since $d\varphi^{-1}_{\varphi(p)} \colon \mathbb{R}^n \to T_pM$ is a bijection for every $p \in U$, the $n$-tuple of vector fields $(X_1, \ldots, X_n)$ with
$$X_i(p) = d\varphi^{-1}_{\varphi(p)}(e_i)$$
is a frame of $TM$ over $U$, where $(e_1, \ldots, e_n)$ is the canonical basis of $\mathbb{R}^n$. See Figure 11.1.

Figure 11.1: A frame on $S^2$.

The following proposition tells us when the tangent bundle is trivial (that is, isomorphic to the product $M \times \mathbb{R}^n$):

Proposition 11.1. The tangent bundle $TM$ of a smooth $n$-dimensional manifold $M$ is trivial iff it possesses a frame of global sections (vector fields defined on $M$).

As an illustration of Proposition 11.1 we can prove that the tangent bundle $TS^1$ of the circle is trivial. Indeed, we can find a section that is everywhere nonzero, i.e. a non-vanishing vector field, namely
$$X(\cos\theta, \sin\theta) = (-\sin\theta, \cos\theta).$$

The reader should try proving that $TS^3$ is also trivial (use the quaternions). However, $TS^2$ is nontrivial, although this is not so easy to prove. More generally, it can be shown that $TS^n$ is nontrivial for all even $n \geq 2$. It can even be shown that $S^1$, $S^3$ and $S^7$ are the only spheres whose tangent bundle is trivial. This is a rather deep theorem and its proof is hard.

Remark: A manifold $M$ such that its tangent bundle $TM$ is trivial is called parallelizable.

We now define Riemannian metrics and Riemannian manifolds.

11.2 Riemannian Metrics

Definition 11.2. Given a smooth $n$-dimensional manifold $M$, a Riemannian metric on $M$ (or $TM$) is a family $(\langle -,-\rangle_p)_{p \in M}$ of inner products on each tangent space $T_pM$, such that $\langle -,-\rangle_p$ depends smoothly on $p$, which means that for every chart $\varphi_\alpha \colon U_\alpha \to \mathbb{R}^n$ and every frame $(X_1, \ldots, X_n)$ on $U_\alpha$, the maps
$$p \mapsto \langle X_i(p), X_j(p)\rangle_p, \qquad p \in U_\alpha,\ 1 \leq i, j \leq n,$$
are smooth. A smooth manifold $M$ with a Riemannian metric is called a Riemannian manifold.
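In a chart, Definition 11.2 amounts to giving a symmetric, positive definite matrix $G(p)$ whose entries are smooth functions of $p$, with $\langle u, v\rangle_p = u^\top G(p)\, v$. The following minimal sketch illustrates this; the particular family $G(p) = I + p\,p^\top$ on $\mathbb{R}^2$ and the helper names are ours, chosen only for illustration and not taken from the text.

```python
# Illustrative sketch of Definition 11.2 in a chart: a Riemannian metric is a
# smoothly varying family of inner products, encoded here by a symmetric
# positive definite matrix G(p).  The family G(p) = I + p p^T is an arbitrary
# example chosen for illustration.
import numpy as np

def G(p):
    """Matrix (g_ij(p)) of the metric at the chart point p."""
    p = np.asarray(p, dtype=float)
    return np.eye(2) + np.outer(p, p)

def inner(p, u, v):
    """Inner product <u, v>_p of two tangent vectors at p."""
    return float(u @ G(p) @ v)

rng = np.random.default_rng(1)
for _ in range(5):
    p = rng.standard_normal(2)
    u, v = rng.standard_normal(2), rng.standard_normal(2)
    # each <.,.>_p is symmetric and positive definite
    assert np.isclose(inner(p, u, v), inner(p, v, u))
    assert inner(p, u, u) > 0
```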
If $\dim(M) = n$, then for every chart $(U, \varphi)$ we have the frame $(X_1, \ldots, X_n)$ over $U$ with $X_i(p) = d\varphi^{-1}_{\varphi(p)}(e_i)$, where $(e_1, \ldots, e_n)$ is the canonical basis of $\mathbb{R}^n$. Since every vector field over $U$ is a linear combination $\sum_{i=1}^n f_i X_i$ for some smooth functions $f_i \colon U \to \mathbb{R}$, the condition of Definition 11.2 is equivalent to the fact that the maps
$$p \mapsto \langle d\varphi^{-1}_{\varphi(p)}(e_i),\, d\varphi^{-1}_{\varphi(p)}(e_j)\rangle_p, \qquad p \in U,\ 1 \leq i, j \leq n,$$
are smooth. If we let $x = \varphi(p)$, the above condition says that the maps
$$x \mapsto \langle d\varphi^{-1}_x(e_i),\, d\varphi^{-1}_x(e_j)\rangle_{\varphi^{-1}(x)}, \qquad x \in \varphi(U),\ 1 \leq i, j \leq n,$$
are smooth.

If $M$ is a Riemannian manifold, the metric on $TM$ is often denoted $g = (g_p)_{p \in M}$. In a chart, using local coordinates, we often use the notation $g = \sum_{ij} g_{ij}\, dx_i \otimes dx_j$ or simply $g = \sum_{ij} g_{ij}\, dx_i\, dx_j$, where
$$g_{ij}(p) = \left\langle \left(\frac{\partial}{\partial x_i}\right)_p, \left(\frac{\partial}{\partial x_j}\right)_p \right\rangle_p.$$
For every $p \in U$, the matrix $(g_{ij}(p))$ is symmetric, positive definite.

The standard Euclidean metric on $\mathbb{R}^n$, namely
$$g = dx_1^2 + \cdots + dx_n^2,$$
makes $\mathbb{R}^n$ into a Riemannian manifold. Then every submanifold $M$ of $\mathbb{R}^n$ inherits a metric by restricting the Euclidean metric to $M$. For example, the sphere $S^{n-1}$ inherits a metric that makes $S^{n-1}$ into a Riemannian manifold. It is instructive to find the local expression of this metric for $S^2$ in spherical coordinates.

We can parametrize the sphere $S^2$ in terms of two angles $\theta$ (the colatitude) and $\varphi$ (the longitude) as follows:
$$x = \sin\theta\cos\varphi, \qquad y = \sin\theta\sin\varphi, \qquad z = \cos\theta.$$
See Figure 11.2.

Figure 11.2: The spherical coordinates of $S^2$.

In order for the above to be a parametrization, we need to restrict its domain to
$$V = \{(\theta, \varphi) \mid 0 < \theta < \pi,\ 0 < \varphi < 2\pi\}.$$
Then the semicircle from the north pole to the south pole lying in the $xz$-plane is omitted from the sphere. In order to cover the whole sphere, we need another parametrization obtained by choosing the axes in a suitable fashion; for example, one that omits the semicircle in the $xy$-plane from $(0, 1, 0)$ to $(0, -1, 0)$ with $x \geq 0$.

To compute the matrix giving the Riemannian metric in this chart, we need to compute a basis $(u(\theta,\varphi), v(\theta,\varphi))$ of the tangent plane $T_pS^2$ at $p = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta)$. We can use
$$u(\theta,\varphi) = \frac{\partial p}{\partial \theta} = (\cos\theta\cos\varphi,\ \cos\theta\sin\varphi,\ -\sin\theta),$$
$$v(\theta,\varphi) = \frac{\partial p}{\partial \varphi} = (-\sin\theta\sin\varphi,\ \sin\theta\cos\varphi,\ 0),$$
and we find that
$$\langle u(\theta,\varphi), u(\theta,\varphi)\rangle = 1, \qquad \langle u(\theta,\varphi), v(\theta,\varphi)\rangle = 0, \qquad \langle v(\theta,\varphi), v(\theta,\varphi)\rangle = \sin^2\theta,$$
so the metric on $T_pS^2$ w.r.t. the basis $(u(\theta,\varphi), v(\theta,\varphi))$ is given by the matrix
$$g_p = \begin{pmatrix} 1 & 0 \\ 0 & \sin^2\theta \end{pmatrix}.$$
Thus, for any tangent vector $w = a\,u(\theta,\varphi) + b\,v(\theta,\varphi)$, with $a, b \in \mathbb{R}$, we have
$$g_p(w, w) = a^2 + \sin^2\theta\, b^2.$$

A nontrivial example of a Riemannian manifold is the Poincaré upper half-space, namely, the set $H = \{(x, y) \in \mathbb{R}^2 \mid y > 0\}$ equipped with the metric
$$g = \frac{dx^2 + dy^2}{y^2}.$$

Consider the Lie group $SO(n)$. We know from Section 7.2 that its tangent space at the identity, $T_I SO(n)$, is the vector space $\mathfrak{so}(n)$ of $n \times n$ skew symmetric matrices, and that the tangent space $T_Q SO(n)$ to $SO(n)$ at $Q$ is isomorphic to
$$Q\,\mathfrak{so}(n) = \{QB \mid B \in \mathfrak{so}(n)\}.$$
If we give $\mathfrak{so}(n)$ the inner product
$$\langle B_1, B_2\rangle = \mathrm{tr}(B_1^\top B_2) = -\mathrm{tr}(B_1 B_2),$$
the inner product on $T_Q SO(n)$ is given by
$$\langle QB_1, QB_2\rangle = \mathrm{tr}((QB_1)^\top QB_2) = \mathrm{tr}(B_1^\top B_2).$$
We will see in Chapter 13 that the length $L(\gamma)$ of the curve segment $\gamma$ from $I$ to $e^B$ given by $t \mapsto e^{tB}$ (with $B \in \mathfrak{so}(n)$) is given by
$$L(\gamma) = \left(-\mathrm{tr}(B^2)\right)^{\frac{1}{2}}.$$
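This formula is easy to check numerically: with the inner product above, the Riemannian norm of $\gamma'(t)$ equals the Frobenius norm of the matrix $\gamma'(t)$, so the arc length of $t \mapsto e^{tB}$ over $[0,1]$ can be integrated directly. The sketch below does this for a random $B \in \mathfrak{so}(4)$; it is only an illustration under these assumptions, and the helper names are ours.

```python
# A hedged numerical check: with the metric <Q B1, Q B2> = tr(B1^T B2) on SO(n),
# the Riemannian norm of gamma'(t) for gamma(t) = e^{tB} is the Frobenius norm
# of the matrix gamma'(t), so the length of gamma on [0, 1] can be integrated
# numerically and compared with the closed form (-tr(B^2))^{1/2}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = A - A.T                                    # a random element of so(4)

def gamma(t):
    return expm(t * B)

def speed(t, h=1e-6):
    """|gamma'(t)| via a central difference; equals sqrt(tr(gamma'(t)^T gamma'(t)))."""
    dgamma = (gamma(t + h) - gamma(t - h)) / (2 * h)
    return np.linalg.norm(dgamma)

# Riemann-sum approximation of the arc length L(gamma) = int_0^1 |gamma'(t)| dt
ts = np.linspace(0.0, 1.0, 200, endpoint=False)
length = sum(speed(t) for t in ts) / len(ts)

print(length, np.sqrt(-np.trace(B @ B)))
assert np.isclose(length, np.sqrt(-np.trace(B @ B)), rtol=1e-4)
```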
More generally, given any Lie group $G$, any inner product $\langle -,-\rangle$ on its Lie algebra $\mathfrak{g}$ induces by left translation an inner product $\langle -,-\rangle_g$ on $T_gG$ for every $g \in G$, and this yields a Riemannian metric on $G$ (which happens to be left-invariant; see Chapter 18).

Going back to the second example of Section 7.5, where we computed the differential $df_R$ of the function $f \colon SO(3) \to \mathbb{R}$ given by $f(R) = (u^\top R v)^2$, we found that
$$df_R(X) = 2\, u^\top X v\; u^\top R v, \qquad X \in R\,\mathfrak{so}(3).$$
Since each tangent space $T_R SO(3)$ is a Euclidean space under the inner product defined above, by duality (see Proposition ?? applied to the pairing $\langle -,-\rangle$), there is a unique vector $Y \in T_R SO(3)$ defining the linear form $df_R$; that is,
$$\langle Y, X\rangle = df_R(X), \qquad \text{for all } X \in T_R SO(3).$$
By definition, the vector $Y$ is the gradient of $f$ at $R$, denoted $(\mathrm{grad}(f))_R$. We leave it as an exercise to prove that the gradient of $f$ at $R$ is given by
$$(\mathrm{grad}(f))_R = u^\top R v\; R\,(R^\top u v^\top - v u^\top R).$$

More generally, the notion of gradient is defined as follows.

Definition 11.3. If $(M, \langle -,-\rangle)$ is a smooth manifold with a Riemannian metric and if $f \colon M \to \mathbb{R}$ is a smooth function on $M$, then the unique smooth vector field $\mathrm{grad}(f)$ defined such that
$$\langle (\mathrm{grad}(f))_p, u\rangle_p = df_p(u), \qquad \text{for all } p \in M \text{ and all } u \in T_pM,$$
is called the gradient of $f$.

It is usually complicated to find the gradient of a function.

A way to obtain a metric on a manifold $N$ is to pull back the metric $g$ on another manifold $M$ along a local diffeomorphism $\varphi \colon N \to M$. Recall that $\varphi$ is a local diffeomorphism iff
$$d\varphi_p \colon T_pN \to T_{\varphi(p)}M$$
is a bijective linear map for every $p \in N$. Given any metric $g$ on $M$, if $\varphi$ is a local diffeomorphism, we define the pull-back metric $\varphi^* g$ on $N$ induced by $g$ as follows: for all $p \in N$ and all $u, v \in T_pN$,
$$(\varphi^* g)_p(u, v) = g_{\varphi(p)}(d\varphi_p(u), d\varphi_p(v)).$$
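We close the section with a small numerical sketch (the helper names and the choice of random data are ours) checking the gradient formula for $f(R) = (u^\top R v)^2$ on $SO(3)$: writing $Y = (\mathrm{grad}(f))_R$, we verify that $\langle Y, X\rangle_R = df_R(X)$ for a random tangent vector $X = RB$, and compare with a finite-difference derivative of $f$ along $t \mapsto R\,e^{tB}$.

```python
# A hedged numerical check of the stated gradient formula: for
# f(R) = (u^T R v)^2 on SO(3) with the metric <R B1, R B2> = tr(B1^T B2),
# Y = (u^T R v) R (R^T u v^T - v u^T R) should satisfy <Y, X>_R = df_R(X).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_skew():
    A = rng.standard_normal((3, 3))
    return A - A.T

R = expm(random_skew())            # a random rotation in SO(3)
u = rng.standard_normal(3)
v = rng.standard_normal(3)

def f(R):
    return float(u @ R @ v) ** 2

# candidate gradient from the text, written as Y = R B_Y with B_Y skew-symmetric
B_Y = (u @ R @ v) * (np.outer(R.T @ u, v) - np.outer(v, u @ R))
Y = R @ B_Y

# an arbitrary tangent vector X = R B at R
B = random_skew()
X = R @ B

lhs = np.trace(B_Y.T @ B)                      # <Y, X>_R = tr(B_Y^T B)
rhs = 2 * (u @ X @ v) * (u @ R @ v)            # df_R(X) = 2 u^T X v  u^T R v
t = 1e-6                                       # finite-difference derivative along R e^{tB}
num = (f(R @ expm(t * B)) - f(R @ expm(-t * B))) / (2 * t)

print(lhs, rhs, num)
assert np.isclose(lhs, rhs) and np.isclose(lhs, num, atol=1e-4)
```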