0.1 Tangent Spaces and Lagrange Multipliers

If $\vec{G} = (G_1, \dots, G_k) : E^{n+k} \to E^k$ is a differentiable function, then the set $S = \{\vec{x} \mid \vec{G}(\vec{x}) = \vec{v}\}$ is called the level surface for $\vec{G}(\vec{x}) = \vec{v}$. Note that each of the functions $G_i : E^{n+k} \to \mathbb{R}$. If we denote by $S_i$ the level surface for the equation $G_i(\vec{x}) = v_i$, then $S = \bigcap_{i=1}^{k} S_i$.

Suppose that $\vec{x}^0 = (x_1^0, \dots, x_{n+k}^0) \in S$ and that $\vec{G}'(\vec{x}^0) \in L(E^{n+k}, E^k)$ has rank $k$. Let $\delta_{i,j} = 1$ if $i = j$ and $0$ if $i \neq j$. With respect to the standard basis $\{\vec{e}_j = (\delta_{1,j}, \dots, \delta_{n+k,j}) \mid j = 1, 2, \dots, n+k\}$ for $E^{n+k}$ and the analogous smaller basis for $E^k$, the matrix $[\vec{G}'(\vec{x}^0)]_{k \times (n+k)}$ has all $k$ of its row vectors linearly independent, and these row vectors are the gradient vectors $\nabla G_1(\vec{x}^0), \nabla G_2(\vec{x}^0), \dots, \nabla G_k(\vec{x}^0)$.

Let $\vec{\phi} : \mathbb{R} \to S$ be a differentiable function for which $\vec{\phi}(0) = \vec{x}^0$. Then we call the vector $\vec{v} = \vec{\phi}'(0)$ a tangent vector to $S$ at $\vec{x}^0$.

Definition 0.1.1. The tangent space $T_{\vec{x}^0}(S)$ at $\vec{x}^0 \in S$ is the set of all tangent vectors to $S$ at $\vec{x}^0$. The translate
$$\vec{x}^0 + T_{\vec{x}^0}(S) = \{\vec{x}^0 + \vec{v} \mid \vec{v} \in T_{\vec{x}^0}(S)\}$$
is called the tangent plane to the surface $S$, with point of tangency at $\vec{x}^0$.

A translate $\vec{a} + V = \{\vec{a} + \vec{x} \mid \vec{x} \in V\}$ of a vector subspace $V$ of $E^n$ is called an affine subspace of $E^n$. An affine subspace is a vector subspace if and only if $\vec{a} \in V$. (See Exercise 1.)

Theorem 0.1.1. Let $\vec{G} : E^{n+k} \to E^k$ be a differentiable function. Let
$$\vec{x}^0 \in S = \{\vec{x} \mid \vec{G}(\vec{x}) = \vec{v}\}.$$
Suppose $\vec{G}'(\vec{x}^0)$ has rank $k$. Then the tangent space $T_{\vec{x}^0}(S)$ is the vector subspace
$$T_{\vec{x}^0}(S) = \bigl(\operatorname{span}_{\mathbb{R}}\{\nabla G_1(\vec{x}^0), \dots, \nabla G_k(\vec{x}^0)\}\bigr)^{\perp}$$
of dimension $n$. In words, $T_{\vec{x}^0}(S)$ is the orthogonal complement of the span of the $k$ gradient vectors $\nabla G_1(\vec{x}^0), \dots, \nabla G_k(\vec{x}^0)$. (A short computational sketch of this characterization appears at the end of the section.)

Proof. Suppose first that $\vec{v} = \vec{\phi}'(0) \in T_{\vec{x}^0}(S)$. This implies that $\vec{\phi}$ maps into each level surface $G_i(\vec{x}) = v_i$. We will show that $\vec{v} \perp \nabla G_i(\vec{x}^0)$ for each $i = 1, \dots, k$. In fact, $G_i(\vec{\phi}(t)) \equiv v_i$, a real constant. We differentiate using the Chain Rule to find that $G_i'(\vec{\phi}(0))\,\vec{\phi}'(0) = 0$. In terms of the matrix representation of the left side of the latter equation, we have $\nabla G_i(\vec{x}^0) \cdot \vec{\phi}'(0) = 0$, so that $\vec{v} \perp \nabla G_i(\vec{x}^0)$. This shows that $T_{\vec{x}^0}(S) \subseteq \nabla G_i(\vec{x}^0)^{\perp}$ for each $i$, which implies that
$$T_{\vec{x}^0}(S) \subseteq \bigl(\operatorname{span}_{\mathbb{R}}\{\nabla G_1(\vec{x}^0), \dots, \nabla G_k(\vec{x}^0)\}\bigr)^{\perp}.$$

The hypothesis that $\operatorname{rank}(\vec{G}'(\vec{x}^0)) = k$ implies that $\dim T_{\vec{x}^0}(S) \leq n$. If we can show that the tangent space is at least $n$-dimensional, then it must be the entire orthogonal complement of the span of the gradient vectors, as claimed. Thus it will suffice to produce a linearly independent set of $n$ vectors in the tangent space.

Because the rank of a matrix is also the number of linearly independent column vectors, the matrix $[\vec{G}'(\vec{x}^0)]$ has $k$ independent columns. We can rearrange the order of the $n+k$ elements of the standard basis of $E^{n+k}$ so that the first $k$ columns are linearly independent. By the Implicit Function Theorem, there exist an open set $U \subset E^k$ containing $(x_1^0, \dots, x_k^0)$ and an open set $V \subset E^n$ containing $(x_{k+1}^0, \dots, x_{k+n}^0)$ such that there are unique differentiable functions
$$x_1 = \psi_1(x_{k+1}, \dots, x_{k+n})$$
$$\vdots$$
$$x_k = \psi_k(x_{k+1}, \dots, x_{k+n})$$
solving the equation
$$\vec{G}\bigl(\psi_1(x_{k+1}, \dots, x_{k+n}), \dots, \psi_k(x_{k+1}, \dots, x_{k+n}), x_{k+1}, \dots, x_{k+n}\bigr) = \vec{v}.$$
Next we define $n$ differentiable curves on $S$ by the equations
$$\vec{\phi}_1(t) = \bigl(\psi_1(x_{k+1}^0 + t, x_{k+2}^0, \dots, x_{k+n}^0), \dots, \psi_k(x_{k+1}^0 + t, x_{k+2}^0, \dots, x_{k+n}^0),\; x_{k+1}^0 + t, x_{k+2}^0, \dots, x_{k+n}^0\bigr)$$
$$\vdots$$
$$\vec{\phi}_n(t) = \bigl(\psi_1(x_{k+1}^0, \dots, x_{k+n-1}^0, x_{k+n}^0 + t), \dots, \psi_k(x_{k+1}^0, \dots, x_{k+n-1}^0, x_{k+n}^0 + t),\; x_{k+1}^0, \dots, x_{k+n-1}^0, x_{k+n}^0 + t\bigr)$$
Comparing the vectors $\vec{\phi}_i'(0)$ for $i = 1, \dots, n$, observe that for each of these vectors the final $n$ entries are all $0$ except for a single entry which is $1$, and the location of the $1$ is different for each of these vectors. Thus the $n$ vectors are linearly independent and the theorem is proved.

Corollary 0.1.1. Let $\vec{G} : E^{k+n} \to E^k$ be a differentiable function and let
$$\vec{x}^0 \in S = \{\vec{x} \mid \vec{G}(\vec{x}) = \vec{v}\}.$$
Suppose $\vec{x}^0$ is a local extreme point of a differentiable function $f : S \to \mathbb{R}$ and that $\vec{G}'(\vec{x}^0)$ has rank $k$. Then there exist numbers $\lambda_1, \dots, \lambda_k$ such that
$$\nabla f(\vec{x}^0) = \lambda_1 \nabla G_1(\vec{x}^0) + \cdots + \lambda_k \nabla G_k(\vec{x}^0). \qquad (1)$$
The numbers $\lambda_1, \dots, \lambda_k$ are called Lagrange multipliers.

Proof. If $\vec{\phi} : \mathbb{R} \to S$ is a differentiable curve on $S$ with $\vec{\phi}(0) = \vec{x}^0$, let $\psi(t) = f(\vec{\phi}(t))$. Since this function has an extreme point at $0$, we have
$$\psi'(0) = \nabla f(\vec{\phi}(0)) \cdot \vec{\phi}'(0) = 0.$$
It follows from Theorem 0.1.1 that $\nabla f(\vec{x}^0)$ is orthogonal to the tangent space $T_{\vec{x}^0}(S)$. Since the co-dimension of $T_{\vec{x}^0}(S)$ is $k$, it follows that $\nabla f(\vec{x}^0)$ lies in the span of the $k$ vectors $\nabla G_1(\vec{x}^0), \dots, \nabla G_k(\vec{x}^0)$. This proves the corollary.

The method of Lagrange multipliers permits an optimization problem to be replaced by the problem of solving a system of equations. From the $k+n$ components of the vectors in Equation (1), we obtain a system of $k+n$ equations in the $n+2k$ unknowns $x_1, \dots, x_{k+n}, \lambda_1, \dots, \lambda_k$. We get $k$ additional equations from the $k$ components of the equation $\vec{G}(\vec{x}) = \vec{v}$. Thus we obtain a system of $n+2k$ equations in $n+2k$ unknowns. Although we have replaced a calculus problem with an algebraic problem, the algebraic problem can be challenging. Nevertheless, the method of Lagrange multipliers is a powerful tool for optimization problems.

Example 0.1.1. We begin with a three-dimensional example. Consider the surface $S$ defined by the equation $x^4 + y^4 + z^4 = 1$ in $E^3$, shown in Figure 1. We will find both the maximum and the minimum values of the function $f(\vec{x}) = x^2 + y^2 + z^2$ on $S$. (In effect, we are determining the closest and farthest distances from the origin to $S$.) In this example, we denote $\vec{x} = (x, y, z)$.

[Figure 1: the surface $x^4 + y^4 + z^4 = 1$.]

Observe that if we define $G(\vec{x}) = x^4 + y^4 + z^4$, then $S = G^{-1}(\{1\})$. Hence $S$ is closed because $G$ is continuous. $S$ is also bounded. (Why?) Hence the function $f$ must achieve both a maximum and a minimum value somewhere on $S$. Since $S$ is smooth at all points and since $\nabla G$ is non-vanishing on $S$, the extreme points must occur at those points for which $\nabla f(\vec{x}) = \lambda \nabla G(\vec{x})$. This yields the following system of equations:
$$x(1 - 2\lambda x^2) = 0$$
$$y(1 - 2\lambda y^2) = 0$$
$$z(1 - 2\lambda z^2) = 0$$
$$x^4 + y^4 + z^4 = 1$$
The reader should check the following by making the necessary calculations.

- If none of the three variables is zero, then $x^2 = y^2 = z^2 = \frac{1}{2\lambda}$, showing that $\lambda = \pm\frac{\sqrt{3}}{2}$ (only the positive root is possible, since $x^2 = \frac{1}{2\lambda} > 0$). This implies that $f(x, y, z) = \sqrt{3}$.
- If exactly one of the three variables is zero, then at a point satisfying the system of equations we must have $f(x, y, z) = \sqrt{2}$.
- If exactly two of the variables are zero, then at a point satisfying the system we must have $f(x, y, z) = 1$.

It follows that the maximum value of $f$ on $S$ is $\sqrt{3}$. The reader should be able to explain why at least one of the variables must be non-zero; thus the minimum value is $1$. There is also an easy way to explain, even from the outset, why $f(x, y, z) \geq 1$ everywhere on $S$. (A symbolic check of this example appears at the end of the section.)

Exercises 0.1.

1. Prove that the tangent plane $\vec{x}^0 + T_{\vec{x}^0}(S)$ is a vector subspace of $E^n$ if and only if $\vec{x}^0 \in T_{\vec{x}^0}(S)$.

2. Describe both the tangent space and the tangent plane to the sphere $S^{n-1} = \{\vec{x} \in E^n \mid \|\vec{x}\| = 1\}$ at the point $\vec{x}^0 = \bigl(\tfrac{1}{\sqrt{n}}, \tfrac{1}{\sqrt{n}}, \dots, \tfrac{1}{\sqrt{n}}\bigr)$.

3. The sphere $S^3 \subset E^4$ is defined by
$$S^3 = \Bigl\{\vec{x} \,\Big|\, \sum_{i=1}^{4} x_i^2 = 1\Bigr\}.$$
Define $f : S^3 \to \mathbb{R}$ by $f(\vec{x}) = \sum_{i=1}^{4} a_i x_i$, where $a_i$ is a constant for each $i \in \{1, 2, 3, 4\}$.
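The characterization in Theorem 0.1.1 lends itself to direct computation: the tangent space at $\vec{x}^0$ is the null space of the Jacobian matrix $[\vec{G}'(\vec{x}^0)]$, whose rows are the gradients $\nabla G_i(\vec{x}^0)$. The following is a minimal sketch (not part of the original text) in Python, assuming SymPy is available, using the surface of Example 0.1.1 and the illustrative point $\vec{x}^0 = (0, 0, 1)$.

```python
# Minimal sketch (assumes SymPy is installed): the tangent space of Theorem 0.1.1
# computed as the null space of the Jacobian matrix [G'(x0)].
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
G = sp.Matrix([x**4 + y**4 + z**4])      # one constraint (k = 1) defining S in E^3
x0 = {x: 0, y: 0, z: 1}                  # an illustrative point on S

J = G.jacobian([x, y, z]).subs(x0)       # 1 x 3 matrix whose single row is grad G(x0)
tangent_basis = J.nullspace()            # basis of T_{x0}(S)

print(J)                                 # Matrix([[0, 0, 4]])
print(tangent_basis)                     # two vectors, e.g. (1, 0, 0) and (0, 1, 0)
```

The null space here is $2$-dimensional, matching the theorem ($n = 2$, $k = 1$), and each basis vector is orthogonal to $\nabla G(\vec{x}^0)$.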
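Similarly, the system of equations in Example 0.1.1 can be checked symbolically. Here is a minimal sketch, again in Python and assuming SymPy is available; it imposes $\nabla f = \lambda \nabla G$ together with the constraint $G = 1$ and evaluates $f$ at the critical points it finds.

```python
# Minimal sketch (assumes SymPy is installed): the Lagrange system of Example 0.1.1.
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', real=True)
f = x**2 + y**2 + z**2
G = x**4 + y**4 + z**4

# grad f = lambda * grad G together with G = 1: four equations in four unknowns.
equations = [sp.Eq(sp.diff(f, v), lam * sp.diff(G, v)) for v in (x, y, z)]
equations.append(sp.Eq(G, 1))

critical_points = sp.solve(equations, [x, y, z, lam], dict=True)

# Values of f at the critical points found by solve.
values = {sp.simplify(f.subs(sol)) for sol in critical_points}
print(values)
```

The set of values printed should contain $1$, $\sqrt{2}$, and $\sqrt{3}$, reproducing the minimum and maximum obtained above.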