Manifold-Adaptive Dimension Estimation

Amir massoud Farahmand  [email protected]
Csaba Szepesvári  [email protected]
Department of Computing Science, University of Alberta, Edmonton, AB T6G 2E8, Canada

Jean-Yves Audibert  [email protected]
CERTIS - Ecole des Ponts, 19, rue Alfred Nobel - Cité Descartes, 77455 Marne-la-Vallée, France

Appearing in Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, 2007. Copyright 2007 by the author(s)/owner(s).

Abstract

Intuitively, learning should be easier when the data points lie on a low-dimensional submanifold of the input space. Recently there has been a growing interest in algorithms that aim to exploit such geometrical properties of the data. Oftentimes these algorithms require estimating the dimension of the manifold first. In this paper we propose an algorithm for dimension estimation and study its finite-sample behaviour. The algorithm estimates the dimension locally around the data points using nearest-neighbor techniques and then combines these local estimates. We show that the rate of convergence of the resulting estimate is independent of the dimension of the input space, and hence the algorithm is "manifold-adaptive". Thus, when the manifold supporting the data is low dimensional, the algorithm can be exponentially more efficient than its counterparts that do not exploit this property. Our computer experiments confirm the obtained theoretical results.

1. Introduction

The curse of dimensionality in machine learning refers to the tendency of learning algorithms working in high-dimensional spaces to use resources (time, space, samples) that scale exponentially with the dimensionality of the space. Since most practical problems involve high-dimensional spaces, it is of utmost importance to identify algorithms that are capable of avoiding this exponential blow-up by exploiting additional regularity present in the data.

One such regularity that has attracted much attention lately is when the samples lie on a low-dimensional submanifold of the possibly high-dimensional input space. Consider for example the case when the data points are images taken of a scene or object from different angles. Although the images may contain millions of pixels, they all lie on a manifold of low dimensionality, such as 3. Another example is when the input data is enriched by adding a huge number of feature components computed from the original input components, in the hope that these additional features will help some learning algorithm (generalized linear models and the "kernel trick" implement this idea).

Manifold learning research aims at finding algorithms that require less data (i.e., are more data efficient) when the data happens to be supported on a low-dimensional submanifold of the input space. We call a learning algorithm manifold-adaptive when its sample-complexity depends on the intrinsic dimension of the manifold only.¹ A classical problem in pattern recognition is the estimation of the dimension of the data manifold. Dimension estimation is interesting on its own, but it is also very useful as the estimate can be fed into manifold-aware supervised learning algorithms that need to know the dimension to work efficiently (e.g., Hein 2006; Gine and Koltchinskii 2007).

¹ Of course, the sample-complexity may and will typically depend on the properties of the manifold and thus the embedding.

In this paper we propose an algorithm for estimating the unknown dimension of a manifold from samples and prove that it is manifold-adaptive. The new algorithm belongs to the family of nearest-neighbor methods.
Such methods have been considered since the late 70s. Pettis et al. (1979) suggested averaging the distances to the k nearest neighbors for various values of k and using the obtained values to find the dimension by an iterative method.² Another, more recent, method is due to Levina and Bickel (2005), who suggested an algorithm based on a Poisson approximation to the process obtained by counting the number of neighbors of a point in its neighborhood. In a somewhat heuristic manner they argued for the asymptotic consistency of this method. Grassberger and Procaccia (1983) suggested estimating the dimension based on the so-called correlation dimension, while Hein and Audibert (2005) suggested a method based on the asymptotics of a smoothed version of the correlation dimension estimate. Despite the large number of works and the long history, to our best knowledge no previous rigorous theoretical work has been done on the finite-sample behavior of dimension-estimation algorithms, let alone their manifold adaptivity.

² Due to the lack of space, we cannot attempt to give a full review of existing work on dimension estimation. The interested reader may consult the papers of Kegl (2002) and Hein and Audibert (2005), which contain many further pointers.

2. Algorithm

The core component of our algorithm estimates the dimensionality of the manifold in a small neighborhood of a selected point. This point is then varied, and the results of the local estimates are combined to give the final estimate.

The local estimate is constructed as follows. Collect the observed data points into D_n = [X_1, ..., X_n]. We shall assume that the X_i form an i.i.d. sample that comes from a distribution supported on the manifold M. Define η(x, r) by

    P(X_i \in B(x, r)) = \eta(x, r) \, r^d,    (1)

or, equivalently,

    \ln P(X_i \in B(x, r)) = \ln \eta(x, r) + d \ln r,    (2)

where B(x, r) ⊂ R^D is a ball around the point x ∈ M in the Euclidean space R^D. Our main assumption in the paper will be that in a small neighborhood of 0 the function η(x, ·) is slowly varying (the assumptions on η will be made precise later). This is obviously satisfied in the commonly studied simple case when the distribution of the data on the manifold is uniform and the manifold satisfies standard regularity assumptions such as those considered by Hein et al. (2006).

There are two ways of using Equation (2) for estimating the dimension d. Both rely on the observation that this equation is linear in d. One approach is to fix a radius r and count the number of data points within the ball B(x, r), while the other is to calculate the radius of the smallest x-centered ball that encloses some fixed number of points. Either way, one ends up with an estimate of both ln P(X_i ∈ B(x, r)) and ln r. Taking multiple measurements, we may get an estimate of d by fitting a line through these measurements, treating η as a constant. Because η cannot be considered constant when r is large (due to the uneven sampling distribution or the curvature of the manifold), one should ideally work at small scales (small r). On the other hand, when r is too small, the measurements' variance will be high. A good estimator must thus find a good balance between the bias and the variance, making the estimation of the intrinsic dimension a non-trivial problem.

In this paper we study an algorithm in which we fix the "scale" by fixing the number of neighbors k: the dimension is estimated from the distance to the kth nearest neighbor. In the other approach, i.e., when a scale h = h_n is selected, the typical requirement is that h_n^d n → ∞, or h_n = Ω(n^{-1/d}). Given that d is unknown, this suggests choosing h_n = C n^{-1/D}. This choice, however, is too conservative and would not lead to a dimension-adaptive procedure.³ On the other hand, for the consistency of k-nearest-neighbor procedures one typically requires only k_n/n → 0 and k_n → ∞ (these conditions are independent of d). Therefore we prefer nearest-neighbor based techniques for this task.

³ Of course, other options, such as using splitting or cross-validation to select h, are also possible. We leave the study of such algorithms for future work.
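Before turning to the precise method, here is a concrete rendering of the line-fitting idea from the discussion above: collect the pairs (ln r̂^(k)(x), ln(k/n)) for several values of k and fit a line; its slope estimates d. The sketch below is only an illustration, not the authors' implementation; the function name, the NumPy realization, and the default k values are our own choices, and the query point x is assumed not to be one of the sample points. The local estimator of Equation (3) below is essentially the two-measurement special case of such a fit.

```python
import numpy as np

def dim_by_line_fit(data, x, ks=(4, 8, 16, 32)):
    """Estimate d as the slope of ln(k/n) against ln r^(k)(x), cf. Eq. (2).

    Treats eta(x, r) as constant over the probed scales; `x` is assumed
    to be a query point that is not itself a row of `data`.
    """
    n = len(data)
    dists = np.sort(np.linalg.norm(data - np.asarray(x), axis=1))
    ks = np.asarray(ks)
    log_r = np.log(dists[ks - 1])   # ln r^(k)(x): distance to the k-th neighbor
    log_p = np.log(ks / n)          # empirical surrogate for ln P(X in B(x, r))
    slope, _intercept = np.polyfit(log_r, log_p, 1)
    return slope
```

The spread of the k values realizes the bias-variance trade-off discussed above: large k probes radii at which η is no longer nearly constant, while small k makes the measured radii noisy.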
In order to be more specific about the method, let X^(k)(x) denote the reordering of the data such that ||X^(k)(x) − x|| ≤ ||X^(k+1)(x) − x|| holds for k = 1, 2, ..., n−1 (ties are broken randomly). Here ||·|| denotes the ℓ₂-norm of R^D. Hence, X^(1)(x) is the nearest neighbor of x in D_n, X^(2)(x) is the 2nd nearest neighbor, etc. Let r̂^(k)(x) = ||X^(k)(x) − x|| be the distance to the kth nearest neighbor of x. In our theoretical analysis, for the sake of simplicity of the proofs, we use the following simple estimation method. Take k > 2. Writing η(x, r) ≈ η₀, from (2) we have

    \ln(k/n) \approx \ln \eta_0 + d \ln \hat{r}^{(k)}(x),
    \ln(k/(2n)) \approx \ln \eta_0 + d \ln \hat{r}^{(\lceil k/2 \rceil)}(x),

since if n is big, P(X′ ∈ B(x, r̂^(k)(x))) should be close to k/n. Subtracting the second equation from the first eliminates the unknown ln η₀ and leaves ln 2 ≈ d ln(r̂^(k)(x)/r̂^(⌈k/2⌉)(x)); solving for d, we get

    \hat{d}(x) = \frac{\ln 2}{\ln\bigl(\hat{r}^{(k)}(x) / \hat{r}^{(\lceil k/2 \rceil)}(x)\bigr)}.    (3)

When a center is selected from the data, this point is naturally removed when calculating the point's nearest neighbors. With a slight abuse of notation, the estimate obtained when selecting center X_i is also denoted by d̂(X_i).

When used at a single random data point, the variance of the estimate will be high, and since k ≪ n the available data is used in a highly inefficient manner. The analysis bounds the error of the local estimate: with probability at least 1 − δ,

    |\hat{d}(X_1) - d| \le \mathbb{E}[C(X_1)] \, d \left( B \left(\tfrac{k}{n}\right)^{1/d} + \sqrt{\tfrac{\ln(4/\delta)}{k}} \right).
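Equation (3) and the combination of local estimates translate directly into code. The following is a minimal NumPy sketch, not the authors' implementation: the names are ours, and averaging the local estimates and rounding is just one natural combination rule (the paper's precise combination rule and its analysis lie beyond this excerpt).

```python
import numpy as np

def local_dim(data, i, k):
    """Local dimension estimate at center X_i via Eq. (3).

    The center is removed from the sample before computing its nearest
    neighbors, as described in the text. Requires k > 2; the two radii
    are distinct with probability one for continuous distributions.
    """
    x = data[i]
    rest = np.delete(data, i, axis=0)                  # drop the center
    dists = np.sort(np.linalg.norm(rest - x, axis=1))  # brute force, O(n D)
    r_k = dists[k - 1]                                 # r^(k)(X_i)
    r_half = dists[-(-k // 2) - 1]                     # r^(ceil(k/2))(X_i)
    return np.log(2.0) / np.log(r_k / r_half)

def global_dim(data, k=10):
    """Average the local estimates over all centers and round."""
    est = [local_dim(data, i, k) for i in range(len(data))]
    return int(round(float(np.mean(est))))

# Example: points on a 2-sphere embedded in R^10; the intrinsic
# dimension is 2 regardless of the ambient dimension D = 10.
rng = np.random.default_rng(0)
pts = rng.standard_normal((1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)      # uniform on S^2
data = np.hstack([pts, np.zeros((1000, 7))])           # embed in R^10
print(global_dim(data, k=10))                          # should typically print 2
```

Note that the ambient dimension D enters the computation only through the distance evaluations; everything else depends on k and n alone, which is what makes nearest-neighbor constructions a natural fit for manifold-adaptive estimation.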