A Differential-form Pullback Programming Language for Higher-order Reverse-mode Automatic Differentiation


Carol Mak and Luke Ong

arXiv:2002.08241v1 [cs.PL] 19 Feb 2020

Abstract. Building on the observation that reverse-mode automatic differentiation (AD) — a generalisation of backpropagation — can naturally be expressed as pullbacks of differential 1-forms, we design a simple higher-order programming language with a first-class differential operator, and present a reduction strategy which exactly simulates reverse-mode AD. We justify our reduction strategy by interpreting our language in any differential λ-category that satisfies the Hahn-Banach Separation Theorem, and show that the reduction strategy precisely captures reverse-mode AD in a truly higher-order setting.

1 Introduction

Automatic differentiation (AD) [34] is widely considered the most efficient and accurate algorithm for computing derivatives, thanks largely to the chain rule. There are two modes of AD:

• Forward-mode AD evaluates the chain rule from inputs to outputs; it has time complexity that scales with the number of inputs, and constant space complexity.
• Reverse-mode AD — a generalisation of backpropagation — evaluates the chain rule (in dual form) from outputs to inputs; it has time complexity that scales with the number of outputs, and space complexity that scales with the number of intermediate variables.

In machine learning applications such as neural networks, the number of input parameters is usually considerably larger than the number of outputs. For this reason, reverse-mode AD has been the preferred method of differentiation, especially in deep learning applications. (See Baydin et al. [5] for an excellent survey of AD.)

The only downside of reverse-mode AD is its rather involved definition, which has led to a variety of complicated implementations in neural networks. On the one hand, TensorFlow [1] and Theano [3] employ the define-and-run approach, where the model is constructed as a computational graph before execution. On the other hand, PyTorch [25] and Autograd [20] employ the define-by-run approach, where the computational graph is constructed dynamically during the execution.

Can we replace the traditional graphical representation of reverse-mode AD by a simple yet expressive framework? Indeed, there have been calls from the neural network community for the development of differentiable programming [14, 19, 24], based on a higher-order functional language with a built-in differential operator that returns the derivative of a given program via reverse-mode AD. Such a language would free the programmer from the implementational details of differentiation. Programmers would be able to concentrate on the construction of machine learning models, and train them by calling the built-in differential operator on the cost function of their models.
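To make this picture concrete, the sketch below shows, in plain Python rather than the language developed in this paper, what such a workflow could look like: a small `Var` class records the computation, a hypothetical `grad` operator performs the reverse (adjoint) sweep of reverse-mode AD, and a one-parameter model is trained by calling `grad` on its cost function. The names `Var`, `grad` and `cost` are illustrative assumptions, not part of the paper's syntax.

```python
# A minimal scalar reverse-mode AD sketch (illustrative only; not the paper's language).
class Var:
    def __init__(self, value, parents=()):
        self.value = value        # result of the forward computation
        self.parents = parents    # pairs (parent Var, local partial derivative)
        self.adjoint = 0.0        # d(output)/d(self), filled in by the backward sweep

    def _lift(self, other):
        return other if isinstance(other, Var) else Var(other)

    def __add__(self, other):
        other = self._lift(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __sub__(self, other):
        other = self._lift(other)
        return Var(self.value - other.value, ((self, 1.0), (other, -1.0)))

    def __mul__(self, other):
        other = self._lift(other)
        return Var(self.value * other.value, ((self, other.value), (other, self.value)))

def grad(f, xs):
    """Hypothetical built-in differential operator: gradient of f at xs via reverse-mode AD."""
    inputs = [Var(x) for x in xs]
    out = f(*inputs)
    # Order the recorded graph so that every node is processed after all nodes that use it.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    out.adjoint = 1.0
    for node in reversed(order):             # backward sweep: from the output to the inputs
        for parent, local in node.parents:
            parent.adjoint += local * node.adjoint
    return [v.adjoint for v in inputs]

# "Training" a one-parameter model: gradient descent on its cost function.
def cost(w):
    return (w * 3.0 - 6.0) * (w * 3.0 - 6.0)    # (3w - 6)^2, minimised at w = 2

w = 0.0
for _ in range(60):
    (dw,) = grad(cost, [w])
    w = w - 0.05 * dw
print(w)                                        # approximately 2.0
```

Because the single output adjoint is pushed back through each recorded operation exactly once, one gradient evaluation costs a small constant multiple of one evaluation of the cost function, independently of the number of parameters; this is the efficiency that makes reverse mode the method of choice in deep learning.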
The goal of this work is to present a simple higher-order programming language with an explicit differential operator, such that its reduction semantics is exactly reverse-mode AD, in a truly higher-order manner.

The syntax of our language is inspired by Ehrhard and Regnier's differential λ-calculus [15], which is an extension of the simply-typed λ-calculus with a differential operator that mimics standard symbolic differentiation (but not reverse-mode AD). Their definition of differentiation via a linear substitution provides a good foundation for our language.

The reduction strategy of our language uses differential λ-category [11] (the model of the differential λ-calculus) as a guide. A differential λ-category is a Cartesian closed differential category [9], and hence enjoys the fundamental properties of derivatives and behaves well with exponentials (curry).

Contributions. Our starting point (Section 2.2) is the observation that the computation of reverse-mode AD can naturally be expressed as a transformation of pullbacks of differential 1-forms. We argue that this viewpoint is essential for understanding reverse-mode AD in a functional setting. Standard reverse-mode AD (as presented in [4, 5]) is only defined in Euclidean spaces.

We present (in Section 3) a simple higher-order programming language, extending the simply-typed λ-calculus [12] with an explicit differential operator called the pullback, (Ω λx.P) · S, which serves as a reverse-mode AD simulator. Using differential λ-category [11] as a guide, we design a reduction strategy for our language so that the reduction of the application (Ω λx.P) · (λx.e_p*) S mimics reverse-mode AD in computing the p-th row of the Jacobian matrix (derivative) of the function λx.P at the point S, where e_p is the column vector with 1 at the p-th position and 0 everywhere else. Moreover, we show how our reduction semantics can be adapted to a continuation-passing-style evaluation (Section 3.5).

Owing to the higher-order nature of our language, standard differential calculus is not enough to model our language and hence cannot justify our reductions. Our final contribution (in Section 4) is to show that any differential λ-category [11] that satisfies the Hahn-Banach Separation Theorem is a model of our language (Theorem 4.6). Our reduction semantics is faithful to reverse-mode AD, in that it is exactly reverse-mode AD when restricted to first order; moreover, we can perform reverse-mode AD on any higher-order abstraction, which may contain higher-order terms, duals, pullbacks, and free variables as subterms (Corollary 4.8).

Finally, we discuss related work in Section 5, and conclusions and future directions in Section 6. Throughout this paper, we will point to the attached Appendix for additional content. All proofs are in Appendix E, unless stated otherwise.

2 Reverse-mode Automatic Differentiation

We introduce forward- and reverse-mode automatic differentiation (AD), highlighting their respective benefits in practice. Then we explain how reverse-mode AD can naturally be expressed as the pullback of differential 1-forms. (The examples used to illustrate these methods are collated in Figure 4.)

2.1 Forward- and Reverse-mode AD

Recall that the Jacobian matrix of a smooth function f : R^n → R^m at x_0 ∈ R^n is the m × n matrix

J(f)(x_0) := ( ∂f_j/∂z_i |_{x_0} )_{1 ≤ j ≤ m, 1 ≤ i ≤ n},

whose (j, i) entry is ∂f_j/∂z_i evaluated at x_0, where f_j := π_j ◦ f : R^n → R. We call the function J : C^∞(R^n, R^m) → C^∞(R^n, L(R^n, R^m)) the Jacobian;¹ J(f) the Jacobian of f; J(f)(x) the Jacobian of f at x; J(f)(x)(v) the Jacobian of f at x along v ∈ R^n; and λx.J(f)(x)(v) the Jacobian of f along v.

¹ C^∞(A, B) is the set of all smooth functions from A to B, and L(A, B) is the set of all linear functions from A to B, for Euclidean spaces A and B.

Symbolic Differentiation. Numerical derivatives are standardly computed using symbolic differentiation: first compute ∂f_j/∂z_i for all i, j using rules (e.g. the product and chain rules), then substitute x_0 for z to obtain J(f)(x_0).

For example, to compute the Jacobian of f : ⟨x, y⟩ ↦ ((x + 1)(2x + y²))² at ⟨1, 3⟩ by symbolic differentiation, first compute ∂f/∂x = 2(x + 1)(2x + y²)(2x + y² + 2(x + 1)) and ∂f/∂y = 2(x + 1)(2x + y²)(2y(x + 1)). Then substitute 1 for x and 3 for y to obtain J(f)(⟨1, 3⟩) = [660  528].

Symbolic differentiation is accurate but inefficient. Notice that the term (x + 1) appears twice in ∂f/∂x, and (1 + 1) is evaluated twice in ∂f/∂x |_{⟨1,3⟩} (because for h : ⟨x, y⟩ ↦ (x + 1)(2x + y²), both h(⟨x, y⟩) and ∂h/∂x contain the term (x + 1), and the product rule tells us to calculate them separately). This duplication is a cause of the so-called expression swell problem, resulting in exponential time complexity.
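The two-step recipe just described (differentiate symbolically, then substitute the point) can be replayed in a computer algebra system. The sketch below assumes SymPy is available and reproduces the running example; evaluating the unsimplified derivatives term by term is where duplicated subterms such as (x + 1) cost extra work.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = ((x + 1) * (2*x + y**2))**2        # the running example

# Step 1: differentiate symbolically (product and chain rules).
dfdx = sp.diff(f, x)
dfdy = sp.diff(f, y)
print(dfdx)
print(dfdy)

# Step 2: substitute the point <1, 3> to obtain the Jacobian, a 1 x 2 row vector.
print(dfdx.subs({x: 1, y: 3}), dfdy.subs({x: 1, y: 3}))   # 660 528
```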
Automatic Differentiation. Automatic differentiation (AD) avoids this problem by a simple divide-and-conquer approach: first arrange f as a composite of elementary² functions g_1, ..., g_k (i.e. f = g_k ◦ ··· ◦ g_1), then compute the Jacobian of each of these elementary functions, and finally combine them via the chain rule to yield the desired Jacobian of f.

² In the sense of being easily differentiable.

Forward-mode AD. Recall the chain rule:

J(f)(x_0) = J(g_k)(x_{k−1}) × ··· × J(g_2)(x_1) × J(g_1)(x_0)

for f = g_k ◦ ··· ◦ g_1, where x_i := g_i(x_{i−1}). Forward-mode AD computes the Jacobian matrix J(f)(x_0) by calculating α_i := J(g_i)(x_{i−1}) × α_{i−1} and x_i := g_i(x_{i−1}), with α_0 := I (the identity matrix) and the given point x_0. Then α_k = J(f)(x_0) is the Jacobian of f at x_0. This computation can neatly be presented as an iteration of the ⟨·|·⟩-reduction

⟨x | α⟩ →_g ⟨g(x) | J(g)(x) × α⟩,  for g = g_1, ..., g_k,

starting from the pair ⟨x_0 | I⟩. Besides being easy to implement, forward-mode AD computes the new pair from the current pair ⟨x | α⟩, requiring no additional memory.

To compute the Jacobian of f : ⟨x, y⟩ ↦ ((x + 1)(2x + y²))² at ⟨1, 3⟩ by forward-mode AD, first decompose f into elementary functions as the composite R² → R² → R → R of g, multiplication ∗, and squaring (−)², where g(⟨x, y⟩) := ⟨x + 1, 2x + y²⟩. Then, starting from ⟨⟨1, 3⟩ | I⟩, iterate the ⟨·|·⟩-reduction (a concrete sketch of this iteration is given below).

Whenever this is the case, reverse-mode AD is more efficient than forward-mode.

Remark 2.1.
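As a check on the ⟨·|·⟩-iteration of the forward-mode example above, the following Python/NumPy fragment (again, not the paper's language) pairs each elementary function of the decomposition with its Jacobian and updates the pair ⟨x | α⟩ step by step, starting from ⟨⟨1, 3⟩ | I⟩.

```python
import numpy as np

# Elementary functions of the decomposition, each paired with its Jacobian map.
def g1(v):                                  # g : <x, y> |-> <x + 1, 2x + y^2>
    x, y = v
    return np.array([x + 1.0, 2.0 * x + y ** 2])

def J1(v):
    x, y = v
    return np.array([[1.0, 0.0],
                     [2.0, 2.0 * y]])

def g2(v):                                  # multiplication : <u, w> |-> u * w
    u, w = v
    return np.array([u * w])

def J2(v):
    u, w = v
    return np.array([[w, u]])

def g3(v):                                  # squaring : u |-> u^2
    return np.array([v[0] ** 2])

def J3(v):
    return np.array([[2.0 * v[0]]])

# Forward-mode AD: iterate  <x | alpha>  ->  <g(x) | J(g)(x) x alpha>.
x, alpha = np.array([1.0, 3.0]), np.eye(2)
for g, Jg in [(g1, J1), (g2, J2), (g3, J3)]:
    x, alpha = g(x), Jg(x) @ alpha          # the whole right-hand side is evaluated at the old x

print(x)       # [484.]         the value f(<1, 3>)
print(alpha)   # [[660. 528.]]  the Jacobian J(f)(<1, 3>)
```

The final α is the 1 × 2 row vector [660 528], agreeing with the symbolic computation above.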