
Questions and speculation on learning and cohomology
Version 3
Joshua Tan
April 23, 2018

Abstract

I have tried to formulate a draft of some things I have been thinking about at the beginning of my doctorate, in order to get some feedback. Any comments or corrections are more than welcome. In particular, I would be very grateful for concrete (possibly partial) answers, for opinions on whether questions are interesting or not, and for advice on relevant reading I am unaware of. I apologize in advance for being too brief and for not defining things in many places. I expect to revise this document as I get feedback and learn more. Version 3 will be the last major update to this document, as I focus my attention on my doctoral thesis. I am still very interested in obtaining feedback on the ideas expressed here!

Contents

1 Introduction
  1.1 How to read this essay
  1.2 Acknowledgements
  1.3 Examples of connections between AI and geometry
  1.4 Examples of questions related to AI
  1.5 A question
  1.6 Reasons for studying cohomology as an AI researcher
2 Very brief review of AI
  2.1 Approach: symbolic methods
  2.2 The extension to knowledge representation, part I
  2.3 Approach: statistical inference
  2.4 Approach: connectionism
  2.5 Approach: situated cognition
3 Very brief review of algebraic topology
  3.1 Axiomatics
  3.2 The category of spectra
  3.3 The derived setting
  3.4 Model categories
  3.5 Brown representability
  3.6 A list of topological constructions
4 Organizing principles in algebraic geometry
  4.1 A very brief review of sheaf theory
  4.2 Good cohomology, part 1
  4.3 A very brief review of sheaf cohomology
  4.4 Coherent algebraic sheaves
  4.5 Serre duality
  4.6 Good cohomology, part 2
  4.7 The Weil conjectures
5 Organizing principles in machine learning
  5.1 A very brief review of computational learning theory
  5.2 A very brief review of sample compression
  5.3 A very brief review of AdaBoost
  5.4 Sheaf cohomology for AdaBoost
    5.4.1 Background
    5.4.2 An analogy
    5.4.3 Cohomology
    5.4.4 Conjectures
  5.5 Sample compression schemes via cubical complexes
  5.6 Invariant methods for machine learning
6 What is a mathematical model?
  6.1 The extension to knowledge representation, part II
  6.2 Interaction and intervention
  6.3 Invariant methods
    6.3.1 The role of simulation
  6.4 How to begin
A Sheaf theory for distributed systems
B A very brief review of probabilistic programming
  B.1 Learning in probabilistic programming
C A very brief review of homotopy type theory
  C.1 Very brief review of type theory
  C.2 Propositions as (some) types
  C.3 Univalent foundations
D Topological data analysis
  D.1 Persistent homology
  D.2 Persistent cohomology
  D.3 Mapper
  D.4 Quantitative homotopy
E TQFT
  E.1 Very brief review of TQFT
  E.2 A few questions
  E.3 Very brief review of neural networks
F Localization
  F.1 Analogs of localization
G Motives
  G.1 The emergence of sheaf theory
  G.2 The search for a universal cohomology theory
  G.3 A very brief review of SGA
  G.4 Universality
  G.5 The standard conjectures
H Very brief review of QBism
I Miscellaneous
  I.1 Homomorphic learning
  I.2 Geometric complexity theory
  I.3 Asynchronous computation

"For an answer which cannot be expressed, the question too cannot be expressed. The riddle does not exist. If a question can be put at all, then it can also be answered." – Wittgenstein

1 Introduction

The reason for putting artificial intelligence (AI) and geometry together in the first place is due to an intuition I had very early on, in 2011, as a student in robotics: what AI needed was a systematic way of putting together logic (in the form of algorithms, proofs, and engineering design) with real-world data, and the statistical algorithms popular in machine learning comprised only one class of options. Geometry, I hoped, could be another: after all, somehow there was a "logic" embedded in the geometry of space-time called the physical laws. More than the hope for new algorithms, however, I wanted to construct the same sort of mathematical semantics for AI as exists for the theory of computation. I believed that the field of geometry could give an organizing principle for AI. By organizing principle I mean not just a classification of objects but some means of comparing and combining tools and methods in the discipline.
Concretely, I wanted not just a classification of different mathematical models in AI (from ontologies to dynamical systems to Bayesian networks to simple Boolean functions) but some means of comparing and combining the "learning algorithms" meant to construct those models from data. The intuition for applying geometry is supported in part by the success of category theory in combining and composing tools and methods from many different areas of math, and in part by formal similarities between algebraic geometry and model theory (and now between homotopy theory and type theory) which illustrate how problems posed in logic and computer science can be transposed to the cleaner frameworks of geometry, number theory, and category theory. Existing applications also show that such connections can be fruitful, from information geometry to persistent homology to geometric complexity to the use of topoi in categorial logic. In any case, AI is a jigsaw puzzle, and I need a way of organizing the pieces. Geometry seems like a good bet.

For this to work, first off I need to find some non-trivial "isomorphic" structures in AI and in geometry. Given such structures, it should be possible to apply organizing principles in geometry to AI. This hope is the subject of this brief essay. So far there isn't much understanding, but rather a list of things I would like to understand in the future.

1.1 How to read this essay

Sections 1-4 form the core narrative, reviewing developments across AI, algebraic topology, and algebraic geometry. Section 5 contains a few concrete proposals for future work. The lettered sections in the appendix discuss some particular research topics related to geometric intelligence and should be regarded as pointers to further reading.

1.2 Acknowledgements

I've had many interesting discussions in the course of this research.
In particular, I would like to thank Samson Abramsky, Bob Coecke, David Spivak, Misha Gromov, Yiannis Vassolopoulos, Mehryar Mohri, Sylvain Cappell, and Brian Scassellati for help in developing some of these ideas. This essay is based on a similar document [52] by Andreas Holmstrom on cohomology theories in arithmetic geometry.

1.3 Examples of connections between AI and geometry

To motivate the problem, we list some examples (in no particular order): invariant methods in computer vision, motion tracking and planning, configuration spaces in kinematics, foldable robots and surfaces, grid cell geometry in the hippocampus, differential geometry of neuronal networks, mobile sensor networks, topological measures of complexity, geometric optimization, Herlihy and Shavit's application of topology to asynchronous computation, information geometry, topoi in categorial logic, vector logic, homotopy type theory and the ∞-groupoid structure of types, topological data analysis, persistent homology, quantitative homotopy theory, disconnectivity graphs, Morse theory in neural networks, Conley index theory in noise detection, manifold learning (dimensionality reduction) techniques like projection pursuit and Isomap, hierarchical clustering and any number of clustering algorithms on a metric space, topological clustering methods like Mapper, "partial clustering", functorial clustering, kernel PCA, one-inclusion graphs for concept classes, cubical complexes and hyperplane arrangements in sample compression, restricted Boltzmann machines and the renormalization group, herding and discrepancy theory, a variety of optimization scenarios like gradient descent or convex minimization, SVMs, any variety of kernel methods, graphical models, k-nearest-neighbor search, random matrices in computational neuroscience, o-minimal theory and real algebraic geometry, and sheaf theory for contextuality [1]. Given the interconnected nature of geometry