Handout 0: Historical Development of “Programming” Concepts


06-02552 Princ. of Progr. Languages (and “Extended”), The University of Birmingham, School of Computer Science, Spring Semester 2018-19. © Uday Reddy 2018-19

Handout 0: Historical development of “programming” concepts

Digital computers were devised in the 1940s, and the activity of writing instructions for these computers came to be known as programming. However, “programming” in the broad sense of devising procedures, methods, recipes or rules for solving problems is much older, perhaps as old as humanity. As soon as human beings formed societies and started working together to carry out their daily activities, they needed to devise procedures for those activities and communicate them to each other, first in speech and then in writing. These ways of describing recipes are still in use in much the same way, and we use them in our daily lives, e.g., in following cooking recipes, using knitting patterns, or operating household gadgets. They are also in use in industrial and business contexts, e.g., for describing manufacturing processes on the factory floor or business processes in organizations.

“Programming” in mathematics is of special interest to us because it directly gave birth to programming in computer science. Such programs are typically called algorithms, a term derived from the name of al-Khwarizmi, a Persian mathematician of the ninth century A.D., whose book Al-jabr wa’l muqabalah on algebraic manipulations was highly influential in the development of modern mathematics. The devising of mathematical algorithms began in pre-historic times, as soon as numbers were invented to count and to measure, and methods were then needed for the arithmetic operations of addition, subtraction, multiplication, division and others. These algorithms predate civilization itself, and we learn them in the early years of our schooling.
Babylonian algorithms

The earliest written description of an algorithm we have available is that for division, more precisely for finding inverses 1/x, found on a clay tablet dating back to the Sumerian civilization of Babylonia in 2000 B.C. The first few stanzas of the tablet read as follows:¹

[The number is 4;10. What is its inverse? Proceed as follows. Form the inverse of 10, you will find 6. Multiply 6 by 4, you will find 24. Add on 1, you will find 25. Form the inverse of 25], you will find 2;24. [Multiply 2;24 by 6], you will find 14;24. [The inverse is 14;24]. Such is the way to proceed.

[The number is 8;20]. What is its inverse? [Proceed as follows. Form the inverse of 20], you will find 3. Mul[tiply 3 by 8], you will find 24. [Add on 1], you will find [25]. [Form the inverse of 25], you will find 2;24. [Multiply 2;24 by 3], you will find [7;12. The inverse is 7;12. Such is the way to proceed].

Recall that the Babylonians used base 60 for their number system (whereas we use base 10). So, 4 and 10 are two separate digits in the base-60 number system, representing the number 4 × 60 + 10. The inverse of the number is 14;24, i.e., 14 × 60⁻¹ + 24 × 60⁻². The details of the algorithm are not important for our purpose. But we should pay attention to the form of the algorithm. Notice the following facts:

• Whereas, nowadays, we would use a symbol like x to denote the input of the algorithm, the Babylonians in 2000 B.C. did not yet have symbolic notation. Instead, they described the algorithm on sample inputs, several of them in fact, to provide insight to the reader.

• Each step of the algorithm is an elementary operation such as addition or multiplication, which is described separately. From our point of view, this is like an assembly language where each elementary operation forms a separate instruction for the machine.

¹ Jean-Luc Chabert et al., A History of Algorithms: From the Pebble to the Microchip, Springer, 1994.
• While the algorithm is described in an “imperative style,” i.e., expressed in terms of commands or instructions to the reader, the overall computation is functional. Each step of the algorithm acts on some inputs and produces some outputs, which then form the inputs for the succeeding steps. Mathematical algorithms are still described in this style, albeit intermixed with algebraic expressions.

Euclid’s constructions

Euclid, a Greek mathematician from Alexandria around 300 B.C., wrote a treatise on mathematics called The Elements. It was divided into 13 “books,” with books 1-4 devoted to plane geometry, books 5-10 to ratios and proportions, and books 11-13 to spatial geometry. Grounded in geometry, Euclid’s Elements is full of geometric constructions that can be carried out with a ruler and compass. Such “constructions” are once again algorithms, imperative in style. But the imperative style has more significance in geometry than in algebra because, as the steps of a construction are carried out on paper or tablet, the figures on the paper are modified, with additional lines or arcs being drawn. Thus Euclid’s imperative constructions involve change of state of the figures on the paper. They do not merely produce “outputs.”

Euclid’s constructions rarely ask one to erase the lines or arcs that have been drawn. However, many constructions do produce intermediate lines or arcs, which do not form part of the net result of the construction. All of these intermediate lines need to be erased at the end of the construction, a process similar to garbage collection in programming languages.

Every construction of Euclid came with a specification, a precise statement of the problem being solved, and a proof of correctness to the effect that the construction solves the stated problem. Euclid was thus the founder of the rigorous method of mathematical proof, which has since become the standard in mathematics.
Euclid might also be regarded as the founder of programming theory, with rigorous specifications and proofs of correctness. But this latter aspect of his work has not yet been widely recognized.²

Diophantus and algebra

Diophantus, another Greek mathematician from Alexandria, in the third century A.D. took the decisive step of introducing algebraic notation for sequences of computation steps. Whereas the Babylonians wrote “multiply 6 by 4, you will find 24,” Diophantus introduced the short-hand notation familiar today: 6 ∗ 4. The advantage of the Diophantine notation is that one can string together multiple operations, all in a single expression. Thus sequential algorithms could now be written as compact expressions or formulas. For example, a formula for a root of a quadratic equation is written as (−b + √(b² − 4ac)) / 2a.

The development of computer programming languages paralleled the Diophantine invention when scientists at IBM, led by John Backus, developed the programming language Fortran (short for “formula translation”). A computer program called a compiler was devised to translate Diophantine-style algebraic formulas into sequential algorithms in the machine language.

Algebraic notation provides sequencing via function composition. When we write an expression such as g(f(x)), we are saying, “apply the function f to the value x, obtaining an output, and then apply the function g to that output, obtaining the final result.” Thus, algebraic notation gives us a compact way of describing a long series of computation steps. However, sequencing alone is not enough to express all algorithms. One needs at least two other ingredients: conditional evaluation and repetition. Conditional evaluation is expressed in standard mathematics as definition by cases.
For example:

    max(x, y) = x, if x ≥ y
                y, if x < y

Similar notation is used in Haskell for definition by cases:

    max x y | x >= y    = x
            | otherwise = y

One also uses a simpler if-then-else form for conditional expressions:

    max(x, y) = if x ≥ y then x else y

The introduction of repetition is, however, a much larger invention.

² I am indebted to Professor Tony Hoare for exhibiting these connections.

Peano, Gödel and primitive recursion

Mathematicians have long used the principle called mathematical induction to prove statements parameterized by natural numbers. The so-called “natural numbers” are simply the series of non-negative integers 0, 1, 2, .... The principle of mathematical induction is the following rule: A property P(n) is true for all natural numbers n provided:

• Base case: P(0), i.e., P is true for 0, and

• Step case: If P(x) is true for any natural number x, then P(x + 1) is true.

We can also express the principle as a proof rule as follows:

    P(0)        ∀x. P(x) ⇒ P(x + 1)
    -------------------------------
               ∀n. P(n)

The key idea of the rule is that, in showing that P is true for x + 1, we can assume that it is already true for x. This assumption is referred to as the inductive hypothesis. The use of such an inductive hypothesis is the key to proving properties indexed by an infinite collection of values, such as that of the natural numbers. For example, it is straightforward using mathematical induction to prove that the sum of the first n natural numbers is given by a simple formula:

    P(n) :  1 + 2 + ··· + n = n(n + 1) / 2

We leave the proof as an exercise.

The essence of mathematical induction is problem reduction. The problem of the truth of P(x + 1) is reduced to that of the truth of P(x), which I will display as:

    P(x + 1)
    --------
    P(x)

This means: if you know how to prove P(x), then there is a way to prove P(x + 1). In this way the bigger problem of the truth of P(x + 1) is “reduced” to the smaller problem of the truth of P(x).
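This problem-reduction reading of induction is mirrored directly by recursive definitions in programming. As a small illustrative sketch (the function name sumTo is my own, not from the handout), the sum 1 + 2 + ··· + n can be computed in Haskell with one equation per case of the induction:

```haskell
-- Sum of the first n natural numbers by primitive recursion.
-- The two equations mirror the base case and step case of induction:
-- the value for n is "reduced" to the value for n - 1.
sumTo :: Integer -> Integer
sumTo 0 = 0
sumTo n = sumTo (n - 1) + n

main :: IO ()
main = print (sumTo 10)   -- 55, agreeing with the formula 10 * 11 / 2
```

Evaluating sumTo 10 unwinds to sumTo 9 + 10, then to sumTo 8 + 9 + 10, and so on down to the base case, exactly as the truth of P(10) is reduced step by step to the truth of P(0).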
Recommended publications
  • Functional Programming Is a Style of Programming in Which the Basic Method of Computation Is the Application of Functions to Arguments;
    PROGRAMMING IN HASKELL Chapter 1 - Introduction. What is a Functional Language? Opinions differ, and it is difficult to give a precise definition, but generally speaking: • Functional programming is a style of programming in which the basic method of computation is the application of functions to arguments; • A functional language is one that supports and encourages the functional style. Example: summing the integers 1 to 10 in Java: int total = 0; for (int i = 1; i <= 10; i++) total = total + i; The computation method is variable assignment. Example: summing the integers 1 to 10 in Haskell: sum [1..10] The computation method is function application. Historical Background. 1930s: Alonzo Church develops the lambda calculus, a simple but powerful theory of functions. 1950s: John McCarthy develops Lisp, the first functional language, with some influences from the lambda calculus, but retaining variable assignments. 1960s: Peter Landin develops ISWIM, the first pure functional language, based strongly on the lambda calculus, with no assignments. 1970s: John Backus develops FP, a functional language that emphasizes higher-order functions and reasoning about programs. 1970s: Robin Milner and others develop ML, the first modern functional language, which introduced type inference and polymorphic types. 1970s-1980s: David Turner develops a number of lazy functional languages, culminating in the Miranda system. 1987: An international committee starts the development of Haskell, a standard lazy functional language. 1990s: Phil Wadler and others develop type classes and monads, two of the main innovations of Haskell.
  • Comparative Studies of Programming Languages; Course Lecture Notes
    Comparative Studies of Programming Languages, COMP6411 Lecture Notes, Revision 1.9 Joey Paquet Serguei A. Mokhov (Eds.) August 5, 2010 arXiv:1007.2123v6 [cs.PL] 4 Aug 2010 2 Preface Lecture notes for the Comparative Studies of Programming Languages course, COMP6411, taught at the Department of Computer Science and Software Engineering, Faculty of Engineering and Computer Science, Concordia University, Montreal, QC, Canada. These notes include a compiled book of primarily related articles from the Wikipedia, the Free Encyclopedia [24], as well as Comparative Programming Languages book [7] and other resources, including our own. The original notes were compiled by Dr. Paquet [14] 3 4 Contents 1 Brief History and Genealogy of Programming Languages 7 1.1 Introduction . 7 1.1.1 Subreferences . 7 1.2 History . 7 1.2.1 Pre-computer era . 7 1.2.2 Subreferences . 8 1.2.3 Early computer era . 8 1.2.4 Subreferences . 8 1.2.5 Modern/Structured programming languages . 9 1.3 References . 19 2 Programming Paradigms 21 2.1 Introduction . 21 2.2 History . 21 2.2.1 Low-level: binary, assembly . 21 2.2.2 Procedural programming . 22 2.2.3 Object-oriented programming . 23 2.2.4 Declarative programming . 27 3 Program Evaluation 33 3.1 Program analysis and translation phases . 33 3.1.1 Front end . 33 3.1.2 Back end . 34 3.2 Compilation vs. interpretation . 34 3.2.1 Compilation . 34 3.2.2 Interpretation . 36 3.2.3 Subreferences . 37 3.3 Type System . 38 3.3.1 Type checking . 38 3.4 Memory management .
  • An Introduction to Landin's “A Generalization of Jumps and Labels”
    Higher-Order and Symbolic Computation, 11, 117–123 (1998) © 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. An Introduction to Landin’s “A Generalization of Jumps and Labels” HAYO THIELECKE [email protected] QMW, University of London Abstract. This note introduces Peter Landin’s 1965 technical report “A Generalization of Jumps and Labels”, which is reprinted in this volume. Its aim is to make that historic paper more accessible to the reader and to help reading it in context. To this end, we explain Landin’s control operator J in more contemporary terms, and we recall Burge’s solution to a technical problem in Landin’s original account. Keywords: J-operator, SECD-machine, call/cc, goto, history of programming languages 1. Introduction In the mid-60s, Peter Landin was investigating functional programming with “Applicative Expressions”, which he used to provide an account of Algol 60 [10]. To explicate goto, he introduced the operator J (for jump), in effect extending functional programming with control. This general control operator was powerful enough to have independent interest quite beyond the application to Algol: Landin documented the new paradigm in a series of technical reports [11, 12, 13]. The most significant of these reports is “A Generalization of Jumps and Labels” (reprinted in this volume on pages 9–27), as it presents and investigates the J-operator in its own right. The generalization is a radical one: it introduces first-class control into programming languages for the first time. Control is first-class in that the generalized labels given by the J-operator can be passed as arguments to and returned from procedures.
  • Landin-Seminar-2014
    On correspondences between programming languages and semantic notations Peter Mosses Swansea University, UK BCS-FACS Annual Peter Landin Semantics Seminar 8th December 2014, London. 50th anniversary! IFIP TC2 Working Conference, 1964 ‣ 50 invited participants ‣ seminal papers by Landin, Strachey, and many others ‣ proceedings published in 1966. Landin and Strachey (1960s). Denotational semantics (1970s). A current project. Peter J Landin (1930–2009) Publications 1964–66 ‣ The mechanical evaluation of expressions ‣ A correspondence between ALGOL 60 and Church's lambda-notation ‣ A formal description of ALGOL 60 ‣ A generalization of jumps and labels ‣ The next 700 programming languages [http://en.wikipedia.org/wiki/Peter_Landin] 1964: The mechanical evaluation of expressions, by P. J. Landin. This paper is a contribution to the "theory" of the activity of using computers. It shows how some forms of expression used in current programming languages can be modelled in Church's λ-notation, and then describes a way of "interpreting" such expressions. This suggests a method, of analyzing the things computer users write, that applies to many different problem orientations and to different phases of the activity of using a computer. Also a technique is introduced by which the various composite information structures involved can be formally characterized in their essentials, without commitment to specific written or other representations. Introduction: The point of departure of this paper is the idea of a machine for evaluating schoolroom sums, in which each operator is written explicitly and prefixed to its operand(s), and each operand (or operand-list) is enclosed in brackets.
  • The Call-By-Need Lambda Calculus, Revisited 3
    The Call-by-need Lambda Calculus, Revisited Stephen Chang and Matthias Felleisen College of Computer Science Northeastern University Boston, Massachusetts, USA { stchang | matthias } @ ccs.neu.edu Abstract. The existing call-by-need λ calculi describe lazy evaluation via equational logics. A programmer can use these logics to safely as- certain whether one term is behaviorally equivalent to another or to determine the value of a lazy program. However, neither of the existing calculi models evaluation in a way that matches lazy implementations. Both calculi suffer from the same two problems. First, the calculi never discard function calls, even after they are completely resolved. Second, the calculi include re-association axioms even though these axioms are merely administrative steps with no counterpart in any implementation. In this paper, we present an alternative axiomatization of lazy evalu- ation using a single axiom. It eliminates both the function call retention problem and the extraneous re-association axioms. Our axiom uses a grammar of contexts to describe the exact notion of a needed compu- tation. Like its predecessors, our new calculus satisfies consistency and standardization properties and is thus suitable for reasoning about be- havioral equivalence. In addition, we establish a correspondence between our semantics and Launchbury’s natural semantics. Keywords: call-by-need, laziness, lambda calculus 1 A Short History of the λ Calculus Starting in the late 1950s, programming language researchers began to look to Church’s λ calculus [6] for inspiration. Some used it as an analytic tool to understand the syntax and semantics of programming languages, while others exploited it as the basis for new languages.
  • Foundations of Functional Programming
    Foundations of Functional Programming Computer Science Tripos Part IB Easter Term Lawrence C Paulson Computer Laboratory University of Cambridge [email protected] Copyright © 2021 by Lawrence C. Paulson Contents 1 Introduction 1 2 Equality and Normalization 6 3 Encoding Data in the λ-Calculus 11 4 Writing Recursive Functions in the λ-calculus 16 5 The λ-Calculus and Computation Theory 23 6 ISWIM: The λ-calculus as a Programming Language 29 7 Lazy Evaluation via Combinators 39 8 Compiling Methods Using Combinators 45. 1 Introduction This course is concerned with the λ-calculus and its close relative, combinatory logic. The λ-calculus is important to functional programming and to computer science generally: 1. Variable binding and scoping in block-structured languages can be modelled. 2. Several function calling mechanisms — call-by-name, call-by-value, and call-by-need — can be modelled. The latter two are also known as strict evaluation and lazy evaluation. 3. The λ-calculus is Turing universal, and is probably the most natural model of computation. Church’s Thesis asserts that the ‘computable’ functions are precisely those that can be represented in the λ-calculus. 4. All the usual data structures of functional programming, including infinite lists, can be represented. Computation on infinite objects can be defined formally in the λ-calculus. 5. Its notions of confluence (Church-Rosser property), termination, and normal form apply generally in rewriting theory. 6. Lisp, one of the first major programming languages, was inspired by the λ-calculus. Many functional languages, such as ML, consist of little more than the λ-calculus with additional syntax.
  • Programming Languages and Their Trustworthy Implementation
    Programming languages and their trustworthy implementation Xavier Leroy INRIA Paris Van Wijngaarden award, 2016-11-05. A brief history of programming languages and their compilation. It's all zeros and ones, right? 10111000 00000001 00000000 00000000 00000000 10111010 00000010 00000000 00000000 00000000 00111001 11011010 01111111 00000110 00001111 10101111 11000010 01000010 11101011 11110110 11000011 (x86 machine code for the factorial function) Machine code is. That doesn't make it a usable language. Antiquity (1950): assembly language. A textual representation of machine code, with mnemonic names for instructions, symbolic names for code and data labels, and comments for humans to read. Example (Factorial in x86 assembly language) ; Input: argument N in register EBX ; Output: factorial N in register EAX Factorial: mov eax, 1 ; initial result = 1 mov edx, 2 ; loop index = 2 L1: cmp edx, ebx ; while loop <= N ... jg L2 imul eax, edx ; multiply result by index inc edx ; increment index jmp L1 ; end while L2: ret ; end Factorial function The Renaissance: arithmetic expressions (FORTRAN 1957). Express mathematical formulas the way we write them on paper: x1, x2 = (−b ± √(b² − 4ac)) / 2a. In assembly: mul t1, b, b; mul t2, a, c; mul t2, t2, 4; sub t1, t1, t2; sqrt d, t1; mul t3, a, 2; sub x1, d, b; div x1, x1, t3; neg x2, b; sub x2, x2, d; div x2, x2, t3. In FORTRAN: D = SQRT(B*B - 4*A*C) X1 = (-B + D) / (2*A) X2 = (-B - D) / (2*A). A historical parallel with mathematics. Brahmagupta, 628: Whatever is the square-root of the rupas multiplied by the square [and] increased by the square of half the unknown, diminish that by half the unknown [and] divide [the remainder] by its square.
  • Some History of Functional Programming Languages
    Some History of Functional Programming Languages D. A. Turner University of Kent & Middlesex University Abstract. We study a series of milestones leading to the emergence of lazy, higher order, polymorphically typed, purely functional programming languages. An invited lecture given at TFP12, St Andrews University, 12 June 2012. Introduction A comprehensive history of functional programming languages covering all the major streams of development would require a much longer treatment than falls within the scope of a talk at TFP, it would probably need to be book length. In what follows I have, firstly, focussed on the developments leading to lazy, higher order, polymorphically typed, purely functional programming languages of which Haskell is the best known current example. Secondly, rather than trying to include every important contribution within this stream I focus on a series of snapshots at significant stages. We will examine a series of milestones: 1. Lambda Calculus (Church & Rosser 1936) 2. LISP (McCarthy 1960) 3. Algol 60 (Naur et al. 1963) 4. ISWIM (Landin 1966) 5. PAL (Evans 1968) 6. SASL (1973–83) 7. Edinburgh (1969–80) – NPL, early ML, HOPE 8. Miranda (1986) 9. Haskell (1992 ...) 1 The Lambda Calculus The lambda calculus (Church & Rosser 1936; Church 1941) is a typeless theory of functions. In the brief account here we use lower case letters for variables: a, b, c ··· and upper case letters for terms: A, B, C ···. A term of the calculus is a variable, e.g. x, or an application AB, or an abstraction λx.A for some variable x. In the last case λx.
  • A Generalization of Jumps and Labels
    Higher-Order and Symbolic Computation, 11, 125–143 (1998) © 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. A Generalization of Jumps and Labels PETER J. LANDIN [email protected] QMW, University of London Abstract. This paper describes a new language feature that is a hybrid of labels and procedures. It is closely related to jumping out of a functional subroutine, and includes conventional labels and jumping as a special, but probably not most useful, case. It is independent of assignment, i.e., it can be added to a “purely-functional” (“non-imperative”) system (such as lisp without pseudo-functions or program feature). Experiments in purely functional programming suggest that its main use will be in success/failure situations, and failure actions. This innovation is incorporated in the projected experimental system, iswim. Keywords: “Explaining to programmers the logical structure of programming languages is like a cat explaining to a fish what it feels like to be wet”—Gorn. Introduction This paper provides constructive answers to three questions about the logical structure of programming languages. Each arises from an observation about the roles of labels and jumping in current languages. First, there are languages that have neither assignment nor jumping; e.g., “pure lisp”, i.e., lisp without pseudo-functions or program feature. (For evidence that such a language may nevertheless be powerful, see [7], especially the Appendix, and [6].) Also, it is obvious that a language may have assignment but not jumping; e.g., Algol 60 deprived of goto, labels, label, and switch’s; or lisp with program feature but without labels and GOTO. Is the converse true? That is to say, can a language have no assignment and yet have jumping, or rather have some analogy to conventional jumping that yields conventional jumping as a special case when assignment is added? There is an elementary remark that may not be out of place here.
  • On the Semantics of Intensionality and Intensional Recursion
    On the Semantics of Intensionality and Intensional Recursion G. A. Kavvos St John’s College University of Oxford arXiv:1712.09302v1 [cs.LO] 26 Dec 2017 A thesis submitted for the degree of Doctor of Philosophy Trinity Term 2017 This thesis is dedicated to my father, John G. Kavvos (1948–2015), who taught me to persist in the face of all adversity. Acknowledgements A thesis in the making is a process of controlled deconstruction of its author’s character. This fact in itself suffices to warrant a rich list of acknowledgees. First and foremost, I would like to thank my doctoral supervisor, Sam- son Abramsky, sine qua non. Not only did he suggest the—admittedly unusual—topic of this thesis, but his unfailingly excellent advice and his unparalleled wit were vital ingredients in encouraging me to plough on, even when it seemed futile to do so. I shall never forget the time he quoted the experience of J. E. Littlewood on the development of chaos theory: “For something to do we went on and on at the thing with no earthly prospect of “results”; suddenly the whole vista of the dramatic fine structure of solutions stared us in the face.” It was a pleasure to work for a few short years next to a scientist of his stature, who also happens to be a kind man with a wonderful sense of humour. I also want to thank my examiners, Luke Ong and Martin Hyland, who examined this work in detail, and provided many enlightening comments and suggestions. I am grateful to the UK Engineering and Physical Sciences Research Coun- cil (EPSRC) for their financial support.
  • The Next 700 Programming Languages
    The Next 700 Programming Languages P. J. Landin Univac Division of Sperry Rand Corp., New York, New York "... today... 1,700 special programming languages used to 'communicate' in over 700 application areas."--Computer Software Issues, an American Mathematical Association Prospectus, July 1965. A family of unimplemented computing languages is described that is intended to span differences of application area by a unified framework. This framework dictates the rules about the uses of user-coined names, and the conventions about characterizing functional relationships. Within this framework the design of a specific language splits into two independent parts. One is the choice of written appearances of programs (or more generally, their physical representation). The other is the choice of the abstract entities (such as numbers, character-strings, lists of them, functional relations among them) that can be referred to in the language. The system is biased towards "expressions" rather than "statements." It includes a nonprocedural (purely functional) subsystem that aims to expand the class of users' needs that can be met by a single print-instruction, without sacrificing the important properties that make conventional right-hand-side ... differences in the set of things provided by the library or operating system. Perhaps had ALGOL 60 been launched as a family instead of proclaimed as a language, it would have fielded some of the less relevant criticisms of its deficiencies. At first sight the facilities provided in ISWIM will appear comparatively meager. This appearance will be especially misleading to someone who has not appreciated how much of current manuals are devoted to the explanation of common (i.e., problem-orientation independent) logical structure rather than problem-oriented specialties. For example, in almost every language a user can coin names, obeying certain rules about the contexts in which the name is used and their relation to the textual segments that introduce, define, declare, or otherwise constrain its use.
  • The History of Standard ML
    The History of Standard ML DAVID MACQUEEN, University of Chicago, USA ROBERT HARPER, Carnegie Mellon University, USA JOHN REPPY, University of Chicago, USA Shepherd: Kim Bruce, Pomona College, USA The ML family of strict functional languages, which includes F#, OCaml, and Standard ML, evolved from the Meta Language of the LCF theorem proving system developed by Robin Milner and his research group at the University of Edinburgh in the 1970s. This paper focuses on the history of Standard ML, which plays a central rôle in this family of languages, as it was the first to include the complete set of features that we now associate with the name “ML” (i.e., polymorphic type inference, datatypes with pattern matching, modules, exceptions, and mutable state). Standard ML, and the ML family of languages, have had enormous influence on the world of programming language design and theory. ML is the foremost exemplar of a functional programming language with strict evaluation (call-by-value) and static typing. The use of parametric polymorphism in its type system, together with the automatic inference of such types, has influenced a wide variety of modern languages (where polymorphism is often referred to as generics). It has popularized the idea of datatypes with associated case analysis by pattern matching. The module system of Standard ML extends the notion of type-level parameterization to large-scale programming with the notion of parametric modules, or functors. Standard ML also set a precedent by being a language whose design included a formal definition with an associated metatheory of mathematical proofs (such as soundness of the type system).