An Overview of Linear Logic Programming

Dale Miller
INRIA/Futurs & Laboratoire d'Informatique (LIX)
École polytechnique, Rue de Saclay
91128 PALAISEAU Cedex FRANCE

Abstract

Logic programming can be given a foundation in sequent calculus by viewing computation as the process of building a cut-free sequent proof bottom-up. The first accounts of logic programming as proof search were given in classical and intuitionistic logic. Given that linear logic allows richer sequents and richer dynamics in the rewriting of sequents during proof search, it was inevitable that linear logic would be used to design new and more expressive logic programming languages. We overview how linear logic has been used to design such new languages and describe briefly some applications and implementation issues for them.

1.1 Introduction

It is now commonplace to recognize the important role of logic in the foundations of computer science. When a major new advance is made in our understanding of logic, we can thus expect to see that advance ripple into many areas of computer science. Such rippling has been observed during the years since the introduction of linear logic by Girard in 1987 [Gir87]. Since linear logic embraces computational themes directly in its design, it often allows direct and declarative approaches to computational and resource-sensitive specifications. Linear logic also provides new insights into the many computational systems based on classical and intuitionistic logics since it refines and extends these logics.

There are two broad approaches by which logic, via the theory of proofs, is used to describe computation [Mil93]. One approach is the proof reduction paradigm, which can be seen as a foundation for functional programming. Here, programs are viewed as natural deduction or sequent calculus proofs and computation is modeled using proof normalization. Sequents are used to type a functional program: a program fragment is associated with the single-conclusion sequent ∆ −→ G if the code has the type declared in G when all its free variables have types declared for them in the set of type judgments ∆. Abramsky [Abr93] has extended this interpretation of computation to multiple-conclusion sequents of linear logic, ∆ −→ Γ, where ∆ and Γ are both multisets of typing judgments. In that setting, cut-elimination can be seen as specifying concurrent computations. See also [BS94, Laf89, Laf90] for related uses of concurrency in proof normalization in linear logic. The more expressive types made possible by linear logic have also been used to provide static analysis of run-time garbage, aliases, reference counters, and single-threadedness [GH90, MOTW99, O’H91, Wad90].

Another approach to using proof theory to specify computation is the proof search paradigm, which can be seen as a foundation for logic programming. In this paper (which is an update to [Mil95]), we first provide an overview of the proof search paradigm and then outline the impact that linear logic has made on the design and expressivity of new logic programming languages.

1.2 Goal-directed proof search

When logic programming is considered abstractly, sequents directly encode the state of a computation, and the changes that occur to sequents during the bottom-up search for cut-free proofs encode the dynamics of computation.
In particular, following the framework described in [MNPS91], a logic programming language consists of two kinds of formulas: program clauses describe the meaning of non-logical constants, and goals are the possible consequences considered from collections of program clauses. A single-conclusion sequent ∆ −→ G represents the state of an idealized logic programming interpreter in which the current logic program is ∆ (a set or multiset of formulas) and the goal is G. These two classes of formulas are duals of each other in the sense that a negative subformula of a goal is a program clause and a negative subformula of a program clause is a goal formula.

1.2.1 Uniform proofs

The constants that appear in logical formulas are of two kinds: logical constants (connectives and quantifiers) and non-logical constants (predicates and function symbols). The "search semantics" of the former is fixed and independent of context: for example, the search for the proof of a disjunction or universal quantifier should be the same no matter what program is contained in the sequent for which a proof is required. On the other hand, the instructions for proving a formula with a non-logical constant head (that is, an atomic formula) are provided by the logic program in the sequent.

This separation of constants into logical and non-logical yields two different phases in proof search for a sequent. One phase is goal reduction, in which the search for a proof of a non-atomic formula uses the introduction rule for its top-level logical constant. The other phase is backchaining, in which the meaning of an atomic formula is extracted from the logic program part of the sequent.

The technical notion of uniform proofs is used to capture goal-directed search. When sequents are single-conclusion, a uniform proof is a cut-free proof in which every sequent with a non-atomic right-hand side is the conclusion of a right-introduction rule [MNPS91]. An interpreter attempting to find a uniform proof of a sequent would directly reflect the logical structure of the right-hand side (the goal) into the proof being constructed. As we shall see, left-introduction rules are used only when the goal formula is atomic and as part of the backchaining phase. A specific notion of goal formula and program clause along with a proof system is called an abstract logic programming language if a sequent has a proof if and only if it has a uniform proof. As we shall illustrate below, first-order and higher-order variants of Horn clauses paired with classical provability [NM90] and hereditary Harrop formulas paired with intuitionistic provability [MNPS91] are two examples of abstract logic programming languages.

While backchaining is not part of the definition of uniform proofs, the structure of backchaining is consistent across several abstract logic programming languages. In particular, when proving an atomic goal, applications of left-introduction rules can be used in a coordinated decomposition of a program clause that yields not only an atomic formula occurrence matching the atomic goal but also possibly new goal formulas for which additional proofs must be attempted [NM90, MNPS91, HM91].
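To make the two phases concrete, the following is a minimal sketch (not taken from the paper) of a goal-directed interpreter for a small propositional fragment, written here in Haskell. Goals are built from true, conjunction, and atoms; a program clause is simplified to an atomic head paired with a goal body (read "body ⊃ head"); the names Goal, Clause, prove, and backchain are purely illustrative.

  -- Goals: truth, conjunction, and atomic formulas.
  type Atom = String
  data Goal = GTrue | GAnd Goal Goal | GAtom Atom

  -- A (propositional) program clause "body ⊃ head", with an atomic head.
  data Clause = Clause Atom Goal

  -- Goal reduction: the top-level logical constant of the goal alone
  -- determines the next step, independently of the program.
  prove :: [Clause] -> Goal -> Bool
  prove _    GTrue        = True
  prove prog (GAnd g1 g2) = prove prog g1 && prove prog g2
  -- Only at an atomic goal is the program consulted (backchaining):
  -- some clause is "decided" upon and decomposed.
  prove prog (GAtom a)    = any (backchain prog a) prog

  -- Backchaining on a clause: its head must match the atomic goal,
  -- and its body becomes a new goal to be proved.
  backchain :: [Clause] -> Atom -> Clause -> Bool
  backchain prog a (Clause h body) = h == a && prove prog body

  -- Example: the program {q, q ⊃ p} proves the goal p ∧ q.
  main :: IO ()
  main = print (prove [Clause "q" GTrue, Clause "p" (GAtom "q")]
                      (GAnd (GAtom "p") (GAtom "q")))   -- prints True

The interpreter is uniform in the sense defined above: right-introduction behaviour is fixed by the goal's top-level logical constant, while the program ∆ is examined only after the goal has been reduced to an atom. (Being a naive depth-first search, it performs no loop checking.)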
1.2.2 Logic programming in classical and intuitionistic logics

In the beginning of the logic programming literature, there was one example of logic programming, namely, the first-order classical theory of Horn clauses, which was the logical basis of the popular programming language Prolog. However, no general framework existed for connecting logic and logic programming. The operational semantics of logic programs was presented as resolution [AvE82], an inference rule optimized for classical reasoning. Miller and Nadathur [MN86, Mil86, NM90] were probably the first to use the sequent calculus to examine design and correctness issues for logic programming languages. Moving to the sequent calculus made it natural to consider logic programming in settings other than just classical logic.

We first consider the design of logic programming languages within classical and intuitionistic logic, where the logical constants are taken to be true, ∧, ∨, ⊃, ∀, and ∃ (false and negation are not part of the first logic programs we consider).

Horn clauses can be defined simply as those formulas built from true, ∧, ⊃, and ∀ with the proviso that no implication or universal quantifier is to the left of an implication. A goal in this setting would then be any negative subformula of a Horn clause: more specifically, it would be either true or a conjunction of atomic formulas. It is shown in [NM90] that a proof system similar to the one in Figure 1.1 is complete for the classical logic theory of Horn clauses and their associated goal formulas. It then follows immediately that Horn clauses are an abstract logic programming language. (The syntactic variable A in Figure 1.1 denotes atomic formulas.) Notice that sequents in this and other proof systems contain a signature Σ as their first element: this signature contains type declarations for all the non-logical constants in the sequent. Notice also that there are two different kinds of sequent judgments: one with and one without a formula labeling the sequent arrow. The sequent Σ : ∆ −→_D A denotes the sequent Σ : ∆, D −→ A but with the formula D distinguished (that is, marked for backchaining).

Inference rules in Figure 1.1, and those that we shall show in subsequent proof systems, can be divided into four categories. The right-introduction rules (goal reduction) are those using the unlabeled sequent arrow and in which the goal formula is non-atomic. The left-introduction rules (backchaining) are those with sequent arrows labeled with a formula, and it is on that formula that introduction rules are applied. The initial rule forms the third category and is the only rule with a repeated occurrence of a schema variable in its conclusion (the atom A appears both as the label of the arrow and to its right). The fourth category consists of the decide rule, which selects a formula D from the program ∆ and places it on the sequent arrow, thereby beginning the backchaining phase.

  true right:  Σ : ∆ −→ true

  ∧ right:     from  Σ : ∆ −→ G1  and  Σ : ∆ −→ G2,  infer  Σ : ∆ −→ G1 ∧ G2

  initial:     Σ : ∆ −→_A A

  decide:      from  Σ : ∆ −→_D A,  infer  Σ : ∆ −→ A

  ∧ left:      from  Σ : ∆ −→_{Di} A,  infer  Σ : ∆ −→_{D1 ∧ D2} A

  ∀ left:      from  Σ : ∆ −→_{D[t/x]} A,  infer  Σ : ∆ −→_{∀τ x.D} A

  ⊃ left:      from  Σ : ∆ −→ G  and  Σ : ∆ −→_D A,  infer  Σ : ∆ −→_{G ⊃ D} A

Fig. 1.1. In the decide rule, D ∈ ∆; in the left rule for ∧, i ∈ {1, 2}; and in the left rule for ∀, t is a Σ-term of type τ.
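As a small worked instance (not taken from the paper), let the signature Σ declare two propositional constants p and q, and let ∆ be the Horn clause program containing q and q ⊃ p. Using the labeled-arrow notation of Figure 1.1 (the backchaining formula written over the arrow), a uniform proof of the goal p is built bottom-up by the decide, ⊃-left, and initial rules:

\[
\dfrac{\dfrac{\dfrac{\dfrac{}{\Sigma:\Delta \xrightarrow{\;q\;} q}\ \textit{initial}}
                    {\Sigma:\Delta \longrightarrow q}\ \textit{decide}
              \qquad
              \dfrac{}{\Sigma:\Delta \xrightarrow{\;p\;} p}\ \textit{initial}}
             {\Sigma:\Delta \xrightarrow{\;q\,\supset\,p\;} p}\ \supset\text{-left}}
      {\Sigma:\Delta \longrightarrow p}\ \textit{decide}
\]

Reading bottom-up: the decide rule selects q ⊃ p from ∆; the ⊃-left rule produces the new goal q together with the initial sequent for p; and q is in turn proved by deciding on q ∈ ∆ and closing with initial. This is precisely the backchaining step a Prolog interpreter performs on the clause p :- q.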