Integrating Dependent and Linear Types

Neelakantan R. Krishnaswami (University of Birmingham)    Pierre Pradic (ENS Lyon)    Nick Benton (Microsoft Research)

Abstract

In this paper, we show how to integrate linear types with type dependency, by extending the linear/non-linear calculus of Benton to support type dependency.

Next, we give an application of this calculus by giving a proof-theoretic account of imperative programming, which requires extending the calculus with computationally irrelevant quantification, proof irrelevance, and a monad of computations. We show the soundness of our theory by giving a realizability model in the style of Nuprl, which permits us to validate not only the β-laws for each type, but also the η-laws.

These extensions permit us to decompose Hoare triples into a collection of simpler type-theoretic connectives, yielding a rich equational theory for dependently-typed higher-order imperative programs. Furthermore, both the type theory and its model are relatively simple, even when all of the extensions are considered.

Keywords  Linear types, dependent types, intersection types, proof irrelevance, separation logic, Hoare triples

1. Introduction

Two of the most influential research directions in language design are substructural logic and dependent types. Substructural logics like linear logic [18] and separation logic [38] offer fine-grained control over the structural rules of logic (i.e., contraction and weakening), which means types control not only how a value of a given type is used, but also how often it is used. This permits giving a logical account of the notion of a resource, and has been tremendously useful in giving modular and accurate specifications to stateful and imperative programs.

On the other hand, dependent type theory extends simple type theory by permitting terms to occur in types, permitting the expression of concepts like equality in types. This enables giving types which give a very fine-grained specification of the functional behaviour of their inhabitants.

We would like to fully integrate dependent and linear type theory. Linear type theory would permit us to extend the Curry-Howard correspondence to (for example) imperative programs, and type dependency permits giving very precise types to describe the functional behaviour of those programs. The primary difficulty we face is precisely the interaction between type dependency and the structural rules. Since linear types ensure a variable occurs only once in a program, and dependent types permit variables to occur in both types and terms, how should we count occurrences of variables in types?

The key observation underpinning our approach is that we do not have to answer this question! We build on the work of Benton [7], which formulated models of intuitionistic linear logic in terms of a monoidal adjunction F ⊣ G between a monoidal closed category L and a cartesian closed category C. The exponential ! then decomposes as the composition of the two functors, !A = G(F(A)). Syntactically, such a model then corresponds to a pair of lambda calculi, one intuitionistic and one linear, which are related by a pair of modal operators F and G.

In the linear/non-linear (LNL) calculus, we no longer have to view the intuitionistic function space P → Q through the lens of the Girard encoding !P ⊸ Q. Instead, the intuitionistic arrow, →, is a connective in its own right, independent of the linear function space, ⊸. Given such a separation, we can make the intuitionistic part of the language dependent, changing the type X → Y to the pi-type Πx : X. Y without altering the linear function space A ⊸ B. We then end up with a type theory in which linear types may depend on intuitionistic terms in quite sophisticated ways, but we never have to form a type dependent on an open linear term.
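To make the division of labour concrete, the following is a minimal sketch (our own, not from the paper) of the two LNL modalities in GHC Haskell with the LinearTypes extension, where ordinary Haskell plays the role of the intuitionistic world, multiplicity-1 functions play the role of the linear world, and the names F, G, Bang, intoG and outOfG are ours:

    {-# LANGUAGE LinearTypes, GADTs #-}

    -- F embeds an intuitionistic (unrestricted) value into the linear world;
    -- its field is deliberately unrestricted, as for LNL's F modality.
    data F a where
      F :: a -> F a

    -- G re-exports a closed linear value to the intuitionistic world,
    -- where it may be freely duplicated.
    newtype G a = G a

    -- Girard's exponential !A, recovered as the composite of the modalities.
    type Bang a = G (F a)

    -- The two transposes of the adjunction F ⊣ G.
    intoG :: (F a %1 -> b) -> (a -> G b)
    intoG f x = G (f (F x))

    outOfG :: (a -> G b) -> (F a %1 -> b)
    outOfG g (F x) = case g x of G y -> y

The point of the decomposition is visible in the types: the unrestricted arrow and the multiplicity-1 arrow coexist as separate connectives, which is what later lets the intuitionistic side be made dependent without disturbing the linear side.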
As an illustration, recall Girard's [19] classic motivating example of the resource reading of linear logic, in which the proposition A represents spending a dollar, B represents obtaining a pack of Camels, and C obtaining a pack of Marlboro. The formula A ⊸ B & C then expresses that one may spend a dollar and receive the option of a pack of Camels and of a pack of Marlboro, but only one of these options may be exercised. In LNL_D, from that assumption one can type, for example, a machine M that takes n dollars and produces its choice of something you can buy:

    M : Πn : N. tuple n A ⊸ Fm : N, m' : N, p : m + m' = n. tuple m B ⊗ tuple m' C

where tuple n A is the (definable) n-fold linear tensor A ⊗ · · · ⊗ A. The return type of the linear function space in the type of M is a linear dependent pair, which generalizes LNL's F modality for mapping intuitionistic values into linear ones. So M yields two integers m and m', a proof that their sum is n, and an m-tuple of Bs together with an m'-tuple of Cs.

[Figure 1. Structural judgements: the rules defining well-formed intuitionistic contexts Γ ok, well-formed linear contexts Γ ⊢ Δ ok, the type judgements Γ ⊢ X type and Γ ⊢ A linear, and the type equality judgements Γ ⊢ X ≡ Y type and Γ ⊢ A ≡ B linear.]

[Figure 2. Type well-formedness: the rules placing U_i and L_i in U_{i+1}, and the formation rules for 1, Π, Σ, the equality type, G A, and the linear types I, ⊗, ⊸, ⊤, &, Π and F in their respective universes.]

Contributions.  In this paper, we make the following contributions:

• First, we describe a core type theory LNL_D that smoothly integrates both linearity and full type dependency, by extending the linear/non-linear calculus [7, 8]. Our type theory is an extensional type theory, which means that we consider the full βη-theory of the language, and furthermore, it also has a universe hierarchy, meaning that we support "full-spectrum" dependent types including large eliminations (i.e., computing types from terms). Our approach to linear dependent type theory gives a simple proof-theoretic explanation of how the ad-hoc restrictions on the interaction of linearity in earlier approaches (such as Linear LF [10]) arise, by showing how they can be modelled in LNL_D.

• Second, we show how to turn this calculus into a language for dependently-typed imperative programming (in the style of separation logic). The general idea is to combine linearity and type dependency to turn separation-logic style specifications into types, and to take imperative programs to be inhabitants of those types. Ensuring that programs have the appropriate operational behavior requires a number of extensions to the type system, including pointer types, implicit quantification (in the style of intersection and union types), and proof irrelevance for linear types.

• Once this is done, we not only have a very expressive type system for higher-order imperative programs, but we can also systematically give a natural equational theory for them via the βη-theory for each of the type constructors we use.

• Next, we give a realizability model in the style of Nuprl [20] to show the consistency of our calculus. Our model is very simple and easy to work with, but nevertheless simply and gracefully justifies the wide variety of the extensions we consider.

• Finally, we give a small implementation of our extended type theory, illustrating that it is possible to implement all of these constructions.

2. Core Dependent Linear Type Theory

In this section, we describe the core dependent linear/non-linear calculus. The full grammar of terms and types is given later, in Figure 10, but here we present the calculus rule by rule.

The basic structural judgements are given in Figure 1. The judgement Γ ok is the judgement that a context Γ of intuitionistic hypotheses is well-formed. As is standard, a well-formed context is either empty, or a context Γ', x : X where Γ' is well-formed and X is an intuitionistic type well-formed in Γ'. The judgement Γ ⊢ Δ ok says that Δ is a well-formed linear context, whose linear types may depend (only) upon the intuitionistic variables in Γ. The judgements Γ ⊢ X type and Γ ⊢ A linear judge when a term is an intuitionistic or linear type, respectively. The equality judgements Γ ⊢ X ≡ Y type and Γ ⊢ A ≡ B linear determine whether two intuitionistic or linear types are definitionally equal, respectively.

Both Γ ⊢ X type and Γ ⊢ A linear work by immediately appealing to an ordinary typing judgement for an intuitionistic universe U_i and a linear universe L_i. The type well-formedness rules are given in Figure 2, and the introduction and elimination rules are given in Figure 3, with the judgement Γ ⊢ e : X.

[Figure 3. Intuitionistic typing: the variable rule, type conversion, and the introduction and elimination rules for 1, Σ, Π, the equality type, and G, for the judgement Γ ⊢ e : X.]
Both the linear and intuitionistic universes L_i and U_i are elements of the larger universe U_{i+1}. Note that L_i is an element of U_{i+1} — this is because a linear type is an intuitionistic term, since we do not wish to restrict how often type variables ranging over linear types can occur. (That is, we want polymorphic types like Πα : L_i. α ⊸ α to be well-formed.)
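For intuition only: the nearest GHC analogue of such a polymorphic linear type is an ordinary forall over a multiplicity-1 arrow, where the quantified variable is itself ordinary, unrestricted data even though the arrow beneath it is linear (a loose analogy, since GHC has no separate universe of linear types):

    {-# LANGUAGE LinearTypes, ExplicitForAll #-}

    -- The linear identity at every type: the quantifier is "intuitionistic",
    -- only the arrow is linear.
    linId :: forall a. a %1 -> a
    linId x = x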

[Figure 4. Linear typing: the rules for the judgement Γ; Δ ⊢ e : A, covering linear variables, type conversion, I, ⊗, ⊸, ⊤, &, the dependent linear Π, the linear dependent pair F, and the eliminator G⁻¹.]

[Figure 5. βη-equality: the β and η rules for each connective, the commuting conversions for the positive connectives I, ⊗ and F, and the equality reflection rule.]

The intuitionistic types we consider are the dependent function space Πx : X. Y, with lambda-abstraction and function application as its introduction and elimination rules; the dependent pair Σx : X. Y, with pairing (e, e') as its introduction, and the projective (or "strong") eliminations π₁ e and π₂ e; the unit type 1; and the equality type e =_X e'. Equality is introduced with refl, but has no explicit elimination form, since we are giving an extensional type theory with equality reflection. These rules are all standard for dependent type theory. Similarly, it is possible to add inductive types (such as natural numbers) to the calculus, including support for large eliminations, though we omit them from the rules in the paper for space reasons (rules and proofs are in the companion tech report).

The first place that linearity arises is with the adjoint embedding of linear types into intuitionistic types, G A. Its (intuitionistic) introduction rule is G e, which checks that e has linear type A in the empty linear context. (Intuitively, this reflects the fact that closed linear terms can be freely duplicated.) It is worth noting that our intuitionistic types are also essentially the same types as are found in the non-dependent LNL calculus, only we have changed X → Y to a pi-type, and X × Y to a sigma-type.

We depart from propositional LNL when we reach the dependent version of the other adjoint connective. Instead of a unary modal connective F(X), we introduce the linear dependent pair type Fx : X. A[x]. Its introduction form F (e, t) consists of a pair of an intuitionistic term e of type X and a linear term t of type A[e] — observe that the type of the linear term may depend on the intuitionistic first component. However, just as in the LNL calculus, this pair has a pattern-matching eliminator let F (x, a) = e in e'. We also add a dependent linear function space Πx : X. A, which is a function that takes an intuitionistic argument and returns a linear result (whose type may depend on the argument). We add this for essentially hygienic reasons — it gives us syntax for the equivalence (Fx : X. A) ⊸ C ≃ Πx : X. A ⊸ C.

The typing judgement Γ; Δ ⊢ e : A for linear terms says that e has type A, assuming intuitionistic variables Γ and linear variables Δ. The well-formedness and typing rules for the MALL connectives (I, ⊗, ⊸, &, ⊤) are all standard. The elimination form of the type G A is G⁻¹ e, which yields an A in an empty linear context, the same as in Benton [7].

The equality theory of our language is given in Figure 5. It gives the β and η rules for each of the connectives, including commuting conversions for all of the positive connectives (i.e., ⊗, I and F). We also include the equality reflection rule of extensional type theory. For space reasons, we do not include any of the congruence rules or the equivalence relation axioms (they are given in the technical report).
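The equivalence (Fx : X. A) ⊸ C ≃ Πx : X. A ⊸ C mentioned above is, in its non-dependent shadow, just currying for a pair with an unrestricted first component. A sketch in GHC Haskell with LinearTypes, where the names FPair, curryF and uncurryF are ours and the dependency of A on x is of course erased:

    {-# LANGUAGE LinearTypes, GADTs #-}

    -- A pair of an unrestricted first component and a linear second
    -- component: the simply-typed shadow of Fx:X. A.
    data FPair x a where
      FPair :: x -> a %1 -> FPair x a

    -- The two directions of (Fx:X. A) ⊸ C  ≃  Πx:X. A ⊸ C.
    curryF :: (FPair x a %1 -> c) -> (x -> a %1 -> c)
    curryF f x a = f (FPair x a)

    uncurryF :: (x -> a %1 -> c) -> (FPair x a %1 -> c)
    uncurryF g (FPair x a) = g x a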
[Figure 6. Well-formedness of extensions: formation rules for Loc, the points-to type e ↦ X, the computation type T(A), the squash type [A], ⊤_I, and the intersection and union types ∀x : X. Y and ∃x : X. Y at both intuitionistic and linear kinds.]

[Figure 7. Intersection and union types: the introduction, elimination, and equality rules for ∀x : X. Y and ∃x : X. Y and their linear counterparts, in which the quantified variable may not occur free in the term, together with the fix rule for recursive functions over an irrelevant natural-number index.]

[Figure 8. Typing of imperative programs: the rules for the irrelevance judgement Γ; Δ ⊢ e ÷ A and the squash type [A], and the rules for val, let val, new_X, get, assignment e :=_t e', and free.]

3. Internal Imperative Programming

There have been a number of very successful applications of dependent type theory to the verification of imperative programs, such as XCAP, Bedrock and the Verified Software Toolchain [4, 11, 31]. These approaches all have the common feature that they are "external" methodologies for imperative programming. That is, within the metalogic of dependent type theory, an imperative language (such as C or assembly) and its semantics is defined, and a program logic for that language is specified and proved correct. So the type theory remains pure, and imperative programs are therefore objects of study rather than programs of the type theory.

We would like to investigate an internal, "proofs as programs", methodology for imperative programming, where we write functional programs which (a) have effects, but (b) nonetheless are inhabitants of the appropriate types.

[Figure 9. Imperative equality: the β, η, and associativity laws for let val in the computation monad, the congruence rules for [A] and the linear ∀ and ∃, and the unfolding rule for fix.]

Pointers, Computations, and Proof Irrelevance  In order to program with mutable state, we need types to describe both state and the computations that manipulate it. To do this, we will follow the pattern of L3 [2], and introduce a type of locations and a type of reference capabilities.

The type Loc is an intuitionistic type of locations. This is a type of addresses, which may be freely duplicated and stored in data structures. Testing locations for equality is a pure computation, and for simplicity we do not consider address arithmetic. However, merely having a location is not sufficient to dereference it, since a priori we do not know if that location points to allocated memory.

The right to access a pointer is encoded in the capability type e ↦ X. This says that the location e holds a value of type X. This differs slightly from separation logic, whose points-to assertion e ↦ v says that e points to the specific value v. However, since we have dependent types at our disposal, we can easily recover the separation logic assertion with the encoding e ↦ v ≜ e ↦ (Σx : X. x = v).

At this point, much of the descriptive power of separation logic is already available to us: as soon as we have locations and pointer capabilities, we can use the tensor product P ⊗ Q just like the separating conjunction P ∗ Q.¹

Next, we introduce a type T(A) of computations. That is, if A is a linear type, then T(A) is a computation type which produces a result of type A. Note that T(A) is itself a linear type; intuitively, executing a computation modifies the current state so that it becomes a post-state of type A. The reason we treat computations monadically is that, naively, allocation and deallocation of memory violate linearity: for example, if we typed allocation as Fx : Loc. x ↦ X, the points-to assertion would be "created from nothing". Since one of our design goals is precisely to keep the semantics (see Section 4) as naive as possible, we introduce a linear computation type.

With the features we have introduced so far, a stateful computation e might be given the type G (P ⊸ T(Q)), representing a linearly-closed term which takes a pre-state of type P, and computes a result of type Q. While this is perfectly adequate from a propositions-as-types point of view, it is not quite right computationally.

In our approach, we encode reasoning about state using linear types, so that we can read the type P ⊸ P' as saying that the state P implies the state P'. However, a proof f of this entailment is a linear lambda term, and evaluating it does computation. So if we precompose the proof f with the computation e, then we get a result which does different computation than e did originally. This is quite different from the way that the rule of consequence works in Hoare logic.

To solve this problem, we introduce a form of linear proof irrelevance into our language. The type [A] is a squash type, which erases all of the proof-relevant data about A — we are left with only knowledge of its inhabitation. Our squash type has a rule saying that ∗ is an inhabitant of [A], if we can find evidence for it using the monadic judgement Γ; Δ ⊢ e ÷ A. This judgement lets us turn any derivation of A into evidence of inhabitation, and lets us use evidence of proof-irrelevant inhabitation relevantly, as long as we stay in the irrelevance judgement. We also give a pair of rules to distribute proof irrelevance through tensor products and units.²

So we can type computations as G ([P] ⊸ T(Fx : X. [Q])), representing computations that take (proof-irrelevant) pre-states P to proof-irrelevant post-states, while returning values of type X.

We can now describe the typing rules for computations in Figure 8. Given a location and a proof-irrelevant capability to access it, we can type a dereference operation. Dereferencing let (x, a) = get(e, t) in t' (in Figure 8) takes a location e and a capability t, and then binds the contents to x and the returned capability to the variable a, before evaluating the body t'.

The dereference operation is a pure linear term (i.e., has a non-monadic type), since it reads but does not modify the state. The other stateful operations are monadic. Allocation new_X e is a computation which returns a pair of a location and an irrelevant capability to access that location. Deallocation free (e, e') takes a location and the capability to access it, and then deallocates the memory, returning a computation of unit type. Update e :=_t e' takes a location and a capability to access it, and then updates the location, returning a new capability to access it. Since capabilities are linear, we are able to support strong update – assignment can change the type of a pointer. We also include standard monadic rules for returning pure values and sequencing computations.

As a simple example, consider the following term:

    1  let val F (x, c1) = new_N 0 in
    2  let val F (y, c2) = new_bool false in
    3  let (n, c1') = get(x, c1) in
    4  let val c2' = (y :=_c2 n) in
    5  val (F (x, c1'), F (y, c2'))

On the first two lines, this function allocates two pointers x (pointing to 0) and y (pointing to false), with access capabilities c1 and c2. On line 3, it dereferences x, binding the result to n. Furthermore, the linear capability c1 is taken by the dereference operation, and returned (as the capability c1'). On line 4, y is updated to point to n, again taking in the capability c2 and returning the capability c2'. This is a strong update, changing the type of y so that it points to a natural number. Then we return both pointers and their access capabilities on line 5, with an overall type of T ((Fx : Loc. [x ↦ N]) ⊗ (Fy : Loc. [y ↦ N])).

One point worth mentioning is that our computation monad is a strong monad, which is in some sense the essence of the frame rule of separation logic — the derivability of the map (A ⊗ T(B)) ⊸ T(A ⊗ B) means that a computation of B that doesn't use the state A can "fold it in" to produce a bigger computation of A ⊗ B.

Another point worth mentioning is that our system only treats reading pointers as a pure linear action. Naively, one might expect writes could also be viewed as pure, since linearity ensures that there is at most one capability to modify a location. Unfortunately, this idea is incompatible with proof irrelevance: squashing erases a term, and erasing terms which could modify the heap would be unsound.

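As a point of comparison only, here is a small runnable model (ours, not the paper's semantics) of this pointer interface in Haskell: the heap is a finite map of dynamically typed cells, T is a state monad over it, Loc is a bare duplicable address, and Cap x stands for the erased capability [l ↦ X]. Nothing in this sketch enforces that capabilities are used linearly, and get is monadic here only because the toy heap lives in the state monad; in LNL_D those guarantees come from the type system itself.

    import qualified Data.Map.Strict as M
    import Data.Dynamic (Dynamic, toDyn, fromDynamic)
    import Data.Typeable (Typeable)
    import Control.Monad.State (State, state)

    type Heap   = M.Map Int Dynamic
    type T a    = State Heap a
    newtype Loc = Loc Int deriving (Eq, Show)
    data Cap x  = Cap                       -- proof-irrelevant capability token

    new :: Typeable x => x -> T (Loc, Cap x)
    new v = state $ \h ->
      let l = maybe 0 ((+ 1) . fst) (M.lookupMax h)   -- fresh address
      in ((Loc l, Cap), M.insert l (toDyn v) h)

    get :: Typeable x => Loc -> Cap x -> T (x, Cap x)
    get (Loc l) c = state $ \h ->
      case M.lookup l h >>= fromDynamic of
        Just v  -> ((v, c), h)
        Nothing -> error "get: dangling or ill-typed pointer"

    -- Strong update: the capability's index changes from x to y.
    set :: Typeable y => Loc -> y -> Cap x -> T (Cap y)
    set (Loc l) v _ = state $ \h -> (Cap, M.insert l (toDyn v) h)

    free :: Loc -> Cap x -> T ()
    free (Loc l) _ = state $ \h -> ((), M.delete l h)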
1 Intersection Types and Implicit Quantification At this It is relatively easy to show that any formula in the logic of bunched point, we are close to being able to formulate specifications of implications [33] which does not interleave the ordinary implication imperative programs in an internal way. However, one still- P ⊃ Q and the magic wand P −∗Q has a corresponding proof in linear logic, since interleaving the additive and multiplicative missing feature of program logics is support of logical vari- implications is the only way to create bunched contexts. However, ables. These are variables which occur only in specifications, the magic wand is rarely used in practice (though see [21]). but which do not affect the evaluation of a program. Fortu- 2 Intuitionistically, the isomorphism [A × B] ' [A] × [B] is deriv- nately, these have also been studied extensively in lambda- able, but the proof relies on the fact that pairing has first and second calculus, albeit under a different name: intersection types. projections. With the linear tensor product, this isomorphism has to We introduce the two type formers ∀x : X. Y and ∃x : X. Y be built into the syntax. (and their linear counterparts), corresponding to an X-indexed intersection and union, respectively. A term e is an inhabitant The type of this function says that if we are given a list of of ∀x : X. Y if it is an inhabitant of Y for all x, and symmet- length n, then we can recursively traverse the linked list in rically, e inhabits ∃x : X. Y if there is some v : X such that order to get a computationally-relevant number equal to n. e :[v/x]Y. The precise rules are given in Figure 7 – the key This turns out to be a very common pattern in specifications: constraint is that even though a new hypothesis becomes we use computations to mediate between some logical fact available in the context, it cannot occur in the term. This about a piece of data (i.e., the index n) and computational ensures that it cannot affect evaluation, even though it can data (in this case, the length m of the list). appear in typing. Obviously, this function cannot be defined with an elimi- The classical A∧B can be encoded using nator on n, since the point is precisely to compute a relevant large elimination, as ∀b : 2. if(b, A, B), and the number equal to n, and the data we receive is merely a bare A ∨ B is ∃b : 2. if(b, A, B). pointer, along with the right to traverse it. Now, we can introduce logical variables into our specifi- Worth noting is that the squash type completely con- cations (i.e., types) by simple quantification: ceals almost all of the proof-related book-keeping related to manipulating the linked-list predicate LList. As a re- ∀x : X. G ([P] T (Fy : Y. [Q])) ( sult, it is obvious what the computationally-relevant ac- The variable x can now range over both the pre- and the post- tion is, though why things work can get quite tricky to conditions without being able to affect the computation. figure out. In this case, on the first line, we making use of the fact that given a list [LList(n, l)] we can get out Fixed Points Finally, we make use of intersection types to an element of the union type ∃p : n = 0. [l 7→ inl () ⊗ I] ∨ fix f x = e introduce a recursive function form (see Figure 7). ∃k : , n = s(k), l0 : Loc. [l 7→ inr l0 ⊗ LList(k, l0)]. This is bound ∀n : Πx : X[n] Y[n] N This function is ascribed the type N. . , to c0, and then the body can be typechecked under both al- f Πx : X[n] Y[n] x when (1) assuming has the type . and has ternatives. 
the type X[s(n)], then the body has the type Y[s(n)], and (2) we can show that the body has the type Y[0] when the ar- List Append We can also write a type-theoretic version of gument has type X[0]. (In the base case, we also assume that in-place list append, one of the examples from the original f has a which cannot be used, to prevent recursive paper on separation logic. First, let’s write a rehead function, calls.) which changes the head pointer of a list: This permits us to define recursive functions if we can 0 0 supply a decreasing termination metric as a computationally- rehead : ∀n : N. Πl, l : Loc. G([l 7→ inl ()] ⊗ [LList(n, l )] ( T ([LList(n, l)])) irrelevant argument. This is useful because the inductive 0 rehead l l = G(λc1 c2. eliminators require a proof-relevant argument, which we 0 00 let [c2, c2 ] = ∗ in might not have! 0 0 0 let (v, c2) = get(l , c2) in

3.1 Examples let val c1 = (l :=c1 v) in 0 0 let val () = free (l , c2) in With all of this machinery, we can now define a simple linked val ∗) list predicate. For simplicity, we leave the data out, leaving only the link structure behind. This function simply assigns the contents the list to the first argument pointer, and then frees the head pointer of LList : N × Loc → L0 the second list. As usual, a small amount of bureaucracy is LList(0, l) = [l 7→ inl ()] needed to split irrelevant arguments, but otherwise the code LList(n + 1, l) = Fl0 : Loc. [l 7→ inr l0] ⊗ LList(n, l0) is very similar to what one might write in ML or Haskell. This says that a list of length 0 is a pointer to a null value, append : ∀n, n0 : N. Πl, l0 : Loc. G([[LList(n, l)] ⊗ [LList(n0, l0)] 0 and a list of length n + 1 is a pointer to a list of length n. ( T ([LList(n + n , l)])) 0 Note that this predicate is proof-relevant, in that a term of append l l = G(λc1 c2. 0 00 type LList(n, l) is a nested tuple including all the pointers let [c1, c1 ] = ∗ in 0 0 in the linked list. As a result, we need to squash the pred- let (v, c1) = get(l, c1) in icate when giving specifications to programs. We will also case(v, −1 0 0 make use of the fact that proof-relevant quantifiers inside of inl () → let val c = G (rehead l l ) c1 c2 in val ∗ a squash can be turned into proof-irrelevant quantifiers out- 00 −1 00 0 00 inr l → let val c = G (append l l ) c1 c2 in val ∗)) side of a squash: that is, the axiom [Fx : X. A] ( ∃x : X. [A] is inhabited (by the term λx. ∗). In particular, this lets us turn This walks to the end of a list, and then reheads the head squashed coproducts into union types: [A ⊕ B] ( [A] ∨ [B]. of the second list to the tail of the first list. Then with some straightforward linearly-typed proof-programming, it’s pos- List length As a first example, if we wanted to define a sible to show that this gives us a linked list of the appropriate length function for linked lists, we would need to do it as: shape (which is concealed in the val ∗ terms). The fact that we len : ∀n : N. Πl : Loc. G([LList(n, l)] return only a squashed argument corresponds to the fact that ( (Fm : N, pf : m = n. [LList(n, l)])) this is an in-place algorithm, which modifies the state and re- len l = G(λc. turns only some linear knowledge about the new state. let [c0] = ∗ in Data Abstraction Because we can quantify over types, let [c c ] = c0 in 1, 2 our calculus also supports data abstraction, in the style of let ((v pf) c0 ) = get(l c ) in , , 1 , 1 Nanevski et al. [29]. As an example, we can use our linked case(v , list to implement a stack as an . First, we define inl () → F ( refl ∗) 0, , a function to grow a linked list by one element. inr l0 → let (F (m, pf, c00)) = G−1 (len l0) ∗ in F (m + 1, refl, ∗))) grow : Πl : Loc. ∀n : N. G(LList(l, n) ( T (LList(l, n + 1))) grow l = G(λc. are equal) means that we know that any dereference of l is let [c0, c00] = ∗ in equal to (v, refl), without having to put explicit proofs into let (v, c0) = get(l, c0) in the terms (because of equality reflection). 
case(v, In addition, moving things like preconditions out of the inl () →let val (l0, c0) = new inl () in computation monad T (A) lets us give a very simple model 0 let val c = l :=c0 inr l0 in for it, which means that we can easily show the soundness of val ∗ equations such as eliminating duplicate gets 00 −1 00 00 inr l → let val c = G (grow l ) c in val ∗)) let (x, c) = get(e, e0) in C[let (x0, c0) = get(e, c) in e00] Γ; ∆ ` ≡ : A This iterates to the end of the list, and then adds one let (x, c) = get(e, e0) in C[[c/c0, x/x0]e00] element to the tail, in order to preserve the identity of the list. Similarly, we can shrink a list by an element as well. and eliminating gets after sets: 0 0 0 000 shrink : Πl : Loc. ∀n : N. G(LList(l, n + 1) ( T (LList(l, n))) let val c = e :=e00 e in C[let (x , c ) = get(e, c) in e ] Γ; ∆ ` ≡ : T (A) shrink l = G(λc. 0 0 0 0 000 let val c = e := 00 e in C[[e /x c/c ]e ] let [c0, c00] = ∗ in e , let (l0, c0) = get(l, c0) in 0 let val c = l :=c0 inl () in G−1(rehead l l0) c0 c00) 3.3 Hoare Type Theory This deletes an element of the list, using the rehead function Hoare type theory [28] gives an alternative internal approach to ensure that the head pointer remains unchanged. to imperative programming, by directly embedding Hoare Now, we can give a procedure to build a stack as an triples into type theory. The central idea is to use dependent abstract type. types to specify imperative programs as elements of an in- dexed monad, where the index domain are formulas of sep- make : G(T(F Stack : N → Li, aration logic. This approach has proven remarkably success- push : ∀n : N. G Stack n T (Stack(s(n))), ( ful in verifying complex imperative programs (such as mod- pop : ∀n : N. G Stack(s(n)) ( T (Stack n). Stack 0)) ern congruence closure algorithms [30]), but working out the make = G(T(let val F (l, c) = new 0 in equational theory of the Hoare type has proven to be an ex- val F (λn. LList(n, l), F (grow l, F (shrink l, c))))) tremely challenging problem, to the point that Svendsen et al. [40] give a model without any equations. This procedure allocates a new location initialized to 0, and To illustrate the connection more clearly, we note that each then returns the grow and shrink operations at the types for of the basic constants of Hoare type theory is definable in our push and pop. Importantly, the representation of the stack calculus, with a type that is close to the image of the trans- is completely hidden — there is no way to know a pointer lation of its Hoare type. Below, we give allocation, derefer- represents the stack. ence, assignment, and the rule of consequence to illustrate. One particularly interesting feature of this specification is (We leave out the terms, because they can be read off the that the choice of representation invariant is data-dependent: types.) In each case, the Hoare type is on the top line, and our definition of the Stack predicate captures the actual loca- the linear type is on the line below. tion allocated as a free variable.

3.2 Equational Theory new1 : {emp} x : Loc {x 7→ ()} So far, we have decomposed the Hoare triple {P} x : X {Q} as G (T ((Fx : Loc. [x 7→ ()]))) the dependent linear type G ([P] ( T ((Fx : X. [Q]))). That is, 0 0 we view elements of Hoare type as duplicable terms, which : Πx : Loc, v, v : X. {x 7→ v} 1 {x 7→ v } 0 0 take a proof-irrelevant witness for the pre-state P, and then ∀v : X. Πx : Loc, v : X. G ([x 7→ v] ( T ([x 7→ v ])) compute a result of type X, as well as a producing a proof- irrelevant witness for the post-state Q. get : Πx : Loc, v : X. {x 7→ v} v0 : X {v0 = v ∧ x 7→ v} This is, admittedly, a zoo of connectives! However, such 0 0 ∀v : X. Πx : Loc. G ([x 7→ v] Fv : X, pf : v = v. [x 7→ v]) fine-grained decompositions have a number of benefits. ( Since each of the connectives we give has a very simple (P ⊃ P 0) → (Q0 ⊃ Q) → βη-theory, we can derive the equational theory for terms of con : ΠP P 0 Q Q0 {P 0} x : X {Q0} → 3 , , , . Hoare type by joining up the βη-theory of its components. {P} x : X {Q} As a simple example, an immediate consequence of our 0 0 decomposition is that all terms of type {0} x : X {Q} (that is, G [P ( P ] → G [Q ( Q] → 0 0 0 0 the type of computations with a false precondition) are equal, ΠP, P , Q, Q : Li. G ([P ] ( Fx : X. [Q ]) → simply via the eta-rule for functions and falsity. G ([P] ( Fx : X. [Q]) Another consequence (which we rely upon in our exam- ples) comes from our encoding of the points-to predicate of We can see that in some cases, the LNLD typings have separation logic. Our basic pointer type l 7→ X says only that computational advantages. For example, for the set constant, l contains a value of type X. We encode the points-to of sepa- we are able to use the implicit ∀ quantifier to make clear that ration logic l 7→ v using the type l 7→ Σx : X. x = v. Then, the while the pointer and the new value need to be passed as fact that the sigma-type is contractible (i.e., all its elements arguments, the old contents do not need to be passed in. This is not possible in HTT (due to the absence of intersections), 3 We say “the” equational theory because the βη-equality is canoni- and to work around this a two-state postcondition is used. cal. However, there are many desirable equations (such as those aris- As another example, the get constant doesn’t have a monadic ing from parametricity) which go far beyond it. effect in LNLD. As a result, any (totally correct) program in Hoare type tic and linear. The variable v ranges over intuitionistic values, theory should in principle be translatable into LNLD (neglect- u ranges over linear values, and σ denotes heaps. ing the much superior maturity and automation available for There are actually three operational semantics for our lan- Hoare type theory). guage. First, we have an evaluation relation e ⇓ v for intu- itionistic terms, a a linear evaluation relation hσ; ei ⇓ hσ0; ui Caveat There is, however, a significant difference between for linear terms, and a monadic evaluation relation hσ; ei LNLD and HTT: the latter supports general recursion, and hσ0; val ui for evaluating monadic terms. Both the linear and deals with partial correctness. LNLD requires a termination monadic semantics are store-passing relations, because they metric on fixed point definitions and only deals with total are imperative computations which may depend upon the correctness. state of memory. 
Most of the complexity in the of HTT comes The intuitionistic evaluation relation e ⇓ v is a standard >> from its treatment of recursion, such as the need for - call-by-name big-step evaluation relation for the lambda cal- closure to ensure admissibility for fixed point induction. We culus. We treat all types as values (and do not evaluate their have sidestepped these issues by focussing only on total subterms), and also include location values l in the syntactic correctness. category of values. Though there are no typing rules for loca- In fact, much of the subtlety in the design of LNLD lay tion values, they may arise during program evaluation when in the effort needed to rule out general recursion — in lan- fresh memory cells are allocated. guages with higher-order store, it is usually possible to write The linear evaluation relation hσ; ei ⇓ hσ0; ui evaluates the recursion operators by backpatching function pointers (aka machine configuration hσ; ei to hσ0; ui. This is also a call-by- “Landin’s knot”), and so we had to carefully exploit linear- name big-step evaluator. However, there are many rules in ity to prevent the typability of such examples. (This was this reduction relation, simply because we have many linear originally observed by Ahmed et al. [2].) Another example types in our language. The rules for functions (both linear is found in our fixed point operator, which defines recur- A ( B and dependent Πx : X. A) and pairs A & B follow the sive functions with an irrelevant termination metric. If we beta-rules of the equational theory, as expected. The let-style x e had given a general fixed point operator fix . , even with eliminators for I, A ⊗ B and Fx : X. A are more verbose, but an irrelevant termination metric, then looping computations also line up with the beta-rules of the theory. The eliminator would be definable. (This is connected to the ability of inter- G−1 e evaluates e to a term G t, and then executes t, again section types to detect evaluation order.) following the beta-rule. As a result, to formally argue that our approach makes A nuisance in our rules is that we need operational rules proving equations easier, we would need to give a version for the let-style rules (i.e., let [a, b] = e in e0) distributing T (−) of our monad which is partial, which would give a proof irrelevance through units and tensors. This has no com- calculus corresponding more closely to HTT. We hope to putational content, since the argument e is never evaluated, investigate these questions more deeply in future work. As but nevertheless we must include evaluation rules for it. matters stand, all we can say is that our semantics is in many Finally, we also have the dereference operation let (x, c0) = ways much less technically sophisticated than models such get(e, e0) in e00. This rule evaluates e to a location l, and e0 to as the one of Petersen et al. [34], even though it permits us to an irrelevant value ∗. Then it looks up the value v stored at prove a number of novel equations on programs. l, and binds that to x, and binds c0 to the irrelevant value ∗ before evaluating the continuation e00. 4. Metatheory One point worth making is that none of these rules affect We now establish the metatheory of our type system. Our ap- the shape of the heap – we can dereference pointers in the proach integrates the ideas of Harper [20] and Ahmed et al. pure linear fragment, but we cannot write or modify them. [2]. 
We begin by giving an untyped operational semantics for As a result, every linear reduction is really of the form hσ; ei ⇓ the language. In fact, we give three operational semantics: hσ; ui. We could have easily have made this into a syntactic one for pure intuitionistic terms, one for pure linear terms, restriction, but having two configurations is slightly more convenient when giving the realizability semantics. and one for monadic linear terms. 0 Finally, we have the monadic evaluation hσ; ei hσ ; val ui. Next, we use this operational semantics to give a seman- 0 tic realizability model, by defining the interpretation of each The rule for let-binding let val x = e in e is sequential com- position: it evaluates the first argument to a monadic value, logical connective as a partial equivalence relation (PER). Be- 0 cause we have dependent types, we need to define the set and then binds the result to x before evaluating e . The rule of syntactic types mutually recursively with the interpreta- for new e evaluates its (intuitionistic) argument to a value v, tion of each type, in the style of an inductive-recursive def- and then allocates a fresh location l to store it in, returning the location (and a dummy ∗ for the capability). The rule inition. Furthermore, we have to interpret intuitionistic and 0 0 for assignments e :=e00 e evaluates e to a location l, e linear terms differently: we use an ordinary PER on terms to 00 interpret intuitionistic types, and a PER on machine configu- to a value v, and e to a capability ∗. Then it modifies the heap so that l points to v. Similarly, the rule for deallocation rations (terms plus store) to interpret linear types. 0 This gives a semantic interpretation of closed types. We free (e, e ) evaluates e to l and then deallocates the pointer. extend this to an interpretation of contexts as sets of environ- Note that all of these operations could get stuck, if the ap- ments, and then prove the fundamental theorem of logical re- propriate piece of heap did not exist — the soundness proof lations for our type system, which establishes the soundness will establish that type-safe programs are indeed safe. and adequacy of our rules (including its equational theory). 4.2 CPPOs and Fixed Points 4.1 Operational Semantics Since types depend on terms in dependent type theory, we The syntax for our language of untyped terms is given in Fig- cannot define the grammar of types up front, before giving ure 10. For simplicity, all our types and terms live in the same an interpretation their interpretations. Instead, what we need syntactic category, regardless of whether they are intuitionis- is morally an inductive-recursive definition. As a result, we 0 e, t, X, A ::= Πx : X. Y | A ( B | λx : C. e | e e | λxˆ . e e ⇓ v | 1 | I | () | let () = e in e0 | Σx : X. Y | A ⊗ B | (e, e0) 0 e1 ⇓ λx : A. e [e2/x]e ⇓ v | π1 e | π2 e | let (x, y) = e in e | G e | G−1 e v ⇓ v e1 e2 ⇓ v | Fx : X. A | F (e, t) | let F (x, a) = t in t0 | > | A & B e ⇓ (e1, e2) e1 ⇓ v e ⇓ (e1, e2) e2 ⇓ v | ∀x : X. Y | ∃x : x. 
Y π1 e ⇓ v π2 e ⇓ v 0 | e =X e | refl | N | 0 | s(e) | iter(e, 0 → e0, s(x), y → e1) e ⇓ v e ⇓ 0 e0 ⇓ v | U | L i i s(e) ⇓ s(v) iter(e, 0 → e , s(x), y → e ) ⇓ v | x | fix f x = e 0 1 | [A] | let [x] = e in e | ∗ 0 e ⇓ s(n) iter(n, 0 → e0, s(x), y → e1) ⇓ v [n/x, v/y]e1 ⇓ v | e 7→ X | Loc | newX e | free (e, t) | l 0 0 00 0 iter(e, 0 → e0, s(x), y → e1) ⇓ v | let (x, a) = get(e, e ) in e | e :=e00 e | T (A) | val e | let val x = e in e0 0 e ⇓ fix f x = e0 [(fix f x = e0)/f, e /x]e0 ⇓ v v ::= λx : A. e | () | (e, e) | refl | G e | l | ∗ | 0 | s(v) e e0 ⇓ v | Πx : X. Y | A ( B | > | A & B | 1 | I | Σx : X. Y | A ⊗ B | Fx : X. B 0 0 hσ; ei ⇓ hσ ; ui | e =X e | e 7→ X | Loc | Ui | Li

u ::= λx. e | () | (e, e) | F (e, e) | λxˆ . e | (e, e0) 0 hσ; ui ⇓ hσ; ui | ∗ | val e | let val x = e in e | newX e | e :=e00 e 0 0 0 0 00 00 σ ::= · | σ, l : v hσ; e1i ⇓ hσ ; λx. e1 i hσ ; [e2/x]e1 i ⇓ hσ ; u i 00 00 hσ; e1 e2i ⇓ hσ ; u i

Figure 10. e t X A v u σ 0 0 00 00 Terms , , , , values , linear values , stores hσ; e1i ⇓ σ ; λxˆ . e hσ ; [e2/x]ei ⇓ hσ ; u i 00 00 hσ; e1 e2i ⇓ hσ ; u i

0 0 00 00 need to use a fixed theorem which is a little stronger than the hσ; ei ⇓ hσ ; (e1, e2)i hσ ; e1i ⇓ hσ ; u i 00 00 standard Kleene or Knaster-Tarski fixed point theorems. hσ; π1 ei ⇓ hσ ; u i

Recall that a pointed partial order is a triple (X, 6, ⊥) such 0 0 00 00 hσ; ei ⇓ hσ ; (e1, e2)i hσ ; e2i ⇓ hσ ; u i that X is a set, 6 is a partial order on X, and ⊥ is the least 00 00 element of X. A subset D ⊆ X is a directed set when every pair hσ; π2 ei ⇓ hσ ; u i of elements x, y ∈ D has an upper bound in D (i.e., there is a hσ; ei ⇓ hσ0; ()i hσ0; e0i ⇓ hσ00; ui z ∈ D such that x 6 z and y 6 z). A pointed partial order is hσ; let () = e in e0i ⇓ hσ00; ui complete (i.e., forms a CPPO) when every directed set D has a F 0 0 0 00 supremum D in X. hσ; ei ⇓ hσ ; (e1, e2)i hσ ; [e1/a, e2/b]e i ⇓ hσ ; ui The following fixed point theorem4 is in Harper [20], and hσ; let (a, b) = e in e0i ⇓ hσ00; ui is Theorem 8.22 in Davey and Priestley [13]. 0 0 0 00 hσ; ei ⇓ hσ ; F (e1, e2)i hσ ; [e1/x, e2/a]e i ⇓ hσ ; ui Theorem 1. (Fixed Points on CPPOs) If X is a CPPO, and hσ; let F (x, a) = e in e0i ⇓ hσ00; ui f : X → X is a monotone function on X, then f has a least fixed 0 0 0 0 0 point. e ⇓ G e hσ; e i ⇓ hσ ; ui hσ; ei ⇓ hσ ; u i −1 0 0 0 σ; G e ⇓ hσ ; ui hσ; let [] = e0 in ei ⇓ hσ ; u i

Proof. Construct the ordinal-indexed sequence xα, where: hσ; [∗/a, ∗/b]ei ⇓ hσ0; u0i hσ; let [a, b] = e in ei ⇓ hσ0; u0i x0 = ⊥ 0 x = f(x ) β+1 β e ⇓ l hσ; e0i ⇓ hσ0, l : v; ∗i hσ0, l : v; [v/x, ∗/c]e00i ⇓ hσ00; ui x = F x λ β<λ β hσ; let (x, c) = get(e, e0) in e00i ⇓ hσ00; ui Because f is monotone, we can show by transfinite induc- 0 tion that every initial segment is directed, which ensures the hσ; ei hσ ; val vi needed suprema exist and the sequence is well-defined. Now, we know there must be a stage λ such that xλ = hσ; val ei hσ; val ei xλ+1. If there were not, then we could construct a bijection 0 00 0 between the ordinals and the strictly increasing chain of el- hσ; ei ⇓ hσ ; ui e 6= val u0 hσ; ui hσ ; val u i 00 0 ements of the sequence x. However, the elements of the se- hσ; ei hσ ; val u i quence x are all drawn from X. Since X is a set, it follows that 0 0 hσ; ei hσ1; val e1i hσ1; [e1/x]e i hσ ; val vi the elements of x must themselves form a set. Since the ordi- 0 0 hσ; let val x = e in e i hσ ; val vi nals do not form a set (they are a proper class), this leads to a contradiction. Hence, f must have a fixed point. e ⇓ v l 6∈ dom(σ) hσ; newX ei hσ, l : v; val F (l, ∗)i 4.3 Partial Equivalence Relations and Semantic Type e ⇓ l e0 ⇓ v hσ; e00i ⇓ hσ0, l : v0; ∗i Systems 0 0 0 hσ, l : v ; e :=e00 e i hσ , l : v; val ∗i We now need to define what our types are, and how to interpret them. Following the example of Nuprl, we will e ⇓ l hσ; ti ⇓ hσ0, l : v; ∗i 0 hσ; free (e, t)i hσ ; val ()i 4 A constructive, albeit impredicative, proof of this theorem is possi- ble: this is Pataraia’s theorem [17]. Figure 11. Operational Semantics interpret types as partial equivalence relations of terms, with Loc = {(l, l) | l ∈ Loc} the intuition that if (e, e) ∈ X, then e is an element of the 1ˆ = {(() , ())} 0 0 type X, and that if (e, e ) ∈ X then e is equal to e . Because of Id(a, b, E) = {(refl, refl) | (a, b) ∈ E} Π(E, Φ) = {(v, v0) | ∀(a, a0) ∈ E. (va, v0a0) ∈ Φ(a)} type dependency, we will then simultaneously define the PER 0 0 0 0 of types, and an interpretation function sending each type to Σ(E, Φ) = {((a, b) , (a , b )) | (a, a ) ∈ E ∧ (b, b ) ∈ Φ(a)} G(C) = {(G e, G e0) | (h·; ei , h·; e0i) ∈ C} the PER it defines. ∀ˆ (E, Φ) = {(v, v0) | ∀(e, e0) ∈ E. (v, v0) ∈ Φ(e)} A partial equivalence relation (PER) is a symmetric, transi- ∃ˆ (E, Φ) = {(v, v0) | ∃(e, e0) ∈ E. (v, v0) ∈ Φ(e)}† k k tive relation on closed, terminating expressions. We further Nˆ = (s (0), s (0)) k is a natural number 0 0 require that PERs be closed under evaluation. Given a PER R, >ˆ I = {(v, v ) | v ∈ Val ∧ v ∈ Val}  we require that for all e, e0, v, v0 such that e ⇓ v and e0 ⇓ v0, we have that (e, e0) ∈ R if and only if (v, v0) ∈ R. Given a PER P, we write P∗ to close it up under evaluation. We extend this Figure 12. Intuitionistic PER constructions notation to functions, as well, so that given a function f into PERs, f∗(x) = (f x)∗. Finally, given a relation R, we write R† to take its symmetric, transitive closure. We will use PERs to pairs. The intersection type ∀ˆ (E, Φ) construction just takes interpret intuitionistic types. the intersection over Φ(a) for all a in E, which is a PER since A partial evaluation relation on configurations (CPER) is a PERs are closed under intersection. The union type is a little symmetric, transitive relation on terminating machine con- more complicated; it takes the union of PERs, and then has figurations hσ; ei. 
We further require that they be closed un- to take the symmetric transitive closure because PERs are not der linear evaluation. Given a CPER M, we require that for closed under unions. The natural number relation Nˆ relates 0 ˆ all hσ1; e1i such that hσ1; e1i ⇓ hσ1; u1i and hσ2; e2i such that numerals to themselves, and >I relates any two values. 0 hσ2; e2i ⇓ hσ2; u2i, we have (hσ1; e1i , hσ2; e2i) ∈ M if and only The linear constructions are given in Figure 13. The >ˆ re- 0 0 if (hσ1; u1i , hσ2; u2i) ∈ M. We will use CPERs to interpret lin- lation relates two unit values under any stores at all, whereas ear types, with the idea that the store components of each the Iˆ relation relates two units only in the empty store. Simi- element of the PER corresponds to the resources owned by larly, the A &ˆ B relation relates two pairs, as long as the first that term. components are related at A using all of the store, and the Note that since evaluation (both ordinary and linear) is second components are related at B using all of the store. deterministic, an evaluation-closed PER is determined by its The tensor relation C ⊗ˆ D relation, on the other hand, re- sub-PER on values (or value configurations). As a result, lates two pairs if their stores can divided into two disjoint we will often define PERs and functions on PERs by simply parts, one relating the first components in the C relation, and giving the relation for values. the other relating the second components in the D relation. A semantic linear/non-linear type system is a four-tuple (I ∈ The semantic linear function space constructor C (ˆ D re- PER, L ∈ PER, φ : |I| → PER, ψ : |L| → CPER) such that φ lates functions, such that if we have values in C with dis- respects I and ψ respects L. We say that I are the semantic joint store, then the applications are related in D with the intuitionistic types, L are the semantic linear types, and φ and ψ combined store (much like the “magic wand” of separation are the type interpretation functions. logic). The F(E, Ψ) relation consists of pairs of stores and val- The set of type systems forms a CPPO. The least ele- ues F (a, b) and F (a0, b0), such that a and a0 are intuitionistic ment is the type system (∅, ∅,!PER,!CPER) with an empty set terms related in E, and b and b0 are related in Ψ(a) using their of intuitionistic and linear types. The ordering (I, L, φ, ψ) 6 whole stores. The ΠL relation works similarly, except that it (I0, L0, φ0, ψ0) is given by set inclusion on I ⊆ I0 and L ⊆ L0, 0 relates on dependent linear functions rather than pairs. The when there is agreement between φ and φ on the common linear intersection and union type constructors work simi- part of their domains, and likewise for ψ and ψ0 (which we 0 0 larly to their intuitionistic counterparts. write φ v φ and ψ v ψ ). Given a directed set, the join is The proof irrelevance relation Irr(A) relates two squash given by taking unions pointwise (treating the functions φ tokens ∗ at some stores, if there are linear terms that could and ψ as graphs). relate those stores at A. The pointer construction Ptr(e, E) Next, our goal is to define a semantic type system to simply says that the heap consists of a single memory cell interpret our syntax. To do this, we will define a monotone addressed by e, and whose contents are related at E. (Its type system operator Tk. 
Since type systems form a CPPO, only term is ∗, because we don’t care about its value, as we can define the semantic type system we use to interpret we use it solely to track capabilities.) The Tˆ (A) construc- T our syntax as the least fixed point of k. tion relates monadic computations, if each side reduces to First, we’ll define some constructions on PERs in Fig- a store/value configuration which is related by A. Further- ures 12 and 13. There is one construction for each of the type more, only “frame-respecting” computations can be related connectives in our language, explaining what each connec- — if a computation works in a smaller store, it should also tive means “logical relations style” — though these construc- work in a larger store, without modifying that larger store. tions do not yet define a logical relation, since they are just We can now define an operator Tk on type systems: constructions on PERs. 0∗ 0∗ 0∗ 0∗ The intuitionistic constructions are given in Figure 12. Tk(I, L, φ, ψ) = (I , L , φ , ψ ) The Loc relation relates each location to itself, and the unit where I0 and L0 are defined in Figure 14 and and where φ0 relation 1ˆ relates the unit value to itself. The identity relation and ψ0 are defined in Figure 15. Id(a, b, E) is equal to {(refl, refl)} if a and b are in the relation The definition of I0 and L0 includes each of the base types, E, and is empty otherwise. The Π construction takes a PER E, and then for each of the higher-arity operations draws the and a function Φ from elements of E to PERs (which respects argument types from I or L, so that (for example) L0 includes the relation E). From that data it returns the PER of function 0 0 0 0 0 every A ⊗ B where A and B are in L. The definition of φ values (f, f ) which take E-related pairs (a, a ) to (f a, f a ) and ψ0 is by analysis of the structure of the elements of I0 in E(a), with the Σ(E, Φ) construction working similarly for and L0, using the PER constructions we described earlier >ˆ = {(hσ; ()i , hσ0; ()i) | σ, σ0 ∈ Store}

(The figure gives the formal clauses for the linear relations described above: ⊤̂, Î, A &̂ B, C ⊗̂ D, C ⊸̂ D, F(E, Ψ), Π_L(E, Ψ), ∀̂_L(E, Ψ), ∃̂_L(E, Ψ), T̂(A), Ptr(e, E), and Irr(A).)

Figure 13. Linear PER Constructions

(The figure defines the sets I′ and L′ of related type expressions: each base type is related to itself, and each type constructor relates two types whose corresponding argument types are drawn from I or L, e.g. L′ contains (A ⊗ B, A′ ⊗ B′) whenever L(A, A′) and L(B, B′); the universes Ui and Li are included for i < k.)

Figure 14. Definition of type part of Tk

Lemma 1 (Tk is monotone). We have that Tk is a monotone function on type systems.

Proof. (Sketch) This lemma really makes two claims: first, that Tk is indeed a function on semantic type systems (in particular, that φ′ and ψ′ respect I′ and L′), and second, that it is monotone. Both of these follow from a case analysis on the shape of the possible elements of I′ and L′. See the technical report for details.

Then, we can take the interpretation of the i-th universe to be given by the least fixed point of Ti. Furthermore, we also have a cumulativity property:

Lemma 2 (Universe Cumulativity). If i ≤ k then Ti ≤ Tk.

We sometimes write T for the limit of the countable universe hierarchy, and write U, L, φ, and ψ for its components.

    φ′(Loc)            = Loc
    φ′(N)              = N̂
    φ′(⊤_I)            = ⊤̂_I
    φ′(e1 =_X e2)      = Id(e1, e2, φ(X))
    φ′(Πx : X. Y[x])   = Π(φ(X), λv. φ(Y[v]))
    φ′(Σx : X. Y[x])   = Σ(φ(X), λv. φ(Y[v]))
    φ′(∀x : X. Y[x])   = ∀̂(φ(X), λv. φ(Y[v]))
    φ′(∃x : X. Y[x])   = ∃̂(φ(X), λv. φ(Y[v]))
    φ′(G A)            = G(ψ(A))
    φ′(Ui) when i < k  = let (Û, L̂, φ̂, ψ̂) = fix(Ti) in Û
    φ′(Li) when i < k  = let (Û, L̂, φ̂, ψ̂) = fix(Ti) in L̂

    ψ′(I)              = Î
    ψ′(A ⊗ B)          = ψ(A) ⊗̂ ψ(B)
    ψ′(⊤)              = ⊤̂
    ψ′(A & B)          = ψ(A) &̂ ψ(B)
    ψ′(A ⊸ B)          = ψ(A) ⊸̂ ψ(B)
    ψ′(Fx : X. A[x])   = F(φ(X), λv. ψ(A[v]))
    ψ′(Πx : X. A[x])   = Π_L(φ(X), λv. ψ(A[v]))
    ψ′(∀x : X. A[x])   = ∀̂_L(φ(X), λv. ψ(A[v]))
    ψ′(∃x : X. A[x])   = ∃̂_L(φ(X), λv. ψ(A[v]))
    ψ′(T(A))           = T̂(ψ(A))
    ψ′(e ↦ X)          = Ptr(e, φ(X))

Figure 15. Definition of Tk, interpretation part

4.4 Semantic Environments

Our semantic type system T gives us an interpretation of closed types. Before we can prove the correctness of our typing rules, we have to extend our interpretation to open types and terms, and to do that, we have to give an interpretation of environments.

In Figure 16, we give the interpretation of contexts. The meaning of an intuitionistic context [[Γ]] is given as a set of binary substitutions γ. By binary substitution, we mean that for every x ∈ dom(γ), we have that γ(x) = (e, e′) for some pair of terms. We write γ1 for the ordinary unary substitution substituting the first projection of γ for each variable in γ, and γ2 for the ordinary substitution substituting the second projection. We write γ(e) for the pair (γ1(e), γ2(e)).

The interpretation of the empty context is the empty substitution. The interpretation of Γ, x : X is every substitution (γ, (e, e′)/x) where γ ∈ [[Γ]], γ1(X) is a type in U, and (e, e′) are related by φ(γ1(X)).

The interpretation of linear contexts ∆ is a little more complicated. Instead of a binary substitution, its elements consist of pairs of store/substitution pairs. Intuitively, a linear environment is a set of linear values for each linear variable, plus the store needed to interpret the value. So the interpretation of a single-variable context a : A is all the pairs of configurations (⟨σ; e/a⟩, ⟨σ′; e′/a⟩), where ⟨σ; e⟩ and ⟨σ′; e′⟩ are related by ψ(A) (and A is a linear type in L).

The interpretation of the empty linear context is similar to the interpretation for I; it relates two empty substitutions in empty heaps. Likewise, the interpretation of contexts ∆1, ∆2 is very similar to the interpretation of the tensor product. When we interpret a context ∆1, ∆2, we take it to be a pair of ⟨σ; δ1, δ2⟩ and ⟨σ′; δ1′, δ2′⟩, such that we can divide the stores so that σ = σ1 · σ2 and σ′ = σ1′ · σ2′, and ⟨σ1; δ1⟩ is related to ⟨σ1′; δ1′⟩ at ∆1, and likewise ⟨σ2; δ2⟩ is related to ⟨σ2′; δ2′⟩ at ∆2.
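As a concrete (if deliberately simplistic) illustration of binary substitutions, the following OCaml sketch models γ as a finite map from variables to pairs of terms, together with the projections γ1 and γ2 and the action γ(e) = (γ1(e), γ2(e)). The toy term datatype, and the simplifying assumption that only closed terms are ever substituted, are ours and not the paper's:

    module VarMap = Map.Make (String)

    type term =
      | Var of string
      | Unit
      | Lam of string * term
      | App of term * term

    (* a binary substitution maps each variable to a pair of terms *)
    type binary_subst = (term * term) VarMap.t

    (* the two projections are ordinary unary substitutions *)
    let proj1 (g : binary_subst) : term VarMap.t = VarMap.map fst g
    let proj2 (g : binary_subst) : term VarMap.t = VarMap.map snd g

    (* apply a unary substitution; since we only substitute closed terms,
       we do not bother with capture avoidance in this toy version *)
    let rec subst (s : term VarMap.t) (e : term) : term =
      match e with
      | Var x -> (match VarMap.find_opt x s with Some v -> v | None -> e)
      | Unit -> Unit
      | Lam (x, body) -> Lam (x, subst (VarMap.remove x s) body)
      | App (f, a) -> App (subst s f, subst s a)

    (* gamma(e) is the pair (gamma1(e), gamma2(e)) *)
    let apply_binary (g : binary_subst) (e : term) : term * term =
      (subst (proj1 g) e, subst (proj2 g) e)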

    [[·]]        = { ⟨⟩ }

    [[Γ, x : X]] = { (γ, (e1, e2)/x) | γ ∈ [[Γ]] ∧ (γ1(X), γ2(X)) ∈ U ∧ (e1, e2) ∈ φ(γ1(X)) }

    [[·]]        = { (⟨·; ⟨⟩⟩, ⟨·; ⟨⟩⟩) }

    [[∆1, ∆2]]   = { (⟨σ; δ1, δ2⟩, ⟨σ′; δ1′, δ2′⟩) | ∃σ1, σ2, σ1′, σ2′.
                       σ = σ1 · σ2 ∧ σ′ = σ1′ · σ2′ ∧
                       (⟨σ1; δ1⟩, ⟨σ1′; δ1′⟩) ∈ [[∆1]] ∧ (⟨σ2; δ2⟩, ⟨σ2′; δ2′⟩) ∈ [[∆2]] }

    [[a : A]]    = { (⟨σ; e/a⟩, ⟨σ′; e′/a⟩) | (A, A) ∈ L ∧ (⟨σ; e⟩, ⟨σ′; e′⟩) ∈ ψ(A) }

Figure 16. Interpretation of Environments
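The clauses for [[∆1, ∆2]] (like the tensor construction) hinge on the disjoint combination σ1 · σ2 of stores. The following OCaml sketch of that partial operation, with stores modelled as finite maps from locations to values, is our own illustration and not part of the formal development:

    module LocMap = Map.Make (Int)

    type value = VUnit | VNum of int
    type store = value LocMap.t

    let disjoint (s1 : store) (s2 : store) : bool =
      LocMap.for_all (fun l _ -> not (LocMap.mem l s2)) s1

    (* sigma1 . sigma2 is defined only when the domains are disjoint,
       so the combination returns an option *)
    let combine (s1 : store) (s2 : store) : store option =
      if disjoint s1 s2
      then Some (LocMap.union (fun _ v _ -> Some v) s1 s2)
      else None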

4.5 Fundamental Property

We can now state the fundamental lemma of logical relations:

Theorem 2 (Fundamental Property). Suppose Γ ok and γ ∈ [[Γ]] and Γ ⊢ ∆ ok and (⟨σ1; δ1⟩, ⟨σ2; δ2⟩) ∈ [[γ1(∆)]]. Then we have that:

1. If Γ ⊢ X type then γ(X) ∈ U.
2. If Γ ⊢ X ≡ Y type then (γ1(X), γ2(Y)) ∈ U.
3. If Γ ⊢ e : X then γ(e) ∈ φ(γ1(X)).
4. If Γ ⊢ e1 ≡ e2 : X then (γ1(e1), γ2(e2)) ∈ φ(γ1(X)).
5. If Γ ⊢ A linear then γ(A) ∈ L.
6. If Γ ⊢ A ≡ B linear then (γ1(A), γ2(B)) ∈ L.
7. If Γ; ∆ ⊢ e : A then (⟨σ1; γ1(δ1(e))⟩, ⟨σ2; γ2(δ2(e))⟩) ∈ ψ(γ1(A)).
8. If Γ; ∆ ⊢ e1 ≡ e2 : A then (⟨σ1; γ1(δ1(e1))⟩, ⟨σ2; γ2(δ2(e2))⟩) ∈ ψ(γ1(A)).
9. If Γ; ∆ ⊢ e ÷ A then there are t and t′ such that for each γ ∈ [[Γ]] and (⟨σ1; δ1⟩, ⟨σ2; δ2⟩) ∈ [[γ1(∆)]], we have (⟨σ1; δ1(γ1(t))⟩, ⟨σ2; δ2(γ2(t′))⟩) ∈ ψ(γ1(A)).

Proof. The theorem follows by a massive mutual induction over all of the rules of all of the judgements in the system. The full proof is in the companion technical report.

Worth noting is that in many of the cases, we make the arbitrary choice to use the first projection of the substitution (i.e., γ1 or δ1) rather than the second. Connoisseurs of dependent types may worry that difficulties could be lurking in this choice. Happily, there are no problems, because we established that T is a semantic type system up front, and so we know that φ and ψ respect U and L.

Evaluation of all closed well-typed terms terminates. The fundamental property also implies consistency, since every well-typed term is in the logical relation and inconsistent types (like 0 = 1) have empty relations. We also have adequacy — any two provably equal terms of natural number type will evaluate to the same numeral (and similarly for the linear type Fn : N. I and the monadic type T(Fn : N. I)).
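To spell the consistency claim out in terms of the constructions above (reading 0 = 1 as the identity type 0 =_N 1, which is our gloss on the abbreviation), the interpretation clauses give

\[
\varphi(0 =_{N} 1) \;=\; \mathrm{Id}(0,\, 1,\, \hat{N}) \;=\; \emptyset ,
\]

since N̂ relates numerals only to themselves. By item 3 of the Fundamental Property, a closed term e with ⊢ e : 0 =_N 1 would therefore have to satisfy (e, e) ∈ ∅, which is impossible.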
5. Implementation

We have written a small type checker for a version of our theory in OCaml. We say "version of", because the type system we present in this paper is (extremely) undecidable, and so we have needed to make a number of modifications to the type theory to make it implementable. We emphasize that even though all of the modifications we made seem straightforward, we have done no proofs about the implementation: we still need to show that everything is consistent and conservative with respect to the extensional system.

Equality Reflection  Our implementation does not support equality reflection, or indeed most of the η rules. Our system implements an untyped conversion relation, and opportunistically implements a few of the easier eta-rules (for example, λx. f x ≡ f). So our implementation is really an intensional type theory, rather than an extensional one.

Intersection and Union Types  The formulation of intersections and unions in this paper is set up to optimize the simplicity of the side conditions — to check a term e against the type ∀x : X. Y, we just add x : X to the context, and check that x does not occur free in e. However, eliminating an intersection type requires guessing the instantiation.

In our implementation, we have to write an explicit abstraction λx. e to introduce a universal quantifier ∀x : X. Y, and eliminating an intersection requires an explicit annotation, so that if f : ∀x : X. Y and we want to instantiate it with e : X, then we need to write f e : [e/x]Y. Similarly, we give a syntax for implicit existentials ∃x : X. Y where the introduction is an explicit pack pack(e, t), with e : X and t : [e/x]Y, and the elimination is an unpack let pack(x, y) = e in e′.

However, we track which variables are computationally relevant or not: in both a forall-introduction λx. e and an exists-elimination let pack(x, y) = e in e′, the x is not computationally relevant, and may only appear within type annotations, implicit applications and the first argument of packs. This restriction is in the style of the "resurrection operator" of Pfenning [35] (which was later extended to type dependency by Abel [1]). Our overall result also looks very much like ICC∗ [6], the variant of the implicit calculus of constructions [27] with decidable typechecking.

Proof Irrelevance  In this paper, we implement linear proof irrelevance with a "squashed" term ∗. To turn this into something we can typecheck, we note that the premise of the rule requires that we produce an irrelevant derivation Γ; ∆ ⊢ e ÷ A. The rules of this judgement (see Figure 8) are actually a linear version of the monadic modality of Pfenning and Davies [36] (and in fact we borrow their notation). So, to turn this into something we can typecheck, all we have to do is to replace ∗ with a term [e], where we can derive Γ; ∆ ⊢ e ÷ A.

We should also extend definitional equality to equate all terms of type [A], though we have not yet implemented this, as that requires a typed conversion relation.

Evaluation  Our preliminary experience is that equality is the most challenging issue for us. We had to postulate numerous equalities that are sound in our model, but not applied automatically, and this is quite noisy. For example, we needed to postulate function extensionality to make things work, since we formulate lists in terms of W-types [25], and they are not well-behaved intensionally. In addition, our by-hand derivations of uses of fix f x = e often make use of equality reflection, but we have not yet found a completely satisfactory formulation of its typing rule that works well in an intensional setting. An easy fix is to add lists (and other inductive types) as a primitive to our system, and another possibility is to move to something like observational type theory [3].
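To illustrate the kind of conversion check described under Equality Reflection above, here is a minimal OCaml sketch of an untyped, structural conversion that opportunistically applies the η-rule λx. f x ≡ f. It is our own toy version rather than the actual checker: a realistic implementation would normalize terms first and would handle α-conversion and freshness properly.

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    let rec free_in x = function
      | Var y -> x = y
      | Lam (y, b) -> x <> y && free_in x b
      | App (f, a) -> free_in x f || free_in x a

    (* opportunistic eta: contract (lam x. f x) to f when x is not free in f *)
    let eta_contract = function
      | Lam (x, App (f, Var y)) when x = y && not (free_in x f) -> f
      | t -> t

    (* naive renaming used to compare binders; assumes the new name is fresh *)
    let rec rename old_x new_x = function
      | Var y -> if y = old_x then Var new_x else Var y
      | Lam (y, b) -> if y = old_x then Lam (y, b) else Lam (y, rename old_x new_x b)
      | App (f, a) -> App (rename old_x new_x f, rename old_x new_x a)

    (* untyped structural conversion, trying eta at each comparison step *)
    let rec conv t1 t2 =
      match (eta_contract t1, eta_contract t2) with
      | Var x, Var y -> x = y
      | Lam (x, b1), Lam (y, b2) -> conv b1 (rename y x b2)
      | App (f1, a1), App (f2, a2) -> conv f1 f2 && conv a1 a2
      | _, _ -> false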
6. Discussion and Related Work

We divide the discussion into several categories, even though many individual pieces of work straddle multiple categories.

Dependent Types  As an extensional type theory, our system is constructed in the Nuprl [12] tradition. Our metatheory was designed following the paper of Harper [20], and many of our logical connectives were inspired by work on Nuprl. For example, our treatment of intersection and union types is similar to the work of Kopylov [23] (as well as PER models of polymorphism such as the work of Bainbridge et al. [5]). Our treatment of proof irrelevance is also inspired by the "squash types" of Nogin [32], though because of the constraints of linearity we could not directly use his rules.

Linearity  Cervesato and Pfenning [10] proposed the Linear Logical Framework (LLF), a core type theory for integrating linearity and dependent types. This calculus consisted of the negative fragment of MALL (⊸, &, ⊤) together with a dependent function space Πx : A. B, with the restriction that in any application f e, the argument e could contain no free linear variables. In our calculus, this type corresponds to Πx : G A. B. This neatly explains why the argument of the application should not have any free linear variables; every LLF application f e corresponds to an application f (G e) in our calculus!

Very recently, Vákár [42] has proposed an extension of Linear LF, in which he extends linear LF with an intuitionistic sigma-type Σx : A. B, whose introduction form is a pair whose first component does not have any free linear variables. In our calculus, this can be encoded as Fx : G A. B. This encoding yields a nice explanation of why the first component of the pair is intuitionistic, and also why the dependent pair has a let-style elimination rather than a projective eliminator. (He also extends the calculus with many other features such as equality types and universes, just as we have, but in both cases these extensions are straightforward and so there is nothing interesting to compare.) Instead of an operational semantics, he gives a categorical axiomatization, and in future work we will see if his semantics applies to our syntax.

Polarization, Value Dependency and Proof Irrelevance  Another extension of LLF is the Concurrent Logical Framework (or CLF) of Watkins et al. [43], which extends LLF with positive types (such as A ⊕ B, A ⊗ B, and the intuitionistic sigma types), as well as a computational monad {A} to mediate between uses of positive and negative types. This seems to be an instance of focalization, in which a pair of adjoint modalities ↑N and ↓P are used to segregate positive and negative types, with the monadic type {A} arising as the composite ↑↓P.

The distinction between positive and negative corresponds to a value/computation distinction (cf. [24, 44]). While the LNL calculus is also built on an adjunction, this adjunction reflects the linear/nonlinear distinction and not the value/computation distinction. As a result, our language is pure (validating the full βη theory for all the connectives), and we can freely mix positive and negative types (e.g., A ⊗ B ⊸ C).

However, polarization remains interesting from both purely theoretical and more practical points of view. Spiwack [39] gives dependent L, a polarized, linear, dependent type theory, from a purely proof-theoretic perspective. The key observation is that polarized calculi have a weaker notion of substitution (namely, variables stand for values, rather than arbitrary terms), and as a result, (a) dependent types may depend only on values (which forbids large eliminations), but (b) since contraction of closed values is "safe" (since no computations can occur in types), linear variables may be freely duplicated when they occur in types.

Similar ideas are also found in the F∗ language of Swamy et al. [41]. This is an extension of the F# language with type dependency. Since F∗ is an imperative language in the ML tradition, for soundness dependency is restricted to value dependency, just as in dependent L. However, the treatment of dependency on linear variables is somewhat different — F∗ has a universe of proof-irrelevant (intuitionistic) types, and affine variables can appear freely in proof-irrelevant types.

Imperative Programming with Capabilities  In addition to languages exploring "full-spectrum" type dependency (including Hoare Type Theory, which we discuss in Section 3.3), there are also a number of languages which have looked at using linear types as a programming feature.

ATS with linear dataviews, by Zhu and Xi [45], is not a dependently-typed language, since it very strictly segregates its proof and programming language. However, it has an extremely extensive language of proofs, and this proof language includes linear types, which are used to control memory accesses in a fashion very similar to L3 or separation logic. Interestingly, the entire linear sublanguage is computationally irrelevant – linear proofs are proofs, and hence erased at runtime.

Similarly, the calculus of capabilities of Pottier [37] represents another approach towards integrating linear capabilities with a programming language. As with ATS, capabilities are linear and irrelevant, and this system also includes support for advanced features such as the anti-frame rule. (A simpler proof-irrelevant capability system can be found in the work of Militão et al. [26], who give a kernel linear capability calculus.)

At this point, it should be clear that the combination of proof irrelevance and linearity is something which has appeared frequently in the literature, in many different guises. Unfortunately, it has mostly been introduced on an ad-hoc basis, and is not something which has been studied in its own right. As a result, we cannot give a crisp comparison of our proof irrelevance modality with this other work, since the design space seems very complex.

Other Work  One piece of work we wish to draw attention to is the enriched effect calculus of Egger et al. [16]. This calculus is like the (non-dependent) LNL calculus, except that it restricts linear contexts to at most a single variable (so that they are stoups, in Girard's terminology). One striking coincidence is that EEC has a mixed pair X !⊗ A, which looks very much like our linear dependent pair Fx : X. A.

Another direction for future work is to enrich the heap model. Our model of imperative computations is a generalization of Ahmed et al. [2], which in turn builds a logical relation model on top of the basic heap model of separation logic [38]. However, over the past decade, this model of heaps has been vastly extended, both from a semantic perspective [9, 15], and from the angle of verification [14, 22].
References

[1] A. Abel. Irrelevance in type theory with a heterogeneous equality judgement. In Foundations of Software Science and Computational Structures, pages 57–71. Springer, 2011.
[2] A. Ahmed, M. Fluet, and G. Morrisett. L3: A linear language with locations. Fundamenta Informaticae, 77(4):397–449, 2007.
[3] T. Altenkirch, C. McBride, and W. Swierstra. Observational equality, now! In PLPV, pages 57–68. ACM, 2007.
[4] A. Appel, R. Dockins, A. Hobor, L. Beringer, J. Dodds, G. Stewart, S. Blazy, and X. Leroy. Program Logics for Certified Compilers. Cambridge University Press, 2014.
[5] E. S. Bainbridge, P. J. Freyd, A. Scedrov, and P. J. Scott. Functorial polymorphism. Theoretical Computer Science, 70(1):35–64, 1990.
[6] B. Barras and B. Bernardo. The implicit calculus of constructions as a programming language with dependent types. In Foundations of Software Science and Computational Structures, pages 365–379. Springer, 2008.
[7] N. Benton. A mixed linear and non-linear logic: Proofs, terms and models. In Computer Science Logic (CSL), 1994.
[8] N. Benton and P. Wadler. Linear logic, monads and the lambda calculus. In Logic in Computer Science (LICS), 1996.
[9] L. Birkedal, K. Støvring, and J. Thamsborg. A relational realizability model for higher-order stateful ADTs. The Journal of Logic and Algebraic Programming, 81(4):491–521, 2012.
[10] I. Cervesato and F. Pfenning. A linear logical framework. Inf. Comput., 179(1):19–75, 2002.
[11] A. Chlipala. Mostly-automated verification of low-level programs in computational separation logic. ACM SIGPLAN Notices, 46(6):234–245, 2011.
[12] R. L. Constable. Constructive mathematics as a programming logic I: Some principles of theory. In Annals of Mathematics, volume 24, pages 21–37. Elsevier, 1985.
[13] B. A. Davey and H. A. Priestley. Introduction to Lattices and Order. Cambridge University Press, 2002.
[14] T. Dinsdale-Young, L. Birkedal, P. Gardner, M. Parkinson, and H. Yang. Views: compositional reasoning for concurrent programs. ACM SIGPLAN Notices, 48(1):287–300, 2013.
[15] D. Dreyer, G. Neis, and L. Birkedal. The impact of higher-order state and control effects on local relational reasoning. Journal of Functional Programming, 22(4-5):477–528, 2012.
[16] J. Egger, R. E. Møgelberg, and A. Simpson. Enriching an effect calculus with linear types. In Computer Science Logic, pages 240–254. Springer, 2009.
[17] M. H. Escardó. Joins in the frame of nuclei. Applied Categorical Structures, 11(2):117–124, 2003.
[18] J.-Y. Girard. Linear logic. Theoretical Computer Science, 50(1):1–101, 1987.
[19] J.-Y. Girard. Linear logic: Its syntax and semantics. In Advances in Linear Logic, volume 222 of London Mathematical Society Lecture Notes. CUP, 1995.
[20] R. Harper. Constructing type systems over an operational semantics. Journal of Symbolic Computation, 14(1):71–84, 1992.
[21] A. Hobor and J. Villard. The ramifications of sharing in data structures. ACM SIGPLAN Notices, 48(1):523–536, 2013.
[22] J. B. Jensen and L. Birkedal. Fictional separation logic. In Programming Languages and Systems, pages 377–396. Springer, 2012.
[23] A. Kopylov. Type Theoretical Foundations for Data Structures, Classes, and Objects. PhD thesis, 2004.
[24] P. B. Levy. Call-By-Push-Value: A Functional/Imperative Synthesis, volume 2 of Semantic Structures in Computation. Springer, 2004.
[25] P. Martin-Löf and G. Sambin. Intuitionistic Type Theory. Bibliopolis, Naples, 1984.
[26] F. Militão, J. Aldrich, and L. Caires. Substructural typestates. In Proceedings of the ACM SIGPLAN 2014 Workshop on Programming Languages meets Program Verification, pages 15–26. ACM, 2014.
[27] A. Miquel. The implicit calculus of constructions extending pure type systems with an intersection type binder and subtyping. In TLCA, pages 344–359. Springer, 2001.
[28] A. Nanevski, G. Morrisett, and L. Birkedal. Polymorphism and separation in Hoare type theory. In J. H. Reppy and J. L. Lawall, editors, ICFP, pages 62–73. ACM, 2006.
[29] A. Nanevski, G. Morrisett, and L. Birkedal. Hoare type theory, polymorphism and separation. Journal of Functional Programming, 18(5-6):865–911, 2008.
[30] A. Nanevski, V. Vafeiadis, and J. Berdine. Structuring the verification of heap-manipulating programs. In M. V. Hermenegildo and J. Palsberg, editors, POPL, pages 261–274. ACM, 2010.
[31] Z. Ni, D. Yu, and Z. Shao. Using XCAP to certify realistic systems code: Machine context management. In TPHOLs, pages 189–206, 2007.
[32] A. Nogin. Quotient types: A modular approach. In Theorem Proving in Higher Order Logics, pages 263–280. Springer, 2002.
[33] P. W. O'Hearn and D. J. Pym. The logic of bunched implications. Bulletin of Symbolic Logic, 5(2):215–244, 1999.
[34] R. L. Petersen, L. Birkedal, A. Nanevski, and G. Morrisett. A realizability model for impredicative Hoare type theory. In Programming Languages and Systems, pages 337–352. Springer, 2008.
[35] F. Pfenning. Intensionality, extensionality, and proof irrelevance in modal type theory. In LICS 2001, Proceedings, pages 221–230. IEEE, 2001.
[36] F. Pfenning and R. Davies. A judgmental reconstruction of modal logic. Mathematical Structures in Computer Science, 11(4):511–540, 2001.
[37] F. Pottier. Syntactic soundness proof of a type-and-capability system with hidden state. Journal of Functional Programming, 23(1):38–144, Jan. 2013.
[38] J. C. Reynolds. Separation logic: A logic for shared mutable data structures. In Logic in Computer Science, 2002. Proceedings of the 17th Annual IEEE Symposium on, pages 55–74. IEEE, 2002.
[39] A. Spiwack. A dissection of L, 2014. URL http://assert-false.net/arnaud/papers.
[40] K. Svendsen, L. Birkedal, and A. Nanevski. Partiality, state and dependent types. In C.-H. L. Ong, editor, TLCA, volume 6690 of Lecture Notes in Computer Science, pages 198–212. Springer, 2011.
[41] N. Swamy, J. Chen, C. Fournet, P.-Y. Strub, K. Bhargavan, and J. Yang. Secure distributed programming with value-dependent types. In ICFP, pages 266–278, 2011.
[42] M. Vákár. Syntax and semantics of linear dependent types, 2014. URL http://arxiv.org/abs/1405.0033.
[43] K. Watkins, I. Cervesato, F. Pfenning, and D. Walker. A concurrent logical framework: The propositional fragment. In TYPES, pages 355–377, 2003.
[44] N. Zeilberger. On the unity of duality. Ann. Pure Appl. Logic, 153(1-3):66–96, 2008.
[45] D. Zhu and H. Xi. Safe Programming with Pointers through Stateful Views. In PADL, pages 83–97, January 2005.