Gradual Refinement Types
Extended Version with Proofs

Nico Lehmann∗
PLEIAD Laboratory
Computer Science Department (DCC)
University of Chile
[email protected]

Éric Tanter†
PLEIAD Laboratory
Computer Science Department (DCC)
University of Chile
[email protected]

∗ Funded by CONICYT-PCHA/Magíster Nacional/2015-22150894
† Partially funded by Fondecyt Project 1150017

Abstract

Refinement types are an effective language-based verification technique. However, as with any expressive typing discipline, its strength is its weakness, imposing sometimes undesired rigidity. Guided by abstract interpretation, we extend the agenda and develop the notion of gradual refinement types, allowing smooth evolution and interoperability between simple types and logically-refined types. In doing so, we address two challenges unexplored in the gradual typing literature: dealing with imprecise logical information, and with dependent function types. The first challenge leads to a crucial notion of locality for refinement formulas, and the second yields novel operators related to type- and term-level substitution, identifying new opportunities for runtime errors in gradual dependently-typed languages. The gradual language we present is type safe, type sound, and satisfies the refined criteria for gradually-typed languages of Siek et al. We also explain how to extend our approach to richer refinement logics, anticipating key challenges to consider.

Categories and Subject Descriptors D.3.1 [Software]: Programming Languages—Formal Definitions and Theory

Keywords gradual typing; refinement types; abstract interpretation

1. Introduction

Refinement types are a lightweight form of language-based verification, enriching types with logical predicates. For instance, one can assign a refined type to a division operation (/), requiring that its second argument be non-zero:

Int → {ν :Int | ν ≠ 0} → Int

Any program that type checks using this operator is guaranteed to be free from division-by-zero errors at runtime. Consider:

let f (x: Int) (y: Int) = 1 / (x − y)

Int is seen as a notational shortcut for {ν :Int | ⊤}. Thus, in the definition of f the only information about x and y is that they are Int, which is sufficient to accept the subtraction, but insufficient to prove that the divisor is non-zero, as required by the type of the division operator. Therefore, f is rejected statically.

Refinement type systems also support dependent function types, allowing refinement predicates to depend on prior arguments. For instance, we can give f the more expressive type:

x:Int → {ν :Int | ν ≠ x} → Int

The body of f is now well-typed, because y ≠ x implies x − y ≠ 0.

Refinement types have been used to verify various kinds of properties (Bengtson et al. 2011; Kawaguchi et al. 2009; Rondon et al. 2008; Xi and Pfenning 1998), and several practical implementations have been recently proposed (Chugh et al. 2012a; Swamy et al. 2016; Vazou et al. 2014; Vekris et al. 2016).

Integrating static and dynamic checking. As with any static typing discipline, programming with refinement types can be demanding for programmers, hampering their wider adoption. For instance, all callers of f must establish that the two arguments are different.

A prominent line of research for improving the usability of refinement types has been to ensure automatic checking and inference, e.g. by restricting refinement formulas to be drawn from an SMT-decidable logic (Rondon et al. 2008). But while type inference does alleviate the annotation burden on programmers, it does not alleviate the rigidity of statically enforcing the type discipline.

Therefore, several researchers have explored the complementary approach of combining static and dynamic checking of refinements (Flanagan 2006; Greenberg et al. 2010; Ou et al.
2004; Tanter and Tabareau 2015), providing explicit casts so that programmers can statically assert a given property and have it checked dynamically. For instance, instead of letting callers of f establish that the two arguments are different, we can use a cast:

let g (x: Int) (y: Int) = 1 / (⟨c⟩ (x − y))

The cast ⟨c⟩ ensures at runtime that the division is only applied if the value of x − y is not 0. Division-by-zero errors are still guaranteed not to occur, but cast errors can.

These casts are essentially the refinement types counterpart of downcasts in a language like Java. As such, they have the same limitation when it comes to navigating between static and dynamic checking: programming in Python feels very different from programming in Java with declared type Object everywhere and explicit casts before every method invocation!

Gradual typing. To support a full slider between static and dynamic checking, without requiring programmers to explicitly deal with casts, Siek and Taha (2006) proposed gradual typing. Gradual typing is much more than "auto-cast" between static types, because gradual types can denote partially-known type information, yielding a notion of consistency. Consider a variable x of gradual type Int → ?, where ? denotes the unknown type. This type conveys some information about x that can be statically exploited to definitely reject programs, such as x + 1 and x(true), definitely accept valid applications such as x(1), and optimistically accept any use of the return value, e.g. x(1) + 1, subject to a runtime check.

In essence, gradual typing is about plausible reasoning in presence of imprecise type information. The notion of type precision (Int → ? is more precise than ? → ?), which can be raised to terms, allows distinguishing gradual typing from other forms of static-dynamic integration: weakening the precision of a term must preserve both its typeability and reducibility (Siek et al. 2015). This gradual guarantee captures the smooth evolution along the static-dynamic checking axis that gradual typing supports.

Gradual typing has triggered a lot of research efforts, including how to adapt the approach to expressive type disciplines, such as information flow typing (Disney and Flanagan 2011; Fennell and Thiemann 2013) and effects (Bañados Schwerter et al. 2014, 2016). These approaches show the value of relativistic gradual typing, where one end of the spectrum is a static discipline (i.e. simple types) and the other end is a stronger static discipline (i.e. simple types and effects). Similarly, in this work we aim at a gradual language that ranges from simple types to logically-refined types.
Gradual refinement types. We extend refinement types with gradual formulas, bringing the usability benefits of gradual typing to refinement types. First, gradual refinement types accommodate flexible idioms that do not fit the static checking discipline. For instance, assume an external, simply-typed function check :: Int → Bool and a refined get function that requires its argument to be positive. The absence of refinements in the signature of check is traditionally interpreted as the absence of static knowledge, denoted by the trivial formula ⊤:

check :: {ν :Int | ⊤} → {ν :Bool | ⊤}

Because of the lack of knowledge about check, idiomatic expressions like:

if check(x) then get(x) else get(−x)

cannot possibly be statically accepted. In general, lack of knowledge can be due to simple/imprecise type annotations, or to the limited expressiveness of the refinement logic. With gradual refinement types we can interpret the absence of refinements as imprecise knowledge using the unknown refinement ?:¹

check :: {ν :Int | ?} → {ν :Bool | ?}

Using this annotation the system can exploit the imprecision to optimistically accept the previous code subject to dynamic checks.

¹ In practice a language could provide a way to gradually import statically annotated code or provide a compilation flag to treat the absence of a refinement as the unknown formula ? instead of ⊤.

Second, gradual refinements support a smooth evolution on the way to static refinement checking. For instance, consider the challenge of using an existing library with a refined typed interface:

a :: {ν :Int | ν < 0} → Bool        b :: {ν :Int | ν < 10} → Int

One can start using the library without worrying about refinements:

let g (x: {ν :Int | ?}) = if a(x) then 1/x else b(x)

Based on the unknown refinement of x, all uses of x are statically accepted, but subject to runtime checks. Clients of g have no static requirement beyond passing an Int. The evolution of the program can lead to strengthening the type of x to {ν :Int | ν > 0 ∧ ?}, forcing clients to statically establish that the argument is positive. In the definition of g, this more precise gradual type pays off: the type system definitely accepts 1/x, making the associated runtime check superfluous, and it still optimistically accepts b(x), subject to a dynamic check. It now, however, definitely rejects a(x). Replacing a(x) with a(x − 2) again makes the type system optimistically accept the program. Hence, programmers can fine-tune the level of static enforcement they are willing to deal with by adjusting the precision of type annotations, and get as much benefit as possible (both statically and dynamically).

Gradualizing refinement types presents a number of novel challenges, due to the presence of both logical information and dependent types. A first challenge is to properly capture the notion of precision between (partially-unknown) formulas. Another challenge is that, in contrast to standard gradual types, we do not want an unknown formula to stand for any arbitrary formula, otherwise it would be possible to accept too many programs based on the potential for logical contradictions (a value refined by a false formula can be used everywhere). This issue is exacerbated by the fact that, due to dependent types, subtyping between refinements is a contextual relation, and therefore contradictions might arise in a non-local fashion. Yet another challenge is to understand the dynamic semantics and the new points of runtime failure due to the stronger requirements on term substitution in a dependently-typed setting.

Contributions. This work is the first development of gradual typing for refinement types, and makes the following contributions:

• Based on a simple static refinement type system (Section 2), and a generic interpretation of gradual refinement types, we derive a gradual refinement type system (Section 3) that is a conservative extension of the static type system, and preserves typeability of less precise programs. This involves developing a notion of consistent subtyping that accounts for gradual logical environments, and introducing the novel notion of consistent type substitution.

• We identify key challenges in defining a proper interpretation of gradual formulas. We develop a non-contradicting, semantic, and local interpretation of gradual formulas (Section 4), establishing that it fulfills both the practical expectations illustrated above, and the theoretical requirements for the properties of the gradual refinement type system to hold.

• Turning to the dynamic semantics of gradual refinements, we identify, beyond consistent subtyping transitivity, the need for two novel operators that ensure, at runtime, type preservation of the gradual language: consistent subtyping substitution, and consistent term substitution (Section 5). These operators crystallize the additional points of failure required by gradual dependent types.

• We develop the runtime semantics of gradual refinements as reductions over gradual typing derivations, extending the work of Garcia et al. (2016) to accommodate logical environments and the new operators dictated by dependent typing (Section 6). The gradual language is type safe, satisfies the dynamic gradual guarantee, as well as refinement soundness.

• To address the decidability of gradual refinement type checking, we present an algorithmic characterization of consistent subtyping (Section 7).

T ∈ TYPE, x ∈ VAR, c ∈ CONST, t ∈ TERM, p ∈ FORMULA, Γ ∈ ENV, Φ ∈ LENV

v ::= λx:T.t | x | c                                              (Values)
t ::= v | t v | let x = t in t | if v then t else t | t :: T      (Terms)
B ::= Int | Bool                                                  (Base Types)
T ::= {ν :B | p} | x:T → T                                        (Types)
p ::= p = p | p < p | p + p | v | ν | p ∧ p | p ∨ p | ¬p | ⊤ | ⊥  (Formulas)
Γ ::= • | Γ, x:T                                                  (Type Environments)
Φ ::= • | Φ, x:p                                                  (Logical Environments)

are used to build up dependent function types x:T1 → T2. In order to restrict logical formulas to values, application arguments and conditions must be syntactic values (Vazou et al. 2014).

The typing judgment Γ;Φ ⊢ t : T uses a type environment Γ to track variables in scope and a separate logical environment to track logical information. This separation is motivated by our desire to gradualize only the logical parts of types. For example, in Rule (Tλ) the body of a lambda λx:T.t is typed as usual, extending the type environment and also enriching the logical environment with the logical information extracted from T, denoted ⌊T⌋. Extraction is defined as ⌊{ν :B | p}⌋ = p, and ⌊x:T1 → T2⌋ = ⊤ since function types have no logical interpretation per se.
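The grammar above translates directly into a small abstract syntax. The following sketch shows one possible Python encoding of formulas and types; the representation and all names (Refine, FunType, and so on) are hypothetical illustrations, not part of the paper's formal development.

```python
# Hypothetical encoding of the syntax of Figure 1 (formulas and types only).
from dataclasses import dataclass
from typing import Union

# Formulas p of the QF-LIA refinement logic (a representative subset).
@dataclass
class Var:            # a program variable x, or the refinement variable ν
    name: str

@dataclass
class Lit:            # integer and boolean literals; Lit(True) is ⊤, Lit(False) is ⊥
    value: Union[int, bool]

@dataclass
class BinOp:          # p = p, p < p, p + p, p ∧ p, p ∨ p
    op: str
    left: 'Formula'
    right: 'Formula'

@dataclass
class Neg:            # ¬p
    body: 'Formula'

Formula = Union[Var, Lit, BinOp, Neg]

# Types T: base refinements {ν:B | p} and dependent function types x:T1 → T2.
@dataclass
class Refine:
    base: str          # 'Int' or 'Bool'
    pred: 'Formula'    # refinement over the distinguished variable ν

@dataclass
class FunType:
    param: str         # the variable x, in scope in the codomain's refinements
    dom: 'Type'
    cod: 'Type'

Type = Union[Refine, FunType]

# The type of the division operator:  Int → {ν:Int | ν ≠ 0} → Int
div_type = FunType('x', Refine('Int', Lit(True)),
                   FunType('y', Refine('Int', Neg(BinOp('=', Var('nu'), Lit(0)))),
                           Refine('Int', Lit(True))))
```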
M Γ(x) = {ν :B | p} Γ;Φ ` t : T (Tx-refine) Subtyping is defined based on entailment in the logic. Rule Γ;Φ ` x : {ν :B | ν = x} (<:-refine) specifies that {ν :B | p} is a subtype of {ν :B | q} in an environment Φ if p, in conjunction with the information in Φ, Γ(x) = y :T1 → T2 (Tx-fun) (Tc) entails q. Extraction of logical information from an environment, Γ;Φ ` x :( y :T1 → T2) Γ;Φ ` c : ty(c) denoted Φ , substitutes actual variables for refinement variables, i.e. x:pL =M p[x/ν]. The judgment ∆ |= p states that the set of Φ ` T1 Γ, x:T1 ;Φ, x: T1 ` t : T2 (Tλ) L M formulasL M∆ entails p modulo the theory from which formulas are Γ;Φ ` λx:T1.t :( x:T1 → T2) drawn. A judgment ∆ |= p can be checked by issuing the query VALID(∆ → p) to an SMT solver. Note that subtyping holds only Γ;Φ ` t :(x:T1 → T2)Γ;Φ ` v : T Φ ` T <: T1 (Tapp) for well-formed types and environments (<:-refine); a type is well- Γ;Φ ` t v : T [v/x] 2 formed if it only mentions variables that are in scope (auxiliary

Γ;Φ ` v : {ν :Bool | p} Φ ` T1 <: T Φ ` T2 <: T definitions are in Appendix A.1). Γ;Φ, x:(v = true) ` t1 : T1 Γ;Φ, x:(v = false) ` t2 : T2 For the dynamic semantics we assume a standard call-by-value (Tif) Γ;Φ ` if v then t else t : T evaluation. Note also that Rule (Tapp) allows for arbitrary values 1 2 in argument position, which are then introduced into the logic upon Γ, x:T1 ;Φ, x: T1 ` t2 : T2 Γ;Φ ` t1 : T1 type substitution. Instead of introducing arbitrary lambdas in the Φ, x: T1 `L TM2 <: T Φ ` T logic, making verification undecidable, we introduce a fresh con- (Tlet) L M Γ;Φ ` let x = t1 in t2 : T stant for every lambda(Chugh 2013). This technique is fundamen- tal to prove soundness directly in the system since lambdas can Γ;Φ ` t : T1 Φ ` T1 <: T2 appear in argument position at runtime. For proofs of (T::) Γ;Φ ` t :: T2 : T2 and refinement soundness refer to Appendix A.1.
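As explained above, the premise of Rule (<:-refine), ⌊Φ⌋ ∪ {p} |= q, is discharged by asking an SMT solver whether the corresponding implication is valid. The snippet below sketches how such a query could be posed with the Z3 Python bindings; the choice of Z3 and the helper name entails are assumptions made for illustration, not something the paper prescribes.

```python
# Hypothetical discharge of an entailment ∆ |= p: VALID(∆ → p) holds exactly
# when ∆ ∧ ¬p is unsatisfiable.
from z3 import Int, Not, Solver, unsat

def entails(hypotheses, conclusion):
    s = Solver()
    s.add(*hypotheses)          # the formulas in ∆ (here, ⌊Φ⌋ plus the subtype's refinement)
    s.add(Not(conclusion))      # negate the goal q
    return s.check() == unsat

# Example from Section 1: knowing y ≠ x, the divisor x − y is non-zero, i.e.
#   x:⊤, y:(ν ≠ x)  ⊢  {ν:Int | ν = x − y}  <:  {ν:Int | ν ≠ 0}
x, y, nu = Int('x'), Int('y'), Int('nu')
env = [y != x]                                  # ⌊Φ⌋, with actual variables substituted for ν
print(entails(env + [nu == x - y], nu != 0))    # True: the subtyping judgment holds
```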

Φ ∪ { p } |= q 3. A Gradual Refinement Type System ` ΦΦL M ` p Φ ` q Φ ` T1 <: T2 (<:-refine) Φ ` {ν :B | p} <: {ν :B | q} Abstracting Gradual Typing (AGT) is a methodology to system- atically derive the gradual counterpart of a static typing disci- Φ ` T21 <: T11 Φ, x: T21 ` T12 <: T22 pline(Garcia et al. 2016), by viewing gradual types through the lens (<:-fun) L M of abstract interpretation(Cousot and Cousot 1977). Gradual types Φ ` x:T11 → T12 <: x:T21 → T22 are understood as abstractions of possible static types. The meaning Figure 1. Syntax and static semantics of the static language. of a gradual type is hence given by concretization to the set of static types that it represents. For instance, the unknown gradual type ? denotes any static type; the imprecise gradual type Int → ? denotes • Finally, we explain how to extend our approach to accommo- all function types from Int to any type; and the precise gradual type date a more expressive refinement logic (Section 8), by adding Int only denotes the static type Int. measures (Vazou et al. 2014) to the logic. Once the interpretation of gradual types is defined, the static Section 9 elaborates on some aspects of the resulting design, Sec- typing discipline can be promoted to a gradual discipline by lifting tion 10 discusses related work and Section 11 concludes. The type predicates (e.g. subtyping) and type functions (e.g. subtyping proofs and complete definitions are in the supplementary material, join) to their consistent counterparts. This lifting is driven by the referenced in this technical report as specific Appendices. plausibility interpretation of gradual typing: a consistent predicate To design the gradual refinement type system, we exercise the between two types holds if and only if the static predicate holds Abstracting Gradual Typing methodology(Garcia et al. 2016), a for some static types in their respective concretization. Lifting type systematic approach to derive gradual typing disciplines based on functions additionally requires a notion of abstraction from sets of abstract interpretation(Cousot and Cousot 1977). We summarize static types back to gradual types. By choosing the best abstrac- our specific contributions to AGT in Section 10. tion that corresponds to the concretization, a Galois connection is established and a coherent gradual language can be derived. In this work, we develop a gradual language that ranges from 2. A Static Refinement Type System simple types to logically-refined types. Therefore we need to spec- Figure 1 describes the syntax and static semantics of a language ify the syntax of gradual formulas, pe ∈ GFORMULA, and their with refinement types. Base refinements have the form {ν :B | p} meaning through a concretization function γp : GFORMULA → where p is a formula that can mention the refinement variable P(FORMULA). Once γp is defined, we need to specify the corre- ν. Here, we consider the simple quantifier free logic of linear sponding best abstraction αp such that hγp, αpi is a Galois con- arithmetic (QF-LIA), which suffices to capture the key issues in nection. We discovered that capturing a proper definition of grad- designing gradual refinement types; we discuss extensions of our ual formulas and their interpretation to yield a practical gradual approach to more expressive logics in Section 8. Base refinements refinement type system—i.e. one that does not degenerate and ac- program. 
As noticed by Garcia et al.(2016), precision is naturally induced by the concretization of gradual types: Te ∈ GTYPE, t ∈ GTERM, ep ∈ GFORMULA, Γ ∈ GENV, Φe ∈ GLENV Γ(x) = {ν :B | p } Definition 2 (Precision of gradual types). Te1 is less imprecise than Γ; Φ ` t : T (Tx-refine) e e e e Te2, notation Te1 v Te2, if and only if γT (Te1) ⊆ γT (Te2). Γ; Φe ` x : {ν :B | ν = x} The interpretation of a gradual environment is obtained by Γ(x) = y :T1 → T2 pointwise lifting of the concretization of gradual formulas. (Tx-fun) e e (Tc) e e Definition 3 (Concretization of gradual logical environments). Let Γ; Φe ` x :( y :Te1 → Te2) Γ; Φe ` c : ty(c) γΦ : GLENV → P(LENV) be defined as: Φ ` T Γ, x:T ; Φ, x: T ` t : T e e1 e1 e e1 e2 γ (Φ) = { Φ | ∀x.Φ(x) ∈ γp(Φ(x)) } (eTλ) L M Φ e e Γ; Φe ` λx:Te1.t :( x:Te1 → Te2) 3.2 Consistent Relations With the meaning of gradual types and logical environments, we Γ; Φe ` t : x:Te1 → Te2 Γ; Φe ` v : Te Φe ` Te . Te1 (eTapp) can lift static subtyping to its consistent counterpart: consistent Γ; Φ ` t v : T2 v/x e e J K subtyping holds between two gradual types, in a given logical environment, if and only if static subtyping holds for some static Γ; Φe ` v : {ν :Bool | ep } Φe ` Te1 . Te Φe ` Te2 . Te types and logical environment in the respective concretizations. Γ; Φ, x:(v = true) ` t1 : T1 Γ; Φ, x:(v = false) ` t2 : T2 (Tif) e e e e e Definition 4 (Consistent subtyping). Φe ` Te1 <‹: Te2 if and only if Γ; Φe ` if v then t1 else t2 : Te Φ ` T1 <: T2 for some Φ ∈ γΦ (Φ)e ,T1 ∈ γT (Te1) and T2 ∈ γT (Te2). Γ, x:Te1 ; Φe, x: Te1 ` t2 : Te2 Γ; Φe ` t1 : Te1 We describe an algorithmic characterization of consistent sub- L M Φe, x: Te1 ` Te2 . Te Φe ` Te typing, noted · ` · . ·, in Section 7. (eTlet) L M The static type system also relies on a type substitution function. Γ; Φ ` let x = t in t : T e 1 2 e Following AGT, lifting type functions to operate on gradual types requires an abstraction function from sets of types to gradual types: Γ; Φe ` t : Te1 Φe ` Te1 . Te2 (eT::) the lifted function is defined by abstracting over all the possible Γ; Φe ` t :: Te2 : Te2 results of the static function applied to all the represented static types. Instantiating this principle for type substitution: Figure 2. Typing rules of the gradual language. Definition 5 (Consistent type substitution).

Tfl[v/x] = α ({ T [v/x] | T ∈ γ (T ) }) cept almost any program—is rather subtle. In order to isolate this e T T e crucial design point, we defer the exact definition of GFORMULA, where αT is the natural lifting of the abstraction for formulas αp: γp and αp to Section 4. In the remainder of this section, we de- Definition 6 (Abstraction for gradual refinement types). Let fine the gradual refinement type system and establish its properties αT : P(TYPE) * GTYPE be defined as: independently of the specific interpretation of gradual formulas. α ({ {ν :B | p }}) = {ν :B | α ({ p })} The typing rules of the gradual language (Figure 2) directly T i p i mimic the static language typing rules, save for the fact that they αT ({ x:Ti1 → Ti2 }) = x:αT ({ Ti1 }) → αT ({ Ti2 }) use gradual refinement types Te, built from gradual formulas pe, and The algorithmic version of consistent type substitution, noted gradual environments Φe. Also, the rules use consistent subtyping · ·/· , substitutes in the known parts of formulas (Appendix A.8). and consistent type substitution. We define these notions below.2 J K 3.3 Properties of the Gradual Refinement Type System 3.1 Gradual Types and Environments The gradual refinement type system satisfies a number of desirable As we intend the imprecision of gradual types to reflect the impre- properties. First, the system is a conservative extension of the un- derlying static system: for every fully-annotated term both systems cision of formulas, the syntax of gradual types Te ∈ GTYPE simply contains gradual formulas. coincide (we use `S to denote the static system). Proposition 1 (Equivalence for fully-annotated terms). For any T ::= {ν :B | p } | x:T → T (Gradual Types) e e e e t ∈ TERM, Γ; Φ `S t : T if and only if Γ;Φ ` t : T Then, the concretization function for gradual formulas γp can be More interestingly, the system satisfies the static gradual guar- compatibly lifted to gradual types. antee of Siek et al.(2015): weakening the precision of a term pre- Definition 1 (Concretization of gradual types). Let the concretiza- serves typeability, at a less precise type. tion function γT : GTYPE → P(TYPE) be defined as follows: Proposition 2 (Static gradual guarantee). If • ; • ` t1 : Te1 and γ ({ν :B | p }) = {{ν :B | p} | p ∈ γ (p ) } T e p e t1 v t2, then • ; • ` t2 : Te2 and Te1 v Te2. γ (x:T → T ) = { x:T → T | T ∈ γ (T ) ∧ T ∈ γ (T ) } T e1 e2 1 2 1 T e1 2 T e2 We prove both properties parametrically with respect to the ac- The notion of precision for gradual types captures how much tual definitions of GFORMULA, γp and αp. The proof of Prop 1 static information is available in a type, and by extension, in a only requires that static type information is preserved exactly, i.e. γT (T ) = { T } and αT ({ T }) = T , which follows directly γ α 2 Terms t and type environments Γ have occurrences of gradual types in from the same properties for p and p. These hold trivially for them, but we do not change their notation for readability. In particular, func- the different interpretations of gradual formulas we consider in the tions that operate over the type environment Γ are unaffected by gradual- next section. The proof of Prop 2 relies on the fact that hγT , αT i ization, which only affects the meaning of relations over the logical envi- is a Galois connection. Again, this follows from hγp, αpi being a ronment Φ, most notably subtyping. Galois connection—a result we will establish in due course. 4. 
Interpreting Gradual Formulas satisfiable, as captured by the following definition (we write SAT(p) The definition of the gradual type system of the previous section for a formula p that is satisfiable): is parametric over the interpretation of gradual formulas. Starting Definition 8 (Non-contradicting concretization of gradual formu- from a naive interpretation, in this section we progressively build las). Let γp : GFORMULA → P(FORMULA) be defined as: a practical interpretation of gradual formulas. More precisely, we γp(p) = { p } γp(p ∧ ?) = { p ∧ q | SAT(p) ⇒ SAT(p ∧ q) } start in Section 4.1 with a definition of the syntax of gradual formu- This new definition of concretization is however still problem- las, GFORMULA, and an associated concretization function γp, and then successively redefine both until reaching a satisfactory defini- atic. Recall that a given concretization induces a natural notion of tion in Section 4.4. We then define the corresponding abstraction precision by relating the concrete sets(Garcia et al. 2016). Preci- sion of gradual formulas is the key notion on top of which precision function αp in Section 4.5. We insist on the fact that any interpretation of gradual formu- of gradual types and precision of gradual terms are built. las that respects the conditions stated in Section 3.3 would yield a Definition 9 (Precision of gradual formulas). pe is less imprecise “coherent” gradual type system. Discriminating between these dif- (more precise) than q , noted p v q , if and only if γp(p ) ⊆ γp(q ). ferent possible interpretations is eventually a design decision, moti- e e e e e vated by the expected behavior of a gradual refinement type system, The non-contradicting interpretation of gradual formulas is and is hence driven by considering specific examples. purely syntactic. As such, the induced notion of precision fails to capture intuitively useful connections between programs. For 4.1 Naive Interpretation instance, the sets of static formulas represented by the gradual for- mulas x ≥ 0 ∧ ? and x > 0 ∧ ? are incomparable, because they are Following the abstract interpretation viewpoint on gradual typing, a syntactically different. However, the gradual formula x > 0 ∧ ? gradual logical formula denotes a set of possible logical formulas. should intuitively refer to a more restrictive set of formulas, be- As such, it can contain some statically-known logical information, cause the static information x > 0 is more specific than x ≥ 0. as well as some additional, unknown assumptions. Syntactically, we can denote a gradual formula as either a precise formula (equiv- 4.3 Semantic Interpretation alent to a fully-static formula), or as an imprecise formula, p ∧ ?, To obtain a meaningful notion of precision between gradual formu- where p is called its known part.3 las, we appeal to the notion of specificity of logical formulas, which pe ∈ GFORMULA, p ∈ FORMULA is related to the actual semantics of formulas, not just their syntax. Formally, a formula p is more specific than a formula q if pe ::= p (Precise Formulas) | p ∧ ? (Imprecise Formulas) { p } |= q. Technically, this relation only defines a pre-order, be- cause formulas that differ syntactically can be logically equivalent. We use a conjunction in the syntax to reflect the intuition of a for- As usual we work over the equivalence classes and consider equal- mula that can be made more precise by adding logical information. ity up to logical equivalence. Thus, when we write p we actually Note however that the symbol ? 
can only appear once and in a con- refer to the equivalence class of p. In particular, the equivalence junction at the top level. That is, p ∨ ? and p ∨ (q ∧ ?) are not class of unsatisfiable formulas is represented by ⊥, which is the syntactically valid gradual formulas. We pose ? def= > ∧ ?. bottom element of the specificity pre-order. Having defined the syntax of gradual formulas, we must turn to In order to preserve non-contradiction in our semantic interpre- their semantics. Following AGT, we give gradual formulas meaning tation of gradual formulas, it suffices to remove (the equivalence by concretization to sets of static formulas. Here, the ? in a gradual class of) ⊥ from the concretization. Formally, we isolate ⊥ from formula p ∧ ? can be understood as a placeholder for additional the specificity order, and define the order only for the satisfiable logical information that strengthens the known part p. A natural, fragment of formulas, denoted SFORMULA: but naive, definition of concretization follows. Definition 10 (Specificity of satisfiable formulas). Given two for- Definition 7 (Naive concretization of gradual formulas). Let mulas p, q ∈ SFORMULA, we say that p is more specific than q in γp : GFORMULA → P(FORMULA) be defined as follows: the satisfiable fragment, notation p  q, if { p } |= q. γp(p) = { p } γp(p ∧ ?) = { p ∧ q | q ∈ FORMULA } Then, we define gradual formulas such that the known part of This definition is problematic. Consider a value v refined with an imprecise formula is required to be satisfiable: the gradual formula ν ≥ 2 ∧ ?. With the above definition, we p ∈ GFORMULA, p ∈ FORMULA, pX ∈ SFORMULA would accept passing v as argument to a function that expects a e negative argument! Indeed, a possible interpretation of the gradual pe ::= p (Precise Formulas) formula would be ν ≥ 2∧ν = 1, which is unsatisfiable4 and hence | pX ∧ ? (Imprecise Formulas) trivially entails ν < 0. Therefore, accepting that the unknown part Note that the imprecise formula x > 0∧x = 0∧?, for example, of a formula denotes any arbitrary formula—including ones that is syntactically rejected because its known part is not satisfiable. contradict the known part of the gradual formula—annihilates one However, x > 0 ∧ x = 0 is a syntactically valid formula because of the benefits of gradual typing, which is to reject such blatant precise formulas are not required to be satisfiable. inconsistencies between pieces of static information. The semantic definition of concretization of gradual formulas captures the fact that an imprecise formula stands for any satisfiable 4.2 Non-Contradicting Interpretation strengthening of its known part: To avoid this extremely permissive behavior, we must develop Definition 11 (Semantic concretization of gradual formulas). Let a non-contradicting interpretation of gradual formulas. The key γp : GFORMULA → P(FORMULA) be defined as follows: requirement is that when the known part of a gradual formula is X X X X satisfiable, the interpretation of the gradual formula should remain γp(p) = { p } γp(p ∧ ?) = { q | q  p } This semantic interpretation yields a practical notion of preci- 3 Given a static entity X, Xe denotes a gradual entity, and XÛ a set of Xs. sion that admits the judgment x > 0 ∧ ? v x ≥ 0 ∧ ?, as wanted. 4 We use the term “(un)satisfiable” instead of “(in)consistent” to avoid Unfortunately, despite the fact that, taken in isolation, gradual confusion with the term “consistency” from the gradual typing literature. 
formulas cannot introduce contradictions, the above definition does not yield an interesting gradual type system yet, because it allows Definition 12 (Local formula). A formula p(~x; ν) is local if the other kinds of contradictions to sneak in. Consider the following: formula ∃ν.p(~x; ν) is valid. let g (x: {ν :Int | ν > 0})( y: {ν :Int | ν = 0 ∧ ?})= x/y We call LFORMULA the set of local formulas. Note that the The static information y = 0 should suffice to statically reject this definition above implies that a local formula is satisfiable, be- definition. But, at the use site of the division operator, the consistent cause there must exist some ν for which the formula holds. Hence, subtyping judgment that must be proven is: LFORMULA ⊂ SFORMULA ⊂ FORMULA. Additionally, a local formula always produces satisfiable as- x:(ν > 0), y :(ν = 0 ∧ ?) ` {ν :Int | ν = y} . {ν :Int | ν 6= 0} sumptions when combined with a satisfiable logical environment: While the interpretation of the imprecise refinement of y cannot Proposition 3. Let Φ be a logical environment, ~x = dom(Φ) the contradict y = 0, it can stand for ν = 0 ∧ x ≤ 0, which contradicts vector of variables bound in Φ, and q(~x,ν) ∈ LFORMULA. If Φ x > 0. Hence the definition is statically accepted. is satisfiable then Φ ∪ { q(~x,ν) } is satisfiable. L M The introduction of contradictions in the presence of gradual L M formulas can be even more subtle. Consider the following program: Moreover, we note that local formulas have the same expres- siveness than non-local formulas when taken as a conjunction (we let h (x: {ν :Int | ?})( y: {ν :Int | ?})( z: {ν :Int | ν = 0}) use ≡ to denote logical equivalence). =( x + y)/z Proposition 4. Let Φ be a logical environment. If Φ is satisfiable One would expect this program to be rejected statically, because it then there exists an environment Φ0 with the same domainL M such that is clear that z = 0. But, again, one can find an environment that Φ ≡ Φ0 and for all x the formula Φ0(x) is local. makes consistent subtyping hold: x :(ν > 0), y :(ν = x ∧ ν < L M L M 0), z :(ν = 0). This interpretation introduces a contradiction Similarly to what we did for the semantic interpretation, we between the separate interpretations of different gradual formulas. redefine the syntax of gradual formulas such that the known part of an imprecise formula is required to be local: 4.4 Local Interpretation ◦ pe ∈ GFORMULA, p ∈ FORMULA, p ∈ LFORMULA We need to restrict the space of possible static formulas repre- pe ::= p (Precise Formulas) sented by gradual formulas, in order to avoid contradicting already- | p◦ ∧ ? (Imprecise Formulas) established static assumptions, and to avoid introducing contradic- tions between the interpretation of different gradual formulas in- The local concretization of gradual formulas allows imprecise for- volved in the same consistent subtyping judgment. mulas to denote any local formula strengthening the known part: Stepping back: what do refinements refine? Intuitively, the re- Definition 13 (Local concretization of gradual formulas). Let finement type {ν :B | p} refers to all values of type B that satisfy γp : GFORMULA → P(FORMULA) be defined as follows: the formula p. Note that apart from ν, the formula p can refer to ◦ ◦ ◦ ◦ other variables in scope. In the following, we use the more explicit γp(p) = { p } γp(p ∧ ?) = { q | q  p } syntax p(~x; ν) to denote a formula p that constrains the refinement From now on, we simply write p ∧ ? 
for imprecise formulas, variable ν based on the variables in scope ~x. leaving implicit the fact that p is a local formula. The well-formedness condition in the static system ensures that variables ~x on which a formula depends are in scope, but does not Examples. The local interpretation of imprecise formulas forbids restrict in any way how a formula uses these variables. This per- the restriction of previously-defined variables. To illustrate, con- missiveness of traditional static refinement type systems admits cu- sider the following definition: rious definitions. For example, the first argument of a function can let f (x: Int)( y: {ν :Int | ?})= y/x be constrained to be positive by annotating the second argument: The static information on x is not sufficient to prove the code safe. x:Int → y :{ν :Int | x > 0} → Int Because any interpretation of the unknown formula restricting y Applying this function to some negative value is perfectly valid but must be local, x cannot possibly be restricted to be non-zero, and yields a function that expects ⊥. A formula can even contradict the definition is rejected statically. information already assumed about a prior argument: The impossibility to restrict previously-defined variables avoids generating contradictions and hence accepting too many programs. x:{ν :Int | ν > 0} → y :{ν :Int | x < 0} → Int Recall the example of contradictions between different interpreta- We observe that this unrestricted freedom of refinement formu- tions of imprecise formulas: las is the root cause of the (non-local) contradiction issues that can let h (x: {ν :Int | ?})( y: {ν :Int | ?})( z: {ν :Int | ν = 0}) manifest in the interpretation of gradual formulas. =( x + y)/z Local formulas. The problem with contradictions arises from This definition is now rejected statically because accepting it would the fact that a formula p(~x; ν) is allowed to express restrictions mean finding well-formed local formulas p and q such that the not just on the refinement variable ν but also on the variables in following static subtyping judgment holds: scope ~x. In essence, we want unknown formulas to stand for any x:p, y :q, z :(ν = 0) ` {ν :Int | ν = z} <: {ν :Int | ν 6= 0} local restriction on the refinement variable, without allowing for contradictions with prior information on variables in scope. However, by well-formedness, p and q cannot restrict z; and by Intuitively, we say that a formula is local if it only restricts locality, p and q cannot contradict each other. the refinement variable ν. Capturing when a formula is local goes beyond a simple syntactic check because formulas should be able 4.5 Abstracting Formulas to mention variables in scope. For example, the formula ν > x is Having reached a satisfactory definition of the syntax and con- local: it restricts ν based on x without further restricting x. The key cretization function γp for gradual formulas, we must now find the to identify ν > x as a local formula is that, for every value of x, corresponding best abstraction αp in order to construct the required there exists a value for ν for which the formula holds. Galois connection. We observe that, due to the definition of γp, specificity  is central to the definition of precision v. We exploit The runtime semantics described in Sect. 5 rely on another notion this connection to derive a framework for abstract interpretation of abstraction built over αp, hence also partial, for which a similar based on the structure of the specificity order. 
result will be established, considering the relevant operators. The specificity order for the satisfiable fragment of formulas forms a join-semilattice. However, it does not contain a join for 5. Abstracting Dynamic Semantics arbitrary (possible infinite) non-empty sets. The lack of a join for arbitrary sets, which depends on the expressiveness of the logical Exploiting the correspondence between proof normalization and language, means that it is not always possible to have a best abstrac- term reduction(Howard 1980), Garcia et al.(2016) derive the dy- tion. We can however define a partial abstraction function, defined namic semantics of a gradual language by reduction of gradual typ- whenever it is possible to define a best one. ing derivations. This approach provides the direct runtime seman- tics of gradual programs, instead of the usual approach by transla- Definition 14. Let αp : P(FORMULA) * GFORMULA be the tion to some intermediate cast calculus(Siek and Taha 2006). partial abstraction function defined as follows. As a term (i.e. and its typing derivation) reduces, it is necessary αp({ p }) = p to justify new judgments for the typing derivation of the new term, such as subtyping. In a type safe static language, these new judg- α ( p ) = j p ∧ ? if p ⊆ LFORMULA and j p is defined p Û Û Û Û ments can always be established, as justified in the type preserva- αp(Ûp ) is undefined otherwise tion proof, which relies on properties of judgments such as transi- tivity of subtyping. However, in the case of gradual typing deriva- (b is the join for the specificity order in the satisfiable fragment) tions, these properties may not always hold: for instance the two The function αp is well defined because the join of a set of local consistent subtyping judgments Int . ? and ? . Bool cannot be formulas is necessarily a local formula. In fact, an even stronger combined to justify the transitive judgment Int . Bool. property holds: any upper bound of a local formula is local. More precisely, Garcia et al.(2016) introduce the notion of ev- We establish that, whenever αp is defined, it is the best possible idence to characterize why a consistent judgment holds. A consis- abstraction that corresponds to γp. This characterization validates tent operator, such as consistent transitivity, determines when evi- the use of specificity instead of precision in the definition of αp. dences can be combined to produce evidence for a new judgment. The impossibility to combine evidences so as to justify a combined Proposition 5 (α is sound and optimal). If α (p) is defined, then p p Û consistent judgment corresponds to a cast error: the realization, at p ⊆ γ (p ) if and only if α (p) v p . Û p e p Û e runtime, that the plausibility based on which the program was con- A pair hα, γi that satisfies soundness and optimality is a Galois sidered (gradually) well-typed is not tenable anymore. connection. However, Galois connections relate total functions. Compared to the treatment of (record) subtyping by Garcia Here αp is undefined whenever: (1) p is the empty set (the join is et al.(2016), deriving the runtime semantics of gradual refine- Û ments presents a number of challenges. First, evidence of consistent undefined since there is no least element), (2) Ûp is non-empty, but contains both local and non-local formulas, or (3) p is non-empty, subtyping has to account for the logical environment in the judg- Û ment (Sect. 
5.1), yielding a more involved definition of the consis- and only contains local formulas, but b Ûp does not exist. Garcia et al.(2016) also define a partial abstraction function tent subtyping transitivity operator (Sect. 5.2). Second, dependent for gradual types, but the only source of partiality is the empty types introduce the need for two additional consistent operators: set. Technically, it would be possible to abstract over the empty set one corresponding to the subtyping substitution lemma, account- by adding a least element. But they justify the decision of leaving ing for substitution in types (Sect. 5.3), and one corresponding to abstraction undefined based on the observation that, just as static the lemma that substitution in terms preserves typing (Sect. 5.4). type functions are partial, consistent functions (which are defined Section 6 presents the resulting runtime semantics and the prop- using abstraction) must be too. In essence, statically, abstracting erties of the gradual refinement language. the empty set corresponds to a type error, and dynamically, it corresponds to a cast error, as we will revisit in Section 5. 5.1 Evidence for Consistent Subtyping The two other sources of partiality of αp cannot however be Evidence represents the plausible static types that support some justified similarly. Fortunately, both are benign in a very precise consistent judgment. Consider the valid consistent subtyping judg- sense: whenever we operate on sets of formulas obtained from the ment x:? ` {ν :Int | ν = x} . {ν :Int | ν > 0}. In addition to know- concretization of gradual formulas, we never obtain a non-empty ing that it holds, we know why it holds: for any satisfying inter- set that cannot be abstracted. Miné(2004) generalized Galois con- pretation of the gradual environment, x should be refined with a nections to allow for partial abstraction functions that are always formula ensuring that it is positive. That is, we can deduce precise defined whenever applying some operator of interest. More pre- bounds on the set of static entities that supports why the consis- cisely, given a set F of concrete operators, Miné defines the notion tent judgment holds. The abstraction of these static entities is what of hα, γi being an F-partial Galois connection, by requiring, in ad- Garcia et al.(2016) call evidence. dition to soundness and optimality, that the composition α ◦ F ◦ γ Because a consistent subtyping judgment involves a gradual en- be defined for every operator F ∈ F (see Appendix A.4). vironment and two gradual types, we extend the abstract interpre- 5 Abstraction for gradual types αT is the natural extension of ab- tation framework coordinate-wise to subtyping tuples: straction for gradual formulas α , and hence inherits its partiality. p Definition 15 (Subtyping tuple concretization). Let Observe that, in the static semantics of the gradual language, ab- <: <: straction is only used to define the consistent type substitution op- γτ : GTUPLE → P(TUPLE ) be defined as: erator ·[·/·] (Section 3.2). We establish that, despite the partiality of γτ (Φe, Te1, Te2) = γΦ (Φ)e × γT (Te1) × γT (Te2) αp, the pair hα , γ i is a partial Galois connection: T T Definition 16 (Subtypting tuple abstraction). Let <: <: Proposition 6 (Partial Galois connection for gradual types). 
The ατ : P(TUPLE ) * GTUPLE be defined as: pair hαT , γT i is a { tsubst˘ }-partial Galois connection, where ατ ({ Φi,Ti1,Ti2 }) = hαΦ ({ Φi }), αT ({ Ti1 }), αT ({ Ti2 })i tsubst˘ is the collecting lifting of type substitution, i.e. 5 We pose τ ∈ TUPLE<: = LENV × TYPE × TYPE for subtyping tuples, tsubst˘ (TÛ , v, x) = { T [v/x] | T ∈ TÛ } and GTUPLE<: = GLENV × GTYPE × GTYPE for their gradual lifting. This definition uses abstraction of gradual logical environments. The consistent transitivity operator collects and abstracts all available justifications that transitivity might hold in a particular Definition 17 (Abstraction for gradual logical environments). Let instance. Consistent transitivity is a partial function: if F◦<: pro- αΦ : P(ENV) * GENV be defined as: duces an empty set, ατ is undefined, and the transitive claim has α (Φ)(x) = α ({ Φ(x) | Φ ∈ Φ }) Φ Û p Û been refuted. Intuitively this corresponds to a runtime cast error. Consider, for example, the following evidence judgments: We can now define the interior of a consistent subtyping judg- ment, which captures the best coordinate-wise information that can ε1 . • ` {ν :Int | ν > 0 ∧ ?} . {ν :Int | ?} be deduced from knowing that such a judgment holds. ε2 . • ` {ν :Int | ?} . {ν :Int | ν < 10} where The interior of the judgment Φ ` T Definition 18 (Interior). e e1 . ε1 = h•, {ν :Int | ν > 0 ∧ ?}, {ν :Int | ?}i Te2, notation I<:(Φe, Te1, Te2) is defined by the function ε2 = h•, {ν :Int | ν < 10 ∧ ?}, {ν :Int | ν < 10}i <: <: I<: : GTUPLE → GTUPLE : Using consistent subtyping transitivity we can deduce evidence for the judgment: I<:(τ) = ατ (FI<: (γτ (τ))) e e <: <: <: (ε1 ◦ ε2) . • ` {ν :Int | ν > 0 ∧ ?} . {ν :Int | ν < 10} where FI : P(TUPLE ) → P(TUPLE ) <: where FI (τ) = { hΦ,T1,T2i ∈ τ | Φ ` T1 <: T2 } <: <: Û Û ε1 ◦ ε2 = h•, {ν :Int | ν > 0 ∧ ν < 10 ∧ ?}, {ν :Int | ν < 10}i Based on interior, we define what counts as evidence for con- hα , γ i <: As required, τ τ is a partial Galois connection for the sistent subtyping. Evidence is represented as a tuple in GTUPLE operator used to define consistent subtyping transitivity. that abstracts the possible satisfying static tuples. The tuple is self- interior to reflect the most precise information available: Proposition 8 (Partial Galois connection for transitivity). The pair hατ , γτ i is a { F <: }-partial Galois connection. <: ◦ Definition 19 (Evidence for consistent subtyping). EV = <: { hΦe, Te1, Te2i ∈ GTUPLE | I<:(Φe, Te1, Te2) = hΦe, Te1, Te2i } 5.3 Consistent Subtyping Substitution <: We use metavariable ε to range over EV , and introduce the The proof of type preservation for refinement types also relies on a subtyping substitution lemma, stating that a subtyping judgment is extended judgment ε . Φ ` T T , which associates particular e e1 . e2 preserved after a value is substituted for some variable x, and the runtime evidence to some consistent subtyping judgment. Initially, binding for x is removed from the logical environment: before a program executes, evidence ε corresponds to the interior of the judgment, also called the initial evidence (Garcia et al. 2016). Γ;Φ ` v : T Φ ` T <: T The abstraction function ατ inherits the partiality of αp. We 1 11 1 11 12 prove that hα , γ i is a partial Galois connection for every operator Φ1, x: T12 , Φ2 ` T21 <: T22 τ τ L M of interest, starting with FI<: , used in the definition of interior: Φ1, Φ2[v/x] ` T21[v/x] <: T22[v/x] Proposition 7 (Partial Galois connection for interior). 
The pair In order to justify reductions of gradual typing derivations, we hα , γ i is a { F }-partial Galois connection. τ τ I<: need to define an operator of consistent subtyping substitution that combines evidences from consistent subtyping judgments in order 5.2 Consistent Subtyping Transitivity to derive evidence for the consistent subtyping judgment between The initial gradual typing derivation of a program uses initial evi- types after substitution of v for x. dence for each consistent judgment involved. As the program ex- Definition 21 (Consistent subtyping substitution). Suppose: ecutes, evidence can be combined to exhibit evidence for other judgments. The way evidence evolves to provide evidence for fur- Γ; Φe1 ` v : Te11 ε1 . Φe1 ` Te11 . Te12 ther judgments mirrors the type safety proof, and justifications sup- ε2 . Φ1, x: T12 , Φ2 ` T21 . T22 ported by properties about the relationship between static entities. e Le M e e e As noted by Garcia et al.(2016), a crucial property used in Then we deduce evidence for consistent subtyping substitution as the proof of preservation is transitivity of subtyping, which may [v/x] (ε1 ◦<: ε2) . Φe1, Φe2 v/x ` Te21 v/x . Te22 v/x or may not hold in the case of consistent subtyping judgments, J K J K J K [v/x] <: <: <: because of the imprecision of gradual types. For instance, both where ◦<: : EV → EV * EV is defined by: • ` {ν :Int | ν > 10} . {ν :Int | ?} and • ` {ν : Int | ?} . {ν : Int | [v/x] ε ◦ ε = ατ (F (γτ (ε ), γτ (ε ))) ν < 10} hold, but • ` {ν :Int | ν > 10} {ν :Int | ν < 10} does not. 1 <: 2 [v/x] 1 2 . ◦<: Following AGT, we can formally define how to combine evi- <: <: <: dences to provide justification for consistent subtyping. and F [v/x] : P(TUPLE ) → P(TUPLE ) → P(TUPLE ) is: ◦<: Definition 20 (Consistent subtyping transitivity). Suppose: F [v/x] (τ 1, τ 2) = {hΦ1 ·Φ2[v/x],T21[v/x],T22[v/x]i | ◦<: Û Û ε1 . Φ ` T1 . T2 ε2 . Φ ` T2 . T3 e e e e e e ∃T11,T12. hΦ1,T11,T12i ∈ τ 1 ∧ We deduce evidence for consistent subtyping transitivity as Û hΦ1 ·x: T12 ·Φ2,T21,T22i ∈ τ 2 ∧ <: L M Û (ε1 ◦ ε2) . Φe ` Te1 . Te3 Φ1 ` T11 <: T12 ∧ Φ1 ·x: T12 ·Φ2 ` T21 <: T22} L M <: <: <: <: where ◦ : EV → EV * EV is defined by: The consistent subtyping substitution operator collects and ab-

<: stracts all justifications that some consistent subtyping judgment ε1 ◦ ε2 = ατ (F◦<: (γτ (ε1), γτ (ε2))) holds after substituting in types with a value, and produces the most <: <: <: precise evidence, if possible. Note that this new operator introduces and F◦<: : P(TUPLE ) → P(TUPLE ) → P(TUPLE ) is: a new category of runtime errors, made necessary by dependent F◦<: (Ûτ 1, Ûτ 2) = {hΦ,T1,T3i | ∃T2. hΦ,T1,T2i ∈ Ûτ 1∧ types, and hence not considered in the simply-typed setting of Gar- cia et al.(2016). hΦ,T2,T3i ∈ Ûτ 2 ∧ Φ ` T1 <: T2 ∧ Φ ` T2 <: T3} To illustrate consistent subtyping substitution consider: (Ix-refine) {ν:B | p } ·; · ` 3 : {ν :Int | ν = 3} Φ;e x e ∈ TERM{ν:B | ν = x} ε1 . • ` {ν :Int | ν = 3} . {ν :Int | ?} (Ix-fun) y:T → T ε2 . x:?, y :? ` {ν :Int | ν = x + y} {ν :Int | ν ≥ 0} Φ; x e1 e2 ∈ TERM . e y:Te1 → Te2 where T ε1 = h•, {ν :Int | ν = 3}, {ν :Int | ?}i Φ, x: T1 ; te2 ∈ TERM e e T2 (Iλ) L M e ε2 = hx:?·y :?, {ν :Int | ν = x + y}, {ν :Int | ν ≥ 0}i Φ; λxTe1 .tTe2 ∈ TERM e x:Te1 → Te2 We can combine ε1 and ε2 with the consistent subtyping substitu- tion operator to justify the judgment after substituting 3 for x: T Φ; te1 ∈ TERM ε1 . Φ ` T1 (x:T11 → T12) e T1 e e . e e [3/x] e (ε1 ◦<: ε2) . y :? ` {ν :Int | ν = 3 + y} . {ν :Int | ν ≥ 0} Φ; v ∈ TERM ε2 . Φ ` T2 T11 e T2 e e . e (Iapp) e where T x:T →T Φ;( ε1te1 )@ e11 e12 (ε2v) ∈ TERM [3/x] e Te12 v/x ε1 ◦<: ε2 = hy :ν ≥ −3 ∧ ?, {ν :Int | ν = 3 + y}, {ν :Int | ν ≥ 0}i J K Proposition 9 (Partial Galois connection for subtyping substitu- Figure 3. Gradual intrinsic terms (selected rules) tion). The pair hατ , γτ i is a { F [v/x] }-partial Galois connection. ◦<: 5.4 Consistent Term Substitution 6. Dynamic Semantics and Properties Another important aspect of the proof of preservation is the use of a term substitution lemma, i.e. substituting in an open term preserves We now turn to the actual reduction rules of the gradual language typing. Even in the simply-typed setting considered by Garcia et al. with refinement types. Following AGT, reduction is expressed over (2016), the term substitution lemma does not hold for the gradual gradual typing derivations, using the consistent operators men- language because it relies on subtyping transitivity. Without further tioned in the previous section. Because writing down reduction discussion, they adopt a simple technique: instead of substituting a rules over (bidimensional) derivation trees is unwieldy, we use plain value v for the variable x, they substitute an ascribed value instrincally-typed terms (Church 1940) as a convenient unidimen- sional notation for derivation trees(Garcia et al. 2016). v :: T , where T is the expected type of x. This technique ensures e e We expose this notational device in Section 6.1, and then use that the substitution lemma always holds. it to present the reduction rules (Section 6.2) and the definition of With dependent types, the term substitution lemma is more the consistent term substitution operator (Section 6.3). Finally, we challenging. A subtyping judgment can rely on the plausibility that state the meta-theoretic properties of the resulting language: type a gradually-typed variable is replaced with the right value, which safety, gradual guarantee, and refinement soundness (Section 6.4). may not be the case at runtime. 
Consider the following example: let f (x: {ν :Int | ν > 0})= x 6.1 Intrinsic Terms let g (x: {ν :Int | ?})( y: {ν :Int | ν ≥ x})= let z = f y in z + x We first develop gradual intrinsically-typed terms, or gradual in- trinsic terms for short. Intrinsic terms are isomorphic to typing This code is accepted statically due to the possibility of x be- derivation trees, so their structure corresponds to the gradual typ- ing positive inside the body of g. If we call g with −1 the ap- plication f y can no longer be proven possibly safe. Precisely, ing judgment Γ; Φe ` t : Te—a term is given a type in a specific the application f y relies on the consistent subtyping judgment type environment and gradual logical environment. Intrinsic terms x:?·y :ν ≥ x ` {ν :Int | ν = y} {ν :Int | ν > 0} supported by the are built up from disjoint families of intrinsically-typed variables . T evidence hx:ν > 0 ∧ ?·y :ν ≥ x, {ν :Int | ν = y}, {ν :Int | ν > 0}i. x e ∈ VAR . Because these variables carry type information, type Te After substituting by −1 the following judgment must be justi- environments Γ are not needed in intrinsic terms. Because typeabil- fied: y : ν ≥ −1 ` {ν : Int | ν = y} . {ν : Int | ν > 0}. This (fully ity of a term depends on its logical context, we define a family Φ precise) judgment cannot however be supported by any evidence. TERMe of sets indexed by both types and gradual logical envi- Te Note that replacing by an ascribed value does not help in the T ronments. For readability, we use the notation Φ; t e ∈ TERM , dependently-typed setting because, as illustrated by the previous e Te example, judgments that must be proven after substitution may not allowing us to view an intrinsic term as made up of a logical envi- even correspond to syntactic occurrences of the replaced variable. ronment and a term (when Φ is empty we stick to TERM • ). e Te Moreover, substitution also pervades types, and consequently for- Figure 3 presents selected formation rules of intrinsic terms. mulas, but the logical language has no notion of ascription. Rules (Ix-refine) and (Ix-fun) are straightforward. Rule (Iλ) re- Stepping back, the key characteristic of the ascription technique quires the body of the lambda to be typed in an extended logical used by Garcia et al.(2016) is that the resulting substitution opera- environment. Note that because gradual typing derivations include tor on gradual terms preserves exact types. Noting that after substi- evidence for consistent judgments, gradual intrinsic terms carry tution some consistent subtyping judgments may fail, we define a over evidences as well, which can be seen in rule (Iapp). The rule consistent term substitution operator that preserves typeability, but for application additionally features a type annotation with the @ is undefined if it cannot produce evidence for some judgment. This notation. As observed by Garcia et al.(2016), this annotation is yields yet another category of runtime failure, occurring at substi- necessary because intrinsic terms represent typing derivations at tution time. In the above example, the error manifests as soon as different steps of reduction. Therefore, they must account for the the application g −1 beta-reduces, before reducing the body of g. fact that runtime values can have more precise types than the ones Consistent term substitution relies on the consistent subtyping determined statically. 
et ∈ EVTERM,  ev ∈ EVVALUE,  u ∈ SIMPLEVALUE,  x ∈ VAR*
t ∈ TERM*,  v ∈ VALUE,  g ∈ EVFRAME,  f ∈ TMFRAME

et ::= εt
ev ::= εu
u  ::= x* | n | b | λx*.t*
v  ::= u | εu :: T̃
g  ::= □ @^T̃ et | ev @^T̃ □ | □ :: T̃ | (let x^T̃ = □ in et) @^T̃
f  ::= g[ε]

−→ : TERM^•_T̃ × (TERM^•_T̃ ∪ { error })

ε1(λx^T̃11.t) @^{x:T̃1 → T̃2} (ε2 u) −→
    icod_u(ε2, ε1) t[(ε2 ◦<: idom(ε1)) u / x^T̃11] :: T̃2⟦u/x⟧
    error   if (ε2 ◦<: idom(ε1)), icod_u(ε2, ε1), or t[(ε2 ◦<: idom(ε1)) u / x^T̃11] is not defined

(let x^T̃1 = ε1 u in ε2 t) @^T̃ −→
    (ε1 ◦[u/x]<: ε2) t[ε1 u / x^T̃1] :: T̃
    error   if t[ε1 u / x^T̃1] or (ε1 ◦[u/x]<: ε2) is not defined

(if true then ε1 t^T̃1 else ε2 t^T̃2) @^T̃ −→ ε1 t^T̃1 :: T̃
(if false then ε1 t^T̃1 else ε2 t^T̃2) @^T̃ −→ ε2 t^T̃2 :: T̃

−→c : EVTERM × (EVTERM ∪ { error })

ε1(ε2 u :: T̃) −→c
    (ε2 ◦<: ε1) u
    error   if (ε2 ◦<: ε1) is not defined

7−→ : TERM^•_T̃ × (TERM^•_T̃ ∪ { error })

(R7−→)  t^T̃ −→ r    r ∈ (TERM^•_T̃ ∪ { error })   ⟹   t^T̃ 7−→ r
(Rg)    et −→c et′   ⟹   g[et] 7−→ g[et′]
(Rf)    t1^T̃ 7−→ t2^T̃   ⟹   f[t1^T̃] 7−→ f[t2^T̃]

Figure 4. Intrinsic reduction (error propagation rules omitted)

(·)[·/·] : TERM* → EVVALUE → VAR* ⇀ TERM*

x^{ν:B | p̃}[εu / x^{ν:B | p̃}] = u
y^T̃1[εu / x^T̃2] = y^{T̃1⟦u/x⟧}                     if x^T̃2 ≠ y^T̃1
x^{y:T̃1 → T̃2}[εu / x^{y:T̃1 → T̃2}] = εu :: (y:T̃1 → T̃2)
(λy^T̃1.t)[εu / x^T̃2] = λy^{T̃1⟦u/x⟧}. t[εu / x^T̃2]
((ε1 t^T̃1) @^{x:T̃11 → T̃12} (ε2 v))[εu / x^T̃] =
    (ε1 t^T̃1)[εu / x^T̃] @^{(x:T̃11 → T̃12)⟦u/x⟧} (ε2 v)[εu / x^T̃]

(·)[·/·] : EVTERM → EVVALUE → VAR* ⇀ EVTERM

(ε1 t^T̃2)[εu / x^T̃1] = (ε ◦[u/x]<: ε1) t^T̃2[εu / x^T̃1]

Figure 5. Consistent term substitution (selected cases)

6.2 Reduction

Figure 4 presents the syntax of the intrinsic term language and its evaluation frames, in the style of Garcia et al. (2016). Values v are either raw values u or ascribed values εu :: T̃, where ε is the evidence that u is of a subtype of T̃. Such a pair εu ∈ EVVALUE is called an evidence value. Similarly, an evidence term εt ∈ EVTERM is a term augmented with evidence. We use VAR* (resp. TERM*) to denote the set of all intrinsic variables (resp. terms).

Figure 4 also presents the reduction relation 7−→ and the two notions of reduction −→ and −→c. Reduction rules preserve the exact type of a term and explicit ascriptions are used whenever a reduction may implicitly affect type precision. The rules handle evidences, combining them with consistent operators to derive new evidence to form new intrinsic terms. Whenever combining evidences fails, the program ends with an error. An application may produce an error because it cannot produce evidence using consistent transitivity to justify that the actual argument is a subtype of the formal argument. Additionally, the rules for application and let expressions use consistent term substitution, which fails whenever consistent subtyping substitution cannot combine evidences to justify all substituted occurrences.

6.3 Consistent Term Substitution

The consistent term substitution operator described in Section 5.4 is defined on intrinsic terms (Figure 5). To substitute a variable x^T̃ by a value u we must have evidence justifying that the type of u is a subtype of T̃, supporting that substituting by u may be safe. Therefore, consistent term substitution is defined for evidence values. The consistent term substitution operator recursively traverses the structure of an intrinsic term, applying consistent subtyping substitution to every evidence, using an auxiliary definition for substitution into evidence terms. When substituting by an evidence value ε1u in an evidence term ε2t, we first combine ε1 and ε2 using consistent subtyping substitution and then substitute recursively into t. Note that substitution is undefined whenever consistent subtyping substitution is undefined.

When reaching a variable, there is a subtle difference between substituting by a lambda and a base constant. Because variables with base types are given the exact type {ν:B | ν = x}, after substituting x by a value u the type becomes {ν:B | ν = u}, which exactly corresponds to the type for a base constant. For higher-order variables an explicit ascription is needed to preserve the same type. Another subtlety is that types appearing in annotations above @ must be replaced by the same type, but substituting for the variable x being replaced. This is necessary for the resulting term to be well-typed in an environment where the binding for the substituted variable has been removed from the logical environment. Similarly, an intrinsic variable y other than the one being replaced must be replaced by a variable y^{T̃⟦u/x⟧}.

The key property is that consistent term substitution preserves typeability whenever it is defined.

Proposition 10 (Consistent substitution preserves types). Suppose Φ̃1; u ∈ TERM_T̃u, ε ⊩ Φ̃1 ⊢ T̃u ≲ T̃x, and Φ̃1·x:⌊T̃x⌋·Φ̃2; t ∈ TERM_T̃. Then Φ̃1·Φ̃2⟦u/x⟧; t[εu/x^T̃x] ∈ TERM_{T̃⟦u/x⟧} or t[εu/x^T̃x] is undefined.
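To connect these rules with the example of Section 5.4, here is an informal recap in our own wording of how the failure surfaces there. The body of g contains the application f y, carried by the evidence ⟨x:ν > 0 ∧ ?·y:ν ≥ x, {ν:Int | ν = y}, {ν:Int | ν > 0}⟩. Beta-reducing the application g (−1) triggers consistent term substitution of −1 for x, which must refine this evidence through consistent subtyping substitution. The resulting judgment y: ν ≥ −1 ⊢ {ν:Int | ν = y} ≲ {ν:Int | ν > 0} is fully static and false (take y = −1), so no evidence supports it: the substitution is undefined and, by the application rule of Figure 4, the program steps to error before the body of g is ever reduced.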
6.4 Properties of the Gradual Refinement Types Language

We establish three fundamental properties based on the dynamic semantics. First, the gradual language is type safe by construction.

Proposition 11 (Type Safety). If t1^T̃ ∈ TERM^•_T̃ then either t1^T̃ is a value v, t1^T̃ 7−→ t2^T̃ for some term t2^T̃ ∈ TERM^•_T̃, or t1^T̃ 7−→ error.

More interestingly, the language satisfies the dynamic gradual guarantee of Siek et al. (2015): a well-typed gradual program that runs without errors still does with less precise type annotations.

Proposition 12 (Dynamic gradual guarantee). Suppose t1^T̃1 ⊑ t1^T̃2. If t1^T̃1 7−→ t2^T̃1 then t1^T̃2 7−→ t2^T̃2 where t2^T̃1 ⊑ t2^T̃2.

We also establish refinement soundness: the result of evaluating a term yields a value that complies with its refinement. This property is a direct consequence of type preservation.

Proposition 13 (Refinement soundness). If t^{ν:B | p̃} ∈ TERM^•_{ν:B | p̃} and t^{ν:B | p̃} 7−→* v then:
1. If v = u then p̃![u/ν] is valid.
2. If v = εu :: {ν:B | p̃} then p̃![u/ν] is valid.
where p̃! extracts the static part of p̃.

7. Algorithmic Consistent Subtyping

Having defined a gradually-typed language with refinements that satisfies the expected meta-theoretic properties (Sect. 3.3 and 6.4), we turn to its decidability. The abstract interpretation framework does not immediately yield algorithmic definitions. While some definitions can be easily characterized algorithmically, consistent subtyping (Sect. 3.2) is both central and particularly challenging. We now present a syntax-directed characterization of consistent subtyping, which is a decision procedure when refinements are drawn from the theory of linear arithmetic.

The algorithmic characterization is based on solving consistent entailment constraints of the form Φ̃ |≈ q̃. Solving such a constraint consists in finding a well-formed environment Φ ∈ γΦ(Φ̃) and a formula q ∈ γp(q̃) such that ⌊Φ⌋ |= q. We use the notation |≈ to mirror |= in a consistent fashion. However, note that |≈ does not correspond to the consistent lifting of |=, because entailment is defined for sets of formulas while consistent entailment is defined for (ordered) gradual logical environments. This is important to ensure well-formedness of logical environments.

As an example consider the consistent entailment constraint:

x:?, y:?, z:(ν ≥ 0) |≈ x + y + z ≥ 0 ∧ x ≥ 0 ∧ ?   (1)

First, note that the unknown on the right hand side can be obviated, so to solve the constraint we must find formulas that restrict the possible values of x and y such that x + y + z ≥ 0 ∧ x ≥ 0 is always true. There are many ways to achieve this; we are only concerned about the existence of such an environment.

We describe a canonical approach to determine whether a consistent entailment is valid, by reducing it to a fully static judgment.6 Let us illustrate how to reduce constraint (1) above. We first focus on the rightmost gradual formula in the environment, for y, and consider a static formula that guarantees the goal, using the static information further right. Here, this means binding y to ∀z. z ≥ 0 → (x + ν + z ≥ 0 ∧ x ≥ 0). After quantifier elimination, this formula is equivalent to x + ν ≥ 0 ∧ x ≥ 0. Because this formula is not local, we retain the strongest possible local formula that corresponds to it. In general, given a formula q(ν), the formula ∃ν.q(ν) captures the non-local part of q(ν), so the formula (∃ν.q(ν)) → q(ν) is local. Here, the non-local information is ∃ν. x + ν ≥ 0 ∧ x ≥ 0, which is equivalent to x ≥ 0, so the local formula for y is x ≥ 0 → x + ν ≥ 0. Constraint (1) is reduced to:

x:?, y:(x ≥ 0 → x + ν ≥ 0), z:(ν ≥ 0) |≈ x + y + z ≥ 0 ∧ x ≥ 0

Applying the same reduction approach focusing on x, we obtain (after extraction) the following static entailment, which is valid:

{ x ≥ 0, x ≥ 0 → x + y ≥ 0, z ≥ 0 } |= x + y + z ≥ 0 ∧ x ≥ 0

Thus the consistent entailment constraint (1) can be satisfied.

6 Our approach relies on the theory of linear arithmetic being full first order (including quantifiers) decidable—see discussion at the end of Section 8.
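As a sanity check on the final static entailment above, one can discharge it with an off-the-shelf SMT solver. The paper does not prescribe any particular tool, so the following minimal sketch, using the Z3 Python bindings, is only an illustration of ours: the entailment is valid exactly when its hypotheses together with the negated goal are unsatisfiable.

    from z3 import Ints, Solver, Implies, And, Not

    x, y, z = Ints('x y z')
    s = Solver()
    # hypotheses of the reduced, fully static entailment
    s.add(x >= 0)
    s.add(Implies(x >= 0, x + y >= 0))
    s.add(z >= 0)
    # valid iff hypotheses plus the negated goal are unsatisfiable
    s.add(Not(And(x + y + z >= 0, x >= 0)))
    print(s.check())  # expected output: unsat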
With function types, subtyping conveys many consistent entailment constraints that must be handled together, because the same interpretation for an unknown formula must be maintained between different constraints. The reduction approach above can be extended to the higher-order case noting that constraints involved in subtyping form a tree structure, sharing common prefixes.

Proposition 14 (Constraint reduction). Consider a set of consistent entailment constraints sharing a common prefix (Φ̃1, y:?):

{ Φ̃1, y:?, Φ2^i |≈ ri(x⃗, y, z⃗i) }

where x⃗ = dom(Φ̃1) (resp. z⃗i = dom(Φ2^i)) is the set of variables bound in Φ̃1 (resp. Φ2^i). Let z⃗ = ∪i z⃗i and define the canonical formula q(x⃗, ν) and its local restriction q′(x⃗, ν) as follows:

q(x⃗, ν) = ∀z⃗. ⋀i (⌊Φ2^i⌋ → ri(x⃗, ν, z⃗))
q′(x⃗, ν) = (∃ν. q(x⃗, ν)) → q(x⃗, ν)

Let Φ1 ∈ γΦ(Φ̃1) be any logical environment in the concretization of Φ̃1. Then the following proposition holds: there exists p(x⃗, ν) ∈ γp(?) such that ⌊Φ1, y:p(x⃗, ν), Φ2^i⌋ |= ri(x⃗, y, z⃗i) for every i if and only if ⌊Φ1, y:q′(x⃗, ν), Φ2^i⌋ |= ri(x⃗, y, z⃗i) for every i.

In words, when a set of consistent entailment constraints share the same prefix, we can replace the rightmost gradual formula by a canonical static formula that justifies the satisfiability of the constraints.7 This reduction preserves the set of interpretations of the prefix Φ̃1 that justify the satisfaction of the constraints.

7 Proposition 14 states the equivalence only when the bound for the gradual formula is ⊤—recall that ? stands for ⊤ ∧ ?. Dealing with arbitrary imprecise formulas p ∧ ? requires ensuring that the generated formula is more specific than p, but the reasoning is similar (Appendix A.11).

The algorithmic subtyping judgment Φ̃ ⊢ T̃1 ≲ T̃2 is calculated in two steps. First, we recursively traverse the structure of types to collect a set of constraints C*, made static by reduction. The full definition of constraint collection is in Appendix A.11. Second, we check that these constraints, prepended with Φ̃, again reduced to static constraints, can be satisfied. The algorithmic definition of consistent subtyping coincides with Definition 4 (Sect. 3.2), considering the local interpretation of gradual formulas.

Proposition 15. The algorithmic judgment Φ̃ ⊢ T̃1 ≲ T̃2 holds if and only if T̃1 is a consistent subtype of T̃2 under Φ̃ according to Definition 4 (Sect. 3.2).

8. Extension: Measures

The derivation of the gradual refinement language is largely independent from the refinement logic. We now explain how to extend our approach to support a more expressive refinement logic, by considering measures (Vazou et al. 2014), i.e. inductively-defined functions that axiomatize properties of data types.

Suppose for example a type IntList of lists of integers. The measure len determines the length of a list:

measure len : IntList → Int
len([])      = 0
len(x :: xs) = 1 + len(xs)

Measures can be encoded in the quantifier-free logic of equality, uninterpreted functions and linear arithmetic (QF-EUFLIA): a fresh uninterpreted function symbol is defined for every measure, and each measure equation is translated into a refined type for the corresponding data constructor (Vazou et al. 2014). For example, the definition of len yields refined types for the constructors of IntList, namely {ν:IntList | len(ν) = 0} for the empty list, and x:Int → l:IntList → {ν:IntList | len(ν) = 1 + len(l)} for ::.

Appropriately extending the syntax and interpretation of gradual formulas with measures requires some care. Suppose a function get to obtain the n-th element of a list, with type:

l:IntList → n:{ν:Int | 0 ≤ ν < len(l)} → Int

Consider now a function that checks whether the n-th element of a list is less than a given number:

let f (l: {ν:IntList | ?}) (n: {ν:Int | ?}) (m: {ν:Int | ?}) = (get l n) < m

We expect this code to be accepted statically because n could stand for some valid index. We could naively consider that the unknown refinement of n stands for 0 ≤ ν < len(l). This interpretation is however non-local, because it restricts len(l) to be strictly greater than zero; a non-local interpretation would then also allow the refinement for m to stand for some formula that contradicts this restriction on l. We must therefore adhere to locality to avoid contradictions (Sect. 4.4). Note that we can accept the definition of f based on a local interpretation of gradual formulas: the unknown refinement of l could stand for len(l) > 0, and the refinement of n could stand for a local constraint on n based on the fact that len(l) > 0 holds, i.e. len(l) > 0 → 0 ≤ ν < len(l).

To easily capture the notion of locality we leverage the fact that measures can be encoded in a restricted fragment of QF-EUFLIA that contains only unary function symbols, and does not allow for nested uninterpreted function applications. We accordingly extend the syntax of formulas in the static language, with f ∈ MEASURE:

p ::= ... | f v | f ν

For this logic, locality can be defined syntactically, mirroring Definition 12. It suffices to notice that, in addition to restricting the refinement variable ν, formulas are also allowed to restrict a measure applied to ν. To check locality of a formula, we consider each syntactic occurrence of an application f(ν) as an atomic constant.

Definition 22 (Local formula for measures). Let p be a formula in the restricted fragment of QF-EUFLIA. Let p′ be the formula resulting from substituting every occurrence of f(ν), for some function f, by a fresh symbol c_f(ν). Then, let X be the set of all symbols c_f(ν). We say that p is local if ∃X.∃ν.p′ is valid.

The critical property of local formulas is that they always preserve satisfiability (recall Proposition 3).

Proposition 16. Let Φ be a logical environment with formulas in the restricted fragment of QF-EUFLIA, x⃗ = dom(Φ) the vector of variables bound in Φ, and q(x⃗, ν) a local formula. If ⌊Φ⌋ is satisfiable then ⌊Φ⌋ ∪ { q(x⃗, ν) } is satisfiable.

The definition of the syntax and interpretation of gradual formulas follows exactly the definition from Section 4.4, using the new definition of locality. Then, the concretization function for formulas is naturally lifted to refinement types, gradual logical environments and subtyping triples, and the gradual language is derived as described in previous sections. Recall that the derived semantics relies on ⟨αT, γT⟩ and ⟨ατ, γτ⟩ being partial Galois connections. The abstraction function for formulas with measures is again partial, thus αT and ατ are also partial. Therefore, we must establish that ⟨αT, γT⟩ and ⟨ατ, γτ⟩ are still partial Galois connections for the operators used in the static and dynamic semantics.

Lemma 17 (Partial Galois connections for measures). The pair ⟨αT, γT⟩ is a { tsubst˘ }-partial Galois connection. The pair ⟨ατ, γτ⟩ is a { F_I<:, F◦<:, F◦[v/x]<: }-partial Galois connection.

To sum up, adapting our approach to accommodate a given refinement logic requires extending the notion of locality (preserving satisfiability), and establishing the partial Galois connections for the relevant operators. This is enough to derive a gradual language that satisfies the properties of Sections 3.3 and 6.4.

Additionally, care must be taken to maintain decidable checking. For example, our algorithmic approach to consistent subtyping (Section 7) relies on the theory of linear arithmetic accepting quantifier elimination, which is of course not true in all theories. The syntactic restriction for measures allows us to exploit the same approach for algorithmic consistent subtyping, since we can always see a formula in the restricted fragment of QF-EUFLIA as an "equivalent" formula in QF-LIA. But extensions to other refinement logics may require devising other techniques, or may turn out to be undecidable; this opens interesting venues for future work.
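To make the QF-EUFLIA encoding of measures concrete, here is a minimal sketch of ours (the paper does not prescribe a tool) using the Z3 Python bindings: len becomes an uninterpreted function over an abstract sort IntList, and we check that the local interpretations chosen above for the refinements of l and n indeed justify the precondition of get in the body of f.

    from z3 import DeclareSort, Const, Function, IntSort, Int, Solver, Implies, And, Not

    IntList = DeclareSort('IntList')              # lists stay abstract
    length = Function('len', IntList, IntSort())  # the measure len as an uninterpreted function
    l = Const('l', IntList)
    n = Int('n')

    s = Solver()
    s.add(length(l) > 0)                                        # local interpretation chosen for l's ?
    s.add(Implies(length(l) > 0, And(0 <= n, n < length(l))))   # local interpretation chosen for n's ?
    s.add(Not(And(0 <= n, n < length(l))))                      # negated precondition of (get l n)
    print(s.check())  # expected output: unsat, so the call is justified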
9. Discussion

We now discuss two interesting aspects of the language design for gradual refinement types.

Flow sensitive imprecision. An important characteristic of the static refinement type system is that it is flow sensitive. Flow sensitivity interacts with graduality in interesting ways. To illustrate, recall the following example from the introduction:

check :: Int → {ν:Bool | ?}
get :: {ν:Int | ν ≥ 0} → Int

The gradual type system can leverage the imprecision in the return type of check and transfer it to branches of a conditional. This allows us to statically accept the example from the introduction, rewritten here in normal form:

let b = check(x) in
if b then get(x)
else (let y = −x in get(y))

Assuming no extra knowledge about x, in the then branch the following consistent entailment constraint must be satisfied:

x:⊤, b:?, b = true, z:(ν = x) |≈ z ≥ 0

Similarly, for the else branch, the following consistent entailment constraint must be satisfied:

x:⊤, b:?, b = false, y:(ν = −x), z:(ν = y) |≈ z ≥ 0

Note that the assumption b = true in the first constraint and b = false in the second are inserted by the type system to allow flow sensitivity. The first (resp. second) constraint can be trivially satisfied by choosing ? to stand for ν = false (resp. ν = true). This choice introduces a contradiction in each branch, but is not a violation of locality: the contradiction results from the static formula inserted by the flow-sensitive type system. Intuitively, the gradual type system accepts the program because—without any static information on the value returned by check—there is always the possibility for each branch not to be executed.

The gradual type system also enables the smooth transition to more precise refinements. For instance, consider a different signature for check, which specifies that if it returns true, then the input must be positive:8

check :: x:Int → {ν:Bool | (ν = true → x ≥ 0) ∧ ?}

In this case the then branch can be definitively accepted, with no need for dynamic checks. However, the static information is not sufficient to definitely accept the else branch. In this case, the type system can no longer rely on the possibility that the branch is never executed, because we know that, at least for negative inputs, check will return false. Nevertheless, the type system can optimistically assume that check returns false only for negative inputs. The program is therefore still accepted statically, and subject to a dynamic check in the else branch.

8 The known part of the annotation may appear to be non-local; its locality becomes evident when writing the contrapositive x < 0 → ν = false.

Eager vs. lazy failures. AGT allows us to systematically derive the dynamic semantics of the gradual language. This dynamic semantics is intended to serve as a reference, and not as an efficient implementation technique. Therefore, defining an efficient cast calculus and a correct translation from gradual source programs is an open challenge.

A peculiarity of the dynamic semantics of gradual refinement types derived with AGT is the consistent term substitution operator (Section 6.3), which detects inconsistencies at the time of beta reduction. This in turn requires verifying consistency relations on open terms, hence resorting to SMT-based reasoning at runtime; a clear source of inefficiency.

We observe that AGT has been originally formulated to derive a runtime semantics that fails as soon as is justifiable. Eager failures in the context of gradual refinements incur a particularly high cost. Therefore, it becomes interesting to study postponing the detection of inconsistencies as late as possible, i.e. while preserving soundness. If justifications can be delayed until closed terms are reached, runtime checks boil down to direct evaluations of refinement formulas, with no need to appeal to the SMT solver. To the best of our knowledge, capturing different eagerness regimes for failures within the AGT methodology has not yet been studied, even in a simply-typed setting; this is an interesting venue for future work.

10. Related Work

A lot of work on refining types with properties has focused on maintaining statically decidable checking (e.g. through SMT solvers) via restricted refinement logics (Bengtson et al. 2011; Freeman and Pfenning 1991; Rondon et al. 2008; Xi and Pfenning 1998). The challenge is then to augment the expressiveness of the refinement language to cover more interesting programs without giving up on automatic verification and inference (Chugh et al. 2012b; Kawaguchi et al. 2009; Vazou et al. 2013, 2015). Despite these advances, refinements are necessarily less expressive than using higher-order logics such as Coq and Agda. For instance, subset types in Coq are very expressive but require manual proofs (Sozeau 2007). F* hits an interesting middle point between both worlds by supporting an expressive higher-order logic with a powerful SMT-backed type checker and inferencer based on heuristics, which falls back on manual proving when needed (Swamy et al. 2016).

Hybrid type checking (Knowles and Flanagan 2010) addresses the decidability challenge differently: whenever the external prover is not statically able to either verify or refute an implication, a cast is inserted, deferring the check to runtime. Refinements are arbitrary boolean expressions that can be evaluated at runtime. Refinements are however not guaranteed to terminate, jeopardizing soundness (Greenberg et al. 2010).

Earlier, Ou et al. (2004) developed a core language with refinement types, featuring three constructs: simple{e}, to denote that expression e is simply well-typed, dependent{e}, to denote that the type checker should statically check all refinements in e, and assert(e, τ) to check at runtime that e produces a value of type τ. The semantics of the source language is given by translation to an internal language, inserting runtime checks where needed.

Manifest contracts (Greenberg et al. 2010) capture the general idea of allowing for explicit typecasts for refinements, shedding light on the relation with dynamic contract checking (Findler and Felleisen 2002) that was initiated by Gronski and Flanagan (2007). More recently, Tanter and Tabareau (2015) provide a mechanism for casting to subset types in Coq with arbitrary decidable propositions. Combining their cast mechanism with the implicit coercions of Coq allows refinements to be implicitly asserted where required.

None of these approaches classify as gradual typing per se (Siek and Taha 2006; Siek et al. 2015), since they either require programmers to explicitly insert casts, or they do not mediate between various levels of type precision. For instance, Ou et al. (2004) only support either simple types or fully-specified refinements, while a gradual refinement type system allows for, and exploits, partially-specified refinements such as ν > 0 ∧ ?.

Finally, this work relates in two ways to the gradual typing literature. First, our development is in line with the relativistic view of gradual typing already explored by others (Bañados Schwerter et al. 2014; Disney and Flanagan 2011; Fennell and Thiemann 2013; Thiemann and Fennell 2014), whereby the "dynamic" end of the spectrum is a simpler static discipline. We extend the state-of-the-art by gradualizing refinement types for the first time, including dependent function types. Notably, we prove that our language satisfies the gradual guarantee (Siek et al. 2015), a result that has not been established for any of the above-mentioned work.

Second, this work builds upon and extends the Abstracting Gradual Typing (AGT) methodology of Garcia et al. (2016). It confirms the effectiveness of AGT to streamline most aspects of gradual language design, while raising the focus on the key issues. For gradual refinements, one of the main challenges was to devise a practical interpretation of gradual formulas, coming up with the notion of locality of formulas. To support the local interpretation of gradual formulas, we had to appeal to partial Galois connections (Miné 2004). This approach should be helpful for future applications of AGT in which the interpretation of gradual types is not as straightforward as in prior work. Also, while Garcia et al. (2016) focus exclusively on consistent subtyping transitivity as the locus of runtime checking, dealing with refinement types requires other meta-theoretic properties used for type preservation—lemmas related to substitution in both terms and types—to be backed by evidence in the gradual setting, yielding new consistent operators that raise new opportunities for runtime failure.
11. Conclusion

Gradual refinement types support a smooth evolution between simple types and logically-refined types. Supporting this continuous slider led us to analyze how to deal with imprecise logical information. We developed a novel semantic and local interpretation of gradual formulas that is key to practical gradual refinements. This specific interpretation of gradual formulas is the main challenge in extending the refinement logic, as illustrated with measures. We also demonstrate the impact of dependent function types in a gradual language, requiring new notions of term and type substitutions with runtime checks. This work should inform the gradualization of other advanced type disciplines, both regarding logical assertions (e.g. Hoare logic) and full-fledged dependent types.

A most pressing perspective is to combine gradual refinement types with type inference, following the principled approach of Garcia and Cimini (2015). This would allow progress towards a practical implementation. Such an implementation should also target a cast calculus, such as a manifest contract system, respecting the reference dynamic semantics induced by the AGT methodology. Finally, while we have explained how to extend the refinement logic with measures, reconciling locality and decidability with more expressive logics—or arbitrary terms in refinements—might be challenging.

Acknowledgments

We would like to thank Ronald Garcia, Jorge Pérez, Niki Vazou and the anonymous referees for their feedback, and Nick, Jesse and Jordan for their precious inspiration.

References

F. Bañados Schwerter, R. Garcia, and É. Tanter. A theory of gradual effect systems. In 19th ACM SIGPLAN Conference on Functional Programming (ICFP 2014), pages 283–295, Gothenburg, Sweden, Sept. 2014. ACM Press.

F. Bañados Schwerter, R. Garcia, and É. Tanter. Gradual type-and-effect systems. Journal of Functional Programming, 26:19:1–19:69, Sept. 2016.

J. Bengtson, K. Bhargavan, C. Fournet, A. D. Gordon, and S. Maffeis. Refinement types for secure implementations. ACM Transactions on Programming Languages and Systems, 33(2):8:1–8:45, Jan. 2011.

R. Chugh. Nested Refinement Types for JavaScript. PhD thesis, University of California, Sept. 2013.

R. Chugh, D. Herman, and R. Jhala. Dependent types for JavaScript. In 27th ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA 2012), pages 587–606, Tucson, AZ, USA, Oct. 2012a. ACM Press.

R. Chugh, P. M. Rondon, A. Bakst, and R. Jhala. Nested refinements: a logic for duck typing. In 39th annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2012), pages 231–244, Philadelphia, USA, Jan. 2012b. ACM Press.

A. Church. A formulation of the simple theory of types. J. Symbolic Logic, 5(2):56–68, 06 1940.

P. Cousot and R. Cousot. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In 4th ACM Symposium on Principles of Programming Languages (POPL 77), pages 238–252, Los Angeles, CA, USA, Jan. 1977. ACM Press.

T. Disney and C. Flanagan. Gradual information flow typing. In International Workshop on Scripts to Programs, 2011.

L. Fennell and P. Thiemann. Gradual security typing with references. In Computer Security Foundations Symposium, pages 224–239, June 2013.

R. B. Findler and M. Felleisen. Contracts for higher-order functions. In 7th ACM SIGPLAN Conference on Functional Programming (ICFP 2002), pages 48–59, Pittsburgh, PA, USA, Sept. 2002. ACM Press.

C. Flanagan. Hybrid type checking. In 33th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2006), pages 245–256, Charleston, SC, USA, Jan. 2006. ACM Press.

T. Freeman and F. Pfenning. Refinement types for ML. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '91), pages 268–277. ACM Press, 1991.

R. Garcia and M. Cimini. Principal type schemes for gradual programs. In 42nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2015), pages 303–315. ACM Press, Jan. 2015.

R. Garcia, A. M. Clark, and É. Tanter. Abstracting gradual typing. In 43rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2016), St Petersburg, FL, USA, Jan. 2016. ACM Press.

M. Greenberg, B. C. Pierce, and S. Weirich. Contracts made manifest. In 37th annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2010), pages 353–364. ACM Press, Jan. 2010.

J. Gronski and C. Flanagan. Unifying hybrid types and contracts. In Trends in Functional Programming, pages 54–70, 2007.

W. A. Howard. The formulae-as-types notion of construction. In J. P. Seldin and J. R. Hindley, editors, To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus, and Formalism, pages 479–490. Academic Press, New York, 1980. Reprint of 1969 article.

M. Kawaguchi, P. Rondon, and R. Jhala. Type-based data structure verification. In Proceedings of the 30th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2009), pages 304–315. ACM Press, June 2009.

K. Knowles and C. Flanagan. Hybrid type checking. ACM Trans. Program. Lang. Syst., 32(2), 2010.

N. Lehmann and É. Tanter. Formalizing simple refinement types in Coq. In 2nd International Workshop on Coq for Programming Languages (CoqPL'16), St. Petersburg, FL, USA, Jan. 2016.

A. Miné. Weakly Relational Numerical Abstract Domains. PhD thesis, École Polytechnique, Dec. 2004.

X. Ou, G. Tan, Y. Mandelbaum, and D. Walker. Dynamic typing with dependent types. In Proceedings of the IFIP International Conference on Theoretical Computer Science, pages 437–450, 2004.

P. M. Rondon, M. Kawaguci, and R. Jhala. Liquid types. In 29th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2008), pages 159–169, Tucson, AZ, USA, June 2008. ACM Press.

J. G. Siek and W. Taha. Gradual typing for functional languages. In Scheme and Functional Programming Workshop, pages 81–92, Sept. 2006.

J. G. Siek, M. M. Vitousek, M. Cimini, and J. T. Boyland. Refined criteria for gradual typing. In 1st Summit on Advances in Programming Languages (SNAPL 2015), pages 274–293, 2015.

M. Sozeau. Subset coercions in Coq. In Types for Proofs and Programs, volume 4502 of LNCS, pages 237–252. Springer-Verlag, 2007.

N. Swamy, C. Hritcu, C. Keller, A. Rastogi, A. Delignat-Lavaud, S. Forest, K. Bhargavan, C. Fournet, P.-Y. Strub, M. Kohlweiss, J. K. Zinzindohoue, and S. Zanella Béguelin. Dependent types and multi-monadic effects in F*. In 43rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2016), pages 256–270, St Petersburg, FL, USA, Jan. 2016. ACM Press.

É. Tanter and N. Tabareau. Gradual certified programming in Coq. In Proceedings of the 11th ACM Dynamic Languages Symposium (DLS 2015), pages 26–40, Pittsburgh, PA, USA, Oct. 2015. ACM Press.

P. Thiemann and L. Fennell. Gradual typing for annotated type systems. In Z. Shao, editor, 23rd European Symposium on Programming Languages and Systems (ESOP 2014), volume 8410 of LNCS, pages 47–66, Grenoble, France, 2014. Springer-Verlag.

N. Vazou, P. M. Rondon, and R. Jhala. Abstract refinement types. In M. Felleisen and P. Gardner, editors, Proceedings of the 22nd European Symposium on Programming Languages and Systems (ESOP 2013), volume 7792 of LNCS, pages 209–228, Rome, Italy, Mar. 2013. Springer-Verlag.

N. Vazou, E. L. Seidel, R. Jhala, D. Vytiniotis, and S. L. P. Jones. Refinement types for Haskell. In 19th ACM SIGPLAN Conference on Functional Programming (ICFP 2014), pages 269–282, Gothenburg, Sweden, Sept. 2014. ACM Press.

N. Vazou, A. Bakst, and R. Jhala. Bounded refinement types. In 20th ACM SIGPLAN Conference on Functional Programming (ICFP 2015), pages 48–61, Vancouver, Canada, Sept. 2015. ACM Press.

P. Vekris, B. Cosman, and R. Jhala. Refinement types for TypeScript. In Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2016), pages 310–325, Santa Barbara, CA, USA, June 2016. ACM Press.

H. Xi and F. Pfenning. Eliminating array bound checking through dependent types. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI '98), pages 249–257. ACM Press, 1998.

A. Complete Formalization and Proofs

A.1 Static Refinement Types

In this section we present auxiliary definitions and properties for the static refinement type system missing from the main body. Proofs of type safety and refinement soundness of this system were formalized in Coq and presented at the CoqPL workshop (Lehmann and Tanter 2016).

Definition 23 (Logical extraction).
⌊{ν:B | p}⌋ = p
⌊x:T1 → T2⌋ = ⊤
⌊x:p⌋ = p[x/ν]
⌊x1:p1, ..., xn:pn⌋ = ⌊x1:p1⌋ ∪ ··· ∪ ⌊xn:pn⌋

Definition 24 (Well-formedness).
(wfp)        fv(p) ⊆ dom(Φ)  ⟹  Φ ⊢ p
(wf-refine)  Φ ⊢ p  ⟹  Φ ⊢ {ν:B | p}
(wf-fun)     Φ ⊢ T1 and Φ, x:⌊T1⌋ ⊢ T2  ⟹  Φ ⊢ x:T1 → T2
(wf-empty)   ⊢ •
(wf-env)     ⊢ Φ, Φ ⊢ p, and x ∉ dom(Φ)  ⟹  ⊢ Φ, x:p

Definition 26 (Term precision).
(Px)   x ⊑ x
(Pc)   c ⊑ c
(Pλ)   T̃1 ⊑ T̃2 and t1 ⊑ t2  ⟹  λx:T̃1.t1 ⊑ λx:T̃2.t2
(P::)  t1 ⊑ t2 and T̃1 ⊑ T̃2  ⟹  t1 :: T̃1 ⊑ t2 :: T̃2
(Papp) t1 ⊑ t2 and v1 ⊑ v2  ⟹  t1 v1 ⊑ t2 v2
(Pif)  v1 ⊑ v2, t11 ⊑ t21, and t12 ⊑ t22  ⟹  if v1 then t11 else t12 ⊑ if v2 then t21 else t22
(Plet) t11 ⊑ t21 and t12 ⊑ t22  ⟹  let x = t11 in t12 ⊑ let x = t21 in t22

Definition 27 (Precision for gradual logical environments). Φ̃1 is less imprecise than Φ̃2, notation Φ̃1 ⊑ Φ̃2, if and only if γΦ(Φ̃1) ⊆ γΦ(Φ̃2).

A.3 Static Criteria for Gradual Refinement Types

In this section we prove the properties of the static semantics for gradual refinement types. We assume a partial Galois connection ⟨αp, γp⟩ such that γp(p) = { p } and αp({ p }) = p.

Lemma 21. αT({ T }) = T and γT(T) = { T }.

Proof. By induction on the structure of T and using the definition assumed for αp and γp on singleton sets and precise formulas.

Definition 25 (Small step operational semantics).

t1 ⟼ t1′  ⟹  let x = t1 in t2 ⟼ let x = t1′ in t2
let x = v in t ⟼ t[v/x]
t1 ⟼ t1′  ⟹  t1 t2 ⟼ t1′ t2
t2 ⟼ t2′  ⟹  v t2 ⟼ v t2′
(λx:T.t) v ⟼ t[v/x]
t1 ⟼ t1′  ⟹  if t1 then t2 else t3 ⟼ if t1′ then t2 else t3
if true then t2 else t3 ⟼ t2
if false then t2 else t3 ⟼ t3
c v ⟼ δc(v)

Lemma 22. Φ ⊢ T1 ≲ T2 if and only if Φ ⊢ T1 <: T2.

Proof. Direct by Lemma 21 and the definition of consistent subtyping.

Lemma 23. T⟦v/x⟧ = T[v/x].

Proof. Direct by Lemma 21 and the definition of consistent type substitution.

Proposition 1 (Equivalence for fully-annotated terms). For any t ∈ TERM, Γ; Φ ⊢S t : T if and only if Γ; Φ ⊢ t : T.

Proof. From left to right, by induction on the static typing derivation using Lemmas 22 and 23. From right to left, by induction on the gradual typing derivation using the same lemmas.

Proposition 18 (Type preservation). If •; • ⊢ t : T and t ⟼ t′ then •; • ⊢ t′ : T′ and • ⊢ T′ <: T.

Proposition 19 (Progress). If • ; • ` t : T then t is a value or t 7−→ t0. Proposition 24 (αT is sound). If αT (TÛ) is defined, then TÛ ⊆ Proposition 20 (Refinement soundness). If • ; • ` t : {ν :B | p} γT (αT (TÛ)). and t 7−→∗ v then p[v/ν] is valid. To be exact, the Coq development formalizes the same system, Proof. By induction on the structure of α (T ) save for some inessential details. First, the Coq development does T Û not make use of the logical environment. This distinction was Case ({ν : B | pe}). By inversion TÛ = { {ν :B | pi}}. Applying necessary to ease gradualization. Second, as required by the AGT definition of γT and αT . approach, the system presented in the paper uses subtyping in an algorithmic style, while the Coq development uses a separate γT (αT (TÛ)) = γT ({ν :B | αp{ pi }}) subsumption rule. = {{ν :B | p} | p ∈ γp(αp({ pi })) } A.2 Gradual Refinement Types Auxiliary Definitions ⊇ { {ν :B | p} | p ∈ { pi }} by Proposition 40 This section present auxiliary definition for the gradual refinement = { {ν :B | pi}} = TÛ type system. 0 Case (x:Te1 → Te2). By inversion TÛ = { x:Ti1 → Ti2 }. 0 Lemma 29. If Te v Te then T‡e[v/x] v Te‡[v/x]. γT (αT (TÛ)) = γT (x:αT ({ Ti1 } → αT ({ Ti2 }))) Proof. By definition of v we know γ (T ) ⊆ γ (T 0). By defini- = {x:T → T | T ∈ γ (α ({ T })) ∧ T e T e 1 2 1 T T i1 0 tion of collecting lifting tsubst˘ (γT (Te)) ⊆ tsubst˘ (γT (Te )). And T2 ∈ γT (αT ({ Ti2 }))} finally by monotonicity of αT since it forms a Galois connection 0 ⊇ { x:T1 → T2 | T1 ∈ { Ti1 } ∧ T2 ∈ { Ti2 }} we conclude αT (tsubst˘ (γT (Te))) v αT (tsubst˘ (γT (Te ))). By IH. Lemma 30. If Φe ` Te, Φe v Φe0 and Te v Te0 then Φe ` Te ⊇ { x:Ti1 → Ti2 } Proof. Direct since domain of Φe is the same as the domain of Φe0. Proposition 25 (α is optimal). T Lemma 31 (Static gradual guarantee for open terms). If Γ1 ; Φe1 ` If α (T ) is defined and T ⊆ γ (T ) then α ( T ) v T . T Û Û T e T Û e t1 : Te1, t1 v t2, Γ1 v Γ2 Φe1 v Φe2, then Γ2 ; Φe2 ` t2 : Te2 and Te1 v Te2. Proof. By induction on the structure of Te. Proof. By induction on the typing derivation. Case ({ν : B | pe}). Then TÛ = { {ν :B | pi}} and (1) { pi } ⊆ γ (p ). It suffices to show that γ (α (T )) ⊆ γ (T ). Applying Case (Tx-refine). Because we give exact types, the type for the p e T T Û T e e the definition of αT and γT . variable is preserved. We conclude noting that x v x by (Px). γ (α (T )) = γ ({ν :B | α ({ p })}) T T Û T p i Case (eTx-fun). Under the same environment Γ the type for the = {{ν :B | p} | p ∈ γp(αp({ pi })) } variable is also the same. We also conclude noting that x v x by (Px). ⊆ { {ν :B | p} | p ∈ γp(pe) } by (1) and Proposition 41 = γ ({ν :B | p }). T e Case (eTc). Trivial since constants are always given the same type and by rule (P c) we have c v c. Case (x : Te1 → Te2). TÛ = { x:Ti1 → Ti2 }, (1) { Ti1 } ⊆ Case (Tλ). γT (Te1) and (2) { Ti2 } ⊆ γT (Te2). It suffices to show that e γ (α (T )) ⊆ γ (T ) α γ T T Û T e . Applying the definition of T and T . Φe ` Te1 Γ, x:Te1 ; Φe, x: Te1 ` t : Te2 (eTλ) L M (2) γT (αT (TÛ)) = γT (x:αT ({ Ti1 }) → αT ({ Ti2 })) Γ; Φe ` λx:Te1.t :(x:Te1 → Te2) = {x:T → T | T ∈ γ (α ({ T }))∧ 0 0 0 1 2 1 T T i1 Let t2 such that λx:Te1.t v t2, Γ such that Γ v Γ and Φe 0 0 0 such that Φ v Φ . By inversion on v we have (t2 = λx : T .t ), T2 ∈ γT (αT ({ Ti2 }))} e e e1 T v T 0 and t v t0. ⊆ {x:T → T | T ∈ γ (T )∧ e1 e1 1 2 1 T e1 Applying induction hypothesis to premises of 2 we have:

T2 ∈ γT (Te2)} by (1), (2) and IH 0 0 0 0 0 0 Γ , x:Te1 ;Φ , x: Te1 ` t : Te2 L M = γT (x:Te1 → Te2). 0 0 0 such that Te2 v Te2. We also known that Φe ` Te1 by Lemma 30. Then applying (eTλ) we have

Lemma 26 (Inversion lemma for precision of arrows). If x:Te1 → 0 0 0 0 0 0 0 0 0 0 0 Γ ;Φ ` λx:Te1.t : x:Te1 → Te2 T2 v x:T1 → T2 then T1 v T1 and T2 v T2. e e e e e e e 0 0 We conclude by noting that x : T1 → T2 v x : T1 → T2 by 0 0 e e e e Proof. By definition of v, γT (x : Te1 → Te2) ⊆ γT (x : Te1 → Te2), Lemma 27. thus 2 { x:T1 → T2 | hT1,T2i ∈ γ (Te1, Te2) } ⊆ Case (eTapp). { x:T 0 → T 0 | hT 0,T 0i ∈ γ2(T 0, T 0) } 1 2 1 2 e1 e2 Γ; Φe ` t : x:Te1 → Te2 Γ; Φe ` v : Te Φe ` Te . Te1 0 0 (Tapp) Then, γT (Te1) ⊆ γT (Te1) and γT (Te2) ⊆ γT (Te2), and by defi- e 0 0 Γ; Φe ` t v : Te2 v/x nition of v we have Te1 v Te1 and Te2 v Te2. J K (3) 0 0 0 0 0 0 Let t2 such that t v v t2, Γ such that Γ v Γ and Φ such that Lemma 27. If T1 v T and T2 v T then x:T1 → T2 v x:T → e e e1 e e2 e e e1 Φ v Φ0. By inversion on v we have (t = t0 v0), t v t0 and T 0 e e 2 e2 v v v0. Proof. Direct by definition of precision. Applying induction hypothesis to premises of 3. we have 0 0 0 0 0 0 0 0 Γ ; Φe ` t :(x:Te1 → Te2) (4) Lemma 28. If Φe ` Te1 . Te2, Φe v Φe , Te1 v Te1 and Te2 v Te2 then 0 0 0 Γ0 ; Φ0 ` v0 : T 0 Φe ` Te1 . Te2. e e (5) 0 0 Proof. By definition of consistent subtyping there exists hΦ,T1,T2i ∈ such that x : Te1 → Te2 v x : Te1 → Te2. Inverting this with Lemma 26 we have T v T 0. γτ (Φe, Te1, Te2) such that Φ ` T1 <: T2. By definition of v we e2 e2 0 0 We also know by Lemma 28 that also have γΦ (Φ)e ⊆ γΦ (Φe ), γT (Te1) ⊆ γT (Te1) and γT (Te2) ⊆ 0 0 0 0 0 0 γT (Te2). Therefore hΦ,T1,T2i is also in γτ (Φe, Te1, Te2). Φe ` Te . Te1 (6) Then using 4, 5 and 6 as premises for (eTapp) we conclude. By applying IH on premises of 17 we have: 0 0 0 0 0 0 0 0 0 0 Γ ; Φe ` t : Te1 (18) Γ ; Φe ` t v : Te2 v /x J K 0 0 0 0 such that Te1 v Te1. By Lemma 28: We conclude by noting that Te2 v /x v Te2 v /x by Lemma 29. J K J K 0 0 0 Φe ` Te1 . Te2 (19) Case (eTif). Using 18 and 19 as premises for (eT::) we conclude Γ; Φe ` v : {ν :Bool | ep } Φe ` Te1 . Te Φe ` Te2 . Te Γ; Φ, x:(v = true) ` t1 : T1 Γ; Φ, x:(v = false) ` t2 : T2 (Tif) e e e e 0 0 0 0 0 e Γ ; Φe ` t :: Te2 : Te2 Γ; Φe ` if v then t1 else t2 : Te (7)

0 0 Let t3 such that if v then t1 else t2 v t3, Γ such that Γ v Γ 0 0 Proposition 2 (Static gradual guarantee). If • ; • ` t1 : Te1 and and Φe such that Φe v Φe . By inversion on v we have (t3 = 0 0 0 0 0 0 t1 v t2, then • ; • ` t2 : T2 and T1 v T2. if v then t1 else t2), t1 v t1, t2 v t2 and v v v . e e e Applying induction hypothesis to premises of 7 and inverting resulting hypotheses. Proof. Direct consequence of Lemma 31.

Γ0 ; Φ0 ` v0 : {ν :Bool | p 0} (8) e e Proposition 32 (αΦ is sound). If αΦ (Φ)Û is defined, then ΦÛ ⊆ 0 0 Γ; Φe, x:(v =true) ` t1 : Te1 (9) γΦ (αΦ (Φ))Û . 0 0 Γ; Φe, x:(v =false) ` t2 : Te2 (10) Proof. Applying αΦ and γΦ , and using αp soundness (Prop- 0 0 such that Te1 v Te1 and Te2 v Te2. erty 40). By Lemma 28 we also know: γΦ (αΦ (Φ))Û = { Φ | ∀x.Φ(x) ∈ γp(αΦ (Φ)(Û x)) } Φ0 ` T 0 T (11) e 1 . = { Φ | ∀x.Φ(x) ∈ γ (α ({ Φ0(x) | Φ0 ∈ Φ })) } 0 0 p p Û Φe ` T2 . T (12) 0 0 ⊇ { Φ | ∀x.Φ(x) ∈ { Φ (x) | Φ ∈ ΦÛ }} by αp soundness Using 8, 9, 10, 11 and 12 as premises for (eTif) we conclude: = ΦÛ 0 0 0 0 0 Γ ; Φe ` if v then t1 else t2 : Te Case (eTlet). Proposition 33 (αΦ is optimal). If αΦ (Φ)Û is defined and ΦÛ ⊆ γΦ (Φ)e Γ; Φ ` t : T Φ, x: T ` T T e 1 e1 e e1 e2 . e then α ( Φ) v Φ. L M Φ Û e Γ, x:Te1 ; Φe, x: Te1 ` t2 : Te2 Φe ` Te (eTlet) L M (13) Γ; Φ ` let x = t in t : T e 1 2 e Proof. It suffices to show γΦ (αΦ (Φ))Û ⊆ γΦ (Φ)e . Applying αΦ and 0 0 γ . Let t3 such that let x = t1 in t2 v t3, Γ such that Γ v Γ and Φ 0 0 Φe such that Φe v Φe . By inversion on v we have t3 = (let x = t0 in t0 ) t v t0 1 2 and . γΦ (αΦ (Φ))Û = { Φ | ∀x.Φ(x) ∈ γp(αΦ (Φ)(Û x)) } Applying IH in premises of 13 we get: 0 0 = { Φ | ∀x.Φ(x) ∈ γp(αp({ Φ (x) | Φ ∈ ΦÛ })) }

0 0 0 0 ⊆ { Φ | ∀x.Φ(x) ∈ γp(Φ(x)) } Γ ; Φe ` t1 : Te1 (14) e 0 0 0 0 0 0 by αp optimality (Property 41) Γ , x:Te1 ; Φe , x: Te1 ` t2 : Te2 (15) L M 0 0 = γΦ (Φ)e such that Te1 v Te1 and Te2 v Te2. By Lemma 28 we also know that 0 0 0 Φe , x: Te1 ` Te2 . Te (16) L M A.4 Partial Galois connection Finally, using 14 and 15 and 16 as premises for (eTlet) we Definition taken verbatim from Miné(2004), reproduced here for conclude: convenience.

Γ0 ; Φ0 ` let x = t0 in t0 : T Definition 28 (Partial Galois connection). Let (C, vC ) and (A, vA) e 1 2 e be two posets, F a set of operators on C, α : C*A a partial Case (eT::). function and γ : A → C a total function. The pair hα, γi is an F-partial Galois connection if and only if: Γ; Φe ` t : Te1 Φe ` Te1 . Te2 (eT::) (17) 1. If α(c) is defined, then c vC γ(α(c)), and Γ; Φe ` t :: Te2 : Te2 2. If α(c) is defined, then c vC γ(a) implies α(c) vA a, and 0 0 0 3. For all F ∈ F and c ∈ C, α(F (γ(c))) is defined. Let t2 such that t :: Te2 v t2, Γ such that Γ v Γ and Φe such 0 0 0 0 that Φe v Φe . By inversion on v we have t2 = t :: Te2, t v t and This definition can be generalized for a set F of arbitrary n-ary 0 Te2 v Te2. operators. A.5 Satisfiability Modulo Theory • If M is a model for bq(~x,ν)c↓, then there exists v such that ↓ We consider the usual notions and terminology of first order logic M[ν 7→ v] |= q(~x,ν). Thus M[ν 7→ v] |= bq(~x,ν)c which means that M[ν 7→ v] |= bp(~x,ν)c◦. Then, M |= an model theory. Let Σ be a signature consisting of a set of func- ◦ tion and prediate symbols. Each function symbol f is associated ∃ν, bp(~x,ν)c then M[ν 7→ v] |= (∃ν, q(~x,ν)) → q(~x,ν). with a non-negative integer, called the arity of f. We call 0-arity • Suppose now that M is not a model for ∃ν, q(~x,ν), then ↓ function symbols constant symbols and denote them by a, b, c and M[ν 7→ v] 6|= bq(~x,ν)c for any v. Thus M[ν 7→ v] |= ◦ ◦ d. We use f, g and h to denote non-constant function symbols, and bp(~x,ν)c and M |= ∃ν, bp(~x,ν)c . x1, x2, x3,... to denote variables. We also use pervasively the re- finement variable ν which has an special meaning in our formal- ization. We write p(x1, . . . , xn) for a formula that may contain variables x , . . . , x . When there is no confusion we abbreviate 1 n Definition 31 (Logical equivalence). We say that p is equivalent to p(x , . . . , x ) as p(~x). When a variable contains the special re- 1 n q, notation p ≡ q, if M |= p if and only if M |= q. finement variable ν we always annotate it explicitly as p(~x,ν). A Σ-structure or model M consist of a non-empty universe Lemma 36. Let p(~x,ν) be a satisfiable formula, then p(~x,ν) ≡ | M | and an interpretation for variables and symbols. We often bp(~x,ν)c↓ ∧ bp(~x,ν)c◦. omit the Σ when it is clear from the context and talk just about a model. Given a model M we use the standard definition interpre- ↓ tation of a formula and denote it as M(p). We use M[x 7→ v] to Proof. Direct since M |= p(~x,ν) implies M |= bp(~x,ν)c . denote a structure where the variable x is interpreted as v, and all other variables, function and predicate symbols remain the sames Definition 32 (Localification of environment). Let Φ be a well- for all other variables. formed logical environment such that Φ is satisfiable. We define Satisfaction M |= p is defined as usual. If M |= p we say its localification as: L M ◦ ◦ that M is a model for p. We extend satisfaction to set of formulas: b•c = • bx:p(ν)c = x:p(ν) M |= ∆ if for all p ∈ ∆, M |= p. A formula p is said to be bΦ, y :p(~x,ν), z :q(~x,y, ν)c◦ = satisfiable if there exists a model M such that M |= p. A set of bΦ, y :p(~x,ν) ∧ bq(~x,y, ν)c↓c◦, z :bq(~x,y, ν)c◦ formulas ∆ entails a formula q if for every model M such that M |= ∆ then M |= q. Lemma 37 (Equivalence of environment localification). Let Φ be a We define a theory T as a collection of models. A formula p well-formed logical environment such that Φ is satisfiable. 
Then ◦ ◦ is said to be satisfiable modulo T if there exists a model M in T bΦc ≡ Φ and for all x the formula bΦc (xL) Mis local. such that M |= p. A set of formulas ∆ entails a formula q modulo T , notation ∆ |=T q, if for all model M ∈ T , M |= ∆ implies Proof. By induction on the structure of Φ using lemmas 35 and M |= q. 36. A.6 Local Formulas Proposition 4. Let Φ be a logical environment. If Φ is satisfiable Definition 29 (Projection). Let p(~x,y) be a formula we define its then there exists an environment Φ0 with the same domainL M such that ↓ 0 0 y-projection as bp(~x,y)cy = ∃y, p(~x). We extend the definition to Φ ≡ Φ and for all x the formula Φ (x) is local. ↓ L M L M sequence of variables as bp(~x,~y)c~y = ∃~y, p(~x,~y). Definition 30 (Localification). Let p(~x,ν) be a satisfiable formula, Proof. Direct consequence of Lemma 37. ◦ ↓ we define its localification on ν as bp(~x,ν)cν = bp(~x,ν)cν → p(~x,ν). Lemma 38 (Entailment is closed under projection). If p(~x,~y) |= ↓ ↓ q(~x,~y) then bp(~x,~y)c~y |= bq(~x,~y)c~y. Proposition 34.

If p ∈ LFORMULA and p ⇒ q then q ∈ LFORMULA.

Proposition 3. Let Φ be a logical environment, ~x = dom(Φ) the Lemma 39. ∃y, p(~x,y) ∨ q(~x,y) ≡ (∃y, p(~x,y)) ∨ (∃y, q(~x,y)) vector of variables bound in Φ, and q(~x,ν) ∈ LFORMULA. If Φ is satisfiable then Φ ∪ { q(~x,ν) } is satisfiable. L M Proof. L M Proof. Let p(~x) = Φ . Because p is satisfiable then there exists ⇒ Let M be a model for ∃y, p(~x,y) ∨ q(~x,y). Then there exists some model M suchL M that M |= p(~x). Because q(~x,ν) is local p p v such that M[y 7→ v] |= p(~x,y) ∨ q(~x,y). If M[y 7→ v] |= then for every model M there exists v such that, M[ν 7→ v] |= p(~x,y) then M |= ∃y, p(~x,y). If M[y 7→ v] |= q(~x,y) then q(~x,v). Let v the value corresponding to M . By construction ~x p p M |= ∃y, q(~x,y). cannot contain ν, thus M [ν 7→ v ] is also a model for p(~x). We p p ⇐ M (∃y, p(~x,y)) ∨ (∃y, q(~x,y)) M |= conclude that M [ν 7→ v ] is a model for p(~x) ∧ q(~x,ν). Let be a model for . If p p ∃y, p( ~x,y) then there exists v such that M[y 7→ v] |= p(~x,y). Lemma 35. Let p(~x,ν) be a satisfiable formula then bp(~x,ν)c◦ is Thus M[y 7→ v] |= p(~x,y) ∨ q(~x,y) and consequently M |= local. ∃p(~x,y) ∨ q(~x,y). The case when M |= ∃y, q( ~x,y) is sym- metric. Proof. Let M be a any model. It suffices to show that M is a model for ∃ν, bp(~x,ν)c◦. A.7 Soundness and Optimality of α p Case (Te = {ν : B | pe}). If pe = p then it holds directly. Then assume p = p ∧ ?. By Lemma 42 p[v/x] is a bound for every This section present soundness and optimality of the pair hαp, γpi. e formula in γp(p ∧ ?) after applying the collecting substitution over Proposition 40 (αp is sound). If αp(Ûp) is defined, then Ûp ⊆ it. We conclude that p[v/x] must be the join of all that formulas γp(αp(Ûp)). because it is also in the set. Proof. By case analysis on when αp(Ûp) is defined. Case (Te = x : Te1 → Te2). Direct by applying the induction hypothesis. Case (Ûp = { p }). γp(αp({ p })) = γp(p) = { p } Case (Ûp ⊆ LFORMULA and b Ûp is defined). Applying the defi- nition of αp and γp: Proposition 6 (Partial Galois connection for gradual types). The γp(αp( Ûp )) = γp(j Ûp ∧ ?) pair hαT , γT i is a { tsubst˘ }-partial Galois connection, where = { q | q  j Ûp } tsubst˘ is the collecting lifting of type substitution, i.e. By definition b yields an upper bound, so if q ∈ Ûp then q  b Ûp. tsubst˘ (TÛ , v, x) = { T [v/x] | T ∈ TÛ } Thus Ûp ⊆ { q | q  b Ûp } = γp(αp(Ûp)). Proof. Direct by Prop. 43.

A.9 Dynamic Semantic Auxiliary Definitions Proposition 41 (αp is optimal). If αp(Ûp) is defined, then Ûp ⊆ γp(p ) implies αp(p) v p . Here we present auxiliary definitions missing from main body e Û e necessary for the dynamic semantics. Proof. By case analysis on the structure of p . e Definition 34 (Intrinsic term full definition). Case (p). Because p cannot be empty it must be that p = { p }. Û Û (In) (Ib) Then, αp(p) = p v p. Û Φ;e n ∈ TERM{ν:Int | ν = n} Φ;e b ∈ TERM{ν:Bool | ν = b} ◦ Case (p ∧ ?). By hypothesis αp(Ûp) must be defined thus Ûp ⊆ (Ix-refine) LFORMULA and p is defined. It suffices to show γ (α (p )) ⊆ {ν:B | p } b Û p p Û Φ;e x e ∈ TERM{ν:B | ν = x} γp(p ). Applying the definition of αp and γp. e (Ix-fun) γp(αp( p )) = γp(j p ∧ ?) Φ; xy:Te1 → Te2 ∈ TERM Û Û e y:Te1 → Te2 = { q | q  j p } Û Φ; tTe1 ∈ TERM e Te1 T2 Then it suffices to show that p  p. By hypothesis p ⊆ γp(p ), Φ, x: T1 ; te ∈ TERM ε . Φ ` T T b Û Û e e e Te2 e e1 . e2 q ∈ p q  p p p (Iλ) L M (I::) so if then . That is is an upper bound for . Then, by T T T Û Û Φ; λxe1 .te2 ∈ TERM Φ; εte1 :: T2 ∈ TERM e x:T1 → T2 e e T2 definition of join b Ûp  p. e e e T Φ; te1 ∈ TERM ε1 . Φ ` T1 (x:T11 → T12) e Te1 e e . e e Φ; v ∈ TERM ε2 . Φ ` T2 T11 e T2 e e . e A.8 Algorithmic Consistent Type Substitution (Iapp) e T x:T →T In this section we provide an algorithmic characterization of con- Φ;( ε1te1 )@ e11 e12 (ε2v) ∈ TERM e Te12 v/x sistent type substitution, which simply performs substitution in the J K known parts of the formulas of a type. We also prove that hα , γ i Φ; u ∈ TERM T T e {ν:Bool | ep } is a partial Galois connection for the collecting type substitution T Φ, x:(v = true); te1 ∈ TERM ε1 . Φ ` T1 . T operator. e Te1 e e e T Φ, x:(v = false); te2 ∈ TERM ε2 . Φ ` T2 T e T2 e e . e Definition 33 (Algorithmic consistent type substitution). (Iif) e T T T Φ;( if u then ε1te1 else ε2te2 )@ ∈ TERM {ν :B | p} v/x = {ν :B | p[v/x]} e e Te J K {ν :B | p ∧ ?} v/x = {ν :B | p[v/x] ∧ ?} T Φ; te11 ∈ TERM ε1 . Φ ` T11 T12 J K e Te11 e e . e T (y :Te1 → Te2) v/x = y :Te1 v/x → Te2 v/x Φ, x:T12 ; te2 ∈ TERM ε2 . Φ, x: T12 ` T2 T e e T2 e e e . e J K J K J K (Ilet) e L M Considering the local interpretation of gradual formulas, this T T T T Φ;( let xe12 = ε1te11 in ε2te2 )@ ∈ TERM definition is equivalent to Definition 5 (Sect. 3.2). e e Te Definition 35 . Lemma 42. If p  q then p[v/x]  p[v/x]. (Intrinsic reduction full definition) tTe −→ r r ∈ (TERM ∪ { error }) 0 Proof. Let M be a model for p[v/x]. Let v be equal to the inter- (R7−→) Te pretation of v in M. Then M[x 7→ v0] is a model for p. Then by tTe 7−→ r hypothesis M[x 7→ v0] |= q and consequently M |= p[v/x]. et −→ et0 et −→ error (Rg) c (Rgerr) c g[et] 7−→ g[et0] g[et] 7−→ error Proposition 43. T‡e[v/x] = Te v/x Te Te T J K t1 7−→ t2 te 7−→ error (Rf) (Rferr) Proof. By induction on the structure of T . T T T e f[t1e] 7−→ f[t2e] f[te] 7−→ error Definition 36 (Evidence domain). Definition 39 (Intrinsic Term precision).

T1 v T2 T1 v T2 t1 v t2 idom(Φe, x:Te11 → Te12, x:Te21 → Te22) = hΦe, Te21, Te11i IPx e e IPc IPλ e e T T c v c xe1 v xe2 λx:Te1.t1 v λx:Te2.t2

ε v ε t v t T v T Proposition 44. If ε . Φe ` x:Te11 → Te12 . x:Te21 → Te22 then P:: 1 2 1 2 e1 e2 idom(ε) . Φe ` Te21 . Te11. ε1t1 :: Te1 v ε2t2 :: Te2

T v T ε v ε ε v ε t v t v v v IPapp e1 e2 11 12 12 22 1 2 1 2 0 0 0 0 0 Proof. Let ε = hΦ , x:T → T , x:T → T i because ε is T T e e11 e12 e21 e22 (ε11t1)@e1 (ε12v1) v (ε12t2)@e2 (ε22v2) self-interior and by monotonicity of ατ we have. 0 0 0 0 0 0 ε11 v ε21 ε21 v ε22 Te1 v Te2 hΦe , Te21, Te11i = ατ ({hΦ,T21,T11i ∈ γτ (Φe , Te21, Te11) | v v v t v t t v t IPif 1 2 11 21 21 22 ∃T12 ∈ γ (T12),T22 ∈ γ (T22), T T T e T e (if v1 then ε11t11 else ε12t12)@e1 v (if v2 then ε21t21 else ε22t22)@e2 Φ ` T21 <: T11 ∧ Φ, x: T21 ` T12 <: T22}) ε v ε ε v ε L 0 M 0 0 11 21 21 22 v ατ ({hΦ,T21,T11i ∈ γτ (Φe , Te21, Te11) | T v T T v T t v t t v t IPlet e11 e21 e12 e22 11 21 12 22 Φ ` T21 <: T11}) T T T T (let xe11 = ε11t11 in ε12t12)@e21 v (let xe21 = ε21t21 in ε22t22)@e22 0 0 0 = I<:(Φe , Te21, Te11)

Definition 37 (Evidence codomain).
  icod(⟨Φ̃, x:T̃₁₁ → T̃₁₂, x:T̃₂₁ → T̃₂₂⟩) = ⟨Φ̃ · x:⌊T̃₂₁⌋, T̃₁₂, T̃₂₂⟩

Proposition 45. If ε ⊳ Φ̃ ⊢ x:T̃₁₁ → T̃₁₂ ≲ x:T̃₂₁ → T̃₂₂ then icod(ε) ⊳ Φ̃, x:⌊T̃₂₁⌋ ⊢ T̃₁₂ ≲ T̃₂₂.

Proof. Let ε = ⟨Φ̃′, x:T̃′₁₁ → T̃′₁₂, x:T̃′₂₁ → T̃′₂₂⟩. Because ε is self-interior, and by monotonicity of ατ, we have:

  ⟨Φ̃′, T̃′₂₁, T̃′₁₂, T̃′₂₂⟩ = ατ({ ⟨Φ, T₂₁, T₁₂, T₂₂⟩ ∈ γτ(Φ̃′, T̃′₂₁, T̃′₁₂, T̃′₂₂) | ∃T₁₁ ∈ γT(T̃′₁₁), Φ ⊢ T₂₁ <: T₁₁ ∧ Φ, x:⌊T₂₁⌋ ⊢ T₁₂ <: T₂₂ })
                        ⊑ ατ({ ⟨Φ, T₂₁, T₁₂, T₂₂⟩ ∈ γτ(Φ̃′, T̃′₂₁, T̃′₁₂, T̃′₂₂) | Φ, x:⌊T₂₁⌋ ⊢ T₁₂ <: T₂₂ })
                        = I<:(Φ̃′ · x:⌊T̃′₂₁⌋, T̃′₁₂, T̃′₂₂)

Thus ⟨Φ̃′ · x:⌊T̃′₂₁⌋, T̃′₁₂, T̃′₂₂⟩ must be self-interior and we are done.

Definition 38 (Evidence codomain substitution).
  icod_v(ε₁, ε₂) = (ε₁ ∘<: idom(ε₂)) ∘<:[v/x] icod(ε₂)

Proposition 46. If ε ⊳ Φ̃ ⊢ x:T̃₁₁ → T̃₁₂ ≲ x:T̃₂₁ → T̃₂₂, Γ; Φ̃ ⊢ u : T̃u and εu ⊳ Φ̃ ⊢ T̃u ≲ T̃₁₁, then icod_u(εu, ε) ⊳ Φ̃ ⊢ T̃₁₂⟦u/x⟧ ≲ T̃₂₂⟦u/x⟧, or icod_u(εu, ε) is undefined.

Proof. Direct by Proposition 45 and the definition of consistent subtyping substitution.

A.10 Dynamic Criteria for Gradual Refinement Types

Lemma 47 (Subtyping narrowing). If Φ₁, Φ₃ ⊢ T₁ <: T₂ and ⊢ Φ₂ then Φ₁, Φ₂, Φ₃ ⊢ T₁ <: T₂.

Proof. By induction on the subtyping derivation.
Case (<:-refine). Trivial because the logic is monotone.
Case (<:-fun). Direct by applying the induction hypothesis.

Lemma 48 (Consistent subtyping narrowing). If Φ̃₁, Φ̃₃ ⊢ T̃₁ <̃: T̃₂ then Φ̃₁, Φ̃₂, Φ̃₃ ⊢ T̃₁ <̃: T̃₂.

Proof. Direct by Lemma 47 and the definition of consistent subtyping.

Lemma 49 (Subtyping strengthening). If Φ₁, x:⊤, Φ₂ ⊢ T₁ <: T₂ then Φ₁, Φ₂ ⊢ T₁ <: T₂.

Proof. By induction on the structure of T₁.
Case ({ν:B | p}). Direct, since a true assumption can be removed from an entailment.
Case (x:T₁₁ → T₁₂). Direct by applying the induction hypothesis.

Lemma 50 (Consistent subtyping strengthening). If Φ̃₁, x:⊤, Φ̃₂ ⊢ T̃₁ <̃: T̃₂ then Φ̃₁, Φ̃₂ ⊢ T̃₁ <̃: T̃₂.

Proof. Direct by Lemma 49 and the definition of consistent subtyping.

Lemma 51 (Typing strengthening). If Φ̃₁, x:⊤, Φ̃₂ ; t ∈ TERM_{T̃₁} then Φ̃₁, Φ̃₂ ; t ∈ TERM_{T̃₁}.

Proof. By induction on the derivation of Φ̃₁, x:⊤, Φ̃₂ ; t ∈ TERM_{T̃₁}, using Lemma 50.
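The evidence operators of Definitions 36 to 38 are also directly executable, as partial functions on evidence triples. The sketch below reuses the illustrative GType, GFormula and Value declarations from the previous sketch; Evidence, refinementOf, transInter and transSubst are additional assumptions of this sketch, the last two standing in for the paper's consistent transitivity operators ∘<: and ∘<:[v/x], which are left as stubs.

```haskell
-- Sketch of Definitions 36-38 (idom, icod, icod_v) over the illustrative GType syntax.
type GEnv = [(String, GFormula)]

data Evidence = Evidence { ev :: GEnv, lhs :: GType, rhs :: GType }

refinementOf :: GType -> Maybe GFormula   -- the formula used for the new binding
refinementOf (GBase _ p) = Just p
refinementOf _           = Nothing

-- Definition 36: evidence for the (contravariant) domains of a function evidence.
idom :: Evidence -> Maybe Evidence
idom (Evidence phi (GFun _ t11 _) (GFun _ t21 _)) = Just (Evidence phi t21 t11)
idom _                                            = Nothing

-- Definition 37: evidence for the codomains, under a binding for the argument.
icod :: Evidence -> Maybe Evidence
icod (Evidence phi (GFun x _ t12) (GFun _ t21 t22)) =
  case refinementOf t21 of
    Just p  -> Just (Evidence (phi ++ [(x, p)]) t12 t22)
    Nothing -> Just (Evidence phi t12 t22)   -- function-typed binders add no formula
icod _ = Nothing

-- Assumed stubs for consistent transitivity and its substituted variant.
transInter :: Evidence -> Evidence -> Maybe Evidence
transInter = undefined                       -- placeholder for ∘<:
transSubst :: Value -> String -> Evidence -> Evidence -> Maybe Evidence
transSubst = undefined                       -- placeholder for ∘<:[v/x]

-- Definition 38: icod_v(e1, e2) = (e1 ∘<: idom(e2)) ∘<:[v/x] icod(e2), partial at every step.
icodv :: Value -> String -> Evidence -> Evidence -> Maybe Evidence
icodv v x e1 e2 = do
  d  <- idom e2
  c  <- icod e2
  e' <- transInter e1 d
  transSubst v x e' c
```

Every step returns Maybe because, as in the formal development, each of the underlying operators may be undefined.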

Proposition 10 (Consistent substitution preserves types). Suppose Φ̃₁ ; u ∈ TERM_{T̃u}, ε ⊳ Φ̃₁ ⊢ T̃u ≲ T̃x, and Φ̃₁ · x:⌊T̃x⌋ · Φ̃₂ ; t ∈ TERM_{T̃}. Then Φ̃₁ · Φ̃₂⟦u/x⟧ ; t[εu/x^{T̃x}] ∈ TERM_{T̃⟦u/x⟧}, or t[εu/x^{T̃x}] is undefined.

Proof. By induction on the derivation of t.

Cases (In) and (Ib). Direct, since there is no replacement and constants are given the same type regardless of the environment.

Case (Ix-refine). We have Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; y^{ν:B | q̃} ∈ TERM_{ν:B | ν = y}, and there are two cases:
• If x^{ν:B | p̃} ≠ y^{ν:B | q̃} then the replacement is defined as y^{{ν:B | q̃}⟦u/x⟧}, which regardless of the environment has type {ν:B | ν = y}, so we are done.
• If x^{ν:B | p̃} = y^{ν:B | q̃} then we must replace the variable by u, which has type {ν:B | ν = u} regardless of the environment, so we are also done.

Case (Ix-fun). We have Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; y^{z:T̃₁ → T̃₂} ∈ TERM_{z:T̃₁ → T̃₂}, and there are two cases:
• If the variable is not the same, then we substitute by y^{(z:T̃₁ → T̃₂)⟦u/x⟧}, which has type (z:T̃₁ → T̃₂)⟦u/x⟧ regardless of the logical environment.
• Otherwise, by inverting the equality between variables we also know that T̃x is equal to z:T̃₁ → T̃₂. By hypothesis and narrowing (Lemma 48), ε ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃u ≲ z:T̃₁ → T̃₂. By this judgment, T̃u and z:T̃₁ → T̃₂ must be well formed in Φ̃₁, which cannot contain x, thus substituting for x in both produces the same type. Using this judgment as premise for (I::) we conclude that Φ̃₁, Φ̃₂⟦u/x⟧ ; εu :: (z:T̃₁ → T̃₂) ∈ TERM_{z:T̃₁ → T̃₂}.

Case (Iλ). The premise is Φ̃₁, x:⌊T̃x⌋, Φ̃₂, y:⌊T̃₁⌋ ; t ∈ TERM_{T̃₂}, and the conclusion is Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; λy^{T̃₁}.t ∈ TERM_{y:T̃₁ → T̃₂}. Assume that t[εu/x^{T̃x}] is defined, otherwise substitution is also undefined for the lambda and we are done. Applying the induction hypothesis to the premise we have Φ̃₁, Φ̃₂⟦u/x⟧, y:⌊T̃₁⟦u/x⟧⌋ ; t[εu/x^{T̃x}] ∈ TERM_{T̃₂⟦u/x⟧}. Using this as premise for (Iλ) we derive Φ̃₁, Φ̃₂⟦u/x⟧ ; λy^{T̃₁⟦u/x⟧}.t[εu/x^{T̃x}] ∈ TERM_{y:T̃₁⟦u/x⟧ → T̃₂⟦u/x⟧}, and we conclude by the algorithmic characterization of type substitution (Proposition 43).

Case (I::). The premises are Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; t ∈ TERM_{T̃₁} and ε₁ ⊳ Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ⊢ T̃₁ ≲ T̃₂, and the conclusion is Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; ε₁t :: T̃₂ ∈ TERM_{T̃₂}. We must prove that substitution is undefined or that Φ̃₁, Φ̃₂⟦u/x⟧ ; (ε ∘<:[u/x] ε₁) t[εu/x^{T̃x}] :: T̃₂⟦u/x⟧ ∈ TERM_{T̃₂⟦u/x⟧}. If ε ∘<:[u/x] ε₁ is undefined then substitution for the whole term is undefined, in which case we are done. Otherwise we have ε ∘<:[u/x] ε₁ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃₁⟦u/x⟧ ≲ T̃₂⟦u/x⟧. Applying the induction hypothesis to the first premise, either t[εu/x^{T̃x}] is undefined, in which case we are done, or Φ̃₁, Φ̃₂⟦u/x⟧ ; t[εu/x^{T̃x}] ∈ TERM_{T̃₁⟦u/x⟧}. Using the last two judgments as premises for (I::) we conclude.

Case (Iapp). The premises are Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; t ∈ TERM_{T̃₁}, Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; v ∈ TERM_{T̃₂}, ε₁ ⊳ Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ⊢ T̃₁ ≲ (y:T̃₁₁ → T̃₁₂) and ε₂ ⊳ Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ⊢ T̃₂ ≲ T̃₁₁, and the conclusion is Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; (ε₁t) @^{y:T̃₁₁→T̃₁₂} (ε₂v) ∈ TERM_{T̃₁₂⟦v/y⟧}. If substitution in t or v is undefined, or consistent subtyping substitution for ε₁ or ε₂ is undefined, we are done. Otherwise, applying the induction hypothesis to the term premises we have
  Φ̃₁, Φ̃₂⟦u/x⟧ ; t[εu/x^{T̃x}] ∈ TERM_{T̃₁⟦u/x⟧}  and  Φ̃₁, Φ̃₂⟦u/x⟧ ; v[εu/x^{T̃x}] ∈ TERM_{T̃₂⟦u/x⟧},
and by consistent subtyping substitution
  ε ∘<:[u/x] ε₁ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃₁⟦u/x⟧ ≲ (y:T̃₁₁ → T̃₁₂)⟦u/x⟧  and  ε ∘<:[u/x] ε₂ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃₂⟦u/x⟧ ≲ T̃₁₁⟦u/x⟧.
We conclude by the algorithmic characterization of consistent type substitution, using these four judgments as premises for (Iapp) to obtain
  Φ̃₁, Φ̃₂⟦u/x⟧ ; ((ε₁t)[εu/x^{T̃x}]) @^{(y:T̃₁₁→T̃₁₂)⟦u/x⟧} ((ε₂v)[εu/x^{T̃x}]) ∈ TERM_{T̃₁₂⟦v/y⟧⟦u/x⟧}.

Case (Iif). The premises are Φ̃ ; v ∈ TERM_{ν:Bool | p̃}, Φ̃, y:(v = true) ; t₁ ∈ TERM_{T̃₁}, ε₁ ⊳ Φ̃ ⊢ T̃₁ ≲ T̃, Φ̃, y:(v = false) ; t₂ ∈ TERM_{T̃₂} and ε₂ ⊳ Φ̃ ⊢ T̃₂ ≲ T̃, where Φ̃ = Φ̃₁, x:⌊T̃x⌋, Φ̃₂, and the conclusion is Φ̃ ; (if v then ε₁t₁ else ε₂t₂) @^{T̃} ∈ TERM_{T̃}. If substitution in t₁, t₂ or v is undefined, or consistent subtyping substitution for ε₁ or ε₂ is undefined, we are done. Otherwise, applying the induction hypothesis to the premises we have
  Φ̃₁, Φ̃₂⟦u/x⟧, y:(v = true) ; t₁[εu/x^{T̃x}] ∈ TERM_{T̃₁⟦u/x⟧}  and  Φ̃₁, Φ̃₂⟦u/x⟧, y:(v = false) ; t₂[εu/x^{T̃x}] ∈ TERM_{T̃₂⟦u/x⟧},
and by consistent subtyping substitution
  ε ∘<:[u/x] ε₁ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃₁⟦u/x⟧ ≲ T̃⟦u/x⟧  and  ε ∘<:[u/x] ε₂ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃₂⟦u/x⟧ ≲ T̃⟦u/x⟧.
Using these as premises for (Iif) we conclude
  Φ̃₁, Φ̃₂⟦u/x⟧ ; (if v[εu/x^{T̃x}] then (ε₁t₁)[εu/x^{T̃x}] else (ε₂t₂)[εu/x^{T̃x}]) @^{T̃⟦u/x⟧} ∈ TERM_{T̃⟦u/x⟧}.

Case (Ilet). The premises are Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; t₁ ∈ TERM_{T̃₁₁}, Φ̃₁, x:⌊T̃x⌋, Φ̃₂, y:⌊T̃₁₂⌋ ; t₂ ∈ TERM_{T̃₂}, ε₁ ⊳ Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ⊢ T̃₁₁ ≲ T̃₁₂ and ε₂ ⊳ Φ̃₁, x:⌊T̃x⌋, Φ̃₂, y:⌊T̃₁₂⌋ ⊢ T̃₂ ≲ T̃, and the conclusion is Φ̃₁, x:⌊T̃x⌋, Φ̃₂ ; (let y^{T̃₁₂} = ε₁t₁ in ε₂t₂) @^{T̃} ∈ TERM_{T̃}. We assume substitution is defined for every subterm and consistent subtyping substitution for every evidence, otherwise we are done. Applying the induction hypothesis to the premises we obtain
  Φ̃₁, Φ̃₂⟦u/x⟧ ; t₁[εu/x^{T̃x}] ∈ TERM_{T̃₁₁⟦u/x⟧}  and  Φ̃₁, Φ̃₂⟦u/x⟧, y:⌊T̃₁₂⟦u/x⟧⌋ ; t₂[εu/x^{T̃x}] ∈ TERM_{T̃₂⟦u/x⟧},
and by consistent subtyping substitution
  ε ∘<:[u/x] ε₁ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧ ⊢ T̃₁₁⟦u/x⟧ ≲ T̃₁₂⟦u/x⟧  and  ε ∘<:[u/x] ε₂ ⊳ Φ̃₁, Φ̃₂⟦u/x⟧, y:⌊T̃₁₂⟦u/x⟧⌋ ⊢ T̃₂⟦u/x⟧ ≲ T̃⟦u/x⟧.
Using these as premises for (Ilet) we conclude
  Φ̃₁, Φ̃₂⟦u/x⟧ ; (let y^{T̃₁₂⟦u/x⟧} = (ε₁t₁)[εu/x^{T̃x}] in (ε₂t₂)[εu/x^{T̃x}]) @^{T̃⟦u/x⟧} ∈ TERM_{T̃⟦u/x⟧}.

Proposition 11 (Type Safety). If • ; t₁^{T̃} ∈ TERM_{T̃} then either t₁^{T̃} is a value v, t₁^{T̃} ↦ t₂^{T̃} for some term t₂^{T̃} ∈ TERM_{T̃}, or t₁^{T̃} ↦ error.

Proof. By induction on the derivation of • ; t₁^{T̃} ∈ TERM_{T̃}.

Case (In, Ib, Iλ, Ix-fun, Ix-refine). t₁ is a value.

Case (I::). The premises are • ; t ∈ TERM_{T̃₁} and ε₁ ⊳ • ⊢ T̃₁ ≲ T̃₂, and the conclusion is • ; ε₁t :: T̃₂ ∈ TERM_{T̃₂}. If t = u then ε₁t :: T̃₂ is a value. Otherwise, applying the induction hypothesis to the first premise, either t ↦ error, and then ε₁t :: T̃₂ ↦ error by Rule (Rferr), or t ↦ t′ with • ; t′ ∈ TERM_{T̃₁}, and then by Rule (Rf), ε₁t :: T̃₂ ↦ ε₁t′ :: T̃₂, which is well formed because of Rule (I::).

Case (Iapp). The premises are • ; t ∈ TERM_{T̃₁}, ε₁ ⊳ • ⊢ T̃₁ ≲ (x:T̃₁₁ → T̃₁₂), • ; v ∈ TERM_{T̃₂} and ε₂ ⊳ • ⊢ T̃₂ ≲ T̃₁₁, and the conclusion is • ; (ε₁t) @^{x:T̃₁₁→T̃₁₂} (ε₂v) ∈ TERM_{T̃₁₂⟦v/x⟧}.
If t is not a value, applying the induction hypothesis to the first premise we have t ↦ error or t ↦ t′ with • ; t′ ∈ TERM_{T̃₁}. If t ↦ error then, by Rule (Rferr), (ε₁t) @^{x:T̃₁₁→T̃₁₂} (ε₂v) ↦ error. Otherwise (ε₁t) @^{x:T̃₁₁→T̃₁₂} (ε₂v) ↦ (ε₁t′) @^{x:T̃₁₁→T̃₁₂} (ε₂v), and by Rule (Iapp), • ; (ε₁t′) @^{x:T̃₁₁→T̃₁₂} (ε₂v) ∈ TERM_{T̃₁₂⟦v/x⟧}.
If t is a value then it is equal to λx^{T̃′₁₁}.t′ or to ε₃u :: T̃₁. If it is equal to ε₃u :: T̃₁ then, by Rule (Rg) or (Rgerr), the term reduces either to error or to ((ε₃ ∘<: ε₁)u) @^{x:T̃₁₁→T̃₁₂} (ε₂v).
If t is equal to λx^{T̃′₁₁}.t′ then T̃₁ must be equal to x:T̃′₁₁ → T̃′₁₂. By Rule (R↦), (ε₁t) @^{x:T̃₁₁→T̃₁₂} (ε₂v) either goes to error or reduces to icod_v(ε_v, ε₁) t′[ε_v v/x^{T̃′₁₁}] :: T̃₁₂⟦v/x⟧, where ε_v = ε₂ ∘<: idom(ε₁). In case every operator is defined, by Proposition 10, • ; t′[ε_v v/x^{T̃′₁₁}] ∈ TERM_{T̃′₁₂⟦v/x⟧}, and by Proposition 46, icod_v(ε_v, ε₁) ⊳ • ⊢ T̃′₁₂⟦v/x⟧ ≲ T̃₁₂⟦v/x⟧. We conclude by Rule (I::) that • ; icod_v(ε_v, ε₁) t′[ε_v v/x^{T̃′₁₁}] :: T̃₁₂⟦v/x⟧ ∈ TERM_{T̃₁₂⟦v/x⟧}.

Case (Iif). The premises are • ; u ∈ TERM_{ν:Bool | p̃}, x:(u = true) ; t₁ ∈ TERM_{T̃₁}, ε₁ ⊳ • ⊢ T̃₁ ≲ T̃, x:(u = false) ; t₂ ∈ TERM_{T̃₂} and ε₂ ⊳ • ⊢ T̃₂ ≲ T̃, and the conclusion is • ; (if u then ε₁t₁ else ε₂t₂) @^{T̃} ∈ TERM_{T̃}. By inversion on the first premise, u is either true or false. If u = true then by Rule (R↦) the term reduces to ε₁t₁ :: T̃, which is well formed because • ; t₁ ∈ TERM_{T̃₁} by Lemma 51. We conclude analogously when u = false.

Case (Ilet). The premises are • ; t₁ ∈ TERM_{T̃₁₁}, ε₁ ⊳ • ⊢ T̃₁₁ ≲ T̃₁₂, x:⌊T̃₁₂⌋ ; t₂ ∈ TERM_{T̃₂} and ε₂ ⊳ x:⌊T̃₁₂⌋ ⊢ T̃₂ ≲ T̃, and the conclusion is • ; (let x^{T̃₁₂} = ε₁t₁ in ε₂t₂) @^{T̃} ∈ TERM_{T̃}.
If t₁ is not a value then by the induction hypothesis it either reduces to error or to some t′₁ such that • ; t′₁ ∈ TERM_{T̃₁₁}; we conclude using Rule (Rf) or (Rferr).
If t₁ is equal to ε₃u :: T̃₁₁ then ε₁(ε₃u :: T̃₁₁) reduces to (ε₃ ∘<: ε₁)u or to error; we conclude by Rule (Rg) or (Rgerr).
If t₁ is equal to a value u, then the whole term either reduces to error, in which case we are done, or it reduces to (ε₁ ∘<:[u/x] ε₂) t₂[ε₁u/x^{T̃₁₂}] :: T̃. By Proposition 10, • ; t₂[ε₁u/x^{T̃₁₂}] ∈ TERM_{T̃₂⟦u/x⟧}. Since T̃ is well formed in • it does not include x as a free variable, and hence T̃⟦u/x⟧ = T̃. Thus we conclude that (ε₁ ∘<:[u/x] ε₂) t₂[ε₁u/x^{T̃₁₂}] :: T̃ is well formed.

Lemma 52 (Monotonicity of ∘<:). If ε₁ ⊑ ε₂, ε₃ ⊑ ε₄ and ε₁ ∘<: ε₃ is defined, then ε₂ ∘<: ε₄ is defined and ε₁ ∘<: ε₃ ⊑ ε₂ ∘<: ε₄.

Proof. We have γτ(ε₁) ⊆ γτ(ε₂) and γτ(ε₃) ⊆ γτ(ε₄). Consequently F∘<:(γτ(ε₁), γτ(ε₃)) ⊆ F∘<:(γτ(ε₂), γτ(ε₄)). Because ατ is monotone, it must be that ατ(F∘<:(γτ(ε₁), γτ(ε₃))) ⊑ ατ(F∘<:(γτ(ε₂), γτ(ε₄))), and we conclude.

Lemma 53 (Monotonicity of ∘<:[v/x]). If ε₁ ⊑ ε₂, ε₃ ⊑ ε₄ and ε₁ ∘<:[v/x] ε₃ is defined, then ε₂ ∘<:[v/x] ε₄ is defined and ε₁ ∘<:[v/x] ε₃ ⊑ ε₂ ∘<:[v/x] ε₄.

Proof. Direct, using the same argument as Lemma 52.
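The (Iapp) case in the proof of Proposition 11 above is where the new run-time failure points of the gradual language appear: both ε₂ ∘<: idom(ε₁) and icod_v(ε_v, ε₁) may be undefined, in which case the application steps to error. The following sketch of that single reduction step makes the failure points explicit; it reuses the illustrative types of the previous sketches, models error as Nothing, and substBody is a further assumption standing in for consistent term substitution.

```haskell
-- Sketch of the application step from the proof of Proposition 11 (case Iapp):
--   (e1 (\x:T11'. body)) @ (e2 v)  steps to  icod_v(ev, e1) body[ev v / x] :: T12[v/x]
-- where ev = e2 ∘<: idom(e1); each evidence operation may fail, producing error.

data Term
  = TValue Value
  | TAscribe Evidence Term GType   -- e t :: T
  | TLam String GType Term         -- \x:T. body
  -- (other term forms elided in this sketch)

stepApp :: Evidence                  -- e1, evidence for the function position
        -> (String, GType, Term)     -- the lambda: binder, annotation, body
        -> Evidence                  -- e2, evidence for the argument
        -> Value                     -- v, the argument value
        -> GType                     -- T12, codomain annotation on the application
        -> Maybe Term                -- Nothing models a step to error
stepApp e1 (x, _t11, body) e2 v t12 = do
  d   <- idom e1                     -- evidence between the domains
  ev' <- transInter e2 d             -- ev = e2 ∘<: idom(e1); may fail
  ec  <- icodv v x ev' e1            -- icod_v(ev, e1); may fail
  let body' = substBody ev' v x body -- consistent term substitution (assumed)
  pure (TAscribe ec body' (substType v x t12))

-- assumed helper: consistent term substitution on the body
substBody :: Evidence -> Value -> String -> Term -> Term
substBody = undefined
```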

Lemma 54 (Substitution preserves precision). If t₁ ⊑ t₂, u₁ ⊑ u₂, T̃₁ ⊑ T̃₂, ε₁ ⊑ ε₂ and t₁[ε₁u₁/x^{T̃₁}] is defined, then t₂[ε₂u₂/x^{T̃₂}] is defined and t₁[ε₁u₁/x^{T̃₁}] ⊑ t₂[ε₂u₂/x^{T̃₂}].

Proof. By induction on the derivation of t₁ ⊑ t₂.

Case (IPx). We have t₁ = y^{T̃₁} and t₂ = y^{T̃₂}. If y^{T̃₁} ≠ x^{T̃₁} the result follows directly. Otherwise there are two cases. If T̃₁ = {ν:B | p̃₁} then it must be that T̃₂ = {ν:B | p̃₂}; by definition of substitution, y^{T̃₁}[ε₁u₁/x^{T̃₁}] = u₁ and y^{T̃₂}[ε₂u₂/x^{T̃₂}] = u₂, and we conclude because u₁ ⊑ u₂ by hypothesis. If T̃₁ = x:T̃₁₁ → T̃₁₂ then by definition of substitution y^{T̃₁}[ε₁u₁/x^{T̃₁}] = ε₁u₁ :: T̃₁ and y^{T̃₂}[ε₂u₂/x^{T̃₂}] = ε₂u₂ :: T̃₂; we conclude ε₁u₁ :: T̃₁ ⊑ ε₂u₂ :: T̃₂ by Rule (IP::).

Case (IPc). Direct, since substitution does not modify the term.

Case (IPλ). We have t₁ = λy^{T̃₁₁}.t₁₁, t₂ = λy^{T̃₂₁}.t₂₁ and t₁₁ ⊑ t₂₁. By the induction hypothesis t₁₁[ε₁u₁/x^{T̃₁}] ⊑ t₂₁[ε₂u₂/x^{T̃₂}]. We conclude by applying Rule (IPλ).

Case (IPif). We have t₁ = (if v₁ then ε₁₁t₁₁ else ε₁₂t₁₂) @^{T̃′₁} and t₂ = (if v₂ then ε₂₁t₂₁ else ε₂₂t₂₂) @^{T̃′₂}, with t₁ᵢ ⊑ t₂ᵢ and ε₁ᵢ ⊑ ε₂ᵢ. Since t₁[ε₁u₁/x^{T̃₁}] is defined it must be that ε₁ ∘<:[u₁/x] ε₁ᵢ is defined, and then by Lemma 53, ε₁ ∘<:[u₁/x] ε₁ᵢ ⊑ ε₂ ∘<:[u₂/x] ε₂ᵢ. By the induction hypothesis we also have t₁ᵢ[ε₁u₁/x^{T̃₁}] ⊑ t₂ᵢ[ε₂u₂/x^{T̃₂}]. We conclude by applying Rule (IPif).

Case (IPapp). We have t₁ = (ε₁₁t₁₁) @^{x:T̃₁₁→T̃₁₂} (ε₁₂v₁) and t₂ = (ε₂₁t₂₁) @^{x:T̃₂₁→T̃₂₂} (ε₂₂v₂), with ε₁ᵢ ⊑ ε₂ᵢ, v₁ ⊑ v₂ and t₁₁ ⊑ t₂₁. By Lemma 53 we have ε₁ ∘<:[u₁/x] ε₁ᵢ ⊑ ε₂ ∘<:[u₂/x] ε₂ᵢ. By the induction hypothesis we also have t₁₁[ε₁u₁/x^{T̃₁}] ⊑ t₂₁[ε₂u₂/x^{T̃₂}] and v₁[ε₁u₁/x^{T̃₁}] ⊑ v₂[ε₂u₂/x^{T̃₂}]. We conclude by applying Rule (IPapp).

Case (IPlet). We have t₁ = (let y^{T̃₁₁} = ε₁₁t₁₁ in ε₁₂t₁₂) @^{T̃₁₂} and t₂ = (let y^{T̃₂₁} = ε₂₁t₂₁ in ε₂₂t₂₂) @^{T̃₂₂}, with ε₁ᵢ ⊑ ε₂ᵢ and t₁ᵢ ⊑ t₂ᵢ. By Lemma 53 we have ε₁ ∘<:[u₁/x] ε₁ᵢ ⊑ ε₂ ∘<:[u₂/x] ε₂ᵢ. By the induction hypothesis we also have t₁ᵢ[ε₁u₁/x^{T̃₁}] ⊑ t₂ᵢ[ε₂u₂/x^{T̃₂}]. We conclude by applying Rule (IPlet).

Lemma 55 (Dynamic gradual guarantee for −→). Suppose t₁^{T̃₁} ⊑ t₁^{T̃₂}. If t₁^{T̃₁} −→ t₂^{T̃₁} then t₁^{T̃₂} −→ t₂^{T̃₂} where t₂^{T̃₁} ⊑ t₂^{T̃₂}.

Proof. By induction on t₁^{T̃₁} −→ t₂^{T̃₁}.

Case (IPc, IPλ, IP::, IPx). Direct, since t₁ does not reduce.

Case (IPif). We have ε₁₁ ⊑ ε₂₁, ε₁₂ ⊑ ε₂₂, T̃₁ ⊑ T̃₂, v₁ ⊑ v₂, t₁₁ ⊑ t₂₁ and t₁₂ ⊑ t₂₂, with (if v₁ then ε₁₁t₁₁ else ε₁₂t₁₂) @^{T̃₁} ⊑ (if v₂ then ε₂₁t₂₁ else ε₂₂t₂₂) @^{T̃₂}. If v₁ = true then it must be that v₂ = true. Thus we have
  (if v₁ then ε₁₁t₁₁ else ε₁₂t₁₂) @^{T̃₁} −→ ε₁₁t₁₁ :: T̃₁
  (if v₂ then ε₂₁t₂₁ else ε₂₂t₂₂) @^{T̃₂} −→ ε₂₁t₂₁ :: T̃₂
By hypothesis ε₁₁ ⊑ ε₂₁, t₁₁ ⊑ t₂₁ and T̃₁ ⊑ T̃₂, so by Rule (IP::) we conclude ε₁₁t₁₁ :: T̃₁ ⊑ ε₂₁t₂₁ :: T̃₂. A symmetric argument applies when v₁ = false.

Case (IPapp). We have T̃′₁ ⊑ T̃′₂, ε₁₁ ⊑ ε₂₁, ε₁₂ ⊑ ε₂₂, t₁ ⊑ t₂ and v₁ ⊑ v₂, with (ε₁₁t₁) @^{x:T̃′₁₁→T̃′₁₂} (ε₁₂v₁) ⊑ (ε₂₁t₂) @^{x:T̃′₂₁→T̃′₂₂} (ε₂₂v₂). If (ε₁₁t₁) @^{x:T̃′₁₁→T̃′₁₂} (ε₁₂v₁) reduces, then t₁ = λx^{T̃₁₁}.t₁₁ and, by inversion of Rule (IPλ), t₂ = λx^{T̃₂₁}.t₂₁ with T̃₁₁ ⊑ T̃₂₁ and t₁₁ ⊑ t₂₁. It also must be that v₁ = u₁ and v₂ = u₂ are values. Then we have
  (ε₁₁t₁) @^{x:T̃′₁₁→T̃′₁₂} (ε₁₂u₁) −→ icod_{u₁}(ε₁₂, ε₁₁) t₁₁[u₁/x^{T̃₁₁}] :: T̃′₁₂⟦u₁/x⟧
  (ε₂₁t₂) @^{x:T̃′₂₁→T̃′₂₂} (ε₂₂u₂) −→ icod_{u₂}(ε₂₂, ε₂₁) t₂₁[u₂/x^{T̃₂₁}] :: T̃′₂₂⟦u₂/x⟧
By lemmas 52 and 54 we have t₁₁[u₁/x^{T̃₁₁}] ⊑ t₂₁[u₂/x^{T̃₂₁}], and by lemmas 52 and 53, icod_{u₁}(ε₁₂, ε₁₁) ⊑ icod_{u₂}(ε₂₂, ε₂₁). By lemmas 26 and 29, T̃′₁₂⟦u₁/x⟧ ⊑ T̃′₂₂⟦u₂/x⟧. We conclude by Rule (IP::).

Case (IPlet). We have ε₁₁ ⊑ ε₂₁, ε₁₂ ⊑ ε₂₂, T̃₁₁ ⊑ T̃₂₁, T̃₁₂ ⊑ T̃₂₂, t₁₁ ⊑ t₂₁ and t₁₂ ⊑ t₂₂, with (let x^{T̃₁₁} = ε₁₁t₁₁ in ε₁₂t₁₂) @^{T̃₁₂} ⊑ (let x^{T̃₂₁} = ε₂₁t₂₁ in ε₂₂t₂₂) @^{T̃₂₂}. If (let x^{T̃₁₁} = ε₁₁t₁₁ in ε₁₂t₁₂) @^{T̃₁₂} reduces then t₁₁ = u₁ and, by inversion of the precision hypothesis, t₂₁ = u₂. Then we have
  (let x^{T̃₁₁} = ε₁₁u₁ in ε₁₂t₁₂) @^{T̃₁₂} −→ (ε₁₁ ∘<:[u₁/x] ε₁₂) t₁₂[ε₁₁u₁/x^{T̃₁₁}] :: T̃₁₂
  (let x^{T̃₂₁} = ε₂₁u₂ in ε₂₂t₂₂) @^{T̃₂₂} −→ (ε₂₁ ∘<:[u₂/x] ε₂₂) t₂₂[ε₂₁u₂/x^{T̃₂₁}] :: T̃₂₂
By lemmas 53 and 54 we have ε₁₁ ∘<:[u₁/x] ε₁₂ ⊑ ε₂₁ ∘<:[u₂/x] ε₂₂ and t₁₂[ε₁₁u₁/x^{T̃₁₁}] ⊑ t₂₂[ε₂₁u₂/x^{T̃₂₁}], so by Rule (IP::),
  (ε₁₁ ∘<:[u₁/x] ε₁₂) t₁₂[ε₁₁u₁/x^{T̃₁₁}] :: T̃₁₂ ⊑ (ε₂₁ ∘<:[u₂/x] ε₂₂) t₂₂[ε₂₁u₂/x^{T̃₂₁}] :: T̃₂₂.

Lemma 56. Suppose Φ̃ ; f₁[t^{T̃₁}] ∈ TERM_{T̃′₁} and Φ̃ ; f₂[t^{T̃₂}] ∈ TERM_{T̃′₂}. If f₁[t^{T̃₁}] ⊑ f₂[t^{T̃₂}] then t^{T̃₁} ⊑ t^{T̃₂}.

Proof. By case analysis on the structure of f₁.

Lemma 57. Suppose Φ̃ ; f₁[t₁^{T̃₁}] ∈ TERM_{T̃′₁} and Φ̃ ; f₂[t₁^{T̃₂}] ∈ TERM_{T̃′₂}. If f₁[t₁^{T̃₁}] ⊑ f₂[t₁^{T̃₂}] and t₂^{T̃₁} ⊑ t₂^{T̃₂} then f₁[t₂^{T̃₁}] ⊑ f₂[t₂^{T̃₂}].

Proof. By case analysis on the structure of f₁.

Lemma 58. Suppose Φ̃ ; g₁[ε₁t^{T̃₁}] ∈ TERM_{T̃′₁} and Φ̃ ; g₂[ε₂t^{T̃₂}] ∈ TERM_{T̃′₂}. If g₁[ε₁t^{T̃₁}] ⊑ g₂[ε₂t^{T̃₂}] then t^{T̃₁} ⊑ t^{T̃₂} and ε₁ ⊑ ε₂.

Proof. By case analysis on the structure of g₁.

Lemma 59. Suppose Φ̃ ; g₁[ε₁₁t₁^{T̃₁}] ∈ TERM_{T̃′₁} and Φ̃ ; g₂[ε₂₁t₁^{T̃₂}] ∈ TERM_{T̃′₂}. If g₁[ε₁₁t₁^{T̃₁}] ⊑ g₂[ε₂₁t₁^{T̃₂}], t₂^{T̃₁} ⊑ t₂^{T̃₂} and ε₁₂ ⊑ ε₂₂ then g₁[ε₁₂t₂^{T̃₁}] ⊑ g₂[ε₂₂t₂^{T̃₂}].

Proof. By case analysis on the structure of g₁.

Proposition 12 (Dynamic gradual guarantee). Suppose t₁^{T̃₁} ⊑ t₁^{T̃₂}. If t₁^{T̃₁} ↦ t₂^{T̃₁} then t₁^{T̃₂} ↦ t₂^{T̃₂} where t₂^{T̃₁} ⊑ t₂^{T̃₂}.

Proof. By induction on the derivation of t₁^{T̃₁} ↦ t₂^{T̃₁}.

Case (Rgerr, Rferr). Impossible, since t₁^{T̃₁} must reduce to a well-typed term.

Case (R↦). We have t₁^{T̃₁} −→ t₂^{T̃₁}, so by Lemma 55, t₁^{T̃₂} −→ t₂^{T̃₂} with t₂^{T̃₁} ⊑ t₂^{T̃₂}. We conclude by Rule (R↦) that t₁^{T̃₂} ↦ t₂^{T̃₂}.

Case (Rf). We have t₁^{T̃₁} = f₁[t^{T̃′₁}] with t^{T̃′₁} ↦ t′^{T̃′₁}, and t₁^{T̃₂} = f₂[t^{T̃′₂}] with f₁[t^{T̃′₁}] ⊑ f₂[t^{T̃′₂}]. By Lemma 56, t^{T̃′₁} ⊑ t^{T̃′₂}, so applying the induction hypothesis to the premise we have t^{T̃′₂} ↦ t′^{T̃′₂} with t′^{T̃′₁} ⊑ t′^{T̃′₂}. We conclude by Lemma 57 that f₁[t′^{T̃′₁}] ⊑ f₂[t′^{T̃′₂}].

Case (Rg). We have t₁^{T̃₁} = g₁[ε₁₁(ε₁₂u₁ :: T̃′₁)] and t₁^{T̃₂} = g₂[ε₂₁(ε₂₂u₂ :: T̃′₂)]. Since t₁^{T̃₁} ↦ t₂^{T̃₁}, t₂^{T̃₁} must be equal to g₁[(ε₁₂ ∘<: ε₁₁)u₁ :: T̃′₁]. By Lemma 52, t₁^{T̃₂} must reduce to g₂[(ε₂₂ ∘<: ε₂₁)u₂ :: T̃′₂]. We conclude by Lemma 58 and Lemma 59 that g₁[(ε₁₂ ∘<: ε₁₁)u₁ :: T̃′₁] ⊑ g₂[(ε₂₂ ∘<: ε₂₁)u₂ :: T̃′₂].

Lemma 60. If • ; v^{ν:B | p̃} ∈ TERM_{ν:B | p̃} then:
1. If v = u then ⌊p̃⌋![u/ν] is valid.
2. If v = εu :: {ν:B | p̃} then ⌊p̃⌋![u/ν] is valid.

Proof. (1) follows directly since p̃ must be equal to ν = u. For (2) we have that ε ⊳ • ⊢ {ν:B | ν = u} ≲ {ν:B | p̃}, thus there exists p ∈ γp(p̃) such that • ⊢ {ν:B | ν = u} <: {ν:B | p}; thus u must satisfy p and hence it satisfies ⌊p̃⌋!.

Proposition 13 (Refinement soundness). If • ; t ∈ TERM_{ν:B | p̃} and t ↦* v then:
1. If v = u then ⌊p̃⌋![u/ν] is valid.
2. If v = εu :: {ν:B | p̃} then ⌊p̃⌋![u/ν] is valid.

Proof. Direct consequence of type preservation and Lemma 60.

A.11 Algorithmic Consistent Subtyping

Definition 40 (Constraint collecting judgment). The judgment T̃₁ ∗ T̃₂ | C collects a set C of entailment constraints of the form Φ̃ |≈ r:

  (refine) {ν:B | p̃} ∗ {ν:B | q̃} | (x:p̃) ◦ { • |≈ ⌊q̃⟦x/ν⟧⌋! }
  (fun1)   If T̃₁₂ ∗ T̃₂₂ | C₁ and C₂ = (x:p̃₂) ◦ (C₁ ∪ { • |≈ ⌊p̃₁⟦x/ν⟧⌋! }) then x:{ν:B | p̃₁} → T̃₁₂ ∗ x:{ν:B | p̃₂} → T̃₂₂ | C₂
  (fun2)   If y:T̃₂₁ → T̃₂₂ ∗ y:T̃₁₁ → T̃₁₂ | C₁, T̃₁₃ ∗ T̃₂₃ | C₂ and C₃ = C₁ ∪ C₂ then x:(y:T̃₁₁ → T̃₁₂) → T̃₁₃ ∗ x:(y:T̃₂₁ → T̃₂₂) → T̃₂₃ | C₃

where prefixing a binding to a constraint set is defined by:

  (x:p) ◦ { Φ₁ |≈ r₁, ..., Φₙ |≈ rₙ } = { (x:p · Φ₁ |≈ r₁), ..., (x:p · Φₙ |≈ rₙ) }
  (x:p ∧ ?) ◦ { Φ₁ |≈ r₁, ..., Φₙ |≈ rₙ } = { (x:q′ · Φ₁ |≈ r₁), ..., (x:q′ · Φₙ |≈ rₙ) }
    where ~z = ∪ᵢ dom(Φᵢ), q = (∀~z, ∧ᵢ(⌊Φᵢ⌋ → rᵢ)) ∧ p and q′ = ((∃ν, q) → q) ∧ (¬(∃ν, q) → p)

Definition 41 (Algorithmic consistent subtyping).
  If T̃₁ ∗ T̃₂ | C and ⊢ Φ̃ ◦ C then Φ̃ ⊢ T̃₁ ≲ T̃₂
where Φ̃ ◦ { Φ₁ |≈ r₁, ..., Φₙ |≈ rₙ } = { (Φ̃ · Φ₁ |≈ r₁), ..., (Φ̃ · Φₙ |≈ rₙ) }.

Definition 42. We define the extended constraint satisfying judgment Φ ⊢ Φ̃ ◦ C, where Φ ∈ γΦ(Φ̃) is an evidence for the constraint.

Lemma 61. Let { (Φ₁, y:p(~x,ν) ∧ ?, Φ₂ⁱ) } be a set of well-formed gradual environments with the same prefix, ~x = dom(Φ₁) the vector of variables bound in Φ₁, ~zᵢ = dom(Φ₂ⁱ) the vector of variables bound in Φ₂ⁱ, and rᵢ(~x,ν,~zᵢ) a set of static formulas. Define ~z = ∪ᵢ ~zᵢ and
  q(~x,ν) = (∀~z, ∧ᵢ(⌊Φ₂ⁱ⌋ → rᵢ(~x,ν,~zᵢ))) ∧ p(~x,ν)
  q′(~x,ν) = ((∃ν, q(~x,ν)) → q(~x,ν)) ∧ (¬(∃ν, q(~x,ν)) → p(~x,ν))
If there exists p′(~x,ν) ∈ γp(p(~x,ν) ∧ ?) such that ⌊Φ₁, y:p′(~x,ν), Φ₂ⁱ⌋ ⊨ rᵢ(~x,y,~zᵢ) for every i, then ⌊Φ₁, y:q′(~x,ν), Φ₂ⁱ⌋ ⊨ rᵢ(~x,y,~zᵢ) for every i.

Proof. Let M be a model such that M ⊨ ⌊Φ₁⌋ ∧ q′(~x,ν) ∧ ⌊Φ₂ⁱ⌋. It must be that M ⊨ ∃ν, q(~x,ν). Indeed, let v be such that M[ν ↦ v] ⊨ p′(~x,ν); such a v must exist because p′(~x,ν) is local. It suffices to prove that M[ν ↦ v] ⊨ q(~x,ν). Let ~vz be an arbitrary vector of values. We have M[ν ↦ v][~z ↦ ~vz] ⊨ ⌊Φ₁⌋ ∧ p′(~x,ν), since Φ₁ and p′(~x,ν) do not mention variables in ~z and Φ₁ does not mention ν. For all i, if M[ν ↦ v][~z ↦ ~vz] ⊨ ⌊Φ₂ⁱ⌋ then by the above and by hypothesis M[ν ↦ v][~z ↦ ~vz] ⊨ rᵢ(~x,ν,~zᵢ). We conclude that M[ν ↦ v][~z ↦ ~vz] ⊨ ∧ᵢ(⌊Φ₂ⁱ⌋ → rᵢ(~x,ν,~zᵢ)) and we are done.
Then it must be that M ⊨ q(~x,ν), and consequently for all ~vz, M[~z ↦ ~vz] ⊨ ⌊Φ₂ⁱ⌋ → rᵢ(~x,ν,~zᵢ). In particular this is true for the vector ~v of values bound to ~z in M. It cannot be that M ⊭ ⌊Φ₂ⁱ⌋ because that contradicts the assumption on M, thus M must be a model for rᵢ(~x,ν,~zᵢ) and we conclude.
Lemma 62. Let { (Φ̃₁, y:p(~x,ν) ∧ ?, Φ₂ⁱ) } be a set of well-formed gradual environments with the same prefix, ~x = dom(Φ̃₁) the vector of variables bound in Φ̃₁, ~zᵢ = dom(Φ₂ⁱ) the vector of variables bound in Φ₂ⁱ, and { rᵢ(~x,ν,~zᵢ) } a set of static formulas. Define ~z = ∪ᵢ ~zᵢ and
  q(~x,ν) = (∀~z, ∧ᵢ(⌊Φ₂ⁱ⌋ → rᵢ(~x,ν,~zᵢ))) ∧ p(~x,ν)
  q′(~x,ν) = ((∃ν, q(~x,ν)) → q(~x,ν)) ∧ (¬(∃ν, q(~x,ν)) → p(~x,ν))
Let Φ₁ ∈ γΦ(Φ̃₁) be any environment in the concretization of the common prefix. There exists p′(~x,ν) ∈ γp(p(~x,ν) ∧ ?) such that ⌊Φ₁, y:p′(~x,ν), Φ₂ⁱ⌋ ⊨ rᵢ(~x,ν,~zᵢ) for every i if and only if ⌊Φ₁, y:q′(~x,ν), Φ₂ⁱ⌋ ⊨ rᵢ(~x,ν,~zᵢ) for every i.

Proof.
(⇒) Direct by Lemma 61.
(⇐) It suffices to prove that q′(~x,ν) ∈ γp(p(~x,ν) ∧ ?). Indeed, let M be a model for q′(~x,ν). If M ⊨ ∃ν, q(~x,ν) then M ⊨ q(~x,ν) and consequently M ⊨ p(~x,ν). If M ⊭ ∃ν, q(~x,ν) then M ⊨ p(~x,ν). On the other hand it follows directly that q′(~x,ν) is local.

Lemma 63. If T̃₁ ∗ T̃₂ | C and Φ ⊢ Φ̃ ◦ C then Φ̃ ⊢ T̃₁ <̃: T̃₂.

Proof. By induction on the structure of T̃₁.

Case ({ν:B | p̃}). By inversion T̃₂ = {ν:B | q̃}. The only constraint generated is Φ̃, x:r |≈ ⌊q̃⟦x/ν⟧⌋!, where r is the generated canonical admissible formula. It follows from Lemma 62 that this constraint can be satisfied if and only if Φ̃, x:p̃ |≈ ⌊q̃⟦x/ν⟧⌋! can be satisfied, which in turn is equivalent to Φ̃ ⊢ {ν:B | p̃} <̃: {ν:B | q̃}.

Case (x:{ν:B | p̃₁} → T̃₁₂). By inversion T̃₂ = x:{ν:B | p̃₂} → T̃₂₂. We have as induction hypothesis that for all Φ̃′, if T̃₁₂ ∗ T̃₂₂ | C′ and ⊢ Φ̃′ ◦ C′ then Φ̃′ ⊢ T̃₁₂ <̃: T̃₂₂. By hypothesis we have Φ, x:p₂ ⊢ Φ̃, x:p̃₂ ◦ C′, thus instantiating the induction hypothesis with Φ̃′ = Φ̃, x:p̃₂ we obtain Φ̃, x:p̃₂ ⊢ T̃₁₂ <̃: T̃₂₂. It also follows from the hypothesis that Φ, x:p₂ ⊢ Φ̃, x:p̃₂ ◦ { • |≈ ⌊p̃₁⟦x/ν⟧⌋! }; thus, analogously to the base case, we conclude Φ̃ ⊢ {ν:B | p̃₂} <̃: {ν:B | p̃₁}. Applying rule (<:-fun) we conclude Φ̃ ⊢ x:{ν:B | p̃₁} → T̃₁₂ <̃: x:{ν:B | p̃₂} → T̃₂₂.

Case (x:(y:T̃₁₁ → T̃₁₂) → T̃₁₃). Direct by the induction hypothesis, noting that the binding for x, being function-typed, does not add useful information when added to the logical environment.

Lemma 64. If Φ̃ ⊢ T̃₁ <̃: T̃₂ then T̃₁ ∗ T̃₂ | C and Φ ⊢ Φ̃ ◦ C.

Proof. By hypothesis there exists ⟨Φ, T₁, T₂⟩ ∈ γτ(Φ̃, T̃₁, T̃₂) such that Φ ⊢ T₁ <: T₂. By induction on the derivation of Φ ⊢ T₁ <: T₂.
Case (<:-refine). Direct by Lemma 62.
Case (<:-fun). We have T̃₁ = x:T̃₁₁ → T̃₁₂ and T̃₂ = x:T̃₂₁ → T̃₂₂. By case analysis on the structure of T̃₁₁; both cases follow directly from Lemma 62.

Proposition 15. Φ̃ ⊢ T̃₁ ≲ T̃₂ if and only if Φ̃ ⊢ T̃₁ <̃: T̃₂.

Proof. Direct by lemmas 64 and 63.

A.12 Dynamic Operators

Definition 43. Let Φ̃ = (Φ̃₁, y:p̃, Φ̃₂) be a gradual logical environment, and r a static formula. For Φ̃ we define its admissible set on y implying r as:
  { p′ ∈ γp(p̃) | ∃(Φ₁, Φ₂) ∈ γΦ(Φ̃₁, Φ̃₂), ⌊Φ₁, y:p′, Φ₂⌋ ⊨ r }
We omit y, Φ̃ or r when they are clear from the context.

Lemma 65. p ⊨ ⌊p⌋↓~y.

Proof. Let M[~y ↦ ~v] be a model for p. Then M ⊨ ⌊p⌋↓~y, which implies M[~y ↦ ~v] ⊨ ⌊p⌋↓~y.

Lemma 66 (Join for leftmost gradual binding). Let Φ̃ = (Φ₁, y:p ∧ ?, Φ̃₂) be a well-formed gradual logical environment, ~x = dom(Φ₁) the vector of variables bound in Φ₁, ~z = dom(Φ̃₂) the vector of variables bound in Φ̃₂, and r(~x,y,~z) a static formula. Let Φ̃′ = (Φ₁, y:p ∧ ?, Φ₂) be the environment resulting from the reduction of Φ̃ by iteratively applying Lemma 62 until reaching the binding for y. Define
  q(~x,ν) = (∀~z, ⌊Φ₁⌋ ∧ ⌊Φ₂⌋ → r(~x,ν,~z)) ∧ p(~x,ν)
  q′(~x,ν) = ((∃ν, q(~x,ν)) → q(~x,ν)) ∧ (¬(∃ν, q(~x,ν)) → p(~x,ν))
The admissible set on y implying r(~x,y,~z) of Φ̃ is empty or its join is q′(~x,ν).

Proof. By Lemma 61 the admissible set on y implying r(~x,y,~z) is the same for Φ̃ and Φ̃′, because no step of the reduction changes the possible set of admissible environments in the sub-environment to the left of the binding under focus. We prove that q′(~x,ν) is in the admissible set and then that it is an upper bound for every formula in the admissible set, thus it must be the join.
First, q′(~x,ν) ∈ γp(p(~x,ν) ∧ ?); the proof is similar to that of Lemma 62. Second, if M ⊨ q′(~x,ν) it must be that M ⊨ r(~x,y,~z). As in Lemma 61, it is first necessary to prove that M must be a model for ∃ν, q(~x,ν), and then conclude that M must be a model for r(~x,y,~z). For this we assume that there is at least one formula in the admissible set, otherwise we are done anyway. Then q′(~x,ν) is in the admissible set if this set is non-empty.
We now prove that q′(~x,ν) is an upper bound for the admissible set if it is non-empty. Let s(~x,ν) be an arbitrary formula in the admissible set. Then ⌊Φ₁⌋ ∧ s(~x,y) ∧ ⌊Φ₂⌋ ⊨ r(~x,y,~z). Let M be a model for s(~x,ν). If M ⊭ ∃ν, q(~x,ν) then M trivially models q′(~x,ν), since M ⊨ p(~x,ν) because s(~x,ν) is in the admissible set. Otherwise we must prove that M ⊨ q(~x,ν). Again, we know that M models p(~x,ν), thus it suffices to show that M ⊨ ∀~z, ⌊Φ₁⌋ ∧ ⌊Φ₂⌋ → r(~x,ν,~z). Let ~v be an arbitrary vector of values; we prove that M[~z ↦ ~v] ⊨ ⌊Φ₁⌋ ∧ ⌊Φ₂⌋ → r(~x,ν,~z). If M[~z ↦ ~v] ⊭ ⌊Φ₁⌋ ∧ ⌊Φ₂⌋ we are done. Otherwise M[~z ↦ ~v] ⊨ ⌊Φ₁⌋ ∧ s(~x,y) ∧ ⌊Φ₂⌋, because s(~x,ν) does not mention variables in ~z and we originally assumed M ⊨ s(~x,ν). Because s(~x,ν) is in the admissible set it must be that M[~z ↦ ~v] ⊨ r(~x,y,~z). Thus M[~z ↦ ~v] ⊨ ⌊Φ₁⌋ ∧ ⌊Φ₂⌋ → r(~x,ν,~z), and we conclude that M ⊨ ∀~z, ⌊Φ₁⌋ ∧ ⌊Φ₂⌋ → r(~x,ν,~z).

Lemma 67 (Join for inner gradual binding). Let Φ̃ = (Φ̃₁, y:p ∧ ?, Φ̃₂) be a gradual logical environment such that dom?(Φ̃₁) is non-empty, and r a static formula. If the admissible set on y implying r of Φ̃ is non-empty then its join is p.

Proof. Let p′ be any formula in the admissible set on y. Then there exists (Φ₁, Φ₂) ∈ γΦ(Φ̃₁, Φ̃₂) such that ⌊Φ₁, y:p′, Φ₂⌋ ⊨ r. Let p̄ be an upper bound for the admissible set. Let M be an arbitrary model such that M ⊨ p; it suffices to show that M ⊨ p̄, since we already know that p is an upper bound for the admissible set. There is some x ∈ dom?(Φ̃₁). Let q = Φ̃₁(x) be the formula bound to x in Φ̃₁, and let v be the value bound to x in M. We build the environment Φ′₁, equal to Φ₁ in every binding except x, where we bind q ∧ ν ≠ v. The formula s = (x ≠ v → p′) ∧ (x = v → p) is in the admissible set because s ∈ γp(p ∧ ?) and ⌊Φ′₁, y:s, Φ₂⌋ ⊨ r. Moreover M ⊨ s, thus M ⊨ p̄ and we conclude.

Lemma 68. Let (Φ₁, x:p) and (Φ₁, x:s ∧ ?, Φ₂) be two well-formed gradual environments, ~z = dom(Φ₂) and r a static formula. Define:
  q(~x,ν) = (∀~z, ⌊Φ₂⌋ → r) ∧ s
  q′(~x,ν) = ((∃ν, q(~x,ν)) → q(~x,ν)) ∧ (¬(∃ν, q(~x,ν)) → s)
If there exists s′ ∈ γp(s ∧ ?) such that ⌊Φ₁, x:p⌋ ⊨ s′ and ⌊Φ₁, x:s′, Φ₂⌋ ⊨ r, then ⌊Φ₁, x:p⌋ ⊨ q′ and ⌊Φ₁, x:q′, Φ₂⌋ ⊨ r.

Proof. Let M be a model for ⌊Φ₁, x:p⌋. Then by hypothesis M ⊨ s′, and it follows directly that M ⊨ q′. Now let M be a model for ⌊Φ₁, x:q′, Φ₂⌋. It must be that M ⊨ ∃ν, q; indeed, it suffices to exhibit a value v such that M[ν ↦ v] ⊨ p. Hence we have that M ⊨ r and we conclude.

Lemma 69. Let (Φ̃, x:p̃) and (Φ̃, x:s ∧ ?, Φ₂) be two well-formed gradual environments and r a static formula. Define:
  q(~x,ν) = (∀~z, ⌊Φ₂⌋ → r) ∧ s
  q′(~x,ν) = ((∃ν, q(~x,ν)) → q(~x,ν)) ∧ (¬(∃ν, q(~x,ν)) → s)
Let Φ ∈ γΦ(Φ̃) and p ∈ γp(p̃). There exists s′ ∈ γp(s ∧ ?) such that ⌊Φ, x:p⌋ ⊨ s′ and ⌊Φ, x:s′, Φ₂⌋ ⊨ r if and only if ⌊Φ, x:p⌋ ⊨ q′ and ⌊Φ, x:q′, Φ₂⌋ ⊨ r.

Proof.
(⇒) Follows from Lemma 68.
(⇐) It suffices to notice that q′ ∈ γp(s ∧ ?).

Proposition 7 (Partial Galois connection for interior). The pair ⟨ατ, γτ⟩ is a { F_{I<:} }-partial Galois connection.

Proof. Follows directly from lemmas 66 and 67, since they characterize the join when the admissible set is non-empty.

Proposition 8 (Partial Galois connection for transitivity). The pair ⟨ατ, γτ⟩ is a { F_{∘<:} }-partial Galois connection.

Proof. Follows directly from lemmas 69, 67 and 66.

Proposition 9 (Partial Galois connection for subtyping substitution). The pair ⟨ατ, γτ⟩ is a { F_{∘<:[v/x]} }-partial Galois connection.

Proof. Follows directly from lemmas 69, 67 and 66.