
Quantifiers and dependent types

Constructive Logic (15-317)    Instructor: Giselle Reis

Lecture 05

We have largely ignored the quantifiers so far. It is time to give them the proper attention: (1) design their rules, (2) show that these rules are locally sound and complete, and (3) give them a computational interpretation. Quantifiers are statements about some formula parametrized by a term. Since we are working with a first-order logic, this term will have a simple type τ, different from the type o of formulas. A second-order logic allows terms of type τ → ι, for simple types τ and ι, and higher-order logics allow terms of arbitrary types. In particular, one can quantify over formulas in second- and higher-order logics, but not in first-order logic. As a consequence, the principle of induction (in its general form) can be expressed in those logics as:

∀P. ∀n. (P(z) ∧ (∀x. P(x) ⊃ P(s(x))) ⊃ P(n))

But this expressive power comes at a cost, so we will restrict ourselves to first-order logic in this course. Also, we could allow multiple types for terms (many-sorted logic), but since a logic with finitely many types can be reduced to a single-sorted logic, we work with a single type, called τ, for simplicity.
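As a preview of Section 3, where quantifiers are given a computational reading, the principle above can be written down in Lean, a dependently typed language (not used in the lecture itself). The sketch below is only expressible because Lean lets us quantify over the predicate P; z and s(x) are written as 0 and x + 1.

  -- General induction over natural numbers, with the quantification over the
  -- predicate P made explicit. P ranges over predicates, not individual terms,
  -- so this statement lives beyond first-order logic.
  example : ∀ (P : Nat → Prop), (P 0 ∧ (∀ x, P x → P (x + 1))) → ∀ n, P n := by
    intro P h n
    induction n with
    | zero      => exact h.1
    | succ k ih => exact h.2 k ih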

1 Rules in natural deduction

To design the natural deduction rules for the quantifiers, we will follow the same procedure as for the other connectives and look at their meanings. Some examples will also help along the way. Since we now have a new element in our language, namely terms, we will have a new judgment a : τ denoting that the term a has type τ. Let’s start with universal quantification.


A formula ∀x.A holds iff we know that, for every term a chosen, A[a/x]¹ holds.

How can we design introduction and elimination rules for ∀? Remember that Gentzen’s idea was to develop a calculus that mimics mathematical reasoning, so let’s look at an example of a universal statement: every natural number n has a unique prime factorization. The introduction rule corresponds to a proof of that statement, while the elimination rule corresponds to using it in a proof. A proof of this statement typically starts as: let n be a natural number... and goes on to prove the property for such an n. So we take a generic variable, assume nothing about it except its type, and prove the property. This translates to the following natural deduction rule:

     a : τ
       ⋮
  A[a/x] true
  ----------- ∀Iᵃ
   ∀x.A true

The term a is called an eigenvariable and it should be fresh, meaning that it has not occurred anywhere else in the proof. This guarantees that we are using a completely generic term of that type, and if the property holds for it, it will hold for any instantiation by an actual term. What if we want to use a universal statement? Suppose we are proving a theorem and we need to use the prime factorization of a number. If this is a natural number, we can go ahead and use our theorem. All we need to do is show that the object we need the factorization of is a natural number. This translates to the following natural deduction elimination rule:

  ∀x.A true    t : τ
  ------------------ ∀E
      A[t/x] true

In this case, the term t already exists and we need to show that it satisfies the condition for applying the theorem, i.e., that it has the correct type.
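To see both rules at work, here is a minimal Lean sketch (τ and A below are placeholder assumptions, not part of the lecture): introduction picks a fresh a, and elimination applies the universal statement to a particular t.

  -- Placeholder domain and predicate, standing in for τ and A.
  variable (τ : Type) (A : τ → Prop)

  -- ∀I: pick a completely generic a : τ (the eigenvariable) and prove A for it.
  example : ∀ x : τ, A x → A x := by
    intro a    -- fresh a; nothing is assumed about it beyond a : τ
    intro h
    exact h

  -- ∀E: given ∀x.A and a particular term t : τ, conclude A[t/x].
  example (h : ∀ x : τ, A x) (t : τ) : A t := h t

Now for existential quantification.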

A formula ∃x.A holds iff for some term t, A[t/x] holds.

¹Denoting A with all occurrences of x replaced by a.


Introducing the existential quantifier is straightforward. All we have to do is provide a witness, i.e., a term for which the proposition A holds. This gives us two premises:

  A[t/x] true    t : τ
  -------------------- ∃I
       ∃x.A true

The premise on the left represents the fact that A[t/x] holds and the premise on the right shows that the term has the correct type. What about the elimination rule? Suppose you have the following “theorem”: there exists an algorithm to merge two sorted lists in constant time, which can be used to show that there exists a sorting algorithm with complexity O(n). The proof goes along these lines: let a be such a (magical) constant-time merging algorithm; then we can use it in mergesort and get a new recurrence relation for its work: W(n) = 2W(n/2) + c. Solving this recurrence gives us a work of O(n). Of course this theorem is not true (it would be too good!), but the idea of using an existential statement ∃x.A is that we can assume the existence of a generic term a for which the theorem holds and use this fact to show anything else. In natural deduction terms, this is:

                   u
       a : τ   A[a/x] true
             ⋮
  ∃x.A true      C true
  ------------------------ ∃Eᵃ,ᵘ
           C true

As in the case of ∀I, a is an eigenvariable. If freshness were not required in this case, we would be able to prove unsound formulas, such as ∃x.A ⊃ ∀x.A:

       u                 v
   ∃x.A true        A[a/x] true
   ----------------------------- ∃Eᵃ,ᵛ
           A[a/x] true
           ----------- ∀Iᵃ
            ∀x.A true
   ----------------------------- ⊃Iᵘ
        ∃x.A ⊃ ∀x.A true
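Here is the corresponding Lean sketch for the existential rules (again with placeholder assumptions τ, A, and C): introduction packages a witness with its evidence, and elimination opens the package as a generic a with assumption u, whose scope Lean keeps local, which is exactly the freshness condition violated above.

  -- Placeholder domain, predicate, and goal, standing in for τ, A, and C.
  variable (τ : Type) (A : τ → Prop) (C : Prop)

  -- ∃I: provide a witness t together with evidence that A holds for it.
  example (t : τ) (h : A t) : ∃ x, A x := ⟨t, h⟩

  -- ∃E: from ∃x.A, assume a generic a : τ with u : A[a/x] and derive C.
  example (h : ∃ x, A x) (k : ∀ a : τ, A a → C) : C := by
    cases h with
    | intro a u => exact k a u   -- a is the eigenvariable, u the local assumption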


2 Local soundness and completeness

Now we check whether these rules are in harmony by showing local soundness and completeness. A quick reminder: local soundness amounts to showing that whatever information is extracted by the elimination rules is already packaged by the introduction rule. This is shown by a local reduction, i.e., the introduction of a connective followed by its elimination can be transformed into a more direct proof of the conclusion. The local reduction for the universal quantifier is:

     a : τ
       D
  A[a/x] true
  ----------- ∀Iᵃ          E
   ∀x.A true             t : τ
  ----------------------------- ∀E
           A[t/x] true

    ⇒R

       E
     t : τ
     D[t/a]
  A[t/x] true

Since D is a derivation of A[a/x] for a generic a, i.e., one that makes no assumptions about its structure, a can be safely replaced by an actual term t. The local reduction for the existential quantifier is:

      D           E                       u
  A[t/x] true   t : τ           a : τ   A[a/x] true
  ------------------- ∃I                F
       ∃x.A true                      C true
  -------------------------------------------- ∃Eᵃ,ᵘ
                    C true

    ⇒R

      E           D
    t : τ     A[t/x] true
          F[t/a]
          C true

Notice again how we can “instantiate” a with t in F, since a is an eigenvariable. Local completeness amounts to showing that the information obtained by the elimination rules is enough to reconstruct the formula. In this case, we use local expansions. The local expansion for the universal quantifier is:

      D
  ∀x.A true

    ⇒E

      D
  ∀x.A true    a : τ
  ------------------ ∀E
     A[a/x] true
     ----------- ∀Iᵃ
      ∀x.A true


The elimination of ∀ requires some term of type τ. Any term will do, since the statement tells us that A holds for all terms. In particular, it can be the eigenvariable provided by the ∀I rule. The local expansion for the existential quantifier is:

      D
  ∃x.A true

    ⇒E

                            u
                  A[a/x] true   a : τ
                  ------------------- ∃I
      D               ∃x.A true
  ∃x.A true
  ------------------------------------ ∃Eᵃ,ᵘ
              ∃x.A true

In this case, it is the introduction of ∃ that requires a term of type τ and evidence of A for that term. These are provided by the assumptions of ∃E.

3 Proofs as programs

The next thing we did for all the other connectives was to show their computational interpretation in the Curry-Howard isomorphism. Does this extend to the quantifiers as well? Yes, quantifiers also have a programming language counterpart, called dependent types. Unfortunately, not many programming languages implement dependent types, mostly because they add complexity to type checking, possibly leading to undecidability. Nevertheless, it is a very powerful feature: by using dependent types we can prove that a program meets its specification just by type-checking it!

Dependent types are types parametrized by a value. Imagine that you are writing a program and you want to use an array of size 5. How do you declare this type? In most programming languages there is no way of making this restriction at the type level, but with dependent types we can have a type ∀n.array(n) denoting a family of types: arrays of size n. Then you can simply declare your type as array(5). This is called a dependent type²: it takes a value and constructs a new type. Naturally, it will be represented on the programming side as λx.M. Now suppose you want to define a type for prime numbers. This can be done via a dependent pair type³ ⟨n : int, prime(n)⟩, given a proposition prime that checks for primality. This type denotes all integers n that satisfy prime(n). The rules for quantifiers with proof terms are:

²Denoted by Πx:τ.τ′ in the type theory world.
³Denoted by Σx:τ.τ′ in the type theory world.


     a : τ
       ⋮
  M : A[a/x]
  --------------- ∀Iᵃ
  λa:τ.M : ∀x.A

  M : ∀x.A    t : τ
  ----------------- ∀E
    M t : A[t/x]

  t : τ    M : A[t/x]
  ------------------- ∃I
     ⟨t, M⟩ : ∃x.A

             a : τ   u : A[a/x]
                   ⋮
  M : ∃x.A       N : C
  ---------------------------- ∃Eᵃ,ᵘ
  let ⟨a, u⟩ = M in N : C
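In Lean, these proof terms are ordinary programs. The following sketch (τ, A, and C are placeholder assumptions) shows a ∀-proof as a λ-abstraction used by application, and an ∃-proof as a pair ⟨t, M⟩ consumed by destructuring.

  variable (τ : Type) (A : τ → Prop) (C : Prop)

  -- ∀I / ∀E: λ-abstraction and application, as in the rules above.
  example : ∀ x : τ, A x → A x := fun (a : τ) (u : A a) => u   -- λa:τ. M
  example (M : ∀ x : τ, A x) (t : τ) : A t := M t              -- M t

  -- ∃I / ∃E: the pair ⟨t, M⟩ and the "let ⟨a, u⟩ = M in N" destructuring.
  example (t : τ) (M : A t) : ∃ x, A x := ⟨t, M⟩
  example (M : ∃ x, A x) (N : ∀ a : τ, A a → C) : C :=
    match M with
    | ⟨a, u⟩ => N a u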

Annotating the local reductions and expansions with proof terms gives us the following transformations:

(λa.M) t ⇒R M[t/a]

let ⟨a, u⟩ = ⟨t, M⟩ in N ⇒R N[t/a][M/u]

M : ∀x.A ⇒E λa. M a

M : ∃x.A ⇒E let ⟨a, u⟩ = M in ⟨a, u⟩
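In a language like Lean these transformations are definitional equalities (β-reduction and η-expansion); here is a small sketch on ordinary data, rather than proofs, where rfl checks each conversion.

  -- (λa.M) t ⇒R M[t/a]: β-reduction.
  example (f : Nat → Nat) (t : Nat) : (fun a => f a) t = f t := rfl

  -- let ⟨a, u⟩ = ⟨t, M⟩ in N ⇒R N[t/a][M/u], here on an ordinary pair.
  example (t m : Nat) : (match (t, m) with | (a, u) => a + u) = t + m := rfl

  -- M ⇒E λa. M a and M ⇒E let ⟨a, u⟩ = M in ⟨a, u⟩: the two expansions.
  example (f : Nat → Nat) : f = fun a => f a := rfl
  example (p : Nat × Nat) : p = (p.1, p.2) := rfl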

Remark 1. Notice how this is different from polymorphism. When we declare a list of type α list, we are quantifying over types: for every type α there is a type of lists of α elements. In dependent types, we quantify over values. Just out of curiosity: polymorphism in programming languages is captured by the polymorphic λ-calculus (also called System F), which is equivalent to second-order logic with only universal quantification.
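To make the contrast concrete, here is a Lean sketch; Vec and IsPrime are illustrative names introduced here, not from the lecture. List α is quantified over the element type, while Vec α n and the subtype of primes are indexed by values, matching the array(n) and ⟨n : int, prime(n)⟩ examples from Section 3.

  -- Polymorphism: quantification over types. One definition works for every α.
  def headOr {α : Type} (d : α) : List α → α
    | []     => d
    | x :: _ => x

  -- Dependent types: quantification over values. The length n is part of the
  -- type, so "array of size 5" is simply Vec α 5.
  inductive Vec (α : Type) : Nat → Type where
    | nil  : Vec α 0
    | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

  def v : Vec Nat 2 := .cons 1 (.cons 2 .nil)

  -- Because the length is in the type, head needs no default value: it only
  -- accepts vectors of size n + 1.
  def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
    | .cons x _ => x

  -- The dependent pair ⟨n : int, prime(n)⟩ corresponds to a subtype that
  -- packages a number with a proof; IsPrime is a stand-in primality predicate.
  def IsPrime (n : Nat) : Prop := 2 ≤ n ∧ ∀ m, m ∣ n → m = 1 ∨ m = n
  #check { n : Nat // IsPrime n }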
