
Linear Logic Pages

Yves Lafont

1999 (last correction in 2017)

http://iml.univ-mrs.fr/~lafont/linear/

1 Sequent calculus

Proofs: Formulas - Sequents and rules - Basic equivalences and second order definability

Alternative formulations: Two-sided sequent calculus - Hybrid sequent calculus

Variations: Intuitionistic Linear Logic - More structural rules

Reductions: Theorem: Cut can be eliminated and identity can be restricted to the atomic case.

Cut elimination (key cases) - Cut elimination (commutative cases) - Expansion of identities

Corollaries: subformula property, strong and weak consistency, splitting property, disjunction property, and existence property.

About provable formulas

Translations: Polarities - Intuitionistic Logic - Classical Logic

Miscellaneous: Invariants - About exponential rules

2 Phase semantics and decision problems

MLL = Multiplicative Linear Logic
MALL = Multiplicative Additive Linear Logic
MELL = Multiplicative Exponential Linear Logic
LL = Linear Logic
WLL = Affine Linear Logic

NP = non-deterministic polynomial time
PSPACE = polynomial space
NEXPTIME = non-deterministic exponential time

Propositional case: Phase spaces - Phase models - Syntactical model

Theorem: If A is a propositional formula, the following properties are equivalent:

1. A is provable; 2. A holds in any phase model; 3. A holds in the syntactical model; 4. A is cut-free provable.

Corollary: cut elimination (propositional case)

Finite model property: Finite models - Semilinear models

                            MLL   MALL   MELL   LL    WLL
finite model property       yes   yes    no     no    yes
semilinear model property   yes   yes    ?      no    yes

Provability problem    MLL           MALL                MELL          LL
propositional case     NP-complete   PSPACE-complete     ?             undecidable
first order case       NP-complete   NEXPTIME-complete   undecidable   undecidable
second order case      undecidable   undecidable         undecidable   undecidable

3 Formulas

Formulas are built from atoms and units, using (binary) connectives, modalities, and quantifiers:
• An atom is a propositional (i.e. second order) variable α or its dual α⊥. More generally, it is an atomic predicate α(t1, . . . , tn) or its dual α⊥(t1, . . . , tn), where t1, . . . , tn are first order terms.
• The multiplicative connectives are ⊗ (times, or multiplicative conjunction) and its dual ⅋ (par, or multiplicative disjunction). The corresponding units are 1 (one) and ⊥ (bottom).
• The additive connectives are & (with, or additive conjunction) and its dual ⊕ (plus, or additive disjunction). The corresponding units are ⊤ (top) and 0 (zero).

• The exponential modalities are ! (of course) and its dual ? (why not).
• The quantifiers are ∀ (for all, or universal quantifier) and its dual ∃ (exists, or existential quantifier). A quantifier applies to a first order variable x or to a second order variable α.

If A is an atom, A⊥ stands for its dual. In particular, A⊥⊥ = A. Linear negation is extended to all formulas by de Morgan equations:

(A ⊗ B)⊥ = A⊥ ⅋ B⊥,   (A ⅋ B)⊥ = A⊥ ⊗ B⊥,   1⊥ = ⊥,   ⊥⊥ = 1,
(A & B)⊥ = A⊥ ⊕ B⊥,   (A ⊕ B)⊥ = A⊥ & B⊥,   ⊤⊥ = 0,   0⊥ = ⊤,

(!A)⊥ = ?A⊥, (?A)⊥ = !A⊥, (∀ξ.A)⊥ = ∃ξ.A⊥, (∃ξ.A)⊥ = ∀ξ.A⊥.

Linear implication is defined by A ⊸ B = A⊥ ⅋ B. If x is a first order variable and t is a first order term, A[t/x] stands for the formula A where all free occurrences of x have been replaced by t (and bound variables have been renamed when necessary). Similarly, if α is a propositional variable and B is a formula, A[B/α] stands for the formula A where all free occurrences of α (respectively α⊥) have been replaced by B (respectively B⊥).
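The De Morgan equations completely determine linear negation once it is fixed on the atoms. As a small illustration (not part of the original notes), here is a hedged Python sketch of this computation for propositional formulas, with ad hoc constructor names; predicates and quantifiers are omitted.

# A minimal sketch (not from the original notes): propositional formulas as
# nested tuples, with linear negation computed by the De Morgan equations above.
# The constructor names ('tensor', 'par', ...) are hypothetical choices.

def dual(f):
    """Linear negation A⊥, pushed down to the atoms by De Morgan."""
    tag = f[0]
    if tag == 'atom':                       # ('atom', name, positive?)
        return ('atom', f[1], not f[2])     # exchanges α and α⊥
    if tag in ('one', 'bot', 'top', 'zero'):
        return ({'one': 'bot', 'bot': 'one', 'top': 'zero', 'zero': 'top'}[tag],)
    if tag in ('tensor', 'par', 'with', 'plus'):
        swap = {'tensor': 'par', 'par': 'tensor', 'with': 'plus', 'plus': 'with'}
        return (swap[tag], dual(f[1]), dual(f[2]))
    if tag in ('ofcourse', 'whynot'):       # (!A)⊥ = ?A⊥ and (?A)⊥ = !A⊥
        return ({'ofcourse': 'whynot', 'whynot': 'ofcourse'}[tag], dual(f[1]))
    raise ValueError(f'unknown formula: {f}')

def limp(a, b):
    """Linear implication A ⊸ B = A⊥ ⅋ B."""
    return ('par', dual(a), b)

# A⊥⊥ = A holds by construction:
A = ('tensor', ('atom', 'a', True), ('whynot', ('atom', 'b', False)))
assert dual(dual(A)) == A

Since negation only swaps each constructor with its dual, the involution A⊥⊥ = A holds by construction, as the final assertion checks.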

4 Sequents and rules

Sequents are of the form ⊢ Γ where Γ is a (possibly empty) sequence of formulas A1, . . . , An. In practice, the sequent ⊢ Γ is identified with the sequence Γ, and a sequent consisting of a single formula is identified with this formula. Γ⊥ stands for A1⊥, . . . , An⊥ (respectively !Γ for !A1, . . . , !An and ?Γ for ?A1, . . . , ?An), and if ∆ is another sequence of formulas, Γ ⊢ ∆ stands for ⊢ Γ⊥, ∆.

A sequent is provable if it can be derived using the following rules:

• Exchange (x): from ⊢ Γ, A, B, ∆ derive ⊢ Γ, B, A, ∆.
• Identity (id): ⊢ A, A⊥.
• Cut: from ⊢ A, Γ and ⊢ A⊥, ∆ derive ⊢ Γ, ∆.

• Times (⊗): from ⊢ A, Γ and ⊢ B, ∆ derive ⊢ A ⊗ B, Γ, ∆.
• Par (⅋): from ⊢ A, B, Γ derive ⊢ A ⅋ B, Γ.
• One (1): ⊢ 1.
• Bottom (⊥): from ⊢ Γ derive ⊢ ⊥, Γ.

• With (&): from ⊢ A, Γ and ⊢ B, Γ derive ⊢ A & B, Γ.
• Plus (⊕1, ⊕2): from ⊢ A, Γ (respectively ⊢ B, Γ) derive ⊢ A ⊕ B, Γ.
• Top (⊤): ⊢ ⊤, Γ.

• Promotion (!): from ⊢ A, ?Γ derive ⊢ !A, ?Γ.
• Dereliction (?d): from ⊢ A, Γ derive ⊢ ?A, Γ.
• Contraction (?c): from ⊢ ?A, ?A, Γ derive ⊢ ?A, Γ.
• Weakening (?w): from ⊢ Γ derive ⊢ ?A, Γ.

• For all (∀): from ⊢ A, Γ derive ⊢ ∀ξ.A, Γ.
• There exists (∃): from ⊢ A[τ/ξ], Γ derive ⊢ ∃ξ.A, Γ.

Note that exchange is the only structural rule. The rules for exponentials are respectively called promotion, dereliction, contraction, and weakening. In the ∀-rule, ξ must have no free occurrence in Γ, but it is well understood that a bound variable can always be renamed. In the ∃-rule, τ is a first order term (if ξ is a first order variable) or a formula (if ξ is a second order variable).

By exchange, any permutation of Γ can be derived from Γ, so that in practice, sequents are considered as finite multisets, and exchange is implicit. Similarly, the following rules are derivable:

• ?D: from ⊢ Γ, ∆ derive ⊢ ?Γ, ∆.
• ?C: from ⊢ ?Γ, ?Γ, ∆ derive ⊢ ?Γ, ∆.
• ?W: from ⊢ ∆ derive ⊢ ?Γ, ∆.
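For instance, the sequent ⊢ A⊥ ⅋ A, that is A ⊸ A, is derived from an identity by exchange and the ⅋-rule. Here is a sketch of this derivation in LaTeX, assuming the bussproofs and cmll packages (where \parr stands for ⅋); the typesetting choices are not part of the original notes.

% A small sketch, assuming \usepackage{bussproofs} and \usepackage{cmll}.
\begin{prooftree}
  \AxiomC{}
  \RightLabel{$id$}
  \UnaryInfC{$\vdash A, A^{\perp}$}
  \RightLabel{$x$}
  \UnaryInfC{$\vdash A^{\perp}, A$}
  \RightLabel{$\parr$}
  \UnaryInfC{$\vdash A^{\perp} \parr A$}
\end{prooftree}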

5 Basic equivalences and second order definability

We say that A and B are equivalent and we write A ≡ B if both A ⊢ B and B ⊢ A are provable. This amounts to saying that, for any Γ, the sequent ⊢ A, Γ is provable if and only if ⊢ B, Γ is provable, or more generally, that ⊢ Γ[A/α] is provable if and only if ⊢ Γ[B/α] is provable.

Here are some typical equivalences:

A ⊗ (B ⊗ C) ≡ (A ⊗ B) ⊗ C,   A ⊗ B ≡ B ⊗ A,   A ⊗ 1 ≡ A,
A ⅋ (B ⅋ C) ≡ (A ⅋ B) ⅋ C,   A ⅋ B ≡ B ⅋ A,   A ⅋ ⊥ ≡ A,
A & (B & C) ≡ (A & B) & C,   A & B ≡ B & A,   A ≡ A & A,   A & ⊤ ≡ A,
A ⊕ (B ⊕ C) ≡ (A ⊕ B) ⊕ C,   A ⊕ B ≡ B ⊕ A,   A ≡ A ⊕ A,   A ⊕ 0 ≡ A,
A ⊗ (B ⊕ C) ≡ (A ⊗ B) ⊕ (A ⊗ C),   A ⊗ 0 ≡ 0,
A ⅋ (B & C) ≡ (A ⅋ B) & (A ⅋ C),   A ⅋ ⊤ ≡ ⊤,
!!A ≡ !A,   !A ≡ !A ⊗ !A,   !1 ≡ 1,   !(A & B) ≡ !A ⊗ !B,   !⊤ ≡ 1,
??A ≡ ?A,   ?A ≡ ?A ⅋ ?A,   ?⊥ ≡ ⊥,   ?(A ⊕ B) ≡ ?A ⅋ ?B,   ?0 ≡ ⊥,
∀ξ.∀ζ.A ≡ ∀ζ.∀ξ.A,   ∀ξ.(A & B) ≡ ∀ξ.A & ∀ξ.B,   A ≡ ∀ξ.A,   A ⅋ ∀ξ.B ≡ ∀ξ.(A ⅋ B),
∃ξ.∃ζ.A ≡ ∃ζ.∃ξ.A,   ∃ξ.(A ⊕ B) ≡ ∃ξ.A ⊕ ∃ξ.B,   A ≡ ∃ξ.A,   A ⊗ ∃ξ.B ≡ ∃ξ.(A ⊗ B).

In the last two equivalences of the last two lines, the variable ξ must have no free occurrence in A.

By definition of linear implication, we also get:

A ⊸ (B ⊸ C) ≡ (A ⊗ B) ⊸ C,   A ⊸ (B ⅋ C) ≡ (A ⊸ B) ⅋ C,
A ⊸ B ≡ B⊥ ⊸ A⊥,   1 ⊸ A ≡ A,   A ⊸ ⊥ ≡ A⊥,
A ⊸ (B & C) ≡ (A ⊸ B) & (A ⊸ C),   (A ⊕ B) ⊸ C ≡ (A ⊸ C) & (B ⊸ C),
A ⊸ ⊤ ≡ ⊤,   0 ⊸ A ≡ ⊤,
A ⊸ ∀ξ.B ≡ ∀ξ.(A ⊸ B),   ∃ξ.B ⊸ A ≡ ∀ξ.(B ⊸ A).

In the last two equivalences, the variable ξ must have no free occurrence in A.

Note also the following equivalences:

1 ≡ ∀α.α ⊸ α,   A ⊕ B ≡ ∀α.!(A ⊸ α) ⊸ !(B ⊸ α) ⊸ α,   0 ≡ ∀α.α,
⊥ ≡ ∃α.α ⊗ α⊥,   A & B ≡ ∃α.!(α ⊸ A) ⊗ !(α ⊸ B) ⊗ α,   ⊤ ≡ ∃α.α.

In other words:

• Multiplicative units are definable in terms of second order quantifiers and multiplicative connectives.
• Additive connectives are definable in terms of second order quantifiers, multiplicative connectives, and exponentials.
• Additive units are definable in terms of second order quantifiers.
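As an illustration of the first equivalence, here is a sketch (with the same LaTeX package assumptions as above) of the direction ∀α.(α ⊸ α) ⊢ 1, written one-sided as ⊢ ∃α.(α ⊗ α⊥), 1; the witness for the ∃-rule is α := ⊥. The other direction, ⊢ ⊥, ∀α.(α⊥ ⅋ α), is obtained from the derivation of ⊢ A⊥ ⅋ A above by the ⊥-rule and the ∀-rule.

% A small sketch, assuming bussproofs and cmll; the witness is α := ⊥.
\begin{prooftree}
  \AxiomC{}
  \RightLabel{$1$}
  \UnaryInfC{$\vdash 1$}
  \RightLabel{$\bot$}
  \UnaryInfC{$\vdash \bot, 1$}
  \AxiomC{}
  \RightLabel{$1$}
  \UnaryInfC{$\vdash 1$}
  \RightLabel{$\otimes$}
  \BinaryInfC{$\vdash \bot \otimes 1, 1$}
  \RightLabel{$\exists$}
  \UnaryInfC{$\vdash \exists\alpha.\,(\alpha \otimes \alpha^{\perp}), 1$}
\end{prooftree}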

6 Two-sided sequent calculus

Formulas are built in the same way as in the one-sided calculus, except that linear negation and linear implication are primitive connectives. Sequents are of the form Γ ⊢ ∆ where Γ and ∆ are finite sequences of formulas. The rules are the following:

• Exchange (x): from Γ ⊢ ∆, A, B, Θ derive Γ ⊢ ∆, B, A, Θ, and from Γ, A, B, ∆ ⊢ Θ derive Γ, B, A, ∆ ⊢ Θ.
• Identity (id): A ⊢ A.
• Cut: from Γ ⊢ A, ∆ and Θ, A ⊢ Λ derive Γ, Θ ⊢ ∆, Λ.

• Right negation: from Γ, A ⊢ ∆ derive Γ ⊢ A⊥, ∆.
• Left negation: from Γ ⊢ A, ∆ derive Γ, A⊥ ⊢ ∆.
• (⊢ ⊸): from Γ, A ⊢ B, ∆ derive Γ ⊢ A ⊸ B, ∆.
• (⊸ ⊢): from Γ ⊢ A, ∆ and Θ, B ⊢ Λ derive Γ, Θ, A ⊸ B ⊢ ∆, Λ.

• (⊢ ⊗): from Γ ⊢ A, ∆ and Θ ⊢ B, Λ derive Γ, Θ ⊢ A ⊗ B, ∆, Λ.
• (⊗ ⊢): from Γ, A, B ⊢ ∆ derive Γ, A ⊗ B ⊢ ∆.
• (⊢ 1): ⊢ 1.
• (1 ⊢): from Γ ⊢ ∆ derive Γ, 1 ⊢ ∆.

• (⅋ ⊢): from Γ, A ⊢ ∆ and Θ, B ⊢ Λ derive Γ, Θ, A ⅋ B ⊢ ∆, Λ.
• (⊢ ⅋): from Γ ⊢ A, B, ∆ derive Γ ⊢ A ⅋ B, ∆.
• (⊥ ⊢): ⊥ ⊢.
• (⊢ ⊥): from Γ ⊢ ∆ derive Γ ⊢ ⊥, ∆.

• (⊢ &): from Γ ⊢ A, ∆ and Γ ⊢ B, ∆ derive Γ ⊢ A & B, ∆.
• (&1 ⊢, &2 ⊢): from Γ, A ⊢ ∆ (respectively Γ, B ⊢ ∆) derive Γ, A & B ⊢ ∆.
• (⊢ ⊤): Γ ⊢ ⊤, ∆.

• (⊕ ⊢): from Γ, A ⊢ ∆ and Γ, B ⊢ ∆ derive Γ, A ⊕ B ⊢ ∆.
• (⊢ ⊕1, ⊢ ⊕2): from Γ ⊢ A, ∆ (respectively Γ ⊢ B, ∆) derive Γ ⊢ A ⊕ B, ∆.
• (0 ⊢): Γ, 0 ⊢ ∆.

• (⊢ !): from !Γ ⊢ A, ?∆ derive !Γ ⊢ !A, ?∆.
• (!d): from Γ, A ⊢ ∆ derive Γ, !A ⊢ ∆.
• (!c): from Γ, !A, !A ⊢ ∆ derive Γ, !A ⊢ ∆.
• (!w): from Γ ⊢ ∆ derive Γ, !A ⊢ ∆.

• (? ⊢): from !Γ, A ⊢ ?∆ derive !Γ, ?A ⊢ ?∆.
• (?d): from Γ ⊢ A, ∆ derive Γ ⊢ ?A, ∆.
• (?c): from Γ ⊢ ?A, ?A, ∆ derive Γ ⊢ ?A, ∆.
• (?w): from Γ ⊢ ∆ derive Γ ⊢ ?A, ∆.

• (⊢ ∀): from Γ ⊢ A, ∆ derive Γ ⊢ ∀ξ.A, ∆.
• (∀ ⊢): from Γ, A[τ/ξ] ⊢ ∆ derive Γ, ∀ξ.A ⊢ ∆.
• (∃ ⊢): from Γ, A ⊢ ∆ derive Γ, ∃ξ.A ⊢ ∆.
• (⊢ ∃): from Γ ⊢ A[τ/ξ], ∆ derive Γ ⊢ ∃ξ.A, ∆.

In the ⊢ ∀-rule and in the ∃ ⊢-rule, ξ must have no free occurrence in Γ, ∆.

De Morgan equations become equivalences, so that any formula A of the two-sided calculus is equivalent to a formula Â of the one-sided calculus. A sequent Γ ⊢ ∆ is provable in the two-sided calculus if and only if the sequent ⊢ Γ̂⊥, ∆̂ is provable in the one-sided calculus. In particular, a sequent ⊢ Γ is provable in the one-sided calculus if and only if it is provable in the two-sided calculus.
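As a small continuation of the earlier Python sketch (again not part of the original notes, with the same ad hoc constructors plus 'neg' for primitive negation and 'limp' for primitive implication), the translation A ↦ Â can be written as follows.

# A minimal sketch (not from the original notes): the translation A ↦ Â from the
# two-sided calculus (with primitive negation 'neg' and implication 'limp') into
# the one-sided calculus, reusing dual() from the earlier sketch.

def hat(f):
    tag = f[0]
    if tag == 'neg':                        # (A⊥)^ is computed by De Morgan
        return dual(hat(f[1]))
    if tag == 'limp':                       # A ⊸ B becomes Â⊥ ⅋ B̂
        return ('par', dual(hat(f[1])), hat(f[2]))
    if tag == 'atom' or len(f) == 1:        # atoms and units are unchanged
        return f
    return (tag,) + tuple(hat(g) for g in f[1:])   # other connectives, componentwise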

Hybrid sequent calculus

Formulas are the same as in the one-sided calculus. Sequents are of the form ⊢ Θ ; Γ where Θ and Γ are finite sequences of formulas. We write ⊢ Γ for ⊢ ; Γ. The rules are the following (J.-M. Andreoli):

• Exchange (x): from ⊢ Θ ; Γ, A, B, ∆ derive ⊢ Θ ; Γ, B, A, ∆.
• Absorption (a): from ⊢ Θ, A, Θ′ ; A, Γ derive ⊢ Θ, A, Θ′ ; Γ.
• Identity (id): ⊢ Θ ; A, A⊥.
• Cut: from ⊢ Θ ; A, Γ and ⊢ Θ ; A⊥, ∆ derive ⊢ Θ ; Γ, ∆.

• (⊗): from ⊢ Θ ; A, Γ and ⊢ Θ ; B, ∆ derive ⊢ Θ ; A ⊗ B, Γ, ∆.
• (⅋): from ⊢ Θ ; A, B, Γ derive ⊢ Θ ; A ⅋ B, Γ.
• (1): ⊢ Θ ; 1.
• (⊥): from ⊢ Θ ; Γ derive ⊢ Θ ; ⊥, Γ.
• (&): from ⊢ Θ ; A, Γ and ⊢ Θ ; B, Γ derive ⊢ Θ ; A & B, Γ.
• (⊕1, ⊕2): from ⊢ Θ ; A, Γ (respectively ⊢ Θ ; B, Γ) derive ⊢ Θ ; A ⊕ B, Γ.
• (⊤): ⊢ Θ ; ⊤, Γ.

• (!): from ⊢ Θ ; A derive ⊢ Θ ; !A.
• (?): from ⊢ Θ, A ; Γ derive ⊢ Θ ; ?A, Γ.
• (∀): from ⊢ Θ ; A, Γ derive ⊢ Θ ; ∀ξ.A, Γ.
• (∃): from ⊢ Θ ; A[τ/ξ], Γ derive ⊢ Θ ; ∃ξ.A, Γ.

The second structural rule is called absorption. In the ∀-rule, ξ must have no free occurrence in Θ, Γ.

A sequent ⊢ Θ ; Γ is provable in the hybrid calculus if and only if ⊢ ?Θ, Γ is provable in the one-sided calculus. In particular, ⊢ Γ is provable in the one-sided calculus if and only if it is provable in the hybrid calculus. Note that, in practice, Θ can be considered as a finite set of formulas.

7 Intuitionistic Linear Logic

Formulas are built as in (Classical) Linear Logic, except that there is no negation, no why not, and par is replaced by linear implication. Sequents are of the form Γ ⊢ A where Γ is a finite sequence of formulas and A is a formula. The rules are the following:

• Exchange (x): from Γ, A, B, ∆ ⊢ C derive Γ, B, A, ∆ ⊢ C.
• Identity (id): A ⊢ A.
• Cut: from Γ ⊢ A and ∆, A ⊢ C derive Γ, ∆ ⊢ C.

• (⊢ ⊸): from Γ, A ⊢ B derive Γ ⊢ A ⊸ B.
• (⊸ ⊢): from Γ ⊢ A and ∆, B ⊢ C derive Γ, ∆, A ⊸ B ⊢ C.
• (⊢ ⊗): from Γ ⊢ A and ∆ ⊢ B derive Γ, ∆ ⊢ A ⊗ B.
• (⊗ ⊢): from Γ, A, B ⊢ C derive Γ, A ⊗ B ⊢ C.
• (⊢ 1): ⊢ 1.
• (1 ⊢): from Γ ⊢ C derive Γ, 1 ⊢ C.

• (⊢ &): from Γ ⊢ A and Γ ⊢ B derive Γ ⊢ A & B.
• (&1 ⊢, &2 ⊢): from Γ, A ⊢ C (respectively Γ, B ⊢ C) derive Γ, A & B ⊢ C.
• (⊢ ⊤): Γ ⊢ ⊤.

• (⊕ ⊢): from Γ, A ⊢ C and Γ, B ⊢ C derive Γ, A ⊕ B ⊢ C.
• (⊢ ⊕1, ⊢ ⊕2): from Γ ⊢ A (respectively Γ ⊢ B) derive Γ ⊢ A ⊕ B.
• (0 ⊢): Γ, 0 ⊢ C.

• (⊢ !): from !Γ ⊢ A derive !Γ ⊢ !A.
• (!d): from Γ, A ⊢ C derive Γ, !A ⊢ C.
• (!c): from Γ, !A, !A ⊢ C derive Γ, !A ⊢ C.
• (!w): from Γ ⊢ C derive Γ, !A ⊢ C.

• (⊢ ∀): from Γ ⊢ A derive Γ ⊢ ∀ξ.A.
• (∀ ⊢): from Γ, A[τ/ξ] ⊢ C derive Γ, ∀ξ.A ⊢ C.
• (∃ ⊢): from Γ, A ⊢ C derive Γ, ∃ξ.A ⊢ C.
• (⊢ ∃): from Γ ⊢ A[τ/ξ] derive Γ ⊢ ∃ξ.A.

In the ⊢ ∀-rule and in the ∃ ⊢-rule, ξ must have no free occurrence in Γ (respectively in Γ and C).

If A is provable in Intuitionistic Linear Logic, then it is obviously provable in Linear Logic. The converse holds if A contains no additive unit (⊤, 0) and no second order quantifier, but not in general: Linear Logic is not a conservative extension of Intuitionistic Linear Logic (H. Schellinx). For instance, the formula ((α ⊸ β) ⊸ 0) ⊸ α ⊗ ⊤ is provable in Linear Logic, but not in Intuitionistic Linear Logic.

More structural rules

Linear Logic does not allow (unrestricted) contraction and weakening:

• Contraction (c): from ⊢ A, A, Γ derive ⊢ A, Γ.
• Weakening (w): from ⊢ Γ derive ⊢ A, Γ.

If we add them, we essentially get Classical Logic with modalities. In that case, we have A ⊗ B ≡ A & B, A ⅋ B ≡ A ⊕ B, 1 ≡ ⊤, and ⊥ ≡ 0. Furthermore, the ⊥-rule becomes redundant, as well as the ?c-rule and the ?w-rule.

If we only add (unrestricted) weakening, we get Affine Linear Logic. In that case, we have 1 ≡ ⊤, ⊥ ≡ 0, and the following sequents become provable:

A ⊗ B ⊢ A & B,   A ⊕ B ⊢ A ⅋ B.

Furthermore, the ⊥-rule becomes redundant, as well as the ?w-rule.

If we only add (unrestricted) contraction, we get Contractive Linear Logic. In that case, the following sequents become provable:

A & B ⊢ A ⊗ B,   A ⅋ B ⊢ A ⊕ B.

Furthermore, the ?c-rule becomes redundant.

Contractive Linear Logic must not be confused with Relevant Logic, which has an artificial distributivity law, and therefore, no good property of cut elimination.

8 Cut elimination (key cases)

A cut with an identity is eliminated as follows: a cut between the identity ⊢ A, A⊥ and a proof of ⊢ A, Γ reduces to that proof of ⊢ A, Γ.

A cut between two matching logical rules is eliminated as follows:

• ⊗ against ⅋: a cut between ⊢ A ⊗ B, Γ, ∆ (obtained by the ⊗-rule from ⊢ A, Γ and ⊢ B, ∆) and ⊢ A⊥ ⅋ B⊥, Θ (obtained by the ⅋-rule from ⊢ A⊥, B⊥, Θ) reduces to two cuts: first cut ⊢ B, ∆ with ⊢ A⊥, B⊥, Θ to get ⊢ A⊥, ∆, Θ, then cut the result with ⊢ A, Γ to get ⊢ Γ, ∆, Θ.
• 1 against ⊥: a cut between ⊢ 1 (the 1-rule) and ⊢ ⊥, Γ (obtained by the ⊥-rule from ⊢ Γ) reduces to the proof of ⊢ Γ.
• & against ⊕1: a cut between ⊢ A & B, Γ (obtained by the &-rule from ⊢ A, Γ and ⊢ B, Γ) and ⊢ A⊥ ⊕ B⊥, ∆ (obtained by ⊕1 from ⊢ A⊥, ∆) reduces to a single cut between ⊢ A, Γ and ⊢ A⊥, ∆.
• & against ⊕2: symmetrically, with a single cut between ⊢ B, Γ and ⊢ B⊥, ∆.
• ! against ?d: a cut between ⊢ !A, ?Γ (obtained by promotion from ⊢ A, ?Γ) and ⊢ ?A⊥, ∆ (obtained by dereliction from ⊢ A⊥, ∆) reduces to a cut between ⊢ A, ?Γ and ⊢ A⊥, ∆.
• ! against ?c: a cut between ⊢ !A, ?Γ (obtained by promotion from ⊢ A, ?Γ) and ⊢ ?A⊥, ∆ (obtained by contraction from ⊢ ?A⊥, ?A⊥, ∆) reduces to two successive cuts with the promoted proof ⊢ !A, ?Γ, giving ⊢ ?A⊥, ?Γ, ∆ and then ⊢ ?Γ, ?Γ, ∆, followed by the derived rule ?C to get ⊢ ?Γ, ∆.
• ! against ?w: a cut between ⊢ !A, ?Γ (obtained by promotion) and ⊢ ?A⊥, ∆ (obtained by weakening from ⊢ ∆) reduces to the derived rule ?W applied to ⊢ ∆, giving ⊢ ?Γ, ∆.
• ∀ against ∃: a cut between ⊢ ∀ξ.A, Γ (obtained by the ∀-rule from ⊢ A, Γ) and ⊢ ∃ξ.A⊥, ∆ (obtained by the ∃-rule from ⊢ A⊥[τ/ξ], ∆) reduces to a cut between ⊢ A[τ/ξ], Γ and ⊢ A⊥[τ/ξ], ∆, where the proof of ⊢ A[τ/ξ], Γ is obtained by substituting τ for ξ in the proof of ⊢ A, Γ.

9 Cut elimination (commutative cases)

Commutation with most logical rules is straightforward. Here are some typical examples:

• ⅋: a cut between ⊢ A, B ⅋ C, Γ (obtained by the ⅋-rule from ⊢ A, B, C, Γ) and ⊢ A⊥, ∆ commutes to a cut between ⊢ A, B, C, Γ and ⊢ A⊥, ∆, giving ⊢ B, C, Γ, ∆, followed by the ⅋-rule, giving ⊢ B ⅋ C, Γ, ∆.
• ⊗: a cut between ⊢ A, B ⊗ C, Γ, Θ (obtained by the ⊗-rule from ⊢ A, B, Γ and ⊢ C, Θ) and ⊢ A⊥, ∆ commutes to a cut between ⊢ A, B, Γ and ⊢ A⊥, ∆, giving ⊢ B, Γ, ∆, followed by the ⊗-rule with ⊢ C, Θ, giving ⊢ B ⊗ C, Γ, ∆, Θ.
• &: a cut between ⊢ A, B & C, Γ (obtained by the &-rule from ⊢ A, B, Γ and ⊢ A, C, Γ) and ⊢ A⊥, ∆ commutes to one cut for each premise, giving ⊢ B, Γ, ∆ and ⊢ C, Γ, ∆, followed by the &-rule, giving ⊢ B & C, Γ, ∆ (the cut is duplicated).
• ⊤: a cut between ⊢ A, ⊤, Γ (the ⊤-rule) and ⊢ A⊥, ∆ reduces directly to the ⊤-rule, giving ⊢ ⊤, Γ, ∆ (the cut disappears).

In the case of the promotion rule, the commutation is the following: a cut between ⊢ ?A, !B, ?Γ (obtained by promotion from ⊢ ?A, B, ?Γ) and ⊢ !A⊥, ?∆ (obtained by promotion from ⊢ A⊥, ?∆) commutes to a cut between ⊢ ?A, B, ?Γ and ⊢ !A⊥, ?∆, giving ⊢ B, ?Γ, ?∆, followed by promotion, giving ⊢ !B, ?Γ, ?∆.

Expansion of identities

Identities are expanded into atomic ones as follows:

• ⊢ A ⊗ B, A⊥ ⅋ B⊥ expands into the identities ⊢ A, A⊥ and ⊢ B, B⊥, combined by the ⊗-rule (giving ⊢ A ⊗ B, A⊥, B⊥) and then the ⅋-rule.
• ⊢ 1, ⊥ expands into the 1-rule followed by the ⊥-rule.
• ⊢ A & B, A⊥ ⊕ B⊥ expands into ⊢ A, A⊥ ⊕ B⊥ (obtained by ⊕1 from the identity ⊢ A, A⊥) and ⊢ B, A⊥ ⊕ B⊥ (obtained by ⊕2 from the identity ⊢ B, B⊥), combined by the &-rule.
• ⊢ ⊤, 0 expands into the ⊤-rule.
• ⊢ !A, ?A⊥ expands into dereliction (giving ⊢ A, ?A⊥ from the identity ⊢ A, A⊥) followed by promotion.
• ⊢ ∀ξ.A, ∃ξ.A⊥ expands into the ∃-rule (giving ⊢ A, ∃ξ.A⊥ from the identity ⊢ A, A⊥) followed by the ∀-rule.

10 About provable formulas

In a cut-free proof of a formula A, the last rule must be a logical rule. Therefore:
• ⊥ is not provable (strong consistency);
• 0 is not provable (weak consistency);
• if A ⊗ B is provable, then A and B are provable (splitting property);
• if A ⊕ B is provable, then A or B is provable (disjunction property);
• if ∃ξ.A is provable, then A[τ/ξ] is provable for some τ (existence property).

To sum up:
• 1, ⊤ are provable, but not ⊥, 0, α, α⊥;
• A ⊗ B, as well as A & B, is provable if and only if A and B are provable;
• A ⅋ B is provable if and only if the sequent ⊢ A, B is provable;
• A ⊕ B is provable if and only if A or B is provable;
• !A, as well as ∀ξ.A, is provable if and only if A is provable;
• ?A is provable whenever A is provable;
• ∃ξ.A is provable if and only if A[τ/ξ] is provable for some τ.

It may happen that:
• A and B are provable, but not A ⅋ B (take A = B = 1);
• A ⅋ B is provable but neither A nor B is provable (take A = α and B = α⊥);
• ?A is provable but not A (take A = α ⊕ α⊥).

Note also that:
• the empty sequent is not provable, so that A and A⊥ cannot be both provable;
• the sequent ⊢ A1, . . . , An is provable if and only if the formula A1 ⅋ · · · ⅋ An is provable.

Polarities

We say that a formula A is positive (respectively negative) if A ≡ !A (respectively A ≡ ?A). Obviously, A is positive if and only if A⊥ is negative. Furthermore:
• 1 and 0 are positive, as well as !A for any A;
• if A and B are positive, so are A ⊗ B and A ⊕ B;
• if A is positive, so is ∃ξ.A.

By duality, we get:
• ⊥ and ⊤ are negative, as well as ?A for any A;
• if A and B are negative, so are A ⅋ B and A & B;
• if A is negative, so is ∀ξ.A.

We say that A is regular if A ≡ ?!A. We have:
• if A is positive, then ?A is regular;
• ⊥ is regular, as well as ?!A for any A;
• if A and B are regular, so is A ⅋ B.

11 Intuitionistic Logic

Intuitionistic formulas are built from atoms (α, or more generally, α(t1, . . . , tn)) and units (⊤, ⊥) using binary connectives (⇒, ∧, ∨) and quantifiers (∀, ∃). Sequents are of the form Γ ⊢ C where Γ is a sequence of formulas and C is a formula. The rules are the following:

• Exchange (x): from Γ, A, B, ∆ ⊢ C derive Γ, B, A, ∆ ⊢ C.
• Contraction (c): from Γ, A, A ⊢ C derive Γ, A ⊢ C.
• Weakening (w): from Γ ⊢ C derive Γ, A ⊢ C.
• Identity (id): A ⊢ A.
• Cut: from Γ ⊢ A and ∆, A ⊢ C derive Γ, ∆ ⊢ C.

• (⊢ ⇒): from Γ, A ⊢ B derive Γ ⊢ A ⇒ B.
• (⇒ ⊢): from Γ ⊢ A and ∆, B ⊢ C derive Γ, ∆, A ⇒ B ⊢ C.

• (⊢ ∧): from Γ ⊢ A and Γ ⊢ B derive Γ ⊢ A ∧ B.
• (∧1 ⊢, ∧2 ⊢): from Γ, A ⊢ C (respectively Γ, B ⊢ C) derive Γ, A ∧ B ⊢ C.
• (⊢ ⊤): Γ ⊢ ⊤.

• (∨ ⊢): from Γ, A ⊢ C and Γ, B ⊢ C derive Γ, A ∨ B ⊢ C.
• (⊢ ∨1, ⊢ ∨2): from Γ ⊢ A (respectively Γ ⊢ B) derive Γ ⊢ A ∨ B.
• (⊥ ⊢): Γ, ⊥ ⊢ C.

• (⊢ ∀): from Γ ⊢ A derive Γ ⊢ ∀ξ.A.
• (∀ ⊢): from Γ, A[τ/ξ] ⊢ C derive Γ, ∀ξ.A ⊢ C.
• (∃ ⊢): from Γ, A ⊢ C derive Γ, ∃ξ.A ⊢ C.
• (⊢ ∃): from Γ ⊢ A[τ/ξ] derive Γ ⊢ ∃ξ.A.

In the ⊢ ∀-rule (respectively in the ∃ ⊢-rule), ξ must have no free occurrence in Γ (respectively in Γ, C). Alternatively, the rules for ∧ and ⊤ can be formulated as follows (multiplicative version):

• (⊢ ∧): from Γ ⊢ A and ∆ ⊢ B derive Γ, ∆ ⊢ A ∧ B.
• (∧ ⊢): from Γ, A, B ⊢ C derive Γ, A ∧ B ⊢ C.
• (⊢ ⊤): ⊢ ⊤.

The translation A ↦ A∗ from Intuitionistic Logic into Linear Logic is defined by

α∗ = α,   (A ⇒ B)∗ = !A∗ ⊸ B∗,   (A ∧ B)∗ = A∗ & B∗,   (A ∨ B)∗ = !A∗ ⊕ !B∗,
⊤∗ = ⊤,   ⊥∗ = 0,   (∀ξ.A)∗ = ∀ξ.A∗,   (∃ξ.A)∗ = ∃ξ.!A∗.

A sequent Γ ⊢ C is provable in Intuitionistic Logic if and only if its translation !Γ∗ ⊢ C∗ is provable in Linear Logic. The only if direction is easy. Conversely, it is clear that Γ ⊢ C is provable in Intuitionistic Logic whenever its translation !Γ∗ ⊢ C∗ is provable in Intuitionistic Linear Logic: It suffices to consider the obvious translation A ↦ A∗ from Intuitionistic Linear Logic into Intuitionistic Logic, defined by

α∗ = α,   (A ⊸ B)∗ = A∗ ⇒ B∗,   (A ⊗ B)∗ = (A & B)∗ = A∗ ∧ B∗,   (A ⊕ B)∗ = A∗ ∨ B∗,
1∗ = ⊤∗ = ⊤,   0∗ = ⊥,   (!A)∗ = A∗,   (∀ξ.A)∗ = ∀ξ.A∗,   (∃ξ.A)∗ = ∃ξ.A∗.

But since Linear Logic is not a conservative extension of Intuitionistic Linear Logic, it is more difficult to show that Γ ⊢ C is provable in Intuitionistic Logic whenever its translation is provable in Linear Logic (H. Schellinx).

The above translation is sometimes called the call-by-name translation. There is also a call-by-value translation, defined by

α∗ = !α,   (A ⇒ B)∗ = !(A∗ ⊸ B∗),   (A ∧ B)∗ = A∗ ⊗ B∗,   (A ∨ B)∗ = A∗ ⊕ B∗,
⊤∗ = 1,   ⊥∗ = 0,   (∀ξ.A)∗ = !∀ξ.A∗,   (∃ξ.A)∗ = ∃ξ.A∗.

In that case, a proof of Γ ⊢ C is translated into a proof of Γ∗ ⊢ C∗, using the fact that A∗ is positive for any A.

12 Classical Logic

Formulas are built from atoms (α, ¬α, or more generally, α(t1, . . . , tn), ¬α(t1, . . . , tn)) and units (⊤, ⊥) using binary connectives (∧, ∨) and quantifiers (∀, ∃). Negation is defined on atoms and extended to all formulas by De Morgan equations. Implication is defined by A ⇒ B = ¬A ∨ B. Sequents are of the form ⊢ Γ where Γ is a sequence of formulas. The rules are the following:

• Exchange (x): from ⊢ Γ, A, B, ∆ derive ⊢ Γ, B, A, ∆.
• Contraction (c): from ⊢ A, A, Γ derive ⊢ A, Γ.
• Weakening (w): from ⊢ Γ derive ⊢ A, Γ.
• Identity (id): ⊢ A, ¬A.
• Cut: from ⊢ A, Γ and ⊢ ¬A, ∆ derive ⊢ Γ, ∆.

• (∧): from ⊢ A, Γ and ⊢ B, Γ derive ⊢ A ∧ B, Γ.
• (∨1, ∨2): from ⊢ A, Γ (respectively ⊢ B, Γ) derive ⊢ A ∨ B, Γ.
• (⊤): ⊢ ⊤, Γ.

• (∀): from ⊢ A, Γ derive ⊢ ∀ξ.A, Γ.
• (∃): from ⊢ A[τ/ξ], Γ derive ⊢ ∃ξ.A, Γ.

In the ∀-rule, ξ must have no free occurrence in Γ. Alternatively, the rules for ∧, ∨, and ⊤ can be formulated as follows (multiplicative version):

• (∧): from ⊢ A, Γ and ⊢ B, ∆ derive ⊢ A ∧ B, Γ, ∆.
• (∨): from ⊢ A, B, Γ derive ⊢ A ∨ B, Γ.
• (⊤): ⊢ ⊤.

A translation A ↦ A∗ of (cut-free) Classical Logic into Linear Logic is given by

α∗ = α,   (¬α)∗ = α⊥,   (A ∧ B)∗ = ?A∗ ⊗ ?B∗,   (A ∨ B)∗ = A∗ ⊕ B∗,
⊤∗ = 1,   ⊥∗ = 0,   (∀ξ.A)∗ = ∀ξ.?A∗,   (∃ξ.A)∗ = ∃ξ.A∗.

A variant is given by (A ∧ B)∗ = ?A∗ & ?B∗ and ⊤∗ = ⊤. In both cases, a sequent ⊢ Γ is provable in Classical Logic if and only if its translation ⊢ ?Γ∗ is provable in Linear Logic. The only if direction is easy: It suffices to consider cut-free proofs. Conversely, it is clear that ⊢ Γ is provable in Classical Logic whenever its translation ⊢ ?Γ∗ is provable in Linear Logic: It suffices to consider the obvious translation A ↦ A∗ from Linear Logic into Classical Logic, defined by

α∗ = α,   (α⊥)∗ = ¬α,   (A ⊗ B)∗ = (A & B)∗ = A∗ ∧ B∗,   (A ⅋ B)∗ = (A ⊕ B)∗ = A∗ ∨ B∗,
1∗ = ⊤∗ = ⊤,   ⊥∗ = 0∗ = ⊥,   (!A)∗ = (?A)∗ = A∗,   (∀ξ.A)∗ = ∀ξ.A∗,   (∃ξ.A)∗ = ∃ξ.A∗.

An alternative translation of (cut-free) Classical Logic into Linear Logic is given by

α∗ = ?α,   (¬α)∗ = ?α⊥,   (A ∧ B)∗ = A∗ & B∗,   (A ∨ B)∗ = A∗ ⅋ B∗,
⊤∗ = ⊤,   ⊥∗ = ⊥,   (∀ξ.A)∗ = ∀ξ.A∗,   (∃ξ.A)∗ = ?∃ξ.A∗.

In that case, a cut-free proof of ⊢ Γ is translated into a proof of ⊢ Γ∗, using the fact that A∗ is negative for any A.

A translation of (full) Classical Logic into Linear Logic is given by

α∗ = ?!α,   (¬α)∗ = ?!?α⊥,   (A ∧ B)∗ = ?(!A∗ ⊗ !B∗),   (A ∨ B)∗ = A∗ ⅋ B∗,
⊤∗ = ?1,   ⊥∗ = ⊥,   (∀ξ.A)∗ = ?!∀ξ.A∗,   (∃ξ.A)∗ = ?∃ξ.!A∗.

In that case, a proof of ⊢ Γ is translated into a proof of ⊢ Γ∗, using the fact that A∗ is regular and that (¬A)∗ ≡ ?A∗⊥ for any A.

13 Invariants

Here, we consider propositional formulas in the multiplicative fragment of Linear Logic, built from a set P of propositional variables. If Γ is a sequent, [Γ]⊗ stands for the number of occurrences of ⊗ in Γ. Similarly, we define [Γ]⅋, [Γ]1, [Γ]⊥, [Γ]α, and [Γ]α⊥. Note that the following equation holds for any sequent of length n:

[Γ]⊗ + [Γ]⅋ + n = [Γ]1 + [Γ]⊥ + ∑α∈P ([Γ]α + [Γ]α⊥).

Furthermore, a provable sequent satisfies the following equations:

[Γ]α = [Γ]α⊥ (for each α ∈ P),   [Γ]⅋ + n = [Γ]⊥ + ∑α∈P [Γ]α + 1,   [Γ]⅋ + [Γ]1 + n = [Γ]⊗ + [Γ]⊥ + 2.

This is easily checked by induction on proofs. Note that the last equation is a consequence of the previous ones. In the case of a provable formula, we get:

[A]α = [A]α⊥ (for each α ∈ P),   [A]⅋ = [A]⊥ + ∑α∈P [A]α,   [A]⅋ + [A]1 = [A]⊗ + [A]⊥ + 1.

Those conditions are not sufficient: Take for instance A = (1 ⅋ 1) ⊗ ⊥, or A = (α ⅋ α) ⊗ (α⊥ ⅋ α⊥).
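Here is a hedged Python sketch (not part of the original notes) of these counts and of the necessary conditions for a formula, using the same ad hoc constructors as before; the final example is the formula (1 ⅋ 1) ⊗ ⊥, which passes the test although it is not provable.

# A minimal sketch (not from the original notes): compute the counts [A]⊗, [A]⅋,
# [A]1, [A]⊥, [A]α, [A]α⊥ of a multiplicative formula and test the necessary
# conditions stated above (case n = 1).
from collections import Counter

def counts(f, c=None):
    c = Counter() if c is None else c
    tag = f[0]
    if tag == 'atom':                      # ('atom', name, positive?)
        c[('atom', f[1], f[2])] += 1
    elif tag in ('one', 'bot'):
        c[tag] += 1
    else:                                  # ('tensor', A, B) or ('par', A, B)
        c[tag] += 1
        counts(f[1], c)
        counts(f[2], c)
    return c

def necessary_conditions(A):
    """The equations satisfied by every provable multiplicative formula."""
    c = counts(A)
    names = {k[1] for k in c if isinstance(k, tuple)}
    balanced = all(c[('atom', n, True)] == c[('atom', n, False)] for n in names)
    atoms = sum(c[('atom', n, True)] for n in names)
    return (balanced
            and c['par'] == c['bot'] + atoms
            and c['par'] + c['one'] == c['tensor'] + c['bot'] + 1)

# (1 ⅋ 1) ⊗ ⊥ satisfies the conditions but is not provable:
print(necessary_conditions(('tensor', ('par', ('one',), ('one',)), ('bot',))))  # True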

About exponential rules

If A is a formula, A(n) stands for the sequent A, . . . , A of length n. The following rules are derivable (weak promotion, digging, absorption, and multiplexing):

• Weak promotion: from ⊢ A, Γ derive ⊢ !A, ?Γ.
• Digging: from ⊢ ??A, Γ derive ⊢ ?A, Γ.
• Absorption: from ⊢ ?A, A, Γ derive ⊢ ?A, Γ.
• Multiplexing: from ⊢ A(n), Γ derive ⊢ ?A, Γ.

Furthermore, promotion is derivable from weak promotion and digging.
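For instance, for a one-formula context ?B, promotion is recovered as follows (a sketch with the same LaTeX package assumptions as above); the general case repeats digging once for each formula of the context.

% A small sketch, assuming bussproofs; weak promotion followed by digging.
\begin{prooftree}
  \AxiomC{$\vdash A,\, ?B$}
  \RightLabel{weak promotion}
  \UnaryInfC{$\vdash\, !A,\, ??B$}
  \RightLabel{digging}
  \UnaryInfC{$\vdash\, !A,\, ?B$}
\end{prooftree}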

Multiplexing is invertible in certain circumstances: A sequent ⊢ ?A, Γ with no occurrence of &, !, or second order ∃ is provable if and only if ⊢ A(n), Γ is provable for some n. This is easily checked by induction on cut-free proofs. To see that this does not hold in general, take for instance A = α⊥ and Γ = α & 1, or Γ = !α. Similarly, a sequent ⊢ ?A, Γ with no occurrence of ! or second order ∃ is provable if and only if ⊢ (A ⊕ ⊥)(n), Γ is provable for some n.

If A is a formula, !nA stands for the formula (A & 1) ⊗ · · · ⊗ (A & 1) (n times) and ?nA for the formula (A ⊕ ⊥) ⅋ · · · ⅋ (A ⊕ ⊥) (n times). The latter result can be generalized as follows: Consider a provable sequent with p occurrences of !, q occurrences of ?, and no second order ∃. Then, for all m1, . . . , mp ∈ N, there are n1, . . . , nq ∈ N such that the sequent obtained by replacing the p occurrences of ! by !m1, . . . , !mp and the q occurrences of ? by ?n1, . . . , ?nq is provable (approximation theorem).

Note that the following rule is not admissible: from C ⊢ A, C ⊢ C ⊗ C, and C ⊢ 1, derive C ⊢ !A.

For instance, if A = C = α ⊗ !(α ⊸ α ⊗ α) ⊗ !(α ⊸ 1), the three premises are provable but not the conclusion.

14 Phase spaces

If M is a (multiplicative) monoid and X, Y ⊂ M, we write XY for the set {xy | x ∈ X and y ∈ Y} and X ⊸ Y for the set {z ∈ M | xz ∈ Y for all x ∈ X}.

A phase space is a pair (M, ⊥M) where M is a commutative (multiplicative) monoid and ⊥M ⊂ M. If X ⊂ M, we write X⊥ for X ⊸ ⊥M. It is easy to prove the following properties:

X ⊂ Y⊥ if and only if XY ⊂ ⊥M,   XX⊥ ⊂ ⊥M,   if X ⊂ Y then Y⊥ ⊂ X⊥,   X ⊂ X⊥⊥,

X⊥⊥⊥ = X⊥,   (X⊥⊥Y)⊥ = (XY)⊥ = X ⊸ Y⊥,   (X⊥⊥ ∪ Y)⊥ = (X ∪ Y)⊥ = X⊥ ∩ Y⊥.

A fact is an X ⊂ M such that X = X⊥⊥, or equivalently, X = Y⊥ for some Y ⊂ M. For instance, ⊥M = {1}⊥ is a fact, as well as 1M = (⊥M)⊥ = {1}⊥⊥, ⊤M = M = ∅⊥, 0M = (⊤M)⊥ = M⊥ = ∅⊥⊥. Note also that:

• if X ⊂ M and Y is a fact, then X ⊸ Y = (XY⊥)⊥ is a fact;
• if (Xi)i∈I is a family of facts, then ∩i∈I Xi = (∪i∈I Xi⊥)⊥ is a fact;
• if X ⊂ M, then X⊥⊥ is the smallest fact containing X.

We write x ⊑ y if {y}⊥ ⊂ {x}⊥, or equivalently, {x}⊥⊥ ⊂ {y}⊥⊥. We write x ≡ y if x ⊑ y and y ⊑ x, or equivalently, {x}⊥ = {y}⊥.

We define the following operations on facts:

X ⅋ Y = X⊥ ⊸ Y = (X⊥Y⊥)⊥,   X ⊗ Y = (X⊥ ⅋ Y⊥)⊥ = (X ⊸ Y⊥)⊥ = (XY)⊥⊥,
X & Y = X ∩ Y = (X⊥ ∪ Y⊥)⊥,   X ⊕ Y = (X⊥ & Y⊥)⊥ = (X⊥ ∩ Y⊥)⊥ = (X ∪ Y)⊥⊥,

?X = (X⊥ ∩ IM)⊥,   !X = (?X⊥)⊥ = (X ∩ IM)⊥⊥,

where IM = {x ∈ 1M | x = x²}. In the last two definitions, IM may be replaced by any submonoid KM of JM = {x ∈ 1M | x ≡ x²} = {x ∈ M | x ⊑ x² and x ⊑ 1}. Such a KM is called an exponential structure (Y. Lafont). For instance:

KM = {1} (trivial structure), KM = IM (standard structure), KM = JM (full structure).

Anyway, the notion of topolinear space is definitively obsolete.
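To make these definitions concrete, here is a hedged Python sketch (not part of the original notes) of a small phase space built on the additive monoid Z/kZ, with the operations on facts defined above; exponential structures and the choice of KM are omitted.

# A minimal sketch (not from the original notes): a finite phase space on the
# additive monoid Z/kZ, with the operations on facts defined above.
# Subsets of M are Python frozensets.
from itertools import product

class PhaseSpace:
    def __init__(self, k, bottom):
        self.k = k
        self.M = frozenset(range(k))                 # carrier of Z/kZ, unit is 0
        self.bottom = frozenset(bottom)              # the distinguished subset ⊥

    def dual(self, X):
        """X⊥ = {z | xz ∈ ⊥ for all x ∈ X} (the monoid is written additively)."""
        return frozenset(z for z in self.M
                         if all((x + z) % self.k in self.bottom for x in X))

    def tensor(self, X, Y):
        """X ⊗ Y = (XY)⊥⊥."""
        XY = frozenset((x + y) % self.k for x, y in product(X, Y))
        return self.dual(self.dual(XY))

    def par(self, X, Y):
        """X ⅋ Y = (X⊥ Y⊥)⊥."""
        return self.dual(self.tensor(self.dual(X), self.dual(Y)))

    def with_(self, X, Y):
        """X & Y = X ∩ Y."""
        return X & Y

    def plus(self, X, Y):
        """X ⊕ Y = (X ∪ Y)⊥⊥."""
        return self.dual(self.dual(X | Y))

    def one(self):
        """1 = {e}⊥⊥ where e is the monoid unit."""
        return self.dual(self.dual(frozenset({0})))

    def holds(self, X):
        """A fact holds when it contains the monoid unit."""
        return 0 in X

# M = Z/3Z with ⊥ = {0}: every X⊥ is a fact, closed under double dual.
P = PhaseSpace(3, {0})
X = P.dual(frozenset({1}))
print(X, P.dual(P.dual(X)) == X, P.holds(P.par(X, P.dual(X))))

For instance, with k = 3 and ⊥M = {0}, the final line checks that X⊥⊥ = X for a dual set X and that X ⅋ X⊥ holds, as soundness requires for the provable formula A ⅋ A⊥.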

15 Phase models

Here we consider propositional formulas built from a set P of propositional variables. A phase model is a phase space (possibly with an exponential structure) together with a fact αM for each α ∈ P. The interpretation, which is already defined for units and propositional variables, is extended to all formulas in the obvious way:

(α⊥)M = (αM)⊥,   (A ⅋ B)M = AM ⅋ BM,   (A ⊗ B)M = AM ⊗ BM,
(A & B)M = AM & BM,   (A ⊕ B)M = AM ⊕ BM,   (?A)M = ?AM,   (!A)M = !AM.

Since AM is always a fact, we also get (A⊥)M = (AM)⊥ and (A ⊸ B)M = (A⊥)M ⅋ BM = AM ⊸ BM. We say that A holds in the model if 1 ∈ AM. Note that the formula A ⊸ B holds if and only if AM ⊂ BM. More generally, if Γ is a sequence of formulas A1, . . . , An, we write ΓM for (A1 ⅋ · · · ⅋ An)M, and we say that the sequent ⊢ Γ holds in the model if 1 ∈ ΓM, or equivalently, (A1⊥)M · · · (An⊥)M ⊂ ⊥M. By induction on proofs, it is easy to see that if a sequent is provable, then it holds in any phase model (soundness).

Here are some basic examples:

• If ⊥M = ∅, then ⊥M and ⊤M are the only facts, and we get a Boolean model. In that case, soundness means that any provable formula is classically valid.

• Let α ∈ P. If M = Z with addition, ⊥M = {0}, αM = {1}, and βM = {0} for each β ∈ P \ {α}, then for any multiplicative formula A, the set AM consists of the single element [A]α − [A]α⊥. In that case, soundness means that [A]α = [A]α⊥ whenever A is provable.

• If M = Z with addition, ⊥M = {1}, and αM = {1} for each α ∈ P, then for any multiplicative formula A, the set AM consists of the single element [A]⊥ + ∑α∈P [A]α − [A]⅋. In that case, soundness means that [A]⅋ = [A]⊥ + ∑α∈P [A]α whenever A is provable.

A logical congruence ∼ on a phase model M is a congruence such that ⊥M is closed for ∼: if x ∈ ⊥M and x ∼ y, then y ∈ ⊥M. For instance, = is the finest logical congruence and ≡ is the coarsest one. If ∼ is a logical congruence, then all facts are closed for ∼, and the canonical projection π : M ↦ M/∼ induces a structure of phase model on the quotient monoid M/∼:

⊥M/∼ = π(⊥M ), αM/∼ = π(αM ) for each α ∈ P, KM/∼ = π(KM ).

It is easy to see that AM/∼ = π(AM ) for any formula A. In particular, A holds in M if and only if it holds in M/∼. Note that if KM is the standard exponential structure on M, then π(KM ) is not necessarily the standard exponential structure on M/∼.

16 Syntactical model

The syntactical model is defined as follows:
• M is the free commutative monoid generated by all formulas. In other words, M is the set of all sequents, considered as finite multisets, with multiset union;

• ⊥M is the set of all cut-free provable sequents, and αM = {α}⊥ for each α ∈ P. In other words, αM = {Γ ∈ M | ⊢ α, Γ is cut-free provable};
• KM is the set of all sequents of the form ?Γ.

By induction on formulas, AM ⊂ {A}⊥ for any A (M. Okada). In particular, if A holds in the syntactical model, then the empty sequent belongs to {A}⊥, which means that A is (cut-free) provable (completeness).

Note the following points:
• By cut elimination, ⊥M is the set of all provable sequents. However, it is essential to consider cut-free proofs if one wants to use this model for proving cut elimination.

• In fact, AM = {A}⊥ for any A, but in the cut-free version of the syntactical model, only the inclusion can be proved directly, and this is enough for proving completeness and cut elimination. • M can be replaced by the free commutative monoid generated by all subformulas of A. Therefore, completeness holds for phase models whose underlying monoid is finitely generated.

• M can also be replaced by M/∼, where ∼ is the smallest congruence such that Γ, Γ ∼ Γ for any Γ ∈ KM , so that π(KM ) = IM/∼. Therefore, completeness holds for phase models with the standard exponential structure.

• Completeness does not hold for phase models with the trivial exponential structure: For instance, α ⊕ (!α ⊸ 0) holds in any such phase model, but it is not provable.

Finite models

We say that a model M is finite if it has finitely many facts. In that case, M/≡ is a finite monoid and satisfies the same formulas as M. We say that a fragment of Linear Logic satisfies the finite model property if any formula of this fragment that holds in all finite models is provable. It is easy to see that such a fragment is decidable.

The multiplicative additive fragment satisfies the finite model property (Y. Lafont). To see that, it suffices to consider a finite version of the syntactical model. For instance:
• M is the free commutative monoid generated by all subformulas of A;
• ⊥M = {Γ ∈ M | ⊢ Γ is cut-free provable or |Γ| > |A|}, where |Γ| = [Γ]⅋ + n for any sequent Γ of length n, and αM = {α}⊥ for each α ∈ P.

The multiplicative exponential fragment does not satisfy the finite model property: For instance, the formula !α ⊗ !(α ⊗ β) ⊗ !(α ⊗ β ⊸ 1) ⊸ β holds in any finite model, but it is not provable.

We say that a model M is affine if the set ⊥M is an ideal of M, that is ⊥M M ⊂ ⊥M. This amounts to saying that ⊥M = 0M, or equivalently, 1M = ⊤M. In that case, any fact is an ideal. There is a completeness theorem for Affine Linear Logic with respect to affine models, and Affine Linear Logic satisfies a finite model property. This comes from the fact that if M is a finitely generated free commutative monoid, then all its ideals are finitely generated. Consequently, Affine Linear Logic is decidable (A. Kopylov).
