<<

Kleene Algebra with Observations Tobias Kappé University College London [email protected] Paul Brunet University College London Jurriaan Rot University College London Radboud University, Nijmegen Alexandra Silva University College London Jana Wagemaker University College London Fabio Zanasi University College London

Abstract Kleene algebra with tests (KAT) is an algebraic framework for reasoning about the control flow of sequential programs. Generalising KAT to reason about concurrent programs is not straightforward, because axioms native to KAT in conjunction with expected axioms for concurrency lead to an anomalous equation. In this paper, we propose Kleene algebra with observations (KAO), a variant of KAT, as an alternative foundation for extending KAT to a concurrent setting. We characterise the free model of KAO, and establish a decision procedure w.r.t. its equational theory.

2012 ACM Subject Classification Theory of computation → Formal languages and automata theory

Keywords and phrases Concurrent Kleene algebra, Kleene algebra with tests, free model, axiomatisation, decision procedure

Digital Object Identifier 10.4230/LIPIcs.CONCUR.2019.41

Related Version A full version of the paper is available at https://arxiv.org/abs/1811.10401

Funding ERC Starting Grant ProFoundNet (679127). Jurriaan Rot: Marie Curie Fellowship (795119). Alexandra Silva: Leverhulme Prize (PLP–2016–129). Fabio Zanasi: EPSRC grant (EP/R020604/1).

Acknowledgements We are grateful to Jean-Baptiste Jeannin, Dan Frumin and Damien Pous individually, for their input which helped contextualise this work.

1 Introduction

The axioms of Kleene algebra (KA) [22, 9] correspond well to program composition [14], making them a valuable tool for studying equivalences between programs from an algebraic perspective. An extension of Kleene algebra known as Kleene algebra with tests (KAT) [24] adds primitives for conditional branching, and is particularly useful when proving validity of program transformations, such as optimisations applied by a compiler [28, 41]. As a matter of fact, KAT is sufficiently abstract to express not only program behaviour, but also program specifications; consequently, its laws can be used to compare programs to specifications [25, 1]. What makes this connection especially powerful is that KA (resp. KAT) is known to be sound and complete with respect to a language model [5, 30, 23, 29], meaning that an equation is valid in any KA (resp. KAT) precisely when it holds in the corresponding language model. Practical algorithms for deciding language equivalence [17, 6, 34] enable checking equations in KA or KAT, and hence automated verification becomes feasible [10].

More recently, Kleene algebra has been extended with a parallel composition operator, yielding concurrent Kleene algebra (CKA) [15, 13, 16]. Crucially, CKA includes the exchange law, which encodes interleaving, i.e., the (partial) sequentialisation of threads. Like its predecessors, CKA can be applied to verify (concurrent) programs by reasoning about equivalences [16]. The equational theory of CKA has also been characterised in terms of a language-like semantics [32, 21], where equivalence is known to be decidable [7]. Since both KAT and CKA are conservative extensions of KA, this prompted Jipsen and Moshier [19] to study a marriage between the two, dubbed concurrent Kleene algebra with tests (CKAT). The aim of CKAT is to extend CKA with Boolean guards, and thus arrive at a new algebraic perspective on verification of concurrent programs with conditional branching.

The starting point of this paper is the realisation that CKAT is not a suitable model of concurrent programs. This is because for any test p and CKAT-term e, one can prove

p · e · p̄ ≡CKAT 0, an equation that appears to have no reasonable interpretation for programs. The derivation goes as follows:

0 ≤KAT p · e · p̄ ≤CKA e ∥ (p · p̄) ≡KAT e ∥ 0 ≡CKA 0 .

As we shall see, this is possible because of the interplay between the exchange law and the fact that KAT identifies conjunction of tests with their sequential composition. For sequential programs, this identification is perfectly reasonable. In the context of concurrency with interleaving, however, actions from another thread may be scheduled in between two sequentially composed tests, whereas the conjunction of tests executes atomically. Indeed, an action scheduled between the tests might very well change the result of the second test.

It thus appears that, to reason algebraically about programs with both tests and concurrency, one needs a perspective on conditional branching where the conjunction of two tests is not necessarily the same as executing one test after the other. The remit of this paper is to propose an alternative to KAT, which we call Kleene algebra with observations (KAO), that makes exactly this distinction. We claim that, because of this change, KAO is more amenable to a sensible extension with primitives for concurrency.

Establishing the meta-theory of KAO turns out to be a technically demanding task. We therefore devote this paper to such foundations, and leave development of concurrent KAO to follow-up work. Concretely, we characterise the equational theory of KAO in terms of a language model (Section 5). Furthermore, we show that we can decide equality of these languages (and hence the equational theory of KAO) by deciding language equivalence of non-deterministic finite automata (Section 6). Both proofs show a clear separation of concerns: their kernel is idiomatic to KAO, and some well-known results from KA complete the argument. For space reasons, detailed proofs are only included in the full version of this paper [20]; here, we sketch the main insights needed to prove the core propositions and theorems.

2 Preliminaries

We start by outlining some concepts and elementary results.

Boolean algebra We use 2 to denote the set {0, 1}. The powerset (i.e., the set of subsets) of a set S is denoted 2^S. We fix a finite set Ω of symbols called observables.

The propositional terms over Ω, denoted TP , are generated by the grammar

p, q ::= ⊥ | ⊤ | o ∈ Ω | p ∨ q | p ∧ q | p̄ .

We write ≡BA for the smallest congruence on TP that satisfies the axioms of Boolean algebra, i.e., such that for all p, q, r ∈ TP the following hold:

p ∨ ⊥ ≡BA p    p ∨ q ≡BA q ∨ p    p ∨ p̄ ≡BA ⊤    p ∨ (q ∨ r) ≡BA (p ∨ q) ∨ r

p ∧ ⊤ ≡BA p    p ∧ q ≡BA q ∧ p    p ∧ p̄ ≡BA ⊥    p ∧ (q ∧ r) ≡BA (p ∧ q) ∧ r

p ∨ (q ∧ r) ≡BA (p ∨ q) ∧ (p ∨ r)    p ∧ (q ∨ r) ≡BA (p ∧ q) ∨ (p ∧ r) .

The set of atoms, denoted A, is defined as 2^Ω. The semantics of propositional terms is given by the map ⟦−⟧BA : TP → 2^A, as follows:

⟦⊥⟧BA = ∅    ⟦o⟧BA = {α ∈ A : o ∈ α}    ⟦p ∨ q⟧BA = ⟦p⟧BA ∪ ⟦q⟧BA
⟦⊤⟧BA = A    ⟦p̄⟧BA = A \ ⟦p⟧BA    ⟦p ∧ q⟧BA = ⟦p⟧BA ∩ ⟦q⟧BA .

We also write p ≤BA q as a shorthand for p ∨ q ≡BA q.

It is known that ⟦−⟧BA characterises ≡BA (cf. [4, Chapter 5.9]), in the following sense:

▶ Theorem 2.1 (Completeness for BA). Let p, q ∈ TP; now p ≡BA q if and only if ⟦p⟧BA = ⟦q⟧BA.

When α ∈ A, we write πα for the Boolean term ⋀_{o∈α} o ∧ ⋀_{o∈Ω\α} ō, in which ⋀ is the obvious generalisation of ∧ for some (arbitrary) choice of bracketing and order on terms. The following is then straightforward to prove.

▶ Lemma 2.2. For all α ⊆ Ω it holds that ⟦πα⟧BA = {α}.
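To make these preliminaries concrete, here is a small Python sketch of atoms and the semantic map ⟦−⟧BA; by Theorem 2.1, comparing the resulting atom sets decides ≡BA. The tuple encoding of propositional terms and the helper names (`atoms`, `sat`, `sem_ba`, `equiv_ba`) are illustrative choices made for this sketch, not notation from the paper.

```python
from itertools import combinations

def atoms(omega):
    """All atoms, i.e. all subsets of the set omega of observables (A = 2^Omega)."""
    elems = list(omega)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

# Propositional terms as nested tuples:
#   ("bot",), ("top",), ("var", o), ("neg", p), ("or", p, q), ("and", p, q)
def sat(p, alpha):
    """Decide alpha ∈ ⟦p⟧BA, following the semantic clauses above."""
    tag = p[0]
    if tag == "bot":
        return False
    if tag == "top":
        return True
    if tag == "var":
        return p[1] in alpha
    if tag == "neg":
        return not sat(p[1], alpha)
    if tag == "or":
        return sat(p[1], alpha) or sat(p[2], alpha)
    return sat(p[1], alpha) and sat(p[2], alpha)   # tag == "and"

def sem_ba(p, omega):
    """⟦p⟧BA as a set of atoms over omega."""
    return {alpha for alpha in atoms(omega) if sat(p, alpha)}

def equiv_ba(p, q, omega):
    """By Theorem 2.1, p ≡BA q iff their atom sets coincide."""
    return sem_ba(p, omega) == sem_ba(q, omega)
```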

Kleene algebra A word over a set ∆ is a sequence of symbols d0 ··· dn−1 from ∆. The empty word is denoted ε. A set of words is called a language. Words can be concatenated: if w and x are words, then wx is the word where the symbols of w precede those of x. If L and L′ are languages over ∆, then L · L′ is the language of pairwise concatenations from L and L′, i.e., {wx : w ∈ L, x ∈ L′}. We write L* for the Kleene star of L, which is the set {w0 ··· wn−1 : n ∈ N, w0, . . . , wn−1 ∈ L}. This makes ∆* the set of all words over ∆. We fix a finite set of symbols Σ called the alphabet. The rational terms over Σ, denoted TR, are generated by the grammar

e, f ::= 0 | 1 | a ∈ Σ | e + f | e · f | e* .

We write ≡KA for the smallest congruence on TR that satisfies the axioms of Kleene algebra, i.e., such that for all e, f, g ∈ TR the following hold:

e + 0 ≡KA e    e + e ≡KA e    e + f ≡KA f + e    e + (f + g) ≡KA (e + f) + g

e · 1 ≡KA e    e ≡KA 1 · e    e · 0 ≡KA 0    0 ≡KA 0 · e    e · (f · g) ≡KA (e · f) · g

e · (f + g) ≡KA e · f + e · g    1 + e · e* ≡KA e*    e + f · g ≤KA g =⇒ f* · e ≤KA g

(e + f) · g ≡KA e · g + f · g    1 + e* · e ≡KA e*    e + f · g ≤KA f =⇒ e · g* ≤KA f ,

in which e ≤KA f is a shorthand for e + f ≡KA f.

The semantics of rational terms is given by the map ⟦−⟧KA : TR → 2^(Σ*), in the following sense:

⟦0⟧KA = ∅    ⟦a⟧KA = {a}    ⟦e + f⟧KA = ⟦e⟧KA ∪ ⟦f⟧KA
⟦1⟧KA = {ε}    ⟦e*⟧KA = (⟦e⟧KA)*    ⟦e · f⟧KA = ⟦e⟧KA · ⟦f⟧KA .

It is furthermore known that ⟦−⟧KA characterises ≡KA [5, 30, 23], as follows:

▶ Theorem 2.3 (Completeness for KA). Let e, f ∈ TR; now e ≡KA f if and only if ⟦e⟧KA = ⟦f⟧KA.

We also work with matrices and vectors of rational terms. Let Q be a finite set. A Q-vector is a function x : Q → TR; a Q-matrix is a function M : Q × Q → TR. Let e ∈ TR, let x and y be Q-vectors, and let M be a Q-matrix. Addition of vectors is defined pointwise, i.e., x + y is the Q-vector given by (x + y)(q) = x(q) + y(q). We can also scale vectors, writing x · e for the Q-vector given by (x · e)(q) = x(q) · e. Multiplication of a Q-vector by a Q-matrix yields a Q-vector, as expected:

(M · x)(q) = Σ_{q′∈Q} M(q, q′) · x(q′) .

Here, Σ is the usual generalisation of + for some (arbitrary) choice of bracketing and order on terms; the empty sum is defined to be 0.

We write x ≡KA y when x and y are pointwise equivalent, i.e., for all q ∈ Q we have

x(q) ≡KA y(q); we extend ≤KA to Q-vectors as before. Matrices over rational terms again obey the axioms of Kleene algebra [23]. As a special case, we can obtain the following:

▶ Lemma 2.4. Let M be a Q-matrix. Using the entries of M and applying the operators of Kleene algebra, we can construct a Q-matrix M*, which has the following property. Let y be any Q-vector; now M* · y is the least (w.r.t. ≤KA) Q-vector x such that M · x + y ≤KA x.
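As an illustration of Lemma 2.4, the following sketch computes a syntactic M* for a matrix of rational terms by the standard state-elimination construction (Kleene's algorithm), which can be justified in any Kleene algebra [9, 23]. Terms are kept as plain strings with lightly simplifying smart constructors; all function names here are assumptions of this sketch.

```python
def r_plus(e, f):
    return f if e == "0" else e if f == "0" else f"({e} + {f})"

def r_dot(e, f):
    if e == "0" or f == "0":
        return "0"
    return f if e == "1" else e if f == "1" else f"({e} · {f})"

def r_star(e):
    return "1" if e in ("0", "1") else f"({e})*"

def mat_star(M, Q):
    """Compute a matrix M* over rational terms (as strings) by eliminating
    one state at a time; M* · y is then the least solution of M · x + y ≤ x."""
    # Seed the diagonal with 1 (the empty path) and copy the entries of M.
    W = {(i, j): r_plus(M[(i, j)], "1" if i == j else "0") for i in Q for j in Q}
    for k in Q:
        loop = r_star(W[(k, k)])
        # Paths through k: go to k, loop at k, then continue to j.
        W = {(i, j): r_plus(W[(i, j)], r_dot(W[(i, k)], r_dot(loop, W[(k, j)])))
             for i in Q for j in Q}
    return W
```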

Automata and bisimulations We briefly recall bisimulation up to congruence for language equivalence of automata, from [6]. This will be used in Section 6.

A non-deterministic automaton (NDA) over an alphabet Σ is a triple (X, o, d) where o : X → 2 is an output function, and d : X × Σ → 2^X a transition function. A non-deterministic finite automaton (NFA) is an NDA where X is finite. It will be convenient to characterise the semantics of an NDA (X, o, d) recursively as the unique map ℓ : X → 2^(Σ*) such that

ℓ(x) = {ε : o(x) = 1} ∪ ⋃_{a∈Σ, x′∈d(x,a)} {a} · ℓ(x′) .

This coincides with the standard definition of language acceptance for NDAs. The determinisation of an NDA (X, o, d) is the deterministic automaton (2^X, ō, d̄) [35], where

ō(V) = 1 if there exists s ∈ V s.t. o(s) = 1, and ō(V) = 0 otherwise;    d̄(V, a) = ⋃_{s∈V} d(s, a) .

The congruence closure of a relation R ⊆ 2^X × 2^X, denoted R^c, is the least equivalence relation such that R ⊆ R^c, and if (C, D) ∈ R^c and (E, F) ∈ R^c then (C ∪ E, D ∪ F) ∈ R^c. A bisimulation up to congruence for an NDA (X, o, d) is a relation R ⊆ 2^X × 2^X such that for all (V, W) ∈ R, we have ō(V) = ō(W) and furthermore for all a ∈ Σ, we have (d̄(V, a), d̄(W, a)) ∈ R^c. Bisimulations up to congruence give a proof technique for language equivalence of states of non-deterministic automata, as a consequence of the following [6].

▶ Theorem 2.5. Let (X, o, d) be an NDA. For all U, V ∈ 2^X, we have ⋃_{x∈U} ℓ(x) = ⋃_{x∈V} ℓ(x) if and only if there exists a bisimulation up to congruence R for (X, o, d) such that (U, V) ∈ R.

In [6], it is shown how the construction of bisimulations up to congruence leads to a very efficient algorithm for language equivalence.
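The following Python sketch renders the bisimulation-up-to-congruence technique of [6] in executable form: membership of a pair in the congruence closure of the pairs seen so far is tested by saturating both sides with the relation and comparing the results. The interface (the `out` and `trans` callbacks) and all names are assumptions of this sketch; see [6] for the actual algorithm and its optimisations.

```python
from collections import deque

def hkc_equiv(U, V, out, trans, alphabet):
    """Language equivalence of the state sets U and V of an NDA, by searching
    for a bisimulation up to congruence in the style of [6].  `out(x)` gives
    the output bit of a state, `trans(x, a)` the set d(x, a)."""
    def o_bar(S):                       # output of a set in the determinisation
        return any(out(x) for x in S)

    def d_bar(S, a):                    # transition of a set in the determinisation
        return frozenset(y for x in S for y in trans(x, a))

    def normalise(S, pairs):
        # Saturate S: whenever one side of a known pair is contained in S,
        # add the other side.  Two sets are in the congruence closure of
        # `pairs` exactly when they saturate to the same set.
        S = set(S)
        changed = True
        while changed:
            changed = False
            for A, B in pairs:
                if A <= S and not B <= S:
                    S |= B
                    changed = True
                if B <= S and not A <= S:
                    S |= A
                    changed = True
        return frozenset(S)

    R, todo = set(), deque([(frozenset(U), frozenset(V))])
    while todo:
        X, Y = todo.popleft()
        pairs = R | set(todo)
        if normalise(X, pairs) == normalise(Y, pairs):
            continue                    # already in the congruence closure
        if o_bar(X) != o_bar(Y):
            return False                # outputs differ, so languages differ
        for a in alphabet:
            todo.append((d_bar(X, a), d_bar(Y, a)))
        R.add((X, Y))
    return True
```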

3 The problem with CKAT

Our motivation for adjusting the axioms of KAT is based on the fact that conjunction and sequential composition are distinct when interleaving is involved, because sequential composition leaves a “gap” between tests that might be used by another thread, possibly changing the outcome of the second test. We now formalise this, by detailing how combining

KAT and CKA to obtain CKAT (as in [19]) leads to the absurd equation p · e · p̄ ≡CKAT 0.

Kleene algebra with tests To obtain KAT, we enrich rational terms with propositions; concretely, the set of guarded rational terms over Σ and Ω, denoted TGR, is generated by

e, f ::= 0 | 1 | a ∈ Σ | p ∈ TP | e + f | e · f | e* .

We define ≡KAT as the smallest congruence on TGR which contains ≡BA and obeys the axioms of ≡KA (e.g., e + e ≡KAT e). Furthermore, ≡KAT should relate constants and operators on propositional subterms:¹ for all p, q ∈ TP it holds that

⊥ ≡KAT 0    ⊤ ≡KAT 1    p ∨ q ≡KAT p + q    p ∧ q ≡KAT p · q .

Guarded rational terms relate to programs by viewing actions as statements and propositions as assertions. The last axiom is therefore not strange: if we assert x = 1 followed by y = 1 then surely, if x and y remain the same, this is equivalent to asserting x = 1 ∧ y = 1.

Concurrent Kleene algebra To obtain CKA, we add a parallel composition operator to KA. Concretely, the set of series-rational terms [33] over Σ, denoted TSR, is generated by

e, f ::= 0 | 1 | a ∈ Σ | e + f | e · f | e ∥ f | e* .

We define ≡CKA as the smallest congruence on TSR which obeys the axioms that generate ≡KA, as well as the following for all e, f, g, h ∈ TSR:

e ∥ f ≡CKA f ∥ e    e ∥ 1 ≡CKA e    e ∥ 0 ≡CKA 0    (e + f) ∥ g ≡CKA e ∥ g + f ∥ g

e ∥ (f ∥ g) ≡CKA (e ∥ f) ∥ g    (e ∥ f) · (g ∥ h) ≤CKA (e · g) ∥ (f · h) ,

in which e ≤CKA f is an abbreviation for e + f ≡CKA f. Here, 0 annihilates parallel composition because 0 corresponds to the program without valid traces; hence, running e in parallel with 0 also cannot yield any traces. Distributivity of ∥ over + witnesses that a choice within a thread can also be made outside that thread. The last axiom, called the exchange law [15], encodes interleaving. Intuitively, it says that when programs e · g and f · h run in parallel, their behaviour includes running their heads in parallel (i.e., e ∥ f) followed by their tails (i.e., g ∥ h). Taken to its extreme, the exchange law says that the behaviour of a program includes its complete linearisations.

¹ This is a slightly contrived definition of KAT; one usually presents the constants and operators using the same symbols [24]. To contrast KAO and KAT it is helpful to make this identification explicit.

Concurrent Kleene algebra with tests To define CKAT [19], we choose guarded series-rational terms over Σ and Ω, denoted TGSR, as those generated by the grammar

e, f ::= 0 | 1 | a ∈ Σ | p ∈ TP | e + f | e · f | e ∥ f | e* .

Now, ≡CKAT is the least congruence on TGSR obeying the axioms of ≡KAT and ≡CKA.

If we view guarded series-rational terms as representations of programs, and use ≡CKAT to reason about them, then we should expect to obtain sensible statements about programs, since the axioms that underpin CKAT seem reasonable in that context. To test this hypothesis, consider the guarded series-rational term p · e · p̄, which represents a program that first performs an assertion p, then runs a program described by e, and asserts the negation of p. There are choices of p and e that should make p · e · p̄ describe a program with valid behaviour; for instance, p could assert that x = 1, and e could be the program x ← 0. Hence, by our hypothesis, p · e · p̄ should not, in general, be equivalent to 0, the program without any valid behaviour. Unfortunately, the opposite is true.

▶ Antinomy 3.1. Let e ∈ TGSR and p ∈ TP; now p · e · p̄ ≡CKAT 0.

Proof. By the axiom that 1 is the unit of ∥ and the exchange law, we derive that

p · e · p̄ ≡CKAT (p ∥ 1) · (1 ∥ e) · p̄ ≤CKAT ((p · 1) ∥ (1 · e)) · p̄ .

By the same reasoning, and the axiom that 1 is the unit of ·, we have that

((p · 1) ∥ (1 · e)) · p̄ ≡CKAT (p ∥ e) · (p̄ ∥ 1) ≤CKAT (p · p̄) ∥ (e · 1) .

Applying the unit, and using the identification of ∧ and · on tests, we find that

(p · p̄) ∥ (e · 1) ≡CKAT (p · p̄) ∥ e ≡CKAT (p ∧ p̄) ∥ e ≡CKAT ⊥ ∥ e .

Since ⊥ and 0 are identified, and 0 annihilates parallel composition, we have that

⊥ ∥ e ≡CKAT 0 ∥ e ≡CKAT 0 .

By the above, we have p · e · p̄ ≤CKAT 0. Since 0 ≤CKAT p · e · p̄, the claim follows. ◀

Another way of contextualising the above is to consider that propositional Hoare logic can be encoded in KAT [26]. Concretely, a propositional Hoare triple {p}S{q} is valid if

p · eS · q̄ ≡KAT 0, where eS is a straightforward encoding of S. It stands to reason that sequential programs should similarly encode in CKAT. However, if we apply that line of reasoning to the above, then any Hoare triple of the form {p}S{p} is valid, which would mean that any property is an invariant of any program.

4 Kleene algebra with observations

We now propose KAO as an alternative way of embedding Boolean guards in rational terms. This approach should prevent Antinomy 3.1, and thus make KAO more suitable for an extension with concurrency. We start by motivating how we adapt KAT, before proceeding with a language model and a generalisation of partial derivatives to KAO.

4.1 From tests to observations

The root of the problem in Antinomy 3.1 is the axiom p · q ≡KAT p ∧ q, telling us that

(p · p̄) ∥ e ≡CKAT (p ∧ p̄) ∥ e. Interpreted as programs, these are different: in one, e can be interleaved between p and p̄, which the other does not allow. To fix this problem, we propose a new perspective on Boolean guards. Rather than considering a guard a test, which in KAT entails an assertion valid from the last action to the next, we consider a guard an observation, i.e., an assertion valid at that particular point in the execution of the program. This weakens the connection between conjunction and sequential composition: if a program observes p followed by q, there is no guarantee that p and q can be observed simultaneously. Hence, we drop the equivalence between conjunction and concatenation of tests. We call this system weak Kleene algebra with observations; like KAT, its axioms are based on the axioms of Boolean algebra and Kleene algebra, as follows.

▶ Definition 4.1 (WKAO axioms). We define ≡WKAO as the smallest congruence on TGR that contains ≡BA and also obeys the axioms of ≡KA. Furthermore, ⊥ ≡WKAO 0 and for all p, q ∈ TP it holds that p ∨ q ≡WKAO p + q.

Since conjunction and concatenation no longer coincide, we have also dropped the axiom that relates their units. This can be justified from our shift in perspective: ⊤, i.e., the observation that always succeeds, is an action that leaves some record in the behaviour of the program, whereas 1, i.e., the program that does nothing, has no such obligation.² As hinted before, it may happen that when a program observes p followed by q, both these assertions are true simultaneously, i.e., their conjunction could have been observed; hence, the behaviour of observing p and q should be contained in the behaviour of observing p and then q. For instance, consider the pseudo-programs S = assert p ∧ q; e and S′ = assert p; assert q; e. Here, S asserts that p and q hold at the same time before proceeding with e, while S′ first asserts p, and then q. The behaviour of S should be included in that of S′: if p and q are simultaneously observable, then the first two observations might take place simultaneously. Encoding this in an additional axiom leads to KAO proper.

▶ Definition 4.2 (KAO axioms). We define ≡KAO as the smallest congruence on TGR that contains ≡WKAO, and furthermore satisfies the contraction law: for all p, q ∈ TP it holds that p ∧ q ≤KAO p · q, where e ≤KAO f is a shorthand for e + f ≡KAO f.

We briefly return to our example programs. If we encode S as (p ∧ q) · e and S′ as p · q · e, then the behaviour of S is contained in that of S′, because (p ∧ q) · e ≤KAO p · q · e.

▶ Remark 4.3. In the presence of the other axioms, p ∧ q ≤KAO p · q is equivalent to p ≤KAO p · p. Indeed, the second inequality may be inferred from the first, since p ≡KAO p ∧ p. The converse implication is obtained as follows: p ∧ q ≤KAO (p ∧ q) · (p ∧ q) ≤KAO p · q, using that p ∧ q ≤KAO p, q. While the contraction law is more convenient to compare terms, the axiom p ≤KAO p · p will serve implicitly as the basis for the contraction relation ≼ defined in the next section.

4.2 A language model

The Kleene algebra variants encountered thus far have models based on rational languages, which characterise the equivalences derivable using the axioms. To find a model for guarded rational terms which corresponds to KAO, we start with a model of weak KAO:

² We refer to [20, Appendix E] for further algebraic details on the consequences of identifying these units.

▶ Definition 4.4 (WKAO semantics). Let Γ = A ∪ Σ. Languages over Γ are called observation languages. We define ⟦−⟧WKAO : TGR → 2^(Γ*) as follows:

⟦0⟧WKAO = ∅    ⟦a⟧WKAO = {a}    ⟦e + f⟧WKAO = ⟦e⟧WKAO ∪ ⟦f⟧WKAO    ⟦e*⟧WKAO = (⟦e⟧WKAO)*
⟦1⟧WKAO = {ε}    ⟦p⟧WKAO = ⟦p⟧BA    ⟦e · f⟧WKAO = ⟦e⟧WKAO · ⟦f⟧WKAO ,

where it is understood that ⟦p⟧BA ⊆ A ⊆ Γ ⊆ Γ*.

Observation languages are a model for guarded rational terms w.r.t. ≡WKAO:

▶ Lemma 4.5 (WKAO soundness). Let e, f ∈ TGR. Now, if e ≡WKAO f, then ⟦e⟧WKAO = ⟦f⟧WKAO.

More work is necessary to get a model of guarded rational terms w.r.t. ≡KAO. In particular,

⟦−⟧WKAO does not preserve the contraction law: if o1, o2 ∈ Ω, then

⟦o1 ∧ o2⟧WKAO = {α ∈ A : o1, o2 ∈ α}    ⟦o1 · o2⟧WKAO = {αβ ∈ A · A : o1 ∈ α, o2 ∈ β} ,

meaning that ⟦o1 ∧ o2⟧WKAO ⊄ ⟦o1 · o2⟧WKAO, despite o1 ∧ o2 ≤KAO o1 · o2.³ To obtain a sound model, we need the counterpart of the contraction law on the level of observation languages, which works out as follows.

▶ Definition 4.6 (Contraction). We define ≼ as the smallest relation on Γ* such that wαx ≼ wααx for all w, x ∈ Γ* and α ∈ A.

When L ⊆ Γ*, we write L↓ for the ≼-closure of L, that is to say, the smallest observation language such that L ⊆ L↓, and if w ≼ x with x ∈ L↓, then w ∈ L↓.

Thus, L↓ contains all words obtained from words in L by contracting any number of repeated atoms (but not actions) into one. Applying this closure to the semantics encodes the intuition that repeated observations may also correspond to just one step in the execution. With these tools, we can define a model of KAO as follows.

▶ Definition 4.7 (KAO semantics). We define ⟦−⟧KAO : TGR → 2^(Γ*) by ⟦e⟧KAO = ⟦e⟧WKAO↓.

Alternatively, we can describe ⟦−⟧KAO compositionally, using the following.

▶ Lemma 4.8. Let e, f ∈ TGR. The following hold: (i) ⟦e + f⟧KAO = ⟦e⟧KAO ∪ ⟦f⟧KAO, and (ii) ⟦e · f⟧KAO = (⟦e⟧KAO · ⟦f⟧KAO)↓, and (iii) ⟦e*⟧KAO = (⟦e⟧KAO)*↓.
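Since ≼ only ever contracts adjacent duplicate atoms (Definition 4.6), the closure L↓ of a finite language is finite and easy to compute; the Python sketch below does exactly that. The representation of words as tuples over Γ and the `is_atom` predicate are illustrative choices of this sketch.

```python
def contract_closure(lang, is_atom):
    """The contraction closure L↓ of a *finite* observation language L:
    whenever a word contains two adjacent copies of the same atom, the word
    with a single copy is added as well (cf. Definition 4.6)."""
    closed = set(lang)
    todo = list(lang)
    while todo:
        w = todo.pop()
        for i in range(len(w) - 1):
            if w[i] == w[i + 1] and is_atom(w[i]):
                v = w[:i] + w[i + 1:]            # contract the duplicate atom
                if v not in closed:
                    closed.add(v)
                    todo.append(v)
    return closed

# Illustration: with atoms as frozensets and actions as strings, the closure
# of {alpha alpha a} also contains alpha a.
alpha = frozenset({"o1"})
print(contract_closure({(alpha, alpha, "a")}, lambda x: isinstance(x, frozenset)))
```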

It is now straightforward to check that ⟦−⟧KAO preserves the axioms of ≡KAO.

▶ Lemma 4.9 (KAO soundness). Let e, f ∈ TGR. Now, if e ≡KAO f, then ⟦e⟧KAO = ⟦f⟧KAO.

4.3 Partial derivatives

Derivatives [8] are a powerful tool in Kleene algebra variants. In the context of programs, we can think of derivatives as an operational semantics; they tell us whether the term represents a program that can halt immediately (given by the termination map), as well as the program that remains to be executed once an action is performed (given by the continuation map).

³ Incidentally, this shows that the contraction law is independent of the axioms that build ≡WKAO, i.e., that those axioms are not sufficient to prove the contraction law.

Derivatives are typically compatible with the congruences used to reason about terms; what’s more, they are closely connected to finite automata [8]. This makes them useful for reasoning about language models [26, 6, 34]. We therefore introduce a form of derivatives of guarded rational terms that works well with ≡KAO. Let us start by giving the termination map, which is close to the termination map of rational terms compatible with ≡KA.

▶ Definition 4.10 (Termination). We define ϵ : TGR → 2 inductively, as follows:

ϵ(0) = 0    ϵ(a) = 0    ϵ(e + f) = max{ϵ(e), ϵ(f)}    ϵ(e*) = 1
ϵ(1) = 1    ϵ(p) = 0    ϵ(e · f) = min{ϵ(e), ϵ(f)} .

To check that ϵ characterises guarded rational terms that can terminate, we record the following lemma, which says that ϵ(e) = 1 precisely when ⟦e⟧KAO includes the empty string.

▶ Lemma 4.11. Let e ∈ TGR. Now ϵ(e) ≤KAO e, and ϵ(e) = 1 if and only if ε ∈ ⟦e⟧KAO.

Next we define the continuation map, or more specifically, the partial continuation map [2]: given a ∈ Γ, this function gives a set of terms representing possible continuations of the program after performing a. We start with the continuation map for actions.

▶ Definition 4.12 (Σ-continuation). We define δ : TGR × Σ → 2^TGR inductively:

δ(0, a) = δ(1, a) = ∅    δ(e + f, a) = δ(e, a) ∪ δ(f, a)
δ(a, a′) = {1 : a = a′}    δ(e · f, a) = {e′ · f : e′ ∈ δ(e, a)} ∪ ∆(e, f, a)
δ(p, a) = ∅    δ(e*, a) = {e′ · e* : e′ ∈ δ(e, a)} ,

where ∆(e, f, a) = δ(f, a) when ϵ(e) = 1, and ∅ otherwise.

Finally, we give the continuation map for observations, which is similar to the one for actions, except that on sequential composition it is subtly different.

▶ Definition 4.13 (A-continuation). We define ζ : TGR × A → 2^TGR inductively:

ζ(0, α) = ζ(1, α) = ∅    ζ(e + f, α) = ζ(e, α) ∪ ζ(f, α)
ζ(a, α) = ∅    ζ(e · f, α) = {e′ · f : e′ ∈ ζ(e, α)} ∪ Z(e, f, α)
ζ(p, α) = {1 : πα ≤BA p}    ζ(e*, α) = {e′ · e* : e′ ∈ ζ(e, α)} ,

where Z(e, f, α) = ζ(f, α) when ϵ(e) = 1 or ϵ(e′) = 1 for some e′ ∈ ζ(e, α), and ∅ otherwise.
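The three maps of Definitions 4.10–4.13 translate directly into code. The sketch below uses a tuple encoding of guarded rational terms and reuses the `sat` helper from the Boolean-algebra sketch to decide πα ≤BA p (equivalently α ∈ ⟦p⟧BA, by Lemma 2.2 and Theorem 2.1); the encoding and names are assumptions of this sketch, not notation from the paper.

```python
# Guarded rational terms as nested tuples (an encoding assumed for these sketches):
#   ("zero",), ("one",), ("act", a), ("obs", p),
#   ("plus", e, f), ("dot", e, f), ("star", e)
# Atoms alpha are frozensets of observables; sat(p, alpha) decides alpha ∈ ⟦p⟧BA.
ONE = ("one",)

def eps(e):
    """Termination map of Definition 4.10."""
    tag = e[0]
    if tag in ("zero", "act", "obs"):
        return 0
    if tag in ("one", "star"):
        return 1
    if tag == "plus":
        return max(eps(e[1]), eps(e[2]))
    return min(eps(e[1]), eps(e[2]))             # tag == "dot"

def delta(e, a):
    """Sigma-continuations of Definition 4.12."""
    tag = e[0]
    if tag == "act":
        return {ONE} if e[1] == a else set()
    if tag == "plus":
        return delta(e[1], a) | delta(e[2], a)
    if tag == "dot":
        out = {("dot", d, e[2]) for d in delta(e[1], a)}
        return out | (delta(e[2], a) if eps(e[1]) == 1 else set())
    if tag == "star":
        return {("dot", d, e) for d in delta(e[1], a)}
    return set()                                  # 0, 1 and observations

def zeta(e, alpha):
    """A-continuations of Definition 4.13."""
    tag = e[0]
    if tag == "obs":
        return {ONE} if sat(e[1], alpha) else set()
    if tag == "plus":
        return zeta(e[1], alpha) | zeta(e[2], alpha)
    if tag == "dot":
        cont = zeta(e[1], alpha)
        out = {("dot", d, e[2]) for d in cont}
        if eps(e[1]) == 1 or any(eps(d) == 1 for d in cont):
            out |= zeta(e[2], alpha)              # the Z(e, f, alpha) clause
        return out
    if tag == "star":
        return {("dot", d, e) for d in zeta(e[1], alpha)}
    return set()                                  # 0, 1 and actions

# For observations p, q and an atom alpha satisfying both, zeta of p·q at alpha
# yields {1·q, 1}, matching the calculation worked out below.
```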

For instance, let p, q ∈ TP. If α ∈ ⟦p⟧BA ∩ ⟦q⟧BA, then we can calculate that

ζ(p · q, α) = {p′ · q : p′ ∈ ζ(p, α)} ∪ Z(p, q, α) = {1 · q} ∪ {1} ,

which is to say that the program can either continue in 1 · q, where it has to make an observation validating q, or it can choose to re-use the observation α to validate q, continuing in Z(p, q, α) = ζ(q, α) = {1}, because 1 ∈ ζ(p, α). This notion that ζ can apply an observation more than once to validate multiple assertions in a row can be formalised.

▶ Lemma 4.14. Let e ∈ TGR, α ∈ A, and e′ ∈ ζ(e, α). Now ζ(e′, α) ⊆ ζ(e, α).

To check that δ and ζ indeed output continuations of the term after the action or assertion has been performed, we should check that they are compatible with ≡KAO.

▶ Lemma 4.15. Let e ∈ TGR. For all a ∈ Σ and e′ ∈ δ(e, a), we have a · e′ ≤KAO e. Furthermore, for all α ∈ A and e′ ∈ ζ(e, α), we have πα · e′ ≤KAO e.

We also need the reach of a guarded rational term, which is meant to describe the set of terms that can be obtained by repeatedly applying continuation maps.

▶ Definition 4.16 (Reach of a term). For e ∈ TGR, we define ρ : TGR → 2^TGR inductively:

ρ(0) = ∅    ρ(a) = {1, a}    ρ(e + f) = ρ(e) ∪ ρ(f)
ρ(1) = {1}    ρ(p) = {1, p}    ρ(e · f) = {e′ · f : e′ ∈ ρ(e)} ∪ ρ(f)
ρ(e*) = {1} ∪ {e′ · e* : e′ ∈ ρ(e)} .

Indeed, ρ(e) truly contains all terms reachable from e by means of δ and ζ:

▶ Lemma 4.17. Let e ∈ TGR. If a ∈ Σ, then δ(e, a) ⊆ ρ(e); if e′ ∈ ρ(e), then δ(e′, a) ⊆ ρ(e). Furthermore, if α ∈ A, then ζ(e, α) ⊆ ρ(e); if e′ ∈ ρ(e), then ζ(e′, α) ⊆ ρ(e).

It is not hard to see that for all e ∈ TGR, we have that ρ(e) is finite. Note that ρ(e) does not, in general, contain e itself. On the other hand, ρ(e) does contain a set of terms that is sufficient to reconstruct e, up to ≡KAO. We describe these as follows.

▶ Definition 4.18 (Initial factors). For e ∈ TGR, we define ι : TGR → 2^TGR inductively:

ι(0) = ∅    ι(a) = {a}    ι(e + f) = ι(e) ∪ ι(f)
ι(1) = {1}    ι(p) = {p}    ι(e · f) = {e′ · f : e′ ∈ ι(e)}
ι(e*) = {1} ∪ {e′ · e* : e′ ∈ ι(e)} .

Now, to reconstruct e from ι(e), all we have to do is sum its elements.

▶ Lemma 4.19. Let e ∈ TGR. Then e ≡KAO Σ_{e′∈ι(e)} e′.
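For completeness of the sketches, here are ρ and ι of Definitions 4.16 and 4.18 in the same tuple encoding (reusing the `ONE` constant from the derivative sketch); both sets are finite, as noted above.

```python
def rho(e):
    """Reach of a term (Definition 4.16), in the tuple encoding used above."""
    tag = e[0]
    if tag == "zero":
        return set()
    if tag == "one":
        return {ONE}
    if tag in ("act", "obs"):
        return {ONE, e}
    if tag == "plus":
        return rho(e[1]) | rho(e[2])
    if tag == "dot":
        return {("dot", d, e[2]) for d in rho(e[1])} | rho(e[2])
    return {ONE} | {("dot", d, e) for d in rho(e[1])}      # tag == "star"

def iota(e):
    """Initial factors (Definition 4.18)."""
    tag = e[0]
    if tag == "zero":
        return set()
    if tag in ("one", "act", "obs"):
        return {e}
    if tag == "plus":
        return iota(e[1]) | iota(e[2])
    if tag == "dot":
        return {("dot", d, e[2]) for d in iota(e[1])}
    return {ONE} | {("dot", d, e) for d in iota(e[1])}     # tag == "star"
```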

5 Completeness

We now set out to show that ⟦−⟧KAO characterises ≡KAO. Since soundness of ⟦−⟧KAO w.r.t. ≡KAO was already shown in Lemma 4.9, it remains to prove completeness, i.e., if e and f are interpreted equally by ⟦−⟧KAO, then they can be proven equivalent via ≡KAO. To this end, we first identify a subset of guarded rational terms for which a completeness result can be shown by relying on the completeness result for KA. To describe these terms, we need the following.

▶ Definition 5.1 (Atomic and guarded terms). Let Π denote the set {πα : α ∈ A}. The set of atomic guarded rational terms, denoted TAGR, is the subset of TGR generated by the grammar

e, f ::= 0 | 1 | a ∈ Σ | πα ∈ Π | e + f | e · f | e* .

Furthermore, if e ∈ TGR is such that ⟦e⟧KAO = ⟦e⟧WKAO, we say that e is closed.

Completeness of ≡KAO with respect to ⟦−⟧KAO can then be established for atomic and closed guarded rational terms as follows.

▶ Proposition 5.2. Let e, f ∈ TAGR be closed, and ⟦e⟧KAO = ⟦f⟧KAO; then e ≡KAO f.

Sketch. By noting that the semantics of an atomic and closed term coincides with the KA-semantics (over an extended alphabet); the result then follows by appealing to completeness of KA, and the fact that the axioms of KA are contained in those for KAO. ◀

To show that ⟦−⟧KAO characterises ≡KAO for all guarded rational terms, it suffices to find for every e ∈ TGR a closed ê ∈ TAGR with e ≡KAO ê. After all, if we have such a transformation, then ⟦e⟧KAO = ⟦f⟧KAO implies ⟦ê⟧KAO = ⟦f̂⟧KAO, which implies ê ≡KAO f̂, and hence e ≡KAO f.

In the remainder of this section, we describe how to obtain a closed and atomic guarded rational term from a guarded rational term e. This transformation works by first "disassembling" e using partial derivatives; on an intuitive level, this is akin to creating a (non-deterministic finite) automaton that accepts the observation language described by e. Next, we "reassemble" from this representation an atomic term ê, equivalent to e; this is analogous to constructing a rational expression from the automaton. To show that ê is closed, we leverage Lemma 4.14, essentially arguing that the automaton obtained from e encodes ≼.

We extend the notation for vectors and matrices over rational terms to guarded rational terms. We can then represent a guarded rational term by a matrix and a vector, as follows.

▶ Definition 5.3. For e ∈ TGR, define the ρ(e)-vector xe and ρ(e)-matrix Me by

xe(e′) = ϵ(e′)    Me(e′, e″) = Σ_{a∈Σ, e″∈δ(e′,a)} a + Σ_{α∈A, e″∈ζ(e′,α)} πα .
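The following small sketch builds the linear system of Definition 5.3 from the earlier `rho`, `eps`, `delta`, `zeta` and `atoms` helpers. Representing matrix entries as lists of labels (one per summand) instead of syntactic sums is an implementation choice of this sketch; feeding Me into a matrix-star construction such as the `mat_star` sketch of Section 2 and multiplying with xe would then yield se, and summing over ι(e) yields ê as in Definition 5.5 below.

```python
def linear_system(e, Sigma, Omega):
    """The vector x_e and matrix M_e of Definition 5.3, with entries kept as
    lists of labels (one entry per summand) instead of syntactic sums."""
    Q = rho(e)
    x = {q: eps(q) for q in Q}                    # x_e(e') = eps(e')
    M = {(q, r): [] for q in Q for r in Q}
    for q in Q:
        for a in Sigma:
            for r in delta(q, a):                 # summand a for each r in delta(q, a)
                M[(q, r)].append(("act", a))
        for alpha in atoms(Omega):
            for r in zeta(q, alpha):              # summand pi_alpha for each r in zeta(q, alpha)
                M[(q, r)].append(("atom", alpha))
    return Q, x, M
```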

Note that, in Definition 5.3, δ(e′, a) and ζ(e′, α) are finite by Lemma 4.17. Kozen's argument that matrices over rational terms satisfy the axioms of Kleene algebra [23] generalises to guarded rational terms. This leads to a straightforward generalisation of Lemma 2.4. For the sake of brevity, we use T to stand in for WKAO or KAO.

▶ Lemma 5.4. Let M be a Q-matrix. Using the entries of M and applying the operators of Kleene algebra, we can construct a Q-matrix M*, which has the following property. Let y be any Q-vector; now M* · y is the least (w.r.t. ≤T) Q-vector x such that M · x + y ≤T x.

We can now use the above to reassemble a term from Me and xe as follows.

▶ Definition 5.5 (Transformation of terms). Let e ∈ TGR. We write se for the ρ(e)-vector given by Me* · xe, and ê for the guarded rational term given by Σ_{e′∈ι(e)} se(e′).

Since ê is constructed from Me and se, and these are built using atomic guarded rational terms, ê is atomic. We carry on to show that ê ≡KAO e. To this end, the following is useful.

▶ Proposition 5.6 (Least solutions). Let e, f ∈ TGR. Now se · f is the least (w.r.t. ≤T) ρ(e)-vector s such that for each e′ ∈ ρ(e), it holds that

ϵ(e′) · f + Σ_{a∈Σ, e″∈δ(e′,a)} a · s(e″) + Σ_{α∈A, e″∈ζ(e′,α)} πα · s(e″) ≤T s(e′) .

When s = se · f, the above is an equivalence, i.e., we can substitute ≤T with ≡T.

Sketch. The least such s coincides with se · f, with se = Me* · xe, by using the property of se that we obtain as a result of Lemma 5.4. The latter claim goes by a standard argument akin to the argument showing that the least pre-fixpoint of a monotone operator is a fixpoint. ◀

Using the above, we show that the contents of ρ(e) themselves qualify as the least ρ(e)-vector satisfying the system obtained from e (and fixing f = 1). More concretely, we have the following.

▶ Proposition 5.7. Let e ∈ TGR and e′ ∈ ρ(e). Then se(e′) ≡KAO e′.

Sketch. The key idea is to first relate the least ρ(e)-vectors satisfying the system obtained from e to those satisfying the systems arising from its subterms. The proof then follows by induction on e, and some straightforward derivations. ◀

This allows us to conclude (using Lemma 4.19) that e ≡KAO ê.

▶ Corollary 5.8 (Transformation preserves equivalence). Let e ∈ TGR; then e ≡KAO ê.

It remains to show that ê is closed. To this end, the following characterisation is helpful:

▶ Lemma 5.9. Let L ⊆ Γ*, and define ⊑ as the smallest relation on Γ* satisfying the rules

ε ⊑ ε        a ∈ Γ and w ⊑ x implies aw ⊑ ax        α ∈ A and w ⊑ x implies αw ⊑ ααx .

L↓ is the smallest subset of Γ* such that L ⊆ L↓, and if w ⊑ x with x ∈ L↓, then w ∈ L↓.

We can then employ the characterisation of se in Proposition 5.6 to show, by induction on the length of the ⊑-chain, that the components of se are closed.

▶ Proposition 5.10. Let e ∈ TGR. For all e′ ∈ ρ(e), it holds that se(e′) is closed.

Sketch. Note that se satisfies the system obtained from e as in Proposition 5.6, both w.r.t.

≡KAO and ≡WKAO. Using Lemma 5.9, we can then show (by induction on the number of applications of ⊑) that ⟦se(e′)⟧WKAO ⊆ ⟦se(e′)⟧KAO; the other inclusion holds trivially. ◀

Hence, we can conclude (using Lemma 4.8) that ê is closed as well, as follows.

▶ Corollary 5.11 (Transformation yields closed term). Let e ∈ TGR; now ê is closed.

We are now ready to conclude with the main result of this section.

▶ Theorem 5.12 (Soundness and completeness). Let e, f ∈ TGR; then

e ≡KAO f ⟺ ⟦e⟧KAO = ⟦f⟧KAO .

Sketch. Since we have a way to obtain from e ∈ TGR a closed term ê ∈ TAGR, for which it furthermore holds that e ≡KAO ê, we can appeal to Proposition 5.2. ◀

6 Decision procedure

We now design a procedure that takes terms e, f ∈ TGR and decides whether ⟦e⟧KAO = ⟦f⟧KAO; by Theorem 5.12, this gives us a decision procedure for e ≡KAO f. To this end, we reduce to the problem of language equivalence between NFAs over guarded rational terms, defined using partial derivatives. This, in turn, can be efficiently decided using bisimulation techniques. First, observe that TGR carries a non-deterministic automaton structure.

▶ Definition 6.1 (Syntactic automaton). The set TGR carries an NDA structure (TGR, ϵ, θ), where ϵ : TGR → 2 is as in Definition 4.10, and θ : TGR × Γ → 2^TGR is given by

θ(e, a) = δ(e, a) if a ∈ Σ, and θ(e, α) = ζ(e, α) if α ∈ A .

We call this the syntactic automaton of guarded rational terms.

Let ℓ : TGR → 2^(Γ*) be the semantics of this automaton as given in Section 2. It is easy to see that ℓ is the unique function such that

ℓ(e) = {ε : ϵ(e) = 1} ∪ ⋃_{a∈Σ, e′∈δ(e,a)} {a} · ℓ(e′) ∪ ⋃_{α∈A, e′∈ζ(e,α)} {α} · ℓ(e′) .    (1)

To use this NDA for KAO equivalence, we need to show that the language accepted by the state e ∈ TGR is ⟦e⟧KAO. For this goal, it helps to algebraically characterise expressions in terms of their derivatives. We call this a fundamental theorem of KAO, after [39, 40].

▶ Theorem 6.2 (Fundamental theorem). For all e ∈ TGR, the following holds:

e ≡KAO ϵ(e) + Σ_{a∈Σ, e′∈δ(e,a)} a · e′ + Σ_{α∈A, e′∈ζ(e,α)} πα · e′ .

Sketch. The right-to-left containment is a result of Lemmas 4.11 and 4.15. The converse is a straightforward calculation using Proposition 5.6 and Corollary 5.8. ◀

Next, we turn the fundamental theorem into a statement about the KAO semantics. Before we do, however, we need the following basic result about the KAO semantics.

▶ Lemma 6.3. Let us fix α ∈ A. For all e ∈ TGR, we have:

⟦ Σ_{e′∈ζ(e,α)} πα · e′ ⟧KAO = ⋃_{e′∈ζ(e,α)} {α} · ⟦e′⟧KAO .

We now arrive at a semantic analogue of the fundamental theorem.

▶ Proposition 6.4. For all e ∈ TGR, we have:

⟦e⟧KAO = {ε : ϵ(e) = 1} ∪ ⋃_{a∈Σ, e′∈δ(e,a)} {a} · ⟦e′⟧KAO ∪ ⋃_{α∈A, e′∈ζ(e,α)} {α} · ⟦e′⟧KAO .

Sketch. By applying soundness of ≡KAO w.r.t. ⟦−⟧KAO (Lemma 4.9) to the fundamental theorem, and using Lemma 6.3 for the term summing over the A-continuations of e. ◀

The language of a state in the syntactic NFA is then characterised by the KAO semantics.

▶ Theorem 6.5 (Soundness of translation). Let e ∈ TGR; then ⟦e⟧KAO = ℓ(e).

Sketch. A direct consequence of Proposition 6.4 and the uniqueness of ℓ in satisfying (1). ◀

To decide whether ⟦e⟧KAO = ⟦f⟧KAO it suffices to show that e and f are language equivalent in the syntactic automaton. We express this in terms of ι as defined in Definition 4.18.

▶ Corollary 6.6. For all e, f ∈ TGR, we have

⟦e⟧KAO = ⟦f⟧KAO ⟺ ⋃_{e′∈ι(e)} ℓ(e′) = ⋃_{f′∈ι(f)} ℓ(f′) .

By construction, ι(e) ⊆ ρ(e) for every e ∈ TGR. Since ρ(e) and ρ(f) are closed under partial derivatives, we can restrict the syntactic automaton to ρ(e) ∪ ρ(f), to obtain a finite NDA. To decide e ≡KAO f, it suffices to decide ⋃_{e′∈ι(e)} ℓ(e′) = ⋃_{f′∈ι(f)} ℓ(f′) on this NFA. This leads us to the main result of this section: a decision procedure for KAO.

▶ Theorem 6.7 (Decision procedure). For all e, f ∈ TGR, we have e ≡KAO f if and only if there exists R such that (ι(e), ι(f)) ∈ R and R is a bisimulation up to congruence for the syntactic automaton restricted to ρ(e) ∪ ρ(f).

Sketch. By Theorem 2.5, such an R exists precisely when ⋃_{e′∈ι(e)} ⟦e′⟧KAO = ⋃_{f′∈ι(f)} ⟦f′⟧KAO; by Corollary 6.6 and Theorem 5.12 this is equivalent to e ≡KAO f. ◀
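Putting the earlier sketches together gives an executable rendering of Theorem 6.7: the syntactic automaton is explored on the fly (its reachable part stays inside ρ(e) ∪ ρ(f) by Lemma 4.17), with ϵ as output and θ as transitions, starting from the pair (ι(e), ι(f)). The function below assumes the helpers sketched earlier (`atoms`, `eps`, `delta`, `zeta`, `iota`, `hkc_equiv`) are in scope; names and interfaces are assumptions of these sketches.

```python
def kao_equiv(e, f, Sigma, Omega):
    """Decide e ≡KAO f along the lines of Theorem 6.7: search for a
    bisimulation up to congruence on the syntactic automaton of
    Definition 6.1, starting from the pair (iota(e), iota(f))."""
    alphabet = [("act", a) for a in Sigma] + [("atom", al) for al in atoms(Omega)]

    def out(x):                  # output: the termination map
        return eps(x) == 1

    def trans(x, letter):        # transitions: theta dispatches to delta or zeta
        kind, label = letter
        return delta(x, label) if kind == "act" else zeta(x, label)

    return hkc_equiv(iota(e), iota(f), out, trans, alphabet)
```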

▶ Example 6.8. We know that for all a, b ∈ Ω we have that a ∧ b ≢KAO a · b. This also follows from our decision procedure, which we argue by showing that any attempt to construct a bisimulation up to congruence R containing ({a ∧ b}, {a · b}) fails.

We start with R = {({a ∧ b}, {a · b})}, and note that ϵ̄({a ∧ b}) = 0 = ϵ̄({a · b}). We now take a derivative w.r.t. α ∈ A such that πα ≤BA a ∧ b. To grow R into a bisimulation up to congruence, all derivatives should be checked and possibly added to R; this specific choice, however, will lead to a counterexample. We have that θ̄({a ∧ b}, α) = {1} and θ̄({a · b}, α) = {1 · b, 1}. We add ({1}, {1 · b, 1}) to R, noting that ϵ̄({1}) = 1 = ϵ̄({1 · b, 1}), and continue with the next derivative w.r.t. β ∈ A such that πβ ≤BA b. We get θ̄({1}, β) = ∅ and θ̄({1 · b, 1}, β) = {1}. We now have (∅, {1}) ∈ R, but ϵ̄(∅) ≠ ϵ̄({1}). Thus, we cannot construct a bisimulation up to congruence R such that ({a ∧ b}, {a · b}) ∈ R.
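Running the sketch above on Example 6.8 (with Ω = {a, b} and no actions) indeed reports that a ∧ b and a · b are not equivalent, while trivially equivalent terms are accepted; this is only an illustration of the procedure, using the hypothetical encodings of the earlier sketches.

```python
# Example 6.8, replayed with the sketches above (illustrative encoding).
a, b = ("var", "a"), ("var", "b")
conj = ("obs", ("and", a, b))                 # the term a ∧ b
seq = ("dot", ("obs", a), ("obs", b))         # the term a · b

print(kao_equiv(conj, seq, Sigma=set(), Omega={"a", "b"}))         # False
print(kao_equiv(conj, ("plus", conj, conj), set(), {"a", "b"}))    # True: e ≡ e + e
```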

▶ Remark 6.9. For KAO-expressions without observations, the derivatives with respect to an element of A result in the empty set. Hence, for these KAO-expressions, deciding equivalence comes down to standard derivative-based techniques for rational expressions [37].

7 Related work

This work fits in the larger tradition of Kleene algebra as a presentation of the “laws of programming”, the latter having been studied by Hoare and collaborators [14, 13]. More precisely, our efforts can be grouped with recent efforts to extend Kleene algebra with concurrency [15, 13, 32, 21], and thence with Boolean guards [19].

We proved that ≡KAO is sound and complete w.r.t. ⟦−⟧KAO, based on the existing completeness proof for KA. By re-using completeness results for a simpler algebra, the proof shows a clear separation of concerns between the "old" algebra being extended and the new layer of axioms placed on top. This strategy pops up quite often in some form [29, 1, 32, 21]. We note that unlike [29, 1], our transformation does not proceed by induction on the term, but leverages the least fixpoints computable in every Kleene algebra. Further, the combination of KA with additional hypotheses presented in [27] might yield another route to completeness.

The use of linear systems to study automata was pioneered by Conway [9] and Backhouse [3]. Kozen's completeness proof expanded on this by generalising Kleene algebra to matrices [23]. The connection between linear systems, derivatives and completeness was studied by Kozen [26] and Jacobs [18]. To keep our presentation simple, we give our proof of completeness in elementary terms; a proof in terms of coalgebra [38] may yet be possible.

Using bisimulation to decide language equivalence in a deterministic finite automaton originates from Hopcroft and Karp [17]. This technique has many generalisations [36].

8 Conclusions and further work

Kleene algebra with observations (KAO) is an algebraic framework that adds Boolean guards to Kleene algebra in such a way that a sensible extension with concurrency is still possible, in contrast with Kleene algebra with tests (KAT). Indeed, the laws of KAO prevent the problem presented by Antinomy 3.1. The axiomatisation of KAO, as well as the decision procedure for equivalence, give an alternative foundation for combining KAO with concurrent Kleene algebra to arrive at a new equational calculus of concurrent programs.

The most obvious direction of further work is to define concurrent Kleene algebra with observations (CKAO) as the amalgamation of axioms of KAO and CKA, in pursuit of a characterisation of its equational theory and an analogous decidability result. We conjecture that languages of sp-pomsets [12, 11, 33] over actions and atoms, closed under some suitable relation, form the free model; a proof would likely build on [31] and re-use techniques from [21]. While the equational theory of CKAO might be decidable, we are less optimistic about a feasible algorithm; deciding the equational theory of CKA is already EXPSPACE-complete [7].

Another avenue of future research would be to create a programming language using KAO (or, possibly, CKAO) by instantiating actions and observations, and adding axioms that encode the intention of those primitives. NetKAT [1, 10, 41], a language for describing and reasoning about network behaviour, is an excellent example of such an endeavour based on KAT. Our long-term hope is that CKAO will function as a foundation for a "concurrent" version of NetKAT aimed at describing and reasoning about networks with concurrency.

Finally, we note that the decision procedure for KAO based on bisimulation up to congruence leaves room for optimisation. Besides adapting the work on symbolic algorithms for bisimulation-based algorithms in KAT [34], the transitivity property witnessed in Lemma 4.14 seems like it could sometimes allow a bisimulation-based algorithm to decide early.

References 1 Carolyn Jane Anderson, Nate Foster, Arjun Guha, Jean-Baptiste Jeannin, Dexter Kozen, Cole Schlesinger, and David Walker. NetKAT: semantic foundations for networks. In POPL, pages 113–126, 2014. doi:10.1145/2535838.2535862. 2 Valentin M. Antimirov. Partial derivatives of regular expressions and finite automaton constructions. Theor. Comput. Sci., 155(2):291–319, 1996. doi:10.1016/0304-3975(95) 00182-4. 3 Roland Backhouse. Closure algorithms and the star-height problem of regular languages. PhD thesis, University of London, 1975. 4 Garrett Birkhoff and Thomas C. Bartee. Modern applied algebra. McGraw-Hill, 1970. 5 Maurice Boffa. Une remarque sur les systèmes complets d’identités rationnelles. ITA, 24:419– 428, 1990. 6 Filippo Bonchi and Damien Pous. Checking NFA equivalence with bisimulations up to congruence. In POPL, pages 457–468, 2013. doi:10.1145/2429069.2429124. 7 Paul Brunet, Damien Pous, and Georg Struth. On decidability of concurrent Kleene algebra. In CONCUR, pages 28:1–28:15, 2017. doi:10.4230/LIPIcs.CONCUR.2017.28. 8 Janusz A. Brzozowski. Derivatives of regular expressions. J. ACM, 11(4):481–494, 1964. doi:10.1145/321239.321249. 9 John Horton Conway. Regular Algebra and Finite Machines. Chapman and Hall, 1971. 10 Nate Foster, Dexter Kozen, Matthew Milano, Alexandra Silva, and Laure Thompson. A coalgebraic decision procedure for NetKAT. In POPL, pages 343–355, 2015. doi:10.1145/ 2676726.2677011. 11 Jay L. Gischer. The equational theory of pomsets. Theor. Comput. Sci., 61:199–224, 1988. doi:10.1016/0304-3975(88)90124-7. 12 Jan Grabowski. On partial languages. Fundam. Inform., 4(2):427, 1981. 13 Tony Hoare. Laws of programming: The algebraic unification of theories of concurrency. In CONCUR, pages 1–6, 2014. doi:10.1007/978-3-662-44584-6_1. 14 Tony Hoare, Ian J. Hayes, Jifeng He, Carroll Morgan, A. W. Roscoe, Jeff W. Sanders, Ib Holm Sørensen, J. Michael Spivey, and Bernard Sufrin. Laws of programming. Commun. ACM, 30(8):672–686, 1987. doi:10.1145/27651.27653. 15 Tony Hoare, Bernhard Möller, Georg Struth, and Ian Wehrman. Concurrent Kleene algebra. In CONCUR, pages 399–414, 2009. doi:10.1007/978-3-642-04081-8_27. 16 Tony Hoare, Stephan van Staden, Bernhard Möller, Georg Struth, and Huibiao Zhu. Devel- opments in concurrent Kleene algebra. J. Log. Algebr. Meth. Program., 85(4):617–636, 2016. doi:10.1016/j.jlamp.2015.09.012. 17 John E. Hopcroft and Richard M. Karp. A linear algorithm for testing equivalence of finite automata. Technical Report TR71-114, Cornell University, December 1971. 41:16 Kleene Algebra with Observations

18 Bart Jacobs. A bialgebraic review of deterministic automata, regular expressions and languages. In Algebra, Meaning, and Computation (Joseph A. Goguen festschrift), pages 375–404, 2006. doi:10.1007/11780274_20. 19 Peter Jipsen and M. Andrew Moshier. Concurrent Kleene algebra with tests and branching automata. J. Log. Algebr. Meth. Program., 85(4):637–652, 2016. doi:10.1016/j.jlamp.2015. 12.005. 20 Tobias Kappé, Paul Brunet, Jurriaan Rot, Alexandra Silva, Jana Wagemaker, and Fabio Zanasi. Kleene algebra with observations. CoRR, abs/1811.10401, 2018. arXiv:1811.10401. 21 Tobias Kappé, Paul Brunet, Alexandra Silva, and Fabio Zanasi. Concurrent Kleene al- gebra: Free model and completeness. In ESOP, pages 856–882, 2018. doi:10.1007/ 978-3-319-89884-1_30. 22 Stephen C. Kleene. Representation of events in nerve nets and finite automata. Automata Studies, pages 3–41, 1956. 23 Dexter Kozen. A completeness theorem for Kleene algebras and the algebra of regular events. Inf. Comput., 110(2):366–390, 1994. doi:10.1006/inco.1994.1037. 24 Dexter Kozen. Kleene algebra with tests and commutativity conditions. In TACAS, pages 14–33, 1996. doi:10.1007/3-540-61042-1_35. 25 Dexter Kozen. On hoare logic and kleene algebra with tests. ACM Trans. Comput. Log., 1(1):60–76, 2000. doi:10.1145/343369.343378. 26 Dexter Kozen. Myhill-Nerode relations on automatic systems and the completeness of Kleene algebra. In STACS, pages 27–38, 2001. doi:10.1007/3-540-44693-1_3. 27 Dexter Kozen and Konstantinos Mamouras. Kleene algebra with equations. In ICALP, pages 280–292, 2014. doi:10.1007/978-3-662-43951-7_24. 28 Dexter Kozen and Maria-Christina Patron. Certification of compiler optimizations using Kleene algebra with tests. In CL, pages 568–582, 2000. doi:10.1007/3-540-44957-4_38. 29 Dexter Kozen and Frederick Smith. Kleene algebra with tests: Completeness and decidability. In CSL, pages 244–259, 1996. doi:10.1007/3-540-63172-0_43. 30 Daniel Krob. Complete systems of B-rational identities. Theor. Comput. Sci., 89(2):207–343, 1991. doi:10.1016/0304-3975(91)90395-I. 31 Michael R. Laurence and Georg Struth. Completeness theorems for bi-Kleene algebras and series-parallel rational pomset languages. In RAMiCS, pages 65–82, 2014. doi:10.1007/ 978-3-319-06251-8_5. 32 Michael R. Laurence and Georg Struth. Completeness theorems for pomset languages and concurrent Kleene algebras, 2017. arXiv:1705.05896. 33 Kamal Lodaya and Pascal Weil. Series-parallel languages and the bounded-width property. Theor. Comput. Sci., 237(1):347–380, 2000. doi:10.1016/S0304-3975(00)00031-1. 34 Damien Pous. Symbolic algorithms for language equivalence and Kleene algebra with tests. In POPL, pages 357–368, 2015. doi:10.1145/2676726.2677007. 35 Michael O. Rabin and Dana S. Scott. Finite automata and their decision problems. IBM J. of Research and Dev., 3(2):114–125, 1959. doi:10.1147/rd.32.0114. 36 Jurriaan Rot, Filippo Bonchi, Marcello M. Bonsangue, Damien Pous, Jan Rutten, and Alexandra Silva. Enhanced coalgebraic bisimulation. Mathematical Structures in Comput. Sci., 27(7):1236–1264, 2017. doi:10.1017/S0960129515000523. 37 Jan J. M. M. Rutten. Automata and coinduction (an exercise in coalgebra). In CONCUR, pages 194–218, 1998. doi:10.1007/BFb0055624. 38 Jan J. M. M. Rutten. Universal coalgebra: a theory of systems. Theor. Comput. Sci., 249(1):3–80, 2000. doi:10.1016/S0304-3975(00)00056-6. 39 Jan J. M. M. Rutten. Behavioural differential equations: a coinductive calculus of streams, automata, and power series. Theor. Comput. Sci., 308(1-3):1–53, 2003. 
doi:10.1016/S0304-3975(02)00895-2. 40 Alexandra Silva. Kleene Coalgebra. PhD thesis, Radboud Universiteit Nijmegen, 2010. 41 Steffen Smolka, Spiridon Aristides Eliopoulos, Nate Foster, and Arjun Guha. A fast compiler for NetKAT. In ICFP, pages 328–341, 2015. doi:10.1145/2784731.2784761.

A Proofs about the language model

▶ Lemma 4.5 (WKAO soundness). Let e, f ∈ TGR. Now, if e ≡WKAO f, then ⟦e⟧WKAO = ⟦f⟧WKAO.

Proof. It suffices to check the claim for the pairs generating ≡WKAO.

For the pairs that come from ≡BA, the claim then goes through by the fact that − WKAO J K coincides with − BA on TP , and that ≡BA is sound w.r.t. − BA by Theorem 2.1. J K J K Γ? For the pairs that come from ≡KA, the claim goes through by the observation that 2 forms a Kleene algebra when equipped with the operators used to define − WKAO, and the J K fact that ≡KA encodes the axioms of a Kleene algebra. Lastly, we can easily check that for any p, q ∈ TP we have that

p ∨ q WKAO = p BA ∪ q BA = p + q WKAO ⊥ WKAO = ⊥ BA = ∅ = 0 WKAO . J J K J K J K J K J K J K J K

I Lemma 4.8. Let e, f ∈ TGR. The following hold: (i) e + f KAO = e KAO ∪ f KAO, and ? ? J K J K J K (ii) e · f KAO = ( e KAO · f KAO)↓, and (iii) e KAO = e KAO↓. J K J K J K J K J K Proof. Let ∗ denote the reflexive, transitive closure of . ∗ (i) First, suppose w ∈ e + f KAO. Then w  x for some x ∈ e + f WKAO. If x ∈ e WKAO, J K J K J K then w ∈ e KAO; similarly, if x ∈ f WKAO, then w ∈ f KAO. J K J K J K ∗ For the other inclusion, suppose that w ∈ e KAO ∪ f KAO. If w ∈ e KAO, then w  x J K J K J K for some x ∈ e WKAO. In that case, x ∈ e + f WKAO, and thus w ∈ e + f KAO. Similarly, J K J K J K if w ∈ f KAO, also w ∈ e + f KAO. (ii) For theJ inclusionK fromJ left toK right, we note that

e · f KAO = ( e WKAO · f WKAO)↓ ⊆ ( e KAO · f KAO)↓ . J K J K J K J K J K

For the other inclusion, suppose w ∈ ( e KAO · f KAO)↓. Then there exist x ∈ e KAO and ∗ J K J K J K y ∈ f KAO such that w  xy. In that case, there also exist u ∈ e WKAO and v ∈ f WKAO J K ∗ ∗ ∗ ∗J K ∗ J K such that x  u and y  v. Hence, we find that w  xy  xv  uv ∈ e · f KAO, J K and thus w ∈ f KAO. (iii) When L is an observationJ K language, we define Ln by setting L0 = {} and Ln+1 = L·Ln. It is then straightforward to see that L? = S Ln. Similarly, we define en by e0 = 1 n∈N and en+1 = e · en. By (i) and (ii), we find:

? ? [ n [ n [ n ? e KAO = e WKAO↓ = e WKAO↓ = e KAO = e KAO↓ = e KAO↓ . J J K J K J K J K J K J K n∈N n∈N n∈N

I Lemma 4.9 (KAO soundness). Let e, f ∈ TGR. Now, if e ≡KAO f, then e KAO = f KAO. J K J K

Proof. We proceed by induction on the construction of ≡KAO. In the base, we check the pairs that generate ≡KAO. For the pairs that come from ≡WKAO, we know by the above that

e WKAO = f WKAO, and hence e KAO = e WKAO↓ = f WKAO↓ = f KAO. J K J K J K J K J K J K For the contraction law, let p, q ∈ TP ; we should show p ∧ q KAO ⊆ p · q KAO. To see this, J K J K let w ∈ p ∧ q KAO. Then there exists an α ∈ p ∧ q WKAO = p BA ∩ q BA with w  α. Since J K J K J K J K αα ∈ p · q WKAO and α  αα, it follows that α ∈ p · q KAO and hence w ∈ p · q KAO. ForJ theK inductive step, we should verify thatJ the claimK is preserved byJ theK closure rules for congruence. This gives us three cases to consider.

If e = e0 + e1 and f = f0 + f1 with e0 ≡KAO f0 and e1 ≡KAO f1, then by induction we know

that e0 KAO = f0 KAO and e1 KAO = f1 KAO. By Lemma 4.8(i), we can then derive that J K J K J K J K e KAO = e0 KAO ∪ e1 KAO = f0 KAO ∪ f1 KAO = f KAO. J K J K J K J K J K J K 41:18 Kleene Algebra with Observations

If e = e0 · e1 and f0 · f1 with e0 ≡KAO f0 and e1 ≡KAO f1, then by induction we know

that e0 KAO = f0 KAO and e1 KAO = f1 KAO. By Lemma 4.8(ii), we can then derive that J K J K J K J K e KAO = ( e0 KAO · e1 KAO)↓ = ( f0 KAO · f1 KAO)↓ = f KAO J K ? J K J ?K J K J K J K If e = e0 and f = f0 and e0 ≡KAO f0, then by induction we know that e0 KAO = f0 KAO. ? ? J K J K By Lemma 4.8(iii), we can then derive that e KAO = e0 KAO↓ = f1 KAO↓ = f KAO. J J K J K J K J K B Proofs about partial derivatives

I Lemma 4.11. Let e ∈ TGR. Now (e) 5KAO e, and (e) = 1 if and only if  ∈ e KAO. J K

Proof. First, assume that  ∈ e KAO. We will now show that (e) = 1 by induction on e. In the base, it suffices to considerJ theK case where e = 1; here, we find (e) = 1 by definition of . For the inductive step, there are three cases to consider.

If e = e0 + e1, then  ∈ e0 KAO or  ∈ e1 KAO by Lemma 4.8(i). By induction, we then J K J K find that (e0) = 1 or (e1) = 1, and hence (e) = 1. ∗ If e = e0 · e1, then   wx with w ∈ e0 KAO and x ∈ e1 KAO by Lemma 4.8(ii). By J K J K definition of , we find that w = x = . By induction, we then have that (e0) = (e1) = 1, and thus (e) = 1. ? If e = e0, then (e) = 1 by definition.

Next, note that to prove that (e) = 1 implies  ∈ e KAO, it suffices to prove that J K (e) 5KAO e, by Lemma 4.9. This we do by induction on e. In the base, the claim is true trivially. For the inductive step, there are three cases to consider.

If e = e0 + e1, then we can derive that by induction that

(e) ≡KAO (e0) + (e1) 5KAO e0 + e1 = e .

If e = e0 · e1, then we can derive by induction that

(e) = (e0) · (e1) 5KAO e0 · e1 = e .

? ? ? If e = e0, then (e) = 1 5KAO 1 + e0 · e0 ≡KAO e0 = e. J

0 0 I Lemma 4.14. Let e ∈ TGR, α ∈ A, and e ∈ ζ(e, α). Now ζ(e , α) ⊆ ζ(e, α).

Proof. We proceed by induction on e. In the base, where e ∈ {0, 1} ∪ Σ ∪ TP , the claim holds vacuously. For the inductive step, there are three cases to consider. 0 0 If e = e0 + e1, then e ∈ ζ(e0, α) or e ∈ ζ(e1, α); we assume the former without loss of 0 generality. By induction, we then know that ζ(e , α) ⊆ ζ(e0, α); since ζ(e0, α) ⊆ ζ(e, α), the claim then follows.

If e = e0 · e1, we have three subcases to consider. 0 0 0 0 If e = e0 · e1 with e0 ∈ ζ(e0, α), then by induction we know that ζ(e0, α) ⊆ ζ(e0, α). 00 0 If e ∈ ζ(e0 · e1, α), then we have two more subcases to consider. 00 00 00 0 00 ∗ If e = e0 · e1 for some e0 ∈ ζ(e0, α), then by induction we know that e0 ∈ ζ(e0, α), 00 and therefore e ∈ ζ(e0 · e1, α). 00 0 0 ∗ If e ∈ Z(e0, e1, α) = ζ(e1, α), then one of two cases applies. First, if (e0) = 1, we 00 00 0 find that e ∈ ζ(e1, α) = Z(e0, e1, α). Second, if there exists an e0 ∈ ζ(e0, α) such 00 00 00 that (e0 ) = 1, then we find that e0 ∈ ζ(e0, α) by induction; hence, e ∈ ζ(e1, α) = Z(e0, e1, α) ⊆ ζ(e, α). 0 00 If e ∈ Z(e0, e1, α) = ζ(e1, α) then by induction we find e ∈ ζ(e1, α) = Z(e0, e1, α) ⊆ δ(e, α). T. Kappé, P. Brunet, J. Rot, A. Silva, J. Wagemaker and F. Zanasi 41:19

? 0 0 ? 0 00 0 If e = e0, then e = e0 · e0 for some e0 ∈ ζ(e0, α). If e ∈ ζ(e , α), then we have two subcases to consider. 00 00 ? 00 0 00 If e = e0 · e0 for some e0 ∈ ζ(e0, α), then by induction we know that e0 ∈ ζ(e0, α). It therefore follows that e00 ∈ ζ(e, α). 00 0 ? ? 00 If e ∈ Z(e0, e0, α) = ζ(e0, α), then e ∈ ζ(e, α) immediately. J 0 0 I Lemma 4.15. Let e ∈ TGR. For all a ∈ Σ and e ∈ δ(e, a), we have a · e 5KAO e. 0 0 Furthermore, for all α ∈ A and e ∈ ζ(e, α), we have πα · e 5KAO e. Proof. We prove the second claim by induction on e; the first claim can be shown analogously. 0 For the base, the claim holds vacuously if e ∈ {0, 1} ∪ Σ. When e ∈ TP , we have e = 1 and 0 πα 5BA e. Hence, we conclude that πα · e ≡KAO πα 5KAO e. For the inductive step, there are three cases to consider. 0 0 If e = e0 + e1, then either e ∈ ζ(e0, α) or e ∈ ζ(e1, α); w.l.o.g., we assume the former. 0 By induction, we then find that πα · e 5KAO e0 5KAO e. If e = e0 · e1, then there are two cases to consider. 0 0 0 0 If e = e0 · e1 for some e0 ∈ ζ(e0, α), then by induction we know that πα · e0 5KAO e0. 0 0 By induction, we derive πα · e ≡KAO πα · e0 · e1 5KAO e0 · e1 = e. 0 0 If e ∈ Z(e0, e1, α) = ζ(e1, α), then πα · e 5KAO e1, and one of two cases applies. First, 0 if (e0) = 1, then πα · e 5KAO e1 ≡KAO 1 · e1 5KAO e0 · e1 = e, by Lemma 4.11. Second, 0 0 0 if there exists some e0 ∈ ζ(e0, α) with (e0) = 1, then by induction πα · e0 5KAO e0. Hence, we can derive that

    π_α · e′ ≡_KAO (π_α ∧ π_α) · e′
            ≤_KAO (π_α · π_α) · e′
            ≡_KAO (π_α · 1) · (π_α · e′)
            ≤_KAO (π_α · e0′) · (π_α · e′)
            ≤_KAO e0 · e1
            ≡_KAO e .

If e = e0*, then e′ = e0′ · e0* for some e0′ ∈ ζ(e0, α). By induction, we know that π_α · e0′ ≤_KAO e0. We can then derive that

    π_α · e′ ≡_KAO π_α · e0′ · e0* ≤_KAO e0 · e0* ≤_KAO 1 + e0 · e0* ≡_KAO e0* . ◀

▶ Lemma 4.17. Let e ∈ T_GR. If a ∈ Σ, then δ(e, a) ⊆ ρ(e); if e′ ∈ ρ(e), then δ(e′, a) ⊆ ρ(e). Furthermore, if α ∈ A, then ζ(e, α) ⊆ ρ(e); if e′ ∈ ρ(e), then ζ(e′, α) ⊆ ρ(e).

Proof. We prove the second claim by induction on e; the first claim can be proved analogously. In the base, there are two cases to consider. If e ∈ {0, 1} ∪ Σ, then the claim holds vacuously. Otherwise, if e ∈ T_P, then ζ(e, α) ⊆ {1} ⊆ ρ(e). For the inductive step, there are three cases to consider.

If e = e0 + e1, then by induction we have ζ(e0, α) ⊆ ρ(e0) and ζ(e1, α) ⊆ ρ(e1). Hence, we find that ζ(e, α) = ζ(e0, α) ∪ ζ(e1, α) ⊆ ρ(e0) ∪ ρ(e1) = ρ(e).

If e = e0 · e1, then by induction we have ζ(e0, α) ⊆ ρ(e0) and ζ(e1, α) ⊆ ρ(e1). Hence, we can calculate that

    ζ(e, α) = {e0′ · e1 : e0′ ∈ ζ(e0, α)} ∪ Z(e0, e1, α) ⊆ {e0′ · e1 : e0′ ∈ ρ(e0)} ∪ ρ(e1) = ρ(e) .

If e = e0*, then by induction we have ζ(e0, α) ⊆ ρ(e0). Hence, we find that

    ζ(e, α) = {e0′ · e0* : e0′ ∈ ζ(e0, α)} ⊆ {e0′ · e0* : e0′ ∈ ρ(e0)} ⊆ ρ(e) .

For the second part, we prove that if e′ ∈ ρ(e), then ρ(e′) ⊆ ρ(e); this, in conjunction with the first part, validates the claim. We proceed by induction on e. In the base, there are two cases to consider. First, if e ∈ {0, 1} ∪ Σ, then the claim holds vacuously. Otherwise, if e ∈ T_P, then e′ = 1, and therefore ρ(e′) = ∅ ⊆ ρ(e). For the inductive step, there are three cases to consider.

If e = e0 + e1, assume w.l.o.g. that e′ ∈ ρ(e0). By induction, ρ(e′) ⊆ ρ(e0) ⊆ ρ(e).

If e = e0 · e1, then there are two cases to consider. If e′ = e0′ · e1 where e0′ ∈ ρ(e0), then we calculate

    ρ(e′) = {e0″ · e1 : e0″ ∈ ρ(e0′)} ∪ ρ(e1) ⊆ {e0″ · e1 : e0″ ∈ ρ(e0)} ∪ ρ(e1) = ρ(e) .

If e′ ∈ ρ(e1), then by induction we have ρ(e′) ⊆ ρ(e1) ⊆ ρ(e).

If e = e0*, then either e′ = 1 or e′ = e0′ · e0* for some e0′ ∈ ρ(e0). In the former case, ρ(e′) = ∅. In the latter case, we find by induction that

    ρ(e′) = {e0″ · e0* : e0″ ∈ ρ(e0′)} ∪ ρ(e0*) ⊆ {e0″ · e0* : e0″ ∈ ρ(e0)} ∪ ρ(e0*) = ρ(e) . ◀
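The case analyses in the proofs of Lemmas 4.14–4.17 pin down the shape of the observation derivative ζ and of the auxiliary set Z. Continuing the hypothetical Python rendering of terms above, a sketch of ζ could look as follows; the base cases and the exact formulation of Z are reconstructed from these proofs and should be treated as assumptions, and δ (the analogous derivative for action letters) is omitted here because its definition is not spelled out in this appendix.

```python
from typing import Set

def Z(e0: Term, e1: Term, alpha: str) -> Set[Term]:
    # Auxiliary set for sequential composition, reconstructed from the proofs above:
    # it contributes ζ(e1, α) when e0 terminates now, or can observe α and then terminate.
    if eps(e0) == 1 or any(eps(d) == 1 for d in zeta(e0, alpha)):
        return zeta(e1, alpha)
    return set()

def zeta(e: Term, alpha: str) -> Set[Term]:
    """Observation derivative ζ(e, α): terms reachable from e by observing the atom α."""
    if isinstance(e, Obs):
        return {One()} if alpha in e.atoms else set()
    if isinstance(e, Plus):
        return zeta(e.left, alpha) | zeta(e.right, alpha)
    if isinstance(e, Seq):
        return {Seq(d, e.right) for d in zeta(e.left, alpha)} | Z(e.left, e.right, alpha)
    if isinstance(e, Star):
        return {Seq(d, e) for d in zeta(e.arg, alpha)}
    return set()  # 0, 1 and action letters admit no observation step
```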

▶ Lemma 4.19. Let e ∈ T_GR. Then e ≡_KAO Σ_{e′∈ι(e)} e′.

Proof. The proof proceeds by induction on e. In the base, where e ∈ {0, 1} ∪ Σ ∪ T_P, the claim holds trivially. For the inductive step, there are three cases to consider.

If e = e0 + e1, then we calculate by induction that

    e = e0 + e1 ≡_KAO Σ_{e0′∈ι(e0)} e0′ + Σ_{e1′∈ι(e1)} e1′ ≡_KAO Σ_{e′∈ι(e)} e′ .

If e = e0 · e1, then we calculate by induction that

    e = e0 · e1 ≡_KAO (Σ_{e0′∈ι(e0)} e0′) · e1 ≡_KAO Σ_{e′∈ι(e)} e′ .

If e = e0*, then we calculate by induction that

    e = e0* ≡_KAO 1 + e0 · e0* ≡_KAO 1 + (Σ_{e0′∈ι(e0)} e0′) · e0* ≡_KAO Σ_{e′∈ι(e)} e′ . ◀
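The three inductive equations in this proof also suggest a shape for ι itself. The sketch below, continuing the Python rendering from above, is a reconstruction offered for illustration only; in particular, the base case ι(e) = {e} is an assumption.

```python
from typing import Set

def iota(e: Term) -> Set[Term]:
    """Decomposition ι(e), so that e is provably equivalent to the sum of ι(e) (Lemma 4.19)."""
    if isinstance(e, Plus):
        return iota(e.left) | iota(e.right)
    if isinstance(e, Seq):
        return {Seq(d, e.right) for d in iota(e.left)}
    if isinstance(e, Star):
        return {One()} | {Seq(d, e) for d in iota(e.arg)}
    return {e}  # assumption: 0, 1, actions and observations decompose into themselves
```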

C Proofs towards completeness

▶ Proposition 5.2. Let e, f ∈ T_AGR be closed, and ⟦e⟧_KAO = ⟦f⟧_KAO; then e ≡_KAO f.

Proof. Note that we can regard atomic guarded rational terms as rational expressions over ∆ = Σ ∪ Π; let this interpretation be given by ⟦−⟧_KA : T_AGR → 2^(∆*). It is now easy to show that for closed g ∈ T_AGR, we have ⟦g⟧_KAO = ⟦g⟧_KA. By the premise that e and f are closed, and that ⟦e⟧_KAO = ⟦f⟧_KAO, we thus find that ⟦e⟧_KA = ⟦f⟧_KA. By Theorem 2.3, it then follows that e ≡_KA f; hence, e ≡_KAO f, since the axioms generating ≡_KA are also valid for ≡_KAO. ◀

▶ Proposition 5.6 (Least solutions). Let e, f ∈ T_GR. Now s_e # f is the least (w.r.t. ≤_T) ρ(e)-vector s such that for each e′ ∈ ρ(e), it holds that

    ε(e′) · f + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s(e″) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · s(e″) ≤_T s(e′) .

When s = s_e # f, the above holds as an equivalence, i.e., we can substitute ≤_T with ≡_T.

Proof. Suppose that s is a ρ(e)-vector that satisfies the system of equations above. Using Lemma 4.17, we can then calculate for any e′ ∈ ρ(e) as follows:

    (M_e · s + x_e # f)(e′)
        ≡_T ε(e′) · f + Σ_{e″∈ρ(e)} M_e(e′, e″) · s(e″)
        ≡_T ε(e′) · f + Σ_{e″∈ρ(e)} ( Σ_{a∈Σ : e″∈δ(e′,a)} a + Σ_{α∈A : e″∈ζ(e′,α)} π_α ) · s(e″)
        ≡_T ε(e′) · f + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s(e″) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · s(e″)
        ≤_T s(e′) .

Hence, we find that M_e · s + x_e # f ≤_T s. Since M_e* · (x_e # f) = (M_e* · x_e) # f = s_e # f is the least ρ(e)-vector for which this holds, we can conclude that s_e # f ≤_T s.

Conversely, a similar derivation shows that s_e # f can take the place of s in the system above, and hence the least such s is a lower bound for s_e # f.

As for the other claim, let s be the ρ(e)-vector defined as follows:

    s(e′) = ε(e′) · f + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · (s_e # f)(e″) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · (s_e # f)(e″) .

Now s ≤_T s_e # f by the first part of this proposition. By monotonicity, we also derive the following:

    ε(e′) · f + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s(e″) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · s(e″)
        ≤_T ε(e′) · f + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · (s_e # f)(e″) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · (s_e # f)(e″)
        ≡_T s(e′) ,

meaning that s satisfies the system obtained from e and f; hence, s_e # f ≤_T s by the first part of this proposition again. Consequently, s ≡_T s_e # f, validating the claim. ◀

To prove Proposition 5.7, we need two technical lemmas.

▶ Lemma C.1. Let e, f ∈ T_GR. If f′ ∈ ρ(f) and ρ(f) ⊆ ρ(e), then s_f(f′) ≤_T s_e(f′).

Proof. We fix a ρ(f)-vector s by choosing for f′ ∈ ρ(f) that s(f′) = s_e(f′). By Proposition 5.6, we have for any f′ ∈ ρ(f) that

    ε(f′) + Σ_{a∈Σ} Σ_{f″∈δ(f′,a)} a · s(f″) + Σ_{α∈A} Σ_{f″∈ζ(f′,α)} π_α · s(f″) ≤_T s(f′) .

Hence, s satisfies the system of linear equations obtained from f, and therefore s_f ≤_T s; the claim then follows. ◀

▶ Lemma C.2. Let e, f ∈ T_GR. If e′ ∈ ρ(e) and f′ ∈ ι(f), then s_e(e′) · s_f(f′) ≤_T s_{e·f}(e′ · f).

Proof. Using Proposition 5.6 and Lemmas 4.11, 4.19 and C.1, we calculate for all e′ ∈ ρ(e):

    ε(e′) · s_f(f′) + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s_{e·f}(e″ · f) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · s_{e·f}(e″ · f)
        ≡_T ε(e′) · ( ε(f′) + Σ_{a∈Σ} Σ_{f″∈δ(f′,a)} a · s_f(f″) + Σ_{α∈A} Σ_{f″∈ζ(f′,α)} π_α · s_f(f″) )
              + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s_{e·f}(e″ · f) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · s_{e·f}(e″ · f)
        ≤_T ε(e′ · f) + Σ_{a∈Σ} Σ_{g∈δ(e′·f,a)} a · s_{e·f}(g) + Σ_{α∈A} Σ_{g∈ζ(e′·f,α)} π_α · s_{e·f}(g)
        ≡_T s_{e·f}(e′ · f) .

Hence, the ρ(e)-vector s where s(e′) = s_{e·f}(e′ · f) satisfies the system obtained from e and s_f(f′). By Proposition 5.6, we find for all e′ ∈ ρ(e) that

    s_e(e′) · s_f(f′) = (s_e # s_f(f′))(e′) ≤_T s(e′) = s_{e·f}(e′ · f) . ◀

▶ Proposition 5.7. Let e ∈ T_GR and e′ ∈ ρ(e). Then s_e(e′) ≡_KAO e′.

Proof. Let s be the identity on ρ(e). By Lemma 4.15 and Lemma 4.11, we find that s satisfies the system of linear equations obtained from e as in Proposition 5.6. Hence, by that lemma, for all e′ ∈ ρ(e) we have s_e(e′) ≤_KAO s(e′) = e′.

The other direction goes by induction on e. In the base, there are four cases.

If e = 0, then ρ(e) = ∅, and so the claim holds trivially.

If e = 1, then e′ = 1, and e′ = 1 = ε(1) ≤_KAO s_e(1) = s_e(e′).

If e = a for some a ∈ Σ, then either e′ = a or e′ = 1. The case where e′ = 1 can be argued as before. On the other hand, if e′ = a, then we find that

    e′ = a ≡_KAO a · 1 ≤_KAO a · s_e(1) ≤_KAO s_e(a) = s_e(e′) .

If e = p for some p ∈ T_P, then either e′ = p or e′ = 1. As before, we have to argue the case where e′ = p only. By Theorem 2.1 and Lemma 2.2, we find

    e′ = p ≡_KAO Σ_{π_α ≤_BA p} π_α ≤_KAO Σ_{π_α ≤_BA p} π_α · s_e(1) ≤_KAO s_e(p) = s_e(e′) .

For the inductive step, there are three cases.

If e = e0 + e1, then the claim follows by Lemma C.1.

If e = e0 · e1, then either e′ = e0′ · e1 for some e0′ ∈ ρ(e0), or e′ ∈ ρ(e1). In the former case, we find by induction, Lemmas 4.19 and C.2 that

    e′ = e0′ · e1 ≡_KAO Σ_{e1′∈ι(e1)} e0′ · e1′ ≤_KAO Σ_{e1′∈ι(e1)} s_{e0}(e0′) · s_{e1}(e1′) ≤_KAO s_e(e′) .

In the latter case, we find e′ ≤_KAO s_e(e′) as before.

If e = e0*, then it suffices to treat the case where e′ = e0′ · e with e0′ ∈ ρ(e0) only. By Lemma C.2, we derive for any e″ ∈ ι(e) that s_{e0}(e0′) · s_e(e″) ≤_KAO s_{e0·e}(e0′ · e). By Lemma C.1 and the observation that ρ(e0 · e) ⊆ ρ(e), we furthermore know that s_{e0·e}(e0′ · e) ≤_KAO s_e(e0′ · e). We then conclude by induction, Lemma 4.19 and the above that

    e′ = e0′ · e ≡_KAO Σ_{e″∈ι(e)} e0′ · e″ ≤_KAO Σ_{e″∈ι(e)} s_{e0}(e0′) · s_e(e″) ≤_KAO s_e(e0′ · e) = s_e(e′) . ◀

▶ Corollary 5.8 (Transformation preserves equivalence). Let e ∈ T_GR; then e ≡_KAO ê.

Proof. Using Proposition 5.7 and Lemma 4.19, we calculate:

    e ≡_KAO Σ_{e′∈ι(e)} e′ ≡_KAO Σ_{e′∈ι(e)} s_e(e′) ≡_KAO ê . ◀

▶ Lemma 5.9. Let L ⊆ Γ*, and define ⇝ as the smallest relation on Γ* satisfying the rules

    ε ⇝ ε        aw ⇝ ax (for a ∈ Γ with w ⇝ x)        αw ⇝ ααx (for α ∈ A with w ⇝ x) .

Then L↓ is the smallest subset of Γ* such that L ⊆ L↓, and such that if w ⇝ x with x ∈ L↓, then w ∈ L↓.

Proof. It suffices to prove that, whenever w ≺ x, also w ⇝ x, and that whenever w ⇝ x, also w ≺* x, where ≺* denotes the reflexive and transitive closure of ≺.

For the first part, suppose that w ≺ x; in that case, there exist u, v ∈ Γ* and α ∈ A such that w = uαv and x = uααv. A straightforward inductive argument on the length of v then shows that v ⇝ v, and hence αv ⇝ ααv. By a further induction on the length of u, we conclude that w = uαv ⇝ uααv = x.

For the other direction, suppose that w ⇝ x; we proceed by induction on the construction of ⇝. In the base, we have that w = ε = x, in which case w ≺* x immediately. For the inductive step, there are two cases to consider.

If there exists an a ∈ Γ such that w = aw′ and x = ax′ with w′ ⇝ x′, then by induction w′ ≺* x′. There must then exist y_0, …, y_{n−1} ∈ Γ* such that w′ = y_0 ≺ y_1 ≺ ⋯ ≺ y_{n−1} = x′. By induction on n, we can then show that w = aw′ = ay_0 ≺ ay_1 ≺ ⋯ ≺ ay_{n−1} = ax′ = x.

If there exists an α ∈ A such that w = αw′ and x = ααx′ with w′ ⇝ x′, then by induction w′ ≺* x′. By an argument similar to the previous case, we can show that w = αw′ ≺* αx′. Since αx′ ≺ ααx′ = x, it follows that w ≺* x. ◀

▶ Proposition 5.10. Let e ∈ T_GR. For all e′ ∈ ρ(e), it holds that s_e(e′) is closed.

Proof. By definition, we have that ⟦s_e(e′)⟧_WKAO ⊆ ⟦s_e(e′)⟧_KAO. For the reverse inclusion, it suffices to prove that ⟦s_e(e′)⟧_WKAO is closed under ⇝, by Lemma 5.9. More precisely, if x ∈ ⟦s_e(e′)⟧_WKAO and w ⇝ x, we should show that w ∈ ⟦s_e(e′)⟧_WKAO. We do this in tandem for all e′ ∈ ρ(e), and proceed by induction on ⇝. In the base, we have that w = ε = x, and so the claim follows.

For the inductive step, there are two cases to consider.

If there exists an a ∈ Γ such that w = aw′ and x = ax′ with w′ ⇝ x′, then either a ∈ Σ or a ∈ A. In the latter case, we find by Proposition 5.6 (for T = WKAO) and Lemma 4.5 that there exist α ∈ A and e″ ∈ ζ(e′, α) such that a ∈ ⟦π_α⟧_BA and x′ ∈ ⟦s_e(e″)⟧_WKAO. By induction, w′ ∈ ⟦s_e(e″)⟧_WKAO; but then, again by Proposition 5.6 and Lemma 4.5, it follows that w = aw′ ∈ ⟦s_e(e′)⟧_WKAO. The case where a ∈ Σ can be argued analogously.

Suppose there exists an α ∈ A such that w = αw′ and x = ααx′ with w′ ⇝ x′. As in the previous case, we find β, γ ∈ A and e″ ∈ ζ(e′, β) and e‴ ∈ ζ(e″, γ) such that α ∈ ⟦π_β⟧_BA and α ∈ ⟦π_γ⟧_BA, as well as x′ ∈ ⟦s_e(e‴)⟧_WKAO. By induction, we find w′ ∈ ⟦s_e(e‴)⟧_WKAO. By Lemma 2.2, it also follows that β = α = γ; hence, by Lemma 4.14, we find e‴ ∈ ζ(e′, α). Again applying Proposition 5.6 and Lemma 4.5, we can then conclude that w = αw′ ∈ ⟦s_e(e′)⟧_WKAO. ◀

▶ Corollary 5.11 (Transformation yields closed term). Let e ∈ T_GR; now ê is closed.

Proof. Using Lemmas 4.5, 4.8 and 4.9 and Proposition 5.10, we find

    ⟦ê⟧_KAO = ⋃_{e′∈ι(e)} ⟦s_e(e′)⟧_KAO = ⋃_{e′∈ι(e)} ⟦s_e(e′)⟧_WKAO = ⟦ê⟧_WKAO . ◀
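Lemma 5.9 also gives an operational handle on the closure: for a finite language, L↓ can be computed by repeatedly contracting adjacent duplicate atoms, i.e., adding uαv whenever uααv is already present. The following sketch does exactly that; the word representation and the explicit atoms parameter are choices made for this sketch only.

```python
from typing import FrozenSet, Set, Tuple

Word = Tuple[str, ...]

def close(language: Set[Word], atoms: FrozenSet[str]) -> Set[Word]:
    """Contraction closure L↓ of a finite language: whenever u α α v is in the
    closure for an atom α, the contracted word u α v is added as well."""
    closure = set(language)
    work = list(language)
    while work:
        w = work.pop()
        for i in range(len(w) - 1):
            if w[i] == w[i + 1] and w[i] in atoms:
                contracted = w[:i] + w[i + 1:]  # drop one copy of the duplicated atom
                if contracted not in closure:
                    closure.add(contracted)
                    work.append(contracted)
    return closure

# Example: with atoms = {'α'},
#   close({('α', 'α', 'a')}, frozenset({'α'})) == {('α', 'α', 'a'), ('α', 'a')}
```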

D Proofs about the decision procedure

For the proofs that follow, we shall need the following technical lemma.

▶ Lemma D.1. Let e ∈ T_GR. If a ∈ Σ and e′ ∈ ι(e), then δ(e′, a) ⊆ δ(e, a). Furthermore, if α ∈ A and e′ ∈ ι(e), then ζ(e′, α) ⊆ ζ(e, α).

Proof. We prove the second claim by induction on e; the first claim can be shown similarly. In the base, where e ∈ {0, 1} ∪ Σ ∪ T_P, the claim holds immediately. For the inductive step, there are three cases to consider.

If e = e0 + e1, assume w.l.o.g. that e′ ∈ ι(e0). By induction, we then derive

    ζ(e′, α) ⊆ ζ(e0, α) ⊆ ζ(e, α) .

If e = e0 · e1, then e′ = e0′ · e1 for some e0′ ∈ ι(e0). We now claim that Z(e0′, e1, α) ⊆ Z(e0, e1, α). To see this, it suffices to consider the case where Z(e0′, e1, α) = ζ(e1, α), i.e., where ε(e0′) = 1 or there exists an e0″ ∈ ζ(e0′, α) with ε(e0″) = 1. In the former case, ε(e0) = 1 by Lemma 4.11 and Lemma 4.19. In the latter case, we find that e0″ ∈ ζ(e0, α) by induction. Thus, we have Z(e0′, e1, α) = ζ(e1, α) = Z(e0, e1, α). Hence, we derive by induction that

    ζ(e′, α) = {e0″ · e1 : e0″ ∈ ζ(e0′, α)} ∪ Z(e0′, e1, α)
             ⊆ {e0″ · e1 : e0″ ∈ ζ(e0, α)} ∪ Z(e0, e1, α) = ζ(e, α) .

If e = e0*, then e′ = e0′ · e0* for some e0′ ∈ ι(e0). By induction, we then derive

    ζ(e′, α) = {e0″ · e0* : e0″ ∈ ζ(e0′, α)} ∪ Z(e0′, e0*, α)
             ⊆ {e0″ · e0* : e0″ ∈ ζ(e0, α)} ∪ ζ(e0*, α) = ζ(e, α) . ◀

▶ Theorem 6.2 (Fundamental theorem). For all e ∈ T_GR, the following holds:

    e ≡_KAO ε(e) + Σ_{a∈Σ} Σ_{e′∈δ(e,a)} a · e′ + Σ_{α∈A} Σ_{e′∈ζ(e,α)} π_α · e′ .

Proof. Let us write ẽ for the right-hand side of the claimed equivalence. By Lemma 4.11 and Lemma 4.15, we already know that ẽ ≤_KAO e. For the other direction, we derive as follows:

    e ≡_KAO Σ_{e′∈ι(e)} s_e(e′)    (Corollary 5.8)

      ≡_KAO Σ_{e′∈ι(e)} ( ε(e′) + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s_e(e″) + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · s_e(e″) )    (Proposition 5.6)

      ≡_KAO Σ_{e′∈ι(e)} ( ε(e′) + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · e″ + Σ_{α∈A} Σ_{e″∈ζ(e′,α)} π_α · e″ )    (Proposition 5.7)

      ≡_KAO Σ_{e′∈ι(e)} ε(e′) + Σ_{a∈Σ} Σ_{e′∈ι(e)} Σ_{e″∈δ(e′,a)} a · e″ + Σ_{α∈A} Σ_{e′∈ι(e)} Σ_{e″∈ζ(e′,α)} π_α · e″    (Distributivity)

      ≤_KAO ε(e) + Σ_{a∈Σ} Σ_{e′∈δ(e,a)} a · e′ + Σ_{α∈A} Σ_{e′∈ζ(e,α)} π_α · e′ = ẽ .    (Def. ε, Lemma D.1)

Hence, e ≤_KAO ẽ, completing the proof. ◀
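As a small sanity check, consider the instance of Theorem 6.2 for an observation term p ∈ T_P, under the base cases assumed in the sketches above (ε(p) = 0, δ(p, a) = ∅, and ζ(p, α) = {1} precisely when π_α ≤_BA p). The right-hand side then collapses to

    ε(p) + Σ_{a∈Σ} Σ_{e′∈δ(p,a)} a · e′ + Σ_{α∈A} Σ_{e′∈ζ(p,α)} π_α · e′ = 0 + 0 + Σ_{π_α ≤_BA p} π_α · 1 ≡_KAO Σ_{π_α ≤_BA p} π_α ,

which is the decomposition of p into the atoms below it, as also used (via Theorem 2.1) in the proof of Proposition 5.7.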

To prove Lemma 6.3, the following intermediate lemma is useful.

▶ Lemma D.2. Let α ∈ A, w ∈ Γ* and e ∈ T_GR. Now, if αw ∈ ⟦e⟧_KAO, then there exists an e′ ∈ ζ(e, α) such that w ∈ ⟦e′⟧_KAO.

Proof. By Lemma 4.19 and Lemma 4.9, we know that αw ∈ ⟦Σ_{e′∈ι(e)} e′⟧_KAO. Hence, by Lemma 4.8, there exists an e′ ∈ ι(e) such that αw ∈ ⟦e′⟧_KAO. By Proposition 5.7 and Lemma 4.9, it then follows that αw ∈ ⟦s_e(e′)⟧_KAO. By Proposition 5.10, we find that αw ∈ ⟦s_e(e′)⟧_WKAO. We furthermore know, by Proposition 5.6 and Lemmas 2.2 and 4.5, as well as the definition of ⟦−⟧_WKAO, that

    ⟦s_e(e′)⟧_WKAO = ⟦ ε(e′) + Σ_{a∈Σ} Σ_{e″∈δ(e′,a)} a · s_e(e″) + Σ_{β∈A} Σ_{e″∈ζ(e′,β)} π_β · s_e(e″) ⟧_WKAO
                   = ⟦ε(e′)⟧_WKAO ∪ ⋃_{a∈Σ} ⋃_{e″∈δ(e′,a)} {a} · ⟦s_e(e″)⟧_WKAO ∪ ⋃_{β∈A} ⋃_{e″∈ζ(e′,β)} {β} · ⟦s_e(e″)⟧_WKAO .

Since αw starts with an atom, it appears in the third component above, i.e., there must exist β ∈ A and e″ ∈ ζ(e′, β) with αw ∈ {β} · ⟦s_e(e″)⟧_WKAO. Hence, α = β and w ∈ ⟦s_e(e″)⟧_WKAO. By Propositions 5.7 and 5.10 and Lemma 4.9, we find that ⟦s_e(e″)⟧_WKAO = ⟦e″⟧_KAO. Lastly, by Lemma D.1, we know that e″ ∈ ζ(e, α). In total, we conclude that there is an e″ ∈ ζ(e, α) such that w ∈ ⟦e″⟧_KAO. ◀

▶ Lemma 6.3. Let us fix α ∈ A. For all e ∈ T_GR, we have:

    ⟦ Σ_{e′∈ζ(e,α)} π_α · e′ ⟧_KAO = ⋃_{e′∈ζ(e,α)} {α} · ⟦e′⟧_KAO .

Proof. The inclusion from right to left follows by Lemma 2.2 and Lemma 4.8. For the other direction, note that by Lemma 2.2 and Lemma 4.5 we have

    ⟦ Σ_{e′∈ζ(e,α)} π_α · e′ ⟧_WKAO = ⋃_{e′∈ζ(e,α)} {α} · ⟦e′⟧_WKAO ⊆ ⋃_{e′∈ζ(e,α)} {α} · ⟦e′⟧_KAO .

It then suffices to prove that the right-hand side of the claimed equation is closed under ≺, since the left-hand side is the least set that contains the set above and satisfies this closure property. Thus, suppose e′ ∈ ζ(e, α), and let w ∈ {α} · ⟦e′⟧_KAO with x ≺ w. We should find an e″ ∈ ζ(e, α) such that x ∈ {α} · ⟦e″⟧_KAO. To this end, we note that w = αw′ for some w′ ∈ ⟦e′⟧_KAO, and that moreover there exist u, v ∈ Γ* and β ∈ A such that w = uββv and x = uβv. This gives us two cases.

(i) If u = ε, then α = β. Since w′ = αv, we find by Lemma D.2 an e″ ∈ ζ(e′, α) such that v ∈ ⟦e″⟧_KAO. By Lemma 4.14, we furthermore know that e″ ∈ ζ(e, α). The claim is then satisfied, since x = αv ∈ {α} · ⟦e″⟧_KAO.

(ii) If u ≠ ε, then write u = αu′. We then know that w′ = u′ββv. By closure of ⟦e′⟧_KAO with respect to ≺, it follows that u′βv ∈ ⟦e′⟧_KAO. Hence, x = uβv = αu′βv ∈ {α} · ⟦e′⟧_KAO, satisfying the claim, since e′ ∈ ζ(e, α). ◀

▶ Proposition 6.4. For all e ∈ T_GR, we have:

    ⟦e⟧_KAO = {ε : ε(e) = 1} ∪ ⋃_{a∈Σ} ⋃_{e′∈δ(e,a)} {a} · ⟦e′⟧_KAO ∪ ⋃_{α∈A} ⋃_{e′∈ζ(e,α)} {α} · ⟦e′⟧_KAO .

Proof. We derive as follows.

    ⟦e⟧_KAO = ⟦ ε(e) + Σ_{a∈Σ} Σ_{e′∈δ(e,a)} a · e′ + Σ_{α∈A} Σ_{e′∈ζ(e,α)} π_α · e′ ⟧_KAO    (Lemma 4.9, Theorem 6.2)
            = ⟦ε(e)⟧_KAO ∪ ⟦ Σ_{a∈Σ} Σ_{e′∈δ(e,a)} a · e′ ⟧_KAO ∪ ⟦ Σ_{α∈A} Σ_{e′∈ζ(e,α)} π_α · e′ ⟧_KAO    (Lemma 4.8)
            = {ε : ε(e) = 1} ∪ ⟦ Σ_{a∈Σ} Σ_{e′∈δ(e,a)} a · e′ ⟧_KAO ∪ ⟦ Σ_{α∈A} Σ_{e′∈ζ(e,α)} π_α · e′ ⟧_KAO    (Lemma 4.11)
            = {ε : ε(e) = 1} ∪ ⋃_{a∈Σ} ⋃_{e′∈δ(e,a)} {a} · ⟦e′⟧_KAO ∪ ⋃_{α∈A} ⋃_{e′∈ζ(e,α)} {α} · ⟦e′⟧_KAO .    (Lemma 6.3)

In the last step, we also use that the closure distributes over the semantics of a · e′: as a ∈ Σ, we know that

    {a} · ⟦e′⟧_KAO = {a} · (⟦e′⟧_WKAO↓) = ({a} · ⟦e′⟧_WKAO)↓ = ⟦a · e′⟧_WKAO↓ = ⟦a · e′⟧_KAO . ◀
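Proposition 6.4 can be read as a recipe for enumerating ⟦e⟧_KAO level by level. The sketch below, continuing the Python rendering from earlier, unfolds the equation to bounded depth; the derivative functions are passed in as parameters, since δ is not spelled out in this appendix.

```python
from typing import Callable, FrozenSet, Set, Tuple

Word = Tuple[str, ...]
Deriv = Callable[[Term, str], Set[Term]]

def words_up_to(e: Term, n: int, sigma: FrozenSet[str], atoms: FrozenSet[str],
                delta: Deriv, zeta: Deriv) -> Set[Word]:
    """All words of ⟦e⟧_KAO of length at most n, by unfolding Proposition 6.4."""
    result: Set[Word] = {()} if eps(e) == 1 else set()
    if n == 0:
        return result
    for a in sigma:
        for d in delta(e, a):
            result |= {(a,) + w for w in words_up_to(d, n - 1, sigma, atoms, delta, zeta)}
    for alpha in atoms:
        for d in zeta(e, alpha):
            result |= {(alpha,) + w for w in words_up_to(d, n - 1, sigma, atoms, delta, zeta)}
    return result
```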

▶ Corollary 6.6. For all e, f ∈ T_GR, we have

    ⟦e⟧_KAO = ⟦f⟧_KAO  ⟺  ⋃_{e′∈ι(e)} ℓ(e′) = ⋃_{f′∈ι(f)} ℓ(f′) .

Proof. We derive as follows.

    ⟦e⟧_KAO = ⟦f⟧_KAO  ⟺  ⟦ Σ_{e′∈ι(e)} e′ ⟧_KAO = ⟦ Σ_{f′∈ι(f)} f′ ⟧_KAO    (Lemma 4.19)
                        ⟺  ⋃_{e′∈ι(e)} ⟦e′⟧_KAO = ⋃_{f′∈ι(f)} ⟦f′⟧_KAO    (Lemma 4.8)
                        ⟺  ⋃_{e′∈ι(e)} ℓ(e′) = ⋃_{f′∈ι(f)} ℓ(f′) .    (Theorem 6.5)    ◀

▶ Theorem 6.7 (Decision procedure). For all e, f ∈ T_GR, we have e ≡_KAO f if and only if there exists an R such that (ι(e), ι(f)) ∈ R and R is a bisimulation up to congruence for the syntactic automaton restricted to ρ(e) ∪ ρ(f).

Proof. We derive as follows.

    e ≡_KAO f  ⟺  ⟦e⟧_KAO = ⟦f⟧_KAO    (Theorem 5.12)
               ⟺  ⋃_{e′∈ι(e)} ℓ(e′) = ⋃_{f′∈ι(f)} ℓ(f′)    (Corollary 6.6)
               ⟺  there exists an R such that R is a bisimulation up to congruence and (ι(e), ι(f)) ∈ R .    (Theorem 2.5)    ◀
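For comparison with the procedure of Theorem 6.7, here is a deliberately naive sketch of the underlying check: it decides ⋃ℓ(X) = ⋃ℓ(Y) by a breadth-first bisimulation on the determinised syntactic automaton, without the up-to-congruence optimisation. The transition functions are passed in as parameters, eps and iota refer to the earlier sketches, and termination relies on the reachable subsets being drawn from the finite set ρ(e) ∪ ρ(f).

```python
from collections import deque
from typing import Callable, FrozenSet, Set

State = FrozenSet[Term]          # a state of the determinised syntactic automaton
Deriv = Callable[[Term, str], Set[Term]]

def accepts_empty(X: State) -> bool:
    return any(eps(e) == 1 for e in X)

def step(X: State, sym: str, deriv: Deriv) -> State:
    return frozenset(d for e in X for d in deriv(e, sym))

def equivalent(X: State, Y: State, sigma: FrozenSet[str], atoms: FrozenSet[str],
               delta: Deriv, zeta: Deriv) -> bool:
    """Plain bisimulation check: explore pairs of subsets breadth-first and
    compare their outputs; no up-to-congruence reasoning is applied."""
    todo, seen = deque([(X, Y)]), {(X, Y)}
    while todo:
        U, V = todo.popleft()
        if accepts_empty(U) != accepts_empty(V):
            return False
        for sym, deriv in [(a, delta) for a in sigma] + [(al, zeta) for al in atoms]:
            pair = (step(U, sym, deriv), step(V, sym, deriv))
            if pair not in seen:
                seen.add(pair)
                todo.append(pair)
    return True

# To decide e ≡_KAO f one would call, relying on Theorem 5.12 and Corollary 6.6:
#   equivalent(frozenset(iota(e)), frozenset(iota(f)), sigma, atoms, delta, zeta)
```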

E Tests versus observations

We now investigate the relationships between models of KAT and models of KAO. To make this discussion more articulate, we note that both types of models are instances of a more general class of models: Kleene algebras K containing a subset B ⊆ K where B is a Boolean algebra. For the purpose of this discussion, we assume a priori two (and only these two) correspondences between the two signatures ⟨K, 0, 1, +, ·, *⟩ and ⟨B, ⊥, ⊤, ∨, ∧, ¬⟩, namely:
(i) the natural order on B is compatible with the natural order on K, that is, for all a, b ∈ B such that a ∨ b = b, it also holds that a + b = b;
(ii) the contraction law, that is, for all a, b ∈ B it holds that a ∧ b ≤ a · b.

According to the laws of Boolean algebra, ∧ is a greatest lower bound (GLB) for elements of B. In most semantics, one would like to interpret conjunction as an intersection, hence as a GLB in K (and not just in B). Symbolically,

    ∀x ∈ K, ∀a, b ∈ B :  x ≤ a and x ≤ b  ⟹  x ≤ a ∧ b .    (Glb)

Any KAT satisfies (Glb), because for x ∈ K and a, b ∈ B with x ≤ a, b, we have

    x = x · 1 = x · (b + ¬b) = x · b + x · ¬b ≤ a · b + b · ¬b = a · b + 0 = a ∧ b .

In most models of KAT, including the language model and relational model, the Boolean sub-algebra is an ideal of the full algebra, i.e., it is downward-closed in the following sense:

    ∀x ∈ K, ∀a ∈ B :  x ≤ a  ⟹  x ∈ B .    (Ideal)

This is also the case in the language model of KAO. For both algebras there are models that do not satisfy this property, but it is our impression that these are not the most useful.

▶ Proposition E.1. If (Ideal) holds, then so does (Glb).

Proof. Let x ∈ K and a, b ∈ B such that x ≤ a, b. Since a ∈ B, we have x ∈ B by (Ideal). Therefore, we have x = x ∧ x ≤ a ∧ b. ◀

▶ Proposition E.2. If (Ideal) holds, then K is a model of KAO.

Proof. First, we show that a + b = a ∨ b for all a, b ∈ B. Since a + b ≤ ⊤ + ⊤ ≡_KA ⊤, we know that a + b ∈ B by (Ideal). We also know that a, b ≤_KA a + b and that a, b ≤_BA a ∨ b, so we get:

    a ∨ b ≤_KA (a + b) ∨ (a + b) ≡_BA a + b ≤_BA (a ∨ b) + (a ∨ b) ≡_KA a ∨ b .

Now, since 0 ≤_KA ⊤, 0 is also an element of B, and therefore we can conclude:

    0 ≡_BA 0 ∨ ⊥ = 0 + ⊥ ≡_KA ⊥ . ◀

Consider now the following law, which holds trivially in any KAT:

    1 = ⊤ .    (Units)

Since KAO distinguishes · from ∧, we distinguished their units as well. This turns out to be important: identifying the units leads to identifying the products in a large class of models.

▶ Proposition E.3. Suppose (Glb) and (Units) hold. Then for a, b ∈ B we have a · b = a ∧ b.

Proof. Since we have assumed the contraction law, we need to check a · b ≤ a ∧ b only. Thanks to (Glb), this inequality reduces to checking that a · b ≤ a and a · b ≤ b. Both of these hold, as for instance a · b ≤ a · ⊤ = a · 1 = a. ◀

▶ Proposition E.4. Suppose (Ideal) holds. Now K is a KAT iff (Units) holds.

Proof. The direct implication being straightforward, it remains to show the converse. By Proposition E.2, we know that K is a model of KAO, so it suffices to check that ∧ and · coincide on B. Thanks to Proposition E.1, we know that (Glb) holds, so we may conclude using Proposition E.3. ◀

This concludes our comparison between models of KAT and models of KAO.