Concurrency in Continuation-Passing Style∗ — The Companion Technical Report —

Eneia Nicolae Todoran
Technical University of Cluj-Napoca, Department of Computer Science
Baritiu Str. 28, 400027, Cluj-Napoca, Romania
[email protected]

Nikolaos S. Papaspyrou
National Technical University of Athens, School of Electrical and Computer Engineering
Software Engineering Laboratory, Polytechnioupoli, 15780 Zografou, Greece
[email protected]

Abstract By using the mathematical framework of metric semantics we develop a denotational semantics designed in continuation-passing style for an abstract concurrent language providing a general mechanism of interaction between multisets of distributed actions. The basic laws of concurrent systems are satisfied in this semantics. Next, by customizing the behavior of continuations, we obtain denotational semantics for a couple of concurrent languages incorporating mechanisms for multiparty and binary interactions. We offer denotational semantics for two imperative concurrent languages based on Hoare’s CSP and a process algebra for DNA computing introduced by Cardelli.

Keywords: continuation-passing style, denotational semantics, metric spaces

1 Introduction

Continuations are a very powerful control-flow mechanism. As abstract representations of control flow, they were introduced in the 1960s and soon came into prominence in the field of denotational semantics for modelling the semantics of programming languages [23, 38, 34]. In a few words, a continuation represents control flow by capturing the notion of “the rest of the program”, after the current point during the program’s execution. Some functional languages, such as Scheme [22] and SML/NJ [37], provide direct support for capturing and invoking continuations, which are treated as first-class values. In such languages, continuations can be used to implement a variety of advanced control flow constructs. They roughly correspond to “jumps” to different places; programmers can use them as a low-level mechanism to implement disciplined (or undisciplined) “goto”, resumable exceptions, or even coroutines. Continuation-passing style (CPS) is the style of programming in which control is passed explicitly in the form of continuations, in contrast to direct style, which is the usual style of programming. In direct style, a function is called with a number of arguments, evaluates its result, and returns it to the caller. In CPS, on the other hand, functions never return to the caller. They receive as an additional argument a continuation (or more than one, if necessary), which they invoke with the value of their result, as soon as that is computed. Programs in direct style can be automatically translated to CPS [13]. Indeed CPS

∗Printed: August 2016, Revised April 2017

is commonly employed in the development of optimizing compilers for languages with higher-order functions [3]. In recent years, CPS has found a significant practical use for synchronous or asynchronous web programming [31, 32, 12, 21]. Continuations do not only generalize the notion of a “callback” function that is standard in web programming. They can be used to circumvent the statelessness of the HTTP protocol which, in the traditional model of web programming, typically leads to poorly structured code. Several continuation-aware web servers and web frameworks have been developed [47, 44, 48, 35] and have known varying degrees of popularity. In the semantics of programming languages, continuations can be used to model a variety of advanced control concepts, including nonlocal exits, exceptions, coroutines [18] and even multitasking [46, 14]. Various forms of delimited continuations [15, 16, 36, 33, 13, 19, 25] provide even finer control than traditional continuations; they represent a part of the rest of what remains to be computed and they allow (or impose) a more structured use of continuations. However, it is generally believed that traditional continuations do not work well in the presence of concurrency [19]. In the words of Mosses, from his more recent survey of semantic description languages [27]:

“Although continuation-passing style is sometimes regarded as a standard style to use for de- notational semantics, it is inadequate for describing languages that involve non-determinism or concurrent processes.”

We argue that the fault lies not with the use of CPS, either as a programming style or as a tool for modeling denotational semantics, but with the standard, traditional notion of continuation, which is inadequate to describe certain aspects of concurrency. To alleviate this problem, in our previous publications we have introduced the continuation semantics for concurrency (CSC) technique and we have used it for defining the semantics of several concurrent languages [40, 9]. CSC provides a general tool for modelling control in concurrent systems and it can be used to design both operational and denotational semantic models. When used for the latter, the distinctive characteristic of the CSC technique is that continuations are modelled as appropriate structures of computations (denotations). The structure of continuations is representative of the control flow concepts of the languages under study. Computations are also called processes in [40]. Intuitively, CSC is a semantic formalization of a process scheduler. Processes are grouped in what is called a continuation, and the term process denotes the (partially evaluated) meaning of a statement. The concurrency control concepts themselves are modelled as operations manipulating continuations. The domain of CSC can be defined based on a set Id of (process) identifiers. The set of identifiers Id is endowed with a partial order, which induces the structure of continuations specific to the language under investigation. For example, in [42, 41] continuations are multisets of computations that are executed in parallel. In [40, 9, 10] continuations are trees of computations with active computations at the leaves. In this paper, we propose a different solution by focusing on a much simpler structure for continuations, more reminiscent of traditional continuations.
We investigate the denotational semantics of concurrent systems in classic continuation-passing style (CPS) based on the interplay between computations (denotations), interaction multisets, synchronous and asynchronous continuations. Our notion of a synchronous continuation roughly coincides with the traditional one: a function that awaits a value and represents the “rest of the program” by yielding its final result. However, the awaited value encompasses an asynchronous continuation, which may contain a suspended computation, in a way similar to that used by resumption-based approaches to concurrency [11]. We argue that this enhanced notion of continuation provides the necessary structure that is missing from the traditional approach. When a computation is evaluated with respect to a synchronous continuation, the computation is decomposed into an interaction multiset and an asynchronous continuation, which are transmitted as parameters to the synchronous continuation. We offer denotational semantics designed in CPS for various binary and multiparty interaction mechanisms. We hope to collect sufficient evidence that CPS can be used as a general tool for designing denotational semantics for concurrent languages.
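The contrast between direct style and CPS recalled in the introduction can be sketched in Haskell, the language of the semantic interpreters mentioned below; addD, addC and example are hypothetical names used only for this illustration:

```haskell
-- Direct style: the function returns its result to the caller.
addD :: Int -> Int -> Int
addD x y = x + y

-- CPS: the function never returns; it passes its result to a continuation.
addC :: Int -> Int -> (Int -> r) -> r
addC x y k = k (x + y)

-- (1 + 2) * 2 written in CPS: the continuation captures "the rest of the
-- program" after the addition.
example :: (Int -> r) -> r
example k = addC 1 2 (\s -> k (s * 2))
```

For instance, example id yields 6, the same value the direct-style expression (addD 1 2) * 2 returns.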

Note that CSC (continuation semantics for concurrency [40, 9]) and CPS (continuation-passing style) are different techniques, relying on different (classes of) domains and different evaluation strategies for continuations. CSC captures the flow of control by explicit manipulations of process identifiers and requires silent steps to handle synchronous interactions [40, 42, 10], which are not needed in the CPS-based approach that we introduce in this paper. Also, using classic CPS, in this work we establish laws of synchronous interactions that have not been investigated for CSC. Unlike other semantic models of concurrency [30, 5], in the CPS approach that we introduce in this paper the final yield of the semantic functions is a simple collection of observations assembled in a linear time domain. No branching structure is needed. All communication and concurrency control concepts are modelled as operations manipulating continuations. The terms linear time and branching time are often used in temporal logic and semantics [5]. Intuitively, an element of a linear time domain is a collection of (execution) traces. An element of a branching time domain is a tree-like structure. Following de Bakker and de Vink [5], in this paper we use the mathematical methodology of metric semantics. The main mathematical tool in this approach to semantics is Banach’s theorem, which states that contracting functions on complete metric spaces have unique fixed points. The metric domains that we use to express computation are complete metric spaces. We use the general method for solving reflexive domain equations (in a category of complete metric spaces) developed by America and Rutten [2], as our domain of computations is defined as the solution of an equation where the domain variable occurs in the left-hand side of a function space construction. The rest of this paper is organized as follows.
In section 3 we introduce an abstract concurrent language L providing a general mechanism of interaction between multisets of distributed actions. In section 4 we define a denotational semantics for L. The denotational semantics is designed in continuation-passing style (CPS). We show that the semantic operators designed in CPS satisfy the basic laws of concurrent systems, such as the commutativity and the associativity of parallel composition. Next, by customizing the behavior of continuations, in sections 5 and 6 we obtain denotational models for a couple of imperative concurrent languages and a nature-inspired formalism. In section 5 we investigate two imperative concurrent languages LMSYN and LSYN based on Hoare’s CSP, providing multiparty and binary interaction mechanisms, respectively. In section 6 we investigate the semantics of a language LDNA based on a process algebra for DNA computing introduced by Cardelli. A smaller version of this technical report will appear in Fundamenta Informaticae.

Haskell Implementation

We have developed complete and accurate implementations of the denotational semantics given in this paper, using the functional language Haskell [29]. The LMSYN , LSYN and LDNA example programs presented in this paper, as well as other programs, are provided and can be executed by using the semantic interpreters available from [49]. Note that only two interpreters are available at [49]: LSYN programs can be executed using the interpreter for LMSYN , because LSYN is a strict subset of LMSYN . In designing the denotational semantics presented in this paper we started from domain equations of the form D = F → R, where F = (D → R); such domain equations are standard in classic continuation-passing style (CPS) [45, 39, 26]. A computation (denotation) of type D is a function that takes a continuation of type F and applies the continuation, yielding a final result of type R. R is some domain of final program answers. This domain can be implemented in Haskell by: type D = F -> R, type F = K -> R and data K = K D. Specific to the approach that we introduce in this paper is the decomposition of a computation of type D into an interaction multiset of type W and an asynchronous continuation of type K, which are transmitted as parameters to a synchronous continuation of type F. An asynchronous continuation may be either a computation (K D) or the empty asynchronous continuation K0. In Haskell we implement this domain of CPS as follows [49]:

type D = F -> R
type F = (W, K) -> R
data K = K0 | K D

The definitions given above represent a Haskell implementation of the mathematical domains (of denotations, synchronous continuations and asynchronous continuations, respectively) given in section 4, where the various domains are defined formally.
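As an informal illustration of how these three types fit together (not part of the formal development), the domain can be instantiated with actions as strings, interaction multisets approximated by lists, and R a collection of traces; atom, phi0 and seqD are hypothetical helpers sketching the elementary-action and sequencing behavior defined formally in section 4:

```haskell
type Act = String
type W   = [Act]          -- interaction multisets, approximated by lists
type R   = [[W]]          -- answers: a collection of traces of multisets

type D = F -> R
type F = (W, K) -> R
data K = K0 | K D

-- An elementary action contributes a singleton multiset and an empty
-- asynchronous continuation.
atom :: Act -> D
atom a phi = phi ([a], K0)

-- A terminal synchronous continuation: record the multiset and run the
-- asynchronous continuation (if any) to collect the rest of the trace.
phi0 :: F
phi0 (w, K0)  = [[w]]
phi0 (w, K d) = map (w :) (d phi0)

-- Sequential composition in CPS: the second computation is stored in the
-- asynchronous continuation passed to the synchronous one.
seqD :: D -> D -> D
seqD d1 d2 phi = d1 (\(w1, k1) -> phi (w1, kseq k1 d2))
  where kseq K0     d = K d
        kseq (K d') d = K (d' `seqD` d)
```

For example, (atom "a" `seqD` atom "b") phi0 yields the single trace [["a"], ["b"]].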

2 Mathematical Preliminaries

The notation (x ∈)X introduces the set X with typical element x ranging over X. By P(X) we denote the power set of X, i.e., the set of all subsets of X. The notation Pπ(X) denotes the subset of the power set of X whose elements have property π. For example, Pfinite(X) is the set of all finite subsets of X. Let f ∈ X → Y be a function. The function [ f | x ↦ y ] : X → Y is defined (for x, x′ ∈ X, y ∈ Y) by: [ f | x ↦ y ](x′) = if x′ = x then y else f(x′). Instead of [ · · · [ [ f | x1 ↦ y1 ] | x2 ↦ y2 ] · · · | xn ↦ yn ] we write [ f | x1 ↦ y1 | · · · | xn ↦ yn ]. Higher-order functions have a prominent role in the semantic models given in this paper. We assume the reader is familiar with the λ-calculus notation.
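The function-update notation can be mirrored for Haskell functions given extensionally; update is a hypothetical name used only here:

```haskell
-- A sketch of the notation [ f | x |-> y ]: the updated function returns y
-- at x and behaves like f everywhere else.
update :: Eq a => (a -> b) -> a -> b -> (a -> b)
update f x y = \x' -> if x' == x then y else f x'
```

For instance, update (+1) 0 99 maps 0 to 99 and every other n to n + 1.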

2.1 Multisets

A multiset is a generalization of a set: intuitively, a collection in which an element may occur more than once. One can represent the concept of a multiset of elements of type X by using functions from X to N. A finite multiset of X can be represented by a function m ∈ A → N+, where A ∈ Pfinite(X) and N+ = N \ {0} (the set of positive natural numbers). Let (x ∈)X be a countable set. We define:

[X] = ∪_{A ∈ Pfinite(X)} { m | m ∈ A → N+ }    (by notation)

As X is countable, Pfinite(X) is also countable. [X] is the set of all finite multisets of elements of type X. An element m ∈ [X] is a function m : A → N+, for some finite subset A ⊆ X, such that ∀x ∈ A : m(x) > 0. m(x) is called the multiplicity (number of occurrences) of x in m. We represent a multiset m ∈ [X] by enumerating its elements between brackets ‘[’ and ‘]’. For example, [x1, x2, x1, x1, x2, x3, x3] is the multiset with 3 occurrences of x1, 2 occurrences of x2, and 2 occurrences of x3, i.e., the function m : {x1, x2, x3} → N+, m(x1) = 3, m(x2) = 2 and m(x3) = 2. Notice that the elements of a multiset are not ordered. For example, [x1, x1, x2] = [x1, x2, x1] = [x2, x1, x1]. [] is the empty multiset, i.e., the function with empty graph. One can define various operations on multisets m1, m2 ∈ [X]. Below, dom(m) is the domain of function m. We use the multiset sum operation m1 ] m2 (] : [X] × [X] → [X]), defined by dom(m1 ] m2) = dom(m1) ∪ dom(m2), and
(m1 ] m2)(x) = m1(x) + m2(x)    if x ∈ dom(m1) ∩ dom(m2)
(m1 ] m2)(x) = m1(x)            if x ∈ dom(m1) \ dom(m2)
(m1 ] m2)(x) = m2(x)            if x ∈ dom(m2) \ dom(m1)
The multiset sum operation ] is associative and commutative. We write m1 = m2 to express that the multisets m1 and m2 are equal: m1 = m2 iff dom(m1) = dom(m2) and ∀x ∈ dom(m1) : m1(x) = m2(x). We also write x ∈ m to express that x ∈ dom(m). More about the mathematics of multisets can be found in [1].
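A minimal sketch of this representation in Haskell, assuming finite maps into positive multiplicities (Data.Map from the containers library); fromList and msum' are hypothetical names:

```haskell
import qualified Data.Map.Strict as M

-- Finite multisets [X] as maps into positive multiplicities.
type Multiset x = M.Map x Int   -- invariant: all stored multiplicities > 0

-- Build a multiset from an enumeration of its elements.
fromList :: Ord x => [x] -> Multiset x
fromList = foldr (\x -> M.insertWith (+) x 1) M.empty

-- The multiset sum: multiplicities of common elements are added.
msum' :: Ord x => Multiset x -> Multiset x -> Multiset x
msum' = M.unionWith (+)
```

Because the map forgets the order of enumeration, fromList "abaab" and fromList "baaba" denote the same multiset, mirroring the fact that the elements of a multiset are not ordered.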

2.2 Metric spaces

The study presented in this paper takes place in the mathematical framework of 1-bounded complete metric spaces. We work with the following notions, which we assume known: metric and ultrametric space, isometry (distance-preserving bijection between metric spaces, denoted by ‘≅’), complete metric space, and compact set. The reader may consult the monograph [5] for further details. If (X, dX), (Y, dY) are metric spaces, we recall that a function f : X → Y is a contraction if ∃c ∈ R, 0 ≤ c < 1, ∀x1, x2 ∈ X : dY(f(x1), f(x2)) ≤ c · dX(x1, x2). In metric semantics it is customary to

attach a contracting factor of c = 1/2 to each computation step. When c = 1 the function f is called nonexpansive. We denote the set of all nonexpansive functions from X to Y by X →1 Y. Let f : X → X be a function. If f(x) = x we call x a fixed point of f. When this fixed point is unique we write x = fix(f).

Theorem 2.1 (Banach) Let (X, dX ) be a nonempty complete metric space. Each contraction f : X → X has a unique fixed point.
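A numeric illustration of Banach's theorem, assuming the complete metric space of the reals with the usual distance; banachFix is a hypothetical helper that iterates a contraction until the step size drops below a tolerance:

```haskell
-- Iterate a contraction f from a starting point until successive iterates
-- are closer than eps; by Banach's theorem this approximates the unique
-- fixed point (convergence is geometric with the contraction factor).
banachFix :: Double -> (Double -> Double) -> Double -> Double
banachFix eps f x
  | abs (fx - x) < eps = fx
  | otherwise          = banachFix eps f fx
  where fx = f x
```

For example, f(x) = x/2 + 1 is (1/2)-contractive and its unique fixed point is 2; iteration from any starting point converges to it.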

If (x, y ∈)X is any nonempty set, one can define the discrete metric on X (d : X × X → [0, 1]) as follows: d(x, y) = 0 if x = y, and d(x, y) = 1 otherwise. (X, d) is a complete ultrametric space. Other composed metric spaces can be built up using the composite metrics given in definition 2.2.

Definition 2.2 Let (X, dX ), (Y, dY ) be (ultra) metric spaces. On (x ∈)X, (f∈)X → Y (the function space), (x, y)∈X×Y (the Cartesian product), u, v∈X + Y (the disjoint union, defined by: X + Y = ({1} × X) ∪ ({2} × Y )), and U, V ∈P(X) (the power set of X) one can define the following metrics:

(a) d_{(1/2)·X} : X × X → [0, 1], d_{(1/2)·X}(x1, x2) = (1/2) · dX(x1, x2)

(b) dX→Y : (X → Y) × (X → Y) → [0, 1], dX→Y(f1, f2) = sup_{x∈X} dY(f1(x), f2(x))

(c) dX×Y :(X × Y ) × (X × Y ) → [0, 1] dX×Y ((x1, y1), (x2, y2)) = max{dX (x1, x2), dY (y1, y2)}

(d) dX+Y :(X + Y ) × (X + Y ) → [0, 1] dX+Y (u, v) = if (u, v ∈ X) then dX (u, v) else if (u, v∈Y ) then dY (u, v) else 1

(e) dH : P(X) × P(X) → [0, 1], dH(U, V) = max{sup_{u∈U} d(u, V), sup_{v∈V} d(v, U)}, where d(u, W) = inf_{w∈W} dX(u, w). By convention sup ∅ = 0, inf ∅ = 1 (dH is the Hausdorff metric).

We use the abbreviation Pnco(X) to denote the set of all non-empty and compact subsets of X. Also, we often suppress the metric part in domain definitions, and write, e.g., (1/2) · X instead of (X, d_{(1/2)·X}).

Remark 2.3 Let (X, dX), (Y, dY), d_{(1/2)·X}, dX→Y, dX×Y, dX+Y and dH be as in definition 2.2. In case dX, dY are ultrametrics, so are d_{(1/2)·X}, dX→Y, dX×Y, dX+Y and dH. Moreover, if (X, dX), (Y, dY) are complete then (1/2) · X, X → Y, X →1 Y, X × Y, X + Y, and Pnco(X) (with the metrics defined above) are also complete metric spaces [5].
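The Hausdorff metric of definition 2.2(e) can be sketched for finite non-empty sets of reals; this uses the ordinary (unbounded) distance on the reals rather than a 1-bounded one, and hausdorff and dPointSet are hypothetical names:

```haskell
-- Euclidean distance on the real line.
dist :: Double -> Double -> Double
dist x y = abs (x - y)

-- Distance from a point to a finite non-empty set (the inf of 2.2(e)).
dPointSet :: Double -> [Double] -> Double
dPointSet u ws = minimum [dist u w | w <- ws]

-- Hausdorff distance between two finite non-empty sets: the larger of the
-- two one-sided sups of point-to-set distances.
hausdorff :: [Double] -> [Double] -> Double
hausdorff us vs =
  max (maximum [dPointSet u vs | u <- us])
      (maximum [dPointSet v us | v <- vs])
```

For instance, hausdorff [0,1] [0,2] is 1: the point 1 is at distance 1 from {0, 2}, and the point 2 is at distance 1 from {0, 1}.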

3 Syntax and Semantics of Interaction

We consider an abstract language L whose semantics is defined based on a general notion of interaction between multisets of distributed actions. We assume given a set (a ∈)A of elementary statements or atomic statements, and a set (y ∈)Y of variables. ;, + and k are operators for sequential, nondeterministic and parallel composition, respectively. k is also called a merge operator. bb is the left merge operator and | is the synchronization merge operator. b is the left synchronization merge operator, which is specific to the continuation semantics presented in this paper.

Definition 3.1 (Syntax of L)

(a) (Statements) s(∈ Stat) ::= a | y | s;s | s + s | s k s | s | s | sbb s | sb s

(b) (Guarded statements) g(∈ GStat) ::= a | g;s | g + g | g k g | g | g | gbb s | gb g

(c) (Declarations) (D ∈)Decl = Y → GStat

(d) (Programs) (π ∈)L = Decl × Stat

Remark 3.2 Following [5], we employ an approach to recursion based on declarations and guarded statements. In a guarded statement each recursive call is preceded by at least one elementary statement, which guarantees that the semantic operators are contracting functions in the present metric setting. For the sake of brevity (and without loss of generality) in what follows we assume a fixed declaration D ∈ Decl. All considerations in any given argument refer to this fixed D. Having a fixed declaration D, instead of an L program π = (D, s) we simply mention the statement s.

For inductive proofs we introduce a complexity measure cs that decreases upon recursive calls. It is easy to check that cs is well defined due to our restriction to guarded recursion.

Definition 3.3 (Complexity measure) The function cs : L → N is given by:

cs(a) = 1

cs(y) = 1 + cs(D(y))

cs(s1 op s2) = 1 + cs(s1) op ∈ {;, bb}

cs(s1 op s2) = 1+ max{cs(s1), cs(s2)} op ∈ {+, k, | , b }
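A sketch of the syntax of L and of cs in Haskell; the constructor names and the decl argument (playing the role of the fixed declaration D) are hypothetical, and termination of cs relies on guardedness, as in remark 3.2:

```haskell
-- Hypothetical abstract syntax for the statements of L.
data Stat = Atom String | Var String
          | Seq Stat Stat | Alt Stat Stat | Par Stat Stat
          | SyncM Stat Stat | LMerge Stat Stat | LSync Stat Stat

-- The complexity measure cs of definition 3.3, parameterized by the fixed
-- declaration; it decreases on recursive calls for guarded statements.
cs :: (String -> Stat) -> Stat -> Int
cs _ (Atom _)       = 1
cs d (Var y)        = 1 + cs d (d y)
cs d (Seq s1 _)     = 1 + cs d s1                    -- op in {; , bb}
cs d (LMerge s1 _)  = 1 + cs d s1
cs d (Alt s1 s2)    = 1 + max (cs d s1) (cs d s2)    -- op in {+, k, |, b}
cs d (Par s1 s2)    = 1 + max (cs d s1) (cs d s2)
cs d (SyncM s1 s2)  = 1 + max (cs d s1) (cs d s2)
cs d (LSync s1 s2)  = 1 + max (cs d s1) (cs d s2)
```

With a guarded declaration such as D(y) = a ; y, cs(y) = 1 + cs(a ; y) = 3, while on an unguarded declaration the recursion would not terminate.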

Operators ;, +, k, bb , | are taken from classic process algebra theories [4]. Operator b is included in the semantic model of L for technical reasons that will be explained later. L is an abstract language. To illustrate the use of CPS as a language design tool, we also consider a couple of concrete languages, obtained from L by customizing the behavior of elementary actions and the set of operators. For the concrete languages presented in this paper it is convenient to describe the behavior associated with the elementary statements with the aid of a set Act of actions, and we assume given a mapping i : A → Act. Intuitively, an action is an intermediate representation of an elementary statement. The set of actions may be needed for technical reasons, in order to obtain a clear separation between syntax and semantics. Let (w ∈)W = [Act] be the set of all finite multisets of actions, which we name interaction multisets. Let also (θ ∈)Θ be a set of interaction abstractions. There is a special value ↑ ∈ Θ, representing a failed interaction (or no interaction at all). The semantics of L is defined based on a mapping interact : W → Θ, which describes a general notion of interaction between multisets of distributed actions. By customizing the interaction mapping we obtain various interaction mechanisms, including binary and multiparty CSP-like synchronous communication [20, 42, 10] and the multiparty interaction mechanism employed in the DNA-inspired language presented in [8]. These models are presented in section 5 and section 6, respectively.
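As a hedged sketch of one possible interact mapping (the concrete ones are defined in sections 5 and 6), binary CSP-like synchronization can be approximated as follows; the Act and Theta constructors and interact' are hypothetical names, with multisets approximated by lists:

```haskell
-- Hypothetical actions: a send of a value on a channel, or a receive.
data Act = Send String Int | Recv String
  deriving (Eq, Show)

-- Hypothetical interaction abstractions: Fail plays the role of the
-- special value (up arrow) for a failed interaction.
data Theta = Fail | Deliver String Int
  deriving (Eq, Show)

-- Binary synchronization: exactly one send and one matching receive on
-- the same channel interact; any other multiset fails.
interact' :: [Act] -> Theta
interact' [Send c v, Recv c'] | c == c' = Deliver c v
interact' [Recv c', Send c v] | c == c' = Deliver c v
interact' _                             = Fail
```

Note that interact' is insensitive to the order of the two actions, as befits a function on multisets.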

Remark 3.4 In general, an interaction may also depend upon the current state of the system, in which case the mapping interact should take an additional parameter representing the current state. We do not describe formally such a language in this paper, but we consider an example from the literature. The asynchronous communication mechanism presented in [7] involves no synchronous interaction between two or more distributed actions, but there is a suspension mechanism and the elementary actions are interpreted as state transformations.

4 Semantics for Concurrency in Continuation-Passing Style

We introduce a denotational semantics designed in continuation-passing style (CPS) for the language L. In this section we assume given a metric domain (r ∈)R whose elements are used as final yields of our denotational semantics. We also assume that R is endowed with a binary operator ⊕ : R × R →1 R, which is nonexpansive, commutative, associative and idempotent. ⊕ is an abstract semantic operator for nondeterministic choice in L. Note that there is no need to impose a branching structure upon R. It can be a simple linear time domain, whose elements are collections of (sequences of) observables. In subsequent sections we will present several possible instantiations of R and the semantic operator ⊕.

The denotational semantics of L is defined as a mapping [[·]] : L → D, where D is defined by the following domain equation (isometry between complete metric spaces, to be precise):

(φ ∈) D ≅ F →1 R
(ϕ ∈) F = (W × K) →1 R
(κ ∈) K = {κ0} + (1/2) · D

D is the domain of denotations or computations. F is the domain of synchronous continuations. K is the domain of asynchronous continuations. A denotation φ is a function which receives as parameter a synchronous continuation ϕ and returns a value of the final domain R. A synchronous continuation receives as parameter a pair (w, κ), consisting of an interaction multiset w and an asynchronous continuation κ, and yields a value of the final domain. An asynchronous continuation is either the empty continuation κ0, or a denotation stored in the space (1/2) · D. In the above equations the set W is endowed with the discrete metric, which is an ultrametric. The composed metric spaces are built up using the composite metrics given in definition 2.2. According to the general theory developed in [2], these domain equations have a solution which is unique (up to isometry). Note that continuations are elements of a complete space which is the solution of a domain equation where the domain variable occurs in the left-hand side of a function space construction. The solutions for D, F and K are obtained as complete ultrametric spaces.

Definition 4.1 Let (ω ∈) Op = D × D →1 D. For any ω ∈ Op, we denote by ωb the operator ωb : K × K →1 K defined, for any φ, φ1, φ2 ∈ D, by: ωb(κ0, κ0) = κ0, ωb(κ0, φ) = ωb(φ, κ0) = φ, and ωb(φ1, φ2) = ω(φ1, φ2). We define Ω; , Ωb , Ω | , Ωk : Op → Op as follows, and we take ; = fix(Ω; ), k = fix(Ωk ), bb = Ω; (k), b = Ωb (k), and | = Ω | (k).

Ω; (ω)(φ1, φ2) = λϕ . φ1(λ(w1, κ1) . ϕ(w1, ωb(κ1, φ2)))

Ωb (ω)(φ1, φ2) = λϕ . φ1(λ(w1, κ1) . φ2(λ(w2, κ2) . ϕ(w1 ] w2, ωb(κ1, κ2))))

Ω | (ω)(φ1, φ2) = λϕ . Ωb (ω)(φ1, φ2)(ϕ) ⊕ Ωb (ω)(φ2, φ1)(ϕ)

Ωk (ω)(φ1, φ2) = λϕ . Ω; (ω)(φ1, φ2)(ϕ) ⊕ Ω; (ω)(φ2, φ1)(ϕ) ⊕ Ω | (ω)(φ1, φ2)(ϕ)

In combination with Banach’s theorem 2.1, Lemma 4.2 shows that the semantic operators ;, k, bb , | and b are well defined.

Lemma 4.2 The higher-order mappings Ω; and Ωk are (1/2)-contractive (in ω).

Proof Omitted. The properties stated by this lemma are an easy consequence of the fact that in the right-hand side of each equation defining Ω; , Ωk , Ω | and Ωb , the computations ωb(κ1, φ2) and ωb(κ1, κ2) are stored in the space (1/2) · D. □

Some explanations are necessary. The semantic operators ;, bb and b behave as follows:

φ1 ;φ2 = λϕ . φ1(λ(w1, κ1) . ϕ(w1, κ1b; φ2))

φ1 bb φ2 = λϕ . φ1(λ(w1, κ1) . ϕ(w1, κ1 bk φ2))

φ1 b φ2 = λϕ . φ1(λ(w1, κ1) . φ2(λ(w2, κ2) . ϕ(w1 ] w2, κ1 bk κ2)))

A sequential composition φ1 ;φ2 is evaluated with respect to a synchronous continuation ϕ. The com- putation φ1 yields a pair (w1, κ1) representing its decomposition into an interaction multiset w1 and an asynchronous continuation κ1. Control is then transmitted to the synchronous continuation ϕ, which receives as parameters the interaction multiset w1 and an asynchronous continuation consisting of a

sequential composition between κ1 and the computation φ2. The latter does not contribute to the interaction; its execution is started only after the completion of the asynchronous continuation κ1, representing the rest of the computation φ1. The left merge composition φ1 bb φ2 is evaluated in a similar manner, but in this case the asynchronous continuation κ1 is executed in parallel with the computation φ2. The left synchronization operation φ1 b φ2 evaluates both computations φ1 and φ2; they both contribute to the interaction and their asynchronous continuations are executed in parallel. The operator for parallel composition k can be expressed (based on bb and b ) as a nondeterministic choice between alternative computations. It can express all possible interleavings of two or more concurrent processes and synchronize an arbitrary number of concurrent processes, taking into consideration all possible interactions.

φ1 | φ2 = λϕ . (φ1 b φ2)(ϕ) ⊕ (φ2 b φ1)(ϕ)

φ1 k φ2 = λϕ . (φ1 bb φ2)(ϕ) ⊕ (φ2 bb φ1)(ϕ) ⊕ (φ1 | φ2)(ϕ)
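The operators of definition 4.1 can be transcribed almost literally into Haskell over a toy instantiation (an illustration only: R is taken to be a list of traces and ⊕ becomes list concatenation, which approximates the idempotent, commutative ⊕ of this section only up to reordering and duplication); all names below are hypothetical:

```haskell
import Data.List (sort)

type Act = String
type W   = [Act]          -- interaction multisets, approximated by lists
type R   = [[W]]          -- answers: a list of traces of multisets

type D = F -> R
type F = (W, K) -> R
data K = K0 | K D

oplus :: R -> R -> R
oplus = (++)

-- The "hatted" lifting of a binary operator on D to asynchronous
-- continuations, as in definition 4.1.
hat :: (D -> D -> D) -> K -> K -> K
hat _  K0     K0     = K0
hat _  K0     k      = k
hat _  k      K0     = k
hat op (K d1) (K d2) = K (op d1 d2)

seqD, lmerge, lsync, syncm, par :: D -> D -> D
seqD   d1 d2 phi = d1 (\(w1, k1) -> phi (w1, hat seqD k1 (K d2)))
lmerge d1 d2 phi = d1 (\(w1, k1) -> phi (w1, hat par  k1 (K d2)))
lsync  d1 d2 phi = d1 (\(w1, k1) ->
                   d2 (\(w2, k2) -> phi (w1 ++ w2, hat par k1 k2)))
syncm  d1 d2 phi = lsync d1 d2 phi `oplus` lsync d2 d1 phi
par    d1 d2 phi = lmerge d1 d2 phi `oplus` lmerge d2 d1 phi
                                    `oplus` syncm d1 d2 phi

-- Elementary actions and a terminal synchronous continuation.
atom :: Act -> D
atom a phi = phi ([a], K0)

phi0 :: F
phi0 (w, K0)  = [[w]]
phi0 (w, K d) = map (w :) (d phi0)

-- Normalize results so equality is tested up to multiset and set
-- reordering (each multiset sorted, the collection of traces sorted).
norm :: R -> R
norm = sort . map (map sort)
```

With this instantiation, norm (par (atom "a") (atom "b") phi0) equals norm (par (atom "b") (atom "a") phi0), illustrating the commutativity of k stated in remark 4.3.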

Remark 4.3
(a) Operators ;, k, bb , | , b : D × D →1 D, and b; , bk : K × K →1 K are nonexpansive in both arguments. They behave as follows: κ0 b; κ0 = κ0 bk κ0 = κ0, κ0 b; φ = φ b; κ0 = κ0 bk φ = φ bk κ0 = φ, φ1 b; φ2 = φ1 ; φ2, and φ1 bk φ2 = φ1 k φ2, for any φ, φ1, φ2 ∈ D.
(b) Operator b is specific to the continuation semantics that we present in this paper. It was introduced in order to obtain symmetric definitions for operators k and | . By using this symmetry and the fact that the nondeterministic choice operator ⊕ is commutative, it follows that operators k and | are also commutative: φ1 k φ2 = φ2 k φ1, and φ1 | φ2 = φ2 | φ1, for any φ1, φ2 ∈ D. The commutativity of k extends easily to the whole class of asynchronous continuations: κ1 bk κ2 = κ2 bk κ1, for any κ1, κ2 ∈ K.

In definition 4.4 we assume given a set Act of actions, and a mapping i : A → Act, which behaves as explained in section 3. We recall that W = [Act].

Definition 4.4 (Denotational semantics of L) Let (S ∈)Sem = L → D. We define Φ: Sem → Sem as follows. We also take [[·]] = fix(Φ) to be the denotational semantics of L ([[·]] : L → D).

Φ(S)(a)(ϕ) = ϕ([i(a)], κ0) Φ(S)(y)(ϕ) = Φ(S)(D(y))(ϕ)

Φ(S)(s1 + s2)(ϕ) = Φ(S)(s1)(ϕ) ⊕ Φ(S)(s2)(ϕ)

Φ(S)(s1 ;s2)(ϕ) = (Φ(S)(s1);S(s2))(ϕ)

Φ(S)(s1 ks2)(ϕ) = (Φ(S)(s1)bb S(s2))(ϕ) ⊕

(Φ(S)(s2)bb S(s1))(ϕ) ⊕

(Φ(S)(s1) | Φ(S)(s2))(ϕ)

Φ(S)(s1 | s2)(ϕ) = (Φ(S)(s1) | Φ(S)(s2))(ϕ)

Φ(S)(s1 bb s2)(ϕ) = (Φ(S)(s1)bb S(s2))(ϕ)

Φ(S)(s1 b s2)(ϕ) = (Φ(S)(s1)b Φ(S)(s2))(ϕ)

The denotational semantics [[·]] is defined as the (unique) fixed point of Φ. It may not be obvious why on the right-hand sides of the equations given in definition 4.4, in some places we use Φ(S) while in other places we use S. The definition of Φ(S) is organized by induction on cs(s) (see definition 3.3). Intuitively, Φ is a contraction (hence it has a unique fixed point) because in the definition of Φ, each computation S(s) occurs as a parameter of an operator ; or bb , which stores S(s) in the space K = {κ0} + (1/2) · D. Definition 4.4 is justified by Lemma 4.5, which is easily established.

Lemma 4.5 The mapping Φ is well defined and (1/2)-contractive (in S).

Proof It suffices to show that

d(Φ(S1)(s)(ϕ), Φ(S2)(s)(ϕ)) ≤ (1/2) · d(S1, S2)

for any s ∈ Stat, ϕ ∈ F. We proceed by induction on cs(s). We treat two subcases. Case s = a.

d(Φ(S1)(a)(ϕ), Φ(S2)(a)(ϕ))

= d(ϕ([i(a)], κ0), ϕ([i(a)], κ0)) = 0

≤ (1/2) · d(S1, S2)

Case s = s1 ; s2.

d(Φ(S1)(s1 ; s2)(ϕ), Φ(S2)(s1 ; s2)(ϕ))
= d((Φ(S1)(s1) ; S1(s2))(ϕ), (Φ(S2)(s1) ; S2(s2))(ϕ))
= d(Φ(S1)(s1)(λ(w1, κ1) . ϕ(w1, κ1 b; S1(s2))), Φ(S2)(s1)(λ(w1, κ1) . ϕ(w1, κ1 b; S2(s2))))
≤ max{ d(Φ(S1)(s1)(λ(w1, κ1) . ϕ(w1, κ1 b; S1(s2))), Φ(S1)(s1)(λ(w1, κ1) . ϕ(w1, κ1 b; S2(s2))))    (#.1)
     , d(Φ(S1)(s1)(λ(w1, κ1) . ϕ(w1, κ1 b; S2(s2))), Φ(S2)(s1)(λ(w1, κ1) . ϕ(w1, κ1 b; S2(s2)))) }  (#.2)
Taking into account that Φ(S1)(s1) is nonexpansive, for #.1 we can compute as follows:
#.1 ≤ sup_{(w1,κ1) ∈ W×K} d(ϕ(w1, κ1 b; S1(s2)), ϕ(w1, κ1 b; S2(s2)))
    ≤ sup_{(w1,κ1) ∈ W×K} d((w1, κ1 b; S1(s2)), (w1, κ1 b; S2(s2)))    [ϕ is nonexpansive]
    ≤ d_{(1/2)·D}(S1(s2), S2(s2))                                      [K = {κ0} + (1/2) · D]
    ≤ (1/2) · d(S1, S2)
Also, for #.2 we have:
#.2 ≤ d(Φ(S1)(s1), Φ(S2)(s1))
    ≤ (1/2) · d(S1, S2)    [induction hypothesis, cs(s1) < cs(s1 ; s2) = cs(s)]
□

4.1 Concurrency Laws in Continuation Semantics

We present a method of describing the behavior of concurrent systems in denotational models designed in continuation-passing style. For the abstract language L we show that the semantic operators satisfy the usual laws of concurrency theories, such as commutativity and associativity of parallel composition. Various properties can be proved by simple manipulations of the semantic equations. Some properties require arguments of the kind “ε ≤ (1/2) · ε ⇒ ε = 0”, which are standard in metric semantics [5]. The main concurrency laws that can be established for L are given in theorem 4.15.

4.1.1 Some auxiliary properties

The following lemma is useful in various contexts.

Lemma 4.6 For any s ∈ Stat, and ϕ1, ϕ2 ∈ F, φ1, φ2 ∈ D:

(a) [[s]](λ(w, κ) . ϕ1(w, κ) ⊕ ϕ2(w, κ)) = [[s]](ϕ1) ⊕ [[s]](ϕ2)

(b) [[s]](λ(w, κ) . φ1(ϕ1) ⊕ φ2(ϕ2)) = [[s]](λ(w, κ) . φ1(ϕ1)) ⊕ [[s]](λ(w, κ) . φ2(ϕ2))

Proof We only handle Lemma 4.6(a). We proceed by induction on cs(s). Two subcases. Case s = a.

[[a]](λ(w, κ) . ϕ1(w, κ) ⊕ ϕ2(w, κ))

= ϕ1([i(a)], κ0) ⊕ ϕ2([i(a)], κ0)

= [[a]](ϕ1) ⊕ [[a]](ϕ2)

Case s = s1 b s2.

[[s1 b s2 ]](λ(w, κ) . ϕ1(w, κ) ⊕ ϕ2(w, κ))
= [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ1(w1 ] w2, κ1 bk κ2) ⊕ ϕ2(w1 ] w2, κ1 bk κ2)))    ($.0)

Let ψ1, ψ2 ∈ (W × K) →1 F be given by: ψ1(w, κ) = λ(w2, κ2) . ϕ1(w ] w2, κ bk κ2), and ψ2(w, κ) = λ(w2, κ2) . ϕ2(w ] w2, κ bk κ2). It is easy to check that ψ1, ψ2 are well-defined (ψ1(w, κ), ψ2(w, κ) ∈ F, for any w ∈ W, κ ∈ K) and nonexpansive. Therefore:

$.0 = [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ψ1(w1, κ1)(w2, κ2) ⊕ ψ2(w1, κ1)(w2, κ2)))

[Induction hypothesis for s2 (cs(s2) < cs(s1 b s2))]

= [[s1 ]](λ(w1, κ1) . [[s2 ]](ψ1(w1, κ1)) ⊕ [[s2 ]](ψ2(w1, κ1)))

= [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ1(w1 ] w2, κ1 bk κ2)) ⊕
          [[s2 ]](λ(w2, κ2) . ϕ2(w1 ] w2, κ1 bk κ2)))    (∗.0)

Let ϕ1′, ϕ2′ ∈ F be given by: ϕ1′ = λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ1(w1 ] w2, κ1 bk κ2)), and ϕ2′ = λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ2(w1 ] w2, κ1 bk κ2)). In the sequel we compute as follows:

∗.0 = [[s1 ]](λ(w1, κ1) . ϕ1′(w1, κ1) ⊕ ϕ2′(w1, κ1))

[Induction hypothesis for s1 (cs(s1) < cs(s1 b s2))]
= [[s1 ]](ϕ1′) ⊕ [[s1 ]](ϕ2′)

= [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ1(w1 ] w2, κ1 bk κ2)))⊕

[[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ2(w1 ] w2, κ1 bk κ2)))

= [[s1 b s2 ]](ϕ1) ⊕ [[s1 b s2 ]](ϕ2)  □

Lemma 4.7 Let KD = {κ0} ∪ {[[s′]] | s′ ∈ Stat}. For any s ∈ Stat, and ϕ′, ϕ′′ ∈ F:
d([[s]](ϕ′), [[s]](ϕ′′)) ≤ sup_{w ∈ W, κ ∈ KD} d(ϕ′(w, κ), ϕ′′(w, κ))

Proof By induction on cs(s). Two subcases. Case s = a

d([[a]](ϕ′), [[a]](ϕ′′)) = d(ϕ′([i(a)], κ0), ϕ′′([i(a)], κ0)) ≤ sup_{w∈W, κ∈KD} d(ϕ′(w, κ), ϕ′′(w, κ))

Case s = s1 bb s2.

d([[s1 bb s2]](ϕ′), [[s1 bb s2]](ϕ′′))
= d([[s1]](λ(w1, κ1) . ϕ′(w1, κ1 bk [[s2]])), [[s1]](λ(w1, κ1) . ϕ′′(w1, κ1 bk [[s2]])))
[Induction hypothesis]
≤ sup_{w∈W, κ∈KD} d(ϕ′(w, κ bk [[s2]]), ϕ′′(w, κ bk [[s2]]))
= max{ sup_{w∈W} d(ϕ′(w, [[s2]]), ϕ′′(w, [[s2]])),
  sup_{w∈W, s′∈Stat} d(ϕ′(w, [[s′ k s2]]), ϕ′′(w, [[s′ k s2]])) }
≤ sup_{w∈W, κ∈KD} d(ϕ′(w, κ), ϕ′′(w, κ))

□

Lemma 4.8 For any nonexpansive opK : K × K → K, s1, s2 ∈ Stat, and ϕ ∈ F:

[[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ(w1 ] w2, opK (κ1, κ2))))

= [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2, opK (κ1, κ2))))

Proof By induction on cs(s1). Three subcases.

Case s1 = a.

[[a]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ(w1 ] w2, opK (κ1, κ2))))

= [[s2 ]](λ(w2, κ2) . ϕ([i(a)] ] w2, opK (κ0, κ2)))

= [[s2]](λ(w2, κ2) . [[a]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2))))

Case s1 = s1¹ + s1².

[[s1¹ + s1²]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, opK(κ1, κ2))))
= [[s1¹]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, opK(κ1, κ2)))) ⊕
  [[s1²]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, opK(κ1, κ2))))
[Induction hypothesis, cs(s1) = 1 + max{cs(s1¹), cs(s1²)}]
= [[s2]](λ(w2, κ2) . [[s1¹]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2)))) ⊕
  [[s2]](λ(w2, κ2) . [[s1²]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2))))
[Lemma 4.6(b)]
= [[s2]](λ(w2, κ2) . ([[s1¹]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2))) ⊕
  [[s1²]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2)))))
= [[s2]](λ(w2, κ2) . [[s1¹ + s1²]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2))))

= [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2, opK (κ1, κ2))))

Case s1 = s1¹ bb s1². Let op′K : (K × K) → K be given by: op′K(κ1, κ2) = opK(κ1 bk [[s1²]], κ2). As opK and bk are nonexpansive, op′K is also nonexpansive.

[[s1¹ bb s1²]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, opK(κ1, κ2))))
= [[s1¹]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, opK(κ1 bk [[s1²]], κ2))))
= [[s1¹]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, op′K(κ1, κ2))))
[Induction hypothesis, cs(s1) = 1 + cs(s1¹)]
= [[s2]](λ(w2, κ2) . [[s1¹]](λ(w1, κ1) . ϕ(w1 ] w2, op′K(κ1, κ2))))
= [[s2]](λ(w2, κ2) . [[s1¹]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1 bk [[s1²]], κ2))))

= [[s2]](λ(w2, κ2) . [[s1¹ bb s1²]](λ(w1, κ1) . ϕ(w1 ] w2, opK(κ1, κ2)))) □

If we replace opK with bk in Lemma 4.8, we immediately obtain the following interesting result: [[s1 b s2]] = [[s2 b s1]], for any s1, s2 ∈ Stat. Lemma 4.9 presents some simple properties of b.

Lemma 4.9 For any s1, s2, s3 ∈ Stat:

(a) [[s1 | s2 ]] = [[s2 | s1 ]] = [[(s1 b s2) + (s2 b s1)]] = [[s1 b s2 ]] = [[s2 b s1 ]]

(b) [[s1 b (s2 + s3)]] = [[(s1 b s2) + (s1 b s3)]]

(c) [[(s1 + s2) b s3]] = [[(s1 b s3) + (s2 b s3)]]

Proof Lemma 4.9(a) is an easy consequence of Lemma 4.8. Lemma 4.9(b) can be proved by using Lemma 4.6. Here we only handle Lemma 4.9(c).

[[(s1 + s2) b s3]] = λϕ . [[s1 + s2]](λ(w, κ) . [[s3]](λ(w3, κ3) . ϕ(w ] w3, κ bk κ3)))

= λϕ . ([[s1 ]](λ(w, κ) . [[s3 ]](λ(w3, κ3) . ϕ(w ] w3, κbk κ3)))⊕

[[s2 ]](λ(w, κ) . [[s3 ]](λ(w3, κ3) . ϕ(w ] w3, κbk κ3))))

= λϕ . ([[s1 b s3 ]](ϕ) ⊕ [[s2 b s3 ]](ϕ))

= λϕ . [[(s1 b s3) + (s2 b s3)]](ϕ)

= [[(s1 b s3) + (s2 b s3)]] □

4.1.2 Syntactic contexts and basic interaction terms

We show that continuations can be used to reason in a compositional manner upon the behavior of concurrent programs. We introduce a notion of syntactic contexts for the class of L statements.

Definition 4.10 (Contexts for L)

C ::= • | a | y | C ; C | C + C | C k C | C bb C | C | C | C b C

We denote by C(s) the result of substituting s for all occurrences of • in C. This substitution can be defined inductively: •(s) = s, a(s) = a, y(s) = y, and (C1 op C2)(s) = C1(s) op C2(s), where op ∈ {;, +, k, bb, |, b}.
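The inductive substitution C(s) of Definition 4.10 can be sketched directly. The encoding below is our own illustration, not part of the report: contexts are nested Python tuples, elementary statements and recursion variables are strings, and the operator tags are hypothetical stand-ins for the operators of L.

```python
# Illustrative sketch of syntactic contexts for L: the hole is a marker
# object, and fill(ctx, s) computes C(s) by structural recursion, exactly
# following •(s) = s, a(s) = a, y(s) = y, (C1 op C2)(s) = C1(s) op C2(s).

HOLE = "•"

def fill(ctx, s):
    """Return C(s): substitute s for all occurrences of the hole in ctx."""
    if ctx == HOLE:
        return s                      # •(s) = s
    if isinstance(ctx, str):
        return ctx                    # a(s) = a and y(s) = y
    op, c1, c2 = ctx                  # (C1 op C2)(s) = C1(s) op C2(s)
    return (op, fill(c1, s), fill(c2, s))

ctx = ("+", ("par", HOLE, "a"), HOLE)
print(fill(ctx, "b"))                 # ('+', ('par', 'b', 'a'), 'b')
```

Because substitution is purely structural, compositionality of [[·]] (Lemma 4.11) lifts any semantic identity [[s1]] = [[s2]] through such a context.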

Lemma 4.11 shows that program properties are preserved in any L syntactic context. It is an easy consequence of the compositionality of the denotational semantics [[·]].

Lemma 4.11 If s1, s2 ∈ Stat and [[s1 ]] = [[s2 ]], then [[C(s1)]] = [[C(s2)]], for any context C.

As a syntactic counterpart of the set W of interaction multisets, we introduce a class of interaction statements (u ∈)UStat, with elements of the form (a1 | · · · | an), where ai ∈ A are L elementary statements. The semantic operator for synchronization | is commutative (see Remark 4.3(b)). We will prove that | is also associative, hence any order of association of the elementary statements ai can be used in an interaction statement.

Definition 4.12 (Interaction statements) The class (u ∈)UStat is given by: u ::= a | u | u, where a(∈ A) is an elementary statement. We define a function iU : UStat → W , iU (a) = [i(a)], iU (u1 | u2) = iU (u1) ] iU (u2). Recall that i : A → Act is a given function mapping elementary statements to actions, and W = [Act]. ] is the multiset sum operation, described in section 2.1.
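The function iU of Definition 4.12 is essentially a fold of multiset sums. As a small sketch of our own (with `collections.Counter` playing the role of the multisets in W = [Act], and the action-assigning function i left as a parameter):

```python
# Illustrative sketch: mapping interaction statements to interaction
# multisets, following iU(a) = [i(a)] and iU(u1 | u2) = iU(u1) ⊎ iU(u2).
# Elementary statements are strings; u1 | u2 is a pair (u1, u2).
from collections import Counter

def i_U(u, i):
    """Compute iU(u) as a Counter, given the action map i : A -> Act."""
    if isinstance(u, str):            # elementary statement a
        return Counter([i(u)])
    u1, u2 = u                        # u = u1 | u2
    return i_U(u1, i) + i_U(u2, i)    # Counter '+' is multiset sum

# With i the identity, (a | (b | a)) yields the multiset {a, a, b}:
print(i_U(("a", ("b", "a")), lambda a: a))   # Counter({'a': 2, 'b': 1})
```

Since multiset sum is associative and commutative, the resulting multiset is independent of the order of association of the elementary statements, matching the remark preceding Definition 4.12.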

In Definition 4.13 we introduce the class (t ∈)TStat of basic interaction terms (TStat ⊆ Stat). We can prove that every non-recursive L statement (i.e., one not containing recursion variables y ∈ Y) is semantically equivalent to a basic interaction term (obviously, any t ∈ TStat is a non-recursive term).

Definition 4.13 (Basic interaction terms) We define the set (t ∈)TStat by: t ::= u | u; t | t + t.

The properties stated in Lemma 4.14 can be established for any s, s1, s2 ∈ Stat and for any u ∈ UStat. In particular, they can be used to reason about the behavior of basic interaction terms.

Lemma 4.14 For any s, s1, s2 ∈ Stat, and for any u ∈ UStat we have: [[u]] = λϕ . ϕ(iU(u), κ0), [[u;s]] = λϕ . ϕ(iU(u), [[s]]), and [[s1 + s2]] = λϕ . ([[s1]](ϕ) ⊕ [[s2]](ϕ)).

Proof We show that [[u]] = λϕ . ϕ(iU (u), κ0) by structural induction on u. Case u = a.

[[a]] = λϕ . ϕ([i(a)], κ0) = λϕ . ϕ(iU (a), κ0)

Case u = u1 | u2.

[[u1 | u2 ]] [Lemma 4.9(a)]

= [[u1 b u2 ]] = λϕ . [[u1 ]](λ(w1, κ1) . [[u2 ]](λ(w2, κ2) . ϕ(w1 ] w2, κ1 bk κ2)))

[Induction hypothesis for u2]

= λϕ . [[u1 ]](λ(w1, κ1) . ϕ(w1 ] iU (u2), κ1 bk κ0))

[Induction hypothesis for u1]

= λϕ . ϕ(iU (u1) ] iU (u2), κ0 bk κ0) = λϕ . ϕ(iU (u1 | u2), κ0)

= λϕ . ϕ(iU (u), κ0)

Next, we show that [[u;s]] = λϕ . ϕ(iU (u), [[s]]), by using the previous result.

[[u; s]] = λϕ . [[u]](λ(w, κ) . ϕ(w, κb; [[s]])) = λϕ . ϕ(iU (u), κ0b; [[s]])

= λϕ . ϕ(iU (u), [[s]])

The property [[s1 + s2]] = λϕ . ([[s1]](ϕ) ⊕ [[s2]](ϕ)) is obvious (by Definition 4.4). □

4.1.3 Concurrency laws

In the sequel we write s1 ' s2 to express that [[C(s1)]] = [[C(s2)]] for all L contexts C.

Theorem 4.15 For all s, s1, s2, s3 ∈ Stat, and u, u1, u2 ∈ UStat:

(A1) s1 + s2 ' s2 + s1
(A2) (s1 + s2) + s3 ' s1 + (s2 + s3)
(A3) s + s ' s
(A4) (s1 + s2);s3 ' (s1;s3) + (s2;s3)
(A5) (s1;s2);s3 ' s1;(s2;s3)
(CM1) s1 k s2 ' (s1 bb s2) + (s2 bb s1) + (s1 | s2)
(CM2) u bb s ' u;s
(CM3) (u;s1) bb s2 ' u;(s1 k s2)
(CM4) (s1 + s2) bb s3 ' (s1 bb s3) + (s2 bb s3)
(CM5) (u1;s) | u2 ' (u1 | u2);s
(CM6) u1 | (u2;s) ' (u1 | u2);s
(CM7) (u1;s1) | (u2;s2) ' (u1 | u2);(s1 k s2)
(CM8) (s1 + s2) | s3 ' (s1 | s3) + (s2 | s3)
(CM9) s1 | (s2 + s3) ' (s1 | s2) + (s1 | s3)

Proof According to Lemma 4.11, in each case, in order to prove that s ' s′ it is enough to show that [[s]] = [[s′]].

(A1) [[s1 + s2 ]] = λϕ . [[s1 + s2 ]](ϕ)

= λϕ . ([[s1 ]](ϕ) ⊕ [[s2 ]](ϕ)) [⊕ is commutative]

= λϕ . ([[s2 ]](ϕ) ⊕ [[s1 ]](ϕ)) = λϕ . [[s2 + s1 ]](ϕ)

= [[s2 + s1 ]]

(A2) [[(s1 + s2) + s3 ]] = λϕ . [[(s1 + s2) + s3 ]](ϕ)

= λϕ . (([[s1 ]](ϕ) ⊕ [[s2 ]](ϕ)) ⊕ [[s3 ]](ϕ)) [⊕ is associative]

= λϕ . ([[s1 ]](ϕ) ⊕ ([[s2 ]](ϕ) ⊕ [[s3 ]](ϕ)))

= λϕ . [[s1 + (s2 + s3)]](ϕ)

= [[s1 + (s2 + s3)]]

(A3) We leave this proof as an easy exercise for the reader. It is a consequence of the fact that ⊕ is idempotent.

(A4) [[(s1 + s2); s3 ]] = λϕ . [[(s1 + s2); s3 ]](ϕ)

= λϕ . [[s1 + s2 ]](λ(w, κ) . ϕ(w, κb; [[s3 ]]))

= λϕ . ([[s1 ]](λ(w, κ) . ϕ(w, κb; [[s3 ]])) ⊕ [[s2 ]](λ(w, κ) . ϕ(w, κb; [[s3 ]])))

= λϕ . ([[s1 ; s3 ]](ϕ) ⊕ [[s2 ; s3 ]](ϕ))

= λϕ . [[(s1 ; s3) + (s2 ; s3)]](ϕ)

= [[(s1 ; s3) + (s2 ; s3)]]

(A5) The proof for A5 involves an argument of the kind ε ≤ (1/2)·ε ⇒ ε = 0. Let

ε = sup_{s1,s2,s3∈Stat} dD([[s1]] b; ([[s2]] b; [[s3]]), ([[s1]] b; [[s2]]) b; [[s3]]).

Then:

sup_{s1,s2,s3∈Stat} dD([[s1;(s2;s3)]], [[(s1;s2);s3]]) = ε
= sup_{s1,s2,s3∈Stat} dD(λϕ . [[s1]](λ(w1, κ1) . ϕ(w1, κ1 b; ([[s2]] b; [[s3]]))),
  λϕ . [[s1]](λ(w1, κ1) . ϕ(w1, (κ1 b; [[s2]]) b; [[s3]])))
[Lemma 4.7]
≤ sup_{ϕ∈F, w∈W, κ∈KD, s2,s3∈Stat} dR(ϕ(w, κ b; ([[s2]] b; [[s3]])), ϕ(w, (κ b; [[s2]]) b; [[s3]]))
[ϕ is nonexpansive, for all ϕ ∈ F]
≤ sup_{κ∈KD, s2,s3∈Stat} dK(κ b; ([[s2]] b; [[s3]]), (κ b; [[s2]]) b; [[s3]])
[KD = {κ0} ∪ {[[s1]] | s1 ∈ Stat}, K = {κ0} + ((1/2)·D)]
= max{ d_{(1/2)·D}([[s2;s3]], [[s2;s3]]),
  sup_{s1,s2,s3∈Stat} d_{(1/2)·D}([[s1]] b; ([[s2]] b; [[s3]]), ([[s1]] b; [[s2]]) b; [[s3]]) }
= (1/2) · sup_{s1,s2,s3∈Stat} dD([[s1]] b; ([[s2]] b; [[s3]]), ([[s1]] b; [[s2]]) b; [[s3]]) = (1/2)·ε

We have obtained ε ≤ (1/2)·ε, which implies ε = 0. Hence [[s1;(s2;s3)]] = [[(s1;s2);s3]], for any s1, s2, s3 ∈ Stat, as required.

(CM1) [[s1 k s2 ]] = λϕ . [[s1 k s2 ]](ϕ)

= λϕ . (([[s1 ]]bb [[s2 ]])(ϕ) ⊕ ([[s2 ]]bb [[s1 ]])(ϕ) ⊕ ([[s1 ]] | [[s2 ]])(ϕ))

= λϕ . ([[s1 bb s2]](ϕ) ⊕ [[s2 bb s1]](ϕ) ⊕ [[s1 | s2]](ϕ))

= λϕ . ([[(s1 bb s2) + (s2 bb s1) + (s1 | s2)]](ϕ))

= [[(s1 bb s2) + (s2 bb s1) + (s1 | s2)]]

(CM2) For CM2 we compute as follows:

[[u bb s]] = λϕ . [[u bb s]](ϕ) = λϕ . [[u]](λ(w, κ) . ϕ(w, κ bk [[s]]))
[Lemma 4.14]
= λϕ . ϕ(iU(u), [[s]])
= λϕ . [[u]](λ(w, κ) . ϕ(w, κ b; [[s]]))
= λϕ . [[u; s]](ϕ) = [[u; s]]

(CM3) [[(u; s1)bb s2 ]] = λϕ . [[(u; s1)bb s2 ]](ϕ)

= λϕ . [[u; s1 ]](λ(w1, κ1) . ϕ(w1, κ1 bk [[s2 ]]))

= λϕ . [[u]](λ(w, κ) . ϕ(w, (κ b; [[s1]]) bk [[s2]])) [Lemma 4.14]

= λϕ . ϕ(iU(u), (κ0 b; [[s1]]) bk [[s2]])

= λϕ . ϕ(iU (u), [[s1 ]] k [[s2 ]])

= λϕ . ϕ(iU (u), [[s1 k s2 ]]) [Lemma 4.14]

= [[u;(s1 k s2)]]

(CM4) [[(s1 + s2)bb s3 ]] = λϕ . [[(s1 + s2)bb s3 ]](ϕ)

= λϕ . [[s1 + s2 ]](λ(w, κ) . ϕ(w, κbk [[s3 ]]))

= λϕ . ([[s1 ]](λ(w, κ) . ϕ(w, κbk [[s3 ]])) ⊕ [[s2 ]](λ(w, κ) . ϕ(w, κbk [[s3 ]])))

= λϕ . ([[s1 bb s3 ]](ϕ) ⊕ [[s2 bb s3 ]](ϕ))

= λϕ . [[(s1 bb s3) + (s2 bb s3)]](ϕ)

= [[(s1 bb s3) + (s2 bb s3)]]

(CM5) [[(u1 ; s) | u2 ]] = λϕ . [[(u1 ; s) | u2 ]](ϕ) [Lemma 4.9]

= λϕ . [[(u1 ; s)b u2 ]](ϕ)

= λϕ . [[u1 ; s]](λ(w, κ) . [[u2 ]](λ(w2, κ2) . ϕ(w ] w2, κbk κ2)))

= λϕ . [[u1 ]](λ(w1, κ1) . [[u2 ]](λ(w2, κ2) . ϕ(w1 ] w2, (κ1b; [[s]])bk κ2))) [Lemma 4.14]

= λϕ . [[u1 ]](λ(w1, κ1) . ϕ(w1 ] iU (u2), (κ1b; [[s]])bk κ0)) [Lemma 4.14]

= λϕ . ϕ(iU(u1) ] iU(u2), (κ0 b; [[s]]) bk κ0)

= λϕ . ϕ(iU (u1 | u2), [[s]]) [Lemma 4.14]

= [[(u1 | u2); s]]

(CM6) [[u1 | (u2 ; s)]] [Lemma 4.9]

= [[(u2 ; s) | u1 ]] [(CM5), UStat ⊆ Stat, Lemma 4.9]

= [[(u1 | u2); s]]

(CM7) [[(u1 ; s1) | (u2 ; s2)]] = λϕ . [[(u1 ; s1) | (u2 ; s2)]](ϕ) [Lemma 4.9]

= λϕ . [[(u1 ; s1)b (u2 ; s2)]](ϕ)

= λϕ . [[u1 ; s1 ]](λ(w1, κ1) . [[u2 ; s2 ]](λ(w2, κ2) . ϕ(w1 ] w2, κ1 bk κ2)))

= λϕ . [[u1; s1]](λ(w1, κ1) . [[u2]](λ(w2, κ2) . ϕ(w1 ] w2, κ1 bk (κ2 b; [[s2]]))))
= λϕ . [[u1]](λ(w1, κ1) . [[u2]](λ(w2, κ2) . ϕ(w1 ] w2, (κ1 b; [[s1]]) bk (κ2 b; [[s2]]))))
[Lemma 4.14]
= λϕ . [[u1]](λ(w1, κ1) . ϕ(w1 ] iU(u2), (κ1 b; [[s1]]) bk (κ0 b; [[s2]])))
[Lemma 4.14]

= λϕ . ϕ(iU (u1) ] iU (u2), (κ0b; [[s1 ]])bk (κ0b; [[s2 ]])) [Lemma 4.14]

= λϕ . ϕ(iU (u1 | u2), [[s1 k s2 ]]) [Lemma 4.14]

= [[(u1 | u2);(s1 k s2)]]

(CM8) [[(s1 + s2) | s3 ]] [Lemma 4.9]

= [[(s1 + s2)b s3 ]] [Lemma 4.9]

= [[(s1 b s3) + (s2 b s3)]] = λϕ . [[(s1 b s3) + (s2 b s3)]](ϕ)

= λϕ . ([[s1 b s3 ]](ϕ) ⊕ [[s2 b s3 ]](ϕ)) [Lemma 4.9]

= λϕ . ([[s1 | s3 ]](ϕ) ⊕ [[s2 | s3 ]](ϕ))

= λϕ . [[(s1 | s3) + (s2 | s3)]](ϕ)

= [[(s1 | s3) + (s2 | s3)]]

(CM9) Property (CM9) follows easily by using (CM8) and the fact that the synchronization operator | is commutative.

□

We can find similarities between the set of laws established in Theorem 4.15 and the axioms of the Algebra of Communicating Processes (ACP) of Bergstra and Klop [6, 4]. Both systems are parameterized by a set (a ∈)A, the elements of which are called atomic (or elementary) statements in L, and atomic actions in ACP. The left synchronization merge operator b of L is absent from ACP, but ACP provides an encapsulation operator ∂H (H ⊆ A), which is not included in L. Also, in ACP there is a (partial) communication function γ : A × A → A which describes the effect of simultaneously executing two atomic actions (in the notation employed in [4]). If in L we view the set (u ∈)UStat of interaction statements as a set of “atomic actions”, then the set of laws stated in Theorem 4.15 coincides with the set of axioms of the ACP process algebra from which the axioms for deadlock and the axioms for encapsulation are eliminated (see [6, 4]). In this interpretation, the two systems behave the same if we consider a communication (interaction) function γ : UStat × UStat → UStat, defined for all pairs u1, u2 of “actions” by γ(u1, u2) = u1 | u2, in which case the axioms for deadlock can also be removed (because this γ is defined everywhere).
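Several of these laws can be illustrated by brute-force computation in a toy model. The sketch below is entirely our own and is far simpler than the metric domains of the report: a denotation is a finite set of traces, choice is set union (so A1-A3 hold because union is commutative, associative and idempotent), and parallel composition is free interleaving with no synchronization.

```python
# Toy finite trace model illustrating some laws of Theorem 4.15.
# Terms are strings (elementary statements) or tuples (op, s1, s2)
# with the hypothetical operator tags "+", ";" and "||".

def shuffle(t1, t2):
    """All interleavings of two traces (tuples)."""
    if not t1:
        return {t2}
    if not t2:
        return {t1}
    return ({(t1[0],) + t for t in shuffle(t1[1:], t2)} |
            {(t2[0],) + t for t in shuffle(t1, t2[1:])})

def den(s):
    """Trace-set denotation of a term."""
    if isinstance(s, str):                  # elementary statement
        return {(s,)}
    op, s1, s2 = s
    if op == "+":                           # choice: set union
        return den(s1) | den(s2)
    if op == ";":                           # sequential composition
        return {t1 + t2 for t1 in den(s1) for t2 in den(s2)}
    if op == "||":                          # free interleaving
        return {t for t1 in den(s1) for t2 in den(s2)
                for t in shuffle(t1, t2)}

# (A1) s1 + s2 = s2 + s1, (A4) (s1 + s2);s3 = (s1;s3) + (s2;s3),
# and commutativity of parallel composition, in this toy model:
assert den(("+", "a", "b")) == den(("+", "b", "a"))
assert den((";", ("+", "a", "b"), "c")) == den(("+", (";", "a", "c"), (";", "b", "c")))
assert den(("||", "a", (";", "b", "c"))) == den(("||", (";", "b", "c"), "a"))
print("A1, A4 and ||-commutativity hold in the toy model")
```

Such finite checks are, of course, no substitute for the metric arguments of the report, but they make the algebraic content of the laws concrete.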

Remark 4.16 For any non-recursive L program (closed term) s ∈ Stat, there is a basic interaction term t ∈ TStat such that s ' t.¹ We can prove this claim by considering the laws stated in Theorem 4.15 as a term rewriting system with rules corresponding to laws A3-A5 and CM1-CM9. Each of these laws can give rise to a corresponding rewriting rule (by reading the law from left to right). For example, the rewriting rule corresponding

¹ In Remark 4.16 we investigate the similarities with the ACP theory [6, 4]; hence we consider only L programs that do not contain the left synchronization merge operator b.

to law CM1 is: s1 k s2 → (s1 bb s2) + (s2 bb s1) + (s1 | s2). It is not difficult to prove that the rewriting system corresponding to laws A3-A5 and CM1-CM9 is strongly normalizing² and confluent³ modulo the laws A1 and A2 (i.e., up to the order of summands). Various similar proofs are provided, e.g., in [6, 4]. The terminology that we use in this remark is also taken from [6, 4]. Let (h ∈)HStat be given by h ::= u | h; h | h + h. Clearly, HStat is a (strict) superset of TStat (we have UStat ⊆ TStat ⊆ HStat ⊆ Stat). In TStat sequential composition is available only in the restricted (prefix) form u; t, with u ∈ UStat and t ∈ TStat, whereas HStat also contains statements of the form h1; h2, for arbitrary h1, h2 ∈ HStat. It is easy to see that, for every HStat term h there is a basic interaction term t such that h ' t. This can be proved by considering a term rewriting system with rules corresponding to the laws A4 and A5 (stated in Theorem 4.15). It is not hard to see that this term rewriting system is strongly normalizing and confluent (modulo the laws A1 and A2), and that a normal form⁴ of a closed term is a basic interaction term t ∈ TStat. More generally, one can show that the term rewriting system corresponding to the laws A3-A5 and CM1-CM9 is strongly normalizing and confluent (modulo the laws A1 and A2) and that a normal form must be a basic interaction term t ∈ TStat. If a closed L term s is an element of HStat, the property follows by rewriting s into a basic interaction term t ∈ TStat, as explained above. Otherwise, if a closed L term s (s ∉ HStat) has an occurrence of the parallel composition operator k, then the rewriting rule corresponding to CM1 can still be applied. Finally, if the term s has an occurrence of bb or | and s ∉ HStat, then consider the smallest subterm s′ such that s′ ∉ HStat and s′ has an occurrence of bb or | . Such a subterm s′ has either the form s′1 bb s′2 or the form s′1 | s′2, where s′1, s′2 ∈ HStat; note that, if s′ = s′1 | s′2 then either s′1 ∉ UStat or s′2 ∉ UStat, because we assume that s′ ∉ HStat (if s′1 ∈ UStat and s′2 ∈ UStat then we also have s′1 | s′2 ∈ UStat). If s′ has the form s′1 bb s′2 then we can rewrite s′1 (∈ HStat) into a basic interaction term and in this case one of the rewriting rules corresponding to the laws CM2-CM4 can still be applied. Otherwise, if s′ has the form s′1 | s′2 then we can rewrite both s′1 (∈ HStat) and s′2 (∈ HStat) into corresponding basic interaction terms t′1 (∈ TStat) and t′2 (∈ TStat) and we obtain t′1 | t′2; note that in this case either t′1 ∉ UStat or t′2 ∉ UStat (because either s′1 ∉ UStat or s′2 ∉ UStat), hence one of the rewriting rules corresponding to the laws CM5-CM9 is applicable.
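The HStat-to-TStat part of this normalization argument can be sketched executably. The code below is our own illustration of rewriting with A4, (s1 + s2);s3 → (s1;s3) + (s2;s3), and A5, (s1;s2);s3 → s1;(s2;s3), on a hypothetical tuple encoding of HStat terms, until sequential composition occurs only in the prefix form u;t of basic interaction terms.

```python
# Sketch of HStat -> TStat normalization via rules A4 and A5.
# HStat terms: a string u in UStat, ("+", h1, h2), or (";", h1, h2).

def normalize(h):
    """Rewrite an HStat term to a basic interaction term (TStat)."""
    if isinstance(h, str):                       # u in UStat: already basic
        return h
    op, h1, h2 = h
    h1, h2 = normalize(h1), normalize(h2)
    if op == "+":                                # t1 + t2 is basic
        return ("+", h1, h2)
    # op == ";": push sequential composition inward
    if isinstance(h1, tuple) and h1[0] == "+":   # (A4)
        return ("+", normalize((";", h1[1], h2)),
                     normalize((";", h1[2], h2)))
    if isinstance(h1, tuple) and h1[0] == ";":   # (A5)
        return normalize((";", h1[1], (";", h1[2], h2)))
    return (";", h1, h2)                         # prefix form u;t

h = (";", (";", ("+", "u1", "u2"), "u3"), "u4")  # ((u1 + u2);u3);u4
print(normalize(h))
# -> ('+', (';', 'u1', (';', 'u3', 'u4')), (';', 'u2', (';', 'u3', 'u4')))
```

The full rewriting system of the remark additionally needs rules for CM1-CM9; this fragment only shows why every HStat term reaches a normal form in TStat.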

Proposition 4.17 establishes a couple of properties which are also well known from classic process algebra theories [4], including the associativity of the operators k and | . The left synchronization merge operator b is also associative. The proof of Proposition 4.17 is laborious, but it relies again on arguments of the kind “ε ≤ (1/2)·ε ⇒ ε = 0”, which are standard in metric semantics [5].

Proposition 4.17 For all s, s1, s2, s3 ∈ Stat:

(1) (s1 k s2) k s3 ' s1 k (s2 k s3)
(2) (s1 b s2) b s3 ' s1 b (s2 b s3)
(3) (s1 | s2) | s3 ' s1 | (s2 | s3)
(4) (s1 bb s2) bb s3 ' s1 bb (s2 k s3)
(5) s1 | (s2 bb s3) ' (s1 | s2) bb s3

Proof According to Lemma 4.11, in each case, in order to prove that s ' s′ it is enough to show that [[s]] = [[s′]]. The proof of property 4.17(1) (the associativity of the parallel composition operator k) is the most laborious one. Let ε = sup_{s1,s2,s3∈Stat} dD(([[s1]] k [[s2]]) k [[s3]], [[s1]] k ([[s2]] k [[s3]])). We will show that ε ≤ (1/2)·ε, which implies ε = 0. Therefore we have [[(s1 k s2) k s3]] = [[s1 k (s2 k s3)]], hence we obtain property 4.17(1): (s1 k s2) k s3 ' s1 k (s2 k s3), as required.

ε = sup_{s1,s2,s3∈Stat} sup_{ϕ∈F} d((([[s1]] k [[s2]]) k [[s3]])(ϕ), ([[s1]] k ([[s2]] k [[s3]]))(ϕ))

² There is no infinite sequence of reductions s1 → s2 → · · ·.
³ Whenever s → · · · → s1 and s → · · · → s2 we can find an s′ such that s1 → · · · → s′ and s2 → · · · → s′.
⁴ A term s is a normal form (or in normal form) if there is no term s′ with s → s′.

= sup_{s1,s2,s3∈Stat, ϕ∈F} d(([[s1]] k [[s2]])(λ(w, κ) . ϕ(w, κ bk [[s3]])) ⊕

[[s3 ]](λ(w3, κ3) . ϕ(w3, κ3 bk ([[s1 ]] k [[s2 ]])))⊕

([[s1 ]]k[[s2 ]])(λ(w, κ) . [[s3 ]](λ(w3, κ3) . ϕ(w ] w3, κbk κ3)))⊕

[[s3 ]](λ(w3, κ3) . ([[s1 ]] k [[s2 ]])(λ(w, κ) . ϕ(w3 ] w, κ3 bk κ))),

[[s1 ]](λ(w1, κ1) . ϕ(w1, κ1 bk ([[s2 ]] k [[s3 ]])))⊕

([[s2 ]]k[[s3 ]])(λ(w, κ) . ϕ(w, κbk [[s1 ]]))⊕

[[s1 ]](λ(w1, κ1) . ([[s2 ]] k [[s3 ]])(λ(w, κ) . ϕ(w1 ] w, κ1 bk κ)))⊕

([[s2 ]]k[[s3 ]])(λ(w, κ) . [[s1 ]](λ(w1, κ1) . ϕ(w ] w1, κbk κ1)))) [Lemma 4.6, ] is commutative and associative]

= sup_{s1,s2,s3∈Stat, ϕ∈F}

$.1 d([[s1 ]](λ(w1, κ1) . ϕ(w1, (κ1 bk [[s2 ]])bk [[s3 ]])) ⊕

$.2 [[s2 ]](λ(w2, κ2) . ϕ(w2, (κ2 bk [[s1 ]])bk [[s3 ]])) ⊕

$.3 [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ(w1 ] w2, (κ1 bk κ2)bk [[s3 ]]))) ⊕

$.4 [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2, (κ2 bk κ1)bk [[s3 ]]))) ⊕

$.5 [[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w3, (κ1 bk [[s2 ]])bk κ3))) ⊕

$.6 [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w2 ] w3, (κ2 bk [[s1 ]])bk κ3))) ⊕

$.7 [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ1 bk κ2)bk κ3)))) ⊕

$.8 [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ2 bk κ1)bk κ3)))) ⊕

$.9 [[s3 ]](λ(w3, κ3) . ϕ(w3, κ3 bk ([[s1 ]] k [[s2 ]]))) ⊕

$.10 [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w3, κ3 bk (κ1 bk [[s2 ]])))) ⊕

$.11 [[s3 ]](λ(w3, κ3) . [[s2 ]](λ(w2, κ2) . ϕ(w2 ] w3, κ3 bk (κ2 bk [[s1 ]])))) ⊕

$.12 [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ(w1 ]w2 ]w3, κ3 bk (κ1 bk κ2))))) ⊕

$.13 [[s3 ]](λ(w3, κ3) . [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2 ] w3, κ3 bk (κ2 bk κ1))))) ,

∗.1 [[s1 ]](λ(w1, κ1) . ϕ(w1, κ1 bk ([[s2 ]] k [[s3 ]]))) ⊕

∗.2 [[s2 ]](λ(w2, κ2) . ϕ(w2, (κ2 bk [[s3 ]])bk [[s1 ]])) ⊕

∗.3 [[s3 ]](λ(w3, κ3) . ϕ(w3, (κ3 bk [[s2 ]])bk [[s1 ]])) ⊕

∗.4 [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w2 ] w3, (κ2 bk κ3)bk [[s1 ]]))) ⊕

∗.5 [[s3 ]](λ(w3, κ3) . [[s2 ]](λ(w2, κ2) . ϕ(w2 ] w3, (κ3 bk κ2)bk [[s1 ]]))) ⊕

∗.6 [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . ϕ(w1 ] w2, κ1 bk (κ2 bk [[s3 ]])))) ⊕

∗.7 [[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w3, κ1 bk (κ3 bk [[s2 ]])))) ⊕

∗.8 [[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, κ1 bk (κ2 bk κ3))))) ⊕

∗.9 [[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . [[s2 ]](λ(w2, κ2) . ϕ(w1 ] w2 ] w3, κ1 bk (κ3 bk κ2))))) ⊕

∗.10 [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2, (κ2 bk [[s3 ]])bk κ1))) ⊕

∗.11 [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w3, (κ3 bk [[s2 ]])bk κ1))) ⊕

∗.12 [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ]w2 ]w3, (κ2 bk κ3)bk κ1)))) ⊕

∗.13 [[s3 ]](λ(w3, κ3) . [[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2 ] w3, (κ3 bk κ2)bk κ1)))) ) [⊕ is nonexpansive, commutative, associative and idempotent]

≤ max{ sup_{s1,s2,s3∈Stat, ϕ∈F} d($.1, ∗.1), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.2, ∗.2),
sup_{s1,s2,s3∈Stat, ϕ∈F} d($.3, ∗.6), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.4, ∗.10),
sup_{s1,s2,s3∈Stat, ϕ∈F} d($.5, ∗.7), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.6, ∗.4),
sup_{s1,s2,s3∈Stat, ϕ∈F} d($.7, ∗.8), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.8, ∗.12),
sup_{s1,s2,s3∈Stat, ϕ∈F} d($.9, ∗.3), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.10, ∗.11),
sup_{s1,s2,s3∈Stat, ϕ∈F} d($.11, ∗.5), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.12, ∗.13),
sup_{s1,s2,s3∈Stat, ϕ∈F} d($.13, ∗.13), sup_{s1,s2,s3∈Stat, ϕ∈F} d($.7, ∗.9) }

We treat two typical cases. First, we show that sup_{s1,s2,s3∈Stat, ϕ∈F} d($.1, ∗.1) ≤ (1/2)·ε. Indeed:

sup_{s1,s2,s3∈Stat, ϕ∈F} d($.1, ∗.1)
= sup_{s1,s2,s3∈Stat, ϕ∈F} d([[s1]](λ(w1, κ1) . ϕ(w1, (κ1 bk [[s2]]) bk [[s3]])),
  [[s1]](λ(w1, κ1) . ϕ(w1, κ1 bk ([[s2]] k [[s3]]))))
[Lemma 4.7]
≤ sup_{s2,s3∈Stat, ϕ∈F} sup_{w∈W, κ∈KD} d(ϕ(w, (κ bk [[s2]]) bk [[s3]]), ϕ(w, κ bk ([[s2]] k [[s3]])))
[ϕ is nonexpansive]
≤ sup_{s2,s3∈Stat, κ∈KD} dK((κ bk [[s2]]) bk [[s3]], κ bk ([[s2]] k [[s3]]))
[KD = {κ0} ∪ {[[s′]] | s′ ∈ Stat}]
≤ max{ sup_{s2,s3∈Stat} dK((κ0 bk [[s2]]) bk [[s3]], κ0 bk ([[s2]] k [[s3]])),
  sup_{s′,s2,s3∈Stat} dK(([[s′]] bk [[s2]]) bk [[s3]], [[s′]] bk ([[s2]] k [[s3]])) }
[K = {κ0} + ((1/2)·D)]
≤ max{ sup_{s2,s3∈Stat} d_{(1/2)·D}([[s2]] k [[s3]], [[s2]] k [[s3]]),
  sup_{s′,s2,s3∈Stat} d_{(1/2)·D}(([[s′]] k [[s2]]) k [[s3]], [[s′]] k ([[s2]] k [[s3]])) }
= (1/2) · sup_{s′,s2,s3∈Stat} dD(([[s′]] k [[s2]]) k [[s3]], [[s′]] k ([[s2]] k [[s3]]))
≤ (1/2)·ε

We also show that sup_{s1,s2,s3∈Stat, ϕ∈F} d($.8, ∗.12) ≤ (1/2)·ε.

sup_{s1,s2,s3∈Stat, ϕ∈F} d($.8, ∗.12)

= sup_{s1,s2,s3∈Stat, ϕ∈F}

d([[s2 ]](λ(w2, κ2) . [[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ2 bk κ1)bk κ3)))),

[[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w2 ] w3, (κ2 bk κ3)bk κ1))))) [Lemma 4.7]

≤ sup_{s1,s2,s3∈Stat, ϕ∈F, w∈W, κ∈KD}

d([[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w ] w3, (κbk κ1)bk κ3))),

[[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w ] w3, (κbk κ3)bk κ1)))) [d is an ultrametric]

≤ sup_{s1,s2,s3∈Stat, ϕ∈F, w∈W, κ∈KD}

max{d([[s1 ]](λ(w1, κ1) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w ] w3, (κbk κ1)bk κ3))),

$$.1 [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w ] w3, (κbk κ1)bk κ3)))) ,

d([[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w ] w3, (κbk κ1)bk κ3))),

∗∗.1 [[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w ] w3, (κbk κ3)bk κ1)))) }

Let op′′K : K → ((K × K) → K) be given by op′′K(κ)(κ1, κ3) = (κ bk κ1) bk κ3. It is easy to see that op′′K is well-defined (nonexpansive in its arguments). Let also ϕ′′ ∈ F, ϕ′′(w′, κ′) = ϕ(w ] w′, κ′). Note that ϕ′′(w1 ] w3, op′′K(κ)(κ1, κ3)) = ϕ(w1 ] w ] w3, (κ bk κ1) bk κ3). We have:

sup_{s1,s2,s3∈Stat, ϕ∈F, w∈W, κ∈KD} max{$$.1, ∗∗.1}

[bk is commutative, Remark 4.3]
≤ sup_{s1,s2,s3∈Stat, ϕ∈F, w∈W, κ∈KD}
max{d([[s1]](λ(w1, κ1) . [[s3]](λ(w3, κ3) . ϕ′′(w1 ] w3, op′′K(κ)(κ1, κ3)))),
$$.2 [[s3]](λ(w3, κ3) . [[s1]](λ(w1, κ1) . ϕ′′(w1 ] w3, op′′K(κ)(κ1, κ3))))) ,

d([[s3 ]](λ(w3, κ3) . [[s1 ]](λ(w1, κ1) . ϕ(w1 ] w ] w3, (κ1 bk κ)bk κ3))),

∗∗.2 [[s3]](λ(w3, κ3) . [[s1]](λ(w1, κ1) . ϕ(w1 ] w ] w3, κ1 bk (κ bk κ3))))) }

Notice that $$.2 = 0, by Lemma 4.8. Also, by using Lemma 4.7 we obtain:

∗∗.2 ≤ sup_{s1,s2,s3∈Stat, ϕ∈F, w,w′,w′′∈W, κ,κ′,κ′′∈KD} d(ϕ(w ] w′ ] w′′, (κ′′ bk κ) bk κ′),

ϕ(w ] w′ ] w′′, κ′′ bk (κ bk κ′)))
[ϕ is nonexpansive, ϕ ∈ F]

∗∗.3 ≤ sup_{κ,κ′,κ′′∈KD} dK((κ′′ bk κ) bk κ′, κ′′ bk (κ bk κ′))

According to the definition of bk, if κ = κ0 or κ′ = κ0 or κ′′ = κ0 then ∗∗.3 = 0. Otherwise, recall that KD = {κ0} ∪ {[[s′]] | s′ ∈ Stat} (see Lemma 4.7). Hence we have:

∗∗.3 ≤ sup_{s,s′,s′′∈Stat} d_{(1/2)·D}(([[s′′]] bk [[s]]) bk [[s′]], [[s′′]] bk ([[s]] bk [[s′]]))

≤ (1/2) · sup_{s,s′,s′′∈Stat} dD(([[s′′]] bk [[s]]) bk [[s′]], [[s′′]] bk ([[s]] bk [[s′]])) ≤ (1/2)·ε

Therefore, we have sup_{s1,s2,s3∈Stat, ϕ∈F} d($.8, ∗.12) ≤ (1/2)·ε. The other cases can be handled similarly, and we obtain ε ≤ (1/2)·ε, which implies ε = 0. Therefore we obtain [[(s1 k s2) k s3]] = [[s1 k (s2 k s3)]], for any s1, s2, s3 ∈ Stat, which implies the property stated by Proposition 4.17(1), namely (s1 k s2) k s3 ' s1 k (s2 k s3), as required. We continue with Proposition 4.17(2). We have

[[(s1 b s2)b s3 ]] = λϕ . [[(s1 b s2)b s3 ]](ϕ)

= λϕ . [[s1 b s2 ]](λ(w, κ) . [[s3 ]](λ(w3, κ3) . ϕ(w ] w3, κbk κ3))) [] is associative]

##.1 = λϕ . [[s1]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . [[s3]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ1 bk κ2) bk κ3))))

Similarly:

[[s1 b (s2 b s3)]] = λϕ . [[s1 b (s2 b s3)]](ϕ)

= λϕ . [[s1 ]](λ(w1, κ1) . [[s2 b s3 ]](λ(w, κ) . ϕ(w1 ] w, κ1 bk κ)))

##.2 = λϕ . [[s1]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . [[s3]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, κ1 bk (κ2 bk κ3)))))

In the sequel we compute as follows:

sup_{s1,s2,s3∈Stat} d([[(s1 b s2) b s3]], [[s1 b (s2 b s3)]])

= sup_{s1,s2,s3∈Stat} d(##.1, ##.2)

= sup_{s1,s2,s3∈Stat, ϕ∈F}

d([[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ1 bk κ2)bk κ3)))),

[[s1 ]](λ(w1, κ1) . [[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, κ1 bk (κ2 bk κ3)))))) [Lemma 4.7]

≤ sup_{s2,s3∈Stat, ϕ∈F, w1∈W, κ1∈KD}

d([[s2 ]](λ(w2, κ2) . [[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ1 bk κ2)bk κ3))),

[[s2]](λ(w2, κ2) . [[s3]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, κ1 bk (κ2 bk κ3)))))
[Lemma 4.7]
≤ sup_{s3∈Stat, ϕ∈F, w1,w2∈W, κ1,κ2∈KD}

d([[s3 ]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, (κ1 bk κ2)bk κ3)),

[[s3]](λ(w3, κ3) . ϕ(w1 ] w2 ] w3, κ1 bk (κ2 bk κ3))))
[Lemma 4.7]
≤ sup_{ϕ∈F, w1,w2,w3∈W, κ1,κ2,κ3∈KD}

d(ϕ(w1 ] w2 ] w3, (κ1 bk κ2)bk κ3), ϕ(w1 ] w2 ] w3, κ1 bk (κ2 bk κ3))) [ϕ is nonexpansive]

≤ sup_{κ1,κ2,κ3∈KD} d((κ1 bk κ2) bk κ3, κ1 bk (κ2 bk κ3))
[Proposition 4.17(1)]
= 0

We recall that KD = {κ0} ∪ {[[s′]] | s′ ∈ Stat}, see Lemma 4.7. We obtain [[(s1 b s2) b s3]] = [[s1 b (s2 b s3)]], for any s1, s2, s3 ∈ Stat. Hence, we immediately infer the property stated by Proposition 4.17(2), namely (s1 b s2) b s3 ' s1 b (s2 b s3), as required. Proposition 4.17(3) follows without difficulty by using Proposition 4.17(2) and Lemma 4.9(a). By using Lemma 4.9(a), it is easy to show that [[s1 | (s2 | s3)]] = [[s1 b (s2 b s3)]] and [[(s1 | s2) | s3]] = [[(s1 b s2) b s3]]. Hence we obtain

[[s1 | (s2 | s3)]] = [[s1 b (s2 b s3)]] [Proposition 4.17(2)]

= [[(s1 b s2) b s3]] = [[(s1 | s2) | s3]]

which implies (s1 | s2) | s3 ' s1 | (s2 | s3), as required. Next, we handle Proposition 4.17(4). We have

[[(s1 bb s2)bb s3 ]]

= λϕ . [[s1 bb s2 ]](λ(w, κ) . ϕ(w, κbk [[s3 ]]))

##.3 = λϕ . [[s1]](λ(w1, κ1) . ϕ(w1, (κ1 bk [[s2]]) bk [[s3]]))

and

[[s1 bb (s2 k s3)]]

##.4 = λϕ . [[s1]](λ(w1, κ1) . ϕ(w1, κ1 bk [[s2 k s3]]))

Therefore

sup_{s1,s2,s3∈Stat} d([[(s1 bb s2) bb s3]], [[s1 bb (s2 k s3)]])

= sup_{s1,s2,s3∈Stat} d(##.3, ##.4)
[Lemma 4.7]

≤ sup_{s2,s3∈Stat, ϕ∈F, w1∈W, κ1∈KD} d(ϕ(w1, (κ1 bk [[s2]]) bk [[s3]]), ϕ(w1, κ1 bk [[s2 k s3]]))
[ϕ is nonexpansive]

≤ sup_{s2,s3∈Stat, κ1∈KD} d((κ1 bk [[s2]]) bk [[s3]], κ1 bk ([[s2]] k [[s3]]))
[Proposition 4.17(1)]
= 0

Therefore [[(s1 bb s2)bb s3 ]] = [[s1 bb (s2 k s3)]], for any s1, s2, s3 ∈ Stat, which implies the desired property stated by Proposition 4.17(4), namely (s1 bb s2)bb s3 ' s1 bb (s2 k s3). Finally, we prove Proposition 4.17(5). We have:

[[s1 | (s2 bb s3)]] [Lemma 4.9(a)]

= λϕ . [[s1 b (s2 bb s3)]](ϕ)

= λϕ . [[s1 ]](λ(w1, κ1) . [[s2 bb s3 ]](λ(w, κ) . ϕ(w1 ] w, κ1 bk κ)))

##.5 = λϕ . [[s1]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, κ1 bk (κ2 bk [[s3]]))))

and

[[(s1 | s2)bb s3 ]] = λϕ . [[(s1 | s2)bb s3 ]](ϕ)

= λϕ . [[s1 | s2]](λ(w, κ) . ϕ(w, κ bk [[s3]]))
[Lemma 4.9(a)]

= λϕ . [[s1 b s2 ]](λ(w, κ) . ϕ(w, κbk [[s3 ]]))

##.6 = λϕ . [[s1]](λ(w1, κ1) . [[s2]](λ(w2, κ2) . ϕ(w1 ] w2, (κ1 bk κ2) bk [[s3]])))

Therefore

sup_{s1,s2,s3∈Stat} d([[s1 | (s2 bb s3)]], [[(s1 | s2) bb s3]]) = sup_{s1,s2,s3∈Stat} d(##.5, ##.6)
[Lemma 4.7]
≤ sup_{s3∈Stat, ϕ∈F, w1,w2∈W, κ1,κ2∈KD} d(ϕ(w1 ] w2, κ1 bk (κ2 bk [[s3]])), ϕ(w1 ] w2, (κ1 bk κ2) bk [[s3]]))
[ϕ is nonexpansive]

= sup_{s3∈Stat, κ1,κ2∈KD} d(κ1 bk (κ2 bk [[s3]]), (κ1 bk κ2) bk [[s3]])
[Proposition 4.17(1)]
= 0

Hence [[s1 | (s2 bb s3)]] = [[(s1 | s2) bb s3]], for any s1, s2, s3 ∈ Stat, and thus s1 | (s2 bb s3) ' (s1 | s2) bb s3, as required. □

The behavior of a computation [[s]](ϕ) cannot be characterized solely by the set of laws stated in Theorem 4.15 and Proposition 4.17, which are all independent of the continuation parameter ϕ. In general, the semantics of interaction depends on the continuation ϕ. All successful interactions, and all deadlock situations, can be modeled based on the interaction multiset that is transmitted as a parameter to a continuation. In the following sections we present denotational semantics for a couple of languages obtained by providing concrete instantiations of the continuation parameter, which in turn depends on the mapping interact, defining the behavior of interaction multisets.

Remark 4.18 There are benefits resulting from the use of continuations in concurrency semantics. No branching structure is needed. The final yield of the denotational semantics can be a simple linear time domain. In continuation semantics one can design denotational models for concurrency that produce exactly the desired observables assembled in linear models, rather than complex branching structures incorporating communication attempts and silent steps as in classic resumption-based models [30, 5]. A couple of examples are provided in subsequent sections. We can also mention some of our previous works [10, 43]. The terminology “linear time” versus “branching time” is often used in temporal logic and denotational semantics (see, e.g., [5]). Intuitively, an element of a linear time domain is a collection of traces. An element of a branching time domain is a tree-like structure whose nodes represent nondeterministic choice points. However, in continuation semantics it may be more difficult to design fully abstract denotational models of concurrent languages. The correctness condition of full abstractness raises no special difficulty [40]. However, in continuation semantics some properties can be established only for computations denotable by program statements; see Remark 5.6. Hence, it may be difficult to achieve the completeness condition of full abstractness. The full abstractness problem was raised by Milner [24].

5 Continuation Semantics for Imperative Concurrent Languages

In this section we study imperative concurrent languages where the semantics of elementary statements is defined based on a set (σ ∈)Σ of states. The final semantic domain will be (r ∈)R = Σ → P, where (p ∈)P is a linear time domain. The domain P is defined as the (unique) solution of the domain equation given below, where the set Σ is endowed with the discrete metric, which is an ultrametric. The solution for P is a complete ultrametric space [5].

(p ∈)P = Pnco(Q)
(q ∈)Q ≅ {ε} + {δ} + (Σ × (1/2)·Q)

ε models the empty sequence. δ is used to model deadlock. Instead of (σ1, (σ2, . . . (σn, ε) . . .)), (σ1, (σ2, . . . (σn, δ) . . .)) and (σ1, (σ2, . . .)), we write σ1σ2 . . . σn, σ1σ2 . . . σnδ and σ1σ2 . . ., respectively. We use the following notations: σ · q = (σ, q) and σ · p = {σ · q | q ∈ p}, for any σ ∈ Σ, p ∈ P. Note that d(σ · p1, σ · p2) = (1/2) · d(p1, p2) [5].

We introduce an operator + : P × P → P, p1 + p2 = {q | q ∈ p1 ∪ p2, q ≠ δ} ∪ {δ | δ ∈ p1 ∩ p2}. This definition expresses that p1 + p2 blocks only if both p1 and p2 block. The operator + is well-defined, nonexpansive, associative, commutative and idempotent [5]. The operator for nondeterministic choice ⊕ : R × R → R is defined as: (r1 ⊕ r2) = λσ . (r1(σ) + r2(σ)). It is easy to check that ⊕ is also well-defined, nonexpansive, associative, commutative and idempotent. We also use the notation: σ′ · r = λσ . {σ′ · q | q ∈ r(σ)} (∈ R), for any σ′ ∈ Σ, r ∈ R.
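For finite sets of finite traces, the operator + on P can be computed directly. The sketch below is our own finite approximation of the domain P: a "process" is a set of trace tuples, and the bare deadlock trace δ survives only when both arguments deadlock.

```python
# Illustrative finite sketch of the choice operator on the linear time
# domain P: traces are tuples of state names, possibly ending in the
# deadlock marker 'δ'; the bare trace ('δ',) plays the role of δ itself.

DELTA = ("δ",)

def choice(p1, p2):
    """p1 + p2 = {q in p1 ∪ p2 | q != δ} ∪ {δ | δ in p1 ∩ p2}."""
    non_delta = {q for q in p1 | p2 if q != DELTA}
    both_block = {DELTA} if (DELTA in p1 and DELTA in p2) else set()
    return non_delta | both_block

p_dead = {("δ",)}                    # immediate deadlock
p_run  = {("σ1", "σ2")}              # one terminating trace
print(choice(p_dead, p_run))         # {('σ1', 'σ2')}: deadlock discarded
print(choice(p_dead, p_dead))        # {('δ',)}: blocks only if both block
```

Note that a trace such as σ1δ (a computation that deadlocks after one step) is not the bare δ and is therefore always kept, exactly as the definition prescribes.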

5.1 A Language with Multiparty Interactions

We consider a CSP-like language LMSYN extended with a mechanism of communication and synchronization on multiple channels inspired by the Join calculus [17]. The multiparty interaction mechanism incorporated in LMSYN was introduced in [42] and subsequently investigated in [10].5 LMSYN provides the same operators as L and two elementary statements for synchronous interaction on multiple channels: a send statement c!e and a join pattern (c1?v1& ··· &cn?vn). n + 1 statements c1!e1, . . . , cn!en and (c1?v1& ··· &cn?vn) executed synchronously by concurrent processes can interact as follows. The values of the expressions e1, . . . , en are transmitted concurrently along the channels c1, . . . , cn from the processes that execute the send statements (ci!ei, 1 ≤ i ≤ n) to the process that executes the join pattern (c1?v1& ··· &cn?vn). The value of each expression ei is transmitted along the channel ci and assigned to the corresponding variable vi. The whole interaction behaves like a distributed multi-assignment. When n = 1, the interaction is a binary (point-to-point) communication. LMSYN also provides a multi-assignment statement (v1, ··· , vn := e1, ··· , en), which allows several variables to be assigned in parallel, e.g., as in Python. LMSYN evaluates all the expressions e1, . . . , en in the right-hand side of the multi-assignment, then it matches the values of the expressions with the corresponding variables in the left-hand side. The value of each expression ei is assigned to the corresponding variable vi. We assume as given a set (v ∈)V of variables, a set (c ∈)Ch of communication channels, and a set (e ∈)Exp of expressions (without side effects).

Definition 5.1 (Syntax of LMSYN) The syntax of LMSYN comprises the following components:

(a) (Join patterns) j (∈ J) ::= c?v | j&j
(b) (Elementary statements) a (∈ A) ::= stop | (v1, ··· , vn := e1, ··· , en) | c!e | j
(c) (Statements) s (∈ Stat) ::= a | y | s;s | s + s | s k s | s | s | s bb s | s b s

To be valid, the channels c1, . . . , cn and the variables v1, . . . , vn of a join pattern j = (c1?v1& ··· &cn?vn) must be pairwise distinct. Also, the variables v1, . . . , vn of a multi-assignment (v1, ··· , vn := e1, ··· , en) must be pairwise distinct, and the lists of variables v1, . . . , vn and expressions e1, . . . , en must have the same length. The sets of guarded statements, declarations, and programs for LMSYN are defined as in definition 3.1.

We obtain a denotational semantics for LMSYN by customizing certain components of the semantic model introduced in section 4. The evaluation of expressions is modeled by a given valuation E[[·]] : Exp → Σ → Val, where (ξ ∈)Val is some set of values and (σ ∈)Σ = V → Val is the set of states. The set Act of actions for LMSYN is Act = { stop } ∪ (V × Ξ)+ ∪ (Ch × Ξ) ∪ (Ch × V)+, where (ξ ∈)Ξ = Σ → Val. (V × Ξ)+ (respectively, (Ch × V)+) is the set of all finite and non-empty sequences over V × Ξ (respectively, Ch × V). For easier readability, we denote typical elements (v1, ξ1) ··· (vn, ξn) ∈ (V × Ξ)+,

5 However, the languages investigated in [42, 10] lack the multi-assignment statement incorporated in LMSYN .

(c, ξ) ∈ (Ch × Ξ), and (c1, v1) ··· (cn, vn) ∈ (Ch × V)+ by (v1 := ξ1, ··· , vn := ξn), c!ξ, and (c1?v1& ··· &cn?vn), respectively. The mapping i : A → Act is given by: i(stop) = stop, i(v1, ··· , vn := e1, ··· , en) = (v1 := E[[e1]], ··· , vn := E[[en]]) ∈ (V × Ξ)+, i(c!e) = c!E[[e]] ∈ (Ch × Ξ), and i(c1?v1& ··· &cn?vn) = (c1?v1& ··· &cn?vn) ∈ (Ch × V)+. The mapping interact : W → Θ, where W = [Act] and (θ ∈)Θ = {↑} ∪ (Σ → Σ), behaves as follows: interact([(v1 := ξ1, ··· , vn := ξn)]) = λσ . [σ | v1 ↦ ξ1(σ) | ··· | vn ↦ ξn(σ)] = interact([c1!ξ1, . . . , cn!ξn, (c1?v1& ··· &cn?vn)]), for any v1, . . . , vn ∈ V, ξ1, . . . , ξn ∈ Ξ, c1, . . . , cn ∈ Ch, and interact(w) = ↑, for any other w ∈ W. Any assignment action can always be executed independently. Also, any n + 1 actions c1!ξ1, . . . , cn!ξn and (c1?v1& ··· &cn?vn) occurring in parallel processes can interact synchronously. No other interaction is possible in LMSYN. We recall that the elements of a multiset are not ordered; for example, [c!ξ, c?v] = [c?v, c!ξ]. We use the symbol ↑ to express the impossibility of an interaction between a multiset of actions. In particular, note that the action stop cannot interact with any other action(s): if w ∈ W and stop ∈ w then interact(w) = ↑ (in the semantic model we use the symbol δ). Θ = {↑} ∪ (Σ → Σ) is the set of interaction abstractions (see section 3). In the case of imperative languages a (successful) interaction abstraction is a state transformation mapping describing the effect of an interaction between a multiset of concurrent actions. We can now define an initial continuation ϕ0 ∈ F as the fixed point of an appropriate higher-order mapping; the domain definitions are as in section 4, only the domain R = Σ → P is specific.
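The behavior of interact just described can be sketched in Python, representing states as dictionaries, Ξ-values as functions from states to values, and ↑ as `None`. The tuple encodings of actions (`"assign"`, `"send"`, `"join"`, `"stop"`) are our own illustration, not part of the paper's formalism.

```python
from collections import Counter

UP = None  # stands for ↑: no interaction is possible

def interact(w):
    """w: a multiset (Counter) of actions.  Hypothetical encodings:
       ("assign", ((v, xi), ...))  multi-assignment, xi : Σ -> Val
       ("send", c, xi)             c!e, with xi playing E[[e]]
       ("join", ((c, v), ...))     join pattern (c1?v1 & ... & cn?vn)
       ("stop",)                   never interacts
    Returns a state transformer Σ -> Σ on success, or UP."""
    acts = list(w.elements())
    # a lone multi-assignment action always executes independently
    if len(acts) == 1 and acts[0][0] == "assign":
        return lambda s: {**s, **{v: xi(s) for v, xi in acts[0][1]}}
    # n sends plus one join pattern on pairwise matching channels
    joins = [a for a in acts if a[0] == "join"]
    sends = {a[1]: a[2] for a in acts if a[0] == "send"}
    if len(joins) == 1 and len(sends) == len(acts) - 1:
        pattern = joins[0][1]
        if {c for c, _ in pattern} == set(sends):
            return lambda s: {**s, **{v: sends[c](s) for c, v in pattern}}
    return UP  # e.g. any multiset containing the action stop

# two sends and a matching join pattern: a distributed multi-assignment
w = Counter([("send", "c1", lambda s: s["x"] + 1),
             ("send", "c2", lambda s: 7),
             ("join", (("c1", "v1"), ("c2", "v2")))])
theta = interact(w)
# theta({"x": 1}) behaves as the multi-assignment v1, v2 := x+1, 7
```

As in the formal definition, a multiset containing stop, or any mismatched combination of sends and joins, yields ↑.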

Definition 5.2 We define ϕ0 ∈ F by ϕ0 = fix(Ψ), where Ψ : F → F is given by:

Ψ(ϕ)(w, κ0) = λσ . {δ}, if interact(w) = ↑
Ψ(ϕ)(w, κ0) = λσ . {σ′}, if interact(w) = θ ≠ ↑, σ′ = θ(σ)

Ψ(ϕ)(w, φ) = λσ . {δ}, if interact(w) = ↑
Ψ(ϕ)(w, φ) = λσ . (σ′ · φ(ϕ))(σ′), if interact(w) = θ ≠ ↑, σ′ = θ(σ)

We also define D[[·]] : LMSYN → R by D[[s]] = [[s]](ϕ0). Ψ is a contraction and has a unique fixed point, in particular due to the "σ′ ·" step in its definition.

Lemma 5.3 The mapping Ψ is well defined and ½-contractive (in ϕ).

Proposition 5.4 is easily established. It can be used to describe compositionally the behavior of D[[s]], which evaluates an LMSYN program s with respect to the initial (empty) continuation ϕ0. We write s1 ∼ s2 to express that D[[C(s1)]] = D[[C(s2)]], for all contexts C.

Proposition 5.4 For any s, s1, s2 ∈ Stat, v1, . . . , vn ∈ V, e1, . . . , en ∈ Exp, c1, . . . , cn ∈ Ch, and u ∈ UStat:6

s1 ' s2 ⇒ s1 ∼ s2
(A6) s + stop ∼ s
(A7) stop ; s ∼ s
(MCF1) c1!e1 | ··· | cn!en | (c1?v1& ··· &cn?vn) ∼ (v1, ··· , vn := e1, ··· , en)
(MCF2) u ∼ stop, if interact(iU(u)) = ↑

By using Lemma 4.14 we can also prove the following:

Lemma 5.5 For any s, s1, s2 ∈ Stat, and u ∈ UStat:

D[[u]] = λσ . {δ}, if interact(iU(u)) = ↑
D[[u]] = λσ . {σ′}, if interact(iU(u)) = θ ≠ ↑, σ′ = θ(σ)

D[[u; s]] = λσ . {δ}, if interact(iU(u)) = ↑
D[[u; s]] = λσ . (σ′ · D[[s]])(σ′), if interact(iU(u)) = θ ≠ ↑, σ′ = θ(σ)

D[[s1 + s2]] = D[[s1]] ⊕ D[[s2]]

6 Note that interact(iU(c1!e1 | ··· | cn!en | (c1?v1& ··· &cn?vn))) ≠ ↑.

Note that D[[(v1, ··· , vn := e1, ··· , en)]] = D[[c1!e1 | ··· | cn!en | (c1?v1& ··· &cn?vn)]] = λσ . {[σ | v1 ↦ E[[e1]](σ) | ··· | vn ↦ E[[en]](σ)]}. Also, D[[stop ; s]] = D[[stop]] = λσ . {δ}, for any s ∈ Stat. By using Proposition 5.4 and Remark 4.16, for any non-recursive statement (closed term) s ∈ Stat there is a basic interaction term t ∈ TStat such that s ∼ t. Next, one can use the properties of D[[·]] stated by Lemma 5.5, which are appropriate for evaluating basic interaction terms. For example:

D[[c!e k c?v]] = D[[(c!e bb c?v) + (c?v bb c!e) + (c!e | c?v)]]
= D[[(c!e ; c?v) + (c?v ; c!e) + (c!e | c?v)]]
= D[[stop + stop + (c!e | c?v)]]
= D[[c!e | c?v]]
= λσ . {[σ | v ↦ E[[e]](σ)]}

As a slightly more complicated example, one can check that D[[c1!e1 k c2!e2 k (c1?v1&c2?v2)]] = λσ . {[σ | v1 ↦ E[[e1]](σ) | v2 ↦ E[[e2]](σ)]}.

Remark 5.6 According to Lemma 4.9, we have [[s1]] b [[s2]] = [[s2]] b [[s1]], for any s1, s2 ∈ Stat. However, φ b φδ = λϕ . λσ . {ε} ≠ λϕ . λσ . {δ} = φδ b φ, where φ = λϕ . λσ . {ε} and φδ = λϕ . λσ . {δ}. One can easily verify that φ, φδ ∈ D, but there is no L statement s ∈ Stat such that [[s]] = φ or [[s]] = φδ. Lemma 4.9 is needed in the proof of Theorem 4.15. In general, the properties stated by Theorem 4.15 hold only for computations denotable by program statements.

5.2 Binary Communication

Starting from LMSYN it is straightforward to obtain a CSP-like language with (only) binary communication. In the syntax definition 5.1 of LMSYN we replace the general multi-assignment statement with the simple assignment v := e, and the join pattern with the receive statement c?v. The constructions c!e and c?v are now as in Occam [28]. Next, the mapping interact : W → Θ specializes as follows: interact([v := ξ]) = λσ . [σ | v ↦ ξ(σ)] = interact([c!ξ, c?v]), and interact(w) = ↑, for any other w ∈ W. Here, (θ ∈)Θ = {↑} ∪ (Σ → Σ), and W = [Act], where Act = { stop } ∪ (V × Ξ) ∪ (Ch × Ξ) ∪ (Ch × V). The mapping i : A → Act is given by: i(v := e) = (v, E[[e]]) ∈ (V × Ξ), i(c?v) = (c, v) ∈ (Ch × V), i(c!e) = (c, E[[e]]) ∈ (Ch × Ξ), and i(stop) = stop. We call this language restricted to binary communication LSYN. The other components of the denotational semantics remain unchanged. The denotational model of LSYN satisfies all concurrency laws stated in Theorem 4.15. However, we note that the interaction statements u of the form u = a1 | ··· | an, with n > 2, all block in LSYN, a property known as handshaking [4]. Obviously, the properties stated by Theorem 4.15 still hold if we replace each u ∈ UStat by a ∈ A in CM2, CM3, CM5-CM7 (because A ⊆ UStat). The properties A6 and A7 given in Proposition 5.4 are also satisfied for LSYN, and MCF1 and MCF2 specialize to the following:

(CF1) c!e | c?v ∼ v := e
(CF2) u ∼ stop, if interact(iU(u)) = ↑

for any u ∈ UStat, c ∈ Ch, v ∈ V, e ∈ Exp. Now consider the set (a ∈)A of elementary statements in LSYN as a set of atomic actions and (following the notation in [4]) a partial communication function γ : A × A → A, given by γ(c!e, c?v) = γ(c?v, c!e) = (v := e), and γ(a1, a2) undefined otherwise. According to Proposition 5.4 (specialized for LSYN), the relation ∼ inherits all properties (A1-A5 and CM1-CM9) of the relation ', given in Theorem 4.15. Also, note that the collection of laws A1-A7, CF1, CF2, CM1-CM9 satisfied by ∼ coincides with the set of axioms of the process algebra ACP for synchronous communication (for this specific communication function γ, and without encapsulation) [4].
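The partial communication function γ admits a direct sketch. As before, the tuple encodings of the actions (`"send"`, `"recv"`, `"assign"`) are hypothetical, and the undefinedness of γ is represented by `None`:

```python
def gamma(a1, a2):
    """ACP-style partial communication function for L_SYN:
    gamma(c!e, c?v) = gamma(c?v, c!e) = (v := e), undefined otherwise.
    Hypothetical encodings: ("send", c, e), ("recv", c, v),
    ("assign", v, e)."""
    for snd, rcv in ((a1, a2), (a2, a1)):  # gamma is symmetric
        if snd[0] == "send" and rcv[0] == "recv" and snd[1] == rcv[1]:
            return ("assign", rcv[2], snd[2])  # v := e
    return None  # no communication on this pair of actions

print(gamma(("send", "c", "e"), ("recv", "c", "v")))  # ('assign', 'v', 'e')
print(gamma(("recv", "c", "v"), ("send", "d", "e")))  # None: channel mismatch
```

Since γ is only defined on matching send/receive pairs, any attempt to combine three or more actions has no defined communication result, which mirrors the handshaking property noted above.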

6 Continuation Semantics for a Nature-Inspired Formalism

We employ the domain of continuations introduced in section 4 in defining a denotational semantics for a language LDNA inspired by DNA computing. LDNA is derived from the combinatorial strand algebra P, introduced in [8]. For a discussion concerning the relevance of the P formalism for DNA computing the reader is referred to [8]. Our present aim is to investigate this area of DNA computing by using a method well established in the tradition of programming language semantics, viz. denotational semantics. The elementary components in LDNA are called signals and gates. We use a countable set (x ∈)X for the class of signals. A gate is a pair of multisets of signals, written as [x1, . . . , xn].[x1′, . . . , xm′] in [8]. We use a slightly different notation, adding enclosing angle brackets: ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩. We define the class of gates by (g ∈)G = ([X] \ {[]}) × [X]. Note that the input part [x1, . . . , xn] of a gate ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩ cannot be the empty multiset.

Definition 6.1 We define the set (a ∈)A of elementary components by: a ::= x | g. The syntax of (P ∈)LDNA is given below (where n ∈ N):

P ::= 0 | a | P^n | P k P

0 is the inert component. An elementary component a ∈ A may be a signal x ∈ X or a gate g ∈ G. P1 k P2 is the parallel composition of P1 and P2. P^n is a construction for finite populations, an abbreviation for n concurrent copies of P, with P^0 = 0 and P^(n+1) = P k P^n, for any n ∈ N. Note that we use the semantic notion of a multiset in the syntax definition of LDNA. We could easily make a complete separation between syntax and semantics, e.g., by modeling a gate as a pair of sequences of signals (described in BNF as purely syntactic entities) rather than multisets. Instead, we use multisets because the order of signals in a gate is irrelevant. LDNA is similar to LMSYN in that it incorporates an interaction mechanism based on join synchronization [17], but the ability to join and fork signals (explained below) and to combine signals and gates into populations is specific to the strand process algebras given in [8]. When a gate ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩ is combined with n concurrent signals x1, . . . , xn, the multiset [x1, . . . , xn] behaves as a join pattern [17]. The gate joins the signals x1, . . . , xn, forks the signals x1′, . . . , xm′, and is consumed in the interaction [8]. The behavior of such a system is expressed formally in [8] by a binary reaction relation → (⊆ P × P) and the following rule, where g = ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩:

x1 k . . . k xn k g → x1′ k . . . k xm′

In the transition presented above the gate g captures the information that is processed in such an interaction (it contains all signals that are joined and forked in the interaction). Following [43] we design a continuation-based denotational semantics where G (the set of gates) is used as the set of observable items in the definition of the final domain R, given below. LDNA is obtained from P by replacing the construction for unbounded populations P* (incorporated in P) with the construction for finite populations P^n.
P* can be used to generate an arbitrary number of concurrent copies of P. For simplicity, we restrict ourselves to finite populations. Apart from this, LDNA is identical to P. A denotational semantics designed with continuations for the (complete) language P introduced in [8] is provided in [43]. The denotational model given in [43] is designed by using a domain for continuations similar to the one that we employ in this paper, and it requires an additional fixed point construction (for handling unbounded populations) that we do not present here. The denotational model for LDNA can be based on the semantic framework introduced in section 4. The class of actions is Act = A, and i : A → Act, i(a) = a, for any a ∈ A. We take W = [Act] and (θ ∈)Θ = {↑} ∪ G. We define the mapping interact : W → Θ by interact([x1, . . . , xn, g]) = g if g = ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩, i.e., if the multiset of signals x1, . . . , xn matches the input part of the gate g, and interact(w) = ↑ otherwise. We use the following final semantic domain:

(r ∈) R = Pnco(Q)
(q ∈) Q ≅ {ε} + (G × ½ · Q)

ε models the empty sequence (there is no notion of abnormal termination or deadlock in LDNA [8, 10]). We use the same notation for sequences as in section 5, writing, e.g., g1g2 . . . instead of (g1, (g2, . . .)). Also, we use the notations: g · q = (g, q) and g · r = {g · q | q ∈ r}, for any g ∈ G, q ∈ Q and r ∈ R. We define the operator ⊕ : R × R → R by r1 ⊕ r2 = {q | q ∈ r1 ∪ r2, q ≠ ε} ∪ {ε | ε ∈ r1 ∩ r2}. The semantic domains (φ ∈)D, (ϕ ∈)F, (κ ∈)K, and the semantic operators for parallel composition bk and k are defined as in section 4. We use the notation κ^0 = κ0 and κ^(n+1) = κ bk κ^n, for any κ ∈ K, n ∈ N. Let φ(·) : A → D be given by φ(a) = λϕ . ϕ([i(a)], κ0). The denotational semantics [[·]] : LDNA → K is defined (inductively) by the following equations:

[[0]] = κ0
[[a]] = φ(a)
[[P^n]] = [[P]]^n
[[P1 k P2]] = [[P1]] bk [[P2]]
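The reaction relation → described earlier admits a simple operational reading: a gate fires when the available signals contain its input multiset, consuming those signals and the gate itself and releasing the output signals. A Python sketch of one reaction step, with our own encoding of gates as pairs of tuples:

```python
from collections import Counter

def react(signals, gate):
    """One LDNA reaction: if the input part of the gate is contained
    in the multiset of available signals, consume those inputs (and
    the gate itself) and fork the outputs; otherwise no reaction."""
    inputs, outputs = Counter(gate[0]), Counter(gate[1])
    if any(signals[x] < n for x, n in inputs.items()):
        return None  # join pattern not matched
    return signals - inputs + outputs

# catalytic gate <[x, x1].[x2, x]>: rewrites one x1 to x2, regenerating x
gate = (("x", "x1"), ("x2", "x"))
pool = Counter(["x", "x1", "x1"])
print(react(pool, gate))  # one x1 rewritten: multiset {x1, x2, x}
```

Since the gate is consumed, rewriting several copies of x1 requires several copies of the gate, which is what the populations P^n provide.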

We write P1 ' P2 to express that [[C(P1)]] = [[C(P2)]], where C is a syntactic context for LDNA, an element of the class C ::= • | 0 | a | C^n | C k C, and the substitution C(P) is defined by structural induction: •(P) = P, 0(P) = 0, a(P) = a, C^n(P) = C(P)^n, and (C1 k C2)(P) = C1(P) k C2(P). The following properties can be established: P k 0 ' P, P1 k P2 ' P2 k P1, and (P1 k P2) k P3 ' P1 k (P2 k P3), for any P, P1, P2, P3 ∈ LDNA. We also define D[[·]] : LDNA → R as follows: D[[P]] = {ε} if [[P]] = κ0, and D[[P]] = [[P]](ϕ0) otherwise; here ϕ0 = fix(Ψ) is the initial continuation defined as the fixed point of the higher-order mapping Ψ : F → F (intuitively, Ψ is a contraction due to the "g·" step in its definition [5]):

Ψ(ϕ)(w, κ0) = {ε}, if interact(w) = ↑
Ψ(ϕ)(w, κ0) = {g}, if interact(w) = g = ⟨[x1, . . . , xn].[]⟩ ∈ G
Ψ(ϕ)(w, κ0) = g · (φ(x1′) k ··· k φ(xm′))(ϕ), if interact(w) = g = ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩ ∈ G

Ψ(ϕ)(w, φ) = {ε}, if interact(w) = ↑
Ψ(ϕ)(w, φ) = g · φ(ϕ), if interact(w) = g = ⟨[x1, . . . , xn].[]⟩ ∈ G
Ψ(ϕ)(w, φ) = g · (φ k φ(x1′) k ··· k φ(xm′))(ϕ), if interact(w) = g = ⟨[x1, . . . , xn].[x1′, . . . , xm′]⟩ ∈ G

Example 6.2 Let P = x1 k ⟨[x1].[x1′]⟩ k x2 k ⟨[x2].[x2′]⟩, and P′ = x k ⟨[x, x1].[x2, x]⟩^3 k (x1)^3. The subterm x k ⟨[x, x1].[x2, x]⟩^3 of P′ is a catalytic system, ready to transform multiple x1 (in this example 3 instances of x1) to x2, with catalyst x [8]. One can check the following:

D[[P]] = {⟨[x1].[x1′]⟩⟨[x2].[x2′]⟩, ⟨[x2].[x2′]⟩⟨[x1].[x1′]⟩}
D[[P′]] = {⟨[x, x1].[x2, x]⟩⟨[x, x1].[x2, x]⟩⟨[x, x1].[x2, x]⟩}
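These trace sets can be cross-checked with a brute-force search that fires enabled gates in every possible order until no reaction remains, with gates as the observables, as in the model above. This Python sketch (function names and encodings are ours, and it covers only finite populations) reproduces both denotations of Example 6.2:

```python
from collections import Counter

def traces(signals, gates):
    """All maximal gate-firing sequences of a finite population:
    a gate is enabled if its input multiset is contained in the
    available signals; firing consumes the inputs and the gate and
    releases the outputs."""
    signals, gates = Counter(signals), Counter(gates)
    enabled = [g for g in gates
               if all(signals[x] >= n for x, n in Counter(g[0]).items())]
    if not enabled:
        return {()}  # terminated: contribute the empty suffix
    result = set()
    for g in enabled:
        rest = traces(signals - Counter(g[0]) + Counter(g[1]),
                      gates - Counter([g]))
        result |= {(g,) + t for t in rest}
    return result

g1 = (("x1",), ("x1'",))          # <[x1].[x1']>
g2 = (("x2",), ("x2'",))          # <[x2].[x2']>
cat = (("x", "x1"), ("x2", "x"))  # <[x, x1].[x2, x]>

# P: the two independent gates can fire in either order
print(traces(["x1", "x2"], [g1, g2]))
# P': the three catalytic reactions happen one after another
print(traces(["x", "x1", "x1", "x1"], [cat, cat, cat]))
```

For P the search yields the two interleavings of the two gates, and for P′ a single trace firing the catalytic gate three times, matching the trace sets displayed above.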

7 Conclusion

For an abstract concurrent language L providing a general mechanism of interaction between multisets of distributed actions we defined a denotational semantics designed in continuation-passing style (CPS). The language L provides the same operators as the Algebra of Communicating Processes (ACP) of Bergstra and Klop [6] without encapsulation; in addition, L provides an operator that we name left synchronization merge, specific to the continuation-based approach introduced in this paper. Next, we investigated various mechanisms for binary and multiparty interaction. By customizing the behavior of continuations we obtained denotational models designed in CPS for a couple of concurrent languages: two imperative languages based on Hoare's CSP and a formalism for DNA computing introduced by Cardelli. Following [5] we chose to use the mathematical framework of complete metric

spaces for our semantic investigation. By using standard techniques from metric semantics we proved that the semantic operators designed in CPS satisfy the basic laws of concurrent systems. In particular, we established the semantic properties of the operators for synchronous interaction designed in CPS. For concurrent processes communicating by handshake we established a collection of laws which coincides with the set of axioms of the ACP theory [6] for synchronous communication without encapsulation. In the near future we will study the formal relationship between the denotational models given in this paper and corresponding operational semantics, also by using techniques from metric semantics. Next, we intend to investigate the formal relation between the CPS-based approach presented in this paper and other approaches to concurrency semantics. In particular, we will investigate the formal relationship between CPS and the direct approach to concurrency semantics, where the semantic operators for parallel composition are not designed with continuations [30, 5].

References

[1] A. Alexandru, G. Ciobanu. Mathematics of multisets in the Fraenkel-Mostowski framework. Bull. Math. Soc. Sci. Math. Roumanie, tome 58(106), no.1, pp. 3-18, 2015.

[2] P. America, J.J.M.M. Rutten. Solving Reflexive Domain Equations in a Category of Complete Metric Spaces. J. of Comp. Syst. Sci, vol. 39(3), pp. 343–375, 1989.

[3] A.W. Appel. Compiling with Continuations. Cambridge University Press, 1992.

[4] J.C.M. Baeten, W.P. Weijland. Process Algebra. Cambridge Univ. Press, 1990.

[5] J.W. de Bakker, E.P. de Vink. Control Flow Semantics. MIT Press, 1996.

[6] J.A. Bergstra, J.W. Klop. Process Algebra for Synchronous Communication. Information and Control, vol. 60, pp. 109–137, 1984.

[7] F.S. de Boer, J.N. Kok, C. Palamidessi, J.J.M.M. Rutten. A Paradigm for Asynchronous Commu- nication and its Application to Concurrent Constraint Programming. In K.R. Apt, J.W. de Bakker, J.J.M.M. Rutten, editors, Logic Progr. Languages: Constraints, Functions and Objects, pp. 82–114, MIT Press, 1993.

[8] L. Cardelli. Strand Algebras for DNA Computing. Natural Computing, vol. 10(1), pp. 407–428, 2011.

[9] G. Ciobanu, E.N. Todoran, Continuation Semantics for Asynchronous Concurrency, Fundamenta Informaticae, vol. 131(3-4), pp. 373–388, 2014.

[10] G. Ciobanu, E.N. Todoran. Continuation Semantics for Concurrency with Multiple Channels Com- munication. In Proc. 17th International Conference on Formal Engineering Methods 2015 (ICFEM 2015), LNCS, vol. 9407, pp. 400–416, Springer, 2015.

[11] K. Claessen. A Poor Man's Concurrency Monad. Journal of Functional Programming, vol. 9(3), pp. 313–323, 1999.

[12] E. Cooper, S. Lindley, P. Wadler, and J. Yallop. Links: Web Programming Without Tiers. In Proc. 5th International Symposium on Formal Methods for Components and Objects, pp. 266–296, LNCS vol. 4709, Springer-Verlag, 2007.

[13] O. Danvy, and A. Filinski. Representing Control: a Study of the CPS Transformation. Mathematical Structures in Computer Science, vol. 2(4), pp. 361–391, 1992.

[14] R.K. Dybvig and R. Hieb. Engines from Continuations. Computer Languages, vol. 14(2), pp. 109– 123, 1989.

[15] M. Felleisen. The Theory and Practice of First-Class Prompts. In Proceedings of the 15th Annual ACM Symposium on Principles of Programming Languages, pp. 180–190, ACM Press, 1988.

[16] M. Felleisen, M. Wand, D.P. Friedman, and B. Duba. Abstract Continuations: a Mathematical Semantics for Handling Full Functional Jumps. In Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, 1988.

[17] C. Fournet and G. Gonthier. The Join calculus: A Language for Distributed Mobile Programming. LNCS vol. 2395, pp. 268–332, Springer-Verlag, 2002.

[18] C.T. Haynes, D.P. Friedman, and M. Wand. Obtaining Coroutines with Continuations. Computer Languages, vol. 11(3/4), pp. 143–153, 1986.

[19] R. Hieb, R.K. Dybvig and C.W. Anderson. Subcontinuations. Lisp and Symbolic Computation, vol. 7(1), pp. 83–110, 1994.

[20] C.A.R. Hoare. Communicating Sequential Processes. Prentice Hall, 1985.

[21] P. Leger and H. Fukuda, Using Continuations and Aspects to Tame Asynchronous Programming on the Web. In Companion Proc. 15th International Conference on Modularity, pp. 79–82, ACM Press, 2016.

[22] R. Kelsey, W. Clinger and J. Rees, editors, Revised Report on the Algorithmic Language Scheme, 1998. Available from http://swiss.csail.mit.edu/projects/scheme/.

[23] A. Mazurkiewicz. Proving Algorithms by Tail Functions. Information and Control, vol. 18, pp. 220–226, 1971.

[24] R. Milner. Fully Abstract Models of Typed λ-Calculi. Theoretical Computer Science, vol. 4, pp. 1– 22, 1977.

[25] L. Moreau and C. Queinnec. Partial Continuations as the Difference of Continuations: A Du- umvirate of Control Operators. In Proceedings of the 6th International Symposium on Program- ming Languages Implementation and Logic Programming, LNCS vol. 844, pp. 182–197, Springer- Verlag, 1994.

[26] P.D. Mosses. Denotational Semantics. In Handbook of Theoretical Computer Science (vol. B), pp. 575–631, 1990.

[27] P.D. Mosses. Programming Language Description Languages: From Christopher Strachey to Se- mantics Online. In P. Boca et al., editors, Formal Methods: State of the Art and New Directions, pp. 249–273, Springer-Verlag, 2010.

[28] INMOS Ltd. Occam Programming Manual. Prentice Hall, 1984.

[29] S. Peyton Jones, J. Hughes, Editors. Report on the Programming Language Haskell 98: a Non-Strict Purely Functional Language. 1999, available at http://www.haskell.org

[30] G. Plotkin. A Powerdomain Construction. SIAM Journal of Computing, vol. 5(3), pp. 452-487, 1976.

[31] C. Queinnec. The Influence of Browsers on Evaluators or, Continuations to Program Web Servers. In Proceedings of the 5th ACM SIGPLAN International Conference on Functional Programming, pp. 23-33, ACM Press, 2000.

[32] C. Queinnec. Inverting Back the Inversion of Control or, Continuations Versus Page-centric Pro- gramming. SIGPLAN Notices, vol. 38(2), pp. 57–64, 2003.

[33] C. Queinnec and B. Serpette. A Dynamic Extent Control Operator for Partial Continuations. In Proc. 18th Annual ACM Symposium on Principles of Programming Languages, pp. 174–184, ACM Press, 1991.

[34] J.C. Reynolds. The Discoveries of Continuations. Lisp and Symbolic Computation, vol. 6(3–4), pp. 233–247, 1993.

[35] Seaside Available from http://www.seaside.st/.

[36] D. Sitaram and M. Felleisen. Control Delimiters and their Hierarchies. Lisp and Symbolic Compu- tation, vol. 3(1), pp. 67–99, 1990.

[37] Standard ML of New Jersey. Available from https://www.smlnj.org/.

[38] C. Strachey and C.P. Wadsworth. Continuations: A Mathematical Semantics for Handling Full Jumps. Higher-Order and Symbolic Computation, vol. 13(1/2), pp. 135–152, 2000. Reprint of the technical monograph PRG-11, Oxford University Computing Laboratory (1974), with a foreword.

[39] R.D. Tennent. The Denotational Semantics of Programming Languages. Communications of the ACM, vol. 19(8), pp. 437–453, 1976.

[40] E.N. Todoran. Metric Semantics for Synchronous and Asynchronous Communication: A Continuation-based Approach. Electronic Notes in Theoretical Computer Science, vol. 28, pp. 119– 146, Elsevier, 2000.

[41] E.N. Todoran, N. Papaspyrou. Continuations for Parallel Logic Programming. In Proc. 2nd International ACM-SIGPLAN Conference on Principles and Practice of Declarative Programming (PPDP 2000), pp. 257–267, ACM Press, 2000.

[42] E.N. Todoran. Comparative Semantics for Modern Communication Abstractions. Proc. 4th IEEE Int. Conf. on Intelligent Computer Communication and Processing (ICCP 2008), pp. 153–160, 2008.

[43] E.N. Todoran, N. Papaspyrou. Experiments with Continuation Semantics for DNA Computing. Proc. 9th IEEE Int. Conf. on Intelligent Computer Communication and Processing (ICCP 2013), pp. 251–258, 2013.

[44] UnCommon Web Available from https://common-lisp.net/project/ucw/.

[45] P. Wadler. The Essence of Functional Programming. In Proceedings of the 19th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '92), pp. 1–14, 1992.

[46] Wand, M. Continuation-Based Multiprocessing. Higher-Order and Symbolic Computation, vol. 12(3), pp. 285–299, 1999. Reprint from the Proceedings of the 1980 ACM Conference on LISP and Functional Programming, with a foreword.

[47] Web Applications in Racket Available from http://docs.racket-lang.org/web-server/.

[48] Weblocks Available from https://common-lisp.net/project/cl-weblocks/.

[49] http://ftp.utcluj.ro/pub/users/gc/eneia/fi17/
