Simulation and Modelling

Tony Field and Jeremy Bradley, Room 372. Email: [email protected]

Department of Computing, Imperial College London

Produced with prosper and LaTeX

SM [11/06] – p. 1/47

PEPA: Stochastic algebra

PEPA is a language for describing systems which have underlying continuous-time Markov chains. PEPA is useful because:

it is a formal, algebraic description of a system
it is compositional
it is parsimonious (succinct)
it is easy to learn!
it is used in research and in industry

Tool Support

PEPA has several methods of execution and analysis, through comprehensive tool support:

PEPA Workbench: Edinburgh
Möbius: Urbana-Champaign, Illinois
PRISM: Birmingham
ipc: Imperial College London

Types of Analysis

Steady-state and transient analysis in PEPA:

PEPA model:

A1 def= (start, r1).A2 + (pause, r2).A3
A2 def= (run, r3).A1 + (fail, r4).A3
A3 def= (recover, r1).A1
AA def= (run, ⊤).(alert, r5).AA
Sys def= AA ⋈_{run} A1

[Figure: transient probability of occupying state X1 plotted against time t, converging to the steady-state value]

Passage-time Quantiles

Extract a passage-time density from a PEPA model:

A1 def= (start, r1).A2 + (pause, r2).A3
A2 def= (run, r3).A1 + (fail, r4).A3
A3 def= (recover, r1).A1
AA def= (run, ⊤).(alert, r5).AA
Sys def= AA ⋈_{run} A1

PEPA Syntax

Syntax:

P ::= (a, λ).P | P + P | P ⋈_L P | P/L | A

Action prefix: (a, λ).P
Competitive choice: P1 + P2
Cooperation: P1 ⋈_L P2
Action hiding: P/L
Constant label: A

Prefix: (a, λ).A

Prefix is used to describe a process that evolves from one state to another by emitting or performing an action.

Example: P def= (a, λ).A means that the process P evolves with rate λ to become process A, by emitting an a-action. λ is an exponential rate parameter.

This can also be written:

P --(a,λ)--> A
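A quick way to see what the rate parameter means: the delay before the a-action fires is exponentially distributed with rate λ, so its mean is 1/λ. A small illustrative sketch in plain Python (function name is my own, not from any PEPA tool):

```python
import random

def mean_prefix_delay(lam, n=100_000, seed=1):
    """Estimate the mean delay of the prefix (a, lam).A by sampling
    the exponential firing time n times. Should approach 1/lam."""
    rng = random.Random(seed)
    return sum(rng.expovariate(lam) for _ in range(n)) / n
```

For example, with λ = 2 the estimated mean delay is close to 0.5.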

Choice: P1 + P2

PEPA uses a type of choice known as competitive choice.

Example: P def= (a, λ).P1 + (b, µ).P2 means that P can evolve either to produce an a-action with rate λ or to produce a b-action with rate µ.

In state-transition terms:

P --(a,λ)--> P1
P --(b,µ)--> P2

Choice: P1 + P2

P def= (a, λ).P1 + (b, µ).P2

This is competitive choice since the two activities are in a race condition – the first one to perform an a or a b will dictate the direction of choice for P1 + P2.

What is the probability that we see an a-action?
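A hint for the question above: the two exponential delays race, and for competing exponentials P(a wins) = λ/(λ + µ). An illustrative sketch in Python that checks this by simulation (function names are my own):

```python
import random

def prob_a_analytic(lam, mu):
    """P(a fires before b) for competing exponential rates lam, mu."""
    return lam / (lam + mu)

def prob_a_simulated(lam, mu, n=200_000, seed=0):
    """Estimate the same probability by sampling both delays and
    counting how often the a-delay is shorter."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(lam) < rng.expovariate(mu)
               for _ in range(n))
    return wins / n
```

With λ = 2 and µ = 1 both approaches give P(a) ≈ 2/3.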

Cooperation: P1 ⋈_L P2

⋈_L defines concurrency and communication within PEPA. The L in P1 ⋈_L P2 defines the set of actions over which the two components are to cooperate.

Any other actions that P1 and P2 can do, not mentioned in L, can happen independently.

If a ∈ L and P1 enables an a, then P1 has to wait for P2 to enable an a before the cooperation can proceed. An easy source of deadlock!

Cooperation: P1 ⋈_L P2

If P1 --(a,λ)--> P1′ and P2 --(a,⊤)--> P2′ then:

P1 ⋈_{a} P2 --(a,λ)--> P1′ ⋈_{a} P2′

⊤ represents a passive rate which, in the cooperation, inherits the λ-rate from P1.

If both rates are specified and the only a-evolutions allowed from P1 and P2 are P1 --(a,λ)--> P1′ and P2 --(a,µ)--> P2′, then:

P1 ⋈_{a} P2 --(a,min(λ,µ))--> P1′ ⋈_{a} P2′

Cooperation: P1 ⋈_L P2

The general cooperation case is where:

P1 enables m a-actions
P2 enables n a-actions

...at the moment of cooperation, in which case there are mn possible transitions for P1 ⋈_{a} P2, each of the form:

P1 ⋈_{a} P2 --(a,R)--> ···

where

R = (λ / r_a(P1)) · (µ / r_a(P2)) · min(r_a(P1), r_a(P2))

and r_a(P) = Σ_{P --(a,ri)-->} ri is the apparent rate of action a – the total rate at which P can do a.
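The apparent-rate formula above can be sketched directly. This is an illustrative Python fragment (names are my own, not from any PEPA tool); rates1 and rates2 hold the rates of the individual a-transitions each component enables, and lam, mu are the rates of the particular pair of a-transitions being combined:

```python
def apparent_rate(rates):
    """r_a(P): the total rate at which P can perform an a-action,
    summed over every a-transition P enables."""
    return sum(rates)

def coop_rate(lam, mu, rates1, rates2):
    """Rate R of one combined a-transition of P1 |><|{a} P2:
    R = (lam/r_a(P1)) * (mu/r_a(P2)) * min(r_a(P1), r_a(P2))."""
    ra1, ra2 = apparent_rate(rates1), apparent_rate(rates2)
    return (lam / ra1) * (mu / ra2) * min(ra1, ra2)
```

With m = n = 1 the expression collapses to min(λ, µ), agreeing with the simple cooperation rule on the previous slide.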

Simplified Cooperation: P1 ⋈_L P2

An approximation to pairwise cooperation:

If P1 --(a,⊤)--> P1′ and P2 --(a,⊤)--> P2′ then:
P1 ⋈_L P2 --(a,⊤)--> P1′ ⋈_L P2′

If P1 --(a,λ)--> P1′ and P2 --(a,⊤)--> P2′, or P1 --(a,⊤)--> P1′ and P2 --(a,λ)--> P2′, both give:
P1 ⋈_L P2 --(a,λ)--> P1′ ⋈_L P2′

If P1 --(a,λ)--> P1′ and P2 --(a,µ)--> P2′ then:
P1 ⋈_L P2 --(a,min(λ,µ))--> P1′ ⋈_L P2′

Hiding: P/L

Hiding is used to turn observable actions in P into hidden or silent actions in P/L. L defines the set of actions to hide.

If P --(a,λ)--> P′ then:

P/{a} --(τ,λ)--> P′/{a}

τ is the silent action. Hiding is used to hide complexity and create a component interface. Cooperation on τ is not allowed.

Constant: A

Used to define component labels, as in: P def= (a, λ).P′ and Q def= (q, µ).W. Here P, P′, Q and W are all constants.

PEPA: A Transmitter-Receiver

System def= (Transmitter ⋈_∅ Receiver) ⋈_{transmit, receive} Network

Transmitter def= (transmit, λ1).(t_recover, λ2).Transmitter
Receiver def= (receive, ⊤).(r_recover, µ).Receiver
Network def= (transmit, ⊤).(delay, ν1).(receive, ν2).Network

A simple transmitter-receiver over a network

T-R: Global state space

[Figure: global state space of the transmitter-receiver system]

Expansion law for 2 Components

Consider P1 ⋈_L P2 where P1 --(a1,r1)--> P1′ and P2 --(a2,r2)--> P2′.

There are four cases: a1, a2 ∉ L; a1 = a2 ∈ L; a1 ∈ L, a2 ∉ L; and a1 ∉ L, a2 ∈ L:

P1 ⋈_L P2 = (a1, r1).(P1′ ⋈_L P2) + (a2, r2).(P1 ⋈_L P2′)   if a1, a2 ∉ L
P1 ⋈_L P2 = (a1, min(r1, r2)).(P1′ ⋈_L P2′)                 if a1 = a2 ∈ L
P1 ⋈_L P2 = (a1, r1).(P1′ ⋈_L P2)                           if a1 ∉ L, a2 ∈ L
...

Possible Evolutions of 2 Cpts

Consider P1 ⋈_L P2 where P1 --(a1,r1)--> P1′ and P2 --(a2,r2)--> P2′.

a1, a2 ∉ L:      P1 ⋈_L P2 --(a1,r1)--> P1′ ⋈_L P2
a1, a2 ∉ L:      P1 ⋈_L P2 --(a2,r2)--> P1 ⋈_L P2′
a1 ∉ L, a2 ∈ L:  P1 ⋈_L P2 --(a1,r1)--> P1′ ⋈_L P2
a1 ∈ L, a2 ∉ L:  P1 ⋈_L P2 --(a2,r2)--> P1 ⋈_L P2′
a1 = a2 ∈ L:     P1 ⋈_L P2 --(a1,min(r1,r2))--> P1′ ⋈_L P2′
a1 ≠ a2, a1, a2 ∈ L:  P1 ⋈_L P2 has no transition
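This case analysis can be turned into a small enumerator. A sketch in Python (my own illustrative code, not a PEPA tool), assuming each transition is an (action, rate, successor) triple and each shared action is enabled at most once per side, so the combined rate is min(r1, r2):

```python
def coop_transitions(p1, t1, p2, t2, L):
    """Enumerate the transitions of p1 |><|L p2.

    p1, p2  : names of the current component states
    t1, t2  : their transitions as (action, rate, successor) triples
    L       : the cooperation set
    Returns (action, rate, (succ1, succ2)) triples of the composite."""
    out = []
    for a, r, s in t1:                    # P1 moves independently
        if a not in L:
            out.append((a, r, (s, p2)))
    for a, r, s in t2:                    # P2 moves independently
        if a not in L:
            out.append((a, r, (p1, s)))
    for a1, r1, s1 in t1:                 # shared action: both move
        for a2, r2, s2 in t2:
            if a1 == a2 and a1 in L:
                out.append((a1, min(r1, r2), (s1, s2)))
    return out
```

For example, if P1 enables (a, 2.0) and P2 enables (a, 3.0) and an independent (b, 1.0), cooperation over {a} yields exactly a shared a-transition at rate 2.0 and an independent b-transition.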

Extracting the CTMC

So how do we get a Markov chain from this? Once we have enumerated the global states, we map each PEPA state onto a CTMC state. The transitions of the global state space become transitions of the CTMC generator matrix.

Any self-loops are ignored in the generator matrix – why?
Any multiple transitions have their rates summed in the generator matrix – why?

Extracting the CTMC (2)

For example, if P1 ⋈_L P2 --(a,λ)--> P1 ⋈_L P2′:

1. Enumerate all the states and assign them numbers:
   ···
   3: P1 ⋈_L P2
   4: P1 ⋈_L P2′
   ···
2. Construct Q by setting q34 = λ in this case.
3. If another transition with rate µ is discovered from state 3 to state 4, then q34 becomes λ + µ.

Extracting the CTMC (3)

4. Ignore any transitions from state i to state i

5. Finally set qii = −Σ_{j≠i} qij
6. Now the sum of every row of Q should be 0
7. To solve for the steady state of Q:

~πQ = ~0

find the elements of ~π = (π1,π2,...πn) using the additional constraint that:

π1 + π2 + ··· + πn = 1
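Steps 5–7 can be sketched numerically. Assuming NumPy is available, one common trick is to replace one balance equation of ~πQ = ~0 with the normalisation constraint (the code below is an illustrative sketch, not part of any PEPA tool):

```python
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0 subject to sum(pi) = 1.

    pi Q = 0 is equivalent to Q^T pi^T = 0; that system is rank
    deficient, so we overwrite one equation with sum(pi) = 1."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0          # last equation becomes: pi_1 + ... + pi_n = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For a two-state chain with Q = [[−2, 2], [1, −1]] this gives ~π = (1/3, 2/3).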

Voting Example I

System def= (Voter || Voter || Voter) ⋈_{vote} ((Poler ⋈_L Poler) ⋈_{L′} Poler_group_0)

where L = {recover_all}
and L′ = {recover, break, recover_all}

Voting Example II

Voter def= (vote, λ).(pause, µ).Voter
Poler def= (vote, ⊤).(register, γ).Poler + (break, ν).Poler_broken
Poler_broken def= (recover, τ).Poler + (recover_all, ⊤).Poler

Voting Example III

Poler_group_0 def= (break, ⊤).Poler_group_1
Poler_group_1 def= (break, ⊤).Poler_group_2 + (recover, ⊤).Poler_group_0
Poler_group_2 def= (recover_all, δ).Poler_group_0

M/M/2/3 Queue

From the tutorial sheet, you are asked to design an M/M/2/3 queue in PEPA. There are two possible architectures depending on the type of M/M/2 queue:

fully parallel client – a client can be processed by as many servers as are available concurrently
fully serial client – a client is allocated to a particular server and dealt with solely by that server until complete

M/M/2/3 Queue: Parallel Client

Arrival def= (arrive, λ).Arrival
Server1 def= (service, µ).Server1
Server2 def= (service, µ).Server2
Buff0 def= (arrive, ⊤).Buff1
Buff1 def= (arrive, ⊤).Buff2 + (service, ⊤).Buff0
Buff2 def= (arrive, ⊤).Buff3 + (service, ⊤).Buff1
Buff3 def= (service, ⊤).Buff2

Sysp def= Arrival ⋈_L (Buff0 ⋈_M (Server1 || Server2))

where L = {arrive}, M = {service}

M/M/2/3 Queue: Parallel Client

(Server1 || Server2) is shorthand notation for (Server1 ⋈_∅ Server2).

The model Sysp would be analogous to having a single server serving at twice the rate, i.e. 2µ. So why not have a single server serve at rate 2µ? Because separate servers allow us to model breakdowns or heterogeneous servers, i.e. in some way give each server individual behaviour:

Serveri def= (service, µ).Serveri + (break, γ).(recover, χ).Serveri

M/M/2/3 Queue: Serial Client

A client is allocated to a particular server (the first one free), e.g. post-office counters.

Arrival def= (arrive,λ).Arrival

Server1 def= (to_server, ⊤).(service, µ).Server1
Server2 def= (to_server, ⊤).(service, µ).Server2

B0 def= (arrive, ⊤).B1
B1 def= (arrive, ⊤).B2 + (to_server, ρ).B0
B2 def= (arrive, ⊤).B3 + (to_server, ρ).B1
B3 def= (to_server, ρ).B2

M/M/2/3 Queue: Serial Client

Compose servers with B components

B0 ⋈_{to_server} (Server1 || Server2)

Now 3 customers can arrive in succession initially, but while 2 are being serviced a further 2 customers could arrive – making 5, i.e. not strictly an M/M/2/3 queue. We need a further counting process, Buff, to check the buffer is not exceeded.

M/M/2/3 Queue: Serial Client

Buff0 def= (arrive, ⊤).Buff1
Buff1 def= (arrive, ⊤).Buff2 + (service, ⊤).Buff0
Buff2 def= (arrive, ⊤).Buff3 + (service, ⊤).Buff1
Buff3 def= (service, ⊤).Buff2

Now the overall composed process looks like:

Arrival ⋈_{arrive} (Buff0 ⋈_{arrive, service} (B0 ⋈_{to_server} (Server1 || Server2)))

Really Simple Syndication – RSS

Subscription-driven publishing: clients subscribe to a feed and periodically fetch it. The result might be cached.

RSS Publish–Subscribe System

System equation for RSS PubSub system:

RSS_System def= (RSS_Client[N_C] ⋈_L RSS_Server[N_S, M]) ⋈_L RSS_Network[N_N]

where:
L = {subscribe, unsubscribe, rss_poll, rss_update, rss_cache_hit}
M = {new_feed, rss_refresh}

Notation: Group Cooperation

For a component A, we can express N copies of A in parallel (no inter-component cooperation):

A[N] = A || A || ··· || A   (N copies)

For a component A, we can express N copies of A in cooperation with each other over the set of actions L:

A[N, L] = A ⋈_L A ⋈_L ··· ⋈_L A   (N copies)

RSS Client

RSS_Client def= (subscribe, λ1).RSS_Client_n
RSS_Client_n def= (rss_poll, λ3).RSS_Client_1 + (unsubscribe, λ2).RSS_Client
RSS_Client_c def= (rss_poll, λ4).RSS_Client_1 + (unsubscribe, λ2).RSS_Client
RSS_Client_1 def= (rss_update, ⊤).RSS_Client_n
             + (rss_cache_hit, ⊤).RSS_Client_c
             + (unsubscribe, λ2).RSS_Client

An RSS client actively subscribes to RSS feeds. Once subscribed, it polls at regular intervals for new content.

RSS Client

A polling RSS client receives a faster response if the feed is cached at the server (through the rss_cache_hit action) – see the RSS server description. At any stage an RSS client can unsubscribe from the feed.

RSS Server

RSS_Server def= (subscribe, ⊤).RSS_Server
           + (unsubscribe, ⊤).RSS_Server
           + (rss_poll, ⊤).RSS_Server_2
           + (new_feed, µ1).RSS_Server_1
RSS_Server_1 def= (rss_refresh, µ2).RSS_Server
RSS_Server_2 def= (rss_update, µu).RSS_Server + (rss_cache_hit, µc).RSS_Server

An RSS server receives subscription events (subscribe, unsubscribe) and poll requests for updates (rss_poll), and sends updates to clients. It also acts on changes to the backend feed database (new_feed, rss_refresh).

RSS Server

An RSS server sees a feed refresh at regular intervals (rss_refresh). It would be hard (it would require a huge amount of extra state) to keep track of which individual RSS clients needed an update rather than a cached copy – so this is a useful abstraction.

Here µc > µu represents cached data being faster to access than fetching fresh data from the feed database. Again there is no explicit notion of tracking whether the feed has expired – we just use a probabilistic model of the relative frequency of fresh updates versus cached responses.

RSS Network

RSS_Network def= (subscribe, ⊤).RSS_Network_1
            + (unsubscribe, ⊤).RSS_Network_1
            + (rss_poll, ⊤).RSS_Network_1
            + (rss_update, ⊤).RSS_Network_1
            + (rss_cache_hit, ⊤).RSS_Network_1
RSS_Network_1 def= (net_recover, γ).RSS_Network

An RSS network is an abstraction layer which allows actions to be transmitted from server to client but has a recovery period (net_recover) between actions.

RSS Network

For a single RSS_Network component, if it is busy performing a net_recover action, it is not able to enable the other actions (rss_poll, rss_cache_hit, etc.). This corresponds to a network busy period.

There are N_N RSS_Network components in the system acting in parallel (not communicating with each other) – used to simulate network capacity, i.e. N_N simultaneous connections.

Probing a model

One way of asking questions of an SPA model is to construct a probe. A probe is an SPA component, Q, which acts (cooperates) on the model in question, M, and changes state on seeing key actions a ∈ L in the model: M ⋈_L Q. When constructing such a probe, it is important not to unintentionally disable or alter the timing of actions in the model – i.e. the probe should be behaviourally and temporally neutral.

Probe for RSS model

In the RSS example a probe which could be used to measure the time between subscribe and unsubscribe actions might be:

Q def= (subscribe, ⊤).Q′ + (unsubscribe, ⊤).Q
Q′ def= (unsubscribe, ⊤).Q + (subscribe, ⊤).Q′

in Q ⋈_L RSS_System where L = {subscribe, unsubscribe}

Steady-state reward vectors

Reward vectors are a way of relating the analysis of the CTMC back to the PEPA model. A reward vector is a vector, ~r, which expresses a looked-for property in the system, e.g. utilisation, loss, delay, mean buffer length. To find the reward value of this property at steady state, we need to calculate: reward = ~π · ~r

Constructing reward vectors

Typically reward vectors match the states where particular actions are enabled in the PEPA model

Client def= (use, ⊤).(think, µ).Client
Server def= (use, λ).(swap, γ).Server
Sys def= Client ⋈_{use} Server

There are 4 states – enumerated as 1 : (C, S), 2 : (C′, S′), 3 : (C, S′) and 4 : (C′, S)

Constructing reward vectors

If we want to measure server usage in the system, we would reward states in the global state space where the action use is enabled or active. Only state 1: (C, S) enables use.

So we set r1 = 1 and ri = 0 for 2 ≤ i ≤ 4, giving ~r = (1, 0, 0, 0). These are typical action-enabled rewards, where the result of ~r · ~π is a probability.
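Putting the pieces together for this Client/Server example: a sketch (assuming NumPy; the state ordering follows the enumeration above, and the function name is my own) that builds the generator, solves for ~π and applies the reward ~r = (1, 0, 0, 0):

```python
import numpy as np

def utilisation(lam, mu, gamma):
    """Server utilisation for Sys = Client |><|{use} Server.

    States: 1:(C,S)  2:(C',S')  3:(C,S')  4:(C',S).
    use fires at rate lam (client side passive), think at mu, swap at gamma."""
    Q = np.array([
        [-lam,         lam,        0.0,    0.0],   # (C,S)  --use-->  (C',S')
        [0.0, -(mu + gamma),        mu,  gamma],   # (C',S') thinks or swaps
        [gamma,        0.0,     -gamma,    0.0],   # (C,S')  --swap--> (C,S)
        [mu,           0.0,        0.0,    -mu],   # (C',S) --think--> (C,S)
    ])
    A = Q.T.copy()
    A[-1, :] = 1.0                  # normalisation: sum(pi) = 1
    b = np.zeros(4)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    r = np.array([1.0, 0.0, 0.0, 0.0])  # reward only where use is enabled
    return pi @ r
```

With λ = µ = γ = 1 the utilisation works out to 0.4.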

Mean Occupation as a Reward

Quantities such as mean buffer size can also be expressed as rewards

B0 def= (arrive, λ).B1
B1 def= (arrive, λ).B2 + (service, µ).B0
B2 def= (arrive, λ).B3 + (service, µ).B1
B3 def= (service, µ).B2

For this M/M/1/3 queue, the number of states is 4.

Mean Occupation as a Reward

Having a reward vector which reflects the number of elements in the queue will give the mean buffer occupation for M/M/1/3, i.e. set ~r = (0, 1, 2, 3) such that:

mean buffer size = ~π · ~r = Σ_{i=0}^{3} πi ri
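The whole calculation for the M/M/1/K queue fits in a few lines. A sketch assuming NumPy (the function name is my own):

```python
import numpy as np

def mm1k_mean_occupation(lam, mu, k=3):
    """Mean buffer occupation of an M/M/1/K queue (states 0..K)
    via the reward vector r = (0, 1, ..., K)."""
    Q = np.zeros((k + 1, k + 1))
    for i in range(k):
        Q[i, i + 1] = lam              # arrive: B_i -> B_{i+1}
        Q[i + 1, i] = mu               # service: B_{i+1} -> B_i
    for i in range(k + 1):
        Q[i, i] = -Q[i].sum()          # diagonal makes rows sum to 0
    A = Q.T.copy()
    A[-1, :] = 1.0                     # normalisation: sum(pi) = 1
    b = np.zeros(k + 1)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    r = np.arange(k + 1)               # reward i in state B_i
    return pi @ r
```

With λ = µ the steady-state distribution is uniform over the four states, giving a mean occupation of (0 + 1 + 2 + 3)/4 = 1.5.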
