
Stochastic Process Models for Packet/Analytic-Based Network Simulations

Robert G. Cole, George Riley, Derya Cansever and William Yurcik

R. G. Cole is with the Applied Physics Laboratory, Johns Hopkins University, Baltimore, MD. Phone: +1 443 778–6951, e-mail: [email protected]
G. Riley is with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA. Phone: +1 404 463–1774, e-mail: [email protected]
D. Cansever is the Program Director of Advanced Internetworking, SI International, Inc. Phone: +1 703 234–6960, e-mail: [email protected]
W. Yurcik is with the Department of Computer Science, University of Texas-Dallas, Dallas, TX. Phone: +1 309 531–1570, e-mail: [email protected]

Abstract— We present our preliminary work that develops a new approach to hybrid packet/analytic network simulations for improved network simulation fidelity, scale, and simulation efficiency. Much work in the literature addresses this topic, including [10] [11] [8] [12] [13] and others. Current approaches rely upon models which we refer to in this paper as Deterministic Fluid Models [9] [12] to address the analytic modeling aspects of these hybrid simulations. Instead, we draw upon an extensive literature on stochastic models of queues and (eventually) networks of queues to implement a hybrid stochastic model/packet network simulation. We will refer to our approach as Stochastic Fluid Models throughout this paper. We outline our approach, present test cases, and present simulation results comparing the measured queue metrics from our approach for hybrid simulation to those of a deterministic fluid model hybrid simulation and a full packet-level simulation. We also discuss plans for future areas of research on this approach.

Index Terms— Hybrid Simulation Models, Event Driven Simulations, Network Models, Stochastic Queues and Diffusions.

I. Introduction

Consider a network model as found in Figure 1. The network model is comprised of nodes and communications links. Associated with each communications link is a queue. Traffic can arrive and depart the network model at each node in the network. Each node also carries internal traffic which is forwarded throughout the network between the traffic source and destination nodes.

Fig. 1. An example network of queues with internal and external arrival and departure traffic. [Each network queue receives summed internal load plus external arrivals and feeds external departures.]

In hybrid simulation models, some of the traffic is handled via analytic methods and some of the traffic is handled via more CPU and memory intensive discrete-event, packet-level handling. The more traffic modelled analytically, the more efficient the simulation becomes (in the general case), and thus the simulation can scale to larger networks. Typically, there is a distinction drawn between foreground traffic, which is of primary interest, and background traffic, which exists solely to provide competition to the foreground traffic being measured. In the hybrid approach discussed here, analytic models are used to estimate the impact of the background traffic on the foreground traffic, which is handled explicitly via discrete-event handling with full packet-level detail. In the remainder of this paper, we will assume that the analytically modeled traffic represents, in some sense, background traffic and that the explicitly handled packet traffic is foreground traffic. We will use the terms background traffic and foreground traffic to distinguish between the analytically modeled versus the explicitly handled event packet traffic. Of course, other ways to divide up the analytically modeled and packet handled traffic are possible.

There are two interesting aspects of the analytic modeling in the context of a network of queues. One relates to the allocation and estimation of the intermediate load generated by the traffic at the nodes within the network, based upon the assumed routing patterns, total estimated external traffic loads, finite queue sizes and associated network internal packet losses. The other aspect relates to methods of mixing, at a given queue, the analytically modeled background traffic with the explicitly handled foreground traffic. Our focus in this paper is the latter; hence we concentrate on methods to mix the analytically modeled traffic with the event-driven, packet-level traffic at a given queue. A future paper will concentrate on the network equations.

The remainder of this paper is organized as follows: In the next section we discuss previous methods for hybrid network simulation which rely upon deterministic fluid model approximations to queue dynamics. In Section III we present our approach based upon methods in stochastic queue dynamics. In Section IV we develop our hybrid equations for a specific instance of a stochastic queue model, i.e., a model based upon solutions to the Fokker-Planck equation. In Section V we report on our initial simulation investigations comparing three test cases: one based upon pure event-driven packet-level simulation, one based upon hybrid deterministic fluid/packet simulation and one based upon our hybrid stochastic process/packet simulation. In Section VI we discuss conclusions and future investigations.

II. Deterministic Models

Most current approaches to performance speedup for network simulations involving hybrid event-analytic simulation rely on analytic models based upon deterministic Fluid Flow Approximations (FFAs), e.g., [7] [4] [10] [11] [8] [13]. We refer to these approaches as Deterministic Models because the evolution of the queue dynamics modeling the background analytic traffic is assumed to be a deterministic process captured in the form of differential equations. The fluid dynamics is derived from the integration of these equations. In some cases the differential equations are solvable, e.g., fixed arrival rate models, and numerical integration of the differential equations is not necessary. Both cases address variable arrival rates: one through time-dependent variables in the differential equations [9] [13] and one through discrete-event fluid models where rate changes are propagated throughout the network via events [4] [11] [7].

Two approaches to using deterministic analytic models for hybrid simulations are found in the literature. One approach, used in [13], divides the network into a hierarchy consisting of a high-speed core and a lower-speed edge. In the core, all traffic is modeled analytically and in the edge all traffic is modeled via packet-level, event-driven simulation. Packet traffic traversing the core is converted to fluid load on the core and is then converted back to packet-level traffic at the far edge.

Another approach, used in [4] [11] and [8], is to treat some of the traffic throughout the network analytically and then to develop methods to explicitly mix the analytic and packet traffic at each multiplexing point throughout the network. Our interests are in this latter approach.

A. Deterministic/Packet Mixing Equations

Here we develop the simplest method, making the fewest assumptions, to mix deterministic traffic with packet traffic. There exist methods to improve the fidelity of our example deterministic model, e.g., [4] [11] [8], but these improvements require a priori knowledge of the behavior of the foreground traffic, which in general is not known. This choice is made here for several reasons. First, we wish to make an equivalent comparison to our stochastic models presented below in order to assess their ability to negate a reliance on a priori traffic knowledge. Second, our main focus in this paper is to assess aspects of the viability of the stochastic modeling and mixing approach against full packet-level simulations.

Assume that λd is the average deterministically modeled traffic rate into the queue over a time averaging interval δ. We assume that δ is large compared to a typical packet transmission time but small compared to the total simulated time. Note that throughout this paper we assume the parameters describing the arrival processes, both deterministic and stochastic, to be fixed. Let µ be the server rate at the queue, i.e., the bandwidth of the link serving it. Let w(t) be the total workload in the queue at time t, and let w−(ti) and w+(ti) be the total workload in the queue just prior to and just following (respectively) a distinguished time event, e.g., the arrival of packet i at time ti. Let wi be the work carried by the ith packet to the queue. Finally, let wmax be the maximum amount of work (or backlog) in the queue due to its finite size.

Figure 2 gives a pictorial representation of the method used to mix analytically modeled background traffic with explicitly handled packet traffic. Individual packet arrival events are indicated along the bottom axis of the figure. These packets carry with them a given workload, which is a function of their packet size and the link bandwidth serving the queue in question. If the packet is allowed to enter the queue, then the workload in the queue jumps from w−(ti) to w+(ti), where the difference is the workload, wi, carried by the packet. In between packet arrivals, the workload evolves deterministically. As mentioned, we assume the queue dynamics evolve at a constant rate between packet arrivals (as long as the workload does not exceed the maximum wmax or drop below the minimum of zero) given by the difference λd − µ, which is fixed over the averaging time δ.

Fig. 2. An illustration of the hybrid deterministic modeling approach for mixing traffic at a queue. [Work versus time: the backlog jumps from w−(ti) to w+(ti) by wi, the work carried by packet i, at each arrival, and drifts with slope λd − µ between arrivals, bounded above by wmax.]

The explicit handling of the packet traffic and the process for updating the impact of the background traffic are then as follows:

• If, during the current δ time averaging period, λd < µ, i.e., the background traffic does not by itself overflow the server, then for each packet arrival bringing wi work to the queue, do the following tasks:
  – Increment the packet arrival counter i at the queue.
  – Update the queue backlog just prior to the packet arrival time ti as
        w_-(t_i) \leftarrow \max\left[\, w_+(t_{i-1}) + (\lambda_d - \mu)(t_i - t_{i-1}),\; 0 \,\right]    (1)
  – If wi < wmax − w−(ti), then accept the packet, update the queue workload as w+(ti) = w−(ti) + wi and schedule the packet departure event at time
        t_i^{service} = w_-(t_i)/\mu + t_i    (2)
  – Else, discard the packet and set w+(ti) = w−(ti).
• Else λd ≥ µ. Here, the background fluid (which is assumed constant) is continually overflowing the queue. Hence, we have that w(t) = wmax and no packet traffic is allowed into the queue. Therefore, the model has the foreground traffic experiencing 100% packet loss.
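To make the per-arrival update concrete, the following sketch applies Equations (1) and (2) at a single queue. It is an illustration under our own assumptions (work measured in bits, rates in bits per second, and names of our choosing); it is not the code used for the results in Section V.

```cpp
#include <algorithm>
#include <iostream>

// Deterministic hybrid mixing at one queue (illustrative sketch).
// Work is in bits; lambdaD and mu are in bits/second.
struct DetHybridQueue {
  double wPlus;    // w+(t_{i-1}): backlog just after the previous packet arrival
  double tPrev;    // t_{i-1}: time of the previous packet arrival (s)
  double lambdaD;  // average background fluid rate over the averaging interval delta
  double mu;       // service rate of the queue (link bandwidth)
  double wMax;     // maximum backlog permitted by the finite buffer

  // Handles one foreground packet of size wi arriving at time ti; assumes
  // lambdaD < mu over the current interval.  Returns the scheduled departure
  // time (Eq. (2)), or -1.0 if the packet is discarded.
  double OnPacketArrival(double ti, double wi) {
    // Eq. (1): deterministic drift of the backlog since the previous arrival.
    const double wMinus = std::max(wPlus + (lambdaD - mu) * (ti - tPrev), 0.0);
    tPrev = ti;
    if (wi < wMax - wMinus) {     // room left: accept the packet
      wPlus = wMinus + wi;
      return wMinus / mu + ti;    // Eq. (2)
    }
    wPlus = wMinus;               // buffer full: discard, backlog unchanged
    return -1.0;
  }
};

int main() {
  // 400 kb/s background fluid into a 500 kb/s server with a 224 kbit buffer.
  DetHybridQueue q{0.0, 0.0, 400e3, 500e3, 224e3};
  std::cout << q.OnPacketArrival(1.00, 4480.0) << "\n";  // 560-byte packet
  std::cout << q.OnPacketArrival(1.01, 4480.0) << "\n";
}
```

The λd ≥ µ branch of the procedure is omitted in the sketch; in that regime the deterministic model simply pins w(t) = wmax and rejects every foreground packet, which is the 100% foreground loss noted above.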

As mentioned, there are ways to overcome this effect, see, e.g., [4], but they rely on having an estimate of the foreground traffic rate. Without a priori knowledge of the foreground traffic behavior, the deterministic model will deny the foreground traffic access to the buffer. This situation can be addressed by estimating the foreground traffic load based on previous behavior, as has been done elsewhere.

III. Stochastic Models

In this section we present our approach to hybrid network simulations, which relies upon stochastic models of the background traffic and explicit handling of packet events for the foreground traffic. As discussed previously, two aspects of this level of integration need consideration, i.e., methods to mix the analytic background traffic with the explicit event handling of the foreground traffic at a common queue, and methods to estimate the arrival processes of the background traffic at the intermediate queues in the network based upon network routing and external arrival processes. We concentrate on the former in this paper.

A. Stochastic/Packet Mixing Equations

Here we describe our method to mix the stochastically modeled background traffic and the packet traffic at a given queue within the network. In some sense, our approach defines a natural extension to the deterministic/packet hybrid approach described above. However, our approach eliminates the need for ad-hoc methods and assumptions for mixing analytic and packet traffic at common queues, does not require a priori knowledge of foreground packet behavior, and applies independent of whether the overall load on the server is under or exceeds its capacity.

As above, assume that λs is the average background traffic rate (now treated as a stochastic process, hence the subscript "s") into the queue over a time averaging interval δ. This will represent the average rate of traffic associated with the analytically modeled traffic. However, we also characterize the variation in the background arrival process through its Coefficient of Variation (CoV), Cv [1]. The CoV is defined as Cv = σs/λs^{-1}, where σs is the standard deviation of the inter-arrival times of the incoming stochastic background traffic and λs^{-1} is the mean inter-arrival time of the incoming stochastic background traffic. We assume the characterization of the background traffic, i.e., (λs, Cv), is known at each node (queue) in the network. Let µ be the server rate at the queue, i.e., the bandwidth of the link serving it. Let w(t) be the total workload in the queue at time t, and let w−(ti) and w+(ti) be the total workload in the queue just prior to and just following (respectively) a distinguished time event, e.g., the arrival of packet i. Let wi be the work carried by the ith packet to the queue. Finally, let wmax be the maximum amount of work (or backlog) in the queue due to its finite size.

The procedure to determine a specific w−(ti) given a value of w+(ti−1) is based upon knowledge of the temporal evolution of the Cumulative Distribution Function (CDF) of the stochastic process. The CDF is defined as F(w, t | w0, t0) = Pr[Xt < w | Xt0 = w0]; its density f = ∂F/∂w satisfies ∫0^∞ f(w, t | w0, t0) dw = 1 and lim_{t→t0} f(w, t | w0, t0) = δ(w − w0), the Dirac Delta Function. Our method applies for any chosen model of the CDF of the stochastic process. Here we outline the mixing algorithm, and in Section IV we give an example of one particular stochastic process, i.e., a Brownian Motion process from heavy traffic approximations of queues [3]. We use the CDF to determine probabilistically the evolution of w−(ti) given w+(ti−1).

Figure 3 gives a pictorial representation of the method used for mixing analytically modeled stochastic traffic with explicitly handled packet traffic. Individual packet arrival events are indicated along the bottom axis of the figure. These packets carry with them a given workload, which is a function of their packet size and the link bandwidth serving the queue in question. If the packet is allowed to enter the queue, then the workload in the queue jumps from w−(ti) to w+(ti), where the difference is the workload carried by the packet. In between packet arrivals, the workload evolves according to a given stochastic process based upon load parameters which characterize the associated arrival process, i.e., λs and the coefficient of variation, Cv. Given the stochastic process and its Cumulative Distribution, we can determine an appropriate w−(ti+1) given w+(ti). We accomplish this by sampling against the given CDF for each packet interarrival.

Fig. 3. An illustration of the hybrid stochastic modeling approach for mixing traffic at a queue. [Work versus time: between arrivals i and i+1 the backlog follows possible stochastic trajectories with mean drift m = λs − µ and spread σ√t; the diffusional density mass above wmax provides a packet loss estimate.]

The explicit handling of the packet traffic and the process for updating the impact of the stochastic traffic are as follows. For each packet arrival, do the following tasks:
• Increment the packet arrival counter i at the queue.
• Update the queue backlog just prior to the packet arrival time ti by sampling against the Cumulative Distribution Function F(w, ti | w+(ti−1), ti−1). Specifically, given a distribution function F(w) = Pr[X < w] and a random variable Y uniformly distributed between 0 and 1, we sample against the distribution by the formula w = F^{-1}(Y). This yields a specific value for w−(ti).
• If wi < wmax − w−(ti), then accept the packet, update the queue workload as w+(ti) = w−(ti) + wi and schedule the packet departure event at time
        t_i^{service} = w_-(t_i)/\mu + t_i    (3)
• Else, discard the packet and set w+(ti) = w−(ti).
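The per-arrival update can be written so that the stochastic model enters only through its (inverse) conditional CDF. The sketch below is our illustration of that structure, with hypothetical names; it is not the Hybrid Interface Module described in Section V. For demonstration, main() plugs in a deterministic stand-in for the inverse CDF, which recovers the model of Section II; Section IV supplies a genuine stochastic choice.

```cpp
#include <algorithm>
#include <functional>
#include <iostream>
#include <random>

// Stochastic hybrid mixing at one queue (illustrative sketch).
// The background model is supplied as an inverse conditional CDF: given a
// uniform sample u, the elapsed time dt and the previous backlog w0, it
// returns a draw of w-(t_i) from F(w, t_i | w0, t_{i-1}).
struct StochHybridQueue {
  double wPlus = 0.0;            // w+(t_{i-1}) in bits
  double tPrev = 0.0;            // t_{i-1} in seconds
  double mu    = 500e3;          // service rate (bits/s)
  double wMax  = 224e3;          // finite buffer limit (bits)
  std::function<double(double u, double dt, double w0)> inverseCdf;
  std::mt19937_64 rng{12345};
  std::uniform_real_distribution<double> unif{0.0, 1.0};

  // Returns the departure time of an accepted packet (Eq. (3)), or -1.0 on discard.
  double OnPacketArrival(double ti, double wi) {
    // w-(t_i) = F^{-1}(Y) with Y ~ U(0,1), conditioned on w+(t_{i-1}).
    const double wMinus = inverseCdf(unif(rng), ti - tPrev, wPlus);
    tPrev = ti;
    if (wi < wMax - wMinus) {
      wPlus = wMinus + wi;
      return wMinus / mu + ti;
    }
    wPlus = wMinus;
    return -1.0;
  }
};

int main() {
  StochHybridQueue q;
  // Stand-in "inverse CDF": a degenerate (deterministic) drift, i.e. Section II.
  q.inverseCdf = [&q](double, double dt, double w0) {
    return std::max(w0 + (400e3 - q.mu) * dt, 0.0);
  };
  std::cout << q.OnPacketArrival(1.00, 4480.0) << "\n";
  std::cout << q.OnPacketArrival(1.01, 4480.0) << "\n";
}
```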

The approach outlined above leads to a few observations. First, this approach is a natural extension to the methods previously discussed. Second, this approach explicitly accounts for background traffic loss and event packet loss for all arrival load conditions, not just for background loads less than the line rate. Specifically, the impact of background traffic on the packet traffic is handled at each packet arrival, while the impact of packet traffic on background traffic (and hence background traffic loss) is handled by resetting the initial queue state following each packet arrival. Third, unlike for deterministic background models, where it is often necessary to know the foreground load a priori in order to get good estimates of foreground packet loss, this approach requires no such knowledge or estimation. Instead, the foreground packet loss is a natural result of the fluctuations in the background traffic and the current load placed on the system by the foreground traffic. The above procedure therefore provides a better estimation of aspects of the foreground and background losses and their cross interactions.

IV. An Example Stochastic Model

Our approach to analytic/packet-level mixing at a given queue within the network was described independently of the underlying stochastic process describing the background traffic. Hence, it is possible to rely upon a number of potential stochastic processes. We describe one here and rely upon it in our simulation results in Section V. Further, it is certainly possible to use several stochastic processes to model the background traffic, based upon which model best represents the results as a function of system load and other aspects of the system model. This is a topic for future research. The stochastic model we investigate here plays a prominent role in the work on heavy traffic limits in queuing models and is based upon the Brownian Motion stochastic process [3].

A. Brownian Motion Model

For our initial investigations, we assume that the stochastic process is well approximated by a Brownian Motion model, which is strictly appropriate for heavy traffic limits, see, e.g., [3]. Given this, and assuming a GI/G/1/K queueing system, we then have that the Cumulative Distribution Function F(wi, ti | w+(ti−1), ti−1) satisfies the Fokker-Planck equation [3], with appropriate boundary conditions to be discussed below, i.e.,

    \frac{\partial F}{\partial t} = -m\,\frac{\partial F}{\partial w} + \frac{1}{2}\,\sigma^2\,\frac{\partial^2 F}{\partial w^2}    (4)

where m = λs − µ, σ² = λs Cv² + µ Cv,µ², F(w, t = 0) = H(w − w0), and H(x) is the Heaviside step function. For the purpose of this paper, we assume that all packets have the same size; hence we model a GI/D/1/K queueing system and the CoV of the service process is zero. This assumption implies that σ² = λs Cv².

The Fokker-Planck equation has an analytic solution, derived in [3]:

    F(w, t\,|\,w_0, t = 0) = \alpha\,\Phi\!\left(\frac{w - w_0 - mt}{\sigma\sqrt{t}}\right) + \beta\, e^{2mw/\sigma^2}\,\Phi\!\left(\frac{-w - w_0 - mt}{\sigma\sqrt{t}}\right)    (5)

    F(w, t = 0) = H(w - w_0)    (6)

    \Phi\!\left(\frac{\pm w - w_0 - mt}{\sigma\sqrt{t}}\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{(\pm w - w_0 - mt)/(\sigma\sqrt{t})} e^{-x^2/2}\,dx    (7)

The multipliers α and β are determined by the Boundary Conditions (BCs) of the system. The most natural BCs found in the literature are

    \lim_{w \to 0} F(w, t\,|\,w_0, t = 0) = 0    (8)

and

    \lim_{w \to w_{max}} F(w, t\,|\,w_0, t = 0) = 1    (9)

where the first BC addresses the lower limit of zero on the work in the system and the latter BC ensures that the finite size queue limits the work in the system to the maximum work allowed. Solving for α and β using these BCs, we get

    \alpha = \left[\, \Phi(w_{max},+) - e^{2m w_{max}/\sigma^2}\,\Phi(w_{max},-) \,\right]^{-1}    (10)

and

    \beta = -\alpha    (11)

where we have used the abbreviations

    \Phi(w_{max},\pm) = \Phi\!\left(\frac{\pm w_{max} - w_0 - mt}{\sigma\sqrt{t}}\right)    (12)
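As a numerical illustration of Eqs. (5) and (10)-(12), the sketch below evaluates F(w, t | w0) under the BCs at w = 0 and w = wmax. It is our own sketch: we measure work in packets so that σ² = λs Cv² can be used directly with λs and µ in packets per second, and the example numbers roughly mirror the single-queue setup used later for Figure 4 (a 50-packet buffer at 80% occupancy, a 500 Kbps server and 560-byte packets).

```cpp
#include <cmath>
#include <iostream>

// Standard normal CDF, the Phi(.) of Eq. (7).
double Phi(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// F(w, t | w0) for the Brownian-motion workload model with the "natural"
// boundary conditions of Eqs. (8)-(9): F -> 0 at w = 0 and F -> 1 at w = wMax.
// Implements Eqs. (5) and (10)-(12); parameter names are ours.
double WorkloadCdf(double w, double t, double w0,
                   double m, double sigma, double wMax) {
  const double s = sigma * std::sqrt(t);
  const double sig2 = sigma * sigma;
  const double phiMaxPlus  = Phi(( wMax - w0 - m * t) / s);   // Phi(wmax,+)
  const double phiMaxMinus = Phi((-wMax - w0 - m * t) / s);   // Phi(wmax,-)
  const double alpha =
      1.0 / (phiMaxPlus - std::exp(2.0 * m * wMax / sig2) * phiMaxMinus);
  const double beta = -alpha;                                  // Eq. (11)
  return alpha * Phi((w - w0 - m * t) / s)
       + beta * std::exp(2.0 * m * w / sig2) * Phi((-w - w0 - m * t) / s);  // Eq. (5)
}

int main() {
  // Work in packets: 50-packet buffer, initially 80% full; a 500 kb/s server
  // and 560-byte packets give mu ~ 111.6 pkt/s; load 0.98 and Cv = 1.
  const double mu = 500e3 / 4480.0, lambda = 0.98 * mu, Cv = 1.0;
  const double m = lambda - mu;
  const double sigma = std::sqrt(lambda * Cv * Cv);  // GI/D/1/K: sigma^2 = lambda*Cv^2
  const double wMax = 50.0, w0 = 40.0;
  const double times[] = {0.05, 0.25, 1.0, 10.0};
  for (double t : times)
    std::cout << "F(w = 25 pkts, t = " << t << " s) = "
              << WorkloadCdf(25.0, t, w0, m, sigma, wMax) << "\n";
}
```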

We also investigate the use of alternative BCs. Consider the system starting out with a very large initial value for the work, w0. If we are interested in the predicted work in the system after a very short time interval, we would expect that a lower limit on the work in the queue would be given by w0 − µt. Hence, an alternative set of BCs, which we refer to as BCs with a moving lower BC, is

    \lim_{w \to \max[0,\, w_0 - \mu t]} F(w, t\,|\,w_0, t = 0) = 0    (13)

and

    \lim_{w \to w_{max}} F(w, t\,|\,w_0, t = 0) = 1    (14)

Solving for α and β using these BCs, we get

    \alpha = \left[\, \Phi(w_{max},+) - \frac{\Phi(w_{min},+)\,\Phi(w_{max},-)\, e^{2m(w_{max} - w_{min})/\sigma^2}}{\Phi(w_{min},-)} \,\right]^{-1}    (15)

and

    \beta = \frac{-\alpha\,\Phi(w_{min},+)}{e^{2m w_{min}/\sigma^2}\,\Phi(w_{min},-)}    (16)

where wmin(t) = max[0, w0 − µt] and we have used the further abbreviations

    \Phi(w_{min},\pm) = \Phi\!\left(\frac{\pm w_{min} - w_0 - mt}{\sigma\sqrt{t}}\right)    (17)

These sets of equations for the CDF completely determine the time evolution of the stochastic system. The CDF with BCs at w = 0 and w = wmax is well studied and is discussed in [5] and [6].
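The remaining ingredient for the update step of Section III-A is the inverse F^{-1}. Since F(w, t | w0) is monotone in w between its boundary values, a simple bisection suffices. The sketch below is our own illustration, with a trivial stand-in CDF so that it runs on its own; in the hybrid model the supplied F would be the closed form above (either set of BCs), already conditioned on (ti, w+(ti−1)).

```cpp
#include <functional>
#include <iostream>
#include <random>

// Inverse-transform sampling of the backlog: find w in [wLo, wMax] with
// F(w) ~= u by bisection.  F is assumed monotone non-decreasing, running from
// 0 at the lower boundary to 1 at wMax, as the CDFs of Eqs. (5)-(17) do.
double SampleBacklog(const std::function<double(double)>& F,
                     double wLo, double wMax, double u, int iters = 60) {
  double lo = wLo, hi = wMax;
  for (int k = 0; k < iters; ++k) {
    const double mid = 0.5 * (lo + hi);
    (F(mid) < u ? lo : hi) = mid;
  }
  return 0.5 * (lo + hi);
}

int main() {
  // Stand-in CDF for demonstration only (a uniform backlog over [0, wMax]);
  // in the hybrid model this would be the WorkloadCdf of the previous sketch
  // or its moving-lower-boundary variant.
  const double wMax = 224e3;
  auto F = [wMax](double w) { return w / wMax; };

  std::mt19937_64 rng(7);
  std::uniform_real_distribution<double> unif(0.0, 1.0);
  for (int i = 0; i < 3; ++i)
    std::cout << "w-(t_i) = " << SampleBacklog(F, 0.0, wMax, unif(rng)) << " bits\n";
}
```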

In Figure 4 we present some results for the shape of the CDF with the moving lower BCs, for two server utilizations of 0.98 and 1.02 and for various times. For these examples we used a buffer size of 50 packets of 560 Bytes each, an initial queue workload of 179,200 bits (representing a buffer at 80% occupancy) and a server rate of 500 Kbps. We see that for both cases, i.e., loads less than and greater than the server capacity, there is a finite probability that the buffer is not fully occupied. This fact allows the packet-level traffic access to the buffer even in server overload conditions, unlike the deterministic case.

Fig. 4. Evolution of the CDF for two cases of load equal to 0.98 (left) and 1.02 (right). [F(w, t) versus work (Kbits), with curves at times 0.010, 0.050, 0.250, 1.000, 2.000 and 10.000 seconds.]

V. Results

In this section we present the results of our investigations into the accuracy of the stochastic models in predicting queuing behavior in a single-queue system. We compare the results of this method to the comparable event-driven simulation model of the equivalent queuing system and to the equivalent deterministic background traffic model described above. We use the Georgia Tech Network Simulator (GTNetS) [2] for our simulation studies of the base case (packet-level detail only) as well as the hybrid cases. For our simulation results, we developed two new modules within GTNetS: a) a Hybrid Interface Module (HIM), which handles the mixing of the analytic background traffic with the foreground packet traffic, and b) a new traffic generation application, the Hyper-Exponential Arrival (HEA) module, which generates application-level UDP packets according to a Hyper-Exponential Process [1]. This process models a source which alternates between a high rate and a low rate traffic source, where each rate period is exponentially distributed. Hence, by setting the rates equal, we reproduce a simple and relatively smooth Poisson traffic pattern. Our use of the Hyper-Exponential Process allows us to model bursty traffic sources, which are well known to exist in the Internet, by setting the rates to different values.
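To illustrate the traffic model, the sketch below generates arrival times from a two-state source in the spirit of the HEA module: exponentially distributed high/low periods with Poisson packet arrivals inside each period. This is our reading of the description (and our parameter names), not the GTNetS implementation.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Two-state (high/low) exponentially modulated packet source, in the spirit of
// the Hyper-Exponential Arrival (HEA) module.  Period lengths are exponential
// with means tHigh and tLow (seconds); packet inter-arrivals within a period
// are exponential at that period's rate (packets/second).
std::vector<double> GenerateArrivals(double tHigh, double rateHigh,
                                     double tLow,  double rateLow,
                                     double horizon, unsigned seed) {
  std::mt19937_64 rng(seed);
  std::exponential_distribution<double> expo(1.0);   // unit-mean exponentials
  std::vector<double> arrivals;
  double now = 0.0;
  bool high = true;
  while (now < horizon) {
    const double periodEnd = now + expo(rng) * (high ? tHigh : tLow);
    const double rate = high ? rateHigh : rateLow;
    for (double t = now + expo(rng) / rate; t < periodEnd && t < horizon;
         t += expo(rng) / rate) {
      arrivals.push_back(t);
    }
    now = periodEnd;
    high = !high;
  }
  return arrivals;
}

int main() {
  // The 300 Kbps, Cv = 1.511 row of Table I: 200 ms mean periods, 500 and
  // 100 Kbps states, 560-byte (4480-bit) packets.
  const auto a = GenerateArrivals(0.2, 500e3 / 4480.0, 0.2, 100e3 / 4480.0,
                                  1000.0, 42);
  std::cout << a.size() / 1000.0 * 4480.0 / 1e3
            << " Kbps time-average rate (expected ~300)\n";
}
```

With the two rates set equal, the states become indistinguishable and the output reduces to a plain Poisson stream, as noted above.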

Our reference connection used for the simulation runs is shown in Figure 5. Here node C1 is the source for two different applications: one is a 50 Kbps Poisson packet stream representing our foreground UDP traffic case, and one is our Hyper-Exponential background UDP packet stream, which can run at different rates and burstiness factors. Both application traffic streams, foreground and background, travel to node R1 over a 100 Mbps link and then on to node S1 over a 500 Kbps link. The link between R1 and S1 represents the queueing point, or bottleneck, in our simulations.

Fig. 5. The reference connection used for the simulation runs. [Foreground and background sources at node C1, a 100 Mbps link from C1 to R1, a 500 Kbps link from R1 to S1, and sinks at node S1.]

For each reference case studied, we ran three different simulation models: first, the base case, which is a full event-driven packet-level simulation of both the foreground and the background traffic; second, the deterministic hybrid case, where the foreground traffic is treated at the packet level while the background traffic is treated analytically as a fixed-rate fluid; and third, the stochastic hybrid case, where the foreground traffic is treated at the packet level while the background traffic is treated analytically as a stochastic fluid with fixed mean and variance. To investigate the impact of different foreground packet flows on the hybrid systems, we also ran these three cases when the foreground traffic is a TCP-Reno stream from node C1 to S1 through R1, where we set the slow-start threshold to 20 KBytes.

For each of the above cases, we varied the mean and the CoV of the background Hyper-Exponential UDP stream to cover a range of traffic loads and loss rates. Table I lists the various mean background rates and example parameters representing the extreme CoV investigated for each mean rate. For all the results presented, we averaged the metric values over ten independent simulation runs with different random seedings for each. Each simulation run consisted of an initialization period of 50 seconds in which only the background traffic processes run (in order to allow the system to equilibrate), after which we initiated the foreground process. We continued each simulation run for an additional 1000 seconds.
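As a quick check on the parameters in Table I below (our arithmetic, not a calculation from the paper): with equal mean period durations tH = tL, the time-averaged rate of the hyper-exponential source is simply the average of the two state rates. For example, for the 300 Kbps, Cv = 1.511 row,

    \bar{\lambda}_s = \frac{t_H\, r_H + t_L\, r_L}{t_H + t_L} = \frac{200 \cdot 500 + 200 \cdot 100}{200 + 200}\ \text{Kbps} = 300\ \text{Kbps},

so each pair of rows targets the same mean rate, while the larger listed Cv values reflect the additional burstiness introduced by the gap between rH and rL.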
TABLE I
Parameters for the Hyper-Exponential background process.

    λs (Kbps)   Cv      tH (msec)   rH (Kbps)   tL (msec)   rL (Kbps)
    300         1.000   200         300         200         300
    300         1.511   200         500         200         100
    400         1.000   200         400         200         400
    400         1.753   200         700         200         100
    410         1.000   200         410         200         410
    410         1.753   200         710         200         110
    420         1.000   200         420         200         420
    420         1.753   200         720         200         120
    430         1.000   200         430         200         430
    430         1.753   200         730         200         130
    440         1.000   200         440         200         440
    440         1.753   200         740         200         140
    450         1.000   200         450         200         450
    450         1.340   200         700         200         200
    500         1.000   200         500         200         500
    500         1.413   200         700         200         300

We first present our results for the case of a UDP foreground packet-based stream. For this foreground process we measure the packet-level delay and loss. In Figure 6 we show the results for mean delay estimates for the three cases of base, deterministic and stochastic traffic for a background stream with a CoV equal to unity. The rates simulated in the plots are 300, 350, 400, 450 and 500 Kbps. Note, however, that these are the application-level rates; due to a packet overhead of 48 bytes per packet, the respective line rates are 328, 383, 438, 492 and 530 Kbps. Here we see the mean delay of the foreground stream increasing as the mean rate of the background traffic increases for all three cases. However, the stochastic case does a better job at tracking the delays recorded for the base simulation case, while the deterministic case underestimates the delays at lower loads and overestimates the delay at higher loads.

Fig. 6. Results for the UDP foreground delay versus background traffic rate. [Delay (msec) versus background rate (Kbps) for the Base, Det and Stoc_mvBC cases.]

Figure 7 presents results on the foreground packet loss probability for the different background traffic rates where the CoV is unity. The loss metric is a much more sensitive and challenging test of the models due to its dependence upon the high percentiles of the work-in-queue probability distributions. We see extremely low packet losses for all cases for rates less than 400 Kbps, as expected. Above 400 Kbps, where the link is approaching a load near unity, the losses begin to increase. The losses for the deterministic model rapidly approach unity, because it has no mechanism to allow packet traffic entry into the buffer when the fluid input rate equals or exceeds the service rate of the system, while the results for the stochastic model are quite close to the base case loss results. As mentioned previously, the stochastic model explicitly accounts for the fluctuations in the background traffic, and this is what allows the packet traffic lossy entry into the buffer.

Fig. 7. Results for the UDP foreground loss probability versus background traffic rate. [Loss probability versus background rate (Kbps) for the Base, Det and Stoc_mvBC cases.]

In the plots in Figure 8 we investigate the impact of increased CoV in the background stream on the foreground packet metrics. Figure 8 presents the results for mean rates of 300, 400, 430 and 450 Kbps in the background streams. We see that the mean delay increases as the CoV increases for lower loads, while for higher loads the mean delay actually decreases for increasing CoV. The deterministic model does not account for variations in the dynamics of the background stream, so it is incapable of tracking these effects. We show results for the moving lower BC discussed above for the stochastic Brownian Motion model. The stochastic model captures the trends in the delay versus load and CoV; however, the trends are not as pronounced as in the base simulation case. This result will be addressed in future studies.

Fig. 8. Results for the UDP foreground delay versus CoV for various background traffic rates. [Four panels: delay (msec) versus coefficient of variation at 300, 400, 430 and 450 Kbps for the Base, Det and Stoc_mvBC cases.]

We now present our results for the case where the foreground traffic is a TCP packet stream. Due to TCP's adaptive windowing policy, the foreground traffic in these simulation runs will attempt to utilize the remaining bandwidth on the link between R1 and S1. Hence, the foreground loads vary for the different cases of background traffic loads. We first show a set of results comparing the TCP goodput versus background rate for the various simulation cases. These results are shown in Figure 9. We see that the goodput generally decreases with increasing background rate. Further, the deterministic model underestimates the TCP goodput, due to its overestimation of the foreground packet losses (from the previous results above), while the stochastic models do a more reasonable job of tracking TCP throughput. Further, the estimates of the stochastic models improve for higher loads, as expected. We show the results for both BCs discussed above. That the TCP goodput as modeled by the stochastic hybrid case tracks the base case well is a tribute to the model's ability to estimate packet loss probability, as TCP goodput is highly non-linear with respect to packet loss.

Fig. 9. Results for the TCP foreground goodput versus background traffic rate. [Goodput (Bps) versus background rate (Kbps) for the Base, Det, Stoc_mvBC and Stoc cases.]

We show the results for background streams with increasing Coefficients of Variation, where we present the average goodput of the TCP foreground process. Figure 10 shows the results for mean background traffic rates of 300, 400, 450 and 500 Kbps.

We observe that the TCP goodput for the base case shows a weak dependence on the CoV at lower loads and a greater, increasing dependence on the CoV at higher loads. Further, the analytic models result in an underestimation of the TCP goodput for all cases. We present the results for both stochastic models, i.e., the models resulting from the different lower BCs. The stochastic models capture the qualitative trends shown in the base case; however, quantitatively they underestimate the TCP goodput. As before, we see that the deterministic model results are independent of the CoV of the background traffic stream.

Fig. 10. Results for the TCP foreground goodput versus Cv for various background traffic rates. [Four panels: goodput (Bps) versus coefficient of variation at 300, 400, 450 and 500 Kbps for the Base, Det, Stoc_mvBC and Stoc cases.]

VI. Conclusions and Future Studies

We have presented the initial results of an investigation into the use of stochastic queueing models for the purpose of developing hybrid analytic and packet-level, event-driven network simulation tools. Our ultimate objective is to improve the scalability of network simulations, decrease their run-times and maintain high fidelity of their results. There are numerous aspects of this topic which require investigation, including packet/analytic model mixing at network queues and estimation of stochastic model parameters at interior queues in the network. In this paper, we have only begun investigating the initial question of how to mix stochastic analytic fluids with packet-level events, and we have performed some initial investigations into the fidelity of these methods. However, we are encouraged by these initial results. We believe the use of stochastic background models leads to a natural, unforced way of mixing packet and analytic traffic in common queues. This is not the case when relying upon deterministic models, which force designers to make ad-hoc assumptions and decisions with respect to how to mix packet and analytic traffic and which often have to rely upon a priori knowledge of foreground traffic behavior.

There remains much work to further explore and develop a high fidelity hybrid stochastic and packet-level network simulation capability. Future work includes:
• Investigate the use of improved stochastic models and approximations from the field of heavy traffic results in queuing theory to improve the fidelity of the predictions.
• Extend our methods to networks of queues.
• Further explore the fidelity of these methods in the presence of other background traffic types and loads and their impact on other foreground application traffic.
• Investigate the benefits of using hybrid stochastic/packet simulation techniques when modeling processes with long-tailed distributions.

VII. Acknowledgements

We would like to thank the reviewers for their detailed and insightful comments and for pointing out Reference [7].

References

[1] Cooper, R., Introduction to Queueing Theory, North-Holland, Oxford, 1981.
[2] Riley, G., The Georgia Tech Network Simulator (GTNetS), http://maniacs.ece.gatech.edu/, 2008.
[3] Heyman, D., A Diffusion Model Approximation for the GI/G/1 Queue in Heavy Traffic, Bell System Technical Journal, Vol. 54, No. 9, November 1975.
[4] Kiddle, C., Simmonds, R., Williamson, C. and B. Unger, Hybrid Packet/Fluid Flow Network Simulation, 17th Workshop on Parallel and Distributed Simulation (PADS), June 2003.
[5] Kobayashi, H., Applications of the Diffusion Approximation to Queueing Networks, I. Equilibrium Queue Distributions, Journal of the Association for Computing Machinery, Vol. 21, No. 2, 1974.
[6] Kobayashi, H., Applications of the Diffusion Approximation to Queueing Networks, II. Nonequilibrium Distributions and Computer Modeling, Journal of the Association for Computing Machinery, Vol. 21, No. 3, 1974.
[7] Liu, B., Guo, Y., Kurose, J., Towsley, D. and W. Gong, Fluid Simulation of Large Scale Networks: Issues and Tradeoffs, PDPTA, 1999.
[8] Liu, J., Parallel Simulation of Hybrid Network Traffic Models, IEEE/ACM PADS'07, San Diego, CA, USA, June 2007.
[9] Mitra, D., Stochastic Theory of a Fluid Model of Producers and Consumers Coupled by a Buffer, Adv. in Appl. Prob., Vol. 20, 1988.
[10] Nicol, D. and G. Yan, Simulation of Network Traffic at Coarse Time-scales, IEEE/ACM PADS'05, Monterey, CA, USA, June 2005.
[11] Liljenstam, M., Nicol, D., Yuan, Y. and J. Liu, RINSE: The Real-time Interactive Network Simulation Environment for Network Security Exercises, IEEE/ACM PADS'05, Monterey, CA, USA, June 2005.
[12] Misra, V., Gong, W.B. and D. Towsley, Fluid-based Analysis of a Network of AQM Routers Supporting TCP Flows with an Application to RED, Proceedings of ACM SIGCOMM'2000, 2000.
[13] Gu, Y., Liu, Y. and D. Towsley, On Integrating Fluid Models with Packet Simulation, IEEE INFOCOM'04, 2004.