Computing the Blocking Probabilities in a Circuit-Switched Network with Multi-Priority Traffic

Computing the Blocking Probability in Tactical Communications Networks[1]

Victor T.-S. Shi, William Perrizo

Computer Science Dept., North Dakota State University

Fargo, ND 58105


Abstract — Tactical communications networks are multi-hop wireless networks in which switches and endpoints are mobile nodes. In a tactical environment, fast algorithms for performance analysis are desirable so that the network can be optimized in a timely fashion. In addition, preemptive priorities are commonly used to achieve low blocking probabilities for high-priority calls, since the loss of equipment on the battlefield is non-negligible. This paper presents analytical algorithms for computing the end-to-end blocking probability in a tactical communications network in which a preemptive service discipline is employed and traffic is divided into multiple classes to provide multiple grades of service. Each class of traffic has its own characteristics, such as average call arrival rate, average call holding time, and service priority. Experiments show that preemption does provide significantly better performance for higher priority traffic. The algorithms presented in this paper may also be useful for optimizing other rapidly deployable networks, where mobility, communication efficiency, and the computational complexity of adapting the network to unpredictable environments are of significant concern.

Index terms: Communication networks, call blocking probabilities, algorithms

1  Introduction

Tactical communications networks use wireless technology to achieve a high degree of mobility while providing multiple services to users. Unlike cellular networks, where base stations are static, tactical communications networks cannot rely on a static infrastructure, for three reasons [1]: 1) fixed nodes are more vulnerable to enemy attacks; 2) highly mobile military forces need networks that are equally mobile; and 3) military networks must continue operating even when some nodes are destroyed and/or some links are jammed. Special techniques are therefore needed in a tactical network to efficiently utilize the limited network resources and to quickly adjust network topologies to tolerate frequent node/link failures in battlefield environments.

Ad hoc networking [2] is one of the network architectures that have been suggested for future mobile military networks [3]. Ad hoc networks presume a homogeneous environment: all nodes have similar capabilities with respect to energy supply, processing power, memory capacity, etc. While an ad hoc network may be appropriate for a Navy network in which every node is a ship, it may not be suitable for army battalion forces. In a tactical network for army battalion forces, some communication nodes are foot soldiers, and it is impractical to require foot soldiers to carry heavy equipment on a battlefield where mobility is paramount.

In this paper we focus on analyzing a two-tier multi-hop network [1]. This two-tier network consists of two types of mobile nodes: base stations and endpoints. The base stations are more powerful than the mobile endpoints and together form a backbone network. The mobile endpoints may be much lighter, so that they can be carried by foot soldiers. In contrast to the single-hop cellular network model, the performance of such a two-tier tactical network needs to be evaluated frequently so that its dynamic topological structure can be optimized to tolerate node and link failures. In general, it is not feasible to use simulation for topological optimization because of tight time constraints. We therefore seek analytic algorithms for fast network performance analysis. A typical two-tier tactical network [1] is shown in Figure 1.

Another feature of tactical networks is preemption. Preemption is an effective way to utilize the limited resources on a battlefield and to compensate for the loss of equipment. In this paper we divide the traffic flowing through the network into multiple classes according to users' grade of service (GoS) requirements [4, 5]. Multiple grades of service are achieved through the use of multiple preemptive priorities. Call blocking probability is used as the performance metric at the call level.

The paper is organized as follows. In section 2 we describe a model for the two-tier network to which $k$ preemptive priority classes of calls arrive. Section 3 presents mathematical algorithms for computing the end-to-end blocking probabilities of calls of any preemptive priority. Section 4 demonstrates how to apply the algorithms of section 3 to the performance analysis of a network with 6 backbone nodes and three preemptive priority classes of traffic. Section 5 discusses related work. We end with conclusions in section 6.

2  The Model and Notations

Figure 1 shows the two-tier network model we use for the discussion of a multi-hop tactical network. In this model, the nodes are classified into two types: switches (base stations) and endpoints (more generally, the switches can be viewed as clusters and the endpoints as sub-clusters with aggregated traffic, if the considered network has three or more levels of hierarchy [6, 15]). The switches form a backbone network. Each endpoint is attached to a switch. Traffic traverses shortest-path routes. All the switches in the network are non-blocking, and the call setup time is negligible compared to the average call holding time. Calls (corresponding to customers in a random service system) are categorized into $k$ classes according to their priorities. The call service discipline is preemptive priority, i.e., a newly arriving call of higher priority can preempt a channel that is servicing a call of the lowest priority on the link when all the channels on the link are busy. The preempted call is lost and cleared, for simplicity. Each link is an Erlang loss system: a newly arriving call is lost when it cannot find a free channel and no lower priority call is in service.
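To make the service discipline concrete, the link-level admission rule described above can be sketched as follows; the function name and the representation of a link as a list of in-service priorities are purely illustrative.

```python
def admit_call(in_service, priority, capacity):
    """Illustrative link-level admission rule for the preemptive discipline described
    above. in_service is a list of the priorities (1 = highest) of calls currently
    holding a channel; returns (accepted, preempted_priority_or_None)."""
    if len(in_service) < capacity:
        in_service.append(priority)        # a free channel exists: accept the call
        return True, None
    lowest = max(in_service)               # numerically largest value = lowest priority
    if priority < lowest:                  # new call outranks the lowest-priority call
        in_service.remove(lowest)          # the preempted call is lost and cleared
        in_service.append(priority)
        return True, lowest
    return False, None                     # blocked: lost call cleared (Erlang loss)
```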

Let $N$ denote the number of nodes and $L$ the number of links in the network. The $i$-th link has $C_i$ channels. We use the following notation for convenience in the formula development.

$\lambda_{ji}$, $B_{ji}$: denote, respectively, the average arrival rate and the blocking probability of a $j$-th priority call on the $i$-th link, $j = 1, \ldots, k$, $i = 1, \ldots, L$. We assume Poisson distribution for the call arrival processes at the session level.

$q_{ji}$: $q_{ji} = 1 - B_{ji}$, denotes the acceptance probability of a $j$-th priority call on the $i$-th link, $j = 1, \ldots, k$, $i = 1, \ldots, L$.

$1/\mu_j$: denotes the average holding time of a $j$-th priority call, $j = 1, \ldots, k$. We assume negative exponential distribution for call holding times at the session level.

$a_{ji}$: $a_{ji} = \lambda_{ji}/\mu_j$, denotes the offered load of $j$-th priority calls on the $i$-th link, $j = 1, \ldots, k$, $i = 1, \ldots, L$.

$a_j(s,d)$: denotes the offered load of $j$-th priority calls between node $s$ and node $d$.

$A_j$: $A_j = \sum_{s,d} a_j(s,d)$, denotes the total offered load of the $j$-th priority calls on the network.

$A$: $A = \sum_{j=1}^{k} A_j$, denotes the total offered load on the network.

$R(s,d)$: denotes the shortest path between the $s$-th node and the $d$-th node.

$c_j(s,d)$: denotes the carried load of $j$-th priority calls on the path between the $s$-th node and the $d$-th node.

$T_j$: $T_j = \sum_{s,d} c_j(s,d)$, denotes the total carried $j$-th priority load on the network.

$c_{ji}$: denotes the carried load of the $j$-th priority calls on the $i$-th link.

$P_j(s,d)$: denotes the blocking probability of $j$-th priority calls between nodes $s$ and $d$.

3  Algorithms for computing the blocking probabilities

From the above model and definitions, the carried load of priority $j$ on the path from node $s$ to node $d$ is:

$c_j(s,d) = a_j(s,d) \prod_{i \in R(s,d)} (1 - B_{ji})$    (1)

The carried load of priority $j$ flowing on link $i$ is $c_{ji} = \sum_{(s,d):\, i \in R(s,d)} c_j(s,d)$. Since the offered load equals the carried load plus the lost load (i.e., $a_{ji} = c_{ji} + a_{ji} B_{ji}$), it follows that $c_{ji} = a_{ji}(1 - B_{ji})$. Thus, we have

$a_{ji} = \frac{c_{ji}}{1 - B_{ji}} = \frac{1}{1 - B_{ji}} \sum_{(s,d):\, i \in R(s,d)} c_j(s,d)$    (2)
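As an illustration of equations (1) and (2), the reduced-load computation can be sketched in a few lines; the identifiers routes, a_end2end, and B are illustrative rather than taken from the paper, and every $B_{ji}$ is assumed to be strictly less than 1.

```python
from math import prod

def link_offered_loads(routes, a_end2end, B, links):
    """Reduced-load estimate of the per-link offered load for one priority class j.

    routes[(s, d)]   : list of link ids on the shortest path R(s, d)
    a_end2end[(s,d)] : end-to-end offered load a_j(s, d)
    B[i]             : current estimate of the link blocking probability B_ji (< 1)
    links            : iterable of link ids
    (All names are illustrative.)
    """
    # Eq. (1): carried load on each path, c_j(s,d) = a_j(s,d) * prod_{i in R(s,d)} (1 - B_ji)
    c_path = {sd: a * prod(1.0 - B[i] for i in routes[sd])
              for sd, a in a_end2end.items()}
    # Eq. (2): offered load on link i, a_ji = (sum of carried loads routed over i) / (1 - B_ji)
    return {i: sum(c for sd, c in c_path.items() if i in routes[sd]) / (1.0 - B[i])
            for i in links}
```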

In a network with $k$ classes of traffic, the blocking probability of one class depends on the traffic volumes of all $k$ classes. When given an end-to-end traffic pattern $\{a_j(s,d): 1 \le j \le k$ and $1 \le s, d \le N\}$, $P_j(s,d)$ depends on the routing policy and the service discipline and is also affected by the other classes of traffic. As we mentioned in [7], the preemptive service discipline separates the effects of lower priority traffic from higher priority traffic. In other words, the behavior of higher priority traffic is not affected by the behavior of lower priority traffic. This allows us to produce algorithms in which the blocking probability of each class can be calculated separately, and it reduces the computational complexity (this complexity reduction at the link level has been demonstrated in [7]). In particular, for calculating $B_{1i}$ and $P_1(s,d)$, we only need to consider traffic of class 1, because such traffic has the highest priority and other traffic does not affect its performance when the preemptive service discipline is applied. We can use the Erlang formula and the methods in [8] for computing the blocking probability of class 1. For computing $B_{ji}$ ($j > 1$), the traffic we need to consider consists of calls from higher priority classes and the class the call belongs to, i.e., classes of priorities $1, 2, \ldots, j$. As such, we can apply induction techniques to compute $B_{ji}$ as follows.
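Since the highest-priority class sees an ordinary Erlang loss system on each link, its link blocking probability can be obtained from the Erlang formula. A minimal sketch, using the standard numerically stable recursion:

```python
def erlang_b(a, C):
    """Erlang-B blocking probability for offered load a (erlangs) on C channels,
    computed with the standard recursion E_0 = 1, E_n = a*E_{n-1} / (n + a*E_{n-1})."""
    E = 1.0
    for n in range(1, C + 1):
        E = a * E / (n + a * E)
    return E
```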

Assuming, by induction, that we have reached a stable solution of $B_{mi}$ and $P_m(s,d)$ for all $m < j$, we initialize a value for $B_{ji}$ and then solve the equations:

(3.1)

( < )

with the boundary conditions

(3.2)

( = and when )

for ,

(3.3)

( = and when )

(3.4)

(= and ).

then the link blocking probability is

(3.5)

and the call blocking probability is $P_j(s,d) = 1 - \prod_{i \in R(s,d)} (1 - B_{ji})$ for a shortest-path routing network (see [9] for a mathematical proof). Compare the newly computed value with the previously assumed one; if their difference is not within a predefined threshold, adopt the new value. Compute iteratively using equations (1), (2) and (3.1)–(3.5) until the difference between two successive values is within the threshold; at that point we have reached a stable solution for $B_{ji}$ and $P_j(s,d)$. The convergence behaves in the same manner as in the mathematical algorithms demonstrated by Akinpelu in [8], since for class $j$ the only unknown variables are $B_{ji}$ and $a_{ji}$; all $B_{mi}$ and $a_{mi}$ for $m < j$ have been computed in previous steps. Thus, we need only deal with a single-class traffic problem at each step (to determine the traffic flows of each service priority, see the example in section 4).

Based on the above discussion, we formally present the algorithms as follows. We initialize $B_{ji} = 0$ for $j = 1, \ldots, k$ and $i = 1, \ldots, L$. This yields faster convergence than initializing $B_{ji} = 1$, because we can easily guess that $B_{ji}$ is very close to 0 for a practical network (see the computational example in section 4). In fact, a technique we can use for fast convergence is to initialize $B_{ji} = B_{(j-1)i}$, since $B_{ji}$ is always at least $B_{(j-1)i}$ due to the properties of the preemptive service discipline. For presentation simplicity, we use $F_i(\cdot)$ in the algorithms to denote the link blocking probability obtained from equations (3.1)–(3.5).

Algorithm 1:
For computing the blocking probabilities of first-priority calls.
Step 1: Initialization. Let $B_{1i} = 0$ for $i = 1, \ldots, L$; let $c_1(s,d) = a_1(s,d)$ for all node pairs $(s,d)$.
Step 2: Compute the offered load $a_{1i}$ for $i = 1, \ldots, L$, using equations (1) and (2).
Step 3: Compute the blocking probability $B_{1i}$ for $i = 1, \ldots, L$, using the Erlang formula.
Step 4: Compute $P_1(s,d)$ for all node pairs $(s,d)$.
Step 5: Compare the newly computed values with those of the previous iteration. If the relative difference is less than a given value (for example 0.0001), go to Step 6; otherwise go to Step 2.
Step 6: End.
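As a concrete illustration, Algorithm 1 can be sketched as a fixed-point iteration that combines the two helper functions sketched above; the stopping rule uses the relative difference of successive link blocking estimates, as in Step 5, while the data layout (dicts keyed by link id and node pair) is illustrative.

```python
from math import prod

def algorithm_1(routes, a1_end2end, channels, tol=1e-4, max_iter=1000):
    """Sketch of Algorithm 1 (first-priority class). channels[i] is the number of
    channels C_i on link i; reuses link_offered_loads() and erlang_b() from the
    earlier sketches."""
    links = list(channels)
    B = {i: 0.0 for i in links}                                        # Step 1: B_1i = 0
    for _ in range(max_iter):
        a_link = link_offered_loads(routes, a1_end2end, B, links)     # Step 2: eqs. (1)-(2)
        B_new = {i: erlang_b(a_link[i], channels[i]) for i in links}  # Step 3: Erlang formula
        # Step 4: end-to-end blocking P_1(s,d) = 1 - prod_{i in R(s,d)} (1 - B_1i)
        P = {sd: 1.0 - prod(1.0 - B_new[i] for i in routes[sd]) for sd in routes}
        # Step 5: stop when the relative change in every B_1i is below tol
        if all(abs(B_new[i] - B[i]) <= tol * max(B_new[i], 1e-12) for i in links):
            break
        B = B_new
    return B_new, P
```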
For $j = 2, \ldots, k$, assume that we have already obtained $B_{mi}$, $a_{mi}$, and $P_m(s,d)$ for all $m < j$, all links $i$, and all node pairs $(s,d)$; then we can use the following algorithms, 2 and 3, to obtain the end-to-end blocking probabilities of $j$-th priority calls in the network.
Algorithm 2:
Step 1: Initialization. Let $B_{ji} = 0$ (or, for faster convergence, $B_{ji} = B_{(j-1)i}$) for $i = 1, \ldots, L$; let $c_j(s,d) = a_j(s,d)$ for all node pairs $(s,d)$.
Step 2: Compute the offered load $a_{ji}$ for $i = 1, \ldots, L$, using equations (1) and (2).
Step 3: Compute the blocking probability $B_{ji} = F_i(\cdot)$ for $i = 1, \ldots, L$, using equations (3.1)–(3.5).
Step 4: Compute $P_j(s,d)$ for all node pairs $(s,d)$.
Step 5: Compare the newly computed values with those of the previous iteration. If the relative difference is less than a given value (for example 0.0001), go to Step 6; otherwise go to Step 2.
Step 6: End.

Algorithm 3:
Repeat Algorithm 2 for $j = 2, 3, \ldots$ until $j = k$, giving all $P_j(s,d)$ for $j = 1, \ldots, k$ and all node pairs $(s,d)$.
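The link-level equations (3.1)–(3.5) are not restated here, so the following sketch of the outer loop of Algorithms 2 and 3 substitutes an assumed link model: under preemption with preempted calls cleared, a class-$j$ call is lost only when every channel is held by a call of priority $j$ or higher, so $B_{ji}$ is approximated below by the Erlang formula applied to the aggregate offered load of classes $1, \ldots, j$ on link $i$. This stand-in, like the identifiers, is an assumption of the sketch and not the paper's own link equations; the warm start $B_{ji} := B_{(j-1)i}$ follows the fast-convergence remark in section 3. The helpers link_offered_loads(), erlang_b(), and math.prod are reused from the earlier sketches.

```python
def algorithms_2_and_3(routes, a_e2e_by_class, channels, tol=1e-4, max_iter=1000):
    """Sketch of the outer loop of Algorithms 2-3: solve classes j = 1, ..., k in turn.
    a_e2e_by_class[j-1] is the end-to-end offered-load dict of class j.
    ASSUMPTION: B_ji ~= ErlangB(aggregate offered load of classes 1..j on link i, C_i);
    the paper's link equations (3.1)-(3.5) may differ."""
    links = list(channels)
    a_link_by_class, B_by_class, P_by_class = {}, {}, {}
    for j, a_e2e in enumerate(a_e2e_by_class, start=1):
        # Warm start (section 3): B_ji >= B_(j-1)i under the preemptive discipline
        B = dict(B_by_class[j - 1]) if j > 1 else {i: 0.0 for i in links}
        for _ in range(max_iter):
            a_link = link_offered_loads(routes, a_e2e, B, links)   # eqs. (1)-(2) for class j
            agg = {i: a_link[i] + sum(a_link_by_class[m][i] for m in range(1, j))
                   for i in links}                                 # classes 1..j together
            B_new = {i: erlang_b(agg[i], channels[i]) for i in links}
            if all(abs(B_new[i] - B[i]) <= tol * max(B_new[i], 1e-12) for i in links):
                break
            B = B_new
        B_by_class[j], a_link_by_class[j] = B_new, a_link
        P_by_class[j] = {sd: 1.0 - prod(1.0 - B_new[i] for i in routes[sd]) for sd in routes}
    return B_by_class, P_by_class
```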

Figure 2. A flowchart of the algorithms for calculating call blocking probabilities

Figure 2 is a flowchart of the algorithms for calculating the call blocking probabilities of a network with a shortest-path routing policy and $k$-priority traffic.

4  Results of an example

Consider the communication network shown in Figure 3. The traffic from each node in the graph is the aggregated traffic from all the endpoints it represents. We say a node represents an endpoint if the endpoint is attached to that node or is within the cluster that the node represents. Assume each link has 15 channels. The offered loads have three preemptive priorities ($k = 3$) and follow the traffic pattern below: for each node pair $(s, d)$,

, when ;

, when ;

, when ;

Figure 3. An example network of 6 nodes with 3-priority traffic ($k = 3$)

Then we can derive the following from the formulas in the preceding section for computing the blocking probability of each node pair in the graph.

For $j = 1$, the Erlang formula gives

$B_{1i} = \dfrac{a_{1i}^{C_i} / C_i!}{\sum_{n=0}^{C_i} a_{1i}^{n} / n!}$,   ($i = 1, \ldots, L$)    (5)
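For numerical evaluation with $C_i = 15$ channels, equation (5) is conveniently computed with the well-known Erlang-B recursion, stated here for convenience (it is equivalent to (5)):

$E_0(a_{1i}) = 1, \qquad E_n(a_{1i}) = \dfrac{a_{1i}\, E_{n-1}(a_{1i})}{n + a_{1i}\, E_{n-1}(a_{1i})}, \quad n = 1, \ldots, C_i, \qquad B_{1i} = E_{C_i}(a_{1i}).$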

The corresponding algorithm is the same as Algorithm 1 in section 3 for computing $P_1(s,d)$.

For $j = 2$, the link blocking probability $B_{2i}$ in (6) can be calculated by solving equations (6.1)–(6.3), which are deduced from the general equations (3.1)–(3.5) with $j = 2$:

(6.1)

()

with the boundary conditions

(6.2)

(;)

(6.3)

(; )

The corresponding algorithm for computing $P_2(s,d)$ is Algorithm 2 in section 3, with $j = 2$. That is:

Algorithm for $j = 2$:
Step 1: Initialization. Let $B_{2i} = 0$ for $i = 1, \ldots, L$; let $c_2(s,d) = a_2(s,d)$ for all node pairs $(s,d)$.
Step 2: Compute the offered load $a_{2i}$ for $i = 1, \ldots, L$