Congestion Avoidance in Computer Networks

With a Connectionless Network Layer

Raj Jain, K. K. Ramakrishnan, Dah-Ming Chiu

Digital Equipment Corporation

550 King St. LKG1-2/A19

Littleton, MA 01460

DEC-TR-506

© Copyright 1988, Digital Equipment Corporation. All rights reserved.

Version: June 1, 1997

Abstract

Widespread use of computer networks and the use of varied technology for the interconnection of computers has made congestion a significant problem.

In this report, we summarize our research on congestion avoidance. We compare the concept of congestion avoidance with that of congestion control. Briefly, congestion control is a recovery mechanism, while congestion avoidance is a prevention mechanism. A congestion control scheme helps the network to recover from the congested state, while a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput with minimal queuing, thereby preventing it from entering the congested state in which packets are lost due to buffer shortage.

A number of possible alternatives for congestion avoidance were identified. From these alternatives we selected one called the binary feedback scheme, in which the network uses a single bit in the network layer header to feed back the congestion information to its users, which then increase or decrease their load to make optimal use of the resources. The concept of global optimality in a distributed system is defined in terms of efficiency and fairness such that they can be independently quantified and apply to any number of resources and users.

The proposed scheme has been simulated and shown to be globally efficient, fair, responsive, convergent, robust, distributed, and configuration-independent.

1 INTRODUCTION

Congestion in computer networks is becoming a significant problem due to increasing use of the networks, as well as due to increasing mismatch in link speeds caused by intermixing of old and new technology. Recent technological advances such as local area networks (LANs) and fiber optic LANs have resulted in a significant increase in the bandwidths of computer network links. However, these new technologies must coexist with the old low bandwidth media such as the twisted pair. This heterogeneity has resulted in mismatch of arrival and service rates in the intermediate nodes in the network, causing increased queuing and congestion.

We are concerned here with congestion avoidance rather than congestion control. Briefly, a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput. These schemes prevent a network from entering the congested state in which the packets are lost. We will elaborate on this point in the next section, where the terms flow control, congestion control, and congestion avoidance will be defined and their relationship to each other discussed.

We studied a number of alternative schemes for congestion avoidance. Based on a number of requirements described later in this report, we selected an alternative called the binary feedback scheme for detailed study. This scheme uses only a single bit in the network layer header to feed back the congestion information from the network to users, which then increase or decrease their load on the network to make efficient and fair use of the resources. We present precise definitions of efficiency and fairness that can be used for other distributed systems as well.

This report is a summary of our work in the area of congestion avoidance in connectionless networks. We have tried to make this summary as self-contained and brief as possible. For further information, the reader is encouraged to read the detailed reports in [16, 22, 4, 23].

2 CONCEPTS

In this section we define the basic concepts of flow control, congestion control, and congestion avoidance.

Figure 1:

Consider the simple configuration shown in Figure 1a, in which two nodes are directly connected via a link. Without any control, the source may send packets at a rate too fast for the destination. This may cause buffer overflow at the destination, leading to packet losses, retransmissions, and degraded performance. A flow control scheme protects the destination from being flooded by the source. Some of the alternatives that have been described in the literature are window flow-control, Xon/Xoff [7], rate flow-control [5], etc. In the window flow-control scheme, the destination specifies a limit on the number of packets that the source may send without further permission from the destination.

Let us now extend the configuration to include a communication subnet (see Figure 1b) consisting of routers and links that have limited memory, bandwidth, and processing speeds. Now the source must not only obey the directives from the destination, but also from all the routers and links in the network. Without this additional control the source may send packets at a rate too fast for the network, leading to queuing, buffer overflow, packet losses, retransmissions, and performance degradation. A congestion control scheme protects the network from being flooded by its users (transport entities at source and destination nodes).

In connection-oriented networks the congestion problem is generally solved by reserving the resources at all routers during connection setup. In connectionless networks it can be done by explicit messages (choke packets) from the network to the sources [19], or by implicit means such as timeout on a packet loss. In [15, 13, 21], a number of alternatives have been discussed and a timeout-based scheme has been analyzed in detail.

Traditional congestion control schemes help improve the performance after congestion has occurred. Figure 2 shows general patterns of response time and throughput of a network as the network load increases. If the load is small, throughput generally keeps up with the load. As the load increases, throughput increases. After the load reaches the network capacity, throughput stops increasing. If the load is increased any further, the queues start building, potentially resulting in packets being dropped. Throughput may suddenly drop when the load increases beyond this point and the network is said to be congested. The response-time curve follows a similar pattern. At first the response time increases little with load. When the queues start building up, the response time increases linearly until finally, as the queues start overflowing, the response time increases drastically.

The point at which throughput approaches zero is called the point of congestion collapse. This is also the point at which the response time approaches infinity. The purpose of a congestion control scheme such as [15, 3] is to detect the fact that the network has reached the point of congestion collapse resulting in packet losses, and to reduce the load so that the network returns to an uncongested state.

We call the point of congestion collapse a cliff due to the fact that the throughput falls off rapidly after this point. We use the term knee to describe the point after which the increase in the throughput is small, but after which a significant increase in the response time results.

Figure 2:

A scheme that allows the network to operate at the knee is called a congestion avoidance scheme, as distinguished from a congestion control scheme that tries to keep the network operating in the zone to the left of the cliff. A properly designed congestion avoidance scheme will ensure that the users are encouraged to increase their traffic load as long as this does not significantly affect the response time and are required to decrease them if that happens. Thus, the network load oscillates around the knee. Congestion control schemes are still required, however, to protect the network should it reach the cliff due to transient changes in the network.

The distinction between congestion control and congestion avoidance is similar to that between deadlock recovery and deadlock avoidance. Congestion control procedures are curative and the avoidance procedures are preventive in nature. The point at which a congestion control scheme is called upon depends upon the amount of memory available in the routers, whereas the point at which a congestion avoidance scheme is invoked is independent of the memory size.

We elaborate further on these concepts in [16].

3 ALTERNATIVES

Congestion control and congestion avoidance are dynamic system control issues. Like all other control schemes they consist of two parts: a feedback mechanism and a control mechanism. The feedback mechanism allows the system (network) to inform its users (sources or destinations) of the current state of the system, and the control mechanism allows the users to adjust their loads on the system.

The problem of congestion control has been discussed extensively in the literature. A number of feedback mechanisms have been proposed. If we extend those mechanisms to operate the network around the knee rather than the cliff, we obtain congestion avoidance mechanisms. For the feedback mechanisms we have the following alternatives:

1. Congestion feedback via packets sent from routers to sources

2. Feedback included in the routing messages exchanged among routers

3. End-to-end probe packets sent by sources

4. Each packet containing a congestion feedback field filled in by routers in packets going in the reverse direction (reverse feedback)

5. Each packet containing a congestion feedback field filled in by routers in packets going in the forward direction (forward feedback)

The first alternative is popularly known as choke packet [19] or source quench message in ARPAnet [20]. It requires introducing additional traffic in the network during congestion, which may not be desirable.

The second alternative, increasing the cost used in updating the forwarding database of congested paths, has been tried before in ARPAnet's delay-sensitive routing. The delays were found to vary too quickly, resulting in a high overhead [18].

The third alternative, probe packets, also suffers from the disadvantage of added overhead, unless probe packets have a dual role of carrying other information in them. If the latter were the case, there would be no reason not to use every packet going through the network as a probe packet. We may achieve this by reserving a field in the packet that is used by the network to signal congestion. This leads us to the last two alternatives.

The fourth alternative, reverse feedback, requires routers to piggyback the signal on the packets going in the direction opposite the congestion. This alternative has the advantage that the feedback reaches the source faster. However, the forward and reverse traffic are not always related. The destinations of the reverse traffic may not be the cause of, or even participants in, the congestion on the forward path. Also, many networks (including Digital Network Architecture, or DNA) have path-splitting such that the path from A to B is not necessarily the same as that from B to A.

The fifth alternative, forward feedback, sends the signal in the packets going in the forward direction (the direction of congestion). In the case of congestion the destination either asks the source to reduce the load or returns the signal back to the source in the packets (or acknowledgments) going in the reverse direction. This is the alternative that we study here and in [22, 23].

The key architectural assumption about the networks in this study is that they use a connectionless network layer and transport level connections. By this we mean that a router is not aware of the transport connections passing through it, and the transport entities are not aware of the path used by their packets. There is no prior reservation of resources at routers before an entity sets up a connection. The routers cannot compute the resource demands except by observing the traffic flowing through them. Examples of network architectures with connectionless network layers are DoD TCP/IP, DNA, and the ISO connectionless network service used with ISO transport class 4 [9].

4 PERFORMANCE METRICS

A congestion avoidance scheme is basically a resource allocation mechanism in which the subnet (the set of intermediate nodes or routers) is a set of m resources that has to be allocated to n users (source-destination pairs). There are two parties involved in any resource allocation mechanism: the resource manager and the user. The resource manager's goal is to use the resource as efficiently as possible. Users, on the other hand, are more interested in getting a fair share of the resource. We therefore need to define efficiency and fairness.

For our current problem of congestion avoidance, the routers are our resources and therefore we use the terms routers and resources interchangeably. The concepts introduced here, however, are general and apply to other distributed resource allocation problems as well. Similarly, for the current problem, the demands and allocations are measured in packets/second (throughput), but the concepts apply to other ways of quantifying demands and allocations.

Readers not interested in definitions of these metrics may skip to the next section on the proposed scheme.

4.1 Single Resource, Single User

Consider first only one user and one resource. In this case fairness is not an issue. If the user is allowed to increase its demand (window), the throughput increases. However, the response time (total waiting time at the resource) also increases. Although we want to achieve as high a throughput as possible, we also want to keep the response time as small as possible. One way to achieve a tradeoff between these conflicting requirements is to maximize resource power [8, 17], which is defined by:

    Resource Power = (Resource Throughput)^γ / (Resource Response Time)

Here, γ is a constant. Generally, γ = 1. Other values of γ can be used to give higher preference to throughput (γ > 1) or response time (γ < 1). The concepts presented in this report apply to any value of γ. However, unless otherwise specified we will assume throughout this report that γ = 1. The resource power is maximum at the knee.

For any given inter-arrival and service time distributions, we can compute the throughput at the knee. We call this the knee-capacity of the resource.

The maximally efficient operating point for the resource is its knee. The efficiency of resource usage is therefore quantified by:

    Resource Efficiency = Resource Power / Resource Power at Knee

The resource is used at 100% efficiency at the knee. As we move away from the knee, the resource is being used inefficiently, that is, either underutilized (throughput lower than the knee-capacity) or overutilized (high response time).
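
For illustration (this example is ours and is not part of the original report), the knee of a single M/M/1 queue can be located numerically from the textbook formulas throughput = λ and response time = 1/(μ − λ); with γ = 1 the power λ(μ − λ) peaks at 50% utilization, where the mean number in the queue is exactly one, which matches the detection rule used by the routers in Section 5. A minimal Python sketch, with an assumed service rate:

# Illustrative sketch (not from the report): find the knee of an M/M/1 queue
# by maximizing power = throughput / response_time (gamma = 1).
def mm1_power(lam, mu):
    if lam >= mu:
        return 0.0                     # unstable region
    throughput = lam                   # packets per unit time
    response_time = 1.0 / (mu - lam)   # mean time in system for M/M/1
    return throughput / response_time

def knee_load(mu, steps=10000):
    # Scan offered loads and return the one with maximum power.
    return max((mm1_power(i * mu / steps, mu), i * mu / steps)
               for i in range(steps))[1]

if __name__ == "__main__":
    mu = 10.0                          # assumed service rate
    lam = knee_load(mu)                # close to mu / 2
    rho = lam / mu
    print(round(lam, 2), round(rho / (1 - rho), 2))   # about 5.0 and about 1.0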

4.2 Single Resource, Multiple Users with Equal Demands

With multiple users we have an additional requirement of fairness. The allocation is efficient as long as the total throughput is equal to the knee-capacity of the resource. However, a maximally efficient allocation may not be fair, as some users may get better treatment than others. The fairness of an allocation is a function of the amounts demanded as well as the amounts allocated. To simplify the problem, let us first consider the case of equal demands in which all users have identical demands D. The maximally fair allocation then consists of equal allocations to all users, i.e., a_i = A for all i. The fairness of any other (non-equal) allocation is measured by the following fairness function [11]:

    Fairness = ( Σ_{i=1..n} x_i )^2 / ( n Σ_{i=1..n} x_i^2 )        (1)

where x_i = a_i / D.

This function has the property that its value always lies between 0 and 1 and that 1 (or 100%) represents a maximally fair allocation.

Notice that we use user throughput to measure allocations and demands because of its additivity property: the total throughput of n users at a single resource is the sum of their individual throughputs.

4.3 Single Resource, Multiple Users with Unequal Demands

Given a resource with knee-capacity of T_knee, each of the n users deserves a fair share of T_knee/n. However, there is no point in allocating T_knee/n to a user who is demanding less than T_knee/n. It would be better to give the excess to another user who needs more. This argument leads us to extend the concept of maximally fair allocation such that the fair share t_f is computed subject to the following two constraints:

1. The resource is fully allocated:  Σ_{i=1..n} a_i = T_knee

2. No one gets more than the fair share or its demands:  a_i = min{d_i, t_f}

Given the knee capacity of a resource and individual user demands, the above two constraints allow us to determine the maximally fair allocation {A_1*, A_2*, ..., A_n*}. If the actual allocation {a_1, ..., a_n} is different from this, we need a distance function to quantify the fairness. We do this by using the fairness function of equation (1) with x_i = a_i / A_i*.
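
To make the fairness metric concrete, here is a small Python sketch (ours, not part of the report; the authors' SIMULA procedures are in Appendix A) that evaluates the fairness function of equation (1) after normalizing each allocation by its maximally fair value A_i*:

# Illustrative sketch of the fairness index of equation (1).
# allocations[i] is a_i; optimal[i] is the maximally fair allocation A_i*.
def fairness(allocations, optimal):
    x = [a / o for a, o in zip(allocations, optimal)]   # x_i = a_i / A_i*
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))    # always in (0, 1]

if __name__ == "__main__":
    optimal = [10.0, 5.0]                  # e.g., the two users of Section 8.2
    print(fairness([10.0, 5.0], optimal))  # 1.0: maximally fair
    print(fairness([14.0, 1.0], optimal))  # 0.64: user 1 is favored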

The efficiency of the resource usage can be computed as before by computing resource power from the resource throughput (which is given as the sum of the user throughputs in this case) and the resource response time. The allocation that is 100% efficient and 100% fair is the optimal allocation.

We must point out that the above discussion for a single resource case also applies if there are multiple (m) routers but all routers are shared by all n users. In this case the set of m routers can be combined and considered as one resource.

4.4 Multiple Resources, One User

We have extended the concepts of fairness and efficiency to a distributed system with multiple resources. Let us first consider a case of a single user so that fairness is not an issue. For the subnet congestion problem, the user has a path P passing through m resources (routers) {r_1, r_2, ..., r_m}. The resource with the lowest service rate determines the user's throughput and is called the bottleneck resource. The bottleneck resource has the highest utilization (ratio of throughput to service rate) and contributes the most to the user's response time. The maximally efficient operating point for the system is defined as the same as that for the bottleneck router. Thus, given a system of m resources, we determine the bottleneck and define its efficiency as the global efficiency and its knee as the maximally efficient operating point for the system.

    Global Efficiency = Efficiency of the Bottleneck Resource

Note that the global efficiency, as defined here, depends upon the response time at the bottleneck resource and not on the user response time, which is a sum of response times at the m resources.

4.5 Multiple Resources, Multiple Users

In this case, there are n users and m resources. The ith user has a path p_i consisting of a subset of resources {r_i1, r_i2, ..., r_im_i}. Similarly, the jth resource serves n_j users {U_j1, U_j2, ..., U_jn_j}. The global efficiency is still defined by the bottleneck resource, which is identified as the resource with the highest utilization. The problem of finding the maximally efficient and maximally fair allocation is now a constrained optimization problem as it has to take the differing user paths into account. We have developed an algorithm [23] which gives the globally optimal (fair and efficient) allocation for any given set of resources, users, and paths.

Once the globally optimal allocation {A_1*, A_2*, ..., A_n*} has been determined, it is easy to quantify the fairness of any other allocation {a_1, a_2, ..., a_n} by using the same fairness function as in the single resource case (equation 1) with x_i = a_i / A_i*.

This fairness is called global fairness and the efficiency of the bottleneck resource is called the global efficiency. An allocation which is 100% globally efficient and 100% globally fair is said to be globally optimal. It should be pointed out that by associating efficiency with resource power rather than user power, we have been able to avoid the problems encountered by other researchers [2, 10] in using the power metric.

Notice that we have a multi-criteria optimization problem since we are trying to maximize efficiency as well as fairness. One way to solve such problems is to combine the multiple criteria into one, for instance by taking a weighted sum or by taking a product. We chose instead to put a strict priority on the two criteria. Efficiency has a higher priority than fairness. Given two alternatives, we prefer the more efficient alternative. Given two alternatives with equal efficiency, we choose the fairer alternative.
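
The bottleneck itself is easy to identify mechanically. The following sketch is ours (the router throughput value is a made-up assumption; the service times are reused from Case I of Section 8): it picks the resource with the highest utilization, and the global efficiency is then simply the efficiency of that one resource.

# Illustrative sketch: the bottleneck is the resource with the highest
# utilization (throughput / service rate); global efficiency is the
# efficiency of that single resource.
def bottleneck(throughputs, service_rates):
    utilization = [t / s for t, s in zip(throughputs, service_rates)]
    return max(range(len(utilization)), key=lambda j: utilization[j])

if __name__ == "__main__":
    # Case I of Section 8: service times of 2, 5, 3, 4 time units per packet.
    service_rates = [1 / 2, 1 / 5, 1 / 3, 1 / 4]    # packets per unit time
    throughputs = [0.18] * 4                        # one flow crosses all routers
    print(bottleneck(throughputs, service_rates))   # 1, i.e., the second router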

5 THE PROPOSED SCHEME

We have designed a scheme that allows a network to operate at its knee. As shown in Figure 3, the scheme uses one bit (called the congestion avoidance bit) in the network layer header of the packet for feedback from the subnet to the users. A source clears the congestion avoidance bit as the packet enters the subnet. All routers in the subnet monitor their load and if they detect that they are operating above the knee, they set the congestion avoidance bit in the packets belonging to users causing the overload. Routers operating below the knee pass the bit as received. When the packet is received at the destination, the network layer passes the bit to the destination transport, which takes action based on the bits.

Figure 3:

There are two versions of the binary feedback scheme:

1. Destination-based

2. Source-based

In the first version, the destination examines the bits received, determines a new flow-control window, and sends this window to the source. In the second version, the destination sends all bits back to the source along with the acknowledgments. In this case, we need to reserve one bit in the headers of transport layer acknowledgment packets where the destination transport entity copies the bit received from the network layer. The source transport entity examines the stream of bits received, determines a new operating window, and uses it as long as it does not violate the window limit imposed by the destination.

We have studied both versions. The NSP transport protocol in DNA [6] uses the source-based approach, while the ISO TP4 [9] implementation uses the destination-based approach.

In the remainder of this report, we use the word user to include both source and destination transport entities. Thus, when we say that the user changes its window, the change might be decided and affected by the source or destination transport entity.

The proposed congestion avoidance scheme consists of two parts: a feedback mechanism in routers, and a control mechanism for users. We call these the router policy and the user policy, respectively. Each of these mechanisms can be further subdivided into three components as shown in Figure 4. We explain these components below. For further details see [16, 22, 23].

Figure 4:

5.1 Router Policies

Routers in a connectionless network environment are not informed about the resource requirements of transport entities and therefore they have no prior knowledge of future traffic. They attempt to optimize their operation by monitoring the current load and by asking the users (via the bit) to increase or decrease the load. Thus, the routers have three distinct algorithms:

1. To determine the instantaneous load level

2. To estimate the average load over an appropriate time interval

3. To determine the set of users who should be asked to adjust their loads

We call these three algorithms congestion detection, feedback filter, and feedback selection, respectively. The operation of these components and the alternatives considered are described next.

5.1.1 Congestion Detection

Before a router can feed back any information, it must determine its load level. It may be underutilized (below the knee) or overutilized (above the knee). This can be determined based on the utilization, buffer availability, or queue lengths.

We found that the average queue length provides the best mechanism to determine if we are above or below the knee. This alternative is least sensitive to the arrival or service distributions and is independent of the memory available at the router. For both M/M/1 and D/D/1 queues the knee occurs when the average queue length is one. For other arrival patterns such as packet trains [14], this is approximately (though not exactly) true. The routers, therefore, monitor the queue lengths and ask users to reduce the load if the average queue length is more than one, and vice versa.

5.1.2 Feedback Filter

After a router has determined its load level, its feedback to users is useful if and only if the state lasts long enough for the users to take action based on it. A state that changes very fast may lead to confusion because by the time users become aware of it, the state no longer holds and the feedback is misleading. Therefore, we need a low-pass filter function to pass only those states that are expected to last long enough for the user action to be meaningful.

This consideration rules out the use of instantaneous queue lengths in congestion detection. An instantaneous queue length of 100 may not be a problem for a very fast router but may be a problem for a slow router. We need to average the queue lengths over a long interval. The key question is how long an interval is long enough.

Figure 5:

We recommend averaging since the beginning of the previous regeneration cycle. A regeneration cycle is defined as the interval consisting of a busy period and an idle period, as shown in Figure 5. The beginning of the busy period is called a regeneration point. The word regeneration signifies the birth of a new system, since the queuing system's behavior after the regeneration point does not depend upon that before it. The average queue length is given by the area under the curve divided by the time since the last but one regeneration point. Note that the averaging includes a part of the current, though incomplete, cycle. This is shown in Figure 5.
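
The averaging rule can be stated compactly in code. The sketch below is an illustrative Python transcription of the idea (the authors' actual Arrival and Departure procedures appear in Appendix A): it accumulates the area under the queue-length curve, starts a new cycle whenever the queue goes from empty to busy, and reports the average over the previous cycle plus the current partial cycle.

# Illustrative sketch of the router's queue-length averaging (cf. Appendix A).
class QueueMonitor:
    def __init__(self):
        self.q_len = 0               # current queue length (includes one in service)
        self.area = 0.0              # area under the queue-length curve, this cycle
        self.prev_area = 0.0         # area of the previous regeneration cycle
        self.last_change = 0.0       # time of the last queue-length change
        self.cycle_begin = 0.0       # start of the current regeneration cycle
        self.prev_cycle_begin = 0.0  # start of the previous regeneration cycle

    def _accumulate(self, now):
        self.area += self.q_len * (now - self.last_change)
        self.last_change = now

    def arrival(self, now):
        self._accumulate(now)
        self.q_len += 1
        if self.q_len == 1:          # queue was empty: a new regeneration cycle
            self.prev_cycle_begin, self.cycle_begin = self.cycle_begin, now
            self.prev_area, self.area = self.area, 0.0

    def departure(self, now):
        self._accumulate(now)
        self.q_len -= 1

    def avg_queue_length(self, now):
        elapsed = now - self.prev_cycle_begin    # previous plus current cycle
        return (self.prev_area + self.area) / elapsed if elapsed > 0 else 0.0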

5.1.3 Feedback Selection

The two components of the router policy discussed so far (congestion detection and feedback filter) ensure that the router operates efficiently, that is, around the knee. They both work based upon the total load on the router, to decide if the total load is above the knee or below the knee. The total number of users, or the fact that only a few of the users might be causing the overload, is not considered in those components. Fairness considerations demand that only those users who are sending more than their fair share should be asked to reduce their load, and others should be asked to increase if possible. This is done by the feedback selection, an important component of our scheme. Without the selection, the system may stabilize at (operate around) an operating point that is efficient but not fair. For example, two users sharing the same path may keep operating at widely different throughputs.

The feedback selection works by keeping a count of the number of packets sent by different users since the beginning of the queue averaging interval. This is equivalent to monitoring their throughputs. Based on the total throughput, a fair share is determined and users sending more than the fair share are asked to reduce their load while the users sending less than the fair share are asked to increase their load. Of course, if the router is operating below the knee, each one is encouraged to increase regardless of their current load. The fair share is estimated by assuming the capacity to be at 90% of the total throughput since the beginning of the last regeneration cycle.
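
The selection rule can be sketched as follows (illustrative Python, ours; it splits the estimated capacity equally instead of iterating the max-min computation that the Fair_Share procedure of Appendix A performs, and it borrows the rule of setting the bit for everyone when the average queue length exceeds 2 from the Departure procedure):

# Illustrative sketch of feedback selection: which users should have the
# congestion avoidance bit set, given per-user packet counts over the last
# two regeneration cycles and the current average queue length.
def users_to_throttle(counts, avg_queue_length):
    if avg_queue_length < 1:
        return set()                        # below the knee: everyone may increase
    if avg_queue_length > 2:
        return set(counts)                  # heavily congested: throttle all users
    capacity = 0.9 * sum(counts.values())   # knee capacity assumed = 90% of throughput
    fair_share = capacity / max(len(counts), 1)
    return {user for user, sent in counts.items() if sent > fair_share}

if __name__ == "__main__":
    counts = {("A", "B"): 120, ("C", "D"): 30}               # packets per NSAP pair
    print(users_to_throttle(counts, avg_queue_length=1.4))   # {('A', 'B')}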

The feedback selection as proposed here attempts to achieve fairness among different network layer service access point (NSAP) pairs because the packet counts used in the algorithm correspond to these pairs.

This completes the discussion on the router policies. We now turn to the user policies.

5.2 User Policies

Each user receives a stream of congestion avoidance bits, called signals, from the network. These signals are not all identical or else we would not need all of them. Some signals ask the user to reduce the load, while others ask it to increase the load. The user policy should be designed to compress this stream into a single increase/decrease decision at suitable intervals. The key questions that the user policy helps answer are:

1. How can all signals received be combined?

2. How often should the window be changed?

3. How much should the change be?

We call these three algorithms signal filter, decision frequency, and increase/decrease algorithm, respectively.

5.2.1 Signal Filter

The problem solved by this component is to examine the stream of the last n bits, for instance, and to decide whether the user should increase or decrease its load (window). Mathematically,

    d = f(b_1, b_2, b_3, ..., b_n)

Here, d is the binary decision (0 = increase, 1 = decrease) and b_i is the ith bit, with b_n being the most recently received bit. The function f is the signal filter function. The function that we finally chose requires counting the number of 1s and 0s in the stream of the last n bits. Let

    s_1 = number of ones in the stream = Σ b_i
    s_0 = number of zeros in the stream = n - s_1

Then, if s_1 > pn then d = 1, else d = 0. Here, p is a parameter called the cutoff probability. We found that for exponentially distributed service times, the optimal choice was p = 0.5, as expected. For deterministic service times, however, we found that the choice of p does not matter. This is because in deterministic cases, the router filtering results in the user consistently receiving either all 1s (if the load at the bottleneck is above the knee) or all 0s (if the load is below the knee). Based on this observation, we recommend using a cutoff probability of 50%.

In summary, the signal filtering simply consists of comparing the counts of 1s and 0s received in the bit stream and deciding to go up or down as indicated by the majority of the bits.
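
In code, the chosen filter is a majority vote over the bits returned for the packets of the last cycle (illustrative Python, ours; p = 0.5 is the recommended cutoff probability):

# Illustrative sketch of the user's signal filter: d = f(b1, ..., bn).
def signal_filter(bits, cutoff=0.5):
    ones = sum(bits)                                # s1, the number of set bits
    return 1 if ones > cutoff * len(bits) else 0    # 1 = decrease, 0 = increase

if __name__ == "__main__":
    print(signal_filter([0, 0, 1, 0]))   # 0: increase the window
    print(signal_filter([1, 1, 0, 1]))   # 1: decrease the window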

5.2.2 Decision Frequency

The decision frequency component of the user policy consists of deciding how often to change the window. Changing it too often leads to unnecessary oscillations, whereas changing it infrequently leads to a system that takes too long to adapt.

System control theory tells us that the optimal control frequency depends upon the feedback delay, that is, the time between applying a control (changing the window) and getting feedback (bits) from the network corresponding to this control.

In computer networks, it takes one round-trip delay to affect the control, that is, for the new window to take effect, and another round-trip delay to get the resulting change fed back from the network to the users. This leads us to the recommendation that windows should be adjusted once every two round-trip delays (two window turns) and that only the feedback signals received in the past cycle should be used in window adjustment, as shown in Figure 6.

Figure 6:

5.2.3 Increase/Decrease Algorithms

The purpose of the increase/decrease algorithm is to determine the amount by which the window should be changed once a decision has been made to adjust it.

In the most general case, the increase or decrease amount would be a function of the complete past history of controls (windows) and feedbacks (bits). In the simplest case, the increase/decrease amount would be a function only of the window used in the last cycle and the resulting feedback. Actually, there is little performance difference expected between the simplest and the most general control approach, provided that the simple scheme makes full use of the new information available since the last activation of the component. We therefore chose the simple approach. We have already partitioned the problem so that the signal filter looks at the feedback signals and decides whether to increase or decrease. The increase/decrease algorithm, therefore, needs to look at the window in the last cycle and decide what the new window should be. We limited our search among alternatives to first order linear functions for both increase and decrease:

    Increase:  w_new = a * w_old + b
    Decrease:  w_new = c * w_old - d

Here, w_old is the window in the last cycle and w_new is the window to be used in the next cycle; a, b, c, and d are non-negative parameters. There are four special cases of the increase/decrease algorithms:

(a) Multiplicative Increase, Additive Decrease (b = 0, c = 1)

(b) Multiplicative Increase, Multiplicative Decrease (b = 0, d = 0)

(c) Additive Increase, Additive Decrease (a = 1, c = 1)

(d) Additive Increase, Multiplicative Decrease (a = 1, d = 0)

The choices of the alternatives and parameter values are governed by the following goals:

1. Efficiency: The system (bottlenecks) should be operating at the knee.

2. Fairness: The users sharing a common bottleneck should get the same throughput.

3. Minimum Convergence Time: Starting from any state, the network should reach the optimal (efficient as well as fair) state as soon as possible.

4. Minimum Oscillation Size: Once at the optimal state, the user windows oscillate continuously below and above this state. The parameters should be chosen such that the oscillation size is minimal.

These considerations lead us to the following recommendation for the increase/decrease algorithms [16, 4]:

    Additive Increase:        w_new = w_old + 1
    Multiplicative Decrease:  w_new = 0.875 * w_old

If the network is operating below the knee, all users go up equally, but, if the network is congested, the multiplicative decrease makes users with higher windows go down more than those with lower windows, making the allocation more fair. Note that 0.875 = 1 - 2^-3. Thus, the multiplication can be performed without floating point hardware, by simple logical shift instructions.

The computations should be rounded to the nearest integer. Truncation, instead of rounding, results in lower fairness.
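
Combined with the decision frequency of one adjustment every two round-trip delays, the user policy reduces to a few lines. The sketch below is ours (w_max stands in for the window limit imposed by the destination, as in the Increase and Decrease procedures of Appendix A):

# Illustrative sketch of the recommended user policy: additive increase by 1,
# multiplicative decrease by a factor of 0.875, rounded to the nearest integer.
def next_window(w_old, decrease, w_max):
    if decrease:                       # the majority of the bits were set
        w = 0.875 * w_old              # 0.875 = 1 - 2**-3, shift-friendly
    else:
        w = w_old + 1                  # additive increase
    w = max(1.0, min(w, w_max))        # never below 1 or above the destination limit
    return int(w + 0.5)                # round to the nearest integer

if __name__ == "__main__":
    w = 1
    for _ in range(14):                # below the knee the window grows 2, 3, ..., 15
        w = next_window(w, decrease=False, w_max=40)
    print(w)                                           # 15
    print(next_window(15, decrease=True, w_max=40))    # 13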

This completes our discussion of the proposed binary feedback scheme. The key router and user policy algorithms are summarized in the appendix.

6 FEATURES OF THE SCHEME

The design of the binary feedback scheme was based on a number of goals that we had determined beforehand. Below, we show how the binary feedback scheme meets these goals.

1. No control during normal operation: The scheme does not cause any extra overhead during normal (underloaded) conditions.

2. No new packets during overload: The scheme does not require generation of new messages (e.g., source quench) during overload conditions.

3. Distributed control: The scheme is distributed and works without any central observer.

4. Dynamism: Network configurations and traffic vary continuously. Nodes and links come up and down and the load placed on the network by users varies widely. The optimal operating point is therefore a continuously moving target. The proposed scheme dynamically adjusts its operation to the current optimal point. The users continuously monitor the network by changing the load slightly below and slightly above the optimal point and verify the current state by observing the feedback.

5. Minimum oscillation: The increase amount of 1 and decrease factor of 0.875 have been chosen to minimize the amplitude of oscillations in the window sizes.

6. Convergence: If the network configuration and workload remain stable, the scheme brings the network to a stable operating point.

7. Robustness: The scheme works under a noisy (random) environment. We have tested it for widely varying service-time distributions.

8. Low parameter sensitivity: While comparing various alternatives, we studied their sensitivity with respect to parameter values. If the performance of an alternative was found to be very sensitive to the setting of a parameter value, the alternative was discarded.

9. Information entropy: Information entropy relates to the use of feedback information. We want to get the maximum information across with the minimum amount of feedback. Given one bit of feedback, information theory tells us that the maximum information would be communicated if the bit was set 50% of the time.

10. Dimensionless parameters: A parameter that has dimensions (length, mass, time) is generally a function of network speed or configuration. A dimensionless parameter has wider applicability. Thus, for example, in choosing the increase algorithm we preferred increasing the window by an absolute amount of k packets rather than by a rate of t packets/second. The optimal value of the latter depends upon the link bandwidth. All parameters of the proposed scheme are dimensionless, making it applicable to networks with widely varying bandwidths.

11. Configuration independence: We have tested the scheme for many different configurations of widely varying lengths and speeds, including those with and without satellite links.

Most of the discussion in this and the associated reports centers around window-based flow-control mechanisms. However, we must point out that this is not a requirement. The congestion avoidance algorithms and concepts can be easily modified for other forms of flow control such as rate-based flow control, in which the sources must send at a rate lower than a maximum rate (in packets/second or bytes/second) specified by the destination. In this case, the users would adjust rates based on the signals received from the network.

7 COMPARISON WITH SIMILAR SCHEMES

It must be pointed out that the binary feedback scheme proposed here is different from most other schemes in that it is the first attempt to achieve congestion avoidance rather than congestion control. Similar congestion control schemes exist in the literature. For example, the congestion control scheme used in SNA [1] also uses bits in the network layer headers to feed back congestion information from the network to the source. It uses two bits called the change window indicator (CWI) and the reset window indicator (RWI). The first bit indicates moderate congestion, while the second one indicates severe congestion. The CWI bit is set by a router when it finds that more than a percentage, such as 75%, of its buffers have been used. After all buffers are used up, the router starts setting RWI bits in the packets going in the reverse direction. On receipt of a CWI, the source decreases the window by 1. On the receipt of an RWI, the source resets the window to h, where h is the number of hops. If both bits are clear, the window is increased by one until a maximum of 3h is reached.

The key difference between SNA's scheme (and all prior work in congestion control) and our binary feedback scheme is the definition of the goal. SNA's goal is to ensure that packets find buffers when they arrive at the routers. Our scheme, on the other hand, is not so much concerned with the buffers. Rather it tries to maximize the throughput while also minimizing the delay. The routers start setting the bits as soon as the average queue length is more than one. The number of buffers available at the router has no effect on our scheme.

The key test to decide whether a particular scheme is a congestion control or a congestion avoidance scheme is to consider a network with all nodes having infinite memory (infinite buffers). A congestion control scheme will generally remain inactive in such a network, allowing the users to use large windows causing high response time. A congestion avoidance scheme, on the other hand, is useful even in a network with infinite memory. It tries to adjust queuing in the network so that a high throughput and a low response time is achieved.

8 PERFORMANCE

The binary feedback scheme was designed using a simulation model that allowed us to compare various alternatives and study them in detail. Most of the choices discussed earlier in this report have been justified using analytical arguments. However, we have verified all arguments using simulation as well. The model allows us to simulate any number of users going through various paths in the network. It is an extension of the model described in [12]. The model simulates portions of the network and transport layers. The transport layer is modeled in detail. The routers are modeled as single server queues. The model's key limitation currently is that the acknowledgments returning from a destination to a source are not explicitly simulated. Instead, the source is informed of the packet delivery as soon as the packet is accepted by the destination.

In this section, we present a few cases to illustrate the performance of the binary feedback scheme. Other simulation results, including those for random service times and highly congested networks, are given in [22, 23].

8.1 Case I: Single User

This case consists of a single user using a path consisting of four routers, as shown in Figure 7a. The service times at the routers are 2, 5, 3, and 4 units of time, respectively. In our simulation the user's speed is one packet per unit of time. In other words, all times are expressed as multiples of the time required to send one packet. The third router is a satellite link having a fixed delay of 62.5 units of time. The second router is the bottleneck, and its power as a function of the window size is shown in Figure 7b. This graph is obtained by running the simulation without the binary feedback scheme at a fixed window and observing the user throughput and response times. It is seen from this figure that the knee occurs at a window of 15.5.

Figure 7c shows a plot of a user's window with the binary feedback scheme. The time is shown along the horizontal axis. Notice that the user starts with a window of 1 and sends packets 1 and 2; both packets traverse the subnet with the congestion bit clear. The user, therefore, increases the window to 2. Packets 3 and 4 are sent. After their acknowledgment, packets 5 and 6 are sent. The congestion bits in packets 5 and 6 are examined. They are clear and so the window is increased to 3. This continues until the window is 16. At this point, the bottleneck starts operating above the knee and starts setting congestion bits in packets. The user, upon receiving these packets, reduces the window to 16 x 0.875, or 13. The cycle then repeats and the window keeps oscillating between 13 and 16.

This case illustrates the fact that the network operates efficiently.

8.2 Case II: Two Users

To illustrate the fairness aspects of the scheme, consider the same configuration as in Case I, except that we have now added another user that enters the subnet at router 1 and exits after router 2 (see Figure 8a). Also, the second user starts after the first one has sent 200 packets. The optimal operating point for this case can be determined by running the simulation without the congestion avoidance scheme for various combinations of window sizes for the two users and finding the window values that are efficient and fair. Router 2 is the bottleneck. At the knee its throughput is 1/5 packets per unit time, divided equally between the two users. The optimal window in this case is w_1 = 10 for user 1 and w_2 = 5 for user 2. The plots of the two users' windows are shown in Figure 8b. Notice that with only one user the system stabilizes at the window of 15 and keeps oscillating around it until the second user joins the network. At this point, the first user receives decrease signals from the bottleneck router while the second user receives increase signals. The windows eventually stabilize when they reach their optimal values. Figure 8c shows a plot of the throughputs of the two users. The throughput of the first user drops while that of the second increases until they both share the bottleneck approximately equally.

Another feature of the scheme, which can be seen from this case, is that the scheme adapts as the number of users in the network changes. The users need not start at the same point (window of 1) to reach the fair operating point.

9 SUMMARY

The key contributions of our congestion avoidance research are the following:

1. We have introduced the new term congestion avoidance. It has been distinguished from the other similar terms of flow control and congestion control. It has been shown that the preventive mechanism, congestion avoidance, helps the network use its resources in an optimal manner.

2. We defined the concept of global optimality in a distributed system with multiple resources and multiple users. The optimality is defined by efficiency and fairness. Both concepts have been developed so that they can be independently quantified and can apply to any number of resources and users.

3. Other researchers attempting to define global optimality have had difficulty extending the concept of power to distributed resources. By defining efficiency for each resource and relating fairness to users, we have been able to separate the two concepts.

4. We have developed a simple scheme that allows a network to reach the optimal operating point automatically. This scheme makes use of a single bit in the network layer header. This bit is shared by all resources.

5. We divided the problem of congestion avoidance into six components which can be studied separately. This allowed us to compare a number of alternatives for each component and select the best.

6. We have simulated the binary feedback scheme and tested its performance in many different configurations and conditions. The scheme has been found to operate optimally in all cases tested.

10 ACKNOWLEDGMENTS

Many architects and implementers of Digital's networking architecture participated in a series of meetings over the last three years in which the ideas presented here were discussed and improved. Almost all members of the architecture group contributed to the project in one way or another. In particular, we would like to thank Tony Lauck and Linda Wright for encouraging us to work in this area. Radia Perlman, Art Harvey, Kevin Miles, and Mike Shand are the responsible architects whose willingness to incorporate our ideas provided further encouragement. We would also like to thank Bill Hawe, Dave Oran, and John Harper for feedback and interest. The idea of proportional decrease was first proposed by George Verghese and Tony Lauck. The concept of maximal fairness was developed by Bob Thomas and Cuneyt Ozveren.

References

[1] V. Ahuja, "Routing and Flow Control in Systems Network Architecture," IBM Systems Journal, Vol. 18, No. 2, 1979, pp. 298-314.

[2] K. Bharat-Kumar and J. M. Jaffe, "A New Approach to Performance-Oriented Flow Control," IEEE Transactions on Communications, Vol. COM-29, No. 4, April 1981, pp. 427-435.

[3] W. Bux and D. Grillo, "Flow Control in Local-Area Networks of Interconnected Token Rings," IEEE Transactions on Communications, Vol. COM-33, No. 10, October 1985, pp. 1058-1066.

[4] Dah-Ming Chiu and Raj Jain, "Analysis of Increase/Decrease Algorithms for Congestion Avoidance in Computer Networks," Digital Equipment Corporation, Technical Report TR-509, August 1987. To be published in Computer Networks and ISDN Systems.

[5] David Clark, "NETBLT: A Bulk Data Transfer Protocol," Massachusetts Institute of Technology, Lab for Computer Science, RFC-275, February 1985.

[6] Digital Equipment Corp., "DECnet Digital Network Architecture NSP Functional Specification, Phase IV, Version 4.0.0," March 1982.

[7] M. Gerla and L. Kleinrock, "Flow Control: A Comparative Survey," IEEE Transactions on Communications, Vol. COM-28, No. 4, April 1980, pp. 553-574.

[8] A. Giessler, J. Haanle, A. Konig, and E. Pade, "Free Buffer Allocation - An Investigation by Simulation," Computer Networks, Vol. 1, No. 3, July 1978, pp. 191-204.

[9] International Organization for Standardization, "ISO 8073: Information Processing Systems - Open Systems Interconnection - Connection Oriented Transport Protocol Specification," July 1986.

[10] J. M. Jaffe, "Flow Control Power is Nondecentralizable," IEEE Transactions on Communications, Vol. COM-29, No. 9, September 1981, pp. 1301-1306.

[11] Raj Jain, Dah-Ming Chiu, and William Hawe, "A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Systems," Digital Equipment Corporation, Technical Report TR-301, September 1984.

[12] Raj Jain, "Using Simulation to Design a Computer Network Congestion Control Protocol," Proc. Sixteenth Annual Pittsburgh Conference on Modeling and Simulation, Pittsburgh, PA, April 25-26, 1985, pp. 987-993.

[13] Raj Jain, "Divergence of Timeout Algorithms for Packet Retransmission," Proc. Fifth Annual International Phoenix Conf. on Computers and Communications, Scottsdale, AZ, March 26-28, 1986, pp. 174-179.

[14] Raj Jain and Shawn Routhier, "Packet Trains - Measurements and a New Model for Computer Network Traffic," IEEE Journal on Selected Areas in Communications, Vol. SAC-4, No. 6, September 1986, pp. 986-995.

[15] Raj Jain, "A Timeout-Based Congestion Control Scheme for Window Flow-Controlled Networks," IEEE Journal on Selected Areas in Communications, Vol. SAC-4, No. 7, October 1986, pp. 1162-1167.

[16] Raj Jain and K. K. Ramakrishnan, "Congestion Avoidance in Computer Networks with a Connectionless Network Layer: Concepts, Goals and Methodology," Proc. IEEE Computer Networking Symposium, Washington, D.C., April 1988, pp. 134-143.

[17] L. Kleinrock, "Power and Deterministic Rules of Thumb for Probabilistic Problems in Computer Communications," in Proc. Int. Conf. Commun., June 1979, pp. 43.1.1-10.

[18] J. M. McQuillan, I. Richer, and E. C. Rosen, "The New Routing Algorithm for the ARPANET," IEEE Transactions on Communications, Vol. COM-28, No. 5, May 1980, pp. 711-719.

[19] J. C. Majithia, et al., "Experiments in Congestion Control Techniques," Proc. Int. Symp. Flow Control Computer Networks, Versailles, France, February 1979.

[20] John Nagle, "Congestion Control in TCP/IP Internetworks," Computer Communication Review, Vol. 14, No. 4, October 1984, pp. 11-17.

[21] K. K. Ramakrishnan, "Analysis of a Dynamic Window Congestion Control Protocol in Heterogeneous Environments Including Satellite Links," Proceedings of Computer Networking Symposium, November 1986.

[22] K. K. Ramakrishnan and Raj Jain, "An Explicit Binary Feedback Scheme for Congestion Avoidance in Computer Networks with a Connectionless Network Layer," Proc. ACM SIGCOMM'88, Stanford, CA, August 1988.

[23] K. K. Ramakrishnan, Dah-Ming Chiu, and Raj Jain, "Congestion Avoidance in Computer Networks with a Connectionless Network Layer. Part IV: A Selective Binary Feedback Scheme for General Topologies," Digital Equipment Corporation, Technical Report TR-510, August 1987.

Appendix A: Algorithms

The SIMULA procedures used in the simulation model are included here to clearly explain the various algorithms used in the scheme. The data structures used by the queue servers (in the routers) are presented, followed by five procedures which are used as follows:

1. Arrival: This procedure is executed on each packet arrival. It computes the area under the queue length curve. Also, at the beginning of a new cycle, the tables are initialized. The SIMULA variable 'time' gives the currently simulated time.

2. Departure: This procedure is executed on the packet departure. It decreases the queue size and updates the value of the area under the queue length curve. A hash function is used to find the table entry where the counts for packets sent by this user are kept.

3. Fair_Share: This procedure is used to decide the maximum number of packets any user should be allowed to send. The routers set the congestion avoidance bits in packets belonging to users sending more than this amount.

4. Increase: This procedure is used by a transport entity to increase its window if less than 50% of the bits received are set.

5. Decrease: This procedure is used by a transport entity to decrease its window if more than or equal to 50% of the bits are set. The SIMULA function Entier(x) returns the highest integer less than or equal to x.

The first two procedures are parts of the feedback filter algorithm discussed earlier under router policies. The third procedure constitutes the feedback selector algorithm. The last two procedures make up the increase/decrease algorithms of the user policies.

Figure 7:

Figure 8:

!The following data structure is maintained by each queue server or router.
 The size of the table 'dim_tables' to be used is left to the implementors;

REAL ARRAY packets_sent[0:dim_tables];      !Table for keeping packet counts;
                                            !0th location is used for total count;
REAL ARRAY prev_packets_sent[0:dim_tables]; !Counts for previous cycle;
REAL avg_q_length;                          !Average queue length at this server;
REAL area;                                  !Area under Q length vs time curve;
REAL prev_area;                             !Area in the previous cycle;
INTEGER q_length;                           !Queue length (includes one in service);
REAL q_change_time;                         !Last time the queue changed;
REAL prev_cycle_begin_time;                 !Time at which previous cycle began;
REAL cycle_begin_time;                      !Time at which this cycle began;

PROCEDURE arrival;                !To be executed on packet arrival;
BEGIN
   INTEGER i;                     !Temporary index variable;
   area:=area+q_length*(time-q_change_time); !Compute area under the curve;
   q_length:=q_length+1;          !Increment number in the queue;
   q_change_time:=time;           !Time of change in queue length;
   IF q_length=1 THEN             !Beginning of a new cycle;
   BEGIN                          !End the previous cycle;
      prev_cycle_begin_time:=cycle_begin_time;
      cycle_begin_time:=time;
      prev_area:=area;
      area:=0;
      FOR i:=0 STEP 1 UNTIL dim_tables DO
      BEGIN
         prev_packets_sent[i]:=packets_sent[i]; !Remember all counts;
         packets_sent[i]:=0;                    !Clear packet counts;
      END;                        !of FOR;
   END;                           !of IF q_length=1;
END of arrival;

PROCEDURE departure;              !To be executed on packet departure;
BEGIN
   BOOLEAN bit;                   !The congestion avoidance bit in the packet;
   INTEGER user;                  !Index in the packet count table;
   area:=area+q_length*(time-q_change_time); !Compute area under the curve;
   q_length:=q_length-1;          !Decrement the number in the queue;
   q_change_time:=time;           !Remember time of queue length change;
   avg_q_length:=(area+prev_area)/(time-prev_cycle_begin_time); !Compute avg Q length;
   user:=hash(source_address,dest_address,dim_tables); !Find index into the table;
   packets_sent[user]:=packets_sent[user]+1;  !Increment the count;
   packets_sent[0]:=packets_sent[0]+1;        !Increment total count also;
   IF avg_q_length>2              !Are we heavily congested?;
   THEN bit:=TRUE                 !Yes, set bit for all users;
   ELSE IF avg_q_length<1 THEN    !No, do nothing if we are underloaded;
   ELSE IF packets_sent[user]+prev_packets_sent[user]>fair_share
   THEN bit:=TRUE;                !If the user sent too many packets, set bit;
END of departure;

REAL PROCEDURE fair_share;        !Computes the max number of packets a user can send;
BEGIN
   REAL capacity;                 !Knee capacity of the server;
   REAL old_fair_share;           !Max allocation used previously;
   INTEGER sum_allocation;        !Total capacity allocated;
   INTEGER old_sum_allocation;    !Capacity allocated previously;
   INTEGER i;                     !Index variable;
   REAL demand;                   !Demand of the ith user;
   INTEGER num_not_allocated;     !Number of users yet to be allocated;
   capacity := 0.9*(packets_sent[0]+prev_packets_sent[0]);
                                  !Assume capacity = 90% of packets sent;
   num_not_allocated := dim_tables; !Initialize number of users to be allocated;
   sum_allocation := 0;           !Total allocation so far;
   old_sum_allocation := -1;      !Allocation in the previous iteration;
   fair_share := -1;              !Users below this allocation are good;
   WHILE sum_allocation>old_sum_allocation DO
   BEGIN                          !Beginning of a new iteration;
      old_fair_share := fair_share;
      old_sum_allocation := sum_allocation;
      fair_share := (capacity-sum_allocation)/num_not_allocated; !New estimate;
      FOR i := 1 STEP 1 UNTIL dim_tables DO
      BEGIN
         demand:=packets_sent[i]+prev_packets_sent[i]; !Demand in the last two cycles;
         IF demand<=fair_share AND demand>old_fair_share
         THEN BEGIN
            num_not_allocated := num_not_allocated-1;  !One more user satisfied;
            sum_allocation := sum_allocation+demand;
         END;                     !of IF;
      END;                        !of FOR;
   END;                           !of WHILE;
END of fair_share;

PROCEDURE increase(w,w_max,w_used); !Used to increase the window;
NAME w,w_used;                    !These parameters are called by name;
REAL w;                           !Computed window (real valued);
INTEGER w_max;                    !Max window allowed by the destination;
INTEGER w_used;                   !Window value used (integer valued);
BEGIN
   w:=w+1;                        !Go up by 1;
   IF w>w_used+1 THEN w:=w_used+1; !No more than 1 above the last used;
   IF w>w_max THEN w:=w_max;      !Also, never beyond the destination limit;
   w_used:=Entier(w+0.5)          !Round-off;
END of increase;

PROCEDURE decrease(w,w_used);     !Used to decrease the window;
NAME w,w_used;                    !These parameters are called by name;
REAL w;                           !Computed window (real valued);
INTEGER w_used;                   !Window value used (integer valued);
BEGIN
   w:=0.875*w;                    !Multiplicative decrease;
   IF w<1 THEN w:=1;              !Do not reduce below one;
   w_used:=Entier(w+0.5);         !Round-off;
END of decrease;