Congestion Avoidance in Computer Networks
With a Connectionless Network Layer
Raj Jain, K. K. Ramakrishnan, Dah-Ming Chiu
Digital Equipment Corporation
550 King St. LKG1-2/A19
Littleton, MA 01460
DEC-TR-506
© Copyright 1988, Digital Equipment Corporation. All rights reserved.
Version: June 1, 1997
Abstract
Widespread use of computer networks and the use of varied technology for the interconnection of computers has made congestion a significant problem.

In this report, we summarize our research on congestion avoidance. We compare the concept of congestion avoidance with that of congestion control. Briefly, congestion control is a recovery mechanism, while congestion avoidance is a prevention mechanism. A congestion control scheme helps the network to recover from the congested state while a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput with minimal queuing, thereby preventing it from entering the congested state in which packets are lost due to buffer shortage.

A number of possible alternatives for congestion avoidance were identified. From these alternatives we selected one called the binary feedback scheme in which the network uses a single bit in the network layer header to feed back the congestion information to its users, which then increase or decrease their load to make optimal use of the resources. The concept of global optimality in a distributed system is defined in terms of efficiency and fairness such that they can be independently quantified and apply to any number of resources and users.

The proposed scheme has been simulated and shown to be globally efficient, fair, responsive, convergent, robust, distributed, and configuration-independent.
1 INTRODUCTION

Congestion in computer networks is becoming a significant problem due to increasing use of the networks, as well as due to increasing mismatch in link speeds caused by intermixing of old and new technology. Recent technological advances such as local area networks (LANs) and fiber optic LANs have resulted in a significant increase in the bandwidths of computer network links. However, these new technologies must coexist with the old low bandwidth media such as the twisted pair. This heterogeneity has resulted in a mismatch of arrival and service rates in the intermediate nodes in the network, causing increased queuing and congestion.

We are concerned here with congestion avoidance rather than congestion control. Briefly, a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput. These schemes prevent a network from entering the congested state in which the packets are lost. We will elaborate on this point in the next section, where the terms flow control, congestion control, and congestion avoidance will be defined and their relationship to each other discussed.

We studied a number of alternative schemes for congestion avoidance. Based on a number of requirements described later in this report, we selected an alternative called the binary feedback scheme for detailed study. This scheme uses only a single bit in the network layer header to feed back the congestion information from the network to users, which then increase or decrease their load on the network to make efficient and fair use of the resources. We present precise definitions of efficiency and fairness that can be used for other distributed systems as well.

This report is a summary of our work in the area of congestion avoidance in connectionless networks. We have tried to make this summary as self-contained and brief as possible. For further information, the reader is encouraged to read detailed reports in [16, 22, 4, 23].

2 CONCEPTS

In this section we define the basic concepts of flow control, congestion control, and congestion avoidance.

Consider the simple configuration shown in Figure 1a, in which two nodes are directly connected via a link. Without any control, the source may send packets at a rate too fast for the destination. This may cause buffer overflow at the destination, leading to packet losses, retransmissions, and degraded performance. A flow control scheme protects the destination from being flooded by the source. Some of the alternatives that have been described in the literature are window flow-control, Xon/Xoff [7], rate flow-control [5], etc. In the window flow-control scheme, the destination specifies a limit on the number of packets that the source may send without further permission from the destination.

Figure 1:

Let us now extend the configuration to include a communication subnet (see Figure 1b) consisting of routers and links that have limited memory, bandwidth, and processing speeds. Now the source must not only obey the directives from the destination, but also from all the routers and links in the network. Without this additional control the source may send packets at a rate too fast for the network, leading to queuing, buffer overflow, packet losses, retransmissions, and performance degradation. A congestion control scheme protects the network from being flooded by its users (transport entities at source and destination nodes).

In connection-oriented networks the congestion problem is generally solved by reserving the resources at all routers during connection setup. In connectionless networks it can be done by explicit messages (choke packets) from the network to the sources [19], or by implicit means such as timeout on a packet loss. In [15, 13, 21], a number of alternatives have been discussed and a timeout-based scheme has been analyzed in detail.

Traditional congestion control schemes help improve the performance after congestion has occurred. Figure 2 shows general patterns of response time and throughput of a network as the network load increases. If the load is small, throughput generally keeps up with the load. As the load increases, throughput increases. After the load reaches the network capacity, throughput stops increasing. If the load is increased any further, the queues start building, potentially resulting in packets being dropped. Throughput may suddenly drop when the load increases beyond this point and the network is said to be congested. The response-time curve follows a similar pattern. At first the response time increases little with load. When the queues start building up, the response time increases linearly until finally, as the queues start overflowing, the response time increases drastically.

Figure 2:

The point at which throughput approaches zero is called the point of congestion collapse. This is also the point at which the response time approaches infinity. The purpose of a congestion control scheme such as [15, 3] is to detect the fact that the network has reached the point of congestion collapse resulting in packet losses, and to reduce the load so that the network returns to an uncongested state.

We call the point of congestion collapse a cliff due to the fact that the throughput falls off rapidly after this point. We use the term knee to describe the point after which the increase in the throughput is small, but after which a significant increase in the response time results.

A scheme that allows the network to operate at the knee is called a congestion avoidance scheme, as distinguished from a congestion control scheme that tries to keep the network operating in the zone to the left of the cliff. A properly designed congestion avoidance scheme will ensure that the users are encouraged to increase their traffic load as long as this does not significantly affect the response time, and are required to decrease it if that happens. Thus, the network load oscillates around the knee. Congestion control schemes are still required, however, to protect the network should it reach the cliff due to transient changes in the network.

The distinction between congestion control and congestion avoidance is similar to that between deadlock recovery and deadlock avoidance. Congestion control procedures are curative and the avoidance procedures are preventive in nature. The point at which a congestion control scheme is called upon depends upon the amount of memory available in the routers, whereas the point at which a congestion avoidance scheme is invoked is independent of the memory size.

We elaborate further on these concepts in [16].

3 ALTERNATIVES

Congestion control and congestion avoidance are dynamic system control issues. Like all other control schemes they consist of two parts: a feedback mechanism and a control mechanism. The feedback mechanism allows the system (network) to inform its users (sources or destinations) of the current state of the system, and the control mechanism allows the users to adjust their loads on the system.

The problem of congestion control has been discussed extensively in the literature. A number of feedback mechanisms have been proposed. If we extend those mechanisms to operate the network around the knee rather than the cliff, we obtain congestion avoidance mechanisms. For the feedback mechanisms we have the following alternatives:

1. Congestion feedback via packets sent from routers to sources
2. Feedback included in the routing messages exchanged among routers

3. End-to-end probe packets sent by sources

4. Each packet containing a congestion feedback field filled in by routers in packets going in the reverse direction (reverse feedback)

5. Each packet containing a congestion feedback field filled in by routers in packets going in the forward direction (forward feedback)

The first alternative is popularly known as choke packet [19] or source quench message in ARPAnet [20]. It requires introducing additional traffic in the network during congestion, which may not be desirable.

The second alternative, increasing the cost used in updating the forwarding database of congested paths, has been tried before in ARPAnet's delay-sensitive routing. The delays were found to vary too quickly, resulting in a high overhead [18].

The third alternative, probe packets, also suffers from the disadvantage of added overhead, unless probe packets have a dual role of carrying other information in them. If the latter were the case, there would be no reason not to use every packet going through the network as a probe packet. We may achieve this by reserving a field in the packet that is used by the network to signal congestion. This leads us to the last two alternatives.

The fourth alternative, reverse feedback, requires routers to piggyback the signal on the packets going in the direction opposite the congestion. This alternative has the advantage that the feedback reaches the source faster. However, the forward and reverse traffic are not always related. The destinations of the reverse traffic may not be the cause of or even the participants in the congestion on the forward path. Also, many networks (including Digital Network Architecture, or DNA) have path-splitting such that the path from A to B is not necessarily the same as that from B to A.

The fifth alternative, forward feedback, sends the signal in the packets going in the forward direction (direction of congestion). In the case of congestion, the destination either asks the source to reduce the load or returns the signal back to the source in the packets or acknowledgments going in the reverse direction. This is the alternative that we study here and in [22, 23].

The key architectural assumption about the networks in this study is that they use connectionless network service and transport level connections. By this we mean that a router is not aware of the transport connections passing through it, and the transport entities are not aware of the path used by their packets. There is no prior reservation of resources at routers before an entity sets up a connection. The routers cannot compute the resource demands except by observing the traffic flowing through them. Examples of network architectures with connectionless network layers are DoD TCP/IP, DNA, and ISO connectionless network service used with ISO transport class 4 [9].

4 PERFORMANCE METRICS

A congestion avoidance scheme is basically a resource allocation mechanism in which the subnet (set of intermediate nodes or routers) is a set of m resources that has to be allocated to n users (source-destination pairs). There are two parties involved in any resource allocation mechanism: the resource manager and the user. The resource manager's goal is to use the resource as efficiently as possible. Users, on the other hand, are more interested in getting a fair share of the resource. We therefore need to define efficiency and fairness.

For our current problem of congestion avoidance, the routers are our resources and therefore we use the terms routers and resources interchangeably. The concepts introduced here, however, are general and apply to other distributed resource allocation problems as well. Similarly, for the current problem, the demands and allocations are measured by packets/second (throughput), but the concepts apply to other ways of quantifying demands and allocations.

Readers not interested in definitions of these metrics may skip to the next section on the proposed scheme.

4.1 Single Resource, Single User

Consider first only one user and one resource. In this case fairness is not an issue. If the user is allowed to increase its demand (window), the throughput increases. However, the response time (total waiting time at the resource) also increases. Although we want to achieve as high a throughput as possible, we also want to keep the response time as small as possible. One way to achieve a tradeoff between these conflicting requirements is to maximize resource power [8, 17], which is defined by:

Resource Power = (Resource Throughput)^α / (Resource Response Time)

Here, α is a constant. Generally, α = 1. Other values of α can be used to give higher preference to throughput (α > 1) or response time (α < 1). The concepts presented in this report apply to any value of α. However, unless otherwise specified we will assume throughout this report that α = 1. The resource power is maximum at the knee.

For any given inter-arrival and service time distributions, we can compute the throughput at the knee. We call this the knee-capacity of the resource.

The maximally efficient operating point for the resource is its knee. The efficiency of resource usage is therefore quantified by:

Resource Efficiency = Resource Power / (Resource Power at Knee)

The resource is used at 100% efficiency at the knee. As we move away from the knee, the resource is being used inefficiently, that is, either underutilized (throughput lower than the knee-capacity) or overutilized (high response time).

4.2 Single Resource, Multiple Users with Equal Demands

With multiple users we have an additional requirement of fairness. The allocation is efficient as long as the total throughput is equal to the knee-capacity of the resource. However, a maximally efficient allocation may not be fair, as some users may get better treatment than others. The fairness of an allocation is a function of the amounts demanded as well as the amounts allocated. To simplify the problem, let us first consider the case of equal demands in which all users have identical demands D. The maximally fair allocation then consists of equal allocations to all users, i.e., a_i = A for all i. The fairness of any other (non-equal) allocation is measured by the following fairness function [11]:

Fairness = (Σ_{i=1}^{n} x_i)^2 / (n Σ_{i=1}^{n} x_i^2)    (1)

where x_i = a_i / D.

This function has the property that its value always lies between 0 and 1 and that 1 (or 100%) represents a maximally fair allocation.

Notice that we use user throughput to measure allocations and demands because of its additivity property: the total throughput of n users at a single resource is the sum of their individual throughputs.

4.3 Single Resource, Multiple Users with Unequal Demands

Given a resource with knee-capacity T_knee, each of the n users deserves a fair share of T_knee/n. However, there is no point in allocating T_knee/n to a user who is demanding less than T_knee/n. It would be better to give the excess to another user who needs more. This argument leads us to extend the concept of maximally fair allocation such that the fair share t_f is computed subject to the following two constraints:

1. The resource is fully allocated:

   Σ_{i=1}^{n} a_i = T_knee

2. No one gets more than the fair share or its demand:

   a_i = min{d_i, t_f}
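As an illustration, the fairness function (1) and the two fair-share constraints can be sketched in Python; the water-filling procedure and the function names are ours, not part of the report:

```python
def fairness(x):
    # Fairness function (1): (sum of x_i)^2 / (n * sum of x_i^2), in (0, 1].
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

def fair_share(demands, t_knee):
    # Compute the fair share t_f so that allocations a_i = min(d_i, t_f)
    # fully allocate the knee-capacity (constraints 1 and 2 above).
    remaining, active = t_knee, len(demands)
    for d in sorted(demands):
        if d <= remaining / active:    # user wants less than an equal split
            remaining -= d             # of the leftover capacity: satisfy it
            active -= 1
        else:
            return remaining / active  # equal split among the bigger users
    return max(demands)                # total demand fits within t_knee

# Knee-capacity 12 shared by demands 2, 8, 10: t_f = 5, allocations 2, 5, 5.
t_f = fair_share([2.0, 8.0, 10.0], 12.0)
```

With equal ratios x_i the index is 1; any skew pushes it toward 1/n, which is what makes it usable as a distance from the maximally fair allocation.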
Given the knee capacity of a resource and individual user demands, the above two constraints allow us to determine the maximally fair allocation {A_1, A_2, ..., A_n}. If the actual allocation {a_1, ..., a_n} is different from this, we need a distance function to quantify the fairness. We do this by using the fairness function of equation (1) with x_i = a_i / A_i.

The efficiency of the resource usage can be computed as before by computing resource power from the resource throughput (which is given as the sum of user throughputs in this case) and the resource response time. The allocation that is 100% efficient and 100% fair is the optimal allocation.

We must point out that the above discussion for a single resource case also applies if there are multiple (m) routers but all routers are shared by all n users. In this case the set of m routers can be combined and considered as one resource.

4.4 Multiple Resources, One User

We have extended the concepts of fairness and efficiency to a distributed system with multiple resources. Let us first consider a case of a single user so that fairness is not an issue. For the subnet congestion problem, the user has a path P passing through m resources (routers) {r_1, r_2, ..., r_m}. The resource with the lowest service rate determines the user's throughput and is called the bottleneck resource. The bottleneck resource has the highest utilization (ratio of throughput to service rate) and contributes the most to the user's response time. The maximally efficient operating point for the system is defined as the same as that for the bottleneck router. Thus, given a system of m resources, we determine the bottleneck and define its efficiency as the global efficiency and its knee as the maximally efficient operating point for the system.

Global Efficiency = Efficiency of the Bottleneck Resource

Note that the global efficiency, as defined here, depends upon the response time at the bottleneck resource and not on the user response time, which is a sum of response times at m resources.

4.5 Multiple Resources, Multiple Users

In this case, there are n users and m resources. The i-th user has a path p_i consisting of a subset of resources {r_i1, r_i2, ..., r_im_i}. Similarly, the j-th resource serves n_j users {U_j1, U_j2, ..., U_jn_j}. The global efficiency is still defined by the bottleneck resource, which is identified as the resource with the highest utilization. The problem of finding the maximally efficient and maximally fair allocation is now a constrained optimization problem as it has to take differing user paths into account. We have developed an algorithm [23] which gives the globally optimal (fair and efficient) allocation for any given set of resources, users, and paths.

Once the globally optimal allocation {A_1, A_2, ..., A_n} has been determined, it is easy to quantify the fairness of any other allocation {a_1, a_2, ..., a_n} by using the same fairness function as in the single resource case (equation (1)) with x_i = a_i / A_i.

This fairness is called global fairness and the efficiency of the bottleneck resource is called the global efficiency. An allocation which is 100% globally efficient and 100% globally fair is said to be globally optimal. It should be pointed out that by associating efficiency with resource power rather than user power, we have been able to avoid the problems encountered by other researchers [2, 10] in using the power metric.

Notice that we have a multi-criteria optimization problem, since we are trying to maximize efficiency as well as fairness. One way to solve such problems is to combine the multiple criteria into one, for instance by taking a weighted sum or by taking a product. We chose instead to put a strict priority on the two criteria. Efficiency has a higher priority than fairness. Given two alternatives, we prefer the more efficient alternative. Given two alternatives with equal efficiency, we choose the fairer alternative.

5 THE PROPOSED SCHEME

We have designed a scheme that allows a network to operate at its knee. As shown in Figure 3, the scheme uses one bit (called the congestion avoidance bit) in the network layer header of the packet for feedback from the subnet to the users. A source clears the congestion avoidance bit as the packet enters the subnet. All routers in the subnet monitor their load and if they detect that they are operating above the knee, they set the congestion avoidance bit in the packets belonging to users causing overload. Routers operating below the knee pass the bit as received. When the packet is received at the destination, the network layer passes the bit to the destination transport, which takes action based on the bits.

Figure 3:

There are two versions of the binary feedback scheme:

1. Destination-based

2. Source-based

In the first version, the destination examines the bits received, determines a new flow-control window, and sends this window to the source. In the second version, the destination sends all bits back to the source along with the acknowledgments. In this case, we need to reserve one bit in the headers of transport layer acknowledgment packets where the destination transport entity copies the bit received from the network layer. The source transport entity examines the stream of bits received, determines a new operating window, and uses it as long as it does not violate the window limit imposed by the destination.

We have studied both versions. The NSP transport protocol in DNA [6] uses the source-based approach, while the ISO TP4 [9] implementation uses the destination-based approach.

In the remainder of this report, we use the word user to include both source and destination transport entities. Thus, when we say that the user changes its window, the change might be decided and affected by the source or destination transport entity.

The proposed congestion avoidance scheme consists of two parts: a feedback mechanism in routers, and a control mechanism for users. We call these the router policy and the user policy, respectively. Each of these mechanisms can be further subdivided into three components as shown in Figure 4. We explain these components below. For further details see [16, 22, 23].

Figure 4:

5.1 Router Policies

Routers in a connectionless network environment are not informed about the resource requirements of transport entities and therefore have no prior knowledge of future traffic. They attempt to optimize their operation by monitoring the current load and by asking the users (via the bit) to increase or decrease the load. Thus, the routers have three distinct algorithms:

1. To determine the instantaneous load level

2. To estimate average load over an appropriate time interval

3. To determine the set of users who should be asked to adjust their loads

We call these three algorithms congestion detection, feedback filter, and feedback selection, respectively. The operation of these components and the alternatives considered are described next.
5.1.1 Congestion Detection

Before a router can feed back any information, it must determine its load level. It may be underutilized (below the knee) or overutilized (above the knee). This can be determined based on the utilization, buffer availability, or queue lengths.

We found that the average queue length provides the best mechanism to determine if we are above or below the knee. This alternative is least sensitive to the arrival or service distributions and is independent of the memory available at the router. For both M/M/1 and D/D/1 queues the knee occurs when the average queue length is one. For other arrival patterns such as packet trains [14], this is approximately (though not exactly) true. The routers, therefore, monitor the queue lengths and ask users to reduce the load if the average queue length is more than one, and vice versa.

Figure 5:
5.1.2 Feedback Filter

After a router has determined its load level, its feedback to users is useful if and only if the state lasts long enough for the users to take action based on it. A state that changes very fast may lead to confusion because by the time users become aware of it, the state no longer holds and the feedback is misleading. Therefore, we need a low-pass filter function to pass only those states that are expected to last long enough for the user action to be meaningful.

This consideration rules out the use of instantaneous queue lengths in congestion detection. An instantaneous queue length of 100 may not be a problem for a very fast router but may be a problem for a slow router. We need to average the queue lengths over a long interval. The key question is how long an interval is long enough.

We recommend averaging since the beginning of the previous regeneration cycle. A regeneration cycle is defined as the interval consisting of a busy period and an idle period, as shown in Figure 5. The beginning of the busy period is called a regeneration point. The word regeneration signifies the birth of a new system, since the queuing system's behavior after the regeneration point does not depend upon that before it. The average queue length is given by the area under the curve divided by the time since the last but one regeneration point. Note that the averaging includes a part of the current, though incomplete, cycle. This is shown in Figure 5.
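The averaging rule can be sketched as follows; representing the queue as timestamped (time, length) steps is our assumption for illustration, not a detail from the report:

```python
def average_queue_length(steps, now):
    # Time-average of the queue length: area under the queue-length curve
    # divided by the elapsed time. 'steps' lists (time, length) changes
    # starting at the last-but-one regeneration point; the length is
    # constant between steps, and the current incomplete cycle is included.
    area = 0.0
    t_prev, q_prev = steps[0]
    for t, q in steps[1:]:
        area += q_prev * (t - t_prev)
        t_prev, q_prev = t, q
    area += q_prev * (now - t_prev)
    return area / (now - steps[0][0])

def above_knee(steps, now):
    # The router asks users to reduce load when the average exceeds one.
    return average_queue_length(steps, now) > 1.0
```

A queue that held two packets for half the interval and was idle for the other half averages exactly one, i.e., it sits right at the knee.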
5.1.3 Feedback Selection

The two components of router policy discussed so far (congestion detection and feedback filter) ensure that the router operates efficiently, that is, around the knee. They both work based upon the total load on the router, to decide if the total load is above the knee or below the knee. The total number of users, or the fact that only a few of the users might be causing the overload, is not considered in those components. Fairness considerations demand that only those users who are sending more than their fair share should be asked to reduce their load, and others should be asked to increase if possible. This is done by the feedback selection, an important component of our scheme. Without the selection, the system may stabilize at (operate around) an operating point that is efficient but not fair. For example, two users sharing the same path may keep operating at widely different throughputs.

The feedback selection works by keeping a count of the number of packets sent by different users since the beginning of the queue averaging interval. This is equivalent to monitoring their throughputs. Based on the total throughput, a fair share is determined and users sending more than the fair share are asked to reduce their load while the users sending less than the fair share are asked to increase their load. Of course, if the router is operating below the knee, each one is encouraged to increase regardless of their current load. The fair share is estimated by assuming the capacity to be at 90% of the total throughput since the beginning of the last regeneration cycle.

The feedback selection as proposed here attempts to achieve fairness among different network layer service access point (NSAP) pairs because the packet counts used in the algorithm correspond to these pairs.

This completes the discussion on the router policies. We now turn to the user policies.

5.2 User Policies

Each user receives a stream of congestion avoidance bits, called signals, from the network. These signals are not all identical (or else we would not need all of them). Some signals ask the user to reduce the load, while others ask it to increase the load. The user policy should be designed to compress this stream into a single increase/decrease decision at suitable intervals. The key questions that the user policy helps answer are:

1. How can all signals received be combined?

2. How often should the window be changed?

3. How much should the change be?

We call these three algorithms signal filter, decision frequency, and increase/decrease algorithm, respectively.

5.2.1 Signal Filter

The problem solved by this component is to examine the stream of, for instance, the last n bits and to decide whether the user should increase or decrease its load (window). Mathematically,

d = f(b_1, b_2, b_3, ..., b_n)

Here, d is the binary decision (0 = increase, 1 = decrease) and b_i is the i-th bit, with b_n being the most recently received bit. The function f is the signal filter function. The function that we finally chose requires counting the number of 1s and 0s in the stream of the last n bits. Let

s_1 = number of ones in the stream = Σ b_i
s_0 = number of zeros in the stream = n − s_1

Then, if s_1 > pn then d = 1, else d = 0. Here, p is a parameter called cutoff probability. We found that for exponentially distributed service times, the optimal choice was p = 0.5, as expected. For deterministic service times, however, we found that the choice of p does not matter. This is because in deterministic cases, the router filtering results in the user consistently receiving either all 1s (if the load at the bottleneck is above the knee) or all 0s (if the load is below the knee). Based on this observation, we recommend using a cutoff probability of 50%.

In summary, the signal filtering simply consists of comparing the counts of 1s and 0s received in the bit stream and deciding to go up or down as indicated by the majority of the bits.
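The filter can be sketched as a direct transcription of the rule above; representing the bit stream as a list of integers is our choice for illustration:

```python
def signal_filter(bits, p=0.5):
    # d = f(b_1, ..., b_n): return 1 (decrease) when the count of ones
    # exceeds the cutoff probability p times n, else 0 (increase).
    # p = 0.5 implements the recommended majority rule.
    s1 = sum(bits)                  # number of ones in the stream
    return 1 if s1 > p * len(bits) else 0
```

A tie (s_1 = pn) falls on the increase side here; the report does not pin down the tie case, so that choice is ours.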
5.2.2 Decision Frequency

The decision frequency component of the user policy consists of deciding how often to change the window. Changing it too often leads to unnecessary oscillations, whereas changing it infrequently leads to a system that takes too long to adapt.

System control theory tells us that the optimal control frequency depends upon the feedback delay: the time between applying a control (changing the window) and getting feedback (bits) from the network corresponding to this control. In computer networks, it takes one round-trip delay to affect the control, that is, for the new window to take effect, and another round-trip delay to get the resulting change fed back from the network to the users. This leads us to the recommendation that windows should be adjusted once every two round-trip delays (two window turns) and that only the feedback signals received in the past cycle should be used in window adjustment, as shown in Figure 6.

Figure 6:

5.2.3 Increase/Decrease Algorithms

The purpose of the increase/decrease algorithm is to determine the amount by which the window should be changed once a decision has been made to adjust it.

In the most general case, the increase or decrease amount would be a function of the complete past history of controls (windows) and feedbacks (bits). In the simplest case, the increase/decrease amount would be a function only of the window used in the last cycle and the resulting feedback. Actually, there is little performance difference expected between the simplest and the most general control approach, provided that the simple scheme makes full use of the new information available since the last activation of the component. We therefore chose the simple approach. We have already partitioned the problem so that the signal filter looks at the feedback signals and decides whether to increase or decrease. The increase/decrease algorithm, therefore, needs to look at the window in the last cycle and decide what the new window should be. We limited our search among alternatives to first order linear functions for both increase and decrease:

Increase: w_new = a * w_old + b

Decrease: w_new = c * w_old − d

Here, w_old is the window in the last cycle and w_new is the window to be used in the next cycle; a, b, c, and d are non-negative parameters. There are four special cases of the increase/decrease algorithms:

(a) Multiplicative Increase, Additive Decrease (b = 0, c = 1)

(b) Multiplicative Increase, Multiplicative Decrease (b = 0, d = 0)

(c) Additive Increase, Additive Decrease (a = 1, c = 1)

(d) Additive Increase, Multiplicative Decrease (a = 1, d = 0)

The choices of the alternatives and parameter values are governed by the following goals:

1. Efficiency: The system bottlenecks should be operating at the knee.

2. Fairness: The users sharing a common bottleneck should get the same throughput.

3. Minimum Convergence Time: Starting from any state, the network should reach the optimal (efficient as well as fair) state as soon as possible.

4. Minimum Oscillation Size: Once at the optimal state, the user windows oscillate continuously below and above this state. The parameters should be chosen such that the oscillation size is minimal.

These considerations lead us to the following recommendation for increase/decrease algorithms [16, 4]:

Additive Increase: w_new = w_old + 1

Multiplicative Decrease: w_new = 0.875 * w_old

If the network is operating below the knee, all users go up equally, but, if the network is congested, the multiplicative decrease makes users with higher windows go down more than those with lower windows, making the allocation more fair. Note that 0.875 =

proposed scheme dynamically adjusts its operation to the current optimal point. The users continuously monitor the network by changing the load slightly below and slightly above the optimal point and verify the current state by observing the feedback.

5. Minimum oscillation: The increase amount of 1 and decrease factor of 0.875 have been chosen to minimize the amplitude of oscillations in the window sizes.
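The recommended increase/decrease pair can be sketched as follows; clamping to a minimum window of one packet and to the flow-control limit imposed by the destination is our assumption, not a parameter stated in the report:

```python
def next_window(w_old, decrease, w_limit=float("inf")):
    # Additive increase by 1 when the filtered feedback says increase;
    # multiplicative decrease to 0.875 * w_old when it says decrease.
    w = 0.875 * w_old if decrease else w_old + 1.0
    return max(1.0, min(w, w_limit))  # clamp (our assumption, see above)

# Two users on one bottleneck drift toward equal windows: each decrease
# removes 0.125 * w, so the larger window gives up more than the smaller.
```

Because 0.875 = 7/8, the decrease step is exact in binary floating point, so repeated adjustments introduce no rounding drift in this sketch.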