
CS144: An Introduction to Computer Networks
Packet Switching
CS144, Stanford University

Outline
1. End-to-end delay
2. Queueing delay
3. Simple deterministic queue model
4. Examples

Propagation delay, $t_l$: the time it takes a single bit to travel over a link of length $l$ at propagation speed $c$:

$t_l = \frac{l}{c}$

Example: A bit takes 5 ms to travel 1,000 km in an optical fiber with propagation speed $2 \times 10^8$ m/s.

Packetization delay, $t_p$: the time from when the first bit of a packet is transmitted until the last bit is transmitted, for a packet of $p$ bits on a link of rate $r$ bits/s:

$t_p = \frac{p}{r}$

Example 1: A 64-byte (512-bit) packet takes 5.12 µs to be transmitted onto a 100 Mb/s link.
Example 2: A 1 kbit (1,024-bit) packet takes 1.024 s to be transmitted onto a 1 kb/s link.

Total time to send a packet across a link: the time from when the first bit is transmitted until the last bit arrives:

$t = t_p + t_l = \frac{p}{r} + \frac{l}{c}$

Example: A 100-bit packet takes 10 + 5 = 15 µs to be sent at 10 Mb/s over a 1 km link.

End-to-end delay

Consider a path from A to B through switches S1, S2 and S3, where hop $i$ is a link of length $l_i$ and rate $r_i$. How long will it take a packet of length $p$ to travel from A to B, from when the first bit is sent until the last bit arrives? Assume the switches store-and-forward packets along the path.

End-to-end delay: $t = \sum_i \left( \frac{p}{r_i} + \frac{l_i}{c} \right)$

[Time-space diagram: at A and at each switch the packet incurs the packetization delay $p/r_i$ followed by the propagation delay $l_i/c$, and these per-hop delays accumulate until the last bit reaches B.]

In practice the packet also waits behind other packets queued at each switch. If $Q_i(t)$ is the queueing delay at hop $i$, the end-to-end delay becomes

$t = \sum_i \left( \frac{p}{r_i} + \frac{l_i}{c} + Q_i(t) \right)$

Packet delay variation

[Figure: CDF (%) of RTT (ms) for two paths. Stanford-Princeton, about 4,000 km, shows variation of roughly 50 ms; Stanford-Tsinghua, about 10,000 km, shows variation of roughly 200 ms.]

Simple model of a router queue

$A(t)$: cumulative arrivals up until time $t$
$D(t)$: cumulative departures up until time $t$
$Q(t)$: number of packets in the queue at time $t$

Packets arriving from different inputs (at aggregate rate $\lambda$) join the router queue, which is served at rate $R$. The same model can be written in bytes: $A(t)$ is then the cumulative number of bytes arrived up until time $t$ and $D(t)$ the cumulative number of bytes departed, with the queue drained at the link rate $R$.

Properties:
1. $A(t)$ and $D(t)$ are non-decreasing.
2. $A(t) \geq D(t)$.

Queue occupancy: $Q(t) = A(t) - D(t)$.

Queueing delay, $d(t)$, is the time spent in the queue by a byte that arrived at time $t$, assuming the queue is served first-come-first-served (FCFS).

Example (store and forward): Every second, a 500-bit packet arrives to a queue at rate 10,000 b/s. The maximum departure rate is 1,000 b/s. What is the average occupancy of the queue?

Solution: Because the switch is store-and-forward, no bits depart until the whole packet has arrived. During each repeating 1 s cycle, the queue therefore fills at 10,000 b/s for 0.05 s (to 500 bits), then empties at 1,000 b/s for 0.5 s. Over the first 0.55 s, the average queue occupancy is 250 bits. The queue is empty for the remaining 0.45 s of the cycle, so the time-average occupancy is (0.55 × 250) + (0.45 × 0) = 137.5 bits.
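To make the end-to-end delay formula above concrete, here is a minimal Python sketch. It is not from the slides: the function name and the example link parameters are assumptions, and queueing delay is taken to be zero ($Q_i(t) = 0$).

```python
# Sketch: evaluate t = sum_i (p/r_i + l_i/c) for a store-and-forward path.
# Function name and example link values are illustrative assumptions.

C = 2e8  # propagation speed in fiber, m/s (as in the propagation-delay example)

def end_to_end_delay(p_bits, links):
    """links: list of (length_m, rate_bps) tuples, one per hop; returns seconds."""
    return sum(p_bits / r + l / C for (l, r) in links)

# Assumed example: a 100-bit packet over two hops, each 1 km at 10 Mb/s.
links = [(1_000, 10e6), (1_000, 10e6)]
print(end_to_end_delay(100, links))  # approx 3.0e-05 s: two hops of 10 us + 5 us each
```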
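As a sanity check on the worked store-and-forward example above, the short sketch below steps $Q(t) = A(t) - D(t)$ through one 1-second cycle and averages it numerically. The step size and variable names are my own choices.

```python
# Sketch: numerically average Q(t) = A(t) - D(t) over one 1 s cycle of the
# store-and-forward example: a 500-bit packet arrives at 10,000 b/s once per
# second and the queue drains at 1,000 b/s. Step size and names are assumptions.

dt = 1e-4                # time step, seconds
arrival_rate = 10_000    # b/s while the packet is arriving
drain_rate = 1_000       # b/s service rate
packet_bits = 500

q = 0.0                  # queue occupancy Q(t), in bits
area = 0.0               # integral of Q(t) dt over the cycle

for i in range(int(1.0 / dt)):
    t = i * dt
    if t < packet_bits / arrival_rate:      # first 0.05 s: packet still arriving;
        q += arrival_rate * dt              # store-and-forward, so nothing departs yet
    elif q > 0:
        q = max(0.0, q - drain_rate * dt)   # then drain at the line rate
    area += q * dt

print(area / 1.0)  # approx 137.5 bits, matching the answer above
```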
Example (cut-through): Every second, a 500-bit packet arrives to a queue at rate 10,000 b/s. The maximum departure rate is 1,000 b/s. What is the time-average occupancy of the queue?

Solution: With cut-through, bits can depart while the packet is still arriving. During each repeating 1 s cycle, the queue fills at the net rate of 9,000 b/s, reaching 500 - 50 = 450 bits after the first 0.05 s, then drains at 1,000 b/s for 0.45 s. Over the first 0.5 s, the average queue occupancy is therefore 225 bits. The queue is empty for the remaining 0.5 s of the cycle, so the time-average occupancy is $\bar{Q} = (0.5 \times 225) + (0.5 \times 0) = 112.5$ bits.

What if some packets are more important than others?

By default, switches and routers use FIFO (a.k.a. FCFS) queues: packets arriving from different ingress ports share a single buffer of size B and depart, in arrival order, at the service rate R.

Some packets are more important than others. For example:
1. The control traffic that keeps the network working (e.g. packets carrying routing table updates)
2. Traffic from a particular user (e.g. a customer paying more)
3. Traffic belonging to a particular application (e.g. videoconferencing)
4. Traffic to/from particular IP addresses (e.g. emergency services)
5. Traffic that is time sensitive (e.g. clock updates)

Flows

When talking about priorities, it is convenient to talk about a "flow" of packets that all share a common set of attributes. For example:
1. The flow of packets all belonging to the same TCP connection, identified by the tuple of TCP port numbers, IP addresses, and protocol.
2. The flow of packets all destined to Stanford, identified by a destination IP address belonging to the prefix 171.64/16.
3. The flow of packets all coming from Google, identified by a source IP address belonging to the set of prefixes Google owns.
4. The flow of web packets using the HTTP protocol, identified by packets with TCP port number 80.
5. The flow of packets belonging to gold-service customers, typically identified by a marking in the IP TOS (type of service) field.

Outline
1. Strict priorities
2. Weighted priorities and rate guarantees

Strict priorities

High-priority flows and low-priority flows are placed in separate queues. "Strict priorities" means a queue is only served when all the higher-priority queues are empty.

Strict priorities: things to bear in mind
1. Strict priorities can be used with any number of queues.
2. Strict priorities means a queue is only served when all the higher-priority queues are empty.
3. The highest-priority flows "see" a network with no lower-priority traffic.
4. Higher-priority flows can permanently block lower-priority flows, so try to limit the amount of high-priority traffic.
5. It is not likely to work well if you can't control the amount of high-priority traffic,
6. or if you really want weighted (instead of strict) priority.

How do I give weighted (instead of strict) priority?

Trying to treat flows equally

[Figure: two flows with different packet sizes are served one packet at a time, in turn.] While each flow gets to send at the same packet rate, the data rate is far from equal.
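To see concretely why an equal packet rate is not an equal data rate, here is a tiny sketch of per-packet round robin between two flows. The flow names and packet sizes are assumptions chosen for illustration.

```python
# Sketch: per-packet round robin gives both flows the same packet rate, but the
# flow with larger packets takes most of the link's data rate.
# Flow names and packet sizes below are illustrative assumptions.

packet_size = {"big-packet flow": 1000, "small-packet flow": 50}  # bytes

rounds = 1000  # each flow sends one packet per round-robin round
bytes_sent = {flow: size * rounds for flow, size in packet_size.items()}
total = sum(bytes_sent.values())

for flow, sent in bytes_sent.items():
    print(f"{flow}: {rounds} packets, {sent / total:.0%} of the data rate")
# big-packet flow: 1000 packets, 95% of the data rate
# small-packet flow: 1000 packets, 5% of the data rate
```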
Scheduling flows bit-by-bit

If instead we schedule the flows bit-by-bit, sending one bit from each flow in turn, then each flow gets to send at the same data rate, but we no longer have "packet switching".

Can we combine the best of both? That is, packet switching, but with bit-by-bit accounting?

Fair Queueing

[Figure: packets from two flows, numbered within each flow, are merged onto the outgoing link.] Packets are sent in the order they would complete in the bit-by-bit scheme. Does this give a fair (i.e. equal) share of the data rate?

Yes!
1. It can be proved that the departure time of a packet with Fair Queueing is no more than Lmax/R seconds later than if it were scheduled bit-by-bit, where Lmax is the maximum packet length and R is the data rate of the outgoing link.
2. In the limit, the two flows receive an equal share of the data rate.
3. The result extends to any number of flows sharing a link. [1]

[1] "Analysis and Simulation of a Fair Queueing Algorithm," Demers, Keshav, and Shenker, 1990.

What if we want to give a different share of the link to each flow? That is, a weighted fair share.

Weighted Fair Queueing

[Figure: the same two flows, now with weights 3/4 and 1/4, so the higher-weight flow's packets complete, and are therefore sent, three times as often.] As before, packets are sent in the order they would complete in the bit-by-bit scheme, except that each flow's bits are now served in proportion to its weight.
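In practice, the "send packets in the order they would complete bit-by-bit" rule is approximated with per-packet virtual finish times: a packet of length L from a flow with weight w is charged L/w units of virtual time, and the packet with the smallest finish time is sent next. The sketch below is a simplified single-link illustration of that idea, not the exact algorithm of Demers, Keshav and Shenker; the class, the names, and the crude virtual-clock update are assumptions.

```python
import heapq
from itertools import count

class WeightedFairQueue:
    """Simplified weighted fair queueing sketch: a packet of length L from flow f
    gets finish time F = max(flow's last finish, virtual time) + L / weight[f],
    and packets are sent in increasing order of F. The virtual clock here is a
    crude approximation (it jumps to the finish time of the packet just sent)."""

    def __init__(self, weights):
        self.weights = dict(weights)                  # e.g. {"A": 3, "B": 1}
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_time = 0.0
        self.heap = []                                # (finish, seq, flow, length)
        self.seq = count()                            # tie-breaker for equal finishes

    def enqueue(self, flow, length):
        start = max(self.last_finish[flow], self.virtual_time)
        finish = start + length / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, next(self.seq), flow, length))

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, length = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow, length

# Assumed example: flow A has weight 3, flow B has weight 1 (the 3/4 vs 1/4 split).
wfq = WeightedFairQueue({"A": 3, "B": 1})
for _ in range(4):
    wfq.enqueue("A", 1000)
    wfq.enqueue("B", 1000)
print([wfq.dequeue()[0] for _ in range(8)])
# ['A', 'A', 'B', 'A', 'A', 'B', 'B', 'B']: while both flows are backlogged,
# A is sent about three times as often; once A empties, B uses the whole link.
```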