Queue Management
Mike Freedman
COS 461: Computer Networks


http://www.cs.princeton.edu/courses/archive/spr20/cos461/

Last Lecture: Congestion Control
• What can end-points do collectively to make good use of shared underlying resources?

Today: Queue Management
• What can individual links do to make good use of shared underlying resources?

Packet Queues

Router Line Cards (Interface Cards, Adaptors)
• Line cards sit between the links and the switching fabric; a control-plane processor sits above the data plane
• Packet handling to/from the link:
– Packet forwarding (receive, lookup, transmit)
– Buffer management
– Link scheduling
– Packet filtering
– Rate limiting
– Packet marking
– Measurement

Queue Management Issues
• Scheduling discipline
– Which packet to send?
– Some notion of fairness? Priority?
• Drop policy
– When should you discard a packet?
– Which packet to discard?

FIFO Scheduling and Drop-Tail
• Access to the bandwidth: first-in first-out queue
– Packets only differentiated when they arrive
• Access to the buffer space: drop-tail queuing
– If the queue is full, drop the incoming packet
• Goal: balance throughput and delay
– Huge buffers minimize drops, but add to queuing delay (thus higher RTT, longer slow start, …)

Bursty Loss From Drop-Tail Queuing
• TCP depends on packet loss
– Packet loss is the indication of congestion
– TCP additive increase drives the network into loss
• Drop-tail leads to bursty loss
– Congested link: many packets encounter a full queue
– Synchronization: many connections lose packets at once

Slow Feedback from Drop-Tail
• Feedback comes only when the buffer is completely full
– … even though the buffer has been filling for a while
• Plus, the filling buffer is increasing RTT
– … making detection even slower
• Better to give early feedback
– Get 1-2 connections to slow down before it’s too late!

Early Detection of Congestion

Random Early Detection (RED)
• Router notices that the queue is getting full
– … and randomly drops packets to signal congestion
• Packet drop probability
– Drop probability increases as the queue length increases
– Below a threshold, don’t drop; else, set drop probability as f(avg queue length)
– [Figure: drop probability, rising from 0 to 1, plotted against average queue length]

Properties of RED
• Drops packets before the queue is full
– In the hope of reducing the rates of some flows
• Tolerant of burstiness in the traffic
– By basing the decisions on average queue length
• Which of the following are true?
(Y) Drops packets in proportion to each flow’s rate
(M) High-rate flows selected more often
(C) Helps desynchronize the TCP senders
(A) All of the above

Problems With RED
• Hard to get the tunable parameters just right
– How early to start dropping packets?
– What slope for the increase in drop probability?
– What time scale for averaging the queue length?
• RED has mixed adoption in practice
– If parameters aren’t set right, RED doesn’t help
• Many other variations in the research community
– Names like “Blue” (self-tuning), “FRED”, …

From Loss to Notification

Feedback: From Loss to Notification
• Early dropping of packets
– Good: gives early feedback
– Bad: has to drop the packet to give the feedback
• Explicit Congestion Notification
– Router marks the packet with an ECN bit
– Sending host interprets as a sign of congestion

Explicit Congestion Notification
• Needs support by router, sender, AND receiver
– End-hosts check whether ECN-capable during the TCP handshake
• ECN protocol (repurposes 4 header bits)
1. Sender marks “ECN-capable” when sending
2. If a router sees “ECN-capable” and is congested, it marks the packet as “ECN congestion experienced”
3. If the receiver sees “congestion experienced”, it marks the “ECN echo” flag in responses until the congestion is ACK’d
4. If the sender sees “ECN echo”, it reduces cwnd and marks the “congestion window reduced” flag in the next packet

ECN Questions
• Why separate ECN “experienced” and “echo” flags?
(Y) Detect reverse-path congestion with “experienced”
(M) Congestion could happen in either direction; want sender to react to forward direction
(C) Both of the above
• Why is “echo” resent and “congestion window reduced” ACK’d?
(Y) Congestion on the reverse path can lose the ECN-echo; still want to respond to congestion on the forward path
(M) Should only apply backoff once per cwnd
(C) Both of the above
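The two router-side mechanisms above can be combined in one toy sketch: a RED-style queue that tracks an EWMA of queue length and, as the average grows, signals congestion by marking ECN-capable packets instead of dropping them. This is only an illustration, not the full RED definition; the thresholds, EWMA weight, and maximum probability are placeholder values, and real RED additionally biases the probability by the count of packets since the last drop.

```python
import random

class REDQueue:
    """Toy RED queue with ECN marking (illustrative parameters)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, ewma_w=0.2):
        self.queue = []
        self.avg = 0.0          # average queue length, not instantaneous
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.ewma_w = max_p, ewma_w

    def enqueue(self, pkt, ecn_capable=False):
        # Averaging tolerates short bursts: a brief spike barely moves avg.
        self.avg = (1 - self.ewma_w) * self.avg + self.ewma_w * len(self.queue)
        if self.avg < self.min_th:
            drop_p = 0.0
        elif self.avg < self.max_th:
            # Drop probability rises linearly between the two thresholds.
            drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        else:
            drop_p = 1.0
        if random.random() < drop_p:
            if ecn_capable:
                # Mark "congestion experienced" instead of dropping.
                self.queue.append((pkt, "CE"))
                return "marked"
            return "dropped"
        self.queue.append((pkt, ""))
        return "queued"
```

With a short queue the average stays below the low threshold and every packet is admitted unmarked; once the average exceeds the high threshold, ECN-capable packets are marked and the rest are dropped.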
Link Scheduling

First-In First-Out Scheduling
• First-in first-out scheduling
– Simple, but restrictive
• Example: two kinds of traffic
– Voice over IP needs low delay
– E-mail is not that sensitive about delay
• Voice traffic waits behind e-mail

Strict Priority
• Multiple levels of priority
– Always transmit high-priority traffic, when present
• Isolation for the high-priority traffic
– Almost like it has a dedicated link
– Except for the (small) delay for packet transmission
• But, lower-priority traffic may starve

Weighted Fair Scheduling
• Weighted fair scheduling
– Assign each queue a fraction of the link bandwidth (e.g., 50% red, 25% blue, 25% green)
– Rotate across the queues on a small time scale
• If non-work-conserving (resources can go idle)
– Each flow gets at most its allocated weight
• WFQ is work-conserving
– Send extra traffic from one queue if others are idle
– Results in (Y) higher or (M) lower utilization than non-work-conserving?
• WFQ results in max-min fairness
– Maximize the minimum rate of each flow

Max-Min Fairness
• Maximize the minimum rate of each flow
1. Allocate in order of increasing demand
2. No flow gets more than its demand
3. The excess, if any, is equally shared
• Example: three flows with demands 2, 8, and 10 share a link of capacity C = 10
– The fair share f solves Σi min(ri, f) = C
– f = 4: min(2, 4) = 2, min(8, 4) = 4, min(10, 4) = 4; total = 10
– Allocated rates (flows sorted by demand): 2, 4, 4

Weighted Fair Queueing
• Question: What is a “flow”?
– Flow 5-tuple: protocol, IP source/dest addresses, source/dest ports
• Question: How can we give weights?
– Protocol class, bit markings, prefixes, etc.
• A classifier maps arriving packets to flows 1 … N; a scheduler then serves the per-flow queues

Bit-by-Bit Fair Queueing
• Each link’s ingress classifies packets into per-flow queues; the egress serves them bit-by-bit round robin
(WFQ slides credit: Nick McKeown, Stanford)

Bit-by-Bit Weighted Fair Queueing
• Flows allocated different rates by servicing a different number of bits for each flow during each round
– Example: w1 = 0.1, w2 = 0.3, w3 = 0.3, w4 = 0.3
– Order of service for the four queues: … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …

Packet vs. “Fluid” System
• Bit-by-bit FQ is not implementable; in real packet-based systems:
– One queue is served at any given time
– Packet transmission cannot be preempted
• Goal: a packet scheme close to the fluid system
– Bound performance w.r.t. the fluid system

First Cut: Simple Round Robin
• Serve a packet from each non-empty queue in turn
– Let’s assume all flows have equal weight
• Variable packet length → get more service by sending bigger packets
• Unfair instantaneous service rate (esp. with variable weights)
– E.g., 500 flows: 250 with weight 1, 250 with weight 10
– Each round takes 2,750 packet times
– What if a packet arrives right after its “turn”?

Packet-by-Packet Fair Queuing (Weighted Fair Queuing)
• Deals better with variable-size packets and weights
• Key idea:
1. Determine the finish time of packets in the bit-by-bit system, assuming no more arrivals
2. Serve packets in order of finish times
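The packet-by-packet idea can be sketched as follows: stamp each arriving packet with a virtual finish tag (the later of the flow’s previous tag and the current virtual time, plus length/weight) and always serve the packet with the smallest tag. The virtual-time bookkeeping below is deliberately simplified — real WFQ advances virtual time by simulating the bit-by-bit fluid system — so treat this as an illustration of the ordering idea only.

```python
import heapq

class WFQScheduler:
    """Sketch of packet-by-packet WFQ: serve in order of virtual finish tags."""

    def __init__(self):
        self.vtime = 0.0     # approximate virtual time (simplified update)
        self.last_tag = {}   # per-flow finish tag of the most recent arrival
        self.backlog = []    # min-heap of (finish_tag, seq, flow, length)
        self.seq = 0         # tie-breaker for equal tags

    def enqueue(self, flow, length, weight):
        # A packet "starts" when the flow's previous packet finishes,
        # or immediately if the flow was idle.
        start = max(self.vtime, self.last_tag.get(flow, 0.0))
        tag = start + length / weight
        self.last_tag[flow] = tag
        heapq.heappush(self.backlog, (tag, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        # Serve the packet that would finish first in the bit-by-bit system.
        tag, _, flow, length = heapq.heappop(self.backlog)
        self.vtime = max(self.vtime, tag)
        return flow, length
```

With two equal-length packets arriving together, the flow with the larger weight finishes first in the fluid system, so it is served first here.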
Implementing WFQ
• Challenge: determining the finish time is hard
• Idea: we don’t need the finish time, only the finish order
– The finish order is a lot easier to calculate
– In what order do the packets finish? In order of increasing Li/wi
– The finish order does not change with future packet arrivals!

Bit-by-Bit System Round
• Example: four queues with w1 = 0.1, w2 = 0.3, w3 = 0.3, w4 = 0.3 and packet lengths L1 … L4
– Order of service for the four queues: … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
• Round – one complete cycle through all the queues, sending wi bits per queue
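One round of the bit-by-bit system can be sketched as weighted round robin over the queues: queue i gets wi/quantum back-to-back bit-slots per round. The quantum value is an assumption for illustration, and the weights are taken to be exact multiples of it; with the weights above, this reproduces the per-round service order on the slide.

```python
def round_service_order(weights, quantum=0.1):
    """One bit-by-bit round: each queue i is served w_i / quantum bit-slots.

    Assumes every weight is a multiple of `quantum` (illustrative value).
    """
    order = []
    for flow, w in weights.items():  # dicts preserve insertion order
        order.extend([flow] * round(w / quantum))
    return order

weights = {"f1": 0.1, "f2": 0.3, "f3": 0.3, "f4": 0.3}
# yields the slide's per-round order: f1, f2, f2, f2, f3, f3, f3, f4, f4, f4
```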