CSE 237A Scheduling of Communication

Tajana Simunic Rosing, Department of Computer Science and Engineering, University of California, San Diego

Real-time communication scheduling
- The value of communication depends on the time at which the message is delivered to the recipient.
- Metrics: throughput, delay, delay jitter, loss rate, fairness.
- Deadlines can be hard or soft; guarantees can be deterministic or statistical.

Key problem
- Allocation/scheduling of communication:
  - Point-to-point link, e.g. a wire (scheduling at the transmitter)
  - Distributed link, e.g. wireless (MAC) or a bus (arbiter)
  - Entire network (routing)
  - Anywhere there is a shared resource!
- Different from scheduling tasks onto CPUs:
  - Often no preemption, or only coarse preemption
  - Channels of time-varying quality/capacity

Types of traffic sources
- Constant bit rate: periodic traffic
  - Fixed-size packets at periodic intervals
  - Analogous to periodic tasks with constant computation time in the RM model
- Variable bit rate: bursty traffic
  - Fixed-size packets at irregular intervals
  - Variable-size packets at regular intervals

Key issues
- Scheduling, admission control, policing.
- Goals:
  - Meet performance and fairness metrics
  - High resource utilization
  - Easy to implement: small work per data item, scaling slowly with the number of flows or tasks
  - Easy admission control decisions

Scheduling for communication
- Determine who sends data when:
  - FIFO
  - Priority queuing: preemptive or non-preemptive
  - Round robin
  - Weighted fair queuing
  - EDF
- Discard mechanism when the buffer is full.
- Distributed implementation requires a multiple access mechanism.

Problems due to FIFO queues
1. To maximize its chances of success, a source has an incentive to maximize the rate at which it transmits (fairness).
2. A FIFO queue is "unfair" since it favors the most greedy flow (fairness).
3. It is hard to control the delay of packets through a network of FIFO queues (delay guarantees).

Fairness: example 1
- Sources A (10 Mb/s access link) and B (100 Mb/s access link) each send a flow (e.g. an http flow identified by its (IP SA, IP DA, TCP SP, TCP DP) tuple) through router R, which forwards both to C over a 1.1 Mb/s link; in the figure each flow receives 0.55 Mb/s.
- What is the "fair" allocation: (0.55 Mb/s, 0.55 Mb/s) or (0.1 Mb/s, 1 Mb/s)?

Fairness: example 2
- A (10 Mb/s) and B (100 Mb/s) again share a 1.1 Mb/s link at router R1, but now one flow exits toward D while the other continues toward C over a 0.2 Mb/s link.
- What is the "fair" allocation?

Fairness
- Intuitively: each connection gets no more than what it wants, and the excess, if any, is equally shared.
- Fairness is intuitively a good idea.
- Fairness also provides protection:
  - Traffic hogs cannot overrun others; it automatically builds firewalls around heavy users.
  - The reverse is not true: protection may not lead to fairness.
- [Figure: demands A, B, C before and after transferring half of the excess to the unsatisfied demand.]

Max-min fairness
- Maximize the minimum share of the task or flow whose demand is not fully satisfied.
- Resources are allocated in order of increasing demand, normalized by weight.
- No task or flow gets a share larger than its demand.
- Tasks or flows with unsatisfied demands share the remaining resource in proportion to their weights.

Max-min fairness: a common way to allocate flows
- N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f).
  1. Pick the flow f with the smallest requested rate.
  2. If W(f) < C/N, set R(f) = W(f).
  3. If W(f) > C/N, set R(f) = C/N.
  4. Set N = N - 1 and C = C - R(f).
  5. If N > 0, go to step 1.
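A minimal Python sketch of this allocation loop (the function name and data structures are illustrative, and equal weights are assumed, as in the unweighted description above):

```python
def max_min_allocation(demands, capacity):
    """Allocate a link of rate `capacity` among flows with the given
    demanded rates, using the unweighted max-min procedure above."""
    remaining_flows = len(demands)
    allocation = {}
    # Step 1: process flows in order of increasing requested rate.
    for flow, wanted in sorted(demands.items(), key=lambda kv: kv[1]):
        fair_share = capacity / remaining_flows   # C / N
        granted = min(wanted, fair_share)         # steps 2-3
        allocation[flow] = granted
        capacity -= granted                       # step 4: C = C - R(f)
        remaining_flows -= 1                      # step 4: N = N - 1
    return allocation

# The worked example that follows: C = 1 and demands 0.1, 0.5, 10, 5.
print(max_min_allocation({"f1": 0.1, "f2": 0.5, "f3": 10, "f4": 5}, 1.0))
# -> {'f1': 0.1, 'f2': 0.3, 'f4': 0.3, 'f3': 0.3} (up to float rounding)
```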
Max-min fairness: an example
- Four flows with demands W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, and W(f4) = 5 share a link of rate C = 1 at router R1.
- Round 1: set R(f1) = 0.1.
- Round 2: set R(f2) = 0.9/3 = 0.3.
- Round 3: set R(f4) = 0.6/2 = 0.3.
- Round 4: set R(f3) = 0.3/1 = 0.3.

Round robin
- Scan the class queues, serving one packet from each class that has a non-empty queue. [Figure from Kurose & Ross]

Weighted round robin
- Plain round robin is unfair if packets have different lengths or the weights are not equal.
- Different weights, fixed packet size: serve more than one packet per visit, after normalizing to obtain integer weights.
- Different weights, variable-size packets: normalize the weights by the mean packet size.
- Problems:
  - With variable-size packets and different weights, the mean packet size must be known in advance.
  - Fair only over time scales longer than the round time; the round time can be large, which can lead to long periods of unfairness.

Priority queuing
- Flows are classified according to priorities.
- Preemptive and non-preemptive versions exist. [Figure from Kurose & Ross]

Fair queueing
1. Packets belonging to a flow are placed in a FIFO. This is called "per-flow queuing".
2. The FIFOs are scheduled one bit at a time, in round-robin fashion.
3. This is called bit-by-bit fair queuing. [Figure: flows 1..N are classified into per-flow queues served by a bit-by-bit round robin scheduler.]

Weighted bit-by-bit fair queueing
- Likewise, flows can be allocated different rates by servicing a different number of bits from each flow during each round.
- Example: rates R(f1) = 0.1 and R(f2) = R(f3) = R(f4) = 0.3 give the service order ... f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, ...
- Also called "Generalized Processor Sharing (GPS)".

Generalized Processor Sharing
- A generalized round robin: in any time interval, it allocates the resource in proportion to the weights among the set of all backlogged connections.
- Serves an infinitesimal amount of the resource to each.
- Achieves max-min fairness, but is non-implementable. [Figure from S. Keshav, Cornell]

Weighted Fair Queueing (WFQ)
- Problem: we need to serve a whole packet at a time.
- Solution:
  1. Determine the time at which a packet p would complete if the flows were served bit by bit. Call this the packet's finishing time, Fp.
  2. Serve packets in order of increasing finishing time, as in the scheduler sketch below.
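Step 2 amounts to keeping a priority queue ordered by finish number. The Python sketch below is only an illustration (the class and method names are not from the slides); it assumes each packet's finish number has already been computed, which is covered further below.

```python
import heapq

class WFQScheduler:
    """Serve whole packets in order of increasing finishing time."""

    def __init__(self):
        self._heap = []   # entries are (finish_number, seq, packet)
        self._seq = 0     # tie-breaker so packets themselves are never compared

    def enqueue(self, packet, finish_number):
        heapq.heappush(self._heap, (finish_number, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        """Return the queued packet with the smallest finish number, or None."""
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet

sched = WFQScheduler()
sched.enqueue("p1", finish_number=3.0)
sched.enqueue("p2", finish_number=1.5)
print(sched.dequeue())  # p2 departs first: it has the smaller finish number
```

Keeping a single heap across all per-flow queues mirrors the cost noted in the evaluation at the end of the section: packetized WFQ requires a priority queue.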
WFQ properties
- Theorem: packet p will depart before Fp + TRANSPmax, where TRANSPmax is the transmission time of a maximum-size packet.
- This packetized scheme is also called "Packetized Generalized Processor Sharing (PGPS)".

Understanding bit-by-bit WFQ
- Example: 4 queues (A, B, C, D) share 4 bits/sec of bandwidth with equal weights 1:1:1:1. Initially A1 = 4, B1 = 3, C1 = 1, C2 = 1, D1 = 1, and D2 = 2 bits are queued; A2 = 2 and C3 = 2 arrive as round 1 completes.
- Departures by round number: D1 and C1 at R = 1, C2 at R = 2, D2 and B1 at R = 3, C3 and A1 at R = 4, and A2 at R = 6.
- Packet-by-packet WFQ sorts packets by their finish rounds, giving the departure order C1, D1, C2, B1, D2, A1, C3, A2. [Per-round timeline figures omitted.]

WFQ implementation overview
- Assume the current round number R is known. The finish number of an arriving packet of length p is:
  - the previous finish number + p, if it arrives to an active connection;
  - R + p, if it arrives to an inactive connection.
- Dealing with weights: replace p by p/w.
- Computing the current round number:
  - Define the round number R as a real-valued variable.
  - R increases at a rate inversely proportional to the number of active connections; with weights, replace the number of connections by the sum of the connection weights.
- With this definition, WFQ emulates GPS rather than bit-by-bit round robin. [From S. Keshav, Cornell]

WFQ rates and finish numbers
- Flow i is served at the rate share R_i(t) = w_i / ∑_{k∈ACTIVE(t)} w_k; if all weights are 1, this is R_i(t) = 1/|ACTIVE(t)|.
- Finish number computation: F_i(k,t) = max{F_i(k−1,t), R(t)} + P_i(k,t)/w_i.
- Round rate: RR(t) ~ 1 / ∑_{k∈ACTIVE(t)} w_k.

WFQ example 1
- All weights equal 1, link rate 1 unit/sec, three queues A, B, and C; the round number advances as ΔR(t) = RR(t)·Δtime·LinkRate.
- At time 0 there are 3 active connections (RR = 0.33, R = 0); queue A holds a packet of size 1 (F = 1) and queues B and C each hold a packet of size 2 (F = 2). [Remaining rows of the worked table omitted.]

WFQ example 2
- Link rate 100 bits/s, queue A with weight w_a = 2 and queue B with weight w_b = 5; at time 0, packet x = 100 bits arrives at A and packet y = 200 bits arrives at B.
- When do the packets complete service, and what is the round number at times 1.5 s, 2 s, 2.8 s, and 3 s? What is packet Z's finish number if it arrives at time 1.5 s and is 10 bits?

Evaluation
- Pros:
  - Like GPS, WFQ provides protection.
  - A worst-case end-to-end delay bound can be obtained.
  - Gives users an incentive to use intelligent flow control.
- Cons:
  - Needs per-connection state.
  - Complex to implement: requires a priority queue.
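The finish-number and round-number bookkeeping above can be sketched as follows. This is a simplified illustration under my own naming: it only advances the round number when a packet arrives and does not remove flows from the active set when they go idle, which a full WFQ implementation must handle. It follows F_i(k) = max{F_i(k−1), R} + P_i(k)/w_i and ΔR = RR·Δtime·LinkRate.

```python
class WFQState:
    """Per-flow finish numbers plus the global round number R(t)."""

    def __init__(self, link_rate):
        self.link_rate = link_rate
        self.round_number = 0.0
        self.last_update = 0.0
        self.finish = {}          # flow id -> last assigned finish number
        self.active_weights = {}  # flow id -> weight, for backlogged flows

    def _advance_round(self, now):
        # R grows inversely to the sum of active weights (GPS emulation):
        # delta_R = RR * delta_t * link_rate, with RR = 1 / sum of weights.
        total_w = sum(self.active_weights.values())
        if total_w > 0:
            self.round_number += (now - self.last_update) * self.link_rate / total_w
        self.last_update = now

    def on_arrival(self, flow, weight, length, now):
        """Assign and return the finish number of a packet of `length` bits."""
        self._advance_round(now)
        start = max(self.finish.get(flow, 0.0), self.round_number)
        finish = start + length / weight          # F = max(F_prev, R) + P/w
        self.finish[flow] = finish
        self.active_weights[flow] = weight
        return finish

# Setup of the second worked example: 100 bit/s link, w_a = 2, w_b = 5.
state = WFQState(link_rate=100)
print(state.on_arrival("A", weight=2, length=100, now=0.0))  # 50.0
print(state.on_arrival("B", weight=5, length=200, now=0.0))  # 40.0
```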