The Need for Realism When Simulating Network Congestion*

Total Pages: 16

File Type: PDF, Size: 1020 KB

The Need for Realism when Simulating Network Congestion*
Kevin Mills and Chris Dabrowski, National Institute of Standards & Technology
Communications & Networking Symposium, April 3-6, 2016
*For more details see NIST Technical Note 1905, http://dx.doi.org/10.6028/NIST.TN.1905

Outline (total talk is ≈ 16 slides):
• Motivation – 2 slides
• Research Questions and Approach – 2 slides
• Models – 5 slides
• Experiment Design – 1 slide
• Results – 5 slides
• Findings – 1 slide

Motivation: Academics model spreading network congestion as a percolation process. All find that signals appear near a critical point in abstract network models.

Abstract Models Lack Key Traits of Real Networks:
• Routers & Links: 1. human-engineered, tiered topologies, with propagation delays; 2. finite router buffer sizes; 3. router speeds varied to meet demands and limit losses
• Computers: 4. injection from sources and receivers only at the lowest tier; 5. non-uniform distribution of sources and receivers; 6. sources/receivers connected at a few varied speeds
• Users: 7. duty cycle of sources exhibits cyclic behavior; 8. human sources exhibit limited patience; 9. sources transfer flows of various sizes
• Protocols: 10. flows use the Transmission Control Protocol (TCP) to modulate injection rate based on measured congestion
DOES LACK OF REALISM MATTER WHEN SIMULATING NETWORK CONGESTION?

Specific Research Questions:
1. Does congestion spread in abstract models mirror spread in realistic models?
2. Are some elements of realism essential to capture when modeling network congestion?
3. Are some elements unnecessary?
4. What measures of congestion can be compared, and how, across diverse network models?

Research Approach: Take an abstract model from the literature and a realistic model from the literature, factor them into basic model behaviors plus realism elements that can be enabled or disabled in a Flexible Network Simulator (FxNS), then simulate combinations of realism elements and compare patterns of congestion.

Models:
• Abstract EGM model → high abstraction
• Realistic MesoNet model → high realism
• Flexible FxNS model → combinations of realism from low to high

The Abstract (EGM) Model: P. Echenique, J. Gomez-Gardenes, and Y. Moreno, "Dynamics of Jamming Transitions in Complex Networks", Europhysics Letters, 71, 325 (2005). Simulations are based on an 11,174-node scale-free graph, P_k ~ k^(-γ) with γ = 2.2, taken from a 2001 snapshot of the Internet Autonomous System (AS) topology collected by the Oregon Route Server (image courtesy Sandy Ressler).

Details of the EGM Model:
• Node buffer size: ∞; all packets buffered, no packets dropped
• Injection rate: p packets injected at randomly chosen nodes (uniform) at each time step
• Destination node: chosen randomly (uniform) for each packet
• Forwarding rate: 1 packet per node at each time step
• Routing algorithm: if the node is the destination, remove the packet; otherwise forward to the neighboring node i with minimum effective distance δ_i
• System response: proportion ρ of injected packets queued in the network
• Computing δ_i: δ_i = h·d_i + (1 − h)·c_i, where i indexes a node's neighbors, d_i is the minimum number of hops to the destination via neighbor i, c_i is the queue length at neighbor i, and h is a traffic-awareness parameter with value in [0, 1]; h = 1 gives shortest-path (in hops) routing
• Measuring ρ: ρ(t) = [A(t + τ) − A(t)] / (τ·p), where A is the aggregate number of packets queued in the network, t is time, τ is the measurement interval size, and p is the packet injection rate
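The EGM dynamics above are simple enough to prototype directly. The following minimal Python sketch is an illustration written for this page, not the EGM or FxNS code: it injects p packets per time step into a networkx graph, forwards one packet per node per step toward the neighbor minimizing δ_i = h·d_i + (1 − h)·c_i, and estimates ρ over a trailing measurement window. The graph size, parameter values, and function names are assumptions.

```python
import random
import networkx as nx

def egm_step(G, queues, dist, h, p):
    """One EGM-style time step: inject p packets, then forward one packet per node."""
    nodes = list(G.nodes)
    # Inject p packets at uniformly random nodes, each with a uniform random destination.
    for _ in range(p):
        src, dst = random.choice(nodes), random.choice(nodes)
        queues[src].append(dst)
    moves = []
    for n in nodes:
        if queues[n]:
            dst = queues[n].pop(0)
            if dst == n:
                continue  # packet has reached its destination: remove it
            # Effective distance over neighbors i: delta_i = h*d_i + (1 - h)*c_i.
            nxt = min(G.neighbors(n),
                      key=lambda i: h * dist[i][dst] + (1 - h) * len(queues[i]))
            moves.append((nxt, dst))
    for nxt, dst in moves:
        if nxt != dst:            # deliver if the next hop is the destination
            queues[nxt].append(dst)

def run_egm(G, h=0.85, p=5, steps=1500, tau=500):
    """Estimate rho = [A(t + tau) - A(t)] / (tau * p) after a warm-up period."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    queues = {n: [] for n in G.nodes}
    for _ in range(steps - tau):
        egm_step(G, queues, dist, h, p)
    a_t = sum(len(q) for q in queues.values())        # A(t)
    for _ in range(tau):
        egm_step(G, queues, dist, h, p)
    a_t_tau = sum(len(q) for q in queues.values())    # A(t + tau)
    return max(0.0, (a_t_tau - a_t) / (tau * p))

if __name__ == "__main__":
    # Small scale-free stand-in for the 11,174-node AS snapshot used in the EGM slides.
    G = nx.barabasi_albert_graph(500, 2, seed=1)
    for p in (2, 5, 10, 20):
        print(p, round(run_egm(G, h=0.85, p=p), 3))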
Comparative Simulation Results: EGM simulations vs. FxNS simulations with all realism elements disabled. [Plots of ρ versus injection rate p, with curves for h = 0.85 and h = 1.]

The Realistic (MesoNet) Model: K. Mills, E. Schwartz, and J. Yuan, "How to Model a TCP/IP Network Using Only 20 Parameters", WSC 2010, Dec. 5-8, Baltimore, MD. [Figure 2: the 218-router, three-tier topology, with routers labeled by class.] MesoNet's 20 parameters by category (FxNS treatment in parentheses, where given):
• Network: x1 topology specific (NC); x2 propagation delay (DE); x3 network speed (VS); x4 buffer provisioning (PD)
• Sources & Sinks: x5 number of sources/sinks; x6 source distribution (SR); x7 sink distribution; x8 source/sink speed (VS); x9 think time (p); x10 patience (n/a)
• Users: x11 web object file sizes (FL); x12 larger file sizes; x13 localized congestion (n/a); x14 long-lived flows
• Congestion Control: x15 control algorithm (TCP); x16 initial cwnd; x17 initial sst
• Simulation Control: x18 measurement interval (fixed); x19 simulation duration (fixed); x20 startup pattern (p)
Comparisons of MesoNet simulations vs. FxNS simulations (all realism elements enabled) for eight MesoNet responses are available in NIST TN 1905, Appendix A.

FxNS Combinations: 7 realism elements and 7 dependencies among the realism elements yield 34 valid FxNS combinations.
Seq  Cmb   TCP FL SR DE VS NC PD
1    c0     0  0  0  0  0  0  0
2    c1     0  0  0  0  0  0  1
3    c2     0  0  0  0  0  1  0
...
32   c123   1  1  1  1  0  1  1
33   c126   1  1  1  1  1  1  0
34   c127   1  1  1  1  1  1  1

Experiment Design:
Fixed parameters: 218-router topology (Fig. 2); routing (SPF with propagation delay); duration (200,000 time steps per p).
Variable parameters: packet-injection rate p (up to 2500); FxNS combination.
Responses: congestion spread χ = |Gχ|/|GN|; connectivity breakdown α = |Gα|/|GN|; proportion of packets delivered π; scaled (0..1) latency of delivered packets δ.
Realism elements when enabled vs. disabled:
• PD: enabled – buffers = 250 × router speed; disabled – buffers = ∞
• NC: enabled – 3-tier 218-node topology as in Fig. 2, with routers labeled as core, PoP, D-class, F-class, or N-class; disabled – flat 218-node topology as in Fig. 2 but with routers unlabeled
• VS: enabled – core 80 p/ts, PoP 10 p/ts, D-class 10 p/ts, F-class 2 p/ts, N-class 1 p/ts, fast sources/sinks 2 p/ts, normal sources/sinks 0.2 p/ts; disabled – all routers and sources/sinks 9 p/ts
• DE: enabled – core links have propagation delays; disabled – no propagation delays
• SR: enabled – 51,588 sources and 206,352 sinks deployed uniformly below access routers; disabled – no sources or sinks deployed
• FL: enabled – transfers are packet streams sized randomly from a Pareto distribution (mean 350, shape 1.5), with streams set up using TCP connection procedures; disabled – transfers are individual packets
• TCP: enabled – packet transmission regulated by TCP congestion control, including slow-start (initial cwnd = 2, initial sst = 2^30/2) and congestion avoidance; disabled – packet transmissions not regulated by congestion control
Only concepts in common among all 34 combinations: graph and packet.
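The four responses above are defined only in terms of graphs and packets, which is what lets them be compared across all 34 combinations. The sketch below shows one plausible way to compute the two graph-based responses with networkx; treating Gχ as the set of congested routers and Gα as the largest connected component of routers remaining after congested routers are removed is an assumption made for this sketch (the precise definitions are in NIST TN 1905), and the toy graph and congestion test are invented.

```python
import networkx as nx

def congestion_spread(G, congested):
    """chi = |G_chi| / |G_N|: fraction of routers currently congested (assumed reading)."""
    return len(congested) / G.number_of_nodes()

def connectivity_breakdown(G, congested):
    """alpha = |G_alpha| / |G_N|: here taken as the fraction of routers in the largest
    connected component after congested routers are removed (assumed reading)."""
    H = G.copy()
    H.remove_nodes_from(congested)
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

if __name__ == "__main__":
    # Toy scale-free topology standing in for the 218-router network of Fig. 2.
    G = nx.barabasi_albert_graph(218, 2, seed=42)
    # Pretend every ninth router has exceeded a queue threshold (purely illustrative).
    congested = {n for n in G.nodes if n % 9 == 0}
    print("chi   =", round(congestion_spread(G, congested), 3))
    print("alpha =", round(connectivity_breakdown(G, congested), 3))
```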
Results notes: [1] 136 xy-plots (34 FxNS combinations × 4 responses) are available at http://tinyurl.com/poylful. [2] Related FxNS simulation data can be explored interactively using a multidimensional visualization created by Phillip Gough of CSIRO: http://tinyurl.com/payglq6

Results I – Abstract (C0) vs. Realistic (C127): [Plots comparing combination C0 with combination C127 for each of the four responses.] Plots for all responses and all 34 combinations are available at http://tinyurl.com/poylful

Results II – Congestion Spread χ, all combinations.
Results III – Connectivity Breakdown α, all combinations.
Results IV – Packet Delivery π, all combinations.
Results V – Scaled Packet Latency δ, all combinations.

Findings:
• Congestion spreads differently in abstract and realistic models.
• Hierarchical router speeds and TCP are very important to model.
• Packet dropping is important to model for accurate packet latencies.
• Propagation delay is not important to model in a continental US network, but would be important in topologies where propagation delays exceed queuing delays.
• Congestion spread, connectivity breakdown, and the effectiveness and efficiency of packet delivery can be measured using only two concepts: graphs and packets.

For more of our research see: http://www.nist.gov/itl/antd/emergent_behavior.cfm
Recommended publications
  • Application Note: Pre-Deployment and Network Readiness Assessment Is Essential
    Application Note: Six Steps To Getting Your Network Ready For Voice Over IP (January 2005)
    Contents: Overview; Pre-Deployment and Network Readiness Assessment Is Essential; Types of VoIP Performance Problems; Six Steps To Getting Your Network Ready For VoIP; Deploy Network In Stages; Summary
    Overview: This Application Note provides enterprise network managers with a six-step methodology, including pre-deployment testing and network readiness assessment, to follow when preparing their network for Voice over IP service. Designed to help alleviate or significantly reduce call quality and network performance related problems, this document also includes useful problem descriptions and other information essential for successful VoIP deployment.
    Pre-Deployment and Network Readiness Assessment Is Essential: IP Telephony is very different from conventional data applications in that call quality is especially sensitive to IP network impairments. Existing network and "traditional" call quality problems become much more obvious with the deployment of VoIP service. For network ...
    Types of VoIP Performance Problems: IP Telephony: IP Network Problems, Equipment Configuration and Signaling Problems, and Analog/TDM Interface Problems. IP Network Problems: Jitter -- variation in packet transmission time that leads to packets being discarded in VoIP ...
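The excerpt above singles out jitter as the first IP-network impairment affecting VoIP. As a concrete illustration (not part of the application note itself), the sketch below computes the standard RTP interarrival-jitter estimator from RFC 3550, which smooths the variation in packet transit time with a 1/16 gain; the packet timestamps are made-up example data.

```python
def rfc3550_jitter(send_times, recv_times):
    """Interarrival jitter per RFC 3550: J += (|D(i-1, i)| - J) / 16,
    where D is the difference in transit time between consecutive packets."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

if __name__ == "__main__":
    # Example: packets sent every 20 ms; network delay varies between 40 and 55 ms.
    send = [i * 0.020 for i in range(50)]
    recv = [s + 0.040 + (0.015 if i % 7 == 0 else 0.0) for i, s in enumerate(send)]
    print(f"interarrival jitter ~= {rfc3550_jitter(send, recv) * 1000:.2f} ms")
```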
  • Latency and Throughput Optimization in Modern Networks: a Comprehensive Survey Amir Mirzaeinnia, Mehdi Mirzaeinia, and Abdelmounaam Rezgui
    Latency and Throughput Optimization in Modern Networks: A Comprehensive Survey. Amir Mirzaeinnia, Mehdi Mirzaeinia, and Abdelmounaam Rezgui
    Abstract: Modern applications are highly sensitive to communication delays and throughput. This paper surveys major attempts on reducing latency and increasing the throughput. These methods are surveyed on different networks and surroundings such as wired networks, wireless networks, application layer transport control, Remote Direct Memory Access, and machine learning based transport control.
    Index Terms: Rate and Congestion Control, Internet, Data Center, 5G, Cellular Networks, Remote Direct Memory Access, Named Data Network, Machine Learning
    I. Introduction: Recent applications such as Virtual Reality (VR), autonomous cars or aerial vehicles, and telehealth need high throughput and low latency communication. On one hand every user likes to send and receive their data as quickly as possible. On the other hand the network infrastructure that connects users has limited capacities and these are usually shared among users. There are some technologies that dedicate their resources to some users but they are not very commonly used. The reason is that although dedicated resources are more secure they are more expensive to implement. Sharing a physical channel among multiple transmitters needs a technique to control their rate in proper time. The very first network congestion collapse was observed and reported by Van Jacobson in 1986, causing roughly a thousand-fold rate reduction, from 32 kbps to 40 bps [3]. Since then very different variations of the Transmission Control Protocol (TCP) ...
  • Lab 6: Understanding Traditional TCP Congestion Control (HTCP, Cubic, Reno)
    NETWORK TOOLS AND PROTOCOLS. Lab 6: Understanding Traditional TCP Congestion Control (HTCP, Cubic, Reno). Document Version: 06-14-2019. Award 1829698 "CyberTraining CIP: Cyberinfrastructure Expertise on High-throughput Networks for Big Science Data Transfers"
    Contents: Overview; Objectives; Lab settings; Lab roadmap; 1 Introduction to TCP; 1.1 TCP review; 1.2 TCP throughput; 1.3 TCP packet loss event; 1.4 Impact of packet loss in high-latency networks; 2 Lab topology ...
  • Adaptive Method for Packet Loss Types in IoT: An Naive Bayes Distinguisher
    Adaptive Method for Packet Loss Types in IoT: An Naive Bayes Distinguisher. Yating Chen, Lingyun Lu *, Xiaohan Yu * and Xiang Li. School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China; [email protected] (Y.C.); [email protected] (X.L.) * Correspondence: [email protected] (L.L.); [email protected] (X.Y.) Received: 31 December 2018; Accepted: 23 January 2019; Published: 28 January 2019
    Abstract: With the rapid development of IoT (Internet of Things), massive data is delivered through trillions of interconnected smart devices. Heterogeneous networks frequently trigger congestion and indirectly affect IoT applications, so traditional TCP will very likely need to be reworked to support the IoT. In this paper, we examine the different characteristics of packet loss in hybrid wireless and wired channels, and develop a novel congestion control called NB-TCP (Naive Bayesian) for IoT. NB-TCP constructs a Naive Bayesian distinguisher model, which can capture the packet loss state and effectively classify packet loss types as wireless or wired. More importantly, it does not place much load on the network, and it offers fast classification speed, high accuracy and stability. Simulation results using NS2 show that NB-TCP achieves up to 0.95 classification accuracy and achieves good throughput, fairness and friendliness in the hybrid network.
    Keywords: Internet of Things; wired/wireless hybrid network; TCP; naive bayesian model
    1. Introduction: The Internet of Things (IoT) is a new network industry based on the Internet, mobile communication networks and other technologies, which has wide applications in industrial production, intelligent transportation, environmental monitoring and smart homes.
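The abstract describes a Naive Bayes model that classifies each loss event as wireless (error) loss or wired (congestion) loss, but this excerpt does not give the paper's features or training procedure. The sketch below is therefore only a shape of the idea: it trains scikit-learn's GaussianNB on hypothetical per-loss features (RTT sample, one-way-delay trend, recent loss-burst length) and predicts the loss type. The feature names, synthetic data, and values are all assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical per-loss-event features: [rtt_ms, delay_trend_ms, burst_len]
# Label 0 = congestion (wired) loss, 1 = wireless (error) loss.
rng = np.random.default_rng(0)
congestion = np.column_stack([rng.normal(120, 15, 200),   # RTT inflated by queueing
                              rng.normal(8, 3, 200),       # rising one-way delay
                              rng.integers(2, 6, 200)])    # losses arrive in bursts
wireless = np.column_stack([rng.normal(60, 10, 200),       # RTT near baseline
                            rng.normal(0, 2, 200),         # flat delay trend
                            rng.integers(1, 3, 200)])      # mostly isolated losses

X = np.vstack([congestion, wireless])
y = np.array([0] * 200 + [1] * 200)
clf = GaussianNB().fit(X, y)

# Classify a new loss event: high RTT, rising delay, burst of 4 -> likely congestion.
event = np.array([[115.0, 7.0, 4]])
print("congestion" if clf.predict(event)[0] == 0 else "wireless error")
print("P(classes) =", clf.predict_proba(event).round(3))
```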
  • Congestion Control Overview
    ELEC3030 (EL336) Computer Networks, S Chen. Congestion Control Overview
    • Problem: When too many packets are transmitted through a network, congestion occurs. At very high traffic, performance collapses completely, and almost no packets are delivered. [Figure: packets delivered vs. packets sent, showing the perfect, desirable, and congested regions relative to the maximum carrying capacity of the subnet.]
    • Causes: the bursty nature of traffic is the root cause. When part of the network can no longer cope with a sudden increase of traffic, congestion builds up. Other factors, such as lack of bandwidth, ill-configuration and slow routers, can also bring about congestion.
    • Solution: congestion control, with two basic approaches. Open-loop: try to prevent congestion from occurring by good design. Closed-loop: monitor the system to detect congestion, pass this information to where action can be taken, and adjust system operation to correct the problem (detect, feedback and correct).
    • Differences between congestion control and flow control: congestion control tries to make sure the subnet can carry the offered traffic, a global issue involving all the hosts and routers; it can be open-loop based or involve feedback. Flow control is related to point-to-point traffic between a given sender and receiver, and always involves direct feedback from receiver to sender.
    Open-Loop Congestion Control
    • Prevention: different policies at various layers can affect congestion, and these are summarised in a table, e.g. the retransmission policy at the data link layer. [Table fragment: Transport layer: retransmission policy, out-of-order caching ...]
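The closed-loop approach described above (detect, feedback, correct) is what TCP's additive-increase/multiplicative-decrease (AIMD) rule implements: grow the sending window while no congestion is signalled, and cut it when loss feedback arrives. The sketch below is a toy illustration of that loop, not code from the lecture notes; the window limits and the loss pattern are invented for the example.

```python
def aimd_window(loss_events, cwnd=1.0, ssthresh=32.0):
    """Toy slow-start/AIMD loop: one iteration per round trip.
    loss_events[i] is True if congestion feedback (a loss) was detected in RTT i."""
    trace = []
    for lost in loss_events:
        if lost:
            ssthresh = max(cwnd / 2.0, 1.0)   # multiplicative decrease
            cwnd = ssthresh                   # Reno-style, ignoring fast-recovery details
        elif cwnd < ssthresh:
            cwnd *= 2.0                       # slow start: exponential growth
        else:
            cwnd += 1.0                       # congestion avoidance: additive increase
        trace.append(round(cwnd, 2))
    return trace

if __name__ == "__main__":
    # Feedback pattern: no loss for 8 RTTs, one loss, then 8 more loss-free RTTs.
    feedback = [False] * 8 + [True] + [False] * 8
    print(aimd_window(feedback))
```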
  • The Effects of Different Congestion Control Algorithms over Multipath Fast Ethernet IPv4/IPv6 Environments
    Proceedings of the 11th International Conference on Applied Informatics, Eger, Hungary, January 29–31, 2020, published at http://ceur-ws.org
    The Effects of Different Congestion Control Algorithms over Multipath Fast Ethernet IPv4/IPv6 Environments. Szabolcs Szilágyi, Imre Bordán. Faculty of Informatics, University of Debrecen, Hungary. [email protected], [email protected]
    Abstract: The TCP has been in use since the seventies and has later become the predominant protocol of the internet for reliable data transfer. Numerous TCP versions have seen the light of day (e.g. TCP Cubic, Highspeed, Illinois, Reno, Scalable, Vegas, Veno, etc.), which in effect differ from each other in the algorithms used for detecting network congestion. On the other hand, the development of multipath communication technologies is one of today's relevant research fields. What better proof of this than the MPTCP (Multipath TCP) being integrated into multiple operating systems shortly after its standardization. The MPTCP proves to be very effective for multipath TCP-based data transfer; however, its main drawback is the lack of support for multipath communication over UDP, which can be important when forwarding multimedia traffic. The MPT-GRE software developed at the Faculty of Informatics, University of Debrecen, supports operation over both transfer protocols. In this paper, we examine the effects of different TCP congestion control mechanisms on the MPTCP and the MPT-GRE multipath communication technologies.
    Keywords: congestion control, multipath communication, MPTCP, MPT-GRE, transport protocols.
    1. Introduction: In 1974, the TCP was defined in RFC 675 under the name Transmission Control Program. Later, the Transmission Control Program was split into two modular ...
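The excerpt lists several pluggable TCP congestion control algorithms (Cubic, Reno, Vegas, and so on). On Linux these can be selected per connection with the TCP_CONGESTION socket option, which is one practical way to reproduce comparisons like the one described. The sketch below assumes a Linux host with the requested algorithm modules (here cubic and reno) available; otherwise setsockopt raises an error, and the host/port used are placeholders.

```python
import socket

def open_with_cc(host, port, algorithm):
    """Open a TCP connection using a specific congestion control algorithm (Linux only)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_CONGESTION expects the algorithm name as bytes; available names are listed in
    # /proc/sys/net/ipv4/tcp_available_congestion_control.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algorithm.encode())
    s.connect((host, port))
    in_use = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    print("using:", in_use.split(b"\x00", 1)[0].decode())
    return s

if __name__ == "__main__":
    # Example against a local test server; 127.0.0.1:5001 is a placeholder endpoint.
    for algo in ("cubic", "reno"):
        conn = open_with_cc("127.0.0.1", 5001, algo)
        conn.close()
```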
  • What Causes Packet Loss in IP Networks
    What Causes Packet Loss in IP Networks (Rev 1.1). Computer Modules, Inc., DVEO Division, San Diego, CA. www.dveo.com
    Packet Loss and Reasons. Introduction: To stream high quality video, i.e. to transmit real-time video streams, over IP networks is a demanding endeavor and, depending on network conditions, may result in packets being dropped or "lost" for a variety of reasons, thus negatively impacting the quality of user experience (QoE). This document looks at typical reasons and causes for what is generally referred to as "packet loss" across various types of IP networks, whether of the managed and conditioned type with a defined Quality of Service (QoS), or the "unmanaged" kind such as the vast variety of individual and interconnected networks across the globe that collectively constitute the public Internet ("the Internet"). The purpose of this document is to encourage operators and enterprises that wish to overcome streaming video quality problems to explore potential solutions. Proven technology exists that enables transmission of real-time video error-free over all types of IP networks, and to perform live broadcasting of studio quality content over the "unmanaged" Internet!
    Core Protocols of the Internet Protocol Suite: The Internet Protocol (IP) is the foundation on which the Internet was built and, by extension, the World Wide Web, by enabling global internetworking.
  • Packet Loss Burstiness: Measurements and Implications for Distributed Applications
    Packet Loss Burstiness: Measurements and Implications for Distributed Applications. David X. Wei, Pei Cao, Steven H. Low. Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125 USA; Department of Computer Science, Stanford University, Stanford, CA 94305 USA. {weixl,slow}@cs.caltech.edu, [email protected]
    Abstract: Many modern massively distributed systems deploy thousands of nodes to cooperate on a computation task. Network congestion occurs in these systems. Most applications rely on congestion control protocols such as TCP to protect the systems from congestion collapse. Most TCP congestion control algorithms use packet loss as the signal to detect congestion. In this paper, we study the packet loss process at the sub-round-trip-time (sub-RTT) timescale and its impact on loss-based congestion control algorithms. Our study suggests that packet loss at the sub-RTT timescale is very bursty. This burstiness leads to two effects.
    From the introduction: ... sources. The most commonly used control protocol for reliable data transmission is the Transmission Control Protocol (TCP) [18], with a variety of congestion control algorithms (Reno [5], NewReno [11], etc.) and implementations (e.g. TCP Pacing [16, 14]). The most commonly used control protocol for unreliable data transmission is TCP Friendly Rate Control (TFRC) [10]. These algorithms all use packet loss as the congestion signal. In the designs of these protocols, it is assumed that the packet loss process provides signals that can be detected by every flow sharing the network, and hence the protocols can achieve a fair share of the network resource and, at the same time, avoid congestion collapse.
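A simple way to see the burstiness this paper measures is to look at the run lengths of consecutive losses in a packet trace and at how losses cluster within short time windows. The sketch below is not the authors' methodology; it just counts loss-burst lengths and bins loss timestamps into sub-RTT windows using an invented trace.

```python
from collections import Counter
from itertools import groupby

def loss_burst_lengths(loss_seq):
    """Histogram of consecutive-loss run lengths in a per-packet loss indicator sequence."""
    return Counter(sum(1 for _ in run) for lost, run in groupby(loss_seq) if lost)

def losses_per_window(loss_times, window):
    """Count losses falling into each window of length `window` seconds (e.g. a sub-RTT bin)."""
    counts = Counter(int(t // window) for t in loss_times)
    return [counts.get(i, 0) for i in range(max(counts) + 1)] if counts else []

if __name__ == "__main__":
    # Invented trace: two isolated losses plus one burst of four consecutive losses.
    loss_seq = [False] * 20 + [True] + [False] * 30 + [True, True, True, True] + [False] * 45
    print("burst-length histogram:", dict(loss_burst_lengths(loss_seq)))
    # Invented loss timestamps (seconds), binned into 10 ms sub-RTT windows.
    loss_times = [0.012, 0.013, 0.014, 0.051, 0.212]
    print("losses per 10 ms window:", losses_per_window(loss_times, 0.010))
```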
  • A Network Congestion Control Protocol (NCP)
    A Network Congestion control Protocol (NCP). Debessay Fesehaye, Klara Nahrstedt and Matthew Caesar. Department of Computer Science, UIUC, 201 N Goodwin Ave, Urbana, IL 61801-2302, USA. Email: {dkassa2,klara,caesar}@cs.uiuc.edu
    Abstract: The transmission control protocol (TCP), which is the dominant congestion control protocol at the transport layer, has been shown to have many performance problems with the growth of the Internet. TCP, for instance, results in throughput degradation for high bandwidth-delay product networks and is unfair to flows with high round trip delays. There have been many patches and modifications to TCP, all of which inherit the problems of TCP in spite of some performance improvements. On the other hand, there are clean-slate design approaches to the Internet. The eXplicit Congestion control Protocol (XCP) and the Rate Control Protocol (RCP) are the prominent clean-slate congestion control protocols. Nonetheless, the XCP protocol is also proved to have its own performance problems, some of which are its unfairness to long flows (flows with high round trip delay) and many per-packet computations at the router. As shown in this paper, RCP also makes gross approximations to an important component, so that it may only give the performance reported in the literature for specific choices of its parameter values and traffic patterns. In this paper we present a new congestion control protocol called Network congestion Control Protocol (NCP). We show that NCP can outperform TCP, XCP and RCP in terms of, among other things, fairness and file download times.
    Keywords: Congestion Control, clean-slate, fairness, download times.
  • How to Properly Measure and Correct Packet Loss
    White Paper: How to Properly Measure and Correct Packet Loss. Silver Peak avoids common packet delivery issues across the WAN.
    Shared Wide Area Networks (WANs), like MPLS and Internet VPNs, are prone to network congestion during periods of heavy utilization. When different traffic is vying for limited shared resources, packets inevitably will be dropped or delivered out of order, a concept known as "packet loss". As more enterprises deploy real-time traffic, like voice, video and other unified communications, packet loss is an increasingly significant problem. That is because dropped and out-of-order packets degrade the quality of latency-sensitive applications; e.g. packet loss causes video pixelation, choppy phone calls, or terminated sessions. In addition, loss is increasingly problematic as more enterprises look to lower disaster recovery costs by performing data replication over shared WANs. As Figure 1 shows, when a small amount of packet loss (0.1%) is combined with a marginal amount of latency (50 ms), data throughput will never exceed 10 Mbps per flow, regardless of how much bandwidth is available. This can severely hamper key enterprise initiatives that require high sustained data throughput, like replication. The problem is that most enterprises don't know packet loss exists on their WANs as they don't have the proper tools for measuring it. (Service providers only exacerbate the problem by offering SLAs that are based on monthly averages, which can be quite misleading. An enterprise can experience several hours of loss during several work days and still statistically have around 0% loss over the course of a month.)
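The ~10 Mbps ceiling quoted above for 0.1% loss and 50 ms latency is consistent with the well-known Mathis et al. approximation for loss-limited TCP throughput, rate ≈ (MSS / RTT) · (C / √p). The white paper does not state its formula, so the short calculation below is only a plausibility check; the 1460-byte MSS and the constant C ≈ 1.22 are assumptions.

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. approximation for loss-limited TCP throughput (bits per second):
    rate ~= (MSS / RTT) * (C / sqrt(p))."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

if __name__ == "__main__":
    # The white paper's example: 0.1% loss, 50 ms round-trip latency, 1460-byte MSS assumed.
    bps = mathis_throughput_bps(mss_bytes=1460, rtt_s=0.050, loss_rate=0.001)
    print(f"~{bps / 1e6:.1f} Mbps per flow")   # roughly 9 Mbps, i.e. under the 10 Mbps ceiling
```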
  • The Need for Realism When Simulating Network Congestion
    The Need for Realism when Simulating Network Congestion. Kevin Mills, NIST, Gaithersburg, MD 20899, [email protected]; Chris Dabrowski, NIST, Gaithersburg, MD 20899, [email protected]
    ABSTRACT: Many researchers use abstract models to simulate network congestion, finding patterns that might foreshadow onset of congestion collapse. We investigate whether such abstract models yield congestion behaviors sufficiently similar to more realistic models. Beginning with an abstract model, we add elements of realism in various combinations, culminating with a high-fidelity simulation. By comparing congestion patterns among combinations, we illustrate congestion spread in abstract models differs from that in realistic models. We identify critical elements of realism needed when simulating congestion. We demonstrate a means to compare congestion patterns among simulations covering diverse configurations. We hope our contributions lead to better understanding of the need for realism when simulating network congestion.
    From the introduction: ... throughout the network. The researchers identify various signals that arise around the critical point. Such signals could foreshadow onset of widespread congestion. These developments appear promising as a theoretical basis for monitoring methods that could be deployed to warn of impending congestion collapse. Despite showing promise, questions surround this research, as the models are quite abstract, bearing little resemblance to communication networks deployed based on modern technology. We explore these questions by examining the influence of realism on congestion spread in network simulations. We begin with an abstract network simulation from the literature. We add realism elements in combinations, culminating with a high-fidelity simulation, also from the literature. By comparing patterns of congestion among the combinations, we explore a number of questions.
  • TCP Congestion Control Scheme for Wireless Networks Based on TCP Reserved Field and SNR Ratio
    International Journal of Research and Reviews in Information Sciences (IJRRIS), ISSN: 2046-6439, Vol. 2, No. 2, June 2012. http://www.sciacademypublisher.com/journals/index.php/IJRRIS/article/download/817/728
    TCP Congestion Control Scheme for Wireless Networks based on TCP Reserved Field and SNR Ratio. Youssef Bassil. LACSC – Lebanese Association for Computational Sciences, Registered under No. 957, 2011, Beirut, Lebanon. Email: [email protected]
    Abstract: Currently, TCP is the most popular and widely used network transmission protocol. In actual fact, about 90% of connections on the internet use TCP to communicate. Through several upgrades and improvements, TCP became well optimized for highly reliable wired networks. As a result, TCP considers all packet timeouts in wired networks as due to network congestion and not to bit errors. However, with networking becoming more heterogeneous, providing wired as well as wireless topologies, TCP suffers from performance degradation over error-prone wireless links, as it has no mechanism to differentiate error losses from congestion losses. It therefore considers all packet losses as due to congestion and consequently reduces the packet burst, diminishing network throughput at the same time. This paper proposes a new TCP congestion control scheme appropriate for wireless as well as wired networks that is capable of distinguishing congestion losses from error losses. The proposed scheme is based on using the reserved field of the TCP header to indicate whether the established connection is over a wired or a wireless link. Additionally, the proposed scheme leverages the SNR ratio to detect the reliability of the link and decide whether to reduce the packet burst or retransmit a timed-out packet.
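The abstract sketches the scheme's decision rule informally: a reserved-field flag in the TCP header marks the connection as wireless, and the SNR is then used to decide whether a timed-out packet indicates an unreliable link (retransmit without shrinking the window) or congestion (reduce the sending rate). The sketch below encodes only that decision logic in plain Python; the SNR threshold, field names, and return values are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TimeoutContext:
    wireless_flag: bool   # hypothetical: set from the TCP header's reserved field
    snr_db: float         # hypothetical: link SNR reported by the receiver/driver

def on_timeout(ctx, snr_threshold_db=15.0):
    """Return the action such a scheme might take on a retransmission timeout.
    Hypothetical threshold: below it the wireless link is considered error-prone."""
    if ctx.wireless_flag and ctx.snr_db < snr_threshold_db:
        # Loss attributed to bit errors on an unreliable wireless link:
        # retransmit the packet without cutting the congestion window.
        return "retransmit, keep cwnd"
    # Loss attributed to congestion: fall back to standard TCP behaviour.
    return "retransmit, reduce cwnd (standard congestion response)"

if __name__ == "__main__":
    print(on_timeout(TimeoutContext(wireless_flag=True, snr_db=9.0)))
    print(on_timeout(TimeoutContext(wireless_flag=False, snr_db=30.0)))
```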