An Architecture for High-Speed Packet Switched Networks (Thesis)


Purdue University, Purdue e-Pubs
Department of Computer Science Technical Reports, Department of Computer Science, 1989

Yavatkar, Rajendra Shivaram, "An Architecture for High-Speed Packet Switched Networks (Thesis)" (1989). Department of Computer Science Technical Reports. Paper 765. https://docs.lib.purdue.edu/cstech/765

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information.

AN ARCHITECTURE FOR HIGH-SPEED PACKET SWITCHED NETWORKS

Rajendra Shivaram Yavatkar
CSD-TR-898
August 1989

A Thesis Submitted to the Faculty of Purdue University by Rajendra Shivaram Yavatkar, In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, August 1989.

TABLE OF CONTENTS

LIST OF FIGURES ... vi
ABSTRACT ... viii

1. INTRODUCTION ... 1
   1.1 Background ... 2
       1.1.1 Network Architecture ... 2
       1.1.2 Network-Level Services ... 7
       1.1.3 Circuit Switching ... 7
       1.1.4 Packet Switching ... 8
       1.1.5 Summary ... 11
   1.2 The Proposed Solution ... 12
   1.3 Plan of Thesis ... 14

2. DEFINITIONS AND TERMINOLOGY ... 15
   2.1 Components of Packet Switched Networks ... 15
   2.2 Concept of Internetworking ... 16
   2.3 Communication Services ... 17
   2.4 Flow and Congestion Control ... 18

3. NETWORK ARCHITECTURE ... 19
   3.1 Basic Model ... 20
   3.2 Services Provided ... 22
   3.3 Protocol Hierarchy ... 24
   3.4 Addressing ... 26
   3.5 Routing ... 29
   3.6 Rate-based Congestion Avoidance ... 31
   3.7 Responsibilities of a Router ... 32
   3.8 Autoconfiguration ... 33
       3.8.1 Adding a New Node ... 35
       3.8.2 Node Recovery ... 37
       3.8.3 Adding a New Link ... 38
   3.9 Summary ... 39

4. HIGH-SPEED PROPAGATION OF CONTROL INFORMATION ... 41
   4.1 Basic Algorithm ... 43
   4.2 Possible Problems ... 45
       4.2.1 Node Restart ... 45
       4.2.2 Network Partitions ... 46
       4.2.3 Packet Corruption ... 46
   4.3 Algorithm ... 46
       4.3.1 Status Information at Each Node ... 48
       4.3.2 Link Status Message ... 48
       4.3.3 Additional Parameters ... 50
       4.3.4 Specific Actions ... 52
   4.4 Discussion ... 55
   4.5 Summary ... 57

5. CONGESTION AVOIDANCE AND CONTROL ... 59
   5.1 Problem of Congestion ... 59
   5.2 Related Work ... 60
       5.2.1 Preallocation of Buffers ... 60
       5.2.2 Implicit Feedback ... 61
       5.2.3 Explicit Feedback ... 61
   5.3 Overview of Our Scheme ... 65
       5.3.1 Avoidance and Control ... 66
       5.3.2 Potential Users ... 66
   5.4 Congestion Avoidance ... 67
   5.5 Congestion Control ... 68
       5.5.1 Rate Control Message ... 68
       5.5.2 Generating Rate Control Messages ... 69
       5.5.3 Using Rate Control Messages ... 73
       5.5.4 Changing Transmission Rates ... 74
       5.5.5 Examples of Uses ... 76
   5.6 Features of Our Scheme ... 76
   5.7 Performance Considerations ... 77
   5.8 Summary ... 78

6. A RESOURCE RESERVATION MODEL ... 80
   6.1 Need for Predictable Performance ... 80
   6.2 Existing Communication Abstractions ... 81
       6.2.1 Circuit Switching ... 81
       6.2.2 Packet Switching ... 82
       6.2.3 Integrated or Hybrid Switching ... 83
       6.2.4 Discussion ... 84
   6.3 Providing Predictable Performance ... 85
   6.4 Flow Semantics ... 87
       6.4.1 Operations on a Flow ... 88
       6.4.2 Flow Parameters ... 90
   6.5 Implementation ... 91
       6.5.1 Architectural Support ... 93
       6.5.2 Terminology ... 95
       6.5.3 Precomputation of Flow Paths ... 95
       6.5.4 Flow Initiation ... 97
       6.5.5 Discussion ... 100
   6.6 Examples of Uses ... 102
   6.7 Related Work ... 102
       6.7.1 Type of Service Routing with Loadsharing (BBN) ... 102
       6.7.2 Real-Time Message Streams ... 104
       6.7.3 Connection-Oriented Transport ... 104
   6.8 Summary ... 105

7. EXPERIMENTAL EVALUATION ... 106
   7.1 Introduction ... 106
   7.2 Overview of Prototype Network ... 108
   7.3 Packet Switch Architecture ... 109
   7.4 Packet Switch Implementation ... 111
       7.4.1 Software Emulation ... 113
       7.4.2 Point-to-Point Links ... 116
   7.5 Hosts and Routers ... 116
       7.5.1 A Review of Architectural Details ... 117
       7.5.2 Software Implementation ... 119
   7.6 Evaluation Methodology ... 119
       7.6.1 Transmission Capacity of Network and Individual Links ... 119
       7.6.2 Verification of Correctness ... 122
   7.7 Experimental Results ... 124
       7.7.1 Predictable Performance ... 125
       7.7.2 Flows vs. Datagrams ... 131
       7.7.3 Congestion Avoidance and Control ... 133
   7.8 Summary ... 136

8. CONCLUSIONS ... 137
   8.1 Predictable Performance ... 137
   8.2 Rate-based Congestion Avoidance and Control ... 138
   8.3 Fast Propagation of Link Status Updates ... 139
   8.4 Future Directions ... 140
       8.4.1 Congestion Control ... 140
       8.4.2 Flows ... 140
   8.5 Summary ... 142

BIBLIOGRAPHY ... 143

LIST OF FIGURES

1.1 A logical view of a computer network ... 3
1.2 The ISO Reference Model for Network Architectures ... 5
3.1 Basic model of UPDS in the context of an internet ... 21
3.2 Multi-Level Organization in UPDS ... 25
3.3 Protocol Hierarchy in UPDS ... 27
3.4 A bidirectional link in UPDS ... 28
3.5 Link (m,n) partitions the network ... 38
4.1 A node with three adjacent links ... 46
4.2 A network with two partitions ... 47
4.3 Algorithm to merge two link status update tables ... 54
4.4 Recovery of node k merges two partitions ... 55
6.1 An abstract view of components of a flow ... 89
7.1 The basic hardware building block used for input ... 110
7.2 The basic hardware building block used for output ... 110
7.3 The structure of a multiprocessor packet switch ... 112
7.4 Organization of the Software Emulator ... 114
7.5 Organization of the Router software on input side ... 118
7.6 Organization of the Router software on output side ... 120
7.7 Representative topologies used in experiments ... 123
7.8 Average per-packet delay for flow traffic with no datagram traffic. The dashed vertical line indicates the point at which interpacket arrival time equals per-packet processing time. ... 126
7.9 Effect of datagram traffic on per-packet delays for flows when datagram traffic constitutes 15% of the total capacity. The average delays are the same as those observed when datagram traffic is absent. ... 127
7.10 Effect of datagram traffic on per-packet delays for flows when datagram traffic constitutes 30% of the total capacity. The dashed vertical line marks the point at which the total traffic starts exceeding the path capacity. ... 128
7.11 Effect of datagram traffic on flow throughput, where the dashed line plots throughput when datagram traffic constitutes 30% of network capacity, and the solid line plots throughput when datagram traffic varies from 0 to 15%. ... 130
7.12 Five plots in the graph show the effect of flow traffic on per-packet delays for datagrams depending on the amount of flow traffic present. For the rightmost plot, no flow traffic is present, whereas in the leftmost plot, the amount of flow traffic equals 65% of the total path capacity.
7.13 Two sources of traffic share a path to a common destination. ... 133
7.14 Congestion avoidance with two sources of traffic. The dashed vertical line marks the point at which both sources start adjusting their transmission rates in response to rate control messages. The solid plot shows the throughput behavior for source 1, whereas the dashed plot shows the same for source 2. The dotted plot at the top shows the combined throughput of both sources. ... 134

ABSTRACT

Yavatkar, Rajendra Shivaram. Ph.D., Purdue University, August 1989. An Architecture for High-Speed Packet Switched Networks. Major Professor: Douglas E. Comer.

The emergence of performance-intensive distributed applications is making new demands on computer networks. Distributed applications access computing resources and information available at multiple computers located across a network. Realizing such distributed applications over geographically distant areas requires access to predictable, high-performance communication.

Circuit switching and packet switching are two possible techniques for providing high-performance communication. Circuit switched networks preallocate resources to individual sources of traffic before any traffic is sent, whereas packet switched networks allocate resources dynamically as the traffic travels through the network. The advantage of circuit switching lies in guaranteed performance due to reserved capacity, but network capacity is wasted when circuits are idle. Packet switched networks have been preferred in data networks due to their lower cost and efficient utilization of network resources. However, the major limitation of current packet switched networks lies in their inability to provide predictable performance.

This dissertation proposes a new architecture for providing predictable high performance in high-speed packet switched networks. The architecture combines the advantages of circuit switching and packet switching by providing two services: datagrams and flows. The datagram service supports best-effort delivery of traffic. The main liability of a datagram service lies in congestion. To avoid congestion, the architecture uses a novel, rate-based congestion control scheme. To support applications that demand specific performance guarantees, the architecture provides an abstraction called a flow. A flow is a communication channel that has specific performance characteristics associated with its traffic. When requesting a flow, a source specifies the performance needs and a destination. The underlying delivery system guarantees to meet those needs by pre-allocating resources along a selected path to the destination.

An experimental evaluation using a prototype network demonstrates the viability of the architecture in providing predictable, high performance.
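The flow admission idea described in the abstract — a source states its performance needs, and the delivery system admits the flow only if it can pre-allocate resources along a path — can be illustrated with a minimal sketch. This is an illustrative example, not the interface defined in the thesis: the class names, the `rate_pps` and `max_delay_ms` parameters, and the all-or-nothing reservation rule are assumptions for the sketch; the actual flow operations and parameters are specified in Sections 6.4.1 and 6.4.2 of the dissertation.

```python
from dataclasses import dataclass

@dataclass
class FlowRequest:
    src: str
    dst: str
    rate_pps: int         # requested sustained packet rate (hypothetical unit)
    max_delay_ms: float   # requested per-packet delay bound (hypothetical)

class Router:
    """A hop along a candidate path, tracking reserved vs. total capacity."""
    def __init__(self, name: str, capacity_pps: int):
        self.name = name
        self.capacity_pps = capacity_pps
        self.reserved_pps = 0

    def can_admit(self, req: FlowRequest) -> bool:
        return self.reserved_pps + req.rate_pps <= self.capacity_pps

    def reserve(self, req: FlowRequest) -> None:
        self.reserved_pps += req.rate_pps

def request_flow(path: list[Router], req: FlowRequest) -> bool:
    """Admit a flow only if every router on the path can reserve the
    requested rate; a rejected flow reserves nothing anywhere."""
    if all(r.can_admit(req) for r in path):
        for r in path:
            r.reserve(req)
        return True
    return False
```

A flow is accepted only when each hop still has the requested capacity free, so admitted flows never compete for reserved capacity and see predictable service, while rejected requests fall back to the best-effort datagram service:

```python
path = [Router("r1", 1000), Router("r2", 800)]
request_flow(path, FlowRequest("A", "B", 600, 20.0))  # admitted
request_flow(path, FlowRequest("A", "C", 300, 20.0))  # rejected: r2 lacks capacity
```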