An Architecture for High-Speed Packet Switched Networks (Thesis)
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- Lecture 10: Switching & Internetworking
Lecture 10: Switching & Internetworking. CSE 123: Computer Networks, Alex C. Snoeren. HW 2 due Wednesday.

Overview: bridging & switching (spanning tree); Internet Protocol (service model, packet format).

Selective forwarding: only rebroadcast a frame onto the LAN where its destination resides. If A sends a packet to X, the bridge must forward the frame; if A sends a packet to B, the bridge shouldn't. [Figure: two LANs joined by a bridge, with hosts A, B, C, D on LAN 1 and W, X, Y, Z on LAN 2.]

Forwarding tables: the bridge needs to know the "destination" of a frame (the destination address in the frame header, 48 bits in Ethernet) and needs to know which destinations are on which LANs. One approach is a table statically configured by hand, mapping address to output port (i.e., LAN), but something automatic and dynamic is preferable. Simple algorithm:

  Receive frame f on port q
  Lookup f.dest for output port      /* know where to send it? */
  If f.dest found then
      if output port is q then drop  /* already delivered */
      else forward f on output port;
  else flood f;                      /* forward on all ports but the one where the frame arrived */

Learning bridges eliminate manual configuration by learning which addresses are on which LANs. Basic approach: if a frame arrives on a port, associate its source address with that port; as each host transmits, the table becomes accurate (e.g., hosts A, B, C, D learned on port 1, W, X, Z on port 2, and Y on port 3). What if a node moves? Table aging: associate a timestamp with each table entry and refresh the timestamp for each …
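The forwarding algorithm quoted above translates almost line-for-line into code. Below is a minimal sketch of a learning bridge (illustrative only, not code from the lecture); the port numbers and the 300-second aging interval are assumptions made for the example:

```python
# Minimal learning-bridge sketch: learn source addresses, then forward or flood.
import time

AGING_SECONDS = 300  # assumed aging interval for this example

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)   # e.g. {1, 2, 3}
        self.table = {}           # MAC address -> (port, last_seen)

    def receive(self, frame_src, frame_dst, in_port):
        """Return the set of ports the frame should be sent out on."""
        # Learn: associate the source address with the arrival port.
        self.table[frame_src] = (in_port, time.time())

        entry = self.table.get(frame_dst)
        if entry is not None:
            out_port, last_seen = entry
            if time.time() - last_seen > AGING_SECONDS:
                del self.table[frame_dst]   # stale entry: fall through to flooding
            elif out_port == in_port:
                return set()                # destination is on the arrival LAN: drop
            else:
                return {out_port}           # forward on the learned port
        # Unknown destination: flood on every port except the arrival port.
        return self.ports - {in_port}

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("A", "X", 1))   # X unknown: flood -> {2, 3}
print(bridge.receive("X", "A", 2))   # A was learned on port 1 -> {1}
```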
- WiFi Direct Internetworking
WiFi Direct Internetworking. António Teófilo (ADEETC, Instituto Superior de Engenharia de Lisboa, Instituto Politécnico de Lisboa, Portugal), Hervé Paulino (NOVA LINCS, DI, Faculdade de Ciências e Tecnologia, Universidade NOVA de Lisboa, Portugal), João M. Lourenço (NOVA LINCS, DI, Faculdade de Ciências e Tecnologia, Universidade NOVA de Lisboa, Portugal). [email protected], [email protected], [email protected]

ABSTRACT: We propose to interconnect mobile devices using WiFi-Direct. Having that, it will be possible to interconnect multiple off-the-shelf mobile devices, via WiFi, but without any supportive infrastructure. This will pave the way for mobile autonomous collaborative systems that can operate in any conditions, like in disaster situations, in very crowded scenarios or in isolated areas. This work is relevant since the WiFi-Direct specification, that works on groups of devices, does not tackle inter-group communication and existing research solutions have strong limitations. We have a two-phase work plan. Our first goal is to achieve inter-group communication, i.e., enable the efficient interconnection … will enable WiFi communication range and speed even in cases of: network infrastructure congestion, which may happen in highly crowded venues (such as sports and cultural events); or temporary, or permanent, absence of infrastructure, as may happen in remote locations or disaster situations.

WFD allows devices to form groups, with one of them, called Group Owner (GO), acting as a soft access point for the remaining group members. WFD offers node discovery, authentication, group formation and message routing between nodes in the same group. However, WFD communication is very constrained: current implementations restrict group size to 9 devices and none of these devices may be a member of more than one WFD group. …
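The group constraints this excerpt describes (a single Group Owner per group, a 9-device cap in current implementations, and no device belonging to more than one group) can be sketched as a small data model. The class and method names below are illustrative only, not an actual WiFi Direct API:

```python
# Sketch of the WiFi Direct group constraints described above.
MAX_GROUP_SIZE = 9   # cap reported for current implementations in the excerpt

class WfdGroup:
    def __init__(self, owner: str):
        self.owner = owner        # the Group Owner acts as a soft access point
        self.members = {owner}

    def join(self, device: str, memberships: dict) -> bool:
        """Admit a device only if the group has room and it is in no other group."""
        if device in memberships:
            return False          # a device may not belong to two WFD groups
        if len(self.members) >= MAX_GROUP_SIZE:
            return False          # current implementations cap group size
        self.members.add(device)
        memberships[device] = self
        return True

memberships: dict = {}
group = WfdGroup("phone-A")
memberships["phone-A"] = group
print(group.join("phone-B", memberships))   # True
print(group.join("phone-B", memberships))   # False: already in a group
```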
- Packet Processing at Wire Speed Using Network Processors
Packet processing at wire speed using network processors. Chethan Kumar and Hao Che, University of Texas at Arlington. {ckumar, hche}@cse.uta.edu

Abstract: Recent developments in fiber optics and the new bandwidth hungry applications have put more stress on the active components (switches, routers etc.) of a network. Optical fiber bandwidth is no longer a constraint for increasing the network bandwidth. However, the processing power of the network has not scaled up to the increase in the fiber bandwidth. Communication industry is looking forward for more innovative ways of designing router architecture and research is being conducted to develop a scalable, flexible and cost-effective architecture for routers. A successful outcome of this effort is a specialized processor called Network processor. Network processor provides performance at hardware speeds while attaining the flexibility of software. Network processors from different vendors employ different architectures and the choice of a particular type of network processor can affect the architecture of the …

1 Introduction: The modern day Internet has seen an explosive growth of applications being used on it. As more and more applications are being developed, there is an increase in the amount of load put on the internet. At the same time the fiber optics bandwidth has increased dramatically to meet the traffic demand, but the present day routers have limited processing power to handle this profound demand increase. Hence the networking and telecommunications industry is compelled to look for new solutions for improving the performance and the processing power of the routers. One of the industry's solutions to the challenges posed by the increased demand for processing power is programmable functional units grouped into a processor called Application Specific Instruction Processor (ASIP) or Network processor (NP).
- Packet Processing Execution Engine (PROX): Performance Characterization for NFVI User Guide
USER GUIDE, Intel Corporation. Packet pROcessing eXecution Engine (PROX): Performance Characterization for NFVI. Authors: Yury Kylulin, Luc Provoost, Petar Torre.

1 Introduction: Properly designed Network Functions Virtualization Infrastructure (NFVI) environments deliver high packet processing rates, which are also a required dependency for onboarded network functions. NFVI testing methodology typically includes both functionality testing and performance characterization testing, to ensure that NFVI both exposes the correct APIs and is capable of packet processing. This document describes how to use open source software tools to automate peak traffic (also called saturation) throughput testing to characterize the performance of a containerized NFVI system. The text and examples in this document are intended for architects and testing engineers for Communications Service Providers (CSPs) and their vendors. This document describes tools used during development to evaluate whether or not a containerized NFVI can perform the required rates of packet processing within set packet drop rates and latency percentiles. This document is part of the Network Transformation Experience Kit, which is available at https://networkbuilders.intel.com/network-technologies/network-transformation-exp-kits. …
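The peak (saturation) throughput methodology the guide describes, finding the highest offered rate whose drop rate stays within a limit, can be sketched as a simple search harness. This is a hypothetical illustration, not the actual PROX tooling; `run_trial`, the rates, and the thresholds are all stand-ins:

```python
# Hypothetical saturation-throughput search in the spirit of the methodology
# described above. `run_trial` stands in for whatever traffic generator the
# test bed uses; it is assumed to return the packet drop fraction at a rate.
def find_saturation_rate(run_trial, max_rate_mpps, max_drop=0.001, precision=0.01):
    """Binary-search the highest offered rate (Mpps) whose drop rate <= max_drop."""
    lo, hi = 0.0, max_rate_mpps
    best = 0.0
    while hi - lo > precision:
        rate = (lo + hi) / 2
        drop = run_trial(rate)      # one fixed-duration trial at this offered rate
        if drop <= max_drop:
            best, lo = rate, rate   # rate sustained: search higher
        else:
            hi = rate               # too many drops: search lower
    return best

# Toy stand-in: pretend the system drops packets above 12.4 Mpps.
# 14.88 Mpps (10 GbE line rate for 64-byte frames) is used only as the search ceiling.
fake_trial = lambda rate: 0.0 if rate <= 12.4 else 0.05
print(round(find_saturation_rate(fake_trial, max_rate_mpps=14.88), 2))
```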
- Internetworking and Layered Models
1 Internetworking and Layered Models. The Internet today is a widespread information infrastructure, but it is inherently an insecure channel for sending messages. When a message (or packet) is sent from one Website to another, the data contained in the message are routed through a number of intermediate sites before reaching its destination. The Internet was designed to accommodate heterogeneous platforms so that people who are using different computers and operating systems can communicate. The history of the Internet is complex and involves many aspects: technological, organisational and community. The Internet concept has been a big step along the path towards electronic commerce, information acquisition and community operations.

Early ARPANET researchers accomplished the initial demonstrations of packet-switching technology. In the late 1970s, the growth of the Internet was recognised and subsequently a growth in the size of the interested research community was accompanied by an increased need for a coordination mechanism. The Defense Advanced Research Projects Agency (DARPA) then formed an International Cooperation Board (ICB) to coordinate activities with some European countries centered on packet satellite research, while the Internet Configuration Control Board (ICCB) assisted DARPA in managing Internet activity. In 1983, DARPA recognised that the continuing growth of the Internet community demanded a restructuring of coordination mechanisms. The ICCB was disbanded and in its place the Internet Activities Board (IAB) was formed from the chairs of the Task Forces. The IAB revitalised the Internet Engineering Task Force (IETF) as a member of the IAB. By 1985, there was a tremendous growth in the more practical engineering side of the Internet. …
- Guidelines for the Secure Deployment of IPv6
NIST Special Publication 800-119: Guidelines for the Secure Deployment of IPv6. Recommendations of the National Institute of Standards and Technology. Sheila Frankel, Richard Graveman, John Pearce, Mark Rooks. Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899-8930. December 2010. U.S. Department of Commerce, Gary Locke, Secretary; National Institute of Standards and Technology, Dr. Patrick D. Gallagher, Director.

Reports on Computer Systems Technology: The Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST) promotes the U.S. economy and public welfare by providing technical leadership for the nation's measurement and standards infrastructure. ITL develops tests, test methods, reference data, proof of concept implementations, and technical analysis to advance the development and productive use of information technology. ITL's responsibilities include the development of technical, physical, administrative, and management standards and guidelines for the cost-effective security and privacy of sensitive unclassified information in Federal computer systems. This Special Publication 800-series reports on ITL's research, guidance, and outreach efforts in computer security and its collaborative activities with industry, government, and academic organizations.

Natl. Inst. Stand. Technol. Spec. Publ. 800-119, 188 pages (Dec. 2010). Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. …
- Lightweight Internet Protocol Stack: Ideal Choice for High-Performance HFT and Telecom Packet Processing Applications
GE Intelligent Platforms. Lightweight Internet Protocol Stack: Ideal Choice for High-Performance HFT and Telecom Packet Processing Applications.

Introduction: The number of devices connected to IP networks will be nearly three times as high as the global population in 2016. There will be nearly three networked devices per capita in 2016, up from over one networked device per capita in 2011. Driven in part by the increase in devices and the capabilities of those devices, IP traffic per capita will reach 15 gigabytes per capita in 2016, up from 4 gigabytes per capita in 2011 (Cisco VNI). Figure 1 shows the anticipated growth in IP traffic and networked devices. IP traffic is increasing globally at a breathtaking pace. The rate at which these IP data packets need to be processed, to ensure not only their routing, security and delivery in the core of the network but also the identification and extraction of payload content for various end-user applications such as high frequency trading (HFT), has also increased. In order to support the demand for high-performance IP packet processing, users and developers are increasingly moving to a novel approach of combining PCI Express packet processing accelerator cards in a standalone network server to create an accelerated network server. The benefits of using such a hardware solution are explained in the GE Intelligent Platforms white paper, Packet Processing in Telecommunications: A Case for the Accelerated Network Server. [Figure 1: anticipated growth in IP traffic and networked devices through 2016.] …
- High Level Synthesis for Packet Processing Pipelines
High Level Synthesis for Packet Processing Pipelines. Cristian Soviani. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, 2007. © 2007 Cristian Soviani. All rights reserved.

Abstract: Packet processing is an essential function of state-of-the-art network routers and switches. Implementing packet processors in pipelined architectures is a well-known, established technique, albeit different approaches have been proposed. The design of packet processing pipelines is a delicate trade-off between the desire for abstract specifications, short development time, and design maintainability on one hand and very aggressive performance requirements on the other. This thesis proposes a coherent design flow for packet processing pipelines. Like the design process itself, I start by introducing a novel domain-specific language that provides a high-level specification of the pipeline. Next, I address synthesizing this model and calculating its worst-case throughput. Finally, I address some specific circuit optimization issues. I claim, based on experimental results, that my proposed technique can dramatically improve the design process of these pipelines, while the resulting performance matches the expectations of hand-crafted design. The considered pipelines exhibit a pseudo-linear topology, which can be too restrictive in the general case. However, especially due to its high performance, such an architecture may be suitable for applications outside packet processing, in which case some of my proposed techniques could be easily adapted. Since I ran my experiments on FPGAs, this work has an inherent bias towards that technology; however, most results are technology-independent.
- Evaluating the Power of Flexible Packet Processing for Network Resource Allocation
Evaluating the Power of Flexible Packet Processing for Network Resource Allocation. Naveen Kr. Sharma, Antoine Kaufmann, and Thomas Anderson, University of Washington; Changhoon Kim, Barefoot Networks; Arvind Krishnamurthy, University of Washington; Jacob Nelson, Microsoft Research; Simon Peter, The University of Texas at Austin. https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/sharma
This paper is included in the Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI '17), March 27–29, 2017, Boston, MA, USA. ISBN 978-1-931971-37-9.

Abstract: Recent hardware switch architectures make it feasible to perform flexible packet processing inside the network. This allows operators to configure switches to parse and process custom packet headers using flexible match+action tables in order to exercise control over how packets are processed and routed. However, flexible switches have limited state, support limited types of operations, and limit per-packet computation in order to be able to operate at line rate. … of the packet header, perform simple computations on values in packet headers, and maintain mutable state that preserves the results of computations across packets. Importantly, these advanced data-plane processing features operate at line rate on every packet, addressing a major limitation of earlier solutions such as OpenFlow [22], which could only operate on a small fraction of packets, e.g., for flow setup. FlexSwitches thus hold the promise of ushering in the new paradigm of a software defined …
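The match+action abstraction described in this abstract, parsing header fields, matching on them, and running a small action that may update per-entry mutable state, can be modeled in software. The sketch below is purely illustrative; it is not the paper's system, a P4 program, or a real switch API:

```python
# Hypothetical software model of a match+action table with per-entry mutable state.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional, Tuple

Headers = Dict[str, int]   # parsed header fields, e.g. {"ipv4.dst": 0x0A000001}

@dataclass
class TableEntry:
    action: Callable[[Headers, dict], None]    # mutates headers and/or entry state
    state: dict = field(default_factory=dict)  # register-like mutable state

class MatchActionTable:
    def __init__(self, key_fields: Tuple[str, ...]):
        self.key_fields = key_fields
        self.entries: Dict[Tuple[int, ...], TableEntry] = {}

    def add_entry(self, key: Tuple[int, ...], entry: TableEntry) -> None:
        self.entries[key] = entry

    def apply(self, headers: Headers) -> Optional[TableEntry]:
        key = tuple(headers.get(f, 0) for f in self.key_fields)
        entry = self.entries.get(key)            # exact match on the key fields
        if entry is not None:
            entry.action(headers, entry.state)   # run the action at "line rate"
        return entry

# Example action: count packets per destination and set the egress port.
def count_and_forward(headers: Headers, state: dict) -> None:
    state["pkts"] = state.get("pkts", 0) + 1
    headers["egress_port"] = state["port"]

table = MatchActionTable(key_fields=("ipv4.dst",))
table.add_entry((0x0A000001,), TableEntry(count_and_forward, {"port": 3}))
pkt = {"ipv4.dst": 0x0A000001}
table.apply(pkt)
print(pkt["egress_port"])   # 3
```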
- MODULE V: Internetworking
MODULE V Internetworking: Concepts, Addressing, Architecture, Protocols, Datagram Processing, Transport-Layer Protocols, And End-To-End Services Computer Networks and Internets -- Module 5 1 Spring, 2014 Copyright 2014. All rights reserved. Topics d Internet concept and architecture d Internet addressing d Internet Protocol packets (datagrams) d Datagram forwarding d Address resolution d Error reporting mechanism d Con®guration d Network address translation Computer Networks and Internets -- Module 5 2 Spring, 2014 Copyright 2014. All rights reserved. Topics (continued) d Transport layer protocol characteristics and techniques d Message transport with the User Datagram Protocol (UDP) d Stream transport with the Transmission Control Protocol (TCP) d Routing algorithms and protocols d Internet multicast and multicast routing Computer Networks and Internets -- Module 5 3 Spring, 2014 Copyright 2014. All rights reserved. Internet Concept And Internet Architecture What Is The Internet? d Users see it as services and applications ± Web and e-commerce ± Email, texting, instant messenger ± Social networking and blogs ± Music and video download (and upload) ± Voice and video teleconferencing d Networking professionals see it as infrastructure ± Platform on which above services run ± Grows rapidly Computer Networks and Internets -- Module 5 5 Spring, 2014 Copyright 2014. All rights reserved. Growth Of The Internet 1000M . .. .. .. 900M ... .. .. .. ... .. 800M .. .. .. ... .. 700M .. .. .. ... .. 600M .. .. .. ... .. 500M .. .. .. ... .. . -
- Internetworking
Carnegie Mellon. Internetworking. 15-213: Introduction to Computer Systems, 19th Lecture, Oct. 28, 2010. Instructors: Randy Bryant and Dave O'Hallaron.

A Client-Server Transaction: 1. the client sends a request; 2. the server handles the request (using some resource it manages); 3. the server sends a response; 4. the client handles the response. Note: clients and servers are processes running on hosts (which can be the same or different hosts). Most network applications are based on the client-server model: a server process and one or more client processes; the server manages some resource; the server provides service by manipulating the resource for clients; the server is activated by a request from a client (vending machine analogy).

Hardware Organization of a Network Host. [Figure: a host's CPU chip (register file, ALU), system bus, I/O bridge, memory bus, main memory, and an I/O bus with expansion slots for USB, graphics, disk and network controllers/adapters, plus mouse, keyboard, monitor, disk and network.]

Computer Networks: A network is a hierarchical system of boxes and wires organized by geographical proximity. A SAN (System Area Network) spans a cluster or machine room (switched Ethernet, Quadrics QSW, …). A LAN (Local Area Network) spans a building or campus (Ethernet is the most prominent example). A WAN (Wide Area Network) spans a country or the world (typically high-speed point-to-point phone lines). An internetwork (internet) is an interconnected set of networks; the Global IP Internet (uppercase "I") is the most famous example of an internet (lowercase "i"). Let's see how an internet is built from the ground up.

Lowest Level: Ethernet Segment. An Ethernet segment consists of a collection of hosts connected by wires (twisted pairs) to a hub, and spans a room or floor in a building. Operation: …
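The four-step client-server transaction in this excerpt maps directly onto a basic sockets exchange. Below is a minimal, self-contained sketch (a generic TCP echo exchange, not code from the lecture); the loopback address and port are arbitrary choices for the example:

```python
# Minimal client-server transaction: request, handle, respond, handle response.
import socket
import threading

HOST, PORT = "127.0.0.1", 9000   # hypothetical address for the example

# Set up the listening socket first so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_one() -> None:
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)              # step 2: server handles the request
        conn.sendall(b"echo: " + request)      # step 3: server sends the response

t = threading.Thread(target=serve_one)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect((HOST, PORT))
    sock.sendall(b"hello")                     # step 1: client sends a request
    print(sock.recv(1024).decode())            # step 4: client handles the response

t.join()
srv.close()
```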
- ICMP for IPv6
ICMP for IPv6. ICMP in IPv6 functions the same as ICMP in IPv4. ICMP for IPv6 generates error messages, such as ICMP destination unreachable messages, and informational messages, such as ICMP echo request and reply messages. Contents: Information About ICMP for IPv6; Additional References for IPv6 Neighbor Discovery Multicast Suppress.

Information About ICMP for IPv6: Internet Control Message Protocol (ICMP) in IPv6 functions the same as ICMP in IPv4. ICMP generates error messages, such as ICMP destination unreachable messages, and informational messages, such as ICMP echo request and reply messages. Additionally, ICMP packets in IPv6 are used in the IPv6 neighbor discovery process, path MTU discovery, and the Multicast Listener Discovery (MLD) protocol for IPv6. MLD is used by IPv6 devices to discover multicast listeners (nodes that want to receive multicast packets destined for specific multicast addresses) on directly attached links. MLD is based on version 2 of the Internet Group Management Protocol (IGMP) for IPv4. A value of 58 in the Next Header field of the basic IPv6 packet header identifies an IPv6 ICMP packet. ICMP packets in IPv6 are like a transport-layer packet in the sense that the ICMP packet follows all the extension headers and is the last piece of information in the IPv6 packet. Within IPv6 ICMP packets, the ICMPv6 Type and ICMPv6 Code fields identify IPv6 ICMP packet specifics, such as the ICMP message type. The value in the Checksum field is derived (computed by the sender and checked by the receiver) from the fields in the IPv6 ICMP packet and the IPv6 pseudoheader.
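The checksum behavior described above (computed over the ICMPv6 message plus an IPv6 pseudoheader containing the source and destination addresses, the upper-layer packet length, and Next Header value 58) can be illustrated with a short routine. This is a minimal sketch for illustration, not vendor code; the addresses and the echo request payload are made up:

```python
# Sketch of the ICMPv6 checksum over the IPv6 pseudoheader plus the ICMPv6 message.
import ipaddress
import struct

def icmpv6_checksum(src: str, dst: str, icmp_message: bytes) -> int:
    pseudo = (
        ipaddress.IPv6Address(src).packed
        + ipaddress.IPv6Address(dst).packed
        + struct.pack("!I", len(icmp_message))   # upper-layer packet length
        + b"\x00\x00\x00" + bytes([58])          # 3 zero bytes + Next Header (58 = ICMPv6)
    )
    data = pseudo + icmp_message
    if len(data) % 2:                            # pad to an even number of bytes
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                           # fold carries into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Echo Request (type 128, code 0) with the checksum field initially zero.
echo = struct.pack("!BBHHH", 128, 0, 0, 0x1234, 1) + b"ping"
print(hex(icmpv6_checksum("2001:db8::1", "2001:db8::2", echo)))
```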