Internet History and Future

Slide 1 – Internet History and Future
Science and Innovation Week
Dr. Lawrence Roberts, Founder and Chairman, Anagran
[Title slide; background image of a city with connectivity and technology graphics. © 2010 Anagran, Inc.]

Slide 2 – Early Packet Switching History
[Timeline chart, 1959–1973, of early packet-switching work: Paul Baran at Rand (redundancy, routing, economics; Rand report, IEEE paper), Donald Davies and Scantlebury at NPL (topology; IFIP and ACM papers), Len Kleinrock at MIT (queuing; RLE report and the book "Communication Nets"), Larry Roberts and Marill at MIT and ARPA (protocol; IEEE, SJCC, FJCC, and ACM papers), J.C.R. Licklider's "Intergalactic Network", the TX-2 to SDC two-node experiment, and the ARPANET program growing from one node to 3, 13, 20, and 38 nodes.]

Slide 3 – The Beginning of the Internet: ARPANET Became the Internet
- 1965 – MIT two-computer experiment; Roberts designs the packet structure; Len Kleinrock supplies the queuing theory
- 1967 – Roberts moves to ARPA and designs the ARPANET
- 1969 – First four nodes installed: UCLA, SRI, UCSB, University of Utah
- 1971 – Email created; it soon becomes the main traffic
- 1972 – Bob Kahn joins Roberts at ARPA
- 1973 – Roberts leaves and starts Telenet, the first commercial packet carrier in the world
- 1974 – TCP design paper published by Bob Kahn and Vint Cerf
- 1983 – TCP/IP installed on the ARPANET and required by DoD
- 1993 – Internet opened to commercial use
[Photo: Roberts at an MIT computer.]

Slide 4 – Original Internet Design: It Was Designed for Data
- File transfer and email were the main activities
- Constrained by the high cost of memory
- Only the packet destination examined; no source checks
- No QoS, no security; best effort only
- Voice considered; video thought not feasible
- Not much change in packet switching since then
[Map: ARPANET, 1971.]

Slide 5 – Internet Early History
[Log-scale chart of hosts and traffic (in bps/10), 1969–1987, annotated with milestones: the ICCC demo, email, FTP, NCP, the "Internet" name first used in RFC 675, the TCP/IP design, SATNET (satellite link to the UK), ALOHA packet radio, the PacketRadioNET spanning the US, Ethernet, X.25 as the virtual-circuit standard, DNS, the cutover to TCP/IP, and the terms of Roberts, Kahn, and Cerf at ARPA.]

Slide 6 – ARPANET Expansion
[Map: ARPANET, July 1977.]

Slide 7 – NAE Draper Award Laureates, Feb. 20, 2001
"For creating the Internet": Roberts, Kahn, Kleinrock, Cerf

Slide 8 – Prince of Asturias Award for Technical and Scientific Research, Oct. 25, 2002
Roberts, Kahn, Cerf, Berners-Lee

Slide 9 – Prince of Asturias Award for Technical and Scientific Research, Oct. 25, 2002
[Photo slide.]

Slide 10 – Major Internet Contributions
- 1959–1964 – Kleinrock develops packet network theory, proving that packets could be safely queued with modest buffers at network nodes (see the illustrative note after this slide)
- 1965 – Roberts tests a two-node packet network and proves the telephone network inadequate for data; a packet network is needed
- 1967–1973 – Roberts at ARPA designs the ARPANET and contracts out its parts (routers, transmission lines, protocol, application software), growing the network to 38 nodes and 50 computers
- 1973–1985 – Kahn at ARPA manages the ARPANET, converting it to TCP/IP and standardizing DoD (and in effect the world) on TCP/IP
- 1975–1983 – Cerf at ARPA designs TCP/IP and helps grow the network
- 1990–1993 – Berners-Lee designs the hypertext browser (WWW)
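Slide 10 credits Kleinrock's 1959–1964 queuing theory with showing that packets can be safely queued with modest buffers at network nodes. As an illustration only (a textbook single-server M/M/1 result, not reproduced from these slides), the scale of "modest" can be seen from:

```latex
% M/M/1 queue with arrival rate \lambda, service rate \mu, utilization \rho = \lambda/\mu:
\[
  \bar{N} \;=\; \frac{\rho}{1-\rho}
  \quad\text{(mean packets held at the node)},
  \qquad
  T \;=\; \frac{1/\mu}{1-\rho}
  \quad\text{(mean time through the node)}.
\]
```

Even at 70% utilization such a node holds only about 0.7/0.3 ≈ 2.3 packets on average, which is the sense in which modest per-node buffering suffices at moderate load.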
Slide 11 – Packet Switching: 1969 Cost Crossover
[Chart, 1960–1980, comparing the cost of sending data with circuit switching and with packet switching; the curves cross in 1969. From: "Data by the Packet," Lawrence Roberts, IEEE Spectrum, Vol. 11, No. 2, February 1974, pp. 46–51.]

Slide 12 – Internet Traffic History: Growth of 6 Trillion Times in 40 Years
[Log-scale chart of world Internet traffic in PB/month, 1970–2010, annotated with the ARPANET, commercial X.25 service, TCP/IP, NSFNET, the WWW, and the commercial era of roughly one doubling per year.]
Internet traffic doubled every 11 months from 1970 to 2010 (see the arithmetic check after slide 16).

Slide 13 – Some Network Problems Persist
- Fairness – broadband and wireless access
  - 5% of users take 70%–80% of shared capacity
  - The current network is unfair: each flow gets equal capacity
  - Multi-flow applications thus use an unfair portion of capacity
  - Multi-flow applications: P2P, maps, content caching
- Quality of experience
  - Queuing adds delay, delay jitter, and TCP stalls
  - Web access is much slower than needed
  - Video stalls; wireless voice breaks up
- Utilization
  - Current network utilization is very low at the network edge
- Security

Slide 14 – Internet Technology: Finally Some Changes
- For 40 years network equipment has used the same technology as the ARPANET in 1969: queues
  - Moore's Law has allowed major speed increases
  - But network equipment still uses queues to control traffic overload
  - Every packet is processed independently (at high cost)
  - The needed average flow rate is achieved, but flow rates are randomized
- Flow Rate Control (FRC) provides a new solution (see the sketch after slide 16)
  - A flow is a sequence of packets: a file transfer, a voice call, a video, etc.
  - Flow Rate Control controls the rate of every flow without queues
  - Trunk load is held just below maximum capacity, so there is no congestion
  - Computation is reduced: the first packet is examined, most are streamed out
  - Cost, power, and size are reduced 5:1
  - The FSA signaling protocol offers nearly ideal network service and greatly improved network security

Slide 15 – Controlling Overload: Queues vs. Flow Rate Control
- Current packet-queuing design of network equipment
  [Block diagram: NPU with queue/discard stages; the NPU examines all packets; 4U chassis, 1,500 watts.]
  Today, network equipment uses packet queues, which handle overload by delaying and discarding random packets; the result is delay, delay jitter, and TCP stalls.
- New Flow Rate Control (FRC) design of network equipment
  [Block diagram: switch with per-flow rate control and utilization measurement; the NPU looks at only 7% of packets; 1U chassis, 300 watts.]
  Anagran's new approach uses FRC to intelligently manage overload, reduce delay, increase throughput, provide equalization, and support multiple levels of service.

Slide 16 – Power and Cost Are Lower for Flow than for Packet Processing
[Log-scale chart (one decade per gridline) of cost-performance for packet processing versus flow processing, 1975–2015; the curves cross in 1988 and are now about a factor of 5 apart.]
- Flow processing depends more on memory cost than on computing, and memory cost has fallen faster than computing cost
- Flow processing was too expensive before 1988
- Flow processing now uses about 5 times less power and cost than packet processing, and its advantage continues to increase
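As a quick consistency check on slide 12's two figures (a roughly 6-trillion-fold traffic growth over 40 years, and a doubling every 11 months), the implied doubling period can be recomputed from the growth factor alone; this is arithmetic on the slide's own numbers, not an external estimate:

```latex
\[
  \log_2\!\bigl(6\times 10^{12}\bigr) \approx 42.4 \ \text{doublings},
  \qquad
  \frac{40 \times 12\ \text{months}}{42.4} \approx 11.3\ \text{months per doubling},
\]
```

which agrees with the slide's stated doubling time of about 11 months.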
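Slides 14 and 15 describe Flow Rate Control only at a high level, so the following is a minimal, hypothetical sketch of the idea as described there: the first packet of each flow triggers a rate assignment, the sum of assigned rates is held just below trunk capacity, and subsequent packets are paced per flow rather than held in a shared output queue. All class, method, and parameter names are invented for illustration; this is not Anagran's implementation.

```python
import time


class FlowRateController:
    """Toy sketch of queue-less Flow Rate Control (FRC).

    Hypothetical model only: flows are keyed by an opaque flow_id, every
    active flow is (re)assigned a rate so the sum of rates stays just below
    trunk capacity, and packets are paced per flow instead of queued.
    """

    def __init__(self, trunk_capacity_bps, target_utilization=0.95):
        # Hold the total assigned rate just below the trunk limit (slide 14).
        self.capacity = trunk_capacity_bps * target_utilization
        self.flow_rates = {}      # flow_id -> assigned rate in bit/s
        self.next_send_time = {}  # flow_id -> earliest departure time for its next packet

    def on_packet(self, flow_id, packet_len_bytes, now=None):
        """Return the time at which this packet should be transmitted."""
        now = time.monotonic() if now is None else now

        if flow_id not in self.flow_rates:
            # Only the FIRST packet of a flow triggers rate assignment;
            # later packets are "streamed out" with minimal per-packet work.
            self.flow_rates[flow_id] = 0.0
            self._rebalance()

        rate = self.flow_rates[flow_id]
        send_at = max(now, self.next_send_time.get(flow_id, now))
        # Pace the flow: the next packet may leave once this one has drained at the flow's rate.
        self.next_send_time[flow_id] = send_at + (packet_len_bytes * 8) / rate
        return send_at

    def on_flow_end(self, flow_id):
        """Release the flow's share and hand it back to the remaining flows."""
        self.flow_rates.pop(flow_id, None)
        self.next_send_time.pop(flow_id, None)
        if self.flow_rates:
            self._rebalance()

    def _rebalance(self):
        # Simplest possible policy: equal share per active flow. Slide 17's
        # subscriber equalization would instead split capacity per subscriber.
        share = self.capacity / len(self.flow_rates)
        for fid in self.flow_rates:
            self.flow_rates[fid] = share


# Example with hypothetical values: a 10 Gbit/s trunk, one 1500-byte packet of a new flow.
# ctl = FlowRateController(10e9)
# departure = ctl.on_packet(flow_id=("10.0.0.1", "10.0.0.2", 6, 1234, 80), packet_len_bytes=1500)
```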
Slide 17 – Fixing Network Problem Areas
- Fairness
  - TCP and queuing lead to equal capacity per flow, and to congestion
  - Flow Rate Control (FRC) can provide subscriber equalization: equal capacity for equal pay
  - Supports multiple pay classes, each with an increased average rate
- Quality of service (QoS)
  - Replace queues with Flow Rate Control (FRC)
  - No added delay: streaming video runs faster, with no stalls
  - No delay jitter: good voice quality even on wireless
  - No TCP stalls or resets: all flows run smoothly at a controlled rate
- Utilization
  - If the current QoS is acceptable, utilization can instead be increased substantially

Slide 18 – Internet Traffic Projection: Fairness Issue
[Log-scale chart of world Internet traffic in PB/month, 1970–2020, showing un-equalized multi-flow traffic, world landline traffic, and world wireless traffic, annotated with TCP and the point at which the Internet became commercially available.]
- In 1999, multi-flow applications, starting with P2P, grew to consume up to 70% of Internet capacity
- Subscriber equalization should slowly return capacity to the normal user
- Wireless Internet traffic is currently exploding and will soon equal landline traffic

Slide 19 – Impact of Multi-Flow Traffic
[Chart of world Internet traffic in PB/month, 2000–2010, rising to about 16,000 PB/month and split into multi-flow traffic (70% of capacity, 5% of users) and single-flow traffic (30% of capacity, 95% of users).]
- The major problem today is that the Internet allows unfairness
- Each flow is given equal capacity
- Multi-flow applications therefore receive an unfair fraction of capacity
- Generally, 5% of users get 70% of shared capacity
- Subscriber equality is needed: get what you pay for (see the sketch after slide 21)

Slide 20 – Flow Rate Control Exists in the Anagran FR-1000
Transitioning from packet to flow traffic management with Anagran Fast Flow Technology™ (patents pending):
- "Delay-less" architecture: zero output-buffer queuing
- Packet processing bypassed on 95%+ of all packets
- Deployed as a bump in the wire or with L3 routing
Product specs:
- 40 Gbps throughput; 10 GE and 1 GE (10/100/1000) ports
- 1,500,000 simultaneous flows; up to 8,000 distinct flow classes or VLANs
- Supports 75,000 subscribers with rate caps, service classes, and subscriber equalization
- Redundant power, hot-swappable modules, and HA via dual-unit configuration
- 100% NetFlow available even at 40 Gbps

Slide 21 – Issues in Education Networks
- Student access, priority, and equality
  - Eliminate P2P overload with student equality
  - Guarantee a minimum and maximum total fraction of the Internet link
  - Prohibit or limit certain external activity, such as social networking
- Faculty access, priority, and equality (perhaps by groups)
- Access limitations to servers by person or group
- Assured capacity for selected servers and services
- Distance-learning video priority and guarantees
- Utilization of the LAN and WAN typically increased 100%
- Major cost savings on equipment and communication
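Slides 17–19 argue for per-subscriber rather than per-flow fairness. The sketch below shows the difference under stated assumptions (the function name, data shapes, and example values are invented, not taken from the slides): capacity is first divided among subscribers in proportion to their pay class, and each subscriber's share is then split among that subscriber's own flows, so a user opening 50 P2P flows no longer gets 50 times the share of a user with one flow.

```python
def equalize(capacity_bps, subscribers):
    """Per-subscriber (not per-flow) capacity split - a sketch, not Anagran's algorithm.

    subscribers: dict mapping subscriber_id -> {"weight": pay-class weight,
                                                "flows": [flow_id, ...]}
    Returns a dict mapping flow_id -> assigned rate in bit/s.
    """
    active = {sid: s for sid, s in subscribers.items() if s["flows"]}
    total_weight = sum(s["weight"] for s in active.values())
    rates = {}
    for sub in active.values():
        # Each subscriber gets a share proportional to its pay class
        # ("equal capacity for equal pay"), however many flows it opens.
        sub_share = capacity_bps * sub["weight"] / total_weight
        per_flow = sub_share / len(sub["flows"])
        for flow_id in sub["flows"]:
            rates[flow_id] = per_flow
    return rates


# Example: a P2P user with 50 flows and a web user with 1 flow, same pay class,
# sharing a 100 Mbit/s link.
demo = {
    "p2p_user": {"weight": 1, "flows": [f"p2p-{i}" for i in range(50)]},
    "web_user": {"weight": 1, "flows": ["web-0"]},
}
rates = equalize(100e6, demo)
# Per-flow fairness would give the web user about 1/51 of the link; subscriber
# equalization gives each subscriber 50 Mbit/s regardless of flow count.
assert rates["web-0"] == 50e6 and rates["p2p-0"] == 1e6
```

Multiple pay classes (slide 17) fall out of the same calculation by giving a higher-paying subscriber a larger weight.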