Anatomy of Core Network Elements


Anatomy of Core Network Elements – from 1 Gbps to 10 Tbps
Josef Ungerman, CSE, CCIE #6167
© 2009 Cisco Systems, Inc. All rights reserved.

Agenda
1. Basic Terms
2. Router Architectures
3. Switch Architectures
4. Hybrid Architectures
5. Network Processors
6. Switch Fabrics

Chapter 1 – Basic Terms

Cisco in the 80's: Router Architecture
[diagram: CPU with DRAM; Flash, NVRAM, CON, AUX; packet interfaces attached over an interconnect]
Store & Forward Switching – uses packet buffers and QoS; handles WAN interfaces (very variable interface speeds).

Real-Time Packet Processing: Process Switching
[diagram: the CPU splits into process level and interrupt level]
Process Switching – an IOS process handles the forwarding decision and the other operations on the packet.

Real-Time Packet Processing: Data Plane vs. Control Plane
Data Plane – transit packets (aka the fast path).
Control Plane – packets for the router itself (routing, management, exceptions):
• routing/control plane = routing and vital functions (OSPF, BGP, LDP, NTP, keepalives, ...)
• management plane = access to the router (telnet, SSH, SNMP, ...)
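The data-plane/control-plane split amounts to a classification step on every received packet: transit traffic stays on the fast path, while packets addressed to the router itself, or needing exception handling, are punted to process level. A minimal sketch of that punt decision (hypothetical field and set names, not IOS code):

```python
ROUTER_ADDRESSES = {"192.0.2.1", "203.0.113.1"}   # addresses owned by the router (assumed)
CONTROL_PROTOCOLS = {"ospf", "bgp", "ldp", "ntp", "snmp", "telnet", "ssh"}

def classify(packet: dict) -> str:
    """Return 'control' for punted packets, 'data' for fast-path transit."""
    if packet["dst"] in ROUTER_ADDRESSES:
        return "control"      # routing/management traffic for the router itself
    if packet.get("proto") in CONTROL_PROTOCOLS:
        return "control"      # e.g. an OSPF hello arriving on an interface
    if packet.get("exception"):
        return "control"      # TTL expiry, IP options, fragmentation, ...
    return "data"             # transit packet: forward at interrupt level

print(classify({"dst": "198.51.100.7", "proto": "tcp"}))   # -> data
```

In a software router both paths run on the one CPU; the point of the later slides is moving the `"data"` branch into dedicated hardware.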
Real-Time Packet Processing: NP (Network Processor) – S/W vs. H/W Router
[diagram: the CPU runs IOS and sees only control packets; the NP runs u-code, has its own packet DRAM, and handles data packets]
NP (Network Processor) – handles the data plane instead of IOS (platform-dependent).
CPU – runs IOS and handles only the control plane (platform-independent).
Slow Path – IOS on the CPU can still forward packets the NP cannot handle (e.g. exceptions, non-IP protocol routing, unsupported features).

Real-Time Packet Processing: Routing & Forwarding Engine
[diagram: inside the NP, the NPU processes packet headers while the BQS ASIC moves packet bodies to and from packet DRAM]
BQS (Buffering, Queuing, Scheduling) ASIC – also called a TM (Traffic Manager) – handles memory access and QoS (the packet body).
NPU (Network Processing Unit) – handles only packet forwarding and operations (the packet header).

Summary – What Is Inside the Router?
BBB – basic building blocks:
• Processors
  – control plane: OS processor
  – data plane: network processor
• Memory
  – DRAM for OS memory and packet buffers
  – SRAM for caches
  – TCAM for fast lookups
• Interconnects
  – bus
  – serial link
  – switch fabric
We do not care about what is visible on the router:
• chassis, fans, power supplies
• control ports – CON, AUX, BITS, alarms, disks
• data ports – LAN and WAN interfaces

"It is always something (corollary). Good, Fast, Cheap: Pick any two (you can't have all three)."
– RFC 1925, "The Twelve Networking Truths"
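The NPU/BQS division of labor can be sketched as a pipeline: only the leading bytes of each packet (the header) go through the forwarding step, while the whole packet sits in buffer memory until the scheduler releases it. A toy model of that split (all names hypothetical; the real scheduler is far richer than this round-robin):

```python
from collections import deque

HEADER_BYTES = 64                 # assumed: the NPU sees only the first 64 bytes

class TrafficManager:
    """Toy BQS/TM: buffers whole packets per queue, dequeues round-robin."""
    def __init__(self):
        self.queues = {}
    def enqueue(self, queue_id, packet):
        self.queues.setdefault(queue_id, deque()).append(packet)
    def schedule(self):
        # one pass over non-empty queues, releasing one packet each
        for qid, q in self.queues.items():
            if q:
                yield qid, q.popleft()

def npu_forward(header: bytes) -> int:
    """Toy NPU: map the first header byte to an output queue
    (a stand-in for a real FIB/ACL/QoS lookup)."""
    return header[0] % 4          # hypothetical 4 output queues

tm = TrafficManager()
for pkt in [bytes([9]) + b"payload-a", bytes([6]) + b"payload-b"]:
    qid = npu_forward(pkt[:HEADER_BYTES])   # NPU touches only the header
    tm.enqueue(qid, pkt)                    # TM buffers the whole packet
for qid, pkt in tm.schedule():
    print(qid, pkt)
```

The design point being illustrated: forwarding logic and bulk memory bandwidth are separate problems, so the hardware splits them into separate chips.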
Packet Processing Technology Primer: Performance vs. Flexibility
CPU (Central Processing Unit)
• multi-purpose processors (CISC, RISC)
• high s/w flexibility [weeks]
• low performance stability [ca. 1 Mpps today]
• usage example: access routers (ISRs)
ASIC (Application-Specific Integrated Circuit)
• mono-purpose, hard-wired functionality
• low engineering flexibility [2 years]
• high performance stability [over 200 Mpps today]
• usage example: switches (Catalysts), core routers
NP (Network Processor) = "something in between"
[diagram: an array of 16 micro-engines (IM 0–15) with memory columns, fed by an input demux with feedback and drained by an output mux]
• performance + programmability
• moderate s/w flexibility [months]
• moderate and stable performance [4 Mpps – 40 Mpps+]
• can be expensive and power-hungry, and can have little code memory
• usage: fast, feature-rich edge and aggregation

Memory Technology Primer: Capacity vs. Access Speed
Two basic memory technologies are in use today:
• Static RAM (SRAM, SSRAM)
• Dynamic RAM (DRAM, EDO DRAM, SDRAM, DDR)

          SRAM                     DRAM
Power:    high                     low
Speed:    high [10–20 ns]          low [40–60 ns]
Density:  low [e.g. 16M per chip]  high [e.g. 1G per chip]

Interconnects Technology Primer: Capacity vs. Complexity
• Bus
  – half-duplex, shared medium
  – for example PCI [800 Mbps to 25 Gbps+ today]
  – simple and cheap
• Serial Lane (point-to-point link set)
  – dedicated, unidirectional or full-duplex line
  – for example SPI-4.2 [11.2 Gbps+ today]
• Switch Fabric (crossbar, exchange)
  – non-blocking, full-duplex, any-to-any
  – for example GSR, CRS [40 Gbps to 9.6 Tbps+ today]
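The Mpps figures above follow directly from line rate and packet size: a minimum-size 64-byte Ethernet frame occupies 84 bytes on the wire (add 8 bytes of preamble and a 12-byte interframe gap), so 10 Gbps of minimum-size frames is roughly 14.88 Mpps – far beyond the ~1 Mpps a general-purpose CPU of the era could forward. A quick check of the arithmetic:

```python
def mpps(line_rate_gbps: float, frame_bytes: int = 64) -> float:
    """Packets per second (in millions) at line rate.
    Each Ethernet frame costs 20 extra wire bytes: 8B preamble + 12B interframe gap."""
    wire_bits = (frame_bytes + 20) * 8
    return line_rate_gbps * 1e9 / wire_bits / 1e6

print(round(mpps(1), 2))     # 1 GE  -> 1.49 Mpps
print(round(mpps(10), 2))    # 10 GE -> 14.88 Mpps
print(round(mpps(40), 2))    # 40 GE -> 59.52 Mpps
```

This is why the middle of the deck is about ASICs and NPs: the lookup budget per packet at 10 Gbps is well under 100 ns.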
Example: The Lookup Problem – Memory vs. Processing
TCAM (Ternary Content Addressable Memory)
• SRAM with a comparator at each cell
• 1 step – very fast, but very expensive
• parallel, order-independent lookups (ACL, QoS, NetFlow, even FIB)
• query = content and mask; result = address
[diagram: TCAM entries such as 192.168.200.xxx -> 802; a query for 192.168.200.111 returns 802; results can be a next hop, punt, drop, glean, cache or host route]
Tree or Serial Lookup
• stride choices: 8-8-8-8 used by generic IOS, 16-8-8 by the C12000, 11-8-5-8 by the C10K
• memory vs. speed tradeoff! – could be 8-1-1-1-1-1-1-1-1-1-1-1... where SRAM is scarce
• can also be used for ACL, uRPF, accounting
[diagram: a prefix tree rooted over 10.0.0.0, 54.0.0.0 and 192.0.0.0, down to host routes and load-shared next hops]

Chapter 2 – Router Anatomy

Fundamental Building Blocks (diagram legend)
• simplex and duplex serial link sets; active and backup backplane connections
• module, card; I/O module (hardware module with I/O interfaces)
• bus; switch fabric (any-to-any full-duplex switching element)
• mux/demux, fabric interface (typically including a tiny buffer)
• F – Forwarding ASIC (a complex of hardware elements and SRAMs handling the data plane)
• Q – Queuing ASIC (BQS – Buffering/Queuing/Scheduling, TM – Traffic Manager, etc.)
• NP – Network Processor (programmable hardware element handling the data plane)
• buff. – packet buffering, packet memory, QoS point
• Control-plane element: CPU + DRAM + Flash + NVRAM with control interfaces; the CPU is a general-purpose microprocessor running the OS (IOS)
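The stride tradeoff is easy to see in code: an 8-8-8-8 trie resolves any IPv4 address in at most four memory reads, at the cost of 256-entry nodes, while narrower strides need less memory but more reads. A longest-prefix-match sketch over a multibit trie (simplified: no path compression, dict-based nodes, and shorter prefixes must be inserted before longer ones – a real FIB stores the prefix length to arbitrate overwrites):

```python
import ipaddress

STRIDES = (8, 8, 8, 8)   # generic-IOS style; C12000 used 16-8-8, C10K 11-8-5-8

def ip(s):
    return int(ipaddress.IPv4Address(s))

class TrieNode:
    def __init__(self):
        self.children = {}    # sparse stand-in for a 2**stride pointer array
        self.next_hop = None

def insert(root, prefix, plen, next_hop):
    """Install prefix/plen (IPv4 as int) with the given next hop."""
    node, consumed = root, 0
    for width in STRIDES:
        if consumed == plen:
            break
        if consumed + width <= plen:
            idx = (prefix >> (32 - consumed - width)) & ((1 << width) - 1)
            node = node.children.setdefault(idx, TrieNode())
            consumed += width
        else:
            # prefix ends inside this stride: expand it to every covered index
            r = plen - consumed
            top = (prefix >> (32 - consumed - r)) & ((1 << r) - 1)
            for low in range(1 << (width - r)):
                child = node.children.setdefault((top << (width - r)) | low, TrieNode())
                child.next_hop = next_hop
            return
    node.next_hop = next_hop

def lookup(root, addr):
    """Longest-prefix match in at most len(STRIDES) memory reads."""
    node, consumed, best = root, 0, None
    for width in STRIDES:
        idx = (addr >> (32 - consumed - width)) & ((1 << width) - 1)
        node = node.children.get(idx)
        if node is None:
            return best
        if node.next_hop is not None:
            best = node.next_hop
        consumed += width
    return best
```

With `STRIDES = (8,) + (1,) * 24` the same code becomes the low-SRAM variant from the slide: far smaller nodes, but up to 25 reads per lookup.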
Cisco 7200 (1990s) – Software Router
• data plane = IOS interrupt level
• control plane = IOS processes
[diagram: NPE-200 with CPU, IOS and packet buffer; I/O controller with CON/AUX, Flash and FE behind a PCI bridge; two 600 Mbps PCI buses feeding 1, 4 or 6 PA slots]

Cisco 7200 – NPE-G1/G2 Upgrade
• no architectural change, just a faster CPU and memory
[diagram: NPE-G2 with on-board 4x GE, crypto engine and CON/AUX/Flash; same 600 Mbps PCI buses and PA slots]

Cisco ESR10000 – Hardware Router
• data plane = PXF chip (u-code)
• control plane = CPU with IOS
• DMA chip for packet memory
[diagram: active and standby PRE, each with NP, queuing ASIC and IOS; 1.6 Gbps links to half-height and full-height linecards, 11 Gbps to a SIP-600 (2-slot) holding SPAs; 8 full-height slots on the ESR 10008]

Cisco ASR1000 – Split Data and Control Plane
• RP = control plane only
• ESP = data plane (QFP chip): 20 Gbps, 16 Mpps, C-programmable
[diagram: ASR1006 with active/standby RP (IOS) and ESP (NP, buffer, encryption coprocessor); 11.2 Gbps links to 1–3 SIP slots holding SPAs]

"It is more complicated than you think."
– RFC 1925, "The Twelve Networking Truths"

Cisco 7200: Centralized Single-Processor Architecture
• one CPU for everything
[diagram: NPE-G2 as above, with a VSA crypto module on a PCI bus]

Cisco 7500: Distributed Multi-Processor Architecture
• distributed, parallel CPUs
[diagram: active and standby RSP, each with IOS and memd; VIP linecards, each with its own CPU, IOS and packet buffer, holding PAs; two 1 Gbps CyBuses; 3, 5 or 7 VIP slots]
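The step from the 7200 to the 7500 is the step from one CPU doing everything to a central route processor that computes routes while each linecard forwards from its own local table copy. A toy model of that distribution (class and port names are illustrative, not Cisco software):

```python
class Linecard:
    """Toy VIP: keeps a local FIB copy and forwards autonomously."""
    def __init__(self, slot):
        self.slot = slot
        self.fib = {}                                 # local forwarding table
    def forward(self, prefix):
        return self.fib.get(prefix, "punt-to-RSP")    # slow path if no entry

class RouteProcessor:
    """Toy RSP: runs the routing protocols centrally and pushes
    every FIB change down to all linecards."""
    def __init__(self):
        self.rib = {}
        self.linecards = []
    def install_route(self, prefix, next_hop):
        self.rib[prefix] = next_hop
        for lc in self.linecards:                     # distribute the update
            lc.fib[prefix] = next_hop

rsp = RouteProcessor()
rsp.linecards = [Linecard(slot) for slot in range(3)]
rsp.install_route("10.0.0.0/8", "Serial0/1")
print(rsp.linecards[2].forward("10.0.0.0/8"))         # forwarded locally
```

The payoff is parallelism: forwarding capacity grows with the number of linecards instead of being capped by one central CPU, which is exactly the property the 12000-series fabric architecture then scales up.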
Cisco 12000 – Switch Fabric: Architectural Evolution
Distributed forwarding architecture – up to 600 Gbps today.
[diagram: active and standby RP running IOS; switch fabric cards – redundant CSCs with arbiters plus SFCs; linecard generations from Engine 0 (622 Mbps) through Engine 2 (3 Gbps), Engine 5 (10 Gbps) and Engine 6 (40 Gbps), each combining NPs or forwarding ASICs with queuing ASICs and packet buffers]
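A crossbar fabric like the 12000's is non-blocking only if traffic for one busy output cannot trap cells headed elsewhere; the standard fix is virtual output queuing (one queue per output at each input) plus a central arbiter granting at most one cell per output per cycle, the role the CSC's arbiter plays. A toy sketch of that idea (a greedy single-iteration match, much simpler than a real iSLIP-style arbiter):

```python
from collections import deque

class CrossbarFabric:
    """Toy N-port crossbar with virtual output queues (VOQs).
    Each input keeps one queue per output, so a contended output
    never blocks cells headed elsewhere (no head-of-line blocking)."""
    def __init__(self, ports):
        self.ports = ports
        self.voq = [[deque() for _ in range(ports)] for _ in range(ports)]

    def submit(self, inp, outp, cell):
        self.voq[inp][outp].append(cell)

    def arbitrate(self):
        """One fabric cycle: at most one cell leaves each input,
        at most one cell arrives at each output."""
        delivered = {}
        used_outputs = set()
        for inp in range(self.ports):
            for outp in range(self.ports):
                if self.voq[inp][outp] and outp not in used_outputs:
                    delivered[outp] = self.voq[inp][outp].popleft()
                    used_outputs.add(outp)
                    break          # this input has transmitted; next input
        return delivered

fabric = CrossbarFabric(4)
fabric.submit(0, 2, "cell-A")      # input 0 -> output 2
fabric.submit(1, 2, "cell-B")      # contends with cell-A for output 2
fabric.submit(1, 3, "cell-C")      # but input 1 can still reach output 3
print(fabric.arbitrate())          # -> {2: 'cell-A', 3: 'cell-C'}
```

With a single FIFO per input, cell-C would have waited behind cell-B; the VOQs let it cross the fabric in the first cycle, which is what makes the crossbar scale to the any-to-any, full-duplex figures quoted earlier.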