Network Processors: Building Block for Programmable Networks

Raj Yavatkar, Chief Software Architect, Intel® Internet Exchange Architecture
[email protected]

Outline
- IXP 2xxx hardware architecture
- IXA software architecture
- Usage questions
- Research questions

IXP Network Processors
- Microengines
  - RISC processors optimized for packet processing
  - Hardware support for multi-threading
- Embedded StrongARM/XScale
  - Runs an embedded OS and handles exception tasks
(Block diagram: control processor (StrongARM), microengines ME 1 through ME n, media/fabric interface, SRAM and DRAM interfaces.)

IXP: A Building Block for Network Systems
- Example: IXP2800
  - 16 microengines + XScale core
  - Up to 1.4 GHz ME speed
  - 8 HW threads per ME
  - 4K control store per ME
  - Multi-level memory hierarchy
  - Multiple inter-processor communication channels
- NPU vs. GPU tradeoffs
  - Reduced core complexity
  - No hardware caching
  - Simpler instructions -> shallow pipelines
  - Multiple cores with HW multi-threading per chip
(Block diagram: a 4 x 4 array of multi-threaded (x8) MEv2 microengines numbered 1-16, the Intel XScale core, RDRAM controller, QDR SRAM controller, scratch memory, hash unit, PCI, media/switch-fabric interface, and per-engine memory, CAM, and signals interconnect.)

IXP 2400 Block Diagram
(Figure: IXP 2400 block diagram.)

XScale Core Processor
- Compliant with the ARM V5TE architecture
  - Support for ARM's Thumb instructions
  - Support for Digital Signal Processing (DSP) enhancements to the instruction set
  - Intel's improvements to the internal pipeline improve the memory-latency-hiding abilities of the core
  - Does not implement the floating-point instructions of the ARM V5 instruction set

Microengines: RISC Processors
- The IXP2800 has 16 microengines, organized into 4 clusters (4 MEs per cluster)
- ME instruction set specifically tuned for processing network data
  - Arithmetic and logical operations operate at bit, byte, and long-word levels
  - These can be combined with shift and rotate operations in single instructions
  - Integer multiplication is provided; no division or floating-point operations
- 40-bit x 4K control store
- Six-stage pipeline; an instruction takes one cycle to execute on average
- Each ME has eight hardware-assisted threads of execution
  - Can be configured to use either all eight threads or only four
- A non-preemptive hardware thread arbiter swaps between threads in round-robin order

MicroEngine v2
(Block diagram of the MEv2 datapath: two banks of 128 GPRs, 128 next-neighbor registers, 128 D and 128 S transfer-in registers, 128 D and 128 S transfer-out registers, 640-word local memory, a 4K-instruction control store, a 32-bit execution datapath with add/shift/logical, multiply, find-first-bit and CRC units, a pseudo-random number generator, a 16-entry CAM with LRU status, local CSRs, timers and a timestamp, connected to the D/S push and pull buses and to the previous and next neighbor.)

Why Multi-threading?
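The payoff of hardware multi-threading is easiest to see in code. Below is a minimal sketch of one microengine thread written in a Microengine-C style; the intrinsic names (sram_read, ctx_wait), the SIGNAL type, and the helper process_packet are illustrative placeholders rather than the exact library API. The point is the structure: the thread issues a memory reference, names a completion signal, and voluntarily yields, so the other hardware threads of the ME run during the long SRAM access and the pipeline stays busy.

```c
/* Sketch only: intrinsic names, the SIGNAL type, and process_packet are
 * illustrative stand-ins, not the exact Microengine C API.              */
#define RING_SIZE 64

struct pkt_descriptor { unsigned int buf_handle, offset, size; };

void process_packet(struct pkt_descriptor *d);          /* assumed helper */

__declspec(sram) struct pkt_descriptor ring[RING_SIZE]; /* descriptors in SRAM */

void rx_thread(void)
{
    struct pkt_descriptor desc;
    SIGNAL done;                 /* hardware signal number chosen by software */
    unsigned int slot = 0;

    for (;;) {
        /* Issue the SRAM read and name the signal to raise on completion. */
        sram_read(&desc, &ring[slot], sizeof(desc), &done);   /* hypothetical */

        /* Yield: the non-preemptive arbiter swaps in the next ready thread
         * (round-robin) and resumes this one only after 'done' arrives.    */
        ctx_wait(&done);                                      /* hypothetical */

        process_packet(&desc);   /* compute-only work stays in the GPRs */
        slot = (slot + 1) % RING_SIZE;
    }
}
```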
Packet Processing Using Multi-threading Within a MicroEngine
(Figure.)

Registers Available to Each ME
- Four different types of registers
  - General-purpose, SRAM transfer, DRAM transfer, and next-neighbor (NN)
  - Also, access to many CSRs
- 256 32-bit GPRs
  - Can be accessed in thread-local or absolute mode
- 256 32-bit SRAM transfer registers
  - Used to read/write all functional units on the IXP2xxx except the DRAM
- 256 32-bit DRAM transfer registers
  - Divided equally into read-only and write-only registers
  - Used exclusively for communication between the MEs and the DRAM
- Benefit of having separate transfer registers and GPRs
  - The ME can continue processing with GPRs while other functional units read and write the transfer registers

Next-Neighbor Registers
- Each ME has 128 32-bit next-neighbor registers
  - Data written into these registers becomes available in the next microengine (numerically)
  - E.g., if ME 0 writes data into a next-neighbor register, ME 1 can read the data from its next-neighbor register, and so on
- In another mode, these registers are used as extra GPRs
  - Data written into a next-neighbor register is read back by the same microengine

Generalized Thread Signaling
- Each ME thread has 15 numbered signals
- Most accesses to functional units outside of the ME can raise any one of these signal numbers
- The signal number generated for any functional unit access is under the programmer's control
- An ME thread can test for the presence or absence of any of these signals
  - Used to branch on signal presence
  - Or to tell the thread arbiter that the thread is ready to run only after the signal is received
- Benefit of the approach
  - Software can have multiple outstanding references to the same unit and wait for all of them to complete using different signals (see the sketch after the memory table below)

Different Types of Memory

| Type of memory  | Logical width (bytes) | Size in bytes | Approx. unloaded latency (cycles) | Special notes                                  |
|-----------------|-----------------------|---------------|-----------------------------------|------------------------------------------------|
| Local to ME     | 4                     | 2560          | 3                                 | Indexed addressing, post-increment/decrement   |
| On-chip scratch | 4                     | 16K           | 60                                | Atomic ops; 16 rings with atomic get/put       |
| SRAM            | 4                     | 256M          | 150                               | Atomic ops; 64-element q-array                 |
| DRAM            | 8                     | 2G            | 300                               | Direct path to/from MSF                        |
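As a concrete illustration of the signaling model just described (and of why the latencies in the table matter), the sketch below overlaps an SRAM and a DRAM reference from a single thread by tagging each with its own signal and then waiting for both. The intrinsic and type names (sram_read, dram_read, wait_for_all, SIGNAL) and the classify helper are hypothetical stand-ins for the real Microengine C primitives, and the two addresses are assumed to come from earlier processing.

```c
/* Sketch only: intrinsics, the SIGNAL type, and classify() are illustrative. */
void classify(unsigned int *meta, unsigned long long *hdr);   /* assumed downstream code */

void fetch_and_classify(unsigned int descr_addr, unsigned int pkt_addr)
{
    SIGNAL sram_done, dram_done;     /* two distinct signal numbers               */
    unsigned int meta[4];            /* arrives via SRAM transfer registers       */
    unsigned long long hdr[4];       /* arrives via DRAM transfer registers       */

    /* Issue both references back to back; each names its own signal,
     * so the ~150-cycle SRAM and ~300-cycle DRAM accesses overlap.      */
    sram_read(meta, descr_addr, sizeof(meta), &sram_done);    /* hypothetical */
    dram_read(hdr,  pkt_addr,   sizeof(hdr),  &dram_done);    /* hypothetical */

    /* Tell the arbiter this thread is runnable only once BOTH signals
     * have arrived; other threads of the ME execute in the meantime.    */
    wait_for_all(&sram_done, &dram_done);                     /* hypothetical */

    classify(meta, hdr);
}
```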
IXP2800 Features
- Half-duplex OC-192 / 10 Gb/sec Ethernet network processor
- XScale core
  - 700 MHz (half the ME clock)
  - 32 Kbytes instruction cache / 32 Kbytes data cache
- Media / switch fabric interface
  - 2 x 16-bit LVDS transmit and receive
  - Configured as CSIX-L2 or SPI-4
- PCI interface
  - 64-bit / 66 MHz interface for control
  - 3 DMA channels
- QDR interface (with parity)
  - Four 36-bit SRAM channels (QDR or co-processor)
  - Network Processor Forum LookAside-1 standard interface
  - Using a "clamshell" topology, both memory and a co-processor can be instantiated on the same channel
- RDR interface
  - Three independent Direct Rambus DRAM interfaces
  - Supports 4 banks or 16 interleaved banks
  - Supports 16/32-byte bursts

Hardware Features to Ease Packet Processing
- Ring buffers
  - For inter-block communication/synchronization
  - Producer-consumer paradigm
- Next-neighbor registers and signaling
  - Allow single-cycle transfer of context to the next logical microengine, dramatically improving performance
  - Simple, easy transfer of state
- Distributed data caching within each microengine
  - Allows all threads to keep processing even when multiple threads are accessing the same data

Outline
- IXP 2xxx hardware architecture
- IXA software architecture
- Usage questions
- Research questions

IXA Portability Framework: Goals
- Accelerate software development for the IXP family of network processors
- Provide a simple and consistent infrastructure for writing networking applications
- Enable reuse of code across applications written to the framework
- Improve portability of code across the IXP family
- Provide an infrastructure for third parties to supply code
  - For example, to support TCAMs

IXA Software Framework
(Layered diagram: external processors run control-plane protocol stacks over the Control Plane PDK; the XScale core runs core components written in C/C++ over the Core Component Library and Resource Manager Library; the microengines run microblocks written in Microengine C, organized as a microengine pipeline over the Microblock Library, Protocol Library, Utility Library, and Hardware Abstraction Library.)

Software Framework on the MEv2
- Microengine C compiler (language)
- Optimized data plane libraries
  - Microcode and MicroC libraries for commonly used functions
- Microblock programming model
  - Enables development of modular code building blocks
  - Defines the data flow model, common data structures, state sharing between code blocks, etc.
  - Ensures consistency and improves reuse across different applications
- Core component library
  - Provides a common way of writing slow-path components that interact with their counterpart fast-path code
- Microblocks and example applications written to the microblock programming model
  - IPv4/IPv6 forwarding, MPLS, DiffServ, etc.

Microengine C Compiler
- C language constructs
  - Basic types, pointers, bit fields
- In-line assembly code support
- Aggregates
  - Structs, unions, arrays
- Intrinsics for specialized ME functions
- Different memory models and special constructs for data placement (e.g., __declspec(sdram) struct msg_hdr hd)
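The __declspec(sdram) example above generalizes to the other levels of the memory hierarchy listed in the table earlier in the talk. The sketch below shows the idea; the exact region keywords accepted (local_mem, scratch, sram, sdram) vary by compiler release, so treat the spellings as illustrative, and the data structures are invented for the example.

```c
/* Sketch only: region keywords and data structures are illustrative. */
struct route_entry { unsigned int prefix, next_hop_id; };
struct msg_hdr     { unsigned int type, len; };

__declspec(local_mem) unsigned int flow_cache[64];      /* per-ME local memory, ~3 cycles    */
__declspec(scratch)   unsigned int stats[256];          /* on-chip scratch, ~60 cycles       */
__declspec(sram)      struct route_entry rt[1 << 16];   /* off-chip SRAM, ~150 cycles        */
__declspec(sdram)     struct msg_hdr hd;                /* DRAM, ~300 cycles, bulk payloads  */
```

Keeping small, frequently touched state (the flow cache, counters) on-chip while bulk tables and payloads live off-chip mirrors the latency hierarchy in the memory table.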
What is a Microblock?
- Data plane packet processing on the microengines is divided into logical functions called microblocks
- Microblocks are coarse-grained and stateful
- Examples
  - 5-tuple classification
  - IPv4 forwarding
  - NAT
- Several microblocks running on a microengine thread can be combined into a microblock group
  - A microblock group has a dispatch loop that defines the dataflow for packets between microblocks (see the sketch at the end of this section)
  - A microblock group runs on each thread of one or more microengines
- Microblocks can send and receive packets to/from an associated XScale core component

Core Components and Microblocks
(Diagram: core components on the XScale core sit above the Core Component Library and Resource Manager Library; microblocks on the microengines sit above the Microblock Library; the legend distinguishes Intel/3rd-party blocks, user-written code, and core libraries.)

Simplified Packet Flow (IPv6 example)
Rx:
a. Put the packet in DRAM
b. Put the descriptor in SRAM
c. Queue the handle on a ring
d. Pull meta-data into GPRs
e. Set DL state in GPRs
f. Set next_blk = Classify
Classify:
g. Get headers into HCache
h. Set HeaderType to IPv6
i.
(Diagram: packet buffers in DRAM hold the Ethernet header, IPv6 header, and payload; buffer descriptors in SRAM carry offset, size, and flags; a route table maps a prefix such as 3FFF020304 to a next-hop-id, and the next-hop table supplies the interface# and DMAC.)
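To tie the microblock model and the packet flow together, here is a minimal sketch of the dispatch loop a microblock group might run on each thread. Every identifier (dl_source, dl_sink, the BLK_* values, classify, ipv6_forward) is hypothetical; the real IXA dispatch-loop helpers and microblock entry points differ. The control structure is the point: the loop pulls a packet, then hands it from microblock to microblock according to the next-block id each one returns, matching the Rx -> Classify -> forward sequence sketched above.

```c
/* Sketch only: all identifiers are hypothetical; the real IXA dispatch-loop
 * macros and microblock entry points are different.                         */
enum next_block { BLK_CLASSIFY, BLK_IPV6_FWD, BLK_QM, BLK_DROP };

enum next_block classify(void);       /* microblock: steps g-h above            */
enum next_block ipv6_forward(void);   /* microblock: route-table lookup         */
void dl_source(void);                 /* dequeue handle, pull meta-data (c-e)   */
void dl_sink(enum next_block blk);    /* enqueue to the queue manager, or drop  */

void dispatch_loop(void)
{
    for (;;) {
        dl_source();                           /* step f selects the first block */
        enum next_block next = BLK_CLASSIFY;

        while (next != BLK_QM && next != BLK_DROP) {
            switch (next) {
            case BLK_CLASSIFY:  next = classify();     break;
            case BLK_IPV6_FWD:  next = ipv6_forward(); break;
            default:            next = BLK_DROP;       break;
            }
        }
        dl_sink(next);
    }
}
```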