NetTLP: A Development Platform for PCIe Devices in Software Interacting with Hardware



Yohei Kuga and Ryo Nakamura, The University of Tokyo; Takeshi Matsuya, Keio University; Yuji Sekiya, The University of Tokyo
https://www.usenix.org/conference/nsdi20/presentation/kuga

This paper is included in the Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI '20), February 25–27, 2020, Santa Clara, CA, USA. ISBN 978-1-939133-13-7.

Abstract

Observability on data communication is always essential for prototyping, developing, and optimizing communication systems. However, it is still challenging to observe transactions flowing inside PCI Express (PCIe) links despite them being a key component for emerging peripherals such as smart NICs, NVMe, and accelerators. To offer practical observability on PCIe and to prototype PCIe devices productively, we propose NetTLP, a development platform for software PCIe devices that can interact with hardware root complexes. On the NetTLP platform, software PCIe devices on top of IP network stacks can send and receive Transaction Layer Packets (TLPs) to and from hardware root complexes or other devices through Ethernet links, an Ethernet and PCIe bridge called a NetTLP adapter, and PCIe links. This paper describes the NetTLP platform and its implementation: the NetTLP adapter and LibTLP, which is a software implementation of the PCIe transaction layer. Moreover, this paper demonstrates the usefulness of NetTLP through three use cases: (1) observing TLPs sent from four commercial PCIe devices, (2) a 400 LoC software Ethernet NIC implementation that performs as an actual NIC for a hardware root complex, and (3) physical memory introspection.

Table 1: Comparison of platforms for prototyping PCIe devices from the viewpoints of software and hardware.

                              PCIe device: Software    PCIe device: Hardware
    Root complex: Software    QEMU                     −
    Root complex: Hardware    NetTLP                   FPGA/ASIC

1 Introduction

PCI Express (PCIe) is a widely used I/O interconnect for storage, graphic, network, and accelerator devices [16, 24, 25]. Not limited to connecting peripheral devices, some high-performance interconnects adopt the PCIe protocol [5, 22, 35]. Moreover, specifications of future interconnects are designed by extending the PCIe protocol [10, 14]. Such versatility of PCIe is derived from its packet-based data communication and the flexibility of PCIe topology. The PCIe specification defines the building blocks comprising PCIe topologies: endpoints, switches, bridges, and root complexes. PCIe packets flow through point-to-point PCIe links between the blocks, and motherboard manufacturers can expand PCIe topologies with these blocks depending on the use cases.

By contrast to the spread of PCIe, it is still difficult for researchers and software developers to observe PCIe and prototype PCIe devices, although both are crucial for optimizing performance and developing future PCIe devices. Observing PCIe transactions is difficult because the transactions are confined in hardware. PCIe is not just a simple fat pipe between hardware elements; it also has several features for achieving high-performance communication, i.e., hardware interrupts, virtualization, and CPU cache operations. Utilizing these features is important for exploiting PCIe efficiently; however, the concrete behaviors of the transactions in PCIe links cannot be determined unless special capture devices for observing PCIe hardware are used.

In addition to the observation problem, prototyping PCIe devices lacks productivity. Field Programmable Gate Array (FPGA) is a major platform for prototyping PCIe devices [26, 40, 44, 45, 50]. However, developing all parts of a PCIe device on an FPGA still requires significant effort, such as the great devotion of the NetFPGA project [52] for networking devices. Another approach is to adopt virtualization or simulation, e.g., GEM5 [11, 23] and RTL simulators [4, 31]. QEMU [9], which is a famous virtualization platform, can be used for prototyping PCIe devices from the software perspective. QEMU enables researchers and developers to prototype new hardware architectures; however, its environment is fully softwarized. QEMU devices can communicate only with the emulated root complex and cannot communicate with the physical root complex and other hardware connected to the root complex.

The goal of this paper is to bridge the gap between software and hardware for PCIe, as shown in Table 1. Our proposed platform, called NetTLP, offers softwarized PCIe endpoints that can interact with hardware root complexes. By using NetTLP, researchers and software developers can prototype their PCIe devices as software PCIe endpoints and test the software devices with actual hardware root complexes through the PCIe protocol. This hybrid platform of software and hardware simultaneously improves both the observability of PCIe transactions and the productivity of prototyping PCIe devices.

Figure 1: A PCIe topology and three communication models (CPU-to-Device/Device-to-CPU, Device-to-Device, and Remote DMA).

The key technique for connecting softwarized PCIe endpoints to hardware root complexes is to separate the PCIe transaction layer into software and put the software transaction layer on top of IP network stacks. Our FPGA-based add-in card, called the NetTLP adapter, delivers Transaction Layer Packets (TLPs) to a remote host over Ethernet and IP networks. The substance of the NetTLP adapter is implemented in software on the IP network stack of the remote host with LibTLP, which is a software implementation of the PCIe transaction layer. The NetTLP platform, consisting of the adapter and the library, enables software PCIe devices on IP network stacks to interact with hardware root complexes through the NetTLP adapter. Moreover, TLPs delivered over Ethernet links can be easily observed by IP networking techniques such as tcpdump and Wireshark.
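Because the adapter carries TLPs inside ordinary Ethernet/IP packets, the header fields of a captured transaction can be decoded with a few lines of code. The sketch below is only an illustration: the layout of the first three header DWs of a 3DW memory request follows the PCIe specification, but the surrounding NetTLP encapsulation (UDP port, any adapter-specific header preceding the TLP) is not described in this excerpt, so the function assumes it is handed the raw TLP bytes. In spirit, this is the kind of decoding that a TLP-aware tcpdump or Wireshark dissector performs on the captured Ethernet frames.

    import struct

    def parse_mem_request_tlp(tlp: bytes) -> dict:
        """Decode the first three DWs of a PCIe memory request TLP (big-endian on the wire)."""
        dw0, dw1, dw2 = struct.unpack_from(">III", tlp, 0)

        fmt      = (dw0 >> 29) & 0x7          # 0b000 = MRd (3DW), 0b010 = MWr (3DW, with data)
        tlp_type = (dw0 >> 24) & 0x1f         # 0b00000 = memory request
        length   = dw0 & 0x3ff                # payload length in DWs (0 encodes 1024)

        requester_id = (dw1 >> 16) & 0xffff   # bus/device/function that issued the request
        tag          = (dw1 >> 8) & 0xff      # lets the requester match completions to requests
        first_be     = dw1 & 0xf              # byte enables for the first DW

        address = dw2 & ~0x3                  # DW-aligned target address (32-bit addressing)

        kind = {0b000: "MRd", 0b010: "MWr"}.get(fmt, "other")
        return {"kind": kind, "type": tlp_type, "length_dw": length or 1024,
                "requester_id": hex(requester_id), "tag": tag,
                "first_be": first_be, "address": hex(address)}

    # A hand-built example: a 1-DW memory read of physical address 0x00100000
    # issued by a requester at 01:00.0 with tag 0x07 (all values hypothetical).
    example = struct.pack(">III",
                          (0b000 << 29) | 1,
                          (0x0100 << 16) | (0x07 << 8) | 0x0f,
                          0x00100000)
    print(parse_mem_request_tlp(example))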
In this paper, we describe NetTLP, a novel platform for software PCIe devices. To realize the platform, we investigate PCIe from the perspective of packet-based communication (§2), describe the approach to connecting software PCIe devices with hardware root complexes and processing TLPs in software (§3), and describe its implementation (§4). In addition to micro-benchmarks (§5), we demonstrate three use cases of NetTLP (§6): observing behaviors of a root complex and commercial devices at the TLP level, prototyping an Ethernet NIC in software interacting with a physical root complex, and physical memory introspection using NetTLP.

The contributions of this paper include the following:

• We propose a novel platform for prototyping PCIe devices in software, while the software devices can communicate with physical hardware such as root complexes, CPUs, memory, and other PCIe devices. This platform offers high productivity for prototyping PCIe devices with actual interactions with hardware.

• We provide observability of PCIe transactions confined in hardware by the softwarized PCIe endpoints on the IP network stack. Our modified tcpdump can distinguish the TLPs encapsulated in Ethernet by NetTLP, enabling us to easily capture TLPs in an IP networking manner.

• We present detailed observation results of PCIe transactions …

• We show a prototype of a nonexistent Ethernet NIC with NetTLP. This prototyping demonstrates the high productivity of the NetTLP platform; the NIC is implemented entirely in software, but it performs as an actual Ethernet NIC for a physical root complex.

• We demonstrate the possibility of developing memory introspection methods on NetTLP without implementing dedicated devices. As a proof of concept, we implemented two applications that gather process information from a remote host by DMA through NetTLP.

All source code for the hardware and software and the captured data described in this paper are publicly available [1].
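The last two contributions hint at how little code such a software endpoint needs. As a rough illustration of the memory-introspection use case, the following sketch issues a DMA read of the remote host's physical memory by emitting a memory read request TLP and collecting the completion data. It is not the paper's LibTLP API (whose interface is not shown in this excerpt); the UDP encapsulation, the adapter address and port, and the requester ID are all placeholder assumptions for illustration.

    import socket
    import struct

    ADAPTER_ADDR = ("192.0.2.1", 14198)   # placeholder address/port for the NetTLP adapter

    def dma_read(sock: socket.socket, phys_addr: int, length_dw: int, tag: int = 0) -> bytes:
        """Issue one MRd TLP toward the adapter and return the data DWs of the completion(s)."""
        dw0 = (0b000 << 29) | (length_dw & 0x3ff)   # MRd, 3DW header, no data payload
        dw1 = (0x0100 << 16) | (tag << 8) | 0xff    # placeholder requester ID, tag, byte enables (multi-DW read)
        dw2 = phys_addr & ~0x3                      # DW-aligned physical address
        sock.sendto(struct.pack(">III", dw0, dw1, dw2), ADAPTER_ADDR)

        data = b""
        while len(data) < length_dw * 4:
            tlp, _ = sock.recvfrom(4096)
            data += tlp[12:]                        # skip the 3-DW CplD header, keep the payload
        return data

    if __name__ == "__main__":
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(1.0)
        try:
            # Read 64 bytes of remote physical memory at 1 MiB, e.g., while walking kernel structures.
            print(dma_read(s, 0x00100000, length_dw=16).hex())
        except socket.timeout:
            print("no completion received (no NetTLP adapter reachable from this host)")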
2 Background

PCIe is not only an interconnect, but also a packet-based data communication network. As with IP networks, PCIe has a layering model composed of a physical layer, a data link layer, and a transaction layer. The data link layer delivers PCIe packets across one hop over a PCIe link, while the transaction layer is responsible for delivering TLPs from a PCIe endpoint to a PCIe endpoint across the PCIe links. The PCIe interconnect is composed of the following elements that are capable of supporting the layer functionalities: endpoints, switches, bridges, and root complexes. PCIe switches and root complexes route and forward PCIe packets in accordance with the addresses in memory-mapped I/O (MMIO) space or requester IDs. Any functionality of PCIe stands on this packet-based communication; e.g., MSI-X for hardware interrupts is implemented by memory writes to specific memory addresses. Because PCIe is such a packet-based network, PCIe topologies and their communication models are flexible, as depicted in Figure 1. In IP networks, we can easily prototype and implement any part of the networks, such as end hosts, switches, and routers, and observe packets flowing …
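The remark that MSI-X interrupts are just memory writes can be made concrete. The sketch below (again illustrative only, not taken from the paper) builds the posted memory write TLP a device would emit to raise one MSI-X vector; the message address and data are hypothetical values that the host OS would normally have programmed into the device's MSI-X table.

    import struct

    def build_msix_mwr_tlp(requester_id: int, msg_addr: int, msg_data: int) -> bytes:
        """Build a 3DW MWr TLP (32-bit addressing) carrying one DW of MSI-X message data."""
        dw0 = (0b010 << 29) | (0b00000 << 24) | 1        # MWr with data, length = 1 DW
        dw1 = (requester_id << 16) | (0x00 << 8) | 0x0f  # tag unused for posted writes, first BE = 1111
        dw2 = msg_addr & ~0x3                            # DW-aligned message address
        header = struct.pack(">III", dw0, dw1, dw2)
        payload = struct.pack("<I", msg_data)            # message data as a little-endian x86 host expects it
        return header + payload

    # Hypothetical values: a device at 01:00.0 signalling message data 0x45 to the
    # MSI address range used by x86 hosts.
    print(build_msix_mwr_tlp(0x0100, 0xfee00000, 0x00000045).hex())

On a platform like NetTLP, where the transaction layer runs in software, handing such a write to the adapter is, in principle, what lets a purely software device interrupt a real CPU.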
Recommended publications
  • Summit™ M5x Protocol Analyzer / Jammer for PCI Express® 5.0
    The Summit M5x is Teledyne LeCroy's highest performance PCI Express analyzer, offering both inline probing and direct tapping approaches to protocol capture - the best overall diversity in probing methods. Inline operation provides advanced features such as real-time error injection (jammer) for the PCI Express 4.0 specification; support for data rates of 2.5 GT/s, 5.0 GT/s, 8.0 GT/s, and 16.0 GT/s; full data capture on bidirectional link widths of x1, x2, x4, x8, and x16 for up to 16 GT/s; and up to 128 GB of trace memory. Direct tapping operation supports PCIe® 5.0 protocol analysis with speeds up to 32 GT/s at up to x8 link widths. The Summit M5x is ideal for high-performance protocol development of add-in boards, switches, servers and workstations, and for customers currently working on PCIe. Flexible hardware analyzer for CXL (Compute Express Link) links: the Summit M5x PCIe 5.0 Protocol Analyzer is a high-end analyzer that offers important analysis features for new application development. Support is provided for CXL.io, CXL.mem, and CXL.cache with full decoding from the FLIT layer to the CXL Message layers.
    Key Features - Find errors fast: one-button error check, fast upload speed, large trace memory, powerful triggering/filtering. See and understand the traffic: get useful information, more choices of data views, more ways to analyze data, custom decoding and reports. Data capture: 100% data capture at 32.0 GT/s, deep memory buffer.
  • HPC Technology Through SC20 (SC20를 통해 본 HPC 기술 동향)
    Electronics and Telecommunications Trends. HPC Technology Through SC20 (SC20를 통해 본 HPC 기술 동향). I.S. Eo (어익수, [email protected]), Principal Researcher, Supercomputing Technology Research Center; H.S. Mo (모희숙, [email protected]), Principal Researcher, Supercomputing Technology Research Center; Y.M. Park (박유미, [email protected]), Principal Researcher and Center Director, Supercomputing Technology Research Center; W.J. Han (한우종, [email protected]), Principal Researcher and Research Fellow, Artificial Intelligence Research Laboratory. ABSTRACT: High-performance computing (HPC) is the underpinning for many of today's most exciting new research areas, to name a few, from big science to new ways of fighting disease, to artificial intelligence (AI), to big data analytics, to quantum computing. This report captures the summary of a 9-day program of presentations, keynotes, and workshops at the SC20 conference, one of the most prominent events for sharing ideas and results in HPC technology R&D. Because of the exceptional situation caused by COVID-19, the conference was held entirely online from 11/9 to 11/19 2020, and interestingly drew more attention to using HPC to make a breakthrough in the area of vaccines and cures for COVID-19. The program brought together 103 papers from 21 countries, along with 163 presentations in 24 workshop sessions. The event covered several key areas in HPC technology, including new memory hierarchies and interconnects for different accelerators, evaluation of parallel programming models, as well as simulation and modeling in traditional science applications. Notably, there was increasing interest in AI and Big Data analytics as well. With this summary of the recent HPC trend, readers may find useful information to guide R&D directions for challenging new technologies and applications in the area of HPC.
  • DesignWare IP for Cloud Computing SoCs
    DesignWare IP for Cloud Computing SoCs Overview Hyperscale cloud data centers continue to evolve due to tremendous Internet traffic growth from online collaboration, smartphones and other IoT devices, video streaming, augmented and virtual reality (AR/VR) applications, and connected AI devices. This is driving the need for new architectures for compute, storage, and networking such as AI accelerators, Software Defined Networks (SDNs), communications network processors, and solid state drives (SSDs) to improve cloud data center efficiency and performance. Re-architecting the cloud data center for these latest applications is driving the next generation of semiconductor SoCs to support new high-speed protocols to optimize data processing, networking, and storage in the cloud. Designers building system-on-chips (SoCs) for cloud and high performance computing (HPC) applications need a combination of high-performance and low-latency IP solutions to help deliver total system throughput. Synopsys provides a comprehensive portfolio of high-quality, silicon-proven IP that enables designers to develop SoCs for high-end cloud computing, including AI accelerators, edge computing, visual computing, compute/application servers, networking, and storage applications. Synopsys’ DesignWare® Foundation IP, Interface IP, Security IP, and Processor IP are optimized for high performance, low latency, and low power, while supporting advanced process technologies from 16-nm to 5-nm FinFET and future process nodes. High-Performance Computing Today’s high-performance computing (HPC) solutions provide detailed insights into the world around us and improve our quality of life. HPC solutions deliver the data processing power for massive workloads required for genome sequencing, weather modeling, video rendering, engineering modeling and simulation, medical research, big data analytics, and many other applications.
  • Taking OCP to the Next Level – Workload Optimization. Krishna Paul, Principal Engineer, Data Center Group
    Taking OCP to the Next Level – Workload Optimization. Krishna Paul, Principal Engineer, Data Center Group.
    Open Compute: history of commitment and leadership - 2011 founding board member; +25 contributions and products; +80 enablements with partners of OCP.
    Getting to $10B in OCP deployment - workload optimized, open management, integrated solutions.
    New OCP compute platforms this year - Mount Olympus: platforms for Cascade Lake processor, 2S; next-gen platform: Cooper Lake processor, 4S (2x 2S) and 8S (4x 2S).
    Intel® high-density, cloud-optimized platform - 2U, 450 mm x 780 mm, 4S Intel® Xeon® Scalable processors, 48 DDR4 memory slots, SATA/SAS/NVMe 2.5" SSD drive bays; available in second half 2019.
    OCP cards support new AI accelerators - Intel® Nervana™ Neural Network Processor (NNP). For training: dedicated deep learning training acceleration, optimized memory and interconnects, in production in 2019. For inference: dedicated deep learning inference acceleration, 10 nm Intel® process node, in production in 2019. Intel is a proud partner of the community.
    Announcing a new CPU-to-device interconnect standard, Compute Express Link (CXL) - new CXL specification and consortium; memory coherent, high-speed interconnect; initial spec donated by Intel; Intel® believes there is an opportunity for an OCP working group to define new form factors for CXL; optimized stack with x16 PCI Express Gen 5 physical and electrical connection (32 GT/s); use cases include AI, networking, media, graphics and more; availability expected 2021.
    *Other names and brands may be claimed as property of others.
  • Intel FPGA Product Catalog – Devices: 10 nm Device Portfolio, Intel Agilex FPGA and SoC Overview
    INTEL® FPGA PRODUCT CATALOG, Version 19.3. Contents - Overview: Intel® FPGA Solutions Portfolio 1. Devices - 10 nm Device Portfolio: Intel Agilex™ FPGA and SoC Overview 2; Intel Agilex FPGA Features 4. Generation 10 Device Portfolio: Generation 10 FPGAs and SoCs 6; Intel Stratix® 10 FPGA and SoC Overview 7; Intel Stratix 10 FPGA Features 9; Intel Stratix 10 SoC Features 11; Intel Stratix 10 TX Features 13; Intel Stratix 10 MX Features 15; Intel Stratix 10 DX Features 17; Intel Arria® 10 FPGA and SoC Overview 20; Intel Arria 10 FPGA Features 21; Intel Arria 10 SoC Features 23; Intel Cyclone® 10 FPGA Overview 25; Intel Cyclone 10 GX FPGA Features 26; Intel Cyclone 10 LP FPGA Features 27; Intel MAX® 10 FPGA Overview 29; Intel MAX 10 FPGA Features 30. Acceleration Platforms and Solutions: Intel FPGA Programmable Acceleration Overview 61; Intel Acceleration Stack for Intel Xeon® CPU with FPGAs 62; Intel FPGA Programmable Acceleration Cards 63; Intel Programmable Acceleration Card with Intel Arria 10 GX FPGA 63; Intel FPGA Programmable Acceleration Card D5005 64; Intel FPGA Programmable Acceleration Card N3000 65; Intel FPGA Programmable Acceleration Card Comparison 66; Accelerated Workload Solutions 67. Design Tools, OS Support, and Processors: Intel Quartus® Prime Software 68; DSP Builder for Intel FPGAs 71; Intel FPGA SDK for OpenCL™ 72; Intel SoC FPGA Embedded Development Suite 73; SoC Operating System Support 74; Nios® II Processor 75; Nios II Processor Embedded Design Suite 76; Nios II Processor Operating System Support 28.
  • Exploiting Cache Side Channels on CPU-FPGA Cloud Platforms
    Exploiting Cache Side-Channels on CPU-FPGA Cloud Platforms (Cache-basierte Seitenkanalangriffe auf CPU-FPGA Cloud Plattformen). Master's thesis in the Computer Science program of the Universität zu Lübeck, submitted by Thore Tiemann, issued and supervised by Prof. Dr. Thomas Eisenbarth. Lübeck, January 9, 2020. Abstract: Cloud service providers invest more and more in FPGA infrastructure to offer it to their customers (IaaS). FPGA use cases include the acceleration of artificial intelligence applications. While physical attacks on FPGAs got a lot of attention from researchers just recently, nobody has focused on micro-architectural attacks in the context of FPGAs yet. We take the first step in this direction and investigate the timing behavior of memory reads on two Intel FPGA platforms that were designed to be used in a cloud environment: a Programmable Acceleration Card (PAC) and a Xeon CPU with an integrated FPGA that has an additional coherent local cache. By using the Open Programmable Acceleration Engine (OPAE) and the Core Cache Interface (CCI-P), our setup represents a realistic cloud scenario. We show that an Acceleration Functional Unit (AFU) on either platform can, with the help of a self-designed hardware timer, distinguish which of the different parts of the memory hierarchy serves the response for memory reads. On the integrated platform, the same is true for the CPU, as it can differentiate between answers from the FPGA cache, the CPU cache, and the main memory. Next, we analyze the effect of the caching hints offered by the CCI-P on the location of written cache lines in the memory hierarchy and show that most cache lines get cached in the LLC of the CPU.
  • Rambus Buys Into CXL Interconnect Ecosystem with Two New Deals June 17 2021
    Market Insight Report Reprint. Rambus buys into CXL interconnect ecosystem with two new deals. June 17 2021. John Abbott. The addition of PLDA and AnalogX will speed up Rambus' initiative to enable memory expansion and pooling in disaggregated infrastructure through the emerging Compute Express Link interconnect ecosystem. CXL 2.0 is a PCIe-based technology intended to make it easier to connect CPUs with memory and specialist accelerators, and to separate memory from physical servers to improve memory bandwidth, capacity and efficiency. This report, licensed to Rambus, developed and provided by S&P Global Market Intelligence (S&P), was published as part of S&P's syndicated market insight subscription service. It shall be owned in its entirety by S&P. This report is solely intended for use by the recipient and may not be reproduced or re-posted, in whole or in part, by the recipient without express permission from S&P.
    Introduction: Rambus has acquired two companies, PLDA and AnalogX. The silicon memory IP company is looking to boost its portfolio of new products and IP by contributing to the emerging CXL (Compute Express Link) interconnect ecosystem. CXL 2.0, launched last November, is a PCIe-based technology intended to make it easier to connect CPUs with memory and specialist accelerators, and to separate memory from physical servers to improve memory bandwidth, capacity and efficiency. Rambus intends to combine its existing serial connection, memory and security skills and IP with the newly acquired IP and engineers from the two companies to produce CXL buffering chips and memory controllers, powering next-generation PCIe 6.0 and CXL 3.0 devices.
    Snapshot - Acquirer: Rambus; Targets: PLDA, AnalogX; Subsector: Semiconductor IP; Deal value: Not disclosed; Date announced: 16-Jun-21; Closing date, expected: Q3 2021; Advisers: None disclosed.
  • Dynamic Link Manager (For Windows®) User Guide
    Hitachi Command Suite Dynamic Link Manager (for Windows®) User Guide Document Organization Product Version Getting Help Contents MK-92DLM129-42 © 2014, 2017 Hitachi, Ltd. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording, or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Data Systems Corporation (collectively "Hitachi"). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in utilization of the Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials. "Materials" mean text, data, photographs, graphics, audio, video and documents. Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain the most current information available at the time of publication. Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about feature and product availability, or contact Hitachi Data Systems Corporation at https://support.hds.com/en_us/contact-us.html. Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi products is governed by the terms of your agreements with Hitachi Data Systems Corporation. By using this software, you agree that you are responsible for: 1) Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals to access relevant data; and 2) Verifying that data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
  • VFIO/IOMMU/PCI MC
    Linux Plumbers Conference 2021. Contribution ID: 27. Type: not specified. VFIO/IOMMU/PCI MC. The PCI interconnect specification, the devices that implement it, and the system IOMMUs that provide memory and access control to them are nowadays a de-facto standard for connecting high-speed components, incorporating more and more features such as: • Address Translation Service (ATS)/Page Request Interface (PRI) • Single-root I/O Virtualization (SR-IOV)/Process Address Space ID (PASID) • Shared Virtual Addressing (SVA) • Remote Direct Memory Access (RDMA) • Peer-to-Peer DMA (P2PDMA) • Cache Coherent Interconnect for Accelerators (CCIX) • Compute Express Link (CXL) • Data Object Exchange (DOE) • Gen-Z. These features are aimed at high-performance systems, server and desktop computing, embedded and SoC platforms, virtualization, and ubiquitous IoT devices. The kernel code that enables these new system features focuses on coordination between the PCI devices, the IOMMUs they are connected to, and the VFIO layer used to manage them (for userspace access and device passthrough), with related kernel interfaces and userspace APIs to be designed in sync and in a clean way for all three sub-systems. The VFIO/IOMMU/PCI micro-conference focuses on the kernel code that enables these new system features, which often require coordination between the VFIO, IOMMU and PCI sub-systems. Following up on the successful LPC 2017, 2019 and 2020 VFIO/IOMMU/PCI micro-conferences, the Linux Plumbers Conference 2021 VFIO/IOMMU/PCI track will therefore focus on promoting discussions on the current kernel patches aimed at the VFIO/IOMMU/PCI sub-systems, with specific sessions targeting discussion of the kernel patches that enable technology (e.g., device/sub-device assignment, PCI core, IOMMU virtualization, VFIO updates, etc.) requiring coordination of the three sub-systems.
  • Muertos de Nvidia: Intel Xe – El futuro de los procesadores gráficos
    Muertos de Nvidia: Intel Xe – El futuro de los procesadores gráficos (Intel Xe – the future of graphics processors). Julián David Colmenares Rodríguez, Osmel Shamir Durán Castro, Kevin Dlaikan Castillo, Nelson Alexis Cáceres Carreño, Mauren Lorena Cobos; Universidad Industrial de Santander, Ingeniería de Sistemas ([email protected]). Abstract: The new Intel discrete graphics card, called Intel Xe (code name Arctic Sound), will be launched in 2020 with three different microarchitectures: Xe-LP, Xe-HP, and Xe-HPC. While PC gamers have great interest in the first two, Xe-HPC will be focused on supercomputing. These graphics cards will be built from the basic components of the existing graphics IP, which is why Xe technology is expected to be essentially the Gen11 GPU architecture of the Ice Lake processors. In addition, Intel created oneAPI, a programming environment that follows NVIDIA's CUDA model and will allow users to write algorithms for the GPU, implemented across CPU, GPU, AI, and FPGA. Intel will also use the "Rambo cache" memory, allowing it to be shared by both the CPU and the graphics cards.
  • Compute Express Link™ (CXL™): A Coherent Interface for Ultra-High-Speed Transfers
    2020 OFA Virtual Workshop. Compute Express Link™ (CXL™): A Coherent Interface for Ultra-High-Speed Transfers. Jim Pappas, Director, Industry Initiatives, Intel Corporation; Chairman, CXL Consortium.
    Agenda: ▪ Industry Landscape ▪ Compute Express Link™ Overview ▪ Introducing the CXL™ Consortium ▪ CXL Features and Benefits ▪ CXL Use Cases ▪ Summary.
    Industry landscape: ▪ Industry trends are driving demand for faster data processing and next-generation data center performance - proliferation of cloud computing, growth of AI and analytics, cloudification of the network and edge.
    Why the need for a new class of interconnect? ▪ Industry mega-trends are driving demand for faster data processing and next-generation data center performance: proliferation of cloud computing; growth of artificial intelligence and analytics; cloudification of the network and edge. ▪ Need a new class of interconnect for heterogeneous computing and disaggregation usages: efficient resource sharing; shared memory pools with efficient access mechanisms; enhanced movement of operands and results between accelerators and target devices; significant latency reduction to enable disaggregated memory. ▪ The industry needs open standards that can comprehensively address next-gen interconnect challenges (today's environment vs. a CXL-enabled environment).
    © OpenFabrics Alliance. Compute Express Link™ and CXL™ are trademarks of the Compute Express Link Consortium.
  • Presence of Europe in HPC-HPDA Standardisation and Recommendations to Promote European Technologies
    Ref. Ares(2020)1227945 - 27/02/2020. H2020-FETHPC-3-2017 - Exascale HPC ecosystem development. EXDCI-2, European eXtreme Data and Computing Initiative - 2, Grant Agreement Number: 800957. D5.3 Presence of Europe in HPC-HPDA standardisation and recommendations to promote European technologies. Final Version: 1.0. Authors: JF Lavignon, TS-JFL; Marcin Ostasz, ETP4HPC; Thierry Bidot, Neovia Innovation. Date: 21/02/2020.
    Project and Deliverable Information Sheet - EXDCI Project: Project Ref. №: FETHPC-800957; Project Title: European eXtreme Data and Computing Initiative - 2; Project Web Site: http://www.exdci.eu; Deliverable ID: D5.3; Deliverable Nature: Report; Dissemination Level: PU*; Contractual Date of Delivery: 29/02/2020; Actual Date of Delivery: 21/02/2020; EC Project Officer: Evangelia Markidou. (* The dissemination levels are indicated as follows: PU – Public; CO – Confidential, only for members of the consortium (including the Commission Services); CL – Classified, as referred to in Commission Decision 2991/844/EC.)
    Document Control Sheet - Title: Presence of Europe in HPC-HPDA standardisation and recommendations to promote European technologies; ID: D5.3; Version: 1.0; Status: Final; Available at: http://www.exdci.eu; Software Tool: Microsoft Word 2013; File(s): EXDCI-2-D5.3-V1.0.docx. Authorship - Written by: JF Lavignon, TS-JFL; Marcin Ostasz, ETP4HPC; Thierry Bidot, Neovia Innovation; Contributors: ; Reviewed by: Elise Quentel, GENCI; John Clifford, PRACE; Approved by: MB/TB.
    Document Status Sheet - Version 0.1, 03/02/2020, Draft, To collect internal feedback; Version 0.2, 07/02/2020, Draft, For internal reviewers; Version 1.0, 21/02/2020, Final version, Integration of reviewers' comments. Document Keywords - Keywords: PRACE, Research Infrastructure, HPC, HPDA, Standards. Copyright notices: 2020 EXDCI-2 Consortium Partners.