OpenCAPI, Gen-Z, CCIX: Technology Overview, Trends, and Alignments

Brad Benton | OpenSHMEM Workshop | August 8, 2017

AGENDA
Overview and motivation for new system interconnects
Technical overview of the new, proposed technologies
Emerging trends
Paths to convergence?

Overview and Motivation

NEWLY EMERGING BUS/INTERCONNECT STANDARDS
Three new bus/interconnect standards were announced in 2016:
CCIX: Cache Coherent Interconnect for Accelerators
‒ Formed May 2016
‒ Founding members include: AMD, ARM, Huawei, IBM, Mellanox, Xilinx
‒ ccixconsortium.com
Gen-Z
‒ Formed August 2016
‒ Founding members include: AMD, ARM, Cray, Dell EMC, HPE, IBM, Mellanox, Micron, Xilinx
‒ genzconsortium.org
OpenCAPI: Open Coherent Accelerator Processor Interface
‒ Formed September 2016
‒ Founding members include: AMD, Google, IBM, Mellanox, Micron
‒ opencapi.org

NEWLY EMERGING BUS/INTERCONNECT STANDARDS
Motivations for these new standards:
Tighter coupling between processors and accelerators (GPUs, FPGAs, etc.)
‒ unified, virtual memory address space
‒ reduce data movement and avoid data copies to/from accelerators
‒ enables sharing of pointer-based data structures without the need for deep copies (illustrated in the sketch after this slide)
Open, non-proprietary, standards-based solutions
Higher-bandwidth solutions
‒ 25 Gb/s per lane and above, vs. 16 Gb/s per lane for PCIe Gen4
Better exploitation of new and emerging memory/storage technologies
‒ NVMe
‒ Storage-class memory (SCM), persistent memory
‒ HMC, HBM, etc.
Ease of programming
‒ accelerator operates in the address space of the process
‒ support for atomic operations
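The "ease of programming" argument comes down to the accelerator seeing the same virtual addresses as the host process, so a pointer-based structure can change hands by passing a single pointer. The sketch below is an illustration of that programming model only, not code for any actual OpenCAPI or CCIX device: an ordinary pthread stands in for a coherent accelerator, and the function name accel_sum_keys is invented for the example.

```c
/* A minimal, hypothetical sketch of the shared virtual address space idea
 * behind OpenCAPI/CCIX: the "accelerator" works on the process's own virtual
 * addresses, so a linked list is handed over by its head pointer, with no
 * flattening or deep copy. A plain pthread stands in for the accelerator;
 * accel_sum_keys() is an invented name, not a vendor API.
 * Build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node { long key; struct node *next; };

/* Stand-in for an accelerator kernel: it simply dereferences the same
 * host virtual addresses the CPU uses. */
static void *accel_sum_keys(void *arg)
{
    long sum = 0;
    for (struct node *n = arg; n != NULL; n = n->next)
        sum += n->key;                       /* no staging buffers, no copies */
    printf("accelerator saw sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    /* Build a small linked list in ordinary host memory. */
    struct node *head = NULL;
    for (long i = 1; i <= 5; i++) {
        struct node *n = malloc(sizeof *n);
        n->key = i;
        n->next = head;
        head = n;
    }

    /* "Offload": pass only the head pointer, exactly as a coherent
     * accelerator with shared virtual memory could be given it. */
    pthread_t accel;
    pthread_create(&accel, NULL, accel_sum_keys, head);
    pthread_join(accel, NULL);

    while (head) { struct node *n = head->next; free(head); head = n; }
    return 0;
}
```

Without a shared, coherent address space, the same offload would require serializing the list into a contiguous buffer and translating every pointer for the device, which is exactly the deep-copy overhead these interconnects aim to remove.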
NEWLY EMERGING BUS/INTERCONNECT STANDARDS
Why three different standards bodies?
Different groups have been working to solve similar problems
However, each approach has its differences
Many companies are involved with more than one consortium
Possible shake-outs/convergence as things move forward
We have been here before: Future I/O + Next Generation I/O => InfiniBand
But the landscape is different this time around

Technical Overview

OPENCAPI: OPEN COHERENT ACCELERATOR PROCESSOR INTERFACE (www.opencapi.org)
Tightly coupled interface between processor, accelerators, and memory
‒ Operates on virtual addresses; virtual-to-physical translation occurs on the host CPU
‒ OpenCAPI 3.0:
  ‒ Bandwidth: 25 Gb/s per lane, x8
  ‒ Coherent access to system memory from the accelerator
‒ OpenCAPI 4.0:
  ‒ Support for caching on accelerators
  ‒ Bandwidth: support for additional link widths: x4, x8, x16, x32
Use cases
‒ Coherent access from accelerator to system memory
‒ Advanced memory technologies
‒ Coherent storage controller
‒ Agnostic to processor architecture
http://www.opencapi.org/

ATTRIBUTES: OpenCAPI
Integrated into POWER9 (e.g., Zaius)
Supports features of interest to NIC/FPGA vendors
‒ Virtual address translation services
‒ Aggregation of accelerator and system memory
Communication with OpenCAPI-attached devices is managed by vendor-specific drivers/libraries
Accelerator holds the virtual address; address translation is managed by the host
http://opencapi.org/wp-content/uploads/2016/11/OpenCAPI-Overview-SC16-vf.pptx

GEN-Z: MEMORY SEMANTIC FABRIC (www.genzconsortium.org)
Scalable from component-level to cross-rack communications
‒ Direct-attach, switched, or fabric topologies
‒ Bandwidth: 32 GB/s to 400+ GB/s, with support for intermediate speeds
‒ Can gateway to other networks, e.g., Ethernet, InfiniBand
‒ Unifies general data access as memory operations
  ‒ byte-addressable load/store
  ‒ messaging (put/get)
  ‒ I/O (block memory)
Use cases
‒ Component disaggregation
‒ Persistent memory
‒ Long-haul/rack-to-rack interconnect
‒ Rack-scale composability
http://www.genzconsortium.org/

ATTRIBUTES: Gen-Z
Defines new mechanicals/connectors
But will also integrate with existing mechanical form factors, connectors, and cables
Multiple ways to move data:
‒ Load/store
‒ Messaging interfaces
‒ Block I/O interfaces
Gen-Z to provide libfabric integration for distributed messaging (MPI, OpenSHMEM, etc.; see the OpenSHMEM sketch after this slide)
Scalability
‒ Up to 4,096 components per subnet
‒ Up to 64K subnets
http://www.genzconsortium.org/
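Gen-Z itself defines fabric-level memory operations, not an application API; the expectation noted above is that a libfabric provider would slot it underneath existing one-sided programming models. As a concrete reference point, the following is a minimal, standard OpenSHMEM program, with nothing Gen-Z specific in it, showing the symmetric-heap, one-sided put style of communication such a memory-semantic fabric would carry. It would typically be built with an implementation's wrapper compiler (e.g., oshcc) and launched with oshrun.

```c
/* Standard OpenSHMEM one-sided put over a symmetric allocation.
 * Each PE writes its rank into its right neighbor's memory; the fabric
 * underneath (Gen-Z, InfiniBand, ...) is invisible to this code. */
#include <shmem.h>
#include <stdio.h>

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric (remotely accessible) allocation on every PE. */
    long *slot = shmem_malloc(sizeof *slot);
    *slot = -1;
    shmem_barrier_all();                     /* everyone initialized */

    /* One-sided put: deposit my rank directly into the neighbor's memory. */
    long val = me;
    shmem_long_put(slot, &val, 1, (me + 1) % npes);

    shmem_barrier_all();                     /* puts are visible */
    printf("PE %d received %ld from its left neighbor\n", me, *slot);

    shmem_free(slot);
    shmem_finalize();
    return 0;
}
```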
CCIX: CACHE COHERENT INTERCONNECT FOR ACCELERATORS (www.ccixconsortium.com)
Tightly coupled interface between processor, accelerators, and memory
‒ Bandwidth: 16/20/25 Gb/s per lane
‒ Hardware cache coherence enabled across the link
‒ Accelerator with CCIX-attached memory
‒ Driver-less and interrupt-less framework for data sharing
Use cases
‒ Allows low-latency main memory expansion
‒ Extends processor cache coherency to accelerators, network/storage adapters, etc.
‒ Supports multiple ISAs over a single interconnect standard
(The slide's diagrams show example CCIX topologies: processor-to-processor and processor-to-bridge links, an accelerator with CCIX-attached memory, and a switched configuration connecting a processor to multiple accelerators.)
http://www.ccixconsortium.com/

ATTRIBUTES: CCIX
Link layer is a straightforward extension of PCIe
In-box/in-node system interconnect
Preserves existing mechanicals, connectors, etc.
Communication with CCIX-attached devices is managed by vendor-specific drivers/libraries
Address translation services via ATS/PRI
http://www.ccixconsortium.com/

COMPARISONS
CCIX
‒ Physical layer: PCIe PHY
‒ Topology: point-to-point and switched
‒ Unidirectional bandwidth: 32-50 GB/s at x16
‒ Mechanicals: supports existing PCIe mechanicals/form factors
‒ Coherence: full cache coherence between processors and accelerators
Gen-Z
‒ Physical layer: IEEE 802.3 short- and long-reach PHY
‒ Topology: point-to-point and switched
‒ Unidirectional bandwidth: multiple signaling rates (16, 25, 28, 56 GT/s); multiple link widths (1 to 256 lanes)
‒ Mechanicals: will develop new, Gen-Z-specific mechanicals/form factors
‒ Coherence: does not specify cache-coherent agent operations, but does specify protocols that support cache-coherent agents
OpenCAPI 3.0
‒ Physical layer: IEEE 802.3 short reach
‒ Topology: point-to-point
‒ Unidirectional bandwidth: 25 GB/s at x8
‒ Mechanicals: in definition; see the Zaius design for a possible approach
‒ Coherence: coherent access to system memory
OpenCAPI 4.0
‒ Physical layer: IEEE 802.3 short reach
‒ Topology: point-to-point
‒ Unidirectional bandwidth: multiple link widths (x4, x8, x16, x32); 12.5, 25, 50, 100 GB/s
‒ Coherence: coherent access to memory; cache on accelerator
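The bandwidth entries above mix per-lane signaling rates (Gb/s or GT/s) with aggregate per-direction link bandwidth (GB/s). A quick back-of-the-envelope check reconciles the two; it counts raw signaling only and ignores encoding and protocol overhead, and the 25 GT/s x16 CCIX case assumes its extended-speed-mode rate. The helper link_gbytes is just for this illustration.

```c
/* Sanity check of per-lane rate vs. aggregate per-direction bandwidth.
 * Raw signaling only: 128b/130b or 64b/66b encoding and protocol overhead
 * are ignored, so delivered payload bandwidth is somewhat lower. */
#include <stdio.h>

static double link_gbytes(double gbits_per_lane, int lanes)
{
    return gbits_per_lane * lanes / 8.0;     /* Gb/s per lane -> GB/s per direction */
}

int main(void)
{
    printf("OpenCAPI 3.0: 25 Gb/s x8   = %.0f GB/s\n", link_gbytes(25.0, 8));
    printf("CCIX, PCIe Gen4 rate: 16 GT/s x16 = %.0f GB/s\n", link_gbytes(16.0, 16));
    printf("CCIX, 25 GT/s ESM:    25 GT/s x16 = %.0f GB/s\n", link_gbytes(25.0, 16));
    printf("OpenCAPI 4.0: 25 Gb/s x32  = %.0f GB/s\n", link_gbytes(25.0, 32));
    return 0;
}
```

The results (25, 32, 50, and 100 GB/s) line up with the 25 GB/s x8 OpenCAPI 3.0 figure, the 32-50 GB/s x16 CCIX range, and the top of the OpenCAPI 4.0 range.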
Trends

OPENCAPI MEMBERSHIP
Board level: AMD, Google, IBM, Mellanox, Micron, NVIDIA, Western Digital, Xilinx
Contributor level: Amphenol, Microsemi, Molex*, Parade, Samsung, SK hynix*, TE Connectivity*, Tektronix, Toshiba, Synology
Observer/academic level: Achronix, Applied Materials, Dell EMC, ELI Beamlines, Everspin*, HPE, NGCodec, SmartDV*, SuperMicro, Univ. Cordoba
*New members (since March 2017)

GEN-Z MEMBERSHIP
General members: Alpha Data, AMD, Amphenol Corporation, ARM, Avery Design*, Broadcom, Cadence*, Cavium, Cray, Dell EMC, Everspin Technologies*, Foxconn, HPE, Huawei, IBM, Integrated Device Tech, IntelliProp, Jabil Circuit, Lenovo, Lotes, Luxshare*, Mellanox, Mentor Graphics*, Micron, Microsemi, Molex*, NetApp*, Nokia, Numascale*, PLDA Group, Red Hat, Samsung, Seagate, SK Hynix, SMART Modular Tech*, Spin Transfer, TE Connectivity*, Tyco Electronics*, Western Digital, Xilinx, Yadro
*New members (since March 2017)

CCIX MEMBERSHIP
Promoter level: AMD, ARM, Huawei, Mellanox, Qualcomm, Xilinx
Contributor level: Amphenol, Avery Design, Broadcom, Cadence, Cavium, Keysight, Lenovo*, Micron, Netspeed, Samsung*, Synopsys
Adopter level: Integrated Device Tech, INVECAS, Netronome*, Phytium Tech*, PLDA Group*, Red Hat, Shanghai Zhaoxin*, Silicon Labs*, SK hynix*, SmartDV*, TE Connectivity
No longer members: Arteris, Bull/Atos, IBM, Teledyne, TSMC
*New members (since March 2017)

CONSORTIA MEMBERSHIP DISTRIBUTION
(Bar charts, not reproduced here: per-consortium membership counts and the split between single and multiple memberships for CCIX, Gen-Z, and OpenCAPI, shown for March 2017 at all levels, August 2017 at all levels, and August 2017 at voting levels.)

MULTIPLE MEMBERSHIP DISTRIBUTION
(Pie charts, not reproduced here: the breakdown of companies holding multiple memberships across CCIX/Gen-Z, CCIX/Gen-Z/OpenCAPI, Gen-Z/OpenCAPI, and CCIX/OpenCAPI, shown for March 2017, August 2017, and August 2017 voting levels.)

MEMBERSHIP PAIRINGS
(Charts, not reproduced here: membership pairings at all membership levels and at voting levels, for March 2017 and August 2017.)