D4.1 Report on State of the Art of Novel Compute Elements and Gap Analysis in MPI and GASPI


Ref. Ares(2019)558413 - 31/01/2019

HORIZON 2020, Topic FETHPC-02-2017: Transition to Exascale Computing
Exascale Programming Models for Heterogeneous Systems (801039)

D4.1 Report on state of the art of novel compute elements and gap analysis in MPI and GASPI
WP4: Productive computing with FPGAs, GPUs and low-power microprocessors
Date of preparation (latest version): [DATE]

Copyright© 2018-2021 The EPiGRAM-HS Consortium
The opinions of the authors expressed in this document do not necessarily reflect the official opinion of the EPiGRAM-HS partners nor of the European Commission.

DOCUMENT INFORMATION

Deliverable Number: D4.1
Deliverable Name: Report on state of the art of novel compute elements and gap analysis in MPI and GASPI
Due Date: 31/01/2019 (PM5)
Deliverable Lead: FhG
Authors: Martin Kühn, Dominik Loroch, Carsten Lojewski, Valeria Bartsch (Fraunhofer ITWM), Gilbert Netzer (KTH), Daniel Holmes (EPCC), Alan Stokes (EPCC), Oliver Brown (EPCC)
Responsible Author: Valeria Bartsch, Fraunhofer ITWM, [email protected]
Keywords: Novel compute, programming models
WP/Task: WP4 / Task 4.3
Nature: R
Dissemination Level: PU
Final Version Date: 31/01/2019
Reviewed by: Sergio Rivas-Gomez (KTH), Oliver Brown (EPCC), Luis Cebamanos (EPCC)
MGT Board Approval: YES

DOCUMENT HISTORY

Version 0.1 (FhG, 04.12.2018): First draft (Multi/Many core, GPU gap analysis, specific HW for DL)
Version 0.2 (FhG, KTH, 19.12.2018): Extension to Multi/Many core for ARM and RISC-V, FPGA section, Introduction added
Version 0.3 (FhG, 28.12.2018): Executive Summary and Introduction into GPUs added
Version 0.4 (FhG, 04.01.2019): Glossary added, Conclusion added
Version 0.5 (FhG, 09.01.2019): MPI gap analysis added, table on GPUs added, GASPI gap analysis for FPGAs added
Version 0.6 (EPCC, 09.01.2019): Added 4.1 Introduction to Beyond V-N
Version 0.7 (FhG, KTH, EPCC, 25.01.2019): Implemented changes to address comments by reviewers
Version 0.8 (FhG, KTH, 28.01.2019): Discussion of comments on Cell BE, big.LITTLE, etc.
Version 1.0 (KTH, 31.01.2019): Final version

Executive Summary

The purpose of this document is to survey the state of the art of novel compute elements and to perform a gap analysis of MPI and GASPI. Currently, a major hurdle for large supercomputers is the huge amount of energy needed to operate these systems. The power consumption and efficiency of some novel compute devices are very attractive, and a rise of heterogeneous HPC clusters is therefore expected in the future. For this deliverable we chose to analyse the following types of compute elements: Multi and Many Core architectures, Graphical Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs) and specialised hardware for Deep Learning (DL). The choice is motivated by current market share, uptake in HPC clusters, and the impact that some of these novel compute elements would have on HPC systems.

The current hardware trends are the following:

• Multi core architectures are still used in many of the supercomputers in the Top500 list. However, the product pipelines of the main suppliers are currently uncertain. MPI and GASPI support for such architectures is mature.
• GPUs are becoming more important in HPC and are already predominant in DL applications. Part of the success of GPUs can be explained by the rise of the CUDA programming model, which provides easy-to-use interfaces for application developers. The integration of GPUs as accelerators into the HPC network stack can be done at various levels. Basic MPI and GASPI functionality exists; a tighter integration is possible, but its benefit is questionable.
• FPGAs are known to be difficult to program. This has been one of the biggest hurdles for the integration of FPGAs into HPC architectures. High Level Synthesis (HLS) tools are starting to mature, making FPGAs easier to program. Currently there is no notable MPI or GASPI support for FPGAs. However, this situation is changing: for GASPI, efforts towards better FPGA support are under way within the EuroExa project.
• Neuromorphic computing is well suited for neural networks and very energy-efficient. The first commercial neuromorphic platform could be used as an accelerator for a limited number of applications. There is no MPI or GASPI support foreseen for this type of compute element.
• Specific hardware for deep learning is on the rise. Highly efficient hardware IP is emerging for servers and accelerators on the one hand and for edge devices/SoCs on the other. For HPC, accelerators are the better alternative for DL applications due to their superior bandwidth and performance.

To summarize: while MPI and GASPI support is mature for multi-core and some many-core architectures, most other accelerators (GPUs, FPGAs, etc.) are not tightly integrated into HPC workflows. In the scope of EPiGRAM-HS, we care especially about the integration into inter-node networks. The integration of novel accelerators into HPC network stacks can be done at various levels, depending on how tight such an integration is intended to be. Most accelerators (such as GPUs or FPGAs) are connected via PCI Express, though a direct integration into a data-centre network such as Ethernet or InfiniBand is possible. The steps towards an integration of accelerators into HPC workflows are, from loose to tight:

• Communication is only possible via the host CPU, with a multi-level communication hierarchy at node level.
• Zero-copy data transfers become possible by exposing GPU and FPGA resources as memory (see the CUDA-aware MPI sketch at the end of this summary).
• The network stack is modified so that the NIC is programmed to transfer data directly using peer-to-peer transactions, while control of the communication remains with the host (e.g. the GPUDirect RDMA technology for GPUs).
• The accelerator has its own identity (rank) on the interconnect, allowing it to initiate communication on its own (for GPUs: the dCUDA technology [dCUDA]).
• Creation of custom communication instances.

Currently the integration of GPUs is the most sophisticated compared to FPGAs or any other novel accelerator. For upcoming accelerators, the components allowing a tighter integration still need to be developed. In the scope of the EPiGRAM-HS project, the gap analysis of MPI and GASPI will be used to guide our future work. Though not all identified gaps can and will be closed within the duration of the project, we will focus on the most important accelerators (GPUs and FPGAs) and the requirements of the project's applications.
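As an illustration of the second and third integration levels listed above, the following minimal sketch (assuming a CUDA-aware MPI library, for example one backed by GPUDirect RDMA) passes a GPU device buffer directly to MPI, avoiding an explicit staging copy through host memory. The buffer size and the two-rank exchange pattern are chosen for illustration only.

```c
/* Minimal sketch: exchanging a GPU buffer with a CUDA-aware MPI library.
 * Assumes the MPI implementation accepts device pointers (e.g. via
 * GPUDirect RDMA); otherwise the data must be staged through host memory. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                    /* illustrative buffer size */
    double *d_buf = NULL;
    cudaMalloc((void **)&d_buf, n * sizeof(double));   /* device memory */
    cudaMemset(d_buf, 0, n * sizeof(double));

    if (rank == 0) {
        /* The device pointer is handed directly to MPI; a CUDA-aware stack
         * moves the data without an explicit cudaMemcpy to the host. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

Without CUDA-aware support (the loosest integration level), the same exchange requires a cudaMemcpy to a host buffer before MPI_Send and after MPI_Recv, adding an extra hop in the node-level communication hierarchy.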
Contents

1 Introduction .... 7
1.1 Glossary .... 9
2 Multi Core / Many Core .... 10
2.1 Introduction to the hardware .... 10
2.2 Instruction-Set Architecture Considerations .... 13
2.3 Micro Architecture Implementations .... 14
2.4 Gap analysis for MPI .... 14
3 Graphical Processor Units .... 17
3.1 Introduction to the hardware .... 17
3.2 Gap analysis .... 18
4 Field Programmable Gate-Arrays .... 21
4.1 Introduction to the hardware .... 21
4.2 FPGA usage as accelerators in data-center applications .... 22
4.3 Integration of FPGAs and HPC Network Stacks .... 23
4.4 Gap analysis .... 24
5 Beyond von Neumann .... 26
5.1 Introduction to the hardware .... 26
5.2 Gap analysis .... 28
6 Specific hardware for Deep Learning .... 29
6.1 Introduction to the hardware .... 29
6.2 Chips for Edge Devices .... 29
6.3 Chips for Accelerator Devices .... 32
6.4 Programming Devices .... 33
6.5 Gap analysis .... 34
7 Conclusion and Future Work