PIM Lite: On the Road Towards Relentless Multi-threading in Massively Parallel Systems


PIM Lite: On the Road Towards Relentless Multi-threading in Massively Parallel Systems

Jay Brockman, Peter Kogge, Shyamkumar Thoziyoor, and Edward Kang
Department of Computer Science and Engineering, University of Notre Dame
February 17, 2003

Abstract

Processing In Memory (PIM) technology (mixing significant processing logic with dense memory on the same chip) has emerged as a popular trend in recent years. In many cases, however, it has been used simply as a step towards a "system on a chip." This paper assumes that PIM systems will be inherently massively parallel, with many chips collaborating in a computation, perhaps in concert with more conventional microprocessors. While such systems could be designed to support "classical" parallel models such as DSM or message passing, this paper discusses several different models born from the HTMT project. All of these models involve significant multi-threading, with large numbers of relatively lightweight threads executing within the PIM nodes. To take advantage of these characteristics, we have designed a new ISA and matching microarchitecture that supports such multi-threading in ways that leverage very efficiently the enhanced local bandwidth and access time available from an on-chip memory macro. A simplified version of this, termed PIM Lite, is about to go to fab as a memory part with multiple internal nodes, all of which support very lightweight threads in a simple SMT microarchitecture. This paper discusses PIM Lite, and then our outlook on what more advanced designs might look like.

1 Introduction

In the conventional von Neumann model, processors and memory play complementary but entirely separate roles in computation: the memory stores data but has no provisions for operating on it, while the processor modifies data but has little provision for storing it. The "von Neumann bottleneck" or "memory wall" associated with transferring data between the processor and memory is a well-known limitation on computing performance. Memory hierarchies do not eliminate the basic problem that the processor must manage nearly every detail of data transfer, from computing the address of the desired data to coordinating the transfer protocols. Processing in Memory (PIM) breaks through the memory wall by pushing computation into the memory system. Rather than denoting a specific implementation, PIM is a collection of methods and technologies that cover all aspects of pushing computation into the memory system, including programming and execution models, microarchitectural organization, and physical design and layout. One of the key goals of this paper, however, is to demonstrate that by considering these technologies simultaneously and rationalizing the interfaces between them, extremely efficient PIM implementations are possible that can improve performance over conventional systems by orders of magnitude. By physically placing processing logic on a memory chip, the processor can access data at the highest bandwidths and lowest latencies possible. Further, and distinguishing this project from others, the emphasis is not on leveraging the technology for single-thread performance, but on support of massively parallel scalable systems where very significant amounts of many forms of concurrency are present for exploitation. Section 2 of this paper briefly overviews the assumed technology.
Section 3 discusses the spectrum of PIM chip architectures that have been investigated. Section 4 describes some computational idioms we have focused on. Section 5 then describes the ISA of PIM Lite, an early attempt to support the key ideas behind such idioms. Section 6 discusses a preliminary implementation of this ISA on a multi-node chip. A synthesized VHDL version of PIM Lite is currently running on an FPGA board, with silicon planned for the first quarter of 2003.

2 Background and Related Work

2.1 Leveraging Memory Bandwidth

Figure 1 illustrates the organization of a typical DRAM macro. Internal to the macro, data is typically stored in rows of 2K memory cells, where the entire row is read and buffered whenever any cell on that row is first accessed, regardless of how many bits of data on the row are actually needed by the application. Furthermore, once accessed, the contents of a row are latched into a primary sense amp within the macro. These values stay in the latch until another access overwrites them. Thus, once accessed, such a row is called the currently open row.

[Figure 1: DRAM memory macro organization. 2048 bits (256 bytes) per internal row; 256 bits read out per access.]

Multiplexing within the circuitry common to the whole macro typically allows buffered row data to be paged out 256 bits at a time. Each such 256-bit word is termed a wide word here. Given these numbers of 2048 bits per open row and 256 bits per wide word, there are 8 separate wide words per open row. A key property of such memory macros is that if the next access to the macro is to an address within the currently open row, then the access to the memory array can be suppressed, and the correct wide word encompassing the desired data can be brought out by simply changing the multiplexor control values. This takes considerably less time than a full access. In many memory chips, when reads of consecutive data words are desired, this fast multiplexing is used repeatedly, and is termed page mode access. Given a very conservative row access time of 20 ns and a page access time of 2 ns, reading all 8 wide words of a row takes 20 + 7 × 2 = 34 ns for 2048 bits, so the bandwidth available from a single memory macro is over 50 Gbit/s. A central issue in PIM research is to discover ways to take advantage of this extraordinary bandwidth.

A "SIMD" branch of PIM architecture places multiple dataflows, often as small as 1 bit wide, next to the memory macro's row output, running more or less synchronously. Variations of this approach date all the way back to the DAP and the CM-2 [9] (when memory densities were small), through the early and mid 1990s when chips such as the TERASYS [10] and the Linden DAAM [15] had several megabits of memory on board, to a more recent announcement of the Micron YUKON chip [12] that uses state-of-the-art DRAM. We have coined the term ASAP (At the Sense Amps Processing [4]) to refer to a wide set of ALUs positioned close to the memory macro so that all the bits that are made available on an access can be processed simultaneously. The V-IRAM chip [14] integrates several high-speed vector pipelines along with an integrated controller on a chip with significant embedded DRAM. The DIVA chip [8] combined a dense SRAM with a pipelined CPU and a set of ALUs. Multiple independent processors on a chip are a more recent phenomenon. Perhaps the first was EXECUBE [13], which supported eight separate CPUs on a single die where the memory was state-of-the-art DRAM.
These CPUs were interconnected with DMA channels to form a binary hypercube on a chip. (In addition to running in a MIMD mode, EXECUBE also allowed constraining any number of the on-chip processors to run in SIMD mode.) The POWER4 [23] is built on an all-logic technology, but nevertheless supports a very dense cache hierarchy on top of two separate microprocessor cores. The BLUE GENE chip [7] supports a large number of separate cores, all sharing the on-chip memory in an SMP-like configuration.

Another important dimension of the PIM architectural design space is the external interface. Some chips, such as the Mitsubishi M32R/D [16], were conceived as single-chip systems and have interfaces more reminiscent of a conventional microprocessor, such as an external memory and I/O bus. More pertinent to the proposed research are those chips designed to be part of large, scalable systems. A number of chips, including TERASYS, DIVA, and Yukon, have primary interfaces that allow them to "look like memory chips" to external systems, albeit almost always with some extra protocols that allow the "intelligence" on board to be touched and activated. Such interfaces allow potentially large numbers of them to be assembled into a single memory subsystem and connected to a classical computer. Other chips, such as EXECUBE and BLUE GENE, have as their primary interface multiple chip-to-chip communication ports, allowing large arrays of such chips to make up a stand-alone, single-part-type parallel processor. DIVA and Yukon have both memory-like interfaces and ports for independent PIM-to-PIM communication.

2.2 Multi-PIM Systems and Communication

Clearly, various combinations of the suite of architectural techniques discussed above support the full spectrum of conventional programming idioms. Vector and short-vector SIMD processing and SMP processing are all possible on chip. Among arrays of PIM chips, programming models from SPMD to message passing are readily supported. This, however, is not the end. First suggested by work from the PIM FAST project [4], and amplified in other projects such as HTMT [22][4], the concept of parcels (PARallel Communication ELements) expands the semantics of a classical memory read or write operation to accommodate a PIM-enhanced memory. In [4], we introduced the notion of a microserver as a PIM component to service parcels. In their simplest form, parcels and microservers could be used to service split-phase memory transactions, similar to the message-driven memory modules proposed in dataflow architectures such as [2] [20] [18]. More generally, the parcel/microserver model can be used to deliver any command, with arguments, to a location in memory. This mechanism could be used to remotely trigger actions ranging from simple reads and writes, to atomic updates, to invoking methods on objects. A sketch of this idea appears below.
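To make the parcel/microserver mechanism concrete, the sketch below encodes a parcel as a command-plus-arguments record and shows a microserver dispatching on it. This is a minimal illustration under our own assumptions: the field names, opcodes, and helper routines are hypothetical, not the actual PIM Lite formats.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical parcel: a command with arguments, addressed to a
     * location in PIM memory (illustration only, not the PIM Lite format). */
    typedef enum { P_READ, P_WRITE, P_ATOMIC_ADD, P_INVOKE } Opcode;

    typedef struct {
        uint64_t target;   /* memory address (or object) the parcel acts on */
        Opcode   op;       /* command to perform at that location */
        uint64_t arg;      /* operand: value to write, amount to add, method id */
        uint64_t reply_to; /* node to send a response parcel to, if any */
    } Parcel;

    /* Stubs standing in for the PIM node's outbound parcel machinery. */
    static void send_reply(uint64_t dest, uint64_t value) { (void)dest; (void)value; }
    static void invoke_method(uint64_t *obj, uint64_t method_id) { (void)obj; (void)method_id; }

    /* A microserver services parcels arriving at its memory macro; the
     * requesting processor never manages the transfer details itself. */
    void microserver_dispatch(Parcel *p) {
        uint64_t *loc = (uint64_t *)p->target;
        switch (p->op) {
        case P_READ:       send_reply(p->reply_to, *loc); break; /* split-phase read */
        case P_WRITE:      *loc = p->arg;                 break;
        case P_ATOMIC_ADD: *loc += p->arg;                break; /* atomic update at the memory */
        case P_INVOKE:     invoke_method(loc, p->arg);    break; /* method on an in-memory object */
        }
    }

    int main(void) {
        uint64_t cell = 5;
        Parcel p = { (uint64_t)&cell, P_ATOMIC_ADD, 3, 0 };
        microserver_dispatch(&p);                            /* remote atomic update */
        printf("cell = %llu\n", (unsigned long long)cell);   /* prints 8 */
        return 0;
    }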
Recommended publications
  • 2.5 Classification of Parallel Computers
2.5.1 Granularity. In parallel computing, granularity means the amount of computation in relation to communication or synchronisation. Periods of computation are typically separated from periods of communication by synchronization events.

    • fine level (same operations with different data)
      ◦ vector processors
      ◦ instruction level parallelism
      ◦ fine-grain parallelism:
        – Relatively small amounts of computational work are done between communication events
        – Low computation to communication ratio
        – Facilitates load balancing
        – Implies high communication overhead and less opportunity for performance enhancement
        – If granularity is too fine, it is possible that the overhead required for communication and synchronization between tasks takes longer than the computation
    • operation level (different operations simultaneously)
    • problem level (independent subtasks)
      ◦ coarse-grain parallelism:
        – Relatively large amounts of computational work are done between communication/synchronization events
        – High computation to communication ratio
        – Implies more opportunity for performance increase
        – Harder to load balance efficiently

2.5.2 Hardware: Pipelining (was used in supercomputers, e.g. Cray-1). With N elements in the pipeline and L clock cycles per element, the calculation takes L + N cycles; without pipelining it takes L * N cycles (a sketch of this arithmetic follows this excerpt). Example of good code for pipelining:

        do i = 1, k
            z(i) = x(i) + y(i)
        end do

Vector processors offer fast vector operations (operations on arrays). The previous example is also good for a vector processor (vector addition), but recursion, for example, is hard to optimise for vector processors. Example: Intel MMX, a simple vector processor.
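A minimal sketch of the pipeline timing arithmetic quoted above (our own illustration of the L + N versus L × N cycle counts, with example values):

    #include <stdio.h>

    /* Timing model from the excerpt: an L-stage pipeline finishes N
     * elements in about L + N cycles; without pipelining, L * N cycles. */
    int main(void) {
        long L = 5, N = 1000;     /* example stage count and element count */
        long pipelined = L + N;
        long serial    = L * N;
        printf("pipelined: %ld cycles, serial: %ld cycles, speedup: %.1fx\n",
               pipelined, serial, (double)serial / (double)pipelined);
        return 0;
    }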
  • Triton: A Massively-Parallel Mixed-Mode Computer Designed to Support High Level Languages
This paper appeared in the International Parallel Processing Symposium, Proc. of the 2nd Workshop on Heterogeneous Processing, Newport Beach, CA, April. Triton: A Massively-Parallel Mixed-Mode Computer Designed to Support High Level Languages. Christian G. Herter, Thomas M. Warschko, Walter F. Tichy, and Michael Philippsen, University of Karlsruhe, Dept. of Informatics, Karlsruhe, Germany.

Abstract: We present the architecture of Triton, a scalable mixed-mode SIMD/MIMD parallel computer. The novel features of Triton are:

    • Support for high-level, machine-independent programming languages
    • Fast SIMD/MIMD mode switching
    • Special hardware for barrier synchronization of multiple process groups
    • A self-routing, deadlock-free perfect shuffle interconnect with latency hiding

Modula-2* (pronounced "Modula-2 star") is a small extension of Modula-2 for massively parallel programming. The programming model of Modula-2* incorporates both data and control parallelism and allows mixed synchronous and asynchronous execution. Modula-2* is problem-oriented in the sense that the programmer can choose the degree of parallelism and mix the control mode (SIMD- or MIMD-like) as needed by the intended algorithm. Parallelism may be nested to arbitrary depth. Procedures may be called from sequential or parallel contexts and can themselves generate parallel activity without any restrictions. Most Modula-2* programs can be translated into efficient code for both SIMD and MIMD architectures. Overview of language extensions: Modula-2* extends Modula-2
  • Massively Parallel Computing with CUDA
Massively Parallel Computing with CUDA. Antonino Tumeo, Politecnico di Milano.

"GPUs have evolved to the point where many real-world applications are easily implemented on them and run significantly faster than on multi-core systems. Future computing architectures will be hybrid systems with parallel-core GPUs working in tandem with multi-core CPUs." (Jack Dongarra, Professor, University of Tennessee; author of "Linpack")

Why use the GPU? The GPU has evolved into a very flexible and powerful processor: it is programmable using high-level languages, it supports 32-bit and 64-bit floating point IEEE-754 precision, it offers lots of GFLOPS, and there is a GPU in every PC and workstation. What is behind such an evolution? The GPU is specialized for compute-intensive, highly parallel computation (exactly what graphics rendering is about), so more transistors can be devoted to data processing rather than data caching and flow control. [Figure: CPU vs. GPU transistor budget; the GPU devotes far more die area to ALUs and far less to control logic and cache.] The fast-growing video game industry exerts strong economic pressure that forces constant innovation.

GPUs: each NVIDIA GPU has 240 parallel cores and 1.4 billion transistors. Within each core: a floating point unit; a logic unit (add, sub, mul, madd); a move/compare unit; a branch unit. Cores are managed by a thread manager that can spawn and manage 12,000+ threads, with zero-overhead thread switching, for about 1 teraflop of processing power. A minimal kernel sketch follows this excerpt. [Figure: heterogeneous computing domains; the GPU covers graphics and massive data parallelism (parallel computing), the CPU covers instruction-level (sequential) processing.]
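A minimal CUDA sketch of the style of massive threading described above, with one lightweight thread per data element (our own illustration, not code from the lecture):

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per element: the hardware thread manager schedules
    // thousands of these lightweight threads with negligible switch cost.
    __global__ void vecAdd(const float *x, const float *y, float *z, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) z[i] = x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y, *z;
        cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps the sketch short
        cudaMallocManaged(&y, n * sizeof(float));
        cudaMallocManaged(&z, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;   // enough blocks to cover all n elements
        vecAdd<<<blocks, threads>>>(x, y, z, n);
        cudaDeviceSynchronize();

        printf("z[0] = %f\n", z[0]);                // expect 3.0
        cudaFree(x); cudaFree(y); cudaFree(z);
        return 0;
    }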
  • CS 677: Parallel Programming for Many-Core Processors Lecture 1
CS 677: Parallel Programming for Many-core Processors, Lecture 1. Instructor: Philippos Mordohai. Webpage: mordohai.github.io. E-mail: [email protected]

Objectives: Learn how to program massively parallel processors and achieve high performance, functionality and maintainability, and scalability across future generations. Acquire the technical knowledge required to achieve the above goals: principles and patterns of parallel programming; processor architecture features and constraints; programming API, tools and techniques.

Important points: This is an elective course. You chose to be here. Expect to work and to be challenged. If your programming background is weak, you will probably suffer. This course will evolve to follow the rapid pace of progress in GPU programming. It is bound to always be a little behind… At any point, ask me WHY. You can ask me anything about the course in class, during a break, in my office, or by email: if you think a homework is taking too long or is wrong, or if you can't decide on a project.

Logistics: Class webpage: http://mordohai.github.io/classes/cs677_s20.html. Office hours: Tuesdays 5-6pm and by email. Evaluation: homework assignments (40%), quizzes (10%), midterm (15%), final project (35%).

Project: Pick a topic BEFORE the middle of the semester; I will suggest ideas and datasets if you can't decide. Deliverables: project proposal, presentation in class, poster at the CS department event, final report (around 8 pages). Project examples: k-means; Perceptron; Boosting (general, or a face detector for a group of 2); Mean Shift; normal estimation for 3D point clouds. More ideas: look for parallelizable problems in image processing, cryptanalysis, and graphics (GPU Gems); nearest neighbor search. Even more: particle simulations, financial analysis, MCMC, games/puzzles.

Resources: Textbook: Kirk & Hwu.
  • Parallel Algorithms for Medical Informatics on Data-Parallel Many-Core Processors
UNIVERSITY OF CALIFORNIA, Los Angeles. Parallel Algorithms for Medical Informatics on Data-Parallel Many-Core Processors. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science by Maryam Moazeni, 2013. © Copyright by Maryam Moazeni 2013.

ABSTRACT OF THE DISSERTATION: The extensive use of medical monitoring devices has resulted in the generation of tremendous amounts of data. Storage, retrieval, and analysis of such data require platforms that can scale with data growth and adapt to the various behavior of the analysis and processing algorithms. In recent years, many-core processors, and more specifically many-core Graphical Processing Units (GPUs), have become one of the most promising platforms for high performance processing of data, due to the massive parallel processing power they offer. However, many of the algorithms and data structures used in medical and bioinformatics systems do not follow a data-parallel programming paradigm, and hence cannot fully benefit from the parallel processing power of data-parallel many-core architectures. In this dissertation, we present three techniques to adapt several non-data-parallel applications in different dwarfs to modern many-core GPUs. First, we present a load balancing technique to maximize parallelism in non-serial polyadic Dynamic Programming (DP), a family of dynamic programming algorithms with a more non-uniform data access pattern. We show that a bottom-up approach to solving the DP problem exploits more parallelism and therefore yields higher performance (a sketch of the bottom-up idea follows this excerpt).
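One standard way to expose the parallelism of a bottom-up DP solve, sketched below: cells on the same anti-diagonal of the table depend only on earlier diagonals, so each diagonal can be computed in parallel. This is a generic wavefront illustration with a placeholder recurrence, not the dissertation's algorithm:

    #include <cstdio>
    #include <cuda_runtime.h>

    #define N 1024                 // square DP table; illustration only

    // Compute every cell on anti-diagonal d in parallel; cell (i, j)
    // depends only on cells from earlier diagonals (edit-distance-style).
    __global__ void dpDiagonal(int *T, int d) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = d - i;
        if (i < 1 || i >= N || j < 1 || j >= N) return;
        int up = T[(i - 1) * N + j], left = T[i * N + (j - 1)];
        T[i * N + j] = min(up, left) + 1;          // placeholder recurrence
    }

    int main() {
        int *T;
        cudaMallocManaged(&T, N * N * sizeof(int));
        for (int k = 0; k < N; ++k) { T[k] = k; T[k * N] = k; }   // base cases

        // Bottom-up sweep: one kernel launch per anti-diagonal.
        for (int d = 2; d <= 2 * (N - 1); ++d)
            dpDiagonal<<<(N + 255) / 256, 256>>>(T, d);
        cudaDeviceSynchronize();
        printf("T[N-1][N-1] = %d\n", T[(N - 1) * N + (N - 1)]);
        cudaFree(T);
        return 0;
    }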
  • Massively Parallel Computers: Why Not Parallel Computers for the Masses?
Massively Parallel Computers: Why Not Parallel Computers for the Masses? Gordon Bell.

Abstract: During the 1980s the computer engineering and science research community generally ignored parallel processing. With a focus on high performance computing embodied in the massive 1990s High Performance Computing and Communications (HPCC) program, which has the short-term teraflop peak performance goal using a network of thousands of computers, everyone with even a passing interest in parallelism is involved with the massive parallelism "gold rush". Funding-wise the situation is bright; applications-wise, massive parallelism is microscopic. While there are several programming models, the mainline is data-parallel Fortran. However, new algorithms are required, negating the decades of progress in algorithms. Thus, utility will no doubt be the Achilles heel of massive parallelism.

The Teraflop: 1992. In 1989 I described the situation in high performance computers, including several parallel architectures that could deliver teraflop power by 1995, but with no price constraint. I felt SIMDs and multicomputers could achieve this goal. A shared memory multiprocessor looked infeasible then. Traditional multiple-vector-processor supercomputers such as Crays would simply not evolve to a teraflop until 2000. Here's what happened.

1. During the first half of 1992, NEC's four-processor SX3 is the fastest computer, delivering 90% of its peak 22 Gflops for the Linpack benchmark, and Cray's 16-processor YMP C90 has the greatest throughput.

2. The SIMD hardware approach of Thinking Machines was abandoned because it was only suitable for a few very large scale problems, barely multiprogrammed, and uneconomical for workloads. It's unclear whether large SIMDs are "generation" scalable, and they are clearly not "size" scalable.
  • Introduction to Parallel Computing
INTRODUCTION TO PARALLEL COMPUTING. Plamen Krastev. Office: 38 Oxford, Room 117. Email: [email protected] FAS Research Computing, Harvard University.

OBJECTIVES: To introduce you to the basic concepts and ideas in parallel computing. To familiarize you with the major programming models in parallel computing. To provide you with guidance for designing efficient parallel programs.

OUTLINE: Introduction to Parallel Computing / High Performance Computing (HPC); concepts and terminology; parallel programming models; parallelizing your programs; parallel examples.

What is High Performance Computing? [Image: Pravetz 82 and 8M, Bulgarian Apple clones. Image credit: flickr.] The Odyssey supercomputer is the major computational resource of FAS RC: 2,140 nodes / 60,000 cores and 14 petabytes of storage. High performance computing means using the world's fastest and largest computers to solve large and complex problems.

Serial computation: Traditionally, software has been written for serial computation, to be run on a single computer having a single Central Processing Unit (CPU). A problem is broken into a discrete set of instructions; instructions are executed one after another; only one instruction can be executed at any moment in time.

Parallel computing: In the simplest sense, parallel
  • Massively Parallel Processor Architectures for Resource-Aware Computing
Massively Parallel Processor Architectures for Resource-aware Computing. Vahid Lari, Alexandru Tanase, Frank Hannig, and Jürgen Teich. Hardware/Software Co-Design, Department of Computer Science, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany. {vahid.lari, alexandru-petru.tanase, hannig, teich}@cs.fau.de

Abstract—We present a class of massively parallel processor architectures called invasive tightly coupled processor arrays (TCPAs). The presented processor class is a highly parameterizable template, which can be tailored before runtime to fulfill customers' requirements such as performance, area cost, and energy efficiency. These programmable accelerators are well suited for domain-specific computing from the areas of signal, image, and video processing as well as other streaming processing applications. To overcome future scaling issues (e.g., power consumption, reliability, resource management, as well as application parallelization and mapping), TCPAs are inherently designed in a way to support self-adaptivity and resource awareness at the hardware level. Here, we follow a recently introduced resource-aware parallel computing paradigm called invasive computing, where an application can dynamically claim, execute, and release resources (a sketch of this life cycle follows this excerpt).

… severe, when considering massively parallel architectures, with hundreds to thousands of resources, all working with real-time requirements. As a remedy, we present a domain-specific class of massively parallel processor architectures called invasive tightly coupled processor arrays (TCPA), which offer built-in and scalable resource management. The term "invasive" stems from a novel paradigm called invasive computing [3], for designing and programming future massively parallel computing systems (e.g., heterogeneous MPSoCs). The main idea and novelty of invasive computing is to introduce resource-aware programming support in the sense that a given application gets the ability to explore and dynamically spread its computations …
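A sketch of the claim/execute/release life cycle described above. The function names follow the invade/infect/retreat vocabulary of invasive computing, but the signatures and behaviour here are hypothetical illustrations, not the actual API:

    #include <stdio.h>

    /* Hypothetical resource-management calls modeling the invasive life
     * cycle: claim processing elements, run on them, release them. */
    typedef struct { int pes; } Claim;                 /* processing elements granted */

    static Claim invade(int requested) {               /* claim up to `requested` PEs */
        Claim c = { requested < 8 ? requested : 8 };   /* this array grants at most 8 */
        return c;
    }
    static void infect(Claim c, void (*kernel)(int)) { /* load and run code on claimed PEs */
        for (int pe = 0; pe < c.pes; ++pe) kernel(pe);
    }
    static void retreat(Claim *c) { c->pes = 0; }      /* release the resources */

    static void work(int pe) { printf("PE %d running\n", pe); }

    int main(void) {
        Claim c = invade(16);   /* ask for 16 PEs; may be granted fewer */
        infect(c, work);
        retreat(&c);
        return 0;
    }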
  • Real-Time Network Traffic Simulation Methodology with a Massively Parallel Computing Architecture
TRANSPORTATION RESEARCH RECORD 1358. Advanced Traffic Management System: Real-Time Network Traffic Simulation Methodology with a Massively Parallel Computing Architecture. Thanavat Junchaya, Gang-Len Chang, and Alberto Santiago.

The advent of parallel computing architectures presents an opportunity for transportation professionals to simulate a large-scale traffic network with sufficiently fast response time for real-time operation. However, it necessitates a fundamental change in the modeling algorithm to take full advantage of parallel computing. Such a methodology to simulate traffic networks with the Connection Machine, a massively parallel computer, is described. The basic parallel computing architectures are introduced, along with a list of commercially available parallel computers. This is followed by an in-depth presentation of the proposed simulation methodology with a massively parallel computer. The proposed traffic simulation model has an inherent path-processing capability to represent drivers' route choice behavior at the individual-vehicle level (a sketch of this style follows this excerpt).

… of these ATMS developments would ensure optimal networkwise performance. The anticipated benefits of these control methods depend on the complex interactions among principal traffic system components. These include driver behavior, level of congestion, dynamic nature of traffic patterns, and the network's geometric configuration. It is crucial to the design of these strategies that a comprehensive understanding of the complex interrelations between these key system components be established. Because it is often difficult for theoretical formulations to take all such complexities into account, traffic …
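A sketch of the individual-vehicle, data-parallel style the abstract describes: one parallel element per vehicle, all updated simultaneously. The fields and update rule are our own illustration (and in CUDA rather than Connection Machine code):

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per vehicle, mirroring the per-vehicle path processing
    // the paper describes; route choice is reduced to a placeholder.
    struct Vehicle { float pos; float speed; int link; };

    __global__ void step(Vehicle *v, int n, float dt) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        v[i].pos += v[i].speed * dt;      // advance along the current link
        if (v[i].pos > 1.0f) {            // reached the end of the link:
            v[i].pos -= 1.0f;
            v[i].link += 1;               // route-choice logic would pick the next link here
        }
    }

    int main() {
        int n = 100000;
        Vehicle *v;
        cudaMallocManaged(&v, n * sizeof(Vehicle));
        for (int i = 0; i < n; ++i) v[i] = {0.0f, 0.01f * (i % 10 + 1), 0};
        for (int t = 0; t < 100; ++t)     // 100 simulation steps
            step<<<(n + 255) / 256, 256>>>(v, n, 1.0f);
        cudaDeviceSynchronize();
        printf("vehicle 0: link %d, pos %.2f\n", v[0].link, v[0].pos);
        cudaFree(v);
        return 0;
    }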
  • Massively Parallel Message Passing: Time to Start from Scratch?
Massively Parallel Message Passing: Time to Start from Scratch? Anthony Skjellum, PhD. University of Alabama at Birmingham, Department of Computer and Information Sciences, College of Arts and Sciences, Birmingham, AL, USA. [email protected]

Abstract. Yes: New requirements have invalidated the use of MPI and other similar library-type message passing systems at full scale on systems of 100's of millions of cores. Uses on substantial subsets of these systems should remain viable, subject to limitations. No: If the right set of implementation and application decisions are made, and runtime strategies put in place, existing MPI notations can be used within exascale systems at least to a degree. Maybe: Implications of instantiating "MPI worlds" as islands inside exascale systems cannot damage overall scalability too much, or they will inhibit exascalability, and thus fail. (A sketch of the island idea follows this excerpt.)

… primitive abstractions do not do enough to abstract how operations are performed compared to what is to be performed. Extensions to and attempts to standardize with distributed shared memory (cf. MPI-2 one-sided primitives [10]) have been included, but are currently somewhat incompatible with PGAS languages and systems [15], and furthermore MPI isn't easy to mutually convert with BSP-style notations or semantics either [16]. Connections with publish-subscribe models [17] used in military / space-based fault-tolerant message passing systems are absent (see also [18]). The hypothesis of this paper is that new requirements arising in exascale computation have at least partly invalidated the use of the parallel model expressed and implied by MPI and other similar library-type message passing systems at full scale on systems …

Categories and Subject Descriptors: F.1.2.
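A minimal sketch of the "MPI worlds as islands" idea: partitioning the global communicator into independent sub-communicators so that most traffic stays island-local. The calls are standard MPI; the island size is an arbitrary assumption:

    #include <mpi.h>
    #include <stdio.h>

    /* Split the global world into fixed-size "islands"; communication
     * within an island never touches the full-scale system. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int ISLAND = 1024;              /* ranks per island (assumption) */
        MPI_Comm island;
        MPI_Comm_split(MPI_COMM_WORLD, rank / ISLAND, rank, &island);

        int irank, isize;
        MPI_Comm_rank(island, &irank);
        MPI_Comm_size(island, &isize);
        printf("global %d/%d -> island rank %d/%d\n", rank, size, irank, isize);

        MPI_Comm_free(&island);
        MPI_Finalize();
        return 0;
    }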
  • An Event-Driven Massively Parallel Fine-Grained Processor Array
An Event-Driven Massively Parallel Fine-Grained Processor Array. Declan Walsh and Piotr Dudek. School of Electrical and Electronic Engineering, The University of Manchester, UK. [email protected]; [email protected]

Abstract—A multi-core event-driven parallel processor array design is presented. Using relatively simple 8-bit processing cores and a 2D mesh network topology, the architecture focuses on reducing the area occupation of a single processor core. A large number of these processor cores can be implemented on a single integrated chip to create a MIMD architecture capable of providing powerful processing performance. Each processor core is an event-driven processor which can enter an idle mode when no data is changing locally (a sketch of this behaviour follows this excerpt). An 8 × 8 prototype processor array is implemented in a 65 nm CMOS process in 1,875 µm × 1,875 µm. This processor array is capable of performing 5.12 GOPS operating at 80 MHz with an average power consumption of 75.4 mW.

… connected to its four nearest direct neighbours, and only local communications are performed. This network topology reduces the long wire interconnects, which also reduces power and enables operation at higher frequencies. SIMD implementations are useful for performing tasks which require data-level parallelism. However, for tasks which require more complex processing than simple pixel-parallel operators, e.g., event-driven networks [12], message-passing inference networks [13], or even some low-level image processing operations [14, 15], SIMD processing may not be optimal and a more flexible MIMD (Multiple Instruction Multiple Data) solution is more suitable.
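A sketch of the event-driven idle behaviour described above. The wait_for_local_change primitive is hypothetical, standing in for the hardware wake-up mechanism:

    #include <stdbool.h>
    #include <stdio.h>

    /* Model of one event-driven core: it sleeps until a neighbour's
     * output changes, recomputes, and wakes its own neighbours only
     * if its result actually changed. */
    typedef struct { int north, south, east, west, out; } Core;

    static bool wait_for_local_change(Core *c) {
        /* Stub for the hardware event mechanism: would block (consuming
         * no cycles) until a neighbour input changes. */
        (void)c;
        return false;   /* no events arrive in this standalone sketch */
    }

    static void run_core(Core *c) {
        while (wait_for_local_change(c)) {
            int next = (c->north + c->south + c->east + c->west) / 4; /* example local op */
            if (next != c->out) {
                c->out = next;          /* only a real change propagates */
                /* notify_neighbours(c); -- hypothetical propagation hook */
            }
        }
    }

    int main(void) {
        Core c = {0, 0, 0, 0, 0};
        run_core(&c);
        printf("core idle, out = %d\n", c.out);
        return 0;
    }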
  • A Massively Parallel Digital Learning Processor
A Massively Parallel Digital Learning Processor. Hans Peter Graf, Srihari Cadambi, Igor Durdanovic, Venkata Jakkula, Murugan Sankardadass, Eric Cosatto, and Srimat Chakradhar. NEC Laboratories America, 4 Independence Way, Suite 200, Princeton, NJ 07738, USA.

Abstract. We present a new, massively parallel architecture for accelerating machine learning algorithms, based on arrays of vector processing elements (VPEs) with variable-resolution arithmetic. Groups of VPEs operate in SIMD (single instruction multiple data) mode, and each group is connected to an independent memory bank. The memory bandwidth thus scales with the number of VPEs, while the main data flows are local, keeping power dissipation low (a sketch of this organization follows this excerpt). With 256 VPEs, implemented on two FPGA (field programmable gate array) chips, we obtain a sustained speed of 19 GMACS (billion multiply-accumulates per second) for SVM training, and 86 GMACS for SVM classification. This performance is more than an order of magnitude higher than that of any FPGA implementation reported so far. The speed on one FPGA is similar to the fastest speeds published on a graphics processor for the MNIST problem, despite a clock rate that is an order of magnitude lower. Tests with Convolutional Neural Networks show similar compute performance. This massively parallel architecture is particularly attractive for embedded applications, where low power dissipation is critical.

1 Introduction. Machine learning demands higher and higher compute performance, but serial processors are not improving that much anymore; at least not as quickly as they used to.
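A sketch of the banked SIMD organization above: groups of VPEs run in lockstep, each group reading its own memory bank, so bandwidth scales with the number of VPEs while data flows stay bank-local. Sizes and names are our own illustration:

    #include <stdio.h>

    #define GROUPS 4    /* independent memory banks, one per VPE group */
    #define VPES   64   /* vector processing elements per group (SIMD) */
    #define LEN    256  /* elements each VPE multiply-accumulates */

    static float bank_w[GROUPS][VPES * LEN];   /* weights: one bank per group */
    static float bank_x[GROUPS][VPES * LEN];   /* inputs:  one bank per group */

    int main(void) {
        for (int g = 0; g < GROUPS; ++g)
            for (int k = 0; k < VPES * LEN; ++k) { bank_w[g][k] = 0.5f; bank_x[g][k] = 2.0f; }

        float acc[GROUPS][VPES] = {{0}};
        for (int g = 0; g < GROUPS; ++g)        /* groups run independently */
            for (int t = 0; t < LEN; ++t)       /* SIMD step t */
                for (int v = 0; v < VPES; ++v)  /* all VPEs in a group, in lockstep */
                    acc[g][v] += bank_w[g][v * LEN + t] * bank_x[g][v * LEN + t];
        printf("acc[0][0] = %.1f\n", acc[0][0]);  /* 256 * 0.5 * 2.0 = 256.0 */
        return 0;
    }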