Lessons from Massively Parallel Applications on Message Passing Computers


Syracuse University SURFACE
Northeast Parallel Architecture Center, College of Engineering and Computer Science
1992

Geoffrey C. Fox, Syracuse University, Northeast Parallel Architectures Center

Follow this and additional works at: https://surface.syr.edu/npac
Part of the Computer Sciences Commons

Recommended Citation
Fox, Geoffrey C., "Lessons from Massively Parallel Applications on Message Passing Computers" (1992). Northeast Parallel Architecture Center. 72. https://surface.syr.edu/npac/72

This article is brought to you for free and open access by the College of Engineering and Computer Science at SURFACE. It has been accepted for inclusion in Northeast Parallel Architecture Center by an authorized administrator of SURFACE. For more information, please contact [email protected].

Lessons from Massively Parallel Applications on Message Passing Computers

Geoffrey C. Fox
Syracuse University
Northeast Parallel Architectures Center
111 College Place
Syracuse, New York 13244-4100

Abstract

We review a decade's work on message passing MIMD parallel computers in the areas of hardware, software and applications. We conclude that distributed memory parallel computing works, and describe the implications of this for future portable software systems.

1 Introduction

We start with a nostalgic note. The 1984 COMPCON conference was my first opportunity to discuss our early hypercube results from Caltech [1], based on the software and science applications we built for C. Seitz's 64-node Cosmic Cube, which started "production" runs on Quantum Chromodynamics (QCD) in October 1983. That first MIMD machine offered only two megaflops of performance, ten times better than the VAX 11/780 we were using at the time. However, the basic parallelization issues remain similar in the 1991 six-gigaflop QCD implementation on the full-size 64K CM-2. What have we and others learned in the succeeding eight years, while the parallel hardware has evolved impressively, with in particular a factor of 3000 improvement in performance?

There is certainly plenty of information! In 1989, I surveyed some four hundred papers describing parallel applications [2], [3]; now the total must be over one thousand. A new complete survey is too daunting for me. However, my personal experience, and I believe the lesson of the widespread international research on message passing parallel computers, has a clear message:

The message passing computational model is very powerful and allows one to express essentially all large scale computations and execute them efficiently on distributed memory SIMD and MIMD parallel machines.

Less formally, one can say that parallel computing works, or, more controversially but accurately in my opinion, that "distributed memory parallel computing works". In the rest of this paper, we will dissect this assertion and suggest that it has different implications for hardware, software and applications. Formally, we relate these as shown in Figure 1 by viewing computation as a series of maps; software is an expression of the map of the problem onto the machine. In Section 2, we review a classification of problems described in more detail in [3], [4], [5], [6], [7], [8]. In the following three sections, we draw lessons for applications, hardware, and software, and quantify our assertion above about message passing parallel systems.
2 Problem Architecture

Problems, like computers, have architectures. Both are large, complex collections of objects. A problem will perform well when mapped onto a computer if their architectures match well. This loose statement will be made more precise in the following, but not completely in this brief paper. At a coarse level, we like to introduce five broad problem classes, which are briefly described in Table 1. These can and should be refined, but that is not necessary here. Thus, as described in Table 1, we do need to differentiate the application equivalent of the control structure (SIMD and MIMD) for computers. However, details such as the topology (hypercube, mesh, tree, etc.) are important for detailed performance estimates but not for the general conclusions of this paper. Note that the above implies that problems and computers both have a topology. We will use the classification of Table 1 in the following sections, which will also expand and exemplify its brief definitions.

Table 1: Five Problem Architectures

Synchronous: Data parallel. Tightly coupled, and software needs to exploit features of problem structure to get good performance. Comparatively easy, as different data elements are essentially identical.

Loosely Synchronous: As above, but data elements are not identical. Still parallelizes due to macroscopic time synchronization.

Asynchronous: Functional (or data) parallelism that is irregular in space and time. Often loosely coupled, so one need not worry about optimal decompositions to minimize communication. Hard to parallelize (massively) unless ...

Embarrassingly Parallel: Independent execution of disconnected components.

A=LS (Loosely Synchronous Complex): Asynchronous collection of (loosely) synchronous components, where these programs themselves can be parallelized.

3 Applications

Let us give some examples of the five problem architectures.

Synchronous: These are regular computations on regular data domains, exemplified by full matrix algorithms such as LU decomposition, finite difference algorithms, and convolutions such as the fast Fourier transform.

Loosely Synchronous: These are typified by iterative calculations (or time evolutions) on geometrically irregular and perhaps heterogeneous data domains. Examples are irregular mesh finite element problems and inhomogeneous particle dynamics.

Asynchronous: These are characterized by a temporal irregularity which makes parallelization hard. An important example is event-driven simulation, where events, as in a battlefield simulation, occur in a spatially distributed fashion but irregularly in time. Branch and bound and other pruned tree algorithms common in artificial intelligence, such as computer chess [9], fall in this category.
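To make the synchronous and loosely synchronous pattern concrete before the scaling analysis that follows, the sketch below shows the alternating communication and calculation phases of a one-dimensional finite-difference relaxation under domain decomposition: each node owns a strip of the grid and swaps one-cell halos with its two neighbours every iteration. This is an illustrative reconstruction in present-day MPI, which postdates this 1992 paper; the grid size, update rule, and iteration count are arbitrary choices, not taken from the paper.

/* Loosely synchronous pattern: each iteration is a calculation phase on the
 * locally owned strip, preceded by a communication phase that exchanges one
 * halo cell with each neighbouring node. Build with, e.g.: mpicc relax.c */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define LOCAL_N 1000   /* grain size n: grid points stored on each node (arbitrary) */

int main(int argc, char **argv)
{
    double u[LOCAL_N + 2], unew[LOCAL_N + 2];   /* interior plus two halo cells */
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int left  = (rank == 0)          ? MPI_PROC_NULL : rank - 1;
    int right = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;

    for (int i = 0; i < LOCAL_N + 2; i++)
        u[i] = (double)(rank * LOCAL_N + i);    /* arbitrary initial data */

    for (int iter = 0; iter < 100; iter++) {
        /* Communication phase: swap halo cells with the two neighbours. */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[LOCAL_N], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Calculation phase: the same update on every locally owned point. */
        for (int i = 1; i <= LOCAL_N; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
        memcpy(&u[1], &unew[1], LOCAL_N * sizeof(double));
    }

    if (rank == 0)
        printf("completed 100 iterations on %d nodes, %d points each\n",
               nprocs, LOCAL_N);
    MPI_Finalize();
    return 0;
}

The time steps themselves provide the macroscopic synchronization that Table 1 attributes to these classes; no explicit barrier is needed, and the same structure covers the loosely synchronous case when the per-node work is irregular.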
Synchronous and loosely synchronous problems parallelize naturally in a fashion that scales to large computers with many nodes. One only requires that the application be "large enough", which can be quantified by a detailed performance analysis [10] and which was discussed quite accurately in my original COMPCON paper [1]. The speedup on a computer with N nodes is

S = N / (1 + f_c)    (1)

where the overhead f_c has a term due to communication of the form

f_c ∝ (1 / n^{1/d}) (t_comm / t_calc)    (2)

Here t_comm and t_calc are respectively the typical node-to-node communication time and the node (floating point) calculation time, n is the application grain size, and d is its dimension, which is defined precisely in [10]; in physical simulations, d is usually the geometric dimension. Good performance requires 1/n^{1/d} to be "small", with a value that depends on the critical machine parameter t_comm/t_calc. The grain size n would be the number of grid points stored on each node in a finite difference problem, so that the complete problem has N*n grid points. For example, a two-dimensional problem (d = 2) with n = 10^4 grid points per node and a t_comm/t_calc ratio of order one gives an f_c of order 0.01, so the speedup is close to N.

Implicit in the above discussion is that these problems are "data parallel" in the language of Hillis [11], [12]. This terminology is sometimes associated only with problems run on SIMD machines, but in fact data parallelism is the general origin of massive parallelism on either SIMD or MIMD architectures. MIMD machines are needed for loosely synchronous data parallel problems, where we do not have a homogeneous algorithm with the same update operation on each data element.

The above analysis does not apply to asynchronous problems, as this case has additional synchronization overhead. One can, in fact, use message passing to naturally synchronize synchronous or loosely synchronous problems. These typically divide into communication and calculation phases, as given by individual iterations or time steps in a simulation. These phases define an algorithmic synchronization common to the entire application. This is lacking in asynchronous problems, which require sophisticated parallel software support such as that given by the Time Warp system [13].

Numerical Formulation of Problem --("compiler")--> Virtual Machine (Virtual Problem) --("assembler")--> Real Machine (Specific Parallel Computer)    (3)

However, there is a very important class of asynchronous applications for which large scale parallelization is possible. These we call loosely synchronous complex, as they consist of an asynchronous collection of loosely synchronous (or synchronous) modules. A good example, shown in Figure 2, is the simulation of a satellite-based defense system. Viewed at the level of the satellites, we see an asynchronous application. However, the modules are not elemental events but rather large scale data parallel subsystems. In a simulation developed by JPL, these modules included sophisticated Kalman filters and target-weapon association [14]. This problem class shows a functional parallelism at the module level and a conventional data parallelism within the modules. The latter ensures that large problems of this class will parallelize on large machines. Image analysis, vision, and other large information processing or command and control problems fall in the loosely synchronous complex class.

A final problem class of practical importance is termed "embarrassingly parallel". These consist of a set of independent calculations. This is seen in parts of many chemistry calculations, where one can independently compute the separate matrix elements of the Hamiltonian.
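The embarrassingly parallel class needs no communication at all while the independent pieces are being computed; at most, the partial results are combined once at the end. Below is a minimal sketch in the same spirit, with a hypothetical element() function standing in for an independently computable quantity such as a matrix element; the function, the element count, and the final reduction are illustrative assumptions, not details from the paper.

/* Embarrassingly parallel pattern: disjoint blocks of independent elements,
 * with a single collective at the very end. Build with: mpicc indep.c -lm */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical stand-in for an independently computable quantity. */
static double element(long i)
{
    return sin((double)i) * cos((double)i / 3.0);
}

int main(int argc, char **argv)
{
    const long total = 1L << 20;   /* number of independent elements (arbitrary) */
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each node takes a contiguous block of indices; the blocks are disjoint,
     * so no messages are exchanged during the computation itself. */
    long per_node = (total + nprocs - 1) / nprocs;
    long begin = (long)rank * per_node;
    long end = (begin + per_node > total) ? total : begin + per_node;

    double local_sum = 0.0;
    for (long i = begin; i < end; i++)
        local_sum += element(i);

    /* The only communication: combine the independent results once. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("combined result over %ld elements: %g\n", total, global_sum);
    MPI_Finalize();
    return 0;
}

Since the only message traffic is one reduction at the end, the speedup of this class is limited by load balance rather than by the t_comm/t_calc ratio that governs the synchronous and loosely synchronous classes.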