Near Data Processing: Are We There Yet?

MAYA GOKHALE, LAWRENCE LIVERMORE NATIONAL LABORATORY

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract No. DE-AC52-07NA27344. LLNL-PRES-665279

OUTLINE

§ Why we need near memory computing
§ Niche applications
§ Data reorganization engines
§ Computing near storage
§ FPGAs for computing near memory

WHY AREN'T WE THERE YET?

§ Processing near memory is attractive for high bandwidth, low latency, and massive parallelism
§ 90's-era designs closely integrated logic with the DRAM transistor process
  § Expensive
  § Slow
  § Niche
§ Is 3D packaging the answer?
  § Expensive
  § Slow
  § Niche
§ What are the technology and commercial incentives?

EXASCALE POWER PROBLEM: DATA MOVEMENT IS A PRIME CULPRIT

• The cost of a double-precision flop is negligible compared to the cost of reading and writing memory
• Cache-unfriendly applications waste memory bandwidth and energy
• The number of nodes needed to solve a problem is inflated by poor memory bandwidth utilization
• A similar phenomenon occurs with disk

Sources: Horst Simon, LBNL; Greg Astfalk, HP

MICRON HYBRID MEMORY CUBE OFFERS OPPORTUNITY FOR PROCESSING NEAR MEMORY

• Fast logic layer
  • Customized processors
  • Through-silicon vias (TSVs) for high bandwidth access to memory
• Vault organization yields enormous bandwidth in the package
• Best-case latency is higher than traditional DRAM due to the packetized abstract memory interface

HMC LATENCY: LINK i TO VAULT i, 128B PAYLOADS, 50/50 READ/WRITE MIX

[Figure: latency with one link active, accessing its local vault.]

HMC LATENCY: LINK i TO VAULT j, 128B PAYLOADS, 50/50 READ/WRITE MIX

[Figure: latency with one link active, showing the effect of routing requests through the communication network.]

HMC: WIDE VARIABILITY IN LATENCY WITH CONTENTION

[Figure: latency distributions for requests on links 1-4 for data on vaults 0-4 (link 0 local).]

HMC SHOWS HIGH BANDWIDTH FOR REGULAR, STREAMING ACCESS PATTERNS, BUT...

[Figure: bandwidth for 128B, 64B, 32B, and 16B payloads.] Small payloads and irregular accesses cut bandwidth by a factor of 3.
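The small-payload penalty follows directly from the packetized interface: every request and response packet spends link FLITs on a header and tail regardless of payload size. The sketch below works through that arithmetic; it assumes 16-byte FLITs and one FLIT of combined header/tail overhead per packet (an HMC-like convention, with the exact overhead depending on specification revision and packet type):

#include <stdio.h>

/* Link efficiency of a packetized memory interface: useful payload
 * FLITs divided by total FLITs on the wire. Assumes 16-byte FLITs and
 * one FLIT of header/tail overhead per packet (an HMC-like convention;
 * the actual overhead varies by spec revision and packet type). */
#define FLIT_BYTES     16
#define OVERHEAD_FLITS 1

static double link_efficiency(int payload_bytes)
{
    /* FLITs carrying data, rounded up to whole FLITs */
    int data_flits = (payload_bytes + FLIT_BYTES - 1) / FLIT_BYTES;
    return (double)data_flits / (data_flits + OVERHEAD_FLITS);
}

int main(void)
{
    int payloads[] = { 128, 64, 32, 16 };
    for (int i = 0; i < 4; i++)
        printf("%3dB payload: %.0f%% of peak link bandwidth\n",
               payloads[i], 100.0 * link_efficiency(payloads[i]));
    return 0;
}

Under these assumptions a 16B request reaches only half of the peak link bandwidth before any contention or access-pattern effects are counted; irregular access patterns compound the loss, consistent with the factor-of-3 reduction observed above.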
whicheight PIM performs array units,only many-to-one with 4K processors communication, per array unit, the (under mask control) or selected for recirculation. Additionally, data can be sent to other processorsThea via Sun the processor Sparc-2PARTITIONED processor,canOR inputNETWORK. data In contrast through to the GOR, a multiplexer PORgiving network a maximum can be usedconfiguration for many-to-one of 32,768 or one-to-many PIM proces- routing network. an SBuswhich interface performs only card many-to-one residing communication, in the Sparcthe cabinet, sors. The processor can input data through a(MUX) multiplexer from POR networkeither can the be used parallel for many-to-one prefix or one-to-many network, the global communication among groups of processors. The 32K (MUX) from either the parallel prefix network,OR thea network, globalTerasys communication interface the partitioned among board, groups andofOR processors. network, The 32K or the internal processorsA data inparallel the Terasys program workstation resides can on be partitioned,the Sparc. IN MEMORYOR network, the partitioned OR network, or the internalARCHITECTURE processors in the Terasys workstation can be partitioned, masuregister control (see the “Network andmasuregister mask”one line or morestarting withcontrol PIM 2 processors array (see per units. partitionthe “Network and increasing andin mask” line startingConventional with 2 sequential processors instructions per partition are and executed increasing by the in at lower right in Figure 3). at lower right in Figure 3). Sparc. Operations on data parallel operands are conveyed Indirect addressing as memory writes through the Terasys interface board to Indirect addressing signifies an operation A[i], where each processor has its own instances ofA andIndirect i. To perform addressing PIM memory and cause the single bit ALUs to perform the this operation, the processors must access different mem- ory locations simultaneously. Indirect addressing signifies an operation A[i], where specifiedoperations at the specified row address across all We decided against hardware support for indirect Sparc-2 the columns of the memory. addressing in the current PIM implementationeach for processor two has its own instances ofA and i. To perform major reasons. First, introducing indirect addressingthis operation, reg- the processorsI must accessI different mem- An instruction written to the PIM array is actually a 12- isters on a per-processor basis would reduce the number of processors per chip. Second, the error detectionory locations and simultaneously.I Sbus interface 2 Kbits I bit address in a 4K entry table of horizontal microcode, correction circuitry,which operates on a row basis, would which is generated on a per-program basis. The selected not operate correctly in the presence of per-processorWe indi-decided against hardware supportExternal for indirect rect addressing registers. Thus, we optedaddressing for indirect in the current PIM implementationenclosure for two table entry is the real instruction, which is sent from the addressing through microcode. To make this operation................................................ efficient, we have the compiler emit array-boundmajor infor- reasons. First, introducing indirect addressing reg- Terasys interface board to the PIM array units. mation as part of an indexing operation. This allows the microcoded indexing subroutine to limit theisters number on of a per-processor basis? 
INDIRECT ADDRESSING

Indirect addressing signifies an operation A[i], where each processor has its own instances of A and i. To perform this operation, the processors must access different memory locations simultaneously.

We decided against hardware support for indirect addressing in the current PIM implementation for two major reasons. First, introducing indirect addressing registers on a per-processor basis would reduce the number of processors per chip. Second, the error detection and correction circuitry, which operates on a row basis, would not operate correctly in the presence of per-processor indirect addressing registers. Thus, we opted for indirect addressing through microcode. To make this operation efficient, we have the compiler emit array-bound information as part of an indexing operation. This allows the microcoded indexing subroutine to limit the number of locations to query to the number of array elements rather than the entire 2K bits.

In indirect addressing, each processor holds its private instance of an array. The application can also use a single array A in Sparc conventional memory, which is shared by all PIM processors. Each processor can index this shared table with its unique index i that is in PIM memory. An optimized microcode routine, which is a factor of 8 faster than the more general per-processor table lookup, is provided for this operation in a Fortran-based library called Passwork (Parallel SIMD Workbench).
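The microcoded lookup described above can be pictured as a lockstep scan: the host broadcasts each candidate location j, and every processor latches A[j] only when j matches its private index i. Below is a sketch under that assumption; it is an illustrative C emulation with hypothetical names (idx, A, result, indexed_load, NELEM), not the actual horizontal microcode, which operates on bit-serial memory rows:

#include <stdio.h>
#include <stdint.h>

#define NPROC 32768
#define NELEM 64      /* compiler-emitted array bound (illustrative) */

static uint32_t idx[NPROC];       /* private index i, one per processor */
static uint32_t A[NPROC][NELEM];  /* private array A, one per processor */
static uint32_t result[NPROC];

/* Emulated microcoded indirect addressing: the host broadcasts each
 * candidate location j; every processor compares j with its private
 * index and conditionally latches A[j] under mask control. The bound
 * NELEM limits the scan to live elements rather than the full 2K-bit
 * memory row. */
static void indexed_load(void)
{
    for (int j = 0; j < NELEM; j++)        /* broadcast candidate j    */
        for (int p = 0; p < NPROC; p++)    /* all processors, lockstep */
            if (idx[p] == (uint32_t)j)     /* mask: j matches this i?  */
                result[p] = A[p][j];
}

int main(void)
{
    for (int p = 0; p < NPROC; p++) {
        idx[p] = (uint32_t)(p % NELEM);            /* demo indices     */
        for (int j = 0; j < NELEM; j++)
            A[p][j] = (uint32_t)(10 * j);          /* demo array data  */
    }
    indexed_load();
    printf("result[5]=%u\n", (unsigned)result[5]); /* prints 50        */
    return 0;
}

The scan costs one broadcast per array element, which is exactly why the compiler-emitted bound matters: querying the n live elements instead of the entire 2K-bit row is the saving the text describes.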