What Is Stream Processing?


What Is Stream Processing? Introduction to Stream Processing
Trond Hagen, SINTEF ICT, Applied Mathematics
Geilo, January 2008

Schedule, Thursday

09:00 - 09:40   Introduction to stream processing          Trond Hagen
09:50 - 10:30   Introduction to stream processing cont'd   Trond Hagen
15:00 - 15:40   CUDA programming                           Johan Seland
15:50 - 16:30   CUDA programming cont'd                    Johan Seland
17:00 - 18:30   Examples of applications                   André Brodtkorb, Johan Seland

Schedule, Friday

09:00 - 09:45   Introduction to Cell BE                    Trond Hagen
10:00 - 10:45   Programming Cell BE                        André Brodtkorb
11:00 - 12:00   "Birds of a feather" – parallel processing; summary and discussion   Johan Seland

Thursday Evening

Don't miss the quiz in the bar after the dinner! There is a chance to win a ~10 000 NOK HPC graphics card sponsored by NVIDIA.

Outline

- Introduction to Stream Processing
- Introduction to Graphics Processing Units (GPUs)
- GPU Architecture
- GPU Programming Models
- Examples
- Looking Forward

What Is Stream Processing?

- A stream is a set of input and output data.
- Stream processing is a series of operations (kernel functions) applied to each element in a stream.
- Uniform streaming is most typical: one kernel at a time is applied to all elements of the stream.
- This is Single Instruction, Multiple Data (SIMD) processing.

Instruction-Based Processing

[Diagram: instructions drive the processor; memory operands (data) move between main memory and the processor through a cache.]

During processing, the data required for an instruction's execution is loaded into the cache, if not already present. This is a very flexible model, but it has the disadvantage that the data sequence is completely driven by the instruction sequence, yielding inefficient performance for uniform operations on large data blocks.

Data Stream Processing

[Diagram: a pipeline configuration is loaded into the processor; data then streams from memory through the processor's pipelines and back to memory.]

The processor is first configured with the instructions that need to be performed; in the next step, a data stream is processed. The execution is distributed among several pipelines.

Different Computing Paradigms

Data stream processing is advantageous when large data blocks undergo the same operation, because this allows memory-efficient streaming and parallel processing of the data.

Example: matrix-matrix addition, C = A + B:

    // instruction based
    for (i = 0; i < numRows; i++)
        for (j = 0; j < numCols; j++)
            C[i][j] = A[i][j] + B[i][j];

    // data stream
    setInputArrays(A, B);
    setOutputArrays(C);
    loadKernel("return a+b");
    execute();
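For concreteness, a minimal sketch of what the data-stream column looks like in CUDA, which is introduced later in these slides; the matrix is treated as one flat stream of elements, and all names here are illustrative rather than taken from the slides. Note that the loop over elements disappears: the kernel body is the slide's "return a+b", applied once per element by the thread grid.

    // Uniform streaming: one kernel applied to every element of the stream (SIMD).
    __global__ void addKernel(const float* a, const float* b, float* c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n)                 // guard: the grid may hold more threads than elements
            c[i] = a[i] + b[i];    // the slide's kernel: "return a+b"
    }

    // Host-side launch configuration is covered in the CUDA section below:
    // addKernel<<<numBlocks, threadsPerBlock>>>(a, b, c, n);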
Hardware Evolution

"The number of transistors on an integrated circuit for minimum component cost doubles every 24 months." – Moore's Law

- Moore's law still seems to hold true.
- Increases in transistor density have driven roughly proportional increases in processor performance.
- Performance gains have also relied heavily on increases in clock frequency, which is closely related to transistor size: smaller transistors can switch faster.

Hardware Evolution (cont'd)

- Higher frequency gives easy and instant performance benefits: software runs faster without any modification. This has allowed the computing industry to realize increasing value from its existing code base with relatively little effort.
- In recent years, however, we have not seen the traditional increase in core frequency, because increasing the frequency has several implications:

Memory Problem

- Memory speeds have not increased as fast as core frequencies. A processor can wait through hundreds of clock cycles if it has to fetch data or instructions from main memory.
- Larger caches combined with instruction-level parallelism can reduce the memory-wait time.

Longer Instruction Pipelines

- Longer instruction pipelines allow for higher core frequencies, but they increase the cache-miss penalty and lower the number of completed instructions per cycle.
- The Intel Pentium 4 Prescott had a pipeline of 31 stages and a frequency above 3 GHz; the industry now seems to be converging on 9 to 11 stages.

Power Consumption Problem

- When the frequency increases, the power consumption increases disproportionately. The dependency between core frequency and power consumption is often said to be quadratic.

Multi-Core Processors

- Multi-core designs can take advantage of the continuing increase in transistor density.
- Doubling the number of cores and halving the frequency gives roughly the same performance, while the power consumption is reduced drastically.
- Adding cores therefore makes it possible to get higher performance without increasing the power density.
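A back-of-the-envelope check of that claim. The slides only state the quadratic rule of thumb; the cubic variant below additionally assumes that the supply voltage is scaled down together with the frequency, which is the standard CMOS dynamic-power argument:

    P_{\text{dyn}} \approx C V^2 f, \qquad V \propto f \;\Rightarrow\; P_{\text{dyn}} \propto f^3

    \text{throughput}: \; 2 \times \tfrac{f}{2} = f \quad (\text{unchanged})
    \text{power}: \; 2 \times \left(\tfrac{1}{2}\right)^3 = \tfrac{1}{4}
    \quad \left(\text{or } 2 \times \left(\tfrac{1}{2}\right)^2 = \tfrac{1}{2} \text{ with the quadratic rule of thumb}\right)

Either way, two cores at half the frequency deliver the same aggregate throughput at a fraction of the power.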
Intel Core 2 Duo

Two "fat" cores.

General-Purpose CPUs

- Roughly 50 % of the die area is cache.
- In addition, a lot of the area is used for instruction-level parallelism:
  - instruction pipelining (execution of multiple instructions can be partially overlapped),
  - superscalar execution (parallel instruction pipelines),
  - branch prediction (predicting the outcome of decisions),
  - out-of-order execution.
- The number of transistors doing direct computation is shrinking relative to the total number of transistors.

Solution: Heterogeneous Architectures?

- Heterogeneous computing is the strategy of using multiple types of processing elements within a single workflow, allowing each to perform the tasks to which it is best suited.
- One does not need only "fat" general-purpose cores. Instead, one can design smaller and simpler computational units that can perform special tasks, e.g. floating-point operations, very effectively.
- The two most important heterogeneous systems are Graphics Processing Units and the Cell BE.

Cell Broadband Engine

One "fat" core and eight "thin" cores.

Graphics Processing Units (GPUs)

What Is a GPU?

- A graphics rendering device, efficient at displaying computer graphics; all PCs have one (e.g. the NVIDIA GeForce 8800 GTX).
- The highly parallel architecture of GPUs makes them very effective for a range of complex algorithms.
- Their development is driven by the demand for increased realism in games.

GPU Design Premises

- Games typically render tens of thousands of triangles at 60 fps, at a typical screen resolution of 1600 x 1200, and each pixel is recalculated every frame.
- A pixel loosely corresponds to a lightweight thread, so this corresponds to creating, running and destroying 1600 x 1200 x 60 = 115 200 000 threads per second.
- GPUs are designed to make these operations fast.

State-of-the-Art Real-Time Rendering

[Image slides.]

GPU Versus CPU Performance

[Chart: peak performance in GFLOPS (0-600) over time for ATI and NVIDIA GPUs versus Intel CPUs.]

GPU vs. CPU Performance (cont'd)

- No. 1 on the Top500 list of November 2007: BlueGene/L, 212 992 processors, ~596 TFLOPS.
- That is theoretically the same processing power as ~1150 high-end GPUs.
- http://www.top500.org

Power/Performance Ratio

"If the performance per watt of today's computers doesn't improve, the electrical costs of running them could end up far greater than the initial hardware price tag." – Google

- CPU (Intel Core 2 Duo "Conroe"): 65 W / 20 GFLOPS = 3.25 W/GFLOPS
- GPU: 150 W / 500 GFLOPS = 0.3 W/GFLOPS
- Cell BE: 70 W / 250 GFLOPS = 0.28 W/GFLOPS (5 W per SPE)

State-of-the-Art Real-Time Rendering (cont'd)

[Image slides.]

3D Navier-Stokes Fluid Simulator

[Image slides.]

Deformable Skin Demo

[Image slides.]

More Simulation and Physics in Games

- Human head: a sum-of-Gaussians algorithm for realistic real-time images of human skin; a Gaussian convolution filter for blurring and image processing.
- Smoke: a fluid simulator for smoke; future games will feature more realistic simulations.
- Frog: real-time mesh deformation in combination with rendering is not possible on a CPU; real-time simulations require heavy computational power.

GPU Architecture

Overall System Architecture

[Diagram: the CPU connects to the north bridge over a 1333 MHz front-side bus (12.0 GB/s); system DRAM is attached at 12.0 GB/s; the GPU is attached via PCIe (4 GB/s), drives the display, and reaches its own graphics DRAM at 86 GB/s (GeForce 8800 GTX); the south bridge serves other peripherals.]

NVIDIA GeForce 8800 GTX

- Price: 3 000 NOK
- ~520 GFLOPS performance
- Core clock: 675 MHz
- 681 million transistors
- Internal bandwidth: 86.4 GB/s
- 128 stream processors at 1.35 GHz
- Memory: 768 MB GDDR3 at 1.8 GHz
- Programmable in a high-level language

GeForce 8800 Architecture Overview

[Image courtesy of rage3d.com.]

GeForce 8800 Architecture Overview (cont'd)

- 16 stream processors in each multiprocessor, 128 stream processors in total.
- The L1 cache is shared between all stream processors in a multiprocessor.

[Image courtesy of rage3d.com.]

GPU Programming Models

Traditional Problems with GPU Programming

- The user must be an expert in computer graphics and its computational idioms to make effective use of the hardware.
- The programming model is unusual.
- Debugging is difficult.
- There are few libraries.

Graphics Processing Pipeline

- This is the abstraction the user is exposed to by the DirectX and OpenGL graphics APIs.
- It is hard to understand for non-graphics programmers.

Compute Unified Device Architecture (CUDA)

- A software interface, developed by NVIDIA, for utilizing the GPU as a data-parallel computing device. There is no need for a graphics API.
- Available for NVIDIA GeForce 8000-series GPUs and the HPC Tesla series.

CUDA Software Layers

[Diagram: on the CPU side, the application sits on top of the CUDA libraries, the CUDA runtime and the CUDA driver, which in turn talks to the GPU.]
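As a small illustration of the runtime layer (not from the slides), a minimal, self-contained program that asks the CUDA runtime about the installed devices; compiled with nvcc, it needs no graphics API. The printed fields are members of the runtime's cudaDeviceProp structure.

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);            // devices visible to the CUDA runtime
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("%s: %d multiprocessors @ %.2f GHz, %lu MB\n",
                        prop.name,
                        prop.multiProcessorCount,
                        prop.clockRate / 1.0e6,  // clockRate is reported in kHz
                        (unsigned long)(prop.totalGlobalMem >> 20));
        }
        return 0;
    }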
CUDA Programming Model

- The GPU is viewed as a device that operates as a coprocessor to the CPU (the host).
- A kernel is a function that is executed on the device as many different threads, all performing the same type of operations.
- The batch of threads that executes a kernel is organized as a grid of thread blocks.
- A thread block is a batch of threads that can communicate with each other. All threads in a block run on the same multiprocessor.
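Putting the model together, a minimal, self-contained sketch of a complete CUDA program (illustrative, not from the slides): the host allocates device memory, copies the input streams over, launches a grid of thread blocks that all run the same kernel, and copies the result back.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // The kernel: every thread computes one element of c = a + b.
    __global__ void add(const float* a, const float* b, float* c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host-side input and output arrays.
        float* ha = (float*)malloc(bytes);
        float* hb = (float*)malloc(bytes);
        float* hc = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device-side copies.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch a grid of thread blocks: 256 threads per block,
        // with enough blocks to cover all n elements.
        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        add<<<blocks, threadsPerBlock>>>(da, db, dc, n);

        // Copying the result back implicitly waits for the kernel to finish.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        std::printf("c[0] = %.1f\n", hc[0]);   // prints 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Note that the kernel contains no loop over the data: the grid supplies one thread per element, which is exactly the uniform-streaming model from the beginning of these slides.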