OpenCL: A Parallel Programming Standard for Heterogeneous Computing Systems


NOVEL ARCHITECTURES
Editors: Volodymyr Kindratenko, [email protected]; Pedro Trancoso, [email protected]

By John E. Stone, David Gohara, and Guochun Shi

The OpenCL standard offers a common API for program execution on systems composed of different types of computational devices such as multicore CPUs, GPUs, or other accelerators.

The strong need for increased computational performance in science and engineering has led to the use of heterogeneous computing, with GPUs and other accelerators acting as coprocessors for arithmetic intensive data-parallel workloads.1–4 OpenCL is a new industry standard for task-parallel and data-parallel heterogeneous computing on a variety of modern CPUs, GPUs, DSPs, and other microprocessor designs.5 This trend toward heterogeneous computing and highly parallel architectures has created a strong need for software development infrastructure in the form of parallel programming languages and subroutine libraries that can support heterogeneous computing on multiple vendors' hardware platforms. To address this, developers adapted many existing science and engineering applications to take advantage of multicore CPUs and massively parallel GPUs using toolkits such as Threading Building Blocks (TBB), OpenMP, Compute Unified Device Architecture (CUDA),6 and others.7,8 Existing programming toolkits, however, were either limited to a single microprocessor family or didn't support heterogeneous computing.

OpenCL provides easy-to-use abstractions and a broad set of programming APIs based on past successes with CUDA and other programming toolkits. OpenCL defines core functionality that all devices support, as well as optional functionality for high-function devices; it also includes an extension mechanism that lets vendors expose unique hardware features and experimental programming interfaces for application developers' benefit. Although OpenCL can't mask significant differences in hardware architecture, it does guarantee portability and correctness. This makes it much easier for developers to start with a correctly functioning OpenCL program tuned for one architecture and produce a correctly functioning program optimized for another architecture.

The OpenCL Programming Model

In OpenCL, a program is executed on a computational device, which can be a CPU, GPU, or another accelerator (see Figure 1). Devices contain one or more compute units (processor cores). These units are themselves composed of one or more single-instruction multiple-data (SIMD) processing elements (PEs) that execute instructions in lock-step.

[Figure 1. OpenCL describes hardware in terms of a hierarchy of devices, compute units, and clusters of single-instruction multiple-data (SIMD) processing elements. Before becoming accessible to an application, devices must first be incorporated into an OpenCL context. OpenCL programs contain one or more kernel functions as well as supporting routines that kernels can use.]

By providing a common language and common programming interfaces and hardware abstractions, OpenCL lets developers accelerate applications with task- or data-parallel computations in a heterogeneous computing environment consisting of the host CPU and any attached OpenCL devices. Such devices might or might not share memory with the host CPU, and typically have a different machine instruction set. The OpenCL programming interfaces therefore assume heterogeneity between the host and all attached devices.

OpenCL's key programming interfaces include functions for

• enumerating available target devices (CPUs, GPUs, and various accelerators);
• managing the target devices' contexts;
• managing memory allocations;
• performing host-device memory transfers;
• compiling the OpenCL programs and kernel functions that the devices will execute;
• launching kernels on the target devices;
• querying execution progress; and
• checking for errors.

OpenCL Device Management

Although developers can compile and link OpenCL programs into binary objects using offline compilation methodology, OpenCL encourages runtime compilation that lets OpenCL programs run natively on the target hardware—even on platforms unavailable to the original software developer. Runtime compilation eliminates dependencies on instruction sets, letting hardware vendors significantly change instruction sets, drivers, and supporting libraries from one hardware generation to the next.2 Applications that use OpenCL's runtime compilation features will automatically take advantage of the target device's latest hardware and software features without having to recompile the main application itself.

Because OpenCL targets a broad range of microprocessor designs, it must support a multiplicity of programming idioms that match the target architectures. Although OpenCL guarantees kernel portability and correctness across a variety of hardware, it doesn't guarantee that a particular kernel will achieve peak performance on different architectures; the hardware's underlying nature might make some programming strategies more appropriate for particular platforms than for others. As an example, a GPU-optimized kernel might achieve peak memory performance when a single work-group's work-items collectively perform loads and stores, whereas a Cell-optimized kernel might perform better using a double buffering strategy combined with calls to async_workgroup_copy(). Applications select the most appropriate kernel for the target devices by querying the installed devices' capabilities and hardware attributes at runtime.

OpenCL Device Contexts and Memory

Before an application can compile OpenCL programs, allocate device memory, or launch kernels, it must first create a context associated with one or more devices. Because OpenCL associates memory allocations with a context rather than a specific device, developers should exclude devices with inadequate memory capacity when creating a context, otherwise the least-capable device will limit the maximum memory allocation. Similarly, they should exclude devices from a context if they don't support features that OpenCL programs require to run on that context.

Once a context is created, OpenCL programs can be compiled at runtime by passing the source code to OpenCL compilation functions as arrays of strings. After an OpenCL program is compiled, handles can be obtained for the kernel functions contained in the program. The kernels can then be launched on devices within the OpenCL context. OpenCL host-device memory I/O operations and kernels are executed by enqueueing them into one of the command queues associated with the target device.

OpenCL defines four types of memory systems that devices can incorporate:

• a large high-latency global memory,
• a small low-latency read-only constant memory,
• a shared local memory accessible from multiple PEs within the same compute unit, and
• a private memory, or device registers, accessible within each PE.

Devices can implement local memory using high-latency global memory, fast on-chip static RAM, or a shared register file. Applications can query device attributes to determine properties of the available compute units and memory systems and use them accordingly.

OpenCL and Modern Processor Architectures

State-of-the-art microprocessors contain several architectural features that, historically, have been poorly supported or difficult to use in existing programming languages. This has led vendors to create their own programming tools, language extensions, vector intrinsics, and subroutine libraries to close the programmability gap created by these hardware features. To help clarify the relationship between the OpenCL programming model and the diversity of potential target hardware, we compare the architectural characteristics of three exemplary microprocessor families and relate them to key OpenCL abstractions and OpenCL programming model features.

Multicore CPUs

Modern CPUs are typically composed of a few high-frequency processor cores with advanced features such […] instruction set extensions such as x86 SSE and Power/VMX (vector multimedia extensions). The current CPU implementations for x86 processors often make best use of SSE when OpenCL kernels are written with explicit use of float4 types. CPU implementations often map all memory spaces onto the same hardware cache, so a kernel that explicitly uses constant and local memory spaces might actually incur more overhead than […] execution pipelines. Each SPE has a local store, a small, local software-managed cache-like fast memory. Applications can load data from system memory to local store or vice versa using direct memory access (DMA) requests, with the best bandwidth achieved when both source and destination are aligned to 128 bytes. Cell can execute data transfer and instructions simultaneously, letting application developers […]
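To make the host-side workflow concrete, here is a minimal C sketch of the sequence the article describes: enumerate a device, create a context and command queue, compile a program at runtime from a source string, obtain a kernel handle, and enqueue memory transfers and a kernel launch. This is an illustrative sketch, not code from the article; it assumes an OpenCL 1.x implementation with at least one GPU, uses a hypothetical vadd kernel, and omits error checking for brevity.

    #include <stdio.h>
    #include <CL/cl.h>

    /* Hypothetical data-parallel kernel, passed to the runtime as a source string. */
    static const char *src =
        "__kernel void vadd(__global const float *a,  \n"
        "                   __global const float *b,  \n"
        "                   __global float *c) {      \n"
        "    int i = get_global_id(0);                \n"
        "    c[i] = a[i] + b[i];                      \n"
        "}                                            \n";

    int main(void)
    {
        float a[1024], b[1024], c[1024];
        size_t n = 1024, bytes = n * sizeof(float);
        cl_int err;
        for (size_t i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        /* Enumerate available platforms and devices (here, the first GPU found). */
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        /* Create a context for the device, and a command queue that feeds it. */
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

        /* Compile the program at runtime from an array of source strings,
           then obtain a handle to the kernel function it contains. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", &err);

        /* Allocate device memory (owned by the context) and copy the inputs over. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  bytes, NULL, &err);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  bytes, NULL, &err);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, NULL, &err);
        clEnqueueWriteBuffer(q, da, CL_TRUE, 0, bytes, a, 0, NULL, NULL);
        clEnqueueWriteBuffer(q, db, CL_TRUE, 0, bytes, b, 0, NULL, NULL);

        /* Launch the kernel over a 1D range of n work-items, then read back. */
        clSetKernelArg(k, 0, sizeof(cl_mem), &da);
        clSetKernelArg(k, 1, sizeof(cl_mem), &db);
        clSetKernelArg(k, 2, sizeof(cl_mem), &dc);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c, 0, NULL, NULL);

        printf("c[0] = %f\n", c[0]); /* expect 3.0 */
        return 0;
    }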
Recommended publications
  • Other APIs: What's Wrong with OpenMP?
    Threaded Programming: Other APIs

    What's wrong with OpenMP?
    • OpenMP is designed for programs where you want a fixed number of threads, and you always want the threads to be consuming CPU cycles: you cannot arbitrarily start/stop threads, and you cannot put threads to sleep and wake them up later.
    • OpenMP is good for programs where each thread is doing (more or less) the same thing.
    • Although OpenMP supports C++, it's not especially OO friendly, though it is gradually getting better.
    • OpenMP doesn't support other popular base languages, e.g. Java or Python.

    Threaded programming APIs – essential features:
    • a way to create threads
    • a way to wait for a thread to finish its work
    • a mechanism to support thread-private data
    • some basic synchronisation methods: at least a mutex lock, or atomic operations
    Optional features:
    • support for tasks
    • more synchronisation methods, e.g. condition variables, barriers, ...
    • higher levels of abstraction, e.g. parallel loops, reductions

    What are the alternatives? POSIX threads, C++ threads, Intel TBB, Cilk, OpenCL, Java (not an exhaustive list!)

    POSIX threads
    • POSIX threads (or Pthreads) is a standard library for shared memory programming without directives, part of the ANSI/IEEE 1003.1 standard (1996).
    • The interface is a C library: no standard Fortran interface; can be used with C++, but not OO friendly.
    • Widely available, even for Windows; typically installed as part of the OS; code is pretty portable.
    • Lots of low-level control over the behaviour of threads.
    • Lacks a proper memory consistency model.

    Thread forking:
    #include <pthread.h>
    int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                       void *(*start_routine)(void *), void *arg);
    Creates a new thread; a thread descriptor is returned via the first argument.
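    A minimal fork/join example of the pattern the slides describe, using pthread_create and pthread_join (the thread function and its argument are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    /* Thread function: matches the void *(*)(void *) signature pthread_create expects. */
    static void *hello(void *arg)
    {
        int id = *(int *)arg;
        printf("hello from thread %d\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[4];
        int ids[4];
        for (int i = 0; i < 4; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, hello, &ids[i]); /* fork a thread */
        }
        for (int i = 0; i < 4; i++)
            pthread_join(threads[i], NULL); /* wait for it to finish its work */
        return 0;
    }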
  • OpenCL on Shared Memory Multicore CPUs
    OpenCL on shared memory multicore CPUs
    Akhtar Ali, Usman Dastgeer and Christoph Kessler
    Book chapter. N.B.: When citing this work, cite the original article.
    Part of: Proceedings of the 5th Workshop on MULTIPROG-2012, E. Ayguade, B. Gaster, L. Howes, P. Stenström, O. Unsal (eds), 2012. Copyright: HiPEAC Network of Excellence. Available at: Linköping University Electronic Press, http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93951
    Published in: Proc. Fifth Workshop on Programmability Issues for Multi-Core Computers (MULTIPROG-2012) at the HiPEAC-2012 conference, Paris, France, Jan. 2012.

    OpenCL for programming shared memory multicore CPUs
    Akhtar Ali, Usman Dastgeer, and Christoph Kessler
    PELAB, Dept. of Computer and Information Science, Linköping University, Sweden
    [email protected], {usman.dastgeer,christoph.kessler}@liu.se

    Abstract. Shared memory multicore processor technology is pervasive in mainstream computing. This new architecture challenges programmers to write code that scales over these many cores to exploit the full computational power of these machines. OpenMP and Intel Threading Building Blocks (TBB) are two of the popular frameworks used to program these architectures. Recently, OpenCL has been defined as a standard by the Khronos group, which focuses on programming a possibly heterogeneous set of processors with many cores such as CPU cores, GPUs, and DSP processors. In this work, we evaluate the effectiveness of OpenCL for programming multicore CPUs in a comparative case study with OpenMP and Intel TBB for five benchmark applications: matrix multiply, LU decomposition, 2D image convolution, Pi value approximation and image histogram generation. The evaluation includes the effect of compiler optimizations for different frameworks, OpenCL performance on different vendors' platforms and the performance gap between CPU-specific and GPU-specific OpenCL algorithms for execution on a modern GPU.
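    For a flavor of the kind of kernel such a study compares across OpenMP, TBB, and OpenCL back ends, here is a straightforward (unoptimized) OpenCL C matrix-multiply kernel. It is an illustrative sketch assuming square row-major matrices, not code from the paper, and it expects to be launched over a 2D NDRange of N×N work-items:

    /* Naive dense matrix multiply, C = A * B, for square N x N row-major matrices.
       Each work-item computes one element of C. */
    __kernel void matmul(__global const float *A,
                         __global const float *B,
                         __global float *C,
                         const int N)
    {
        int row = get_global_id(1);
        int col = get_global_id(0);
        float sum = 0.0f;
        for (int k = 0; k < N; k++)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }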
  • Heterogeneous Task Scheduling for Accelerated OpenMP
    Heterogeneous Task Scheduling for Accelerated OpenMP
    Thomas R. W. Scogland*, Barry Rountree†, Wu-chun Feng*, Bronis R. de Supinski†
    * Department of Computer Science, Virginia Tech, Blacksburg, VA 24060 USA
    † Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94551 USA
    [email protected], [email protected], [email protected], [email protected]

    Abstract—Heterogeneous systems with CPUs and computational accelerators such as GPUs, FPGAs or the upcoming Intel MIC are becoming mainstream. In these systems, peak performance includes the performance of not just the CPUs but also all available accelerators. In spite of this fact, the majority of programming models for heterogeneous computing focus on only one of these. With the development of Accelerated OpenMP for GPUs, both from PGI and Cray, we have a clear path to extend traditional OpenMP applications incrementally to use GPUs. The extensions are geared toward switching from CPU parallelism to GPU parallelism. However, they do not preserve the former while adding the latter. Thus computational potential is wasted since either the CPU cores or the GPU cores are left idle. […] currently requires a programmer either to program in at least two different parallel programming models, or to use one of the two that support both GPUs and CPUs. Multiple models however require code replication, and maintaining two completely distinct implementations of a computational kernel is a difficult and error-prone proposition. That leaves us with using either OpenCL or accelerated OpenMP to complete the task. OpenCL's greatest strength lies in its broad hardware support. In a way, though, that is also its greatest weakness. To enable one to program this disparate hardware
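    The offload pattern at issue, where a loop runs on the accelerator while the CPU cores idle, looks roughly like the sketch below. Note it uses the later-standardized OpenMP 4.x target directives rather than the PGI/Cray accelerated-OpenMP syntax the paper discusses, so treat it as an approximation of the idea:

    /* Offload a data-parallel loop to the device. With a plain offload directive
       like this, the host threads wait rather than sharing the iterations,
       which is exactly the wasted CPU potential the paper targets. */
    void vadd(const float *a, const float *b, float *c, int n)
    {
        #pragma omp target teams distribute parallel for \
                map(to: a[0:n], b[0:n]) map(from: c[0:n])
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }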
  • AMD Accelerated Parallel Processing OpenCL Programming Guide
    AMD Accelerated Parallel Processing OpenCL Programming Guide, November 2013, rev. 2.7
    © 2013 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD Accelerated Parallel Processing, the AMD Accelerated Parallel Processing logo, ATI, the ATI logo, Radeon, FireStream, FirePro, Catalyst, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Microsoft, Visual Studio, Windows, and Windows Vista are registered trademarks of Microsoft Corporation in the U.S. and/or other jurisdictions. Other names are for informational purposes only and may be trademarks of their respective owners. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
    The contents of this document are provided in connection with Advanced Micro Devices, Inc. ("AMD") products. AMD makes no representations or warranties with respect to the accuracy or completeness of the contents of this publication and reserves the right to make changes to specifications and product descriptions at any time without notice. The information contained herein may be of a preliminary or advance nature and is subject to change without notice. No license, whether express, implied, arising by estoppel or otherwise, to any intellectual property rights is granted by this publication. Except as set forth in AMD's Standard Terms and Conditions of Sale, AMD assumes no liability whatsoever, and disclaims any express or implied warranty, relating to its products including, but not limited to, the implied warranty of merchantability, fitness for a particular purpose, or infringement of any intellectual property right. AMD's products are not designed, intended, authorized or warranted for use as components in systems intended for surgical implant into the body, or in other applications intended to support or sustain life, or in any other application in which the failure of AMD's product could create a situation where personal injury, death, or severe property or environmental damage may occur.
  • Implementing FPGA Design with the OpenCL Standard
    Implementing FPGA Design with the OpenCL Standard
    WP-01173-3.0 White Paper

    Utilizing the Khronos Group's OpenCL™ standard on an FPGA may offer significantly higher performance, at much lower power, than is available today from hardware architectures such as CPUs, graphics processing units (GPUs), and digital signal processing (DSP) units. In addition, an FPGA-based heterogeneous system (CPU + FPGA) using the OpenCL standard has a significant time-to-market advantage compared to traditional FPGA development using lower-level hardware description languages (HDLs) such as Verilog or VHDL. (OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.)

    Introduction. The initial era of programmable technologies contained two different extremes of programmability. As illustrated in Figure 1, one extreme was represented by single-core CPU and digital signal processing (DSP) units. These devices were programmable using software consisting of a list of instructions to be executed. These instructions were created in a manner that was conceptually sequential to the programmer, although an advanced processor could reorder instructions to extract instruction-level parallelism from these sequential programs at run time. In contrast, the other extreme of programmable technology was represented by the FPGA. These devices are programmed by creating configurable hardware circuits, which execute completely in parallel. A designer using an FPGA is essentially creating a massively fine-grained parallel application. For many years, these extremes coexisted, with each type of programmability being applied to different application domains. However, recent trends in technology scaling have favored technologies that are both programmable and parallel.
  • OpenCL SYCL 2.2 Specification
    SYCL™ Provisional Specification
    SYCL integrates OpenCL devices with modern C++ using a single source design
    Version 2.2, Revision Date: 2016/02/15
    Khronos OpenCL Working Group — SYCL subgroup. Editor: Maria Rovatsou
    Copyright 2011-2016 The Khronos Group Inc. All Rights Reserved

    Contents
    1 Introduction
    2 SYCL Architecture
      2.1 Overview
      2.2 The SYCL Platform and Device Model
        2.2.1 Platform Mixed Version Support
      2.3 SYCL Execution Model
        2.3.1 Execution Model: Queues, Command Groups and Contexts
        2.3.2 Execution Model: Command Queues
        2.3.3 Execution Model: Mapping work-items onto an nd_range
        2.3.4 Execution Model: Execution of kernel-instances
        2.3.5 Execution Model: Hierarchical Parallelism
        2.3.6 Execution Model: Device-side enqueue
        2.3.7 Execution Model: Synchronization
      2.4 Memory Model
        2.4.1 Access to memory
        2.4.2 Memory consistency
        2.4.3 Atomic operations
      2.5 The SYCL programming model
        2.5.1 Basic data parallel kernels
        2.5.2 Work-group data parallel kernels
        2.5.3 Hierarchical data parallel kernels
        2.5.4 Kernels that are not launched over parallel instances
        2.5.5 Synchronization
        2.5.6 Error handling
        2.5.7 Scheduling of kernels and data movement
        2.5.8 Managing object lifetimes
        2.5.9 Device discovery and selection
        2.5.10 Interfacing with OpenCL
      2.6 Anatomy of a SYCL application …
  • An Introduction to GPUs, CUDA and OpenCL
    An Introduction to GPUs, CUDA and OpenCL
    Bryan Catanzaro, NVIDIA Research

    Overview: heterogeneous parallel computing; the CUDA and OpenCL programming models; writing efficient CUDA code; Thrust: making CUDA C++ productive.

    Heterogeneous parallel computing pairs a latency-optimized CPU (fast serial processing) with a throughput-optimized GPU (scalable parallel processing).

    Why do we need heterogeneity? Why not just use latency-optimized processors? Once you decide to go parallel, why not go all the way, and reap more benefits? For many applications, throughput-optimized processors are more efficient: faster and using less power, and the advantages can be fairly significant.

    Why heterogeneity? Different goals produce different designs. Throughput optimized: assume the workload is highly parallel. Latency optimized: assume the workload is mostly sequential. To minimize the latency experienced by one thread: lots of big on-chip caches and sophisticated control. To maximize the throughput of all threads: multithreading can hide latency, so skip the big caches; simpler control, with cost amortized over ALUs via SIMD.

    Latency vs. throughput specifications, Westmere-EP (32 nm) vs. Fermi (Tesla C2050):
    • Processing elements: 6 cores, 2 issue, 4-way SIMD @ 3.46 GHz vs. 14 SMs, 2 issue, 16-way SIMD @ 1.15 GHz
    • Resident strands/threads (max): 6 cores, 2 threads, 4-way SIMD: 48 strands vs. 14 SMs, 48 SIMD vectors, 32-way SIMD: 21504 threads
    • SP GFLOP/s: 166 vs. 1030
    • Memory bandwidth: 32 GB/s vs. 144 GB/s
    • Register file: ~6 kB vs. 1.75 MB
    • Local store/L1 cache: 192 kB vs. 896 kB
    • L2 cache: 1.5 MB vs. 0.75 MB
  • OpenCL in Scientific High Performance Computing
    OpenCL in Scientific High Performance Computing—The Good, the Bad, and the Ugly
    Matthias Noack, [email protected]
    Zuse Institute Berlin, Distributed Algorithms and Supercomputing
    2017-05-18, IWOCL 2017. https://doi.org/10.1145/3078155.3078170

    Context: ZIB hosts part of the HLRN (Northern German Supercomputing Alliance) facilities.
    Systems:
    • 2× Cray XC40 (#118 and #150 in top500, year 4/5 in lifetime) with Xeon CPUs
    • 80-node Xeon Phi (KNL) Cray XC40 test-and-development system
    • 2× 32-node Infiniband cluster with Nvidia K40
    • 2 test-and-development systems with AMD FirePro W8100
    What I do:
    • computer science research (methods)
    • development of HPC codes
    • evaluation of upcoming technologies
    • consulting/training with system users
    • intra-node parallelism dominated by OpenMP (e.g. …
  • The Continuing Renaissance in Parallel Programming Languages
    The continuing renaissance in parallel programming languages
    Simon McIntosh-Smith, University of Bristol Microelectronics Research Group
    [email protected]

    Didn't parallel computing use to be a niche? When I were a lad… But now parallelism is mainstream: the Samsung Exynos 5 Octa has 4 fast ARM cores and 4 energy-efficient ARM cores, and includes an OpenCL-programmable GPU from Imagination.

    HPC is scaling to millions of cores. Tianhe-2 at NUDT in China: 33.86 PetaFLOPS (33.86×10^15), 16,000 nodes, each with 2 CPUs and 3 Xeon Phis; 3.12 million cores, $390M, 17.6 MW, 720 m².

    A renaissance in parallel programming: Metal, C++11, OpenMP, OpenCL, Erlang, Unified Parallel C, Fortress, XC, Go, Cilk, HMPP, CHARM++, CUDA, Co-Array Fortran, Chapel, Linda, X10, MPI, Pthreads, C++ AMP.

    Groupings of parallel languages:
    • Partitioned Global Address Space (PGAS): Fortress, X10, Chapel, Co-Array Fortran, Unified Parallel C
    • GPU languages: OpenCL, CUDA, HMPP, Metal
    • Object oriented: C++ AMP, CHARM++
    • CSP: XC
    • Message passing: MPI
    • Multi-threaded: Cilk, Go, C++11
    • Shared memory: OpenMP

    Emerging GPGPU standards: OpenCL, DirectCompute, C++ AMP, …; also OpenMP 4.0, OpenACC, CUDA…

    Apple's Metal: a "ground up" parallel programming language for GPUs, designed for compute and graphics, with the potential to replace OpenGL compute shaders, OpenCL/GL interop etc. Close to the "metal", low overheads, "shading" language based.
  • SYCL – Introduction and Hands-On
    SYCL – Introduction and Hands-on
    Thomas Applencourt (+ people on next slide), [email protected]
    Argonne Leadership Computing Facility, Argonne National Laboratory, 9700 S. Cass Ave, Argonne, IL 60349

    Book keeping – get the example and the presentation:
    git clone https://github.com/alcf-perfengr/sycltrain
    cd sycltrain/presentation/2020_07_30_ATPESC
    People who are here to help: Collen Bertoni ([email protected]), Brian Homerding ([email protected]), Nevin ":)" Liber ([email protected]), Ben Odom ([email protected]), Dan Petre ([email protected]).

    Table of contents: 1. Introduction, 2. Theory, 3. Hands-on, 4. Conclusion.

    Introduction. What programming model to use to target GPUs?
    • OpenMP (pragma based)
    • CUDA (proprietary)
    • HIP (low level)
    • OpenCL (low level)
    • Kokkos, RAJA, OCCA (high level, abstraction layer, academic projects)

    What is SYCL™? (It doesn't mean "Someone You Couldn't Love". Sadly.)
    1. Targets C++ programmers (templates, lambdas): no language extension, no pragmas, no attributes
    2. Borrows a lot of concepts from battle-tested OpenCL (platform, device, work-group, range)
    3. Single source (two compilation passes)
    4. High-level data transfer
    5. SYCL is a specification developed by the Khronos Group (OpenCL, SPIR, Vulkan, OpenGL)

    SYCL implementations (credit: Khronos Group, https://www.khronos.org/sycl/)

    Goal of this presentation:
    1. Give you a feel for SYCL
    2. Go through code examples (and make you do some homework)
    3. Teach you enough so that you can search for the rest if you are interested
    4. Questions are welcome! (Please just talk, or use Slack.)

    Theory. A picture is worth a thousand words (and this is a UML diagram, so maybe more!). Memory management: SYCL innovation …
  • OpenCL – The Open Standard for Parallel Programming of Heterogeneous Systems
    OpenCL – The Open Standard for Parallel Programming of Heterogeneous Systems
    J. Y. Xu, Institute of Information & Mathematical Sciences, Massey University at Albany, Auckland, New Zealand
    [email protected]

    OpenCL, or Open Computing Language, is an open standard API that allows for the development of portable code taking advantage of the parallel computing power of modern electronic hardware. It has the support of a number of industry heavyweights, making it an ideal starting point for any parallel application, from consumer software to HPC programs.
    Keywords: OpenCL, parallel computing, GPGPU

    1. Introduction
    Today's computers are becoming increasingly reliant on parallel computing. Processors are shipping with multiple cores, and new ways of computing are being discovered, such as GPGPU (general-purpose graphics processing unit) programming [1][3]. Each of these new processing techniques presents advantages of its own, making their combined use compelling. For example, GPGPU allows for massive data-parallel computations with improved floating point precision [2], while the CPU allows for task-parallel computation. Combining the two yields applications that are capable of massively parallel vector computations while remaining interactive to end users. While presenting unique opportunities, these emerging trends also present a number of challenges. First of all, GPGPU programming evolved from graphics APIs, and as a result the language constructs are different from those of multi-core CPU programming. This means that a program written for a dual-core CPU cannot be made to run on GPGPUs without extensive modification. Worse still, different vendors have their own proprietary technologies. OpenCL, or Open Computing Language, is a prominent project aimed at bringing uniformity to this new area.
  • OpenCL: A Hands-on Introduction
    OpenCL: A Hands-on Introduction
    Tim Mattson (Intel Corp.), Alice Koniges (Berkeley Lab/NERSC), Simon McIntosh-Smith (University of Bristol)
    Acknowledgements: in addition to Tim, Alice and Simon, Tom Deakin (Bristol) and Ben Gaster (Qualcomm) contributed to this content.

    Agenda (lecture / exercise):
    • An Introduction to OpenCL / Logging in and running the Vadd program
    • Understanding Host programs / Chaining Vadd kernels together
    • Kernel programs / The D = A + B + C problem
    • Writing Kernel Programs / Matrix Multiplication
    • Lunch
    • Working with the OpenCL memory model / Several ways to optimize matrix multiplication
    • High Performance OpenCL / Matrix multiplication optimization contest
    • The OpenCL Zoo / Run your OpenCL programs on a variety of systems
    • Closing Comments

    Course materials: in addition to these slides, C++ API header files, a set of exercises, and solutions, we provide the OpenCL C 1.2 Reference Card and the OpenCL C++ 1.2 Reference Card. These cards will help you keep track of the API as you do the exercises: https://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf. The v1.2 spec is also very readable and recommended to have on hand: https://www.khronos.org/registry/cl/specs/opencl-1.2.pdf

    An Introduction to OpenCL. Industry standards for programming heterogeneous platforms: GPUs, increasingly general-purpose data-parallel computing, and CPUs, with multiple cores driving performance increases, intersect in heterogeneous computing alongside graphics APIs, shading languages, and multiprocessor programming models such as OpenMP. OpenCL – Open Computing Language – is an open, royalty-free standard for portable, parallel programming of heterogeneous parallel computing CPUs, GPUs, and other processors.

    The origins of OpenCL: [diagram] ARM, Nokia, IBM, Sony, Qualcomm, Imagination, TI, NVIDIA and many more formed the Khronos Compute group; AMD/ATI (merged, needed commonality across products), a GPU vendor wanting to steal market share from the CPU, a CPU vendor (Intel) wanting to steal market share from the GPU, and [Apple], tired of recoding for many core and GPUs, wrote a rough draft straw man API.
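    The early exercises revolve around variations of vector addition; the "D = A + B + C" step amounts to a kernel along these lines (an illustrative sketch, not the course solution):

    /* One work-item per element: d[i] = a[i] + b[i] + c[i]. */
    __kernel void vadd3(__global const float *a,
                        __global const float *b,
                        __global const float *c,
                        __global float *d)
    {
        int i = get_global_id(0);
        d[i] = a[i] + b[i] + c[i];
    }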