A Modern C++ Programming Model for GPUs using Khronos SYCL
Michael Wong, Gordon Brown
ACCU 2018
© 2018 Codeplay Software Ltd.

Who am I? Who are we?

Michael Wong, VP of R&D of Codeplay:
● Chair of SYCL Heterogeneous Programming Language; C++ Directions Group
● ISOCPP.org Director, VP: http://isocpp.org/wiki/faq/wg21#michael-wong
● Head of Delegation for the C++ Standard for Canada
● Chair of Programming Languages for the Standards Council of Canada
● Chair of WG21 SG19 Machine Learning; Chair of WG21 SG14 Games Dev/Low Latency/Financial Trading/Embedded
● Editor: C++ SG5 Transactional Memory Technical Specification
● Editor: C++ SG1 Concurrency Technical Specification
● MISRA C++ and AUTOSAR
● wongmichael.com/about

We build GPU compilers for semiconductor companies:
● Ported TensorFlow to open-standards-based accelerators using SYCL
● Build LLVM-based compilers for accelerators
● Implement OpenCL and SYCL for accelerator processors
● Releasing open-source, open-standards-based AI acceleration tools: SYCL-BLAS, SYCL-ML, VisionCpp
● Now working to make AI/ML heterogeneous acceleration safe for autonomous vehicles

Gordon Brown:
● Background in C++ programming models for heterogeneous systems
● Developer with Codeplay Software for 6 years
● Worked on ComputeCpp (SYCL) since its inception
● Contributor to the Khronos SYCL standard for 6 years
● Contributor to C++ executors and heterogeneity for 2 years

Acknowledgement and Disclaimer

Numerous people internal and external to the original C++/Khronos group, in industry and academia, have made contributions, influenced ideas, written parts of this presentation, and offered feedback that forms part of this talk. Specifically, thanks to Paul McKenney, Joe Hummel, Bjarne Stroustrup, and Botond Ballo for some of the slides. I even lifted this acknowledgement and disclaimer from some of them. But I claim all credit for errors and stupid mistakes. These are mine, all mine!
Legal Disclaimer

This work represents the view of the author and does not necessarily represent the view of Codeplay. Other company, product, and service names may be trademarks or service marks of others.

Codeplay - Connecting AI to Silicon

Products: a C++ platform via the SYCL™ open standard, enabling vision and machine learning, e.g. TensorFlow™.
Technologies: vision processing, machine learning, artificial intelligence, big data compute — the heart of Codeplay's compute technology, enabling OpenCL™, SPIR™, HSA™ and Vulkan™.
Addressable markets: automotive (ISO 26262), IoT, smartphones & tablets, high performance compute (HPC), medical & industrial.
Company: high-performance software solutions for custom heterogeneous systems; enabling the toughest processor systems with tools and middleware based on open standards. Established 2002 in Scotland, ~70 employees, plus partners and customers.

3 Act Play
1. What's still missing from C++?
2. What makes GPU work so fast?
3. What is Modern C++ that works on GPUs, CPUs, everything?

Act 1: What's still missing from C++?

What have we achieved so far for C++20?

Use the Proper Abstraction with C++

Abstraction — How is it supported
● Cores — C++11/14/17 threads, async
● HW threads — C++11/14/17 threads, async, hardware_concurrency
● Vectors — Parallelism TS2
● Atomics, fences, lock-free, futures, counters, transactions — C++11/14/17 atomics, Concurrency TS1, Transactional Memory TS1
● Parallel loops — async, TBB: parallel_invoke, C++17 parallel algorithms, for_each
● Heterogeneous offload, FPGA — OpenCL, SYCL, HSA, OpenMP/ACC, Kokkos, Raja
● Distributed — HPX, MPI, UPC++
● Caches — C++17 false sharing support
● NUMA, TLS, exception handling in a concurrent environment — still open
Task vs data parallelism

Task parallelism:
● Few large tasks with different operations / control flow
● Optimized for latency

Data parallelism:
● Many small tasks with the same operations on multiple data
● Optimized for throughput

Review of latency, bandwidth, throughput

Picture water flowing through a tube:
● Latency is the amount of time it takes to travel through the tube.
● Bandwidth is how wide the tube is.
● The amount of water flow is your throughput.

Definitions and examples

Latency is the time required to perform some action or to produce some result. Latency is measured in units of time: hours, minutes, seconds, nanoseconds or clock periods.

Throughput is the number of such actions executed or results produced per unit of time. This is measured in units of whatever is being produced (cars, motorcycles, I/O samples, memory words, iterations) per unit of time. The term "memory bandwidth" is sometimes used to specify the throughput of memory systems.

Bandwidth is the maximum rate of data transfer across a given path.

Example: an assembly line is manufacturing cars. It takes eight hours to manufacture a car, and the factory produces one hundred and twenty cars per day. The latency is 8 hours. The throughput is 120 cars/day, or 5 cars/hour.

Flynn's Taxonomy
● Distinguishes multi-processor computer architectures along two independent dimensions: instruction and data
● Each dimension can have one state: single or multiple
● SISD: Single Instruction, Single Data — a serial (non-parallel) machine
● SIMD: Single Instruction, Multiple Data — processor arrays and vector machines
● MISD: Multiple Instruction, Single Data (weird)
● MIMD: Multiple Instruction, Multiple Data — the most common parallel computer systems
What kind of processors should we build?

CPU:
● Small number of large processors
● More control structures and fewer processing units
  ○ Can do more complex logic
  ○ Requires more power
● Optimized for latency: minimising the time taken for one particular task

GPU:
● Large number of small processors
● Fewer control structures and more processing units
  ○ Can do less complex logic
  ○ Lower power consumption
● Optimized for throughput: maximising the amount of work done per unit of time

Multicore CPU vs manycore GPU

Multicore CPU:
● Each core optimized for a single thread
● Fast serial processing; must be good at everything
● Minimize the latency of one thread
  ○ Lots of big on-chip caches
  ○ Sophisticated controls

Manycore GPU:
● Cores optimized for aggregate throughput, deemphasizing individual performance
● Scalable parallel processing; assumes the workload is highly parallel
● Maximize the throughput of all threads
  ○ Lots of big ALUs
  ○ Multithreading can hide latency, so no big caches
  ○ Simpler control, cost amortized over ALUs via SIMD

SIMD hard knocks
● SIMD architectures use data parallelism
● Improves the tradeoff with area and power
  ○ Amortize control overhead over the SIMD width
● Parallelism is exposed to the programmer and the compiler
● It is hard for a compiler to exploit SIMD, and hard to deal with sparse data and branches
  ○ C and C++ are difficult to vectorize; Fortran is better
● So:
  ○ Either forget SIMD or hope for the autovectorizer
  ○ Or use compiler intrinsics

Memory
● A manycore GPU is a device for turning a compute-bound problem into a memory-bound problem
● Lots of processors, but only one socket
● Memory concerns dominate performance tuning

Memory is SIMD too
● Virtually all processors have SIMD memory subsystems
● This has two effects:
  ○ Sparse access wastes bandwidth
  ○ Unaligned access wastes bandwidth
Data Structure Padding
● Multidimensional arrays are usually stored as monolithic vectors in memory
● Care should be taken to ensure aligned memory accesses for the necessary access pattern

Coalescing
● GPUs and CPUs both perform memory transactions at a larger granularity than the program requests (the cache line)
● GPUs have a coalescer, which examines memory requests dynamically and coalesces them
● To use bandwidth effectively, when threads load they should:
  ○ Present a set of unit-strided loads (dense accesses)
  ○ Keep sets of loads aligned to vector boundaries

Power of Computing
● 1998, when C++98 was released:
  ○ Intel Pentium II: 0.45 GFLOPS
  ○ No SIMD: SSE came in the Pentium III
  ○ No GPUs: the GPU came out a year later
● 2011, when C++11 was released:
  ○ Intel Core i7: 80 GFLOPS
  ○ AVX: 8 DP FLOPs/Hz × 4 cores × 4.4 GHz ≈ 140 GFLOPS
  ○ GTX 670: 2500 GFLOPS
● Computers have gotten so much faster; how come software has not?
  ○ Data structures and algorithms; latency

In 1998, a typical machine had the following FLOPS: 0.45 GFLOPS, 1 core. Single-threaded C++98/C99/Fortran dominated this picture.

In 2011, a typical machine had the following FLOPS: 80 GFLOPS over 4 cores, plus 140 GFLOPS of AVX. To program the CPU, you might use C/C++11, OpenMP, TBB, Cilk, CUDA, or OpenCL. To program the vector unit, you have to use intrinsics, OpenCL, CUDA, or auto-vectorization.
In 2011, a typical machine had the following FLOPS: 80 GFLOPS over 4 cores + 140 GFLOPS AVX + 2500 GFLOPS GPU. To program the CPU, you might use C/C++11, OpenMP, TBB, Cilk, CUDA, or OpenCL. To program the vector unit, you have to use intrinsics, OpenCL, CUDA, or auto-vectorization. To program the GPU, you have to use CUDA, OpenCL, OpenGL, …
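The talk's answer to this fragmentation is SYCL: single-source modern C++ that targets the same devices through an open standard. As a hedged sketch of what that looks like, here is a vector add in the SYCL 1.2.1-era API (the style ComputeCpp implemented at the time); it requires a SYCL implementation to build, so treat it as an illustration rather than a drop-in sample:

```cpp
#include <CL/sycl.hpp>
#include <vector>
namespace sycl = cl::sycl;

int main() {
  std::vector<float> a{1, 2, 3}, b{4, 5, 6}, c(3);
  {
    sycl::queue q;  // selects a default device (a GPU if one is available)
    sycl::buffer<float> bufA(a.data(), sycl::range<1>(3));
    sycl::buffer<float> bufB(b.data(), sycl::range<1>(3));
    sycl::buffer<float> bufC(c.data(), sycl::range<1>(3));
    q.submit([&](sycl::handler& cgh) {
      auto A = bufA.get_access<sycl::access::mode::read>(cgh);
      auto B = bufB.get_access<sycl::access::mode::read>(cgh);
      auto C = bufC.get_access<sycl::access::mode::write>(cgh);
      // The kernel is ordinary C++ in the same source file: the
      // data-parallel "many small tasks, same operation" model.
      cgh.parallel_for<class vec_add>(sycl::range<1>(3),
                                      [=](sycl::id<1> i) {
        C[i] = A[i] + B[i];
      });
    });
  }  // buffers go out of scope here and copy results back to a, b, c
  // c now holds {5, 7, 9}
}
```

The same standard C++ source compiles for the host and for the device, which is the contrast the preceding slides set up against the per-target mix of intrinsics, CUDA, OpenCL, and OpenGL.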
