JSC OpenACC Course 2019
INTRODUCTION TO OPENACC
JSC OpenACC Course 2019, 28 October 2019
Andreas Herten, Forschungszentrum Jülich, Member of the Helmholtz Association

Outline
- OpenACC: History, OpenMP, Modus Operandi, OpenACC's Models
- OpenACC by Example: OpenACC Workflow; Identify Parallelism; Parallelize Loops (parallel, loops, kernels); pgprof; Data Transfers (GPU Memory Spaces, Clause: copy, Visual Profiler, Data Locality, Analyse Flow, data, enter data); Routines
- Other Directives; Portability; List of Tasks; Conclusions

OpenACC History
- 2011: OpenACC 1.0 specification is released (NVIDIA, Cray, PGI, CAPS)
- 2013: OpenACC 2.0: more functionality, portability
- 2015: OpenACC 2.5: enhancements, clarifications
- 2017: OpenACC 2.6: deep copy, ...
- 2018: OpenACC 2.7: clarifications, more host, reductions, ...
See https://www.openacc.org/ (see also: the Best Practice Guide)
Support: compilers: PGI, GCC, Clang, Sunway; languages: C/C++, Fortran

Open{MP, ACC}: Everything's connected
- OpenACC is modeled after OpenMP, but is specific to accelerators
- OpenMP 4.0/4.5 also has an offloading feature; compiler support is improving (Clang, GCC, XL)
- OpenACC is more descriptive, OpenMP is more prescriptive
- The basic principle is the same: the fork/join model, in which a master thread launches parallel child threads that merge again after execution

Modus Operandi: Three-step program
1. Annotate code with directives, indicating parallelism
2. An OpenACC-capable compiler generates accelerator-specific code
3. $uccess

1. Directives: pragmatic
Compiler directives state intent to the compiler.

C/C++:
    #pragma acc kernels
    for (int i = 0; i < 23; i++)
        // ...

Fortran:
    !$acc kernels
    do i = 1, 24
        ! ...
    end do
    !$acc end kernels

- Ignored by a compiler which does not understand OpenACC
- High-level programming model for many-core machines, especially accelerators
- OpenACC consists of compiler directives, library routines, and environment variables
- Portable across host systems and accelerator architectures

2. Compiler: Simple and abstracted
Compiler support:
- PGI: best performance, great support, free
- GCC: actively performance-improved, open source
- Clang: first alpha version
Trust the compiler to generate the intended parallelism, but always check its status output!
No need to know the ins and outs of the accelerator; leave that to expert compiler engineers.
One code can target different accelerators: GPUs, or even multi-core CPUs (portability!).
Eventually you will want to tune for a specific device, but that is possible, too.

3. $uccess: Iteration is key
Expose parallelism, compile, measure; then iterate.
- Serial to parallel: fast. Serial to fast parallel: more time needed.
- Start simple, then refine (productivity!).
- Because of its generality, the last bit of hardware performance is sometimes not accessible.
- But: OpenACC can be used together with other accelerator-targeting techniques (CUDA, libraries, ...).

OpenACC Accelerator Model: For computation and memory spaces
- The main program executes on the host.
- Device code is transferred to the accelerator, and execution on the accelerator is started.
- The host waits until the accelerator returns (except with async).
- There are two separate memory spaces; data is transferred back and forth via DMA transfers between host memory and device memory.
- Transfers are hidden from the programmer, but the memories are not coherent!
- The compiler and the GPU runtime help.

OpenACC Programming Model: A binary perspective
OpenACC interpretation needs to be activated with a compile flag:
    PGI: pgcc -acc [-ta=tesla|-ta=multicore]
    GCC: gcc -fopenacc
Additional flags can improve or modify compilation:
    -ta=tesla:cc70       use compute capability 7.0
    -ta=tesla:lineinfo   add source code correlation into the binary
    -ta=tesla:managed    use unified memory
    -fopenacc-dim=geom   use the geom configuration for threads
Directives are ignored by an incapable compiler!

A Glimpse of OpenACC

C/C++:
    #pragma acc data copy(x[0:N], y[0:N])
    {
        #pragma acc parallel loop
        for (int i = 0; i < N; i++) {
            x[i] = 1.0;
            y[i] = 2.0;
        }
        #pragma acc parallel loop
        for (int i = 0; i < N; i++) {
            y[i] = i*x[i] + y[i];
        }
    }

Fortran:
    !$acc data copy(x(1:N), y(1:N))
    !$acc parallel loop
    do i = 1, N
        x(i) = 1.0
        y(i) = 2.0
    end do
    !$acc end parallel loop
    !$acc parallel loop
    do i = 1, N
        y(i) = i*x(i) + y(i)
    end do
    !$acc end parallel loop
    !$acc end data

OpenACC by Example

Parallelization Workflow
1. Identify available parallelism
2. Parallelize loops with OpenACC
3. Optimize data locality
4. Optimize loop performance

Jacobi Solver: Algorithmic description
Example for acceleration: a Jacobi solver.
- Iterative solver, converges to the correct value
- Each iteration step: compute the average of neighboring points
- Example: 2D Poisson equation: ∇²A(x, y) = B(x, y)

[Figure: five-point stencil of data points A(i-1,j), A(i+1,j), A(i,j-1), A(i,j+1) around A(i,j) on a grid with boundary points]

    A_{k+1}(i, j) = -(1/4) * ( B(i, j) - ( A_k(i-1, j) + A_k(i, j+1) + A_k(i+1, j) + A_k(i, j-1) ) )

Jacobi Solver: Source code

    while ( error > tol && iter < iter_max ) {    // iterate until converged
        error = 0.0;

        // iterate across matrix elements
        for (int ix = ix_start; ix < ix_end; ix++) {
            for (int iy = iy_start; iy < iy_end; iy++) {
                // calculate new value from neighbors
                Anew[iy*nx+ix] = -0.25 * (rhs[iy*nx+ix] -
                    ( A[iy*nx+ix+1] + A[iy*nx+ix-1]
                    + A[(iy-1)*nx+ix] + A[(iy+1)*nx+ix]));
                // accumulate error
                error = fmaxr(error, fabsr(Anew[iy*nx+ix] - A[iy*nx+ix]));
        }}

        // swap input/output
        for (int iy = iy_start; iy < iy_end; iy++) {
            for (int ix = ix_start; ix < ix_end; ix++) {
                A[iy*nx+ix] = Anew[iy*nx+ix];
        }}

        // set boundary conditions
        for (int ix = ix_start; ix < ix_end; ix++) {
            A[0*nx+ix]      = A[(ny-2)*nx+ix];
            A[(ny-1)*nx+ix] = A[1*nx+ix];
        }
        // same for iy

        iter++;
    }

Parallelization Workflow
1. Identify available parallelism
2. Parallelize loops with OpenACC
3. Optimize data locality
4. Optimize loop performance

Profiling: Profile
"[...] premature optimization is the root of all evil. Yet we should not pass up our [optimization] opportunities [...]" – Donald Knuth [3]
- Investigate the hot spots of your program: profile!
- Many tools on many levels: perf, PAPI, Score-P, Intel Advisor, NVIDIA profilers, ...
- Here: examples from PGI

Identify Parallelism, TASK 1: Generate Profile
Use pgprof to analyze the unaccelerated version of the Jacobi solver. Investigate!
Task 1: Analyze Application
- Re-load the PGI compiler with module load PGI
- Change to the Task1/ directory
- Compile: make task1 (usually you compile just with make, but this exercise is special)
- Submit a profiling run to the batch system: make task1_profile
- Study the srun call and the pgprof call; try to understand them
Questions: Where is the hotspot? Which parts should be accelerated?

Profile of Application: Info during compilation

    $ pgcc -DUSE_DOUBLE -Minfo=all,intensity -fast -Minfo=ccff -Mprof=ccff poisson2d_reference.o poisson2d.c -o poisson2d
    poisson2d.c:
    main:
         68, Intensity = 32.00
             Loop not vectorized: may not be beneficial
         82, Intensity = 0.0
         84, Intensity = 0.0
             Memory zero idiom, loop replaced by call to __c_mzero8
         98, Intensity = 0.0
             FMA (fused multiply-add) instruction(s) generated

Automated compiler optimizations, due to -fast: FMA, unrolling, (vectorization).

Profile of Application: Info during run

    $ pgprof --cpu-profiling on [...] ./poisson2d
    ======== CPU profiling result (flat):
    Time(%)  Time      Name
     54.07%  746.34ms  main (poisson2d.c:128 0x372)
     20.74%  286.27ms  main (poisson2d.c:128 0x38c)
      4.44%  61.343ms  main (poisson2d.c:128 0x37e)
      3.70%  51.119ms  __c_mcopy8_sky (0x60bec197)
    ...
    ======== Data collected at 100Hz frequency

- Basically, everything is in main()
- Since everything is in main, the flat profile is of limited helpfulness
- Let's look into main!

Code Independence Analysis: Independence is key
Annotating the Jacobi source shown above:
- The while loop has a data dependency between iterations: each step reads the previous A.
- The stencil loops over ix and iy have independent iterations.
- The swap loops have independent iterations.
- The boundary-condition loops have independent iterations.

Parallelization Workflow
1. Identify available parallelism
2. Parallelize loops with OpenACC
3. Optimize data locality
4. Optimize loop performance

Parallel Loops: parallel, an important directive
- The programmer identifies a block containing parallelism; the compiler generates parallel code (a kernel)
- Program launch creates gangs of parallel threads on the parallel device
- Implicit barrier at the end of the parallel region
- Each gang executes the same code sequentially
OpenACC: parallel (C)
