
OPENACC INTRODUCTION AND APPLICATIONS UPDATE
PGI Compilers for Heterogeneous Supercomputing, September 2017

PGI — THE NVIDIA HPC SDK
Fortran, C & C++ Compilers
  Optimizing, SIMD Vectorizing, OpenMP
Accelerated Computing Features
  OpenACC Directives, CUDA Fortran
Multi-Platform Solution
  X86-64 and OpenPOWER Multicore CPUs
  NVIDIA Tesla GPUs
  Supported on Linux, macOS, Windows
MPI/OpenMP/OpenACC Tools
  Debugger
  Performance Profiler
  Interoperable with DDT, TotalView

THREE WAYS TO BUILD A FASTER COMPUTER
  Faster Clock
  More Work per Clock
  Don't Stall

OPENACC FOR EVERYONE
PGI Community Edition Now Available FREE

                      Community           Professional           Enterprise
  PROGRAMMING MODELS  OpenACC, CUDA Fortran, OpenMP, C/C++/Fortran Compilers and Tools
  PLATFORMS           X86, OpenPOWER, NVIDIA GPU
  UPDATES             1-2 times a year    6-9 times a year       6-9 times a year
  SUPPORT             User Forums         PGI Support Services   PGI Premier
  LICENSE             Annual              Perpetual              Volume/Site

OPENACC DIRECTIVES
Incremental - Single source - Interoperable - Performance portable - CPU, GPU, Manycore

  // Manage data movement
  #pragma acc data copyin(a,b) copyout(c)
  {
    ...
    // Initiate parallel execution
    #pragma acc parallel
    {
      // Optimize loop mappings
      #pragma acc loop gang vector
      for (i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];
        ...
      }
    }
    ...
  }

OPENACC KERNELS
Easy on-ramp to GPU programming

  !$acc kernels
  do i = 1, N
    A(i) = exp(A(i))
  enddo
  !$acc end kernels

[Figure: host memory and accelerator memory; A is copied to the accelerator, exp(A) is computed there, and the result is copied back to the host.]

OPENACC DATA REGIONS

  ...
  #pragma acc data copy(b[0:n][0:m]) \
                   create(a[0:n][0:m])
  {
    for (iter = 1; iter <= p; ++iter){
      #pragma acc kernels
      {
        for (i = 1; i < n-1; ++i){
          for (j = 1; j < m-1; ++j){
            a[i][j]=w0*b[i][j]+
                    w1*(b[i-1][j]+b[i+1][j]+
                        b[i][j-1]+b[i][j+1])+
                    w2*(b[i-1][j-1]+b[i-1][j+1]+
                        b[i+1][j-1]+b[i+1][j+1]);
          }
        }
        for( i = 1; i < n-1; ++i )
          for( j = 1; j < m-1; ++j )
            b[i][j] = a[i][j];
      }
    }
  }
  ...

[Figure: b is copied from host memory to accelerator memory once at the start of the data region, the smoothing step S(B) is applied p times on the accelerator, and b is copied back to the host once at the end.]
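For readers who want to try the data-region plus kernels pattern above end to end, the following is a minimal, self-contained sketch of the same structure. It is not taken from the slides: the array sizes, weights w0/w1/w2, iteration count, and file name smooth.c are illustrative, and the compile lines assume a PGI-style OpenACC compiler as used elsewhere in this deck.

  /* smooth.c -- illustrative data-region + kernels sketch, not slide source. */
  #include <stdio.h>

  #define N 512
  #define M 512

  int main(void)
  {
      static float a[N][M], b[N][M];
      const float w0 = 0.5f, w1 = 0.1f, w2 = 0.025f;
      const int p = 100;

      /* Initialize b on the host: a single spike in the middle. */
      for (int i = 0; i < N; ++i)
          for (int j = 0; j < M; ++j)
              b[i][j] = (i == N/2 && j == M/2) ? 1.0f : 0.0f;

      /* b is copied to the device once and back once; a lives only on the device. */
      #pragma acc data copy(b) create(a)
      {
          for (int iter = 1; iter <= p; ++iter) {
              #pragma acc kernels
              {
                  for (int i = 1; i < N-1; ++i)
                      for (int j = 1; j < M-1; ++j)
                          a[i][j] = w0*b[i][j]
                                  + w1*(b[i-1][j]+b[i+1][j]+b[i][j-1]+b[i][j+1])
                                  + w2*(b[i-1][j-1]+b[i-1][j+1]+b[i+1][j-1]+b[i+1][j+1]);
                  for (int i = 1; i < N-1; ++i)
                      for (int j = 1; j < M-1; ++j)
                          b[i][j] = a[i][j];
              }
          }
      }

      printf("center value after %d sweeps: %g\n", p, b[N/2][M/2]);
      return 0;
  }

Assuming the file is saved as smooth.c, a command along the lines of pgcc -fast -ta=tesla -Minfo=accel smooth.c (or -ta=multicore for a CPU build) should report the generated data movement and kernels, similar in spirit to the compiler feedback shown on the following slides.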
OPENACC IS FOR MULTICORE, MANYCORE & GPUS

   98 !$ACC KERNELS
   99 !$ACC LOOP INDEPENDENT
  100 DO k=y_min-depth,y_max+depth
  101 !$ACC LOOP INDEPENDENT
  102   DO j=1,depth
  103     density0(x_min-j,k)=left_density0(left_xmax+1-j,k)
  104   ENDDO
  105 ENDDO
  106 !$ACC END KERNELS

CPU:
  % pgfortran -ta=multicore -fast -Minfo=acc -c update_tile_halo_kernel.f90
  100, Loop is parallelizable
       Generating Multicore code
       100, !$acc loop gang
  102, Loop is parallelizable

GPU:
  % pgfortran -ta=tesla,cc35,cc60 -fast -Minfo=acc -c update_tile_halo_kernel.f90
  100, Loop is parallelizable
  102, Loop is parallelizable
       Accelerator kernel generated
       Generating Tesla code
       100, !$acc loop gang, vector(4)  ! blockidx%y threadidx%y
       102, !$acc loop gang, vector(32) ! blockidx%x threadidx%x

PGI 17.7 OPENACC PERFORMANCE
Geometric mean across all 15 SPEC ACCEL 1.2 benchmarks

Speedup vs. a single CPU core:
  Dual-Haswell (32 cores):   10x
  Dual-Broadwell (40 cores): 12x
  Dual-POWER8 (20 cores):    12x
  Tesla P100:                49x

Performance was measured in August 2017 using PGI 17.7 compilers; results are considered estimates per SPEC run and reporting rules. SPEC® and SPEC ACCEL® are registered trademarks of the Standard Performance Evaluation Corporation (www.spec.org).

CLOVERLEAF V1.3
AWE Hydrodynamics mini-app
6500+ lines, !$acc kernels
OpenACC or OpenMP
Source on GitHub: http://uk-mac.github.io/CloverLeaf

OPENACC DIRECTIVES
PdV kernel from CloverLeaf (http://uk-mac.github.io/CloverLeaf):

   75 !$ACC KERNELS
   76 !$ACC LOOP INDEPENDENT
   77 DO k=y_min,y_max
   78 !$ACC LOOP INDEPENDENT PRIVATE(right_flux,left_flux,top_flux,bottom_flux,total_flux,min_cell_volume,energy_change,recip_volume)
   79 DO j=x_min,x_max
   80
   81   left_flux=  (xarea(j ,k )*(xvel0(j ,k )+xvel0(j ,k+1) &
   82               +xvel0(j ,k )+xvel0(j ,k+1)))*0.25_8*dt*0.5
   83   right_flux= (xarea(j+1,k )*(xvel0(j+1,k )+xvel0(j+1,k+1) &
   84               +xvel0(j+1,k )+xvel0(j+1,k+1)))*0.25_8*dt*0.5
   85   bottom_flux=(yarea(j ,k )*(yvel0(j ,k )+yvel0(j+1,k ) &
   86               +yvel0(j ,k )+yvel0(j+1,k )))*0.25_8*dt*0.5
   87   top_flux=   (yarea(j ,k+1)*(yvel0(j ,k+1)+yvel0(j+1,k+1) &
   88               +yvel0(j ,k+1)+yvel0(j+1,k+1)))*0.25_8*dt*0.5
   89   total_flux=right_flux-left_flux+top_flux-bottom_flux
   90
   91   volume_change(j,k)=volume(j,k)/(volume(j,k)+total_flux)
   92
   93   min_cell_volume=MIN(volume(j,k)+right_flux-left_flux+top_flux-bottom_flux &
   94                      ,volume(j,k)+right_flux-left_flux &
   95                      ,volume(j,k)+top_flux-bottom_flux)
   97   recip_volume=1.0/volume(j,k)
   99   energy_change=(pressure(j,k)/density0(j,k)+viscosity(j,k)/density0(j,k))*...
  101   energy1(j,k)=energy0(j,k)-energy_change
  103   density1(j,k)=density0(j,k)*volume_change(j,k)
  105 ENDDO
  106 ENDDO
  107 !$ACC END KERNELS

OPENACC DIRECTIVES FOR GPUS
Same PdV kernel source (lines 75-107 above), compiled for an NVIDIA GPU:

  % pgfortran -fast -ta=tesla -Minfo -c PdV_kernel.f90
  pdv_kernel:
  ...
  77, Loop is parallelizable
  79, Loop is parallelizable
      Accelerator kernel generated
      Generating Tesla code
      77, !$acc loop gang, vector(4)  ! blockidx%y threadidx%y
      79, !$acc loop gang, vector(32) ! blockidx%x threadidx%x
  ...
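The LOOP INDEPENDENT and PRIVATE clauses on the PdV kernel assert that the iterations can execute in any order and that the listed scalars are per-iteration temporaries rather than shared state. A rough C analogue of that pattern is sketched below; the routine name, array names, and clause choices are hypothetical illustrations, not CloverLeaf source.

  /* Hypothetical 2D update illustrating loop independent/private clauses. */
  void volume_update(int n, int m, double dt,
                     const double *restrict volume,
                     const double *restrict flux,
                     double *restrict volume_change)
  {
      double change;   /* scalar temporary: must be privatized per iteration */

      #pragma acc kernels copyin(volume[0:n*m], flux[0:n*m]) \
                          copyout(volume_change[0:n*m])
      {
          #pragma acc loop independent gang
          for (int k = 0; k < n; ++k) {
              #pragma acc loop independent vector private(change)
              for (int j = 0; j < m; ++j) {
                  change = flux[k*m + j] * dt;
                  volume_change[k*m + j] =
                      volume[k*m + j] / (volume[k*m + j] + change);
              }
          }
      }
  }

Without the private clause the shared scalar change would become a race once the loops are parallelized; in the Fortran original the same role is played by PRIVATE(right_flux, left_flux, top_flux, bottom_flux, ...).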
OPENACC DIRECTIVES FOR MULTICORE CPUS
Same PdV kernel source (lines 75-107 above), compiled for a multicore CPU:

  % pgfortran -fast -ta=multicore ... PdV_kernel.f90
  pdv_kernel:
  ...
  77, Loop is parallelizable
      Generating Multicore code
      77, !$acc loop gang
  79, Loop is parallelizable
      3 loop-carried redundant expressions removed with 9 operations and 9 arrays
      Innermost loop distributed: 2 new loops
      Generated vector SIMD code for the loop
      Generated 2 prefetch instructions for the loop
      Generated 12 prefetch instructions for the loop
  ...

OPENACC PERFORMANCE PORTABILITY
AWE Hydrodynamics CloverLeaf mini-app, bm32 data set
http://uk-mac.github.io/CloverLeaf

[Chart: speedup vs. a single Haswell core; PGI OpenACC and Intel/IBM OpenMP builds on multicore Haswell, Broadwell, and POWER8 fall in the 9x-14x range, while OpenACC on Kepler and Pascal GPU configurations (including 2x and 4x GPU setups) reaches 52x, 93x, and 142x.]

Systems: Haswell: 2x16-core Haswell server, four K80s, CentOS 7.2 (perf-hsw10); Broadwell: 2x20-core Broadwell server, eight P100s (dgx1-prd-01); Minsky: POWER8+NVLINK, four P100s, RHEL 7.3 (gsn1). Compilers: Intel 17.0.1, IBM XL 13.1.3, PGI 16.10. Benchmark: CloverLeaf v1.3 downloaded from http://uk-mac.github.io/CloverLeaf the week of November 7, 2016; CloverLeaf_Serial; CloverLeaf_ref (MPI+OpenMP); CloverLeaf_OpenACC (MPI+OpenACC). Data compiled by PGI, November 2016.

Parallelization Strategy

Within Gaussian 16, GPUs are used for a small fraction of the code that consumes a large fraction of the execution time. The implementation of GPU parallelism conforms to Gaussian's general parallelization strategy. Its main tenets are to avoid changing the underlying source code and to avoid modifications which negatively affect CPU performance. For these reasons, OpenACC was used for GPU parallelization.

PGI Accelerator Compilers with OpenACC

PGI compilers fully support the current OpenACC standard as well as important extensions to it. PGI is an important contributor to the ongoing development of OpenACC.
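One reason this strategy is practical is that OpenACC directives are comments in Fortran and pragmas in C, so a compiler that is not asked to honor them simply ignores them and the annotated routine remains ordinary CPU code. A minimal sketch of that property (a hypothetical saxpy routine, not Gaussian source):

  /* saxpy.c -- hypothetical example; the directive below is ignored unless
     OpenACC is enabled, so the routine also compiles as plain serial C. */
  void saxpy(int n, float a, const float *restrict x, float *restrict y)
  {
      #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
      for (int i = 0; i < n; ++i)
          y[i] = a * x[i] + y[i];
  }

Built with something like pgcc -c saxpy.c the pragma is ignored; built with pgcc -acc -ta=tesla -Minfo=accel -c saxpy.c (or -ta=multicore) the same source is offloaded or parallelized, which is the single-source property emphasized earlier in this deck.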