Programming Manycore Systems
CAPS OpenACC Compilers

Stéphane Bihan, CAPS
CERN openlab, 27 February 2013

About CAPS

CAPS mission: help you break the parallel wall and harness the power of manycore computing.

§ Software programming tools
  o CAPS Manycore Compilers
  o CAPS Workbench: a set of development tools
§ CAPS engineering services
  o Port applications to manycore systems
  o Fine-tune parallel applications
  o Recommend machine configurations
  o Diagnose application parallelism
§ Training sessions
  o OpenACC, CUDA, OpenCL
  o OpenMP/pthreads

Based in Rennes, France. Created in 2002: more than 10 years of expertise, about 30 employees.

CAPS Worldwide

§ Headquarters (France): Immeuble CAP Nord, 4A Allée Marie Berhaut, 35000 Rennes, France. Tel.: +33 (0)2 22 51 16 00. [email protected]
§ CAPS USA: 26 O'Farell St, Suite 500, San Francisco, CA 94108. [email protected]
§ CAPS China: Suite E2, 30/F, JuneYao International Plaza, 789 Zhaojiabang Road, Shanghai 200032. Tel.: +86 21 3363 0057. [email protected]

CAPS Ecosystem

They trust us. Resellers and partners include:
§ The main chip makers
§ Hardware constructors
§ Major software leaders

Many-Core Technology Insights

Various Many-Core Paths

§ Intel Many Integrated Cores
  o 50+ x86 cores
  o Supports conventional programming; vectorization is key
  o PCIe connection to the CPU
  o Runs as an accelerator or standalone
§ AMD discrete GPU
  o Large number of small cores
  o Data parallelism is key
§ NVIDIA GPU
  o Large number of small cores; data parallelism is key
  o Supports nested and dynamic parallelism
  o PCIe to the host CPU, or to a low-power ARM CPU (CARMA)
§ AMD APU
  o Integrated CPU+GPU cores
  o Targets power-efficient devices at this stage
  o Shared memory system with CPU and GPU partitions

Many-Core Programming Models

Applications can target many-core hardware in three ways:
§ Libraries: just use them as is
§ Directives (OpenACC, OpenHMPP): standard and high level
§ Programming languages (CUDA, OpenCL, …): maximum performance, but also maximum maintenance cost

CAPS compilers target the directive-based middle ground: portability and performance.

Directive-based Programming

§ OpenACC: consortium created in 2011 by CAPS, CRAY, NVIDIA and PGI
  o New members in 2012: Allinea, Georgia Tech, ORNL, TU-Dresden, University of Houston, Rogue Wave
  o Momentum, with broad adoption
  o 2nd specification to be announced
§ OpenHMPP: created by CAPS in 2007 and adopted by PathScale
  o Advanced features: multiple devices; native and accelerated libraries with directives and a proxy; tuning directives for loop optimizations, use of device features, …
  o Ground for innovation in CAPS
  o Can interoperate with OpenACC
§ OpenMP: first technical report proposing directives for accelerators
  o Not a specification yet; the aim is to get early feedback

CAPS Compilers

§ Source-to-source compilers: use your preferred native x86 and hardware vendor compilers.
§ The CAPS Manycore Compiler (a preprocessor plus back-end generators) splits the OpenHMPP/OpenACC-annotated source code into:
  o The host application source code, built with a standard compiler (Intel, GNU, Absoft, …)
  o Target source code for the accelerated parts (NVIDIA CUDA or OpenCL), built with the target compiler
§ At run time, the CAPS Runtime dispatches the accelerated parts through the target driver to the many-core architecture (NVIDIA/AMD GPUs, Intel MIC).

How to Use the CAPS Compiler

Here is an example with an OpenACC compiler from CAPS:

    $ capsmc gcc myopenaccapp.c -o myopenaccapp.exe

Host-compatible compilers: GNU, Open64, Intel, Absoft, …
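For context, here is what a minimal myopenaccapp.c built by that command could look like. The program body is an illustrative sketch (names and sizes are not from the slides):

    /* myopenaccapp.c -- minimal OpenACC example (illustrative sketch) */
    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        float a[N], b[N];
        int i;

        for (i = 0; i < N; ++i)
            a[i] = (float)i;

        /* The compiler turns this region into a device kernel; the
           copyin/copyout clauses generate the host<->device transfers. */
        #pragma acc kernels copyin(a[0:N]) copyout(b[0:N])
        for (i = 0; i < N; ++i)
            b[i] = 2.0f * a[i];

        printf("b[42] = %f\n", b[42]);
        return 0;
    }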
OpenACC and OpenHMPP Directive-based Programming

Simply Accelerate with OpenACC

All loop nests in a kernels construct become different kernels, launched in order on the device. Add the independent clause to disambiguate pointers:

    #pragma acc kernels
    {
      /* Each loop nest becomes one kernel, launched in order on the device */
      #pragma acc loop independent   /* disambiguates pointers */
      for (int i = 0; i < n; ++i){
        for (int j = 0; j < n; ++j){
          for (int k = 0; k < n; ++k){
            B[i][j*k%n] = A[i][j*k%n];
          }
        }
      }
    }

The entire parallel construct, in contrast, becomes a single kernel. Add loop directives to distribute the work amongst the threads (execution is redundant otherwise):

    #pragma acc parallel
    {
      #pragma acc loop
      for (int i = 0; i < n; ++i){
        #pragma acc loop
        for (int j = 0; j < n; ++j){
          B[i][j] = A[i][j];
        }
      }

      for (int k = 0; k < n; k++){
        #pragma acc loop
        for (int i = 0; i < n; ++i){
          #pragma acc loop
          for (int j = 0; j < n; ++j){
            C[k][i][j] = B[k-1][i+1][j] + ...;
          }
        }
      }
    }

Gang, Workers and Vector in a Grid

Either let the compiler map the parallel loops onto the accelerator, or configure the mapping manually: the grid has 'gang' blocks, each block has 'worker' threads, and 'vector' is the number of inner-loop iterations executed by each thread.

    #pragma acc kernels
    {
      /* Compiler-chosen mapping */
      #pragma acc loop independent
      for (int i = 0; i < n; ++i){
        for (int j = 0; j < n; ++j){
          for (int k = 0; k < n; ++k){
            B[i][j*k%n] = A[i][j*k%n];
          }
        }
      }

      /* Manual mapping: a grid of NB blocks with NT threads each */
      #pragma acc loop gang(NB) worker(NT)
      for (int i = 0; i < n; ++i){
        /* Each thread executes NI inner-loop iterations */
        #pragma acc loop vector(NI)
        for (int j = 0; j < m; ++j){
          B[i][j] = i * j * A[i][j];
        }
      }
    }

OpenHMPP Codelet/Callsite Directives

Accelerate function calls only:

    /* Declare CUDA and OpenCL codelets */
    #pragma hmpp sgemm codelet, target=CUDA:OPENCL, args[vout].io=inout, &
    #pragma hmpp & args[*].transfer=auto
    extern void sgemm( int m, int n, int k, float alpha,
                       const float vin1[n][n], const float vin2[n][n],
                       float beta, float vout[n][n] );

    int main(int argc, char **argv) {
      /* ... */
      for( j = 0 ; j < 2 ; j++ ) {
        /* Synchronous codelet call */
        #pragma hmpp sgemm callsite
        sgemm( size, size, size, alpha, vin1, vin2, beta, vout );
      }
      /* ... */
    }

Managing Data

A data construct defines a device data region with explicit in/out clauses. Use the create clause for accelerator-local arrays, and the update directive for explicit copies between device and host:

    #pragma acc data copyin(A[1:N-2]), copyout(B[N])
    {
      #pragma acc kernels
      {
        #pragma acc loop independent
        for (int i = 0; i < N; ++i){
          A[i][0] = ...;
          A[i][M - 1] = 0.0f;
        }
        ...
      }
      /* Explicit copy from device to host */
      #pragma acc update host(A)
      ...
      #pragma acc kernels
      for (int i = 0; i < n; ++i){
        B[i] = ...;
      }
    }
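Putting these pieces together, a complete, minimal program using such a data region could look as follows; the array names and sizes are an illustrative sketch, not from the slides:

    /* Minimal OpenACC data-region sketch (illustrative names/sizes) */
    #include <stdio.h>

    #define N 512

    int main(void)
    {
        static float a[N], b[N];
        int i;

        for (i = 0; i < N; ++i)
            a[i] = 1.0f;

        /* 'a' is copied in once and stays resident on the device across
           both kernels; 'b' is only copied out at the end of the region. */
        #pragma acc data copyin(a[0:N]) copyout(b[0:N])
        {
            #pragma acc kernels
            for (i = 0; i < N; ++i)
                a[i] = a[i] + 1.0f;     /* first kernel updates the device copy */

            /* Explicit copy of the intermediate device copy back to the host */
            #pragma acc update host(a[0:N])
            printf("after update: a[0] = %f\n", a[0]);

            #pragma acc kernels
            for (i = 0; i < N; ++i)
                b[i] = 2.0f * a[i];     /* second kernel reads the device copy */
        }

        printf("b[0] = %f\n", b[0]);
        return 0;
    }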
OpenHMPP Advanced Programming

What Is in OpenHMPP but Not in OpenACC?

§ Use of specialized libraries through directives
§ Support for multiple devices
§ Zero-copy transfers on APUs
§ Tuning directives
§ Use of native and external kernels

Using Accelerated Libraries with OpenHMPP

§ A single OpenHMPP directive (hmppalt) before the call to a library routine:
  o Preserves the original source code
  o Makes CPU and specialized libraries coexist through a proxy
  o Uses the most efficient library, depending for instance on data size

(Diagram: calls such as CPU_blas(…) and CPU_fft(…), preceded by #pragma hmppalt, are routed through the CAPS proxy to either the CPU library or the GPU library.)

Multi-GPU and Data Collection

Distribute the data over the accelerators, then execute the tasks in parallel according to the owner-compute rule:

    /* Distribute the data over the accelerators */
    #pragma hmpp <MyGroup> parallel, device="i%2"
    for( i = 0 ; i < n ; i++ ) {
      /* Allocate the mirrors for d */
      #pragma hmpp <MyGroup> allocate, data["d[i+1]"]
      ...
    }

    /* Execute the tasks in parallel (owner-compute rule) */
    #pragma hmpp <MyGroup> parallel
    for( k = 0 ; k < n ; k++ ) {
      #pragma hmpp <MyGroup> f1 callsite
      myparallelfunc(d[k], n);
    }

(Diagram: main memory holds d0, d1, d2, d3; GPU 0, GPU 1 and GPU 2 each hold one of the mirrors d1, d2, d3 in their device memory.)

Zero-copy Transfers on AMD APU

AMD APUs have a single system memory with a GPU partition:
§ Avoid duplicating memory objects between CPU and GPU
§ Avoid the copies otherwise required for memory coherency between CPU and GPU
§ Minimizing data movement is no longer required to get performance

    /* Replace host allocations so the buffers can be shared with the GPU */
    #pragma hmpp replacealloc="cl_mem_alloc_host_ptr"
    float * A = malloc(N * sizeof(float));
    float * B = malloc(N * sizeof(float));
    #pragma hmpp replacealloc="host"

    void foo(int n, float* A, float* B){
      int i;
      #pragma hmppcg memspace openclalloc (A,B)
      #pragma acc kernels, pcopy(A[:n],B[:n])
      #pragma acc loop independent
      for (i = 0; i < n; ++i) {
        A[i] = -i + A[i];
        B[i] = 2*i + A[i];
      }
    }

OpenHMPP Tuning Directives

§ Apply loop transformations: unrolling, blocking, tiling, permutation, …
§ Control how computations are mapped onto the accelerator (gridification)

    /* 1D gridification using 64 threads */
    #pragma hmppcg(CUDA) gridify(j,i), blocksize "64x1"
    #pragma hmppcg(CUDA) unroll(8), jam, split, noremainder
    for( i = 0 ; i < n ; i++ ) {
      int j;
      /* Loop transformations */
      #pragma hmppcg(CUDA) unroll(4), jam(i), noremainder
      for( j = 0 ; j < n ; j++ ) {
        int k;
        double prod = 0.0f;
        for( k = 0 ; k < n ; k++ ) {
          prod += VA(k,i) * VB(j,k);
        }
        VC(j,i) = alpha * prod + beta * VC(j,i);
      }
    }

OpenHMPP Tuning Directives (2)

Use hardware-specific features such as constant and shared memory, NVIDIA UVA, and GPU synchronization barriers. Here the shared buffer size is set according to the grid size:

    void conv1(int A[N], int B[N])
    {
      int i, k;
      int buf[DIST+256+DIST];
      int grid = 0;

      /* Detect grid support */
      #pragma hmppcg set grid = GridSupport()
      if( grid ) {
        /* Declare buf in shared memory */
        #pragma hmppcg gridify(i), blocksize 256x1, shared(buf)
        for( i = DIST ; i < N-DIST ; i++ ) {
          int t;
          #pragma hmppcg set t = RankInBlock(i)
          /* Load the first 256 elements */
          buf[t] = A[i-DIST];
          /* Parallel load of the remaining elements into shared memory */
          if( t < 2*DIST )
            buf[t+256] = A[i-DIST+256];
          /* Wait for the end of loading before use */
          #pragma hmppcg grid barrier
          ...

External Functions Directives

Import an external definition:

    /* sum.h */
    #ifndef SUM_H
    #define SUM_H
    float sum( float x, float y );
    #endif /* SUM_H */

    /* extern.c */
    #include "sum.h"

    int main(int argc, char **argv) {
      int i, N = 64;
      float A[N], B[N];
      ...
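The OpenHMPP import directive for this example is not shown above. As a plainly labeled alternative in standard OpenACC, the 2.0 specification (the "2nd specification" mentioned earlier) added a routine directive covering the same need: calling a separately defined function from device code. A minimal, self-contained sketch reusing the slide's sum function (the surrounding program is illustrative):

    /* Sketch: device-callable function via the OpenACC 2.0 'routine'
       directive -- an OpenACC counterpart to OpenHMPP external functions */
    #include <stdio.h>

    #pragma acc routine seq
    static float sum(float x, float y)   /* compiled for host and device */
    {
        return x + y;
    }

    int main(int argc, char **argv)
    {
        enum { N = 64 };
        float A[N], B[N], C[N];
        int i;

        for (i = 0; i < N; ++i) { A[i] = (float)i; B[i] = 2.0f * i; }

        #pragma acc kernels copyin(A, B) copyout(C)
        for (i = 0; i < N; ++i)
            C[i] = sum(A[i], B[i]);      /* device-side call into the routine */

        printf("C[3] = %f\n", C[3]);
        return 0;
    }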
