
Advanced Programming of ManyCore Systems: CAPS OpenACC Compilers
Stéphane BIHAN
NVIDIA GTC, March 20, 2013, San Jose, California

About CAPS: CAPS Mission and Offer
Mission: help you break the parallel wall and harness the power of manycore computing.
§ Software programming tools
  o CAPS Manycore Compilers
  o CAPS Workbench: set of development tools
§ CAPS engineering services
  o Port applications to manycore systems
  o Fine-tune parallel applications
  o Recommend machine configurations
  o Diagnose application parallelism
§ Training sessions
  o OpenACC, CUDA, OpenCL
  o OpenMP/pthreads
Based in Rennes, France. Created in 2002. More than 10 years of expertise. About 30 employees.

CAPS Worldwide
Headquarters - FRANCE: Immeuble CAP Nord, 4A Allée Marie Berhaut, 35000 Rennes, France. Tel.: +33 (0)2 22 51 16 00. [email protected]
CAPS - USA: 26 O'Farell St, Suite 500, San Francisco, CA 94108. [email protected]
CAPS - CHINA: Suite E2, 30/F, JuneYao International Plaza, 789 Zhaojiabang Road, Shanghai 200032. Tel.: +86 21 3363 0057. [email protected]

CAPS Ecosystem
They trust us; resellers and partners:
✓ Main chip makers
✓ Hardware constructors
✓ Major software leaders

Many-Core Technology Insights

Various Many-Core Paths
INTEL Many Integrated Cores
• 50+ x86 cores
• Support conventional programming
• Vectorization is key
• Run as an accelerator or standalone
AMD Discrete GPU
• Large number of small cores
• Data parallelism is key
• PCIe connection to the CPU
NVIDIA GPU
• Large number of small cores
• Data parallelism is key
• Support nested and dynamic parallelism
• PCIe to host CPU or low-power ARM CPU (CARMA)
AMD APU
• Integrated CPU+GPU cores
• Target power-efficient devices at this stage
• Shared memory system with partitions

Many-core Programming Models
From an application, three routes to the accelerator:
• Libraries: just use as is
• Directives (OpenACC, OpenHMPP): standard and high level
• Programming languages (CUDA, OpenCL, …): maximum performance and maintenance cost
CAPS Compilers deliver portability and performance.

Directive-based Programming
§ OpenACC: consortium created in 2011 by CAPS, CRAY, NVIDIA and PGI
  o New members in 2012: Allinea, Georgia Tech, ORNL, TU-Dresden, University of Houston, Rogue Wave
  o Momentum with broad adoption
  o Version 2 to be released
§ OpenHMPP: created by CAPS in 2007 and adopted by PathScale
  o Advanced features
    • Multiple devices
    • Native and accelerated libraries with directives and a proxy
    • Tuning directives for loop optimizations, use of device features, …
    • Ground for innovation in CAPS
  o Can interoperate with OpenACC
§ OpenMP 4 with extensions for accelerators
  o Release candidate 2 available
  o To be released in the coming months

CAPS Compilers
§ Source-to-source compilers
  o Use your preferred native x86 and hardware vendor compilers
Compilation flow: the OpenHMPP/OpenACC annotated source code goes through the CAPS Manycore Compiler (preprocessor and back-end generators), which produces the application source code plus NVIDIA CUDA or OpenCL target source code. A standard host compiler (Intel, GNU, Absoft) builds the host application, while the target compiler (NVIDIA, AMD) builds the accelerated parts. The CAPS Runtime and the target driver run them on the many-core architecture (NVIDIA/AMD GPUs, Intel MIC).
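For reference, the kind of annotated input the source-to-source compiler consumes is plain C with directives. The snippet below is a minimal sketch, not taken from the slides: a SAXPY loop offloaded with an OpenACC kernels region; the file name saxpy.c and the array names are illustrative. CAPS documentation of the period describes invoking the compiler driver in front of the host compiler command line (for example hmpp gcc ...), but the exact driver name and options depend on the product version, so treat that as an assumption.

    /* saxpy.c -- minimal OpenACC-annotated input for a source-to-source compiler (illustrative) */
    #include <stdlib.h>

    void saxpy(int n, float a, const float *x, float *y)
    {
        /* The kernels region is extracted and generated as CUDA or OpenCL code;
           the surrounding host code is passed to the native x86 compiler. */
        #pragma acc kernels copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        int n = 1 << 20;
        float *x = malloc(n * sizeof(float));
        float *y = malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(n, 3.0f, x, y);
        free(x);
        free(y);
        return 0;
    }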
OpenACC and OpenHMPP Directive-based Programming

Simply Accelerate with OpenACC
Every loop nest in a kernels construct becomes a different kernel; each loop nest is launched in order on the device. Add the independent clause to disambiguate pointers.

    #pragma acc kernels
    {
      #pragma acc loop independent
      for (int i = 0; i < n; ++i){
        for (int j = 0; j < n; ++j){
          for (int k = 0; k < n; ++k){
            B[i][j*k%n] = A[i][j*k%n];
          }
        }
      }
    }

The entire parallel construct becomes a single kernel. Add the loop directives to distribute the work amongst the threads (redundant otherwise).

    #pragma acc parallel
    {
      #pragma acc loop
      for (int i = 0; i < n; ++i){
        #pragma acc loop
        for (int j = 0; j < n; ++j){
          B[i][j] = A[i][j];
        }
      }
      for (int k = 0; k < n; k++){
        #pragma acc loop
        for (int i = 0; i < n; ++i){
          #pragma acc loop
          for (int j = 0; j < n; ++j){
            C[k][i][j] = B[k-1][i+1][j] + …;
          }
        }
      }
    }

Gang, Workers and Vector in a Grid
Either let the compiler map the parallel loops to the accelerator or configure the mapping manually: the grid has 'gang' blocks, each block has 'worker' threads, and 'vector' is the number of inner loop iterations executed by a thread.

    #pragma acc kernels
    {
      #pragma acc loop independent
      for (int i = 0; i < n; ++i){
        for (int j = 0; j < n; ++j){
          for (int k = 0; k < n; ++k){
            B[i][j*k%n] = A[i][j*k%n];
          }
        }
      }
      #pragma acc loop gang(NB) worker(NT)
      for (int i = 0; i < n; ++i){
        #pragma acc loop vector(NI)
        for (int j = 0; j < m; ++j){
          B[i][j] = i * j * A[i][j];
        }
      }
    }

OpenHMPP Codelet/Callsite Directives
§ Accelerate function calls only
The codelet directive declares CUDA and OpenCL codelets; the callsite directive performs a synchronous codelet call.

    #pragma hmpp sgemm codelet, target=CUDA:OPENCL, args[vout].io=inout, &
    #pragma hmpp &     args[*].transfer=auto
    extern void sgemm( int m, int n, int k, float alpha,
                       const float vin1[n][n], const float vin2[n][n],
                       float beta, float vout[n][n] );

    int main(int argc, char **argv) {
      /* . */
      for( j = 0 ; j < 2 ; j++ ) {
        #pragma hmpp sgemm callsite
        sgemm( size, size, size, alpha, vin1, vin2, beta, vout );
      }
      /* . */
    }

Managing Data
The data construct defines a region with copyin/copyout arrays; the update directive performs an explicit copy from the device to the host. Use the create clause in the data construct for accelerator-local arrays.

    #pragma acc data copyin(A[1:N-2]), copyout(B[N])
    {
      #pragma acc kernels
      {
        #pragma acc loop independent
        for (int i = 0; i < N; ++i){
          A[i][0] = ...;
          A[i][M - 1] = 0.0f;
        }
        ...
      }
      #pragma acc update host(A)
      ...
      #pragma acc kernels
      for (int i = 0; i < n; ++i){
        B[i] = ...;
      }
    }

What's New In OpenACC 2.0
§ Clarifications of some OpenACC 1.0 features
§ Improved asynchronism
  o WAIT clause on PARALLEL, KERNELS, UPDATE and WAIT directives
  o ASYNC clause on WAIT so queues can wait for each other in a non-blocking way for the host
§ ENTER DATA and EXIT DATA directives
  o Equivalent to the DATA directive but without syntactic constraints
§ New TILE directive on ACC LOOP to decompose loops into tiles before worksharing
§ Lots of new API calls
§ Support for multiple types of devices
§ ROUTINE directive for function calls within PARALLEL and KERNELS
§ Simplification of the use of global data within routines
§ Nested parallelism: a parallel or kernels region containing another parallel or kernels region
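A minimal sketch of a few of these OpenACC 2.0 features, not taken from the slides and using illustrative names (the routine scale, the function process and the queue number are assumptions): unstructured data lifetimes with ENTER DATA / EXIT DATA, work queued asynchronously and drained with WAIT, and a ROUTINE made callable from device code.

    #pragma acc routine seq
    float scale(float v, float a) { return a * v; }   /* callable from device code (OpenACC 2.0 ROUTINE) */

    void process(int n, float a, float *x)
    {
        /* Unstructured data lifetime: no syntactic region required */
        #pragma acc enter data copyin(x[0:n])

        #pragma acc parallel loop present(x[0:n]) async(1)
        for (int i = 0; i < n; ++i)
            x[i] = scale(x[i], a);

        #pragma acc update host(x[0:n]) async(1)   /* queued behind the kernel in queue 1 */
        #pragma acc wait(1)                        /* host blocks until queue 1 drains */

        #pragma acc exit data delete(x[0:n])
    }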
OpenHMPP Advanced Programming

What's in OpenHMPP not in OpenACC?
§ Use specialized libraries with directives
§ Zero-copy transfers on APUs
§ Tuning directives
§ Use native kernels

Using Accelerated Libraries with OpenHMPP
§ Single OpenHMPP directive before the call to a library routine
  o Preserve the original source code
  o Make CPU and specialized libraries co-exist through a proxy
  o Use the most efficient library depending, for instance, on data size
Diagram: application code places #pragma hmppalt before calls such as call CPU_blas(…) and call CPU_fft(…); the CAPS proxy dispatches each call to either the CPU library or the GPU library.

Zero-copy Transfers on AMD APU
§ AMD APUs have a single system memory with a GPU partition
  o Avoid duplication of memory objects between CPU and GPU
  o Avoid the copies required for memory coherency between CPU and GPU
  o Minimizing data movement is no longer required to get performance

    #pragma hmpp replacealloc="cl_mem_alloc_host_ptr"
    float * A = malloc(N * sizeof(float));
    float * B = malloc(N * sizeof(float));
    #pragma hmpp replacealloc="host"

    void foo(int n, float* A, float* B){
      int i;
      #pragma hmppcg memspace openclalloc (A,B)
      #pragma acc kernels, pcopy(A[:n],B[:n])
      #pragma acc loop independent
      for (i = 0; i < n; ++i) {
        A[i] = -i + A[i];
        B[i] = 2*i + A[i];
      }
    }

OpenHMPP Tuning Directives
§ Apply loop transformations
  o Loop unrolling, blocking, tiling, permute, …
§ Map computations on accelerators
  o Control gridification, e.g. a 1D gridification using 64 threads

    #pragma hmppcg(CUDA) gridify(j,i), blocksize "64x1"
    #pragma hmppcg(CUDA) unroll(8), jam, split, noremainder
    for( i = 0 ; i < n; i++ ) {
      int j;
      #pragma hmppcg(CUDA) unroll(4), jam(i), noremainder
      for( j = 0 ; j < n; j++ ) {
        int k;
        double prod = 0.0f;
        for( k = 0 ; k < n; k++ ) {
          prod += VA(k,i) * VB(j,k);
        }
        VC(j,i) = alpha * prod + beta * VC(j,i);
      }
    }

Native CUDA/OpenCL Functions Support
§ Integration of handwritten CUDA and OpenCL codes with directives
The hmppcg include directive points to the file containing the native function definition, and the hmppcg native directive declares 'f1' as a native function, so that, for example, a GPU sqrt is used on the device.

Native function in kernel.c:

    float f1 ( float x, float y, float alpha, float pow)
    {
      float res;
      res = alpha * cosf(x);
      res += powf(y, pow);
      return sqrtf(res);
    }

    #pragma hmpp mycdlt codelet, …
    void myfunc( int size, float alpha, float pow,
                 float *v1, float *v2, float *res)
    {
      int i;
      float t[size], temp;
      #pragma hmppcg include (<native.cu>)
      #pragma hmppcg native (f1)
      for( i = 0 ; i < size ; i++ )
        res[i] = f1( v1[i], v2[i], alpha, pow);
    }

CUDA version in native.cu:

    __device__ float f1 ( float x, float y, float alpha, float pow)
    {
      float res;
      res = alpha * __cosf(x);
      res += __powf(y, pow);
      return __fsqrt_rn(res);
    }
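The slide shows the CUDA variant of the native function; since the codelet also targets OPENCL, an OpenCL counterpart might look like the sketch below. This is an assumption, not from the slides: it presumes the same hmppcg include/native mechanism can be pointed at an OpenCL source file (here a hypothetical native.cl). The native_cos, native_powr and native_sqrt calls are standard OpenCL C built-ins.

    /* native.cl -- hypothetical OpenCL counterpart of f1 (not from the slides) */
    float f1 ( float x, float y, float alpha, float pow)
    {
      float res;
      res = alpha * native_cos(x);      /* fast, lower-precision OpenCL built-ins */
      res += native_powr(y, pow);
      return native_sqrt(res);
    }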