Using the High Productivity Language Chapel to Target GPGPU Architectures


Albert Sidelnik, María J. Garzarán, David Padua
Department of Computer Science, University of Illinois
{asideln2,garzaran,[email protected]}

Bradford L. Chamberlain
Cray Inc.
[email protected]

2011/4/25

Abstract

It has been widely shown that GPGPU architectures offer large performance gains compared to their traditional CPU counterparts for many applications. The downside to these architectures is that the current programming models present numerous challenges to the programmer: lower-level languages, explicit data movement, loss of portability, and challenges in performance optimization. In this paper, we present novel methods and compiler transformations that increase productivity by enabling users to easily program GPGPU architectures using the high productivity programming language Chapel. Rather than resorting to different parallel libraries or annotations for a given parallel platform, we leverage a language that has been designed from first principles to address the challenge of programming for parallelism and locality. This also has the advantage of being portable across distinct classes of parallel architectures, including desktop multicores, distributed memory clusters, large-scale shared memory, and now CPU-GPU hybrids. We present experimental results from the Parboil benchmark suite which demonstrate that codes written in Chapel achieve performance comparable to the original versions implemented in CUDA.

1. Introduction

In the last few years, systems of heterogeneous components, including GPGPU accelerator architectures, have become increasingly popular. This popularity has been driven by many emerging applications in consumer and HPC markets [25]. Significant cost, power, and performance benefits are derived from executing these applications on systems containing both SIMD and conventional MIMD devices. For this reason, the interest in heterogeneous systems is not a passing fad. It is instead likely that many systems, from hand-held to large-scale [15], will soon contain, or already do contain, heterogeneous components.

Programmability and the ability to optimize for performance and power are considered major difficulties introduced by heterogeneous systems such as those containing GPUs, which are co-processors that must be activated from conventional processors. Heterogeneity is also important for performance, since conventional processors perform much better in certain classes of computations, particularly irregular computations. These difficulties arise for two main reasons. First, with today's tools, it is necessary to use a different programming model for each system component: CUDA [24] or OpenCL [17] are often used to program GPGPU architectures, while C or C++ extended with OpenMP [9] or Intel TBB [26] are used for conventional multicores, and MPI is used for distributed memory clusters. This results in a loss of portability across different parallel architectures, as one must fully port and maintain separate copies of the code to run on the different architectures. The second reason is the need to schedule across device classes: the user must decide how to partition and correctly schedule the execution between the devices. This difficulty is typically compounded by each device having a separate address space, forcing the user to take care of the allocation, deallocation, and movement of device data.

In this paper we build on the parallel programming language Chapel's [6] native data parallel support and provide compiler techniques to increase the programmability of heterogeneous systems containing GPU accelerator components, while retaining performance and portability across other architectures. Chapel is a high-level general purpose language built from the ground up to increase programmer productivity while allowing control of work distribution, communication, and locality. It includes support for parallel models such as data-, task-, and nested parallelism. Rather than rely completely on the compiler for performance optimizations, we leverage Chapel's multiresolution philosophy of allowing a programmer to start with an extremely high-level specification (in this case, with Chapel's array language support) and drop to lower levels if the compiler is not providing sufficient performance. This gives expert programmers the ability to tune their algorithm's performance with capabilities similar to those of a lower-level model such as CUDA.

Evaluations and Contributions. We evaluate the performance and programmability of our compiler prototype against applications from the Parboil benchmark suite (http://impact.crhc.illinois.edu/parboil.php). Because the applications in Parboil are performance-tuned and hand-coded in CUDA, they make an ideal comparison, since the goal of this work is to increase programmer productivity without sacrificing performance.

The contributions of this paper are as follows:

• We present a high-level and portable approach to developing applications on GPU accelerator platforms with a single unified language, instead of libraries or annotations, that can target multiple classes of parallel architectures. This includes the introduction of a user-defined distribution for GPU accelerators.

• We introduce compiler transformations that map a high-level language onto GPU accelerator architectures. This includes a conservative algorithm for implicitly moving data between a host and the accelerator device. These techniques would be applicable to other high-level languages, such as Python or Java, with the goal of targeting accelerators.

• Results demonstrate that the performance of the hand-coded implementations of the Parboil benchmarks written in CUDA is comparable to that of the low-level Chapel implementation, which is simpler and easier to read and maintain.

#include <stdlib.h>

#define N 2000000

int main() {
  float *host_a = (float*)malloc(sizeof(float)*N);
  float *gpu_a, *gpu_b, *gpu_c;
  cudaMalloc((void**)&gpu_a, sizeof(float)*N);
  cudaMalloc((void**)&gpu_b, sizeof(float)*N);
  cudaMalloc((void**)&gpu_c, sizeof(float)*N);
  dim3 dimBlock(256);
  dim3 dimGrid(N/dimBlock.x);
  if( N % dimBlock.x != 0 ) dimGrid.x += 1;
  set_array<<<dimGrid,dimBlock>>>(gpu_b, 0.5f, N);
  set_array<<<dimGrid,dimBlock>>>(gpu_c, 0.5f, N);
  float scalar = 3.0f;
  STREAM_Triad<<<dimGrid,dimBlock>>>(gpu_b, gpu_c,
                                     gpu_a, scalar, N);
  cudaThreadSynchronize();
  cudaMemcpy(host_a, gpu_a, sizeof(float)*N,
             cudaMemcpyDeviceToHost);
  cudaFree(gpu_a);
  cudaFree(gpu_b);
  cudaFree(gpu_c);
}

__global__ void set_array(float *a, float value, int len) {
  int idx = threadIdx.x + blockIdx.x*blockDim.x;
  if(idx < len) a[idx] = value;
}

__global__ void STREAM_Triad(float *a, float *b, float *c,
                             float scalar, int len) {
  int idx = threadIdx.x + blockIdx.x*blockDim.x;
  if(idx < len) c[idx] = a[idx] + scalar*b[idx];
}

Figure 1. STREAM Triad written in CUDA

config const N = 2000000;
const mydist = new dist(new GPUDist(rank=1, tbSizeX=256));
const space : domain(1) distributed mydist = [1..N];
var A, B, C : [space] real;
(B, C) = (0.5, 0.5);
const alpha = 3.0;
forall (a,b,c) in (A,B,C) do
  a = b + alpha * c;

Figure 2. STREAM Triad written in Chapel for a GPU

config const N = 2000000;
const mydist = new dist(new Block(bbox=[1..N]));
const space : domain(1) distributed mydist = [1..N];
var A, B, C : [space] real;
(B, C) = (0.5, 0.5);
const alpha = 3.0;
forall (a,b,c) in (A,B,C) do
  a = b + alpha * c;

Figure 3. STREAM Triad written in Chapel for a multicore

Outline. This paper is organized as follows: Section 2 gives motivation for this work. Section 3 describes background information on Chapel and the GPU architecture. Sections 4 and 5 provide the implementation details for running on a GPU. In Section 6, we present some example Chapel codes that target the GPU accelerator. Section 7 describes our initial results using the Parboil benchmark suite. Sections 8 and 9 present related and future work. We provide conclusions in Section 10.

2. Motivation

As a motivating example, consider the implementations of the STREAM Triad benchmark from the HPCC Benchmark Suite [21] in Figures 1-3. The comparison between the reference CUDA implementation in Figure 1 and the Chapel code for a GPU in Figure 2 clearly shows that the Chapel code has significantly fewer lines of code, and is simpler and more readable. This is achieved using Chapel distributions, domains, data parallel computations through the forall statement, and variable type inference [3, 6]. Furthermore, [...]

[...] When we compare the performance of STREAM written for the GPU, we see that it matches the equivalent implementation written in CUDA. It is important to re-emphasize that for the cluster and Chapel-GPU bars, we used the same Chapel code where only the distribution was changed, whereas the CUDA code does not support the same degree of portability.

3. Background

This section presents a short overview of the programming language Chapel, with a primary focus on data parallelism, as we leverage this when targeting GPU accelerator implementations. Additionally, we describe Nvidia's CUDA programming model, which is the target language generated by our compiler.

3.1 Chapel Language Overview

Chapel is an object-oriented parallel programming language designed from first principles, rather than an extension to any existing [...]
