A Performance Improvement Approach for CPU-GPU Heterogeneous Computing

International Journal of Recent Technology and Engineering (IJRTE), ISSN: 2277-3878, Volume-8, Issue-1, May 2019

Raju K, Niranjan N Chiplunkar
Department of Computer Science and Engineering, NMAM Institute of Technology, Nitte, Karkala, Karnataka, India
Revised Manuscript Received on May 24, 2019.

Abstract: A heterogeneous computing system combining a Central Processing Unit and a Graphics Processing Unit (CPU-GPU) is widely used to accelerate compute-intensive applications that exhibit data parallelism. In the CPU-GPU execution model, the CPU cores remain idle while the GPU performs the computation, wasting enormous computational power. The performance of an application on the GPU can be further improved by efficiently utilizing the computational power of the CPU cores along with that of the GPU. In this paper we propose an approach that utilizes the computational power of both CPU and GPU simultaneously to perform a task. We execute different independent data-parallel portions of an application concurrently on the CPU and the GPU, using the CUDA framework on the GPU side and POSIX threads (Pthreads) on the CPU side. Through several experiments we demonstrate that by judiciously allocating different kernels to suitable processors and executing them concurrently, our approach can improve the performance of a CUDA-based application compared to the GPU-only execution of that application.

Index Terms: Heterogeneous computing; GPU; Multicore CPU; Concurrent execution; CUDA kernels.

I. INTRODUCTION

The CPU-GPU heterogeneous system provides a powerful architecture for accelerating computation-intensive, data-parallel applications. The availability of thousands of cores in a GPU makes it a suitable architecture for executing applications that exhibit massive parallelism. Modern CPUs, too, possess multiple processor cores that provide substantial computational power. GPUs were originally developed for rendering images; however, with the emergence of the Compute Unified Device Architecture (CUDA), a programming framework based on the C language, GPUs are now widely used for general-purpose computation. CUDA has reduced the programmer's effort in parallelizing applications on CPU-GPU heterogeneous systems.

In CUDA, the CPU along with its memory is referred to as the host, and the GPU along with its memory is referred to as the device. The code that runs on the device is known as the kernel. Threads in CUDA are organized into thread blocks and grids: a thread block is a group of threads, and a grid is a group of thread blocks. An instance of the kernel code is executed by each thread of the grid. Each thread of a block has a thread id, and each thread block within a grid has a unique block id. Threads belonging to the same block share data among themselves through the shared memory associated with that block, and thread synchronization within a block is achieved through a barrier synchronization mechanism. A global memory is available to all threads for reading and writing.

The control flow during the execution of a CUDA program is shown in Fig. 1. A CUDA program is a combination of host code and device code, executed by the CPU and the GPU respectively. The address spaces of the host memory and the device memory are different. Before the device code (the kernel) begins its execution, the required input data resides in the host memory. Since the host memory is not directly accessible to the device, the input data is transferred from host to device memory over PCI-Express. After the host-to-device transfer, the kernel function is invoked. During the kernel execution, each thread in the grid executes an instance of the kernel code, processing a different portion of the input data. On completion of the kernel execution, the computed results are transferred from device to host memory.

Fig. 1: Execution control flow of a CUDA program.
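As a concrete illustration of this control flow, the following minimal CUDA C sketch performs the host-to-device transfer, the kernel launch, and the device-to-host transfer in that order. The kernel scale_kernel, the problem size N, and the launch configuration are our own illustrative choices, not code from the paper.

    #include <stdio.h>
    #include <cuda_runtime.h>

    #define N (1 << 20)

    // Illustrative kernel: each thread scales one element of the input.
    __global__ void scale_kernel(const int *in, int *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 2 * in[i];
    }

    int main(void) {
        size_t size = N * sizeof(int);
        int *in = (int *)malloc(size), *out = (int *)malloc(size);
        for (int i = 0; i < N; i++) in[i] = i;   // input resides in host memory

        int *d_in, *d_out;
        cudaMalloc((void **)&d_in, size);
        cudaMalloc((void **)&d_out, size);

        // Host-to-device transfer over PCI-Express.
        cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);

        // Kernel launch: each thread processes a different portion of the data.
        scale_kernel<<<(N + 255) / 256, 256>>>(d_in, d_out, N);

        // Device-to-host transfer of the computed results.
        cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);

        printf("out[42] = %d\n", out[42]);
        cudaFree(d_in); cudaFree(d_out); free(in); free(out);
        return 0;
    }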
In the execution flow described above, invoking a kernel is an asynchronous, non-blocking operation: instead of waiting for the device to complete the kernel execution, control returns to the host immediately after the kernel is launched. Even though the host reacquires control, in the conventional flow it only performs further host-to-device data transfers and launches new kernels, possibly using CUDA streams. The multiple cores of the CPU are used neither for kernel execution nor for executing any other independent computations of the current application. Hence the enormous computational power of the CPU cores is wasted.
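This non-blocking behavior is what creates the opportunity exploited in this paper. The fragment below is a sketch of the idea, reusing the hypothetical scale_kernel and buffers from the previous sketch; the summation loop stands in for any independent CPU-side work.

    // Launch returns immediately; the GPU executes the kernel in the background.
    scale_kernel<<<(N + 255) / 256, 256>>>(d_in, d_out, N);

    // The host thread is free here: this loop runs on the CPU cores
    // while the kernel runs on the GPU.
    long long cpu_sum = 0;
    for (int i = 0; i < N; i++) cpu_sum += in[i];

    // Block until the GPU finishes before consuming its results.
    cudaDeviceSynchronize();
    cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);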
The computational power of the idle CPU cores can be utilized effectively by executing different independent computational tasks of an application concurrently on the CPU and GPU cores. That is, in an application with multiple independent tasks, while the GPU is running a kernel, other independent tasks can be assigned to the CPU cores. This method of execution improves the utilization of the computational resources in the CPU-GPU heterogeneous system, which in turn can improve performance.

In this paper, we present our approach for the concurrent execution of independent kernels of a CUDA application on the CPU and the GPU. For a set of test applications we evaluate the effectiveness of our approach by orchestrating CPU-GPU concurrent execution. Through experiments we analyze the suitability of a processing device (CPU or GPU) for the execution of a given computational task and thereby determine an optimal assignment of tasks to processors. Finally, we compare the performance of CPU-GPU concurrent execution to the GPU-only execution of the application.

The rest of this paper is organized as follows. Section II provides the implementation details of the concurrent execution of two independent kernels of a CUDA application. Section III presents the details of the concurrent execution of multiple CUDA kernels. In Section IV, we analyze the results of our experiments. Related work is discussed in Section V, and we conclude in Section VI.

II. CONCURRENT EXECUTION OF TWO INDEPENDENT KERNELS

In this section we present the method for concurrently executing two independent kernels, one each on the CPU and the GPU. Fig. 2 depicts the GPU-only execution flow of two independent kernels of a CUDA application.

Fig. 2: GPU-only execution of two CUDA kernels.

It can be observed from Fig. 2 that the CPU cores remain idle while a kernel is executing on the GPU. Since a kernel launch is a non-blocking operation, control returns to the host thread immediately after the kernel is invoked. The host thread could therefore execute kernel-2 on the CPU while kernel-1 is executing on the GPU, as shown in Fig. 3. This concurrent execution of the two kernels is possible only when kernel-2 does not depend on kernel-1, i.e., kernel-2 does not require the results produced by kernel-1 for its execution.

Fig. 3: CPU-GPU concurrent execution of two CUDA kernels.

For an application with two independent CUDA kernels, say kernel-1 and kernel-2, we perform two separate runs to determine which kernel should be executed on which processor (CPU or GPU) so that the overall concurrent execution time is minimal. Accordingly, the combination for the first run is kernel-1 on the GPU and kernel-2 on the CPU, and vice versa for the second run. For the first run, we implement the concurrent execution of the two kernels using the following steps (a sketch of the corresponding code appears after the pseudocode below):

1) Declare the host and device copies of the input and output data. Initialize the host copy of the input data values.
2) Transfer the input data required for executing kernel-1 on the GPU from the host memory to the device memory.
3) Launch kernel-1 on the GPU.
4) Soon after the kernel launch, the host thread spawns a child thread which in turn invokes the CPU function equivalent to kernel-2. Among the various APIs and frameworks available for threading on the host side, we opted for Pthreads (POSIX threads) because of the low overhead associated with thread creation and management.
5) After the execution of kernel-1 on the GPU, the output data is transferred back to the CPU memory.
6) The CPU-GPU execution time for the two kernels is measured, including the input and output data transfer time between CPU and GPU.

Similarly, a second run is performed by interchanging the assignment of kernels to processors. Of the two runs, the one that yields the lower execution time is used to compute the execution speedup. The pseudocode corresponding to these six steps is shown below. In this pseudocode we take a 1D stencil operation as kernel-1 and a binary search operation as kernel-2.

    Step 1: // For the 1D stencil, allocate space for host and
    // device copies of the input and output data (in, out,
    // d_in, d_out) of size bytes, and set up values.
    int *in, *out;
    int *d_in, *d_out;
    in = (int *)malloc(size);
    out = (int *)malloc(size);
    cudaMalloc((void **)&d_in, size);
    cudaMalloc((void **)&d_out, size);
    // For binary search, declare the input parameters
    // in a structure inpar and initialize values.

    Step 2: cudaEventRecord(start, 0);
    // Copy input data from CPU to GPU.
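The extracted pseudocode breaks off at Step 2. The following sketch shows one way Steps 2 through 6 could be completed. The identifiers stencil_1d (the GPU kernel-1), binsearch_cpu (the CPU function for kernel-2, with the void *(*)(void *) signature Pthreads requires), and the launch configuration are our assumptions for illustration; only inpar is the structure named in Step 1.

    #include <pthread.h>

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Steps 2 and 6: start the timer, then copy kernel-1's input to the device,
    // so that the measured time includes the input transfer.
    cudaEventRecord(start, 0);
    cudaMemcpy(d_in, in, size, cudaMemcpyHostToDevice);

    // Step 3: launch kernel-1 on the GPU; the call returns immediately.
    stencil_1d<<<(n + 255) / 256, 256>>>(d_in, d_out, n);

    // Step 4: spawn a Pthread that runs the CPU version of kernel-2
    // while the GPU executes kernel-1.
    pthread_t tid;
    pthread_create(&tid, NULL, binsearch_cpu, (void *)&inpar);

    // Step 5: copy the results back; this implicitly waits for kernel-1.
    cudaMemcpy(out, d_out, size, cudaMemcpyDeviceToHost);
    pthread_join(tid, NULL);   // wait for the CPU-side kernel-2 as well

    // Step 6: stop the timer; the elapsed time includes both transfers.
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);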
We have executed seven different CUDA C test applications, labeled A1 through A7, listed in Table 1. In each