Cooperative Heterogeneous Computing for Parallel Processing on CPU/GPU Hybrids

Changmin Lee and Won W. Ro
School of Electrical and Electronic Engineering, Yonsei University, Seoul 120-749, Republic of Korea
{exahz, wro}@yonsei.ac.kr

Jean-Luc Gaudiot
Department of Electrical Engineering and Computer Science, University of California, Irvine, CA 92697-2625
[email protected]

Abstract

This paper presents a cooperative heterogeneous computing framework which enables the efficient utilization of the available computing resources of host CPU cores for CUDA kernels, which are designed to run only on the GPU. The proposed system exploits at runtime the coarse-grain thread-level parallelism across CPU and GPU, without any source recompilation. To this end, three features are described in this paper: a work distribution module, a transparent memory space, and a global scheduling queue. With a completely automatic runtime workload distribution, the proposed framework achieves speedups as high as 3.08 compared to the baseline GPU-only processing.

1. Introduction

General-Purpose computing on Graphics Processing Units (GPGPU) has recently emerged as a powerful computing paradigm because of the massive parallelism provided by several hundreds of processing cores [4, 15]. Under the GPGPU concept, NVIDIA has developed a C-based programming model, Compute Unified Device Architecture (CUDA), which provides greater programmability for high-performance graphics devices. As a matter of fact, general-purpose computing on graphics devices with CUDA helps improve the performance of many applications under the concept of Single Instruction Multiple Thread (SIMT) execution.

Although the GPGPU paradigm successfully provides significant computation throughput, its performance could still be improved if we could utilize the idle CPU resource. Indeed, in general, the host CPU is held while the CUDA kernel executes on the GPU device; the CPU is not allowed to resume execution until the GPU has completed the kernel code and has provided the computation results.

The main motivation of our research is to exploit parallelism across the host CPU cores in addition to the GPU cores. This will eventually provide additional computing power for the kernel execution while utilizing the otherwise idle CPU cores. Our ultimate goal is thus to provide a technique which exploits sufficient parallelism across heterogeneous processors.

This paper proposes Cooperative Heterogeneous Computing (CHC), a new computing paradigm for explicitly processing CUDA applications in parallel on sets of heterogeneous processors including x86-based general-purpose multi-core processors and graphics processing units. There have been several previous research projects which have aimed at exploiting parallelism on CPU and GPU. However, those previous approaches require either additional programming language support or API development. As opposed to those previous efforts, CHC is a software framework that provides a virtual layer for transparent execution over the host CPU cores. This enables the direct execution of CUDA code, while simultaneously providing sufficient portability and backward compatibility.

To achieve an efficient cooperative execution model, we have developed three important techniques:

• A workload distribution module (WDM) for the CUDA kernel, to map each kernel onto CPU and GPU
• A memory model that supports a transparent memory space (TMS), to manage the main memory together with GPU memory
• A global scheduling queue (GSQ) that supports balanced thread scheduling and distribution on each of the CPU cores
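The CHC runtime performs this distribution automatically at the PTX level, without touching the kernel source. Purely as an illustration of the cooperative execution these techniques enable, the sketch below splits a toy kernel's work between a GPU launch and the host cores; the vec_add kernel, the fixed split ratio, and the OpenMP loop are assumptions made for this example and are not part of CHC itself.

    #include <cuda_runtime.h>
    #include <algorithm>

    // Toy kernel: element-wise vector addition.
    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    // Hypothetical cooperative launch: the first gpu_blocks thread blocks run on the
    // GPU while the remaining blocks are executed on the host cores, roughly what a
    // workload distribution module (WDM) would arrange automatically.
    void cooperative_vec_add(const float *h_a, const float *h_b, float *h_c,
                             const float *d_a, const float *d_b, float *d_c,
                             int n, int block_size, float gpu_ratio) {
        int total_blocks = (n + block_size - 1) / block_size;
        int gpu_blocks   = static_cast<int>(total_blocks * gpu_ratio);
        int gpu_elems    = std::min(n, gpu_blocks * block_size);

        if (gpu_blocks > 0)   // GPU share: launched asynchronously, the CPU keeps running
            vec_add<<<gpu_blocks, block_size>>>(d_a, d_b, d_c, gpu_elems);

        // CPU share: the remaining thread blocks, emulated with an ordinary loop.
        #pragma omp parallel for schedule(dynamic)
        for (int i = gpu_elems; i < n; ++i)
            h_c[i] = h_a[i] + h_b[i];

        // Wait for the GPU and merge its portion of the result into the host buffer.
        cudaMemcpy(h_c, d_c, gpu_elems * sizeof(float), cudaMemcpyDeviceToHost);
    }

In CHC, the equivalent of the CPU-side loop is generated from the kernel's PTX by the translation path described later, so existing CUDA binaries require no such source-level rewrite.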
We present a theoretical analysis of the expected performance to demonstrate the maximum feasible improvement of our proposed system. In addition, a performance evaluation on a real system has been performed, and the results show that speedups as high as 3.08 have been achieved. On average, the complete CHC system shows a performance improvement of 1.42 over GPU-only computation across 14 CUDA applications.

The rest of the paper is organized as follows. Section 2 reviews related work, and Section 3 introduces the existing CUDA programming model and describes the motivation of this work. Section 4 presents the design and limitations of the CHC framework. Section 5 gives preliminary results. Finally, we conclude this work in Section 6.

2. Related work

There have been several prior research projects which aim at mapping an explicitly parallel program for graphics devices onto multi-core CPUs or heterogeneous architectures. MCUDA [18] automatically translates CUDA code for general-purpose multi-core processors by applying source-to-source translation. This implies that the MCUDA technique translates the kernel source code into code written in a general-purpose high-level language, which requires one additional step of source recompilation.

Twin Peaks [8] maps an OpenCL-compatible program targeted for GPUs onto multi-core CPUs by using the LLVM (Low Level Virtual Machine) intermediate representation for various instruction sets. Ocelot [6], which inspired our runtime system, uses a dynamic translation technique to map a CUDA program onto multi-core CPUs. Ocelot converts PTX code into LLVM code at runtime without recompilation and optimizes the PTX and LLVM code for execution by the CPU. The framework proposed in this paper differs substantially from these translation techniques (MCUDA, Twin Peaks, and Ocelot) in that we support cooperative execution for parallel processing over both CPU cores and GPU cores.

In addition, EXOCHI provides a programming environment that enhances computing performance for media kernels on multi-core CPUs with the Intel Graphics Media Accelerator (GMA) [20]. However, this programming model uses the CPU cores only for serial execution. The Merge framework has extended EXOCHI for parallel execution on CPU and GMA; however, it still requires APIs and additional porting time [13]. Lee et al. have presented a framework which aims at porting an OpenCL program to the Cell BE processor [12]. They have implemented a runtime system that manages software-managed caches and coherence protocols.

Ravi et al. [17] have proposed a compiler and a runtime framework that generate hybrid code running on both CPU and GPU. It dynamically distributes the workload, but the framework targets only generalized reduction applications, whereas our system targets general CUDA applications. Qilin [14], the study most relevant to our proposed framework, has shown adaptive kernel mapping using dynamic work distribution. The Qilin system trains a program to maintain databases for the adaptive mapping scheme. In fact, Qilin requires and strongly relies on its own programming interface. This implies that the system cannot directly port existing CUDA code; rather, programmers must modify the source code to fit its interfaces. As an alternative, CHC is designed for seamless porting of existing CUDA code onto CPU cores and GPU cores. In other words, we focus on providing backward compatibility with the CUDA runtime APIs.

3. Motivation

[Figure 1. Execution flow of a CUDA program: (a) the limitation where the CPU stalls during kernel execution; (b) cooperative execution in parallel.]

One of the major roles of the host CPU for a CUDA kernel is limited to controlling and accessing the graphics device, while the GPU device provides a massive amount of data parallelism. Fig. 1(a) shows an example where the host only controls the execution flow of the program, while the device is responsible for executing the kernel. Once a CUDA program is started, the host processor executes the program sequentially until the kernel code is encountered. As soon as the host calls the kernel function, the device starts to execute the kernel with a large number of hardware threads on the GPU device. In fact, the host processor is held in the idle state until the device reaches the end of the kernel execution.
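The stall in Fig. 1(a) corresponds to the common host-side pattern sketched below; the kernel body, grid configuration, and buffer sizes are placeholders chosen only for illustration.

    #include <cuda_runtime.h>

    // Placeholder kernel standing in for an arbitrary CUDA kernel.
    __global__ void kernel_func(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    void run(float *d_data, float *h_result, int n) {
        // Serial host code runs up to the kernel call.
        int blocks = (n + 255) / 256;
        kernel_func<<<blocks, 256>>>(d_data, n);   // device begins kernel execution

        // The host now has nothing useful to do: this synchronous copy (like an
        // explicit cudaDeviceSynchronize()) blocks until the kernel has finished,
        // leaving the CPU cores idle for the entire kernel execution.
        cudaMemcpy(h_result, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);

        // Serial host code resumes only after the GPU has delivered its results.
    }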
As a result, the idle time causes an inefficient utilization of the CPU hardware resources of the host machine. Our CHC system is designed to use this idle computing resource through concurrent execution of the CUDA kernel on both CPU and GPU (as described in Fig. 1(b)). Considering that future computer systems are expected to incorporate more cores in both general-purpose processors and graphics devices, parallel processing on CPU and GPU is likely to become an important computing paradigm for high-performance applications. It would also be quite helpful for programming single-chip heterogeneous multi-core processors that include both CPU and GPU. Note that Intel and AMD have already shipped commercial heterogeneous multi-core processors.

In fact, CUDA is capable of asynchronous concurrent execution between host and device: an asynchronous call returns control to the host before the device has completed the requested task (i.e., it is non-blocking). However, the host holding that control can only perform functions such as memory copies, setting up other input data, or further kernel launches using streams. The key difference on which we focus is the use of the idle computing resources for concurrent execution of the same CUDA kernel on both CPU and GPU, thereby easing the GPU's burden.

4. The CHC framework

[Figure 2. An overview of the CHC runtime system: the workload distribution module takes the PTX assembly code and kernel configuration information from a CUDA executable binary and feeds a CPU loader (PTX-to-LLVM translation, LLVM-JIT, global scheduling queue) and a GPU loader (CUDA driver with PTX-JIT), on top of an abstraction layer for the transparent memory space spanning main memory and GPU global memory.]

On the CPU core side, CHC uses the PTX translator provided in Ocelot in order to convert PTX instructions into LLVM IR [6]. This LLVM IR is used as the kernel context for all thread blocks running on CPU cores, and LLVM-JIT is utilized to execute that kernel context [11].

The CUDA kernel execution typically needs some start-up time to initialize the GPU device. In the CHC framework, the GPU start-up process and the PTX-to-LLVM translation are performed simultaneously, so that the translation latency is hidden behind device initialization.
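That overlap could be organized along the lines of the following sketch; translate_ptx_to_llvm() is a stand-in for the Ocelot-based translation step, not an actual CHC or Ocelot API, and cudaFree(nullptr) is used only as a common idiom for forcing CUDA context creation.

    #include <cuda_runtime.h>
    #include <future>
    #include <string>

    // Hypothetical stand-in for the Ocelot-based PTX-to-LLVM translation; in CHC
    // the result would be the kernel context that LLVM-JIT executes on the CPU cores.
    static std::string translate_ptx_to_llvm(const std::string &ptx) {
        // ... parse PTX and emit LLVM IR here ...
        return "llvm-ir-for(" + ptx + ")";
    }

    void chc_style_startup(const std::string &ptx) {
        // Start the PTX-to-LLVM translation on a worker thread...
        std::future<std::string> llvm_ir =
            std::async(std::launch::async, translate_ptx_to_llvm, ptx);

        // ...while the comparatively slow GPU initialization proceeds on this thread,
        // so the translation latency is hidden behind the device start-up time.
        cudaFree(nullptr);   // forces creation of the CUDA context on the GPU

        std::string ir = llvm_ir.get();   // translated kernel, ready for the CPU-side JIT
        (void)ir;                         // a scheduler would now dispatch thread blocks
    }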
