Automatic Source Transformation from C++ to CUDA Using Clang/LLVM

Paper: Togpu: Automatic Source Transformation from C++ to CUDA using Clang/LLVM
Matthew Marangoni and Thomas Wischgoll, Wright State University, Dayton, Ohio

Abstract

Parallel processing using GPUs provides substantial increases in algorithm performance across many disciplines, including image processing. Serial algorithms are commonly translated to parallel CUDA or OpenCL algorithms. To perform this translation a user must first overcome various GPU development entry barriers. These obstacles change depending on the user but in general may include learning to program using the chosen API, understanding the intricacies of parallel processing and optimization, and other issues such as the upkeep of two sets of code. Such barriers are experienced by experts and novices alike. Leveraging the unique source-to-source transformation tools provided by Clang/LLVM, we have created a tool to generate CUDA from C++. Such transformations reduce obstacles experienced in developing GPU software and can increase efficiency and revision speed regardless of experience. Image processing algorithms specifically benefit greatly from the quick revision cycle our tool facilitates. This manuscript details togpu, an open source, cross platform tool which performs C++ to CUDA source-to-source transformations. We present experimentation results using common image processing algorithms. The tool lowers entrance barriers while preserving a single code base and readability. Providing core tools that enhance GPU development facilitates further improvements to high performance, parallel computing.

Introduction

Parallel computations have become affordable through the usage of consumer GPUs. The large performance increase offered by such devices has been utilized in many areas. The process of converting traditional CPU code to parallelized code is, however, non-trivial: it often requires in-depth knowledge of various areas related to GPU computing. This poses obstacles for those with relevant computational experience and greater obstacles for users lacking such experience. These are significant barriers to entry for GPU computing which restrict both its adoption and its development. Various approaches exist to make GPU computing easier for the user. This manuscript presents the initial development cycle of a tool to transform C++ to CUDA in a configurable manner. The tool, togpu, improves upon existing approaches to lower GPU development entrance barriers, accelerate GPU developer workflow, and lay a foundation for further development. Minimizing the effort required to generate something functional and useful is a design goal in many areas; GPU development is no different.

The barriers to entry for effectively utilizing GPUs for parallel processing are high in many areas. The large breadth and depth of knowledge a user must possess is one of the fundamental obstacles a user faces. It is a common occurrence for users to lack the time to learn these areas sufficiently to ease development. Moreover, motivation is not necessarily present. For example, a physicist looking to accelerate molecular dynamics calculations may neither hold a passion for GPU development nor care what Single Instruction Multiple Data (SIMD) signifies for development. Similarly, a GPGPU expert with a deadline may be unable to dedicate the resources required to convert an image processing algorithm into a GPU compatible form while ensuring high data throughput.

Strides in accessibility have been made through languages such as CUDA replacing previous techniques such as manipulating shader pipelines to perform computations. Despite these developments GPU accessibility remains limited. The potential benefit of parallel processing using consumer tier GPUs is high, but the initial time investment by itself can be prohibitive. Including the time required to perform the operations of interest and workflow impacts such as maintaining separate GPU and CPU sources, the obstacles facing users grow further. These barriers are not limited to inexperienced users but also impact expert users.

An inexperienced user must know how to program, use a language that supports CUDA in some fashion and is applicable to the problem domain, and have knowledge of: parallel processing and programming, CUDA, any necessary APIs between CUDA and their language of choice, device/platform specifics, and GPU optimization and debugging if necessary. An experienced user must have the aforementioned knowledge required of the inexperienced user but may encounter knowledge hurdles as well. The topics of optimization and debugging are areas where intricate knowledge is often mandatory. The user may also be required to obtain at least functional domain specific knowledge if they are asked to implement algorithms rather than optimize existing code.

Existing solutions to lowering entrance barriers for parallel processing have shown great strides in accessibility while maintaining functionality. When using these solutions users continue to encounter various issues, both during immediate development and in the overall development workflow. These approaches can be improved upon to minimize the remaining obstacles. It is common to require users to learn the equivalent of another API while still requiring them to have in-depth knowledge of GPUs and parallel computing. While in-depth knowledge may become a necessity at stages where device/platform optimization and intricate performance monitoring take place, it should otherwise be minimized as much as possible to ease GPU development. Other occurrences such as mixing configuration and execution, forcing refactoring of code to adopt a new library, and closed source are just a few common issues which may accompany existing methods. By focusing on both development usage and workflow impacts, we target mitigating common issues to lower GPU development barriers.

Related Work

Lowering barriers to entry for the usage of parallel processing on GPUs is not a new problem. Various approaches exist and are presented in this section. Exploration reveals existing solutions are largely spread across four categories: Domain Specific Language (DSL), Wrapper/Binding, Framework, and Source to Source Transformation.

Domain Specific Language

A Domain Specific Language (DSL) is a programming language suited to the needs of a particular domain. Constructs within the language facilitate common operations associated with the domain. For example, CUDA (or CUDA C, as it is also referenced) is a DSL. CUDA provides access to functionality that is common within the GPU computing domain. Programming facilities to copy data to and from the GPU, launch kernels, declare device and host functions, and perform other related operations are bundled into this extended version of C. These operations are suited to the GPU computing domain. Other DSL examples include Microsoft Accelerator [12] and Copperhead [3].

While DSLs afford users working with GPU computing substantial convenience and ease of use, they still stand as another learning obstacle, a time sink for the user. Some DSLs are a subset of a language; however, they still require knowledge of data parallel operations, parallel computing, and additional language specific requirements and constructs. CUDA, for example, forces users to gain such knowledge. In addition to the other issues covered in the Workflow Complications section, this knowledge mandate is a shortcoming of DSLs. It is clear that a domain specific language, while useful, is not able to sufficiently mitigate knowledge requirements for inexperienced users performing feasibility exploration.

Framework

Basic Linear Algebra Subroutines (cuBLAS) provides a GPU version of the commonly used BLAS library [7]. Frameworks provide different facilities for users: some, such as cuBLAS, are focused on one area; others provide basic data structure translations. The library Thrust is an example [1]. Thrust is a parallel algorithm library that resembles the C++ Standard Template Library (STL) [1]. It provides access to extensive GPU-enabled functions in order to produce high-performance applications [1]. Thrust is similar in notion to libraries translated by NVIDIA, such as cuBLAS, but provides more generalized parallel structures [1]. Thrust requires that the user learn its API and still requires some knowledge of parallel computing. In conflict with the goal to provide as transparent a programmatic solution as possible, these extra stipulations further differentiate Thrust.

Another framework, available for Python and titled Parakeet, provides Just In Time (JIT) compilation of Python code for a GPU [11]. With minimal annotations, Parakeet provides a strong framework for automatically generating and managing the execution of GPU code using JIT compilation [11]. Parallelization is achieved by implementing NumPy operations in terms of four data parallel operations [11]. Parakeet utilizes heuristic based, array shape detection to determine algorithm execution location (CPU, GPU) [11]. Parakeet provides a functional demonstration of the usage of parallel operators to implement common functionality as a means of parallelization. A single annotation is required to identify a function for processing. Parakeet's minimal annotation usage lowers the amount of parallel domain knowledge required from users. However, some significant constraints preclude adoption outside of numerical computing. For instance, Parakeet par-
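The CUDA facilities named in the DSL discussion (copying data to and from the GPU, launching kernels, declaring device and host functions) can be seen in a generic sketch of a per-pixel grayscale kernel. This is illustrative CUDA of our own, not output produced by togpu, and all names are assumptions.

```cuda
// __global__ marks a kernel: a function launched from the host, run on the device.
__global__ void rgb_to_gray_kernel(const unsigned char* rgb,
                                   unsigned char* gray, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per pixel
    if (i < n) {
        unsigned r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        gray[i] = (unsigned char)((299 * r + 587 * g + 114 * b) / 1000);
    }
}

void rgb_to_gray_gpu(const unsigned char* h_rgb, unsigned char* h_gray, int n) {
    unsigned char *d_rgb, *d_gray;
    cudaMalloc(&d_rgb, 3 * n);                                 // device buffers
    cudaMalloc(&d_gray, n);
    cudaMemcpy(d_rgb, h_rgb, 3 * n, cudaMemcpyHostToDevice);   // host -> device
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    rgb_to_gray_kernel<<<blocks, threads>>>(d_rgb, d_gray, n); // kernel launch
    cudaMemcpy(h_gray, d_gray, n, cudaMemcpyDeviceToHost);     // device -> host
    cudaFree(d_rgb);
    cudaFree(d_gray);
}
```

Every line of host-side boilerplate here (allocation, transfers, launch geometry) is knowledge the paper argues a user should not be forced to acquire just to explore feasibility.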
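Thrust's STL-like character, as described in the Framework discussion, can be illustrated with a short sketch (ours, not from the paper): thrust::transform mirrors std::transform while dispatching the work to the device. The functor and names are assumptions for illustration.

```cuda
#include <thrust/device_vector.h>
#include <thrust/transform.h>

// Saturating add of two 8-bit images expressed as a single transform call.
struct saturating_add {
    __host__ __device__
    unsigned char operator()(unsigned char a, unsigned char b) const {
        unsigned s = (unsigned)a + (unsigned)b;
        return s > 255 ? 255 : (unsigned char)s;
    }
};

void add_images(const thrust::device_vector<unsigned char>& a,
                const thrust::device_vector<unsigned char>& b,
                thrust::device_vector<unsigned char>& out) {
    // Reads like std::transform, but iterates over device memory on the GPU.
    thrust::transform(a.begin(), a.end(), b.begin(), out.begin(),
                      saturating_add());
}
```

Even in this compact form the user must still learn Thrust's containers and algorithms and understand which operations parallelize well, the extra stipulations the paper cites.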
