Compilation of OpenCL Programs for Stream Processing Architectures

Pedro Moreira
Instituto Superior Técnico
Email: [email protected]

Abstract—The stream architecture and the stream execution model optimize memory usage while minimizing processor idle time by exploiting the multiple types of locality in stream applications: temporal, spatial, and instruction locality. These are the main focus of this work.

Heterogeneous systems can contain various computing devices that implement different architectures, and it can be an arduous task to take the architectural details of each device into account when developing applications for these systems.

This work proposes StreamOCL, an implementation of the OpenCL framework whose goal is to provide a programming environment for heterogeneous systems, targeting stream devices. The proposed approach also enables applications to exploit the advantages of stream computing without requiring them to be specifically implemented for stream devices. The polyhedral model is used to analyse the memory access patterns of OpenCL kernels and capture the locality of OpenCL applications.

The results validate the approach, since the current implementation was able to capture the access patterns of a baseline stream application. A 2× speedup was also achieved in kernel execution time on the current testing platform by scheduling kernels according to the stream execution model. In spite of these positive results, the overhead of computing memory dependences is currently not compensated by the performance gain in kernel execution.

Index Terms—Stream Architecture, OpenCL, Polyhedral Analysis, Low-Level Dependence Graph

I. INTRODUCTION

To answer the need for parallel processing power, multicore devices have been developed that are capable of executing many arithmetic-intensive tasks in parallel. In this context, several architectures have been developed in recent years, each specifically designed for a different purpose.

This work focuses on the stream architecture and the stream computing model, which, by decoupling computation tasks from communication tasks, allows additional levels of parallelism to be exposed and leads to faster application execution. Multicore computing devices in particular face a problem of memory bandwidth and latency, since many cores compete for the same memory resources. Stream architectures aim at solving this problem by using a different approach to transferring data from device memory to the cores, and by scheduling tasks in a way that minimizes memory transfers between the cores and the device's global memory. The stream architecture maximizes the use of the processing cores in the execution of programs that express the principles of locality in different ways: when different instructions are executed over the same data (instruction locality), when there are data dependences between different tasks (temporal locality), and when data is processed sequentially, as in a stream (spatial locality). Stream architectures assume a specific programming model that allows programmers to make explicit the various elements that express the locality of stream applications. These elements are, typically, the definition of the tasks or kernels that process data, the data streams that those tasks consume and produce, and the relations between those tasks.

The challenge in using various devices of diverse architectures in heterogeneous systems [4], [27], [23], [1] is that it is difficult to develop software that is capable of using the different devices efficiently. The result of the compilation process necessarily varies from one device to another, as devices can have different instruction set architectures (ISAs) and different memory organizations. The execution models can also differ depending on what the architectures were designed for. In the case of stream architectures, it is possible to accomplish a very efficient use of memory, but only for applications that follow the stream execution model.

There are software development tools for multicore devices (such as CUDA or the Stream Virtual Machine), but they can only be used for a restricted subset of device architectures. The use of heterogeneous systems therefore calls for a programming environment that allows the development of applications for different devices and yet allows the efficient use of their computational resources, despite the architectural differences that distinguish them. OpenCL is a programming framework that aims to solve precisely this problem. It abstracts the architectures, programming models and execution models of computing devices into a single, unified framework that allows applications to use the computational power of heterogeneous systems in a transparent way. OpenCL provides an application programming interface (API) that serves as an abstraction for the communication with the computing devices, and a low-level programming language, both of which are architecture-independent. By providing an abstract front-end implemented by architecture-specific back-ends for the different target devices, OpenCL enables the development of portable applications that exploit the computational power of heterogeneous systems.
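As an illustration of the kernel language referred to above, the following minimal OpenCL C kernel consumes one element of each input buffer and produces one element of the output buffer per work-item; the kernel name, signature and computation are illustrative only and are not taken from the benchmarks used in this work.

    /* Minimal OpenCL C kernel: each work-item reads one element of the
       input buffers and writes one element of the output buffer, the kind
       of regular, stream-like access pattern targeted by this work.
       Kernel name and arguments are illustrative only. */
    __kernel void scale_add(__global const float *x,
                            __global const float *y,
                            __global float *out,
                            const float a)
    {
        size_t i = get_global_id(0);  /* unique global ID of this work-item */
        out[i] = a * x[i] + y[i];
    }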
A. Objectives

The purpose of this work is to provide an implementation of the OpenCL framework for stream devices. Implementing the OpenCL framework also comprises the implementation of a device driver that manages the communication between the host CPU and the target device. The implementation of the OpenCL interface has to take the characteristics of the stream architecture into account, and enable OpenCL applications to use the resources of stream devices efficiently and according to the stream model.

The result is an implementation of the OpenCL framework that adapts the OpenCL model and enables client applications to exploit stream devices, by implementing a set of features that capture the locality in OpenCL applications.

II. RELATED WORK

A. Stream Architecture

With the introduction of programmable graphics processing units (GPUs [20], [6], [22]), stream processing [24], [17], [16], [5], [7] has gained importance. In particular, by decoupling computations from communication tasks, it allows a more efficient exploitation of the application parallelism [3], [26]. The stream execution model simplifies overall control by decoupling data communication operations from the actual computations. Moreover, it allows for a more efficient exploitation of fine-grained data-level parallelism, and simplifies the exploitation of task-level parallelism, by having different cores execute different operations over streaming elements. The stream model allows kernel execution to occur in parallel with data transfers between the various levels of the memory hierarchy, minimizing the time that the cores are idle waiting for data to be fetched. It also allows data locality to be exploited, in the sense that each kernel consumes data while it is produced by other kernels.

Stream architectures are typically composed of a hierarchy of memory levels and clusters of processing elements. Stream processors rely on specially devised stream controllers for managing data transfers from global memory to the Local Register Files (LRFs). These transfers are often made through an intermediate memory level called the Stream Register File (SRF), through which all memory transfers between device global memory and the LRFs are made.

The OpenCL execution model is divided into two parts: the host program and the kernel execution that occurs on the guest devices. The host program is executed on the CPU; it manages platforms and devices, and issues commands to the devices, such as memory transfers and kernel execution. On the guest side, the devices execute the commands issued by the host program.

Each work-item is uniquely identified by a tuple of IDs. When a kernel is scheduled, a coordinate space of work-item IDs is defined, called the NDRange, that represents the IDs of all work-items that will execute that kernel instance. Each work-item can be uniquely identified either by its global position in the NDRange or by its work-group ID together with its local position within the work-group.

Studies have been made [25], [19] with the purpose of evaluating whether an OpenCL application attains portable performance. The conclusion is that a single OpenCL code cannot efficiently exploit the architecture of different devices. CUDA's compiler generates more efficient code than the default OpenCL compiler, but with some optimizations that the OpenCL compiler could apply automatically, it should be possible to achieve results equivalent to those of CUDA.
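To make the host-side model and the NDRange concrete, the sketch below enqueues a kernel over a one-dimensional NDRange using the standard OpenCL C API; the variable names are assumptions and error checking is omitted, so this is a minimal sketch rather than code from StreamOCL.

    /* Host-side sketch (OpenCL C API): launch a kernel over a 1-D NDRange
       of n work-items, grouped into work-groups of 64 work-items.
       `queue`, `kernel` and the buffers are assumed to have been created
       earlier with the usual clCreate* calls; n is assumed to be a
       multiple of 64 (required for uniform work-groups in OpenCL 1.x). */
    #include <stddef.h>
    #include <CL/cl.h>

    void launch(cl_command_queue queue, cl_kernel kernel,
                cl_mem x, cl_mem y, cl_mem out, cl_float a, size_t n)
    {
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &x);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &y);
        clSetKernelArg(kernel, 2, sizeof(cl_mem), &out);
        clSetKernelArg(kernel, 3, sizeof(cl_float), &a);

        size_t global_size = n;   /* total work-items in the NDRange */
        size_t local_size  = 64;  /* work-items per work-group       */

        /* Work-item i sees get_global_id(0) == i, or equivalently the
           pair (get_group_id(0), get_local_id(0)). */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                               &global_size, &local_size, 0, NULL, NULL);
        clFinish(queue);          /* wait for the kernel to complete */
    }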
C. Kernel Access Pattern Analysis and Optimizations

For the stream controller to manage the multiple data transfers that occur between the device memory and the processing elements, and between the processing elements themselves, a description of the memory access patterns of the kernels must be provided. By knowing which regions of memory are going to be used by which cores, the stream controller is able to prefetch data from device memory to the cores and to efficiently transfer the intermediate values of a program from one cluster to another in a stream-like fashion.

Static analysis [14] is performed by parsing the source code or reading a compiled program, searching for the instructions that perform memory accesses and representing the addresses they use. Access patterns are represented with abstract mathematical models that make it possible to find parallel regions or dependences between memory accesses.
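As an illustration of such an abstract model (the concrete representation used in this work is not reproduced here), the hypothetical descriptor below captures a kernel argument whose addresses are an affine function of the work-item's global ID, addr = base + stride * gid; the structure and field names are assumptions made for this sketch.

    /* Hypothetical affine access descriptor: one kernel argument whose
       addresses follow addr = base + stride * gid for gid in [0, count).
       With such a descriptor, a runtime can compute the exact byte range
       touched by any contiguous range of work-items (e.g. one work-group)
       and prefetch or forward it without inspecting the kernel body. */
    #include <stddef.h>

    typedef struct {
        int    arg_index;  /* which kernel argument is accessed            */
        int    is_write;   /* 0 = read (consumed), 1 = write (produced)    */
        size_t base;       /* byte offset of the first accessed element    */
        size_t stride;     /* bytes advanced per unit of global ID         */
        size_t count;      /* number of work-items covered by the pattern  */
    } affine_access;

    /* Byte range [lo, hi) touched by work-items first .. first + n - 1,
       assuming each element occupies exactly `stride` bytes. */
    static void access_region(const affine_access *a, size_t first, size_t n,
                              size_t *lo, size_t *hi)
    {
        *lo = a->base + a->stride * first;
        *hi = a->base + a->stride * (first + n);
    }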
