dCUDA: Hardware Supported Overlap of Computation and Communication

Tobias Gysi, Jeremia Bär, Torsten Hoefler
Department of Computer Science, ETH Zurich
SC16; Salt Lake City, Utah, USA; November 2016

Abstract—Over the last decade, CUDA and the underlying GPU hardware architecture have continuously gained popularity in various high-performance computing application domains such as climate modeling, computational chemistry, or machine learning. Despite this popularity, we lack a single coherent programming model for GPU clusters. We therefore introduce the dCUDA programming model, which implements device-side remote memory access with target notification. To hide instruction pipeline latencies, CUDA programs over-decompose the problem and over-subscribe the device by running many more threads than there are hardware execution units. Whenever a thread stalls, the hardware scheduler immediately proceeds with the execution of another thread that is ready to run. This latency hiding technique is key to making the best use of the available hardware resources. With dCUDA, we apply latency hiding at cluster scale to automatically overlap computation and communication. Our benchmarks demonstrate perfect overlap for memory bandwidth-bound tasks and good overlap for compute-bound tasks.

Index Terms—Distributed memory, GPU, latency hiding, programming model, remote memory access

I. INTRODUCTION

Today, we typically target GPU clusters using two programming models that separately deal with inter-node and single-node parallelization. For example, we may use MPI [7] to move data between nodes and CUDA [19] to implement the on-node computation. MPI provides point-to-point communication and collectives that allow synchronizing concurrent processes executing on different cluster nodes. Using a fork-join model, CUDA allows offloading compute kernels from the host to the massively parallel device. To combine the two programming models, MPI-CUDA programs usually alternate sequentially between on-node kernel invocations and inter-node communication. While functional, this approach also entails serious disadvantages.

The main disadvantage is that application developers need to know the concepts of both programming models and understand several intricacies to work around their inconsistencies. For example, the MPI software stack has in the meantime been adapted to support direct device-to-device [24] data transfers. However, the control path remains on the host, which causes frequent host-device synchronizations and redundant data structures on host and device.

On the other hand, the sequential execution of on-node computation and inter-node communication inhibits efficient utilization of the costly compute and network hardware. To mitigate this problem, application developers can implement manual overlap of computation and communication [23], [27]. In particular, there exist various approaches [13], [22] to overlap the communication with the computation on an inner domain that has no inter-node data dependencies. However, these code transformations significantly increase code complexity, which results in reduced real-world applicability.

High-performance system design often involves trading off sequential performance against parallel throughput. The architectural difference between host and device processors perfectly showcases the two extremes of this design space. Both architectures have to deal with the latency of hardware components such as memories or floating point units. While the host processor employs latency minimization techniques such as prefetching and out-of-order execution, the device processor employs latency hiding techniques such as over-subscription and hardware threads.

To avoid the complexity of handling two programming models and to apply latency hiding at cluster scale, we introduce the dCUDA (distributed CUDA) programming model. We obtain a single coherent software stack by combining the CUDA programming model with a significant subset of the remote memory access capabilities of MPI [12]. More precisely, a global address space and device-side put and get operations enable transparent remote memory access using the high-speed network of the cluster. We thereby make use of over-decomposition to over-subscribe the hardware with spare parallelism that enables automatic overlap of remote memory accesses with concurrent computation. To synchronize the program execution, we additionally provide notified remote memory access operations [3] that notify the target via a notification queue after completion.
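To make this model concrete, the sketch below shows how a notified put might be expressed inside a kernel. The interface it uses (dcuda_put_notify, dcuda_wait_notifications, the window handle and tag arguments) is a hypothetical rendering chosen purely for illustration; it is not the library's documented API and is not meant to compile against a real implementation.

  // Hypothetical device-side interface for notified remote memory access,
  // loosely modeled on the MPI RMA operations referenced in the text.
  // Illustrative assumptions only -- not the actual dCUDA API.
  typedef int dcuda_win;  // handle to a window in the global address space

  __device__ void dcuda_put_notify(dcuda_win win, int target_rank,
                                   size_t target_offset, const double *src,
                                   int count, int tag);
  __device__ void dcuda_wait_notifications(dcuda_win win, int tag, int count);

  __global__ void halo_exchange(dcuda_win win, double *field, int n,
                                int right_rank, int tag) {
    // ... update the local part of 'field' ...

    if (threadIdx.x == 0) {
      // One thread per block pushes the boundary element into the neighbor's
      // window and enqueues a notification on the target.
      dcuda_put_notify(win, right_rank, /*target_offset=*/0,
                       &field[n - 1], /*count=*/1, tag);
      // Wait for the notification of the neighbor's put into this rank's
      // window; while this block waits, the scheduler runs other blocks,
      // which is what overlaps communication with computation.
      dcuda_wait_notifications(win, tag, /*count=*/1);
    }
    __syncthreads();  // the whole block may now read the received halo value
  }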
We evaluate the dCUDA programming model using a stencil code, a particle simulation, and an implementation of sparse matrix-vector multiplication. To compare performance and usability, we implement dCUDA and MPI-CUDA versions of these mini-applications. Two out of three mini-applications show excellent automatic overlap of communication and computation. Hence, application developers not only benefit from the convenience of device-side remote memory access; more importantly, dCUDA enables automatic overlap of computation and communication without costly, manual code transformations. As dCUDA programs are less sensitive to network latency, our development might even motivate more throughput oriented network designs. In brief, we make the following contributions:
• We implement the first device-side communication library that provides MPI-like remote memory access operations and target notification for GPU clusters.
• We design the first GPU cluster programming model that makes use of over-subscription and hardware threads to automatically overlap inter-node communication with on-node computation.

II. PROGRAMMING MODEL

The CUDA programming model and the underlying hardware architecture have proven excellent efficiency for parallel compute tasks. To achieve high performance, CUDA programs offload the computation to an accelerator device with many throughput optimized compute cores that are over-subscribed with many more threads than they have execution units. To overlap instruction pipeline latencies, in every clock cycle the compute cores try to select among all threads in flight some that are ready for execution. To implement context switches on a clock-by-clock basis, the compute cores split the register file and the scratchpad memory among the threads in flight. Hence, the register and scratchpad utilization of a code effectively limits the number of threads in flight. However, having enough parallel work is of key importance to fully overlap the instruction pipeline latencies. Little's law [1] states that this minimum required amount of parallel work corresponds to the product of bandwidth and latency. For example, we need 200kB of data on-the-fly to fully utilize a device memory with 200GB/s bandwidth and 1µs latency. Thereby, 200kB translate to roughly 12,000 threads in flight, each of them accessing two double precision floating point values at once. We show in Section IV that the network of our test system has 6GB/s bandwidth and 19µs latency. We therefore need 114kB of data or roughly 7,000 threads in flight to fully utilize the network.

Based on the observation that typical CUDA programs

involves sender and receiver. Unfortunately, this combination of data movement and synchronization is a bad fit for extending the CUDA programming model. On the one hand, CUDA programs typically perform many data movements in the form of device memory accesses before synchronizing the execution using global barriers. On the other hand, CUDA programs over-subscribe the device by running many more threads than there are hardware execution units. As sender and receiver might not be active at the same time, two-sided communication is hardly practical. To avoid active target synchronization, MPI alternatively provides one-sided put and get operations that implement remote memory access [12] using a mapping of the distributed memory to a global address space. We believe that remote memory access programming is a natural extension of the CUDA programming model.
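For reference, the following minimal, self-contained host-side example (not taken from the paper) shows MPI one-sided communication in its standard form: every rank exposes a window and puts one value into its right neighbor's window, using only the MPI-3 calls MPI_Win_create, MPI_Put, and MPI_Win_fence. The collective fence used for synchronization here is precisely the kind of coarse, host-driven step that the notified remote memory access described next replaces with per-operation notifications.

  // Plain MPI one-sided communication on the host: each rank exposes a
  // window of one double and puts its own rank number into the right
  // neighbor's window. Error handling is omitted for brevity.
  #include <mpi.h>
  #include <cstdio>

  int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;  // value this rank contributes
    double remote = -1.0;         // window slot exposed to the other ranks
    MPI_Win win;
    MPI_Win_create(&remote, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);        // open the access epoch (collective)
    int right = (rank + 1) % size;
    MPI_Put(&local, 1, MPI_DOUBLE, right, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);        // close the epoch; 'remote' now holds the
                                  // value put by the left neighbor
    printf("rank %d received %.1f\n", rank, remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
  }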
Finally, programming large distributed memory machines requires more sophisticated synchronization mechanisms than the barriers implemented by CUDA. We propose a notification-based synchronization infrastructure [3] that enqueues notifications on the target after completion of the put and get operations. To synchronize, the target waits for incoming notifications enqueued by concurrent remote memory accesses. This queue-based system makes it possible to build linearizable semantics.

B. Combining MPI & CUDA

CUDA programs structure the computation using kernels, which embody phases of concurrent execution followed by result communication and synchronization. More precisely, kernels write to memory with relaxed consistency, and only after an implicit barrier synchronization at the end of the kernel execution do the results become visible to the entire device. To expose parallelism, kernels use a set of independent thread groups called blocks that provide memory consistency and synchronization among the threads of the block. The blocks are scheduled to the different compute cores of the device. Once a block is running, its execution cannot be interrupted as today's devices do not support preemption [30]. While each compute core keeps as many blocks in flight as possible, the number of concurrent blocks is constrained by hardware
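To ground the terminology of kernels, blocks, and block-level synchronization used above, here is a minimal, self-contained CUDA example (illustrative only, not code from the paper): threads of a block cooperate through scratchpad (shared) memory and block-wide barriers, the grid over-decomposes the problem into many independent blocks, and the result becomes visible to the host only after the implicit barrier at kernel completion.

  // Block-wise sum reduction: each block reduces 256 elements in shared
  // (scratchpad) memory; blocks are independent and synchronize internally
  // with __syncthreads().
  #include <cstdio>
  #include <cuda_runtime.h>

  __global__ void block_sum(const float *in, float *out, int n) {
    __shared__ float tile[256];                  // scratchpad, private to the block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                             // barrier among the block's threads

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
      if (threadIdx.x < stride)
        tile[threadIdx.x] += tile[threadIdx.x + stride];
      __syncthreads();
    }
    if (threadIdx.x == 0) out[blockIdx.x] = tile[0];
  }

  int main() {
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    // Over-decomposition: many more blocks than compute cores keeps the
    // hardware scheduler supplied with ready work.
    block_sum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();                     // results visible after the kernel

    float sum = 0.0f;
    for (int b = 0; b < blocks; ++b) sum += out[b];
    printf("sum = %.0f (expected %d)\n", sum, n);
    cudaFree(in);
    cudaFree(out);
    return 0;
  }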
