Cooperative Kernels: GPU Multitasking for Blocking Algorithms (Extended Version)

Tyler Sorensen, Hugues Evrard and Alastair F. Donaldson
Imperial College London, London, UK
[email protected] [email protected] [email protected]

arXiv:1707.01989v1 [cs.PL] 6 Jul 2017

ABSTRACT

There is growing interest in accelerating irregular data-parallel algorithms on GPUs. These algorithms are typically blocking, so they require fair scheduling. But GPU programming models (e.g. OpenCL) do not mandate fair scheduling, and GPU schedulers are unfair in practice. Current approaches avoid this issue by exploiting scheduling quirks of today's GPUs in a manner that does not allow the GPU to be shared with other workloads (such as graphics rendering tasks). We propose cooperative kernels, an extension to the traditional GPU programming model geared towards writing blocking algorithms. Workgroups of a cooperative kernel are fairly scheduled, and multitasking is supported via a small set of language extensions through which the kernel and scheduler cooperate. We describe a prototype implementation of a cooperative kernel framework in OpenCL 2.0 and evaluate our approach by porting a set of blocking GPU applications to cooperative kernels and examining their performance under multitasking. Our prototype exploits no vendor-specific hardware, driver or compiler support, thus our results provide a lower bound on the efficiency with which cooperative kernels can be implemented in practice.

© 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.

[Figure omitted: a timeline of four GPU compute units. The host CPU requests two compute units for a graphics task; workgroups 2 and 3 of the cooperative kernel are killed after a gather time, the graphics task executes, and once it terminates the workgroups are forked back.]
Figure 1: Cooperative kernels can flexibly resize to let other tasks, e.g. graphics, run concurrently

CCS CONCEPTS

• Software and its engineering → Multiprocessing / multiprogramming / multitasking; Semantics; • Computing methodologies → Graphics processors;

KEYWORDS

GPU, cooperative multitasking, irregular parallelism

1 INTRODUCTION

The Needs of Irregular Data-parallel Algorithms. Many interesting data-parallel algorithms are irregular: the amount of work to be processed is unknown ahead of time and may change dynamically in a workload-dependent manner. There is growing interest in accelerating such algorithms on GPUs [2–5, 7, 8, 12, 14, 15, 17, 20, 22, 26, 27, 30, 32]. Irregular algorithms usually require blocking synchronization between workgroups, e.g. many graph algorithms use a level-by-level strategy, with a global barrier between levels; work stealing algorithms require each workgroup to maintain a queue, typically mutex-protected, to enable stealing by other workgroups.

To avoid starvation, a blocking concurrent algorithm requires fair scheduling of workgroups. For example, if one workgroup holds a mutex, an unfair scheduler may cause another workgroup to spin-wait forever for the mutex to be released. Similarly, an unfair scheduler can cause a workgroup to spin-wait indefinitely at a global barrier so that other workgroups do not reach the barrier.

A Degree of Fairness: Occupancy-bound Execution. The current GPU programming models—OpenCL [13], CUDA [19] and HSA [11]—specify almost no guarantees regarding scheduling of workgroups, and current GPU schedulers are unfair in practice. Roughly speaking, each workgroup executing a GPU kernel is mapped to a hardware compute unit.¹ The simplest way for a GPU driver to handle more workgroups being launched than there are compute units is via an occupancy-bound execution model [7, 27] where, once a workgroup has commenced execution on a compute unit (it has become occupant), the workgroup has exclusive access to the compute unit until it finishes execution. Experiments suggest that this model is widely employed by today's GPUs [2, 7, 20, 27].

¹ In practice, depending on the kernel, multiple workgroups might map to the same compute unit; we ignore this in our current discussion.

The occupancy-bound execution model does not guarantee fair scheduling between workgroups: if all compute units are occupied then a not-yet-occupant workgroup will not be scheduled until some occupant workgroup completes execution. Yet the execution model does provide fair scheduling between occupant workgroups, which are bound to separate compute units that operate in parallel. Current GPU implementations of blocking algorithms assume the occupancy-bound execution model, which they exploit by launching no more workgroups than there are available compute units [7].

Resistance to Occupancy-bound Execution. Despite its practical prevalence, none of the current GPU programming models actually mandate occupancy-bound execution. Further, there are reasons why this model is undesirable. First, the execution model does not enable multitasking, since a workgroup effectively owns a compute unit until the workgroup has completed execution. The GPU cannot be used meanwhile for other tasks (e.g. rendering). Second, energy throttling is an important concern for battery-powered devices [31]. In the future, it will be desirable for a mobile GPU driver to power down some compute units, suspending execution of associated occupant workgroups, if the battery level is low.

Our assessment, informed by discussions with a number of industrial practitioners who have been involved in the OpenCL and/or HSA standardisation efforts (including [10, 23]), is that GPU vendors (1) will not commit to the occupancy-bound execution model they currently implement, for the above reasons, yet (2) will not guarantee fair scheduling using preemption. This is due to the high runtime cost of preempting workgroups, which requires managing thread-local state (e.g. registers, program location) for all workgroup threads (up to 1024 on Nvidia GPUs), as well as shared memory, the workgroup-local cache (up to 64 KB on Nvidia GPUs). Vendors instead wish to retain the essence of the simple occupancy-bound model, supporting preemption only in key special cases.

For example, preemption is supported by Nvidia's Pascal architecture [18], but on a GTX Titan X (Pascal) we still observe starvation: a global barrier executes successfully with 56 workgroups, but deadlocks with 57 workgroups, indicating unfair scheduling.

Our Proposal: Cooperative Kernels. To summarise: blocking algorithms demand fair scheduling, but for good reasons GPU vendors will not commit to the guarantees of the occupancy-bound execution model. We propose cooperative kernels, an extension to the GPU programming model that aims to resolve this impasse. A kernel that requires fair scheduling is identified as cooperative, and written using two additional language primitives, offer_kill and request_fork, placed by the programmer. Where the cooperative kernel could proceed with fewer workgroups, a workgroup can execute offer_kill, offering to sacrifice itself to the scheduler. This indicates that the workgroup would ideally continue executing, but that the scheduler may preempt the workgroup; the cooperative kernel must be prepared to deal with either scenario. Where the cooperative kernel could use additional resources, a workgroup can execute request_fork to indicate that the kernel is prepared to proceed with the existing […]

Our design contrasts with a yield-style primitive, where a processing unit would temporarily give up its hardware resource. We deviate from this design as, in the case of a global barrier, adopting yield would force the cooperative kernel to block completely when a single workgroup yields, stalling the kernel until the given workgroup resumes. Instead, our offer_kill allows a kernel to make progress with a smaller number of workgroups, with workgroups potentially joining again later via request_fork.

Figure 1 illustrates sharing of GPU compute units between a cooperative kernel and a graphics task. Workgroups 2 and 3 of the cooperative kernel are killed at an offer_kill to make room for a graphics task. The workgroups are subsequently restored to the cooperative kernel when workgroup 0 calls request_fork. The gather time is the time between resources being requested and the application surrendering them via offer_kill. To satisfy soft real-time constraints, this time should be low; our experimental study (Sec. 5.4) shows that, in practice, the gather time for our applications is acceptable for a range of graphics workloads.

The cooperative kernels model has several appealing properties:

(1) By providing fair scheduling between workgroups, cooperative kernels meet the needs of blocking algorithms, including irregular data-parallel algorithms.
(2) The model has no impact on the development of regular (non-cooperative) compute and graphics kernels.
(3) The model is backwards-compatible: offer_kill and request_fork may be ignored, and a cooperative kernel will behave exactly as a regular kernel does on current GPUs.
(4) Cooperative kernels can be implemented over the occupancy-bound execution model provided by current GPUs: our prototype implementation uses no special hardware/driver support.
(5) If hardware support for preemption is available, it can be leveraged to implement cooperative kernels efficiently, and cooperative kernels can avoid unnecessary preemptions by allowing the programmer to communicate "smart" preemption points.

Placing the primitives manually is straightforward for the representative set of GPU-accelerated irregular algorithms we have […]
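As a rough illustration of how the two primitives might be placed in a level-by-level graph algorithm of the kind mentioned above — a sketch only, not code from the paper: the kernel body and the helpers work_remaining, process_level and global_barrier are hypothetical, and only offer_kill and request_fork are the proposed primitives:

```c
/* Illustrative OpenCL-C-style sketch of primitive placement.  The BFS
 * structure and all helper functions are hypothetical; offer_kill and
 * request_fork are the two primitives described in the text. */
kernel void bfs_cooperative(global const graph_t *g, global int *frontier) {
    while (work_remaining(frontier)) {
        /* The algorithm can finish a level with fewer workgroups, so
         * this is a safe point to let the scheduler reclaim this one. */
        offer_kill();
        /* Conversely, extra workgroups could help with the next level,
         * so offer the scheduler a point at which to fork new ones in. */
        request_fork();
        /* Partition the current frontier over the workgroups that are
         * actually active; the count may change between iterations. */
        process_level(g, frontier, get_group_id(0), get_num_groups(0));
        global_barrier();  /* blocking: relies on fair scheduling */
    }
}
```

Because offer_kill may remove workgroups and request_fork may add them, any work partitioning has to be recomputed from the live workgroup count on each iteration, which is why the sketch re-reads the group id and group count inside the loop.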
