Pegasus: Coordinated Scheduling for Virtualized Accelerator-based Systems

Vishakha Gupta (Georgia Institute of Technology), Karsten Schwan (Georgia Institute of Technology), Niraj Tolia (Maginatics), Vanish Talwar (HP Labs), Parthasarathy Ranganathan (HP Labs)

Abstract

Heterogeneous multi-cores—platforms comprised of both general purpose and accelerator cores—are becoming increasingly common. While applications wish to freely utilize all cores present on such platforms, operating systems continue to view accelerators as specialized devices. The Pegasus system described in this paper uses an alternative approach that offers a uniform resource usage model for all cores on heterogeneous chip multiprocessors. Operating at the hypervisor level, its novel scheduling methods fairly and efficiently share accelerators across multiple virtual machines, thereby making accelerators into first class schedulable entities of choice for many-core applications. Using NVIDIA GPGPUs coupled with x86-based general purpose host cores, a Xen-based implementation of Pegasus demonstrates improved performance for applications by better managing combined platform resources. With moderate virtualization penalties, performance improvements range from 18% to 140% over base GPU driver scheduling when the GPUs are shared.

1 Introduction

Systems with specialized processors like those used for accelerating computations, network processing, or cryptographic tasks [27, 34] have proven their utility in terms of higher performance and lower power consumption. This is not only causing tremendous growth in accelerator-based platforms, but it is also leading to the release of heterogeneous processors where x86-based cores and on-chip network or graphics accelerators [17, 31] form a common pool of resources. However, operating systems and virtualization platforms have not yet adjusted to these architectural trends. In particular, they continue to treat accelerators as secondary devices and focus scheduling and resource management on their general purpose processors, supported by vendors that shield developers from the complexities of accelerator hardware by 'hiding' it behind drivers that only expose higher level programming APIs [19, 28]. Unfortunately, this technically implies that drivers rather than operating systems or hypervisors determine how accelerators are shared, which restricts scheduling policies and thus the optimization criteria applied when using such heterogeneous systems.

A driver-based execution model can not only potentially hurt utilization, but also make it difficult for applications and systems to obtain desired benefits from the combined use of heterogeneous processing units. Consider, for instance, an advanced image processing service akin to HP's Snapfish [32] or Microsoft's PhotoSynth [25] applications, but offering additional computational services like complex image enhancement and watermarking, hosted in a data center. For such applications, the low latency responses desired by end users require the combined processing power of both general purpose and accelerator cores. An example is the execution of sequences of operations like those that first identify spatial correlation or correspondence [33] between images prior to synthesizing them [25]. For these pipelined sets of tasks, some can efficiently run on multi-core CPUs, whereas others can substantially benefit from acceleration [6, 23]. However, when they concurrently use both types of processing resources, low latency is attained only when different pipeline elements are appropriately co- or gang-scheduled onto both CPU and GPU cores. As shown later in this paper, such co-scheduling is difficult to perform with current accelerators when used in consolidated data center settings. Further, it is hard to enforce fairness in accelerator use when the many clients in typical web applications cause multiple tasks to compete for both general purpose and accelerator resources.
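To make the pipeline structure concrete, the sketch below is an illustrative CUDA fragment of such a service, not code from Pegasus or any cited system; the stage names find_correspondence() and watermark() are hypothetical stand-ins. It shows a CPU stage feeding a GPU stage through a CUDA stream; low latency depends on both stages being scheduled at the same time, which is exactly what is hard to guarantee in consolidated settings.

#include <cuda_runtime.h>
#include <vector>

// GPU stage: stand-in for complex image enhancement/watermarking.
__global__ void watermark(float *img, int n, float mark) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] += mark;
}

// CPU stage: stand-in for spatial correspondence between image pairs.
void find_correspondence(const float *a, const float *b, int n) {
    (void)a; (void)b; (void)n;  // real matching logic omitted
}

void process(std::vector<float *> &imgs, int n) {
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    int m = (int)imgs.size();
    for (int k = 0; k + 1 < m; ++k) {
        // Adjacent pipeline elements: the CPU stage for one image pair
        // runs while the GPU stage for another is in flight, so low
        // latency requires both resource types concurrently.
        find_correspondence(imgs[k], imgs[k + 1], n);        // CPU cores
        cudaMemcpyAsync(dev, imgs[k], n * sizeof(float),
                        cudaMemcpyHostToDevice, stream);     // GPU side
        watermark<<<(n + 255) / 256, 256, 0, stream>>>(dev, n, 0.1f);
        cudaMemcpyAsync(imgs[k], dev, n * sizeof(float),
                        cudaMemcpyDeviceToHost, stream);
    }
    cudaStreamSynchronize(stream);
    cudaFree(dev);
    cudaStreamDestroy(stream);
}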
The Pegasus project addresses the urgent need for systems support to smartly manage accelerators. It does this by leveraging the new opportunities presented by increased adoption of virtualization technology in commercial, cloud computing [1], and even high performance infrastructures [22, 35]: the Pegasus hypervisor extensions (1) make accelerators into first class schedulable entities and (2) support scheduling methods that enable efficient use of both the general purpose and accelerator cores of heterogeneous hardware platforms. Specifically, for platforms comprised of x86 CPUs connected to NVIDIA GPUs, these extensions can be used to manage all of the platform's processing resources, to address the broad range of needs of GPGPU (general purpose computation on graphics processing units) applications, including the high throughput requirements of compute intensive web applications like the image processing code outlined above and the low latency requirements of computational finance [24] or similarly computationally intensive high performance codes. For high throughput, platform resources can be shared across many applications and/or clients. For low latency, resource management with such sharing also considers individual application requirements, including those of the inter-dependent pipeline-based codes employed for the financial and image processing applications.

The Pegasus hypervisor extensions described in Sections 3 and 5 do not give applications direct access to accelerators [28], nor do they hide them behind a virtual file system layer [5, 15]. Instead, similar to past work on self-virtualizing devices [29], Pegasus exposes to applications a virtual accelerator interface, and it supports existing GPGPU applications by making this interface identical to NVIDIA's CUDA programming API [13]. As a result, whenever a virtual machine attempts to use the accelerator by calling this API, control reverts to the hypervisor. This means, of course, that the hypervisor 'sees' the application's accelerator accesses, thereby getting an opportunity to regulate (schedule) them. A second step taken by Pegasus is to then explicitly coordinate how VMs use general purpose and accelerator resources. With the Xen implementation [7] of Pegasus shown in this paper, this is done by explicitly scheduling guest VMs' accelerator accesses in Xen's Dom0, while at the same time controlling those VMs' use of general purpose processors, the latter exploiting Dom0's privileged access to the Xen hypervisor and its VM scheduler.
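The following is a minimal sketch of what such API interposition might look like, assuming a guest-side library that exports CUDA runtime symbols and some transport into Dom0. It is not Pegasus's actual interface: the CallPacket layout, the opcode value, and forward_to_dom0() are hypothetical placeholders. The point is only that, because the guest links against the interposing library rather than the vendor runtime, every API call becomes a request that the management domain can queue and schedule.

#include <cuda_runtime.h>

struct CallPacket {        // descriptor shipped to the backend in Dom0
    int    op;             // which API call (hypothetical opcode)
    void  *dst, *src;
    size_t bytes;
    int    kind;
};

// Stub standing in for the real transport (e.g., a shared ring between
// guest and Dom0); a real backend would queue the packet and let the
// accelerator scheduler decide when to issue it to the driver.
static cudaError_t forward_to_dom0(const CallPacket &p) {
    (void)p;
    return cudaSuccess;
}

// Exported with the same symbol as the CUDA runtime's cudaMemcpy, so
// the hypervisor-level backend 'sees' every accelerator access.
extern "C" cudaError_t cudaMemcpy(void *dst, const void *src,
                                  size_t count, cudaMemcpyKind kind) {
    CallPacket p;
    p.op    = 2;           // hypothetical CUDA_MEMCPY opcode
    p.dst   = dst;
    p.src   = const_cast<void *>(src);
    p.bytes = count;
    p.kind  = static_cast<int>(kind);
    return forward_to_dom0(p);
}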
Pegasus elevates accelerators to first class schedulable citizens in a manner somewhat similar to the way it is done in the Helios operating system [26], which uses satellite kernels with standard interfaces for XScale-based IO cards. However, given the fast rate of technology development in accelerator chips, we consider it premature to impose a common abstraction across all possible heterogeneous processors. Instead, Pegasus accepts that accelerator and general purpose cores are managed by separate schedulers; to a large extent, VMs use the resources managed by these multiple scheduling domains. This approach leverages notions of 'cellular' hypervisor structures [11] or federated schedulers that have been shown useful in other contexts [20]. Concurrent use of both CPU and GPU resources is one class of coordination methods Pegasus implements, with other methods aimed at delivering both high performance and fairness in terms of VM usage of platform resources.

Pegasus relies on application developers or toolchains to identify the right target processors for different computational tasks and to generate such tasks with the appropriate instruction set architectures (ISAs). Further, its current implementation does not interact with toolchains or runtimes, but we recognize that such interactions could improve the effectiveness of its runtime methods for resource management [8]. An advantage derived from this lack of interaction, however, is that Pegasus does not depend on specific toolchains or runtimes, nor does it require internal information about accelerators [23]. As a result, Pegasus can operate with both 'closed' accelerators like NVIDIA GPUs and with 'open' ones like IBM Cell [14], and its approach can easily be extended to support other APIs like OpenCL [19].

Summarizing, the Pegasus hypervisor extensions make the following contributions:

Accelerators as first class schedulable entities—accelerators (accelerator physical CPUs or aPCPUs) can be managed as first class schedulable entities, i.e., they can be shared by multiple tasks, and task mappings to processors are dynamic, within the constraints imposed by the accelerator software stacks.

Visible heterogeneity—Pegasus respects the fact that aPCPUs differ in capabilities, have different modes of access, and sometimes use different ISAs. Rather than hiding these facts, Pegasus exposes heterogeneity to the applications and the guest virtual machines (VMs) that are capable of exploiting it.

Diversity in scheduling—accelerators are used in multiple ways, e.g., to speed up parallel codes, to increase throughput, or to improve a platform's power/performance properties. Pegasus addresses differing application needs by offering a diversity of methods for scheduling accelerator and general purpose resources, including co-scheduling for concurrency constraints.

'Coordination' as the basis for resource management—internally, accelerators use specialized execution environments with their own resource managers [14, 27]. Pegasus uses coordinated scheduling methods to align accelerator resource management with the hypervisor's management of general purpose resources; one such policy is sketched below.
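As a rough illustration of coordination, and not Pegasus's actual scheduler, the sketch below shows a Dom0-side policy that picks the next VM whose queued GPU request to issue: it prefers VMs that currently hold CPU resources (approximating co-scheduling of pipeline stages) and breaks ties by remaining credits (for fairness). The data structures and the runnable_on_cpu() hook are hypothetical.

#include <deque>
#include <vector>

struct GpuRequest { /* packed CUDA call, as in the shim above */ };

struct VmQueue {
    int id;
    int credits;                     // share assigned to this VM
    std::deque<GpuRequest> pending;  // its outstanding GPU calls
};

// Stub: a real implementation would query the hypervisor's CPU
// scheduler (e.g., Xen's credit scheduler) for this VM's state.
static bool runnable_on_cpu(int vm_id) { (void)vm_id; return true; }

// Pick the next request source: prefer VMs with CPU work in flight
// (co-scheduling), then higher remaining credits (fairness).
VmQueue *pick_next(std::vector<VmQueue> &vms) {
    VmQueue *best = nullptr;
    for (auto &vm : vms) {
        if (vm.pending.empty()) continue;
        bool better = !best
            || (runnable_on_cpu(vm.id) && !runnable_on_cpu(best->id))
            || (runnable_on_cpu(vm.id) == runnable_on_cpu(best->id)
                && vm.credits > best->credits);
        if (better) best = &vm;
    }
    return best;                     // nullptr if no VM has pending work
}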