Crane: Fast and Migratable GPU Passthrough for OpenCL Applications

Total Pages: 16

File Type: PDF, Size: 1020 KB

Crane: Fast and Migratable GPU Passthrough for OpenCL Applications

James Gleeson, Daniel Kats, Charlie Mei, Eyal de Lara
University of Toronto, Toronto, Canada
{jgleeson,dbkats,cmei,delara}@cs.toronto.edu

SYSTOR '17, Haifa, Israel. © 2017 ACM. DOI: http://dx.doi.org/10.1145/3078468.3078478

ABSTRACT

General purpose GPU (GPGPU) computing in virtualized environments leverages PCI passthrough to achieve GPU performance comparable to bare-metal execution. However, GPU passthrough prevents service administrators from performing virtual machine migration between physical hosts. Crane is a new technique for virtualizing OpenCL-based GPGPU computing that achieves within 5.25% of passthrough GPU performance while supporting VM migration. Crane interposes a virtualization-aware OpenCL library that makes it possible to reclaim and subsequently reassign physical GPUs to a VM without terminating the guest or its applications. Crane also enables continued GPU operation while the VM is undergoing live migration by transparently switching between GPU passthrough operation and API remoting.

CCS Concepts

• Computing methodologies → Graphics processors; • Computer systems organization → Heterogeneous (hybrid) systems; • Software and its engineering → Virtual machines;

Keywords

virtualization, live migration, passthrough, GPU, GPGPU, OpenCL

1. INTRODUCTION

General purpose GPU (GPGPU) computing is being used to accelerate parallel workloads as varied as training deep neural networks for machine learning [2], encoding high-definition video for streaming [42], and molecular dynamics [20]. GPGPU programming is primarily done according to one of two APIs: CUDA for NVIDIA's GPUs, and OpenCL, which is implemented on many different GPUs from NVIDIA and other vendors. Major cloud "infrastructure as a service" (IaaS) providers now have virtualization offerings with dedicated GPUs that customers can leverage in their guest virtual machines (VMs). For example, Amazon Web Services allows VMs to take advantage of NVIDIA's high-end GPUs in EC2 [6]. Microsoft and Google recently introduced similar offerings on the Azure and GCP platforms respectively [34, 21].

Virtualized GPGPU computing leverages modern IOMMU features to give the VM direct use of the GPU through DMA and interrupt remapping – a technique usually called PCI passthrough [17, 50]. By running the GPU driver in the context of the guest operating system, PCI passthrough bypasses the hypervisor and achieves performance comparable to bare-metal OS execution [47].

Unfortunately, GPU passthrough is incompatible with VM migration for two key reasons: (1) the hypervisor cannot safely interrupt guest GPU access from the guest GPU drivers and GPGPU applications, and (2) the hypervisor cannot extract and reinject GPU state. Without the ability to interrupt the guest OS's access to the GPU, it is impossible for the hypervisor to reclaim the GPU from the guest OS without the guest OS crashing; we confirm this behaviour in the Xen hypervisor (Section 2.1). Without the ability to extract GPU hardware state prior to migration, any data stored by guest GPGPU applications in GPU device memory is lost once the guest moves to a new machine.

We present Crane, a new approach for achieving passthrough GPU performance without sacrificing VM migration for virtualized OpenCL GPGPU applications. Crane allows both stop-and-copy and live migration. Crane does not require modification to applications, the hypervisor, and, importantly, guest OS drivers, which are often proprietary. Crane implements virtualization using only the portable OpenCL API, allowing Crane to achieve vendor independence: the ability to migrate any current and future GPU model that supports the OpenCL API without vendor support.
Crane operationally consists of three main components: a transparent interposition library, a guest migration daemon, and a host migration daemon. When the guest OS is not migrating, OpenCL applications run at passthrough speeds, with OpenCL calls sent directly to the GPU. While running, the interposition library interposes on OpenCL calls by tracking the state needed to recreate OpenCL objects. When administrators intend to migrate, Crane signals their intent to the guest migration daemon, which coordinates the pre-migration stage. During pre-migration, the guest migration daemon coordinates both extracting GPU state to DRAM and safely interrupting guest GPU access. Crane implements a universal GPU device memory extraction mechanism, built using portable OpenCL APIs, for saving GPU state to DRAM. Guest GPU access is safely paused by releasing OpenCL objects, unloading OpenCL shared library handles, and removing guest GPU drivers. With the GPU unused, the host migration daemon safely reclaims the GPU and commences VM migration. Crane enables live migration [13] by temporarily switching from passthrough GPU execution to API remoting, which forwards OpenCL commands over RPC to a stationary proxy domain on the source host. The proxy domain is pre-provisioned with a dedicated passthrough GPU to minimize the time to transition to API forwarding. Once the guest migrates, Crane leverages support in modern OSes and hypervisors for hot-plugging PCI devices to re-attach a passthrough GPU [17, 50], and executes the post-migration stage. During post-migration, Crane coordinates resuming execution on the passthrough GPU by reinjecting GPU device memory using its universal mechanism and recreating OpenCL objects from the tracked API state.
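The paper does not reproduce the interposition library's code; the sketch below is our own minimal illustration, under stated assumptions, of how a transparent OpenCL shim of this kind is commonly built: a shared library loaded ahead of the vendor implementation (e.g., via LD_PRELOAD) that resolves the real entry point with dlsym and records the parameters needed to recreate each object later. The tracked_buffer bookkeeping is hypothetical, not Crane's actual data structure.

```c
// Hypothetical sketch of a Crane-style interposition shim (not the authors' code).
// Build as a shared library and load it ahead of the vendor OpenCL library:
//   gcc -shared -fPIC -o libshim.so shim.c -ldl
//   LD_PRELOAD=./libshim.so ./opencl_app
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <CL/cl.h>

// Record enough state to recreate each buffer after migration.
struct tracked_buffer {
    cl_mem handle;
    cl_context context;
    cl_mem_flags flags;
    size_t size;
    struct tracked_buffer *next;
};
static struct tracked_buffer *g_buffers = NULL;

typedef cl_mem (*clCreateBuffer_fn)(cl_context, cl_mem_flags, size_t,
                                    void *, cl_int *);

cl_mem clCreateBuffer(cl_context ctx, cl_mem_flags flags, size_t size,
                      void *host_ptr, cl_int *err) {
    // Resolve the real entry point in the vendor library.
    static clCreateBuffer_fn real = NULL;
    if (!real)
        real = (clCreateBuffer_fn)dlsym(RTLD_NEXT, "clCreateBuffer");

    cl_mem mem = real(ctx, flags, size, host_ptr, err);
    if (mem) {
        // Track the creation parameters so the object can be recreated later.
        struct tracked_buffer *t = malloc(sizeof *t);
        t->handle = mem; t->context = ctx; t->flags = flags; t->size = size;
        t->next = g_buffers; g_buffers = t;
    }
    return mem;
}
```

During live migration, a shim of this shape is also the natural point to redirect each call over RPC to the proxy domain instead of the local vendor library, which is what the paper refers to as API remoting.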
This paper makes the following contributions:

• An approach to GPU virtualization that simultaneously achieves: Passthrough performance – within 5.25% of bare-metal passthrough GPU performance; Vendor independence – Crane uses vendor-standardized OpenCL APIs to implement virtualization, allowing it to work for current and future GPU models; VM migration – Crane supports both stop-and-copy and live migration modes.

• A method for extraction/reinjection of guest GPU state and safe interruption of guest GPU access without terminating the guest or its OpenCL applications. The approach makes use of standard OpenCL APIs to maintain transparency, thereby avoiding application, driver, and hypervisor modifications.

• A method for transparently extracting and injecting OpenCL state, enabling OpenCL applications to continue to execute (albeit at lower performance) while the VM is undergoing live migration via API remoting.

The rest of the paper is structured as follows. Section 2 provides a quick overview of OpenCL and introduces GPU passthrough and how it prevents VM migration. Sections 3 and 4 describe the design and implementation of Crane. Section 5 discusses our solution for dynamically offloading closed-source GPU drivers. Section 6 presents our experimental evaluation. Section 7 discusses related work. Section 8 discusses future work such as extending Crane to CUDA. Finally, Section 9 concludes the paper.

2.1 GPU passthrough

Passthrough is a technique that allows guest OS GPU drivers direct use of GPUs, as if the guest OS were installed on the system as a bare-metal OS. Passthrough takes advantage of the input-output memory management unit (IOMMU) found on modern chipsets, which allows guests unmediated access to GPU device memory, thereby bypassing any hypervisor involvement. As a result, the guest VM attains GPU performance comparable to a bare-metal OS [54].

Unfortunately, using passthrough sacrifices the ability to migrate VMs. To prove that the hypervisor cannot safely interrupt GPU access, we conducted a small experiment with the Xen hypervisor. Modern hypervisors are able to dynamically detach and reattach PCI devices in passthrough without requiring a restart of the guest VM [49, 30]. Our experiment consists of running an OpenCL application in the guest OS, then issuing a detach of its GPU card while the application is running. After the card detaches, the entire guest OS crashes (for the experimental setup see Section 6.1). Hence, detaching the card in the hypervisor is equivalent to ripping the physical GPU card out of the PCI Express slot of a bare-metal OS installation.

Even if passthrough GPU access could be interrupted by the hypervisor, the hypervisor has no method of extracting state from the GPU and reinjecting it into the GPU on the new host. Proprietary GPU drivers provide no standard API for GPU state extraction/reinjection.

A simple approach is to terminate the OpenCL applications before migrating, then restart them on the new host; however, the sheer economic cost [5] of lost computational resources precludes such an approach in industry for batch processing jobs, which can take many hours to execute [35].

2.2 OpenCL programming

The OpenCL API describes a cross-platform method of writing and executing massively data-parallel programs, known as kernels, on a device such as a GPU. The general flow of an OpenCL application is as follows:

1. Device configuration: Query and select an available GPU to use throughout the program.

2. Resource allocation: Kernels are compiled for the target device, and OpenCL memory buffers are used to write GPU memory as input.

3. Program execution: Kernels are enqueued for execution on the GPU, and results are read from GPU device memory back to DRAM.

4. Cleanup: Command queues are flushed, and any allocated OpenCL resource handles are freed.
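To make the four steps concrete, here is a minimal, self-contained OpenCL host program (our example, not code from the paper) that adds two vectors; error checking is omitted for brevity.

```c
// Minimal OpenCL host program illustrating the four-step flow above.
// Illustrative only. Compile with: gcc vadd.c -lOpenCL
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void vadd(__global const float *a, __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    // 1. Device configuration: pick the first GPU of the first platform.
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    // 2. Resource allocation: compile the kernel; create and fill buffers.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    // 3. Program execution: enqueue the kernel, read results back to DRAM.
    size_t gsz = N;
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &gsz, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    // 4. Cleanup: flush the queue and release every OpenCL handle.
    clFinish(q);
    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    printf("c[1] = %f\n", c[1]); // expect 3.0
    return 0;
}
```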
The standardized OpenCL API empowers developers with application portability across all GPU vendors and models, including popular proprietary vendors like NVIDIA and AMD.
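This portability is what makes Crane's universal extraction/reinjection mechanism possible: every conformant implementation supports the same buffer read/write calls, so GPU device memory can be saved to DRAM before migration and restored on a different GPU afterwards without any vendor-specific interface. The following is a minimal sketch of that idea, assuming the hypothetical tracked_buffer list from the earlier interposition sketch; it is not the paper's implementation.

```c
// Hypothetical sketch of universal GPU state extraction/reinjection using only
// portable OpenCL calls; builds on the tracked_buffer list from the shim above.
#include <stdlib.h>
#include <CL/cl.h>

struct tracked_buffer { cl_mem handle; cl_context context; cl_mem_flags flags;
                        size_t size; void *saved; struct tracked_buffer *next; };

// Pre-migration: copy each tracked buffer out of GPU device memory into DRAM,
// then release the object so the GPU ends up unused and can be reclaimed.
void extract_buffers(cl_command_queue q, struct tracked_buffer *list) {
    for (struct tracked_buffer *t = list; t; t = t->next) {
        t->saved = malloc(t->size);
        clEnqueueReadBuffer(q, t->handle, CL_TRUE, 0, t->size,
                            t->saved, 0, NULL, NULL);
        clReleaseMemObject(t->handle);
    }
}

// Post-migration: recreate each buffer on the new GPU and reinject its contents.
void reinject_buffers(cl_context new_ctx, cl_command_queue q,
                      struct tracked_buffer *list) {
    for (struct tracked_buffer *t = list; t; t = t->next) {
        t->context = new_ctx;
        t->handle = clCreateBuffer(new_ctx, t->flags, t->size, NULL, NULL);
        clEnqueueWriteBuffer(q, t->handle, CL_TRUE, 0, t->size,
                             t->saved, 0, NULL, NULL);
        free(t->saved);
    }
}
```

A real implementation must also remap the cl_mem handles the application already holds onto the newly created objects, which is one reason the interposition layer is needed in the first place.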
Recommended publications
  • GPU Virtualization on VMware's Hosted I/O Architecture
GPU Virtualization on VMware's Hosted I/O Architecture
Micah Dowty, Jeremy Sugerman. VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304. [email protected], [email protected]

Abstract: Modern graphics co-processors (GPUs) can produce high fidelity images several orders of magnitude faster than general purpose CPUs, and this performance expectation is rapidly becoming ubiquitous in personal computers. Despite this, GPU virtualization is a nascent field of research. This paper introduces a taxonomy of strategies for GPU virtualization and describes in detail the specific GPU virtualization architecture developed for VMware's hosted products (VMware Workstation and VMware Fusion). We analyze the performance of our GPU virtualization with a combination of applications and microbenchmarks. We also compare against software rendering, the GPU virtualization in Parallels Desktop 3.0, and the native GPU. We find that taking advantage of hardware acceleration significantly closes the gap between pure …

Over the past decade, virtual machines (VMs) have become increasingly popular as a technology for multiplexing both desktop and server commodity x86 computers. Over that time, several critical challenges in CPU virtualization have been overcome, and there are now both software and hardware techniques for virtualizing CPUs with very low overheads [2]. I/O virtualization, however, is still very much an open problem and a wide variety of strategies are used. Graphics co-processors (GPUs) in particular present a challenging mixture of broad complexity, high performance, rapid change, and limited documentation. Modern high-end GPUs have more transistors, draw more power, and offer at least an order of magnitude more computational performance than CPUs. At the same time, GPU acceleration has extended beyond entertainment (e.g., games and video) into the basic windowing systems of recent operating systems and is starting to be applied to non-graphical high-performance applications including protein folding, financial modeling, and medical image processing. The rise in applications that exploit, or even assume, GPU acceleration makes it increasingly important to expose the physical graphics hardware in virtualized environments. Additionally, virtual desktop infrastructure (VDI) initiatives have led many enterprises to try to simplify their desktop management by delivering VMs to their users. Graphics virtualization is extremely important to a user whose primary desktop runs inside a VM. GPUs pose a unique challenge in the field of virtualization.
  • gScale: Scaling up GPU Virtualization with Dynamic Sharing of Graphics Memory Space
gScale: Scaling up GPU Virtualization with Dynamic Sharing of Graphics Memory Space
Mochi Xue (Shanghai Jiao Tong University and Intel Corporation); Kun Tian (Intel Corporation); Yaozu Dong (Shanghai Jiao Tong University and Intel Corporation); Jiacheng Ma, Jiajun Wang, and Zhengwei Qi (Shanghai Jiao Tong University); Bingsheng He (National University of Singapore); Haibing Guan (Shanghai Jiao Tong University)
{xuemochi, mjc0608, jiajunwang, qizhenwei, hbguan}@sjtu.edu.cn, {kevin.tian, eddie.dong}@intel.com, [email protected]
In Proceedings of the 2016 USENIX Annual Technical Conference (USENIX ATC '16), June 22–24, 2016, Denver, CO, USA. ISBN 978-1-931971-30-0. https://www.usenix.org/conference/atc16/technical-sessions/presentation/xue

Abstract: With increasing GPU-intensive workloads deployed on cloud, cloud service providers are seeking practical and efficient GPU virtualization solutions. However, cutting-edge GPU virtualization techniques such as gVirt still suffer from a restriction of scalability, which constrains the number of guest virtual GPU instances.

As one of the key enabling technologies of the GPU cloud, GPU virtualization is intended to provide flexible and scalable GPU resources for multiple instances with high performance. To achieve such a challenging goal, several GPU virtualization solutions were introduced, i.e., GPUvm [28] and gVirt [30]. gVirt, also known as GVT-g, is a full virtualization solution with mediated pass-through.
  • A Full GPU Virtualization Solution with Mediated Pass-Through
A Full GPU Virtualization Solution with Mediated Pass-Through
Kun Tian, Yaozu Dong, David Cowperthwaite. Intel Corporation

Abstract: Graphics Processing Unit (GPU) virtualization is an enabling technology in emerging virtualization scenarios. Unfortunately, existing GPU virtualization approaches are still suboptimal in performance and full feature support. This paper introduces gVirt, a product-level GPU virtualization implementation with: 1) full GPU virtualization running a native graphics driver in the guest, and 2) mediated pass-through that achieves both good performance and scalability, and also secure isolation among guests. gVirt presents a virtual full-fledged GPU to each VM. VMs can directly access performance-critical resources, without intervention from the hypervisor in most cases, while privileged operations from the guest are trap-and-emulated at minimal cost. Experiments demonstrate that gVirt can achieve …

… shows the spectrum of GPU virtualization solutions (with hardware acceleration increasing from left to right). Device emulation [7] has great complexity and extremely low performance, so it does not meet today's needs. API forwarding [3][9][22][31] employs a frontend driver to forward the high-level API calls inside a VM to the host for acceleration. However, API forwarding faces the challenge of supporting full features, due to the complexity of intrusive modification in the guest graphics software stack, and incompatibility between the guest and host graphics software stacks. Direct pass-through [5][37] dedicates the GPU to a single VM, providing full features and the best performance, but at the cost of device sharing capability among VMs. Mediated pass-through [19] passes through performance-critical resources, while mediating privileged operations on the device, with good performance, full features, and sharing capability.
  • Enabling VDI for Engineers and Designers: Positioning Information
Enabling VDI for Engineers and Designers: Positioning Information

The ability to work remotely has been a critical business need for many years. But with the recent pandemic, the focus has shifted from supporting the occasional business trip or conference attendee to supporting all non-essential employees. Virtual Desktop Infrastructure (VDI) provides a robust solution for businesses, allowing their employees to access business data on any mobile device – be it a laptop, tablet, or smartphone – without risk of losing that business data if the device is lost or stolen. With VDI the data resides in the data center, behind company firewalls. VDI allows companies to implement flexible work schedules, where employees have the freedom to pick up the kids from school and log on later in the evening to finish up. VDI also provides for centralized management of security fixes and application upgrades, ensuring all workers are using the latest version of applications.

Although a great solution for your typical office workers, the benefits of VDI have largely been unavailable to another set of employees. Power users – engineers and designers using 3D or other graphics visualization applications – have generally been left out of these flexible arrangements. The graphics visualization applications they use are processor intensive, and the files they create and modify can be very large. The workstations that run the applications and the high-resolution monitors that display the designs are heavy and thus difficult to move. In order to have decent response times, power users have needed to be connected to a high-speed local area network. Work has been underway for several years to expand the benefits of VDI to the power user, and today solutions exist to support the majority of visualization applications.
  • Virtual GPU Software User Guide
Virtual GPU Software User Guide. DU-06920-001 _v13.0 Revision 02 | August 2021

Table of Contents
Chapter 1. Introduction to NVIDIA vGPU Software
  1.1. How NVIDIA vGPU Software Is Used
    1.1.2. GPU Pass-Through
    1.1.3. Bare-Metal Deployment
  1.2. Primary Display Adapter Requirements for NVIDIA vGPU Software Deployments
  1.3. NVIDIA vGPU Software Features
    1.3.1. GPU Instance Support on NVIDIA vGPU Software
    1.3.2. API Support on NVIDIA vGPU
    1.3.3. NVIDIA CUDA Toolkit and OpenCL Support on NVIDIA vGPU Software
    1.3.4. Additional vWS Features
    1.3.5. NVIDIA GPU Cloud (NGC) Containers Support on NVIDIA vGPU Software
    1.3.6. NVIDIA GPU Operator Support
  1.4. How this Guide Is Organized
  • Virtualizing AI/DL Platform: NVIDIA Virtual GPU Solution
Virtualizing AI/DL Platform: NVIDIA Virtual GPU Solution
Doyoung Kim, Sr. Solution Architect

The NVIDIA Virtual GPU solution combines NVIDIA Virtual GPU software (with support, updates, and maintenance) and Tesla datacenter GPUs. In CPU-only VDI, apps and VMs run directly on a hypervisor; with NVIDIA Virtual GPU, NVIDIA virtualization software on the hypervisor exposes NVIDIA graphics drivers to the apps and VMs on top of an NVIDIA Tesla GPU server. Virtual GPU has evolved from business-user VDI (2013), through designers, architects, and engineers (2016–2017), to today's ultra-high-end simulation, photorealism, 3D rendering, and AI/DL with live migration and NGC support. For GPU compute on virtual GPU, a virtual machine receives a virtual Quadro GPU with CUDA and OpenCL enabled and can run optimized DL containers from NGC.

A virtual GPU can:
- Run GPU compute workloads: any CUDA / OpenCL applications; requires NVIDIA Virtual GPU 5.0 or higher and a Pascal or later GPU.
- Be fully integrated with the virtualization solution: live migration support from Virtual GPU 6.0 or higher; cluster / host / VM / application level performance monitoring; enabled for all major solutions such as vSphere / XenServer / KVM.
- Provide every level of AI/DL platform: from desktop level to multi-GPU server; full NVIDIA GPU Cloud (NGC) support; up to 4 vGPUs per virtual machine (requires Virtual GPU 7.0 or higher, RHEL KVM only).

Why virtualization? Which is best for an AI/DL research system: a PC with a consumer GPU, a GPU server without virtualization, or NVIDIA GPU virtualization? Manageability …
  • GPGPU Task Scheduling Technique for Reducing the Performance Deviation of Multiple GPGPU Tasks in RPC-Based GPU Virtualization Environments
GPGPU Task Scheduling Technique for Reducing the Performance Deviation of Multiple GPGPU Tasks in RPC-Based GPU Virtualization Environments
Jihun Kang and Heonchang Yu (Department of Computer Science and Engineering, Korea University, Seoul 02841, Korea; [email protected]). Symmetry journal article. * Correspondence: [email protected]

Abstract: In remote procedure call (RPC)-based graphics processing unit (GPU) virtualization environments, GPU tasks requested by multiple user virtual machines (VMs) are delivered to the VM owning the GPU and are processed in a multi-process form. However, because the thread executing the computation on general GPUs cannot arbitrarily stop the task or trigger context switching, GPU monopoly may be prolonged owing to a long-running general-purpose computing on graphics processing unit (GPGPU) task. Furthermore, when scheduling tasks on the GPU, the time for which each user VM uses the GPU is not considered. Thus, in cloud environments that must provide fair use of computing resources, equal use of GPUs between user VMs cannot be guaranteed. We propose a GPGPU task scheduling scheme based on thread division processing that supports even GPU use by multiple VMs that process GPGPU tasks in an RPC-based GPU virtualization environment. Our method divides the threads of a GPGPU task into several groups and controls the execution time of each thread group to prevent a specific GPGPU task from monopolizing the GPU for a long time. The efficiency of the proposed technique is verified through an experiment in an environment where multiple VMs simultaneously perform GPGPU tasks.
  • NVIDIA GRID GPU Acceleration for Virtualization
NVIDIA GRID™ GPU Acceleration for Virtualization

Agenda: GRID for VDI; GRID-enabled solutions; user profiles and experiences. VDI circa 2011 served the task worker; GRID-powered VDI also serves the designer, power user, and knowledge worker – a true PC experience delivered to any device for the hundreds of millions of power users who want to bring their own devices to work.

Key components of GRID: GRID VGX software, GRID GPUs, and the GRID VCA (Visual Computing Appliance). In an NVIDIA GRID-enabled virtual desktop, the NVIDIA driver runs inside the virtual machine on a GRID-enabled hypervisor backed by an NVIDIA GRID GPU. NVIDIA brands: GeForce®, Quadro®, Tegra®, NVIDIA GRID™, Tesla®.

GRID GPU specifications:

                           NVIDIA GRID K1            NVIDIA GRID K2
GPU                        4 Kepler GPUs             2 high-end Kepler GPUs
CUDA cores                 768 (192 / GPU)           3072 (1536 / GPU)
Memory size                16 GB DDR3 (4 GB / GPU)   8 GB GDDR5
Max power                  130 W                     225 W
Form factor                dual slot ATX, 10.5"      dual slot ATX, 10.5"
Display IO                 none                      none
Aux power requirement      6-pin connector           8-pin connector
PCIe                       x16                       x16
PCIe generation            Gen3 (Gen2 compatible)    Gen3 (Gen2 compatible)
Cooling solution           passive                   passive
Number of users¹           4–100                     2–64
Watts per user             ~1.5 W                    ~3.5 W
OpenGL                     4.x                       4.x
Microsoft DirectX          11                        11
GRID VGX virtualization    yes                       yes

¹ Number of users depends on software solution, workload, and screen resolution.

GRID-enabled OEM platforms: IBM iDataPlex DX360 (2 GRID K1 or 2 GRID K2), Cisco UCS C240 M3 (2 GRID K1 or 2 GRID K2), Dell PowerEdge R720 (2 GRID K1 or 2 GRID K2), HP ProLiant SL250 (2 GRID …)
  • Customer Experiences with GPU Virtualization and 3D Remoting
Customer Experiences with GPU Virtualization and 3D Remoting
Derek Thorslund, Director of Product Management, HDX. © 2014 Citrix

HDX 3D Pro – real customer experiences: industry leaders have been delivering centralized 3D graphics applications using Citrix XenDesktop HDX 3D Pro since 2009, across a range of software and GPU hardware technologies: "software GPU", dedicated GPUs, and shared GPUs including GRID vGPU.

Citrix milestones in 3D graphics remoting: 2006 – Project K2 delivers CATIA to Boeing Dreamliner designers; 2009 – GA of XenDesktop HDX 3D Pro with Deep Compression; 2011 – XenServer 6.0 hypervisor introduces GPU passthrough; 2012 – higher fps via the NVIDIA GRID™ API plus improved compression; 2013 – OpenGL & DirectX vGPU / GPU sharing with NVIDIA GRID™ cards.

XenDesktop HDX 3D Pro deployment options: XenDesktop VDI (single-user Windows 7 or 8) and XenApp (multi-user Windows Server 2008/2012 RDS), on XenServer, vSphere/ESXi, or bare metal, delivered over ICA to any device. One user per machine: XenServer/vSphere direct GPU passthrough or bare metal. N users per machine: XenServer/vSphere direct GPU passthrough or bare metal, or XenServer NVIDIA GRID vGPU.

Insider tips: GPU passthrough is a great option for high-end users doing intensive simulations.
• Servers like the HP ws460c Gen8 can take 8 GPUs, and even larger servers are coming this year
• Full CUDA and OpenCL support (not yet available with GRID vGPU)
• Well-established technology (XenServer and vSphere/ESXi)
• Can mix GPU passthrough with vGPU on NVIDIA GRID K2 and K1 cards
• High-end designers often require other parts of the HDX portfolio: 3D Space Mouse, printers, secure smart card access, etc.

HDX 3D Pro client options: desktop virtualization for high-end graphics users, covering content creation and content access. Segmenting the user population – Tier 1: professional users (e.g. …
  • Using PCI Pass-Through for GPU Virtualization with CUDA*
Using PCI Pass-Through for GPU Virtualization with CUDA*
Chao-Tung Yang**, Hsien-Yi Wang, and Yu-Tso Liu. Department of Computer Science, Tunghai University, Taichung 40704, Taiwan ROC. [email protected], [email protected], [email protected]

Abstract: Nowadays, NVIDIA's CUDA is a general purpose scalable parallel programming model for writing highly parallel applications. It provides several key abstractions – a hierarchy of thread blocks, shared memory, and barrier synchronization. This model has proven quite successful at programming multithreaded many-core GPUs and scales transparently to hundreds of cores: scientists throughout industry and academia are already using CUDA to achieve dramatic speedups on production and research codes. GPU-based clusters are likely to play an important role in future cloud data centers, because some compute-intensive applications may require both CPUs and GPUs. The goal of this paper is to develop a VM execution mechanism that can run these applications inside VMs and allow them to effectively leverage GPUs in such a way that different VMs can share GPUs without interfering with one another.

Keywords: CUDA, GPU virtualization, Cloud computing, IaaS, PCI pass-through.

1 Introduction. GPUs are really "manycore" processors, with hundreds of processing elements. A graphics processing unit (GPU) is a specialized microprocessor that offloads and accelerates graphics rendering from the central (micro-)processor. Modern GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms. We know that a current CPU has only 8 cores on a single chip, but a GPU has grown to 448 cores.
  • Master's Thesis
Graphics Processing on HPC Virtual Applications: Graphics Performance of Windows Applications Running on Unix Systems
Master of Science Thesis, Computer Systems and Networks
Roi Costas Fiel. Examiner: Marina Papatriantafilou
Department of Computer Science and Engineering, Chalmers University of Technology, SE-412 96 Göteborg, Sweden. Telephone +46 (0)31-772 1000
Gothenburg, Sweden, September 2014

Abstract: Simulation, graphic design and other applications with high graphic processing needs have been taking advantage of high performance computing systems in order to deal with complex computations and massive volumes of data. These systems are usually built on top of a single operating system and rely on virtualization in order to run applications compiled for different ones.