RDMA GPU Direct for ioMemory
Robert Wipfel, David Atkisson, Vince Brisebois
GPU Technology Conference: S4265

Copyright © 2014 Fusion-io, Inc. All rights reserved.

Abstract

▸ Learn how to eliminate I/O bottlenecks by integrating Fusion-io ioMemory Flash storage into your GPU apps

▸ Technical overview of Fusion-io's PCIe-attached ioMemory

▸ Best practices and tuning for GPU applications using ioMemory-based flash storage
▸ Topics will cover threading, pipelining, and data path acceleration via RDMA GPU Direct
▸ Sample code showing integration between RDMA GPU Direct and Fusion-io ioMemory will be presented

Outline

▸ Fusion-io ioMemory
▸ CUDA
• Programming patterns
▸ GPU Direct
▸ RDMA GPU Direct for ioMemory
• API
• Sample code
• Performance
▸ Conclusion

Fusion-io ioMemory

Fusion-io Introduction

Fusion-io delivers the world's data faster. From workstations to datacenter servers, Fusion-io accelerates innovative leaders (customer logos shown on the slide) and over 4,000 other companies worldwide.

ioMemory Accelerates Applications

[Slide graphic: application areas accelerated by ioMemory, spanning server (relational databases, virtualization/VDI, big data, search, analytics), HPC, messaging, and workstation workloads (collaboration, caching, security/logging, web), with product logos including Informix, MQ, LAMP, Lotus, and GPFS.]

Flash Results Are Dramatic

[Slide graphic: customer results across financials, web, tech, retail, and government/manufacturing, including the world's leading Q&A site, with gains of 5x, 15x, 30x, and 40x in query speed, data analysis, data warehouse replication, database queries, and query processing throughput.]

60+ case studies at http://fusionio.com/casestudies

ioMemory Purpose-Built Products

Three product families:
▸ Enterprise: scale up
▸ Hyperscale: scale out
▸ Workstation: single user

Capacities across the lineup: mezzanine cards with up to 1.2TB per slot for maximum performance density; up to 3.0TB of capacity per PCIe slot; up to 2.4TB per x8 PCIe slot; up to 3.2TB per PCIe slot; and up to 1.6TB of workstation acceleration.

Access Delay Time

[Latency spectrum: CPU cache and DRAM respond in nanoseconds (10^-9 s), server/workstation-based flash in microseconds (10^-6 s), and disk drives and SSDs in milliseconds (10^-3 s).]

Server/workstation-based flash:
• Significantly faster than disk
• More scalable than memory
• Less expensive than memory
• Data is persistent like on disk

Cut-Through Architecture

[Diagram: the generic SSD/disk approach routes I/O from the host application through the CPU/OS, a RAID controller, SAS, and an SSD controller with its own DRAM and supercapacitors before reaching NAND; the Fusion-io cut-through approach goes from the host application through the CPU/OS directly over PCIe to the data-path controller and NAND.]

High Bandwidth Scalability

ioMemory bandwidth scales linearly while SSDs and PCIe SSDs max out at around 2GB/s (when adding multiple devices)

[Chart: read bandwidth (MB/s) versus number of devices (1 to 6) for ioFX and SSD; ioMemory read bandwidth scales linearly toward 8000 MB/s (the "ioMemory sweet spot") while SSD read bandwidth plateaus around 2GB/s.]

Example: Eight ioMemory devices in RAID0 provide a bandwidth of 12GB/s in a single system!

Latency

With an average response time of 50µs, ioMemory can service 20 I/Os (1000µs / 50µs) for every 1ms of disk access latency!

[Timeline: a single disk I/O spans the full 1000µs interval, while Fusion-io completes an I/O every 50µs over the same interval.]

Consistent Application Performance

ioMemory balances read/write performance for consistent throughput

[Chart: throughput over time, ioMemory versus SSD.]

SSD: queuing behind slow writes causes latency spikes

ioDrive2 – ioMemory for Enterprise

Pushing Performance Density Envelope with ioMemory Platform

▸ Combines VSL and ioMemory into an ioMemory platform that takes enterprise applications and databases to the next level
• Consistent Low Latency Performance
• Industry-leading Capacity
• Lowest CPU Utilization Per Unit of Application Work
• Unrivaled Endurance
• Enterprise Reliability via Self-Healing, Wear Management, and Predictive Monitoring
• Extensive OS Support and Flexibility

ioDrive2 Duo – Heaviest Workloads

▸ Based on the same ioMemory platform as the ioDrive2
▸ The ioDrive2 Duo provides 2.4TB of ioMemory per x4 and x8 PCI Express slot
▸ Organizations can:
• Handle twice the transactions, queries and requests
• Add capacity for future growth
• Ensure applications seamlessly handle larger traffic spikes
• Consolidate more infrastructure

Drivers for Hyperscale Data Centers

Market segments:
‣ Large-scale Transaction Processing
‣ Content Distribution / On-Demand Streaming
‣ Big Data Management and Meta Data Indexing
‣ Cloud Computing and Multi-tenancy
‣ Business Intelligence: Data Warehousing

Key drivers:
‣ Increase Transaction Throughput
‣ Lower Latency and Command Completion Times
‣ Lower Flash TCO

Source: HGST Summit, Aug 2012

ioScale – ioMemory for Hyperscale

Purpose-built for the Largest Web Scale and Cloud Environments
▸ Consistent low latency and high performance
▸ Reliability through simplicity
▸ Unrivaled endurance
▸ Tools for flash-smart applications
▸ Industry-leading performance density
• 410GB, 825GB, 1650GB and 3.2TB capacities

ioFX – ioMemory for Workstations

Bridging the gap between creative potential and application performance
▸ Tuned for sustained performance in multithreaded applications
▸ Work on 2K, 4K and 5K content interactively, in full resolution
▸ Manipulate stereoscopic content in real time
▸ Accelerate video and image editing and compositing
▸ Speed video playback
▸ Powerful throughput to maximize GPU processing
▸ Simplify and accelerate encoding, transcoding, and other data-intensive activities in contemporary digital production

ioDrive2 Specifications

ioDrive2 Capacity: 400GB SLC | 600GB SLC | 365GB MLC | 785GB MLC* | 1.2TB MLC* | 3.0TB MLC
Read Bandwidth (1MB): 1.4 GB/s | 1.5 GB/s | 910 MB/s | 1.5 GB/s | 1.5 GB/s | 1.5 GB/s
Write Bandwidth (1MB): 1.3 GB/s | 1.3 GB/s | 590 MB/s | 1.1 GB/s | 1.3 GB/s | 1.3 GB/s
Random Read IOPS (512B): 360,000 | 365,000 | 137,000 | 270,000 | 275,000 | 143,000
Random Write IOPS (512B): 800,000 | 800,000 | 535,000 | 800,000 | 800,000 | 535,000
Random Read IOPS (4K): 270,000 | 290,000 | 110,000 | 215,000 | 245,000 | 136,000
Random Write IOPS (4K): 270,000 | 270,000 | 140,000 | 230,000 | 250,000 | 242,000
Read Access Latency: 47µs | 47µs | 68µs | 68µs | 68µs | 68µs
Write Access Latency: 15µs (all capacities)
Bus Interface: PCI Express 2.0
Weight: 6.6 ounces | 9.5 ounces
Form Factor: HH x HL | FH x HL
Warranty: 5 years or maximum endurance used
Supported Operating Systems:
• Microsoft Windows: 64-bit Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003
• Linux: RHEL 5/6; SLES 10/11; OEL 5/6; CentOS 5/6; Debian Squeeze; Fedora 16/17; openSUSE 12; Ubuntu 10/11/12
• UNIX: Solaris 10/11 x64; OpenSolaris 2009.06 x64; OSX 10.6/10.7/10.8
• Hypervisors: VMware ESX 4.0/4.1, ESXi 4.1/5.0/5.1, Windows 2008 R2 with Hyper-V, Hyper-V Server 2008 R2

ioDrive2 Duo Specifications

ioDrive2 Duo Capacity: 2.4TB MLC* | 1.2TB SLC*
Read Bandwidth (1MB): 3.0 GB/s | 3.0 GB/s
Write Bandwidth (1MB): 2.5 GB/s | 2.5 GB/s
Random Read IOPS (512B): 540,000 | 700,000
Random Write IOPS (512B): 1,100,000 | 1,100,000
Random Read IOPS (4K): 480,000 | 580,000
Random Write IOPS (4K): 490,000 | 535,000
Read Access Latency: 68µs | 47µs
Write Access Latency: 15µs | 15µs
Bus Interface: PCI Express 2.0, x8 electrical, x8 physical
Weight: Less than 11 ounces
Form Factor: Full-height, half-length
Warranty: 5 years or maximum endurance used
Supported Operating Systems:
• Microsoft Windows: 64-bit Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003
• Linux: RHEL 5/6; SLES 10/11; OEL 5/6; CentOS 5/6; Debian Squeeze; Fedora 16/17; openSUSE 12; Ubuntu 10/11/12
• UNIX: Solaris 10/11 x64; OpenSolaris 2009.06 x64; OSX 10.6/10.7/10.8
• Hypervisors: VMware ESX 4.0/4.1, ESXi 4.1/5.0/5.1, Windows 2008 R2 with Hyper-V, Hyper-V Server 2008 R2


ioScale Specifications

ioScale Capacity: 410GB | 825GB | 1650GB | 3.2TB
Form Factor: HHxHL | HHxHL | HHxHL | FHxHL
Read Bandwidth (1MB): 1.4 GB/s | 1.4 GB/s | 1.4 GB/s | 1.5 GB/s
Write Bandwidth (1MB): 700 MB/s | 1.1 GB/s | 1.1 GB/s | 1.3 GB/s
Random Read IOPS (512B): 78K | 135K | 144K | 115K
Random Write IOPS (512B): 535K | 535K | 535K | 535K
Read Access Latency: 77µs | 77µs | 77µs | 92µs
Write Access Latency: 19µs | 19µs | 19µs | 19µs
Power Profile: < 25W (all capacities)
PBW (typical): 2 | 4 | 8 | 20
Supported Operating Systems:
• Microsoft Windows: 64-bit Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003
• Linux: RHEL 5/6; SLES 10/11; OEL 5/6; CentOS 5/6; Debian Squeeze; Fedora 16/17; openSUSE 12; Ubuntu 10/11/12
• UNIX: Solaris 10/11 x64; OpenSolaris 2009.06 x64; OSX 10.6/10.7/10.8
• Hypervisors: VMware ESX 4.0/4.1, ESXi 4.1/5.0/5.1, Windows 2008 R2 with Hyper-V, Hyper-V Server 2008 R2

ioFX Specifications

ioFX Capacity: 420GB | 1650GB
NAND Type: MLC (Multi Level Cell)
Read Bandwidth (1MB): 1.4 GB/s | 1.4 GB/s
Write Bandwidth (1MB): 700 MB/s | 1.1 GB/s
Read Access Latency (4K): 77µs | 77µs
Write Access Latency (4K): 19µs | 19µs
Bus Interface: PCI Express 2.0 x4 (x8 physical) | PCI Express 2.0 x4 (x4 physical)
Bundled Management Software: ioSphere
Supported Operating Systems:
• Microsoft Windows: 64-bit Windows Server 2012, Windows Server 2008 R2, Windows 8, Windows 7
• Linux: RHEL 5/6, SLES 10/11, CentOS 5/6, Debian Squeeze, Fedora 16/17, Ubuntu 10/11/12, OEL 5/6, openSUSE 12
• OSX: Mac OSX 10.6.7/10.7/10.8
• Hypervisors: VMware ESX 4.0/4.1, VMware ESXi 4.1/5.0/5.1, Windows 2008 R2 with Hyper-V, Hyper-V Server 2008 R2
Warranty: 3 years or maximum endurance used


Datasheets: http://www.fusionio.com/

CUDA

CUDA and the ioMemory VSL

[Diagram: the Virtual Storage Layer (VSL) runs in the host kernel alongside the operating system, CPU cores, and DRAM, virtualizing ioMemory for applications, databases, and file systems; commands and data transfers flow over PCIe to the ioDrive's data-path controller, which spans banks and channels of wide ioMemory.]

ioMemory -> GPU

▸ OS-pinned CUDA buffer
  // Alloc OS-pinned memory
  cudaHostAlloc((void**)&h_odata, memSize, (wc) ? cudaHostAllocWriteCombined : 0);
▸ Read from ioMemory
  fd = open("/mnt/ioMemoryFile", O_RDWR | O_DIRECT);
  rc = read(fd, h_odata, memSize);
▸ Copy (DMA) to GPU
  cudaMemcpyAsync(d_idata, h_odata, memSize, cudaMemcpyHostToDevice, stream);
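Below is a minimal, compilable sketch that strings these three steps together. It is an illustration, not code from this deck: the helper name and read-only open mode are assumptions, and memSize is assumed to be a multiple of the 4 KiB alignment O_DIRECT typically requires (cudaHostAlloc returns page-aligned memory, which satisfies the buffer-alignment side of that requirement).

#define _GNU_SOURCE               /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>

/* Hypothetical helper: read memSize bytes from an ioMemory-backed file into a
 * pinned host buffer with O_DIRECT, then DMA the data to device memory. */
int iom_read_to_gpu(const char *path, void *d_idata, size_t memSize, cudaStream_t stream)
{
    void *h_odata = NULL;

    /* OS-pinned (page-locked) buffer; page alignment satisfies O_DIRECT. */
    if (cudaHostAlloc(&h_odata, memSize, cudaHostAllocDefault) != cudaSuccess)
        return -1;

    int fd = open(path, O_RDONLY | O_DIRECT);   /* bypass the page cache */
    if (fd < 0) { cudaFreeHost(h_odata); return -1; }

    ssize_t rc = read(fd, h_odata, memSize);    /* ioMemory -> pinned host RAM */
    close(fd);
    if (rc != (ssize_t)memSize) { cudaFreeHost(h_odata); return -1; }

    /* Pinned host RAM -> GPU via async DMA; wait before freeing the buffer. */
    cudaMemcpyAsync(d_idata, h_odata, memSize, cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaFreeHost(h_odata);
    return 0;
}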

ioMemory -> GPU

[Diagram: data path for ioMemory -> GPU via host memory: the application uses the file system and the CUDA runtime/driver API; in the kernel, the Virtual Storage Layer DMAs data into pinned host memory and nvidia.ko DMAs it to the GPU (two DMA operations).]

GPU -> ioMemory

▸ OS-pinned CUDA buffer
  // Alloc OS-pinned memory
  cudaHostAlloc((void**)&h_odata, memSize, (wc) ? cudaHostAllocWriteCombined : 0);
▸ Copy (DMA) from GPU
  cudaMemcpyAsync(h_odata, d_idata, memSize, cudaMemcpyDeviceToHost, stream);
▸ Write to ioMemory
  fd = open("/mnt/ioMemoryFile", O_RDWR | O_DIRECT);
  rc = write(fd, h_odata, memSize);
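The reverse direction as a matching sketch, under the same assumptions as the read example (the helper name is hypothetical). The one ordering subtlety is that the device-to-host copy must complete before the buffer is handed to write():

#define _GNU_SOURCE               /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>

/* Hypothetical helper: DMA memSize bytes from device memory into a pinned host
 * buffer, then write them to an ioMemory-backed file with O_DIRECT. */
int iom_write_from_gpu(const char *path, const void *d_idata, size_t memSize, cudaStream_t stream)
{
    void *h_odata = NULL;
    if (cudaHostAlloc(&h_odata, memSize, cudaHostAllocDefault) != cudaSuccess)
        return -1;

    /* GPU -> pinned host RAM; synchronize so the data has landed. */
    cudaMemcpyAsync(h_odata, d_idata, memSize, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    int fd = open(path, O_WRONLY | O_DIRECT);
    if (fd < 0) { cudaFreeHost(h_odata); return -1; }
    ssize_t rc = write(fd, h_odata, memSize);   /* pinned host RAM -> ioMemory */
    close(fd);

    cudaFreeHost(h_odata);
    return (rc == (ssize_t)memSize) ? 0 : -1;
}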

Programming Patterns

▸ Pipelines
• CPU threads
• CUDA streams
▸ Ring buffers (see the sketch below)
• Parallel DMA
▸ Direct I/O
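A minimal double-buffered sketch combining these patterns: CUDA streams, a two-slot ring of pinned buffers, and direct I/O. The chunk size, ring depth, and helper name are assumptions, and the file length is assumed to be a multiple of the 4 KiB O_DIRECT alignment; while the GPU drains one slot, the CPU reads the next chunk into the other.

#define _GNU_SOURCE               /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>

#define CHUNK (4u << 20)          /* 4 MiB per slot, a multiple of 4 KiB  */
#define DEPTH 2                   /* two slots: classic double buffering  */

/* Hypothetical helper: stream `total` bytes from an ioMemory-backed file
 * into device memory, overlapping file reads with host-to-device DMA. */
int iom_stream_to_gpu(const char *path, char *d_dst, size_t total)
{
    void *h_buf[DEPTH];
    cudaStream_t stream[DEPTH];
    for (int i = 0; i < DEPTH; i++) {
        cudaHostAlloc(&h_buf[i], CHUNK, cudaHostAllocDefault);
        cudaStreamCreate(&stream[i]);
    }

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) return -1;

    for (size_t off = 0; off < total; off += CHUNK) {
        int s = (int)((off / CHUNK) % DEPTH);
        size_t len = (total - off < CHUNK) ? (total - off) : CHUNK;

        /* The previous async copy out of this slot must finish before the
         * pinned buffer is overwritten by the next read. */
        cudaStreamSynchronize(stream[s]);

        if (pread(fd, h_buf[s], len, (off_t)off) != (ssize_t)len)
            break;

        /* Kick off the H2D DMA and immediately start reading the next chunk. */
        cudaMemcpyAsync(d_dst + off, h_buf[s], len, cudaMemcpyHostToDevice, stream[s]);
    }

    close(fd);
    for (int i = 0; i < DEPTH; i++) {
        cudaStreamSynchronize(stream[i]);
        cudaStreamDestroy(stream[i]);
        cudaFreeHost(h_buf[i]);
    }
    return 0;
}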

GPU Direct
https://developer.nvidia.com/gpudirect
▸ Copy data directly to/from CUDA pinned host memory
• Avoids one copy, but does not bypass host memory
• Currently supported by ioDrive (as described previously)
▸ Peer-to-peer transfers
• P2P DMA between PCIe GPUs
▸ Peer-to-peer memory access between GPUs
• NUMA-style access between CUDA kernels

RDMA GPU Direct
http://docs.nvidia.com/cuda/gpudirect-rdma/index.html

RDMA GPU Direct

▸ Available with Kepler-class GPUs and CUDA 5.0+
▸ Fastest possible inter-device communication
• Utilizes PCIe peer-to-peer DMA transfers
▸ Avoids host memory altogether
• No bounce buffer, no copy-in/copy-out
▸ Third-party devices can directly access GPU memory
• Tech preview (proof of concept) for ioMemory

RDMA GPU Direct for ioMemory

ioMemory <-> GPU

[Diagram: with RDMA GPU Direct, the application still uses the file system and the CUDA runtime/driver API, but in the kernel the Virtual Storage Layer obtains GPU pages via nvidia_get/put_p2p_pages() from nvidia.ko and performs a single peer-to-peer DMA between ioMemory and GPU memory.]

Design Considerations

▸ Supported systems
• Kepler-class GPUs
• Sandy/Ivy Bridge
▸ User space tokens
• Query CUDA and pass to VSL
▸ Lazy unpinning
▸ Kernel driver linkage
• VSL uses the nvidia_get/put_pages() APIs
• Adopts the Mellanox GPU Direct approach

CUDA API

▸ Data structure
typedef struct CUDA_POINTER_ATTRIBUTE_P2P_TOKENS_st {
    unsigned long long p2pToken;
    unsigned int vaSpaceToken;
} CUDA_POINTER_ATTRIBUTE_P2P_TOKENS;
▸ Function
CUresult CUDAAPI cuPointerGetAttribute(void *data, CUpointer_attribute attribute, CUdeviceptr pointer);
▸ Usage
cudaMalloc((void**)&ptr, size);
cuPointerGetAttribute(&tokens, CU_POINTER_ATTRIBUTE_P2P_TOKENS, (CUdeviceptr)ptr);
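A compilable sketch of that usage with error checking added (the 1 MiB allocation size is arbitrary). Allocations made through the CUDA runtime live in the primary context, so the driver-API query can be issued on the same thread:

#include <stdio.h>
#include <cuda.h>
#include <cuda_runtime.h>

int main(void)
{
    void *d_buf = NULL;
    CUDA_POINTER_ATTRIBUTE_P2P_TOKENS tokens;

    if (cudaMalloc(&d_buf, 1 << 20) != cudaSuccess)      /* 1 MiB, illustrative */
        return 1;

    /* Ask the driver for the tokens that identify this allocation to a
     * third-party DMA engine (here, the ioMemory VSL). */
    CUresult cu_err = cuPointerGetAttribute(&tokens,
                                            CU_POINTER_ATTRIBUTE_P2P_TOKENS,
                                            (CUdeviceptr)d_buf);
    if (cu_err != CUDA_SUCCESS) {
        fprintf(stderr, "cuPointerGetAttribute failed: %d\n", (int)cu_err);
        cudaFree(d_buf);
        return 1;
    }

    printf("p2pToken=%llx vaSpaceToken=%x\n", tokens.p2pToken, tokens.vaSpaceToken);
    cudaFree(d_buf);
    return 0;
}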

VSL user space API

typedef struct fusion_iovec {
    fio_block_range_t iov_range;        ///< Range of sectors
    uint32_t iov_op;                    ///< FIOV_TRIM/WRITE/READ
    uint32_t iov_flags;                 ///< FIOV_FLAGS_* (GPU)
    uint64_t iov_base;                  ///< Mem addr (CPU or GPU)
    ///< Special tokens for p2p GPU transfers
    unsigned long long iov_p2p_token;
    unsigned int iov_va_space_token;
    unsigned int iov_padding_reserved;
} fusion_iovec_t;

int ufct_iovec_transfer(int fd, const struct fusion_iovec *iov, int iovcnt, uint32_t *sectors_transferred);

Sample Code

cuda_err = cudaMalloc((void**)&buf, allocate_size);
cu_err = cuPointerGetAttribute(&cuda_tokens, CU_POINTER_ATTRIBUTE_P2P_TOKENS, (CUdeviceptr)buf);

iov[0].iov_range.base = sector;               // ioMemory block addr
iov[0].iov_range.length = individual_buf_size;
iov[0].iov_op = is_write ? FIOV_WRITE : FIOV_READ;
iov[0].iov_flags = is_gpu ? FIOV_FLAGS_GPU : 0;
iov[0].iov_base = buf;                        // GPU memory addr
iov[0].iov_p2p_token = cuda_tokens.p2pToken;
iov[0].iov_va_space_token = cuda_tokens.vaSpaceToken;

ufct_iovec_transfer(fd, iov, iovcnt, &sectors_transferred);
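To make the sample self-contained, here is a hedged wrapper built only from the calls shown above. The function name is hypothetical, the file descriptor is assumed to come from opening the ioMemory device or file as on the earlier slides, the units of iov_range.length are taken to match the sample, and the tech-preview VSL header declaring fusion_iovec, FIOV_READ, FIOV_FLAGS_GPU, and ufct_iovec_transfer is assumed to be on the include path:

#include <stdint.h>
#include <string.h>
#include <cuda.h>
#include <cuda_runtime.h>
/* plus the ioMemory VSL tech-preview header that declares fusion_iovec,
 * FIOV_READ, FIOV_FLAGS_GPU and ufct_iovec_transfer */

/* Hypothetical helper: read one extent from ioMemory directly into GPU memory
 * via peer-to-peer DMA, using the tokens obtained from CUDA. */
int iom_p2p_read(int fd, uint64_t sector, void *d_buf, uint32_t length)
{
    CUDA_POINTER_ATTRIBUTE_P2P_TOKENS tokens;
    struct fusion_iovec iov[1];
    uint32_t sectors_transferred = 0;

    memset(iov, 0, sizeof iov);

    /* The tokens let the VSL target the GPU's memory for P2P DMA. */
    if (cuPointerGetAttribute(&tokens, CU_POINTER_ATTRIBUTE_P2P_TOKENS,
                              (CUdeviceptr)d_buf) != CUDA_SUCCESS)
        return -1;

    iov[0].iov_range.base     = sector;            /* ioMemory block address */
    iov[0].iov_range.length   = length;            /* as in the sample above */
    iov[0].iov_op             = FIOV_READ;
    iov[0].iov_flags          = FIOV_FLAGS_GPU;    /* iov_base is GPU memory */
    iov[0].iov_base           = (uint64_t)(uintptr_t)d_buf;
    iov[0].iov_p2p_token      = tokens.p2pToken;
    iov[0].iov_va_space_token = tokens.vaSpaceToken;

    return ufct_iovec_transfer(fd, iov, 1, &sectors_transferred);
}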

Benchmark

▸ System details
• Intel Sandy Bridge
• NVIDIA Tesla K20
▸ Modified CUDA bandwidth test
▸ Modified fio-iovec-transfer test

Modified CUDA bandwidth test

[Chart: single-thread bandwidth (GB/s), read (ioM->GPU) and write (GPU->ioM), comparing hard drive, ioMemory, ioMemory with GPU Direct RDMA, and ioMemory with GPU Direct RDMA plus lazy unpinning; the y-axis spans 0 to 1 GB/s.]

Modified fio-iovec-transfer test

[Chart: bandwidth (GB/s) with 1, 2, 4, and 8 threads for read (ioM->GPU), write (GPU->ioM), read GDR (ioM->GPU), and write GDR (GPU->ioM); the y-axis spans 0 to 2.5 GB/s.]

Conclusion

▸ ioMemory for high-performance GPU I/O
• Available now!
▸ Tech preview of RDMA GPU Direct for ioMemory
• Bypasses host memory to further increase I/O performance
▸ ioMemory can accelerate your GPU applications
▸ Feedback welcome
• [email protected]

Thank You

fusionio.com | DELIVERING THE WORLD’S DATA. FASTER.