Sorting Large Datasets with Heterogeneous CPU/GPU Architectures

Michael Gowanlock
School of Informatics, Computing, & Cyber Systems
Northern Arizona University
Flagstaff, AZ 86011
[email protected]

Ben Karsin
Department of Information and Computer Sciences
University of Hawaii at Manoa
Honolulu, HI 96822
[email protected]

Abstract—We examine heterogeneous sorting for input data that exceeds GPU global memory capacity. Applications that require significant communication between the host and GPU often need to obviate communication overheads to achieve performance gains over parallel CPU-only algorithms. We advance several optimizations to reduce the host-GPU communication bottleneck, and find that host-side bottlenecks also need to be mitigated to fully exploit heterogeneous architectures. We demonstrate this by comparing our work to end-to-end response time calculations from the literature. Our approaches mitigate several heterogeneous sorting bottlenecks, as demonstrated on single- and dual-GPU platforms. We achieve speedups up to 3.47× over the parallel reference implementation on the CPU. The current path to exascale requires heterogeneous architectures. As such, our work encourages future research in this direction for heterogeneous sorting in the multi-GPU NVLink era.

I. INTRODUCTION

Data transfers between the host (CPU) and device (GPU) have been a persistent bottleneck for General Purpose Computing on Graphics Processing Units (GPGPU). Current technology uses PCIe v.3 for data transfers, with a peak aggregate data transfer rate of 32 GB/s. NVLink increases the aggregate bandwidth to up to 300 GB/s [1]. However, the data transfer rate remains a bottleneck; for example, multi-GPU platforms will compete for bandwidth in applications with large host-device data transfers.

Due to their high throughput, accelerators are increasingly abundant in clusters. Flagship clusters are decreasing the aggregate number of nodes, and increasing per-node resources. This is important, as: (i) network communication is a bottleneck; thus, executing a larger fraction of the computation on each node instead of distributing the work reduces inter-node communication overhead [2]; (ii) reducing the number of nodes in a cluster reduces node failure rates, and node failures require expensive recovery strategies, such as checkpointing and rollback [3]; (iii) accelerators are more energy efficient than CPUs for some applications [4]. Consequently, increasing per-node capacity and decreasing node failure rates help pave the way to exascale.

We examine the problem of sorting using a heterogeneous approach, defined as follows: given an unsorted input list, A, of n elements in main memory, generate a sorted output list, B, using the CPUs and GPU(s) on the platform. Depending on the strategy employed to generate B, the memory bandwidth between the CPU and main memory may also become a bottleneck. Thus, it is not clear that multi-core CPU-only parallel sorting algorithms will not outperform a hybrid CPU/GPU approach.

Recently, [5] advanced a state-of-the-art radix sort for the GPU and applied it to the problem of heterogeneous sorting. While the results for the on-GPU sorting are impressive, several overheads in the hybrid sorting workflow appear to be omitted. In this work, we reproduce their results to determine the extent to which missing overheads may degrade the performance of their heterogeneous sorting pipeline.

This paper has two goals: (i) advance an efficient hybrid approach to sorting data larger than GPU memory that maximizes the utilization of both the CPU and GPU; and (ii) reproduce the method used for computing the end-to-end time in [5], and compare it to our end-to-end performance that includes all overheads. Our contributions are as follows:

• We advance a heterogeneous sort that utilizes both multi-core CPUs and many-core GPU(s) and outperforms the parallel reference implementation when sorting data that exceeds GPU global memory capacity.
• We demonstrate critical bottlenecks in hybrid sorting, and propose several optimizations that reduce the load imbalance between CPU and GPU tasks.
• We show that these bottlenecks cannot be omitted when evaluating the performance of heterogeneous sorting, as may be the case in the literature.
• Our heterogeneous sorting optimizations achieve a high efficiency relative to the modeled baseline lower bound.

Paper organization: Section II describes related work on CPU and GPU sorting. Section III advances the hybrid algorithm to sort datasets exceeding global memory. Section IV evaluates the proposed algorithm. In Section V we discuss the future of heterogeneous sorting, and conclude the paper.
II. RELATED WORK

Sorting is a well-studied problem due to its utility as a subroutine of many algorithms [6]. In this paper, we focus on sorting large datasets using a heterogeneous platform consisting of CPUs and GPUs. Such systems are commonplace, and at present, are the most likely path to exascale [7], [8]. We do not advance a new on-GPU or CPU sorting algorithm. Rather, we utilize state-of-the-art sorting algorithms within the broader problem of heterogeneous sorting.

A. Parallel Sorting Algorithms

Over the past several decades, parallelism has become increasingly important when solving fundamental, computationally intensive problems. Sorting is one such fundamental problem for which there are several efficient parallel solutions. When sorting datasets on keys of primitive datatypes, radix sort provides the best asymptotic performance [9] and has been shown to provide the best performance on a range of platforms [10]–[14]. When sorting based on an arbitrary comparator function, however, parallel algorithms based on Mergesort [15], Quicksort [16], and Distribution sort [17] are commonly used. Each of these algorithms has relative advantages depending on the application and hardware platform. Regarding Mergesort, parallelism is trivially obtained by concurrently merging the many small input sets. However, once the number of merge lists becomes small, merge lists must be partitioned to increase parallelism [18]. Multiway variants can similarly be parallelized and further reduce the number of accesses to slow memory, at the cost of a more complex algorithm [19], [20]. Quicksort has been shown to perform well in practice, and can be parallelized using the parallel prefix sums operation [16]. See [9] for an overview of these algorithms.

Highly optimized implementations of parallel sorting algorithms are available for both CPUs [19]–[21] and GPUs [13], [22], [23]. The Intel Thread Building Blocks (TBB) [21] and MCSTL [19] parallel libraries provide highly optimized parallel Mergesort algorithms for CPUs. Several libraries also provide optimized radix sort algorithms [20], [24]. The PARADIS algorithm [11] has been shown to provide the best performance on the CPU, but we were unable to obtain the code for evaluation.

B. GPUs and Hybrid Architectures

GPUs have been shown to provide significant performance gains over traditional CPUs on many general-purpose applications [25]–[28]. Applications that can leverage large amounts of parallelism can realize remarkable performance gains using GPUs [29], [30]. However, memory limitations and data transfers can become bottlenecks [31].

While sorting on the GPU has received a lot of attention [5], [28], [32]–[35], most papers assume that inputs fit in GPU memory and disregard the cost of data transfers to the GPU. In the GPU sorting community, this is reasonable. Several on-GPU sorts have been proposed that outperform Thrust [5], [14], [34], although their performance gains are limited and the Thrust library is continually updated and improved. We use Thrust for on-GPU sorting, as further on-GPU performance gains have a small overall impact once CPU-GPU transfers are accounted for.

Fully utilizing the CPUs and GPU(s) is a significant challenge, as data transfer overheads are frequently a bottleneck [31], [36]. As such, sorting using a hybrid CPU/GPU approach has been mostly overlooked. Recently, Stehle and Jacobsen [5] proposed a hybrid CPU/GPU radix sort, showing that it outperforms existing CPU-only sorts. However, as we show in Section IV, this work ignores a significant portion of the overheads associated with data transfers. In this work, we consider all potential performance loss due to data transfers to determine the efficacy of CPU/GPU sorting.
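To make the on-GPU building block concrete, the sketch below sorts a single batch with Thrust. This is a minimal illustration of the library calls involved, not the authors' implementation; the full pipeline (Section III) stages batches through pinned memory and overlaps transfers with sorting, which is omitted here for clarity.

```cpp
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <vector>

// Sort one batch of keys on the GPU with Thrust and return it to the host.
std::vector<int> sortBatchOnGPU(const std::vector<int>& batch) {
    thrust::device_vector<int> d(batch.begin(), batch.end()); // HtoD transfer
    thrust::sort(d.begin(), d.end());                         // GPUSort
    std::vector<int> sorted(batch.size());
    thrust::copy(d.begin(), d.end(), sorted.begin());         // DtoH transfer
    return sorted;
}
```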
III. HETEROGENEOUS SORTING

We present our approach to hybrid CPU/GPU sorting. We use NVIDIA CUDA [37] terminology when discussing GPU details. The approach we present relies on sorting on the GPU and merging on the CPU. We divide our unsorted input list A into nb sublists of size bs, which we call batches, to be sorted on the GPU. When describing our approach, we assume that bs evenly divides n (i.e., bs = n/nb). We merge the nb batches on the CPU to create the final sorted list, B. Table I outlines the parameters and notation we use in this work.

Table I: Table of notation.

  n        Input size.
  nb       Number of batches/sublists to be sorted on the GPU.
  nGPU     Number of GPUs used.
  ns       The number of streams used.
  bs       Size of each batch to be sorted.
  ps       Size of the pinned memory buffer.
  A        Unsorted input list (n elements).
  B        Sorted output list (n elements).
  W        Working memory for sublists to be merged (n elements).
  HtoD     Data transfer from the host to the device.
  DtoH     Data transfer from the device to the host.
  Stage    Pinned memory staging area.
  MCpy     A host-to-host memory copy to or from pinned memory.
  GPUSort  Sorting on the GPU.
  Merge    Multiway merge of nb batches.

A. Parallel Merging on the CPU

Sorting sublists occurs on the GPU and merging occurs on the CPU. After all nb batches have been sorted on the GPU and transferred to the host, we use the parallel multiway merge from the GNU library. Merging the nb batches into the final sorted list B requires O(n log nb) work and O(log nb) time in the CREW PRAM model.
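The sketch below illustrates this merge step using __gnu_parallel::multiway_merge from the GNU libstdc++ parallel mode, which takes the batches as a sequence of (begin, end) iterator pairs. The driver code around the library call is illustrative, not the authors' code.

```cpp
#include <parallel/algorithm>  // GNU libstdc++ parallel mode; compile with -fopenmp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Merge the nb sorted batches into the final sorted output list B.
void mergeBatches(const std::vector<std::vector<int>>& batches,
                  std::vector<int>& B) {
    using It = std::vector<int>::const_iterator;
    std::vector<std::pair<It, It>> seqs;  // one (begin, end) pair per batch
    std::size_t total = 0;
    for (const auto& b : batches) {
        seqs.emplace_back(b.begin(), b.end());
        total += b.size();
    }
    B.resize(total);
    // Parallel multiway merge: O(n log nb) work across all batches.
    __gnu_parallel::multiway_merge(seqs.begin(), seqs.end(), B.begin(),
                                   total, std::less<int>());
}
```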

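Putting the two stages together, the following hedged sketch shows one way the batched GPU-sorting stage could be structured: each batch is staged through a pinned buffer (Stage), copied HtoD, sorted on the GPU, and copied back DtoH into the working memory W that feeds the CPU-side merge. All function and variable names here are illustrative assumptions, not the authors' implementation, and for clarity the sketch uses a single stream and synchronizes per batch; the approach described above uses ns streams so that the MCpy, HtoD, GPUSort, and DtoH stages of different batches can overlap.

```cpp
#include <cuda_runtime.h>
#include <thrust/execution_policy.h>
#include <thrust/sort.h>
#include <cstring>
#include <vector>

// Sort all nb batches of A on the GPU, writing the sorted batches into W
// (preallocated with n elements) for the final multiway merge on the CPU.
void sortAllBatches(const std::vector<int>& A, std::size_t bs,
                    std::vector<int>& W) {
    const std::size_t nb = A.size() / bs;  // assume bs evenly divides n
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    int *stage = nullptr, *dBatch = nullptr;
    cudaMallocHost(&stage, bs * sizeof(int));  // pinned staging area (Stage)
    cudaMalloc(&dBatch, bs * sizeof(int));
    for (std::size_t i = 0; i < nb; ++i) {
        std::memcpy(stage, &A[i * bs], bs * sizeof(int));       // MCpy in
        cudaMemcpyAsync(dBatch, stage, bs * sizeof(int),
                        cudaMemcpyHostToDevice, stream);        // HtoD
        thrust::sort(thrust::cuda::par.on(stream),
                     dBatch, dBatch + bs);                      // GPUSort
        cudaMemcpyAsync(stage, dBatch, bs * sizeof(int),
                        cudaMemcpyDeviceToHost, stream);        // DtoH
        cudaStreamSynchronize(stream);
        std::memcpy(&W[i * bs], stage, bs * sizeof(int));       // MCpy out
    }
    cudaFree(dBatch);
    cudaFreeHost(stage);
    cudaStreamDestroy(stream);
}
```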