
Parallel Search On Video Cards Tim Kaldewey, Jeff Hagen, Andrea Di Blas, Eric Sedlar Oracle Server Technologies – Special Projects {tim.kaldewey, jeff.hagen, andrea.di.blas, eric.sedlar}@oracle.com Abstract Workload Small Medium Large [#Queries] 16 1k 16k Recent approaches exploiting the massively parallel ar- binary CPU 1.0 1.0 1.0 chitecture of graphics processors (GPUs) to acceler- binary CPU SSE 1.3 1.4 1.4 ate database operations have achieved intriguing results. binary CPU TLP 0.6 1.1 2.1 While parallel sorting received significant attention, par- binary GPU TLP 0.3 5.1 5.1 allel search has not been explored. With p-ary search we p-ary GPU SIMD 0.5 5.4 6.7 present a novel parallel search algorithm for large-scale database index operations that scales with the number of Table 1: Measured speedup of parallel index search implemen- processors and outperforms traditional thread-level par- tations with respect to a serial CPU implementation allel GPU and CPU implementations. With parallel ar- chitectures becoming omnipresent, and with searching in the enterprise, particularly when ignoring historical being a fundamental functionality for many applications, data. In such cases, database performance is governed we expect it to be applicable beyond the database do- by main memory latency [3], aggravated by the ever main. While GPUs do not appear to be ready to be widening gap between main memory and CPU speeds. adopted for general-purpose database applications yet, In the 1990’s the term memory wall was coined [4], given their rapid development, we expect this to change which remains an accurate description of the situation in the near future. The trend towards massively paral- today [5]. lel architectures, combining CPU and GPU processing, The paradigm shift from bandwidth to throughput ori- encourages development of parallel techniques on both ented, parallel computing [6] comes with new opportuni- architectures. ties to circumvent the memory wall. Interleaving mem- 1 Introduction ory requests from many cores and threads theoretically Specialized hardware has never made significant inroads allows for much higher memory throughput than opti- into the database market. Recently, however, developers mizing an individual core could provide. Leveraging the have realized the inevitably parallel future of comput- massive parallelism of GPUs with up to 128 cores [7] we ing, and are considering alternative architectures. Cur- put this approach to the test. rent GPUs offer a much higher degree of parallelism Optimizing enterprise-class (parallel) database sys- than CPUs, and databases, with their abundance of set- tems for throughput usually means exploiting thread- oriented processing, offer many opportunities to ex- level parallelism (TLP), pairing each query with a sin- ploit data parallelism. Driven by the large market for gle thread to avoid the cost associated with thread syn- games and multimedia, graphics processors (GPUs) have chronization. The GPU’s threading model exposed to evolved more rapidly than CPUs, and now outperform the programmer suggests the same mapping, although them not only in terms of processing power, but also in modern GPUs architectures actually consist of multiple terms of memory performance, which is more often than SIMD processors, each comprising multiple processing computational performance the bottleneck in database elements (Fig. 1). While for certain workloads this ap- applications [1]. proach already outperforms similarly priced CPUs, we Today’s enterprise online-transaction processing demonstrate that algorithms exploiting the nature of the (OLTP) systems rarely need to access data not in GPU’s SIMD architecture can go much further (Tab. 1). memory [2]. The growth rates of main memory size We developed a parallel search algorithm, named p- have outstripped the growth rates of structured data ary search, with the goal to improve response time for Active time Time−based scheduling Streaming Streaming time quantum CPU Multiprocessor Multiprocessor GPU PE PE PE PE Idle time PE PE PE PE ... Memory request time PE PE PE PE PE PE Local Memory PE Local Memory PE Thread 1 Thread 2 Thread 3 Event−based scheduling Main Memory Global Memory Thread 1 Th. 2 Thread 3 Thread 4 Thread 1 ... Figure 1: Architecture of an Nvidia GeForce 8 series GPU. massively parallel architectures like the GPU, which Figure 2: Comparison of scheduling techniques. Event-based were originally designed to improve throughout, rather scheduling on the GPU maximizes processor utilization by sus- pending threads as soon as they issue a memory request, with- than individual response time. The "p" in the name out waiting for a time time quantum to expire, as on the CPU. refers to the number of processors that can be synchro- nized within a few compute cycles, which determines ferred to as General-Purpose Graphics Processing Unit, convergence rate of this algorithm. As it turns out, im- or GPGPU. However, programming the GPU had to be plemented on the GPU, p-ary search outperforms con- done by using standard graphics APIs, such as OpenGL ventional approaches not only in terms of response time or DirectX. Even when using it for general-purpose com- but also throughput, despite significantly more memory putations, the developer had to map data and variables accesses and theoretically lower throughput. However, to graphics objects, using single-precision floating-point small workloads yield poor performance, due to the over- values, the only data type available on the GPU. Algo- head incurred to invoke GPU computation. As the GPU rithms had to be expressed in terms of geometry and operates in batch mode, the often critical time-to-first- color transformations, and to actually perform a compu- result is the time-to-completion, such that larger work- tation required pretending to draw a scene. This task was loads/batches incur higher latency. usually rather challenging even for simple applications, Despite promising performance results, the GPU is not due to the rigid limitations in terms of functionality, data ready yet to be adopted as a query co-processor for mul- types, and memory access. tiple reasons. First, the GPU’s batch processing mode. Nvidia’s Compute Unified Device Architecture Second, the lack of global synchronization primitives, (CUDA), an extension to the C programming language small caches and the absence of dynamic memory allo- that allows programming the GPU directly, was a major cation make it difficult to develop efficient parallel al- leap ahead [11]. At the top level, a CUDA application gorithms for more complex operations such as joins [8]. consists of two parts: a serial program running on the Third, the development environment is still in its infan- CPU and a parallel part, called a kernel, running on the cies offering limited debugging and profiling capabili- GPU. ties. However, GPU computing is progressing so rapidly The kernel is organized as a number of blocks of that between the time this research was conducted and threads, with one block running all its own threads to its publication some of these issues are already addressed completion on one of the several streaming multiproces- in the latest hardware and software generation. For ex- sors, SMs, available. When the number of blocks as de- ample, global synchronization and asynchronous com- fined by the programmer exceeds the number of physical munication have been added to the hardware feature multiprocessors, blocks are queued automatically. Each list. While emerging parallel programming APIs like SM has eight processing elements, PEs (Fig. 1). PEs in OpenCL [9] already blur the frontiers between CPU and the same streaming multiprocessor execute the same in- GPU computing, future architectures like Larrabee [10] struction at the same time in Single Instruction-Multiple integrate both. Data, SIMD, mode [12]. To optimize SM utilization, within a block the GPU 2 GPGPU groups threads following the same code path into so At the beginning of the computer graphics era, the CPU called warps for SIMD-parallel execution. Due to this was in charge of all graphics operations. Progressively, this mechanism, nVidia calls its GPU architecture Sin- more and more complex operations were offloaded to gle Instruction-Multiple Threads(SIMT). Threads run- the GPU. When, thanks to their massively parallel ar- ning on the same SM share a set of registers as well chitecture, GPUs started becoming more powerful than as a low-latency shared memory located on the proces- the CPU itrself, many programmers began exploring the sor chip. This shared memory is small (16 KB on our use of GPU for non-graphics computations, a practice re- G80) but about 100× faster than the larger global mem- Figure 4: Four PEs jointly performing a search based on do- main decomposition in 2 steps. At each step all PEs are reas- signed within a sub-range. GPU’s for more complex operations like join [19], this has only been considered recently for general-purpose databases [8]. GPGPU research prior to 2007, before the release of nVidia’s CUDA [11], required the use of graphics specific APIs that only supported floating-point Figure 3: Four PEs independently performing a binary search numbers. However, fast searching — which is funda- for different keys on the same data range in five steps. mental even beyond database applications — has not been explored for general purpose data on the GPU yet. ory on the GPU board. A careful memory access strategy The obvious way of implementing search on the GPU is even more important on the GPU than it is on the CPU is to exploit data parallelism, omnipresent in large-scale because caching on the GPU is minimal and mainly the database applications, handling thousands of queries si- programmer’s responsibility. multaneously. Multiple threads run the same serial algo- To compensate for the small local memories and rithm on different problem instances, which is no differ- caches GPUs employ massive multithreading to effec- ent than CPU multi-threading where each select opera- tively hide memory latency.
Details
-
File Typepdf
-
Upload Time-
-
Content LanguagesEnglish
-
Upload UserAnonymous/Not logged-in
-
File Pages6 Page
-
File Size-