High-Speed Query Processing Over High-Speed Networks

Wolf Rödiger, Tobias Mühlbauer, Alfons Kemper, Thomas Neumann
TU München, Munich, Germany

arXiv:1502.07169v4 [cs.DB] 2 Nov 2015

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/). Proceedings of the VLDB Endowment, Vol. 9, No. 4. Copyright 2015 VLDB Endowment 2150-8097/15/12.

ABSTRACT

Modern database clusters entail two levels of networks: connecting CPUs and NUMA regions inside a single server in the small and multiple servers in the large. The huge performance gap between these two types of networks used to slow down distributed query processing to such an extent that a cluster of machines actually performed worse than a single many-core server. The increased main-memory capacity of the cluster remained the sole benefit of such a scale-out. The economic viability of high-speed interconnects such as InfiniBand has narrowed this performance gap considerably. However, InfiniBand's higher network bandwidth alone does not improve query performance as expected when the distributed query engine is left unchanged. The scalability of distributed query processing is impaired by TCP overheads, switch contention due to uncoordinated communication, and load imbalances resulting from the inflexibility of the classic exchange operator model. This paper presents the blueprint for a distributed query engine that addresses these problems by considering both levels of networks holistically. It consists of two parts: First, hybrid parallelism that distinguishes local and distributed parallelism for better scalability in both the number of cores and the number of servers. Second, a novel communication multiplexer tailored for analytical database workloads using remote direct memory access (RDMA) and low-latency network scheduling for high-speed communication with almost no CPU overhead. An extensive evaluation within the HyPer database system using the TPC-H benchmark shows that our holistic approach indeed enables high-speed query processing over high-speed networks.

[Figure 1: Two levels of networks in a cluster: connecting CPUs in the small and servers in the large. The figure depicts a server with two 10-core CPUs (128 GB and 59.7 GB/s of local memory bandwidth each) connected by QPI at 16 GB/s, an InfiniBand HCA attached via PCIe 3.0 at 15.75 GB/s, and six hosts linked by InfiniBand 4×QDR at 4 GB/s.]

1. INTRODUCTION

Main-memory database systems have gained increasing interest in academia and industry over the last years. The success of academic projects, including MonetDB [22] and HyPer [18], has led to the development of commercial main-memory database systems such as Vectorwise, SAP HANA, Oracle Exalytics, IBM DB2 BLU, and Microsoft Apollo. This development is driven by a significant change in the hardware landscape: Today's many-core servers often have main-memory capacities of several terabytes. The advent of these brawny servers enables unprecedented single-server query performance. Moreover, a small cluster of such servers is often already sufficient for companies to analyze their business. For example, Walmart, the world's largest company by revenue, uses a cluster of only 16 servers with 64 terabytes of main memory to analyze their business data [27].

Such a cluster entails two levels of networks, as highlighted in Figure 1: The network in the small connects several many-core CPUs and their local main memory inside a single server via a high-speed QPI interconnect. Main-memory database systems have to efficiently parallelize query execution across these many cores and adapt to the non-uniform memory architecture (NUMA) to avoid the high cost of remote memory accesses [21, 20].
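To make this concrete, the following C++ sketch illustrates the core idea of NUMA-aware morsel-driven parallelism [20]: the input is split into small fragments (morsels), each worker thread is pinned to a NUMA node and preferentially processes morsels stored in its own memory region, stealing from remote regions only when local work runs out. The per-node queues, the `Morsel` struct, and the stealing policy shown here are simplifying assumptions for illustration, not HyPer's actual implementation.

```cpp
#include <atomic>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// A morsel is a small, fixed-size fragment of a relation.
struct Morsel { const void* data; size_t rows; };

// One queue per NUMA node: a vector of morsels with an atomic cursor.
struct MorselQueue {
    std::vector<Morsel> morsels;
    std::atomic<size_t> next{0};
    bool pop(Morsel& out) {
        size_t i = next.fetch_add(1, std::memory_order_relaxed);
        if (i >= morsels.size()) return false;
        out = morsels[i];
        return true;
    }
};

// Workers are pinned to a NUMA node (pinning via pthread_setaffinity_np
// is omitted here) and prefer morsels whose data resides on that node;
// only when the local queue is exhausted do they steal from remote nodes.
void worker(int node, std::vector<MorselQueue>& queues,
            const std::function<void(const Morsel&)>& pipeline) {
    Morsel m;
    while (queues[node].pop(m)) pipeline(m);          // NUMA-local work
    for (size_t other = 0; other < queues.size(); ++other)
        if (other != static_cast<size_t>(node))
            while (queues[other].pop(m)) pipeline(m); // steal remote morsels
}

void runPipeline(std::vector<MorselQueue>& queues, int threadsPerNode,
                 const std::function<void(const Morsel&)>& pipeline) {
    std::vector<std::thread> threads;
    for (size_t node = 0; node < queues.size(); ++node)
        for (int t = 0; t < threadsPerNode; ++t)
            threads.emplace_back(worker, static_cast<int>(node),
                                 std::ref(queues), std::cref(pipeline));
    for (auto& th : threads) th.join();
}
```

Because morsels are assigned by data placement first, most memory accesses stay NUMA-local, while load balance is still achieved through stealing.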
Traditionally, exchange operators are used to introduce parallelism both locally inside a single server and globally between servers. However, the inflexibility of the classic exchange operator model introduces several scalability problems. We instead propose a new hybrid approach that combines special decoupled exchange operators for distributed processing with the existing intra-server morsel-driven parallelism [20] for local processing. By choosing the paradigm that fits best for each level, hybrid parallelism scales better with the number of cores per server than classic exchange operators, as shown in Figure 2.

[Figure 2: Hybrid parallelism scales significantly better with the number of cores per server than classic exchange operators (6 servers, TPC-H, SF 300). The plot shows the speed-up of query response times for HyPer (hybrid parallelism), HyPer (exchange), and Vectorwise (exchange) from 6 cores (1 per server) to 120 cores (20 per server).]

The network in the large connects separate servers. In the past, limited bandwidth actually reduced query performance when scaling out to a cluster. Consequently, previous research focused on techniques that avoid communication as much as possible [29, 28]. In the meantime, high-speed networks such as InfiniBand have become economically viable, offering link speeds of several gigabytes per second. However, faster networking hardware alone is not enough to scale query performance with the cluster size. Similar to the transition from disk to main memory, new bottlenecks surface when InfiniBand replaces Gigabit Ethernet. TCP/IP processing overheads and switch contention threaten the scalability of distributed query processing.

[Figure 3: Simply increasing the network bandwidth is not enough; a novel RDMA-based communication multiplexer is required (HyPer, TPC-H, SF 100). The plot shows the speed-up of query response times for 1 to 6 servers using RDMA (40 Gb/s InfiniBand) with scheduling, TCP/IP (40 Gb/s InfiniBand), and TCP/IP (1 Gb/s Ethernet).]

Figure 3 demonstrates these bottlenecks by comparing two distributed query engines using the TPC-H benchmark. Both engines are implemented in our in-memory database system HyPer. The first uses traditional TCP/IP, while the second is built with remote direct memory access (RDMA). The experiment adds servers to the cluster while keeping the data set size fixed at scale factor 100. Using Gigabit Ethernet actually decreases performance by 6× compared to using just a single server of the cluster: the insufficient network bandwidth slows down query processing. Still, a scale-out is inevitable once the data exceeds the main-memory capacity of a single server. InfiniBand 4×QDR offers 32× the bandwidth of Gigabit Ethernet. However, Figure 3 shows that simply using faster networking hardware is not enough. The distributed query engine has to be adapted to avoid TCP/IP overheads and switch contention. By combining RDMA and network scheduling in our novel distributed query engine, we can scale query performance with the cluster size, achieving a speedup of 3.5× for 6 servers.
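Switch contention arises when several servers send to the same receiver at once during an all-to-all data shuffle. To illustrate why coordinating the senders helps, the sketch below derives a classic round-robin communication schedule in which every server has exactly one send and one receive partner per phase. This fixed schedule merely illustrates the contention-free property that coordination provides; the scheduler developed in this paper may coordinate senders differently at runtime.

```cpp
#include <cstdio>

// For a cluster of n servers, a contention-free all-to-all shuffle can be
// organized in n-1 phases: in phase p, server i sends to (i + p) % n and
// receives from (i - p + n) % n. Every server then has exactly one send
// and one receive partner per phase, so no switch port is oversubscribed.
int sendPartner(int self, int phase, int n) { return (self + phase) % n; }
int recvPartner(int self, int phase, int n) { return (self - phase + n) % n; }

int main() {
    const int n = 6;  // cluster size used in the experiments above
    for (int phase = 1; phase < n; ++phase) {
        std::printf("phase %d:", phase);
        for (int self = 0; self < n; ++self)
            std::printf("  %d->%d", self, sendPartner(self, phase, n));
        std::printf("\n");
    }
    return 0;
}
```

Since each phase forms a perfect matching between senders and receivers, the switch never has to buffer competing flows to the same output port, which is the property that low-latency network scheduling aims to preserve during query execution.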
RDMA enables true zero-copy transfers at almost no CPU cost. Recent research has shown the benefits of RDMA for specific operators (e.g., joins [4]) and key-value stores [17]. However, we are the first to present the design and implementation of a complete distributed query engine based on RDMA that is able to process complex analytical workloads such as the TPC-H benchmark. In particular, this paper makes the following contributions:

1. Hybrid parallelism: a NUMA-aware distributed query execution engine that integrates seamlessly with intra-server morsel-driven parallelism, scaling considerably better with the number of cores per server than classic exchange operators.

2. A novel communication multiplexer tailored for analytical database workloads that uses RDMA to utilize the available bandwidth of high-speed interconnects with minimal CPU overhead; it avoids switch contention via low-latency network scheduling, improving all-to-all communication throughput by 40%.

3. A prototypical implementation of our approach in our full-fledged in-memory DBMS HyPer that scales in both dimensions, the number of cores as well as servers.

Section 2 evaluates high-speed cluster interconnects for typical analytical database workloads. Specifically, we study how to optimize TCP and RDMA for expensive all-to-all data shuffles common for distributed joins and aggregations. Building upon these findings, Section 3 presents a blueprint for our novel distributed query engine that is carefully tailored for both the network in the small and in the large. It consists of hybrid parallelism for improved scalability in both the number of cores and servers as well as our optimized communication multiplexer that combines RDMA and low-latency network scheduling for high-speed communication. Finally, Section 4 provides a comprehensive performance evaluation using the ad-hoc OLAP benchmark TPC-H, comparing a prototypical implementation of our approach within our full-fledged in-memory database system HyPer to several SQL-on-Hadoop as well as in-memory MPP database systems: HyPer improves TPC-H performance by 256× compared to Spark SQL, 168× to Cloudera Impala, 38× to MemSQL, and 5.4× to Vectorwise Vortex.

2. HIGH-SPEED NETWORKS

InfiniBand is a high-bandwidth and low-latency cluster interconnect. Several data rates have been introduced, which are compared to Gigabit Ethernet (GbE) in Table 1. The following performance study uses InfiniBand 4×QDR hardware that offers 32× the bandwidth of GbE and latencies as low as 1.3 microseconds. We expect the findings to be valid for the faster data rates 4×FDR and 4×EDR as well.

InfiniBand offers the choice between two transport protocols: TCP via IP over InfiniBand (IPoIB) and the native InfiniBand ibverbs interface for remote direct memory access (RDMA).
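As a concrete taste of the ibverbs interface, the sketch below registers a local buffer and posts a one-sided RDMA write to a peer's memory. It assumes an already established reliable connection: the protection domain `pd`, completion queue `cq`, and connected queue pair `qp` are created during setup (omitted here), and the peer's buffer address and `rkey` have been exchanged out of band; the peer must have registered its buffer with `IBV_ACCESS_REMOTE_WRITE`. This is an illustrative sketch, not the communication multiplexer presented later in the paper.

```cpp
#include <infiniband/verbs.h>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Registers `buf` with the protection domain and posts a one-sided RDMA
// write of its contents to (remoteAddr, rkey) on the peer. The peer's CPU
// is not involved in the transfer; completion is signaled on the local CQ.
void rdmaWrite(ibv_pd* pd, ibv_qp* qp, ibv_cq* cq,
               std::vector<char>& buf, uint64_t remoteAddr, uint32_t rkey) {
    // Memory must be registered (pinned) before the HCA may access it.
    ibv_mr* mr = ibv_reg_mr(pd, buf.data(), buf.size(),
                            IBV_ACCESS_LOCAL_WRITE);
    if (!mr) throw std::runtime_error("ibv_reg_mr failed");

    ibv_sge sge{};
    sge.addr   = reinterpret_cast<uint64_t>(buf.data());
    sge.length = static_cast<uint32_t>(buf.size());
    sge.lkey   = mr->lkey;

    ibv_send_wr wr{}, *bad = nullptr;
    wr.opcode              = IBV_WR_RDMA_WRITE;   // one-sided, zero-copy
    wr.send_flags          = IBV_SEND_SIGNALED;   // generate a completion
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.wr.rdma.remote_addr = remoteAddr;          // exchanged out of band
    wr.wr.rdma.rkey        = rkey;                // exchanged out of band

    if (ibv_post_send(qp, &wr, &bad))
        throw std::runtime_error("ibv_post_send failed");

    // Busy-poll the completion queue: no kernel involvement, no data copy.
    ibv_wc wc{};
    while (ibv_poll_cq(cq, 1, &wc) == 0) { /* spin */ }
    if (wc.status != IBV_WC_SUCCESS)
        throw std::runtime_error("RDMA write failed");

    ibv_dereg_mr(mr);
}
```

In practice a query engine registers its message buffers once and reuses them, as memory registration pins pages and is far more expensive than posting work requests.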
