
Can High-Performance Interconnects Benefit Hadoop Distributed File System?

Sayantan Sur, Hao Wang, Jian Huang, Xiangyong Ouyang and Dhabaleswar K. Panda
Department of Computer Science and Engineering, The Ohio State University
{surs, wangh, huangjia, ouyangx, panda}@cse.ohio-state.edu

Abstract

During the past several years, the MapReduce computing model has emerged as a scalable model that is capable of processing petabytes of data. The Hadoop MapReduce framework has enabled large-scale Internet applications and has been adopted by many organizations. The Hadoop Distributed File System (HDFS) lies at the heart of this ecosystem of software. It was designed to operate and scale on commodity hardware, such as inexpensive Linux machines connected with Gigabit Ethernet. The field of High-Performance Computing (HPC) has been witnessing a transition to commodity clusters. Increasingly, networks such as InfiniBand and 10Gigabit Ethernet have become commoditized and are available on motherboards and as low-cost PCI-Express devices. Software that drives InfiniBand and 10Gigabit Ethernet works on mainline Linux kernels. These interconnects provide high bandwidth along with low CPU utilization for network-intensive operations. As the amount of data processed by Internet applications reaches hundreds of petabytes, it is expected that network performance will be a key component in scaling data-centers. In this paper, we examine the impact of high-performance interconnects on HDFS. Our findings reveal that the impact is substantial. We observe up to 11%, 30% and 100% performance improvement for the sort, random write and sequential write benchmarks using magnetic disks (HDD). We also find that with the emerging trend of Solid State Drives, a faster interconnect makes a larger impact as local I/O costs are reduced. We observe up to 48%, 59% and 219% improvement for the same benchmarks when an SSD is used in combination with advanced interconnection networks and protocols.
I. Introduction

The MapReduce computing model has recently emerged as a viable model for processing petabytes of data. This model for processing data in large Internet warehouses was first proposed by Google Inc. [7]. The MapReduce model enables developers to write highly parallel codes without dealing with many intricate details of data distribution and fault tolerance. Most importantly, the model aims to be efficient on commodity clusters connected with Gigabit Ethernet. The model can handle both structured and unstructured data. Since the time the MapReduce paper was published, Doug Cutting et al. started developing Hadoop, an Open-source implementation of the MapReduce computing model, available from the Apache Software Foundation [27]. The Hadoop MapReduce software relies on the Hadoop Distributed File System (HDFS) as the underlying basis for providing data distribution and fault tolerance. Over time, HDFS has also become the underlying file system for the Hadoop database (HBase), an Open-source implementation of Google's BigTable [6]. The goal of HBase is to provide random, real-time read/write access to large quantities of data, on the order of billions of rows and millions of columns.

The Hadoop project has gained widespread acceptance and is very widely used in many organizations around the world. As data-gathering technologies (such as sensors) witness an explosion, it is expected that in the future, massive quantities of data in hundreds or thousands of petabytes will need to be processed to gain insight into patterns and trends. To process these large quantities of data, many more thousands of servers may be required. While the Hadoop framework has no fundamental scaling limitations, there has recently been some discussion about its efficiency. In particular, data-centers of the future cannot expand at the rate at which data storage and gathering capabilities are expanding, due to power limitations. Improving the efficiency of HDFS will therefore have a significant impact on the design of future data-centers.
During the past decade, the field of High-Performance Computing has been witnessing a transition to commodity clusters connected with modern interconnects such as InfiniBand and 10Gigabit Ethernet. Increasingly, InfiniBand has become commoditized and available on motherboards and as low-cost PCI-Express devices. Software that drives InfiniBand and 10Gigabit Ethernet also works on mainline Linux kernels. These interconnects provide not only high bandwidth (up to 32Gbps) and low latency (1µs-2µs), but also help server scalability by using very little CPU and reducing memory copies for network-intensive operations. The popularity of InfiniBand can be measured by the fact that 42.6% of the compute clusters in the Top500 list [29] of the most powerful supercomputers use InfiniBand. These clusters score very high on the efficiency metric, i.e. performance achieved compared to peak performance. Typical efficiencies of InfiniBand clusters range from 85%-95%. InfiniBand has also started making inroads into the commercial domain with the recent convergence around RDMA.

As the amount of data processed by Internet applications reaches hundreds of petabytes, it is expected that network performance will be a key component in scaling data-centers. In this paper, we examine the impact of high-performance interconnects on HDFS. Our findings reveal that the impact is substantial. We observe up to 11%, 30% and 100% performance improvement for the sort, random write and sequential write benchmarks using magnetic disks (HDD). We also find that with the emerging trend of Solid State Drives, a faster interconnect makes a larger impact as local I/O costs are reduced. We observe up to 48%, 59% and 219% improvement for the same benchmarks when an SSD is used in combination with advanced interconnection networks and protocols.

The rest of the paper is organized as follows. In Section II, we provide an overview of the topics dealt with in this paper. In Section III, we show some of the benefits of modern interconnects. Experimental results and discussions are presented in Section IV. We discuss related work in Section V. We conclude the paper in Section VI.
II. Background

In this Section, we provide a "bottom-up" overview of the networking and software components in a data-center that is interconnected using high-performance networks.

A. InfiniBand Overview

InfiniBand [2] is an industry-standard switched fabric that is designed for interconnecting nodes in High-End Computing (HEC) clusters. It is a high-speed, general-purpose I/O interconnect that is widely used by scientific computing centers world-wide. The TOP500 rankings released in November 2010 reveal that more than 42% of the computing systems use InfiniBand as their primary interconnect. The yearly growth rate of InfiniBand in the TOP500 systems is pegged at 30%, indicating strong momentum in adoption. One of the main features of InfiniBand is Remote Direct Memory Access (RDMA). This feature allows software to remotely read the memory contents of another remote process without any software involvement at the remote side. This feature is very powerful and can be used to implement high-performance communication protocols.
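InfiniBand's RDMA capability lets one process read another process's memory without any software involvement on the remote side. As a loose software analogy only (real RDMA is performed by the HCA hardware, not by OS shared-memory machinery), the one-sided semantics can be illustrated with Python's named shared memory: a "server" publishes a memory region, and a "reader" copies bytes out of it without the server executing any code on its behalf. The region name `rdma_demo` is an arbitrary choice for this sketch.

```python
# A loose analogy for one-sided RDMA reads: the "server" registers a
# memory region; the "reader" attaches by name and reads it directly,
# with no server-side code running during the read. Illustration only --
# this is not InfiniBand, and there is no OS-bypass here.
from multiprocessing import shared_memory

# "Server": create a region and fill it with data.
region = shared_memory.SharedMemory(create=True, size=64, name="rdma_demo")
region.buf[:5] = b"hello"

# "Reader": attach to the same region by name and read the bytes;
# the server side takes no action during this read.
peer = shared_memory.SharedMemory(name="rdma_demo")
data = bytes(peer.buf[:5])
print(data.decode())  # hello

peer.close()
region.close()
region.unlink()  # release the region
```

The key point of the analogy is that the read path involves only the reader; with real RDMA, even the remote CPU is uninvolved, since the HCA services the read.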
1) InfiniBand Architecture: The InfiniBand specification clearly demarcates the duties of hardware (such as Host Channel Adapters (HCAs)) and software. The interaction between software and HCAs is carried out by the verbs layer, which is described in the following section. An InfiniBand fabric can consist of many thousands of nodes with multiple adapters. Typically, InfiniBand networks are deployed using the fat-tree topology, which provides constant bisection bandwidth. Recently, however, some large deployments have also adopted 3-D torus and hypercube topologies. InfiniBand provides flexible static routing; the routing tables at switches can be configured using the Subnet Manager. For more details on InfiniBand, please refer to the specification documents available from [11].

2) InfiniBand Verbs Layer: Upper-level software uses an interface called verbs to access the functionality provided by HCAs and other network equipment (such as switches). This is illustrated in Figure 1(a) (at the extreme right). Verbs that are used to transfer data completely bypass the OS. The verbs interface is a low-level communication interface that follows the Queue Pair (or communication end-point) model. Communicating end-points are required to establish a queue pair between themselves. Each queue pair has a certain number of work queue elements. Upper-level software places a work request on the queue pair, which is then processed by the HCA. When a work element is completed, it is placed in the completion queue. Upper-level software can detect completion by polling the completion queue.
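The verbs flow (post a work request on a queue pair, let the HCA process it, then poll the completion queue) can be sketched as a toy model. The class and method names below are illustrative only; the real interface is the libibverbs C API (e.g. ibv_post_send and ibv_poll_cq), not this Python sketch, and the "HCA" here is simulated by an ordinary method call.

```python
from collections import deque

# Toy model of the queue-pair flow: software posts work requests,
# a simulated HCA processes them, and software harvests completions
# by polling a completion queue. Names are illustrative, not the
# actual verbs API.
class QueuePair:
    def __init__(self, depth=16):
        self.send_queue = deque(maxlen=depth)  # work queue elements
        self.completion_queue = deque()

    def post_send(self, work_request):
        # Software side: place a work request on the queue pair.
        self.send_queue.append(work_request)

    def hca_progress(self):
        # Hardware side (simulated): process pending work elements and
        # place them on the completion queue when done.
        while self.send_queue:
            wr = self.send_queue.popleft()
            self.completion_queue.append(("complete", wr))

    def poll_cq(self):
        # Software detects completion by polling the completion queue.
        return self.completion_queue.popleft() if self.completion_queue else None

qp = QueuePair()
qp.post_send("send 4KB block")
assert qp.poll_cq() is None  # nothing completes until the HCA makes progress
qp.hca_progress()
print(qp.poll_cq())          # ('complete', 'send 4KB block')
```

The decoupling shown here (posting and completion are separate, asynchronous steps) is what lets verbs-based software overlap communication with computation.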
This feature IB devices are presented as ib0, ib1 and so on. This is very powerful and can be used to implement high- interface is presented in Figure 1(a) (second from the left, performance communication protocols. named IPoIB). It does not provide OS-bypass. This layer InfiniBand has started making inroads into the commer- is often called “IP-over-IB” or IPoIB in short. We will use cial domain with the recent convergence around RDMA this terminology in the paper.