
2014 IEEE 28th International Parallel & Distributed Processing Symposium

DataMPI: Extending MPI to Hadoop-like Big Data Computing*

Xiaoyi Lu, Fan Liang, Bing Wang, Li Zha, and Zhiwei Xu
Institute of Computing Technology, Chinese Academy of Sciences
{luxiaoyi, liangfan, wangbing, char, zxu}@ict.ac.cn

*This research is supported in part by the Strategic Priority Program of Chinese Academy of Sciences (Grant No. XDA06010401), the Guangdong Talents Program, and the Hi-Tech Research and Development (863) Program of China (Grant No. 2013AA01A209, 2013AA01A213).

1530-2075/14 $31.00 © 2014 IEEE    DOI 10.1109/IPDPS.2014.90

Abstract—MPI has been widely used in High Performance Computing. In contrast, such efficient communication support is lacking in the field of Big Data Computing, where communication is realized by time-consuming techniques such as HTTP/RPC. This paper takes a step toward bridging these two fields by extending MPI to support Hadoop-like Big Data Computing jobs, where the processing and communication of a large number of key-value pair instances are needed through distributed computation models such as MapReduce, Iteration, and Streaming. We abstract the characteristics of key-value communication patterns into a bipartite communication model, which reveals four distinctions from MPI: Dichotomic, Dynamic, Data-centric, and Diversified features. Utilizing this model, we propose the specification of a minimalistic extension to MPI. An open-source communication library, DataMPI, is developed to implement this specification. Performance experiments show that DataMPI has significant advantages in performance and flexibility, while maintaining the high productivity, scalability, and fault tolerance of Hadoop.

Keywords—MPI; Hadoop; MapReduce; Big Data; DataMPI

I. INTRODUCTION

Message Passing Interface (MPI) has been widely used in the field of High Performance Computing (HPC). This includes the MPI specifications [1] and various MPI implementations, such as MPICH [2], MVAPICH [3], and Open MPI [4]. MPI not only enables fast communication, but also provides a standard interface that facilitates the development of parallel and distributed application programs in a portable way.

Recently, Big Data has attracted much attention in the IT industry and academic communities. Much work has been done on Big Data Computing models and systems, such as MapReduce [5], MapReduce Online [6], Twister [7], Dryad [8], Spark [9], and S4 [10]. In the open-source community, Hadoop [11] is a widely used implementation of the MapReduce model for Big Data. These systems have one commonality: they feature the processing and communication of a large number of key-value pair instances. We call such systems Hadoop-like Big Data Computing systems, or Big Data Computing systems for short.

Differing from the HPC field, such Big Data Computing systems lack efficient communication support such as MPI. Instead, communication is realized by time-consuming techniques such as HTTP and RPC. This paper takes a step toward bridging these two fields by extending MPI to support Hadoop-like Big Data Computing jobs, so that knowledge and techniques from the HPC field can be transferred to the Big Data Computing field. For example, many optimized MPI communication techniques on different networks (e.g., InfiniBand and 1/10GigE) developed for HPC can be utilized by Big Data Computing applications.

A. Motivation

An obvious starting point for this line of research is to extend MPI for Big Data, reusing MPI abstractions, mechanisms, and operations. However, we should first assess how much performance improvement we can attain by extending MPI to Big Data. As mentioned above, communication mechanisms involving HTTP and RPC are commonly used in different types of network interactions in Hadoop, such as Hadoop RPC calls among different components and the MapReduce Shuffle. The achievable peak bandwidth and RPC latency are the two most important indicators of the performance improvement potential at the basic communication primitive level. Compared with MPI communication primitives, the performance of Hadoop communication primitives is low. Figure 1(a) shows the comparison of achieved peak bandwidths for three systems, Hadoop HTTP Jetty, our proposed DataMPI, and MVAPICH2, on three networks (IB/IPoIB, 10GigE, and 1GigE). The peak bandwidth is measured by varying both the total data size and the packet size. As we can see, DataMPI and MVAPICH2 drive more than twice the bandwidth of Hadoop Jetty on the IB/IPoIB and 10GigE networks, and use the network more efficiently than Hadoop Jetty even on 1GigE. The bandwidth of DataMPI is slightly lower than that of MVAPICH2, because the Java binding (see Section IV) of DataMPI introduces some minor overheads in the JVM layer. We further implement an RPC system based on DataMPI by using the same data serialization mechanism as the default Hadoop RPC. We can then use similar benchmarks to measure the performance of our DataMPI RPC and the default Hadoop RPC. Figure 1(b) shows the latency comparison between them. When the payload size ranges from 1 B to 4 KB, the latency of DataMPI RPC is better than that of Hadoop RPC by up to 18% on 1GigE, 32% on 10GigE, and 55% on IB. From these comparisons, we can see that our proposed DataMPI provides much better performance than Hadoop in primitive-level communication, which means the performance improvement potential of extending MPI to Big Data is attractive.

[Figure 1. Performance Improvement Potentials of Hadoop Communication Primitives by Using MPI. (a) Peak Bandwidth of Hadoop Jetty, DataMPI, and MVAPICH2 on IB/IPoIB (16 Gbps), 10GigE, and 1GigE (higher is better). (b) RPC Latency of Hadoop and DataMPI for payload sizes from 1 B to 4 KB (lower is better).]

Even though we can get good performance improvements in basic communication primitives, it is still full of challenges to extend MPI to real Big Data applications. One main reason is that the buffer-to-buffer communication model of MPI does not fit the key-value communication patterns of Big Data [12]. This paper overcomes these challenges by carefully analyzing the open-source code of three representative Big Data Computing workload models: Hadoop (the MapReduce model), PageRank [13] (the Iteration model), and S4 (the Streaming model). We focus on presenting a detailed analysis of Hadoop, but will also present evaluation results for the Iteration and Streaming benchmarks. This paper addresses the following issues:

1) What main characteristics can we abstract to help design Big Data communication facilities?
2) How can we extend MPI to Hadoop-like Big Data Computing in a minimalistic way? And how can we design a high-performance communication library to implement our proposed extensions?
3) How much performance improvement can be obtained by our library over different Big Data processing workloads? Can the primitive-level performance benefits really be achieved at the application level?

B. Contributions

The main contributions of this paper are:

1) Abstracting the requirements of a 4D (Dichotomic, Dynamic, Data-centric, and Diversified) bipartite communication model and key-value pair based communication, which capture the essential communication characteristics of Hadoop-like Big Data Computing.
2) Proposing a simple programming specification to extend MPI to Big Data. Several representative Big Data benchmarks (WordCount, TeraSort, PageRank, K-means [14], Top-K) are used to show that our extension is easy to use and flexible.
3) Presenting a novel design of a high-performance communication library called DataMPI for Big Data Computing. Performance evaluations on testbeds show that our library can achieve significantly better performance on a variety of workloads.

The rest of this paper is organized as follows. Section II discusses Big Data communication characteristics. Section III introduces our proposed extensions. Section IV presents our library design. Section V describes our experiments and evaluation. Section VI discusses related work. Section VII offers concluding remarks and future work.

II. COMMUNICATION CHARACTERISTICS OF BIG DATA

Through an analysis of Hadoop MapReduce, we abstract the essential communication characteristics of Big Data Computing systems in this section.

A. A 4D Bipartite Communication Model

From the perspective of task management and data movement, we can find that the MapReduce framework sustains independence among map tasks, and also makes no data exchange among reduce tasks. The data is moved from one set of tasks (maps) to the other set (reduces). This paradigm can be represented as a bipartite graph, constructed by two groups of tasks and the data movements between them. As shown in Figure 2, the intermediate data moves from the tasks in Communicator O (Operation) to those in Communicator A (Aggregation). In fact, if we analyze the execution paradigms of other Hadoop-like Big Data Computing systems (e.g., Dryad, S4), we will find that the bipartite communication model is also concealed in the intermediate processing steps or stages of those systems. Through analyzing Hadoop MapReduce and other Big Data systems, we can further identify the 4D communication characteristics from the bipartite model.

Dichotomic. MapReduce and other Big Data systems show that communications happen between two communicators. The underlying communication is a bipartite graph, i.e., the processes are dichotomic and belong to either the O communicator or the A communicator.

Dynamic. Big Data communication has a dynamic characteristic, which means the number of concurrently running tasks in each communicator of Figure 2 often changes dynamically because of task finish and launch. This indicates that we usually do not explicitly assign a destination to the emitted data; it will be automatically distributed by the system. For example, when we write a map function in a MapReduce program, we only need to emit the processed data, but do
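The bipartite O/A exchange described above can be modeled in a few lines. The following is a minimal, library-agnostic sketch (plain Python, not the DataMPI API): tasks in Communicator O emit (key, value) pairs without naming a destination, and a hypothetical `route` helper stands in for the runtime, assigning each pair to a Communicator A task by hashing the key, much as a MapReduce shuffle partitioner would.

```python
# Toy model of the bipartite communication pattern: Communicator O
# (Operation) tasks emit key-value pairs with no explicit destination
# (the Dynamic characteristic); the runtime routes each pair to exactly
# one Communicator A (Aggregation) task (the Dichotomic characteristic).
from collections import defaultdict

def route(emitted_pairs, num_a_tasks):
    """Stand-in for the runtime: partition pairs among A tasks by key hash."""
    partitions = defaultdict(list)
    for key, value in emitted_pairs:
        partitions[hash(key) % num_a_tasks].append((key, value))
    return partitions

# Two O tasks emit WordCount-style pairs independently of each other.
o_task_0 = [("mpi", 1), ("hadoop", 1), ("mpi", 1)]
o_task_1 = [("hadoop", 1), ("data", 1)]

# The runtime merges the emitted pairs and partitions them for 2 A tasks.
partitions = route(o_task_0 + o_task_1, num_a_tasks=2)

# Each A task aggregates only the pairs routed to it; since equal keys
# always hash to the same A task, every per-key count is complete.
counts = {}
for pairs in partitions.values():
    for key, value in pairs:
        counts[key] = counts.get(key, 0) + value

print(counts)  # per-key totals; dict ordering may vary with hash seeding
```

Routing by key hash is what lets the number of tasks in each communicator change dynamically: an emitter never needs to know which A task, or how many, will consume its data.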