
Piccolo: Building Fast, Distributed Programs with Partitioned Tables
Russell Power, Jinyang Li
New York University
http://news.cs.nyu.edu/piccolo

Abstract

Piccolo is a new data-centric programming model for writing parallel in-memory applications in data centers. Unlike existing data-flow models, Piccolo allows computation running on different machines to share distributed, mutable state via a key-value table interface. Piccolo enables efficient application implementations. In particular, applications can specify locality policies to exploit the locality of shared state access, and Piccolo's run-time automatically resolves write-write conflicts using user-defined accumulation functions.

Using Piccolo, we have implemented applications for several problem domains, including the PageRank algorithm, k-means clustering and a distributed crawler. Experiments using 100 Amazon EC2 instances and a 12-machine cluster show Piccolo to be faster than existing data-flow models for many problems, while providing similar fault-tolerance guarantees and a convenient programming interface.

1 Introduction

With the increased availability of data centers and cloud platforms, programmers from different problem domains face the task of writing parallel applications that run across many nodes. These applications range from machine learning problems (k-means clustering, neural network training) to graph algorithms (PageRank), scientific computation, and more. Many of these applications extensively access and mutate shared intermediate state stored in memory.

It is difficult to parallelize in-memory computation across many machines. As the entire computation is divided among multiple threads running on different machines, one needs to coordinate these threads and share intermediate results among them. For example, to compute the PageRank score of web page p, a thread needs to access the PageRank scores of p's "neighboring" web pages, which may reside in the memory of threads running on different machines. Traditionally, parallel in-memory applications have been built using message-passing primitives such as MPI [21]. For many users, the communication-centric model provided by message passing is too low-level an abstraction: they fundamentally care about data and processing data, as opposed to the location of data and how to get to it.

Data-centric programming models [19, 27, 1], in which users are presented with a simplified interface to access data but no explicit communication mechanism, have proven a convenient and popular mechanism for expressing many computations. MapReduce [19] and Dryad [27] provide a data-flow programming model that does not expose any globally shared state. While the data-flow model is ideally suited for bulk processing of on-disk data, it is not a natural fit for in-memory computation: applications have no online access to intermediate state and often have to emulate shared-memory access by joining multiple data streams. Distributed shared memory [29, 32, 7, 17] and tuple spaces [13] allow sharing of distributed in-memory state. However, their simple memory (or tuple) model makes it difficult for programmers to optimize for good application performance in a distributed environment.

This paper presents Piccolo, a data-centric programming model for writing parallel in-memory applications across many machines. In Piccolo, programmers organize the computation around a series of application kernel functions, where each kernel is launched as multiple instances concurrently executing on many compute nodes. Kernel instances share distributed, mutable state using a set of in-memory tables whose entries reside in the memory of different compute nodes. Kernel instances share state exclusively via the key-value table interface with get and put primitives. The underlying Piccolo run-time sends messages to read and modify table entries stored in the memory of remote nodes.

By exposing shared global state, the programming model of Piccolo offers several attractive features. First, it allows for natural and efficient implementations of applications that require sharing of intermediate state, such as k-means computation, n-body simulation and PageRank calculation. Second, Piccolo enables online applications that require immediate access to modified shared state. For example, a distributed crawler can learn of newly discovered pages quickly as a result of state updates done by ongoing web crawls.

Piccolo borrows ideas from existing data-centric systems to enable efficient application implementations. Piccolo enforces atomic operations on individual key-value pairs and uses user-defined accumulation functions to automatically combine concurrent updates on the same key (similar to reduce functions in MapReduce [19]). The combination of these two techniques eliminates the need for fine-grained application-level synchronization for most applications. Piccolo also allows applications to exploit locality of access to shared state. Users control how table entries are partitioned across machines by defining a partitioning function [19]. Based on users' locality policies, the underlying run-time can schedule a kernel instance where its needed table partitions are stored, thereby reducing expensive remote table access.

We have built a run-time system consisting of one master (for coordination) and several worker processes (for storing in-memory table partitions and executing kernels). The run-time uses a simple work-stealing heuristic to dynamically balance the load of kernel execution among workers. Piccolo provides a global checkpoint/restore mechanism to recover from machine failures. The run-time uses the Chandy-Lamport snapshot algorithm [15] to periodically generate consistent snapshots of the execution state without pausing active computations. Upon machine failure, Piccolo recovers by restarting the computation from its latest snapshot state.

Experiments have shown that Piccolo is fast and provides excellent scaling for many applications. The performance of PageRank and k-means on Piccolo is 11x and 4x faster than that of Hadoop. Computing a PageRank iteration for a 1-billion-page web graph takes only 70 seconds on 100 EC2 instances. Our distributed web crawler can easily saturate a 100 Mbps Internet uplink when running on 12 machines.

The rest of the paper is organized as follows. Section 2 provides a description of the Piccolo programming model, followed by the design of Piccolo's run-time (Section 3). We describe the set of applications we constructed using Piccolo in Section 4. Section 5 discusses our prototype implementation. We show Piccolo's performance evaluation in Section 6 and present related work in Section 7.

2 Programming Model

Piccolo's programming environment is exposed as a library to existing languages (our current implementation supports C++ and Python) and requires no change to the underlying OS or compiler. This section describes the programming model in terms of how to structure application programs (§2.1), share intermediate state via key/value tables (§2.2), optimize for locality of access (§2.3), and recover from failures (§2.4). We conclude this section by showing how to implement the PageRank algorithm on top of Piccolo (§2.5).

2.1 Program structure

Application programs written for Piccolo consist of control functions, which are executed on a single machine, and kernel functions, which are executed concurrently on many machines. Control functions create shared tables, launch multiple instances of a kernel function, and perform global synchronization. Kernel functions consist of sequential code which reads from and writes to tables to share state among concurrently executing kernel instances. By default, control functions execute in a single thread and a single thread is created for executing each kernel instance. However, the programmer is free to create additional application threads in control or kernel functions as needed.

Kernel invocation: The programmer uses the Run function to launch a specified number (m) of kernel instances executing the desired kernel function on different machines. Each kernel instance has an identifier 0, ..., m-1 which can be retrieved using the my_instance function.

Kernel synchronization: The programmer invokes a global barrier from within a control function to wait for the completion of all previously launched kernels. Currently, Piccolo does not support pair-wise synchronization among concurrent kernel instances. We found that global barriers are sufficient because Piccolo's shared table interface makes most fine-grained locking operations unnecessary. This overall application structure, where control functions launch kernels across one or more global barriers, is reminiscent of the CUDA model [36], which also explicitly eschews support for pair-wise thread synchronization.

2.2 Table interface and semantics

Concurrent kernel instances share intermediate state across machines through key-value based in-memory tables. Table entries are spread across all nodes and each key-value pair resides in the memory of a single node. Each table is associated with explicit key and value types, which can be arbitrary user-declared serializable types. As Figure 1 shows, the key-value interface provides a uniform access model whether the underlying table entries

[Figure 1 (excerpt): the table interface begins Table<Key, Value>: clear(), contains(Key), get(Key), ...]

...signment of table partitions to machines, it is the programmer's responsibility to ensure that the largest table partition
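The table interface of §2.2 can be illustrated with a toy, single-process sketch. The method names clear, contains, and get follow the Figure 1 excerpt, and put comes from the get/put primitives described in the introduction; the update method, the constructor, and the accumulate parameter are our own additions, meant only to illustrate how user-defined accumulation functions resolve write-write conflicts — this is not Piccolo's actual API.

```python
import operator

class Table:
    """Toy, single-process stand-in for a Piccolo key-value table.

    Piccolo spreads entries across machines; here everything lives in
    one local dict. `accumulate` mimics Piccolo's user-defined
    accumulation functions, which combine concurrent updates to a key.
    """

    def __init__(self, accumulate=None):
        self._data = {}
        self._accumulate = accumulate

    def clear(self):
        self._data.clear()

    def contains(self, key):
        return key in self._data

    def get(self, key):
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value            # plain overwrite

    def update(self, key, value):
        # Write-write conflicts on a key are resolved by accumulation
        # rather than by application-level locking.
        if self._accumulate is not None and key in self._data:
            self._data[key] = self._accumulate(self._data[key], value)
        else:
            self._data[key] = value

ranks = Table(accumulate=operator.add)
ranks.update("page-a", 0.25)
ranks.update("page-a", 0.5)    # combined with the previous value
print(ranks.get("page-a"))     # 0.75
```

Because updates commute through the accumulator, two kernel instances contributing to the same key need no coordination beyond the table itself.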
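The control/kernel structure of §2.1 — Run launching m kernel instances, each able to retrieve its identifier, and a global barrier that waits for their completion — can be simulated in a single process with threads. The run and barrier helpers below are illustrative stand-ins of ours, not Piccolo's primitives:

```python
# Single-process simulation of Piccolo's control/kernel structure.
# `run` plays the role of the Run primitive: it launches m kernel
# instances (threads here, machines in Piccolo), each receiving its
# identifier 0..m-1; `barrier` is the global barrier the control
# function uses to wait for all launched kernels.

import threading

def run(kernel, m, *args):
    threads = [threading.Thread(target=kernel, args=(i, m) + args)
               for i in range(m)]
    for t in threads:
        t.start()
    return threads

def barrier(threads):
    for t in threads:
        t.join()

# Kernel function: each instance processes a disjoint slice of items.
results = {}
lock = threading.Lock()

def count_kernel(my_instance, m, items):
    for idx in range(my_instance, len(items), m):
        with lock:
            results[items[idx]] = results.get(items[idx], 0) + 1

# Control function: launch 4 kernel instances, then synchronize.
threads = run(count_kernel, 4, ["a", "b", "a", "c", "b", "a"])
barrier(threads)
print(sorted(results.items()))
```

Note the explicit lock around the shared dict: in Piccolo itself, an accumulating table would make this fine-grained locking unnecessary, which is exactly why the paper finds global barriers sufficient.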
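The locality policies of §2.3 rest on a user-defined partitioning function that maps keys to partitions, so that related entries land together and the run-time can schedule a kernel instance on the machine holding the partitions it reads. A hypothetical example, assuming a crawler-style table keyed by URL (the function name, constant, and CRC-based hash are ours, not Piccolo's API):

```python
# Sketch of a user-defined partitioning function: all pages from one
# site map to the same partition, so a kernel instance that walks a
# site's link structure touches mostly local table entries.

import zlib
from urllib.parse import urlparse

NUM_PARTITIONS = 8

def partition_by_site(url: str) -> int:
    site = urlparse(url).netloc
    # crc32 gives a deterministic hash across processes and runs.
    return zlib.crc32(site.encode()) % NUM_PARTITIONS

print(partition_by_site("http://example.com/page1"))
```

A hash of the full URL would balance load just as well but would scatter a site's pages across machines; partitioning by site trades some balance for locality, which is the policy decision Piccolo leaves to the programmer.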
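Putting these pieces together, the PageRank computation mentioned throughout can be sketched sequentially on a toy graph. In Piccolo the inner loop's contribution would be a table update combined by a sum accumulator across concurrent kernel instances; this is illustration code of ours, not the paper's §2.5 implementation.

```python
# Sequential sketch of a PageRank-style iteration. The `nxt` dict
# accumulates contributions by addition, mirroring how a Piccolo table
# with a sum accumulator would combine updates from many kernel
# instances without fine-grained locks.

DAMPING = 0.85

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
curr = {p: 1.0 / len(graph) for p in graph}   # uniform initial scores

for _ in range(50):
    nxt = {p: (1 - DAMPING) / len(graph) for p in graph}
    for page, links in graph.items():
        share = DAMPING * curr[page] / len(links)
        for out in links:
            # In Piccolo: table.update(out, share), resolved by the
            # table's sum accumulator.
            nxt[out] += share
    curr = nxt

print(sorted(curr, key=curr.get, reverse=True))
```

The scores form a probability distribution (they sum to 1), and pages with more incoming rank mass ("c" here, fed by both "a" and "b") end up ranked highest.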