
Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks

Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, Dennis Fetterly
Microsoft Research, Silicon Valley

ABSTRACT

Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational “vertices” with communication “channels” to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs.

The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources.

Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.

Categories and Subject Descriptors

D.1.3 [PROGRAMMING TECHNIQUES]: Concurrent Programming—Distributed programming

General Terms

Performance, Design, Reliability

Keywords

Concurrency, Distributed Programming, Dataflow, Cluster Computing

1. INTRODUCTION

The Dryad project addresses a long-standing problem: how can we make it easier for developers to write efficient parallel and distributed applications? We are motivated both by the emergence of large-scale internet services that depend on clusters of hundreds or thousands of general-purpose servers, and also by the prediction that future advances in local computing power will come from increasing the number of cores on a chip rather than improving the speed or instruction-level parallelism of a single core [3]. Both of these scenarios involve resources that are in a single administrative domain, connected using a known, high-performance communication topology, under centralized management and control. In such cases many of the hard problems that arise in wide-area distributed systems may be sidestepped: these include high-latency and unreliable networks, control of resources by separate federated or competing entities, and issues of identity for authentication and access control. Our primary focus is instead on the simplicity of the programming model and the reliability, efficiency and scalability of the applications.

For many resource-intensive applications, the simplest way to achieve scalable performance is to exploit data parallelism. There has historically been a great deal of work in the parallel computing community both on systems that automatically discover and exploit parallelism in sequential programs, and on those that require the developer to explicitly expose the data dependencies of a computation. There are still limitations to the power of fully-automatic parallelization, and so we build mainly on ideas from the latter research tradition. Condor [37] was an early example of such a system in a distributed setting, and we take more direct inspiration from three other models: shader languages developed for graphic processing units (GPUs) [30, 36], Google’s MapReduce system [16], and parallel databases [18]. In all these programming paradigms, the system dictates a communication graph, but makes it simple for the developer to supply subroutines to be executed at specified graph vertices. All three have demonstrated great success, in that large numbers of developers have been able to write concurrent software that is reliably executed in a distributed fashion.

We believe that a major reason for the success of GPU shader languages, MapReduce and parallel databases is that the developer is explicitly forced to consider the data parallelism of the computation. Once an application is cast into this framework, the system is automatically able to provide the necessary scheduling and distribution. The developer need have no understanding of standard concurrency mechanisms such as threads and fine-grain concurrency control, which are known to be difficult to program correctly. Instead the system runtime abstracts these issues from the developer, and also deals with many of the hardest distributed computing problems, most notably resource allocation, scheduling, and the transient or permanent failure of a subset of components in the system. By fixing the boundary between the communication graph and the subroutines that inhabit its vertices, the model guides the developer towards an appropriate level of granularity. The system need not try too hard to extract parallelism within a developer-provided subroutine, while it can exploit the fact that dependencies are all explicitly encoded in the flow graph to efficiently distribute the execution across those subroutines. Finally, developers now work at a suitable level of abstraction for writing scalable applications since the resources available at execution time are not generally known at the time the code is written.

The aforementioned systems restrict an application’s communication flow for different reasons. GPU shader languages are strongly tied to an efficient underlying hardware implementation that has been tuned to give good performance for common graphics memory-access patterns. MapReduce was designed to be accessible to the widest possible class of developers, and therefore aims for simplicity at the expense of generality and performance. Parallel databases were designed for relational algebra manipulations (e.g. SQL) where the communication graph is implicit.

By contrast, the Dryad system allows the developer fine control over the communication graph as well as the subroutines that live at its vertices. A Dryad application developer can specify an arbitrary directed acyclic graph to describe the application’s communication patterns, and express the data transport mechanisms (files, TCP pipes, and shared-memory FIFOs) between the computation vertices. This direct specification of the graph also gives the developer greater flexibility to easily compose basic common operations, leading to a distributed analogue of “piping” together traditional Unix utilities such as grep, sort and head.
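To make this concrete, the following is a minimal, hypothetical C++ sketch of the kind of structure such a specification implies: vertices naming sequential programs, channels connecting them, and a transport chosen per channel. The type names and builder functions (Vertex, Channel, Graph, AddVertex, Connect) are our own illustration rather than Dryad's actual API, and the example ignores details such as merging sorted partitions; it simply shows the distributed analogue of grep | sort | head expressed as a small dataflow graph.

// Hypothetical sketch of a Dryad-style dataflow graph: computational
// vertices joined by typed communication channels to form a DAG.
// Names and structure are illustrative only, not Dryad's real API.
#include <cstddef>
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// The three channel transports named in the paper.
enum class Transport { TemporaryFile, TcpPipe, SharedMemoryFifo };

struct Vertex {
    std::string program;  // sequential user code run at this vertex
    std::string args;
};

struct Channel {
    std::size_t src;      // producing vertex
    std::size_t dst;      // consuming vertex
    Transport transport;  // how bytes move between the two vertices
};

struct Graph {
    std::vector<Vertex> vertices;
    std::vector<Channel> channels;

    std::size_t AddVertex(std::string program, std::string args = "") {
        vertices.push_back({std::move(program), std::move(args)});
        return vertices.size() - 1;
    }
    void Connect(std::size_t src, std::size_t dst, Transport t) {
        channels.push_back({src, dst, t});
    }
};

int main() {
    // Distributed analogue of "grep pattern | sort | head": one grep per
    // input partition, a parallel sorting stage, and a final head vertex.
    Graph g;
    std::vector<std::size_t> greps, sorts;
    for (int i = 0; i < 4; ++i)
        greps.push_back(g.AddVertex("grep", "ERROR part-" + std::to_string(i)));
    for (int i = 0; i < 2; ++i)
        sorts.push_back(g.AddVertex("sort"));
    std::size_t head = g.AddVertex("head", "-n 10");

    // Each grep feeds one sort over a file channel; the sorts feed head
    // over TCP pipes. The transport is chosen per channel.
    for (std::size_t i = 0; i < greps.size(); ++i)
        g.Connect(greps[i], sorts[i % sorts.size()], Transport::TemporaryFile);
    for (std::size_t s : sorts)
        g.Connect(s, head, Transport::TcpPipe);

    std::printf("%zu vertices, %zu channels\n",
                g.vertices.size(), g.channels.size());
    return 0;
}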
Dryad is notable for allowing graph vertices (and computations in general) to use an arbitrary number of inputs and outputs. MapReduce restricts all computations to take a single input set and generate a single output set.
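As a hedged illustration of what that difference means at the vertex level, the sketch below shows a vertex interface in which the user's sequential body is handed whatever set of input and output channels the surrounding graph provides, rather than exactly one of each as in MapReduce. The interface, the Channel stand-in, and the TaggedMerge example are invented here for exposition and are not Dryad's actual vertex API.

// Hypothetical vertex interface: a vertex body receives an arbitrary
// number of input and output channels, as dictated by the graph.
// Illustrative only; not Dryad's actual API.
#include <cstddef>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Stand-in for a data channel; a real runtime would back this with a
// file, a TCP pipe, or a shared-memory FIFO.
using Channel = std::iostream;

struct VertexProgram {
    virtual ~VertexProgram() = default;
    // The body sees vectors of channels of whatever size the graph
    // dictates, not a fixed single input and single output.
    virtual void Run(std::vector<Channel*>& inputs,
                     std::vector<Channel*>& outputs) = 0;
};

// Example vertex with several inputs and one output: it tags each
// record with the index of the channel it arrived on.
struct TaggedMerge : VertexProgram {
    void Run(std::vector<Channel*>& inputs,
             std::vector<Channel*>& outputs) override {
        for (std::size_t i = 0; i < inputs.size(); ++i) {
            std::string line;
            while (std::getline(*inputs[i], line))
                *outputs[0] << i << '\t' << line << '\n';
        }
    }
};

int main() {
    // Drive the vertex locally with in-memory channels just to show the
    // shape of the interface.
    std::stringstream inA("alpha\nbeta\n"), inB("gamma\n"), out;
    std::vector<Channel*> inputs{&inA, &inB}, outputs{&out};
    TaggedMerge().Run(inputs, outputs);
    std::cout << out.str();
    return 0;
}

A vertex of this shape can, for example, implement a merge or join over several upstream stages, something that must be encoded indirectly in a single-input, single-output model.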
… use heuristics to automatically select and tune appropriate Dryad features, and thereby get good performance for most simple applications.

We summarize Dryad’s contributions as follows:

• We built a general-purpose, high performance distributed execution engine. The Dryad execution engine handles many of the difficult problems of creating a large distributed, concurrent application: scheduling across resources, optimizing the level of concurrency within a computer, recovering from communication or computer failures, and delivering data to where it is needed. Dryad supports multiple different data transport mechanisms between computation vertices and explicit dataflow graph construction and refinement.

• We demonstrated the excellent performance of Dryad from a single multi-core computer up to clusters consisting of thousands of computers on several nontrivial, real examples. We further demonstrated that Dryad’s fine control over an application’s dataflow graph gives the programmer the necessary tools to optimize trade-offs between parallelism and data distribution overhead. This validated Dryad’s design choices.

• We explored the programmability of Dryad on two fronts. First, we have designed a simple graph description language that empowers the developer with explicit graph construction and refinement to fully take advantage of the rich features of the Dryad execution engine (a hypothetical sketch of this style of graph composition appears at the end of this section). Our user experiences lead us to believe that, while it requires some effort to learn, a programmer can master the APIs required for most of the applications in a couple of weeks. Second, we (and others within Microsoft) have built simpler, higher-level programming abstractions for specific application domains on top of Dryad. This has significantly lowered the barrier to entry and increased the acceptance of Dryad among domain experts who are interested in using Dryad for rapid application prototyping. This further validated Dryad’s design choices.

The next three sections describe the abstract form of a Dryad application and outline the steps involved in writing…
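To give a flavor of the “explicit graph construction and refinement” mentioned above, here is a small hypothetical sketch of a stage-level description layer: stages are vertex programs replicated some number of times, and connections are made either pointwise (copy i to copy i) or by linking every copy of one stage to every copy of the next. The names and semantics are illustrative assumptions, not the graph description language actually shipped with Dryad.

// Hypothetical stage-level graph description: whole stages are composed
// and connected, rather than individual vertices and edges.
// Illustrative only; not Dryad's actual graph description language.
#include <cstdio>
#include <string>
#include <vector>

// A "stage" is a vertex program replicated some number of times.
struct Stage {
    std::string program;
    int copies;
};

// A stage-level connection, expanded later into per-vertex channels:
// pointwise links copy i to copy i; otherwise every copy of the source
// stage is linked to every copy of the destination stage.
struct StageEdge {
    int src;
    int dst;
    bool pointwise;
};

struct StageGraph {
    std::vector<Stage> stages;
    std::vector<StageEdge> edges;

    int AddStage(const std::string& program, int copies) {
        stages.push_back({program, copies});
        return static_cast<int>(stages.size()) - 1;
    }
    void Connect(int src, int dst, bool pointwise) {
        edges.push_back({src, dst, pointwise});
    }
};

int main() {
    // The grep/sort/head pipeline again, described by composing stages
    // instead of enumerating individual channels.
    StageGraph g;
    int grep = g.AddStage("grep", 4);
    int sort = g.AddStage("sort", 4);
    int head = g.AddStage("head", 1);
    g.Connect(grep, sort, /*pointwise=*/true);   // grep_i -> sort_i
    g.Connect(sort, head, /*pointwise=*/false);  // every sort -> head
    std::printf("%zu stages, %zu stage-level edges\n",
                g.stages.size(), g.edges.size());
    return 0;
}

Refinement in this style would operate on whole stages, for example inserting an intermediate aggregation stage, before the description is expanded into the concrete vertex-and-channel graph that the execution engine runs.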