Integrating Asynchronous Task Parallelism and Data-centric Atomicity*

Vivek Kumar† (Rice University), Julian Dolby (IBM T.J. Watson Research), Stephen M. Blackburn (Australian National University)

* This work was supported by IBM and ARC LP0989872. Any opinions, findings and conclusions expressed herein are the authors' and do not necessarily reflect those of the sponsors.
† Work done while the author was affiliated with the Australian National University.

ABSTRACT

Processor design has turned toward parallelism and heterogeneous cores to achieve performance and energy efficiency. Developers find high-level languages attractive because they use abstraction to offer productivity and portability over these hardware complexities. Over the past few decades, researchers have developed increasingly advanced mechanisms to deliver performance despite the overheads naturally imposed by this abstraction. Recent work has demonstrated that such mechanisms can be exploited to attack overheads that arise in emerging high-level languages, which provide strong abstractions over parallelism. However, current implementations of existing popular high-level languages, such as Java, offer little by way of abstractions that allow the developer to achieve performance in the face of extensive hardware parallelism.

In this paper, we present a small set of extensions to the Java programming language that aims to achieve both high performance and high productivity with minimal programmer effort. We incorporate ideas from languages like X10 and AJ to develop five annotations in Java for achieving asynchronous task parallelism and data-centric concurrency control. These annotations allow the use of a highly efficient implementation of a work-stealing scheduler for task parallelism. We evaluate our proposal by refactoring classes from a number of existing multithreaded open source projects to use our new annotations. Our results suggest that these annotations significantly reduce the programming effort while achieving performance improvements of up to 30% compared to conventional approaches.

CCS Concepts

• Software and its engineering → Concurrent programming structures; Source code generation; Runtime environments

Keywords

Task Parallelism, Work-Stealing, Data-centric Atomicity, Jikes RVM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
PPPJ'16, August 29-30, 2016, Lugano, Switzerland
© 2016 ACM. ISBN 978-1-4503-4135-6/16/08. $15.00
DOI: http://dx.doi.org/...

1. INTRODUCTION

Today and in the foreseeable future, performance will be delivered principally in terms of increased hardware parallelism. This fact is an apparently unavoidable consequence of wire delay and the breakdown of Dennard scaling, which together have put a stop to hardware delivering ever faster sequential performance. Unfortunately, software parallelism is often difficult to identify and expose, which means it is often hard to realize the performance potential of modern processors. Common programming models using threads impose significant complexity to organize code into multiple threads of control and to manage the balance of work amongst threads to ensure good utilization of multiple cores.

Much research has focused on developing programming models in which programmers simply annotate portions of work that may be done concurrently and allow the system to determine how the work can be executed efficiently and correctly. X10 [13] is one such recently developed parallel programming language; it provides async-finish annotations with which users specify tasks that can run asynchronously. Other closely related programming models are ICC++ [15], Cilk [20], and Habanero-Java [11, 24]. All these implementations employ a work-stealing scheduler [9] within the underlying language runtime that schedules fine-grained packets of work exposed by the programmer, exploiting idle processors and unburdening those that are overloaded. However, a drawback of this approach is the runtime overhead due to over-decomposition of tasks. As a common workaround, the programmer chooses a granularity size, which can stop the creation of new tasks. Again, choosing the right granularity size for each async-finish block in a large code-base can be a daunting task.

Further, to ensure the correctness of a parallel program, it is important that threads of execution do not interfere with each other while sharing data in memory (a data-race). Traditional approaches to avoiding data-races are control-centric: instruction sequences are protected with synchronization operations. This is a burden on the programmer, who has to ensure that all shared data are consistently protected. In a large code-base, this control-centric approach can significantly decrease productivity and can lead to data-races that are frustratingly difficult to track down and eliminate. To remove this burden from the programmer, Atomic Sets for Java (AJ) introduced a data-centric approach to synchronization [17]. In this model, the user simply groups fields of objects into atomic sets to specify that these objects must always be updated atomically. AJ then uses the Eclipse Refactoring [32] tool to automatically generate synchronization statements at all relevant places. The basic idea behind atomic sets is simple: the reason we need synchronization at all is that some sets of fields have a consistency property that must be maintained. The atomic sets model abstracts away from what these properties are, and has them declared explicitly. While AJ has been extensively studied in several prior works [36, 17, 37, 31, 22, 26], it is still not open-sourced. Moreover, all prior studies on AJ have expressed parallelism only with explicit Java threading.

Our principal contribution is a new open-sourced system, AJWS (Atomic Java with Work-Stealing), which, by drawing together the benefits of work-stealing and AJ, allows the application programmer to conveniently and succinctly expose the parallelism inherent in their program in a mainstream object-oriented language. We have implemented AJWS using the JastAdd meta-compilation system [18]. AJWS uses a highly scalable and low-overhead implementation of a work-stealing runtime [28, 27], which relieves the programmer of choosing the granularity of each parallel section.

In summary, the contributions of this paper are as follows:

• We introduce a new parallel programming model (AJWS) that provides a set of five annotations to bring together the benefits of work-stealing and a data-centric approach to synchronization.

• We provide a compiler transformation of AJWS annotations to vanilla Java that uses a recently developed, highly scalable implementation of a work-stealing runtime.

• We demonstrate the benefits of AJWS by modifying three large open-sourced Java applications and by evaluating the performance on a quad-core smartphone, to evaluate AJWS on mobile devices that are increasingly multicore.

2. MOTIVATION

Pundits predict that for the next twenty years processor designs will use large-scale parallelism with heterogeneous cores to control power and energy [10]. Software demands for correctness, complexity management, programmer productivity, time-to-market, reliability, security, and portability have pushed developers away from low-level programming languages towards high-level ones. Java is a high-level language that has gained huge popularity; as evidenced by the TIOBE [6] index for June 2016, Java holds the number one spot in the programming language community.

However, mainstream languages such as Java do not provide support for parallel programming that is both easy to use and efficient. Figure 1 shows bank account transaction code written both in plain Java and in AJWS. The class Bank carries out money transfers between pairs of sender and receiver accounts. The class Transfer contains the details of the sender and receiver accounts. Multiple transfers might be taking place to/from the same account. The other operation performed by class Bank is adding interest to each account.

2.1 Bank transaction in plain Java

Figure 1(a) shows plain Java pseudocode for this bank program. The methods processTransfer (Line 35) and addInterest (Line 41) contain code that can exploit parallelism. Different techniques that the user can choose for parallelization are (a) partitioning the for loops (Lines 36 and 42) among threads, (b) using Java's concurrent utilities to parallelize these for loops, and (c) using async-finish constructs from Habanero-Java. The first two approaches incur significant syntactic overheads with respect to the sequential version of the same code. Moreover, in all the above approaches the programmer must control the granularity of the parallel executions to avoid runtime overheads. To ensure consistency, class Account requires a lock object to access any member method (e.g., method addInterest at Line 15). Further, to ensure proper accounting in case of an inter-account transfer (Line 22), both participating accounts should be locked before the transfer actually happens (Line 24). However, Line 24 can create a potential deadlock. Suppose there is a thread T1 doing a transfer from account A to B, and another thread T2 doing a transfer from account B to A. At Line 24, T1 locks A and T2 locks B. Next, T1 tries locking B and T2 tries locking A, thereby creating a deadlock.

2.2 Bank transaction using AJWS

To minimize programming complexities, our AJWS system provides the user a set of five annotations to ensure race-free concurrency (syntax and semantics are explained
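The lock-ordering deadlock described in Section 2.1 is easy to reproduce with nested synchronized blocks. The sketch below is illustrative only; the Account and Bank classes here are simplified stand-ins, not the paper's Figure 1, and the method names are assumptions made for this sketch. transferUnsafe shows the hazardous pattern, while transferOrdered shows the conventional control-centric repair of acquiring locks in a single global order, precisely the kind of manual discipline that a data-centric model aims to make unnecessary.

```java
// Illustrative sketch only: simplified stand-ins for the classes in Figure 1.
class Account {
    private double balance;
    Account(double balance) { this.balance = balance; }
    double getBalance() { return balance; }
    void deposit(double amount)  { balance += amount; }
    void withdraw(double amount) { balance -= amount; }
}

class Bank {
    // Deadlock-prone: T1 transferring A->B locks A then B, while T2
    // transferring B->A locks B then A. If each thread acquires its first
    // lock before the other acquires its second, both block forever.
    static void transferUnsafe(Account from, Account to, double amount) {
        synchronized (from) {
            synchronized (to) {
                from.withdraw(amount);
                to.deposit(amount);
            }
        }
    }

    // Conventional control-centric fix: always acquire the two locks in one
    // global order (here by identity hash; a production version would also
    // need a tie-breaker lock for the rare case of equal hash codes).
    static void transferOrdered(Account from, Account to, double amount) {
        Account first  = System.identityHashCode(from) <= System.identityHashCode(to) ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.withdraw(amount);
                to.deposit(amount);
            }
        }
    }
}
```

Even this small fix illustrates the motivation above: the ordering discipline is invisible in the types, must be followed at every call site, and is easy to break during maintenance. The intent of the data-centric approach is that the programmer instead declares the consistency requirement on the balance field and lets generated code supply the synchronization.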
