ASC: Automatically Scalable Computation

Amos Waterland (Harvard University), Elaine Angelino (Harvard University), Ryan P. Adams (Harvard University), Jonathan Appavoo (Boston University), and Margo Seltzer (Harvard University)

In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '14), March 1-5, 2014, Salt Lake City, UT: 575-590. doi:10.1145/2541940.2541985

Abstract

We present an architecture designed to transparently and automatically scale the performance of sequential programs as a function of the hardware resources available. The architecture is predicated on a model of computation that views program execution as a walk through the enormous state space composed of the memory and registers of a single-threaded processor. Each instruction execution in this model moves the system from its current point in state space to a deterministic subsequent point. We can parallelize such execution by predictively partitioning the complete path and speculatively executing each partition in parallel. Accurately partitioning the path is a challenging prediction problem. We have implemented our system using a functional simulator that emulates the x86 instruction set, including a collection of state predictors and a mechanism for speculatively executing threads that explore potential states along the execution path. While the overhead of our simulation makes it impractical to measure speedup relative to native x86 execution, experiments on three benchmarks show scalability of up to a factor of 256 on a 1024-core machine when executing unmodified sequential programs.

Categories and Subject Descriptors I.2.6 [Artificial Intelligence]: Learning—Connectionism and neural nets; C.5.1 [Large and Medium ("Mainframe") Computers]: Super (very large) computers

Keywords Machine learning, Automatic parallelization

1. Introduction

The Automatically Scalable Computation (ASC) architecture is designed to meet two goals: it is straightforward to program, and it automatically scales up execution according to available physical resources. For the first goal, we define "straightforward to program" as requiring only that the programmer write sequential code that compiles into a single-threaded binary program. The second goal requires that performance improves as a function of the number of cores and amount of memory available.

We begin with a computational model that views the data and hardware available to a program as comprising an exponentially large state space. This space is composed of all possible states of the registers and memory of a single-threaded processor. In this model, execution of a single instruction corresponds to a transition between two states in this space, and an entire program execution corresponds to a path or trajectory through this space. Given this model and a system with N processors, we would ideally be able to automatically reduce the time to execute a trajectory by a factor of N. In theory, this could be achieved by dividing the trajectory into N equal partitions and executing each of them in parallel. Of course, we do not know the precise trajectory a program will follow, so we do not know, a priori, the precise points on the trajectory that will equally partition it. Nevertheless, if we attempt to predict N − 1 points on the trajectory and speculatively execute the trajectory segments starting at those points, we will produce a speedup if even a small subset of our predictions is accurate. From this vantage point, accurately predicting points on the future trajectory of the system suggests a methodology for automatically scaling sequential execution.
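To make the walk-through-state-space model concrete, here is a minimal sketch in Python (ours, not the paper's code). The toy `step` function and the `predict_state` oracle are assumptions: `step` stands in for real x86 instruction semantics, and `predict_state(s0, k)` guesses the state k steps past s0.

```python
# Toy model: execution as a deterministic walk through state space.

def step(state):
    """One instruction: a deterministic transition from the current
    point in state space to the next. (Stand-in semantics.)"""
    counter, acc = state
    return (counter + 1, acc + counter)

def run(state, n_steps):
    """Sequential execution: follow the trajectory for n_steps steps."""
    for _ in range(n_steps):
        state = step(state)
    return state

def speculative_run(start, total_steps, n_workers, predict_state):
    """Guess the n_workers - 1 states that would evenly partition the
    trajectory, execute each segment (conceptually in parallel), and
    keep a segment's result only when its guessed start state turns
    out to equal the true state actually reached."""
    seg = total_steps // n_workers
    guesses = [predict_state(start, i * seg) for i in range(1, n_workers)]
    ends = [run(g, seg) for g in guesses]   # speculative segments

    state, hits = run(start, seg), 0        # segment 0 is ground truth
    for guess, end in zip(guesses, ends):
        if state == guess:                  # prediction verified:
            state, hits = end, hits + 1     #   reuse the whole segment
        else:
            state = run(state, seg)         # misprediction: redo it
    return state, hits
```

When every guess is verified, the sequential re-execution of each segment is replaced by a single comparison, giving the ideal factor-of-N reduction; each correct guess still pays for itself even when the others miss.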
The primary design challenge in realizing this architecture is accurately predicting points that partition a trajectory. We break this challenge into two parts: (1) recognizing states from which accurate prediction is possible and will result in useful speedup, and (2) predicting future states of the system when the current state of execution is recognized as one from which prediction is possible.

Given solutions for these two challenges, a basic ASC architecture works as follows. While sequentially executing on one core, ASC allocates additional cores for predictive execution. Each predictive execution core begins executing at a different predicted state and continues executing for a given length of time. We then store the results of predictive execution in a state cache: for example, as compressed pairs of start and end states. At appropriate times, the sequential execution core consults the state cache. If its current state matches a cached start state on all relevant coordinates, it achieves speedup by "fast-forwarding" to the associated cached end state and then resumes execution. If the predicted states correctly and evenly partition the execution trajectory and the ASC components operate efficiently, we will achieve perfect and automatic linear speedup of the sequential execution.
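The following sketch (again ours, with core allocation and synchronization elided, and with a hypothetical `step` function as above) shows the division of labor this paragraph describes: spare cores publish (start, end) pairs into a shared cache, and the sequential core probes that cache as it executes, fast-forwarding on a hit.

```python
# Sketch: predictive cores fill a cache of (start, end) state pairs;
# the sequential core fast-forwards through any segment whose cached
# start state matches its current state. States are hashable toy
# values; a real entry would be a compressed register/memory snapshot.

state_cache = {}  # predicted start state -> (end state, instructions covered)

def predictive_worker(predicted_start, budget, step):
    """Runs on a spare core: execute `budget` instructions forward
    from a predicted state and publish the resulting pair."""
    state = predicted_start
    for _ in range(budget):
        state = step(state)
    state_cache[predicted_start] = (state, budget)

def sequential_core(state, total_steps, step):
    """Ground-truth execution that consults the cache as it goes."""
    executed = 0
    while executed < total_steps:
        hit = state_cache.get(state)
        if hit is not None:          # current state matches a cached
            state, skipped = hit     # start state: fast-forward past
            executed += skipped      # the whole segment
        else:
            state = step(state)      # miss: execute one instruction
            executed += 1
    return state
```

If the predicted start states land on evenly spaced points of the true trajectory, this loop degenerates to one cache hit per segment, which is the linear-speedup case described above.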
Thus, our architecture has the potential to produce arbitrary scalability, but its true efficacy will be a function of an implementation's ability to (1) recognize good points from which to make predictions, (2) make accurate predictions, (3) efficiently query the cache, (4) efficiently allocate cores to the various parts of the architecture, and (5) execute trajectory-based computation efficiently.

The primary challenges to efficiently implementing ASC are (1) manipulating the potentially enormous state space and (2) managing the cache of state pairs. Although the entire state space of a program is sufficiently large (e.g., 10^7 bits for one of our benchmark programs) that efficient manipulation seems intractable, we exploit the fact that predictable units of computation (think "functions or loops") often touch only a tiny fraction of that space (e.g., 10^3 bits). Thus, we encode cache entries using a sparse representation that is itself compressible. In addition, a single cache entry can be used at multiple points in a program's execution, effecting a generalized form of memoization.
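One way to render the sparse representation is the sketch below (our illustration; the paper does not specify the cache's actual format). A state is modeled as a mapping from coordinates (register names, memory addresses) to values, and an entry records only the coordinates the segment read and wrote, so a state of roughly 10^7 bits can be summarized by an entry of roughly 10^3 bits.

```python
# Sketch: a sparse cache entry that names only the coordinates a
# predictable unit of computation (a function or loop body) touched.

class SparseEntry:
    def __init__(self, reads, writes, length):
        self.reads = reads    # {coordinate: value observed at segment start}
        self.writes = writes  # {coordinate: value produced at segment end}
        self.length = length  # instructions the segment covers

    def matches(self, state):
        """Usable iff the live state agrees on every coordinate read;
        the (vastly larger) untouched remainder of the state is ignored."""
        return all(state.get(c) == v for c, v in self.reads.items())

    def fast_forward(self, state):
        """Apply the segment's writes without re-executing it: a
        generalized form of memoization over touched coordinates."""
        new_state = dict(state)
        new_state.update(self.writes)
        return new_state
```

Because `matches` ignores untouched coordinates, one entry can fire at many points in an execution, for instance on every iteration of a loop that re-enters an equivalent local state while unrelated memory differs.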

We present an implementation of ASC, the Learning-based Automatically Scalable Computation (LASC) system. It is a trajectory-based functional simulator (TBFS) that emulates the x86 instruction set with a fast, adaptive algorithm for recognizing predictable states, a set of on-line learning algorithms that use observed states to learn predictors for future states, and a resource allocator that selects optimal combinations [...]

[...] §4. In §5, we present both theoretical and analytical results that demonstrate the potential of the architecture and the strengths and weaknesses of our prototype. In §6, we discuss the implications of this work and avenues for future research.

2. Related work

There are three broad categories of work that share our goal of automatically scaling program execution: parallelizing compilers, software systems that parallelize binary programs, and hardware parallelization. Although each category of work arises from conceptual models rather different from ours, notions of statistical prediction and speculative execution have independently arisen in all three.

2.1 Compiler parallelization

Traditional compiler parallelization based on static analysis [1] has produced sophisticated research compilers [3, 9]. Although these compilers can automatically parallelize most loops that have regular, well-defined data access patterns [32], the limitations of static analysis have become apparent [27]. When dealing with less regular loops, parallelizing compilers either give up or generate both sequential and parallel code that must use runtime failsafe checking [55]. The ASC architecture is able to speed up irregular loops by using on-line probabilistic inference to predict likely future states, as we show in §5. However, it can also import the sophisticated static analyses of traditional parallelizing compilers in the form of probability priors on loops that the compiler was almost but not quite able to prove independent.
