POWER: Parallel Optimizations With Executable Rewriting

Nipun Arora, Jonathan Bell, Martha Kim, *Vishal K. Singh, Gail E. Kaiser
Computer Science Department, Columbia University
*NEC-Labs America, Princeton, NJ
{nipun, jbell, martha, kaiser}[email protected], [email protected]

Abstract

The hardware industry's rapid development of multi-core and many-core hardware has outpaced the software industry's transition from sequential to parallel programs. Most applications are still sequential, and many cores on parallel machines remain unused. We propose a tool that uses data-dependence profiling and binary rewriting to parallelize executables without access to source code. Our technique uses Bernstein's conditions to identify independent sets of basic blocks that can be executed in parallel, introducing a level of granularity between fine-grained instruction-level and coarse-grained task-level parallelism. We analyze dynamically generated control and data dependence graphs to find independent sets of basic blocks which can be parallelized. We then propose to parallelize these candidates using binary rewriting techniques. Our technique aims to demonstrate the parallelism that remains in serial applications by exposing concrete opportunities for parallelism.

1 Introduction

The proliferation of multi-core architectures over the last 10 years offers a significant performance incentive for parallel software. Unfortunately, such software is substantially more complicated than serial code to develop, debug, and test, resulting in a large number of applications that cannot exploit today's abundant hardware resources. There has thus been significant research interest in auto-parallelization: techniques that automatically transform serial code into a parallel equivalent [2].

Auto-parallelization approaches can broadly be divided into two categories, each with its own drawbacks. Static auto-parallelization techniques are applied when the program is not running, simply by examining a program's code or a disassembled binary. These techniques add little to no overhead during execution, but suffer from the usual drawbacks of static alias analysis [5], a well-known and unresolved problem within the compilers community. Thus, in practice, static analysis techniques are limited to structured DO-ACROSS and DO-ALL style parallelizations [2, 4, 15].

There have also been a few attempts at purely dynamic auto-parallelization. These approaches are typically rooted in Bernstein's conditions, which allow two statements to run in parallel if they do not produce a read-write or write-write data conflict [3]. Dynamic parallelization [7, 17] occurs while a program is running and can provide more opportunities to parallelize, but comes at the cost of run-time overhead.
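As a concrete illustration of this criterion, the following sketch (not part of the POWER tool chain) checks Bernstein's conditions directly; the per-statement read and write sets are assumed to be given, for example by a data-dependence profile:

// Minimal illustration of Bernstein's conditions: two statements may run in
// parallel if they produce no read-write, write-read, or write-write conflict.
// The read/write sets here are hypothetical inputs; a real tool would derive
// them from a data-dependence profile.
#include <cstdint>
#include <iostream>
#include <set>

struct Access {
    std::set<uint64_t> reads;   // addresses read by the statement
    std::set<uint64_t> writes;  // addresses written by the statement
};

static bool disjoint(const std::set<uint64_t>& a, const std::set<uint64_t>& b) {
    for (uint64_t x : a)
        if (b.count(x)) return false;
    return true;
}

// True iff Bernstein's conditions hold for statements s1 and s2:
// W1 and R2 are disjoint, R1 and W2 are disjoint, and W1 and W2 are disjoint.
bool canRunInParallel(const Access& s1, const Access& s2) {
    return disjoint(s1.writes, s2.reads) &&
           disjoint(s1.reads,  s2.writes) &&
           disjoint(s1.writes, s2.writes);
}

int main() {
    Access a{{0x1000}, {0x2000}};  // reads 0x1000, writes 0x2000
    Access b{{0x3000}, {0x2000}};  // also writes 0x2000: write-write conflict with a
    Access c{{0x4000}, {0x5000}};  // touches unrelated locations
    std::cout << canRunInParallel(a, b) << "\n";  // 0: conflict on 0x2000
    std::cout << canRunInParallel(a, c) << "\n";  // 1: independent
    return 0;
}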
In this paper we present POWER: Parallel Optimizations With Executable Rewriting, a tool that transforms sequential binary executables into parallel ones using a hybrid of static and dynamic analysis techniques. The POWER tool chain captures run-time profiles over test executions of the code and then analyzes them offline to determine potential parallelizations of basic blocks. POWER is designed to make some parallelization easily exploitable, without requiring additional development time to understand and parallelize the code. POWER is designed with three principal goals:

Generality: The parallelized binary should run on all operating systems and architectures on which the original serial binary ran.

Transparency: In many situations, the application source code will not be available. POWER must not require source code or any changes to the underlying execution environment, such as the OS or hardware.

Performance: POWER must exploit more parallelism than other tools, to give improved performance for the application.

We believe that POWER offers a significant improvement over current auto-parallelization techniques, as follows:

• Exploitation of basic-block-level parallelism using dynamic profiles
• Profile-driven auto-parallelization using a hybrid (static/dynamic) technique
• A source-free, binary-only approach to parallelism

[Figure 1: POWER Architecture Overview. POWER takes in an unmodified serial binary, profiles it, analyzes the static program structure in conjunction with the dynamic profile information, then rewrites the binary to expose basic-block-level parallelism. Pipeline stages: Serial Executable Input, Profile Generation, Profile Analysis, Hot Spot Analysis & Load Balancing, Binary Rewriting, Parallelized Executable Output.]

One of our key innovations is to explore parallelism at the basic block level, based on profiles collected during program execution. The intuition behind parallelism at the basic block granularity is that most parallelism hot spots occur in loops, recursions, and similar structures. A basic block representation provides natural boundaries for detecting parallelism candidates, as loop and function recursions all start with a jump instruction to the head of a basic block. Basic-block-level parallelism introduces a new level of granularity between instruction-level and task-level parallelism.

During program execution, POWER collects both data and control flow profiles. By generating data dependency profiles, we are able to observe actual data dependences as the program executes. Static parallelizers, by contrast, are limited in their applicability because of pointer aliasing problems. Using these profiles simplifies aliasing concerns and provides us with additional candidates for parallelism.

Operating purely at the binary level allows us to present a generic solution that can be applied irrespective of the language or compiler of the target application. Our approach can optimize applications regardless of the availability of their source code, a potentially valuable feature when supporting legacy systems.

2 POWER: A profile-guided auto-parallelization system for executables

Our approach is a hybrid of static and profile-guided approaches, and is presented in Figure 1. POWER utilizes information gathered by running the target program several times on representative inputs, generating a number of profiles. After generating profiles, we use a novel combination of the Apriori algorithm [1] and Dijkstra's shortest path algorithm to identify sets of basic blocks that can be parallelized. Next, we analyze potential parallel schedules based on hot spots and load balancing constraints. Finally, we propose to apply the parallelizations determined above to the binary and verify the correctness of the rewritten binary. Unlike classical compiler approaches, where transforms must be correct under all conditions, our transforms are somewhat optimistic, and they fall back to the original code in case the actual execution does not match the profile. This is similar to optimistic optimization approaches where code segments are executed out of order on an otherwise idle processor and the results are flushed if a dependency arises later. This section gives a detailed view of the workflow of POWER. Later, in Section 3, we explain the working of the tool chain using an example.

2.1 Serial Executable Input

POWER takes a sequential binary as input, which is then used for profile generation in the subsequent steps. The system requires no source code information or special compilers. This allows us to apply POWER to any sequential application, irrespective of the original programming language or the compiler that produced it, including, importantly, legacy applications.

2.2 Profile Generation

The first phase of our approach generates a cumulative profile of control flow and data dependencies over several runs of the program. The goal of profiling is to get a realistic idea of the control flows in the application binary. By observing executions of the target application at the binary level, we create a disambiguated control and data dependence flow of the program. We instrument each basic block and variable access with the PIN instrumentation tool [16]. The PIN framework provides an opportunity to dynamically insert code anywhere necessary, including before specific instructions and functions.
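To make the structure of such a profiler concrete, the sketch below shows a minimal Pin tool that counts basic-block executions. It is an illustration rather than POWER's actual profiler: it keys blocks by run-time address, assumes a single-threaded target, and omits the data-dependence recording and image-relative bookkeeping described here.

// Sketch of a Pin tool that counts basic-block executions. Simplified for
// illustration: single-threaded target assumed (no locking around the map),
// blocks keyed by runtime address rather than image name + offset, and no
// data-dependence instrumentation.
#include "pin.H"
#include <fstream>
#include <iostream>
#include <map>

static std::map<ADDRINT, UINT64> bblCounts;  // basic-block address -> execution count

// Analysis routine: runs before every dynamic instance of a basic block.
VOID CountBbl(ADDRINT bblAddr) {
    bblCounts[bblAddr]++;
}

// Instrumentation routine: called once per trace; inserts CountBbl before each block.
VOID Trace(TRACE trace, VOID* v) {
    for (BBL bbl = TRACE_BblHead(trace); BBL_Valid(bbl); bbl = BBL_Next(bbl)) {
        BBL_InsertCall(bbl, IPOINT_BEFORE, (AFUNPTR)CountBbl,
                       IARG_ADDRINT, BBL_Address(bbl), IARG_END);
    }
}

// Dump the profile when the instrumented program exits.
VOID Fini(INT32 code, VOID* v) {
    std::ofstream out("bbl.profile");
    for (const auto& kv : bblCounts)
        out << std::hex << kv.first << " " << std::dec << kv.second << "\n";
}

int main(int argc, char* argv[]) {
    if (PIN_Init(argc, argv)) return 1;
    TRACE_AddInstrumentFunction(Trace, 0);
    PIN_AddFiniFunction(Fini, 0);
    PIN_StartProgram();  // never returns
    return 0;
}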
Since the profile is dependent on the input set provided by the user, the quality of the profile depends entirely on the coverage and representativeness of the profiled inputs. Regression test suites and benchmarks are often provided with applications for testing with real-world loads, and these can be used to generate profiles for our tool.

2.2.1 Control Flow Analysis

The control flow is simply a trace of executed instructions, represented as a set of basic blocks, addressed by their offset from the image load address. Instructions are identified by image names and offsets rather than instruction pointers to control for different image load addresses across different profile runs. We annotate each basic block with the number of times it was executed.

The problem becomes somewhat more complicated when basic blocks have data dependencies, and even more so as those dependencies become more complex. We view parallelization of basic blocks as a subset problem. For each basic block Bi in the program, we collect a set of conflicts Si. Our goal is to build the maximal subset of blocks such that we can run the most blocks in parallel, given the conflict sets and control dependencies.

Unfortunately, analyzing dynamic traces generated at the binary level brings its own set of problems. One of the largest difficulties is dealing with very big trace files, creating
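To illustrate the subset formulation above, the following sketch greedily selects a set of mutually conflict-free blocks from hypothetical conflict sets. It is only an illustration, not POWER's analysis (which combines the Apriori algorithm with Dijkstra's shortest paths and also accounts for control dependencies); choosing a subset that runs the most blocks in parallel corresponds to maximum independent set on the conflict graph, which is NP-hard in general.

// Greedy stand-in for the subset problem: given per-block conflict sets,
// pick blocks with no pairwise conflicts. Conflict sets here are hypothetical.
#include <algorithm>
#include <iostream>
#include <map>
#include <set>
#include <vector>

using BlockId = int;
// conflicts[b] = blocks that share a data dependence with b (a Bernstein violation).
using ConflictMap = std::map<BlockId, std::set<BlockId>>;

std::vector<BlockId> greedyParallelSet(const ConflictMap& conflicts) {
    // Consider least-conflicted blocks first; keep a block only if it does not
    // conflict with anything already chosen.
    std::vector<BlockId> order;
    for (const auto& kv : conflicts) order.push_back(kv.first);
    std::sort(order.begin(), order.end(), [&](BlockId a, BlockId b) {
        return conflicts.at(a).size() < conflicts.at(b).size();
    });

    std::vector<BlockId> chosen;
    for (BlockId b : order) {
        bool ok = true;
        for (BlockId c : chosen)
            if (conflicts.at(b).count(c)) { ok = false; break; }
        if (ok) chosen.push_back(b);
    }
    return chosen;
}

int main() {
    // Hypothetical conflict sets for blocks B1..B4: B1 conflicts with B2, B3 with B4.
    ConflictMap conflicts = {
        {1, {2}}, {2, {1}}, {3, {4}}, {4, {3}},
    };
    for (BlockId b : greedyParallelSet(conflicts))
        std::cout << "B" << b << " ";  // e.g. "B1 B3": one block from each conflicting pair
    std::cout << "\n";
    return 0;
}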
