Binary Analysis for Measurement and Attribution of Program Performance


Nathan R. Tallent    John M. Mellor-Crummey    Michael W. Fagan
Rice University
{tallent,johnmc,mfagan}@rice.edu

Abstract

Modern programs frequently employ sophisticated modular designs. As a result, performance problems cannot be identified from costs attributed to routines in isolation; understanding code performance requires information about a routine's calling context. Existing performance tools fall short in this respect. Prior strategies for attributing context-sensitive performance at the source level either compromise measurement accuracy, remain too close to the binary, or require custom compilers. To understand the performance of fully optimized modular code, we developed two novel binary analysis techniques: 1) on-the-fly analysis of optimized machine code to enable minimally intrusive and accurate attribution of costs to dynamic calling contexts; and 2) post-mortem analysis of optimized machine code and its debugging sections to recover its program structure and reconstruct a mapping back to its source code. By combining the recovered static program structure with dynamic calling context information, we can accurately attribute performance metrics to calling contexts, procedures, loops, and inlined instances of procedures. We demonstrate that the fusion of this information provides unique insight into the performance of complex modular codes. This work is implemented in the HPCTOOLKIT performance tools. (HPCTOOLKIT is an open-source suite of performance tools available from http://hpctoolkit.org.)

Categories and Subject Descriptors  C.4 [Performance of systems]: Measurement techniques, Performance attributes.

General Terms  Performance, Measurement, Algorithms.

Keywords  Binary analysis, Call path profiling, Static analysis, Performance tools, HPCTOOLKIT.

PLDI'09, June 15–20, 2009, Dublin, Ireland. Copyright © 2009 ACM 978-1-60558-392-1/09/06.

1. Introduction

Modern programs frequently employ sophisticated modular designs that exploit object-oriented abstractions and generics. Composition of C++ algorithm and data structure templates typically yields loop nests spread across multiple levels of routines. To improve the performance of such codes, compilers inline routines and optimize loops. However, careful hand-tuning is often necessary to obtain top performance. To support tuning of such code, performance analysis tools must pinpoint context-sensitive inefficiencies in fully optimized applications.

Several contemporary performance tools measure and attribute execution costs to calling context in some form. However, when applied to fully optimized applications, existing tools fall short for two reasons. First, current calling context measurement techniques are unacceptable because they either significantly perturb program optimization and execution with instrumentation, or rely on compiler-based information that is sometimes inaccurate or unavailable, which causes failures while gathering certain calling contexts. Second, by inlining procedures and transforming loops, optimizing compilers introduce a significant semantic gap between the binary and the source code. Thus, prior strategies for attributing context-sensitive performance at the source level either compromise measurement accuracy or remain too close to the object code.

To clarify, we consider the capabilities of some popular tools using three related categories: calling context representation, measurement technique, and attribution technique.

Calling context representation.  Performance tools typically attribute performance metrics to calling context using a call graph or call path profile. Two widely used tools that collect call graph profiles are gprof [11] and Intel's VTune [15]. A call graph profile consists of a node for each procedure and a set of directed edges between nodes; an edge exists from node p to node q if p calls q. To represent performance measurements, edges and nodes are weighted with metrics. Call graph profiles are often insufficient for modular applications because a procedure p that appears on multiple distinct paths is represented with one node, resulting in shared paths and cycles. Consequently, with a call graph profile it is in general not possible to assign costs to p's full calling context, or even to long portions of it. To remove this imprecision, a call path profile [12] represents the full calling context of p as the path of calls from the program's entry point to p. Call path profiling is necessary to fully understand the performance of modular codes.

Measurement technique.  There are two basic approaches to obtaining calling context profiles: instrumentation and statistical sampling. Instrumentation-based tools use one of three principal instrumentation techniques. Tools such as Tau [26] use source code instrumentation to insert special profiling code into the source program before compilation. In contrast, VTune [15] uses static binary instrumentation to augment application binaries with profiling code. (gprof's [11] instrumentation, though traditionally inserted by a compiler, is effectively in this category.) The third technique is dynamic binary instrumentation.

While source-level instrumentors collect measurements that are easily mapped to source code, their instrumentation can interfere with compiler optimizations such as inlining and loop transformations. As a result, measurements using source-level instrumentation may not accurately reflect the performance of fully optimized code [28]. Binary instrumentors may also compromise optimization; for example, in some compilers gprof-instrumented code cannot be fully optimized.

An important problem with both source and static binary instrumentation is that they require recompilation or binary rewriting of a program and all its libraries. This requirement poses a significant inconvenience for large, complex applications. More critically, the need to see the whole program before run time can lead to 'blind spots,' i.e., portions of the execution that are systematically excluded from measurement. For instance, source instrumentation fails to measure any portion of the application for which source code is unavailable; this frequently includes critical system, math, and communication libraries. For Fortran programs, this approach can also fail to associate costs with intrinsic functions or compiler-inserted array copies. Static binary instrumentation is unable to cope with shared libraries dynamically loaded during execution.

The third approach, dynamic binary instrumentation, supports fully optimized binaries and avoids blind spots by inserting instrumentation in the executing application [4]. Intel's recently released Performance Tuning Utility (PTU) [14] includes a call graph profiler that adopts this approach by using Pin [18]. However, dynamic instrumentation remains susceptible to systematic measurement error because of instrumentation overhead.

Indeed, all three instrumentation approaches suffer in two distinct ways from overhead. First, instrumentation dilates total execution time, sometimes enough to preclude analysis of large production runs or to force users to introduce blind spots a priori via selective instrumentation. For example, because of an average slowdown factor of 8, VTune requires users to limit measurement to so-called 'modules of interest' [15]. Moreover, overhead is even more acute if loops are instrumented. A recent Pin-based [...]

Much prior work on loop attribution either compromises measurement accuracy by relying on instrumentation [22, 26] or is based on context-less measurement [19]. A few sampling-based call path profilers [2, 14, 22] identify loops, but at the binary level. Moseley et al. [22] describe a sampling-based profiler (relying on unwind information) that additionally constructs a dynamic loop/call graph by placing loops within a call graph. However, by not accounting for loop or procedure transformations, this tool attributes performance only to binary-level loops and procedures. Also, by using a dynamic loop/call graph, it is not possible to understand the performance of procedures and loops in their full calling context.

To understand the performance of modular programs, as part of the HPCTOOLKIT performance tools we built hpcrun, a call path profiler that measures and attributes execution costs of unmodified, fully optimized executables to their full calling context, as well as to loops and inlined code. Achieving this result required novel solutions to three problems:

1. To measure dynamic calling contexts, we developed a context-free on-line binary analysis for locating procedure bounds and computing unwind information. We show its effectiveness on applications in the SPEC CPU2006 suite compiled with Intel, Portland Group, and PathScale compilers using peak optimization.

2. To attribute performance to user-level source code, we developed a novel post-mortem analysis of the optimized object code and its debugging sections to recover its program structure and reconstruct a mapping back to its source code. The ability to [...]
