
A Framework for Tracking Memory Accesses in Scientific Applications

Antonio J. Peña                          Pavan Balaji
Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL 60439
Email: [email protected], [email protected]

Abstract—Profiling is of great assistance in understanding and optimizing applications' behavior. Today's profiling techniques help developers focus on the pieces of code leading to the highest penalties according to a given performance metric. In this paper we describe a pair of tools we have extended to complement the traditional algorithm-oriented analysis. Our extended tools provide new object-differentiated profiling capabilities that help software developers and hardware designers (1) understand access patterns, (2) identify unexpected access patterns, and (3) determine whether a particular memory object consistently features a troublesome access pattern. Memory objects found in this way may have gone unnoticed with the traditional profiling approach. This additional view may lead developers to think of different ways of storing data, leveraging different algorithms, or employing different memory subsystems in future heterogeneous memory systems.

I. INTRODUCTION

Analyzing applications' performance has been of interest since the early days of computers, with the objective of exploiting the underlying hardware to the greatest possible extent. As computational power and the performance of data access paths have increased, such analysis has become less crucial in commodity applications. However, software profiling is still widely used to determine the root of unexpected performance issues, and it is highly relevant to the high-end computing community, where code is expected to be highly tuned.

The first modern profiling approaches used code instrumentation to measure the time spent by different parts of applications [1], [2]. Recognizing that the same functional piece of code may behave differently depending on the place from which it has been called (i.e., its stack trace), researchers devised more advanced tools that use call graphs to organize profiling information. This approach is limited, however, to pointing at the conflicting piece of code consuming large portions of execution time; it does not provide any hint about the root cause of that time. For instance, a highly optimized, compute-intensive task may legitimately be spending a considerable amount of time, hiding other possible candidates for optimization.

A more informative approach consists of differentiating time spent in computation from time spent on memory accesses. In current multilevel memory hierarchies, cache hits pose virtually no penalty, whereas cache misses are likely to lead to large numbers of processor stall cycles. Cache misses, therefore, are a commonly used metric to determine whether a performance issue is caused by a troublesome data access pattern.

In this regard, cache simulators have been used in optimizing applications [3], [4]. Their main benefit is flexibility, since they enable the cache parameters to be easily changed and the analysis repeated for different architectures. A disadvantage of this profiling approach, however, is the long execution times caused by the intrinsically large cache simulation and system emulation overheads.

The introduction of hardware counters enabled a relatively accurate and fast method of profiling [5], [6], [7]. These methods are based mainly on sampling the hardware counters at a given frequency, or when triggered by certain events, and relating these measurements with the code being executed. Both automated and user-driven tools exist for this purpose, and in some cases code instrumentation is not required. Unlike their simulator-based counterparts, however, these tools limit the analysis to the real platform on which they are executing. Although different architectures provide different sets of counters, cache misses are commonly available and a widely used profiling metric.

All these approaches are code focused. They point developers to the place in the code showing large execution time or large numbers of cache misses. However, it is not uncommon to access different memory objects from the same line of code. For instance, a matrix-matrix multiplication implementation is very likely to access all three memory buffers in the same line of code. In this case, a code-oriented profiler would not provide enough information to determine the root of the problem.

The approach pursued in this paper is intended to complement that view, differentiating those performance metrics by memory object¹ [8], [9]. The intent of this approach is to expose particular objects showing problematic access patterns throughout the execution lifetime, which may be imperceptible from a traditional approach. This view focuses on data objects rather than lines of code.

In this paper we present a pair of tools we developed on top of state-of-the-art technologies. We incorporated per-object tracing capabilities into the widely used Valgrind instrumentation framework [10]. Next, we extended two of the tools from its large ecosystem to demonstrate the possibilities of this approach: Lackey [11] and Callgrind [4]. We describe the internals of our developments with the hope that they prove useful for subsequent extensions and research, such as profilers and runtime systems targeting heterogeneous memory systems. Indeed, we envision memory-object differentiation to be a basic technique in these emerging systems as a means of determining the most appropriate memory subsystem on which to host the different memory objects according to their access pattern.

The rest of the paper is organized as follows. Section II introduces the Valgrind instrumentation framework as the basis for our developed tools. Sections III and IV describe our modifications to incorporate per-object differentiation capabilities into the Valgrind core and the two tools we target. Section V reviews related work, and Section VI provides concluding remarks and ideas for future work.

¹We use "memory object" to refer to every memory entity as seen from the code level, that is, statically and dynamically allocated variables and buffers.

II. VALGRIND

Valgrind is a generic instrumentation framework. A set of tools making use of this generic core constitutes the Valgrind ecosystem. Valgrind's most-used tool is Memcheck [12], a memory debugger, executed by default on Valgrind's invocation.

Valgrind can be seen as a virtual machine. It performs just-in-time compilation, translating the code to a processor-agnostic intermediate representation (IR), which the tools are free to analyze and modify.² Tools tend to use this process to insert hooks (instrument) that perform different tasks at run time. This code is taken back by the Valgrind core to be executed. The process is performed repeatedly in chunks of code featuring a single entry and multiple exit points, called "super blocks" (SBs), as depicted in Figure 1.

[Fig. 1. Simplified high-level view of the interaction between Valgrind and its tools.]

The main drawback of this process is the incurred overhead. Typically the code translation process itself (ignoring the tool's tasks) poses an overhead of around 4x to 5x.

Valgrind exposes a rich API to its tools, plus a client request mechanism for final applications to interact with the tools if needed. The API enables tools to interact with the Valgrind core, for instance, to inform the core of the required capabilities at initialization time, get debug information about the executed code generated by the compiler (typically code compiled with the -g option), manage stack traces, or get information about the thread status. It also provides a set of high-level containers such as arrays, sets, and hash tables. In addition, it enables tools to intercept the different memory allocation calls and provide specific wrappers for them. The client request mechanism enables capabilities such as starting and stopping the tool instrumentation on demand to save execution time on uninteresting portions of code.

²The tools do not directly interact with the IR itself but, rather, with a high-level application programming interface (API) and processed data structures to manage it.

In the following we introduce the two tools we have extended: Lackey and Callgrind.

A. Lackey

Lackey is an example tool designed to show Valgrind's capabilities and its interaction with tools. Its main functionality is to provide basic statistics about the executed code, such as the number of calls to user-specified functions, the number of conditional branches, the number of superblocks, and the number of guest instructions. It can also provide counts of load, store, and instruction fetch operations. Listing 1 shows sample output from this tool with the memory tracing functionality enabled.

Listing 1. Sample output from Lackey.

I  04e9f363,5
S  7fefff4b8,8
I  04f23660,3
I  04f23663,7
L  05215e60,8
I  04f2366a,6
I  04f23670,6
I  04f23676,2
I  04f23691,3
I  04f23694,3
I  04f23697,2
==16084==
==16084== Counted 1 call to main()
==16084==
==16084== Jccs:
==16084==   total:         567,015
==16084==   taken:         312,383 ( 55%)
==16084==
==16084== Executed:
==16084==   SBs entered:   511,667
==16084==   SBs completed: 357,784
==16084==   guest instrs:  3,279,975
==16084==   IRStmts:       18,712,277
==16084==
==16084== Ratios:
==16084==   guest instrs : SB entered  = 64 : 10
==16084==   IRStmts : SB entered      = 365 : 10
==16084==   IRStmts : guest instr     = 57 : 10
==16084==
==16084== Exit code: 0

Our extensions enable Lackey to differentiate memory accesses to different objects, which will provide application developers and hardware designers with information about raw access patterns to the different memory objects.

B. Callgrind

Callgrind describes itself as "a call-graph generating cache and branch prediction profiler" and is intended to be a profiling tool. By default, it collects the number of instructions, the number of function calls, and the caller-callee relationship among function calls (the call graph). All these data are related to their source lines of code. If the cache simulation is enabled, cache misses can be used as a performance metric.