Automating the Application Data Placement in Hybrid Memory Systems

Harald Servat∗, Antonio J. Peña†, Germán Llort†, Estanislao Mercadal†, Hans-Christian Hoppe∗ and Jesús Labarta†‡
∗Intel Corporation   †Barcelona Supercomputing Center (BSC)   ‡Universitat Politècnica de Catalunya (UPC)

Abstract—Multi-tiered memory systems, such as those based on Intel® Xeon Phi™ processors, are equipped with several memory tiers with different characteristics including, among others, capacity, access latency, bandwidth, energy consumption, and volatility. The proper distribution of the application data objects into the available memory layers is key to shortening the time-to-solution, but the way developers and end-users determine the most appropriate memory tier in which to place the application data objects has not been properly addressed to date.

In this paper we present a novel methodology to build an extensible framework that automatically identifies and places the application's most relevant memory objects into the Intel Xeon Phi fast on-package memory. Our proposal works on top of in-production binaries by first exploring the application behavior and then substituting the dynamic memory allocations. This makes the proposal valuable even for end-users who cannot modify the application source code. We demonstrate the value of a framework based on our methodology for several relevant HPC applications, using different allocation strategies to help end-users improve performance with minimal intervention. The results of our evaluation reveal that our proposal is able to identify the key objects to promote into fast on-package memory in order to optimize performance, even surpassing hardware-based solutions.

Index Terms—heterogeneous memory, hybrid memory, high-bandwidth memory, performance analysis, PEBS, sampling, instrumentation

I. INTRODUCTION

Hybrid memory systems (HM)¹ accommodate memories featuring different characteristics such as capacity, bandwidth, latency, energy consumption, or volatility. A recent example of an HM processor is the Intel® Xeon Phi™, which contains two memory systems: DDR and on-package Multi-Channel DRAM (MCDRAM) [1]. While HM systems present opportunities in different fields, their efficient usage requires prior application knowledge because developers need to determine which data objects to place in which of the available memory tiers. A common objective is to shorten the application time-to-solution, which translates into placing the appropriate data objects in the fastest memory. However, fast memory is a scarce resource and the application working set may not fit. Consequently, it is important to characterize the application behavior to identify the (critical) data that benefits the most from being hosted in fast memory and, if the fast memory does not suffice, keep the non-critical data in slower memory.

¹Also known as heterogeneous or multi-tiered memory systems.

Tools like EVOP [2], ADAMANT [3], MACPO [4] and Intel® Advisor [5] rely solely on instruction-level instrumentation to monitor data object allocations and their respective accesses, advising the user about the most accessed variables and even their access patterns. These metrics are valuable for understanding the application behavior, but the imposed overhead limits their relevance: it alters the application performance and generates a daunting amount of data for in-production application runs, leading to long analysis times.

To overcome these limitations, processor manufacturers have augmented their performance monitoring units with sampling mechanisms that provide rich information, including the referenced memory address. Precise Event-Based Sampling (PEBS) is the implementation of such a feature in recent Intel processors [6]. Performance analysis tools such as HPCToolkit [7], MemAxes [8], Extrae [9], and Intel® VTune™ Amplifier [10] use a hybrid approach, combining instrumentation to track data allocations with PEBS to monitor the application data references. This approach enables exploring in-production executions with a reduced overhead, at the cost of providing statistical approximations, although for long runs the approximations resemble the actual results.

These tools typically identify the data structures associated with metrics such as high-latency loads or a high number of cache misses, but they leave the tedious work of substituting the memory allocation calls to the application developer. This paper advances the current state of the art by introducing a novel framework design to help end-users, application developers and processor architects understand the usage of HM systems, and to automatically promote the critical data to the appropriate memory layer. The framework design consists of four stages: (1) low-overhead data collection using hardware-based sampling mechanisms; (2) attribution of a cost, based on Last-Level Cache (LLC) misses, to each data object; (3) distribution of the data objects for a given memory configuration; and (4) re-execution of the application binary, automatically promoting the different data objects to the proper memory tier. The first two stages rely on unmodified open-source tools, while the third stage is a derivative of an already existing tool and the fourth stage relies on a newly developed interposition library.

The contributions of this paper include:
1) the design of an extensible framework to automatically
distribute the data objects of in-production binaries in HM systems targeting performance, and its implementation for Intel Xeon Phi processors;
2) an exploration of several strategies to help determine which application variables to place on which memory tier in HM systems;
3) the evaluation of the proposed distribution approaches on a set of well-known benchmarks using the presented framework methodology, including a comparison with already existing hardware and software solutions; and
4) the proposal of a novel metric to report the efficient use of the fast on-package memory by applications.

The rest of this paper is organized as follows. Section II contextualizes the presented work with already existing state-of-the-art tools and methodologies. Section III describes the framework and its components in detail. In Section IV we put the framework to use on several benchmarks and applications while analyzing the obtained results. Finally, Section V draws conclusions and discusses possible future research directions.

II. RELATED WORK

There exist several alternatives for taking advantage of the MCDRAM on Intel Xeon Phi processors. The user can benefit from the fast on-package memory transparently by using it as a direct-mapped LLC. However, if MCDRAM is configured in flat mode (i.e., it sits on a different part of the address space), then the easiest alternative for the user is to rely on the numactl command to place as much application data as possible into the fast memory. Another alternative is to use the autohbw library provided by the memkind package [11]. This library is injected into the application before process execution and forwards dynamic allocations into MCDRAM if the requested memory is within a user-given size range (as long as it fits). The most tedious situation requires the developer to learn (somehow) about the application behavior with respect to main memory accesses and manually change the memory allocations so that they reside on MCDRAM using memkind. Even […]

[…] references to a memory simulator that calculates statistics such as cache hits and misses for a given cache organization. SLO [13] suggests locality optimizations by analyzing the application reuse paths to find the root causes of poor data locality. This tool extends the GCC compiler to capture the application's memory accesses, function calls and loops to track data reuses, and then analyzes the reuse paths to suggest code loop transformations. MACPO captures memory traces and computes metrics for the memory access behavior of source-level data structures. The tool uses PerfExpert [14] to identify code regions with memory-related inefficiencies, then employs the LLVM compiler to instrument the memory references, and finally calculates several reuse factors and the number of data streams in a loop nest. Intel Advisor is a performance analysis tool that focuses on thread and vector performance, and it also explores the memory locality characteristics of a user-given code. The tool relies on PIN [15] to instrument binaries at the instruction level, allowing correlation of the instructions and the memory access patterns. Tareador [16] is a tool that estimates the amount of parallelism that can be extracted from a serial application using a task-based data-flow programming model. It employs dynamic instrumentation to monitor the memory accesses of delimited regions of code to determine whether they can run simultaneously without data race conditions, and then it simulates the application execution based on this outcome. EVOP is an emulator-based data-oriented profiling tool to analyze actual program executions in a system equipped only with a DRAM-based memory [17]. EVOP uses dynamic instrumentation to monitor the memory references in order to detect which memory structures are the most referenced, and then estimates the CPU stall cycles incurred by the different memory objects to decide their optimal placement in a heterogeneous memory system by means of the dmem advisor tool [2]. ADAMANT uses the PEBIL instrumentation package [18] and includes tools to characterize application data objects, […]
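The flat-mode alternatives described above can be illustrated with the following command lines. These are a sketch, not taken from the paper: the NUMA node number and the size range are illustrative (on many Xeon Phi systems MCDRAM in flat mode is exposed as NUMA node 1), and the autohbw environment variables are those documented by the memkind package.

```shell
# Flat mode: bind all allocations of ./app to the MCDRAM NUMA node.
numactl --membind=1 ./app

# Prefer MCDRAM but fall back to DDR once it is exhausted.
numactl --preferred=1 ./app

# autohbw (memkind package): redirect heap allocations within a
# user-given size range to MCDRAM without modifying the source code.
LD_PRELOAD=libautohbw.so AUTO_HBW_SIZE=1M:8M ./app
```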
