Scalable Tools for Non-Intrusive Performance Debugging of Parallel Linux Workloads

Robert Schöne∗  Joseph Schuchart∗  Thomas Ilsche∗  Daniel Hackenberg∗
∗ZIH, Technische Universität Dresden
{robert.schoene|joseph.schuchart|thomas.ilsche|daniel.hackenberg}@tu-dresden.de

Abstract

There are a variety of tools to measure the performance of Linux systems and the applications running on them. However, the resulting performance data is often presented in plain text format or only with a very basic user interface. For large systems with many cores and concurrent threads, it is increasingly difficult to present the data in a clear way for analysis. Moreover, certain performance analysis and debugging tasks require the use of a high-resolution time-line based approach, again entailing data visualization challenges. Tools in the area of High Performance Computing (HPC) have long been able to scale to hundreds or thousands of parallel threads and are able to help find performance anomalies. We therefore present a solution to gather performance data using Linux performance monitoring interfaces. A combination of sampling and careful instrumentation allows us to obtain detailed performance traces with manageable overhead. We then convert the resulting output to the Open Trace Format (OTF) to bridge the gap between the recording infrastructure and HPC analysis tools. We explore ways to visualize the data by using the graphical tool Vampir. The combination of established Linux and HPC tools allows us to create an interface for easy navigation through time-ordered performance data grouped by thread or CPU and to help users find opportunities for performance optimizations.

1 Introduction and Motivation

GNU/Linux has become one of the most widely used operating systems, ranging from mobile devices, to laptop, desktop, and server systems, to large high-performance computing (HPC) installations. Performance is a crucial topic on all these platforms, e.g., for extending battery life in mobile devices or to ensure maximum ROI of servers in production environments. However, performance tuning is still a complex task that often requires specialized tools to gain insight into the behavior of applications. Today there are only a small number of tools available to developers for understanding the run-time performance characteristics of their code, both on the kernel and the user land side. Moreover, the increasing parallelism of modern multi- and many-core processors creates an additional challenge, since scalability is usually not a major focus of standard performance analysis tools. In contrast, scalability of applications and performance analysis tools has long been a topic in the High Performance Computing (HPC) community. Nowadays, 96.4 % of the 500 fastest HPC installations run a Linux OS, as compared to 39.6 % in 2003.¹ Thus, the HPC community could benefit from better integration of Linux-specific performance monitoring interfaces in their tools, as these currently target parallel programs and rely on instrumenting calls to parallelization libraries such as the Message Passing Interface (MPI) and OpenMP. On the other hand, the Linux community could benefit from more scalable tools. We are therefore convinced that the topic of performance analysis should be solved mutually by bringing together the expertise of both communities.

In this paper, we present an approach towards scalable performance analysis for Linux using the perf infrastructure, which was introduced with Linux 2.6.31 [8] and has undergone intensive development since then. This infrastructure allows users to access hardware performance counters, kernel-specific events, and information about the state of running applications. Additionally, we present a new visualization method for ftrace-based kernel instrumentation.

The remainder of this paper is structured as follows: Section 2 presents an overview of existing Linux performance analysis tools. Section 3 outlines the process of acquiring and processing performance data from the perf and ftrace infrastructures, followed by the presentation of different use-cases in Section 4.

¹Based on November 2003 and November 2013 statistics on http://top500.org

Measurement Type   Kernel Interface           Common Userspace Tools and Libraries
Instrumentation    ptrace                     gdb, strace, ltrace
                   ftrace                     trace-cmd, kernelshark, ktap
                   kernel tracepoints         LTTng, SystemTap, ktap, perf userspace tools
                   dynamic probes             SystemTap, ktap, perf userspace tools
Sampling           perf events                perf userspace tools, PAPI
                   OProfile (kernel module)   OProfile daemon and tools

Table 1: Common Linux Performance Analysis Interfaces and Tools

2 Linux Performance Monitoring Interfaces and Established Tools

Several interfaces are available in the Linux kernel to enable the monitoring of processes and the kernel itself. Based on these interfaces, well-established userspace tools and libraries are available to developers for various monitoring tasks (see Table 1). The ptrace [15] interface can be used to attach to processes but is not suitable for gaining information about the performance impact of kernel functions. ftrace [7] is a built-in instrumentation feature of the Linux kernel that enables kernel function tracing. It uses the -pg option of gcc to call a special function from every function in the kernel. This special function usually executes NOPs. An API, which is located in the Debugfs, can be used to replace the NOPs with a tracing function. trace-cmd [25] is a command line tool that provides comfortable access to the ftrace functionality. KernelShark [26] is a GUI for trace-cmd, which is able to display trace information about calls within the Linux kernel based on ftrace events. This allows users to understand the system behavior, e.g., which processes trigger kernel functions and how tasks are scheduled. However, the KernelShark GUI is not scalable to large numbers of CPU cores and does not provide integration of sampling data, e.g., to present context information about application call-paths. Nevertheless, support for ftrace is currently being merged into the perf userspace tools [16].

Kernel tracepoints [3] are instrumentation points in different kernel modules that provide event-specific information, e.g., which process is scheduled to which CPU for a scheduling event or which hints have been used when allocating pages. kprobes are dynamic tracepoints that can be added to the kernel at run-time [12] by using the perf probe command. Such probes can also be inserted in userspace programs and libraries (uprobes). The perf_event infrastructure can handle kprobes and uprobes as well as tracepoint events. This allows the perf userspace tools to record the occurrences of these events and to integrate them into traces.

The Linux Trace Toolkit next generation (LTTng) [10, 11] is a tracing tool that allows users to measure and analyze user space and kernel space and is scalable to large core counts. It writes traces in the Common Trace Format, which is supported by several analysis tools. However, these tools do not scale well to traces with large event counts. SystemTap [28] provides a scripting interface to access and react on kernel probes and ftrace points. Even though it is possible to write a generic (kernel) tracing tool with stap scripts, it is not intended for such a purpose. ktap is similar to SystemTap with the focus on kernel tracing. It supports tracepoints, dynamic probes, ftrace, and others.

In addition to the instrumentation infrastructure support in the kernel, measurement points can also be triggered by sampling. The perf_event infrastructure provides access to hardware-based sampling that is implemented on x86 processors with performance monitoring units (PMUs) that trigger APIC interrupts [5, 14]. On such an interrupt, the call-graph can be captured and written to a trace, which is usually done with the perf record command-line tool but can also be achieved with low-level access to a memory-mapped buffer that is shared with the kernel. In a post-mortem step, tools like perf script and perf report use debugging symbols to map the resulting events in a trace file recorded by perf record to function names. PAPI [6, 23] is the de facto standard library for reading performance counter information and is supported by most HPC tools. On current Linux systems, PAPI uses the perf_event interface via the libpfm4 library. In addition to performance counter access, PAPI is also able to use this interface for sampling purposes. The OProfile kernel module [19] is updated regularly to support new processor architectures. It provides access to hardware PMUs that can be used for sampling, e.g., by the OProfile daemon [20].

However, none of the Linux performance analysis tools are capable of processing very large amounts of trace data, and none feature scalable visualization interfaces. Scalable HPC performance analysis tools such as Vampir [24], Score-P [18], HPCToolkit [1], and TAU [27], on the other hand, usually lack the close integration with the Linux kernel's performance and debugging interfaces.

3 Performance Data Acquisition and Conversion

Other performance counters, e.g., instructions or cache-misses, can be included in the timeline to get a better understanding of the efficiency of the running code. For example, the instruction counter allows us to determine the instructions per cycle (IPC) value for tasks and CPUs, and adding cache-misses provides insights into the memory usage.

As an alternative to sampling, instrumentation can provide fine-grained information regarding the order and context of function calls. For debugging purposes, sampling is not a viable option. Thus, we use the kernel tracing infrastructure ftrace to analyze kernel internal behavior. One alternative would be a combination of trace-cmd and KernelShark. However, KernelShark is limited in terms of scalability and visualization of important information.
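A conversion step like the one outlined here has to decode perf's output before it can emit trace events grouped by thread or CPU. One simple route, not necessarily the authors' implementation, is to parse the textual output of `perf script`. The Python sketch below assumes the default text layout of that output, which varies between perf versions; the sample line and its values are purely hypothetical:

```python
import re
from collections import namedtuple

# One record per sample: which task ran where, when, and which event fired.
Sample = namedtuple("Sample", "comm pid cpu time event")

# Matches the default textual output of `perf script`; the exact field
# layout differs between perf versions, so this pattern is an assumption.
LINE_RE = re.compile(
    r"^\s*(?P<comm>\S+)\s+(?P<pid>\d+)\s+\[(?P<cpu>\d+)\]\s+"
    r"(?P<time>\d+\.\d+):\s+(?:\d+\s+)?(?P<event>[\w-]+):"
)

def parse_line(line):
    """Turn one perf script line into a Sample, or None if it does not match."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return Sample(m.group("comm"), int(m.group("pid")), int(m.group("cpu")),
                  float(m.group("time")), m.group("event"))

# Illustrative input line (hypothetical values, not real measurement data):
line = "sleep  4639 [003]  5678.123456:      1 cycles:  ffffffff8105e3b4 [unknown]"
s = parse_line(line)
print(s.comm, s.pid, s.cpu, s.event)  # prints: sleep 4639 3 cycles
```

Each `Sample` carries enough context — task name, PID, CPU, timestamp, and event name — for a converter to sort events into the per-thread or per-CPU timelines that tools like Vampir display.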

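Deriving a per-interval IPC value from the instruction and cycle counters, as described above, is plain arithmetic over successive cumulative readings. A minimal sketch, using made-up counter values rather than real measurements:

```python
def ipc_timeline(samples):
    """Derive instructions-per-cycle (IPC) values for successive intervals
    from cumulative (timestamp, instructions, cycles) counter readings."""
    result = []
    for (t0, i0, c0), (t1, i1, c1) in zip(samples, samples[1:]):
        d_cycles = c1 - c0
        # Guard against an empty interval where the cycle counter did not move.
        ipc = (i1 - i0) / d_cycles if d_cycles else 0.0
        result.append((t0, t1, ipc))
    return result

# Hypothetical cumulative readings for one task: (time in s, instructions, cycles)
samples = [(0.0, 0, 0), (1.0, 2_000_000, 1_000_000), (2.0, 2_500_000, 2_000_000)]
for t0, t1, ipc in ipc_timeline(samples):
    print(f"{t0:.1f}-{t1:.1f}s: IPC = {ipc:.2f}")
# prints:
# 0.0-1.0s: IPC = 2.00
# 1.0-2.0s: IPC = 0.50
```

A drop in IPC between intervals, as in the second interval here, is the kind of anomaly that becomes visible when such derived metrics are plotted alongside the event timeline.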