Black-Box Problem Diagnosis in Parallel File Systems

Michael P. Kasick1, Jiaqi Tan2, Rajeev Gandhi1, Priya Narasimhan1
1 Electrical & Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA 15213–3890
{mkasick, rgandhi, priyan}@andrew.cmu.edu
2 DSO National Labs, Singapore, Singapore 118230
[email protected]

Abstract

We focus on automatically diagnosing different performance problems in parallel file systems by identifying, gathering and analyzing OS-level, black-box performance metrics on every node in the cluster. Our peer-comparison diagnosis approach compares the statistical attributes of these metrics across I/O servers, to identify the faulty node. We develop a root-cause analysis procedure that further analyzes the affected metrics to pinpoint the faulty resource (storage or network), and demonstrate that this approach works commonly across stripe-based parallel file systems. We demonstrate our approach for realistic storage and network problems injected into three different file-system benchmarks (dd, IOzone, and PostMark), in both PVFS and Lustre clusters.

1 Introduction

File systems can experience performance problems that can be hard to diagnose and isolate. Performance problems can arise from different system layers, such as bugs in the application, resource exhaustion, misconfigurations of protocols, or network congestion. For instance, Google reported the variety of performance problems that occurred in the first year of a cluster's operation [10]: 40–80 machines saw 50% packet-loss, thousands of hard drives failed, connectivity was randomly lost for 30 minutes, 1000 individual machines failed, etc. Often, the most interesting and trickiest problems to diagnose are not the outright crash (fail-stop) failures, but rather those that result in a "limping-but-alive" system (i.e., the system continues to operate, but with degraded performance). Our work targets the diagnosis of such performance problems in parallel file systems used for high-performance cluster computing (HPC).

Large scientific applications consist of compute-intense behavior intermixed with periods of intense parallel I/O, and therefore depend on file systems that can support high-bandwidth concurrent writes. Parallel Virtual File System (PVFS) [6] and Lustre [23] are open-source, parallel file systems that provide such applications with high-speed data access to files. PVFS and Lustre are designed as client-server architectures, with many clients communicating with multiple I/O servers and one or more metadata servers, as shown in Figure 1.

Problem diagnosis is even more important in HPC, where the effects of performance problems are magnified due to long-running, large-scale computations. Current diagnosis of PVFS problems involves the manual analysis of client/server logs that record PVFS operations through code-level print statements. Such (white-box) problem diagnosis incurs significant runtime overheads, and requires code-level instrumentation and expert knowledge.

Alternatively, we could consider applying existing problem-diagnosis techniques. Some techniques specify a service-level objective (SLO) first and then flag runtime SLO violations—however, specifying SLOs might be hard for arbitrary, long-running HPC applications. Other diagnosis techniques first learn the normal (i.e., fault-free) behavior of the system and then employ statistical/machine-learning algorithms to detect runtime deviations from this learned normal profile—however, it might be difficult to collect fault-free training data for all of the possible workloads in an HPC system.

We opt for an approach that does not require the specification of an SLO or the need to collect training data for all workloads. We automatically diagnose performance problems in parallel file systems by analyzing the relevant black-box performance metrics on every node. Central to our approach is our hypothesis (borne out by observations of PVFS's and Lustre's behavior) that fault-free I/O servers exhibit symmetric (similar) trends in their storage and network metrics, while a faulty server appears asymmetric (different) in comparison. A similar hypothesis follows for the metadata servers. From these hypotheses, we develop a statistical peer-comparison approach that automatically diagnoses the faulty server and identifies the root cause, in a parallel file-system cluster.

The advantages of our approach are that it (i) exhibits low overhead, as collection of OS-level performance metrics imposes low CPU, memory, and network demands; (ii) minimizes training data for typical HPC workloads by distinguishing between workload changes and performance problems with peer-comparison; and (iii) avoids SLOs by being agnostic to absolute metric values in identifying whether/where a performance problem exists.

We validate our approach by studying realistic storage and network problems injected into three file-system benchmarks (dd, IOzone, and PostMark) in two parallel file systems, PVFS and Lustre. Interestingly, but perhaps unsurprisingly, our peer-comparison approach identifies the faulty node even under workload changes (usually a source of false positives for most black-box problem-diagnosis techniques). We also discuss our experiences, particularly the utility of specific metrics for diagnosis.

[Figure 1: Architecture of parallel file systems, showing the I/O servers and the metadata servers. Clients connect over the network to the I/O servers (ios0, ios1, ios2, ..., iosN) and the metadata servers (mds0, ..., mdsM).]
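To make the peer-comparison idea concrete, the following sketch (in Python) shows one way a culprit server could be flagged from windowed samples of a single OS-level metric, such as disk throughput. It is a minimal illustration under assumptions of our own, not the paper's actual algorithm: the function name, the median-based summary, and the deviation threshold are all invented here for clarity.

    from statistics import median

    def flag_asymmetric_servers(samples_by_server, threshold=0.5):
        """Flag servers whose metric diverges from the peer median.

        samples_by_server: dict mapping server name to a list of samples
        of one metric (e.g., per-second disk throughput) over one window.
        threshold: relative deviation from the peer median beyond which a
        server is treated as asymmetric (an illustrative value).
        """
        # Summarize each server's window with a robust statistic.
        summaries = {srv: median(vals) for srv, vals in samples_by_server.items()}
        peer_median = median(summaries.values())

        flagged = []
        for srv, summary in summaries.items():
            # Judge each server against its peers, not against an absolute SLO.
            deviation = abs(summary - peer_median) / max(abs(peer_median), 1e-9)
            if deviation > threshold:
                flagged.append(srv)
        return flagged

    # Example: ios2's throughput collapses relative to its peers.
    window = {
        "ios0": [101, 99, 103, 98],
        "ios1": [97, 102, 100, 99],
        "ios2": [12, 15, 9, 11],    # asymmetric: the likely culprit
        "ios3": [100, 96, 104, 101],
    }
    print(flag_asymmetric_servers(window))  # ['ios2']

Because each server is judged against its peers rather than against an absolute target, a workload change that shifts all servers together raises no alarm; this is the property that lets the approach avoid SLOs and extensive training data.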
2 Problem Statement

Our research is motivated by the following questions: (i) can we diagnose the faulty server in the face of a performance problem in a parallel file system, and (ii) if so, can we determine which resource (storage or network) is causing the problem?

Goals. Our approach should exhibit:

• Application-transparency so that PVFS/Lustre applications do not require any modification. The approach should be independent of PVFS/Lustre operation.
• Minimal false alarms of anomalies in the face of legitimate behavioral changes (e.g., workload changes due to increased request rate).
• Minimal instrumentation overhead so that instrumentation and analysis do not adversely impact PVFS/Lustre's operation.
• Specific problem coverage that is motivated by anecdotes of performance problems in a production parallel file-system deployment (see § 4).

Non-Goals. Our approach does not support:

• Code-level debugging. Our approach aims for coarse-grained problem diagnosis by identifying the culprit server, and where possible, the resource at fault. We currently do not aim for fine-grained diagnosis that would trace the problem to lines of PVFS/Lustre code.
• Pathological workloads. Our approach relies on I/O servers exhibiting similar request patterns. In parallel file systems, the request pattern for most workloads is similar across all servers—requests are either large enough to be striped across all servers or random enough to result in roughly uniform access. However, some workloads (e.g., overwriting the same portion of a file repeatedly, or only writing stripe-unit-sized records to every stripe-count offset) make requests distributed to only a subset, possibly one, of the servers.
• Diagnosis of non-peers. Our approach fundamentally cannot diagnose performance problems on non-peer nodes (e.g., Lustre's single metadata server).

Hypotheses. We hypothesize that, under a performance fault in a PVFS or Lustre cluster, OS-level performance metrics should exhibit observable anomalous behavior on the culprit servers. Additionally, with knowledge of PVFS/Lustre's overall operation, we hypothesize that the statistical trends of these performance data: (i) should be similar (albeit with inevitable minor differences) across fault-free I/O servers, even under workload changes, and (ii) will differ on the culprit I/O server, as compared to the fault-free I/O servers.
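Building on these hypotheses, the root-cause step mentioned in the abstract can be pictured as checking which family of the flagged server's metrics (storage or network) deviates from its peers. The sketch below is illustrative only: the metric names and the 50% deviation threshold are assumptions of this example, not the paper's actual metric set or procedure.

    from statistics import median

    # Illustrative metric families; the paper's exact metrics may differ.
    STORAGE_METRICS = {"rd_sec_per_s", "wr_sec_per_s", "await_ms"}
    NETWORK_METRICS = {"rx_bytes_per_s", "tx_bytes_per_s"}

    def infer_faulty_resource(metrics_by_server, culprit, threshold=0.5):
        """Return the metric families in which the culprit deviates from peers.

        metrics_by_server: dict mapping server name to a dict of windowed
        metric values (metric name -> value); culprit is the flagged server.
        """
        peers = [srv for srv in metrics_by_server if srv != culprit]
        affected = set()
        for metric in STORAGE_METRICS | NETWORK_METRICS:
            peer_med = median(metrics_by_server[p][metric] for p in peers)
            culprit_val = metrics_by_server[culprit][metric]
            # Flag the family if the culprit strays far from the peer median.
            if abs(culprit_val - peer_med) > threshold * max(abs(peer_med), 1e-9):
                affected.add("storage" if metric in STORAGE_METRICS else "network")
        return affected

Intuitively, a degraded disk should surface in the storage family while the network metrics continue to track the peers, and a congested or faulty network link should do the opposite; in practice, secondary effects can blur the two, which motivates a more careful root-cause analysis procedure like the one the paper develops.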
Assumptions. We assume that a majority of the I/O servers exhibit fault-free behavior, that all peer server nodes have identical software configurations, and that the physical clocks on the various nodes are synchronized (e.g., via NTP) so that performance data can be temporally correlated across the system. We also assume that clients and servers are composed of homogeneous hardware and execute homogeneous workloads. These assumptions are reasonable in HPC environments, where homogeneity is both deliberate and critical to large-scale operation. Homogeneity of hardware and client workloads is not strictly required for our diagnosis approach (§ 12 describes our experience with heterogeneous hardware). However, we have not yet tested our approach with deliberately heterogeneous hardware or workloads.

3 Background: PVFS & Lustre

PVFS clusters consist of one or more metadata servers and multiple I/O servers that are accessed by one or more PVFS clients, as shown in Figure 1. The PVFS server consists of a single monolithic user-space daemon that may act in either or both of the metadata and I/O server roles. PVFS clients consist of stand-alone applications that use the PVFS library (libpvfs2) or MPI applications that use the ROMIO MPI-IO library (which supports PVFS internally) to invoke file operations on one or more servers. PVFS can also plug in to the Linux kernel's VFS interface via a kernel module that forwards the client's syscalls (requests) to a user-space PVFS client daemon that then invokes operations on the servers. This kernel client allows PVFS file systems to be mounted under Linux like other remote file systems such as NFS.

With PVFS, file-objects are distributed across all I/O servers in a cluster. In particular, file data is striped across each I/O server with a default stripe size of 64 kB. For each file-object, the first stripe segment is located on the I/O server to which the object handle is assigned. Subsequent segments are accessed in a round-robin manner.
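The striping layout just described amounts to a simple mapping from byte offset to I/O server. The sketch below assumes the default 64 kB stripe size given in the text; the function name and the first_server parameter are conveniences invented for this illustration (PVFS's actual data distribution is configurable).

    DEFAULT_STRIPE_SIZE = 64 * 1024  # PVFS's default stripe size: 64 kB

    def server_for_offset(offset, num_io_servers, first_server=0,
                          stripe_size=DEFAULT_STRIPE_SIZE):
        """Return the index of the I/O server that stores the given byte offset.

        Stripe segments are laid out round-robin, starting at the server
        to which the file's object handle is assigned (first_server).
        """
        stripe_index = offset // stripe_size
        return (first_server + stripe_index) % num_io_servers

    # Example: with 4 I/O servers and the handle assigned to ios1,
    # bytes [0, 64 kB) land on ios1, [64 kB, 128 kB) on ios2, and so on,
    # wrapping back to ios1 after ios0.
    assert server_for_offset(0, 4, first_server=1) == 1
    assert server_for_offset(64 * 1024, 4, first_server=1) == 2
    assert server_for_offset(4 * 64 * 1024, 4, first_server=1) == 1

It is this uniform, striped layout that makes the I/O servers peers with similar request patterns for most workloads, which is what the peer-comparison approach depends on.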
... of a rogue process. For example, if two remote file servers (e.g., PVFS and GPFS) are collocated, the startup of a second server (GPFS) might negatively impact the performance of the server already running (PVFS) [5].