Uncovering Errors: The Cost of Detecting Silent Data Corruption

Sumit Narayan, John A. Chandy
Dept. of Electrical & Computer Engineering, University of Connecticut, Storrs, CT 06269
{sumit.narayan,john.chandy}@uconn.edu

Samuel Lang, Philip Carns, Robert Ross
Mathematics & Computer Science Division, Argonne National Laboratory, Argonne, IL 60439
{slang,carns,rross}@mcs.anl.gov

ACM acknowledges that this contribution was authored or co-authored by a contractor or affiliate of the U.S. Government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only. Supercomputing PDSW '09, Nov. 15, 2009, Portland, OR, USA. Copyright © 2009 ACM ISBN 978-1-60558-883-4/09/11 ...$10.00.

ABSTRACT

Data integrity is pivotal to the usefulness of any storage system. It ensures that the data stored is free from any modification throughout its existence on the storage medium. Hash functions such as cyclic redundancy checks or checksums are frequently used to detect data corruption during its transmission to permanent storage or its stay there. Without these checks, such data errors usually go undetected and unreported to the system and hence are not communicated to the application. They are referred to as "silent data corruption." When an application reads corrupted or malformed data, it leads to incorrect results or a failed system. Storage arrays in leadership computing facilities comprise several thousands of drives, thus increasing the likelihood of such failures. These environments mandate a file system capable of detecting data corruption. Parallel file systems have traditionally omitted integrity checks because of their high computational cost, particularly in dealing with unaligned data requests from scientific applications. In this paper, we assess the cost of providing data integrity on a parallel file system. We present an approach that provides this capability with as little as 5% overhead for writes and 22% overhead for reads for aligned requests, and some additional cost for unaligned requests.

Keywords

cyclic redundancy checks · data integrity · parallel file systems

1. INTRODUCTION

File systems use disk drives to provide a persistent medium for applications to store data. These disk drives are complex electro-mechanical systems with several moving parts, making them unreliable in terms of availability and integrity of the data. They could suffer from partial data loss or complete drive failure. They are controlled by sophisticated firmware and software to communicate with other components of the system. This code can carry bugs and cause data corruption. Given the uncertainties present with storing data on disk, such drives are thus referred to as "untrusted disk" [1]. But despite such limitations, disk drives are the most common form of storage medium, primarily because of their ability to provide good I/O performance at a very low price. Their price advantage motivated the design of redundant arrays of inexpensive disks (RAID) [2], where several cheap drives are put together to provide data redundancy and high bandwidth. However, increasing the number of drives also increases the chances of failures. Most of the reliability measures taken by RAID focus on protecting data from complete disk failures. Mechanisms such as parity or mirroring can help with restoring data in cases of such failures but don't, by themselves, guarantee the integrity of the data. The reason is that RAID does not have a notion of what is "correct" data from the user's point of view.

Disk drives suffer from another type of failure: data corruption. Data corruption refers to unintended modification of data during its transmission, storage, or retrieval. These changes go undetected or unreported by the storage system. They result in incorrect data being flushed to the drive or sent to the application, depending on the direction of the data operation. These kinds of errors can occur for several reasons. Misdirected writes, torn writes, data path corruption, and bit rot are some of the ways in which data corruption can occur, resulting in incorrect data being stored on the block. Even if the data was stored correctly, a bug in the firmware code could cause incorrect data to be read from the disk, resulting in an error. Since these kinds of data corruption are silent and are not communicated to the system, they are referred to as "silent data corruption." Detecting such alteration of data requires integrity checks that give users a guarantee that the same analysis of the same data leads to the same results. Hash functions such as cyclic redundancy checks (CRCs) or checksums are frequently used for testing the integrity of the data.

In high-performance computing (HPC) environments, providing an integrity check becomes a particularly hard problem. File systems connected to supercomputers such as IBM's Blue Gene/P comprise large arrays of storage units typically consisting of thousands of drives, thus increasing the probability of such failures. HPC workloads are also a complex mix of concurrent, unaligned accesses of varying sizes, making the design requirements unpredictable. Further, any changes on these machines must be made with minimal impact on performance, because the scientific applications running on them are very sensitive to system throughput.

A recent study of over 1.5 million disk drives in the NetApp database at different sites over 32 months showed that more than 8.5% of all nearline disks and around 1.9% of all enterprise-class disks suffered from silent data corruption [3]. The same study also found that 13% of these nearline disk errors and nearly 38% of enterprise disk errors went undetected by background verification processes. Further analysis of the same dataset showed that over 400,000 checksum mismatches had occurred during the 41 months of observation [4]. Another large-scale study of disk drives showed that faulty code in the storage system's protocol stack accounted for 5–10% of all failures [5]. Apart from these, an in-house study at the European Organization for Nuclear Research (CERN) also concluded that low-level data corruption exists [6]. They ran a simple program to write a 2 GB file on 3,000 nodes every 2 hours for 5 weeks. During this period, they hit 500 errors on 100 nodes, 10% of which were sector-sized or page-sized regions of corrupt data and 80% of which were 64 KB regions of corrupted data. Such a high rate of faults clearly indicates that data corruption is not limited to consumer-grade disks but is also present in enterprise-grade disk drives. In HPC environments, these errors could result in significant loss of both data and computation time. Any faults in these petascale storage systems serving critical scientific applications are intolerable. Hence, we need a better understanding of how to perform data integrity checking in parallel file systems.

In this paper, we study the cost of providing integrity checks in a parallel file system and present results from implementing our ideas in the open source Parallel Virtual File System (PVFS). This paper is organized as follows. We discuss data integrity in more detail in Section 2. In Section 3, we provide a brief overview of PVFS. Section 4 lays out our design and implementation in PVFS, and Section 5 provides some experimental results. We end with a discussion of related work and our conclusions.

2. DATA INTEGRITY

Several methods are available for detecting data modification or silent corruption. Data integrity guarantees users that once their data is stored, it is persistent and perpetually available in the same condition as it was when submitted. As discussed above, however, silent corruption can go undetected by the system [3–5]. CRCs and checksums provide check codes that help uncover such errors. These data verification codes are calculated inline with a write request and written to disk for validation at a later time. For every change to the data, its verification code is also updated accordingly. During a read operation, the hash code is regenerated for the data read from the disk and verified against the code saved during the write. If they match, then the data is complete and verified; otherwise, a read error is sent to the application.

3. THE PARALLEL VIRTUAL FILE SYSTEM

The Parallel Virtual File System (PVFS) is an open source implementation of a parallel file system for high-performance computing applications [7]. It consists of a set of servers providing metadata and data services. It is based on the idea of separating metadata from the data, thus keeping storage management functionality away from the real data access. PVFS achieves high throughput by striping data across many storage nodes. It provides separate API abstractions for network and storage modules that simplify infrastructure-specific implementations. The buffered method interface (BMI) provides a network-independent interface, while Trove provides a persistence layer for manipulating metadata and data streams. PVFS saves a file's data in a directory tree and metadata in a Berkeley DB database, both saved on a local file system. Each node that is part of a file's stripe has a portion of the file saved on its local file system as a bstream. Typically, a file is striped across all available I/O servers, and the only available mechanism for fault tolerance, other than providing RAID on the local file system, is attaching storage to multiple servers with fail-over enabled. This approach, however, provides tolerance only to the failure of a local file system or a complete node; it has no provision for detecting and correcting data errors. Berkeley DB allows checksum verification of pages read from the underlying file system, enabled through a flag, thus protect-
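The write-path/read-path verification cycle described in Section 2 can be sketched in a few lines. This is a minimal illustration, not the PVFS implementation: the in-memory `store` dictionary stands in for the disk, and the helper names are made up for the example.

```python
import zlib

def write_block(store, key, data):
    # Compute the CRC-32 verification code inline with the write
    # and persist it alongside the data.
    store[key] = (data, zlib.crc32(data))

def read_block(store, key):
    # Regenerate the code for the data read back and verify it
    # against the code saved during the write.
    data, saved_crc = store[key]
    if zlib.crc32(data) != saved_crc:
        raise IOError("silent data corruption detected")
    return data
```

A mismatch surfaces as a read error to the application, exactly as the section describes; without the stored code, the corrupted read would succeed silently.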
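The striping described in Section 3 can be illustrated with a small offset-mapping function. This is a sketch assuming simple round-robin striping with a fixed strip size, not PVFS's actual distribution code; the function and parameter names are illustrative.

```python
def locate(offset, strip_size, num_servers):
    # Map a logical file offset to (server index, offset within that
    # server's local bstream) under round-robin striping.
    strip = offset // strip_size               # which strip of the file
    server = strip % num_servers               # server holding that strip
    local = (strip // num_servers) * strip_size + offset % strip_size
    return server, local
```

With a 64 KB strip size and 4 servers, bytes 0–65535 land on server 0, the next 64 KB on server 1, and so on; the fifth strip (offset 262144) wraps back to server 0 at local offset 65536.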
