
2014 IEEE 22nd International Symposium on Modelling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS). DOI 10.1109/MASCOTS.2014.17

PERSES: Data Layout for Low Impact Failures

Avani Wildani∗, Ethan L. Miller†, Ian F. Adams‡, and Darrell D.E. Long†
∗The Salk Institute, †University of California, Santa Cruz, ‡EVault Inc.

Abstract—Growth in disk capacity continues to outpace advances in read speed and device reliability. This has led to storage systems spending increasing amounts of time in a degraded state while failed disks reconstruct. Users and applications that do not use the data on the failed or degraded drives are negligibly impacted by the failure, increasing the perceived performance of the system. We leverage this observation with PERSES, a statistical data allocation scheme to reduce the performance impact of reconstruction after disk failure.

PERSES reduces degradation from the perspective of the user by clustering data on disks such that data with high probability of co-access is placed on the same device as often as possible. Trace-driven simulations show that, by laying out data with PERSES, we can reduce the perceived time lost due to failure over three years by up to 80% compared to arbitrary allocation.

I. INTRODUCTION

Over time, failure events are inevitable in large disk-based storage systems [1], [2]. Even in failure events where data is not lost, there is a system-wide cost for accessing data on the failed disk. The failed disk is inaccessible, and other disks and the interconnecting network links may be saturated. This cost is increasing as growth in disk size outpaces growth in disk, CPU, and network bandwidth [3], [4]. While a 100 GB drive that can be rebuilt at 50 MB/s is fully operational in about half an hour, a terabyte drive at 50 MB/s can take up to six hours to be back at full bandwidth. Additionally, during a rebuild there is often significant network overhead that can have negative side effects on disks that are not even involved in the rebuild.
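As a rough sanity check on these figures (our own back-of-the-envelope arithmetic, not a calculation from the paper), the minimum rebuild time is simply capacity divided by rebuild bandwidth:

    t_{\text{rebuild}} \approx \frac{C}{B}, \qquad
    \frac{100\ \text{GB}}{50\ \text{MB/s}} = 2000\ \text{s} \approx 33\ \text{min}, \qquad
    \frac{1\ \text{TB}}{50\ \text{MB/s}} = 20000\ \text{s} \approx 5.6\ \text{h},

consistent with the half hour and roughly six hours quoted above; in practice, foreground traffic competing with the rebuild can stretch these times further.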
We examine the issue of excessive rebuild time from the perspective of projects: groups of records that are frequently accessed together. If a failed disk contains even a small amount of critical data for multiple projects, all of those projects could suffer degraded performance until the rebuild is completed. Examples include developers losing a header file that a build depends on, a high performance system that loses an input and misses its time slot, or a cloud provider that breaks a performance SLA.

If, however, the failure occurs on a device that is not actively being accessed, it has almost no productivity cost: it is the proverbial tree fallen in a forest. PERSES is a data allocation framework designed to decrease the impact of device failures on the productivity and perceived availability of a storage system. We name our system after PERSES, the Greek titan of cleansing destruction, because it focuses destruction in a storage system onto a small number of projects so that others may thrive. Our goal with PERSES is to isolate failures away from as many projects as possible.

Multi-user cloud storage systems, a primary target of PERSES, are caught in the data deluge. According to the 2013 IDC report, digital content is expected to grow to 4 zettabytes by the end of 2013, a 48% increase over 2012 [5]. Cloud applications such as web hosting and document sharing maintain SLAs that guarantee low response time to users. In this environment, the cloud provider saves money by isolating failures to, and thus breaking SLAs with, as few customers as possible. Although those customers experience more degradation, the marginal cost of additional slow response time is often lower once any slowdown has already occurred.

Exacerbating this degradation, many systems sacrifice rebuild speed in exchange for increased reliability. These systems typically place data evenly across disks in order to distribute, and thus dilute, the impact of failure events. When a failure event does necessitate a rebuild, however, the rebuild can impact a wide array of people or projects even though each may have lost only a very small amount of data. By combining existing reliability measures with selective data layout according to group membership, we can isolate failures so that they impact fewer users or projects, increasing the perceived availability of the system.

Other use cases that are disproportionately affected by partial data degradation include compilation and select scientific workloads. Compiling a large code base typically involves many dependencies, and developers working with large code bases often have to wait to compile until all of the files they need are available. An unlucky failure can halt the compilation of large swathes of code for want of one file. Similarly, scientific data analysis can rely on a small number of key files, such as format descriptors or configuration files, whose loss can slow the analysis of terabytes of data. According to the Tech Lead of Facebook's HBase engineering team: "Even losing a single block of data incurs a high fixed cost, due to the overhead of locating and recovering the unavailable data. Therefore, given a fixed amount of unavailable data each year, it is much better to have fewer incidents of data loss with more data each than more incidents with less data." [6]

PERSES is also a good fit for large storage systems where a small but constantly shifting subset of the data is in active use. Systems that organically grow around a dynamic set of requirements naturally tend to have a small set of data in active use and a long tail of accesses across the remainder of the data [7]. These systems resemble archives in that, while there are distinct, consistent projects, there is little repeat access for popularity-based allocation to take advantage of. We study such archival-like workloads in this paper to form a baseline for improvement.

PERSES improves availability by laying out project groups adjacently on disk to isolate faults. Maintaining an out-of-band index of all of the data in a project requires significant administrative overhead. To avoid this, PERSES dynamically calculates associations in historical trace data to identify likely project groups. Since the elements of a project are co-located on disk, any single failure impacts a very small number of projects. We derive project groups from either trace metadata or access patterns, using a statistical method that calculates project groups in O(n), where n is the number of accesses in the training set, without the overhead of maintaining groups by hand.
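The statistical grouping method itself is not given in this excerpt. Purely as an illustration of what a single-pass co-access statistic over a trace could look like, the sketch below counts how often pairs of files are accessed within a sliding time window and merges strongly co-accessed files with a union-find; the function and parameter names (group_files, window_s, min_count) are ours rather than PERSES's, and the pair-counting inner loop is only near-linear when windows stay small, so it is weaker than the O(n) method claimed above.

    from collections import defaultdict, deque

    def group_files(trace, window_s=60.0, min_count=3):
        """Illustrative sketch, not the PERSES algorithm: derive project-like
        groups from an access trace by counting co-accesses inside a sliding
        time window, then merging files whose co-access count meets a threshold.

        trace: iterable of (timestamp_in_seconds, file_id), sorted by timestamp.
        """
        co_access = defaultdict(int)   # (file_a, file_b) -> co-access count
        window = deque()               # accesses still inside the time window

        for ts, f in trace:
            # Evict accesses that have fallen out of the window.
            while window and ts - window[0][0] > window_s:
                window.popleft()
            # Record a co-access with every file still in the window.
            for _, g in window:
                if g != f:
                    co_access[tuple(sorted((f, g)))] += 1
            window.append((ts, f))

        # Union-find over files connected by frequent co-access.
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for (a, b), count in co_access.items():
            if count >= min_count:
                parent[find(a)] = find(b)

        # Collect the resulting groups; files with no strong partner are left out.
        groups = defaultdict(set)
        for f in parent:
            groups[find(f)].add(f)
        return list(groups.values())

A layout engine could then pack each returned group onto as few devices as the system's redundancy scheme allows, mirroring the adjacency goal described above; the window length and threshold here are placeholders that would need tuning against a real trace.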
To demonstrate PERSES, we injected failures into two dataset groupings and ran them through a disk simulator with a variety of layouts based on the calculated project groups. We show that layout under PERSES leads to faults that affect fewer projects during the rebuild phase than a group-agnostic layout does. Our fault injection simulation compares the real time lost during disk rebuilds across layouts by measuring project hours gained, which we define as the delay avoided, summed over all project groups, for access requests that would otherwise stall while disks are rebuilding, relative to the reference allocation.

We found that in our best case, PERSES gained over 4000 hours of project time during the three year trace compared to a random allocation, and over 1000 hours compared to an ideal temporal allocation. Additionally, we found that increasing the IOPS did not significantly hurt the fault isolation. Finally, we discovered that restricting the minimum group size, thereby reducing grouping noise, improved performance under PERSES even on smaller disks, with a maximum productivity improvement of 94% across large groups. Our main contributions are:

An exabyte scale storage system with 99.999% annual data retention will lose data in a decade comparable to what a petabyte scale storage system with equal retention loses in a century. While the percentage remains constant, the value of data does not necessarily decrease as more of it is stored. Therefore, the point at which advanced reliability measures must be taken to ensure availability in a large storage system is likely to arrive sooner than we are accustomed to. Several studies have shown that storing data reliably over the archival time scale adds challenges, such as accumulated latent sector errors and data migrations, to the already difficult field of storage reliability [10]–[13]. A challenge of this size requires combining known reliability techniques with methods optimized for the expected workload and time scale of archival storage.

Recent work has shown that failures remain an issue for large disk-based storage systems. For example, Schroeder and Gibson showed that average disk replacement rates in the field are typically between 2% and 4%, implying a high reconstruction rate [1]. Other studies have shown that latent sector errors can lead to drive failure [2]. Pinheiro et al. showed that failures are both difficult to predict and common [14]. Scrubbing techniques can efficiently catch these otherwise unobserved errors so that disks can be rebuilt promptly, but it is still often necessary to rebuild much of the disk [15].
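For reference, the project hours gained metric introduced in the evaluation summary above is defined only in prose; one way to write it down (our notation and symbols, not the paper's) is:

    \text{project hours gained} = \sum_{p \in P} \left( D_p^{\text{ref}} - D_p^{\text{PERSES}} \right),

where P is the set of project groups, D_p^{ref} is the total delay, in hours, that project p's requests incur while disks rebuild under the reference allocation, and D_p^{PERSES} is the same quantity under the PERSES layout. How delays are attributed to projects, and which allocation serves as the reference, follow the simulation setup described in the paper.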