Proactively Preventing Data Corruption


Introduction

Data corruption is an insidious problem in storage. There are many types of corruption and many means to prevent them.

Enterprise class servers use error checking and correcting caches and memory to protect against single and double bit errors. System buses have similar protective measures such as parity. Communications going over the network are protected by checksums. On the storage side, many installations employ RAID (Redundant Array of Inexpensive Disks) technology to protect against disk failure. In the case of hardware RAID, the array firmware will often use advanced checksumming techniques and media scrubbing to detect and potentially correct errors. The disk drives themselves also feature sophisticated error corrective measures, and storage protocols such as Fibre Channel and iSCSI feature a Cyclic Redundancy Check (CRC) that guards against data corruption on the wire.

At the top of the I/O stack, modern filesystems such as Oracle's btrfs use checksumming techniques on both data and filesystem metadata. This allows the filesystem code to detect data that has gone bad either on disk or in transit. The filesystem can then take corrective action, fail the I/O request, or notify the user.

A common trait in most of the existing protective measures is that they work in their own isolated domains or at best between two adjacent nodes in the I/O path. There has been no common method for ensuring true end-to-end data integrity... until now. Before describing this new technology in detail, let us take a look at how data corruption is handled by currently shipping products.

1 Data Corruption

Corruption can occur as a result of bugs in both software and hardware. A common failure scenario involves incorrect buffers being written to disk, often clobbering good data.

This latent type of corruption can go undetected for a long period of time. It may take months before the application attempts to reread the data from disk, at which point the good data may have been lost forever. Short backup cycles may even have caused all intact copies of the data to be overwritten.

A crucial weapon in preventing this type of error is proactive data integrity protection: a method that prevents corrupted I/O requests from being written to disk.

For several years Oracle has offered a technology called HARD (Hardware Assisted Resilient Data), which allows storage systems to verify the integrity of an Oracle database logical block before it is committed to stable storage. Though the level of protection offered by HARD is mandatory in numerous enterprise and government deployments, adoption outside the mission-critical business segment has been slow. The disk array vendors that license and implement the HARD technology only offer it in their very high end products. As a result, Oracle has been looking to provide a comparable level of resiliency using an open and standards-based approach.

A recent extension to the SCSI family of protocols allows extra protective measures, including a checksum, to be included in an I/O request. This appended data is referred to as integrity metadata or protection information.

Unfortunately, the SCSI protection envelope only covers the path between the I/O controller and the storage device. To remedy this, Oracle and a few select industry partners have collaborated to design a method of exposing the data integrity features to the operating system. This technology, known as the Data Integrity Extensions, allows the operating system—and even applications such as the Oracle Database—to generate protection data that will be verified as the request goes through the entire I/O stack. See Figure 1 for an illustration of the integrity coverage provided by the technologies described above.

Figure 1: Five different protection scenarios. Starting at the bottom, the "Normal I/O" line illustrates the disjoint integrity coverage offered using a current operating system and standard hardware. The "HARD" line shows the protection envelope offered by the Oracle Database accessing a disk array with HARD capability. "DIF" shows coverage using the integrity portions of the SCSI protocol. "DIX" shows the coverage offered by the Data Integrity Extensions. And finally at the top, "DIX + DIF" illustrates the full coverage provided by the Data Integrity Extensions in combination with T10 DIF.

2 T10 Data Integrity Field

T10 is the INCITS standards body responsible for the SCSI family of protocols. Data corruption has been a known problem in the storage industry for years, and T10 has provided the means to prevent it by extending the SCSI protocol to allow integrity metadata to be included in an I/O request. The extension to the SCSI block device protocol is called the Data Integrity Field, or DIF.

Normal SCSI disks use a hardware sector size of 512 bytes. However, when used inside disk arrays the drives are often reformatted to a bigger sector size of 520 or 528 bytes. The operating system is only exposed to the usual 512 bytes of data. The extra 8 or 16 bytes in each sector are used internally by the array firmware for integrity checks.

DIF is similar in the sense that the storage device must be reformatted to 520-byte sectors. The main difference between DIF and proprietary array firmware is that the format of the extra 8 bytes of information per sector is well defined as well as being an open standard. This means that every node in the I/O path can participate in generating and verifying the integrity metadata.

Each DIF tuple is split up into three sections called tags: a 16-bit guard tag, a 16-bit application tag, and a 32-bit reference tag (see Figure 2).

Figure 2: DIF tuple contents.

The DIF specification lists several types of protection. Each of these protection types defines the contents of the three tag fields in the DIF tuple. The guard tag contains a 16-bit CRC of the 512 bytes of data in the sector. The application tag is for use by the application or operating system, and finally the reference tag is used to ensure ordering of the individual portions of the I/O request. The reference tag varies depending on protection type. The most common of these is Type 1, in which the reference tag needs to match the 32 lower bits of the target sector's logical block address. This helps prevent misdirected writes, a common corruption error where data is written to the wrong place on disk.
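The tuple layout and the Type 1 checks can be expressed compactly in C. The sketch below is illustrative rather than normative: the struct mirrors the 8-byte tuple described above (fields are big-endian on the wire; host order is used here for brevity), and the guard computation uses the 16-bit CRC polynomial 0x8BB7 commonly cited for T10 DIF.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* One 8-byte DIF tuple, appended to each 512-byte sector. */
    struct dif_tuple {
        uint16_t guard_tag; /* 16-bit CRC of the sector's 512 data bytes */
        uint16_t app_tag;   /* owned by the application or operating system */
        uint32_t ref_tag;   /* Type 1: lower 32 bits of the target LBA */
    };

    /* Bitwise 16-bit CRC (polynomial 0x8BB7, initial value 0). */
    static uint16_t t10_crc16(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)data[i] << 8);
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Type 1 check as a node in the I/O path might perform it: the guard
     * tag must match the data CRC, and the reference tag must match the
     * low 32 bits of the sector's target logical block address. */
    static bool dif_type1_verify(const uint8_t sector[512],
                                 const struct dif_tuple *pi, uint64_t lba)
    {
        if (pi->guard_tag != t10_crc16(sector, 512))
            return false; /* data and checksum disagree */
        if (pi->ref_tag != (uint32_t)lba)
            return false; /* misdirected write */
        return true;
    }

A failed guard check means the data was damaged somewhere between checksum generation and the verifying node; a failed reference check means the sector is headed for, or came from, the wrong location on disk.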
If the storage device detects a mismatch between the data and the integrity metadata, the I/O will be rejected before it is written to disk. Also, since each node in the I/O path is free to inspect and verify the integrity metadata, it is possible to isolate points of error. For instance, it is conceivable that in the future advanced fabric switches will be able to verify the integrity as data flows through the Storage Area Network.

The fact that a storage device is formatted using the DIF protection scheme is transparent to the operating system. In the case of a write request, the I/O controller will receive a number of 512-byte buffers from the operating system and proceed to generate and append the appropriate 8 bytes of protection information to each sector. Upon receiving the request, the SCSI disk will verify that the data matches the included integrity metadata. In the case of a mismatch, the I/O will be rejected and an error returned to the operating system.

Similarly, in the case of a read request, the storage device will include the protection information and send 520-byte sectors to the I/O controller. The controller will verify the integrity of the I/O, strip off the protection data, and return 512-byte data buffers to the operating system.
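The controller-side half of that write path might look like the following sketch, which reuses struct dif_tuple and t10_crc16() from the previous example. The helper name and the host-order tuple layout are illustrative assumptions; real firmware works on the big-endian wire format defined by T10.

    #include <string.h>

    /* Illustrative controller step: turn 512-byte data sectors into
     * 520-byte DIF sectors by generating and appending a Type 1 tuple.
     * 'out' must hold nsectors * 520 bytes; 'lba' is the first target block. */
    static void dif_append_pi(const uint8_t *data, size_t nsectors,
                              uint64_t lba, uint8_t *out)
    {
        for (size_t i = 0; i < nsectors; i++) {
            const uint8_t *sector = data + i * 512;
            uint8_t *dst = out + i * 520;
            struct dif_tuple pi = {
                .guard_tag = t10_crc16(sector, 512),
                .app_tag   = 0,                   /* left to OS/application */
                .ref_tag   = (uint32_t)(lba + i), /* Type 1 ordering check */
            };

            memcpy(dst, sector, 512);
            memcpy(dst + 512, &pi, sizeof(pi));
        }
    }

The disk runs the equivalent of dif_type1_verify() on each 520-byte sector before committing it, and the controller performs the same check in the other direction on read; a mismatch at either end rejects the I/O instead of letting corruption reach stable storage.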
In other words, the added level of protection between controller and storage device is completely transparent to the operating system. Unfortunately, this also means the operating system is unable to participate in the integrity verification process. This is where the Data Integrity Extensions come in.

3 Data Integrity Extensions

While increased resilience against errors between controller and storage device is an improvement, the design goal was to enable true end-to-end data integrity protection. An obvious approach was to expose the DIF information above the I/O controller level and let the operating system gain access to the integrity metadata.

The Data Integrity Extensions in brief:
• The I/O controller and storage device exchange protection information.
• Each data sector is protected by an 8-byte integrity tuple.
• The tuple contains a checksum and an incrementing counter that ensures the I/O is intact.

3.1 Buffer Separation

Exposing the 520-byte sectors to the operating system is problematic, however. Internally, operating systems generally work with sizes that are multiples of 512. On X86 and X86_64 hardware the system page size is 4096 bytes, which means that 8 sectors fit nicely in a page. It is extremely inconvenient for the operating system to deal with buffers that are multiples of 520 bytes.

The Data Integrity Extensions allow the operating system to gain access to the DIF contents without changing its internal buffer size. This is achieved by separating the data buffers and the integrity metadata buffers. The controller firmware will interleave the data and integrity buffers on write.
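A minimal sketch of this buffer separation, continuing with the assumed types and includes from the earlier examples: the operating system keeps its data in ordinary 4096-byte pages, supplies the eight corresponding DIF tuples in a small separate buffer, and the controller merges the two streams into 520-byte sectors as it issues the write.

    /* Illustrative interleave step for one 4096-byte page: eight 512-byte
     * data sectors in one buffer, their eight 8-byte tuples in another.
     * The controller combines them into eight 520-byte DIF sectors. */
    static void dix_interleave_page(const uint8_t data[4096],
                                    const struct dif_tuple pi[8],
                                    uint8_t out[8 * 520])
    {
        for (size_t i = 0; i < 8; i++) {
            memcpy(out + i * 520, data + i * 512, 512);
            memcpy(out + i * 520 + 512, &pi[i], sizeof(pi[i]));
        }
    }

Because the integrity metadata lives in its own buffer, the operating system (or an application above it) can fill in the tuples itself and have every node below verify them, without ever having to manage awkward 520-byte buffers internally.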