RAID0.5: Active Data Replication for Low Cost Disk Array Data Protection

John A. Chandy
Department of Electrical and Computer Engineering
University of Connecticut
Storrs, CT 06269-2157
[email protected]

Abstract— RAID has long been established as an effective way to provide highly reliable as well as high-performance disk subsystems. However, reliability in RAID systems comes at the cost of extra disks. In this paper, we describe a mechanism that we have termed RAID0.5 that enables striped disks with very high data reliability but low disk cost. We take advantage of the fact that most disk systems use backup systems for disaster recovery. With these backup systems in place, the disk system needs to replicate only the data written since the last backup, thus drastically reducing the storage space requirement. Though RAID0.5 has the same data loss characteristics as traditional mirroring, the lower storage space comes at the cost of lower availability. Thus, RAID0.5 is a tradeoff between lower disk cost and lower availability while still preserving very high data reliability. We present analytical reliability models and experimental results that demonstrate the enhanced reliability and performance of the proposed RAID0.5 system.

I. INTRODUCTION

Disk arrays have long been used to improve the performance of storage systems [1], [2]. The parallelism inherent in multi-disk systems can significantly boost both throughput and response times as compared to a single-disk system. However, the increased performance comes at the cost of lower reliability. As a result, disk arrays need some form of redundancy to improve reliability. The most common and cost-effective solution to improve the reliability of disk systems is the use of Redundant Array of Inexpensive (or Independent) Disks (RAID) [3]. The various levels of RAID specify different methods of redundancy to provide reliability. The most commonly used forms of RAID are RAID1, or mirroring, which entails replication of the data on multiple disks, and RAID5, or parity striping, which involves spreading data along with parity across multiple disks. Choosing which RAID level to use is typically determined by cost and application requirements. At the disk array level, the redundancy choice is usually RAID5 as it provides excellent availability, moderate storage overhead, and adequate performance.

All RAID levels require extra disks to provide redundancy. In the case of RAID5, the redundancy overhead is 1/D, where D is the number of data disks in a redundancy group. With RAID1, the overhead is 100%. This overhead makes RAID cost prohibitive in many environments, particularly single-user desktop computers and laptops. RAID5 requires the installation of either SCSI controllers or multiple IDE controllers and enclosures to contain the multiple disks. In a two-disk scenario that is feasible for most single PCs, the 100% overhead of RAID1 mirroring becomes costly.

It is in this light that we present RAID0.5, a midway point between RAID0 and RAID1 in that it provides data loss guarantees equivalent to RAID1 with only slightly more overhead than RAID0. However, as we will show, the tradeoff between RAID0.5 and RAID1 is the lower availability of a RAID0.5 system. The remainder of the paper describes the overall architecture and analysis of RAID0.5, experimental results, future directions, and related work.

II. RAID0.5

The goal of the RAID0.5 architecture is to provide a disk subsystem that can achieve high reliability with low cost. Our particular aim is to make the method feasible for systems with relatively few disks. A typical RAID5 system will have 5-7 disks in the array, leading to a redundancy overhead of 14-20%. As fewer disks are used and the configuration approaches RAID1 mirroring, the overhead quickly approaches 100%. With a RAID0.5 system, we would like to maintain a redundancy overhead of less than 20% even for 2-disk systems. In a two-disk system, in order to prevent data loss, it would seem that we would need to replicate all the data on a disk, thus leading to 100% overhead. Compressing the replicated data may allow for smaller overheads, but this is not guaranteed and the compression process can be compute intensive.

The key to achieving low redundancy overheads is to replicate only a portion of the data on a disk. If we assume that the system is being periodically backed up, then we need only replicate data that has changed since the last backup. We define this data as the active data. In such a scenario, if a disk fails, active data can be recovered from the remaining disk(s) and inactive data can be recovered from the backup media. The backup window determines how much data needs to be replicated. For example, a weekly backup will create a larger active data set size than a daily backup. HP's study of working set sizes showed that, on average, only 2% of the storage space is written to over a 24-hour period [4]. The largest observed 24-hour write set size was just over 10%. Thus, assuming daily backups, we need to replicate only 10% of the data in the array. Assuming similar access patterns over a week, the weekly working set size should not change much, so a 15% replication overhead may be sufficient for weekly backups.
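This sizing argument can be made concrete with a small calculation. The sketch below is an illustrative assumption rather than part of the paper's design; the function name is hypothetical, and the only numbers it encodes are the working-set bounds quoted above from the HP study [4] (just over 10% of capacity per day at the high end, roughly 15% over a week).

    # Sketch: size the replication region from the backup interval.
    # The fractions are the worst-case write working sets discussed above;
    # the helper itself is an assumption for illustration only.
    def replication_region_bytes(disk_capacity_bytes, backup_interval_days):
        if backup_interval_days <= 1:
            working_set_fraction = 0.10   # daily backups
        else:
            working_set_fraction = 0.15   # weekly (or less frequent) backups
        return int(disk_capacity_bytes * working_set_fraction)

    # Example: a 500 GB disk with daily backups reserves about 50 GB for replicas.
    print(replication_region_bytes(500 * 10**9, 1))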
The data distribution of a RAID0.5 system is shown for a 4-disk system in Figure 1. Each row represents one of the n stripes, as in a RAID0 disk array, and each cell represents a data block, or chunk, of a stripe on a disk. On each disk, we dedicate a region for replication, the size of which is determined by the desired redundancy overhead, which in turn is determined by the backup frequency as described above. In Figure 1a, the data distribution is shown for a system where the active data set is empty, i.e. immediately after a backup. After writes to D10, D21, and D03, we can see in Figure 1b that the writes have been replicated to the replication regions on adjacent disks. By replicating on the next disk (mod the number of disks), we can ensure that the active data is preserved on at least one disk in the event of a single disk failure. A sketch of the write placement this implies is given after the figure. If a disk's replication region fills up, we can replicate the data to another disk or grow the replication region. If this is not possible, the file system or disk driver will respond as if the disk is full.

[Fig. 1. RAID0.5 data distribution for a 4-disk array (blocks D00-Dn3, with a replication region at the end of each disk): (a) initial data distribution, immediately after a backup; (b) data distribution after writes to D10, D21, and D03, whose replicas D03', D10', and D21' are placed in the replication regions of Disks 0, 1, and 2 respectively (the next disk, mod 4).]
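The following sketch illustrates the placement rule described above. The class and method names are illustrative assumptions, not the paper's implementation: each block write goes to its home location, and the replica of that active block goes into the replication region of the next disk, modulo the number of disks, so a single disk failure never takes out both copies.

    # Sketch of RAID0.5 write placement for a d-disk array.
    class Raid05Placement:
        def __init__(self, num_disks):
            self.num_disks = num_disks

        def primary(self, stripe, disk):
            # Home location of the block, as in plain RAID0 striping.
            return (disk, ("data", stripe))

        def replica(self, stripe, disk):
            # Active-data replica goes to the next disk, wrapping around,
            # inside that disk's dedicated replication region.
            mirror_disk = (disk + 1) % self.num_disks
            return (mirror_disk, ("replication", stripe, disk))

    # Example for the 4-disk array of Figure 1: a write to D21 (stripe 2, disk 1)
    # is replicated into the replication region of Disk 2.
    layout = Raid05Placement(4)
    print(layout.primary(2, 1))   # (1, ('data', 2))
    print(layout.replica(2, 1))   # (2, ('replication', 2, 1))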
III. RELIABILITY ANALYSIS

When a disk fails, any requests to active data can be served from the replication region. However, requests to inactive data cannot be serviced because the data is not available online and must be retrieved from backup media. Thus, in a RAID0.5 system, as soon as a failure is detected, the system must block all future accesses to the disks until the failed disk can be reconstructed from backup. To prevent the chance of a second failure, the system should power down all disks in the system. Note that this implies that a RAID0.5 system is not available under failure even though there has been no data loss. This behavior is different from a RAID1 or RAID5 system, where data is still available even under a single disk failure. Thus, for a RAID0.5 system there is a distinction between availability and data loss protection, whereas no such distinction exists for other RAID levels. A RAID0.5 system trades off system availability for low redundancy costs. Therefore, RAID0.5 is not appropriate for mission-critical applications that require 24/7 operation, but it is appropriate for desktop applications that may not require nonstop operation but do require absolute data protection. Because of this high-reliability, low-availability characteristic, we placed RAID0.5 between the low-reliability, low-availability RAID0 and the high-reliability, high-availability RAID1.

In light of this distinction between system availability and data loss protection, when doing reliability analysis we are more interested in the mean time to data loss (MTTDL) rather than the more conventional mean time to failure (MTTF). The analysis is similar, but MTTDL implies nothing about system availability as MTTF does.

[Figure: the four data disks of the running example (Disks 0-3, blocks D00-D23) shown alongside a dedicated replication disk.]

A. RAID0.5 with replication region

We can develop a model for the MTTDL and availability of a RAID0.5 system using an analysis methodology similar to that outlined in [3]. Each disk is assumed to have independent and exponential failure rates. We assume d disks in the array and a replication ratio of r, i.e. the ratio of the replication region size to the data size. Each disk is assumed to have a mean time to failure of MTTF_Disk. The mean time to failure of one disk in the array is MTTF_Disk / d, and data loss occurs when a second disk fails while the first failed disk is being repaired.

    MTTDL_{RAID0.5} = \frac{MTTF_{Disk}}{d} \cdot \frac{1}{\Pr[\text{data loss failure during repair}]}    (1)

If we define the mean time to repair the disk as MTTR_Disk, then assuming exponential failure rates, the probability of a failure during the repair time is:

    1 - \left( e^{-MTTR_{Disk}/MTTF_{Disk}} \right)^{d-1}    (2)

Since MTTR_Disk is almost always much less than MTTF_Disk, the above expression can be approximated.
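The first-order step being referred to here is the standard expansion for MTTR_Disk much smaller than MTTF_Disk; the derivation below is supplied for completeness from equations (1) and (2) rather than recovered text:

    1 - \left( e^{-MTTR_{Disk}/MTTF_{Disk}} \right)^{d-1}
      = 1 - e^{-(d-1)\,MTTR_{Disk}/MTTF_{Disk}}
      \approx \frac{(d-1)\,MTTR_{Disk}}{MTTF_{Disk}}

Substituting this approximation into equation (1) gives MTTDL_{RAID0.5} \approx MTTF_{Disk}^2 / (d(d-1)\,MTTR_{Disk}), the familiar two-failure form used in the style of analysis of [3].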
