Beehive: Erasure Codes for Fixing Multiple Failures in Distributed Storage Systems

Jun Li, Baochun Li
Department of Electrical and Computer Engineering
University of Toronto, Canada

Abstract

Distributed storage systems have been increasingly deploying erasure codes (such as Reed-Solomon codes) for fault tolerance. Though Reed-Solomon codes require much less storage space than replication, a significant amount of network transfer and disk I/O is imposed when unavailable data are fixed by reconstruction. Traditionally, unavailable data are expected to be fixed separately. However, since failures in the data center are observed to be correlated, fixing unavailable data of multiple failures is unavoidable and even common. In this paper, we show that reconstructing the data of multiple failures in batches can cost significantly less network transfer and disk I/O than fixing them separately. We present Beehive, a new design of erasure codes that can fix unavailable data of multiple failures in batches while consuming the optimal network transfer with nearly optimal storage overhead. Evaluation results show that Beehive codes can save network transfer by up to 69.4% and disk I/O by 75% during reconstruction.

1 Introduction

Large-scale distributed storage systems, especially those in data centers, store a massive amount of data that is also increasing rapidly. Running on commodity hardware, these storage systems are expected to keep data available against the software and hardware failures that make data unavailable on a daily basis [12]. Traditionally, distributed storage systems have employed replication to keep data available. For example, three replicas are stored by default in the Hadoop Distributed File System (HDFS) [2].

However, storing multiple copies of the original data brings expensive overhead to the storage system. For example, three copies mean that only 33% of the total storage space can be used effectively. Therefore, distributed storage systems (e.g., [6]) have been replacing replication with erasure codes, especially for cold or archival storage. By migrating from replication to erasure codes, distributed storage systems gain a better capability to tolerate unavailable data while saving storage overhead. Among erasure codes, Reed-Solomon (RS) codes have become the most popular choice, as RS codes make optimal use of storage space while providing the same level of fault tolerance.

To achieve fault tolerance with RS codes, we assume that data are stored in blocks of a fixed size, a common practice in most distributed storage systems. Given k data blocks, RS codes compute r parity blocks such that any k of the total k + r blocks can recover all data blocks. The data blocks and their corresponding parity blocks belong to the same stripe. Therefore, such RS codes can tolerate at most r failures within the same stripe. For example, RS codes with k = 4 and r = 2 can tolerate any two missing blocks with 1.5x storage overhead, while three-way replication, achieving the same level of fault tolerance, requires 3x storage overhead.

Once a block becomes unavailable, the missing data should be fixed through a reconstruction operation and saved at some other server. Under RS codes, the reconstruction operation requires downloading k existing blocks and then decoding the missing block, imposing k times the network transfer of replication. It has been reported from a Facebook cluster that reconstructing unavailable data under RS codes can incur more than 100 TB of data transfer in just one day [10]. Besides, the same amount of disk I/O will also be imposed on the servers that store the downloaded blocks.
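To make this cost concrete, the following minimal sketch (our own illustration, not the paper's implementation) encodes a systematic RS-style stripe with k = 4 and r = 2 and fixes a lost block by downloading k surviving blocks. Ordinary real arithmetic stands in for the finite field, which Section 2 notes the reader may treat as usual arithmetic; the Cauchy-matrix construction is one standard way to obtain the "any k blocks decode" property.

```python
# A toy sketch of systematic RS-style encoding (k = 4 data blocks,
# r = 2 parity blocks). Real arithmetic stands in for the finite field.
import numpy as np

k, r = 4, 2

# Parity rows come from a Cauchy matrix: every square submatrix of a
# Cauchy matrix is invertible, so any k of the k + r blocks can decode.
x = np.arange(1, r + 1)                 # r distinct values
y = np.arange(r + 1, r + k + 1)         # k values, disjoint from x
C = 1.0 / (x[:, None] + y[None, :])     # r x k Cauchy matrix
G = np.vstack([np.eye(k), C])           # (k + r) x k generator matrix

data = np.random.rand(k, 8)             # k data blocks, 8 symbols each
stripe = G @ data                       # k data blocks + r parity blocks

# Fixing ONE lost block still means downloading k surviving blocks:
lost = 1
survivors = [0, 2, 3, 4]                # any k of the remaining blocks
decoded = np.linalg.solve(G[survivors], stripe[survivors])
assert np.allclose(decoded[lost], data[lost])
print("blocks downloaded to fix one block:", len(survivors))
```

Note how reconstruction touches k full blocks even though only one block's worth of data is produced; this k-fold amplification of network transfer and disk I/O is exactly the overhead discussed next.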
To save network transfer during reconstruction, there has been considerable interest in the construction of a class of erasure codes called minimum-storage regenerating (MSR) codes (e.g., [11]). Like RS codes, MSR codes make optimal use of storage space. However, MSR codes can significantly save network transfer during reconstruction. As shown in Fig. 1a, assume that the size of each block is 128 MB, and let k = 3 and r = 3 for both RS codes and MSR codes. While RS codes need to download three blocks to reconstruct one missing block, MSR codes only need to download a fraction of each block. In this example, even though MSR codes must download data from one more block, the total network transfer is still reduced by 33.3%, as only half of the network transfer is imposed on each block. Nevertheless, even though only a fraction of a block is required for the reconstruction, in general that fraction has to be encoded from the whole block.¹ Therefore, MSR codes do not ease but further increase the overhead of disk I/O, as the reconstruction requires reading data from even more blocks than RS codes.

[Figure 1: The network transfer and disk read imposed by reconstruction under RS, MSR, and Beehive codes, with k = 3, r = 3, and blocks of size 128 MB. (a) One failure: network transfer is 384 MB under RS codes and 256 MB under MSR codes. (b) Two failures: disk read is 1024 MB under MSR codes and 512 MB under Beehive codes.]

Traditionally, it is assumed that when there are unavailable blocks, distributed storage systems will reconstruct them separately. However, inside data centers, data unavailability events can be correlated. For example, many disks fail at similar ages [8]: when one disk fails, there is a high probability that some other disks will fail at roughly the same time. Beyond disk failures, correlated data failures can also happen for various reasons [5], such as switch failures, power outages, maintenance operations, or software glitches. Taking advantage of correlated failures, we investigate the construction of erasure codes that allow us to reconstruct multiple missing blocks in batches, saving both network transfer and disk I/O during reconstruction.

In this paper, we propose a new family of erasure codes, called Beehive, that reconstruct multiple blocks at the same time. An immediate benefit is that each block needs to be read only once to reconstruct multiple blocks. As illustrated in Fig. 1b, the total amount of disk read is reduced by 50% when we reconstruct two blocks together, and we can further save network transfer as well. In fact, Beehive codes achieve the optimal network transfer per block in the reconstruction operation.

We implement Beehive codes and evaluate their performance on Amazon EC2. The experiment results show that, compared to MSR codes, Beehive codes can save up to 42.9% of network traffic (with even more savings relative to RS codes) and up to 75% of disk I/O during reconstruction. Though Beehive codes store less actual data than RS or MSR codes in the same storage space, we show that this overhead is marginal.
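The savings in Fig. 1 follow from simple arithmetic. The short tally below (our own illustration, using only the parameters stated above) reproduces the figure's numbers for k = 3, r = 3, and 128 MB blocks.

```python
# Reconstruction cost arithmetic for Fig. 1 (k = 3, r = 3, 128 MB blocks).
# RS: download k whole blocks per lost block.
# MSR: download half a block from each of 4 helpers, but each helper must
#      read its whole block from disk to encode what it sends.
# Beehive (batch of two): each helper block is read from disk only once.
BLOCK = 128  # MB

# (a) One failure.
rs_transfer = 3 * BLOCK                  # 384 MB
msr_transfer = 4 * (BLOCK / 2)           # 256 MB: one more helper, half a block each
print(1 - msr_transfer / rs_transfer)    # 0.333... -> 33.3% network transfer saved

# (b) Two failures: fixed separately under MSR vs. in one batch under Beehive.
msr_disk = 2 * 4 * BLOCK                 # 1024 MB: every helper block read twice
beehive_disk = 4 * BLOCK                 # 512 MB: every helper block read once
print(1 - beehive_disk / msr_disk)       # 0.5 -> 50% disk read saved
```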
2 Background

Suppose that given k data blocks, we have r parity blocks under some erasure code. If any k of all the k + r blocks can decode the original data, the erasure code makes optimal use of storage space; RS codes are one example. A block contains a certain number of symbols; typically, a symbol is simply a single byte. The encoding, decoding, and reconstruction operations of the erasure code are performed on such symbols with the arithmetic of the so-called finite field. In this paper, however, we do not rely on any direct knowledge of the finite field, and readers can simply consider its arithmetic as usual arithmetic.

Given the original data, we divide them into generations such that each generation contains k blocks (f_i, i = 1, ..., k) of the same size. For simplicity, we consider only one generation in this paper, as all generations are encoded in the same way. Depending on the erasure code, a block may consist of one or multiple segments, where each segment is a row vector of w symbols. Since we assume that one symbol is a byte in this paper, each segment contains w bytes. Assuming that each block contains a segments, each block has aw bytes, and we regard f_i as an a × w matrix. Let n = k + r. Given the k original blocks, erasure codes use an na × ka generating matrix G to compute all blocks in a stripe, i.e., G · [f_1^T ··· f_k^T]^T, where f_i^T denotes the transpose of f_i. We can divide G into n submatrices of size a × ka such that G = [g_1^T ··· g_n^T]^T.

[Figure 2: Illustration of notations: hierarchy of stripe, generation, block, and segment, where blocks 1 to k are data blocks and blocks k + 1 to n are parity blocks.]
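As a concrete reading of this notation, the sketch below (ours, not the paper's construction) stacks k blocks, each an a × w matrix, applies an na × ka generating matrix, and slices the result into the n stripe blocks g_i · [f_1^T ··· f_k^T]^T. A random real matrix stands in for a finite-field generating matrix.

```python
# A toy rendering of the Section 2 notation: k data blocks f_i (each a x w),
# stacked into a ka x w matrix F; an na x ka generating matrix G maps F
# to the n = k + r blocks of the stripe.
import numpy as np

k, r, a, w = 3, 3, 2, 4
n = k + r

F = np.vstack([np.random.rand(a, w) for _ in range(k)])  # [f_1^T ... f_k^T]^T
G = np.random.rand(n * a, k * a)                         # na x ka generating matrix

stripe = G @ F                                           # na x w
# Block i of the stripe is g_i . F, where g_i is rows i*a .. (i+1)*a - 1 of G.
blocks = [stripe[i * a:(i + 1) * a] for i in range(n)]   # n blocks, each a x w

# With a random real G, the ka x ka submatrix for any k blocks is almost
# surely invertible, so any k blocks decode F (the storage-optimal property).
rows = np.concatenate([np.arange(i * a, (i + 1) * a) for i in (0, 2, 5)])
decoded = np.linalg.solve(G[rows], stripe[rows])
assert np.allclose(decoded, F)
```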
