
Don’t Let RAID Raid the Lifetime of Your SSD Array

Sangwhan Moon and A. L. Narasimha Reddy, Texas A&M University

Abstract

Parity protection at the system level is typically employed to compose reliable storage systems. However, careful consideration is required when SSD based systems employ parity protection. First, additional writes are required for parity updates. Second, parity consumes space on the device, which results in write amplification from less efficient garbage collection at higher space utilization.

This paper analyzes the effectiveness of SSD based RAID and discusses its potential benefits and drawbacks in terms of reliability. A Markov model is presented to estimate the lifetime of SSD based RAID systems in different environments. In a single array, our preliminary results show that parity protection provides benefit only with considerably low space utilization and low data access rates. However, in a large system, RAID improves data lifetime even when we take write amplification into account.

1 Introduction

Solid-state drives (SSDs) are attractive as a storage component due to their high performance and low power consumption. Advanced integration techniques such as multi-level cell (MLC) have considerably dropped the cost-per-bit of SSDs, making wide deployment of SSDs feasible. While their deployment is steadily increasing, their write endurance remains one of the main concerns. (We consider MLC SSDs in this paper, unless otherwise mentioned.)

Many protection schemes have been proposed to improve the reliability of SSDs. For example, error correcting codes (ECC), log-like writing in the flash translation layer (FTL), garbage collection, and wear leveling improve the reliability of an SSD at the device level. Composing an array of SSDs and employing parity protection is one of the popular protection schemes at the system level. In this paper, we study striping (RAID0), mirroring (RAID1), and RAID5.

RAID5 has improved the lifetime of HDD based storage systems for decades. However, careful decisions should be made when system level parity protection is employed with SSDs. First, SSDs have limited write endurance, and parity protection results in redundant writes whenever a write is issued to the device array. Unlike in HDDs, these redundant writes for parity updates can severely degrade the lifetime of SSDs. Second, parity data consumes device capacity and increases space utilization. While this has not been a serious problem for HDDs, increased space utilization leads to less efficient garbage collection, which in turn increases the write workload.

Many studies have investigated SSD based RAID systems. A notable study [1] points out the pitfalls of SSD based RAID5 in terms of performance: it discusses the behavior of random writes and parity updates, and concludes that striping provides much higher throughput than RAID5. We instead consider the impact of write workload on reliability. Previous studies [2, 3] have considered different architectures to reduce the parity update performance penalty. We focus on the problem of random and small writes resulting in frequent parity updates; a recent study [4] shows that workload randomness is increasing with the advent of big data analytics and virtualization.

This paper focuses its attention on the reliability of an array of MLC SSDs. We explore the relationship between parity protection and the lifetime of an SSD array. The paper makes the following contributions:

• We analyze the lifetime of SSDs taking both the benefits and the drawbacks of parity protection into account.

• The results from our analytical model show that RAID5 is less reliable than striping with a small number of devices because of write amplification.

Sec. 2 provides background on system level parity protection and write amplification of SSDs. Sec. 3 explores SSD based RAID. Sec. 4 builds a reliability model of SSD based RAID. Sec. 5 evaluates our statistical model. Sec. 6 concludes this paper.

2 Background

We categorize protection schemes for SSDs into two levels: device level protection and system level protection. Device level protection includes ECC, wear leveling, and garbage collection. System level protection includes RAID5 and mirroring. In this paper, we will mostly focus on system level protection.

2.1 System Level Protection

In many cases, device level protection is not enough to protect data. For example, when the number of bit errors exceeds the number of bit errors correctable by ECC, data in a page may be lost without additional protection mechanisms. A device can also fail for other reasons, such as the failure of device attachment hardware. In this paper, we call the former a page error and the latter a device failure. System level parity protection is employed to protect against device failures.

RAID5 is popular as it spreads the workload well across all the devices in the array with relatively small space overhead for parity protection. The group of data blocks and the corresponding parity block is called a page group. RAID5 is resilient to one device failure or one page error in a page group.

Mirroring is another popular technique to provide data protection at the system level. Two or more copies of the data are stored, so that a device level failure does not lead to data loss unless the original and all the replicas are corrupted before recovery from a failure completes. When the original data is updated, the replicas have to be updated as well. Read operations can be issued to either the original or the replicas at the system level. When a device is corrupted, the paired devices are used to recover the failed device.
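To make the page group mechanics concrete, the following is a minimal sketch (ours, not from the paper) of RAID5-style parity over one page group and the recovery of a single lost page; the toy 4-byte pages and the `parity` helper are purely illustrative.

```python
from functools import reduce

def parity(pages):
    """XOR the pages of a page group together; the result is the parity page."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), pages))

# Three toy 4-byte data "pages" forming one page group, plus their parity.
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
p = parity([d0, d1, d2])

# One lost page (a page error, or a block on a failed device) is rebuilt
# by XOR-ing everything that survives -- the other data pages plus parity.
assert parity([d0, d2, p]) == d1
```

The same XOR property is why RAID5 tolerates exactly one page error per page group: with two pages missing, the surviving XOR no longer determines either one.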
2.2 Write Amplification

Protection schemes for SSDs often require additional writes, and those writes in turn reduce the reliability of the SSDs. Since high write amplification can severely reduce the lifetime of an SSD, protection schemes should be configured carefully to maximize the lifetime improvement while minimizing write amplification. Write amplification degrades reliability because reliability is highly dependent on the number of writes done at the SSDs. The main sources of write amplification are discussed in this section.

Recovery process. In most recovery processes, at least one write is required to write a corrected page. ECC can correct a number of errors in a page simultaneously with one write: fixing one bit error in a page takes one redundant write, and fixing ten bit errors in a page also needs only one additional write. Our previous work [5] suggested threshold based ECC (TECC) to reduce the write amplification from frequent recovery by leaving bit errors in a page until a certain number of bit errors accumulates. TECC can drastically reduce the write amplification from ECC. Note that ECC based write amplification is a function of the read workload, unlike the other sources of write amplification.

Garbage collection. NAND flash memory is typically written in units of a page (e.g., 4KB) and erased in units of a block (e.g., 512KB). It does not support in-place updates and instead accumulates writes in a log-like manner. In such a log structured system, internal fragmentation and a garbage collection process to tidy the fragmented data are inevitable. The garbage collection process moves valid pages from one place to another, and this increases the number of writes issued to the device. Garbage collection is more efficient when the hot workload concentrates on a small portion of data, since blocks holding hot pages are largely invalidated by the time they are reclaimed.

3 SSD based RAID

Our analysis is based on an architecture where a RAID controller operates on top of a number of SSDs. As a result, the pages within a page group have constant logical addresses (before FTL translation) within their devices. As pages are written, their actual physical addresses within a device change because of FTL translation; however, this does not affect the membership of the page group, which is based on logical addresses. When the number of page errors in a page group exceeds the number of page errors correctable by RAID, the page group fails and the storage system loses data.

In RAID5, a small write requires updating both the data block and the parity block, potentially resulting in a write amplification factor of 2. However, when a large write spanning a number of devices is issued, the parity block can be updated once for N-1 data block updates with the help of a parity cache [2], where N is the number of devices in the array, resulting in a write amplification factor of N/(N-1). Depending on the workload's mixture of small and large writes, the write amplification falls somewhere in between. In a mirroring system, the write amplification is 2 regardless of the size of the write request.
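As a rough illustration of these bounds, the sketch below (ours) computes an effective write amplification factor; the small-write fraction `f` and the linear mix weighted by request count are simplifying assumptions, not part of the paper's model.

```python
def raid5_write_amp(n_devices: int, small_write_fraction: float) -> float:
    """Effective RAID5 write amplification for a workload mix.

    Small writes update one data block plus the parity block (factor 2);
    full-stripe writes update the parity once per N-1 data blocks with a
    parity cache [2] (factor N/(N-1)). Linearly mixing the two by the
    assumed fraction f of small writes is a simplification.
    """
    small, large = 2.0, n_devices / (n_devices - 1)
    f = small_write_fraction
    return f * small + (1 - f) * large

def mirroring_write_amp() -> float:
    """Mirroring writes every block twice, regardless of request size."""
    return 2.0

# An 8-device array where 30% of writes are small:
# 0.3 * 2 + 0.7 * (8/7) = 1.4
print(raid5_write_amp(8, 0.3))
```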
Parity blocks also increase space utilization. Suppose 120GB of data is stored on four 80GB SSDs: RAID5 stores 30GB of data and 10GB of parity on each device, while striping stores only 30GB of data per device. The increased amount of stored data results in less efficient garbage collection and more write amplification, which decreases the lifetime of the SSDs.

4 Lifetime Model

Our lifetime model is based on the Markov model of [5], which analyzes the relationship between a single SSD's reliability and device level protection. The symbols used in our model are shown in Table 1.

A number of studies [7, 8] have investigated the bit error behavior of MLC flash memory. There are several sources of bit errors in flash memory: read disturb, data retention failure, and write error. These studies model the bit error rate as an exponential function of the number of program/erase cycles (P/E cycles) the cell has gone through.
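The paper's own model and the Table 1 symbols are not reproduced here. As an illustrative stand-in only, the sketch below combines the two ingredients just described: a failure rate that grows exponentially with accumulated P/E cycles, and a Markov chain over array states, in the classic three-state RAID5 form (all-good, degraded, data loss). The chain structure, the constants `lam0` and `k`, and the 24-hour mean rebuild time are all assumptions for illustration, not the authors' model.

```python
import math

def failure_rate(pe_cycles: float, lam0: float = 1e-6, k: float = 5e-4) -> float:
    """Failure rate growing exponentially with accumulated P/E cycles,
    echoing the exponential bit error models of [7, 8]. lam0 (per hour)
    and k are invented constants for illustration only."""
    return lam0 * math.exp(k * pe_cycles)

def raid5_mttdl(n: int, lam: float, mu: float) -> float:
    """Mean time to data loss of an N-device RAID5 array under the classic
    three-state Markov chain: all-good -> degraded at rate N*lam, degraded ->
    all-good at repair rate mu, degraded -> data loss at rate (N-1)*lam.
    Solving the chain gives ((2N-1)*lam + mu) / (N*(N-1)*lam^2)."""
    return ((2 * n - 1) * lam + mu) / (n * (n - 1) * lam ** 2)

# Write amplification matters here because it burns P/E cycles faster,
# which raises lam and shrinks the lifetime estimate.
for cycles in (0, 1000, 3000):
    lam = failure_rate(cycles)
    print(cycles, raid5_mttdl(n=8, lam=lam, mu=1 / 24))  # assumed 24h rebuild
```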