FastScale: Accelerate RAID Scaling by Minimizing Data Migration


Weimin Zheng, Guangyan Zhang
Tsinghua University
[email protected]

Abstract

Previous approaches to RAID scaling either require a very large amount of data to be migrated, or cannot tolerate multiple disk additions without resulting in disk imbalance. In this paper, we propose a new approach to RAID-0 scaling called FastScale. First, FastScale minimizes data migration, while maintaining a uniform data distribution. With a new and elastic addressing function, it moves only enough data blocks from old disks to fill an appropriate fraction of new disks without migrating data among old disks. Second, FastScale optimizes data migration with two techniques: (1) it accesses multiple physically successive blocks via a single I/O, and (2) it records data migration lazily to minimize the number of metadata writes without compromising data consistency. Using several real-system disk traces, our experiments show that compared with SLAS, one of the most efficient traditional approaches, FastScale can reduce redistribution time by up to 86.06% with smaller maximum response time of user I/Os. The experiments also illustrate that the performance of the RAID-0 scaled using FastScale is almost identical with that of the round-robin RAID-0.

1 Introduction

Redundant Array of Inexpensive Disks (RAID) [1] was proposed to achieve high performance, large capacity, and data reliability, while allowing a RAID volume to be managed as a single device. As user data increase and computing power grows, applications often require larger storage capacity and higher I/O performance. To supply the needed capacity and/or bandwidth, one solution is to add new disks to a RAID volume. This disk addition is termed "RAID scaling".

To regain a uniform data distribution across all disks, including the old and the new, RAID scaling requires certain blocks to be moved onto the added disks. Furthermore, in today's server environments, many applications (e.g., e-business, scientific computation, and web servers) access data constantly. The cost of downtime is extremely high [2], giving rise to the necessity of online and real-time scaling.

Traditional approaches [3, 4, 5] to RAID scaling are restricted by preserving the round-robin order after adding disks. The addressing algorithm can be expressed as follows for the ith scaling operation:

\[
f_i(x):
\begin{cases}
d = x \bmod N_i \\
b = \lfloor x / N_i \rfloor
\end{cases}
\tag{1}
\]

where block b of disk d is the location of logical block x, and N_i gives the total number of disks. Generally speaking, as far as RAID scaling from m disks to m + n is concerned, only the data blocks in the first stripe are not moved. This indicates that almost 100 percent of data blocks have to be migrated no matter what the numbers of old disks and new disks are. There are some efforts [3, 5] concentrating on the optimization of data migration. They improve the performance of RAID scaling to a certain degree, but do not completely overcome the limitation of large data migration.

The most intuitive method to reduce data migration is the semi-RR [6] algorithm. It requires a block movement only if the resulting disk number is one of the new disks. The algorithm can be expressed as follows for the ith scaling operation:

\[
g_i(x) =
\begin{cases}
g_{i-1}(x) & \text{if } (x \bmod N_i) < N_{i-1} \\
f_i(x) & \text{otherwise}
\end{cases}
\tag{2}
\]

Semi-RR reduces data migration significantly. Unfortunately, it does not guarantee a uniform distribution of data blocks after subsequent scaling operations (see Section 2.4). This will deteriorate the initially even load distribution.
In this paper, we propose a novel approach called FastScale to redistribute data for RAID-0 scaling. It accelerates RAID-0 scaling by minimizing data migration. As shown in Figure 1, FastScale moves only data blocks from old disks to new disks, just enough to preserve the uniformity of the data distribution, while not migrating any data among old disks. Consequently, the migration fraction of FastScale reaches the lower bound on the migration fraction, n/(m + n). In other words, FastScale succeeds in minimizing data migration for RAID scaling.

Figure 1: Data migration using FastScale. Only data blocks are moved from old disks to new disks for regaining a uniform distribution, while no data is migrated among old disks.

We design an elastic addressing function through which the location of one block can be easily computed without any lookup operation. By using this function, FastScale changes only a fraction of the data layout while preserving the uniformity of the data distribution. FastScale has several unique features:

• FastScale maintains a uniform data distribution after RAID scaling.
• FastScale minimizes the total amount of data to be migrated.
• FastScale preserves simple data management due to deterministic placement.
• FastScale can sustain the above three features after multiple disk additions.

FastScale also exploits special physical properties to optimize online data migration. First, it uses aggregate accesses to improve the efficiency of data migration. Second, it records data migration lazily to minimize the number of metadata updates while ensuring data consistency.

We implement a detailed simulator that uses DiskSim as a worker module to simulate disk accesses. Under several real-system workloads, we evaluate the traditional approach and the FastScale approach. The experimental results demonstrate that:

• Compared with one of the most efficient traditional approaches, FastScale shortens redistribution time by up to 86.06% with smaller maximum response time of user I/Os.
• The performance of the RAID scaled using FastScale is almost identical with that of the round-robin RAID.

In this paper, we only describe our solution for RAID-0, i.e., striping without parity. The solution can also work for RAID-10 and RAID-01. Although we do not handle RAID-4 and RAID-5, we believe that our method provides a good starting point for efficient scaling of RAID-4 and RAID-5 arrays.

2 Minimizing Data Migration

2.1 Problem Statement

For disk addition into a RAID, it is desirable to ensure an even load on all the disks and minimal block movement. Since the location of a block may be changed during a scaling operation, another objective is to quickly compute the current location of a block.

To achieve the above objectives, the following three requirements should be satisfied for RAID scaling:

• Requirement 1 (Uniform data distribution): If there are B blocks stored on m disks, the expected number of blocks on each disk is approximately B/m so as to maintain an even load.
• Requirement 2 (Minimal data migration): During the addition of n disks to a RAID with m disks storing B blocks, the expected number of blocks to be moved is B × n/(m + n) (see the sketch after this list).
• Requirement 3 (Fast data addressing): In an m-disk RAID, the location of a block is computed by an algorithm with low space and time complexity.
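As a quick illustration of Requirement 2 (not code from the paper; the values of B, m, and n are example choices of ours), the sketch below computes the minimal migration fraction n/(m + n) and contrasts it with round-robin redistribution, under which only the first stripe stays in place:

```c
/* Hypothetical illustration of Requirement 2: expected data movement
 * when adding n disks to an m-disk RAID-0 storing B blocks. */
#include <stdio.h>

int main(void)
{
    long B = 1000000;                    /* example block count */
    int m = 3, n = 2;                    /* old and new disk counts */

    /* Minimal migration (FastScale's lower bound): B * n / (m + n). */
    double moved_min = (double)B * n / (m + n);

    /* Round-robin redistribution: only the first stripe of m blocks
     * keeps both its disk and its offset, so nearly all of B moves. */
    long moved_rr = B - m;

    printf("minimal:     %.0f blocks (%.1f%%)\n",
           moved_min, 100.0 * n / (m + n));
    printf("round-robin: %ld blocks (%.1f%%)\n",
           moved_rr, 100.0 * moved_rr / B);
    return 0;
}
```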
2.2 Two Examples of RAID Scaling

Example 1: To understand how the FastScale algorithm works and how it satisfies all three requirements, we take RAID scaling from 3 disks to 5 as an example. As shown in Figure 2, one RAID scaling process can be logically divided into two stages: data migration and data filling. In the first stage, a fraction of the existing data blocks are migrated to new disks. In the second stage, new data are filled into the RAID continuously. Actually, the two stages, data migration and data filling, can be overlapped in time.

For this RAID scaling, each 5 sequential locations in one disk are grouped into one segment. For the 5 disks, the 5 segments with the same physical address are grouped into one region; the sketch after the figure captions shows the corresponding index arithmetic. The calculation overhead for the data addressing is very low.

Figure 2: RAID scaling from 3 disks to 5 using FastScale, where m ≥ n.

Figure 3: RAID scaling from 2 disks to 5 using FastScale, where m < n.
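The segment/region grouping reduces to simple index arithmetic. The sketch below is our reading of the description above for the 3-to-5 example (names hypothetical), mapping a physical block offset on a disk to its region and in-segment position:

```c
/* Our interpretation (not the paper's code) of the segment/region
 * grouping for RAID scaling from m=3 to m+n=5 disks: every (m+n)
 * sequential locations on one disk form a segment, and the (m+n)
 * same-addressed segments across all disks form a region. */
#include <stdio.h>

int main(void)
{
    int m = 3, n = 2;
    int seg_len = m + n;                 /* 5 locations per segment */

    for (long b = 0; b < 12; b++) {      /* physical offset on a disk */
        long region = b / seg_len;       /* region (and segment) index */
        long offset = b % seg_len;       /* position within the segment */
        printf("offset %2ld -> region %ld, in-segment position %ld\n",
               b, region, offset);
    }
    return 0;
}
```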