Scalability of RAID Systems


Scalability of RAID Systems

Yan Li

Doctor of Philosophy
Institute of Computing Systems Architecture
School of Informatics
University of Edinburgh
2010

Abstract

RAID systems (Redundant Arrays of Inexpensive Disks) have dominated back-end storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data-intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network bandwidth, physical interconnect failures, correlated disk failures and long disk reconstruction time.

This thesis studies the scalability of RAID systems in terms of both performance and reliability through simulation, using a discrete event driven simulator for RAID systems (SIMRAID) developed as part of this project. SIMRAID incorporates two benchmark workload generators, based on the SPC-1 and Iometer benchmark specifications. Each component of SIMRAID is highly parameterised, enabling it to explore a large design space. To improve the simulation speed, SIMRAID develops a set of abstraction techniques to extract the behaviour of the interconnection protocol without losing accuracy. Finally, to meet the technology trend toward heterogeneous storage architectures, SIMRAID develops a framework that allows easy modelling of different types of device and interconnection technique.

Simulation experiments were first carried out on performance aspects of scalability. They were designed to answer two questions: (1) given a number of disks, which factors affect back-end network bandwidth requirements; (2) given an interconnection network, how many disks can be connected to the system. The results show that the bandwidth requirement per disk is primarily determined by workload features and stripe unit size (a smaller stripe unit size has better scalability than a larger one), with cache size and RAID algorithm having very little effect on this value. The maximum number of disks is limited, as would be expected, by the back-end network bandwidth.
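To illustrate the second question, here is a back-of-envelope sketch (not taken from the thesis; the link speed and per-disk demand figures are purely hypothetical) of how the back-end network bandwidth caps the number of disks:

```python
# Illustrative sketch only: the 4 Gb/s FC figure and the per-disk demand
# are hypothetical example numbers, not results from the thesis.

def max_disks(link_bandwidth_mb_s: float, per_disk_demand_mb_s: float) -> int:
    """Upper bound on the number of disks one back-end link can sustain."""
    return int(link_bandwidth_mb_s // per_disk_demand_mb_s)

# A nominal 4 Gb/s Fibre Channel link delivers roughly 400 MB/s of payload.
# Suppose the workload drives about 12 MB/s of back-end traffic per disk.
print(max_disks(400, 12))   # -> 33
```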
Studies of reliability have led to three proposals to improve the reliability and scalability of RAID systems.

Firstly, a novel data layout called PCDSDF is proposed. PCDSDF combines the advantages of orthogonal data layouts and parity declustering data layouts, so that it can not only survive multiple disk failures caused by physical interconnect failures or correlated disk failures, but also has good degraded and rebuild performance. The generating process of PCDSDF is deterministic and time-efficient. The number of stripes per rotation (namely the number of stripes to achieve rebuild workload balance) is small. Analysis shows that the PCDSDF data layout can significantly improve system reliability. Simulations performed on SIMRAID confirm the good performance of PCDSDF, which is comparable to that of other parity declustering data layouts, such as RELPR.

Secondly, a system architecture and rebuilding mechanism have been designed, aimed at fast disk reconstruction. This architecture is based on parity declustering data layouts and a disk-oriented reconstruction algorithm. It uses stripe groups instead of stripes as the basic distribution unit so that it can make use of the sequential nature of the rebuilding workload. The design space of system factors such as parity declustering ratio, chunk size, private buffer size of surviving disks and free buffer size is explored to provide guidelines for storage system design.

Thirdly, an efficient distributed hot spare allocation and assignment algorithm for general parity declustering data layouts has been developed. This algorithm avoids conflict problems in the process of assigning distributed spare space for the units on the failed disk. Simulation results show that it effectively solves the write bottleneck problem while causing only a small increase in the average response time to user requests.

Publication List:

• Y. Li, T. Courtney, R. N. Ibbett and N. Topham, "On the Scalability of the Back-end Network of Storage Sub-Systems", in SPECTS 2008, pp 464-471, Edinburgh, UK.
• Y. Li and R. N. Ibbett, "SimRAID: An Efficient Performance Evaluation Tool for Modern RAID Systems", in Poster Competition of University of Edinburgh Informatics Jamboree, Edinburgh, 2007. (First Prize)
• Y. Li, T. Courtney, R. N. Ibbett and N. Topham, "Work in Progress: On The Scalability of Storage Sub-System Back-end Network", in WiP of FAST, San Jose, USA, 2007.
• Y. Li, T. Courtney, R. N. Ibbett and N. Topham, "Work in Progress: Performance Evaluation of RAID6 Systems", in WiP of FAST, San Jose, USA, 2007.
• Y. Li, T. Courtney, F. Chevalier and R. N. Ibbett, "SimRAID: An Efficient Performance Evaluation Tool for RAID Systems", in Proc. SCSC, pp 431-438, Calgary, Canada, 2006.
• T. Courtney, F. Chevalier and Y. Li, "Novel technique for accelerated simulation of storage systems", in IASTED PDCN, pp 266-272, Innsbruck, Austria, 2006.
• Y. Li and A. Goel, "An Efficient Distributed Hot Sparing Scheme In A Parity Declustered RAID Organization", under US patent application (application number 12247877).

Acknowledgements

Many thanks to my supervisor, Prof. Roland Ibbett, for his invaluable supervision, warm-hearted encouragement, patience in correcting my non-native English and understanding throughout my PhD. Without him, this PhD and thesis would never have happened. Thanks must also go to Prof. Nigel Topham, my current supervisor, for his supportive supervision and discussion and for his trust and support. Without his support and trust, I would not have been able to finish this thesis.

In addition, many thanks to my colleagues Tim Courtney and Franck Chevalier for their help and discussions. Their help has been invaluable for me, especially at the beginning of this PhD. My sincere thanks must go to David Dolman for solving HASE problems and maintaining HASE. Without his help and quick response, I would not have been able to finish my experiments. Thanks also to my other colleagues in the HASE group, Juan Carballo and Worawan Marurngsith, for the useful discussions and help over the years.

Many thanks to Atul Goel and Tom Theaker from NetApp for their help and support during my internship at NetApp. I learned a lot from Atul about designing useful and practical systems. Chapter 6 of this thesis is based on my intern project. Tom Theaker has been a very supportive boss and I really appreciate the internship and job opportunities he gave me.

Many thanks to my friends Lu, Yuanyuan, Lin and Guo for their friendship. Talking with them has always been my pleasure.

Finally, my sincere thanks to my beloved parents and my dear husband Zhiheng for their unwavering support through all these years.
Declaration

I declare that this thesis was composed by myself, that the work contained herein is my own except where explicitly stated otherwise in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.

(Yan Li)

To Zhiheng and Grace.

Table of Contents

1 Introduction
  1.1 RAID Systems
    1.1.1 Data Striping
    1.1.2 Redundancy
    1.1.3 Orthogonal RAID
    1.1.4 Degraded/Rebuilding Performance and Parity Declustering Data Layouts
  1.2 Technology Trends in RAID System Architecture
  1.3 RAID Systems Challenges
    1.3.1 Performance Challenge - Limited Back-end Network Bandwidth
    1.3.2 Reliability Challenges
  1.4 Research Objectives
  1.5 Thesis Contributions
  1.6 Thesis Overview

2 RAID Systems
  2.1 RAID System Implementation Issues
    2.1.1 Basic RAID Levels
    2.1.2 Reading and Writing Operations
    2.1.3 Data Layout
    2.1.4 Reconstruction Algorithms
  2.2 Physical Components of RAID Systems
    2.2.1 Hard Disk Drives
    2.2.2 RAID Controller
    2.2.3 Back-end Networks
    2.2.4 Shelf-enclosures
  2.3 Summary

3 Related Work
  3.1 Performance Modelling and Evaluation
    3.1.1 RAID System Performance
    3.1.2 FC Network Performance
  3.2 RAID System Reliability Study
    3.2.1 RAID System Reliability Analysis
    3.2.2 Improving System Reliability
  3.3 Parity Declustering Data Layout Design
    3.3.1 BIBD and BCBD
    3.3.2 PRIME and RELPR
    3.3.3 Nearly Random Permutation Based Mapping
    3.3.4 Permutation Development Data Layout
    3.3.5 String Parity Declustering Data Layout
  3.4 Distributed Hot Sparing
  3.5 Simulation of Storage Systems
    3.5.1 Simulation Models of RAID System
    3.5.2 Simulation Models of FC Network
  3.6 Summary

4 The SIMRAID Simulation Model
  4.1 HASE
    4.1.1 Overview of HASE System
    4.1.2 HASE Entity and Hierarchical Design
    4.1.3 The HASE++ DES Engine
    4.1.4 Event Scheduling and Time Advancing
    4.1.5 Extensible Clock Mechanism
  4.2 SIMRAID Model Design
    4.2.1 Overview of SIMRAID
    4.2.2 Transport Level Abstraction Techniques
    4.2.3 Framework for Modelling Heterogeneous Systems
Recommended publications
  • Copy on Write Based File Systems Performance Analysis and Implementation
Copy On Write Based File Systems Performance Analysis And Implementation

Sakis Kasampalis, Kongens Lyngby 2010, IMM-MSC-2010-63
Technical University of Denmark, Department of Informatics, Building 321, DK-2800 Kongens Lyngby, Denmark
Phone +45 45253351, Fax +45 45882673, [email protected], www.imm.dtu.dk

Abstract

In this work I am focusing on Copy On Write based file systems. Copy On Write is used on modern file systems for providing (1) metadata and data consistency using transactional semantics, (2) cheap and instant backups using snapshots and clones.

This thesis is divided into two main parts. The first part focuses on the design and performance of Copy On Write based file systems. Recent efforts aiming at creating a Copy On Write based file system are ZFS, Btrfs, ext3cow, Hammer, and LLFS. My work focuses only on ZFS and Btrfs, since they support the most advanced features. The main goals of ZFS and Btrfs are to offer a scalable, fault tolerant, and easy to administrate file system. I evaluate the performance and scalability of ZFS and Btrfs. The evaluation includes studying their design and testing their performance and scalability against a set of recommended file system benchmarks.

Most computers are already based on multi-core and multiple processor architectures. Because of that, the need for using concurrent programming models has increased. Transactions can be very helpful for supporting concurrent programming models, which ensure that system updates are consistent. Unfortunately, the majority of operating systems and file systems either do not support transactions at all, or they simply do not expose them to the users.
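As a conceptual illustration of the snapshot and clone behaviour described above, the following toy sketch (not how ZFS or Btrfs actually implement it) shows copy-on-write at the level of a simple block map: a snapshot is just a cheap copy of the map, and later writes allocate new blocks instead of overwriting shared ones.

```python
# Toy copy-on-write block store; a conceptual sketch only, not the actual
# design of ZFS or Btrfs.
class CowStore:
    def __init__(self):
        self.blocks = {}      # block_id -> data
        self.block_map = {}   # logical address -> block_id
        self.next_id = 0

    def write(self, addr, data):
        # Never overwrite in place: allocate a fresh block and repoint the map.
        self.blocks[self.next_id] = data
        self.block_map[addr] = self.next_id
        self.next_id += 1

    def snapshot(self):
        # A snapshot is just a copy of the block map; the data blocks are shared.
        return dict(self.block_map)

    def read(self, addr, block_map=None):
        bm = self.block_map if block_map is None else block_map
        return self.blocks[bm[addr]]

store = CowStore()
store.write(0, "v1")
snap = store.snapshot()
store.write(0, "v2")                          # new block; the snapshot still sees "v1"
print(store.read(0), store.read(0, snap))     # -> v2 v1
```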
  • VIA RAID Configurations
VIA RAID configurations

The motherboard includes a high performance IDE RAID controller integrated in the VIA VT8237R southbridge chipset. It supports RAID 0, RAID 1 and JBOD with two independent Serial ATA channels.

RAID 0 (called Data striping) optimizes two identical hard disk drives to read and write data in parallel, interleaved stacks. Two hard disks perform the same work as a single drive but at a sustained data transfer rate double that of a single disk alone, thus improving data access and storage. Use of two new identical hard disk drives is required for this setup.

RAID 1 (called Data mirroring) copies and maintains an identical image of data from one drive to a second drive. If one drive fails, the disk array management software directs all applications to the surviving drive, as it contains a complete copy of the data in the other drive. This RAID configuration provides data protection and increases fault tolerance for the entire system. Use two new drives, or use an existing drive and a new drive, for this setup. The new drive must be of the same size or larger than the existing drive.

JBOD (Spanning) stands for Just a Bunch of Disks and refers to hard disk drives that are not yet configured as a RAID set. This configuration stores data across multiple disks that appear as a single disk on the operating system. Spanning does not deliver any advantage over using separate disks independently and does not provide fault tolerance or other RAID performance benefits.

If you use either the Windows® XP or Windows® 2000 operating system (OS), first copy the RAID driver from the support CD to a floppy disk before creating RAID configurations.
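The parallel, interleaved access of RAID 0 described above amounts to a simple address translation; a minimal sketch follows (the two-disk, 64 KiB stripe-unit configuration is an arbitrary example, not something specific to the VT8237R):

```python
# Minimal RAID 0 address-translation sketch; the 2-disk, 64 KiB stripe-unit
# configuration is an illustrative assumption.
NUM_DISKS = 2
STRIPE_UNIT = 64 * 1024   # bytes per stripe unit

def raid0_map(logical_offset: int):
    """Map a logical byte offset to (disk index, physical byte offset)."""
    unit = logical_offset // STRIPE_UNIT
    within = logical_offset % STRIPE_UNIT
    disk = unit % NUM_DISKS
    physical = (unit // NUM_DISKS) * STRIPE_UNIT + within
    return disk, physical

print(raid0_map(0))            # -> (0, 0)
print(raid0_map(64 * 1024))    # -> (1, 0)
print(raid0_map(128 * 1024))   # -> (0, 65536)
```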
  • Building Reliable Massive Capacity SSDs Through a Flash Aware RAID-Like Protection †
Applied Sciences (Article)

Building Reliable Massive Capacity SSDs through a Flash Aware RAID-Like Protection †

Jaeho Kim 1 and Jung Kyu Park 2,*
1 Department of Aerospace and Software Engineering & Engineering Research Institute, Gyeongsang National University, Jinju 52828, Korea; [email protected]
2 Department of Computer Software Engineering, Changshin University, Changwon 51352, Korea
* Correspondence: [email protected]
† This paper is an extended version of a paper published in the IEEE International Conference on Consumer Electronics (ICCE) 2020, Las Vegas, NV, USA, 4-6 January 2020.

Received: 14 November 2020; Accepted: 16 December 2020; Published: 21 December 2020

Abstract: The demand for mass storage devices has become an inevitable consequence of the explosive increase in data volume. The three-dimensional (3D) vertical NAND (V-NAND) and quad-level cell (QLC) technologies rapidly accelerate the capacity increase of flash memory based storage systems, such as SSDs (Solid State Drives). Massive capacity SSDs adopt dozens or hundreds of flash memory chips in order to implement large capacity storage. However, employing such a large number of flash chips increases the error rate in SSDs. A RAID-like technique inside an SSD has been used in a variety of commercial products, along with various studies, in order to protect user data. With the advent of new types of massive storage devices, studies on the design of RAID-like protection techniques for such huge capacity SSDs are important and essential. In this paper, we propose a massive SSD-Aware Parity Logging (mSAPL) scheme that protects against n failures at the same time in a stripe, where n is the protection strength specified by the user.
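RAID-like protection inside an SSD boils down to keeping parity over a stripe of pages spread across flash chips, so a failed chip's page can be rebuilt from the rest. A minimal single-parity sketch follows (this is not the mSAPL algorithm itself, which generalises to n simultaneous failures):

```python
# Single-parity (RAID-4/5 style) protection over a stripe of flash pages.
# Simplified sketch; the paper's mSAPL scheme supports n failures per stripe.
from functools import reduce

def xor_pages(pages):
    """XOR corresponding bytes of equal-length pages."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*pages))

data_pages = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # pages on 3 flash chips
parity = xor_pages(data_pages)                          # stored on a 4th chip

# Chip 1 fails: rebuild its page from the surviving pages plus parity.
rebuilt = xor_pages([data_pages[0], data_pages[2], parity])
assert rebuilt == data_pages[1]
```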
  • Rethinking RAID for SSD Reliability
Differential RAID: Rethinking RAID for SSD Reliability

Asim Kadav, University of Wisconsin, Madison, WI, [email protected]
Mahesh Balakrishnan, Microsoft Research Silicon Valley, Mountain View, CA, [email protected]
Vijayan Prabhakaran, Microsoft Research Silicon Valley, Mountain View, CA, [email protected]
Dahlia Malkhi, Microsoft Research Silicon Valley, Mountain View, CA, [email protected]

ABSTRACT

Deployment of SSDs in enterprise settings is limited by the low erase cycles available on commodity devices. Redundancy solutions such as RAID can potentially be used to protect against the high Bit Error Rate (BER) of aging SSDs. Unfortunately, such solutions wear out redundant devices at similar rates, inducing correlated failures as arrays age in unison. We present Diff-RAID, a new RAID variant that distributes parity unevenly across SSDs to create age disparities within arrays. By doing so, Diff-RAID balances the high BER of old SSDs against the low BER of young SSDs. Diff-RAID provides much greater reliability for SSDs compared to RAID-4 and RAID-5 for the same space overhead, and offers a trade-off curve between throughput and reliability.

…sult, a write-intensive workload can wear out the SSD within months. Also, this erasure limit continues to decrease as MLC devices increase in capacity and density. As a consequence, the reliability of MLC devices remains a paramount concern for its adoption in servers [4].

In this paper, we explore the possibility of using device-level redundancy to mask the effects of aging on SSDs. Clustering options such as RAID can potentially be used to tolerate the higher BERs exhibited by worn out SSDs. However, these techniques do not automatically provide adequate protection for aging SSDs; by balancing write load across devices, solutions such as RAID-5 cause all SSDs to wear out at ap-
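The core idea of uneven parity placement can be sketched as choosing each stripe's parity device from a weight table; the weights below are made-up illustration values, not the distribution derived in the paper:

```python
# Sketch of age-skewed parity placement in the spirit of Diff-RAID.
# The weight vector is illustrative only; the paper derives its own shares.

def pick_parity_device(stripe_id, weights):
    """Choose a parity device so that device i's long-run share of parity
    stripes is proportional to weights[i] (deterministic round-robin)."""
    slots = [dev for dev, w in enumerate(weights) for _ in range(w)]
    return slots[stripe_id % len(slots)]

weights = [7, 1, 1, 1]   # device 0 absorbs most parity and ages fastest
counts = [0, 0, 0, 0]
for stripe in range(1000):
    counts[pick_parity_device(stripe, weights)] += 1
print(counts)   # -> [700, 100, 100, 100]
```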
  • Disk Array Data Organizations and RAID
Guest Lecture for 15-440: Disk Array Data Organizations and RAID
October 2010, Greg Ganger

Plan for today
• Why have multiple disks? Storage capacity, performance capacity, reliability
• Load distribution problem and approaches: disk striping
• Fault tolerance: replication, parity-based protection
• "RAID" and the Disk Array Matrix
• Rebuild

Why multi-disk systems?
• A single storage device may not provide enough storage capacity, performance capacity, reliability
• So, what is the simplest arrangement?

Just a bunch of disks (JBOD)
• A0 B0 C0 D0 / A1 B1 C1 D1 / A2 B2 C2 D2 / A3 B3 C3 D3
• Yes, it's a goofy name; industry really does sell "JBOD enclosures"

Disk Subsystem Load Balancing
• I/O requests are almost never evenly distributed: some data is requested more than other data, depending on the apps, usage, time, …
• What is the right data-to-disk assignment policy?
• Common approach: fixed data placement. Your data is on disk X, period! For good reasons too: you bought it or you're paying more …
• Fancy: dynamic data placement. If some of your files are accessed a lot, the admin (or even system) may separate the "hot" files across multiple disks. In this scenario, entire file systems (or even files) are manually moved by the system admin to specific disks
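Parity-based protection and rebuild, listed in the plan above, fit in a few lines: the failed disk's block in each stripe is the XOR of the surviving data blocks and the parity block. A generic RAID-5-style sketch (not tied to the lecture's own examples):

```python
# Generic RAID-5 rebuild sketch: regenerate a failed disk's block in a stripe
# by XOR-ing the surviving data blocks with the parity block.
def rebuild_block(surviving_blocks):
    """surviving_blocks: the stripe's blocks from every disk except the failed
    one (remaining data blocks plus the parity block)."""
    out = bytearray(len(surviving_blocks[0]))
    for block in surviving_blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"\xaa\x01", b"\x0f\x0f", b"\x55\x10"
parity = rebuild_block([d0, d1, d2])          # parity is also just the XOR
assert rebuild_block([d0, d2, parity]) == d1  # the disk holding d1 has failed
```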
  • Identify Storage Technologies and Understand RAID
LESSON 4.1_4.2: 98-365 Windows Server Administration Fundamentals

Identify Storage Technologies and Understand RAID

Lesson Overview
In this lesson, you will learn:
o Local storage options
o Network storage options
o Redundant Array of Independent Disks (RAID) options

Anticipatory Set
List three different RAID configurations. Which of these three bus types has the fastest transfer speed?
o Parallel ATA (PATA)
o Serial ATA (SATA)
o USB 2.0

Local Storage Options
Local storage options can range from a simple single disk to a Redundant Array of Independent Disks (RAID). Local storage options can be broken down into bus types:
o Serial Advanced Technology Attachment (SATA)
o Integrated Drive Electronics (IDE, now called Parallel ATA or PATA)
o Small Computer System Interface (SCSI)
o Serial Attached SCSI (SAS)

Local Storage Options
SATA drives have taken the place of the traditional PATA drives. SATA has several advantages over PATA:
o Reduced cable bulk and cost
o Faster and more efficient data transfer
o Hot-swapping technology

Local Storage Options (continued)
SAS drives have taken the place of the traditional SCSI and Ultra SCSI drives in server class machines. SAS has several
  • Architectures and Algorithms for On-Line Failure Recovery in Redundant Disk Arrays
Architectures and Algorithms for On-Line Failure Recovery in Redundant Disk Arrays

Draft copy submitted to the Journal of Distributed and Parallel Databases. A revised copy is published in this journal, vol. 2, no. 3, July 1994.

Mark Holland, Department of Electrical and Computer Engineering, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213-3890, (412) 268-5237, [email protected]
Garth A. Gibson, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213-3890, (412) 268-5890, [email protected]
Daniel P. Siewiorek, School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213-3890, (412) 268-2570, [email protected]

Abstract

The performance of traditional RAID Level 5 arrays is, for many applications, unacceptably poor while one of its constituent disks is non-functional. This paper describes and evaluates mechanisms by which this disk array failure-recovery performance can be improved. The two key issues addressed are the data layout, the mapping by which data and parity blocks are assigned to physical disk blocks in an array, and the reconstruction algorithm, which is the technique used to recover data that is lost when a component disk fails.

The data layout techniques this paper investigates are variations on the declustered parity organization, a derivative of RAID Level 5 that allows a system to trade some of its data capacity for improved failure-recovery performance. Parity declustering improves the failure-mode performance of an array significantly, and a parity-declustered architecture is preferable to an equivalent-size multiple-group RAID Level 5 organization in environments where failure-recovery performance is important.
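A declustered parity layout spreads each stripe over only a subset of the array's disks, so a rebuild draws a little work from every surviving disk rather than a lot from a few. The sketch below uses the simplest possible construction, a complete block design; it is illustrative only and is not one of the specific layouts evaluated in the paper:

```python
# Minimal parity-declustered layout: stripes of width k laid out over n > k
# disks by cycling through every k-subset of disks (a complete block design).
from itertools import combinations, cycle

def declustered_layout(num_disks, stripe_width, num_stripes):
    """Return, for each stripe, the tuple of disks holding its units."""
    subsets = cycle(combinations(range(num_disks), stripe_width))
    return [next(subsets) for _ in range(num_stripes)]

layout = declustered_layout(num_disks=5, stripe_width=3, num_stripes=10)
# C(5,3) = 10 subsets: each disk appears in exactly 6 of the 10 stripes, so a
# rebuild spreads its read load evenly over the 4 surviving disks.
for stripe_id, disks in enumerate(layout):
    print(stripe_id, disks)
```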
  • Summer Student Project Report
Summer Student Project Report

Dimitris Kalimeris, National and Kapodistrian University of Athens
June - September 2014

Abstract

This report will outline two projects that were done as part of a three-month summer internship at CERN. In the first project we dealt with the Worldwide LHC Computing Grid (WLCG) and its information system. The information system currently conforms to a schema called GLUE and it is evolving towards a new version: GLUE2. The aim of the project was to develop and adapt the current information system of the WLCG, used by the Large Scale Storage Systems at CERN (CASTOR and EOS), to the new GLUE2 schema. During the second project we investigated different RAID configurations so that we can get a performance boost from CERN's disk systems in the future. RAID 1, which is currently in use, is no longer an option because of limited performance and high cost. We tried to discover RAID configurations that will improve performance and simultaneously decrease cost.

1 Information-provider scripts for GLUE2

1.1 Introduction

The Worldwide LHC Computing Grid (WLCG, see also 1) is an international collaboration consisting of a grid-based computer network infrastructure incorporating over 170 computing centres in 36 countries. It was originally designed by CERN to handle the large data volume produced by the Large Hadron Collider (LHC) experiments. This data is stored at CERN Storage Systems, which are responsible for keeping and making available more than 100 Petabytes (10^5 Terabytes) of data to the physics community. The data is also replicated from CERN to the main computing centres within the WLCG.
  • RAID Configuration Guide Motherboard E14794 Revised Edition V4 August 2018
RAID Configuration Guide

Motherboard E14794, Revised Edition V4, August 2018

Copyright © 2018 ASUSTeK COMPUTER INC. All Rights Reserved.

No part of this manual, including the products and software described in it, may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any language in any form or by any means, except documentation kept by the purchaser for backup purposes, without the express written permission of ASUSTeK COMPUTER INC. (“ASUS”). Product warranty or service will not be extended if: (1) the product is repaired, modified or altered, unless such repair, modification or alteration is authorized in writing by ASUS; or (2) the serial number of the product is defaced or missing.

ASUS PROVIDES THIS MANUAL “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OR CONDITIONS OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL ASUS, ITS DIRECTORS, OFFICERS, EMPLOYEES OR AGENTS BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES (INCLUDING DAMAGES FOR LOSS OF PROFITS, LOSS OF BUSINESS, LOSS OF USE OR DATA, INTERRUPTION OF BUSINESS AND THE LIKE), EVEN IF ASUS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES ARISING FROM ANY DEFECT OR ERROR IN THIS MANUAL OR PRODUCT. SPECIFICATIONS AND INFORMATION CONTAINED IN THIS MANUAL ARE FURNISHED FOR INFORMATIONAL USE ONLY, AND ARE SUBJECT TO CHANGE AT ANY TIME WITHOUT NOTICE, AND SHOULD NOT BE CONSTRUED AS A COMMITMENT BY ASUS. ASUS ASSUMES NO RESPONSIBILITY OR LIABILITY FOR ANY ERRORS OR INACCURACIES THAT MAY APPEAR IN THIS MANUAL, INCLUDING THE PRODUCTS AND SOFTWARE DESCRIBED IN IT.
  • 1 Configuring SATA Controllers
RAID Levels
• RAID 0: minimum of 2 hard drives; array capacity = number of hard drives * size of the smallest drive; no fault tolerance.
• RAID 1: 2 hard drives; array capacity = size of the smallest drive; fault tolerant.
• RAID 5: minimum of 3 hard drives; array capacity = (number of hard drives - 1) * size of the smallest drive; fault tolerant.
• RAID 10: minimum of 4 hard drives; array capacity = (number of hard drives / 2) * size of the smallest drive; fault tolerant.

To create a RAID set, follow the steps below:
A. Install SATA hard drive(s) in your computer.
B. Configure SATA controller mode in BIOS Setup.
C. Configure a RAID array in RAID BIOS. (Note 1)
D. Install the SATA RAID/AHCI driver and operating system.

Before you begin, please prepare the following items:
• At least two SATA hard drives or M.2 SSDs (Note 2) (to ensure optimal performance, it is recommended that you use two hard drives with identical model and capacity). (Note 3)
• A Windows setup disk.
• Motherboard driver disk.
• A USB thumb drive.

1 Configuring SATA Controllers

A. Installing hard drives
Connect the SATA signal cables to SATA hard drives and the Intel® Chipset controlled SATA ports (SATA3 0~5) on the motherboard. Then connect the power connectors from your power supply to the hard drives. Or install your M.2 SSD(s) in the M.2 connector(s) on the motherboard.

(Note 1) Skip this step if you do not want to create a RAID array on the SATA controller.
(Note 2) An M.2 PCIe SSD cannot be used to set up a RAID set either with an M.2 SATA SSD or a SATA hard drive.
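The array-capacity rules listed above can be checked with a short worked example (the four 1 TB drives are arbitrary illustration values):

```python
# Worked example of the array-capacity rules above; drive sizes in TB are
# arbitrary illustration values.
def array_capacity(level, drive_sizes_tb):
    n, smallest = len(drive_sizes_tb), min(drive_sizes_tb)
    if level == "RAID 0":
        return n * smallest
    if level == "RAID 1":
        return smallest
    if level == "RAID 5":
        return (n - 1) * smallest
    if level == "RAID 10":
        return (n // 2) * smallest
    raise ValueError(level)

drives = [1.0, 1.0, 1.0, 1.0]   # four 1 TB drives
for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 10"):
    print(level, array_capacity(level, drives), "TB")
# -> RAID 0: 4.0 TB, RAID 1: 1.0 TB, RAID 5: 3.0 TB, RAID 10: 2.0 TB
```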
  • Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques
Comparative Analysis of Distributed and Parallel File Systems’ Internal Techniques

Viacheslav Dubeyko

Content
1 TERMINOLOGY AND ABBREVIATIONS
2 INTRODUCTION
3 COMPARATIVE ANALYSIS METHODOLOGY
4 FILE SYSTEM FEATURES CLASSIFICATION
  4.1 Distributed File Systems
    4.1.1 HDFS
    4.1.2 GFS (Google File System)
    4.1.3 InterMezzo
    4.1.4 CodA
    4.1.5 Ceph
    4.1.6 DDFS
  • RAID Technology
RAID Technology

Reference and Sources:
• Most of the text in this guide has been taken from a copyrighted document of Adaptec, Inc. on its site (www.adaptec.com)
• Perceptive Solutions, Inc.

RAID stands for Redundant Array of Inexpensive (or sometimes "Independent") Disks. RAID is a method of combining several hard disk drives into one logical unit (two or more disks grouped together to appear as a single device to the host system). RAID technology was developed to address the fault-tolerance and performance limitations of conventional disk storage. It can offer fault tolerance and higher throughput levels than a single hard drive or group of independent hard drives. While arrays were once considered complex and relatively specialized storage solutions, today they are easy to use and essential for a broad spectrum of client/server applications.

Redundant Arrays of Inexpensive Disks (RAID)

"KILLS - BUGS - DEAD!" -- TV commercial for RAID bug spray

There are many applications, particularly in a business environment, where there are needs beyond what can be fulfilled by a single hard disk, regardless of its size, performance or quality level. Many businesses can't afford to have their systems go down for even an hour in the event of a disk failure; they need large storage subsystems with capacities in the terabytes; and they want to be able to insulate themselves from hardware failures to any extent possible. Some people working with multimedia files need fast data transfer exceeding what current drives can deliver, without spending a fortune on specialty drives. These situations require that the traditional "one hard disk per system" model be set aside and a new system employed.