MASARYK UNIVERSITY
FACULTY OF INFORMATICS

Analysis and Concept of the New Profile Cluster for the UCN Domain

BACHELOR'S THESIS

Martin Janek

Brno, Spring 2009


Declaration

Hereby I declare that this paper is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during the elaboration of this work are properly cited and listed in complete reference to the due source.

Advisor: Mgr. Pavel Tuček


Acknowledgement

I would like to express my deepest gratitude to my advisor Mgr. Pavel Tuček and consultant Mgr. Ing. Lukáš Rychnovský for their time, guidance and constructive advice.


Abstract

Masaryk University relies on a Microsoft Windows network to enable its users to access their files from any workstation connected to the UCN (University Computer Network) domain. The solution currently in use is soon to be replaced with new hardware. The aim of this thesis is to analyse the clustering options currently available in Windows Server 2008 and suggest the best solution for this purpose.


Keywords

failover cluster, high availability, redundancy, Windows Server 2008, storage performance, Windows domain profile, UCN


Contents

1 Introduction
2 High Availability at the Hardware Level
  2.1 Hardware Redundancy
  2.2 Dynamic Hardware Partitioning
  2.3 RAID
    2.3.1 RAID 0 – Striping
    2.3.2 RAID 1 – Mirroring
    2.3.3 RAID 3 – Bit-Interleaved Parity
    2.3.4 RAID 4 – Block-Interleaved Parity
    2.3.5 RAID 5 – Distributed Block-Interleaved Parity
    2.3.6 RAID 6 – Distributed Block-Interleaved Dual Parity
    2.3.7 Nested RAID Levels
  2.4 Storage Area Network
    2.4.1 Comparison of SAN and NAS
    2.4.2 Fibre Channel
    2.4.3 SAN and High Availability
3 Testing Storage Array Performance
  3.1 Testing Methodology
  3.2 Run One Configuration and Results
  3.3 Run Two Configuration and Results
4 Windows Server 2008 Clustering Options
  4.1 Network Load Balancing
  4.2 Failover Clustering
    4.2.1 How Failover Clustering Works
    4.2.2 Quorum Models
    4.2.3 Multi-site Clustering
    4.2.4 Hyper-V and Failover Clustering
    4.2.5 Cluster Shared Volumes
5 Clustering Solution for Storing UCN Domain Profiles
  5.1 Current Solution
  5.2 New Solution
  5.3 Failure Scenarios
6 Conclusion
Bibliography


List of Tables

3.1 Run One Results – Operations per Second
3.2 Run One Results – Transfer Rate [MBps]
3.3 Run One Results – Average Response Time [ms]
3.4 Run One Results – Maximum Response Time [ms]
3.5 Size of UCN Profile Files
3.6 Run Two Results – Operations per Second
3.7 Run Two Results – Transfer Rate [MBps]
3.8 Run Two Results – Average Response Time [ms]
3.9 Run Two Results – Maximum Response Time [ms]


List of Figures

2.1 Dynamic Hardware Partitioning – a single physical server divided into three hardware partitions. Some components are not in use and are dedicated as spares in case other components fail.
2.2 RAID 0 – Striping
2.3 RAID 1 – Mirroring
2.4 RAID 3 – Bit-Interleaved Parity
2.5 RAID 4 – Block-Interleaved Parity
2.6 RAID 5 – Distributed Block-Interleaved Parity
2.7 RAID 6 – Distributed Block-Interleaved Dual Parity
2.8 Storage Area Network with redundant switches and independent fabrics
2.9 Fibre Channel Arbitrated Loop
2.10 Fibre Channel Switched Fabrics
4.1 Disk Only Quorum Model; one node and the disk can communicate – the cluster runs
4.2 Disk Only Quorum Model; the nodes can communicate but the disk is unavailable – the cluster is offline
4.3 Node Majority Quorum Model; a majority of nodes can communicate – the cluster is online
4.4 Node Majority Quorum Model; quorum cannot be achieved – the cluster is offline
4.5 Node and Disk Majority Quorum Model; a majority of devices can communicate – the cluster is online
4.6 Node and Disk Majority Quorum Model; a majority of devices can communicate – the cluster is online
4.7 Node and Disk Majority Quorum Model; some devices can communicate but a majority is not achieved – the cluster stops
5.1 Current Solution Schematic Diagram
5.2 New Solution Schematic Diagram; note that each node of cluster one uses a single dual-port FC adapter instead of two independent adapters.
5.3 Storage Configuration: each array (16 disks) is divided into two virtual drives (VD). Each virtual drive contains two LUNs – one for the witness disk (W), another for data file systems (1–8).


Chapter 1
Introduction

A vast number of businesses today rely on electronic data exchange. Many mission-critical applications and services of such companies reside on servers. A failure to keep the servers in continuous operation might result in core business services becoming unavailable and the business losing money and reputation. And with competition being only a few clicks away in the global market, the need for high availability has become greater than ever before.

However, ensuring continuous operation of services is problematic. Servers fail despite being made of high-quality components [1]. Therefore it is essential to implement countermeasures should a failure occur. Redundancy is a way of increasing hardware reliability, but it is also necessary to ensure that service operation will not be interrupted even in the case of natural disasters such as fires or earthquakes.

In order to meet these requirements and achieve high availability, businesses implement clustering. Failover clustering and network load balancing are two key concepts when continuous service operation is the goal. Clustered solutions can effectively deal with temporary hardware or software malfunction. Although clustering helps to substantially improve service availability and eliminate single points of failure, it is not meant to replace dedicated fault-tolerant solutions [1]. Instead, clustering is a cost-effective approach which can take advantage of commodity components and leverage existing investments.

Masaryk University relies on a Microsoft Windows network for storing UCN (University Computer Network) domain user profiles [2]. Students and employees accessing Windows-based workstations use a UCN domain login and can access their files from any workstation connected to the UCN domain. The underlying infrastructure is soon to be replaced with new hardware in order to increase reliability and storage space.
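As a concrete (and deliberately simplified) picture of what such a roaming profile setup does, the following Python sketch mimics the logon/logoff cycle, with local temporary directories standing in for the central profile share and two workstations. The paths, the user name and the copy-everything strategy are invented for illustration only; they are not the actual UCN mechanism, which is handled by the Windows roaming profile infrastructure itself.

```python
# Toy model of a roaming profile: the profile lives on a central file server
# and is synchronised to whatever workstation the user logs on to.
# All paths below are temporary stand-ins, not the real UCN configuration.

import shutil
import tempfile
from pathlib import Path

def logon(server_profiles: Path, local_profiles: Path, user: str) -> Path:
    """Copy the user's central profile to the workstation at logon."""
    src = server_profiles / user
    dst = local_profiles / user
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(src, dst)
    return dst

def logoff(server_profiles: Path, local_profiles: Path, user: str) -> None:
    """Write local changes back to the central profile at logoff."""
    src = local_profiles / user
    dst = server_profiles / user
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

if __name__ == "__main__":
    # Stand-ins for the file-server share and two different workstations.
    server = Path(tempfile.mkdtemp(prefix="profiles_"))
    pc1 = Path(tempfile.mkdtemp(prefix="pc1_"))
    pc2 = Path(tempfile.mkdtemp(prefix="pc2_"))

    (server / "jdoe").mkdir()
    (server / "jdoe" / "notes.txt").write_text("created on the server\n")

    # The user logs on to pc1, edits a file, logs off ...
    profile = logon(server, pc1, "jdoe")
    (profile / "notes.txt").write_text("edited on pc1\n")
    logoff(server, pc1, "jdoe")

    # ... and later sees the change from any other workstation.
    profile = logon(server, pc2, "jdoe")
    print((profile / "notes.txt").read_text().strip())   # -> "edited on pc1"
```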
In view of this planned replacement, this Bachelor's thesis aims to achieve two objectives. Firstly, we analyse, test and compare clustering options using the Microsoft Windows Server 2008 operating system. The analysis of each method includes hardware and software requirements, reliability of the hardware and software used, equipment cost, and ease of administration and installation. Secondly, we test the performance of a new storage array in several RAID configurations. Based on the results obtained from the analysis and testing, a suggestion on the most appropriate clustering solution is made.

The characteristics of the disk array that we look at are read/write transfer rate, input/output operations per second (IOPS) and latency. We use the Iometer software [3] to measure these characteristics. We experiment with I/O request lengths according to statistics on the size of the files that are commonly used in UCN domain profiles. We assume that our file server stores a large number of small files (< 10 KB), so IOPS and latency are crucial. We test two RAID configurations, namely RAID 10 and RAID 6, which are convenient because they both provide fault tolerance and do not have a single bottleneck such as a dedicated parity disk [4]. Assuming that the majority of I/O operations are read operations, the otherwise decreased write performance of RAID 6 should not be a problem.

The remainder of this thesis is divided into four chapters. The first of them explains the principles of the hardware mechanisms used in high-availability solutions, namely redundant hardware components, RAID levels and their suitability for our purpose (a file server). In addition, it describes Storage Area Network (SAN) options that can be used with clustering [5]. The second chapter presents the RAID testing methodology and results. The third chapter deals with high availability ensured by the operating system. It compares the clustering options in Windows Server 2008 and presents the outcome of testing these options. This chapter also provides an in-depth explanation of failover clustering. The fourth chapter builds on the previous three and aims to suggest an optimal clustering solution for storing UCN domain profiles. This includes choosing appropriate hardware, suggesting the most efficient way of storing data on the disk array, proposing a clustering model and justifying its suitability for our purpose. The fourth chapter also contains a diagram of the network connections (Ethernet and SAN) necessary for high availability of our solution.


Chapter 2
High Availability at the Hardware Level

2.1 Hardware Redundancy

Servers are made of high-quality components that are often the reason for their higher price. Even though these components provide a prolonged lifetime, it is impossible to ensure 100 percent reliability. That is why many platforms today offer redundant components, which further increase reliability. Redundant hardware can detect a failing component and assign its function to another component. Redundant components typically include power supplies, cooling fans, network interface cards (NICs), network switches, redundant storage, CPUs and memory (which is usually ECC-enabled to further increase error protection) [6]. Some high-end systems provide true fault tolerance by duplicating all their components, including motherboard components [1].
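Two quantitative claims made so far can be checked with simple arithmetic: that duplicating a component sharply raises availability (this section), and that RAID 6's higher write penalty matters less when most operations are reads (Chapter 1). The Python sketch below works through both using commonly quoted rule-of-thumb figures; the per-component availability, per-disk IOPS and read share in the example calls are assumptions chosen for illustration, not measurements from this thesis, and the availability formula assumes failures are independent.

```python
# Back-of-the-envelope arithmetic behind two claims in the text. The figures
# used in the example calls (per-component availability, per-disk IOPS, read
# share) are illustrative assumptions, not measurements from this thesis.

def parallel_availability(availability: float, copies: int) -> float:
    """Availability of a component backed by identical redundant copies.

    Assuming independent failures, the set is down only if every copy is
    down at once: A_total = 1 - (1 - A)^n
    """
    return 1.0 - (1.0 - availability) ** copies


def effective_iops(disks: int, iops_per_disk: float,
                   read_fraction: float, write_penalty: int) -> float:
    """Rule-of-thumb front-end IOPS of a RAID set.

    Every logical write costs `write_penalty` back-end operations
    (commonly quoted: 2 for RAID 10, 6 for RAID 6), so the array's raw
    IOPS budget is shared between reads and amplified writes.
    """
    raw = disks * iops_per_disk
    writes = 1.0 - read_fraction
    return raw / (read_fraction + write_penalty * writes)


if __name__ == "__main__":
    # One power supply at 99% availability versus a redundant pair.
    print(f"single PSU: {parallel_availability(0.99, 1):.4%}")
    print(f"dual PSU:   {parallel_availability(0.99, 2):.4%}")

    # 16 disks at an assumed 150 IOPS each, 80% reads.
    print(f"RAID 10: {effective_iops(16, 150, 0.80, 2):.0f} IOPS")
    print(f"RAID 6:  {effective_iops(16, 150, 0.80, 6):.0f} IOPS")
```

With an 80 percent read share, the RAID 6 estimate stays within a factor of two of the RAID 10 figure while tolerating two disk failures, which is the intuition behind the assumption made in the introduction.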