Striping Doesn't Scale: How to Achieve Scalability for Continuous Media Servers with Replication


Cheng-Fu Chou, Department of Computer Science, University of Maryland at College Park
Leana Golubchik, University of Maryland Institute for Advanced Computer Studies and Department of Computer Science, University of Maryland at College Park
John C. S. Lui, Department of Computer Science and Engineering, The Chinese University of Hong Kong

Abstract

Multimedia applications place high demands for QoS, performance, and reliability on storage servers and communication networks. These often stringent requirements make the design of cost-effective and scalable continuous media (CM) servers difficult. In particular, the choice of data placement techniques can have a significant effect on the scalability of the CM server and its ability to utilize resources efficiently. In the recent past, a great deal of work has focused on wide data striping as a technique which implicitly solves load balancing problems, although it does suffer from multiple shortcomings. Another approach to dealing with load imbalance problems is replication. The main focus of this paper is a study of the scalability characteristics of CM servers as a function of the tradeoffs between striping and replication. More specifically, striping is a good approach to load balancing, while replication is a good approach to isolating nodes from being dependent on other system resources. The appropriate compromise between the degree of striping and the degree of replication is key to the design of a scalable CM server. This is the topic of our work.

Keywords: continuous media servers, distributed systems, performance evaluation, scalability, end-to-end design

Technical areas: Multimedia and Digital Libraries, Distributed Systems, Architecture

(The work of Cheng-Fu Chou and Leana Golubchik was supported in part by NSF CAREER grant CCR.)

Introduction

With the rapid growth of multimedia applications, there is a growing need for large-scale continuous media (CM) servers that can meet user demand. Multimedia applications such as video stream delivery, digital libraries, distance learning systems, and so on place high demands for quality-of-service (QoS), performance, and reliability on storage servers and communication networks. These often stringent requirements make the end-to-end design of cost-effective and scalable continuous media servers difficult. The scalability of a CM server's architecture depends on its ability to:

- expand as user demand and data sizes grow;
- maintain performance characteristics under growth or reconfiguration;
- maintain performance characteristics under degradation of system resources, which can be caused by losses in network and storage capacities.

In particular, in a continuous media server, the choice of data placement techniques can have a significant effect on the scalability of the system and its ability to utilize resources efficiently. (Here, by data placement we mean the decision of which object, or fraction of an object, to place on which disk or disk cluster; this does not refer to data placement issues within a single disk.) Existing data placement techniques, in conjunction with scheduling algorithms, address two major inefficiencies in such systems: (a) various overheads in reading data from storage devices, e.g., due to disk arm movement, and (b) load imbalance, e.g., due to skews in data access patterns. In this work we focus on the latter issue, and specifically on its bearing on the scalability characteristics of a distributed CM server. Due to the enormous storage and I/O bandwidth requirements of multimedia data, a CM server is expected to have a very large disk farm. Thus we must necessarily consider designs which contain multiple disk clusters and processing nodes, i.e., we must consider distributed designs. An important consideration, then, is the placement of objects on the nodes of the CM server. As in traditional database systems, data placement on a distributed storage subsystem directly affects the load balancing characteristics of that system.

In the recent past, a great deal of research work has focused on wide data striping as a data placement technique for designing continuous media servers. By wide data striping we mean that each object is striped across all the disks of the system. Recall that the potential load imbalance is largely due to skews in data access patterns, which, without data striping, could result in high loads on the disks containing the more popular objects while the disks containing less popular objects may be idling. Moreover, the problem is exacerbated by the fact that access patterns change over time, i.e., the popularity of a particular object is a function of time. Thus, an advantage of wide data striping is that it implicitly achieves load balance by decoupling an object's storage from its bandwidth requirements.
However, wide data striping also suffers from several shortcomings:

- It is not practical to assume that a system can be constructed from homogeneous disks: as the system grows, or experiences faults and hence disk replacement, we are forced to use disks with different transfer and storage capacity characteristics, and having to stripe objects across heterogeneous disks would lead to further complications.
- An appropriate choice of a striping unit, the object size, and the communications network infrastructure dictate an upper bound on the number of disks over which an object can be striped, beyond which replication of objects is needed to increase the number of simultaneous users. To the best of our knowledge, in implementations described in the literature, striping is performed over at most a few tens of homogeneous disks (in fact, we are not aware of any large-scale implementation in the current literature that utilizes heterogeneous disks). Note that delivery of relatively short continuous media objects is of use to many important applications, including digital libraries, news-on-demand systems, and so on.
- Due to the continuity constraints, some form of synchronization in the delivery of a single object from multiple nodes must be considered. The need for synchronization arises from the fact that different fractions of an object are delivered from different nodes at different times during the object's display; hence some form of coordination between these nodes, and perhaps the client, is required in order to present a coherent display of the object.
- As user demand and data sizes grow, and hence the system requires more storage and disk bandwidth capacity, the resulting expansion of the disk subsystem results in re-striping of all the objects.
- Due to the need for communication of data between the nodes over which an object is striped, the capacity of the communication network limits the performance of the distributed CM server. This limitation directly affects the scalability of the CM server and is one of the main issues we investigate in this work (see the back-of-the-envelope sketch after this list).
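The last shortcoming lends itself to a rough calculation. Under wide striping, each active stream reads only about 1/N of its data from the node serving the client and the remaining (N-1)/N fraction from remote nodes over the interconnect; under node-local replication, a stream is served entirely from one node. The sketch below is our own back-of-the-envelope model, not one from the paper; the 4 Mb/s stream rate and 100 streams of per-node capacity are assumed numbers:

```python
def interconnect_load_mbps(num_nodes: int, streams: int,
                           stream_rate_mbps: float = 4.0) -> float:
    """Aggregate inter-node traffic under wide striping.

    Each stream reads 1/num_nodes of its data locally and the remaining
    (num_nodes - 1)/num_nodes from remote nodes; that remote fraction is
    what the communication network must carry.
    """
    remote_fraction = (num_nodes - 1) / num_nodes
    return streams * stream_rate_mbps * remote_fraction

for n in (2, 8, 32):
    # Hold per-node service capacity fixed at 100 streams as the system grows.
    print(n, round(interconnect_load_mbps(n, streams=100 * n)))
# 2 nodes -> 400 Mb/s, 8 nodes -> 2800 Mb/s, 32 nodes -> 12400 Mb/s:
# network demand grows nearly linearly with system size, whereas a
# replication-based design keeps each stream's traffic local to one node.
```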
Another approach to dealing with the load imbalance problem arising from skews in data access patterns is replication, i.e., creating a sufficient number of copies of a popular object to meet the demand for that object. Specifically, we consider a hybrid approach where, instead of striping each object across all the nodes of the system, we constrain the striping to a single node and replicate popular objects on several nodes in order to provide sufficient bandwidth capacity to service the demand for those objects (see the sketch at the end of this section). Of course, a disadvantage of this approach is the need for additional storage space. Furthermore, techniques are needed for adjusting the number of replicas as the access patterns change. Some of these issues are addressed in prior work in the context of workloads with relatively infrequent changes in object access patterns, as well as in our previous work, where we propose dynamic replication techniques in the context of more frequent changes in data access patterns. In this paper we improve on our previous work on dynamic replication techniques. However, the main focus of this paper is on the tradeoffs between striping and replication, which are as follows:

- In a small-scale CM server, where all disks are assumed to be connected to a single node, data striping can provide better performance characteristics than replication, because of its ability to deal with load imbalance problems without the need for additional storage space and without significant networking constraints.
- However, in a large-scale CM server, data striping results in a need for significant communication network capacity, which can lead to poor scalability characteristics and high costs.

Essentially, striping is a good approach to load balancing, while replication is a good approach to isolating nodes from being dependent on other, non-local system resources. That is, the wider we stripe in a distributed CM system, the more dependent we are on the availability of network capacity. Furthermore, replication has the benefits of (a) increased reliability, in terms of a longer mean time to loss of data from the disk subsystem, and (b) the ability to cope with a lack of network resources, including network partitioning. The downside of replication is that it increases storage space requirements, and hence cost. However, as storage costs decrease fairly rapidly and the need for scalability grows, replication becomes a more attractive technique. In summary, the appropriate compromise between the degree of striping and the degree of replication is key to the design of a scalable distributed CM server. This is the topic of our work.
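Returning to the hybrid scheme above: as a minimal illustration of the bandwidth side of the tradeoff (a sketch under assumed numbers, not the paper's replication policy), the number of copies a popular object needs is driven by its expected concurrent demand relative to what one node can deliver:

```python
import math

def replicas_needed(expected_streams: int, streams_per_node: int) -> int:
    """Copies of an object required so its demand fits within node bandwidth.

    Each replica is striped only within its own node, so one replica can
    serve at most streams_per_node concurrent viewers of this object.
    """
    return max(1, math.ceil(expected_streams / streams_per_node))

# A hot object: 250 expected concurrent viewers, and each node can sustain
# 100 streams (both numbers assumed for illustration).
print(replicas_needed(250, 100))  # 3 replicas
# As the object's popularity decays over time, a dynamic replication policy
# would de-allocate copies and reclaim the extra storage -- the cost side
# of the striping/replication tradeoff studied in this paper.
```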