On Reducing Latency in Geo-Distributed Systems Through State Partitioning and Caching


Université de Neuchâtel
Faculté des Sciences
Institut d'Informatique

Thesis presented by Raluca Halalai to the Faculté des Sciences for the degree of Docteur ès Sciences, accepted on the recommendation of the jury:

Prof. Pascal Felber, thesis director, Université de Neuchâtel, Switzerland
Prof. Philippe Cudré-Mauroux, Université de Fribourg, Switzerland
Prof. Fernando Pedone, Université de la Suisse italienne, Switzerland
Prof. Etienne Rivière, Université catholique de Louvain, Belgium
Dr. Valerio Schiavoni, Université de Neuchâtel, Switzerland
Prof. François Taïani, Université de Rennes 1, France

Defended on 14 May 2018. The Faculté des Sciences of the Université de Neuchâtel granted the imprimatur for this thesis on 15 May 2018 (the Dean, Prof. R. Bshary).

Acknowledgements

This work would not have been possible without the people who have supported me throughout this journey. I am deeply grateful to my advisor, Pascal Felber, for his patience and wisdom, and for always encouraging me while also giving me the freedom to become independent in pursuing my goals. I feel very fortunate to have worked with him. I thank my outstanding thesis committee: Philippe Cudré-Mauroux, Fernando Pedone, Etienne Rivière, Valerio Schiavoni, and François Taïani. Their insightful comments and advice helped crystallize the vision of this thesis. All the work presented here is the result of collaboration with many incredibly bright people, to whom I am thankful for the many insightful discussions that brought clarity to the most difficult problems. Last but not least, I thank my family and friends for their patience, support, and encouragement throughout these years.

Résumé

Modern distributed systems are increasingly large and are deployed across several geographic regions. The end goal of such systems is to provide services to their users with high availability and good performance. This thesis proposes techniques to reduce the latency perceived by users.

First, we consider systems that use state machine replication to guarantee data consistency. State machine replication copies a service to several locations and coordinates the replicas so as to serialize all commands issued by clients. Coordination at large scale has a significant impact on system performance. We study how state partitioning can help reduce latency without affecting the semantics of the system. We formalize the conditions under which a service is partitionable and propose a generic state partitioning approach. We partition a geo-distributed coordination service and show that it outperforms its non-partitioned counterpart while offering the same guarantees. We then augment our system with dynamic state partitioning, which adapts to the workload; our evaluation shows that dynamic state partitioning has a positive impact on the performance of our file system.

Finally, we study the tradeoff between latency and storage cost in storage systems that use erasure coding. To improve read performance, such systems place caches close to clients; however, traditional caching policies are not designed for the particularities of erasure coding and are poorly suited to this scenario. We propose an algorithm for caching erasure-coded data and use it to implement a caching system based on Memcached. Our algorithm reconfigures the cache according to the workload and can outperform traditional caching policies such as Least Recently Used and Least Frequently Used.

Keywords: geo-distributed systems, consistency, state partitioning, caching, erasure coding

Abstract

Modern distributed systems are increasingly large, spanning many datacenters in different geographic regions. The end goal of such systems is to provide services to their users with high availability and good performance. This thesis proposes approaches to reduce the access latency perceived by end users.

First, we focus on systems that rely on the state machine replication approach in order to guarantee consistency. State machine replication copies a service at multiple physical locations and coordinates replicas, possibly from distant regions, in order to serialize all requests issued by clients. Coordination at large scale has a significant impact on the performance of the system. We investigate how state partitioning can help reduce latency without breaking the semantics of the system. We formalize conditions under which a service is partitionable and propose a generic state partitioning approach. We build a partitioned geo-distributed coordination service and show that it outperforms its non-partitioned counterpart, while providing the same guarantees. We further apply state partitioning in order to build a geo-distributed file system, which performs comparably to other de facto industry implementations. We augment our system with dynamic state partitioning, which moves files among data centers in order to adapt to workload patterns. Our experiments show that performing state partitioning on the fly has a positive impact on the performance of the file system when the workload exhibits access locality.
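To make the partitioning idea concrete, the sketch below shows one way a partitioned replicated service could route commands: each key hashes to a home partition, and a command that touches a single partition is ordered only within that partition, avoiding wide-area coordination. This is a minimal illustration under assumed names (PartitionedService, Partition, execute), not the thesis's actual protocol; the per-partition consensus among replicas is deliberately elided.

    # Minimal sketch of state partitioning for a replicated service, assuming
    # each command declares the key it touches. All names are illustrative.
    from hashlib import sha1

    class Partition:
        """One independently replicated shard of the service state."""
        def __init__(self, pid):
            self.pid = pid
            self.log = []    # stand-in for the partition's replicated command log
            self.state = {}

        def apply(self, cmd):
            # In a real deployment, appending to the log would require agreement
            # only among this partition's replicas, not among all replicas.
            self.log.append(cmd)
            op, key, *val = cmd
            if op == "put":
                self.state[key] = val[0]
            return self.state.get(key)

    class PartitionedService:
        def __init__(self, n_partitions=3):
            self.partitions = [Partition(i) for i in range(n_partitions)]

        def home(self, key):
            # Deterministic key-to-partition mapping, so every client routes a
            # given key to the same partition without global coordination.
            h = int(sha1(key.encode()).hexdigest(), 16)
            return self.partitions[h % len(self.partitions)]

        def execute(self, cmd):
            op, key, *rest = cmd
            return self.home(key).apply(cmd)

    svc = PartitionedService()
    svc.execute(("put", "x", 1))    # single-partition command: no cross-partition ordering
    assert svc.execute(("get", "x")) == 1

Commands spanning several partitions would still require cross-partition ordering; the latency benefit comes from workloads in which most commands stay within a single partition.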
Second, we investigate the tradeoff between latency and storage cost in storage systems that employ erasure coding techniques. In order to improve read performance, storage systems often use caches that are close to clients. However, traditional caching policies are not designed for the particularities of erasure coding and are not well suited to this scenario. We propose an algorithm for caching erasure-coded data and use it to implement a caching layer based on Memcached in front of the Amazon S3 storage system. Our caching algorithm reconfigures the cache based on workload patterns and is able to outperform traditional caching policies such as Least Recently Used and Least Frequently Used.
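As a rough illustration of why erasure coding changes the caching problem, consider a (k, m) code in which any k of an object's k + m chunks suffice to reconstruct it: a cache can then hold anywhere between 0 and k chunks per object, and every cached chunk is one fewer chunk to fetch from a distant backend. The sketch below uses a simple popularity-greedy reconfiguration as a stand-in for the thesis's algorithm; the names and the policy itself are illustrative assumptions.

    # Sketch: partial caching of erasure-coded objects, where any K chunks
    # reconstruct an object. The greedy, popularity-based policy below is an
    # illustrative stand-in, not the thesis's reconfiguration algorithm.
    K = 4  # chunks needed to reconstruct an object

    class ChunkCache:
        def __init__(self, capacity_chunks):
            self.capacity = capacity_chunks
            self.cached = {}    # object id -> number of its chunks held in cache
            self.hits = {}      # object id -> observed access count

        def record_access(self, obj):
            self.hits[obj] = self.hits.get(obj, 0) + 1

        def remote_fetches_needed(self, obj):
            # Chunks missing from the cache must come from distant backends; a
            # read completes only once K chunks are available, so even a partial
            # cache entry shortens the slowest of the parallel fetches.
            return K - self.cached.get(obj, 0)

        def reconfigure(self):
            # Periodically rebuild the cache plan: give chunk slots to the
            # hottest objects first, up to K chunks each, until capacity runs out.
            budget = self.capacity
            self.cached = {}
            for obj, _ in sorted(self.hits.items(), key=lambda kv: -kv[1]):
                if budget == 0:
                    break
                take = min(K, budget)
                self.cached[obj] = take
                budget -= take

    cache = ChunkCache(capacity_chunks=6)
    for obj in ["a", "a", "a", "b"]:
        cache.record_access(obj)
    cache.reconfigure()
    assert cache.remote_fetches_needed("a") == 0    # all 4 chunks of "a" cached
    assert cache.remote_fetches_needed("b") == 2    # 2 of 4 chunks cached

Unlike LRU or LFU, which make whole-object decisions, a policy of this shape can trade cache space across objects at chunk granularity.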
Keywords: geo-distributed systems, strong consistency, state partitioning, caching, erasure coding

Contents

1 Introduction
  1.1 Problem definition
  1.2 Proposed solution
    1.2.1 State partitioning in geo-distributed systems
    1.2.2 Workload-aware state partitioning
    1.2.3 Caching tailored to erasure-coded storage systems
    1.2.4 Summary of results
  1.3 Dissertation plan
2 Background and Related Work
  2.1 CAP theorem
  2.2 Replication mechanisms for strong consistency
  2.3 State partitioning
  2.4 Adaptive state partitioning
  2.5 Consistency in distributed file systems
    2.5.1 File systems with strong consistency
    2.5.2 File systems with weak consistency
  2.6 Storage cost-aware systems
    2.6.1 Erasure coding in storage systems
    2.6.2 Caching
    2.6.3 Caching erasure-coded data
3 State Partitioning in Geo-Distributed Systems
  3.1 Introduction
  3.2 System model
  3.3 Partitioning theorems
  3.4 Protocols
    3.4.1 Initial construction
    3.4.2 A queue-based construction
    3.4.3 Ensuring disjoint access parallelism
  3.5 ZooFence
    3.5.1 Overview
    3.5.2 Client-side library
    3.5.3 Executor
  3.6 Evaluation
    3.6.1 Concurrent queues service
    3.6.2 BookKeeper
  3.7 Summary
4 Workload-Aware State Partitioning
  4.1 Introduction
  4.2 System model and definitions