Design and Implementation of SMB Locking in a Clustered File System


Aravind Srinivasan, EMC, Isilon Storage Division
2012 Storage Developer Conference

Agenda

- Overview
- OneFS overview
- Fundamentals of distributed locking
- Challenges in implementing distributed locking
- Design and implementation of the DLM in OneFS
- Implementation of SMB locking on top of the DLM in OneFS

Overview

Any clustered file system needs a robust Distributed Lock Manager (DLM) to synchronize access to shared resources. A file-sharing protocol such as SMB must use the DLM appropriately to regulate access to files from multiple clients.

OneFS Overview

Isilon OneFS cluster:

- NAS file server
- Scalable: add more storage in five minutes
- Reliable: up to 8x mirroring or +4 parity protection, striped across nodes
- Single-volume file system spanning 3 to 144 nodes
- Fully symmetric peers: no metadata servers
- Commodity hardware: CPU, memory, disks

Isilon OneFS file system: concurrent access to all files with all protocols (SMB1/SMB2, NFSv3/NFSv4, SSH, HTTP/FTP).

OneFS is Isilon's sixth-generation operating system and provides the intelligence behind all Isilon scale-out storage systems. It combines the three layers of traditional storage architectures (file system, volume manager, and RAID) into one unified software layer, creating a single intelligent file system that spans all nodes within a cluster.

OneFS enables:

- Independent or linear scaling of performance and capacity
- A single point of management for large and rapidly growing repositories of data
- Mission-critical reliability and high availability with state-of-the-art data protection

Fundamentals of Distributed Locking

Multiple writers to the same file need a reader-writer lock, and because the writers can be on different nodes, the lock must be distributed.

[Diagram: Node 1 and Node 2 both write to /volume/somefile on a clustered file system volume with no lock coordination; the file contents are corrupted.]

DLM Challenges

- Performance
- Differing requirements from each protocol
- Exposing the appropriate APIs for protocols to use the DLM

Design and Implementation of the DLM in OneFS

The goal of the DLM:

[Diagram: Node 1 and Node 2 each take an EX lock on the LK resource for /ifs/somefile before writing, so the writes are serialized and the file contents stay intact.]

From the DLM's perspective, a resource is simply an identifier; it can be a number or an arbitrary blob of data (as in the OneFS Lock Manager). A resource can be acquired in a number of modes, which determine the level of exclusivity the client requires. The DLM in OneFS is named LK.
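The mode semantics above can be made concrete with a small compatibility table. This is a minimal sketch, not OneFS code: the LK_MODE_* names and the lk_grantable() helper are hypothetical, and real LK supports more modes than the two shown here.

```c
#include <stdbool.h>
#include <stdio.h>

enum lk_mode {
    LK_MODE_SHARED,     /* many readers, on any nodes in the cluster */
    LK_MODE_EXCLUSIVE,  /* a single writer, cluster-wide */
    LK_MODE_COUNT
};

/* compat[a][b] is true when a holder of mode a can coexist with mode b. */
static const bool compat[LK_MODE_COUNT][LK_MODE_COUNT] = {
    /*                 SHARED  EXCLUSIVE */
    /* SHARED    */  { true,   false },
    /* EXCLUSIVE */  { false,  false },
};

/* Can a new request for mode `want` be granted given an existing holder? */
static bool lk_grantable(enum lk_mode held, enum lk_mode want)
{
    return compat[held][want];
}

int main(void)
{
    printf("shared + shared:    %d\n", lk_grantable(LK_MODE_SHARED, LK_MODE_SHARED));
    printf("shared + exclusive: %d\n", lk_grantable(LK_MODE_SHARED, LK_MODE_EXCLUSIVE));
    return 0;
}
```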
Requirements of LK

The goal of LK is to provide the infrastructure upon which POSIX, NFS, and SMB can implement kernel-enforced, cluster-coherent locks. The requirements fall into the following major areas:

- Ranges allowed (e.g., number of bits, behavior at range boundaries)
- Semantics (e.g., which modes are allowed)
- Wait types (e.g., blocking, non-blocking, asynchronous)
- Conversions allowed (e.g., can a lock be converted from shared to exclusive?)
- Reference-counting semantics (stacked vs. reference-counted)
- Fairness (strict vs. opportunistic)
- Miscellaneous

LK Design

The DLM is split into two distinct roles: initiator and coordinator.

LK Coordinator

The coordinator deals with nodes as a whole and does not know about individual threads on a node. From the coordinator's point of view, a node requests a lock, owns the lock, and then releases the lock. For example, if a node asks for an exclusive lock while holding a shared lock, the exclusive lock is granted immediately, provided that no other node holds a shared lock.

LK Initiator

The initiator is the side requesting the lock. On the initiator there is one entry for each resource that has a local owner or waiter. Each entry contains a list of all the local owners and a number of queues containing waiters. The main queue hangs directly off the lock entry, while the rest hang off per-lock-type structures.

Three Messages in LK

LK uses three messages to communicate between the initiator and the coordinator:

- Request: generated by the initiator and sent to the coordinator; contains the needs and wants of the initiator.
- Grant: generated by the coordinator and sent to the initiator; contains the goals for the resource on this initiator and the additional holds.
- Release: generated by the initiator and sent to the coordinator; used to release an initiator's hold on a lock.

LK Terms

- Need: the mode of the lock that the client requires (e.g., needs shared mode).
- Want: the set of modes the client would like as soon as they are no longer in use by another client (e.g., wants exclusive and delete).
- Holds: the set of additional modes of the lock that the coordinator has granted the initiator (e.g., holding exclusive and shared).
- Goal: the set of modes that the initiator should attempt to achieve as soon as it is able.
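The three messages and the four terms above can be summarized in a short sketch. All type and field names here are illustrative guesses; the slides do not describe OneFS's actual message layout. Modes are modeled as a bitmask so a single message can carry a set of modes, matching phrasing like "want exclusive and delete".

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t lk_mode_set;            /* bitmask of lock modes */
#define LK_SHARED    (1u << 0)
#define LK_EXCLUSIVE (1u << 1)
#define LK_DELETE    (1u << 2)

struct lk_request {                      /* initiator -> coordinator */
    lk_mode_set need;                    /* mode the client requires now */
    lk_mode_set want;                    /* modes wanted once no longer in use */
};

struct lk_grant {                        /* coordinator -> initiator */
    lk_mode_set goal;                    /* modes the initiator should achieve */
    lk_mode_set holds;                   /* additional modes now granted */
};

struct lk_release {                      /* initiator -> coordinator */
    lk_mode_set drop;                    /* hold being released */
};

int main(void)
{
    /* "Needs shared mode; wants exclusive and delete" from the slides. */
    struct lk_request req = {
        .need = LK_SHARED,
        .want = LK_EXCLUSIVE | LK_DELETE,
    };
    printf("request: need=0x%x want=0x%x\n", req.need, req.want);
    return 0;
}
```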
LK Terms (Contd.)

- The resource parameter represents what is being locked or unlocked. It is an arbitrary blob of data.
- The locker parameter represents who is locking or unlocking. It is the parameter used for deadlock detection.
- The domain parameter represents, not surprisingly, the lock domain. There can be multiple lock domains in existence at any time, each controlling locks for a different aspect of the system (e.g., the oplock domain and the CBRL domain).
- The wait-type parameter controls whether the potentially-blocking functions are allowed to block indefinitely.

LK Callbacks

Lock owners can register callbacks to be invoked when the node gives up a certain type of lock. The initiator delays releasing locks that have callbacks registered: instead, it creates a special type of local waiter and puts it on the main queue. When that special local waiter is converted into a lock owner, the callback is called. After the callback completes, its lock owner goes away and the initiator releases the lock for real, unless, of course, other lock callbacks are still pending in the main queue.

SMB Locking on Top of LK in OneFS

SMB in OneFS uses LK for all of its locking needs, such as oplocks and byte-range locks (BRLs). An event channel is registered between the SMB daemon and the OneFS kernel, and the results from LK are communicated over this channel.

The locker parameter is specified as part of the syscall that acquires the appropriate lock. It can be the client lease key (for leases), the MID/TID/PID combination (for BRLs), or just the file pointer (for legacy oplocks); in short, the locker uniquely identifies the owner of the lock.

A unique 64-bit ID is also passed as part of the syscall and is used to register callbacks in LK. Whenever a lock is contended, the registered callback routine is triggered and notifies userspace with the appropriate ID. Userspace must maintain the async state and respond to the kernel's message appropriately (see the closing sketch below).

Using LK for locking pushes all of the SMB locking requirements down to the kernel, which significantly improves performance while achieving cluster coherency. The support for callbacks enables us to register asynchronous operations and avoid blocking in the kernel.

Summary

- Distributed locking in OneFS is achieved with a OneFS-specific DLM called LK.
- LK provides the basic cluster coherency and also delivers performance and scalability benefits.
- LK can easily be extended to support future protocols by adding a new lock domain where necessary.

Questions?

Contact: Aravind Srinivasan, [email protected]
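Closing sketch: one way the SMB locker parameter and the 64-bit async ID described above could be modeled. The struct layout, names, and flow are invented for illustration; they are not the OneFS syscall interface.

```c
#include <stdint.h>
#include <stdio.h>

enum smb_locker_kind { LOCKER_LEASE, LOCKER_BRL, LOCKER_OPLOCK };

/* The locker uniquely identifies the owner of the lock; its contents
 * depend on which SMB locking feature is in play. */
struct smb_locker {
    enum smb_locker_kind kind;
    union {
        uint8_t lease_key[16];                           /* SMB2 client lease key */
        struct { uint64_t mid; uint32_t tid, pid; } brl; /* byte-range locks */
        uint64_t file_ptr;                               /* legacy oplocks */
    } u;
};

int main(void)
{
    /* Unique 64-bit ID the kernel echoes back when the lock is contended;
     * the userspace daemon keeps its async state keyed by this ID. */
    uint64_t async_id = 0x1234;

    struct smb_locker locker = { .kind = LOCKER_BRL };
    locker.u.brl.mid = 42;
    locker.u.brl.tid = 7;
    locker.u.brl.pid = 100;

    printf("registering locker (kind=%d) with async id %llu\n",
           locker.kind, (unsigned long long)async_id);
    return 0;
}
```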