Scalable Performance of the Panasas Parallel File System


Brent Welch¹, Marc Unangst¹, Zainul Abbasi¹, Garth Gibson¹·², Brian Mueller¹, Jason Small¹, Jim Zelenka¹, Bin Zhou¹
¹Panasas, Inc.  ²Carnegie Mellon
{welch,mju,zabbasi,garth,bmueller,jsmall,jimz,bzhou}@panasas.com

Abstract

The Panasas file system uses parallel and redundant access to object storage devices (OSDs), per-file RAID, distributed metadata management, consistent client caching, file locking services, and internal cluster management to provide a scalable, fault tolerant, high performance distributed file system. The clustered design of the storage system and the use of client-driven RAID provide scalable performance to many concurrent file system clients through parallel access to file data that is striped across OSD storage nodes. RAID recovery is performed in parallel by the cluster of metadata managers, and declustered data placement yields scalable RAID rebuild rates as the storage system grows larger. This paper presents performance measures of I/O, metadata, and recovery operations for storage clusters that range in size from 10 to 120 storage nodes, 1 to 12 metadata nodes, and with file system client counts ranging from 1 to 100 compute nodes. Production installations are as large as 500 storage nodes, 50 metadata managers, and 5000 clients.

1 Introduction

Storage systems for high performance computing environments must be designed to scale in performance so that they can be configured to match the required load. Clustering techniques are often used to provide scalability. In a storage cluster, many nodes each control some storage, and the overall distributed file system assembles the cluster elements into one large, seamless storage system. The storage cluster can be hosted on the same computers that perform data processing, or it can be a separate cluster that is devoted entirely to storage and accessible to the compute cluster via a network protocol.

The Panasas storage system is a specialized storage cluster, and this paper presents its design and a number of performance measurements to illustrate its scalability. The Panasas system is a production system that provides file service to some of the largest compute clusters in the world: in scientific labs, in seismic data processing, in digital animation studios, in computational fluid dynamics, in semiconductor manufacturing, and in general purpose computing environments. In these environments, hundreds or thousands of file system clients share data and generate very high aggregate I/O load on the file system. The Panasas system is designed to support several thousand clients and storage capacities in excess of a petabyte.

The unique aspects of the Panasas system are its use of per-file, client-driven RAID, its parallel RAID rebuild, its treatment of different classes of metadata (block, file, system), and its commodity-parts-based blade hardware with an integrated UPS. Of course, the system has many other features (such as object storage, fault tolerance, caching and cache consistency, and a simplified management model) that are not unique, but are necessary for a scalable system implementation.

2 Panasas File System Background

This section takes a brief tour through the system to provide an overview for the following sections. The two overall themes of the system are object storage, which affects how the file system manages its data, and clustering of components, which allows the system to scale in performance and capacity.

The storage cluster is divided into storage nodes and manager nodes at a ratio of about 10 storage nodes to 1 manager node, although that ratio is variable. The storage nodes implement an object store, and are accessed directly by Panasas file system clients during I/O operations. The manager nodes manage the overall storage cluster, implement the distributed file system semantics, handle recovery of storage node failures, and provide an exported view of the Panasas file system via NFS and CIFS. Figure 1 gives a basic view of the system components.

[Figure 1: Panasas System Components — compute nodes running the client access the storage nodes (OSDFS) directly over iSCSI/OSD, and reach the manager nodes (PanFS metadata manager, SysMgr, NFS/CIFS gateways) via RPC.]

2.1 Object Storage

An object is a container for data and attributes; it is analogous to the inode inside a traditional UNIX file system implementation. Specialized storage nodes called Object Storage Devices (OSDs) store objects in a local OSDFS file system. The object interface addresses objects in a two-level (partition ID/object ID) namespace. The OSD wire protocol provides byte-oriented access to the data, attribute manipulation, creation and deletion of objects, and several other specialized operations [OSD04]. We use an iSCSI transport to carry OSD commands that are very similar to the OSDv2 standard currently in progress within SNIA and ANSI-T10 [SNIA].
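To make the two-level namespace and byte-oriented access concrete, here is a minimal illustrative sketch of an OSD-style read request. It is not the OSD/iSCSI wire format or the Panasas client code; the class and function names are hypothetical.

```python
# Illustrative sketch only -- NOT the real OSD/iSCSI wire protocol or the
# Panasas client implementation. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectAddress:
    """Two-level OSD namespace: a partition ID plus an object ID."""
    partition_id: int
    object_id: int

@dataclass
class OsdReadRequest:
    """Byte-oriented read of an object, in the spirit of the OSD command set."""
    addr: ObjectAddress
    offset: int      # byte offset within the object
    length: int      # number of bytes to read

def osd_read(request: OsdReadRequest) -> bytes:
    # A real client would encode this as an OSD command carried over iSCSI
    # to the storage node that owns the object; here we just fake the data.
    return b"\x00" * request.length

if __name__ == "__main__":
    req = OsdReadRequest(ObjectAddress(partition_id=7, object_id=1234),
                         offset=0, length=4096)
    data = osd_read(req)
    print(len(data), "bytes read from", req.addr)
```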
The Panasas file system is layered over the object storage. Each file is striped over two or more objects to provide redundancy and high-bandwidth access. The file system semantics are implemented by metadata managers that mediate access to objects from clients of the file system. The clients access the object storage using the iSCSI/OSD protocol for Read and Write operations. The I/O operations proceed directly and in parallel to the storage nodes, bypassing the metadata managers. The clients interact with the out-of-band metadata managers via RPC to obtain access capabilities and location information for the objects that store files. The performance of striped file access is presented later in the paper.

Object attributes are used to store file-level attributes, and directories are implemented with objects that store name-to-object-ID mappings. Thus the file system metadata is kept in the object store itself, rather than in a separate database or some other form of storage on the metadata nodes. Metadata operations are described and measured later in this paper.
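The striping just described is what lets a client turn one logical read into several object reads issued in parallel, directly to the storage nodes, once a metadata manager has returned the object map and access capabilities. The following sketch is a simplified illustration under assumed parameters (a RAID-0 style round-robin layout with a 64 KB stripe unit and no parity); it is not the Panasas layout algorithm.

```python
# Minimal sketch of planning a striped read across objects, assuming a simple
# RAID-0 style layout. The real system adds per-file parity (client-driven
# RAID), which is omitted here; the 64 KB stripe unit is an arbitrary choice.
STRIPE_UNIT = 64 * 1024  # bytes per stripe unit

def locate(file_offset: int, num_objects: int):
    """Return (object_index, object_offset) for a byte offset in the file."""
    stripe_unit = file_offset // STRIPE_UNIT        # which stripe unit overall
    object_index = stripe_unit % num_objects        # round-robin over objects
    object_offset = (stripe_unit // num_objects) * STRIPE_UNIT \
                    + file_offset % STRIPE_UNIT
    return object_index, object_offset

def plan_read(file_offset: int, length: int, num_objects: int):
    """Split a logical read into per-object requests that a client could issue
    in parallel to the storage nodes, bypassing the metadata managers."""
    plan = []
    while length > 0:
        obj, off = locate(file_offset, num_objects)
        chunk = min(length, STRIPE_UNIT - file_offset % STRIPE_UNIT)
        plan.append((obj, off, chunk))
        file_offset += chunk
        length -= chunk
    return plan

if __name__ == "__main__":
    # A 300 KB read over a file striped across 4 objects.
    for obj, off, n in plan_read(file_offset=100 * 1024,
                                 length=300 * 1024, num_objects=4):
        print(f"object {obj}: read {n} bytes at offset {off}")
```

For the 300 KB example read, the plan touches all four objects, which is exactly the parallelism that direct, out-of-band client I/O is meant to exploit.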
2.2 System Software Components

The major software subsystems are the OSDFS object storage system, the Panasas file system metadata manager, the Panasas file system client, the NFS/CIFS gateway, and the overall cluster management system.

The Panasas client is an installable kernel module that runs inside the Linux kernel. The kernel module implements the standard VFS interface, so that client hosts can mount the file system and use a POSIX interface to the storage system. We don't require any patches to run inside the 2.4 or 2.6 Linux kernel, and have tested with over 200 Linux variants.

Each storage cluster node runs a common platform that is based on FreeBSD, with additional services to provide hardware monitoring, configuration management, and overall control.

The storage nodes use a specialized local file system (OSDFS) that implements the object storage primitives. They implement an iSCSI target and the OSD command set. The OSDFS object store and the iSCSI target/OSD command processor are kernel modules. OSDFS is concerned with traditional block-level file system issues such as efficient disk arm utilization, media management (i.e., error handling), and high throughput, as well as the OSD interface.

The cluster manager (SysMgr) maintains the global configuration, and it controls the other services and nodes in the storage cluster. There is an associated management application that provides both a command line interface (CLI) and an HTML interface (GUI). These are all user-level applications that run on a subset of the manager nodes. The cluster manager is concerned with membership in the storage cluster, fault detection, configuration management, and overall control for operations like software upgrade and system restart [Welch07].

The Panasas metadata manager (PanFS) implements the file system semantics and manages data striping across the object storage devices. It is a user-level application that runs on every manager node. The metadata manager is concerned with distributed file system issues such as secure multi-user access, maintaining consistent file- and object-level metadata, client cache coherency, and recovery from client, storage node, and metadata server crashes. Fault tolerance is based on a local transaction log that is replicated to a backup on a different manager node.

The NFS and CIFS services provide access to the file system for hosts that cannot use our Linux installable file system client. The NFS service is a tuned version of the standard FreeBSD NFS server that runs inside the kernel. The CIFS service is based on Samba and runs at user level. In turn, these services use a local instance of the file system client, which runs inside the FreeBSD kernel. These gateway services run on every manager node to provide a clustered NFS and CIFS service.

2.3 Commodity Hardware Platform

The storage cluster nodes are implemented as blades, very compact computer systems made from commodity parts. The blades are clustered together to provide a scalable platform. Up to 11 blades fit into a 4U (7 inch) high shelf chassis that provides dual power supplies, a high-capacity battery, and one or two 16-port GE switches. The switches aggregate the GE ports from the blades into a 4 GE trunk. The second switch provides redundancy and is connected to a second GE port on each blade.

The shelf battery serves as an integrated UPS. The logging mechanism is described and measured in detail later in the paper. The system monitors the battery charge level, and will not allow a shelf chassis to enter service without an adequately charged battery, to avoid data loss during back-to-back power failures.

The other important aspect of the hardware is that blades are a Field Replaceable Unit (FRU): instead of trying to repair a blade, if anything goes wrong with the hardware, the whole blade is replaced. We settled on a two-drive storage blade as a compromise between cost, performance, and reliability. Having the blade as a failure domain simplifies our fault tolerance mechanisms, and it provides a simple maintenance model for system administrators. Reliability and data …
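As a rough illustration of the metadata fault-tolerance scheme mentioned in Section 2.2 (a local transaction log replicated to a backup on a different manager node), here is a minimal sketch. The record format, class names, and replication call are assumptions made for the example; they are not the PanFS implementation or its actual protocol.

```python
# A minimal sketch of a metadata transaction log mirrored to a backup manager
# node. Illustrative only: the record format and "replicate" call are assumed.
import json

class BackupLog:
    """Stands in for the replica kept on a different manager node."""
    def __init__(self):
        self.records = []

    def replicate(self, record: str):
        self.records.append(record)

class TransactionLog:
    def __init__(self, backup: BackupLog):
        self.records = []        # stands in for the local (battery-backed) log
        self.backup = backup
        self.next_txid = 1

    def append(self, op: str, **args) -> int:
        txid = self.next_txid
        self.next_txid += 1
        record = json.dumps({"txid": txid, "op": op, "args": args})
        self.records.append(record)      # 1. commit locally
        self.backup.replicate(record)    # 2. mirror to the backup before replying
        return txid

if __name__ == "__main__":
    backup = BackupLog()
    log = TransactionLog(backup)
    log.append("create", parent=17, name="results.dat")
    log.append("setattr", object_id=1234, mode=0o644)
    # Both copies hold the same records, so a single manager failure loses nothing.
    assert log.records == backup.records
```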
Recommended publications
  • HP StorageWorks Clustered File System Command Line Reference
    HP StorageWorks Clustered File System 3.0 Command Line Reference Guide. Part number: 392372-001. First edition: May 2005. Legal and notice information: © Copyright 1999-2005 PolyServe, Inc. Portions © 2005 Hewlett-Packard Development Company, L.P. Neither PolyServe, Inc. nor Hewlett-Packard Company makes any warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Neither PolyServe nor Hewlett-Packard shall be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material. This document contains proprietary information, which is protected by copyright. No part of this document may be photocopied, reproduced, or translated into another language without the prior written consent of Hewlett-Packard. The information is provided "as is" without warranty of any kind and is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Neither PolyServe nor HP shall be liable for technical or editorial errors or omissions contained herein. The software this document describes is PolyServe confidential and proprietary. PolyServe and the PolyServe logo are trademarks of PolyServe, Inc. PolyServe Matrix Server contains software covered by the following copyrights and subject to the licenses included in the file thirdpartylicense.pdf, which is included in the PolyServe Matrix Server distribution. Copyright © 1999-2004, The Apache Software Foundation. Copyright © 1992, 1993 Simmule Turner and Rich Salz.
  • Shared File Systems: Determining the Best Choice for Your Distributed SAS® Foundation Applications Margaret Crevar, SAS Institute Inc., Cary, NC
    Paper SAS569-2017 Shared File Systems: Determining the Best Choice for Your Distributed SAS® Foundation Applications. Margaret Crevar, SAS Institute Inc., Cary, NC. ABSTRACT If you are planning on deploying SAS® Grid Manager and SAS® Enterprise BI (or other distributed SAS® Foundation applications) with load balanced servers on multiple operating system instances, a shared file system is required. In order to determine the best shared file system choice for a given deployment, it is important to understand how the file system is used, the SAS® I/O workload characteristics performed on it, and the stressors that SAS Foundation applications produce on the file system. For the purposes of this paper, we use the term "shared file system" to mean both a clustered file system and a shared file system, even though "shared" can denote a network file system and a distributed file system – not clustered. INTRODUCTION This paper examines the shared file systems that are most commonly used with SAS and reviews their strengths and weaknesses. SAS GRID COMPUTING REQUIREMENTS FOR SHARED FILE SYSTEMS Before we get into the reasons why a shared file system is needed for SAS® Grid Computing, let's briefly discuss the SAS I/O characteristics. GENERAL SAS I/O CHARACTERISTICS SAS Foundation creates a high volume of predominantly large-block, sequential access I/O, generally at block sizes of 64K, 128K, or 256K, and the interactions with data storage are significantly different from typical interactive applications and RDBMSs. Here are some major points to understand (more details about the bullets below can be found in this paper): SAS tends to perform large sequential Reads and Writes.
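Purely as an illustration of the large-block, sequential access pattern described in this excerpt (this is not SAS code; the file name and sizes are arbitrary assumptions), a 256K-block sequential write looks like this:

```python
# Illustration only: sequential output written in large 256 KiB blocks,
# one of the block sizes cited above. File name and total size are arbitrary.
BLOCK_SIZE = 256 * 1024          # 256K blocks
TOTAL_BYTES = 16 * 1024 * 1024   # 16 MiB of sequential output

with open("saswork_example.dat", "wb") as f:
    block = b"\0" * BLOCK_SIZE
    written = 0
    while written < TOTAL_BYTES:
        f.write(block)           # large, sequential writes; no seeks
        written += BLOCK_SIZE

print(f"wrote {TOTAL_BYTES} bytes in {TOTAL_BYTES // BLOCK_SIZE} sequential blocks")
```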
  • Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques
    Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques. Viacheslav Dubeyko. Contents: 1 Terminology and Abbreviations; 2 Introduction; 3 Comparative Analysis Methodology; 4 File System Features Classification; 4.1 Distributed File Systems — 4.1.1 HDFS, 4.1.2 GFS (Google File System), 4.1.3 InterMezzo, 4.1.4 Coda, 4.1.5 Ceph, 4.1.6 DDFS.
  • Designing High-Performance and Scalable Clustered Network Attached Storage with Infiniband
    DESIGNING HIGH-PERFORMANCE AND SCALABLE CLUSTERED NETWORK ATTACHED STORAGE WITH INFINIBAND. DISSERTATION. Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University. By Ranjit Noronha, MS. The Ohio State University, 2008. Dissertation Committee: Dhabaleswar K. Panda (Adviser), Ponnuswammy Sadayappan, Feng Qin. Graduate Program in Computer Science and Engineering. © Copyright by Ranjit Noronha 2008. ABSTRACT The Internet age has exponentially increased the volume of digital media that is being shared and distributed. Broadband Internet has made technologies such as high quality streaming video on demand possible. Large scale supercomputers also consume and create huge quantities of data. This media and data must be stored, cataloged and retrieved with high performance. Researching high-performance storage subsystems to meet the I/O demands of applications in modern scenarios is crucial. Advances in microprocessor technology have given rise to relatively cheap off-the-shelf hardware that may be put together as personal computers as well as servers. The servers may be connected together by networking technology to create farms or clusters of workstations (COWs). The evolution of COWs has significantly reduced the cost of ownership of high-performance clusters and has allowed users to build fairly large scale machines based on commodity server hardware. As COWs have evolved, networking technologies like InfiniBand and 10 Gigabit Ethernet have also evolved. These networking technologies not only give lower end-to-end latencies, but also allow for better messaging throughput between the nodes. This allows us to connect the clusters with high-performance interconnects at a relatively lower cost.
  • Newest Trends in High Performance File Systems
    Newest Trends in High Performance File Systems. Elena Bergmann, Arbeitsbereich Wissenschaftliches Rechnen, Fachbereich Informatik, Fakultät für Mathematik, Informatik und Naturwissenschaften, Universität Hamburg. Advisor: Julian Kunkel. 2015-11-23. Agenda: 1 Introduction; 2 File Systems; 3 Sirocco File System; 4 Summary; 5 Literature. Introduction — current situation: fundamental changes in hardware; core counts are increasing while the performance improvement of storage devices is much slower; bigger systems mean more hardware and more failure probabilities, so the system is in a state of failure at all times. And exascale systems? There is a gap between produced data and storage performance (20 GB/s to 4 GB/s), the I/O bandwidth requirement is high, the metadata server is often a bottleneck, and scalability is not given. Upcoming technologies until 2020: deeper storage hierarchy (tapes, disk, NVRAM, ...); is traditional input/output technology enough? Will POSIX (Portable Operating System Interface) I/O scale? Non-volatile memory storage technologies (NVRAM) and their location across the hierarchy; node-local storage; burst buffers; new programming abstractions and workflows; a new generation of I/O ware and services.
  • A Guide to the IBM Clustered Network File System
    A Guide to the IBM Clustered Network File System. IBM Redpaper REDP-4400-01, International Technical Support Organization, Second Edition (November 2010). Discusses the technical architecture, explains installation and configuration, and includes administration and operation information. Authors: Ira Chavis, Dave Coutts, Bob Demkowicz, Jay Huie, Shuzheng Liu, Sheryl Qualters, Daniel Turkenkopf. ibm.com/redbooks. Note: Before using this information and the product it supports, read the information in "Notices". This edition applies to the IBM Clustered Network File System package. © Copyright International Business Machines Corporation 2010. All rights reserved. Note to U.S. Government Users Restricted Rights — Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Contents: Notices; Trademarks; Preface (The team that wrote this paper; Now you can become a published author, too!; Comments welcome; Stay connected to IBM Redbooks); Chapter 1. Introduction to the IBM Clustered Network File System — 1.1 Overview; 1.2 IBM Clustered Network File System; 1.3 Business trends and line of business requirements (1.3.1 Product life cycle management for the automotive industry; 1.3.2 Financial services; 1.3.3 Electronic design automation; 1.3.4 Communications and digital media; 1.3.5 Pharmaceuticals; 1.3.6 Other industries); 1.4 Challenges and customer requirements; 1.5 Solution elements and key features; 1.6 Targeted users; 1.7 Required skills for the reader of this paper.
  • OCFS2: the Oracle Clustered File System, Version 2
    OCFS2: The Oracle Clustered File System, Version 2. Mark Fasheh, Oracle, [email protected]. Abstract: This talk will review the various components of the OCFS2 stack, with a focus on the file system and its clustering aspects. OCFS2 extends many local file system features to the cluster, some of the more interesting of which are posix unlink semantics, data consistency, shared readable mmap, etc. In order to support these features, OCFS2 logically separates cluster access into multiple layers. An overview of the low level DLM layer will be given. The higher level file system locking will be described in detail, including a walkthrough of inode locking and messaging for various operations. Caching and consistency strategies will be discussed. Metadata journaling is done on a per node basis with JBD. Our reasoning behind that choice will be described. OCFS2 provides robust and performant recovery … Disk space is broken up into clusters which can range in size from 4 kilobytes to 1 megabyte. File data is allocated in extents of clusters. This allows OCFS2 a large amount of flexibility in file allocation. File metadata is allocated in blocks via a sub allocation mechanism. All block allocators in OCFS2 grow dynamically. Most notably, this allows OCFS2 to grow inode allocation on demand. 1 Design Principles: A small set of design principles has guided most of OCFS2 development. None of them are unique to OCFS2 development, and in fact, almost all are principles we learned from the Linux kernel community. They will, however, come up often in discussion of OCFS2 file system design, so it is worth covering them now.
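A back-of-the-envelope sketch (not OCFS2 code) of the point about cluster sizes in this excerpt: the cluster size sets the data allocation granularity, so the same file needs far fewer, larger allocation units as the cluster size grows, at the cost of some slack in the last cluster.

```python
# Illustration only: how cluster size (4 KB .. 1 MB, the range cited above)
# affects the number of allocation units and the slack for one file.
def clusters_needed(file_bytes: int, cluster_bytes: int) -> int:
    return -(-file_bytes // cluster_bytes)   # ceiling division

file_bytes = 10 * 1024 * 1024 + 100          # a file slightly over 10 MiB
for cluster_kb in (4, 64, 1024):
    n = clusters_needed(file_bytes, cluster_kb * 1024)
    slack = n * cluster_kb * 1024 - file_bytes
    print(f"{cluster_kb:>5} KB clusters: {n:>5} clusters, {slack} bytes of slack")
```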
  • Sistemi Di Storage: Clustered Filesystems
    Agenda: Clustered Filesystems; Block and Object: The Storage Evolution; Introduction to GlusterFS. Storage Systems: Clustered Filesystems. A clustered file system is a file system that can be attached to multiple servers at the same time. There are several ways to build a cluster of servers, but most of them do not require a cluster file system. When the number of nodes grows and the complexity of the cluster increases, however, turning to a clustered file system as a shared resource can be the most effective solution. What is a Cluster? Definition of a cluster: a collection of independent computing systems (workstations or PCs) connected by a low-cost interconnection network (commodity interconnection network), used as a single, unified computing resource. Uses of clusters — clusters fall into three families that characterize their use: High Availability Cluster: the nodes offer redundant services to guarantee their availability. Load Balancing Cluster: the nodes dynamically share the load of a given service. High Performance Computing (HPC) Cluster: the nodes execute, in a coordinated way, parallel programs that make intensive use of the CPU. Types of Clustered Filesystems — shared disk: the most widely used type of cluster file system is the shared-disk type, in which two or more servers simultaneously access a single storage subsystem, which may be a RAID array or a SAN. Examples of this technology are the VMFS and Global File System file systems.
  • Best Practices for Data Sharing in a Grid Distributed SAS Environment
    Best Practices for Data Sharing in a Grid Distributed SAS® Environment. Updated July 2010. Best Practice Document. Table of Contents: 1 Abstract — 1.1 Storage performance is critical; 1.2 Required background knowledge; 1.3 Shared file system experience with SAS; 1.4 NAS Appliance (NFS or CIFS); 1.5 Clustered and shared file system; 1.6 Hybrid NAS and SAN systems - iSCSI; 1.7 SAS experience with various shared file systems; 1.8 Comparing various storage architectures for SAS. 2 Implementation Guidelines — 2.1 Initial architecture design; 2.2 Throughput requirements for SAS applications; 2.3 Location of SASWORK and Utility Space; 2.4 Keep it simple; 2.5 Design testing.
  • Around the Linux File System World in 45 Minutes
    Around the Linux File System World in 45 minutes. Steve French, IBM Samba Team, [email protected]. Abstract: What makes the Linux file system interface unique? What has improved over the past year in this important part of the kernel? Why are there more than 50 Linux file systems? Why might you choose ext4 or XFS, NFS or CIFS, or OCFS2 or GFS2? The differences are not always obvious. This paper will describe the new features in the Linux VFS, how various Linux file systems differ in their use, and compare some of the key Linux file systems. File systems are one of the largest and most active parts of the Linux kernel, but some key sections of the file system interface are little understood, and with more than 50 Linux file systems the differences between them can be confusing even to developers. 1 Introduction: What is a File System? Linux has a rich file system interface, and a surprising number of file system implementations. … lines of code), most active (averaging 10 changesets a day!), and most important kernel subsystems. 2 The Linux Virtual File System Layer. The heart of the Linux file system, and what makes Linux file systems unique, is the virtual file system layer, or VFS, which they connect to. 2.1 Virtual File System Structures and Relationships. The relationships between Linux file system components is described in various papers [3] and is important to understand when carefully comparing file system implementations. 2.2 Comparison with other Operating Systems.
  • A Cross-Platform, High Performance Shared Storage System Master of Science Thesis in the Programme Networks and Distributed Systems
    A Cross-Platform, High Performance Shared Storage System. Master of Science Thesis in the Programme Networks and Distributed Systems. RICKARD NORDSTRAND. Department of Computer Science and Engineering, CHALMERS UNIVERSITY OF TECHNOLOGY / UNIVERSITY OF GOTHENBURG, Göteborg, Sweden, December 2009. The Author grants to Chalmers University of Technology and University of Gothenburg the non-exclusive right to publish the Work electronically and in a non-commercial purpose make it accessible on the Internet. The Author warrants that he/she is the author to the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law. The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), acknowledge the third party about this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let Chalmers University of Technology and University of Gothenburg store the Work electronically and make it accessible on the Internet. A Cross-Platform, High Performance Shared Storage System. RICKARD NORDSTRAND. © RICKARD NORDSTRAND, December 2009. Examiner: ROGER JOHANSSON. Department of Computer Science and Engineering, Chalmers University of Technology, SE-412 96 Göteborg, Sweden. Telephone +46 (0)31-772 1000. Cover: Sketch of a SweDisk rack case © Al Briscoe. Department of Computer Science and Engineering, Göteborg, Sweden, December 2009. Abstract: Advancements in information technology push the requirements of networks and storage solutions to new levels. The digital media industry is one particular area where the increased performance demands conflict with the requirements of multi-user, cross-platform systems.
  • Red Hat Gluster Storage Product Overview
    RED HAT GLUSTER STORAGE PRODUCT OVERVIEW. Michael Lessard, Senior Solutions Architect, October 2015. The Red Hat storage mission: to offer a unified, open software-defined storage portfolio that delivers a range of data services for next generation workloads, thereby accelerating the transition to modern IT infrastructures. [Diagram: "The future of storage" — complex proprietary silos of traditional storage (proprietary software, custom GUI, proprietary hardware per silo) versus standardized, unified, open platforms for open, software-defined storage: a control plane (API, GUI) over Ceph and Gluster running on standard computers and disks.] Why bother? Proprietary hardware gives way to common, off-the-shelf hardware with a lower cost, standardized supply chain; scale-up architecture to a scale-out architecture with increased operational flexibility; hardware-based intelligence to software-based intelligence with more programmability, agility, and control; and a closed development process to an open development process with more flexible, well-integrated technology. A rising tide: software-defined storage is leading a shift in the global storage industry, with far-reaching effects. [Chart: SDS-P market size by segment (block storage, file storage, object storage, hyperconverged), growing from $457B in 2013 to $1,349B in 2019; source: IDC. Market size is projected to increase approximately 20% year-over-year between 2015 and 2019.] "By 2016, server-based storage solutions will lower storage hardware costs by 50% or more." — Gartner, "IT Leaders Can Benefit From Disruptive Innovation in the Storage Industry". "By 2020, between 70-80% of unstructured data will be held on lower-cost storage managed by SDS environments." — "Innovation Insight: Separating Hype From Hope for Software-Defined Storage". "By 2019, 70% of existing storage array products will also be available as software only versions." — "Innovation Insight: Separating Hype From Hope for Software-Defined Storage".