Red Hat Ceph Storage 3 Installation Guide for Red Hat Enterprise Linux

Total pages: 16

File type: PDF, size: 1,020 KB

Installing Red Hat Ceph Storage on Red Hat Enterprise Linux
Last Updated: 2021-07-23

Legal Notice
Copyright © 2021 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/ . In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project. The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community. All other trademarks are the property of their respective owners.

Abstract
This document provides instructions on installing Red Hat Ceph Storage on Red Hat Enterprise Linux 7 running on AMD64 and Intel 64 architectures.

Table of Contents
CHAPTER 1. WHAT IS RED HAT CEPH STORAGE?
CHAPTER 2. REQUIREMENTS FOR INSTALLING RED HAT CEPH STORAGE
  2.1. PREREQUISITES
  2.2. REQUIREMENTS CHECKLIST FOR INSTALLING RED HAT CEPH STORAGE
  2.3. OPERATING SYSTEM REQUIREMENTS FOR RED HAT CEPH STORAGE
  2.4. REGISTERING RED HAT CEPH STORAGE NODES TO THE CDN AND ATTACHING SUBSCRIPTIONS (Prerequisites, Procedure, Additional Resources)
  2.5. ENABLING THE RED HAT CEPH STORAGE REPOSITORIES (Prerequisites, Procedure, Additional Resources)
  2.6. CONSIDERATIONS FOR USING A RAID CONTROLLER WITH OSD NODES (OPTIONAL)
  2.7. CONSIDERATIONS FOR USING NVME WITH OBJECT GATEWAY (OPTIONAL)
  2.8. VERIFYING THE NETWORK CONFIGURATION FOR RED HAT CEPH STORAGE (Prerequisites, Procedure, Additional Resources)
  2.9. CONFIGURING A FIREWALL FOR RED HAT CEPH STORAGE
  2.10. CREATING AN ANSIBLE USER WITH SUDO ACCESS
  2.11. ENABLING PASSWORD-LESS SSH FOR ANSIBLE (Prerequisites, Procedure, Additional Resources)
CHAPTER 3. DEPLOYING RED HAT CEPH STORAGE
  3.1. PREREQUISITES
  3.2. INSTALLING A RED HAT CEPH STORAGE CLUSTER (Prerequisites, Procedure)
  3.3. CONFIGURING OSD ANSIBLE SETTINGS FOR ALL NVME STORAGE
  3.4. INSTALLING METADATA SERVERS
  3.5. INSTALLING THE CEPH CLIENT ROLE (Prerequisites, Procedure, Additional Resources)
  3.6. INSTALLING THE CEPH OBJECT GATEWAY (Prerequisites, Procedure, Additional Resources)
    3.6.1. Configuring a multisite Ceph Object Gateway
  3.7. INSTALLING THE NFS-GANESHA GATEWAY (Prerequisites, Procedure, Additional Resources)
  3.8. UNDERSTANDING THE LIMIT OPTION
  3.9. ADDITIONAL RESOURCES
CHAPTER 4. UPGRADING A RED HAT CEPH STORAGE CLUSTER (Prerequisites)
  4.1. UPGRADING THE STORAGE CLUSTER (Procedure)
  4.2. UPGRADING RED HAT CEPH STORAGE DASHBOARD
CHAPTER 5. WHAT TO DO NEXT?
APPENDIX A. TROUBLESHOOTING
  A.1. ANSIBLE STOPS INSTALLATION BECAUSE IT DETECTS LESS DEVICES THAN IT EXPECTED
APPENDIX B. MANUALLY INSTALLING RED HAT CEPH STORAGE
  B.1. PREREQUISITES
    Configuring the Network Time Protocol for Red Hat Ceph Storage (Prerequisites; Procedure: Configuring the Network Time Protocol for RHCS; Additional Resources)
    Monitor Bootstrapping
  B.2. MANUALLY INSTALLING CEPH MANAGER
    OSD Bootstrapping
APPENDIX C. INSTALLING THE CEPH COMMAND LINE INTERFACE (Prerequisites, Procedure)
APPENDIX D. MANUALLY INSTALLING CEPH BLOCK DEVICE (Prerequisites, Procedure)
APPENDIX E. MANUALLY INSTALLING CEPH OBJECT GATEWAY (Prerequisites, Procedure, Additional Details)
APPENDIX F. OVERRIDING CEPH DEFAULT SETTINGS
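As a rough companion to the requirements chapter (sections 2.8 and 2.9 cover network verification and the firewall), the following Python sketch probes TCP reachability of the default Ceph ports from an admin node. The host names are placeholders and the port choices assume an unmodified configuration (6789 for Monitors, 6800-7300 for OSDs, 8080 for the Object Gateway); it is an illustration, not part of the guide's procedure.

```python
# Minimal pre-flight network check, assuming the default Ceph ports
# described in the guide's firewall section: monitors on 6789/tcp,
# OSDs in the 6800-7300/tcp range, Object Gateway on 8080/tcp.
# Host names below are placeholders, not taken from the guide.
import socket

CHECKS = {
    "mon1.example.com": [6789],
    "osd1.example.com": [6800, 6801, 6802],   # sample of the 6800-7300 range
    "rgw1.example.com": [8080],
}

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, ports in CHECKS.items():
        for port in ports:
            state = "open" if port_open(host, port) else "unreachable"
            print(f"{host}:{port} {state}")
```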
Recommended publications
  • Serverless Network File Systems
    Serverless Network File Systems Thomas E. Anderson, Michael D. Dahlin, Jeanna M. Neefe, David A. Patterson, Drew S. Roselli, and Randolph Y. Wang Computer Science Division University of California at Berkeley Abstract In this paper, we propose a new paradigm for network file system design, serverless network file systems. While traditional network file systems rely on a central server machine, a serverless system utilizes workstations cooperating as peers to provide all file system services. Any machine in the system can store, cache, or control any block of data. Our approach uses this location independence, in combination with fast local area networks, to provide better performance and scalability than traditional file systems. Further, because any machine in the system can assume the responsibilities of a failed component, our serverless design also provides high availability via redundant data storage. To demonstrate our approach, we have implemented a prototype serverless network file system called xFS. Preliminary performance measurements suggest that our architecture achieves its goal of scalability. For instance, in a 32-node xFS system with 32 active clients, each client receives nearly as much read or write throughput as it would see if it were the only active client. 1. Introduction A serverless network file system distributes storage, cache, and control over cooperating workstations. This approach contrasts with traditional file systems such as Netware [Majo94], NFS [Sand85], Andrew [Howa88], and Sprite [Nels88] where a central server machine stores all data and satisfies all client cache misses. Such a central server is both a performance and reliability bottleneck. A serverless system, on the other hand, distributes control processing and data storage to achieve scalable high performance, migrates the responsibilities of failed components to the remaining machines to provide high availability, and scales gracefully to simplify system management.
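    The core claim, that any machine can store, cache, or control any block, boils down to a location-independent mapping from blocks to peers, with the survivors absorbing the work of failed nodes. The sketch below illustrates that idea with a simple hash-based placement; it is not the xFS design (xFS uses manager maps and log-based striped storage), just a minimal illustration.

```python
# Illustrative sketch only: location-independent block placement across
# cooperating peers, in the spirit of a serverless file system.
import hashlib

def place_block(block_id: str, nodes: list[str]) -> str:
    """Deterministically map a block to one of the cooperating peers."""
    digest = hashlib.sha256(block_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["ws01", "ws02", "ws03", "ws04"]      # cooperating workstations
primary = place_block("inode42/block7", nodes)
print("stored on", primary)

# No central server: if the node holding the block fails, the remaining
# peers recompute placement among themselves and take over its role.
nodes.remove(primary)
print("re-homed on", place_block("inode42/block7", nodes))
```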
  • XFS: There and Back... and There Again?
    XFS: There and Back .... and There Again? Dave Chinner <[email protected]> <[email protected]>. Overview: Story Time; Serious Things; These Days; Shiny Things; Interesting Times. Story Time: way back in the early '90s, storage was exceeding 32-bit capacities, with 64-bit CPUs, large-scale MP, and hundreds of disks in a single machine. "x" is for Undefined: xFS had to support fast crash recovery; large file systems; large, sparse files; large, contiguous files; large directories; and large numbers of files ("Scalability in the XFS File System", 1995, http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html). The Early Years: late 1994, first release in Irix 5.3; mid 1996, default FS in Irix 6.2, already at version 4 (attributes, journalled quotas, link counts > 64k, feature masks); allocation alignment to storage geometry (1997); unwritten extents (1998); version 2 directories (1999); mkfs-time configurable block size; scalability to tens of millions of directory entries. What's that Linux Thing? Feature development mostly stalled; Irix development focussed on CXFS; a new team was formed for the Linux XFS port; encumbrance review; Linux was missing lots of bits XFS needed, so a lot of work was needed. Light that fire! 2000: SGI releases XFS under GPL. 2001: first stable XFS release. 2002: XFS merged into 2.5.36. JFS follows a similar timeline.
  • Comparing Filesystem Performance: Red Hat Enterprise Linux 6 Vs
    COMPARING FILE SYSTEM I/O PERFORMANCE: RED HAT ENTERPRISE LINUX 6 VS. MICROSOFT WINDOWS SERVER 2012 When choosing an operating system platform for your servers, you should know what I/O performance to expect from the operating system and file systems you select. In the Principled Technologies labs, using the IOzone file system benchmark, we compared the I/O performance of two operating systems and file system pairs, Red Hat Enterprise Linux 6 with ext4 and XFS file systems, and Microsoft Windows Server 2012 with NTFS and ReFS file systems. Our testing compared out-of-the-box configurations for each operating system, as well as tuned configurations optimized for better performance, to demonstrate how a few simple adjustments can elevate I/O performance of a file system. We found that file systems available with Red Hat Enterprise Linux 6 delivered better I/O performance than those shipped with Windows Server 2012, in both out-of-the-box and optimized configurations. With I/O performance playing such a critical role in most business applications, selecting the right file system and operating system combination is critical to help you achieve your hardware's maximum potential. APRIL 2013 A PRINCIPLED TECHNOLOGIES TEST REPORT Commissioned by Red Hat, Inc. About file system and platform configurations While you can use IOzone to gauge disk performance, we concentrated on the file system performance of two operating systems (OSs): Red Hat Enterprise Linux 6, where we examined the ext4 and XFS file systems, and Microsoft Windows Server 2012 Datacenter Edition, where we examined NTFS and ReFS file systems.
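    The report's numbers come from IOzone, but the shape of such a measurement is easy to see in miniature. The sketch below times a sequential write (with fsync) and re-read of a scratch file; the path, record size, and file size are arbitrary choices for illustration and are no substitute for IOzone's record-size and file-size sweeps.

```python
# Toy sequential-throughput measurement, illustrating what a file system
# I/O benchmark does in essence. Path, record size, and total size are
# arbitrary; the read pass may be served from the page cache.
import os, time

PATH = "testfile.bin"
BLOCK = 1 << 20            # 1 MiB records
COUNT = 256                # 256 MiB total

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def write_pass():
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(b"\0" * BLOCK)
        f.flush()
        os.fsync(f.fileno())   # include the cost of reaching stable storage

def read_pass():
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass

w, r = timed(write_pass), timed(read_pass)
size_mib = BLOCK * COUNT / (1 << 20)
print(f"write: {size_mib / w:.1f} MiB/s, read: {size_mib / r:.1f} MiB/s")
os.remove(PATH)
```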
  • Filesystem Considerations for Embedded Devices ELC2015 03/25/15
    Filesystem considerations for embedded devices. ELC2015, 03/25/15. Tristan Lelong, Senior embedded software engineer. ABSTRACT: The goal of this presentation is to answer a question asked by several customers: which filesystem should you use within your embedded design's eMMC/SD card? These storage devices use a standard block interface, compatible with traditional filesystems, but constraints are not those of desktop PC environments. EXT2/3/4, BTRFS, F2FS are the first of many solutions which come to mind, but how do they all compare? Typical queries include performance, longevity, tools availability, support, and power loss robustness. This presentation will not dive into implementation details but will instead summarize provided answers with the help of various figures and meaningful test results. TABLE OF CONTENTS: 1. Introduction; 2. Block devices; 3. Available filesystems; 4. Performances; 5. Tools; 6. Reliability; 7. Conclusion. ABOUT THE AUTHOR: Tristan Lelong, embedded software engineer @ Adeneo Embedded; French, living in the Pacific Northwest; embedded software, free software, and Linux kernel enthusiast. INTRODUCTION: More and more embedded designs rely on smart memory chips rather than bare NAND or NOR. This presentation will start by describing: some context to help understand the differences between NAND and MMC; some typical requirements found in embedded device designs; and potential filesystems to use on MMC devices. Focus will then move to block filesystems: how they are supported and what features they advertise. To help understand how they compare, we will present some benchmarks and comparisons regarding tools, reliability, and performance. BLOCK DEVICES (MMC, eMMC, SD card). Vocabulary: MMC: MultiMediaCard is a memory card unveiled in 1997 by SanDisk and Siemens based on NAND flash memory.
  • Compatibility Matrix for CTE Agent with Data Security Manager Release 7.0.0 Document Version 15 January 19, 2021 Contents
    Compatibility Matrix for CTE Agent with Data Security Manager, Release 7.0.0, Document Version 15, January 19, 2021. Contents:
      Rebranding Announcement
      CTE Agent for Linux
      Interoperability
        Table 1: Linux Interoperability with Third Party Applications
      ESG (Efficient Storage GuardPoint) Support
        Table 2: Efficient Storage GuardPoint Support
      Linux Agent Raw Device Support Matrix
      Red Hat, CentOS, and OEL non-UEK 6.10 Raw Device Support
        Table 3: Red Hat 6.10 | CentOS 6.10 | OEL non-UEK 6.10 (x86_64)3
      Red Hat, CentOS, and OEL non-UEK 7.5-7.9 Raw Device Support
        Table 4: Red Hat 7.5-7.9 | CentOS 7.5-7.8 | OEL non-UEK 7.5-7.8 (x86_64)1,2
      Red Hat, CentOS, and OEL non-UEK 8 Raw Device Support
        Table 5: Red Hat 8.0-8.2 | CentOS 8.0-8.2 | OEL non-UEK 8.0-8.2 (x86_64)1,2
      SLES 12 Raw Device Support
        Table 6: SLES 12 SP3, SLES 12 SP4, and SLES 12 SP5 (x86_64)2
      SLES 15 Raw Device Support
        Table 7: SLES 15, SLES 15 SP1, and SLES 15 SP2 (x86_64)
      Redhat 6.10/7.5 ACFS Support with Secvm
        Table 8: Oracle ACFS/Secvm support on Redhat 6.10/7.5 (x86_64)
        Table 9: Oracle ACFS/Secvm Stack with Red Hat 6.10/7.5 (x86_64)
      Linux Agent File System Support Matrix
      Red Hat, CentOS, and OEL non-UEK 6.10 File System Support
        Table 10: Red Hat 6.10 | CentOS 6.10 | OEL non-UEK 6.10 (x86_64)1,3
      LDT Feature for Red Hat, CentOS, and OEL non-UEK 6.10 File System Support
        Table 11: Red Hat 6.10 | CentOS 6.10 | OEL non-UEK 6.10 (x86_64)1
      Red Hat, CentOS, and OEL non-UEK 7.5 - 7.9 File System Support
        Table 12: Red Hat 7.5-7.9 | CentOS
  • xFS: A Wide Area Mass Storage File System
    xFS: A Wide Area Mass Storage File System. Randolph Y. Wang and Thomas E. Anderson ([email protected]), Computer Science Division, University of California, Berkeley, CA 94720. Abstract: The current generation of file systems are inadequate in facing the new technological challenges of wide area networks and massive storage. xFS is a prototype file system we are developing to explore the issues brought about by these technological advances. xFS adapts many of the techniques used in the field of high performance multiprocessor design. It organizes hosts into a hierarchical structure so locality within clusters of workstations can be better exploited. By using an invalidation-based write back cache coherence protocol, xFS minimizes network usage. It exploits the file system naming structure to reduce cache coherence state. xFS also integrates different storage technologies in a uniform manner. Due to its intelligent use of local hosts and local storage, we expect xFS to achieve better performance. From the introduction: Scalability: the central file server model breaks down when there can be thousands of clients, terabytes of total client caches for the servers to keep track of, billions of files, and petabytes of total storage. Availability: as file systems are made more scalable, allowing larger groups of clients and servers to work together, it becomes more likely at any given time that some clients and servers will be unable to communicate. Existing distributed file systems were originally designed for local area networks and disks as the bottom layer of the storage hierarchy. They are inadequate in facing the challenges of wide area networks and massive storage.
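    The invalidation-based, write-back coherence protocol mentioned in the abstract follows the same rule as multiprocessor caches: before a host writes a block, all other cached copies are invalidated, and the dirty copy is written back later (or on demand when another host reads). The toy sketch below shows only that rule; it is not the actual xFS protocol.

```python
# Toy single-block directory implementing invalidation-based, write-back
# coherence. Illustrative only; the real xFS protocol is more involved.
class BlockDirectory:
    def __init__(self, initial: bytes = b""):
        self.cached_at = set()     # hosts currently holding a copy
        self.owner = None          # host holding the (only) dirty copy
        self.home = initial        # copy on stable storage

    def read(self, host, caches):
        if self.owner is not None:          # write back the dirty copy on demand
            self.home = caches[self.owner]
            self.owner = None
        self.cached_at.add(host)
        caches[host] = self.home
        return caches[host]

    def write(self, host, data, caches):
        for other in list(self.cached_at):  # invalidate every other copy first
            if other != host:
                caches.pop(other, None)
        self.cached_at = {host}
        self.owner = host                   # dirty data stays cached (write back)
        caches[host] = data

caches = {}
block = BlockDirectory()
block.write("hostA", b"v1", caches)
print(block.read("hostB", caches))          # b'v1', fetched via write-back
```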
  • Scaling Source Control for the Next Generation of Game Development by Mike Sundy & Tobias Roberts Perforce User's Conference May 2007, Las Vegas
    Scaling Source Control for the Next Generation of Game Development by Mike Sundy & Tobias Roberts Perforce User's Conference May 2007, Las Vegas Introduction: This document intends to show performance differences between various Windows and Linux Perforce server configurations. About the Authors Tobias Roberts is the Studio IT Architect for the Electronic Arts Redwood Shores (EARS) and Sims Studios. He has been focused on server performance and studio architecture for the past nine years. E-mail: [email protected]. Mike Sundy is the Senior Perforce Administrator for the EARS/Sims Studios. He has over eleven years of version control administration experience and has specialized in Perforce for the past five years. E-mail: [email protected] or [email protected] Installation Metrics At Electronic Arts, we check in art binary data in addition to code. There are typically a much greater number of data assets compared to code. 90% of files we store in Perforce are binary, the other 10% are code. Some of our servers have upwards of 1200 users, 6.3 million files, 600,000 changelists, and an 80 GB db.have. The oldest server has 7 years of P4 history. EA is primarily a Windows-based development shop, with upwards of 90% of client desktops running Windows. EA has over 4,000 Perforce users and 90+ Perforce servers scattered across dozens of worldwide studios. Our largest server has 1.5 TB of data. We use 5 TB of Perforce RCS storage total at the EARS/Sims studios. The EARS/Sims studios have 10 Perforce servers and approximately 1,000 users.
  • NOVA: The Fastest File System for NVDIMMs
    NOVA: The Fastest File System for NVDIMMs. Steven Swanson, UC San Diego. (Slides from the 2017 SNIA Persistent Memory Summit.) Disk-based file systems (XFS, F2FS, NILFS, EXT4, BTRFS) are inadequate for NVMM: they cannot exploit NVMM performance, and performance optimization compromises consistency on system failure [1]. (A slide table compares 1-sector, 1-block, and N-block overwrite and append atomicity for Ext4 in writeback, ordered, and data-journal modes, Btrfs, xfs, and Reiserfs.) [1] Pillai et al., "All File Systems Are Not Created Equal: On the Complexity of Crafting Crash-Consistent Applications," OSDI '14. (Another slide lists NVMM file systems: BPFS, SCMFS, PMFS, Aerie, EXT4-DAX, XFS-DAX, NOVA, M1FS.) Previous prototype NVMM file systems are not strongly consistent, and DAX does not provide a data atomicity guarantee, so programming is more difficult. (A second table compares atomicity for metadata, data, and mmap, plus snapshot support, across BPFS, PMFS, Ext4-DAX, Xfs-DAX, SCMFS, and Aerie.) Ext4-DAX and xfs-DAX shortcomings: no data atomicity support; a single journal shared by all transactions (JBD2-based); poor performance; development teams are (rightly) "disk first". NOVA provides strong atomicity guarantees. (A final slide repeats the comparison tables to set NOVA's guarantees against the file systems above.)
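    The comparison above is about whether an overwrite or append survives a crash atomically. One widely used way to get that guarantee on a conventional file system is copy-on-write plus an atomic rename, sketched below; this only illustrates the property being compared and is not NOVA's per-inode log design.

```python
# Minimal copy-on-write sketch of a crash-atomic file update: write the new
# contents to a side file, fsync it, then atomically rename over the old
# name. Illustrative only; NOVA itself uses per-inode logs in NVMM.
import os

def atomic_overwrite(path: str, data: bytes) -> None:
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())          # data durable before the switch
    os.replace(tmp, path)             # atomic rename on POSIX file systems
    dirfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
    try:
        os.fsync(dirfd)               # make the rename itself durable
    finally:
        os.close(dirfd)

atomic_overwrite("example.conf", b"key = value\n")
```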
  • Multi-Process Systems: File System
    Multi-Process Systems: File System (an Operating Systems and Distributed Systems lecture). What we have discussed before: the file abstraction; file manager and file descriptors; information marshalling and unmarshalling; block management; directory implementation; and the calls Fid = open("MyInput.txt", O_RDONLY) and numRead = read(Fid, buf, BUF_LEN). The long-term information storage problem: we must store large amounts of data (gigabytes -> terabytes -> petabytes); stored information must survive the termination of the process using it (lifetimes can be seconds to years, and there must be some way of finding it!); and multiple processes must be able to access the information concurrently. What we will learn: files; directories and naming; file system implementation; example file systems. What is a file system? An organization of data and metadata on a storage device. Another way to think about a file system is as a protocol: just as network protocols (such as IP) give meaning to the streams of data traversing the Internet, file systems give meaning to the data on a particular storage medium. There are many types of file systems and media. The Linux file system: the Linux file system architecture is an interesting example of abstracting complexity. Using a common set of API functions, a large variety of file systems can be supported on a large variety of storage devices. Example: the read function call, which allows some number of bytes to be read from a given file descriptor.
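    The two calls quoted from the slides, open with O_RDONLY and read into a buffer, look like this as a complete program using the same POSIX-style interface through Python's os module; the file name and buffer length are the slides' placeholders, and the file must already exist.

```python
# Complete version of the slides' open/read example, using POSIX-style
# calls via Python's os module. "MyInput.txt" and BUF_LEN are the
# slides' placeholders.
import os

BUF_LEN = 4096

fid = os.open("MyInput.txt", os.O_RDONLY)   # file descriptor, as on the slide
try:
    buf = os.read(fid, BUF_LEN)             # read up to BUF_LEN bytes
    print(f"read {len(buf)} bytes")
finally:
    os.close(fid)                           # release the descriptor
```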
  • Red Hat Ceph* Storage and Intel®
    RED HAT CEPH STORAGE AND INTEL CACHE ACCELERATION SOFTWARE: Accelerating object storage with the Intel SSD Data Center family. SOLUTION OVERVIEW. INTRODUCTION: To manage massive data growth, organizations are increasingly choosing object storage solutions, allowing them to scale storage flexibly while controlling costs. Ceph is a popular solution, letting organizations deploy cost-effective industry-standard hardware as a part of proven software-defined storage infrastructure. With this shift, the storage media itself has become a key consideration. Traditional hard disk drives (HDDs) are affordable, but often lack the desired input/output (I/O) performance for demanding workloads, such as storing large numbers of objects. Proprietary all-flash arrays offer performance, but can be cost-prohibitive for large-scale deployments. Red Hat® Ceph Storage combined with the Intel® Solid State Drive (SSD) Data Center family and Intel® Cache Acceleration Software (CAS) has emerged as a compelling option. Organizations can use Intel CAS to selectively classify key portions of a given I/O workload for acceleration with the high performance and low latency of flash storage. The performance difference can be remarkable. When testing large object-count storage workloads, Red Hat saw performance improvements of up to 400% for small-object writes when using Intel SSDs and Intel CAS.1 The solution is also cost-effective, achieving strong performance results with only 2-4 SSDs per system. Sidebar: Achieve flash-accelerated object storage performance at lower costs than proprietary all-flash array solutions. Use the Intel SSD Data Center family and Intel CAS to intelligently prioritize I/O for caching. Confidently deploy software-
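    "Selectively classify key portions of a given I/O workload for acceleration" is, at heart, a routing policy: some requests go to the flash cache, the rest go straight to the HDD-backed storage. The sketch below is a purely illustrative policy with invented thresholds; Intel CAS implements this inside a block-layer caching driver, not in application code.

```python
# Toy illustration of a selective caching policy: small-object writes and
# hot reads go to the flash tier, everything else straight to HDD.
# The 64 KiB cutoff is an assumption for illustration, not an Intel CAS
# parameter.
SMALL_OBJECT = 64 * 1024

def route(op: str, size: int, hot: bool) -> str:
    if op == "write" and size <= SMALL_OBJECT:
        return "ssd-cache"        # the small-object-write case the report highlights
    if op == "read" and hot:
        return "ssd-cache"
    return "hdd"

print(route("write", 16 * 1024, hot=False))   # ssd-cache
print(route("read", 4 << 20, hot=False))      # hdd
```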
  • AV-Use File Systems for Multiple High-Definition Era
    Hitachi Review Vol. 56 (2007), No. 1. AV-use File Systems for Multiple High-definition Era. Nobuaki Kohinata, Damien Le Moal, Mika Mizutani. OVERVIEW: Accompanying the spread of AV equipment fitted with large-capacity HDDs and high-speed network interfaces, a new style of enjoying content—in which all recorded content can be enjoyed freely anywhere in the home—will become mainstream in the near future. In the file system for handling this new viewing/listening style, processing must be performed at high efficiency while assuring the access rates for writing to the HDD storing content and for reading data from the HDD. Aiming to create a middleware solution to meet the above-mentioned requirements, Hitachi has developed, and is presently commercializing, an AV-use file system that enables simultaneous access to multiple "high definition" content (i.e. HDTV programs). Focusing on developing middleware for improving the added-value of HDDs, we are continuing to intensify and push forward our research and development on fundamental technologies for supporting people's "new digital lives." INTRODUCTION: Accompanying the popularization of AV (audio-visual) equipment fitted with network I/Fs (interfaces) and large-capacity HDDs (hard disk drives), and the launch of terrestrial digital broadcasting, it is considered that, from now onwards, the way that users view and listen to content will continue to change. HDTV (high-definition TV) content that has already been recorded can be freely enjoyed in the home while programs on all channels are being recorded. Making this kind of viewing a reality necessitates a scheme that allows multiple read/writing processing operations on an HDD simultaneously at high efficiency while
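    Assuring access rates for several simultaneous HDTV recordings and playbacks is ultimately an admission-control problem: the combined rate of the streams must stay within what the HDD can sustain. The sketch below is back-of-the-envelope arithmetic with assumed round numbers, not figures from the article.

```python
# Back-of-the-envelope admission check for concurrent HD streams,
# illustrating the kind of rate guarantee the file system must make.
# Disk throughput, stream bitrate, and headroom are assumptions.
DISK_MB_S = 80                  # assumed sustained HDD throughput
HDTV_MB_S = 25 / 8              # ~25 Mbit/s HDTV stream, in MB/s

def admissible(read_streams: int, write_streams: int, headroom: float = 0.7) -> bool:
    """Can the disk sustain these concurrent streams with some headroom for seeks?"""
    demand = (read_streams + write_streams) * HDTV_MB_S
    return demand <= DISK_MB_S * headroom

print(admissible(read_streams=2, write_streams=8))    # True: ~31 MB/s demand
print(admissible(read_streams=10, write_streams=16))  # False: ~81 MB/s demand
```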
  • The Evolution of File Systems
    The Evolution of File Systems. Thomas Rivera, Hitachi Data Systems; Craig Harmer. April 2011. SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced without modification, and the SNIA must be acknowledged as source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the Author nor the Presenter is an attorney and nothing in this presentation is intended to be nor should be construed as legal advice or opinion. If you need legal advice or legal opinion please contact an attorney. The information presented herein represents the Author's personal opinion and current understanding of the issues involved. The Author, the Presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. Abstract: The File Systems Evolution. Over time additional file systems appeared focusing on specialized requirements such as data sharing, remote file access, distributed file access, parallel file access, HPC, archiving, security, etc. Due to the dramatic growth of unstructured data, files as the basic units for data containers are morphing into file objects, providing more semantics and feature-rich capabilities for content processing. This presentation will: categorize and explain the basic principles of currently available file system architectures (e.g.