Red Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure


Red Hat® Ceph Storage and Network Solutions for Software Defined Infrastructure

Frank Ober – Intel Non-Volatile Memory Solutions Group
Tony Dempsey – Intel Network Platforms Group
Dr. John Fitzpatrick – Openet Labs

Data Center Needs Are Changing
• From collecting data to analyzing it
• To virtualizing networks and storage
• To delivering secure and compliant cloud-like services

Traditional Data Center Resources Are Limited
(Diagram: internal requests competing for a fixed pool of resources.)

Intel Software Defined Infrastructure (SDI) Vision
Dynamic, policy-driven resource management:
▪ Abstraction of software from hardware provides flexibility and scalability
▪ Dynamic provisioning of resources (pay-as-you-grow)
▪ Orchestration of diverse systems through application SLOs to enable seamless access
(Diagram: services delivery for applications #1 through #4 via orchestration software, driven by infrastructure attributes such as power, performance, thermals, utilization, location, latency, and durability, on partner hardware and software solutions spanning software-defined servers, storage, and networking.)
SDI is not a single product or solution. SDI is a framework providing efficient, flexible, scalable, standards-based compute, networking, and storage resources.

Intel SDI Assets
(Diagram: Software Defined Infrastructure built from processor platforms, networking and fabric, server boards and systems, SSDs, and solution reference architectures, together with software assets and ecosystem: Intel® QuickAssist Technology, Intel® Storage Acceleration Library (ISA-L), DPDK, and Intel® Cache Acceleration Software.)

Software Defined Storage

Software Defined Storage (SDS) Architecture
• Data services are the part of the storage software that runs in the data plane to optimize storage; examples include data deduplication, erasure code, and tiering.
• The SDS controller has visibility into, and control of, all storage resources. It coordinates between applications, the orchestrator, and the storage systems, and it allocates storage resources to meet SLAs.
(Diagram: applications and an orchestrator talk to the SDS controller over a northbound API; the controller manages the storage systems, spanning SAN, capacity, DAS, performance, and NAS nodes, over a southbound API.)

SDS Vision to Action
• SDS will provide a convenient way to manage all storage in the datacenter
• The framework is still in development
• So, how do you get started with SDS?
  – Focus on the storage system layer
  – Implement scale-out storage systems from OEMs and ISVs built on standard high-volume servers
Deploying storage systems on standard, high-volume servers today supports a seamless transition to SDI tomorrow.

Advent of Solid-State Drives (SSDs)
The HDD bottleneck:
• CPU performance has improved roughly 175x while HDD I/O has improved only about 1.3x
• I/Os reach the spindles in a random fashion
• The problem gets worse as the number of applications or VMs per LUN grows
Why SSDs remove it:
• SSD prices fell from about $30/GB to under $1/GB between 2008 and 2015
• Lower TCO than HDDs in dollars, watts, and rack space
• Better cost per IOPS, roughly $0.01 vs. $0.80 (see the sketch below)
• Reliable, with MTBF ratings around 2 million hours
SSDs are a cost-effective, reliable means to remove the storage bottleneck.
* Performance and cost figures are estimates for informational purposes only and may vary.
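The cost-per-IOPS comparison above is plain arithmetic: drive price divided by sustained random IOPS. A minimal sketch of that calculation, assuming hypothetical round-number prices and IOPS ratings chosen only to reproduce the slide's rough $0.80 vs. $0.01 estimates (they are not quoted specifications):

    def cost_per_iops(drive_price_usd: float, random_iops: float) -> float:
        """Dollars of drive cost per unit of sustained random IOPS."""
        return drive_price_usd / random_iops

    # Hypothetical, illustrative figures only.
    hdd = cost_per_iops(drive_price_usd=200.0, random_iops=250)     # enterprise 15K RPM HDD
    ssd = cost_per_iops(drive_price_usd=400.0, random_iops=40_000)  # datacenter SATA SSD

    print(f"HDD: ${hdd:.2f}/IOPS, SSD: ${ssd:.2f}/IOPS")
    # -> HDD: $0.80/IOPS, SSD: $0.01/IOPS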
Red Hat® Ceph Storage
• Ceph is an open-source, massively scalable, software-defined storage system that provides object, block, and file system storage in a single platform. It runs on commodity hardware, saving you costs and giving you flexibility, and because it is in the Linux kernel, it is easy to consume.
• Object store (RADOSGW)
  – A bucket-based REST gateway compatible with the S3 and Swift APIs
• File system (CephFS)
  – A POSIX-compliant distributed file system
• Block device service (RBD)
  – OpenStack native support
  – Cinder backend
  – Glance backend
  – Kernel client and QEMU/KVM driver
(Diagram: applications reach Ceph Object Storage either directly through the LIBRADOS library or through the Ceph Object Gateway (RADOS Gateway), a RESTful gateway for object storage; hosts and VMs use the Ceph Block Device (RBD), a reliable and fully distributed block device; clients use the Ceph Distributed File System (CephFS), a fully POSIX-compliant distributed file system. All of these sit on RADOS, the Reliable, Autonomous, Distributed Object Store: self-healing, self-managing intelligent storage nodes.)

Ceph Design Considerations
• Consistency of an SSD caching drive within Ceph is key to overall consistent performance
• The developers' technical documentation advises using the most consistent drives possible, and drives that protect the data
• Relatively inexpensive SSDs often hide issues that you need to find
http://ceph.com/docs/master/start/hardware-recommendations/#data-storage
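The LIBRADOS path in the diagram above, and the RBD images that Cinder and Glance consume, can both be driven from a few lines of script. Here is a minimal sketch using the python-rados and python-rbd bindings that ship with Ceph, assuming a reachable cluster, the standard /etc/ceph/ceph.conf, and an existing pool named 'rbd' (the object and image names are arbitrary examples):

    import rados
    import rbd

    # Connect to the cluster via the standard config file (path assumed).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # pool assumed to exist
        try:
            # Direct object access through LIBRADOS.
            ioctx.write_full('hello-object', b'stored via librados')
            print(ioctx.read('hello-object'))

            # Create a 1 GiB RBD image in the same pool; this is the block
            # path that Cinder and Glance use when Ceph backs OpenStack.
            rbd.RBD().create(ioctx, 'demo-image', 1 * 1024**3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()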
Open Source Virtual Storage Manager (VSM) for Ceph
VSM is designed for a bundled Ceph storage appliance: it creates a Ceph cluster and handles management plus monitoring. See https://github.com/01org/virtual-storage-manager
VSM allows easier Ceph creation and management:
• Creates the Ceph cluster
• Manages Ceph
• Monitors Ceph

Software Defined Networking

Intel Innovations in Networking
(Diagram: Intel technologies across access, edge/core, and enterprise networks: Intel® VT-x and Intel® VT-d, Intel® QuickAssist, 10/40/100 Gb Intel Ethernet, the Data Plane Development Kit, Intel Data Direct I/O, Intel Ethernet switch silicon, and Hyperscan pattern matching. Intel® architecture plus Intel® acceleration plus Intel® software serve wireless base stations, intelligent edge routers and switches, wireless infrastructure, media processing, network appliances, and network security.)
Moore's Law: the number of transistors incorporated in a chip will approximately double every 24 months.

Intel Delivers
(Diagram: industry-leading innovation and commitment to open standards; technology leadership and platforms; software-defined infrastructure; investment in building strong ecosystems; accelerating the market.)

Workload Consolidation to IA: A Decade of Investment
(Timeline diagram: networking workloads that once ran on non-Intel NPUs, ASICs, DSPs, and other architectures progressively consolidate onto Intel® processors: carrier-grade Linux on Intel® processors early on, chipset integration around 2004-2005, multi-core processors with accelerators and chipset integration in 2006-2012, and from 2013 onward security pattern-scanning software, networking, and sensory and network assets (the Data Plane Development Kit, the Intel® Media Center Development Kit, and acquired Mindspeed and Axxia base station processing technology) on one instruction set and one tool suite, opening many opportunities.)
1. Intel® Media Server Studio requires an Intel® Xeon® E3 or Intel® Core™ processor (processor graphics)
4:1 strategy = a single architecture that consolidates the workloads into a scalable and simplified solution

Open Source/Standards Fuel Transformation: Intel Is a Leading Contributor
Intel invests in contributions to SDN/NFV initiatives, including OpenDaylight, IETF Service Function Chaining, the Open Networking Foundation, NFV and MEC, the Open Platform for NFV, OpenStack orchestration, Open vSwitch, the Data Plane Development Kit (DPDK.org), and open-source Linux distributions.
Specifications -> Code -> Test -> Reference Platforms
*Other brands and names are the property of their respective owners.

Intel® Open Network Platform (ONP) Server
What is it?
• A server hardware and software reference design integrating Intel hardware optimizations with the open-source ingredients used in SDN/NFV
Who is it targeted at?
• Directly, NFVI+VIM vendors: OEMs, TEMs, SIs, and ODMs
• Indirectly, ISVs and communications service providers
• Adjacent markets: enterprise and cloud service providers
Where does my customer get it?
• ONP software is released quarterly on Intel's 01.org in the form of a reference architecture document, a benchmark test report, collateral, and an application demo setup*
How does it relate to the Linux Foundation Open Platform for NFV (OPNFV)?
• The ONP server is available today and will remain Intel's program for contributing covered ingredients into OPNFV
(Stack diagram, an open-source software stack based on the ETSI NFV reference architecture: OpenStack cloud OS and OpenDaylight controller (the OPNFV project), DPDK and Open vSwitch, Linux Fedora OS with the KVM hypervisor, Intel® QuickAssist Technology drivers, and Intel® Ethernet drivers for 10 and 40 GbE, all on an industry-standard high-volume server with the Intel® Xeon® processor E5 v3, Intel® Communications Chipset 89xx series, and Intel® Ethernet Controller XL710.)
* Other names and brands may be claimed as the property of others.

Intel® Network Builders Program: Partners
Enabling broad market participation to increase innovation. To find out more, visit https://networkbuilders.intel.com/

Customer Success Stories: SDN/NFV
Network operators and enterprises are embracing SDN/NFV using general-purpose processing technology in their networks, with 25+ pilots in process worldwide; these are just a few examples.

Intel Is Investing in SDN/NFV Transformation
• Advance open source and standards (industry consortia)
• Deliver open reference designs (the Intel ONP reference architecture)
• Collaborate on trials and deployments (telecom and enterprise PoCs and deployments)
• Enable an open ecosystem on IA (Intel Network Builders)
Accelerating network transformation through industry collaboration.

Openet ETSI NFV Use Case
The world of computing has gone through rapid changes in recent years. This has led to great improvements in how software is created, deployed, managed, and used. But how does this relate to mobile networks? Our networks have not adapted to the same degree as many other computing-driven industries.
Virtualization to the rescue:
• Reduced TCO
• Reduced time-to-market
• Rapid service deployment
• Dynamic scalability
• Service agility
Openet's Virtualization
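One place the "rapid service deployment" benefit becomes concrete in an OpenStack-based NFV platform like the ONP stack above is storage provisioning: with Ceph's RBD wired in as the Cinder backend, attaching persistent block storage to a network function is a single API call rather than a SAN change request. A minimal sketch using the python-cinderclient and keystoneauth1 libraries, assuming a reachable Keystone endpoint and an operator-defined volume type named 'ceph' mapped to the RBD driver (all credentials, URLs, and names here are placeholders):

    from keystoneauth1 import loading, session
    from cinderclient import client

    # Authenticate against Keystone (endpoint and credentials are placeholders).
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin',
        password='secret',
        project_name='admin',
        user_domain_id='default',
        project_domain_id='default',
    )
    cinder = client.Client('3', session=session.Session(auth=auth))

    # Create a 10 GiB volume on the assumed Ceph/RBD-backed volume type.
    vol = cinder.volumes.create(size=10, name='vnf-data', volume_type='ceph')
    print(vol.id, vol.status)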