Software Defined Storage Overview

August 2019
Juan Jose Floristan, Cloud Specialist Solution Architect

AGENDA
1. Why Red Hat Storage?
2. Red Hat Ceph Storage
3. Red Hat Gluster Storage
4. Red Hat OpenShift Container Storage

Why Red Hat Storage?

STORAGE IS EVOLVING
• Traditional storage: complex, proprietary silos. Each silo pairs its own users and admins with a custom GUI, proprietary software, and proprietary hardware.
• Open, software-defined storage: standardized, unified, open platforms. A single control plane (API, GUI) sits on open source software (Ceph, Gluster, and more) running on standard computers and disks.

WHY THIS MATTERS
• Proprietary hardware → common, off-the-shelf hardware: lower cost, standardized supply chain.
• Scale-up architecture → scale-out architecture: increased operational flexibility.
• Hardware-based intelligence → software-based intelligence: more programmability, agility, and control.
• Closed development process → open development process: more flexible, well-integrated technology.

A RISING TIDE
Software-defined storage is leading a shift in the global storage industry, with far-reaching effects.
[Chart: SDS-P market size by segment (block storage, file storage, object storage, hyperconverged), growing from $475M in 2013 to $1,395M in 2019. Source: IDC]
"By 2020, between 70%-80% of unstructured data will be held on lower-cost storage managed by SDS."
"By 2019, 70% of existing storage array products will also be available as software-only versions."
Source: "Innovation Insight: Separating Hype From Hope for Software-Defined Storage"

THE RED HAT STORAGE MISSION
To offer a unified, open software-defined storage portfolio that delivers a range of data services for next-generation workloads, thereby accelerating the transition to modern IT infrastructures.
Red Hat Ceph Storage

BUSINESS BENEFITS
• OPEN SOURCE: no proprietary lock-in, with a large commercial ecosystem and broad community.
• PEACE OF MIND: over a decade of active development, proven in production and backed by Red Hat.
• LOWER COST: more economical than traditional NAS/SAN, particularly at petabyte scale.

RED HAT CEPH STORAGE
Distributed, enterprise-grade object storage, proven at web scale.
• Open, massively scalable, software-defined
• Flexible, scale-out architecture on clustered standard hardware
• Single, efficient, unified storage platform (object, block, file)
• User-driven storage lifecycle management with 100% API coverage
• S3-compatible object API
• Designed for modern workloads like cloud infrastructure and data lakes

ARCHITECTURAL COMPONENTS
• RGW: a web services gateway for object storage, compatible with S3 and Swift (used by applications).
• RBD: a reliable, fully distributed block device with cloud platform integration (used by hosts and VMs).
• CephFS: a distributed file system with POSIX semantics and scale-out metadata (used by clients).
• LIBRADOS: a library allowing applications to access RADOS directly (C, C++, Java, Python, Ruby).
• RADOS: a software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors.

COMPLETE OPENSTACK STORAGE
• Deeply integrated with OpenStack's modular architecture and components for ephemeral and persistent storage: Nova, Cinder, Manila, Glance, Keystone, Ceilometer, Swift, and Gnocchi.
• Access paths span the Keystone, Swift, Glance, Manila, Cinder, and Nova APIs, served by librbd on the hypervisor, the Ceph Object Gateway, and CephFS, all backed by Red Hat Ceph Storage.
• A 64 TB Red Hat Ceph Storage entitlement is bundled (once) with Red Hat OpenStack Platform for evaluation.

ADVANTAGES FOR OPENSTACK USERS
• Instantaneous booting of one or hundreds of VMs
• Instant backups via seamless data migration between Glance, Cinder, and Nova
• Tiered I/O performance within a single cluster
• Multi-site replication for disaster recovery or archiving

TARGET USE CASES
• Private cloud: enterprise deployments growing for test & dev and production application deployments, notably in the FSI, retail, and technology sectors.
• Archive & backup: object storage as a replacement for tape and expensive dedicated appliances; hybrid cloud compatibility is critical.
• NFVi (new): OpenStack with Ceph is the dominant reference platform for next-generation telco networks, with global demand for Ceph both standalone and hyperconverged.
• Enterprise virtualization (new): legacy protocol support allows legacy VM storage to be managed on the same platform as modern, private cloud storage.
• Big data (new): object storage provides a common data lake for multiple analytics applications, for greater efficiency and better business insights.
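The architecture notes above point out that applications can talk to RADOS directly through librados. The following is a minimal sketch using the Python rados bindings shipped with Ceph; the configuration path, keyring, and the pool name "data" are assumptions about the environment, not details from the deck.

```python
# Minimal sketch: write and read an object directly in RADOS via the Python
# librados bindings (module "rados"). Assumes a reachable cluster, a valid
# /etc/ceph/ceph.conf and client keyring, and an existing pool named "data"
# (the pool name is illustrative).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    print("Cluster ID:", cluster.get_fsid())
    ioctx = cluster.open_ioctx('data')        # I/O context bound to one pool
    try:
        ioctx.write_full('hello_object', b'Hello RADOS')  # whole-object write
        print(ioctx.read('hello_object'))                  # read it back
        ioctx.remove_object('hello_object')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```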
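The archive & backup and data-lake use cases build on the S3-compatible object API exposed by the Ceph Object Gateway (RGW). Here is a hedged sketch using boto3 against a custom endpoint; the endpoint URL and credentials are placeholders for values issued for an RGW user (typically created with radosgw-admin).

```python
# Minimal sketch: use the S3-compatible API of the Ceph Object Gateway with
# boto3. Endpoint, access key, and secret key are placeholders, not real values.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',  # RGW endpoint (assumption)
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.create_bucket(Bucket='backups')
s3.put_object(Bucket='backups', Key='db-dump.sql.gz', Body=b'...archive bytes...')
for obj in s3.list_objects_v2(Bucket='backups').get('Contents', []):
    print(obj['Key'], obj['Size'])
```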
Red Hat Gluster Storage

RED HAT GLUSTER STORAGE ADVANTAGES
Open, software-defined, distributed file and object storage system.
• OPEN
  • Based on the GlusterFS open source community project
  • Uses a proven local file system (XFS)
  • Data is stored in native format
• SCALABLE (no metadata server)
  • Uses an elastic hashing algorithm for data placement
  • Uses the local filesystem's xattrs to store metadata
  • Shared-nothing, scale-out architecture
  • Global namespace
• ACCESSIBLE (multi-protocol access to the same data)
  • NFS, SMB, object, and the Gluster native protocol
  • POSIX compliant
  • GlusterFS is based on Filesystem in Userspace (FUSE)
• MODULAR (no kernel dependencies)
  • A modular, stackable architecture allows easy addition of features without being tied to any kernel version
• ALWAYS-ON (high availability across data, systems, and applications)
  • Synchronous replication with self-healing for server failure
  • Asynchronous geo-replication for site failure

GLUSTER ARCHITECTURE
Gluster aggregates servers with local disks into one cohesive unit and presents it over common protocols (NFS, CIFS, FUSE).

WHAT IS A SYSTEM?
A storage node can be a physical server, a virtual machine, or a cloud instance; each contributes CPU, memory, and disks.

VOLUMES
Bricks taken from multiple hosts become one addressable unit that clients mount.
[Diagram: three storage nodes, each exporting several bricks (/export1 through /export15), combined into volumes such as /scratchspace and /homeshare.]

NFS
Better performance when reading many small files from a single client.

SAMBA/CIFS
Samba uses the Red Hat Gluster Storage gfapi library to communicate directly with the GlusterFS server process without going through FUSE.

OBJECT ACCESS
Implements objects as files and directories under the container ("Swift on File").

GEO-REPLICATION
Supports one-to-one replication (site A to site B) and cascading replication (site A to site B to site C).
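To make the brick and volume terminology concrete, here is a minimal sketch that drives the gluster CLI from Python to turn bricks from three servers into one replicated volume. Host names and brick paths are placeholders, and the nodes are assumed to already form a trusted storage pool.

```python
# Minimal sketch: combine bricks from three servers into a 3-way replicated
# Gluster volume by calling the "gluster" CLI. Hosts and paths are placeholders;
# run from any node of an already-probed trusted storage pool.
import subprocess

bricks = [
    'server1:/bricks/brick1/gv0',
    'server2:/bricks/brick1/gv0',
    'server3:/bricks/brick1/gv0',
]

def gluster(*args):
    """Run a gluster CLI command and raise if it fails."""
    subprocess.run(['gluster', *args], check=True)

# Replicated volume: every file is kept as a whole copy on all three bricks.
gluster('volume', 'create', 'gv0', 'replica', '3', *bricks)
gluster('volume', 'start', 'gv0')
gluster('volume', 'info', 'gv0')
```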
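The Samba/CIFS note above shows that gfapi lets a process talk to the GlusterFS server directly, bypassing the FUSE mount. Applications can take the same path from Python via the libgfapi-python bindings; the sketch below assumes that package is installed and uses placeholder host and volume names.

```python
# Minimal sketch: access a Gluster volume over libgfapi with the libgfapi-python
# bindings ("gluster" package), without a FUSE mount. The host
# "gluster1.example.com" and volume "gv0" are placeholders.
from gluster import gfapi

vol = gfapi.Volume('gluster1.example.com', 'gv0')
vol.mount()                                   # initialize the libgfapi session
try:
    if not vol.exists('reports'):
        vol.mkdir('reports', 0o755)
    with vol.fopen('reports/summary.txt', 'w+') as f:
        f.write(b'written straight through libgfapi\n')
    print(vol.listdir('reports'))
finally:
    vol.umount()
```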
Red Hat OpenShift Container Storage

WHAT STORAGE OPTIONS DO YOU HAVE FOR YOUR CONTAINERS?
• Siloed or point-play storage solutions and outdated storage arrays and appliances fall short: no hybrid cloud support, no unified control plane (Kubernetes), lack of automated or dynamic provisioning of storage, scalability and high availability that miss customer needs, vendor lock-in and high TCO, and a monolithic appliance model.
• OpenShift Container Storage delivers portable storage across on-premises and public clouds: it runs on all three public clouds, comes with a single-vendor support model, and gives developers and DevOps a seamless user experience.

WHAT IS OPENSHIFT CONTAINER STORAGE?
• Highly scalable, enterprise-grade storage
• Two deployment options: independent mode and converged mode
• Native integration with OpenShift
• Unified orchestration using Kubernetes for applications and storage
• Greater control and ease of use for developers
• Developed, maintained, and deployed in sync with Red Hat OpenShift releases
• Single-vendor support
[Diagram: application containers and storage containers running side by side on a distributed, secure, scale-out storage cluster.]

STORAGE SUPPORT BEFORE OCS
The pre-OCS options (NFS, GlusterFS, GCE Persistent Disk, Azure File, Azure Block, VMDK, Cinder, AWS EBS, Ceph RBD) each carried one or more limitations: no dynamic provisioning, no cross-availability-zone support, or no shared storage.

STORAGE SUPPORT AFTER OCS
• Scalable (1000+ volumes)
• Automated (dynamic provisioning)
• Highly available (across availability zones)
• Integrated (installs with and runs on OpenShift)
Persistent, resilient, and elastic storage that travels with the platform.

TWO FLAVORS OF CONTAINER STORAGE
• Independent mode: use an existing investment in traditional storage, managed by the storage admin; OpenShift attaches to the standalone storage.
• Converged mode: highly scalable, enterprise-grade storage, fully integrated into OpenShift Container Platform.

CONVERGED MODE
Converged mode co-locates storage and applications: OCS containers run next to application containers on the OpenShift nodes, with dynamic provisioning, managed by OpenShift, and infrastructure-agnostic.

STORAGE CONSOLIDATION IN THE CLOUD
[Diagram: one OpenShift Container Storage cluster compared with separate per-workload EBS volume types (gp2, sc1, st1, io1).]

STORAGE CONSOLIDATION ON PREM
Persistent volumes are provided by OpenShift Container Storage pods, each backed by block devices carved from Fibre Channel, iSCSI SAN, or shared SAS arrays.
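Dynamic provisioning is the part developers actually touch: they request a PersistentVolumeClaim against an OCS-backed StorageClass and the platform creates the volume for them. A minimal sketch with the Kubernetes Python client follows; the namespace and the StorageClass name "glusterfs-storage" are assumptions, so substitute whatever `oc get storageclass` reports in your cluster.

```python
# Minimal sketch: request dynamically provisioned storage from OpenShift
# Container Storage by creating a PersistentVolumeClaim with the Kubernetes
# Python client. StorageClass and namespace names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name='app-data'),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=['ReadWriteOnce'],
        storage_class_name='glusterfs-storage',   # assumed OCS class name
        resources=client.V1ResourceRequirements(requests={'storage': '10Gi'}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace='demo', body=pvc)
```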
Thank you

Red Hat is the world's leading provider of enterprise open source software solutions. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500.

linkedin.com/company/red-hat
facebook.com/redhatinc
youtube.com/user/RedHatVideos
twitter.com/RedHat