Deployment Guide SUSE Enterprise Storage 6 by Tomáš Bažant, Alexandra Settle, Liam Proven, and Sven Seeberg

Total Pages: 16

File Type: PDF, Size: 1020 KB

SUSE Enterprise Storage 6 Deployment Guide
by Tomáš Bažant, Alexandra Settle, Liam Proven, and Sven Seeberg

Publication Date: 07/04/2021

SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA
https://documentation.suse.com

Copyright © 2021 SUSE LLC
Copyright © 2016, Red Hat, Inc., and contributors.

The text of and illustrations in this document are licensed under a Creative Commons Attribution-Share Alike 4.0 International ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/4.0/legalcode . In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. All other trademarks are the property of their respective owners. For SUSE trademarks, see http://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
Contents

About This Guide x
1 Available Documentation x
2 Feedback xi
3 Documentation Conventions xi
4 About the Making of This Manual xii
5 Ceph Contributors xii

I SUSE ENTERPRISE STORAGE 1
1 SUSE Enterprise Storage 6 and Ceph 2
1.1 Ceph Features 2
1.2 Core Components 3: RADOS 3 • CRUSH 4 • Ceph Nodes and Daemons 5
1.3 Storage Structure 6: Pool 6 • Placement Group 7 • Example 7
1.4 BlueStore 8
1.5 Additional Information 10
2 Hardware Requirements and Recommendations 11
2.1 Network Overview 11: Network Recommendations 12
2.2 Multiple Architecture Configurations 14
2.3 Hardware Configuration 15: Minimum Cluster Configuration 15 • Recommended Production Cluster Configuration 17
2.4 Object Storage Nodes 18: Minimum Requirements 18 • Minimum Disk Size 19 • Recommended Size for the BlueStore's WAL and DB Device 19 • Using SSD for OSD Journals 19 • Maximum Recommended Number of Disks 20
2.5 Monitor Nodes 20
2.6 Object Gateway Nodes 21
2.7 Metadata Server Nodes 21
2.8 Admin Node 21
2.9 iSCSI Nodes 22
2.10 SUSE Enterprise Storage 6 and Other SUSE Products 22: SUSE Manager 22
2.11 Naming Limitations 22
2.12 OSD and Monitor Sharing One Server 22
3 Admin Node HA Setup 24
3.1 Outline of the HA Cluster for Admin Node 24
3.2 Building a HA Cluster with Admin Node 25
4 User Privileges and Command Prompts 27
4.1 Salt/DeepSea Related Commands 27
4.2 Ceph Related Commands 27
4.3 General Linux Commands 28
4.4 Additional Information 28

II CLUSTER DEPLOYMENT AND UPGRADE 29
5 Deploying with DeepSea/Salt 30
5.1 Read the Release Notes 30
5.2 Introduction to DeepSea 31: Organization and Important Locations 32 • Targeting the Minions 33
5.3 Cluster Deployment 35
5.4 DeepSea CLI 45: DeepSea CLI: Monitor Mode 46 • DeepSea CLI: Stand-alone Mode 46
5.5 Configuration and Customization 48: The policy.cfg File 48 • DriveGroups 53 • Adjusting ceph.conf with Custom Settings 63
6 Upgrading from Previous Releases 64
6.1 General Considerations 64
6.2 Steps to Take before Upgrading the First Node 65: Read the Release Notes 65 • Verify Your Password 65 • Verify the Previous Upgrade 65 • Upgrade Old RBD Kernel Clients 67 • Adjust AppArmor 67 • Verify MDS Names 67 • Consolidate Scrub-related Configuration 68 • Back Up Cluster Data 69 • Migrate from ntpd to chronyd 69 • Patch Cluster Prior to Upgrade 71 • Verify the Current Environment 73 • Check the Cluster's State 74 • Migrate OSDs to BlueStore 75
6.3 Order in Which Nodes Must Be Upgraded 77
6.4 Offline Upgrade of CTDB Clusters 77
6.5 Per-Node Upgrade Instructions 78: Manual Node Upgrade Using the Installer DVD 79 • Node Upgrade Using the SUSE Distribution Migration System 81
6.6 Upgrade the Admin Node 83
6.7 Upgrade Ceph Monitor/Ceph Manager Nodes 84
6.8 Upgrade Metadata Servers 84
6.9 Upgrade Ceph OSDs 86
6.10 Upgrade Gateway Nodes 89
6.11 Steps to Take after the Last Node Has Been Upgraded 91: Update Ceph Monitor Setting 91 • Disable Insecure Clients 92 • Enable the Telemetry Module 92
6.12 Update policy.cfg and Deploy Ceph Dashboard Using DeepSea 93
6.13 Migration from Profile-based Deployments to DriveGroups 95: Analyze the Current Layout 95 • Create DriveGroups Matching the Current Layout 96 • OSD Deployment 97 • More Complex Setups 97
7 Customizing the Default Configuration 98
7.1 Using Customized Configuration Files 98: Disabling a Deployment Step 98 • Replacing a Deployment Step 99 • Modifying a Deployment Step 100 • Modifying a Deployment Stage 101 • Updates and Reboots during Stage 0 103
7.2 Modifying Discovered Configuration 104: Enabling IPv6 for Ceph Cluster Deployment 106

III INSTALLATION OF ADDITIONAL SERVICES 108
8 Installation of Services to Access your Data 109
9 Ceph Object Gateway 110
9.1 Object Gateway Manual Installation 110: Object Gateway Configuration 111
10 Installation of iSCSI Gateway 117
10.1 iSCSI Block Storage 117: The Linux Kernel iSCSI Target 118 • iSCSI Initiators 118
10.2 General Information about ceph-iscsi 119
10.3 Deployment Considerations 120
10.4 Installation and Configuration 121: Deploy the iSCSI Gateway to a Ceph Cluster 121 • Create RBD Images 121 • Export RBD Images via iSCSI 122 • Authentication and Access Control 123 • Advanced Settings 125
10.5 Exporting RADOS Block Device Images Using tcmu-runner 128
11 Installation of CephFS 130
11.1 Supported CephFS Scenarios and Guidance 130
11.2 Ceph Metadata Server 131: Adding and Removing a Metadata Server 131 • Configuring a Metadata Server 131
11.3 CephFS 137: Creating CephFS 137 • MDS Cluster Size 138 • MDS Cluster and Updates 139 • File Layouts 140
12 Installation of NFS Ganesha 145
12.1 Preparation 145: General Information 145 • Summary of Requirements 146
12.2 Example Installation 146
12.3 Active-Active Configuration 147: Prerequisites 147 • Configure NFS Ganesha 148 • Populate the Cluster Grace Database 149 • Restart NFS Ganesha Services 150 • Conclusion 150
12.4 More Information 150

IV CLUSTER DEPLOYMENT ON TOP OF SUSE CAAS PLATFORM 4 (TECHNOLOGY PREVIEW) 151
13 SUSE Enterprise Storage 6 on Top of SUSE CaaS Platform 4 Kubernetes Cluster 152
13.1 Considerations 152
13.2 Prerequisites 152
13.3 Get Rook Manifests 153
13.4 Installation 153: Configuration 153 • Create the Rook Operator 155 • Create the Ceph Cluster 155
13.5 Using Rook as Storage for Kubernetes Workload 156
13.6 Uninstalling Rook 157

A Ceph Maintenance Updates Based on Upstream 'Nautilus' Point Releases 158
Glossary 170
B Documentation Updates 173
B.1 Maintenance update of SUSE Enterprise Storage 6 documentation 173
B.2 June 2019 (Release of SUSE Enterprise Storage 6) 174

About This Guide

SUSE Enterprise Storage 6 is an extension to SUSE Linux Enterprise Server 15 SP1. It combines the capabilities of the Ceph (http://ceph.com/ ) storage project with the enterprise engineering and support of SUSE. SUSE Enterprise Storage 6 provides IT organizations with the ability to deploy a distributed storage architecture that can support a number of use cases using commodity hardware platforms.

This guide helps you understand the concept of SUSE Enterprise Storage 6, with the main focus on managing and administrating the Ceph infrastructure. It also demonstrates how to use Ceph with other related solutions, such as OpenStack or KVM.

Many chapters in this manual contain links to additional documentation resources. These include additional documentation that is available on the system as well as documentation available on the Internet. For an overview of the documentation available for your product and the latest documentation updates, refer to https://documentation.suse.com .

1 Available Documentation

The following manuals are available for this product:

Book "Administration Guide": Describes various administration tasks that are typically performed after the installation. The guide also introduces steps to integrate Ceph with virtualization solutions such as libvirt, Xen, or KVM, and ways to access objects stored in the cluster via iSCSI and RADOS gateways.

Deployment Guide: Guides you through the installation steps of the Ceph cluster and all services related to Ceph. The guide also illustrates a basic Ceph cluster structure and provides you with related terminology.

HTML versions of the product manuals can be found in the installed system under /usr/share/doc/manual . Find the latest documentation updates at https://documentation.suse.com where you can download the manuals for your product in multiple formats.

2 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests: For services and support options available for your product, refer to http://www.suse.com/support/ . To report bugs for a product component, log in to the Novell Customer Center from http://www.suse.com/support/ and select My Support Service Request.

User Comments: We want to hear your comments and suggestions for this manual and the other documentation included with this product.
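The guides above mention accessing objects stored in the cluster via the RADOS layer and its gateways. As a small, hedged illustration of what that looks like from a client, the sketch below uses the python3-rados bindings that ship with Ceph; the pool name "mypool", the object name, and the presence of /etc/ceph/ceph.conf with a usable client keyring are illustrative assumptions, not values taken from this guide.

```python
# Minimal sketch: writing and reading back one object with python3-rados.
# Assumes a reachable cluster, /etc/ceph/ceph.conf, a client keyring, and
# an existing pool named "mypool" (all illustrative assumptions).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("mypool")      # pool name is hypothetical
    try:
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
        print("cluster stats:", cluster.get_cluster_stats())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The gateways covered later in the guide (Object Gateway, iSCSI Gateway, CephFS) are built on the same RADOS layer that these calls talk to.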
Recommended publications
  • Administració De Sistemes GNU Linux, Mòdul 4: Administració Local (GNU/Linux Systems Administration, Module 4: Local Administration)
    Administració local (Local Administration), Josep Jorba Esteve, PID_00238577, GNU FDL. Permission is granted to copy, distribute and modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation, with no invariant sections and no front-cover or back-cover texts. The terms of the license are available at http://www.gnu.org/licenses/fdl-1.3.html.
    Contents: Introduction 5 • 1. Basic tools for the administrator 7 • 1.1. Graphical tools and command lines 8 • 1.2. Standards documents 10 • 1.3. Online system documentation 13 • 1.4. Package management tools 15 • 1.4.1. TGZ packages 16 • 1.4.2. Fedora/Red Hat: RPM packages 19 • 1.4.3. Debian: DEB packages 24 • 1.4.4. New packaging formats: Snap and Flatpak 28 • 1.5. Generic administration tools 36 • 1.6. Other tools ...
  • Storage Administration Guide, SUSE Linux Enterprise Server 12 SP4
    SUSE Linux Enterprise Server 12 SP4 Storage Administration Guide. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA, https://documentation.suse.com
    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
    Contents: About This Guide xii • 1 Available Documentation xii • 2 Giving Feedback xiv • 3 Documentation Conventions xiv • 4 Product Life Cycle and Support xvi • Support Statement for SUSE Linux Enterprise Server xvii • Technology Previews xviii • I FILE SYSTEMS AND MOUNTING 1 • 1 Overview
  • USB Composite Gadget Using CONFIG-FS on DRA7xx Devices
    Application Report SPRACB5, September 2017. USB Composite Gadget Using CONFIG-FS on DRA7xx Devices. RaviB.
    ABSTRACT: This application note explains how to create a USB composite gadget, network control model (NCM) and abstract control model (ACM), from the user space using Linux® CONFIG-FS on the DRA7xx platform.
    Contents: 1 Introduction 2 • 2 USB Composite Gadget Using CONFIG-FS 3 • 3 Creating Composite Gadget From User Space 4 • 4 References 8
    List of Figures: 1 Block Diagram of USB Composite Gadget 3 • 2 Selection of CONFIGFS Through menuconfig 4 • 3 Select USB Configuration Through menuconfig 4 • 4 Composite Gadget Configuration Items as Files and Directories 5 • 5 VID, PID, and Manufacturer String Configuration 6 • 6 Kernel Logs Show Enumeration of USB Composite Gadget by Host 6 • 7 Ping
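As a rough illustration of the "configuration items as files and directories" idea from the figure list above, the following Python sketch walks the usual configfs gadget layout to define an NCM plus ACM composite gadget. The gadget name g1, the example VID/PID, the strings, and the UDC selection are assumptions for illustration; the exact values and sequence for DRA7xx are given in the application report itself. It would need to run as root on a kernel with libcomposite and configfs mounted at /sys/kernel/config.

```python
# Illustrative sketch of building an NCM + ACM composite gadget through
# configfs. Paths follow the standard usb_gadget layout; values are examples.
import os

GADGET = "/sys/kernel/config/usb_gadget/g1"   # gadget name is arbitrary

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Device descriptor and strings (VID/PID below are placeholders).
os.makedirs(f"{GADGET}/strings/0x409", exist_ok=True)
write(f"{GADGET}/idVendor", "0x1d6b")
write(f"{GADGET}/idProduct", "0x0104")
write(f"{GADGET}/strings/0x409/manufacturer", "Example Manufacturer")
write(f"{GADGET}/strings/0x409/product", "NCM+ACM Composite Gadget")

# One configuration, c.1, with its own string descriptor.
os.makedirs(f"{GADGET}/configs/c.1/strings/0x409", exist_ok=True)
write(f"{GADGET}/configs/c.1/strings/0x409/configuration", "NCM+ACM")

# Network (NCM) and serial (ACM) functions, linked into the configuration.
os.makedirs(f"{GADGET}/functions/ncm.usb0", exist_ok=True)
os.makedirs(f"{GADGET}/functions/acm.usb0", exist_ok=True)
os.symlink(f"{GADGET}/functions/ncm.usb0", f"{GADGET}/configs/c.1/ncm.usb0")
os.symlink(f"{GADGET}/functions/acm.usb0", f"{GADGET}/configs/c.1/acm.usb0")

# Binding the gadget to a UDC starts enumeration on the host side.
write(f"{GADGET}/UDC", os.listdir("/sys/class/udc")[0])
```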
  • HP StorageWorks Clustered File System Command Line Reference
    HP StorageWorks Clustered File System 3.0 Command Line reference guide. Part number: 392372-001. First edition: May 2005.
    Legal and notice information: © Copyright 1999-2005 PolyServe, Inc. Portions © 2005 Hewlett-Packard Development Company, L.P. Neither PolyServe, Inc. nor Hewlett-Packard Company makes any warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Neither PolyServe nor Hewlett-Packard shall be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material. This document contains proprietary information, which is protected by copyright. No part of this document may be photocopied, reproduced, or translated into another language without the prior written consent of Hewlett-Packard. The information is provided "as is" without warranty of any kind and is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Neither PolyServe nor HP shall be liable for technical or editorial errors or omissions contained herein. The software this document describes is PolyServe confidential and proprietary. PolyServe and the PolyServe logo are trademarks of PolyServe, Inc. PolyServe Matrix Server contains software covered by the following copyrights and subject to the licenses included in the file thirdpartylicense.pdf, which is included in the PolyServe Matrix Server distribution. Copyright © 1999-2004, The Apache Software Foundation. Copyright © 1992, 1993 Simmule Turner and Rich Salz.
  • Oracle® Linux 7 Managing File Systems
    Oracle® Linux 7 Managing File Systems F32760-07 August 2021 Oracle Legal Notices Copyright © 2020, 2021, Oracle and/or its affiliates. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract.
  • Unionfs: User- and Community-Oriented Development of a Unification File System
    Unionfs: User- and Community-Oriented Development of a Unification File System. David Quigley, Josef Sipek, Charles P. Wright, and Erez Zadok, Stony Brook University, {dquigley,jsipek,cwright,ezk}@cs.sunysb.edu
    Abstract: Unionfs is a stackable file system that virtually merges a set of directories (called branches) into a single logical view. Each branch is assigned a priority and may be either read-only or read-write. When the highest priority branch is writable, Unionfs provides copy-on-write semantics for read-only branches. These copy-on-write semantics have led to widespread use of Unionfs by LiveCD projects including Knoppix and SLAX. In this paper we describe our experiences distributing and maintaining an out-of-kernel module since November 2004. As of March 2006 Unionfs has been downloaded by over 6,700 unique users and is used by over two dozen other projects.
    If a file exists in multiple branches, the user sees only the copy in the higher-priority branch. Unionfs allows some branches to be read-only, but as long as the highest-priority branch is read-write, Unionfs uses copy-on-write semantics to provide an illusion that all branches are writable. This feature allows Live-CD developers to give their users a writable system based on read-only media.
    There are many uses for namespace unification. The two most common uses are Live-CDs and diskless/NFS-root clients. On Live-CDs, by definition, the data is stored on a read-only medium. However, it is very convenient for users to be able to modify the data. Unifying the read-only CD with a writable RAM
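The branch-priority lookup and copy-up behaviour described in this excerpt can be sketched in user-space Python. This is not the Unionfs kernel code, only an analogy of the semantics, assuming a list of branch directories ordered from highest to lowest priority with the top branch writable.

```python
# User-space analogy of union-mount semantics: lookups win in the
# highest-priority branch; writing a file that only exists on a lower
# (read-only) branch first copies it up to the writable top branch.
import os, shutil

class Union:
    def __init__(self, branches):
        # branches: list of (directory, writable), highest priority first
        self.branches = branches

    def lookup(self, relpath):
        for root, _writable in self.branches:
            candidate = os.path.join(root, relpath)
            if os.path.exists(candidate):
                return candidate              # first hit wins
        raise FileNotFoundError(relpath)

    def open_for_write(self, relpath):
        top_root, top_writable = self.branches[0]
        if not top_writable:
            raise PermissionError("highest-priority branch is read-only")
        target = os.path.join(top_root, relpath)
        if not os.path.exists(target):
            # copy-on-write: bring the lower branch's copy up before writing
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(self.lookup(relpath), target)
        return open(target, "r+b")
```

A union of a read-only CD branch and a writable RAM-disk branch, as in the LiveCD use case above, maps directly onto this structure.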
  • Shared File Systems: Determining the Best Choice for Your Distributed SAS® Foundation Applications Margaret Crevar, SAS Institute Inc., Cary, NC
    Paper SAS569-2017. Shared File Systems: Determining the Best Choice for your Distributed SAS® Foundation Applications. Margaret Crevar, SAS Institute Inc., Cary, NC
    ABSTRACT: If you are planning on deploying SAS® Grid Manager and SAS® Enterprise BI (or other distributed SAS® Foundation applications) with load balanced servers on multiple operating system instances, a shared file system is required. In order to determine the best shared file system choice for a given deployment, it is important to understand how the file system is used, the SAS® I/O workload characteristics performed on it, and the stressors that SAS Foundation applications produce on the file system. For the purposes of this paper, we use the term "shared file system" to mean both a clustered file system and shared file system, even though "shared" can denote a network file system and a distributed file system, not clustered.
    INTRODUCTION: This paper examines the shared file systems that are most commonly used with SAS and reviews their strengths and weaknesses.
    SAS GRID COMPUTING REQUIREMENTS FOR SHARED FILE SYSTEMS: Before we get into the reasons why a shared file system is needed for SAS® Grid Computing, let's briefly discuss the SAS I/O characteristics.
    GENERAL SAS I/O CHARACTERISTICS: SAS Foundation creates a high volume of predominately large-block, sequential access I/O, generally at block sizes of 64K, 128K, or 256K, and the interactions with data storage are significantly different from typical interactive applications and RDBMSs. Here are some major points to understand (more details about the bullets below can be found in this paper): SAS tends to perform large sequential Reads and Writes.
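As a rough, hypothetical illustration of the large-block sequential access pattern described above, this Python snippet writes and reads back a test file in 128 KB blocks and reports throughput. The file name, block size, and data volume are arbitrary test values, not SAS settings, and a real evaluation would run on the candidate shared file system itself.

```python
# Toy throughput probe for the I/O pattern described in the paper:
# large-block (128 KB), purely sequential writes followed by a sequential
# read-back. Numbers are illustrative defaults, not SAS requirements.
import os, time

BLOCK = 128 * 1024            # one of the block sizes the paper mentions
BLOCKS = 2048                 # 256 MB of test data
PATH = "seq_io_test.dat"      # place this on the file system under test

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(BLOCKS):
        f.write(buf)          # sequential, append-only writes
    f.flush()
    os.fsync(f.fileno())      # make sure data actually reaches the storage
write_s = time.perf_counter() - start

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(BLOCK):      # sequential read-back at the same block size
        pass
read_s = time.perf_counter() - start

mb = BLOCK * BLOCKS / (1024 * 1024)
print(f"write: {mb / write_s:.1f} MB/s, read: {mb / read_s:.1f} MB/s")
os.remove(PATH)
```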
  • ODROID-HC2: 3.5” High Powered Storage  February 1, 2018
    ODROID WiFi Access Point: Share Files Via Samba, February 1, 2018. How to set up an ODROID with a WiFi access point so that an ODROID's hard drive can be accessed and modified from another computer. This is primarily aimed at allowing access to images, videos, and log files on the ODROID.
    ODROID-HC2: 3.5" High powered storage, February 1, 2018. The ODROID-HC2 is an affordable mini PC and perfect solution for a network attached storage (NAS) server. This device's home cloud-server capabilities centralize data and enable users to share and stream multimedia files to phones, tablets, and other devices across a network. It is an ideal tool for many use
    Using SquashFS As A Read-Only Root File System, February 1, 2018. This guide describes the usage of SquashFS
    PiFace: Control and Display 2, February 1, 2018. For those who have the PiFace Control and Display 2, and want to make it compatible with the ODROID-C2
    Android Gaming: Data Wing, Space Frontier, and Retro Shooting – Pixel Space Shooter, February 1, 2018. Variations on a theme! Race, blast into space, and blast things into pieces that are racing towards us. The fun doesn't need to stop when you take a break from your projects. Our monthly pick on Android games.
    Linux Gaming: Saturn Games – Part 1, February 1, 2018. I think it's time we go into a bit more detail about Sega Saturn for the ODROID-XU3/XU4
    Gaming Console: Running Your Favorite Games On An ODROID-C2 Using Android, February 1, 2018. I built a gaming console using an ODROID-C2 running Android 6
    Controller Area Network (CAN) Bus: Implementation
  • Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques
    Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques. Viacheslav Dubeyko.
    Content: 1 Terminology and Abbreviations 4 • 2 Introduction 5 • 3 Comparative Analysis Methodology 5 • 4 File System Features Classification 5 • 4.1 Distributed File Systems 6 • 4.1.1 HDFS 6 • 4.1.2 GFS (Google File System) 7 • 4.1.3 InterMezzo 9 • 4.1.4 CodA 10 • 4.1.5 Ceph 12 • 4.1.6 DDFS ...
  • Deploying OFS Technology in the Wild: a Case Study
    13th Annual Workshop 2017: Deploying OFS Technology in the Wild, A Case Study. Susan Coulter / HPC-Design, Los Alamos National Laboratory, March 31, 2017. LA-UR-17-22449
    HOW THE STORY STARTS: LANL / CSCNSI summer school for Junior/Senior Computer Science majors • Project: Compare 100G Ethernet to IB EDR • Cluster built with IB FDR • Preliminary test compared FDR to EDR
    FIRST WRINKLE: LANL deployed Damselfly IB backbone • Only EDR systems in production • SM, slipknot cluster, redcap cluster • Most other systems FDR-connected • Built early with Mellanox-OFED • Replaced with TOSS (RedHat) bundled OFS • Tri-Lab Operating System Stack: TOSS2 -> RedHat6, TOSS3 -> RedHat7 • LANL upgrade schedule slower than LLNL upgrade schedule • LANL running version(s) LLNL has frozen (diagram labels: HUNTER, GARCIA, Trinity Lustre, Common Lustre, Mid/Long Term Archive IB EDR)
    WRINKLES WITHIN WRINKLES: Disk-ful / disk-less / configuration management • Install / test Mellanox OFED on TOSS standalone system: easy • Non-standard kernels use Mellanox script: easy • Cfengine controls cluster configuration • RPMs only, automation preferred except under extreme circumstances • Local updates repo (kernel RPMs and associated libraries) with newer version number • depmod -a and /etc/depmod.d/mlnx-ofa_kernel.conf • Hybrid images (RAM and NFS mount): necessary kernel modules need to be in RAM, rdma_cm requires configfs.ko • depmod overrides: override ib_uverbs * weak-updates/mlnx-ofa_kernel/drivers/infiniband/core, override ib_addr * weak-updates/mlnx-ofa_kernel/drivers/infiniband/core, override ib_umad * weak-updates/mlnx-ofa_kernel/drivers/infiniband/core, override ib_core * weak-updates/mlnx-ofa_kernel/drivers/infiniband/core
    SUCCESS: Campaign / Scality system upgraded • ~25% increase in performance • Uses lots of small messages
  • Interaction Between the User and Kernel Space in Linux
    Interaction Between the User and Kernel Space in Linux. Kai Lüke, Technische Universität Berlin.
    Abstract: System calls based on context switches from user to kernel space are the established concept for interaction in operating systems. On top of them the Linux kernel offers various paradigms for communication and management of resources and tasks. The principles and basic workings of system calls, interrupts, virtual system calls, special purpose virtual filesystems, process signals, shared memory, pipes, Unix or IP sockets and other IPC methods like the POSIX or System V message queue and Netlink are explained and related to each other in their differences. Because Linux is not a puristic project but home for many different concepts, only a mere overview is presented here with focus on system calls. Also comparative studies with other operating systems are out of scope.
    1 INTRODUCTION: Kernels in the Unix family are normally not initiating any actions with outside effects, but rely on requests from user space to perform these actions. [...] supported in POSIX as well as System V style. A very common principle for IPC are sockets, and pipes can be seen as their most simple case. Besides the popular IP family with TCP/UDP, the local Unix domain sockets play a big role for many applications, while Netlink sockets are specific to the Linux kernel and are not often found in user space applications due to portability to other operating systems. There have been attempts to bring the D-BUS IPC into the kernel with a Netlink implementation, kdbus and then Bus1, but this will not be covered here.
    2 KERNEL AND USER SPACE: The processes have virtualized access to the memory as well
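Of the IPC mechanisms the paper surveys, Unix domain sockets are simple to demonstrate from user space. The sketch below (an illustrative example, not code from the paper) forks a child process and exchanges a message over a socket pair, so each send and receive is a system call that crosses the user/kernel boundary.

```python
# Parent and child exchange a message over an AF_UNIX socket pair; every
# sendall()/recv() below ends up as a system call handled by the kernel.
import os, socket

parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

pid = os.fork()
if pid == 0:                          # child: echo whatever the parent sends
    parent_sock.close()
    data = child_sock.recv(1024)
    child_sock.sendall(b"echo: " + data)
    child_sock.close()
    os._exit(0)
else:                                 # parent: send, then read the echo back
    child_sock.close()
    parent_sock.sendall(b"hello from user space")
    print(parent_sock.recv(1024).decode())
    parent_sock.close()
    os.wait()
```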
  • Designing High-Performance and Scalable Clustered Network Attached Storage with Infiniband
    DESIGNING HIGH-PERFORMANCE AND SCALABLE CLUSTERED NETWORK ATTACHED STORAGE WITH INFINIBAND. Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University, by Ranjit Noronha, MS. The Ohio State University, 2008. Dissertation Committee: Dhabaleswar K. Panda (Adviser), Ponnuswammy Sadayappan, Feng Qin. Graduate Program in Computer Science and Engineering. © Copyright by Ranjit Noronha 2008.
    ABSTRACT: The Internet age has exponentially increased the volume of digital media that is being shared and distributed. Broadband Internet has made technologies such as high quality streaming video on demand possible. Large scale supercomputers also consume and create huge quantities of data. This media and data must be stored, cataloged and retrieved with high-performance. Researching high-performance storage subsystems to meet the I/O demands of applications in modern scenarios is crucial. Advances in microprocessor technology have given rise to relatively cheap off-the-shelf hardware that may be put together as personal computers as well as servers. The servers may be connected together by networking technology to create farms or clusters of workstations (COW). The evolution of COWs has significantly reduced the cost of ownership of high-performance clusters and has allowed users to build fairly large scale machines based on commodity server hardware. As COWs have evolved, networking technologies like InfiniBand and 10 Gigabit Ethernet have also evolved. These networking technologies not only give lower end-to-end latencies, but also allow for better messaging throughput between the nodes. This allows us to connect the clusters with high-performance interconnects at a relatively lower cost.