Storage Administration Guide: SUSE Linux Enterprise Server 15 SP2

File type: PDF, size: 1,020 KB, 16 pages

Provides information about how to manage storage devices on a SUSE Linux Enterprise Server.

Publication Date: September 24, 2021

SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA
https://documentation.suse.com

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

Contents

About This Guide
  1 Available Documentation
  2 Giving Feedback
  3 Documentation Conventions
  4 Product Life Cycle and Support
    Support Statement for SUSE Linux Enterprise Server • Technology Previews

I FILE SYSTEMS AND MOUNTING

1 Overview of File Systems in Linux
  1.1 Terminology
  1.2 Btrfs
    Key Features • The Root File System Setup on SUSE Linux Enterprise Server • Migration from ReiserFS and Ext File Systems to Btrfs • Btrfs Administration • Btrfs Quota Support for Subvolumes • Swapping on Btrfs • Btrfs send/receive • Data Deduplication Support • Deleting Subvolumes from the Root File System
  1.3 XFS
    High Scalability by Using Allocation Groups • High Performance through Efficient Management of Disk Space • Preallocation to Avoid File System Fragmentation
  1.4 Ext2
  1.5 Ext3
    Easy and Highly Reliable Upgrades from Ext2 • Reliability and Performance • Converting an Ext2 File System into Ext3 • Ext3 File System Inode Size and Number of Inodes
  1.6 Ext4
  1.7 ReiserFS
  1.8 Other Supported File Systems
  1.9 Large File Support in Linux
  1.10 Linux Kernel Storage Limitations
  1.11 Troubleshooting File Systems
    Btrfs Error: No space is left on device • Freeing Unused File System Blocks • Btrfs: Balancing Data across Devices • No Defragmentation on SSDs
  1.12 Additional Information

2 Resizing File Systems
  2.1 Use Cases
  2.2 Guidelines for Resizing
    File Systems that Support Resizing • Increasing the Size of a File System • Decreasing the Size of a File System
  2.3 Changing the Size of a Btrfs File System
  2.4 Changing the Size of an XFS File System
  2.5 Changing the Size of an Ext2, Ext3, or Ext4 File System

3 Using UUIDs to Mount Devices
  3.1 Persistent Device Names with udev
  3.2 Understanding UUIDs
  3.3 Additional Information

4 Multi-tier Caching for Block Device Operations
  4.1 General Terminology
  4.2 Caching Modes
  4.3 bcache
    Main Features • Setting Up a bcache Device • bcache Configuration Using sysfs
  4.4 lvmcache
    Configuring lvmcache • Removing a Cache Pool

II LOGICAL VOLUMES (LVM)

5 LVM Configuration
  5.1 Understanding the Logical Volume Manager
  5.2 Creating Volume Groups
  5.3 Creating Logical Volumes
    Thinly Provisioned Logical Volumes • Creating Mirrored Volumes
  5.4 Automatically Activating Non-Root LVM Volume Groups
  5.5 Resizing an Existing Volume Group
  5.6 Resizing a Logical Volume
  5.7 Deleting a Volume Group or a Logical Volume
  5.8 Using LVM Commands
    Resizing a Logical Volume with Commands • Using LVM Cache Volumes
  5.9 Tagging LVM2 Storage Objects
    Using LVM2 Tags • Requirements for Creating LVM2 Tags • Command Line Tag Syntax • Configuration File Syntax • Using Tags for a Simple Activation Control in a Cluster • Using Tags to Activate On Preferred Hosts in a Cluster

6 LVM Volume Snapshots
  6.1 Understanding Volume Snapshots
  6.2 Creating Linux Snapshots with LVM
  6.3 Monitoring a Snapshot
  6.4 Deleting Linux Snapshots
  6.5 Using Snapshots for Virtual Machines on a Virtual Host
  6.6 Merging a Snapshot with the Source Logical Volume to Revert Changes or Roll Back to a Previous State

III SOFTWARE RAID

7 Software RAID Configuration
  7.1 Understanding RAID Levels
    RAID 0 • RAID 1 • RAID 2 and RAID 3 • RAID 4 • RAID 5 • RAID 6 • Nested and Complex RAID Levels
  7.2 Soft RAID Configuration with YaST
    RAID Names
  7.3 Monitoring software RAIDs
  7.4 For More Information

8 Configuring Software RAID for the Root Partition
  8.1 Prerequisites for Using a Software RAID Device for the Root Partition
  8.2 Setting Up the System with a Software RAID Device for the Root (/) Partition

9 Creating Software RAID 10 Devices
  9.1 Creating Nested RAID 10 Devices with mdadm
    Creating Nested RAID 10 (1+0) with mdadm • Creating Nested RAID 10 (0+1) with mdadm
  9.2 Creating a Complex RAID 10
    Number of Devices and Replicas in the Complex RAID 10 • Layout • Creating a Complex RAID 10 with the YaST Partitioner • Creating a Complex RAID 10 with mdadm

10 Creating a Degraded RAID Array

11 Resizing Software RAID Arrays with mdadm
  11.1 Increasing the Size of a Software RAID
    Increasing the Size of Component Partitions • Increasing the Size of the RAID Array • Increasing the Size of the File System
  11.2 Decreasing the Size of a Software RAID
    Decreasing the Size of the File System • Decreasing the Size of the RAID Array • Decreasing the Size of Component Partitions

12 Storage Enclosure LED Utilities for MD Software RAIDs
  12.1 The Storage Enclosure LED Monitor Service
  12.2 The Storage Enclosure LED Control Application
    Pattern Names • List of Devices • Examples
  12.3 Additional Information

13 Troubleshooting software RAIDs
  13.1 Recovery after Failing Disk is Back Again

IV NETWORK STORAGE

14 iSNS for Linux
  14.1 How iSNS Works
  14.2 Installing iSNS Server for Linux
  14.3 Configuring iSNS Discovery Domains
    Creating iSNS Discovery Domains • Adding iSCSI Nodes to a Discovery Domain
  14.4 Starting the iSNS Service
  14.5 Further Information

15 Mass Storage over IP Networks: iSCSI
  15.1 Installing the iSCSI LIO Target Server and iSCSI Initiator
  15.2 Setting Up an iSCSI LIO Target Server
    iSCSI LIO Target Service Start-up and Firewall Settings • Configuring Authentication for Discovery of iSCSI LIO Targets and Initiators • Preparing the Storage Space • Setting Up an iSCSI LIO Target Group • Modifying an iSCSI LIO Target Group • Deleting an iSCSI LIO Target Group
  15.3 Configuring iSCSI Initiator
    Using YaST for the iSCSI Initiator Configuration • Setting Up the iSCSI Initiator Manually • The iSCSI Initiator Databases
  15.4 Setting Up Software Targets Using targetcli-fb
  15.5 Using iSCSI Disks when Installing
  15.6 Troubleshooting iSCSI
    Portal Error When Setting Up Target LUNs on an iSCSI LIO Target Server • iSCSI LIO Targets Are Not Visible from Other Computers • Data Packets Dropped for iSCSI Traffic • Using iSCSI Volumes with LVM • iSCSI Targets Are Mounted When the Configuration File Is Set to Manual
  15.7 iSCSI LIO Target Terminology
  15.8 Additional Information

16 Fibre Channel Storage over Ethernet Networks: FCoE
  16.1 Configuring FCoE Interfaces during the Installation
  16.2 Installing FCoE and the YaST FCoE Client
  16.3 Managing FCoE Services with YaST
  16.4 Configuring FCoE with Commands
  16.5 Managing FCoE Instances with the FCoE Administration Tool
  16.6 Additional Information

17 NVMe over Fabric
  17.1 Overview
  17.2 Setting Up an NVMe over Fabric Host
    Installing Command Line Client • Discovering NVMe over Fabric Targets • Connecting to NVMe over Fabric Targets • Multipathing
  17.3 Setting Up an NVMe over Fabric Target
    Installing Command Line Client • Configuration Steps • Back Up and Restore Target Configuration
  17.4 Special Hardware Configuration
    Overview • Broadcom • Marvell
  17.5 More Information

18 Managing Multipath I/O for Devices
  18.1 Understanding Multipath I/O
  18.2 Hardware Support
    Storage Arrays That Are Automatically Detected for Multipathing • Tested Storage Arrays for Multipathing Support • Storage Arrays that Require Specific Hardware Handlers
  18.3 Planning for Multipathing
    Prerequisites • Disk Management Tasks • Software RAIDs • High-Availability Solutions • Always Keep the initrd in Synchronization with the System Configuration
  18.4 Multipath Management Tools
    Device Mapper Multipath Module • Multipath I/O Management Tools • Using MDADM for Multipathed Devices • The multipath Command • The mpathpersist Utility
  18.5 Configuring the System for Multipathing
    Enabling, Disabling, Starting and Stopping
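
The chapters above are shell-and-YaST territory; as a small programmatic companion to the resizing and troubleshooting material, the following C sketch (illustrative only, not code from the guide; the path argument is arbitrary) reports the total and available space of the file system backing a path via statvfs(3):

    /* fsinfo.c: report total and available space of the file system
     * backing a path, the kind of figures the resizing chapters work
     * with. Illustrative sketch only; not taken from the SUSE guide. */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char *argv[])
    {
        const char *path = argc > 1 ? argv[1] : "/";
        struct statvfs vfs;

        if (statvfs(path, &vfs) != 0) {
            perror("statvfs");
            return 1;
        }

        /* f_frsize is the fundamental block size the counts refer to. */
        unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
        unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;

        printf("%s: %llu MiB total, %llu MiB available\n",
               path, total >> 20, avail >> 20);
        return 0;
    }

Compile with cc fsinfo.c -o fsinfo and point it at any mounted path; the numbers should agree with what df reports for the same file system.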
Recommended publications
  • The Kernel Report
    The kernel report (ELC 2012 edition). Jonathan Corbet, LWN.net, [email protected]

    The plan: look at a year's worth of kernel work, with an eye toward the future.

    Starting off 2011: 2.6.37 was released on January 4, 2011 (11,446 changes, 1,276 developers), bringing VFS scalability work (inode_lock removal), the block I/O bandwidth controller, PPTP support, basic pNFS support, and wakeup sources.

    What have we done since then? Since 2.6.37, five kernel releases have been made, 59,000 changes have been merged, 3,069 developers have contributed to the kernel, and 416 companies have supported kernel development.

    February: "As you can see in these posts, Ralink is sending patches for the upstream rt2x00 driver for their new chipsets, and not just dumping a huge, stand-alone tarball driver on the community, as they have done in the past. This shows a huge willingness to learn how to deal with the kernel community, and they should be strongly encouraged and praised for this major change in attitude." – Greg Kroah-Hartman, February 9

    Employer contributions, 2.6.38 to 3.2:
      Volunteers     13.9%    Wolfson Micro  1.7%
      Red Hat        10.9%    Samsung        1.6%
      Intel           7.3%    Google         1.6%
      unknown         6.9%    Oracle         1.5%
      Novell          4.0%    Microsoft      1.4%
      IBM             3.6%    AMD            1.3%
      TI              3.4%    Freescale      1.3%
      Broadcom        3.1%    Fujitsu        1.1%
      consultants     2.2%    Atheros        1.1%
      Nokia           1.8%    Wind River     1.0%

    Also in February: Red Hat stops releasing individual kernel patches.

    March: 2.6.38 was released on March 14, 2011 (9,577 changes from 1,198 developers), bringing per-session group scheduling, the dcache scalability patch set, transmit packet steering, transparent huge pages, and the hierarchical block I/O bandwidth controller. "Somebody needs to get a grip in the ARM community."
  • Administració De Sistemes GNU Linux Mòdul4 Administració
    Local Administration. Josep Jorba Esteve. PID_00238577. GNUFDL • PID_00238577

    Permission is granted to copy, distribute and modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation, with no invariant sections and no front-cover or back-cover texts. The license terms are available at http://www.gnu.org/licenses/fdl-1.3.html.

    Contents:
    Introduction
    1. Basic tools for the administrator
      1.1. Graphical tools and command lines
      1.2. Standards documents
      1.3. Online system documentation
      1.4. Package management tools
        1.4.1. TGZ packages
        1.4.2. Fedora/Red Hat: RPM packages
        1.4.3. Debian: DEB packages
        1.4.4. New packaging formats: Snap and Flatpak
      1.5. Generic administration tools
      1.6. Other tools
  • Storage Administration Guide: SUSE Linux Enterprise Server 12 SP4
    SUSE Linux Enterprise Server 12 SP4: Storage Administration Guide. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

    All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

    Contents: About This Guide • Available Documentation • Giving Feedback • Documentation Conventions • Product Life Cycle and Support • Support Statement for SUSE Linux Enterprise Server • Technology Previews • I File Systems and Mounting • 1 Overview
  • The Linux Kernel Module Programming Guide
    The Linux Kernel Module Programming Guide. Peter Jay Salzman, Michael Burian, Ori Pomerantz. Copyright © 2001 Peter Jay Salzman. 2007-05-18, ver 2.6.4.

    The Linux Kernel Module Programming Guide is a free book; you may reproduce and/or modify it under the terms of the Open Software License, version 1.1. You can obtain a copy of this license at http://opensource.org/licenses/osl.php. This book is distributed in the hope it will be useful, but without any warranty, without even the implied warranty of merchantability or fitness for a particular purpose.

    The author encourages wide distribution of this book for personal or commercial use, provided the above copyright notice remains intact and the method adheres to the provisions of the Open Software License. In summary, you may copy and distribute this book free of charge or for a profit. No explicit permission is required from the author for reproduction of this book in any medium, physical or electronic.

    Derivative works and translations of this document must be placed under the Open Software License, and the original copyright notice must remain intact. If you have contributed new material to this book, you must make the material and source code available for your revisions. Please make revisions and updates available directly to the document maintainer, Peter Jay Salzman <[email protected]>. This will allow for the merging of updates and provide consistent revisions to the Linux community.

    If you publish or distribute this book commercially, donations, royalties, and/or printed copies are greatly appreciated by the author and the Linux Documentation Project (LDP).
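
    The excerpt above is the book's licensing preamble; as a hint of the material the guide actually teaches, here is the classic minimal module such guides open with (a sketch assuming matching kernel headers are installed; not code taken from this excerpt):

        /* hello.c: minimal kernel module; logs one line on load and unload. */
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        MODULE_LICENSE("GPL");

        static int __init hello_init(void)
        {
            pr_info("hello: module loaded\n");
            return 0;
        }

        static void __exit hello_exit(void)
        {
            pr_info("hello: module unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);

    Built with the usual obj-m kernel Makefile, loaded with insmod hello.ko, and observed with dmesg.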
  • Security Assurance Requirements for Linux Application Container Deployments
    NISTIR 8176: Security Assurance Requirements for Linux Application Container Deployments. Ramaswamy Chandramouli, Computer Security Division, Information Technology Laboratory. This publication is available free of charge from: https://doi.org/10.6028/NIST.IR.8176. October 2017. U.S. Department of Commerce, Wilbur L. Ross, Jr., Secretary. National Institute of Standards and Technology, Walter Copan, NIST Director and Under Secretary of Commerce for Standards and Technology.

    National Institute of Standards and Technology Internal Report 8176, 37 pages (October 2017).

    Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose.

    There may be references in this publication to other publications currently under development by NIST in accordance with its assigned statutory responsibilities. The information in this publication, including concepts and methodologies, may be used by federal agencies even before the completion of such companion publications. Thus, until each publication is completed, current requirements, guidelines, and procedures, where they exist, remain operative. For planning and transition purposes, federal agencies may wish to closely follow the development of these new publications by NIST.
  • Rootless Containers with Podman and Fuse-Overlayfs
    CernVM Workshop 2019 (4th June 2019). Rootless containers with Podman and fuse-overlayfs. Giuseppe Scrivano, @gscrivano.

    Rootless containers: "Rootless containers refers to the ability for an unprivileged user (i.e. non-root user) to create, run and otherwise manage containers." (https://rootlesscontaine.rs/) This is not just about running the container payload as an unprivileged user; the container runtime itself also runs as an unprivileged user.

    Don't confuse with: sudo podman run --user foo (executes the process in the container as non-root, but Podman and the OCI runtime still run as root); the USER instruction in a Dockerfile (same as above, and notably you can't RUN dnf install ...); or podman run --uidmap (executes containers as a non-root user using user namespaces, which is most similar to rootless containers but still requires podman and runc to run as root).

    Motivation for rootless containers: to mitigate potential vulnerabilities in container runtimes; to allow users of shared machines (e.g. HPC) to run containers without the risk of breaking other users' environments; and to isolate nested containers.

    Caveat: not a panacea. Although rootless containers can mitigate these vulnerabilities, they are powerless against kernel (and hardware) vulnerabilities (CVE-2013-1858, CVE-2015-1328, CVE-2018-18955). Castle approach: they should be used in conjunction with other security layers such as seccomp and SELinux.

    Rootless Podman: Podman is a daemon-less alternative to Docker. $ alias
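
    The user-namespace mapping sketched with --uidmap above is the primitive rootless containers build on. The following C sketch (an illustration under stated assumptions, not Podman code; it assumes a kernel with unprivileged user namespaces enabled) shows an unprivileged process creating a user namespace and mapping its own UID to root inside it:

        /* userns.c: create a user namespace as an unprivileged user and map
         * UID 0 inside it to our real UID outside; the core trick behind
         * rootless containers. Assumes unprivileged user namespaces are
         * enabled on the running kernel. */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            uid_t outer = getuid();

            /* New user namespace; requires no privileges. */
            if (unshare(CLONE_NEWUSER) != 0) {
                perror("unshare(CLONE_NEWUSER)");
                return EXIT_FAILURE;
            }

            /* Map UID 0 in the namespace to our original UID outside it. */
            char map[64];
            int len = snprintf(map, sizeof(map), "0 %d 1", (int)outer);

            int fd = open("/proc/self/uid_map", O_WRONLY);
            if (fd < 0 || write(fd, map, len) != len) {
                perror("/proc/self/uid_map");
                return EXIT_FAILURE;
            }
            close(fd);

            /* We now appear as root inside while remaining unprivileged outside. */
            printf("uid inside namespace: %d (was %d outside)\n",
                   (int)getuid(), (int)outer);
            return EXIT_SUCCESS;
        }

    Run as a normal user, it reports uid 0 inside the namespace; real root privileges are never acquired, which is exactly the isolation rootless containers rely on.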
  • System Calls System Calls
    System calls. We will investigate several issues related to system calls. Read chapter 12 of the book.

    Linux system call categories: file management, process management, error handling. Note that these categories are loosely defined and much is left out, e.g. communication. Why?

    File management system call hierarchy: you may not see some topics as part of "file management", e.g., sockets. Process management system call hierarchy. Error handling hierarchy.

    Error handling: anything can fail! System calls are no exception. Try to read a file that does not exist! Error number: errno. Every process contains a global variable errno. errno is set to 0 when the process is created; when an error occurs, errno is set to a specific code associated with the error cause. Trying to open a file that does not exist sets errno to 2.

    Error constants are defined in errno.h. Here are the first few of errno.h on OS X 10.6.4:

        #define EPERM   1   /* Operation not permitted */
        #define ENOENT  2   /* No such file or directory */
        #define ESRCH   3   /* No such process */
        #define EINTR   4   /* Interrupted system call */
        #define EIO     5   /* Input/output error */
        #define ENXIO   6   /* Device not configured */
        #define E2BIG   7   /* Argument list too long */
        #define ENOEXEC 8   /* Exec format error */
        #define EBADF   9   /* Bad file descriptor */
        #define ECHILD  10  /* No child processes */
        #define EDEADLK 11  /* Resource deadlock avoided */

    A common mistake for displaying errno, from the Linux errno man page. Description of the perror() system call.
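
    A short C sketch of the errno pattern the notes describe (a minimal illustration; the path is arbitrary): attempt to open a file that does not exist, then report the error both numerically and with perror():

        /* errno_demo.c: fopen() a missing file and inspect errno. */
        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            FILE *f = fopen("/no/such/file", "r");
            if (f == NULL) {
                /* fopen failed; errno now holds the cause (ENOENT == 2). */
                printf("errno = %d (%s)\n", errno, strerror(errno));
                perror("fopen");  /* prints "fopen: No such file or directory" */
                return 1;
            }
            fclose(f);
            return 0;
        }

    This prints errno = 2, matching the ENOENT entry in the listing above.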
  • Study of File System Evolution
    Study of File System Evolution. Swaminathan Sundararaman, Sriram Subramanian. Department of Computer Science, University of Wisconsin. {swami, srirams}@cs.wisc.edu

    Abstract: File systems have traditionally been a major area of research and development. This is evident from the existence of over 50 file systems of varying popularity in the current version of the Linux kernel. They represent a complex subsystem of the kernel, with each file system employing different strategies for tackling various issues. Although there are many file systems in Linux, there has been no prior work (to the best of our knowledge) on understanding how file systems evolve. We believe that such information would be useful to the file system community, allowing developers to learn from previous experiences. This paper looks at six file systems (Ext2, Ext3, Ext4, JFS, ReiserFS, and XFS) from a historical perspective (between kernel versions 1.0 to 2.6) to get an insight on

    File systems are typically developed and maintained by several programmers across the globe. At any point in time, for a file system, there are three to six active developers and ten to fifteen patch contributors, but a single maintainer. These people communicate through individual file system mailing lists [14, 16, 18], submitting proposals for new features and enhancements, reporting bugs, and submitting and reviewing patches for known bugs. The problem with the open source development approach is that all communication is buried in the mailing list archives and isn't easily accessible to others. As a result, when new file systems are developed they do not leverage past experience and could end up re-inventing the wheel. To make things worse, people could end up making the same mistakes as were made in other file systems.
  • Studying the Real World Today's Topics
    Studying the real world. Today's topics: free and open source software (FOSS): what it is, who uses it, history; making the most of other people's software: learning from, using, and contributing; learning about your own system: using tools to understand software without source.

    Free and open source software: access to source code; free = freedom to use, modify, copy. Some potential benefits: can build for different platforms and needs; development driven by community; different perspectives and ideas; more people looking at the code for bugs/security issues. Structure: volunteers, sponsored by companies; generally anyone can propose ideas and submit code; different structures in charge of what features/code gets in.

    Tons of FOSS out there, nearly everything on myth: desktop applications (Firefox, Chromium, LibreOffice); programming tools (compilers, libraries, IDEs); servers (Apache web server, MySQL). Many companies contribute to FOSS: Android core, Apple Darwin, Microsoft .NET.

    A brief history of FOSS. 1960s: software distributed with hardware; source included, users could fix bugs. 1970s: start of software licensing; 1974: software is copyrightable; 1975: first license for UNIX sold. 1980s: popularity of closed-source software; software valued independent of hardware. Richard Stallman started the free software movement (1983) and the GNU project (GNU = GNU's Not Unix), an operating system with a unix-like interface, under the GNU General Public License. Free software: users have access to source, can modify and redistribute; must share modifications under same
  • Dm-X: Protecting Volume-Level Integrity for Cloud Volumes and Local
    dm-x: Protecting Volume-level Integrity for Cloud Volumes and Local Block Devices. Anrin Chakraborti (Stony Brook University), Bhushan Jain (Stony Brook University & UNC, Chapel Hill), Jan Kasiak (Stony Brook University), Tao Zhang (Stony Brook University & UNC, Chapel Hill), Donald Porter (Stony Brook University & UNC, Chapel Hill), Radu Sion (Stony Brook University).

    ABSTRACT: The verified boot feature in recent Android devices, which deploys dm-verity, has been overwhelmingly successful in eliminating the extremely popular Android smart phone rooting movement [25]. Unfortunately, dm-verity integrity guarantees are read-only and do not extend to writable volumes. This paper introduces a new device mapper, dm-x, that efficiently (fast) and reliably (metadata journaling) assures volume-level integrity for entire, writable volumes. In a direct disk setup, dm-x overheads are around 6-35% over ext4 on the raw volume while offering additional integrity guarantees. For cloud storage (Amazon EBS), dm-x overheads are negligible.

    ... on Amazon's Virtual Private Cloud (VPC) [2] (which provides strong security guarantees through network isolation) store data on untrusted storage devices residing in the public cloud. For example, Amazon VPC file systems, object stores, and databases reside on virtual block devices in the Amazon Elastic Block Storage (EBS). This allows numerous attack vectors for a malicious cloud provider/untrusted software running on the server. For instance, to ensure SEC-mandated assurances of end-to-end integrity, banks need to guarantee that the external cloud storage service is unable to remove log entries documenting financial transactions. Yet, without integrity of the storage device, a malicious cloud storage service could remove logs of a coordinated attack.
  • Course Outline & Schedule
    Course Outline & Schedule: SUSE Linux Advanced System Administration. Curriculum: Linux. Course code: SLASA. Duration: 5 days. Course price: $2,425.

    Course description: This instructor-led SUSE Linux Advanced System Administration training course is designed to teach the advanced administration, security, networking and performance tasks required on a SUSE Linux Enterprise system. Targeted to closely follow the official LPI curriculum (generic Linux), this course together with the SUSE Linux System Administration course will enable the delegate to work towards achieving the LPIC-2 qualification. Exercises and examples are used throughout the course to give practical hands-on experience with the techniques covered.

    Objectives. The delegate will learn and acquire skills as follows:
      Perform administrative tasks with supplied tools such as YaST
      Advanced network configuration
      Network troubleshooting and analysing packets
      Creating Apache virtual hosts and hosting user web content
      Sharing Windows and Linux resources with SAMBA
      Configuring a DNS server and configuring DNS logging
      Configuring a DHCP server and client
      Sharing Linux network resources with NFS
      Creating Unit Files
      Configuring AutoFS direct and indirect maps
      Configuring a secure FTP server
      Configuring a SQUID proxy server
      Creating Btrfs subvolumes and snapshots
      Backing-up and restoring XFS filesystems
      Configuring LVM and managing Logical Volumes
      Managing software RAID
      Centralised storage with iSCSI
      Monitoring disk status and reliability with SMART
      Perpetual
  • Fragmentation in Large Object Repositories
    Fragmentation in Large Object Repositories (Experience Paper). Russell Sears (University of California, Berkeley, [email protected]), Catharine van Ingen (Microsoft Research, [email protected]).

    ABSTRACT: Fragmentation leads to unpredictable and degraded application performance. While these problems have been studied in detail for desktop filesystem workloads, this study examines newer systems such as scalable object stores and multimedia repositories. Such systems use a get/put interface to store objects. In principle, databases and filesystems can support such applications efficiently, allowing system designers to focus on complexity, deployment cost and manageability. Although theoretical work proves that certain storage policies behave optimally for some workloads, these policies often behave poorly in practice. Most storage benchmarks focus on short-term behavior or do not measure fragmentation. We compare SQL Server to NTFS and find that fragmentation dominates performance when object sizes exceed 256KB-1MB. NTFS

    1. INTRODUCTION: Application data objects continue to increase in size. Furthermore, the increasing popularity of web services and other network applications means that systems that once managed static archives of "finished" objects now manage frequently modified versions of application data. Rather than updating these objects in place, typical archives either store multiple versions of the objects (the V of WebDAV stands for "versioning" [25]), or simply do wholesale replacement (as in SharePoint Team Services [19]). Similarly, applications such as personal video recorders and media subscription servers continuously allocate and delete large, transient objects. Applications store large objects as some combination of files in the filesystem and as BLOBs (binary large objects) in a database. Only folklore is available regarding the tradeoffs.