Storage Administration Guide: SUSE Linux Enterprise Server 12 SP4

Total Pages: 16

File Type: PDF, Size: 1020 KB

Storage Administration Guide
SUSE Linux Enterprise Server 12 SP4

Provides information about how to manage storage devices on a SUSE Linux Enterprise Server.

Publication Date: September 24, 2021

SUSE LLC
1800 South Novell Place
Provo, UT 84606
USA
https://documentation.suse.com

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.
Contents

About This Guide xii
1 Available Documentation xii
2 Giving Feedback xiv
3 Documentation Conventions xiv
4 Product Life Cycle and Support xvi
  Support Statement for SUSE Linux Enterprise Server xvii • Technology Previews xviii

I FILE SYSTEMS AND MOUNTING 1
1 Overview of File Systems in Linux 2
1.1 Terminology 3
1.2 Btrfs 3
  Key Features 4 • The Root File System Setup on SUSE Linux Enterprise Server 4 • Migration from Ext and ReiserFS File Systems to Btrfs 9 • Btrfs Administration 10 • Btrfs Quota Support for Subvolumes 10 • Btrfs send/receive 11 • Data Deduplication Support 15 • Deleting Subvolumes from the Root File System 16
1.3 XFS 17
  High Scalability by Using Allocation Groups 17 • High Performance through Efficient Management of Disk Space 17 • Preallocation to Avoid File System Fragmentation 18
1.4 Ext2 18
1.5 Ext3 19
  Easy and Highly Reliable Upgrades from Ext2 19 • Reliability and Performance 20 • Converting an Ext2 File System into Ext3 20 • Ext3 File System Inode Size and Number of Inodes 21
1.6 Ext4 25
1.7 ReiserFS 26
1.8 Other Supported File Systems 26
1.9 Large File Support in Linux 27
1.10 Linux Kernel Storage Limitations 29
1.11 Troubleshooting File Systems 29
  Btrfs Error: No space is left on device 30 • Freeing Unused File System Blocks 31 • Btrfs: Balancing Data across Devices 32 • No Defragmentation on SSDs 33
1.12 Additional Information 33
2 Resizing File Systems 35
2.1 Use Cases 35
2.2 Guidelines for Resizing 35
  File Systems that Support Resizing 36 • Increasing the Size of a File System 36 • Decreasing the Size of a File System 37
2.3 Changing the Size of a Btrfs File System 37
2.4 Changing the Size of an XFS File System 38
2.5 Changing the Size of an Ext2, Ext3, or Ext4 File System 39
2.6 Changing the Size of a Reiser File System 40
3 Using UUIDs to Mount Devices 41
3.1 Persistent Device Names with udev 41
3.2 Understanding UUIDs 41
3.3 Additional Information 42
4 Multi-tier Caching for Block Device Operations 43
4.1 General Terminology 43
4.2 Caching Modes 44
4.3 bcache 45
  Main Features 45 • Setting Up a bcache Device 45 • bcache Configuration Using sysfs 47
4.4 lvmcache 47
  Configuring lvmcache 47 • Removing a Cache Pool 49

II LOGICAL VOLUMES (LVM) 51
5 LVM Configuration 52
5.1 Understanding the Logical Volume Manager 52
5.2 Creating Volume Groups 54
5.3 Creating Logical Volumes 57
  Thinly Provisioned Logical Volumes 60 • Creating Mirrored Volumes 61
5.4 Automatically Activating Non-Root LVM Volume Groups 62
5.5 Resizing an Existing Volume Group 63
5.6 Resizing a Logical Volume 64
5.7 Deleting a Volume Group or a Logical Volume 66
5.8 Using LVM Commands 67
  Resizing a Logical Volume with Commands 70 • Dynamic Aggregation of LVM Metadata via lvmetad 72 • Using LVM Cache Volumes 73
5.9 Tagging LVM2 Storage Objects 74
  Using LVM2 Tags 74 • Requirements for Creating LVM2 Tags 75 • Command Line Tag Syntax 75 • Configuration File Syntax 76 • Using Tags for a Simple Activation Control in a Cluster 77 • Using Tags to Activate On Preferred Hosts in a Cluster 78
6 LVM Volume Snapshots 81
6.1 Understanding Volume Snapshots 81
6.2 Creating Linux Snapshots with LVM 83
6.3 Monitoring a Snapshot 83
6.4 Deleting Linux Snapshots 84
6.5 Using Snapshots for Virtual Machines on a Virtual Host 84
6.6 Merging a Snapshot with the Source Logical Volume to Revert Changes or Roll Back to a Previous State 86

III SOFTWARE RAID 89
7 Software RAID Configuration 90
7.1 Understanding RAID Levels 90
  RAID 0 90 • RAID 1 91 • RAID 2 and RAID 3 91 • RAID 4 91 • RAID 5 91 • RAID 6 92 • Nested and Complex RAID Levels 92
7.2 Soft RAID Configuration with YaST 92
  RAID Names 95
7.3 Troubleshooting Software RAIDs 96
  Recovery after Failing Disk is Back Again 96
7.4 For More Information 97
8 Configuring Software RAID for the Root Partition 98
8.1 Prerequisites for Using a Software RAID Device for the Root Partition 98
8.2 Setting Up the System with a Software RAID Device for the Root (/) Partition 99
9 Creating Software RAID 10 Devices 103
9.1 Creating Nested RAID 10 Devices with mdadm 103
  Creating Nested RAID 10 (1+0) with mdadm 104 • Creating Nested RAID 10 (0+1) with mdadm 106
9.2 Creating a Complex RAID 10 108
  Number of Devices and Replicas in the Complex RAID 10 109 • Layout 110 • Creating a Complex RAID 10 with the YaST Partitioner 112 • Creating a Complex RAID 10 with mdadm 115
10 Creating a Degraded RAID Array 118
11 Resizing Software RAID Arrays with mdadm 120
11.1 Increasing the Size of a Software RAID 121
  Increasing the Size of Component Partitions 122 • Increasing the Size of the RAID Array 123 • Increasing the Size of the File System 124
11.2 Decreasing the Size of a Software RAID 125
  Decreasing the Size of the File System 125 • Decreasing the Size of the RAID Array 125 • Decreasing the Size of Component Partitions 126
12 Storage Enclosure LED Utilities for MD Software RAIDs 129
12.1 The Storage Enclosure LED Monitor Service 130
12.2 The Storage Enclosure LED Control Application 131
  Pattern Names 132 • List of Devices 135 • Examples 136
12.3 Additional Information 136

IV NETWORK STORAGE 137
13 iSNS for Linux 138
13.1 How iSNS Works 138
13.2 Installing iSNS Server for Linux 140
13.3 Configuring iSNS Discovery Domains 142
  Creating iSNS Discovery Domains 142 • Adding iSCSI Nodes to a Discovery Domain 143
13.4 Starting the iSNS Service 145
13.5 For More Information 145
14 Mass Storage over IP Networks: iSCSI 146
14.1 Installing the iSCSI LIO Target Server and iSCSI Initiator 147
14.2 Setting Up an iSCSI LIO Target Server 148
  iSCSI LIO Target Service Start-up and Firewall Settings 148 • Configuring Authentication for Discovery of iSCSI LIO Targets and Initiators 149 • Preparing the Storage Space 151 • Setting Up an iSCSI LIO Target Group 152 • Modifying an iSCSI LIO Target Group 156 • Deleting an iSCSI LIO Target Group 156
14.3 Configuring iSCSI Initiator 157
  Using YaST for the iSCSI Initiator Configuration 157 • Setting Up the iSCSI Initiator Manually 160 • The iSCSI Initiator Databases 161
14.4 Using iSCSI Disks when Installing 163
14.5 Troubleshooting iSCSI 163
  Portal Error When Setting Up Target LUNs on an iSCSI LIO Target Server 163 • iSCSI LIO Targets Are Not Visible from Other Computers 164 • Data Packets Dropped for iSCSI Traffic 164 • Using iSCSI Volumes with LVM 164 • iSCSI Targets Are Mounted When the Configuration File Is Set to Manual 165
14.6 iSCSI LIO Target Terminology 165
14.7 Additional Information 167
15 Fibre Channel Storage over Ethernet Networks: FCoE 168
15.1 Configuring FCoE Interfaces during the Installation 169
15.2 Installing FCoE and the YaST FCoE Client 170
15.3 Managing FCoE Services with YaST 171
15.4 Configuring FCoE with Commands 174
15.5 Managing FCoE Instances with the FCoE Administration Tool 176
15.6 Additional Information 178
16 NVMe over Fabric 179
16.1 Overview 179
16.2 Setting Up an NVMe over Fabric Host 179
  Installing Command Line Client 179 • Discovering NVMe over Fabric Targets 180 • Connecting to NVMe over Fabric Targets 180 • Multipathing 181
16.3 Setting Up an NVMe over Fabric Target 181
  Installing Command Line Client 181 • Configuration Steps 181 • Back Up and Restore Target Configuration 183
16.4 Special Hardware Configuration 184
  Overview 184 • Broadcom 184 • Marvell 184
16.5 More Information 185
17 Managing Multipath I/O for Devices 186
17.1 Understanding Multipath I/O 186
17.2 Hardware Support 186
  Storage Arrays That Are Automatically Detected for Multipathing 187 • Tested Storage Arrays for Multipathing Support 189 • Storage Arrays that Require Specific Hardware Handlers 190
17.3 Planning for Multipathing 190
  Prerequisites 190 • Disk Management Tasks 191 • Software RAIDs 191 • High-Availability Solutions 192 • Always Keep the initrd in Synchronization with the System Configuration 192
17.4 Multipath Management Tools 192
  Device Mapper Multipath Module 193 • Multipath I/O Management Tools 195 • Using MDADM for Multipathed Devices 196 • The multipath Command 196 • The mpathpersist Utility 199
17.5 Configuring the System for Multipathing 200
  Enabling, Disabling, Starting and Stopping Multipath I/O Services 200 • Preparing SAN Devices
Recommended publications
  • MLNX OFED Documentation Rev 5.0-2.1.8.0
    MLNX_OFED Documentation Rev 5.0-2.1.8.0 Exported on May/21/2020 06:13 AM https://docs.mellanox.com/x/JLV-AQ Notice This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality. NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice. Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete. NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document. NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage.
  • Administration of GNU/Linux Systems, Module 4: Local Administration
    Local Administration. Josep Jorba Esteve. PID_00238577. GNUFDL • PID_00238577. Permission is granted to copy, distribute and modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation, with no invariant sections and no front- or back-cover texts. The license terms are available at http://www.gnu.org/licenses/fdl-1.3.html.

    Contents:
    Introduction 5
    1. Basic tools for the administrator 7
    1.1. Graphical tools and command lines 8
    1.2. Standards documents 10
    1.3. Online system documentation 13
    1.4. Package management tools 15
    1.4.1. TGZ packages 16
    1.4.2. Fedora/Red Hat: RPM packages 19
    1.4.3. Debian: DEB packages 24
    1.4.4. New packaging formats: Snap and Flatpak 28
    1.5. Generic administration tools 36
    1.6. Other tools
  • Security Assurance Requirements for Linux Application Container Deployments
    NISTIR 8176: Security Assurance Requirements for Linux Application Container Deployments. Ramaswamy Chandramouli, Computer Security Division, Information Technology Laboratory. This publication is available free of charge from: https://doi.org/10.6028/NIST.IR.8176. October 2017. U.S. Department of Commerce, Wilbur L. Ross, Jr., Secretary. National Institute of Standards and Technology, Walter Copan, NIST Director and Under Secretary of Commerce for Standards and Technology. National Institute of Standards and Technology Internal Report 8176, 37 pages (October 2017).

    Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose. There may be references in this publication to other publications currently under development by NIST in accordance with its assigned statutory responsibilities. The information in this publication, including concepts and methodologies, may be used by federal agencies even before the completion of such companion publications. Thus, until each publication is completed, current requirements, guidelines, and procedures, where they exist, remain operative. For planning and transition purposes, federal agencies may wish to closely follow the development of these new publications by NIST.
  • System Calls
    System calls. We will investigate several issues related to system calls. Read chapter 12 of the book. Linux system call categories: file management, process management, error handling. Note that these categories are loosely defined and much is left out, e.g. communication. Why?

    File management system call hierarchy: you may not see some topics as part of "file management", e.g., sockets. Process management system call hierarchy. Error handling hierarchy.

    Error handling. Anything can fail! System calls are no exception: try to read a file that does not exist! Error number: errno. Every process contains a global variable errno. errno is set to 0 when the process is created; when an error occurs, errno is set to a specific code associated with the error cause. Trying to open a file that does not exist sets errno to 2.

    Error constants are defined in errno.h. Here are the first few from errno.h on OS X 10.6.4:

    #define EPERM   1  /* Operation not permitted */
    #define ENOENT  2  /* No such file or directory */
    #define ESRCH   3  /* No such process */
    #define EINTR   4  /* Interrupted system call */
    #define EIO     5  /* Input/output error */
    #define ENXIO   6  /* Device not configured */
    #define E2BIG   7  /* Argument list too long */
    #define ENOEXEC 8  /* Exec format error */
    #define EBADF   9  /* Bad file descriptor */
    #define ECHILD  10 /* No child processes */
    #define EDEADLK 11 /* Resource deadlock avoided */

    A common mistake when displaying errno, from the Linux errno man page. Description of the perror() system call.
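The errno behavior described above can be sketched in Python, whose standard errno module mirrors the constants from errno.h; this is a minimal Python analogue of the lecture's C examples (the path /no/such/file is illustrative):

```python
import errno
import os

# Trying to open a file that does not exist raises OSError,
# carrying the errno code the failed system call set.
try:
    open("/no/such/file")
except OSError as e:
    code = e.errno

print(code)                  # 2 on Linux (ENOENT)
print(code == errno.ENOENT)  # True
print(os.strerror(code))     # the message perror() would print
```

os.strerror() returns the same text that the C perror() function writes to stderr for a given code.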
  • For Immediate Release
    SerialTek, an Ellisys company. FOR IMMEDIATE RELEASE. Contact: Simon Thomas, Director, Sales and Marketing. Phone: +1-720-204-2140. Email: [email protected]

    SerialTek Debuts PCIe x16 Gen5 Protocol Analysis System and Web Application. New Kodiak™ Platform Brings SerialTek Advantages to More Computing and Data Storage Markets.

    Longmont, CO, USA, February 24, 2021. SerialTek, a leading provider of protocol test solutions for PCI Express®, NVM Express®, Serial Attached SCSI, and Serial ATA, today introduced an advancement in the PCIe® test and analysis market with the release of the Kodiak PCIe x16 Gen5 Analysis System, as well as the industry's first calibration-free PCIe x16 add-in-card (AIC) interposer and a new web-based BusXpert™ user interface to manage and analyze traces more efficiently than ever. The addition of PCIe x16 Gen5 to the Kodiak analyzer and SI-Fi™ interposer family brings previously unavailable analysis capabilities and efficiencies to computing, datacenter, networking, storage, AI, and other PCIe x16 Gen5 applications. With SerialTek's proven calibration-free SI-Fi interposer technology, the Kodiak's innovative state-of-the-art design, and the new BusXpert analyzer software, users can more easily set up the analyzer hardware, more accurately probe PCIe signals, and more efficiently analyze traces.

    Kodiak PCIe x16 Gen5 Analysis System: At the core of the Kodiak PCIe x16 analyzer is an embedded hardware architecture that delivers substantial and unparalleled advancements in capture, search, and processing acceleration. Interface responsiveness is markedly advanced, searches involving massive amounts of data are fast, and hardware filtering is flexible and powerful. "Once installed in a customer test environment the Kodiak's features and benefits are immediately obvious," said Paul Mutschler, CEO of SerialTek.
  • Course Outline & Schedule
    Course Outline & Schedule. Call US 408-759-5074 or UK +44 20 7620 0033. SUSE Linux Advanced System Administration. Curriculum: Linux. Course Code: SLASA. Duration: 5 Days. Course Price: $2,425.

    Course Description: This instructor-led SUSE Linux Advanced System Administration training course is designed to teach the advanced administration, security, networking and performance tasks required on a SUSE Linux Enterprise system. Targeted to closely follow the official LPI curriculum (generic Linux), this course together with the SUSE Linux System Administration course will enable the delegate to work towards achieving the LPIC-2 qualification. Exercises and examples are used throughout the course to give practical hands-on experience with the techniques covered.

    Objectives: The delegate will learn and acquire skills as follows:
    - Perform administrative tasks with supplied tools such as YaST
    - Advanced network configuration
    - Network troubleshooting and analysing packets
    - Creating Apache virtual hosts and hosting user web content
    - Sharing Windows and Linux resources with SAMBA
    - Configuring a DNS server and configuring DNS logging
    - Configuring a DHCP server and client
    - Sharing Linux network resources with NFS
    - Creating Unit Files
    - Configuring AutoFS direct and indirect maps
    - Configuring a secure FTP server
    - Configuring a SQUID proxy server
    - Creating Btrfs subvolumes and snapshots
    - Backing-up and restoring XFS filesystems
    - Configuring LVM and managing Logical Volumes
    - Managing software RAID
    - Centralised storage with iSCSI
    - Monitoring disk status and reliability with SMART
    Perpetual
  • How SUSE Helps Pave the Road to S/4HANA
    White Paper How SUSE Helps Pave the Road to S/4HANA Sponsored by: SUSE and SAP Peter Rutten Henry D. Morris August 2017 IDC OPINION According to SAP, there are now 6,800 SAP customers that have chosen S/4HANA. In addition, many more of the software company's customers still have to make the decision to move to S/4HANA. These customers have to determine whether they are ready to migrate their database for mission-critical workloads to a new database (SAP HANA) and a new application suite (S/4HANA). Furthermore, SAP HANA runs on Linux only, an operating environment that SAP customers may not have used extensively before. This white paper discusses the choices SAP customers have for migrating to S/4HANA on Linux and looks at how SUSE helps customers along the way. SITUATION OVERVIEW Among SAP customers, migration from traditional databases to the SAP HANA in-memory platform is continuing steadily. IDC's Data Analytics Infrastructure (SAP) Survey, November 2016 (n = 300) found that 49.3% of SAP customers in North America are running SAP HANA today. Among those that are not on SAP HANA, 29.9% will be in two to four years and 27.1% in four to six years. The slower adopters say that they don't feel rushed, stating that there's time until 2025 to make the decision, and that their current databases are performing adequately. While there is no specific order that SAP customers need to follow to get to S/4HANA, historically businesses have often migrated to SAP Business Warehouse (BW) on SAP HANA first as BW is a rewarding starting point for SAP HANA for two reasons: there is an instant performance improvement and migration is relatively easy since BW typically does not contain core enterprise data that is business critical.
  • NVIDIA Magnum IO Gpudirect Storage
    NVIDIA Magnum IO GPUDirect Storage Installation and Troubleshooting Guide. TB-10112-001_v1.0.0 | August 2021

    Table of Contents:
    Chapter 1. Introduction 1
    Chapter 2. Installing GPUDirect Storage 2
      2.1. Before You Install GDS 2
      2.2. Installing GDS 3
        2.2.1. Removal of Prior GDS Installation on Ubuntu Systems 3
        2.2.2. Preparing the OS 3
        2.2.3. GDS Package Installation 4
        2.2.4. Verifying the Package Installation 4
        2.2.5. Verifying a Successful GDS Installation 5
      2.3. Installed GDS Libraries and Tools 6
      2.4. Uninstalling GPUDirect Storage 7
      2.5. Environment
  • File Handling in Python
    Chapter 2: File Handling in Python

    There are many ways of trying to understand programs. People often rely too much on one way, which is called "debugging" and consists of running a partly-understood program to see if it does what you expected. Another way, which ML advocates, is to install some means of understanding in the very programs themselves. — Robin Milner

    In this Chapter: Introduction to Files; Types of Files; Opening and Closing a Text File; Writing to a Text File; Reading from a Text File; Setting Offsets in a File; Creating and Traversing a Text File; The Pickle Module

    2.1 Introduction to Files

    We have so far created programs in Python that accept the input, manipulate it and display the output. But that output is available only during execution of the program and input is to be entered through the keyboard. This is because the variables used in a program have a lifetime that lasts till the time the program is under execution. What if we want to store the data that were input as well as the generated output permanently so that we can reuse it later? Usually, organisations would want to permanently store information about employees, inventory, sales, etc. to avoid repetitive tasks of entering the same data. Hence, data are stored permanently on secondary storage devices for reusability. We store Python programs written in script mode with a .py extension. Each program is stored on the secondary device as a file. Likewise, the data entered, and the output can be stored permanently into a file.
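The chapter's topics (text files, offsets with seek(), and the pickle module) can be sketched in a few lines; the file names demo.txt and demo.pkl and the sample record are illustrative:

```python
import os
import pickle

# Write text to a file so it persists beyond program execution.
with open("demo.txt", "w") as f:
    f.write("stored permanently\n")

# Read it back, demonstrating offsets with seek().
with open("demo.txt", "r") as f:
    first = f.read(6)   # read the first 6 characters: "stored"
    f.seek(0)           # move the offset back to the beginning
    text = f.read()     # read the whole file

# Persist a Python object with the pickle module and load it again.
record = {"name": "inventory", "count": 3}
with open("demo.pkl", "wb") as f:
    pickle.dump(record, f)
with open("demo.pkl", "rb") as f:
    restored = pickle.load(f)

# Clean up the demo files.
os.remove("demo.txt")
os.remove("demo.pkl")
```

Text files round-trip strings; pickle round-trips arbitrary Python objects in a binary format, which is why its files are opened in "wb"/"rb" mode.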
  • USB Composite Gadget Using CONFIG-FS on Dra7xx Devices
    Application Report SPRACB5, September 2017. USB Composite Gadget Using CONFIG-FS on DRA7xx Devices. RaviB

    ABSTRACT: This application note explains how to create a USB composite gadget, network control model (NCM) and abstract control model (ACM), from the user space using Linux® CONFIG-FS on the DRA7xx platform.

    Contents:
    1 Introduction 2
    2 USB Composite Gadget Using CONFIG-FS 3
    3 Creating Composite Gadget From User Space 4
    4 References 8

    List of Figures:
    1 Block Diagram of USB Composite Gadget 3
    2 Selection of CONFIGFS Through menuconfig 4
    3 Select USB Configuration Through menuconfig 4
    4 Composite Gadget Configuration Items as Files and Directories 5
    5 VID, PID, and Manufacturer String Configuration 6
    6 Kernel Logs Show Enumeration of USB Composite Gadget by Host 6
    7 Ping
  • Z/OS Distributed File Service Zseries File System Implementation Z/OS V1R13
    Front cover: z/OS Distributed File Service zSeries File System Implementation z/OS V1R13. Defining and installing a zSeries file system; performing backup and recovery, sysplex sharing; migrating from HFS to zFS. Paul Rogers, Robert Hering. ibm.com/redbooks. International Technical Support Organization. October 2012. SG24-6580-05.

    Note: Before using this information and the product it supports, read the information in "Notices" on page xiii.

    Sixth Edition (October 2012). This edition applies to version 1 release 13 modification 0 of IBM z/OS (product number 5694-A01) and to all subsequent releases and modifications until otherwise indicated in new editions. © Copyright International Business Machines Corporation 2010, 2012. All rights reserved. Note to U.S. Government Users Restricted Rights: use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

    Contents:
    Notices xiii
    Trademarks xiv
    Preface xv
    The team who wrote this book xv
    Now you can become a published author, too! xvi
    Comments welcome xvi
    Stay connected to IBM Redbooks xvi
    Chapter 1. zFS file systems 1
    1.1 zSeries File System introduction 2
    1.2 Application programming interfaces 2
    1.3 zFS physical file system 3
    1.4 zFS colony address space 4
    1.5 zFS supports z/OS UNIX ACLs 4
    1.6 zFS file system aggregates 5
      1.6.1 Compatibility mode aggregates 5
      1.6.2 Multifile system aggregates 6
    1.7 Metadata cache 7
    1.8 zFS file system clones 7
      1.8.1 Backup file system 8
    1.9 zFS log files.
  • Netapp Cloud Volumes ONTAP Simple and Fast Data Management in the Cloud
    Datasheet: NetApp Cloud Volumes ONTAP. Simple and fast data management in the cloud.

    Key Benefits:
    - NetApp® Cloud Volumes ONTAP® software lets you control your public cloud storage resources with industry-leading data management.
    - Multiple storage consumption models provide the flexibility that allows you to use just what you need, when you need it.
    - Rapid point-and-click deployment from NetApp Cloud Manager enables you to deploy advanced data management systems on your choice of cloud in minutes.

    The Challenge:
    In today's IT ecosystem, the cloud has become synonymous with flexibility and efficiency. When you deploy new services or run applications with varying usage needs, the cloud provides a level of infrastructure flexibility that allows you to pay for what you need, when you need it. With virtual machines, the cloud has become a go-to deployment model for applications that have unpredictable cycles or variable usage patterns and need to be spun up or spun down on demand.

    Applications with fixed usage patterns often continue to be deployed in a more traditional fashion because of the economics in on-premises data centers. This situation creates a hybrid cloud environment, employing the model that best fits each application. In this hybrid cloud environment, data is at the center. It is the only thing of lasting value. It is the thing that needs to be shared and integrated across the hybrid cloud to deliver business value. It is the thing that needs to be secured, protected, and managed. You need to control what happens to your data no matter where it is.