SLES 11 SP2: Storage Administration Guide About This Guide


SLES 11 SP2: Storage Administration Guide, About This Guide
www.suse.com/documentation

Storage Administration Guide
SUSE Linux Enterprise Server 11 SP2
September 28, 2012

Legal Notices
Copyright © 2006–2012 Novell, Inc. and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither Novell, Inc., SUSE LINUX Products GmbH, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.

Trademarks
For Novell trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html). Linux* is a registered trademark of Linus Torvalds. All other third party trademarks are the property of their respective owners. A trademark symbol (®, ™, etc.) denotes a Novell trademark; an asterisk (*) denotes a third party trademark.

Contents
About This Guide
1 Overview of File Systems in Linux
  1.1 Terminology
  1.2 Major File Systems in Linux
    1.2.1 Btrfs
    1.2.2 Ext2
    1.2.3 Ext3
    1.2.4 ReiserFS
    1.2.5 XFS
    1.2.6 Feature Comparison
  1.3 Other Supported File Systems
  1.4 Large File Support in Linux
  1.5 Managing Devices with the YaST2 Partitioner
  1.6 Additional Information
2 What’s New for Storage in SLES 11
  2.1 What’s New in SLES 11 SP2
  2.2 What’s New in SLES 11 SP1
    2.2.1 Saving iSCSI Target Information
    2.2.2 Modifying Authentication Parameters in the iSCSI Initiator
    2.2.3 Allowing Persistent Reservations for MPIO Devices
    2.2.4 MDADM 3.0.2
    2.2.5 Boot Loader Support for MDRAID External Metadata
    2.2.6 YaST Install and Boot Support for MDRAID External Metadata
    2.2.7 Improved Shutdown for MDRAID Arrays that Contain the Root File System
    2.2.8 MD over iSCSI Devices
    2.2.9 MD-SGPIO
    2.2.10 Resizing LVM 2 Mirrors
    2.2.11 Updating Storage Drivers for Adapters on IBM Servers
  2.3 What’s New in SLES 11
    2.3.1 EVMS2 Is Deprecated
    2.3.2 Ext3 as the Default File System
    2.3.3 JFS File System Is Deprecated
    2.3.4 OCFS2 File System Is in the High Availability Release
    2.3.5 /dev/disk/by-name Is Deprecated
    2.3.6 Device Name Persistence in the /dev/disk/by-id Directory
    2.3.7 Filters for Multipathed Devices
    2.3.8 User-Friendly Names for Multipathed Devices
    2.3.9 Advanced I/O Load-Balancing Options for Multipath
    2.3.10 Location Change for Multipath Tool Callouts
    2.3.11 Change from mpath to multipath for the mkinitrd -f Option
    2.3.12 Change from Multibus to Failover as the Default Setting for the MPIO Path Grouping Policy
3 Planning a Storage Solution
  3.1 Partitioning Devices
  3.2 Multipath Support
  3.3 Software RAID Support
  3.4 File System Snapshots
  3.5 Backup and Antivirus Support
    3.5.1 Open Source Backup
    3.5.2 Commercial Backup and Antivirus Support
4 LVM Configuration
  4.1 Understanding the Logical Volume Manager
  4.2 Creating LVM Partitions
  4.3 Creating Volume Groups
  4.4 Configuring Physical Volumes
  4.5 Configuring Logical Volumes
  4.6 Tagging LVM2 Storage Objects
    4.6.1 Using LVM2 Tags
    4.6.2 Requirements for Creating LVM2 Tags
    4.6.3 Command Line Tag Syntax
    4.6.4 Configuration File Syntax
    4.6.5 Using Tags for a Simple Activation Control in a Cluster
    4.6.6 Using Tags to Activate On Preferred Hosts in a Cluster
  4.7 Resizing a Volume Group
  4.8 Resizing a Logical Volume with YaST
  4.9 Resizing a Logical Volume with Commands
  4.10 Deleting a Volume Group
  4.11 Deleting an LVM Partition (Physical Volume)
5 Resizing File Systems
  5.1 Guidelines for…
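The excerpt above covers only the guide's front matter and table of contents. As a flavor of the command-line material the later chapters describe (for example, Section 4.9, "Resizing a Logical Volume with Commands", and Chapter 5, "Resizing File Systems"), here is a minimal sketch; the volume group and logical volume names are hypothetical, not taken from the guide:

    # Grow the logical volume "home" in volume group "vg0" by 10 GiB.
    # /dev/vg0/home is a made-up path; substitute your own names.
    lvextend -L +10G /dev/vg0/home

    # Then grow the ext3 file system to fill the enlarged volume;
    # resize2fs can typically do this online (mounted) on SLES 11.
    resize2fs /dev/vg0/home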
Recommended publications
  • Storage Administration Guide, SUSE Linux Enterprise Server 12 SP4
    SUSE Linux Enterprise Server 12 SP4 Storage Administration Guide

    Storage Administration Guide, SUSE Linux Enterprise Server 12 SP4. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606 USA. https://documentation.suse.com

    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”. For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

    Contents: About This Guide (Available Documentation; Giving Feedback; Documentation Conventions; Product Life Cycle and Support; Support Statement for SUSE Linux Enterprise Server; Technology Previews) • I FILE SYSTEMS AND MOUNTING • 1 Overview
  • Building a Beowulf Cluster
    Building a Beowulf cluster. Åsmund Ødegård, April 4, 2001

    1 Introduction

    The main part of the introduction is only contained in the slides for this session. Some of the acronyms and names in this paper may be unknown. In Appendix B we include short descriptions for some of them. Most of this is taken from “whatis” [6].

    2 Outline of the installation

    • Install Linux on a PC
    • Configure the PC to act as an install server for the cluster
    • Wire up the network if that isn’t done already
    • Install Linux on the rest of the nodes
    • Configure one PC, e.g. the install server, to be a server for your cluster

    These are the main steps required to build a Linux cluster, but each step can be done in many different ways. How you prefer to do it depends mainly on personal taste, though. Therefore, I will translate the given outline into this list:

    • Install Debian GNU/Linux on a PC
    • Install and configure “FAI” on the PC
    • Build the “FAI” boot floppy
    • Assemble hardware information, and finalize the “FAI” configuration
    • Boot each node with the boot floppy
    • Install and configure a queue system and software for running parallel jobs on your cluster

    3 Debian

    The choice of Linux distribution is most of all a matter of personal taste. I prefer the Debian distribution for various reasons. So, the first step in the cluster-building process is to pick one of the PCs as an install server, and install Debian onto it, as follows:

    • Make sure that the computer can boot from CD-ROM.
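    The outline above begins with plain shell work on the chosen install server. A minimal sketch of the Debian step, assuming a networked Debian system; the FAI package name is an assumption and has varied across releases (check apt-cache search fai on yours):

        # Refresh the package index, then install FAI on the install server.
        # "fai" is the historical package name; later releases split it
        # into fai-server and fai-client (assumption; verify on your release).
        apt-get update
        apt-get install fai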
  • Summer Student Project Report
    Summer Student Project Report. Dimitris Kalimeris, National and Kapodistrian University of Athens, June – September 2014

    Abstract. This report will outline two projects that were done as part of a three-month summer internship at CERN. In the first project we dealt with the Worldwide LHC Computing Grid (WLCG) and its information system. The information system currently conforms to a schema called GLUE and is evolving towards a new version: GLUE2. The aim of the project was to develop and adapt the current information system of the WLCG, used by the Large Scale Storage Systems at CERN (CASTOR and EOS), to the new GLUE2 schema. During the second project we investigated different RAID configurations so that we can get a performance boost from CERN’s disk systems in the future. RAID 1, which is currently in use, is not an option anymore because of limited performance and high cost. We tried to discover RAID configurations that will improve performance and simultaneously decrease the cost.

    1 Information-provider scripts for GLUE2

    1.1 Introduction

    The Worldwide LHC Computing Grid (WLCG, see also [1]) is an international collaboration consisting of a grid-based computer network infrastructure incorporating over 170 computing centres in 36 countries. It was originally designed by CERN to handle the large data volume produced by the Large Hadron Collider (LHC) experiments. This data is stored at CERN Storage Systems, which are responsible for keeping and making available more than 100 Petabytes (10⁵ Terabytes) of data to the physics community. The data is also replicated from CERN to the main computing centres within the WLCG.
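    The RAID comparison described above comes down to assembling candidate arrays and measuring them. A hedged sketch of how such a test array might be built with mdadm; the device names and the RAID 10 level are illustrative, not the report's actual test matrix:

        # Build a 4-disk RAID 10 array to compare against the RAID 1 baseline.
        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Crude sequential-read probe of the new array.
        dd if=/dev/md0 of=/dev/null bs=1M count=4096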
  • Dwarf's Guide to Debian GNU/Linux
    Dwarf’s Guide to Debian GNU/Linux. 2001, Dale Scheetz

    Dwarf’s Guide to Debian GNU/Linux, Copyright © 2001 Dale Scheetz. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with the Invariant Sections being Chapter 1 Introduction, with no Front-Cover Texts, and with the Back-Cover Texts being “The early development of the material in this work was produced with the financial support of Planet Linux. This support was instrumental in bringing this project to completion.” A copy of the license is included in the section entitled “Appendix 9: GNU Free Documentation License” which can be found on page 271.

    Trademark Acknowledgements. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. The publisher cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark. Apple and Macintosh are registered trademarks of Apple Computer, Inc. CP/M is a registered trademark of Caldera, Inc. IBM is a registered trademark of International Business Machines, Inc. MS is a trademark of Microsoft Corporation. Windows is a trademark of Microsoft Corporation. X Window System is a registered trademark of X Consortium, Inc.

    Dedicated to Linux users everywhere.

    CREDITS. First I want to thank Ian Murdock for writing the History section. His perspectives on those early years have helped latecomers like Dwarf understand the founding principles upon which Debian is based.
  • RAID Technology
    RAID Technology

    Reference and Sources:
    • Most of the text in this guide has been taken from a copyrighted document of Adaptec, Inc. (www.adaptec.com)
    • Perceptive Solutions, Inc.

    RAID stands for Redundant Array of Inexpensive (or sometimes "Independent") Disks. RAID is a method of combining several hard disk drives into one logical unit (two or more disks grouped together to appear as a single device to the host system). RAID technology was developed to address the fault-tolerance and performance limitations of conventional disk storage. It can offer fault tolerance and higher throughput levels than a single hard drive or group of independent hard drives. While arrays were once considered complex and relatively specialized storage solutions, today they are easy to use and essential for a broad spectrum of client/server applications.

    Redundant Arrays of Inexpensive Disks (RAID)

    "KILLS - BUGS - DEAD!" -- TV commercial for RAID bug spray

    There are many applications, particularly in a business environment, where there are needs beyond what can be fulfilled by a single hard disk, regardless of its size, performance or quality level. Many businesses can’t afford to have their systems go down for even an hour in the event of a disk failure; they need large storage subsystems with capacities in the terabytes; and they want to be able to insulate themselves from hardware failures to any extent possible. Some people working with multimedia files need fast data transfer exceeding what current drives can deliver, without spending a fortune on specialty drives. These situations require that the traditional "one hard disk per system" model be set aside and a new system employed.
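    On Linux, the "several drives presented as one logical unit" idea in the excerpt is exposed through the md software-RAID driver. A minimal sketch of a two-disk RAID 1 mirror, with hypothetical device names:

        # Mirror two disks into a single logical device; the host then
        # uses /dev/md0 exactly like one ordinary disk.
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext3 /dev/md0
        mount /dev/md0 /mnt/data

        # Watch the initial mirror synchronization.
        cat /proc/mdstat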
  • Partition.pdf
    Linux Partition HOWTO. Anthony Lissot

    Revision History
    • Revision 3.5, 26 Dec 2005: reorganized document page ordering; added page on setting up swap space; added page on partition labels; updated max swap size values in section 4; added instructions on making ext2/3 file systems; broken links identified by Richard Calmbach are fixed; created an XML version.
    • Revision 3.4.4, 08 March 2004: synchronized SGML version with HTML version; updated lilo placement and swap size discussion.
    • Revision 3.3, 04 April 2003: synchronized SGML and HTML versions.
    • Revision 3.3, 10 July 2001: corrected Section 6, calculation of cylinder numbers.
    • Revision 3.2, 1 September 2000: Dan Scott provides SGML conversion 2 Oct. 2000; rewrote Introduction; rewrote discussion on device names in Logical Devices; reorganized Partition Types; edited Partition Requirements; added Recovering a deleted partition table.
    • Revision 3.1, 12 June 2000: corrected swap size limitation in Partition Requirements; updated various links in Introduction; added submitted example in How to Partition with fdisk; added file system discussion in Partition Requirements.
    • Revision 3.0, 1 May 2000: first revision by Anthony Lissot, based on the Linux Partition HOWTO by Kristian Koehntopp.
    • Revision 2.4, 3 November 1997: last revision by Kristian Koehntopp.

    This Linux Mini-HOWTO teaches you how to plan and create partitions on IDE and SCSI hard drives. It discusses partitioning terminology and considers size and location issues. Use of the fdisk partitioning utility for creating and recovering partition tables is covered. The most recent version of this document is here. The Turkish translation is here.
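    Since the HOWTO centers on fdisk for creating and recovering partition tables, here is a hedged sketch of the two everyday operations (device and file names are hypothetical; sfdisk's dump format is a common way to make a table recoverable):

        # Show the current partition table.
        fdisk -l /dev/sda

        # Save the table in sfdisk's text format ...
        sfdisk -d /dev/sda > sda.parts

        # ... so an accidentally destroyed table can be written back later.
        sfdisk /dev/sda < sda.parts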
  • IBM Tivoli Storage Resource Manager: Messages Preface
    IBM Tivoli Storage Resource Manager Messages, Version 1 Release 2, SC32-9079-00

    Note! Before using this information and the product it supports, be sure to read the general information under “Notices”, on page 119.

    Second Edition (April 2003). This edition applies to Version 1 Release 2 of the IBM Tivoli Storage Resource Manager (product numbers 5698-SRM, 5698-SRC, and 5698-SRD) and to any subsequent releases until otherwise indicated in new editions. © Copyright International Business Machines Corporation 2003. All rights reserved. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

    Contents: Preface (IBM Tivoli Storage Resource Manager Publications; Related Publications; IBM International Technical Support Center Publications (Redbooks); IBM Tivoli Storage Resource Manager Web Site; Translations; Contacting customer support; Reporting a problem) • Chapter 1. Introduction (Understanding messages; Getting help; IBMLink Assistance; Describing an error with keywords) • Chapter 2. Messages • Appendix. Notices (Trademarks) • Glossary

    Preface. This publication contains explanations and suggested actions for messages issued
  • Red Hat Linux 7.3 the Official Red Hat Linux X86
    Red Hat Linux 7.3: The Official Red Hat Linux x86 Installation Guide. Copyright © 2002 by Red Hat, Inc.

    Red Hat, Inc., 1801 Varsity Drive, Raleigh NC 27606-2072 USA. Phone: +1 919 754 3700. Phone: 888 733 4281. Fax: +1 919 754 3701. PO Box 13588, Research Triangle Park NC 27709 USA. rhl-ig-x86(EN)-7.3-HTML-RHI (2002-04-05T13:43-0400)

    Copyright © 2002 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/). Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder. Distribution of the work or derivative of the work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from the copyright holder. The admonition graphics (note, tip, and so on) were created by Marianne Pecci <[email protected]>. They may be redistributed with written permission from Marianne Pecci and Red Hat, Inc.

    Red Hat, Red Hat Network, the Red Hat "Shadow Man" logo, RPM, Maximum RPM, the RPM logo, Linux Library, PowerTools, Linux Undercover, RHmember, RHmember More, Rough Cuts, Rawhide and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc. in the United States and other countries. Linux is a registered trademark of Linus Torvalds. Motif and UNIX are registered trademarks of The Open Group.
  • Storage Administration Guide, SUSE Linux Enterprise Server 15
    SUSE Linux Enterprise Server 15 Storage Administration Guide

    Storage Administration Guide, SUSE Linux Enterprise Server 15. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606 USA. https://documentation.suse.com

    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”. For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

    Contents: About This Guide (Available Documentation; Giving Feedback; Documentation Conventions; Product Life Cycle and Support; Support Statement for SUSE Linux Enterprise Server; Technology Previews) • I FILE SYSTEMS AND MOUNTING • 1 Overview of File Systems
  • A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage
    Brigham Young University, BYU ScholarsArchive, Theses and Dissertations, 2016-12-01. A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage. Christopher Glenn Hansen, Brigham Young University. Follow this and additional works at: https://scholarsarchive.byu.edu/etd Part of the Electrical and Computer Engineering Commons. BYU ScholarsArchive Citation: Hansen, Christopher Glenn, "A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage" (2016). Theses and Dissertations. 6470. https://scholarsarchive.byu.edu/etd/6470. This Thesis is brought to you for free and open access by BYU ScholarsArchive. It has been accepted for inclusion in Theses and Dissertations by an authorized administrator of BYU ScholarsArchive. For more information, please contact [email protected], [email protected].

    A Secure, Reliable and Performance-Enhancing Storage Architecture Integrating Local and Cloud-Based Storage. Christopher Glenn Hansen. A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science. James Archibald, Chair; Doran Wilde; Michael Wirthlin. Department of Electrical and Computer Engineering, Brigham Young University. Copyright © 2016 Christopher Glenn Hansen. All Rights Reserved.

    ABSTRACT. The constant evolution of new varieties of computing systems - cloud computing, mobile devices, and Internet of Things, to name a few - has necessitated a growing need for highly reliable, available, secure, and high-performing storage systems. While CPU performance has typically scaled with Moore’s Law, data storage is much less consistent in how quickly performance increases over time.
  • Linux Professional Institute Tutorials LPI Exam 101 Prep: Hardware and Architecture Junior Level Administration (LPIC-1) Topic 101
    Linux Professional Institute Tutorials. LPI exam 101 prep: Hardware and architecture. Junior Level Administration (LPIC-1) topic 101. Skill Level: Introductory. Ian Shields ([email protected]), Senior Programmer, IBM. 08 Aug 2005

    In this tutorial, Ian Shields begins preparing you to take the Linux Professional Institute® Junior Level Administration (LPIC-1) Exam 101. In this first of five tutorials, Ian introduces you to configuring your system hardware with Linux™. By the end of this tutorial, you will know how Linux configures the hardware found on a modern PC and where to look if you have problems.

    Section 1. Before you start

    Learn what these tutorials can teach you and how you can get the most from them.

    About this series. The Linux Professional Institute (LPI) certifies Linux system administrators at two levels: junior level (also called "certification level 1") and intermediate level (also called "certification level 2"). To attain certification level 1, you must pass exams 101 and 102; to attain certification level 2, you must pass exams 201 and 202. developerWorks offers tutorials to help you prepare for each of the four exams. Each exam covers several topics, and each topic has a corresponding self-study tutorial on developerWorks. For LPI exam 101, the five topics and corresponding developerWorks tutorials are:

    Table 1. LPI exam 101: Tutorials and topics
    LPI exam 101 topic: Topic 101
    developerWorks tutorial: LPI exam 101 prep (topic 101): Hardware and architecture
    Tutorial summary: (This tutorial.) Learn to configure your system hardware with Linux.
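    The tutorial's promise that you will "know how Linux configures the hardware found on a modern PC and where to look if you have problems" maps onto a few standard inspection commands; a brief, hedged sketch of the usual starting points:

        # Enumerate PCI devices the kernel has detected.
        lspci

        # Boot-time kernel messages, where hardware detection is logged.
        dmesg | less

        # CPU and memory details as the kernel reports them.
        cat /proc/cpuinfo
        cat /proc/meminfo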
  • The Linux System Administrators' Guide
    The Linux System Administrators’ Guide, Version 0.6.1. Lars Wirzenius <[email protected]>

    The Linux System Administrators’ Guide: Version 0.6.1, by Lars Wirzenius. An introduction to system administration of a Linux system for novices. Copyright 1993–1998 Lars Wirzenius. Trademarks are owned by their owners.

    Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to process the document source code through TeX or other formatters and print the results, and distribute the printed document, provided the printed document carries a copying permission notice identical to this one, including the references to where the source code can be found and the official home page. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions. The author would appreciate a notification of modifications, translations, and printed versions. Thank you.

    Table of Contents: Dedication; Source and pre-formatted versions available