IBM Spectrum Protect: Blueprint and Server Automated Configuration for Linux X86 What's New in Version 4.3

Total pages: 16

File type: PDF; size: 1,020 KB

IBM Spectrum Protect: Blueprint and Server Automated Configuration for Linux x86, What's New in Version 4.3
IBM Spectrum Protect Version 4 Release 3

Note: Before you use this information and the product it supports, read the information in "Notices."

First edition (December 2020)
This edition applies to Version 7.1.7 and later of the IBM Spectrum® Protect server, and to all subsequent releases and modifications until otherwise indicated in new editions or technical newsletters.
© Copyright International Business Machines Corporation 2013, 2020.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

About this document
    Support for IBM Spectrum Protect blueprint and server automated configuration
What's new in Version 4.3
Chapter 1. Introduction
Chapter 2. Implementation requirements
    Hardware and software prerequisites
        Hardware requirements
        Software requirements
    Planning worksheets
Chapter 3. Storage configuration blueprints
    Small FlashSystem configuration
    Medium FlashSystem configuration
    Large FlashSystem configuration
    IBM Elastic Storage Server systems
Chapter 4. Setting up the system
    Step 1: Set up and configure hardware
    Step 2: Install the operating system
    Step 3: IBM FlashSystem Storage: Configure multipath I/O
    Step 4: IBM FlashSystem Storage: Configure file systems for IBM Spectrum Protect
        Configure a file system by using the script
        Configure a file system by using the manual procedure
    Step 5: IBM Elastic Storage Server systems: Configuring the system
    Step 6: Test system performance
    Step 7: Install the IBM Spectrum Protect backup-archive client
    Step 8: Install the IBM Spectrum Protect server
        Obtain the installation package
        Install the IBM Spectrum Protect server
Chapter 5. Configuring the IBM Spectrum Protect server
    Removing an IBM Spectrum Protect blueprint configuration
Chapter 6. Completing the system configuration
    Changing default passwords
    Registering nodes and associating them with predefined client schedules
    Reorganizing database tables and indexes
Chapter 7. Next steps
    Optional: Set up node replication and storage pool protection
Appendix A. Performance results
    Extra Small system performance measurements
    Small system performance measurements
    Medium system performance measurements
    Large system performance measurements
    Workload simulation tool results
Appendix B. Configuring the disk system by using commands
Appendix C. Using a response file with the Blueprint configuration script
Appendix D. Using predefined client schedules
Appendix E. Modification of blueprint configurations
Appendix F. Troubleshooting
Appendix G. Accessibility
Notices
Index

About this document

This information is intended to facilitate the deployment of an IBM Spectrum Protect server by using detailed hardware specifications to build a system and automated scripts to configure the software.
To complete the tasks, you must have an understanding of IBM Spectrum Protect and scripting.

Support for IBM Spectrum Protect blueprint and server automated configuration

The information in this document is distributed on an "as is" basis without any warranty that is either expressed or implied. Support assistance for the use of this material is limited to situations where IBM Spectrum Protect support is entitled and where the issues are not specific to a blueprint implementation.

What's new in Version 4.3

The IBM Spectrum Protect Blueprint configuration script, hardware and software requirements, and documentation are updated.

Updated operating system support available with IBM Spectrum Protect Version 8.1.11
  • Red Hat Enterprise Linux (RHEL) 8 for Linux x86
  • Microsoft Windows Server 2019

Extra-small blueprint size
The blueprints for Linux x86 and Windows now include instructions for building an extra-small blueprint configuration that runs in a virtual machine.

Technical and other updates were made throughout the book. Look for the vertical bar ( | ) in the margin.

Chapter 1. Introduction

This document provides detailed steps to build an extra-small, small, medium, or large IBM Spectrum Protect server with disk-only storage that uses data deduplication on a Linux® x86 system. Two options for the storage architecture are included:
  • IBM FlashSystem® with Fibre Channel attachments
  • IBM Elastic Storage® Server with an Ethernet attachment, together with flash storage from another storage system, which uses IBM® FlashSystem or similar technology

By following the prerequisite steps precisely, you can set up hardware and prepare your system to run the IBM Spectrum Protect Blueprint configuration script, TSMserverconfig.pl, for a successful deployment. The settings and options that are defined by the script are designed to ensure optimal performance, based on the size of your system.

Overview

The following roadmap lists the main tasks that you must complete to deploy a server (a hedged sketch of the configuration-script step follows the list):
1. Determine the size of the configuration that you want to implement.
2. Review the requirements and prerequisites for the server system.
3. Set up the hardware by using detailed blueprint specifications for system layout.
4. Configure the hardware and install the Red Hat Enterprise Linux x86-64 operating system.
5. Prepare storage for IBM Spectrum Protect.
6. Run the IBM Spectrum Protect workload simulation
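
The excerpt ends at the workload-simulation step; the remaining roadmap steps run the Blueprint configuration script itself. The commands below are a minimal sketch of that final phase, assuming the blueprint package was extracted to a hypothetical /tmp/sp-blueprint directory; the exact command-line syntax and the response file name are illustrative assumptions rather than the documented invocation (Chapter 5 and Appendix C are the authoritative references).

    # Hedged sketch: the directory, file names, and argument form are assumptions.
    cd /tmp/sp-blueprint

    # Interactive run: the Blueprint configuration script prompts for site-specific values.
    perl TSMserverconfig.pl

    # Alternative non-interactive run, answering the prompts from a prepared response
    # file (Appendix C covers response files; the argument form shown here is assumed).
    perl TSMserverconfig.pl responsefile.txt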
Recommended publications
  • Elastic Storage for Linux on System z, IBM Research & Development Germany
    Peter Münch, T/L Test and Development, IBM Research & Development Germany: Elastic Storage for Linux on IBM System z. Session objectives: this presentation introduces Elastic Storage, based on General Parallel File System technology, which will be available for Linux on IBM System z. Understand the concepts of Elastic Storage and which functions will be available for Linux on System z; learn how you can integrate and benefit from Elastic Storage in a Linux on System z environment; and, finally, get a first impression in a live demo of Elastic Storage.
  • IBM Spectrum Scale CSI Driver for Container Persistent Storage
    Front cover: IBM Spectrum Scale CSI Driver for Container Persistent Storage. Authors: Abhishek Jain, Andrew Beattie, Daniel de Souza Casali, Deepak Ghuge, Harald Seipp, Kedar Karmarkar, Muthu Muthiah, Pravin P. Kudav, Sandeep R. Patil, Smita Raut, Yadavendra Yadav. IBM Redbooks Redpaper REDP-5589-00, First Edition (April 2020); this edition applies to Version 5, Release 0, Modification 4 of IBM Spectrum Scale. © Copyright International Business Machines Corporation 2020. Contents: Chapter 1, IBM Spectrum Scale and Containers Introduction: abstract; assumptions; key concepts and terminology (IBM Spectrum Scale, container runtime, container orchestration system); introduction to persistent storage for containers and Flex volumes (static provisioning, dynamic provisioning, the Container Storage Interface (CSI), and advantages of using IBM Spectrum Scale storage for containers). Chapter 2, Architecture of IBM Spectrum Scale CSI Driver: CSI component overview and IBM Spectrum Scale CSI driver architecture. A brief provisioning sketch follows.
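    As a companion to the provisioning concepts listed above, the following is a minimal, hedged sketch of dynamic provisioning through a CSI driver from the command line; the storage class name ibm-spectrum-scale-csi-fileset, the claim name, and the requested size are illustrative assumptions and are not taken from the Redpaper.

        # pvc.yaml (illustrative): request a dynamically provisioned volume from a CSI storage class.
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: demo-pvc
        spec:
          accessModes: ["ReadWriteMany"]
          resources:
            requests:
              storage: 10Gi
          storageClassName: ibm-spectrum-scale-csi-fileset   # assumed name; use the class your driver provides

        # Shell commands that use the manifest above:
        kubectl get csidrivers,storageclasses   # confirm the CSI driver and its storage classes are registered
        kubectl apply -f pvc.yaml               # create the claim; the driver provisions the backing volume
        kubectl get pvc demo-pvc                # the claim shows Bound once provisioning completes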
  • The Linux Command Line
    The Linux Command Line, Fifth Internet Edition, by William Shotts. A LinuxCommand.org Book. Copyright ©2008-2019, William E. Shotts, Jr. This work is licensed under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. To view a copy of this license, visit the link above or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042. A version of this book is also available in printed form, published by No Starch Press; copies may be purchased wherever fine books are sold, and No Starch Press also offers electronic formats for popular e-readers (https://www.nostarch.com). Linux® is the registered trademark of Linus Torvalds; all other trademarks belong to their respective owners. This book is part of the LinuxCommand.org project, a site for Linux education and advocacy devoted to helping users of legacy operating systems migrate into the future. You may contact the LinuxCommand.org project at http://linuxcommand.org. Release history: 19.01A (January 28, 2019) Fifth Internet Edition, corrected TOC; 19.01 (January 17, 2019) Fifth Internet Edition; 17.10 (October 19, 2017) Fourth Internet Edition; 16.07 (July 28, 2016) Third Internet Edition; 13.07 (July 6, 2013) Second Internet Edition; 09.12 (December 14, 2009) First Internet Edition. The table of contents opens with an introduction, "Why Use the Command Line?"
  • Filesystem Hierarchy Standard
    Filesystem Hierarchy Standard, LSB Workgroup, The Linux Foundation. Version 3.0, publication date March 19, 2015. Copyright © 2015 The Linux Foundation; Copyright © 1994-2004 Daniel Quinlan; Copyright © 2001-2004 Paul 'Rusty' Russell; Copyright © 2003-2004 Christopher Yeoh. Abstract: this standard consists of a set of requirements and guidelines for file and directory placement under UNIX-like operating systems. The guidelines are intended to support interoperability of applications, system administration tools, development tools, and scripts as well as greater uniformity of documentation for these systems. All trademarks and copyrights are owned by their owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Permission is granted to make and distribute verbatim copies of this standard provided the copyright and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this standard under the conditions for verbatim copying, provided also that the title page is labeled as modified including a reference to the original standard, provided that information on retrieving the original standard is included, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this standard into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the copyright holder. Dedication: this release is dedicated to the memory of Christopher Yeoh, a long-time friend and colleague, and one of the original editors of the FHS. A short directory-listing example follows.
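    As a quick illustration of the placement rules that the standard describes, the commands below list a few of the top-level directories that the FHS defines on a typical Linux system; the selection and the one-line purposes in the comments are a simplified summary, not the standard's full text.

        # A few FHS top-level directories and their (simplified) purposes:
        ls -ld /bin /sbin /lib    # essential commands and libraries needed to boot and repair the system
        ls -ld /etc               # host-specific system configuration
        ls -ld /usr /usr/local    # shareable, mostly read-only data; locally installed software under /usr/local
        ls -ld /var /tmp          # variable data (logs, spools, caches) and temporary files
        ls -ld /home /opt /srv    # user home directories, add-on application packages, data served by this system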
  • Oracle® Linux 7 Managing File Systems
    Oracle® Linux 7 Managing File Systems F32760-07 August 2021 Oracle Legal Notices Copyright © 2020, 2021, Oracle and/or its affiliates. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software" or "commercial computer software documentation" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract.
  • File Systems and Storage
    On Making GPFS Truly General, by Dean Hildebrand and Frank Schmuck (File Systems and Storage). GPFS (also called IBM Spectrum Scale) began as a research project that quickly found its groove supporting high performance computing (HPC) applications [1, 2]. Over the last 15 years, GPFS branched out to embrace general file-serving workloads while maintaining its original distributed design. This article gives a brief overview of the origins of numerous features that we and many others at IBM have implemented to make GPFS a truly general file system. Early days: following its origins as a project focused on high-performance lossless streaming of multimedia video files, GPFS was soon enhanced to support high performance computing (HPC) applications, to become the “General Parallel File System.” One of its first large deployments was on ASCI White in 2002, at the time the fastest supercomputer in the world. This HPC-focused architecture is described in more detail in a 2002 FAST paper [3], but from the outset an important design goal was to support general workloads through a standard POSIX interface, and to live up to the “General” term in the name. (Author sidebar: Dean Hildebrand manages the Cloud Storage Software team at the IBM Almaden Research Center and is a recognized expert in the field of distributed and parallel file systems. He pioneered pNFS, demonstrating the feasibility of providing standard and scalable access to any file system. He received a BSc degree in computer science from the University of British Columbia in 1998 and a PhD in computer science from the University of Michigan in 2007; [email protected]. Frank Schmuck joined IBM Research in 1988 after receiving a PhD in computer science.)
  • Filesystem Considerations for Embedded Devices ELC2015 03/25/15
    Filesystem considerations for embedded devices, ELC2015 (03/25/15), Tristan Lelong, senior embedded software engineer. Abstract: the goal of this presentation is to answer a question asked by several customers: which filesystem should you use within your embedded design's eMMC/SD card? These storage devices use a standard block interface, compatible with traditional filesystems, but constraints are not those of desktop PC environments. EXT2/3/4, BTRFS, and F2FS are the first of many solutions that come to mind, but how do they all compare? Typical queries include performance, longevity, tools availability, support, and power-loss robustness. This presentation will not dive into implementation details but will instead summarize the answers with the help of various figures and meaningful test results. Contents: introduction, block devices, available filesystems, performance, tools, reliability, conclusion. About the author: Tristan Lelong, embedded software engineer at Adeneo Embedded; French, living in the Pacific Northwest; embedded software, free software, and Linux kernel enthusiast. Introduction: more and more embedded designs rely on smart memory chips rather than bare NAND or NOR. The presentation starts with some context to help understand the differences between NAND and MMC, typical requirements found in embedded device designs, and potential filesystems to use on MMC devices. Focus then moves to block filesystems: how they are supported and what features they advertise, with benchmarks and comparisons covering tools, reliability, and performance. Block devices vocabulary: MMC (MultiMediaCard) is a memory card unveiled in 1997 by SanDisk and Siemens, based on NAND flash memory. A brief mkfs/mount sketch follows.
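    To make the comparison concrete, here is a hedged sketch of creating and mounting two of the candidate filesystems on an eMMC or SD card partition; the device node /dev/mmcblk0p2 is an assumed example, mkfs.f2fs comes from the separate f2fs-tools package, and a real design would choose options only after benchmarking, as the presentation suggests.

        # WARNING: destroys all data on the target partition; the device node is an assumed example.
        DEV=/dev/mmcblk0p2

        # ext4: mature and widely supported on block devices such as eMMC/SD cards.
        mkfs.ext4 -L rootfs "$DEV"
        mount -o noatime "$DEV" /mnt

        # f2fs: a log-structured filesystem designed for NAND-based block devices.
        umount /mnt
        mkfs.f2fs -l rootfs "$DEV"
        mount -t f2fs -o noatime "$DEV" /mnt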
  • Filesystems HOWTO
    Filesystems HOWTO, by Martin Hinner <[email protected]>, http://martin.hinner.info. Table of contents: 1. Introduction; 2. Volumes; 3. DOS FAT 12/16/32, VFAT; 4. High Performance FileSystem (HPFS); 5. New Technology FileSystem (NTFS); 6. Extended filesystems (Ext, Ext2, Ext3); 7. Macintosh Hierarchical Filesystem (HFS); 8. ISO 9660 CD-ROM filesystem; 9. Other filesystems. A short mount example follows.
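    As a small companion to the filesystem list above, these commands show how Linux mounts several of the filesystem types that the HOWTO covers; the device nodes and mount points are illustrative assumptions, and NTFS read/write support typically comes from the separate ntfs-3g package.

        mkdir -p /mnt/usb /mnt/cdrom /mnt/win /mnt/data
        mount -t vfat    /dev/sdb1 /mnt/usb     # DOS FAT/VFAT, common on USB sticks and SD cards
        mount -t iso9660 /dev/sr0  /mnt/cdrom   # ISO 9660 CD-ROM filesystem
        mount -t ntfs-3g /dev/sda2 /mnt/win     # NTFS through the ntfs-3g driver
        mount -t ext3    /dev/sdb2 /mnt/data    # Linux extended filesystem (ext2/ext3)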
  • A Sense of Time for Node.js: Timeouts as a Cure for Event Handler Poisoning
    A Sense of Time for Node.js: Timeouts as a Cure for Event Handler Poisoning (anonymous). Abstract: the software development community has begun to adopt the Event-Driven Architecture (EDA) to provide scalable web services. Though the Event-Driven Architecture can offer better scalability than the One Thread Per Client Architecture, its use exposes service providers to a Denial of Service attack that we call Event Handler Poisoning (EHP). This work is the first to define EHP attacks. After examining EHP vulnerabilities in the popular Node.js EDA framework and open-source npm modules, we explore various solutions to EHP-safety. For a practical defense against EHP attacks, we propose Node.cure, which defends a large class of Node.js applications against all known EHP attacks by making timeouts a first-class member of the JavaScript language and the Node.js framework. Our evaluation shows that Node.cure is effective, broadly applicable, and offers strong security guarantees. From the introduction: …a new Denial of Service attack that can be used against EDA-based services. Our Event Handler Poisoning attack exploits the most important limited resource in the EDA: the Event Handlers themselves. The source of the EDA's scalability is also its Achilles' heel. Multiplexing unrelated work onto the same thread reduces overhead, but it also moves the burden of time sharing out of the thread library or operating system and into the application itself. Where OTPCA-based services can rely on preemptive multitasking to ensure that resources are shared fairly, using the EDA requires the service to enforce its own cooperative multitasking [89]. An EHP attack identifies a way to defeat the cooperative multitasking used by an EDA-based… A small runnable demonstration follows.
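    A tiny, hedged demonstration of the mechanism the paper describes, runnable from a shell on any machine with Node.js installed: a synchronous busy-loop inside one handler keeps an already-due timer callback from firing, which is the kind of event-loop starvation that an Event Handler Poisoning attack induces. The two-second delay is an arbitrary illustrative value, and this snippet is not taken from the paper or from Node.cure.

        node -e '
          const t0 = Date.now();
          setTimeout(() => console.log("timer was due at ~0 ms, fired after " + (Date.now() - t0) + " ms"), 0);
          while (Date.now() - t0 < 2000) {}   // a "poisoned" handler hogging the event loop
          console.log("event loop was blocked for " + (Date.now() - t0) + " ms");
        '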
  • Shared File Systems: Determining the Best Choice for Your Distributed SAS® Foundation Applications Margaret Crevar, SAS Institute Inc., Cary, NC
    Paper SAS569-2017, Shared File Systems: Determining the Best Choice for Your Distributed SAS® Foundation Applications, Margaret Crevar, SAS Institute Inc., Cary, NC. Abstract: if you are planning on deploying SAS® Grid Manager and SAS® Enterprise BI (or other distributed SAS® Foundation applications) with load-balanced servers on multiple operating system instances, a shared file system is required. In order to determine the best shared file system choice for a given deployment, it is important to understand how the file system is used, the SAS® I/O workload characteristics performed on it, and the stressors that SAS Foundation applications produce on the file system. For the purposes of this paper, we use the term "shared file system" to mean both a clustered file system and a shared file system, even though "shared" can denote a network file system and a distributed file system, not a clustered one. Introduction: this paper examines the shared file systems that are most commonly used with SAS and reviews their strengths and weaknesses. SAS Grid Computing requirements for shared file systems: before we get into the reasons why a shared file system is needed for SAS® Grid Computing, let's briefly discuss the SAS I/O characteristics. General SAS I/O characteristics: SAS Foundation creates a high volume of predominantly large-block, sequential-access I/O, generally at block sizes of 64K, 128K, or 256K, and the interactions with data storage are significantly different from those of typical interactive applications and RDBMSs. Among the major points to understand (more details can be found in this paper): SAS tends to perform large sequential reads and writes. A rough throughput check is sketched below.
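    To see whether a candidate shared file system keeps up with this kind of workload, a rough first check can be made with dd using the large, sequential block sizes described above; the mount point /sasdata, the 4 GiB test size, and the 128K block size are illustrative assumptions, and a real evaluation would use a proper benchmark with several concurrent jobs across the grid nodes.

        # Sequential 128K writes to the shared file system (direct I/O, bypassing the page cache).
        dd if=/dev/zero of=/sasdata/seqtest.dat bs=128k count=32768 oflag=direct

        # Sequential 128K reads of the same file.
        dd if=/sasdata/seqtest.dat of=/dev/null bs=128k iflag=direct

        rm /sasdata/seqtest.dat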
  • Intel® Solutions for Lustre* Software SC16 Technical Training
    SC’16 Technical Training. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at http://www.intel.com/content/www/us/en/software/intel-solutions-for-lustre-software.html. You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
  • Outline of Ext4 File System & Ext4 Online Defragmentation Foresight
    Outline of Ext4 File System & Ext4 Online Defragmentation Foresight, LinuxCon Japan/Tokyo 2010, September 28, 2010, Akira Fujita <[email protected]>, NEC Software Tohoku, Ltd. Self introduction: Akira Fujita, NEC Software Tohoku, Ltd. in Sendai, Japan. Since 2004, I have been working at NEC Software Tohoku developing Linux file systems, mainly the ext3 and ext4 filesystems; currently, I work on the quality evaluation of ext4 for enterprise use and also develop the ext4 online defragmentation. Outline: what is ext4; ext4 features; compatibility; performance measurement; recent ext4 topics; what is ext4 online defrag; relevant file defragmentation; current status / future plan. What is ext4: ext4 is the successor of ext3, developed to solve performance issues and scalability bottlenecks in ext3 while providing backward compatibility with ext3. Ext4 development began in 2006; it was included in stable kernel 2.6.19 as EXPERIMENTAL (ext4dev), and since kernel 2.6.28 ext4 has been released as stable (renamed from ext4dev to ext4 in kernel 2.6.28). Maintainers: Theodore Ts'o ([email protected]), Andreas Dilger ([email protected]); mailing list [email protected]; Ext4 wiki http://ext4.wiki.kernel.org. Ext4 features: bigger file/filesystem size support — compared to ext3, ext4 supports files 8 times larger and filesystems 65536 times(!) larger. The defragmentation tools are sketched below.
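    For readers who want to try the online defragmentation discussed above, the e2fsprogs utilities shown below report and reduce fragmentation on a mounted ext4 file system; the paths are illustrative assumptions, and e4defrag needs root privileges and works only on ext4.

        filefrag -v /data/bigfile.db   # show the extent layout (fragmentation) of a single file
        e4defrag -c /data              # check only: report a fragmentation score for files under /data
        e4defrag /data/bigfile.db      # defragment one file online, while the file system stays mounted
        e4defrag /data                 # or defragment an entire directory tree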