SUSE Linux Enterprise High Availability

Total pages: 16

File type: PDF, size: 1020 KB

SUSE Linux Enterprise High Availability Extension 11 SP4
High Availability Guide
by Tanja Roth and Thomas Schraitle

Publication Date: September 24, 2021

SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”. For SUSE or Novell trademarks, see the Novell Trademark and Service Mark list at http://www.novell.com/company/legal/trademarks/tmlist.html . All other third-party trademarks are the property of their respective owners. A trademark symbol (®, ™ etc.) denotes a SUSE or Novell trademark; an asterisk (*) denotes a third-party trademark. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
Contents

About This Guide xiii
  1 Feedback xiv
  2 Documentation Conventions xiv
  3 About the Making of This Manual xv

I INSTALLATION AND SETUP 1

1 Product Overview 2
  1.1 Availability as Add-On/Extension 2
  1.2 Key Features 3
      Wide Range of Clustering Scenarios 3 • Flexibility 3 • Storage and Data Replication 4 • Support for Virtualized Environments 4 • Support of Local, Metro, and Geo Clusters 4 • Resource Agents 5 • User-friendly Administration Tools 5
  1.3 Benefits 6
  1.4 Cluster Configurations: Storage 10
  1.5 Architecture 12
      Architecture Layers 12 • Process Flow 15

2 System Requirements and Recommendations 16
  2.1 Hardware Requirements 16
  2.2 Software Requirements 17
  2.3 Storage Requirements 17
  2.4 Other Requirements and Recommendations 17

3 Installation and Basic Setup 21
  3.1 Definition of Terms 21
  3.2 Overview 23
  3.3 Installation as Add-on 25
  3.4 Automatic Cluster Setup (sleha-bootstrap) 26
  3.5 Manual Cluster Setup (YaST) 30
      YaST Cluster Module 30 • Defining the Communication Channels 31 • Defining Authentication Settings 35 • Transferring the Configuration to All Nodes 36 • Synchronizing Connection Status Between Cluster Nodes 40 • Configuring Services 41 • Bringing the Cluster Online 43
  3.6 Mass Deployment with AutoYaST 44

II CONFIGURATION AND ADMINISTRATION 46

4 Configuration and Administration Basics 47
  4.1 Global Cluster Options 47
      Overview 47 • Option no-quorum-policy 48 • Option stonith-enabled 49
  4.2 Cluster Resources 49
      Resource Management 49 • Supported Resource Agent Classes 50 • Types of Resources 51 • Resource Templates 52 • Advanced Resource Types 53 • Resource Options (Meta Attributes) 56 • Instance Attributes (Parameters) 59 • Resource Operations 62 • Timeout Values 63
  4.3 Resource Monitoring 64
  4.4 Resource Constraints 66
      Types of Constraints 66 • Scores and Infinity 69 • Resource Templates and Constraints 70 • Failover Nodes 70 • Failback Nodes 72 • Placing Resources Based on Their Load Impact 72 • Grouping Resources by Using Tags 76
  4.5 Managing Services on Remote Hosts 76
      Monitoring Services on Remote Hosts with Nagios Plug-ins 76 • Managing Services on Remote Nodes with pacemaker_remote 78
  4.6 Monitoring System Health 79
  4.7 Maintenance Mode 80
  4.8 For More Information 82

5 Configuring and Managing Cluster Resources (Web Interface) 83
  5.1 Hawk—Overview 83
      Starting Hawk and Logging In 83 • Main Screen: Cluster Status 85
  5.2 Configuring Global Cluster Options 89
  5.3 Configuring Cluster Resources 91
      Configuring Resources with the Setup Wizard 92 • Creating Simple Cluster Resources 93 • Creating STONITH Resources 95 • Using Resource Templates 96 • Configuring Resource Constraints 98 • Specifying Resource Failover Nodes 102 • Specifying Resource Failback Nodes (Resource Stickiness) 103 • Configuring Placement of Resources Based on Load Impact 104 • Configuring Resource Monitoring 105 • Configuring a Cluster Resource Group 107 • Configuring a Clone Resource 108
  5.4 Managing Cluster Resources 109
      Starting Resources 110 • Cleaning Up Resources 111 • Removing Cluster Resources 112 • Migrating Cluster Resources 112 • Using Maintenance Mode 114 • Viewing the Cluster History 116 • Exploring Potential Failure Scenarios 118 • Generating a Cluster Report 121
  5.5 Monitoring Multiple Clusters 122
  5.6 Hawk for Geo Clusters 124
  5.7 Troubleshooting 124

6 Configuring and Managing Cluster Resources (GUI) 126
  6.1 Pacemaker GUI—Overview 126
      Logging in to a Cluster 126 • Main Window 127
  6.2 Configuring Global Cluster Options 129
  6.3 Configuring Cluster Resources 130
      Creating Simple Cluster Resources 130 • Creating STONITH Resources 133 • Using Resource Templates 134 • Configuring Resource Constraints 135 • Specifying Resource Failover Nodes 139 • Specifying Resource Failback Nodes (Resource Stickiness) 140 • Configuring Placement of Resources Based on Load Impact 141 • Configuring Resource Monitoring 144 • Configuring a Cluster Resource Group 145 • Configuring a Clone Resource 149
  6.4 Managing Cluster Resources 149
      Starting Resources 150 • Cleaning Up Resources 151 • Removing Cluster Resources 152 • Migrating Cluster Resources 152 • Changing Management Mode of Resources 154

7 Configuring and Managing Cluster Resources (Command Line) 155
  7.1 crmsh—Overview 155
      Getting Help 156 • Executing crmsh's Subcommands 157 • Displaying Information about OCF Resource Agents 159 • Using Configuration Templates 160 • Testing with Shadow Configuration 162 • Debugging Your Configuration Changes 163 • Cluster Diagram 163
  7.2 Configuring Global Cluster Options 164
  7.3 Configuring Cluster Resources 165
      Creating Cluster Resources 165 • Creating Resource Templates 166 • Creating a STONITH Resource 167 • Configuring Resource Constraints 168 • Specifying Resource Failover Nodes 171 • Specifying Resource Failback Nodes (Resource Stickiness) 171 • Configuring Placement of Resources Based on Load Impact 171 • Configuring Resource Monitoring 174 • Configuring a Cluster Resource Group 175 • Configuring a Clone Resource 175
  7.4 Managing Cluster Resources 176
      Starting a New Cluster Resource 177 • Cleaning Up Resources 177 • Removing a Cluster Resource 178 • Migrating a Cluster Resource 178 • Grouping/Tagging Resources 179 • Using Maintenance Mode 179
  7.5 Setting Passwords Independent of cib.xml 180
  7.6 Retrieving History Information 181
  7.7 For More Information 182

8 Adding or Modifying Resource Agents 183
  8.1 STONITH Agents 183
  8.2 Writing OCF Resource Agents 183
  8.3 OCF Return Codes and Failure Recovery 185

9 Fencing and STONITH 187
  9.1 Classes of Fencing 187
  9.2 Node Level Fencing 188
      STONITH Devices 188 • STONITH Implementation 189
  9.3 STONITH Resources and Configuration 190
      Example STONITH Resource Configurations 190
  9.4 Monitoring Fencing Devices 193
  9.5 Special Fencing Devices 193
  9.6 Basic Recommendations 195
  9.7 For More Information 196

10 Access Control Lists 197
  10.1 Requirements and Prerequisites 197
  10.2 Enabling Use of ACLs In Your Cluster 198
  10.3 The Basics of ACLs 199
      Setting ACL Rules via XPath Expressions 199 • Setting ACL Rules via Abbreviations 201
  10.4 Configuring ACLs with the Pacemaker GUI 202
  10.5 Configuring ACLs with Hawk 203
  10.6 Configuring ACLs with crmsh 204

11 Network Device Bonding 206
  11.1 Configuring Bonding Devices with YaST 206
  11.2 Hotplugging of Bonding Slaves 209
  11.3 For More Information 211

12 Load Balancing with Linux Virtual Server 212
  12.1 Conceptual Overview 212
      Director 212 • User Space Controller and Daemons 213 • Packet Forwarding 213 • Scheduling Algorithms 214
  12.2 Configuring IP Load Balancing with YaST 214
  12.3 Further Setup 220
  12.4 For More Information 220

13 Geo Clusters (Multi-Site Clusters) 221

III STORAGE AND DATA REPLICATION 222

14 OCFS2 223
  14.1 Features and Benefits 223
  14.2 OCFS2 Packages and Management Utilities 224
  14.3 Configuring OCFS2 Services and a STONITH Resource 225
  14.4 Creating OCFS2 Volumes 227
  14.5 Mounting OCFS2 Volumes 229
  14.6 Using Quotas on OCFS2 File Systems 231
  14.7 For More Information 231

15 DRBD 232
  15.1 Conceptual Overview 232
  15.2 Installing DRBD Services 234
  15.3 Configuring the DRBD Service 235
  15.4 Testing the DRBD Service 241
  15.5 Tuning DRBD 243
  15.6 Troubleshooting DRBD 243
      Configuration 244 • Host Names 244 • TCP Port 7788 244 • DRBD Devices Broken after Reboot 245
  15.7 For More Information 245

16 Cluster Logical Volume Manager (cLVM) 246
  16.1 Conceptual Overview 246
  16.2 Configuration of cLVM 247
      Configuring Cmirrord 247 • Creating the Cluster Resources 250 • Scenario: cLVM With iSCSI on SANs 251 • Scenario: cLVM With DRBD 255
  16.3 Configuring Eligible LVM2 Devices Explicitly 257
  16.4 For More Information 258

17 Storage Protection 259
  17.1 Storage-based Fencing 259
      Overview 260 • Number of SBD Devices 260 • Setting Up Storage-based Protection 261
  17.2 Ensuring Exclusive Storage Activation 268
      Overview 268 • Setup 268
  17.3 For More Information 270

18 Samba Clustering 271
  18.1 Conceptual Overview 271
  18.2 Basic Configuration 272
  18.3 Joining an Active Directory Domain 275
  18.4 Debugging and Testing Clustered Samba 276
  18.5 For More Information 278

19 Disaster Recovery
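Chapters 5–7 above cover the same resource-configuration tasks through Hawk, the Pacemaker GUI, and the crm shell. As a rough illustration of the command-line approach of Chapter 7, a crm session for a small IP-plus-filesystem group might look like the following; the resource names, IP address, and device paths are placeholders, not examples taken from the guide:

```shell
# Hypothetical crm shell session (crmsh). Resource IDs (admin-ip, web-fs,
# g-web), the address 192.168.1.100, and /dev/sdb1 are illustrative.
crm configure primitive admin-ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=10s timeout=20s
crm configure primitive web-fs ocf:heartbeat:Filesystem \
    params device=/dev/sdb1 directory=/srv/data fstype=ext3 \
    op monitor interval=20s
# Group the two so they start together and on the same node.
crm configure group g-web admin-ip web-fs
crm configure property no-quorum-policy=ignore
crm configure show
```

`crm configure show` prints the resulting configuration so it can be reviewed before relying on it on a real cluster.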
Recommended publications
  • Fault-Tolerant Components on AWS
    Fault-Tolerant Components on AWS, November 2019. This paper has been archived. For the latest technical information, see the AWS Whitepapers & Guides page: https://aws.amazon.com/whitepapers

    Notices: Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers. © 2019 Amazon Web Services, Inc. or its affiliates. All rights reserved.

    Contents: Introduction 1 • Failures Shouldn’t Be THAT Interesting 1 • Amazon Elastic Compute Cloud 1 • Elastic Block Store 3 • Auto Scaling …
  • Vsphere Availability Vmware Vsphere 6.5 Vmware Esxi 6.5 Vcenter Server 6.5
    vSphere Availability: VMware vSphere 6.5, VMware ESXi 6.5, vCenter Server 6.5. This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs. EN-002085-01

    You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/ The VMware Web site also provides the latest product updates. If you have comments about this documentation, submit your feedback to: [email protected] Copyright © 2009–2017 VMware, Inc. All rights reserved. Copyright and trademark information. VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com

    Contents: About vSphere Availability 5 • Updated Information for vSphere Availability 7 • 1 Business Continuity and Minimizing Downtime 9 (Reducing Planned Downtime 9 • Preventing Unplanned Downtime 10 • vSphere HA Provides Rapid Recovery from Outages 10 • vSphere Fault Tolerance Provides Continuous Availability 11 • Protecting the vCenter Server Appliance with vCenter High Availability 12 • Protecting vCenter Server with VMware Service Lifecycle Manager 12) • 2 Creating and Using vSphere HA Clusters 13 (How vSphere HA Works 13 • vSphere HA Admission Control 21 • vSphere HA Interoperability 26 • Creating a vSphere HA Cluster 29 • Configuring vSphere Availability Settings 31 • Best Practices for vSphere HA Clusters 39) • 3 Providing Fault Tolerance for Virtual Machines 43 (How Fault Tolerance …
  • Rethinking Database High Availability with RDMA Networks
    Rethinking Database High Availability with RDMA Networks. Erfan Zamanian (Brown University); Xiangyao Yu, Michael Stonebraker, and Tim Kraska (Massachusetts Institute of Technology). [email protected], fyxy, stonebraker, [email protected]

    ABSTRACT: Highly available database systems rely on data replication to tolerate machine failures. Both classes of existing replication algorithms, active-passive and active-active, were designed in a time when network was the dominant performance bottleneck. In essence, these techniques aim to minimize network communication between replicas at the cost of incurring more processing redundancy; a trade-off that suitably fitted the conventional wisdom of distributed database design. However, the emergence of next-generation networks with high throughput and low latency calls for revisiting these assumptions. In this paper, we first make the case that in modern RDMA-enabled networks, the bottleneck has shifted to CPUs, and therefore …

    … copy propagate to all the backup copies synchronously such that any failed primary server can be replaced by a backup server. The conventional wisdom of distributed system design is that the network is a severe performance bottleneck. Messaging over a conventional 10-Gigabit Ethernet within the same data center, for example, delivers 2–3 orders of magnitude higher latency and lower bandwidth compared to accessing the local main memory of a server [3]. Two dominant high availability approaches, active-passive and active-active, both adopt the optimization goal of minimizing network overhead. With the rise of the next-generation networks, however, conventional high availability protocol designs are not appropriate anymore, especially in a setting of Local Area Network (LAN).
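The active-passive scheme this abstract describes (updates propagate synchronously from a primary to its backups, so that any backup can take over) can be sketched in a few lines of Python. This is a toy model with in-process dicts standing in for servers; the class and method names are invented for the sketch, not taken from the paper:

```python
# Toy model of active-passive (primary-backup) replication.

class ReplicatedStore:
    def __init__(self, num_backups=2):
        self.primary = {}
        self.backups = [{} for _ in range(num_backups)]

    def write(self, key, value):
        # The primary applies the update, then propagates it synchronously
        # to every backup before acknowledging the client.
        self.primary[key] = value
        for backup in self.backups:
            backup[key] = value  # stands in for a network round trip

    def fail_over(self):
        # Because backups are synchronously up to date, any backup can
        # replace a failed primary without losing acknowledged writes.
        self.primary = self.backups.pop(0)

store = ReplicatedStore()
store.write("balance", 100)
store.fail_over()
print(store.primary["balance"])  # acknowledged write survives failover
```

The synchronous loop in `write` is exactly the network cost the paper argues about: every acknowledged write pays one round trip per backup.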
  • High Availability Guide 12C (12.2.1.1) E73182-01
    Oracle® Fusion Middleware High Availability Guide, 12c (12.2.1.1), E73182-01, June 2016. Copyright © 2013, 2016, Oracle and/or its affiliates. All rights reserved. Primary Author: Christine Ford. Contributing Authors: Michael Blevins, Suvendu Ray, Lingaraj Nayak, Maneesh Jain, Peter LaQuerre. Contributors: Gururaj BS

    This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

    If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
  • Web Application Hosting in the AWS Cloud AWS Whitepaper Web Application Hosting in the AWS Cloud AWS Whitepaper
    Web Application Hosting in the AWS Cloud: AWS Whitepaper. Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

    Table of Contents: Abstract 1 • An overview of traditional web hosting 2 • Web application hosting in the cloud using AWS 3 • How AWS can solve common web application hosting issues 3 • A cost-effective alternative to oversized fleets needed to handle peaks 3 • A scalable solution to handling unexpected traffic …
  • Oracle Database High Availability Overview, 10G Release 2 (10.2) B14210-02
    Oracle® Database High Availability Overview, 10g Release 2 (10.2), B14210-02, July 2006. Copyright © 2005, 2006, Oracle. All rights reserved. Primary Author: Immanuel Chan. Contributors: Andrew Babb, Tammy Bednar, Barb Lundhild, Rahim Mau, Valarie Moore, Ashish Ray, Vivian Schupmann, Michael T. Smith, Lawrence To, Douglas Utzig, James Viscusi, Shari Yamaguchi

    The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited. The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose.

    If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations.
  • Zero-Downtime Deployment in a High Availability Architecture - Controlled Experiment of Deployment Automation in a High Availability Architecture
    Bachelor Degree Project: Zero-Downtime Deployment in a High Availability Architecture. Controlled experiment of deployment automation in a high availability architecture. Author: Axel Nilsson. Supervisor: Johan Hagelbäck. Semester: HT 2018. Subject: Computer Science

    Abstract: Computer applications are no longer local installations on our computers. Many modern web applications and services rely on an internet connection to a centralized server to access the full functionality of the application. High availability architectures can be used to provide redundancy in case of failure to ensure customers always have access to the server. Due to the complexity of such systems and the need for stability, deployments are often avoided and new features and bug fixes cannot be delivered to the end user quickly. In this project, an automation system is proposed to allow for deployments to a high availability architecture while ensuring high availability. The proposed automation system is then tested in a controlled experiment to see if it can deliver what it promises. During low amounts of traffic, the deployment system showed it could make a deployment with a statistically insignificant change in error rate when compared to normal operations. Similar results were found during medium to high levels of traffic for successful deployments, but if the system had to recover from a failed deployment there was an increase in errors. However, the response time during the experiment showed that the system had a significant effect on the response time
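The zero-downtime idea this abstract tests (deploying without ever emptying the pool of healthy servers) reduces to a rolling update. The following toy Python model is illustrative only; the names (`Instance`, `deploy_rolling`) are invented for the sketch and are not from the thesis:

```python
# Toy model of a zero-downtime rolling deployment: instances leave the
# load-balancer pool one at a time, so capacity never drops to zero.

class Instance:
    def __init__(self, name, version):
        self.name, self.version, self.healthy = name, version, True

def deploy_rolling(pool, new_version):
    for inst in pool:
        inst.healthy = False  # drain: stop routing traffic here
        # the rest of the pool must still be serving traffic
        assert any(i.healthy for i in pool)
        inst.version = new_version  # install the new release
        inst.healthy = True         # health check passed: rejoin pool

pool = [Instance(f"web{i}", "v1") for i in range(3)]
deploy_rolling(pool, "v2")
print([i.version for i in pool])  # ['v2', 'v2', 'v2']
```

A real system would also need the failure-recovery path the thesis measures: if the health check never passes, roll the drained instance back instead of continuing.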
  • EOS: the Next Generation Extensible Operating System
    White Paper: EOS: The Next Generation Extensible Operating System. Performance, resiliency and programmability across the entire network are now fundamental business requirements for next generation cloud and enterprise data center networks. The need for agility and deployment at scale with regard to provisioning and network operations requires a new level of automation and integration with current data center infrastructure. The underlying design of the network operating system provides the architectural foundation to meet these requirements. Arista Networks has designed and delivered EOS (Extensible Operating System), the industry-leading, Linux-based network operating system, to address these requirements. EOS is a robust, programmable and innovative operating system, featuring a single software image that runs across the entire portfolio of Arista's award-winning network switches as well as in a virtual machine instance (vEOS). This guarantees consistent operations, workflow automation and high availability, while significantly reducing data center operational expenses. This whitepaper discusses current network operating systems and explains how Arista's EOS architecture uniquely supports next generation data centers.

    Limitations of Legacy Network Operating Systems: Traditional enterprise data center networks have long been susceptible to software crashes and unplanned outages. Multiple, different software releases across switch platforms made deploying new features and services a lengthy and time-consuming process. Combined with error-prone manual configuration, inevitably network uptime was compromised. At the core of this problem was the aging monolithic software that still runs on most of today's networking equipment. This presents a fundamental limitation to very high network reliability. The reality is that a single bug or defect anywhere in these shared-memory systems exposes the entire network to disruption, since there is no mechanism for software fault containment.
  • Administration Guide Administration Guide SUSE Linux Enterprise High Availability Extension 12 SP4 by Tanja Roth and Thomas Schraitle
    SUSE Linux Enterprise High Availability Extension 12 SP4: Administration Guide, by Tanja Roth and Thomas Schraitle. This guide is intended for administrators who need to set up, configure, and maintain clusters with SUSE® Linux Enterprise High Availability Extension. For quick and efficient configuration and administration, the High Availability Extension includes both a graphical user interface (GUI) and a command line interface (CLI). For performing key tasks, both approaches (GUI and CLI) are covered in detail in this guide. Thus, administrators can choose the appropriate tool that matches their needs. Publication Date: September 10, 2021

    SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”. For SUSE trademarks, see http://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
  • Performance of Cluster-Based High Availability Database in Cloud Containers
    Performance of Cluster-based High Availability Database in Cloud Containers. Raju Shrestha, OsloMet - Oslo Metropolitan University, Oslo, Norway. Keywords: Performance, High Availability, Database, Cloud, Galera Cluster, Virtual Machine, Container, Docker.

    Abstract: Database is an important component in any software application, which enables efficient data management. High availability of databases is critical for an uninterruptible service offered by the application. Virtualization has been a dominant technology behind providing highly available solutions in the cloud including database, where database servers are provisioned and dynamically scaled based on demands. However, containerization technology has gained popularity in recent years because of light-weight and portability, and the technology has seen increased number of enterprises embracing containers as an alternative to heavier and resource-consuming virtual machines for deploying applications and services. A relatively new cluster-based synchronous multi-master database solution has gained popularity recently and has seen increased adoption against the traditional master-slave replication for better data consistency and high availability. This article evaluates the performance of a cluster-based high availability database deployed in containers and compares it to the one deployed in virtual machines. A popular cloud software platform, OpenStack, is used for virtual machines. Docker is used for containers as it is the most popular container technology at the moment. Results show better performance by HA Galera cluster database setup using Docker containers in most of the Sysbench benchmark tests compared to a similar setup using OpenStack virtual machines.

    1 INTRODUCTION: Most of today's modern cloud-based applications and … (Wiesmann and Schiper, 2005; Curino et al., 2010) is a solution which is being used traditionally since long time.
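The paper benchmarks the clustered database with Sysbench. A typical sysbench 1.x OLTP run against a single cluster node looks roughly like the following; the host, credentials, table counts, and durations here are placeholder values, not the paper's actual benchmark parameters:

```shell
# Illustrative sysbench OLTP benchmark against one Galera node.
# 127.0.0.1, sbtest/secret, and all sizes are placeholders.

# Create the test tables.
sysbench oltp_read_write \
    --mysql-host=127.0.0.1 --mysql-port=3306 \
    --mysql-user=sbtest --mysql-password=secret \
    --tables=10 --table-size=100000 prepare

# Run a 60-second mixed read/write workload with 16 client threads.
sysbench oltp_read_write \
    --mysql-host=127.0.0.1 --mysql-port=3306 \
    --mysql-user=sbtest --mysql-password=secret \
    --tables=10 --table-size=100000 \
    --threads=16 --time=60 run
```

The `run` phase reports transactions per second and latency percentiles, which is the kind of data the container-versus-VM comparison is built on.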
  • Vmware Vsphere 7.0 Vmware Esxi 7.0 Vcenter Server 7.0 Vsphere Availability
    vSphere Availability. Modified on 13 AUG 2020. VMware vSphere 7.0, VMware ESXi 7.0, vCenter Server 7.0. You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com © Copyright 2009-2020 VMware, Inc. All rights reserved. Copyright and trademark information.

    Contents: About vSphere Availability 6 • Updated Information 7 • 1 Business Continuity and Minimizing Downtime 8 (Reducing Planned Downtime 8 • Preventing Unplanned Downtime 9 • vSphere HA Provides Rapid Recovery from Outages 9 • vSphere Fault Tolerance Provides Continuous Availability 11 • Protecting vCenter Server with vCenter High Availability 11 • Protecting vCenter Server with VMware Service Lifecycle Manager 11) • 2 Creating and Using vSphere HA Clusters 12 (How vSphere HA Works 12 • Primary and Secondary Hosts 13 • Host Failure Types 14 • Determining Responses to Host Issues 15 • VM and Application Monitoring 17 • VM Component Protection 18 • Network Partitions 19 • Datastore Heartbeating 20 • vSphere HA Security 21 • vSphere HA Admission Control 22 • Cluster Resources Percentage Admission Control 23 • Slot Policy Admission Control 25 • Dedicated Failover Hosts Admission Control 28 • vSphere HA Interoperability 28 • Using vSphere HA with vSAN 28 • Using vSphere HA and DRS Together 30 • Other vSphere HA Interoperability Issues 31 • Creating a vSphere HA Cluster 32 • vSphere HA Checklist 32 • Create a vSphere HA Cluster in the vSphere Client 33 • Configuring vSphere Availability Settings 35 • Configuring …
  • High Availability Failover Cluster
    Understanding High Availability in the SAN. Mark S. Fleming, IBM.

    SNIA Legal Notice: The material contained in this tutorial is copyrighted by the SNIA unless otherwise noted. Member companies and individual members may use this material in presentations and literature under the following conditions: any slide or slides used must be reproduced in their entirety without modification, and the SNIA must be acknowledged as the source of any material used in the body of any document containing material from these presentations. This presentation is a project of the SNIA Education Committee. Neither the author nor the presenter is an attorney and nothing in this presentation is intended to be, or should be construed as, legal advice or an opinion of counsel. If you need legal advice or a legal opinion please contact your attorney. The information presented herein represents the author's personal opinion and current understanding of the relevant issues involved. The author, the presenter, and the SNIA do not assume any responsibility or liability for damages arising out of any reliance on or use of this information. NO WARRANTIES, EXPRESS OR IMPLIED. USE AT YOUR OWN RISK. © 2011 Storage Networking Industry Association. All Rights Reserved.

    Abstract: This session will appeal to those seeking a fundamental understanding of High Availability (HA) configurations in the SAN. Modern SANs have developed numerous methods using hardware and software to assure high availability of storage to customers. The session will explore basic concepts of HA; move through a sample configuration from end-to-end; investigate HA and virtualization, converged networks and the cloud; and discuss some of the challenges and pitfalls faced in testing HA configurations.
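On the host side, the path redundancy this tutorial discusses is commonly configured on Linux with dm-multipath. A minimal sketch follows; the WWID and alias are invented for illustration, and real deployments should take the WWID from the actual LUN:

```shell
# Sketch of host-side SAN path redundancy with Linux dm-multipath.
# The WWID and the alias "san_data" below are placeholders.
cat >> /etc/multipath.conf <<'EOF'
multipaths {
    multipath {
        wwid  3600508b4000156d700012000000b0000
        alias san_data
    }
}
EOF

# Start the multipath daemon and inspect the discovered paths.
systemctl enable --now multipathd
multipath -ll   # each LUN should list multiple active paths to survive a link failure
```

With two HBAs and two fabric paths, `multipath -ll` showing both paths active is the host-level half of the end-to-end HA configuration the session walks through.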