Disaster Recovery in Google Cloud with Cloud Volumes ONTAP
Total Pages: 16
File Type: PDF, Size: 1,020 KB
Recommended publications
-
IBM HyperSwap: An Automated Disaster Recovery Solution – Disaster Recovery at Different Geographical Sites
IBM Systems technical white paper, June 2020. Table of contents: Introduction; Problem statement; Validating the system for HyperSwap failures; Generic best practices for IBM HyperSwap; Summary; Get more information; About the author; Acknowledgments.
Introduction: This technical paper is based on a client engagement and was developed to help IBM® Business Partners, field sales representatives, technical specialists, and IBM clients understand when IBM HyperSwap® can be deployed on IBM Spectrum Virtualize systems and the steps to validate the configuration for failure cases. It also includes the best practices from the specific client engagement, which can be applied in a similar environment.
Challenge: How to configure an automated disaster recovery solution between existing IBM Storwize V7000 control enclosures at …
-
Dell vs. NetApp: Don't Let Cloud Be an Afterthought
Contents: Introduction; Three waves impacting your data center; Seven questions to ask about your data center; Eight ways you can pave a path to the cloud today; Resources.
Introduction – It's time to put on your belt, suspenders, and overalls: Nearly everything seems uncertain now. With so much out of your control, you need to take a comprehensive approach to business continuity, as well as to infrastructure disruption. As an IT professional, you are a frontline hero responding to the increasing demands of an aging infrastructure and an evolving IT landscape. Amid this rapid change, Dell Technologies is asking you to migrate your data center… again. Asking you to absorb the full operating expenses of a forklift data migration may be good for Dell's business, but it's not necessarily the right thing for your bottom line. It's time to consider your options. In this eBook we'll help you determine where your data center stands, and what you need to have in place so you can pave a path to the cloud today.
Catch the wave, connect your data, connect the world: IT evolution happens in waves. And like ocean waves, one doesn't come to an end before the next one begins. We see the data services market evolving in three waves. First wave: it's all about infrastructure. Users and applications create data, and infrastructure delivers capabilities around capacity, speed, and efficiency.
-
AIX Disaster Recovery with IBM Power Virtual Server
An IBM Systems Lab Services tutorial by Primitivo Cervantes and Vess Natchev ([email protected]).
Table of contents: Chapter 1: Solution Overview – 1.1 Introduction; 1.2 Use Cases (1.2.1 Geographic Logical Volume Manager (GLVM) Replication; 1.2.2 Geographic Logical Volume Manager (GLVM) Replication with PowerHA); 1.3 Solution Components and Requirements (1.3.1 Components; 1.3.2 Requirements); 1.4 Solution Diagram. Chapter 2: Implementation – 2.1 Base IBM Cloud PowerVSI and Networking setup (2.1.1 Open an IBM Cloud account; 2.1.2 Create PowerVS location Service and Subnet(s); 2.1.3 Provision AIX and IBM i VSIs in each PowerVS location; 2.1.4 Order Direct Link Connect Classic to connect PowerVS location to IBM Cloud; 2.1.5 Order two Vyatta Gateways, one in each datacenter; 2.1.6 Request a Generic …)
-
Data Center Disaster Recovery
A Cisco Systems presentation by KwaiSeng, Consulting Systems Engineer (© 2006 Cisco Systems, Inc.).
Agenda: Data Center – The Evolution; Data Center Disaster Recovery (Objectives; Failure Scenarios; Design Options); Components of Disaster Recovery (Site Selection – Front-End GSLB; Server High Availability – Clustering; Data Replication and Synchronization – SAN Extension); Data Center Technology Trends; Summary.
The excerpt also includes slide diagrams covering the evolution of data centers (the consolidation, integration, virtualization, and high-availability phases of the networked data center, from terminal and mainframe computing through client/server to Internet computing, 1960–2010) and today's data center as an integration of many systems and services (front-end, application, and storage networks; security; SAN extension; and distributed data centers).
-
Small Business Disaster Recovery
Quick Guides. When a disaster occurs, businesses must take care of their employees' needs, communicate the impact, address financial matters (e.g., insurance, disaster assistance), restore operations, and organize recovery. Below are resources to help reopen your business and make progress through long-term recovery. For more details, visit www.uschamberfoundation.org/ccc.
Top 10 tips for recovery: 1. Implement your disaster plan; assess damage and consider if a backup location is needed. 2. Shift your team and leadership from preparedness to recovery. 3. Implement a communications strategy to ensure that the facts go directly to employees, suppliers, customers, and the media. 4. Encourage employees to take appropriate actions and communicate. 5. Document damage, file insurance claims, and track recovery. 6. Cultivate partnerships in the community with businesses, government, and nonprofits. 7. Provide employee support and assistance. 8. Connect with chambers of commerce, economic development, and other community support organizations. 9. Document lessons learned and update your plan. 10. Consider disaster assistance; contact the Disaster Help Desk for support at 1-888-MY-BIZ-HELP (1-888-692-4943), or visit www.facebook.com/USCCFhelpdesk or https://twitter.com/USCCFhelpdesk.
Keep detailed records of business activity and the extra expenses of keeping your business operating in a temporary location during the interruption period. If you are forced to close down, include expenses that continue during the time that the business is closed, such as advertising and the cost of utilities.
Recovery resources: The Insurance Information Institute provides a checklist for reopening your business after a disaster.
-
Azure Disaster Recovery-as-a-Service: Plan, Implement, Manage – Help Ensure Speed and Ease of Cloud Recovery Through Proven Azure Disaster Recovery Solutions
Education – IT Professional Services. From power outages to natural disasters or human error, unplanned disruptions can be costly and detrimental to your district's educational and business priorities. In the event of a disaster, you need a reliable, tested, and cost-effective solution to respond with speed and agility while reducing the financial losses that result from these outages. Most school districts do not have a DR strategy or plan because the cost of a secondary data centre is prohibitive and technically very complex. Performing a DR drill can be difficult and carries with it the risk of impacting production workloads. Data loss and application errors are a common occurrence when failing over to a secondary site. In addition, traditional DR requires constant updating and maintenance in order to ensure the environment continues to function and perform optimally, which requires dedicated cycles from IT staff.
Highlights: Leverage the cloud, at an affordable price point, to maintain business continuity. Run disaster recovery drills annually with no business or application impact. Minimize errors when failing over and drastically reduce downtime. Save money by using Azure as your secondary data centre. Realize a tested and proven recovery plan. Protect heterogeneous environments with a single solution: Hyper-V, VMware, and physical servers. Achieve the Ministry mandate for DR.
Minimize risks and avoid costly incidents: IBM K-12's Microsoft Azure Site Recovery is a Disaster Recovery as a Service (DRaaS) solution. It removes the complexity of traditional DR and greatly reduces the cost to implement and maintain it.
-
Infrastructure: NetApp Solutions
NetApp, October 06, 2021. This PDF was generated from https://docs.netapp.com/us-en/netapp-solutions/infra/rhv-architecture_overview.html on October 06, 2021. Always check docs.netapp.com for the latest.
Table of contents: Infrastructure; NVA-1148: NetApp HCI with Red Hat Virtualization; TR-4857: NetApp HCI with Cisco ACI; Workload Performance.
NVA-1148: NetApp HCI with Red Hat Virtualization (Alan Cowles, Nikhil M Kulkarni, NetApp). NetApp HCI with Red Hat Virtualization is a verified, best-practice architecture for the deployment of an on-premises virtual datacenter environment in a reliable and dependable manner. This architecture reference document serves as both a design guide and a deployment validation of the Red Hat Virtualization solution on NetApp HCI. The architecture described in this document has been validated by subject matter experts at NetApp and Red Hat to provide a best-practice implementation for an enterprise virtual datacenter deployment using Red Hat Virtualization on NetApp HCI within your own enterprise datacenter environment.
Use cases: The NetApp HCI for Red Hat OpenShift on Red Hat Virtualization solution is architected to deliver exceptional value for customers with the following use cases: 1. Infrastructure to scale on demand with NetApp HCI; 2. Enterprise virtualized workloads in Red Hat Virtualization.
Value proposition and differentiation of NetApp HCI with Red Hat Virtualization: NetApp HCI provides the following advantages with this virtual infrastructure solution: a disaggregated architecture that allows for independent scaling of compute and storage, and the elimination of virtualization licensing costs and a performance tax on independent NetApp HCI storage nodes …
-
NetApp Global File Cache 1.0.3 User Guide
Important: If you are using Cloud Manager to enable Global File Cache, you should use https://docs.netapp.com/us-en/occm/concept_gfc.html for a step-by-step walkthrough. Cloud Manager automatically provisions the GFC Management Server instance alongside the GFC Core instance and enables entitlement/licensing. You can still use this guide as a reference; Chapters 7 through 13 contain in-depth information and advanced configuration parameters for GFC Core and GFC Edge instances. Additionally, this document includes overall onboarding and application best practices.
Table of contents: 1 Introduction (1.1 The GFC Fabric: Highly Scalable and Flexible; 1.2 Next Generation Software-Defined Storage; 1.3 Global File Cache Software; 1.4 Enabling Global File Cache using NetApp Cloud Manager); 2 NetApp Global File Cache Requirements …
-
NetApp MAX Data
White paper: Intel® Optane™ Persistent Memory – Maximize Your Database Density and Performance. NetApp Memory Accelerated Data (MAX Data) uses Intel Optane persistent memory to provide a low-latency, high-capacity data tier for SQL and NoSQL databases.
Executive summary: To stay competitive, modern businesses need to ingest and process massive amounts of data efficiently and within budget. Database administrators (DBAs), in particular, struggle to reach performance and availability levels required to meet service-level agreements (SLAs). The problem stems from traditional data center infrastructure that relies on limited data tiers for processing and storing massive quantities of data. NetApp MAX Data helps solve this challenge by providing an auto-tiering solution that automatically moves frequently accessed hot data into persistent memory and less frequently used warm or cold data into local or remote storage based on NAND or NVM Express (NVMe) solid state drives (SSDs). The solution is designed to take advantage of Intel Optane persistent memory (PMem), a revolutionary non-volatile memory technology in an affordable form factor that provides consistent, low-latency performance for bare-metal and virtualized single- or multi-tenant database instances. Testing performed by Intel and NetApp (and verified by Evaluator Group) demonstrated how MAX Data increases performance on bare-metal and virtualized systems and across a wide variety of tested database applications, as shown in Table 1. Test results also showed how organizations can meet or exceed SLAs while supporting a much higher number of database instances, without increasing the underlying hardware footprint. This paper describes how NetApp MAX Data and Intel Optane PMem provide low-latency performance for relational and NoSQL databases and other demanding applications.
-
NetApp Cloud Volumes ONTAP: Simple and Fast Data Management in the Cloud
Datasheet. Key benefits: NetApp® Cloud Volumes ONTAP® software lets you control your public cloud storage resources with industry-leading data management. Multiple storage consumption models provide the flexibility that allows you to use just what you need, when you need it. Rapid point-and-click deployment from NetApp Cloud Manager enables you to deploy advanced data management systems on your choice of cloud in minutes.
The challenge: In today's IT ecosystem, the cloud has become synonymous with flexibility and efficiency. When you deploy new services or run applications with varying usage needs, the cloud provides a level of infrastructure flexibility that allows you to pay for what you need, when you need it. With virtual machines, the cloud has become a go-to deployment model for applications that have unpredictable cycles or variable usage patterns and need to be spun up or spun down on demand. Applications with fixed usage patterns often continue to be deployed in a more traditional fashion because of the economics in on-premises data centers. This situation creates a hybrid cloud environment, employing the model that best fits each application. In this hybrid cloud environment, data is at the center. It is the only thing of lasting value. It is the thing that needs to be shared and integrated across the hybrid cloud to deliver business value. It is the thing that needs to be secured, protected, and managed. You need to control what happens to your data no matter where it is.
-
Unified Networking on 10 Gigabit Ethernet: Intel and NetApp Provide a Simple and Flexible Path to Cost-Effective Performance of Next-Generation Storage Networks
Contents: Executive Summary; Simplifying the Network with 10GbE; Evolving with the Data Center; The Promise of Ethernet Storage; Ethernet Enhancements for Storage (Data Center Bridging for Lossless Ethernet; Fibre Channel over Ethernet: Extending Consolidation; Contrasting FCoE Architectures; Open FCoE: Momentum in the Linux* Community); Intel Ethernet Unified Networking (Reliability; Cost-Effectiveness; Scalable Performance; Ease of Use); NetApp Ethernet Storage: Proven I/O Unification Engine; Conclusion.
Executive summary: Unified networking over 10 Gigabit Ethernet (10GbE) in the data center offers compelling benefits, including a simplified infrastructure, lower equipment and power costs, and the flexibility to meet the needs of the evolving, virtualized data center. In recent years, growth in Ethernet-based storage has surpassed that of storage-specific fabrics, driven in large part by the increase in server virtualization. The ubiquity of storage area network (SAN) access will be critical as virtualization deployments continue to grow and new, on-demand data center models emerge. With long histories in Ethernet networking and storage, Intel and NetApp are two leaders in the transition to 10GbE unified networking. This paper explores the many benefits of unified networking, the approaches to enabling it, and the pivotal roles Intel and NetApp are playing in helping to bring important, consolidation-driving technologies to enterprise data center customers.
Simplifying the network with 10GbE: As server virtualization continues to grow, 10GbE and unified networking are simplifying server connectivity …
-
SAP on Google Cloud: Disaster Recovery Strategies
Overview, © 2020 Google LLC. Contents: About this document; Introduction; Disaster recovery; Architecture considerations; Disaster-recovery strategies (Scenario 1: RTO within days, RPO dependent on business function; Scenario 2: RTO less than a day, RPO within minutes; Scenario 3: RTO in minutes, RPO close to zero); Disaster-recovery planning and testing; Recommendations (Capacity planning; Automation; Disaster-recovery plan); Summary; Further reading.
About this document: This document is part of a series about working with SAP on Google Cloud. The series includes the following documents: High availability; Migration strategies; Backup strategies and solutions; Disaster-recovery strategies (this document).
Introduction: This document helps architects to make smart decisions when designing disaster-recovery (DR) architectures and strategies. The document considers not just the criticality of individual solutions, but also the different components of a typical SAP system. A good disaster-recovery strategy begins with a business impact analysis that defines two key metrics: Recovery Time Objective (RTO), how long you can afford to have your business offline; and Recovery Point Objective (RPO), how much data loss you can sustain before you run into compliance issues due to financial losses. For both cases, you must determine the costs to your business while the system is offline or for data loss and re-creation. Typically, the smaller your RTO and RPO values are (that is, the faster your application must recover from an interruption), the more your application will cost to run. A small illustrative sketch of these two metrics follows the list of publications.
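The SAP on Google Cloud entry above defines RTO (how long the business can be offline) and RPO (how much data loss is tolerable). As a purely illustrative aid, and not something taken from the Google paper or any vendor tool, the following minimal Python sketch shows one way those two objectives could be recorded and checked against a measured disaster-recovery capability; all class, field, and system names are hypothetical.

```python
# Illustrative sketch only: hypothetical names, not from the SAP on Google Cloud paper.
from dataclasses import dataclass
from datetime import timedelta


@dataclass
class DrTarget:
    """Business-impact-analysis targets for one system."""
    name: str
    rto: timedelta  # Recovery Time Objective: maximum tolerable downtime
    rpo: timedelta  # Recovery Point Objective: maximum tolerable data loss window


@dataclass
class DrCapability:
    """What the chosen DR architecture actually delivers."""
    failover_time: timedelta         # measured time to bring the system back online
    replication_interval: timedelta  # worst-case age of the last recoverable copy


def meets_targets(target: DrTarget, capability: DrCapability) -> bool:
    # Both objectives must be satisfied for the DR strategy to be acceptable.
    return (capability.failover_time <= target.rto
            and capability.replication_interval <= target.rpo)


if __name__ == "__main__":
    # Roughly Scenario 2 from the excerpt: RTO under a day, RPO within minutes.
    target = DrTarget("SAP ERP (hypothetical)", rto=timedelta(hours=24),
                      rpo=timedelta(minutes=15))
    measured = DrCapability(failover_time=timedelta(hours=4),
                            replication_interval=timedelta(minutes=10))
    print(meets_targets(target, measured))  # True: both objectives are met
```

As the excerpt notes, tightening either value generally raises cost, so a check like this would typically be paired with a cost estimate for each candidate architecture rather than used on its own.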