Set up DB2 for Linux, UNIX, and Windows for high availability using Microsoft Cluster Server: Virtualize DB2 on MSCS using VMware


Jack Xia and Anthony Corrente
December 06, 2012

IBM® DB2® for Linux®, UNIX®, and Windows® has a number of options to provide high availability for production systems, such as high availability and disaster recovery (HADR), DB2 pureScale, and DB2 MSCS. This article shows you how to set up a DB2 cluster in a virtual Microsoft Windows environment, including how to quickly set up a training and testing system in a virtual environment such as the cloud. It will also help you learn and practice DB2 MSCS without special hardware. Finally, simple troubleshooting methods are introduced to help you with common issues. This article is an update of a previously published white paper on IBM.com.

Introduction

DB2 for Linux, UNIX, and Windows has been recognized for providing very resilient data management that achieves high data integrity, reliability, consistency, and availability. It offers the following options for providing high availability for production systems:

• DB2 high availability and disaster recovery (HADR) is a data replication feature that provides a high availability solution for both partial and complete site failures. HADR protects against data loss by replicating data changes from a source database (the primary) to a target database (the standby). HADR is available in all editions of DB2 10 and supports up to three standby nodes for high availability and disaster recovery.
• DB2 pureScale, an optional feature, offers a shared-disk clustering technology that helps provide continuous availability and scalability.
• DB2 MSCS is another option for users who require a high availability solution in a Windows environment.
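With the MSCS option, DB2 provides the db2mscs utility to transform an instance into a clustered instance, driven by a configuration file. The fragment below is an illustrative sketch only, not part of the steps in this excerpt: the keywords are the ones documented for db2mscs, and the instance, group, network name, and IP address are taken from the tables in this article, but the disk resource name, logon account, password, subnet mask, and cluster network name are assumed placeholders that must match your own environment.

```
# db2mscs.cfg -- hypothetical db2mscs input file for the db3 instance
DB2_INSTANCE = db3
CLUSTER_NAME = db2cluster1
# Assumed domain service account and password
DB2_LOGON_USERNAME = db2cls\db2admin
DB2_LOGON_PASSWORD = secret
GROUP_NAME = db2Group
# Assumed name of the shared disk resource holding instance and database
DISK_NAME = Disk S:
INSTPROF_DISK = Disk S:
IP_NAME = db2ip
IP_ADDRESS = 192.168.64.89
# Assumed subnet mask and cluster network name
IP_SUBNET = 255.255.255.0
IP_NETWORK = Cluster Network 1
NETNAME_NAME = db2netname
NETNAME_VALUE = db2clus
NETNAME_DEPENDENCY = db2ip
```

Running db2mscs -f db2mscs.cfg as a domain administrator on the instance-owning node would then create the resource group and move the instance onto the shared disk.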
This article concentrates on the last option, which can be used for training, development, or testing scenarios running VMware images on a single physical Microsoft Windows machine. Figure 1 shows the typical topology of DB2 Enterprise Server Edition (ESE) and MSCS.

© Copyright IBM Corporation 2012

Figure 1. Typical topology

Environment used in this article

In this article, a single physical personal computer was used. Three virtual systems running Windows Server 2008 were created in this environment as follows:

• One of the virtual machines is the domain controller and Microsoft iSCSI Software Target server.
• The other two are domain member servers with the iSCSI initiator installed.

Note: Only the Enterprise and Datacenter editions of Windows Server 2008 support Failover Clustering.

Figure 2 shows the db2cls.com domain consisting of two virtual Windows Server 2008 Enterprise systems (Host 1 - Cluster node 1, Host 2 - Cluster node 2) and one virtual Windows Server 2008 Datacenter system (Host 0 - Domain controller), all connected through a TCP/IP network cloud.

Figure 2. Environment used in this article

Note: The following tables are used as reference throughout this article. Please refer to these details in subsequent sections.

Table 1. Physical machine one
Component          Hardware used in this article
CPU                Intel Core i7 Q720 1.6GHz
Memory             8GB
Disk               500GB
Network            1 network card
Operating system   Windows 7 Professional, 64 bit with SP1

Table 2. Virtual machine one
Item               Detail
Role               Domain controller
Domain name        db2cls.com
Memory             1GB
Disk               1 local OS disk of 15GB
Operating system   Windows Server 2008 R2 Datacenter
IP addresses       Total of 2 IP addresses:
                   • 192.168.64.10
                   • 192.168.109.1

Table 3. Virtual machine two
Item               Detail
Role               Member of domain: db2cls.com
Cluster name       db2cluster1
Memory             1GB
Disks              1 local OS disk of 20GB
                   2 shared virtual disks:
                   • Virtual disk 1 for MSCS quorum, 512MB
                   • Virtual disk 2 for DB2 instance and database, 4GB
Operating system   Windows Server 2008 R2 Enterprise
IP addresses       Total of 2 IP addresses:
                   • 192.168.64.11
                   • 192.168.109.2

Table 4. Virtual machine three
Item               Detail
Role               Member of domain: db2cls.com
Cluster name       db2cluster1
Memory             1GB
Disks              1 local OS disk of 20GB
                   2 shared virtual disks:
                   • Virtual disk 1 for MSCS quorum, 512MB
                   • Virtual disk 2 for DB2 instance and database, 4GB
Operating system   Windows Server 2008 R2 Enterprise
IP addresses       Total of 2 IP addresses:
                   • 192.168.64.12
                   • 192.168.109.3

Table 5. MSCS cluster
Item                  Detail
MSCS cluster name     db2cls.com
Quorum configuration  Node and disk majority
Cluster IP address    192.168.64.100
Disks                 One shared disk for quorum

Table 6. DB2 cluster
Item                   Detail
DB2 MSCS cluster name  db2cluster1.db2cls.com
DB2 cluster instance   db3
Resource group name    db2Group
DB2 cluster host name  db2clus
Disks                  1 disk for both the db3 instance and database
IP address             192.168.64.89

Set up virtual machines and domain

1. Install VMware Player version 4 on your physical machine as per the details in Table 1.
2. Create three Windows Server 2008 R2 VMware virtual machine images as per Table 2, Table 3, and Table 4.
3.
Create the db2cls.com domain on those three Windows Server 2008 R2 VMware images. From HOST0, the two members (HOST1 and HOST2) can be found in the Active Directory Users and Computers window, as shown in Figure 3.

Figure 3. db2cls.com domain with two nodes HOST1, HOST2

Set up shared disks for the cluster on HOST0

For a typical MSCS cluster, at least one shared disk is required for all nodes in the cluster. In this VMware environment, each normal VMware virtual disk can only be accessed by a single VMware image; iSCSI storage overcomes this limit. So in this article, Microsoft iSCSI technology is used to create an iSCSI target with two virtual disks on HOST0, and HOST1 and HOST2 are then set up to connect to it. You need two virtual disks because the cluster needs one, and the DB2 instance and database need another. The Microsoft iSCSI Target software can be downloaded from the Related topics section. The version used in this article is 3.3.

1. As shown in Figure 4, a single iSCSI target host0-disk0 with two virtual disks is used.

Figure 4. Microsoft iSCSI target on HOST0

2. Right-click iSCSI Targets and click Create iSCSI Target, as shown in Figure 5.

Figure 5. Creating a single iSCSI Target

3. Type host0-disk0 in the iSCSI target name field, and type clustOS in the Description field, as shown in Figure 6.

Figure 6. Set the iSCSI target name as host0-disk0

4. Make sure the IP addresses for both HOST1 and HOST2 are added, which allows these two servers to access the virtual disks of the target, as shown in Figure 7.

Figure 7. Add the IP addresses of HOST1, HOST2 to the target

5. Create two virtual disks, as shown in Figure 8: name one Virtual Disk 0 with size 500MB, and the other Virtual Disk 1 with size 4GB.

Figure 8. Create virtual disks for the target

6. For the first virtual disk, Virtual Disk 0, add the description clusterOS (it is used by the cluster itself). Then specify F:\share-0.vhd, as shown in Figure 9.

Figure 9. Set the virtual disk as local file share-0.vhd

7. For the second virtual disk, Virtual Disk 1, add the description DB2 Shared Disk (it is used by the DB2 instance and database), and then specify G:\DB2Data\DB2Disk1.vhd, as shown in Figure 10.

Figure 10. Local file DB2Disk1.vhd for Virtual Disk 1

8. After Virtual Disk 1 is created, there are two files created on HOST0, as shown in Figure 11.

Figure 11. Two files created for Virtual Disk 1

9. As shown in Figure 12, the two virtual disks are ready for the cluster.

Figure 12. Two virtual disks created under host0-disk0

Set up the shared disks on HOST1 and HOST2

1. Select Microsoft iSCSI Initiator from the Start menu of the Windows operating system, as shown in Figure 13.

Figure 13. Start iSCSI Initiator on HOST1

2. In the Connect To Target dialog box, select both check boxes and then click OK, as shown in Figure 14.

Figure 14. Target dialog box

Note: Make sure that both check boxes are selected in the Connect To Target dialog box.

3.
Type host0 in the Target field, and then click Quick Connect, as shown in Figure 15.
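The same discovery and login can also be scripted with the built-in iscsicli command-line tool instead of the graphical initiator. The commands below are a sketch for HOST1, assuming HOST0's address 192.168.64.10 from Table 2; the target IQN shown is a guess at the naming convention and must be replaced with the exact IQN printed by ListTargets.

```shell
REM Register HOST0 as a target portal and discover the targets it exposes.
iscsicli QAddTargetPortal 192.168.64.10
iscsicli ListTargets

REM Log in to the discovered target (substitute the IQN that ListTargets printed).
iscsicli QLoginTarget iqn.1991-05.com.microsoft:host0-host0-disk0-target
```

After a successful login, the two virtual disks appear as new offline disks in Disk Management on HOST1, where they can be brought online, initialized, and formatted before the Failover Cluster is created.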