Setting Up a Highly Available Red Hat Enterprise Virtualization Manager (RHEV 3.0)
Authors: Brandon Perkins, Chris Negus
05/04/2012
INTRODUCTION

This tech brief describes how to configure Red Hat Enterprise Virtualization Manager (RHEV-M) in a two-node, Red Hat Cluster Suite (RHCS) highly available (HA) cluster. To make your RHEV-M highly available, you configure it to run as a service in an HA cluster. RHCS high-availability clusters eliminate single points of failure, so if the node on which a service (in this case, the RHEV-M) is running becomes inoperative, the service can start up again (fail over) on another cluster node without interruption or data loss.

Configuring a Red Hat Enterprise Linux cluster is described in the RHEL 6 Cluster Administration Guide. Refer to that guide for help extending or modifying your cluster:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/index.html

NOTE: Although not strictly required, it is generally better to run at least a three-node cluster. Besides offering extra resources, the additional node makes it less likely that you will end up in a "split-brain" condition, where both nodes believe they control the cluster.

NOTE: The procedures in this tech brief contain several long, complex commands. Consider copying this document, or plain-text copies of it, to the cluster nodes so you can copy and paste commands into the shell.

UNDERSTANDING SYSTEM REQUIREMENTS

There are many different ways of setting up a highly available RHEV-M cluster. In our example, we used the following components:

• Two cluster nodes. Install two machines with Red Hat Enterprise Linux to act as cluster nodes.
• A cluster web user interface. A Red Hat Enterprise Linux system (not on either of the cluster nodes) running the luci web-based high-availability administration application. You want this running on a third system so that, if either node goes down, you can still manage the cluster from another system.
• Network storage. Shared network storage is required. This procedure shows how to use HA LVM from a RHEL 6 system, backed by iSCSI storage. (Fibre Channel and NFS are other technologies you could use instead of iSCSI.)
• Red Hat products. This procedure combines components from Red Hat Enterprise Linux, Red Hat Cluster Suite, and Red Hat Enterprise Virtualization.

Using this information, set up two physical systems as cluster nodes (running ricci), another physical system that holds the cluster manager web user interface (running luci), and a final system that contains the HA LVM storage. Figure 1 shows the basic layout of the systems used to test this procedure:

Figure 1: Example RHEV-M on HA cluster configuration

For our example, we used two NICs on the cluster nodes. We used a 192.168.99.0 network for communication within the cluster and a 192.168.100.0 network facing the RHEV environment. We used a SAN and created a high-availability LVM volume group with multiple logical volumes that are shared by the cluster.

The procedures that follow describe how to set up the cluster nodes, the cluster web user interface, the HA LVM storage, and the clustered service running the RHEV-M.

INSTALLING THE TWO CLUSTER NODES (RICCI)

In this example, the two Red Hat Enterprise Linux 6.2 cluster nodes must:

• Registration: Be registered with RHN.
• Entitlements: Have Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, and Red Hat Cluster Suite entitlements.
• Channels: Be subscribed to the Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) base/parent channel, the RHEL Server Supplementary (v. 6 64-bit x86_64) child channel (which provides both the java-1.6.0-sun and java-1.6.0-sun-devel packages), and the RHEL Server High Availability (v. 6 for 64-bit x86_64) child channel (which provides ricci and all supporting clustering packages).
• Security: Run with SELinux in Enforcing mode, using the targeted policy, and have iptables packet filtering started and persisted. (Recommended)
• Storage: Have some form of shared block device, such as Fibre Channel or iSCSI, between the two nodes. For our example, we assume a shared iSCSI device that appears as /dev/sdb is configured on both nodes before configuring the storage portion of the cluster. (A quick way to confirm the device is visible follows this list.)
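As a quick check before continuing, you can confirm that both nodes actually see the same shared block device. The commands below are a minimal sketch, assuming an iSCSI target that is already exported to both nodes; <target-host> is a placeholder for the address of your storage server, and the device name may differ from /dev/sdb in your environment. Run them on each node:

# iscsiadm -m discovery -t sendtargets -p <target-host>
# iscsiadm -m node --login
# iscsiadm -m session
# lsblk /dev/sdb

If the last command lists the same device on both nodes, the shared storage is in place for the HA LVM configuration later in this procedure.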
Here are the basic steps you need to run to install both cluster node systems:

1. Install RHEL. On both cluster nodes, install a Red Hat Enterprise Linux 6 Server system (a base install is all that is needed at this point). In testing, we used a RHEL 6.2 Basic Server installation. Because these systems must run the Red Hat Enterprise Virtualization Manager (RHEV-M), they must meet the hardware and software requirements described in the RHEV Installation Guide (http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html-single/Installation_Guide/index.html#chap-Installation_Guide-Installing_the_RHEV_Manager-Manager). Running rhevm-setup and related procedures (such as setting up firewall rules) is covered later in this document.

2. Configure networking. How you set up networking is up to you and your environment. In our example, to make sure application traffic doesn't interfere with cluster traffic, we put the cluster on a separate network with static IP addresses (eth1). On the RHEV-facing network, we assume there is a properly configured DHCP server (eth0). Here's what our network configuration files look like:

• On both cluster nodes (node1 and node2) we added these lines to the /etc/hosts file:

192.168.99.1 node1.example.com
192.168.99.2 node2.example.com
192.168.99.3 luci.example.com

• On node1, we set the hostname, DHCP for the eth0 interface, and a static IP address for the eth1 interface in several files, as noted below:

/etc/sysconfig/network
HOSTNAME=node1.example.com

/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet

/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.99.1
NETMASK=255.255.255.0

• On node2, we also set the hostname, DHCP for eth0, and a static IP address for eth1 in several files, as noted below:

/etc/sysconfig/network
HOSTNAME=node2.example.com

/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet

/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.99.2
NETMASK=255.255.255.0

With that configured, we started the network service:

# service NetworkManager stop
# chkconfig NetworkManager off
# service network start
# chkconfig network on
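Before moving on, it is worth confirming that the cluster network works as intended. The following is just a quick sanity check, assuming the example hostnames and addresses shown above; run it from node1, then repeat in the other direction from node2:

# ip addr show eth1
# ping -c 3 node2.example.com
# ping -c 3 luci.example.com

The first command should show the static 192.168.99.x address on the cluster interface, and the pings confirm that the names in /etc/hosts resolve and that the cluster network passes traffic.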
3. Register RHEL. Register all systems with Red Hat Network. Here's one way to do that:

# rhnreg_ks --activationkey="your_activation_key"

4. Add channels to each node. On both cluster nodes, run the rhn-channel command to add each required channel (High Availability, Supplementary, RHEVM, and JBoss). For example:

# rhn-channel --add --channel=rhel-x86_64-server-supplementary-6
# rhn-channel --add --channel=rhel-x86_64-server-ha-6
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3
# rhn-channel --add --channel=jbappplatform-5-x86_64-server-6-rpm

5. Install ricci. On both cluster nodes, install the ricci package:

# yum -y install ricci

6. Update packages. On both cluster nodes, run yum update to make sure that all your packages are up to date. Reboot if there is a new kernel.

# yum update
# reboot

7. Open firewall. On both cluster nodes, open up the necessary ports in iptables. Assuming your firewall is currently configured, with the proper rules installed in the running kernel, run these commands to create a custom chain of firewall rules (called RHCS) and save those changes permanently:

# iptables -N RHCS
# iptables -I INPUT 1 -j RHCS
# iptables -A RHCS -p udp --dst 224.0.0.0/4 -j ACCEPT
# iptables -A RHCS -p igmp -j ACCEPT
# iptables -A RHCS -m state --state NEW -m multiport -p tcp --dports \
40040,40042,41040,41966,41967,41968,41969,14567,16851,11111,21064,50006,\
50008,50009,8084 -j ACCEPT
# iptables -A RHCS -m state --state NEW -m multiport -p udp --dports \
6809,50007,5404,5405 -j ACCEPT
# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]

8. Start ricci. On both cluster nodes, start the ricci daemon and configure it to start on boot:

# chkconfig ricci on
# service ricci start
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]

9. Set password. On both cluster nodes, set the ricci user password:

# passwd ricci
Changing password for user ricci.
New password: **********
Retype new password: **********
passwd: all authentication tokens updated successfully.

At this point, both cluster nodes should be running the ricci server and be ready to be managed by the cluster web user interface (luci), as described in the next section.

INSTALLING THE CLUSTER WEB USER INTERFACE (LUCI)

On the RHEL system you have chosen to run the cluster web user interface (luci), enable the same channels you did for the cluster nodes (see the example below).
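As a sketch of that step, and assuming the luci system is registered with RHN in the same way as the cluster nodes, the channel subscriptions mirror those from step 4; adjust the list to match your entitlements:

# rhn-channel --add --channel=rhel-x86_64-server-supplementary-6
# rhn-channel --add --channel=rhel-x86_64-server-ha-6
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3
# rhn-channel --add --channel=jbappplatform-5-x86_64-server-6-rpm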