Setting Up a Highly Available Enterprise Virtualization Manager (RHEV 3.0)
Authors: Brandon Perkins, Chris Negus
05/04/2012

INTRODUCTION

This tech brief describes how to configure Red Hat Enterprise Virtualization Manager (RHEV-M) in a two-node, Red Hat Cluster Suite (RHCS) highly available (HA) cluster. To make your RHEV-M highly available, you configure it to run as a service in an HA cluster. RHCS high-availability clusters eliminate single points of failure, so if the node on which a service (in this case, the RHEV-M) is running becomes inoperative, the service can start up again (fail over) on another cluster node without interruption or data loss. Configuring a cluster is described in the RHEL 6 Cluster Administration Guide. Refer to that guide for help extending or modifying your cluster:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/index.html

NOTE: Although not strictly required, it is generally better to run at least a three-node cluster. Besides offering extra resources, the additional node makes it less likely you will end up in a "split-brain" condition, where both nodes believe they control the cluster.

NOTE: The procedures in this tech brief contain several long, complex commands. Consider copying this document, or plain text copies of it, to the cluster nodes so you can copy and paste commands into the shell.

UNDERSTANDING SYSTEM REQUIREMENTS

There are many different ways of setting up a high availability RHEV-M cluster. In our example, we used the following components:

• Two cluster nodes. Install two machines with Red Hat Enterprise Linux to act as cluster nodes.
• A cluster web user interface. A Red Hat Enterprise Linux system (not on either of the cluster nodes) running the luci Web-based high-availability administration application. You want this running on a third system, so if either node goes down, you can still manage the cluster from another system.
• Network storage. Shared network storage is required. This procedure shows how to use HA LVM from a RHEL 6 system, which is backed by iSCSI storage. (Fibre Channel and NFS are other technologies you could use instead of iSCSI.)
• Red Hat products. This procedure combines components from Red Hat Enterprise Linux, Red Hat Cluster Suite, and Red Hat Enterprise Virtualization.

Using this information, set up two physical systems as cluster nodes (running ricci), another physical system that holds the cluster manager web user interface (running luci), and a final system that contains the HA LVM storage. Figure 1 shows the basic layout of the systems used to test this procedure:

Figure 1: Example RHEV-M on HA cluster configuration

For our example, we used two NICs on the cluster nodes. We used a 192.168.99.0 network for communication within the cluster and a 192.168.100.0 network facing the RHEV environment. We used a SAN and created a high-availability LVM with multiple logical volumes that are shared by the cluster. The procedures that follow describe how to set up the cluster nodes, cluster web user interface, HA LVM storage, and the clustered service running the RHEV-M.

INSTALLING THE TWO CLUSTER NODES (RICCI)

In this example, the two Red Hat Enterprise Linux 6.2 cluster nodes must:

• Registration: Be registered with RHN.
• Entitlements: Have Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, and Red Hat Cluster Suite entitlements.
• Channels: Be subscribed to the Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) base/parent channel, the RHEL Server Supplementary (v. 6 64-bit x86_64) child channel (which provides both the java-1.6.0-sun and java-1.6.0-sun-devel packages), and the RHEL Server High Availability (v. 6 for 64-bit x86_64) child channel (which provides ricci and all supporting clustering packages).
• Security: Run with SELinux in Enforcing mode, using the targeted policy, and have iptables packet filtering started and persisted. (Recommended)
• Storage: Have some form of shared block device, such as Fibre Channel or iSCSI, between the two nodes. For our example, we assume a shared iSCSI device that appears as /dev/sdb is configured on both nodes before configuring the storage portion of the cluster.

Here are the basic steps you need to run to install both cluster node systems:

1. Install RHEL. On both cluster nodes, install a Red Hat Enterprise Linux 6 Server system (a base install is all that is needed at this point). In testing, we used a RHEL 6.2 Basic Server installation. Because these systems must run the Red Hat Enterprise Virtualization Manager (RHEV-M), they must meet the hardware and software requirements described in the RHEV Installation Guide (http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html-single/Installation_Guide/index.html#chap-Installation_Guide-Installing_the_RHEV_Manager-Manager). Running rhevm-setup and related procedures (such as setting up firewall rules) is covered later in this document.

2. Configure networking. How you set up networking is up to you and your environment. In our example, to make sure application traffic doesn't interfere with cluster traffic, we put the cluster on a separate network with static IP addresses (eth0). On the RHEV network, we assume there is a properly configured DHCP server (eth1). Here's what our network configuration files look like:

• On both cluster nodes (node1 and node2) we added these lines to the /etc/hosts file:

192.168.99.1 node1.example.com
192.168.99.2 node2.example.com
192.168.99.3 luci.example.com

• On node1, we set the hostname, a static IP address for eth0, and DHCP for eth1 in several files, as noted below:

/etc/sysconfig/network
HOSTNAME=node1.example.com

/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.99.1
NETMASK=255.255.255.0

/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet

• On node2, we also set the hostname, a static IP address for eth0, and DHCP for eth1 in several files, as noted below:


/etc/sysconfig/network
HOSTNAME=node2.example.com

/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.99.2
NETMASK=255.255.255.0

/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=??:??:??:??:??:??
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet

With that configured, we started the network service:

# service NetworkManager stop
# chkconfig NetworkManager off
# service network start
# chkconfig network on
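To confirm that addressing came up as expected, you can check each interface (a quick optional sanity check; the exact output will vary by environment):

# ip addr show eth0
# ip addr show eth1

The eth0 interface should show the static 192.168.99.x address and eth1 should show an address assigned by the DHCP server on the RHEV network.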

3. Register RHEL. Register all systems with RHN. Here's one way to do that:

# rhnreg_ks --activationkey="your_activation_key"

4. Add channels to each node. On both cluster nodes, for each required channel (High Availability, Supplementary, RHEVM, and JBoss channels), run the rhn-channel command to add the necessary channels. For example:

# rhn-channel --add --channel=rhel-x86_64-server-supplementary-6
# rhn-channel --add --channel=rhel-x86_64-server-ha-6
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevm-3
# rhn-channel --add --channel=jbappplatform-5-x86_64-server-6-rpm
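If you want to confirm the subscriptions took effect, rhn-channel can also list the channels the system is currently subscribed to:

# rhn-channel --list

All of the channels added above should appear in the output along with the base channel.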

5. Install ricci. On both cluster nodes, install the ricci RPM:

# yum -y install ricci

6. Update packages. On both cluster nodes, run yum update to make sure that all your packages are up to date. Reboot if there is a new kernel.

# yum update
# reboot

7. Open firewall: On both cluster nodes, open up the necessary ports in iptables. Assuming your firewall is currently configured, with the proper rules installed in the running kernel, run these commands to create a custom chain of firewall rules (called RHCS) and save those changes permanently:

# iptables -N RHCS
# iptables -I INPUT 1 -j RHCS
# iptables -A RHCS -p udp --dst 224.0.0.0/4 -j ACCEPT
# iptables -A RHCS -p igmp -j ACCEPT
# iptables -A RHCS -m state --state NEW -m multiport -p tcp --dports \
40040,40042,41040,41966,41967,41968,41969,14567,16851,11111,21064,50006,\
50008,50009,8084 -j ACCEPT
# iptables -A RHCS -m state --state NEW -m multiport -p udp --dports \
6809,50007,5404,5405 -j ACCEPT
# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
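To double-check that the chain is in place and was saved (an optional verification, not part of the original procedure), you can list it from the running kernel and look for it in the saved rules file:

# iptables -L RHCS -n
# grep RHCS /etc/sysconfig/iptables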

8. Start ricci. On both cluster nodes, start the ricci daemon and configure it to start on boot:

# chkconfig ricci on
# service ricci start
Starting oddjobd:                       [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                         [  OK  ]

9. Set password. On both cluster nodes, set the ricci user password:

# passwd ricci
Changing password for user ricci.
New password: **********
Retype new password: **********
passwd: all authentication tokens updated successfully.

At this point both cluster nodes should be running the ricci service and be ready to be managed by the cluster web user interface (luci), as described in the next section.
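As a quick check (not strictly required), you can verify on each node that ricci is listening on its default TCP port, 11111, which is one of the ports opened in the RHCS firewall chain above:

# netstat -tlnp | grep 11111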

INSTALLING THE CLUSTER WEB USER INTERFACE (LUCI)

On the RHEL system you have chosen to run the cluster web user interface (luci), enable the same channels you did for the cluster nodes. Then run the following steps to install and configure luci:

1. Install luci. Install the luci RPMs:

# yum -y install luci

2. Start luci. Start the luci daemon:

# chkconfig luci on
# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `luci.example.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
Writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                     [  OK  ]
Start luci...                           [  OK  ]
Point your web browser to https://luci.example.com:8084 (or equivalent) to access luci

3. Login to luci. As instructed by the start-up script, point your web browser to the address shown (https://luci.example.com:8084 in this example) and log in as the root user, as prompted.

4. Name the cluster. Select Manage Clusters -> Create, then fill in the Cluster Name (for example, RHEVMCluster).

5. Identify cluster nodes. Fill in the Node Name (fully-qualified domain name or name in /etc/hosts) and Password (the password for the user ricci) for the first cluster node. Click the Add Another Node button and add the same information for the second cluster node. (Repeat if you decided to create more than two nodes.)

6. Add cluster options. Select the following options, then click the Create Cluster button:

• Use the Same Password for All Nodes: Select this check box.
• Download Packages: Select this radio button.
• Reboot Nodes Before Joining Cluster: Select this check box.
• Enable Shared Storage Support: Leave this unchecked.

After you click the Create Cluster button, if the nodes can be contacted, luci will set up each cluster node, downloading packages as needed, and add each node to the cluster. When each node is set up, the High Availability Management screen appears as shown in Figure 2:

Figure 2: Nodes successfully added to RHEVMCluster

7. Create failover domain. Click the Failover Domains tab. Click the Add button and fill in the following information as prompted:

• Name. Fill in any name you like (such as prefer_node1).
• Prioritized. Check this box.
• Restricted. Check this box.
• Member. Click the Member box for each node.
• Priority. Under the Priority column, add a "1" for node1 and a "2" for node2.

Click Create to apply the changes to the failover domain.

8. Add Fence Devices. Configure appropriate fence devices for the hardware you have. Add a fence device and instance for each node. These settings will be particular to your hardware and software configuration. Refer to the Cluster Administration Guide for help with configuring fence devices:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Cluster_Administration/index.html

CREATING A HIGHLY AVAILABLE LVM (HA LVM)

For this procedure, we chose to implement a Highly Available LVM (HA LVM) configuration. The HA LVM setup is appropriate for failover configurations (while a Clustered LVM is best when a clustered application runs simultaneously on multiple nodes). The HA LVM provides LVM failover by:

• Providing a mirroring mechanism between two SAN-connected systems
• Allowing a system to take over serving content from a system that fails

To set up HA LVM failover (using the original method), perform the following steps:

1. Configure block storage. If you haven't already done so, configure a shared storage device that contains at least 20G of space. You will use 11G in this procedure. For our test we used a shared iSCSI device that appeared as /dev/sdb on both nodes. (You can use Fibre Channel or any other networked storage medium that results in that medium being available as a device on both nodes.)

• iSCSI target system: Here is what we did on a system configured to share a storage device as an iSCSI target. We started with a blank disk attached to a RHEL 6 system where the disk appeared as /dev/sdc. After installing the RHEL system and the extra hard disk, run the following:

# yum install scsi-target-utils

Next, create an entry in the /etc/tgt/targets.conf file that looks similar to the following:
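The original example entry is not reproduced here, but a minimal sketch for this setup (assuming the target name used later in this procedure and the blank /dev/sdc disk mentioned above) would look something like this:

<target iqn.2012-03.com.example.node3:mystore>
    backing-store /dev/sdc
</target>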

After that, restart the tgtd service and set it to start at each reboot:

# chkconfig tgtd on
# service tgtd start

• Attach storage to node1 and node2. On both nodes, you need to install iscsi-initiator-utils, then discover the iSCSI target and log in to the initiator.

# yum install iscsi-initiator-utils
# iscsiadm -m discovery -t st -p 192.168.0.104
192.168.0.104:3260,1 iqn.2012-03.com.example.node3:mystore
# iscsiadm -m node -T iqn.2012-03.com.example.node3:mystore -p 192.168.0.104 -l
Logging in to [iface: default, target: iqn.2012-03.com.example.node3:mystore, portal: 192.168.0.104,3260] (multiple)
Login to [iface: default, target: iqn.2012-03.com.example.node3:mystore, portal: 192.168.0.104,3260] successful.
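To confirm the login worked and the new block device is visible on each node (an optional check; the device should appear as /dev/sdb in this example), you can look at the active iSCSI session and the kernel's partition list:

# iscsiadm -m session
# cat /proc/partitions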

2. Check locking type (both nodes). Ensure that the parameter "locking_type" in the global section of /etc/lvm/lvm.conf is set to the value "1" on both nodes:

# lvmconf --disable-cluster
# grep ' locking_type = ' /etc/lvm/lvm.conf
    locking_type = 1

3. Create LVM partition (node 1). Create the logical volumes and file systems using standard LVM2 and file system commands on only one node. First create the LVM partition. This example assumes a whole second disk (/dev/sdb or whatever device name was assigned to the iSCSI device) is being used as a new LVM physical volume. Here we use fdisk to create the LVM (8e) partition type and run partx -a to make sure the change is synced with the kernel:

# fdisk -cu /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-20480, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-20480, default 20480):
Using default value 20480

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4b0c2d8d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       20480    20971504   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

# partx -a /dev/sdb

4. Create the LVM physical volume (node 1). Still on the first cluster node, identify the partition as an LVM physical volume:

# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created

5. Create the LVM volume group (node 1). Create the volume group from the new LVM physical volume (named RHEVMVolGroup in this example):

# vgcreate RHEVMVolGroup /dev/sdb1
  Volume group "RHEVMVolGroup" successfully created

6. Create LVM logical volumes and file systems (on node 1). Table 1 shows the logical volumes you create to be used as shared storage between the nodes.

Table 1 - Logical volumes used for shared storage between cluster nodes

Logical Volume Name       Size    Description
lv_pgsql                  5.00g   Contains the PostgreSQL database
lv_jbossas                2.00g   Contains JBoss data
lv_rhevm                  1.00g   Contains RHEV-M data
lv_rhevm-dwh              1.00g   Contains rhevm-dwh data
lv_rhevm-reports          1.00g   Contains RHEV-M reports
lv_rhevm-reports-server   1.00g   Contains RHEV-M reports server data

Here are the commands for creating those LVM logical volumes and adding ext4 file systems to them. Notice that two "for" loops are used: one to create the like-sized volumes and one to add a file system to each logical volume:

# lvcreate -L5.00g -n lv_pgsql RHEVMVolGroup
  Logical volume "lv_pgsql" created
# lvcreate -L2.00g -n lv_jbossas RHEVMVolGroup
  Logical volume "lv_jbossas" created
# for i in rhevm rhevm-dwh rhevm-reports rhevm-reports-server; do \
    lvcreate -L1.00g -n lv_$i RHEVMVolGroup; done
  Logical volume "lv_rhevm" created
  Logical volume "lv_rhevm-dwh" created
  Logical volume "lv_rhevm-reports" created
  Logical volume "lv_rhevm-reports-server" created
# for i in $(ls -1 /dev/RHEVMVolGroup/lv_*); do mkfs.ext4 $i; done
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
...
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

You now have six logical volumes, with device names of the form /dev/RHEVMVolGroup/lv_*.
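If you want to confirm that each logical volume received an ext4 file system, blkid can report the detected file system type on each device (a quick optional check):

# blkid /dev/RHEVMVolGroup/lv_*

Each device listed should show TYPE="ext4".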

SETTING UP SHARED RESOURCES (FROM LUCI)

This section creates the RHEV-M cluster service (called rhevm in this example). Open the High Availability Management page (luci) again from your Web browser (for example, https://luci.example.com:8084). From that interface, you will:

• Identify the IP address for this cluster
• Identify the rhevm service and add the IP address to the service
• Identify and configure the high-availability LVM

At several points, you will run commands from the cluster nodes (ricci) to make sure that the configuration you set from luci is working.

Identifying the Cluster Service's IP Address

From luci, do the following to identify the cluster service's IP address:

1. Select the cluster. Click on the cluster name (for example, RHEVMCluster).
2. Add an IP address resource. Select the Resources tab, then click Add and choose IP Address.
3. Fill in IP address information. Enter the following:
• IP Address. Fill in a valid IP address. Ultimately, this IP address (192.168.100.3 in our example) is used from a Web browser to access the RHEV-M (for example, https://192.168.100.3:8443).
• Monitor Link. Check this box.
4. Submit information. Click the Submit button.

Creating the rhevm Service

From luci, with the cluster still selected, add a new service and associate the IP address to it as follows:

5. Add a Service Group. Click on the Service Groups tab and select Add.
6. Fill in Service Group information.
• Service name. Assign a name to the service (for example, rhevm).
• Automatically start this service. Check this box.
• Failover Domain. Select the prefer_node1 domain you created earlier.
• Recovery Policy. Select Relocate.
7. Add the IP address resource. Select the Add Resource button. Then select the IP Address you added earlier.
8. Submit information. Click the Submit button. In a few seconds, the rhevm service should start up.

Testing if rhevm Service Can Be Reached

From each of the cluster nodes, check the status of the rhevm service you created. Then, from any machine on the network, check that you can reach the cluster from the IP address you just assigned.

1. From a shell on each of the cluster nodes verify that the service group is running:

# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node1.example.com              started

2. From a shell from anywhere that can reach the cluster over the network, verify that you can ping the IP address associated with the IP Address resource created in the previous section (press Ctrl+C to end the ping):

# ping 192.168.100.3

PING 192.168.100.3 (192.168.100.3) 56(84) bytes of data.
64 bytes from 192.168.100.3: icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from 192.168.100.3: icmp_seq=2 ttl=64 time=0.069 ms
^C
--- 192.168.100.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 5426ms
rtt min/avg/max/mdev = 0.069/0.075/0.084/0.010 ms

Creating the High Availability LVM (luci)

Back at the High Availability Management page (luci), you can identify the high-availability LVM resources as follows:

1. Add HA LVM resource. Click on the Resources tab, select Add, and select HA LVM as the resource type.
2. Fill in HA LVM information:
• Name. Fill in the name (for example, RHEVM HA LVM).
• Volume group name. Identify the volume group created earlier (for example, RHEVMVolGroup).
• Logical volume name. Leave this field blank.
• Fence the node if it is unable to clean up LVM tags. Click this box so a checkmark appears.
3. Submit information. Press the Submit button. The RHEVM HA LVM global resource should appear under the Resources tab.
4. Service Groups. Click the Service Groups tab.
5. Select rhevm Service. Click on the rhevm service you added earlier under the Name field.
6. Add Resource. Near the bottom of the page, select the Add Resource button, then select the RHEVM HA LVM resource you just added from the drop-down box.
7. Submit information. Press the Submit button. The RHEVM HA LVM global resource should appear associated with the rhevm service group.

Configuring the High Availability LVM (cluster nodes)

You need to configure the lvm.conf file on each of the cluster nodes, then create a new initial RAM disk on those nodes and reboot.

1. Edit the "volume_list" field in /etc/lvm/lvm.conf. Include the name of your root volume group (VolGroup) and the hostname of the local system as listed in /etc/cluster/cluster.conf, preceded by @. Note that this string MUST match the node name given in /etc/cluster/cluster.conf. So if you used an IP address as the local system name, enter that here. Here is a sample entry from /etc/lvm/lvm.conf on node1:

volume_list = [ "vg_node1", "@node1.example.com" ]

Here is a sample entry from /etc/lvm/lvm.conf on node2:

volume_list = [ "vg_node2", "@node2.example.com" ]

This tag will be used to activate shared volume groups or logical volumes. DO NOT include the names of any volume groups that are to be shared using HA-LVM.

2. Update the initial RAM disk (initrd) on all your cluster nodes. (The initial RAM disk must be newer than the cluster.conf file.) To do this, make sure you are running the kernel you intend to use, then type the following command:

# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
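Since the requirement is that the initial RAM disk be newer than cluster.conf, a quick optional sanity check is to compare the two timestamps:

# ls -l /etc/cluster/cluster.conf /boot/initramfs-$(uname -r).img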

3. Reboot all nodes to ensure the correct initial RAM disk is in use:

# reboot

Testing that HA LVM is Available

1. Following the reboot, from a shell on one of the cluster nodes, find which node the service group is running on:

# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node1.example.com              started

2. On both nodes, verify the physical volume "/dev/sdb1" on the shared storage appears by typing:

# pvs
  PV         VG            Fmt  Attr PSize  PFree
  /dev/sdb1  RHEVMVolGroup lvm2 a-   20.00g 10.00g
  /dev/vda2  VolGroup      lvm2 a-    9.51g      0

The physical volume should be backing the RHEVMVolGroup with an lvm2 format and allocatable (a-) attribute. 3. On both nodes, verify that the volume group RHEVMVolGroup appears:

# vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  RHEVMVolGroup   1   6   0 wz--n- 20.00g 10.00g
  VolGroup        1   2   0 wz--n-  9.51g      0

4. On the node currently running the rhevm service group, the volume group should have the one physical volume, six logical volumes, and the following attributes:

• writeable permission (w)
• resizeable (z)
• normal allocation policy (n)

To verify that six logical volumes appear, type the following on the node currently running the rhevm service group:

# lvs
  LV                      VG            Attr   LSize Origin Snap% Move ...
  lv_pgsql                RHEVMVolGroup -wi-a- 5.00g
  lv_rhevm                RHEVMVolGroup -wi-a- 1.00g
  lv_rhevm-dwh            RHEVMVolGroup -wi-a- 1.00g
  lv_rhevm-reports        RHEVMVolGroup -wi-a- 1.00g
  lv_rhevm-reports-server RHEVMVolGroup -wi-a- 1.00g
  lv_jbossas              RHEVMVolGroup -wi-a- 2.00g
  lv_root                 VolGroup      -wi-ao 5.57g
  lv_swap                 VolGroup      -wi-ao 3.94g

The logical volumes should be backed by the RHEVMVolGroup, and have the following attributes:

• writeable permission (w)
• inherited allocation policy (i)
• active state (a)

5. On the node NOT currently running the rhevm service group, verify that six logical volumes appear:

# lvs
  LV                      VG            Attr   LSize Origin ...
  lv_pgsql                RHEVMVolGroup -wi--- 5.00g
  lv_rhevm                RHEVMVolGroup -wi--- 1.00g
  lv_rhevm-dwh            RHEVMVolGroup -wi--- 1.00g
  lv_rhevm-reports        RHEVMVolGroup -wi--- 1.00g
  lv_rhevm-reports-server RHEVMVolGroup -wi--- 1.00g
  lv_jbossas              RHEVMVolGroup -wi--- 2.00g
  lv_root                 VolGroup      -wi-ao 5.57g
  lv_swap                 VolGroup      -wi-ao 3.94g

The logical volumes should be backed by the RHEVMVolGroup, and have the following attributes (note that these logical volumes should NOT be in the active state):

• writeable permission (w)
• inherited allocation policy (i)

6. Relocate the "rhevm" service group:

# clusvcadm -r rhevm
Trying to relocate service:rhevm...Success
service:rhevm is now running on node2.example.com

Run test items 4 and 5 above again to verify that the logical volumes are in the active state on the node running the service group and not in the active state on the node that is not running the service group.

ADD MOUNT POINTS AND SET FILE SYSTEMS TO BE MOUNTED

Before adding the filesystem resources, you need to create all six mount points on the two nodes. Once the mount points exist, you need to create file system resources that will ultimately mount those file systems in the proper locations. Here's how to do those things:

1. Create mount points. On both nodes, run the following two for loops to create the needed directories, then list them to make sure they exist:

# for i in /var/lib/pgsql /usr/share/rhevm /usr/share/rhevm-dwh \
    /usr/share/rhevm-reports /usr/share/rhevm-reports-server \
    /var/lib/jbossas; do mkdir -p $i; done

# for i in /var/lib/pgsql /usr/share/rhevm /usr/share/rhevm-dwh \
    /usr/share/rhevm-reports /usr/share/rhevm-reports-server \
    /var/lib/jbossas; do ls -d $i; done

/var/lib/pgsql
/usr/share/rhevm
/usr/share/rhevm-dwh
/usr/share/rhevm-reports
/usr/share/rhevm-reports-server
/var/lib/jbossas

2. Add file system resources. You need to create a separate file system resource in the cluster for each shared logical volume. From luci, use information about each file system listed in Table 2 to do the bulleted procedure that follows.

Table 2 - LVM high availability file system device names and mount points

Name                     FS Type  Mount point                      Device, FS label, or UUID
lv_pgsql                 ext4     /var/lib/pgsql                   /dev/mapper/RHEVMVolGroup-lv_pgsql
lv_rhevm                 ext4     /usr/share/rhevm                 /dev/mapper/RHEVMVolGroup-lv_rhevm
lv_rhevm-dwh             ext4     /usr/share/rhevm-dwh             /dev/mapper/RHEVMVolGroup-lv_rhevm-dwh
lv_rhevm-reports         ext4     /usr/share/rhevm-reports         /dev/mapper/RHEVMVolGroup-lv_rhevm-reports
lv_rhevm-reports-server  ext4     /usr/share/rhevm-reports-server  /dev/mapper/RHEVMVolGroup-lv_rhevm-reports-server
lv_jbossas               ext4     /var/lib/jbossas                 /dev/mapper/RHEVMVolGroup-lv_jbossas

• From luci, select the Resources tab, then select Add.
• From the Add Resource to Cluster pop-up, select Filesystem from the drop-down box. Then add the following information to the Filesystem entry:
  • Enter the Name from the table above.
  • Enter the Filesystem type from the table above.
  • Enter the Mount point from the table above.
  • Enter the Device, FS label, or UUID from the table above.
  • Leave Mount options and Filesystem ID (optional) blank.
  • Check the Reboot host node if unmount fails box.
  • Press the Submit button.
• Return to the Resources tab and repeat the previous bullet points for each filesystem in Table 2.

3. Add file system resources to rhevm. Once all six filesystem resources are entered, add each of them to the rhevm service group:

• From luci, select the Service Groups tab.
• Under the Name column, click on the rhevm service group name.
• At the bottom of the page, click the Add Resource button.
• Click the Select a Resource Type box. All six filesystem resources you added should appear on the list.
• Click on one of the filesystem entries from the list that you added in the previous step.

• Repeat this entire step until all six filesystem resources are added.
• Press the Submit button.

4. Test filesystem is available (where rhevm is running). Login to the node currently running the rhevm service. Check that the six logical volume ext4 filesystems are mounted read/write on the node running the rhevm service:

# cat /etc/mtab
/dev/mapper/VolGroup-lv_root / ext4 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs rw,rootcontext="system_u:object_r:tmpfs_t:s0" 0 0
/dev/vda1 /boot ext4 rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
none /sys/kernel/config configfs rw 0 0
/dev/mapper/RHEVMVolGroup-lv_pgsql /var/lib/pgsql ext4 rw 0 0
/dev/mapper/RHEVMVolGroup-lv_rhevm /usr/share/rhevm ext4 rw 0 0
/dev/mapper/RHEVMVolGroup-lv_rhevm--dwh /usr/share/rhevm-dwh ext4 rw 0 0
/dev/mapper/RHEVMVolGroup-lv_rhevm--reports /usr/share/rhevm-reports ext4 rw 0 0
/dev/mapper/RHEVMVolGroup-lv_rhevm--reports--server /usr/share/rhevm-reports-server ext4 rw 0 0
/dev/mapper/RHEVMVolGroup-lv_jbossas /var/lib/jbossas ext4 rw 0 0

5. Test filesystem is not available (where rhevm is not running). Check that the six logical volume ext4 filesystems are NOT mounted on the node NOT running the rhevm service:

# cat /etc/mtab
/dev/mapper/VolGroup-lv_root / ext4 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs rw,rootcontext="system_u:object_r:tmpfs_t:s0" 0 0
/dev/vda1 /boot ext4 rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
none /sys/kernel/config configfs rw 0 0

INSTALLING RED HAT ENTERPRISE VIRTUALIZATION MANAGER (RHEV-M)

The cluster nodes are ready for you to start installing the RHEV-M software. This procedure generally follows the RHEV-M installation procedure in the Red Hat Enterprise Virtualization Installation Guide:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Virtualization/3.0/html-single/Installation_Guide/index.html

We have repeated the procedure here (more tersely) because it needs to diverge from the standard procedure in some places. Refer to that guide if you need more details for a certain step. Refer to the "Troubleshooting Red Hat Enterprise Virtualization Manager Installation (RHEV 3.0)" tech brief if you run into problems installing your RHEV-M:

https://access.redhat.com/knowledge/techbriefs/troubleshooting-red-hat-enterprise-virtualization-manager-installation-rhev-30

Install rhevm

1. Install rhevm packages (all cluster nodes). On both cluster nodes, install all the packages needed to run RHEV-M:

# yum -y install rhevm

This will pull in a number of dependencies (such as JBoss packages), including hundreds of RPMs, so the process may take some time.

2. Delete shared directory contents (all nodes EXCEPT current rhevm). Remove the contents of the shared filesystem directories from the cluster node(s) NOT currently running the rhevm service group. The following commands remove the directories (and their contents), create new empty directories, and check that they exist:

# for i in /var/lib/pgsql /usr/share/rhevm /usr/share/rhevm-dwh \
    /usr/share/rhevm-reports /usr/share/rhevm-reports-server \
    /var/lib/jbossas; do rm -rf $i && mkdir -p $i; done

# for i in /var/lib/pgsql /usr/share/rhevm /usr/share/rhevm-dwh \
    /usr/share/rhevm-reports /usr/share/rhevm-reports-server \
    /var/lib/jbossas; do ls -d $i; done
/var/lib/pgsql
/usr/share/rhevm
/usr/share/rhevm-dwh
/usr/share/rhevm-reports
/usr/share/rhevm-reports-server
/var/lib/jbossas

The packages needed to set up the RHEV Manager are now installed on both machines, but the RHEV-M is not yet configured.

Setup RHEV-M

Once you open the firewall to allow access to the RHEV Manager (RHEV-M), there are two basic steps for setting up the RHEV-M. The rhevm-setup command configures and starts the RHEV-M so it is available to be used from a Web browser. The rhevm-manage-domains command lets you configure your RHEV-M to use authentication from an IPA server or Microsoft Active Directory server.

1. Open firewall ports. Make sure your current firewall rules are in place (service iptables start). Then run the following steps to open up the necessary firewall ports to allow HTTP and HTTPS access to the RHEV-M and permanently save your firewall rules to /etc/sysconfig/iptables:

# iptables -N RHEVM
# iptables -I INPUT 1 -j RHEVM
# iptables -A RHEVM -m state --state NEW -p tcp --dport 8080 -j ACCEPT
# iptables -A RHEVM -m state --state NEW -p tcp --dport 8443 -j ACCEPT
# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]

2. Set up RHEV-M from active rhevm service group. On the system that is currently running your rhevm cluster service group (probably node1), run the rhevm-setup command. Here is an example:

# rhevm-setup
Welcome to RHEV Manager setup utility

HTTP Port  [8080] :
HTTPS Port  [8443] :
Host fully qualified domain name, note that this name should be fully resolvable [node1.example.com] : myrhevm.example.com
Password for Administrator (admin@internal) : **********
Confirm password : **********
Database password (required for secure authentication with locally created db): ********
Confirm password : ********
Organization Name for the Certificate: LinuxToys
The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'] [NFS] :
Should the installer configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no'] [yes] :
Mount point path: /mnt/MyISOs
Display name for the ISO Domain: MYISOS
Firewall ports need to be opened.

You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up. Alternately you can configure the firewall later using an example iptables file found under /usr/share/rhevm/conf/iptables.example

Configure iptables ? ['yes'| 'no']: no

......

Proceed with the configuration listed above? (yes|no): yes

......

**** Installation completed successfully ******

(Please allow RHEV Manager a few moments to start up.....)

Additional information:

* SSL Certificate fingerprint: 1A:66:7F:6D:02:F1:1C:F2:83:E2:02:4C:8F:EB:10:E9:30:5F:8E:FE
* SSH Public key fingerprint: 8e:45:ec:f2:ff:e3:48:2c:47:5d:6e:d4:2f:0b:c1:d5
* A default ISO share has been created on this host. If IP based access restrictions are required, edit /mnt/MyISOs entry in /etc/exports
* The firewall has been updated, the old iptables configuration file was saved to /usr/share/rhevm/conf/iptables.backup.131643-03292012_14327
* The installation log file is available at: /var/log/rhevm/rhevm-setup_2012_03_29_12_55_53.log
* Please use the user "admin" and password specified in order to login into RHEV Manager

* To configure additional users, first configure authentication domains using the 'rhevm-manage-domains' utility
* To access RHEV Manager please go to the following URL: http://myrhevm.example.com:8080

3. Add additional authentication. At this point, you can authenticate to the RHEV-M via an Internet Explorer web browser using the admin account and password you just entered. If you want to configure centralized IPA or Active Directory authentication for your RHEV-M, you can do so with the rhevm-manage-domains command. Here is an example of an IPA configuration (the syntax is the same for adding an Active Directory server):

# rhevm-manage-domains -action=add -domain=ipaserver.example.com -user=admin
# rhevm-manage-domains -action=validate

Domain ipaserver.example.com is valid
Manage Domains completed successfully

4. Verify RHEV-M is available. You should be able to test that all is well with the RHEV-M running on node 1 by using the virtual IP resource. If that IP address is associated with a FQDN, use the name to access the RHEV-M service (in our example, https://myrhevm.example.com:8443). Otherwise, use the IP address defined for the virtual IP resource (in our example, 192.168.100.3). So, for example, in an Internet Explorer window, the address might look like: https://192.168.100.3:8443.
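For a quick command-line check from any machine that can reach the virtual IP (optional, and assuming curl is installed), you can confirm that something is answering on the HTTPS port before trying a browser:

# curl -k -I https://192.168.100.3:8443/

An HTTP response header coming back from the JBoss instance hosting RHEV-M indicates the service is up behind the virtual IP.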

Shutdown the RHEV-M Service Group

Once RHEV-M is in a known working state on node1, we now need to get it into a known working state on node2, but we need node1 to be down. Follow these steps to shut it down:

node1# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node1.example.com              started
node1# service jbossas stop
Stopping jbossas:                       [  OK  ]
node1# service postgresql stop
Stopping postgresql service:            [  OK  ]
node1# clusvcadm -d rhevm -m $HOSTNAME
Member node1.example.com disabling service:rhevm...Success
node1# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  (node1.example.com)            disabled

Copy specific files from node1 to node2 There are a number of files that were modified on node1 during the setup process that now need to be copied exactly to node2. Run the following command on the first node, replacing node2.example.com with the name of the second node in your cluster:

[node1]# for i in /etc/jbossas/jbossas.conf /etc/rhevm/ \ /etc/yum/pluginconf.d/versionlock.list /etc/pki/rhevm/ \ /etc/jbossas/rhevm-slimmed/ /root/.pgpass /root/.rnd; \ do rsync -e ssh -avx $i node2.example.com:$i; done

On the second node, run the following command to set proper ownership on the .keystore file:

[node2]# chown jboss:jboss /etc/pki/rhevm/.keystore
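To confirm the ownership change (optional), list the file afterward; it should now be owned by jboss:jboss:

[node2]# ls -l /etc/pki/rhevm/.keystore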

Start the RHEV-M Service Group on node2

At this point, everything should be set up to run on node2 after the following steps:

node2# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  (node1.example.com)            disabled
node2# clusvcadm -e rhevm -m $HOSTNAME

Member node2.example.com trying to enable service:rhevm...Success
service:rhevm is now running on node2.example.com
node2# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node2.example.com              started
node2# service postgresql start
Starting postgresql service:            [  OK  ]
node2# service jbossas start
Starting jbossas:                       [  OK  ]

Verify RHEV-M is sane on node2

At this point you should be able to test that all is well with RHEV-M running on node2 by using the virtual IP resource:

$ ping -c 2 myrhevm.example.com
PING myrhevm.example.com (192.168.100.3) 56(84) bytes of data.
64 bytes from myrhevm.example.com (192.168.100.3): icmp_seq=1 ttl=64 time=0.046 ms
64 bytes from myrhevm.example.com (192.168.100.3): icmp_seq=2 ttl=64 time=0.053 ms

--- myrhevm.example.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.046/0.049/0.053/0.007 ms

Stop the RHEV-M Service Group on node2

Once RHEV-M is in a known working state on node2, we now need to create our cluster service. To begin, you need to shut down the individual rhevm services (postgresql and jbossas) and the rhevm service group.

node2# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node2.example.com              started
node2# service jbossas stop
Stopping jbossas:                       [  OK  ]
node2# service postgresql stop
Stopping postgresql service:            [  OK  ]
node2# clusvcadm -d rhevm -m $HOSTNAME
Member node2.example.com disabling service:rhevm...Success
node2# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  (node2.example.com)            disabled

Disable postgresql and jbossas daemons

The jbossas and postgresql services that make up the rhevm service group must be turned off and disabled from the command line, so they can be enabled on demand by the rhevm service group. On both nodes, disable the JBossAS (jbossas) and PostgreSQL (postgresql) daemons:

# chkconfig jbossas off # chkconfig postgresql off

# chkconfig --list jbossas
jbossas         0:off  1:off  2:off  3:off  4:off  5:off  6:off
# chkconfig --list postgresql
postgresql      0:off  1:off  2:off  3:off  4:off  5:off  6:off

Setup shared PostgreSQL and JBoss service resources

The postgresql and jbossas services need to be able to start and stop when the rhevm service group is started and stopped on a node. To do that, you need to add them as resources, then connect them to the rhevm service group.

Add PostgreSQL as a resource

Login to the High Availability Management page (luci) and do the following to add the postgresql service as a Script type of resource:

1. Select the cluster. Click on the cluster name (for example, RHEVMCluster).
2. Add the postgresql service resource. Select the Resources tab, click Add and choose Script.
3. Fill in Script information. Enter the following:
• Name. Type the name of the service (use postgresql).
• Full Path to Script File. Type the path to the service (/etc/rc.d/init.d/postgresql).
4. Submit information. Click the Submit button.

Add JBoss as a resource

From luci, do the following to add the jbossas service as a Script type of resource:

1. Select the cluster. Click on the cluster name (for example, RHEVMCluster).
2. Add the jbossas service resource. Select the Resources tab, click Add and choose Script.
3. Fill in Script information. Enter the following:
• Name. Type the name of the service (use jbossas).
• Full Path to Script File. Type the path to the service (/etc/rc.d/init.d/jbossas).
4. Submit information. Click the Submit button.

Add PostgreSQL and JBoss to the rhevm service group

Once both resources are entered, add each of these resources to the rhevm service group from luci:

1. Service Groups. Click the Service Groups tab.
2. Select rhevm Service. Click on the rhevm service you added earlier under the Name field.
3. Add postgresql Resource. Near the bottom of the page, select the Add Resource button, then select the postgresql resource you just added from the drop-down box.
4. Add jbossas Resource. Near the bottom of the page, select the Add Resource button, then select the jbossas resource you just added from the drop-down box.
5. Submit information. Press the Submit button. The postgresql and jbossas scripts are now part of the rhevm service group.

Start the rhevm service group

The rhevm service group is now defined. You can start that service group as follows:

1. Select the rhevm service group. Click the box next to the rhevm service group.
2. Start rhevm. With the rhevm service group box selected, select Start to start it.

Test the rhevm service group

The basic rhevm cluster service group is now in place. Here are some ways to test that the rhevm is working:

1. Determine which node is running the rhevm service group: Type the following on any node to see where the rhevm service is currently running:

# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node2.example.com              started

2. Check the IP address of the service is accessible: Start by pinging the host name or IP address representing your RHEV-M from another system to make sure it can reach the node running the service:

# ping -c 2 myrhevm
PING myrhevm (192.168.100.3) 56(84) bytes of data.
64 bytes from myrhevm (192.168.100.3): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from myrhevm (192.168.100.3): icmp_seq=2 ttl=64 time=0.051 ms

--- myrhevm ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.050/0.050/0.051/0.007 ms

3. Check that the shared HA LVM logical volumes are available and active: On the active node, type lvs and make sure the shared logical volumes are available and active (active logical volumes show an "a" under the Attr column):

# lvs
  LV                      VG            Attr   LSize  Origin ...
  lv_jbossas              RHEVMVolGroup -wi-ao  2.00g
  lv_pgsql                RHEVMVolGroup -wi-ao  5.00g
  lv_rhevm                RHEVMVolGroup -wi-ao  1.00g
  lv_rhevm-dwh            RHEVMVolGroup -wi-ao  1.00g
  lv_rhevm-reports        RHEVMVolGroup -wi-ao  1.00g
  lv_rhevm-reports-server RHEVMVolGroup -wi-ao  1.00g
  lv_home                 vg_node2      -wi-ao 92.64g
  lv_root                 vg_node2      -wi-ao 50.00g
  lv_swap                 vg_node2      -wi-ao  5.88g

4. Check that the logical volumes were properly mounted: Use the mount command to see that the shared logical volumes are mounted:

# mount | grep RHEVM
/dev/mapper/RHEVMVolGroup-lv_pgsql on /var/lib/pgsql type ext4 (rw)
/dev/mapper/RHEVMVolGroup-lv_rhevm on /usr/share/rhevm type ext4 (rw)
/dev/mapper/RHEVMVolGroup-lv_rhevm--dwh on /usr/share/rhevm-dwh type ext4 (rw)
/dev/mapper/RHEVMVolGroup-lv_rhevm--reports on /usr/share/rhevm-reports type ext4 (rw)
/dev/mapper/RHEVMVolGroup-lv_rhevm--reports--server on /usr/share/rhevm-reports-server type ext4 (rw)
/dev/mapper/RHEVMVolGroup-lv_jbossas on /var/lib/jbossas type ext4 (rw)

5. Check that the jbossas and postgresql services are running: Type the following:

# service jbossas status
jbossas (pid 9231) is running
# service postgresql status
postmaster (pid 9115) is running

6. Access the RHEV-M from a browser: From an Internet Explorer window, make sure you can login and access the RHEV-M from the FQDN or IP address of the RHEV-M service. The URL will be something like the following: http://myrhevm.example.com:8080

7. Switch to the next cluster node: Open a connection to luci. Select the RHEVMCluster cluster name, select the Service Groups tab, and select rhevm. With rhevm checked, select Disable. Then select the Start on node... box, choose a different node to run the rhevm service group, and select the start button. The service is now running on the next node. (A command-line alternative is shown below.)
8. Retest: Repeat the previous steps to make sure the rhevm service can run on all configured nodes.
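If you prefer to move the service group from the command line instead of from luci, clusvcadm can relocate it directly to a named member (a minimal sketch, assuming the node names used in this example):

# clusvcadm -r rhevm -m node1.example.com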

Configure additional RHEV-M services

Now that the basic RHEV-M service is working in a cluster, you can add additional RHEV-M services to the rhevm service group. These include rhevm-notifierd (sends email containing oVirt events), rhevm-reports-dwh (lets users create reports that monitor RHEV systems), and rhevm-reports (lets you run pre-configured reports).

Setup email of oVirt events (rhevm-notifierd)

To use rhevm-notifierd to send oVirt events to email, do the following:

1. Add mail server entry. On both nodes, modify /etc/rhevm/notifier/notifier.conf to change MAIL_SERVER= to direct email to the localhost:

MAIL_SERVER=localhost

2. Add rhevm-notifierd resource. Go back to luci and click on Resources, then Add, then select Script as the resource type.
3. Fill in Script information. Enter the following:
• Name. Type the name of the service (use rhevm-notifierd).
• Full Path to Script File. Type the path to the service (/etc/rc.d/init.d/rhevm-notifierd).
4. Submit information. Click the Submit button.

Next, add the resource to the rhevm service group from luci:

1. Service Groups. Click the Service Groups tab.
2. Select rhevm Service. Click on the rhevm service you added earlier under the Name field.
3. Add rhevm-notifierd Resource. Near the bottom of the page, select the Add Resource button, then select the rhevm-notifierd resource you just added from the drop-down box.
4. Submit information. Press the Submit button. The rhevm-notifierd script is now part of the rhevm service group.

To test the rhevm-notifierd service script, do the following from the RHEV-M interface:

1. Select events. Select some events to be registered to and use a valid email address.
2. Trigger events. Trigger those events you selected. (For example: moving a host into and out of maintenance mode.)
3. Check that you receive the emails. By default, the email will appear to come from the actual node that generated the alert rather than from the virtual IP. This is generally acceptable, but postfix/sendmail can be configured to send from the virtual IP as well.

Go to luci and do the following:

1. Relocate rhevm. Relocate the rhevm service group to the other node.
2. Trigger events. Once the service is completely moved to the other node, go back to the RHEV-M and trigger the same events you did before.
3. Check that you receive the emails. Again check that you receive the mail.

Setup RHEVM History ETL service (rhevm-etl)

On both nodes, install the rhevm-reports-dwh RPM. This will also pull in a couple of other dependencies:

# yum -y install rhevm-reports-dwh

From luci, remove the jbossas resource from the rhevm service group:

1. Click on Service Groups.
2. Click on the rhevm link.
3. Click on the Remove link on the jbossas script resource.
4. Press the Submit button.

On the node that is NOT running the rhevm service group, remove files that are served by our HA LVM:

# rm -rf /usr/share/rhevm-dwh/* /var/lib/jbossas/server/

Go back to luci, create the rhevm-etl service resource, and add it to the rhevm service group as follows:

1. Add rhevm-etl resource. Click on the Resources tab, select Add, and select Script as the resource type.
2. Fill in information for the rhevm-etl service:
• Name. Fill in the name (for example, rhevm-etl).
• Full Path to Script File. Fill in the path to the script (/etc/rc.d/init.d/rhevm-etl).
3. Submit information. Press the Submit button. The resource should appear under the Resources tab.
4. Service Groups. Click the Service Groups tab.
5. Select rhevm Service. Click on the rhevm service you added earlier under the Name field.
6. Add Resource. Near the bottom of the page, select the Add Resource button, then select the rhevm-etl resource you just added from the drop-down box.
7. Submit information. Press the Submit button. The rhevm-etl resource should appear associated with the rhevm service group.

Setup rhevm-reports (rhevm-reports)

On both nodes, install the rhevm-reports RPM. This will also pull in a couple of other dependencies:

# yum -y install rhevm-reports

On the node that is NOT running the rhevm service group, remove files that are served by the shared storage:

# rm -rf /usr/share/rhevm-reports/* /usr/share/rhevm-reports-server/* # rm -rf /var/lib/jbossas/*

On the node that IS currently running the rhevm service group, do the following:

1. Execute rhevm-reports-setup. Stop the JBoss service and add a password for the rhevm-admin user when prompted:

# rhevm-reports-setup
Welcome to rhevm-reports setup utility
Please choose a password for the admin user (rhevm-admin): ********
Editing XML files...                    [ DONE ]
Setting DB connectivity...              [ DONE ]
Deploying Server...                     [ DONE ]
Updating Redirect Servlet...            [ DONE ]
Copying files into jbossas profile...   [ DONE ]
Please choose a password for the admin user (rhevm-admin):
Re-type password:
Importing reports...                    [ DONE ]
Starting Jboss...                       [ DONE ]
Successfully installed rhevm-reports.

2. Execute rhevm-dwh-setup. Stop the JBoss service when prompted.

# rhevm-dwh-setup
In order to proceed the installer must stop the JBoss service
Setting DB connectivity...              [ DONE ]
Creating DB...                          [ DONE ]
Starting Jboss...                       [ DONE ]
Starting RHEVM-ETL...                   [ DONE ]
Successfully installed rhevm-dwh.
The installation log file is available at: /var/log/rhevm/rhevm-dwh-setup-2012_05_04_13_11_52.log

3. Disable the rhevm service group and make sure it is disabled:

# clusvcadm -d rhevm -m $HOSTNAME
Member node1.example.com disabling service:rhevm...Success
# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  (node1.example.com)            disabled

Go to luci and add the jbossas resource back into the rhevm service group:

1. Service Groups. Click the Service Groups tab.
2. Select rhevm Service. Click on the rhevm service you added earlier under the Name field.
3. Add jbossas Resource. Near the bottom of the page, select the Add Resource button, then select the jbossas resource from the drop-down box.
4. Submit information. Press the Submit button. The jbossas script is once again part of the rhevm service group.

5. Restart the rhevm service group. Select the check box next to the rhevm service group and select Start to bring it back up.

At this point, your entire rhevm service group is up and running. Test the reports portion of this setup next.

Test reports

1. Login to the reports interface.
2. See if you have any reports (for example, see if you have a RHEL or RHEV-H host to do a host inventory).
3. Relocate the rhevm service group either through luci or clusvcadm.
4. Repeat steps 1 and 2.

Sample cluster.conf

The /etc/cluster/cluster.conf file shown below reflects the cluster created from the procedure you just finished. It is a good idea to become familiar with this file and its contents if you need to debug your RHEV-M setup.
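The original sample file is not reproduced here. As a rough illustration only (a sketch based on the resources and names used in this procedure, not a file captured from a running cluster), a cluster.conf for this setup would have a shape roughly like the following; your fence device entries, config_version, and exact resource list will differ:

<?xml version="1.0"?>
<cluster config_version="1" name="RHEVMCluster">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
  <fencedevices>
    <!-- Fence devices are specific to your hardware; define them here -->
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="prefer_node1" ordered="1" restricted="1">
        <failoverdomainnode name="node1.example.com" priority="1"/>
        <failoverdomainnode name="node2.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.100.3" monitor_link="on"/>
      <lvm name="RHEVM HA LVM" vg_name="RHEVMVolGroup" self_fence="on"/>
      <fs name="lv_pgsql" device="/dev/mapper/RHEVMVolGroup-lv_pgsql" fstype="ext4" mountpoint="/var/lib/pgsql" self_fence="on"/>
      <!-- ...one fs resource for each of the remaining five logical volumes in Table 2... -->
      <script name="postgresql" file="/etc/rc.d/init.d/postgresql"/>
      <script name="jbossas" file="/etc/rc.d/init.d/jbossas"/>
      <script name="rhevm-notifierd" file="/etc/rc.d/init.d/rhevm-notifierd"/>
      <script name="rhevm-etl" file="/etc/rc.d/init.d/rhevm-etl"/>
    </resources>
    <service autostart="1" domain="prefer_node1" name="rhevm" recovery="relocate">
      <ip ref="192.168.100.3"/>
      <lvm ref="RHEVM HA LVM"/>
      <fs ref="lv_pgsql"/>
      <!-- ...references to the remaining fs resources... -->
      <script ref="postgresql"/>
      <script ref="jbossas"/>
      <script ref="rhevm-notifierd"/>
      <script ref="rhevm-etl"/>
    </service>
  </rm>
</cluster>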