An Archived Oracle Technical Paper August 2007

Installing and Configuring Sun Java System Calendar Server 6.3 With Sun Cluster 3.1 Software

Important note: this paper was originally published before the acquisition of Sun Microsystems by Oracle in 2010. The original paper is enclosed and distributed as-is. It refers to products that are no longer sold and references technologies that have since been renamed.

Installing and Configuring Sun Java™ System Calendar Server 6.3 With Sun™ Cluster 3.1 Software

Durga Deep Tirunagari August 2007 Sun Microsystems, Inc.

Copyright © 2007 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.

U.S. Government Rights - Commercial software. Government users are subject to the Sun Microsystems, Inc. standard license agreement and applicable provisions of the FAR and its supplements. Use is subject to license terms. This distribution may include materials developed by third parties.

Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and in other countries, exclusively licensed through X/Open Company, Ltd. X/Open is a registered trademark of X/Open Company, Ltd.

All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the United States and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.

Sun, Sun Microsystems, the Sun logo, Java, Solaris, Solstice DiskSuite, and Sun Cluster are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. This product is covered and controlled by U.S. Export Control laws and may be subject to the export or import laws in other countries. Nuclear, missile, chemical biological weapons or nuclear maritime end uses or end users, whether direct or indirect, are strictly prohibited. Export or reexport to countries subject to U.S. embargo or to entities identified on U.S. export exclusion lists, including, but not limited to, the denied persons and specially designated nationals lists is strictly prohibited.

DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.

Table of Contents

1. Installing and Configuring the Asymmetric Cluster
   1.0 Install Sun Cluster 3.1 Software
   1.1 Prepare Both Nodes of the Cluster
       1.1.1 Edit the /etc/hosts File
       1.1.2 Create the Mount Points
       1.1.3 Create the Calendar Server User
   1.2 Install the Calendar Server 6.3 Binaries on Both Cluster Nodes
   1.3 Install the Calendar HA Agent Package, SUNWscics, on Both Cluster Nodes
   1.4 Configure the Cluster
       1.4.1 Register the Resource Type Registration Files and Verify the Registration
       1.4.2 Create a Failover Resource Group for the Calendar Server
       1.4.3 Create a Logical Host Name Resource
       1.4.4 Create an HAStoragePlus Resource
       1.4.5 Enable the HAStoragePlus Resource
       1.4.6 Test the Successful Creation of the Resource Group
   1.5 Configure Calendar Server 6.3 on the Primary Node
   1.6 Configure Calendar Server 6.3 on the Secondary Node
       1.6.1 Fail Over to the Secondary Node
       1.6.2 Create the Symbolic Link
       1.6.3 Run csconfigurator on the Secondary Node
       1.6.4 Verify That csconfigurator Ran Successfully
   1.7 Edit the Calendar Server Configuration File
   1.8 Create the Calendar Server HA Resource
   1.9 Verify That Everything Is Working
2. Unconfiguring the Asymmetric HA Cluster
   2.1 Take the Resource Group Offline
   2.2 Disable the Calendar Server Resource
   2.3 Disable the Logical Host Name Resource
   2.4 Disable the HAStoragePlus Resource
   2.5 Remove the Calendar Server Resource
   2.6 Remove the Logical Host Name Resource
   2.7 Remove the HAStoragePlus Resource
   2.8 Remove the Resource Group
   2.9 Unregister the Resource Types That Are Not in Use
3. Installing Sun Cluster 3.1 Software
   3.1 Install the Solaris 10 Operating System on Both Nodes of the Cluster
   3.2 Install the Sun Cluster 3.1 Software Using the Java ES Installer
   3.3 Configure the Sun Cluster 3.1 Software
4. Installing the Calendar Server 6.3
   4.1 Select the Calendar Server 6.3 in the Installation Wizard
   4.2 Specify the Installation Directories
   4.3 Choose the Configure Later Option
   4.4 Verify the Installation
5. Configuring the Calendar Server 6.3

   5.1 Start the Configuration Wizard
   5.2 Specify the Fully Qualified Host Name
   5.3 Specify the Directory in Which to Store the Calendar Server Configuration
   5.4 Specify the Archive and Hot Backup Directories
   5.5 Specify the Calendar Server User
   5.6 Verify That the Configuration Was Successful
6. Creating File Systems Using Solaris Volume Manager
   6.1 Create the Diskset
       6.1.1 Create the State Database Replicas
       6.1.2 Verify the State Database Replicas
       6.1.3 Create the Diskset
   6.2 Add the Mediator Hosts
       6.2.1 Take Ownership of the Diskset
       6.2.2 Add the Mediator Hosts
       6.2.3 Verify That the Mediator Hosts Were Successfully Added
   6.3 Add Drives to the Diskset
   6.4 Create Volumes
       6.4.1 Create Disk Stripes and Mirrors
       6.4.2 Verify the Creation of Disk Stripes and Mirrors
       6.4.3 Create a Soft Partition
   6.5 Modify the md.tab File on Both Nodes
       6.5.1 Modify md.tab on the Primary Node
       6.5.2 Modify md.tab on the Secondary Node
   6.6 Create UFS File Systems
       6.6.1 Create the New UFS File Systems on the Raw Disks
       6.6.2 Verify the Creation of the File System
7. Useful Sun Cluster Administration Commands
   7.1 Determine the Cluster Status
   7.2 Bring the Resource Group Online
   7.3 Verify That the Resource Group Is Online
   7.4 Display the Status of the Quorum Device
   7.5 Display the Status of the Cluster Nodes
   7.6 Display the Configuration of the Cluster
   7.7 Check the Validity of the /etc/vfstab File
8. Enabling Cluster Debugging
   8.1 Log Messages From the Calendar Server Agents
   8.2 Edit the syslog.conf File
9. Example Output From Commands
   9.2 Output of scstat
   9.3 Output of scstat -g
   9.4 Output of scstat -q
   9.5 Output of scconf
10. References
11. For More Information

Introduction

This document describes how to install and configure a two-node, asymmetric, high availability (HA) cluster for Sun Java System Calendar Server 6.3 (hereafter referred to as “Calendar Server 6.3”) with Sun Cluster 3.1 software. This document also covers unconfiguring the same two-node cluster. These instructions are valid on SPARC® platforms.

1. Installing and Configuring the Asymmetric Cluster

To configure a two-node, asymmetric, HA cluster for Calendar Server 6.3, perform the procedures presented in the following sections.

1.0 Install Sun Cluster 3.1 Software
1.1 Prepare Both Nodes of the Cluster
    1.1.1 Edit the /etc/hosts File
    1.1.2 Create the Mount Points
    1.1.3 Create the Calendar Server User
1.2 Install the Calendar Server 6.3 Binaries on Both Cluster Nodes
1.3 Install the Calendar Server HA Agent Package, SUNWscics, on Both Cluster Nodes
1.4 Configure the Cluster
    1.4.1 Register the Resource Type Registration Files and Verify the Registration
    1.4.2 Create a Failover Resource Group for the Calendar Server
    1.4.3 Create a Logical Host Name Resource
    1.4.4 Create an HAStoragePlus Resource
    1.4.5 Enable the HAStoragePlus Resource
    1.4.6 Test the Successful Creation of the Resource Group
1.5 Configure Calendar Server 6.3 on the Primary Node
1.6 Configure Calendar Server 6.3 on the Secondary Node
    1.6.1 Fail Over to the Secondary Node
    1.6.2 Create the Symbolic Link
    1.6.3 Run csconfigurator on the Secondary Node
    1.6.4 Verify That csconfigurator Ran Successfully
1.7 Edit the Calendar Server Configuration File
1.8 Create the Calendar Server HA Resource
1.9 Verify That Everything Is Working

1.0 Install Sun Cluster 3.1 Software

Install the Sun Cluster 3.1 software using the Sun Java Enterprise System (hereafter referred to as "Java ES") installer on both nodes. In this document, the two nodes are named loquacious.example.com and reticent.example.com. After the installation is complete, you must configure the two-node cluster. In this document, the cluster name is Calendarcluster.

Section 3 of this document contains instructions on installing and configuring the Sun Cluster 3.1 software.

1.1 Prepare Both Nodes of the Cluster

1.1.1 Edit the /etc/hosts File

In this example, the two nodes of the cluster have the physical host names loquacious.example.com and reticent.example.com. The two-node cluster has the logical host name atlantic, which is assigned a logical IP address. The /etc/hosts file on each node looks like the following.

Output from cat /etc/hosts on the primary node, loquacious.example.com, looks like this:

127.0.0.1      localhost
100.10.100.221 loquacious.example.com loquacious loghost
100.10.100.222 reticent.example.com reticent       # Cluster node
100.10.100.223 atlantic.example.com atlantic       # Logical IP address associated with the logical host name atlantic.example.com

Output from cat /etc/hosts on the secondary node, reticent.example.com, looks like this:

127.0.0.1      localhost
100.10.100.221 loquacious.example.com loquacious   # Cluster node
100.10.100.222 reticent.example.com reticent loghost
100.10.100.223 atlantic.example.com atlantic       # Logical IP address associated with the logical host name atlantic.example.com
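A quick way to confirm that the logical host name resolves to the same address on both nodes is to script the lookup. The following is a minimal sketch: the helper function is hypothetical (not part of the product), and the file argument lets you point it at a copy of /etc/hosts for testing.

```shell
# Look up the IPv4 address recorded for a host name in an /etc/hosts-style
# file. Prints the address, or nothing if the name is not found.
# Usage: hosts_ip <name> [hosts-file]   (hosts-file defaults to /etc/hosts)
hosts_ip() {
    name=$1
    file=${2:-/etc/hosts}
    # Skip comment lines; scan the alias fields of each entry for the name.
    awk -v n="$name" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) { print $1; exit } }' "$file"
}

# Run on each node and compare the output; both nodes should print the
# same logical IP address for atlantic.example.com.
# hosts_ip atlantic.example.com
```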

1.1.2 Create the Mount Points

Create the required file systems. For information on how to create the file systems, see Section 6 of this document.

Once the file systems have been created on the shared storage (/SharedDisk), create the required mount points on both the primary and secondary nodes. For example, run the following command on each node:

# mkdir -p /SharedDisk

Once you are done creating the file systems, verify that your /etc/vfstab file looks like one of the following.

If your mount point is a Cluster File System (CFS):

The /dev/md/polarbear/dsk/d2000 entry shown below should be identical on both nodes of the two-node cluster; that is, /etc/vfstab should contain this line on both loquacious.example.com and reticent.example.com.

# cat /etc/vfstab
#device to mount             device to fsck                mount point   FS type  fsck pass  mount at boot  mount options
/dev/md/polarbear/dsk/d2000  /dev/md/polarbear/rdsk/d2000  /SharedDisk   ufs      2          yes            global,logging

If your mount point is a Failover File System (FFS):

The /dev/md/polarbear/dsk/d2000 entry shown below should be identical on both nodes of the two-node cluster; that is, /etc/vfstab should contain this line on both loquacious.example.com and reticent.example.com.

# cat /etc/vfstab
#device to mount             device to fsck                mount point   FS type  fsck pass  mount at boot  mount options
/dev/md/polarbear/dsk/d2000  /dev/md/polarbear/rdsk/d2000  /SharedDisk   ufs      2          no             logging
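A malformed /etc/vfstab entry is a common cause of mount failures, so it is worth a quick field-count sanity check on each node before mounting. The following helper is a sketch of my own, not a Sun Cluster tool; Section 7.7 covers the cluster's own vfstab validity check, which remains the authoritative test.

```shell
# Report every non-comment line of a vfstab-format file that does not have
# the seven expected fields (device to mount, device to fsck, mount point,
# FS type, fsck pass, mount at boot, mount options). Exits nonzero if any
# malformed line is found.
# Usage: vfstab_check [vfstab-file]   (defaults to /etc/vfstab)
vfstab_check() {
    awk '$0 !~ /^[ \t]*(#|$)/ && NF != 7 { print FILENAME ":" NR ": " $0; bad = 1 }
         END { exit bad }' "${1:-/etc/vfstab}"
}

# Run on both nodes; silence and a zero exit status mean every entry
# has the expected shape.
# vfstab_check && echo "vfstab OK"
```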

On the primary node, loquacious.example.com, mount the shared disk on /SharedDisk.

# mount /SharedDisk

1.1.3 Create the Calendar Server User

Create the Calendar Server user, and then add the user to the group. For the examples that follow, the user is named icsuser and the group is named icsgroup. The icsuser user must exist on all nodes of the cluster, and each instance of icsuser must use the same user identifier (UID). In this example, the /etc/passwd entry for icsuser shows the UID 203389, which should be identical on all nodes of the cluster.

# cat /etc/passwd
icsuser:x:203389:20000002::/:/bin/sh

# su icsuser
$ id
uid=203389(icsuser) gid=20000002(icsgroup)
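Because a UID mismatch between the nodes only surfaces later as permission failures on the shared disk, a scripted comparison is cheap insurance. This is a minimal sketch (the helper name is my own); the file argument exists so the parsing can be exercised against a copy of /etc/passwd.

```shell
# Print the UID recorded for a user in a passwd-format file.
# Usage: passwd_uid <user> [passwd-file]   (defaults to /etc/passwd)
passwd_uid() {
    awk -F: -v u="$1" '$1 == u { print $3; exit }' "${2:-/etc/passwd}"
}

# Run on every node of the cluster and compare the output; each node
# should print the same value (203389 for icsuser in this document).
# passwd_uid icsuser
```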

1.2 Install the Calendar Server 6.3 Binaries on Both Cluster Nodes

Install the Calendar Server 6.3 binaries on both nodes, loquacious.example.com and reticent.example.com. In the examples that follow, the default Calendar Server root, /opt, was chosen for the installation.

During the installation process, choose the Configure Later option.

For more information on installing Calendar Server 6.3, refer to Section 4 of this document.

1.3 Install the Calendar HA Agent Package, SUNWscics, on Both Cluster Nodes

Using the Java ES installer, install the Calendar Server HA agent package (SUNWscics) on both nodes of the cluster.

1.4 Configure the Cluster

To configure the cluster, perform the steps described in this section. All of the following commands assume that the PATH environment variable includes /usr/cluster/bin on both nodes of the cluster.

1.4.1 Register the Resource Type Registration Files and Verify the Registration

On the primary node, add the two required Resource Type Registration (RTR) files, SUNW.scics and SUNW.HAStoragePlus, to make the cluster aware of the resource types that will be used. Unless otherwise stated, run the following commands on the primary node (loquacious.example.com).

To register the Calendar Server RTR file SUNW.scics, use the following command:

# scrgadm -a -t SUNW.scics

Verify that this RTR file has been successfully registered, as follows.

# scrgadm -pv -t SUNW.scics

Res Type name: SUNW.scics
(SUNW.scics) Res Type description: Sun Cluster Agent for Calendar Server
(SUNW.scics) Res Type base directory: /opt/SUNWscics/bin
(SUNW.scics) Res Type single instance: False
(SUNW.scics) Res Type init nodes: All potential masters
(SUNW.scics) Res Type failover: True
(SUNW.scics) Res Type version: 1.0
(SUNW.scics) Res Type API version: 2
(SUNW.scics) Res Type installed on nodes:
(SUNW.scics) Res Type packages: SUNWscics
(SUNW.scics) Res Type system: False

To register the HAStoragePlus RTR file, SUNW.HAStoragePlus, use the following command:

# scrgadm -a -t SUNW.HAStoragePlus

Verify that this RTR file has been successfully registered:

# scrgadm -pv -t SUNW.HAStoragePlus

Res Type name: SUNW.HAStoragePlus:4
(SUNW.HAStoragePlus:4) Res Type description: HA Storage Plus
(SUNW.HAStoragePlus:4) Res Type base directory: /usr/cluster/lib/rgm/rt/hastorageplus
(SUNW.HAStoragePlus:4) Res Type single instance: False
(SUNW.HAStoragePlus:4) Res Type init nodes: All potential masters
(SUNW.HAStoragePlus:4) Res Type failover: False
(SUNW.HAStoragePlus:4) Res Type proxy: False
(SUNW.HAStoragePlus:4) Res Type version: 4
(SUNW.HAStoragePlus:4) Res Type API version: 2
(SUNW.HAStoragePlus:4) Res Type installed on nodes:
(SUNW.HAStoragePlus:4) Res Type packages: SUNWscu
(SUNW.HAStoragePlus:4) Res Type system: False

1.4.2 Create a Failover Resource Group for the Calendar Server

For this instance of the Calendar Server, create a failover resource group called CAL-RG with loquacious.example.com as the primary node and reticent.example.com as the failover (secondary) node.

# scrgadm -a -g CAL-RG -h loquacious,reticent

1.4.3 Create a Logical Host Name Resource

Create a logical host name resource called atlantic, and add it to the resource group CAL-RG. After creating the failover resource group, bring it online on the primary node.

# scrgadm -a -L -g CAL-RG -l atlantic
# scrgadm -c -j atlantic -y R_description="LogicalHostname resource for atlantic"
# scswitch -Z -g CAL-RG

Verify that the logical host name has been added to the resource group.

# scrgadm -pv -j atlantic

(CAL-RG) Res name: atlantic
(CAL-RG:atlantic) Res R_description: LogicalHostname resource for atlantic
(CAL-RG:atlantic) Res resource type: SUNW.LogicalHostname:2
(CAL-RG:atlantic) Res type version: 2
(CAL-RG:atlantic) Res resource group name: CAL-RG
(CAL-RG:atlantic) Res resource project name: default
(CAL-RG:atlantic{loquacious}) Res enabled: True
(CAL-RG:atlantic{reticent}) Res enabled: True
(CAL-RG:atlantic{loquacious}) Res monitor enabled: True
(CAL-RG:atlantic{reticent}) Res monitor enabled: True

1.4.4 Create an HAStoragePlus Resource

Create an HAStoragePlus resource called calendar-hasp-resource. This HAStoragePlus resource will manage the mount point /SharedDisk. This works whether /SharedDisk is a cluster file system (CFS) or a failover file system (FFS).

# scrgadm -a -j calendar-hasp-resource -g CAL-RG -t SUNW.HAStoragePlus -x FileSystemMountPoints="/SharedDisk" -x AffinityOn=TRUE

1.4.5 Enable the HAStoragePlus Resource

Enable the HAStoragePlus resource created previously.

# scswitch -e -j calendar-hasp-resource
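The commands in Sections 1.4.2 through 1.4.5 can be collected into a single script. The sketch below is a dry run of my own devising: the `run` wrapper only echoes each command so the sequence can be reviewed; remove the wrapper (or make it execute its arguments) to run the commands for real.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# Create and bring up the CAL-RG resource group as described in
# Sections 1.4.2 through 1.4.5 of this document.
setup_calrg() {
    run scrgadm -a -g CAL-RG -h loquacious,reticent    # 1.4.2 failover resource group
    run scrgadm -a -L -g CAL-RG -l atlantic            # 1.4.3 logical host name resource
    run scswitch -Z -g CAL-RG                          # bring the group online
    run scrgadm -a -j calendar-hasp-resource -g CAL-RG \
        -t SUNW.HAStoragePlus \
        -x FileSystemMountPoints=/SharedDisk -x AffinityOn=TRUE   # 1.4.4 HAStoragePlus resource
    run scswitch -e -j calendar-hasp-resource          # 1.4.5 enable the resource
}

# setup_calrg   # prints the five commands it would run
```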

1.4.6 Test the Successful Creation of the Resource Group

Perform a failover to the secondary node, and run the df and ping commands to test that the following are true:

● The mount point, /SharedDisk, is accessible by both nodes.

● The logical host name is online.

● The resource group CAL-RG is functioning.

Perform the failover to the secondary node by issuing the following command:

# scswitch -z -g CAL-RG -h reticent

Make sure the mount point is accessible by running the following command.

# df -kh
/dev/md/polarbear/dsk/d2000    33G   101M    33G     1%    /SharedDisk

Run ping on reticent.example.com.

# ping atlantic.example.com

atlantic.example.com is alive

Repeat the exercise by failing back and forth between the two nodes, running the df and ping commands each time, to make sure that the logical host name (atlantic.example.com) and the mount point (/SharedDisk) are accessible from both nodes.
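That back-and-forth exercise can be captured in one loop. The sketch below is a dry run (the `run` wrapper only echoes the commands, so the sequence can be reviewed before executing it on a live cluster); remove the wrapper to run the commands for real.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# Fail CAL-RG over between the two nodes, checking the shared mount
# point and the logical host name after each switch.
exercise_failover() {
    for node in reticent loquacious reticent; do
        run scswitch -z -g CAL-RG -h "$node"   # fail over to $node
        run df -kh /SharedDisk                 # mount point reachable?
        run ping atlantic.example.com          # logical host name online?
    done
}

# exercise_failover   # prints the nine commands it would run
```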

1.5 Configure Calendar Server 6.3 on the Primary Node

Fail over so that you are back on the primary node, loquacious.example.com.

While on the primary node, run the Calendar Server configuration program (csconfigurator).

The csconfigurator.sh script is located under the /opt/SUNWics5/cal/sbin directory. Recall that you installed the Calendar Server under /opt.

# cd /opt/SUNWics5/cal/sbin

# ./csconfigurator.sh

During the configuration, you are prompted for the fully qualified host name. Enter the fully qualified logical host name (atlantic.example.com, in this example).

When the program prompts you to “Select directory to store configuration and data files,” specify the mount point (/SharedDisk, in this example).

Section 5 of this document contains sample screen shots from the configuration program.

1.6 Configure Calendar Server 6.3 on the Secondary Node

After you have finished configuring the primary node, fail over to the secondary node (reticent). It is not necessary to run the Calendar Server configuration UI program again. However, to ensure that you perform the exact same configuration on the secondary node, run the csconfigurator.sh utility, located under the /opt/SUNWics5/cal/sbin directory.

1.6.1 Fail Over to the Secondary Node

Using the scswitch command, fail over from the primary node to the secondary node.

# scswitch -z -g CAL-RG -h reticent

1.6.2 Create the Symbolic Link

After failing over to the secondary node, create the following symbolic link to the already existing configuration on the shared disk.

# cd /opt/SUNWics5/cal

# ln -s /SharedDisk/config .

1.6.3 Run csconfigurator on the Secondary Node

Now run the csconfigurator program on the secondary node, but with different options.

# cd /opt/SUNWics5/cal/sbin

# ./csconfigurator.sh -nodisplay -noconsole -novalidate

1.6.4 Verify That csconfigurator Ran Successfully

Check that all the tasks passed, as they did the first time you ran the configuration program.

1.7 Edit the Calendar Server Configuration File

Edit the ics.conf file by assigning values to the configuration parameters local.hostname, local.servername, and so on.

In the following example, atlantic is the logical host name assigned to this Calendar Server resource, and 100.10.100.223 is the logical IP address assigned to that logical host name.

ics.conf

! The following are the changes for making Calendar Server highly available.
! These lines are added at the end of the current ics.conf file:
local.server.ha.enabled="yes"
local.server.ha.agent="SUNWscics"
service.http.listenaddr="100.10.100.223"
local.hostname="atlantic"
local.servername="atlantic"
service.ens.host="atlantic"
service.http.calendarhostname="atlantic.example.com"
local.autorestart="yes"
service.listenaddr="100.10.100.223"

1.8 Create the Calendar Server HA Resource

Create the Calendar Server HA resource and specify its dependency on the HAStoragePlus resource and the logical host name. In the example that follows, the resource is named calendar-cs-resource, and it has a dependency on the HAStoragePlus resource (calendar-hasp-resource) and the logical host name (atlantic). The Calendar Server is installed in /opt.

# scrgadm -a -j calendar-cs-resource -t SUNW.scics -g CAL-RG -x ICS_serverroot=/opt/SUNWics5/cal -y Resource_dependencies=atlantic,calendar-hasp-resource

Enable the Calendar Server resource:

# scswitch -e -j calendar-cs-resource

1.9 Verify That Everything Is Working

Fail over the resource group (CAL-RG). For example, fail over CAL-RG from the primary node to the secondary node. Then try failing over from one node to the other multiple times, making sure that the failover is successful each time.

# scswitch -z -g CAL-RG -h reticent

Run some Calendar Server tests:

cscal list
cscal -v list
cscal -v list
cscal delete
cscomponents -v list
csdb -v -t caldb list
cspurge -s 0 -e 0, followed by csdb -v -t caldb list
csexport -c calendar .ics
csimport -c calendar .xml

Also try killing any of the Calendar Server processes, for example, pkill cshttpd. Do this multiple times. The Calendar Server should fail over from one node to the other.
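The kill-and-observe exercise can also be scripted. The sketch below is a dry run of my own (the `run` wrapper only echoes the commands); the 60-second wait is an arbitrary illustration of giving the agent time to react, and `scstat -g` (Section 7) shows where the resource group lands afterward.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# Kill a Calendar Server process repeatedly and confirm that the
# CAL-RG resource group fails over cleanly each time.
kill_test() {
    i=1
    while [ "$i" -le 3 ]; do
        run pkill cshttpd   # kill a Calendar Server process
        run sleep 60        # give the HA agent time to react (illustrative delay)
        run scstat -g       # confirm where CAL-RG is now online
        i=$((i + 1))
    done
}

# kill_test   # prints the nine commands it would run
```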

2. Unconfiguring the Asymmetric HA Cluster

To unconfigure the cluster, perform the commands shown in the following sections:

2.1 Take the Resource Group Offline
2.2 Disable the Calendar Server Resource
2.3 Disable the Logical Host Name Resource
2.4 Disable the HAStoragePlus Resource
2.5 Remove the Calendar Server Resource
2.6 Remove the Logical Host Name Resource
2.7 Remove the HAStoragePlus Resource
2.8 Remove the Resource Group
2.9 Unregister the Resource Types That Are Not in Use
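For reference, the whole teardown sequence detailed in Sections 2.1 through 2.9 can be sketched as one script. This is a dry run of my own: the `run` wrapper only echoes each command so the order can be reviewed; remove the wrapper to execute the commands for real.

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# Unconfigure the asymmetric HA cluster: take the group offline,
# disable and remove each resource, remove the group, then unregister
# the resource types that are no longer in use.
teardown_calrg() {
    run scswitch -F -g CAL-RG                   # 2.1 take the group offline
    for r in calendar-cs-resource atlantic calendar-hasp-resource; do
        run scswitch -n -j "$r"                 # 2.2-2.4 disable the resources
    done
    for r in calendar-cs-resource atlantic calendar-hasp-resource; do
        run scrgadm -r -j "$r"                  # 2.5-2.7 remove the resources
    done
    run scrgadm -r -g CAL-RG                    # 2.8 remove the group
    run scrgadm -r -t SUNW.HAStoragePlus        # 2.9 unregister the types
    run scrgadm -r -t SUNW.scics
}

# teardown_calrg   # prints the ten commands it would run
```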

2.1 Take the Resource Group Offline

# scswitch -F -g CAL-RG

2.2 Disable the Calendar Server Resource

# scswitch -n -j calendar-cs-resource

2.3 Disable the Logical Host Name Resource

# scswitch -n -j atlantic

2.4 Disable the HAStoragePlus Resource

# scswitch -n -j calendar-hasp-resource

2.5 Remove the Calendar Server Resource

# scrgadm -r -j calendar-cs-resource

2.6 Remove the Logical Host Name Resource

# scrgadm -r -j atlantic

2.7 Remove the HAStoragePlus Resource

# scrgadm -r -j calendar-hasp-resource

2.8 Remove the Resource Group

# scrgadm -r -g CAL-RG

2.9 Unregister the Resource Types That Are Not in Use

# scrgadm -r -t SUNW.HAStoragePlus

# scrgadm -r -t SUNW.scics

3. Installing Sun Cluster 3.1 Software

To install the Sun Cluster 3.1 software, perform the procedures presented in the following sections.

3.1 Install the Solaris™ 10 Operating System on Both Nodes of the Cluster
3.2 Install the Sun Cluster 3.1 Software Using the Java ES Installer
3.3 Configure the Sun Cluster 3.1 Software

3.1 Install the Solaris 10 Operating System on Both Nodes of the Cluster

Before installing the Solaris 10 OS, plan a local file system layout on both cluster nodes that matches Table 1, and create that layout during installation. Table 1 displays the layout of the disk slices.

Table 1: File System Allocation

Slice  Contents         Size     Description / Allocation

0      /                30GB     Remaining free space on the disk after allocating space to slices 1 through 7. Used for the Solaris OS, Sun Cluster software, data-services software, volume-manager software, Sun Management Center agent and Sun Cluster module agent packages, root file systems, and database and application software.

1      swap             2GB      Remaining bytes for the Solaris OS; 512 Mbytes for Sun Cluster software.

2      overlap          33GB     The entire disk.

3      /globaldevices   1024MB   The Sun Cluster software later assigns this slice a different mount point and mounts the slice as a cluster file system.

7      volume manager   100MB    Used by Solstice DiskSuite™ or Solaris Volume Manager software for the state database replica, or used by Veritas Volume Manager (VxVM) for installation after you free the slice.

After installing the Solaris 10 OS, the partition table looks like the following example.

partition> p
Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part       Tag    Flag     Cylinders         Size            Blocks
  0       root     wm       0 - 22365       30.81GB    (22366/0/0)  64615374
  1       swap     wu   22366 - 23817        2.00GB     (1452/0/0)   4194828
  2     backup     wm       0 - 24619       33.92GB    (24620/0/0)  71127180
  3 unassigned     wm   23818 - 24543        1.00GB      (726/0/0)   2097414
  4 unassigned     wm       0                     0        (0/0/0)         0
  5 unassigned     wm       0                     0        (0/0/0)         0
  6 unassigned     wm       0                     0        (0/0/0)         0
  7 unassigned     wm   24544 - 24614      100.16MB       (71/0/0)    205119

3.2 Install the Sun Cluster 3.1 Software Using the Java ES Installer

Following are example screen shots of the Java ES installer.

Figure 1: Java ES Installation Wizard

After you launch the installation wizard and accept the license agreements, you are prompted for inputs by the installation wizard.

Figure 2: Wizard Welcome Screen

Select the Sun Cluster 3.1 software from the installation wizard, as shown in Figure 3.

Figure 3: Choosing Sun Cluster 3.1 Software

3.3 Configure the Sun Cluster 3.1 Software

After installing the Sun Cluster 3.1 software, first register both nodes (reticent.example.com and loquacious.example.com) as the two nodes of the cluster. The name of the newly created cluster is Calendarcluster.

Then, run the configuration program, scinstall, located under the /usr/cluster/bin directory. After the installation completes, reboot all the nodes in the cluster.

# pwd
/usr/cluster/bin

# ls -rlt scinstall

-r-xr-xr-x 1 root bin 74027 Aug 20 2006 scinstall

Here is an excerpt from running ./scinstall. The cluster named Calendarcluster has been created.

# ./scinstall

<<< Excerpt from Sun Cluster Installation >>>

Your responses indicate the following options to scinstall:

scinstall -i \
   -C Calendarcluster \
   -F \
   -T node=loquacious,node=reticent,authtype=sys \
   -w netaddr=172.16.0.0,netmask=255.255.248.0,maxnodes=64,maxprivatenets=10 \
   -A trtype=dlpi,name=qfe0 -A trtype=dlpi,name=qfe1 \
   -B type=switch,name=switch1 -B type=switch,name=switch2 \
   -m endpoint=:qfe0,endpoint=switch1 \
   -m endpoint=:qfe1,endpoint=switch2 \
   -P task=quorum,state=INIT

Are these the options you want to use (yes/no) [yes]?

Do you want to continue with this configuration step (yes/no) [yes]?

Checking device to use for global devices file system ... done
:
:
<< Omitted lines. See Section 9.1 for a detailed output. >>

Initializing cluster name to "Calendarcluster" ... done
(/dev/did/rdsk/d2s2) added; votecount = 1, bitmask of nodes with configured paths = 0x3.
Nov 17 16:32:15 reticent cl_runtime: NOTICE: CMM: Cluster members: loquacious reticent.
Nov 17 16:32:15 reticent cl_runtime: NOTICE: CMM: node reconfiguration #6 completed.

4. Installing the Calendar Server 6.3

To install the Calendar Server 6.3, use the procedures presented in the following sections:

4.1 Select the Calendar Server 6.3 in the Installation Wizard
4.2 Specify the Installation Directories
4.3 Choose the Configure Later Option
4.4 Verify the Installation

4.1 Select the Calendar Server 6.3 in the Installation Wizard

Using the Sun Java Communications Suite 5 installer, install the Calendar Server 6.3.

Select Calendar Server 6.3 from the Choose Software Components screen, as shown in Figure 4. You must run the Directory Preparation Tool after the installation is complete. For instructions on running the Directory Preparation Tool, see the Sun Java Communications Suite Installation Guide (http://docs.sun.com/app/docs/doc/819-7560).

Figure 4: Choosing Calendar Server 6.3

4.2 Specify the Installation Directories

At the Specify Installation Directories screen, shown in Figure 5, choose the default Calendar Server installation root, /opt.

Figure 5: Specifying Installation Directories

4.3 Choose the Configure Later Option

Select the Configure Later option, as shown in Figure 6. See Section 5 of this document for information on how to configure the Calendar Server.

Figure 6: Choosing the Configure Later Option

4.4 Verify the Installation

Verify that the installation is complete, as shown in Figure 7.

Figure 7: Installation Complete Screen

Once installed, all the Calendar Server data can be found under /opt/SUNWics5/cal.

[dt120194@algorithms]/opt/SUNWics5/cal 39 % ls -rl
total 39460
drwxr-xr-x   3 root     bin          512 Apr 16 15:22 tools
drwxr-xr-x   4 root     bin          512 Apr 16 15:22 share
drwxr-xr-x   2 root     bin         1024 Apr 16 15:22 sbin
drwxr-xr-x   4 root     bin         2048 Apr 16 15:22 lib
drwxr-xr-x  11 root     bin         2048 Apr 16 15:23 html
drwxr-xr-x   8 root     bin          512 Apr 16 15:23 csapi
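A quick scripted check of the install tree can replace eyeballing the listing above. The check_cal_install helper below is our own (not part of the product); its directory list matches the ls output shown.

```shell
# check_cal_install: sanity-check that the expected subdirectories exist
# under the Calendar Server install root. Hypothetical helper; the
# directory names come from the listing above.
check_cal_install() {
  root=$1
  for d in tools share sbin lib html csapi; do
    if [ ! -d "$root/$d" ]; then
      echo "missing: $root/$d"
      return 1
    fi
  done
  echo "install tree ok"
}

# Example: check_cal_install /opt/SUNWics5/cal
```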

5. Configuring the Calendar Server 6.3

To configure the Calendar Server 6.3, use the procedures presented in the following sections:

5.1 Start the Configuration Wizard
5.2 Specify the Fully Qualified Host Name
5.3 Specify the Directory in Which to Store the Calendar Server Configuration
5.4 Specify the Archive and Hot Backup Directories
5.5 Specify the Calendar Server User
5.6 Verify That the Configuration Was Successful

5.1 Start the Configuration Wizard

Bring up the configuration wizard by running the Calendar Server csconfigurator.sh program.

# cd /opt/SUNWics5/cal/sbin

# ./csconfigurator.sh

Figure 8 shows the first screen of the Calendar Server Configuration Wizard. In the examples that follow, only the screens important to the high availability configuration are shown. For the intervening screens, fill in the appropriate information and continue until you reach the next screen shown in the examples.

Figure 8: Configuration Wizard Welcome Screen

5.2 Specify the Fully Qualified Host Name

Answer the appropriate questions, and when prompted for the fully qualified host name, enter the logical host name, as shown in Figure 9. In our example, the logical host name is atlantic.example.com.

Figure 9: Specifying the Fully Qualified Host Name

5.3 Specify the Directory in Which to Store the Calendar Server Configuration

When prompted for the directory in which to store configuration and data files, choose the directories mounted on the shared disk, as shown in Figure 10. Table 2 lists all the directories in which to store configuration and data files.

Table 2: Directories for Storing Configuration and Data Files

Wizard Field                   Directory to Specify
Config Directory               /SharedDisk/config
Database Directory             /SharedDisk/csdb
Attachment Store Directory     /SharedDisk/astore
Logs Directory                 /SharedDisk/logs
Temporary Files Directory      /SharedDisk/tmp

Figure 10: Specifying Directories in Which to Store Configuration and Data Files

Once the location of these directories is specified, click Create Directory, as shown in Figure 11, to create the directories.

Figure 11: Creating New Configuration Directory
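If you prefer to pre-create the Table 2 directories from the shell rather than with the wizard's Create Directory button, a sketch follows. The make_cs_dirs helper is our own; the icsuser/icsgroup ownership matches Section 5.5, and the chown must be run as root on the real shared disk.

```shell
# make_cs_dirs: pre-create the Calendar Server directories from Table 2
# under a given shared-disk base. Hypothetical helper, not part of the
# configuration wizard.
make_cs_dirs() {
  base=$1
  for d in config csdb astore logs tmp; do
    mkdir -p "$base/$d" || return 1
  done
}

# Example (as root): make_cs_dirs /SharedDisk && chown -R icsuser:icsgroup /SharedDisk
```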

5.4 Specify the Archive and Hot Backup Directories

When prompted for the archive directory and the hot backup directory, choose the directories mounted on the shared disk, as shown in Table 3 and Figure 12.

Table 3: Directories for Archive and Hot Backup

Wizard Field            Directory to Specify
Archive Directory       /SharedDisk/csdb/archive
Hot Backup Directory    /SharedDisk/csdb/hotbackup

Figure 12: Specifying Archive and Hot Backup Directories

Once the location of these directories is specified, click Create Directory, as shown in Figure 13, to create the directories.

Figure 13: Creating New Archive and Backup Directories

5.5 Specify the Calendar Server User

When asked for the Calendar Server user and group, enter icsuser as the user name and icsgroup as the group, as shown in Figure 14. Make sure you select the Start on system startup option and deselect the Start after successful configuration option.

Figure 14: Specifying Calendar Server User

5.6 Verify That the Configuration Was Successful

When you choose the Configure Now option, verify that the configuration has completed successfully.

Figure 15: Ready to Configure Screen

6. Creating File Systems Using Solaris Volume Manager

To create the file systems, use the procedures presented in the following sections:

6.1 Create the Diskset
6.1.1 Create the State Database Replicas
6.1.2 Verify the State Database Replicas
6.1.3 Create the Diskset
6.2 Add the Mediator Hosts
6.2.1 Take Ownership of the Diskset
6.2.2 Add the Mediator Hosts
6.2.3 Verify That the Mediator Hosts Were Successfully Added
6.3 Add Drives to the Diskset
6.4 Create Volumes
6.4.1 Create Disk Stripes and Mirrors
6.4.2 Verify the Creation of Disk Stripes and Mirrors
6.4.3 Create a Soft Partition
6.5 Modify the md.tab File on Both Nodes
6.5.1 Modify md.tab on the Primary Node
6.5.2 Modify md.tab on the Secondary Node
6.6 Create UFS File Systems
6.6.1 Create the New UFS File Systems on the Raw Disks
6.6.2 Verify the Creation of the File System

6.1 Create the Diskset

Create a diskset using the steps that follow.

6.1.1 Create the State Database Replicas

Before you create the state database replicas, check whether state database replicas already exist on both nodes. If not, create state database replicas on each node.

For example, to check whether state database replicas already exist on both nodes, use the metadb command on each node.

# metadb
metadb: loquacious: there are no existing databases

# metadb
metadb: reticent: there are no existing databases

For example, to create the state database replicas, use the metadb -a -f -c command.

On node 1 (loquacious):

# metadb -a -f -c 3 c2t0d0s7

On node 2 (reticent):

# metadb -a -f -c 3 c2t0d0s7

6.1.2 Verify the State Database Replicas

Verify the state database replicas on each node using the metadb command.

On node 1 (loquacious):

# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c2t0d0s7
     a        u         8208            8192            /dev/dsk/c2t0d0s7
     a        u         16400           8192            /dev/dsk/c2t0d0s7

On node 2 (reticent):

# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c2t0d0s7
     a        u         8208            8192            /dev/dsk/c2t0d0s7
     a        u         16400           8192            /dev/dsk/c2t0d0s7

6.1.3 Create the Diskset

As in the following example, create a diskset named polarbear.

You can run this command on either node:

# metaset -s polarbear
metaset: loquacious: setname "polarbear": no such set

Create the diskset:

# metaset -s polarbear -a -h loquacious reticent

Verify the creation of the diskset:

# metaset -s polarbear
Set name = polarbear, Set number = 1

Host                Owner
  loquacious
  reticent

To determine which disks are shared, use the scdidadm -L command, as in the following example. (The example output below was captured on a different pair of nodes, telstra and shaw.)

# scdidadm -L
1        telstra:/dev/rdsk/c0t6d0       /dev/did/rdsk/d1
2        telstra:/dev/rdsk/c1t0d0       /dev/did/rdsk/d2
2        shaw:/dev/rdsk/c1t0d0          /dev/did/rdsk/d2
3        telstra:/dev/rdsk/c1t1d0       /dev/did/rdsk/d3
3        shaw:/dev/rdsk/c1t1d0          /dev/did/rdsk/d3
4        telstra:/dev/rdsk/c1t4d0       /dev/did/rdsk/d4
4        shaw:/dev/rdsk/c1t4d0          /dev/did/rdsk/d4
5        telstra:/dev/rdsk/c1t6d0       /dev/did/rdsk/d5
5        shaw:/dev/rdsk/c1t6d0          /dev/did/rdsk/d5
6        telstra:/dev/rdsk/c2t1d0       /dev/did/rdsk/d6
7        telstra:/dev/rdsk/c2t0d0       /dev/did/rdsk/d7
8        shaw:/dev/rdsk/c0t6d0          /dev/did/rdsk/d8
9        shaw:/dev/rdsk/c2t1d0          /dev/did/rdsk/d9
10       shaw:/dev/rdsk/c2t0d0          /dev/did/rdsk/d10

From the previous output, you can determine that the following disks are shared:

/dev/did/rdsk/d2
/dev/did/rdsk/d3
/dev/did/rdsk/d4
/dev/did/rdsk/d5
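The same determination can be scripted: in scdidadm -L output, a DID device listed for more than one node is shared. A sketch follows, with the sample output above embedded as text; on a live cluster you would pipe scdidadm -L itself into the awk stage.

```shell
# Pick shared DID devices out of `scdidadm -L` output: count how many
# node entries reference each DID path (field 3) and keep the ones seen
# more than once. Sample text matches the output shown above.
scdidadm_out='1 telstra:/dev/rdsk/c0t6d0 /dev/did/rdsk/d1
2 telstra:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
2 shaw:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 telstra:/dev/rdsk/c1t1d0 /dev/did/rdsk/d3
3 shaw:/dev/rdsk/c1t1d0 /dev/did/rdsk/d3
4 telstra:/dev/rdsk/c1t4d0 /dev/did/rdsk/d4
4 shaw:/dev/rdsk/c1t4d0 /dev/did/rdsk/d4
5 telstra:/dev/rdsk/c1t6d0 /dev/did/rdsk/d5
5 shaw:/dev/rdsk/c1t6d0 /dev/did/rdsk/d5
6 telstra:/dev/rdsk/c2t1d0 /dev/did/rdsk/d6
7 telstra:/dev/rdsk/c2t0d0 /dev/did/rdsk/d7
8 shaw:/dev/rdsk/c0t6d0 /dev/did/rdsk/d8
9 shaw:/dev/rdsk/c2t1d0 /dev/did/rdsk/d9
10 shaw:/dev/rdsk/c2t0d0 /dev/did/rdsk/d10'

shared=$(printf '%s\n' "$scdidadm_out" |
  awk '{seen[$3]++} END {for (d in seen) if (seen[d] > 1) print d}' | sort)
printf '%s\n' "$shared"
```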

6.2 Add the Mediator Hosts

To add mediator hosts, use the steps that follow.

6.2.1 Take Ownership of the Diskset

From the primary node, loquacious, take ownership of the diskset.

# metaset -s polarbear -t

6.2.2 Add the Mediator Hosts

# metaset -s polarbear -a -m loquacious reticent

6.2.3 Verify That the Mediator Hosts Were Successfully Added

Run the medstat command to verify that the mediator hosts were successfully added to the polarbear diskset.

# medstat -s polarbear

Mediator            Status      Golden
loquacious          Ok          No
reticent            Ok          No
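For scripted monitoring, the medstat output can be checked mechanically. The medstat_all_ok helper below is our own sketch: it reads medstat output on stdin, skips the header row, and fails if any mediator's status column is not Ok.

```shell
# medstat_all_ok: succeed only if every mediator row in `medstat -s <set>`
# output reports Ok in the Status column. Hypothetical helper.
medstat_all_ok() {
  awk 'NR > 1 && $2 != "Ok" {bad = 1} END {exit bad}'
}

# Example: medstat -s polarbear | medstat_all_ok && echo "mediators healthy"
```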

6.3 Add Drives to the Diskset

Add drives to the polarbear diskset and verify that the drives were added.

# metaset -s polarbear -a /dev/did/rdsk/d2 /dev/did/rdsk/d3

# metaset -s polarbear

Set name = polarbear, Set number = 1

Host                Owner
  loquacious        Yes
  reticent

Drive   Dbase
d2      Yes
d3      Yes

6.4 Create Volumes

To create volumes, use the steps that follow.

6.4.1 Create Disk Stripes and Mirrors

Create disk stripes and mirrors, as shown in the following example.

# metainit -s polarbear d20 1 1 /dev/did/rdsk/d2s0
polarbear/d20: Concat/Stripe is setup

# metainit -s polarbear d30 1 1 /dev/did/rdsk/d3s0
polarbear/d30: Concat/Stripe is setup

# metainit -s polarbear d200 -m d20
polarbear/d200: Mirror is setup

# metainit -s polarbear d300 -m d30
polarbear/d300: Mirror is setup

6.4.2 Verify the Creation of Disk Stripes and Mirrors

Verify that the disk stripes and mirrors have been successfully created, as shown in the following example.

# metastat -s polarbear -p

polarbear/d300 -m polarbear/d30 1
polarbear/d30 1 1 /dev/did/rdsk/d3s0
polarbear/d200 -m polarbear/d20 1
polarbear/d20 1 1 /dev/did/rdsk/d2s0

6.4.3 Create a Soft Partition

Create a volume in the shared diskset with soft partitions, as shown in the following example.

# metainit -s polarbear d2000 -p d200 10g
d2000: Soft Partition is setup

# metainit -s polarbear d3000 -p d300 10g
d3000: Soft Partition is setup

6.5 Modify the md.tab File on Both Nodes

To modify the md.tab file on both nodes, use the steps that follow.

6.5.1 Modify md.tab on the Primary Node

On node loquacious, run the following commands to modify the /etc/lvm/md.tab file.

cd /etc/lvm
cp md.tab md.tab.back
metastat -s polarbear -p >> /etc/lvm/md.tab

Running these commands causes the md.tab file to look like the following example on the primary node, loquacious.

cat /etc/lvm/md.tab on loquacious

#
# Hot Spare Pool of devices
#
# hsp001 /dev/dsk/c1t0d0s0
# blue/hsp001 /dev/dsk/c2t0d0s0
#
# 100MB Soft Partition
#
# d1 -p /dev/dsk/c1t0d0s1 100M
# blue/d1 -p /dev/dsk/c2t0d0s1 100M
polarbear/d300 -m polarbear/d30 1
polarbear/d30 1 1 /dev/did/rdsk/d3s0
polarbear/d200 -m polarbear/d20 1
polarbear/d20 1 1 /dev/did/rdsk/d2s0
polarbear/d2000 -p /dev/md/orion/rdsk/d200 -o 209715264 -b 209715200
polarbear/d3000 -p /dev/md/orion/rdsk/d300 -o 32 -b 209715200
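The copy-and-append step can also be wrapped in a small function so the backup is never forgotten. The append_mdtab helper below is our own sketch; the configuration text would normally come from metastat -s <set> -p.

```shell
# append_mdtab: back up md.tab, then append diskset configuration text
# to it. Hypothetical helper wrapping the cp/metastat steps above.
append_mdtab() {
  mdtab=$1 config=$2
  cp "$mdtab" "$mdtab.back" || return 1   # keep a backup first
  printf '%s\n' "$config" >> "$mdtab"
}

# Example: append_mdtab /etc/lvm/md.tab "$(metastat -s polarbear -p)"
```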

6.5.2 Modify md.tab on the Secondary Node

Edit the /etc/lvm/md.tab file on the secondary node, reticent, by copying the same entries from md.tab on the primary node, loquacious.

vi /etc/lvm/md.tab

cat /etc/lvm/md.tab on reticent

#
# Hot Spare Pool of devices
#
# hsp001 /dev/dsk/c1t0d0s0
# blue/hsp001 /dev/dsk/c2t0d0s0
#
# 100MB Soft Partition
#
# d1 -p /dev/dsk/c1t0d0s1 100M
# blue/d1 -p /dev/dsk/c2t0d0s1 100M
polarbear/d300 -m polarbear/d30 1
polarbear/d30 1 1 /dev/did/rdsk/d3s0
polarbear/d200 -m polarbear/d20 1
polarbear/d20 1 1 /dev/did/rdsk/d2s0
polarbear/d2000 -p /dev/md/orion/rdsk/d200 -o 209715264 -b 209715200
polarbear/d3000 -p /dev/md/orion/rdsk/d300 -o 32 -b 209715200

6.6 Create UFS File Systems

To create the UFS file systems, use the steps that follow.

6.6.1 Create the New UFS File Systems on the Raw Disks

Create the new UFS file systems on the raw disks using the newfs command.

# newfs /dev/md/polarbear/rdsk/d3000
newfs: construct a new file system /dev/md/polarbear/rdsk/d3000: (y/n)? y
Warning: 4432 sector(s) in last cylinder unallocated
/dev/md/polarbear/rdsk/d3000: 71118512 sectors in 11576 cylinders of 48 tracks, 128 sectors
        34725.8MB in 724 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups: ......
super-block backups for last 10 cylinder groups at:
 70190368, 70288800, 70387232, 70485664, 70584096, 70682528, 70780960, 70879392, 70977824, 71076256

# newfs /dev/md/polarbear/rdsk/d2000
newfs: construct a new file system /dev/md/polarbear/rdsk/d2000: (y/n)? y
Warning: 4432 sector(s) in last cylinder unallocated
/dev/md/polarbear/rdsk/d2000: 71118512 sectors in 11576 cylinders of 48 tracks, 128 sectors
        34725.8MB in 724 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups: ......
super-block backups for last 10 cylinder groups at:
 70190368, 70288800, 70387232, 70485664, 70584096, 70682528, 70780960, 70879392, 70977824, 71076256

6.6.2 Verify the Creation of the File System

Run the fstyp command on both volumes to verify that the file systems were successfully created.

# fstyp /dev/md/polarbear/dsk/d2000
ufs

# fstyp /dev/md/polarbear/dsk/d3000
ufs

7. Useful Sun Cluster Administration Commands

The following sections provide useful commands for performing administration tasks:

7.1 Determine the Cluster Status
7.2 Bring the Resource Group Online
7.3 Verify That the Resource Group Is Online
7.4 Display the Status of the Quorum Device
7.5 Display the Status of the Cluster Nodes
7.6 Display the Configuration of the Cluster
7.7 Check the Validity of the /etc/vfstab File

7.1 Determine the Cluster Status

# scstat -pvv

Detailed output of this command is shown in Section 9.2 of this document.

7.2 Bring the Resource Group Online

# scswitch -Z -g CAL-RG

7.3 Verify That the Resource Group Is Online

# scstat -g

Detailed output of this command is shown in Section 9.3 of this document.

7.4 Display the Status of the Quorum Device

# scstat -q

Detailed output of this command is shown in Section 9.4 of this document.

7.5 Display the Status of the Cluster Nodes

# scstat -n

-- Cluster Nodes --

Node name                   Status
---------                   ------
Cluster node: loquacious    Online
Cluster node: reticent      Online

7.6 Display the Configuration of the Cluster

# ./scconf -p | grep -i cluster

Detailed output of this command is shown in Section 9.5 of this document.

7.7 Check the Validity of the /etc/vfstab File

The sccheck configuration check utility verifies that the /etc/vfstab file entries are correct on both nodes of the cluster and that the mount points exist. If there are no errors, nothing is returned.

# sccheck

8. Enabling Cluster Debugging

To enable cluster debugging, use the procedures presented in the following sections:

8.1 Log Messages From the Calendar Server Agents
8.2 Edit the syslog.conf File

8.1 Log Messages From the Calendar Server Agents

The following commands increase the log level to the maximum possible value (9).

mkdir -p /var/cluster/rgm/rt/SUNW.scics
echo 9 > /var/cluster/rgm/rt/SUNW.scics/loglevel

8.2 Edit the syslog.conf File

To log all the debug messages to the file /var/adm/sunclusterlog, add the following line to /etc/syslog.conf. (Use a tab, not spaces, between the selector and the file name.)

daemon.debug /var/adm/sunclusterlog

Then, signal the syslogd daemon to reread its configuration.

# pkill -HUP syslogd
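An idempotent version of this edit avoids appending the line twice if the procedure is rerun. The enable_daemon_debug helper below is our own sketch; it writes the tab-separated line required by classic syslogd only when it is not already present.

```shell
# enable_daemon_debug: append the daemon.debug line to a syslog.conf file
# only if it is not already there. Hypothetical helper; note the tab
# between selector and file name, which syslogd requires.
enable_daemon_debug() {
  conf=$1 log=$2
  grep -q '^daemon\.debug' "$conf" || printf 'daemon.debug\t%s\n' "$log" >> "$conf"
}

# Example (as root):
#   enable_daemon_debug /etc/syslog.conf /var/adm/sunclusterlog
#   touch /var/adm/sunclusterlog && pkill -HUP syslogd
```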

9. Example Output From Commands

The following sections provide the output from commands used in previous sections:

9.1 Output of scinstall
9.2 Output of scstat
9.3 Output of scstat -g
9.4 Output of scstat -q
9.5 Output of scconf

9.1 Output of scinstall

Initializing authentication options ... done
Initializing configuration for adapter "qfe0" ... done
Initializing configuration for adapter "qfe1" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for switch "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Initializing private network address options ... done

Setting the node ID for "loquacious" ... done (id=1)

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done

Configuring IP multipathing groups ...done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.111706161631
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Log file - /var/cluster/logs/install/scinstall.log.12414

Rebooting ...

Connection to loquacious closed.
Nov 17 15:47:44 loquacious last message repeated 2 times
Nov 17 16:16:32 loquacious reboot: rebooted by softwareRunner
Nov 17 16:16:32 loquacious syslogd: going down on signal 15
syncing file systems... done

rebooting... Resetting ...
screen not found.
keyboard not found.
Keyboard not present.  Using ttya for input and output.

Sun Fire 280R (2 X UltraSPARC-III+), No Keyboard
Copyright 1998-2002 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.5, 2048 MB memory installed, Serial #53447534.
Ethernet address 0:3:ba:2f:8b:6e, Host ID: 832f8b6e.

Rebooting with command: boot
Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfcb2a60,0:a File and args:
SunOS Release 5.10 Version Generic_118833-31 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Configuring devices.
Hostname: loquacious
SUNW,eri0 : 100 Mbps full duplex link up
NIS domain name is example.com
Loading smf(5) service descriptions: 41/41
scdidadm: Could not load DID instance list.
scdidadm: Cannot open /etc/cluster/ccr/did_instances.
Booting as part of a cluster
NOTICE: CMM: Node loquacious (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node loquacious: attempting to join cluster.
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node loquacious (nodeid = 1) is up; new incarnation number = 1163809224.
NOTICE: CMM: Cluster members: loquacious.
NOTICE: CMM: node reconfiguration #1 completed.
NOTICE: CMM: Node loquacious: joined cluster.
ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
did instance 1 created.
did subpath loquacious:/dev/rdsk/c0t6d0 created for instance 1.
did instance 2 created.
did subpath loquacious:/dev/rdsk/c1t0d0 created for instance 2.
did instance 3 created.
did subpath loquacious:/dev/rdsk/c1t1d0 created for instance 3.
did instance 4 created.
did subpath loquacious:/dev/rdsk/c1t4d0 created for instance 4.
did instance 5 created.
did subpath loquacious:/dev/rdsk/c1t6d0 created for instance 5.
did instance 6 created.
did subpath loquacious:/dev/rdsk/c2t1d0 created for instance 6.
did instance 7 created.
did subpath loquacious:/dev/rdsk/c2t0d0 created for instance 7.
Configuring DID devices
obtaining access to all attached disks
loquacious console login: Configuring the /dev/global directory (global devices)
loquacious console login: Nov 17 16:21:56 loquacious svc.startd[8]: system/cluster/scsymon-srv:default misconfigured: transitioned to maintenance (see 'svcs -xv' for details)
loquacious console login: Nov 17 16:22:07 loquacious java[1726]: pkcs11_softtoken: Keystore version failure.

>>> Autodiscovery of Cluster Transport <<<

If you are using Ethernet or Infiniband adapters as the cluster transport adapters, autodiscovery is the best method for configuring the cluster transport.

Do you want to use autodiscovery (yes/no) [yes]?

Probing ......

The following connections were discovered:

loquacious:qfe0    switch1    reticent:qfe0
loquacious:qfe1    switch2    reticent:qfe1

Is it okay to add these connections to the configuration (yes/no) [yes]?

>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -i \
    -C Calendarcluster \
    -N loquacious \
    -A trtype=dlpi,name=qfe0 -A trtype=dlpi,name=qfe1 \
    -m endpoint=:qfe0,endpoint=switch1 \
    -m endpoint=:qfe1,endpoint=switch2

Checking device to use for global devices file system ... done

Adding node "reticent" to the cluster configuration ... done
Adding adapter "qfe0" to the cluster configuration ... done
Adding adapter "qfe1" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "loquacious" ... done

Copying the postconfig file from "loquacious" if it exists ... done

Setting the node ID for "reticent" ... done (id=2)

Verifying the major number for the "did" driver with "loquacious" ... done

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done

Configuring IP multipathing groups ...done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.111706162651
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done

Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Updating file ("ntp.conf.cluster") on node loquacious ... done
Updating file ("hosts") on node loquacious ... done

Log file - /var/cluster/logs/install/scinstall.log.12641

Rebooting ...

Rebooting with command: boot
Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfe49765,0:a File and args:
SunOS Release 5.10 Version Generic_118833-31 64-bit
Copyright 1983-2007 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Configuring devices.
Hostname: reticent
SUNW,eri0 : 100 Mbps full duplex link up
NIS domain name is example.com
Loading smf(5) service descriptions: 39/41 40/41 41/41
scdidadm: Could not load DID instance list.
scdidadm: Cannot open /etc/cluster/ccr/did_instances.
Booting as part of a cluster
NOTICE: CMM: Node loquacious (nodeid = 1) with votecount = 1 added.
NOTICE: CMM: Node reticent (nodeid = 2) with votecount = 0 added.
NOTICE: clcomm: Adapter qfe1 constructed
NOTICE: clcomm: Adapter qfe0 constructed
NOTICE: CMM: Node reticent: attempting to join cluster.
NOTICE: CMM: Node loquacious (nodeid: 1, incarnation #: 1163809224) has become reachable.
NOTICE: clcomm: Path reticent:qfe1 - loquacious:qfe1 online
NOTICE: clcomm: Path reticent:qfe0 - loquacious:qfe0 online
NOTICE: CMM: Cluster has reached quorum.
NOTICE: CMM: Node loquacious (nodeid = 1) is up; new incarnation number = 1163809224.
NOTICE: CMM: Node reticent (nodeid = 2) is up; new incarnation number = 1163809844.
NOTICE: CMM: Cluster members: loquacious reticent.
NOTICE: CMM: node reconfiguration #3 completed.
NOTICE: CMM: Node reticent: joined cluster.
ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
DID subpath "/dev/rdsk/c1t0d0s2" created for instance "2".
DID subpath "/dev/rdsk/c1t1d0s2" created for instance "3".
DID subpath "/dev/rdsk/c1t4d0s2" created for instance "4".
DID subpath "/dev/rdsk/c1t6d0s2" created for instance "5".
did instance 8 created.
did subpath reticent:/dev/rdsk/c0t6d0 created for instance 8.
did instance 9 created.
did subpath reticent:/dev/rdsk/c2t1d0 created for instance 9.
did instance 10 created.
did subpath reticent:/dev/rdsk/c2t0d0 created for instance 10.
Configuring DID devices
obtaining access to all attached disks
reticent console login: Configuring the /dev/global directory (global devices)
Nov 17 16:32:13 reticent cl_runtime: NOTICE: CMM: Cluster members: loquacious reticent.
Nov 17 16:32:13 reticent cl_runtime: NOTICE: CMM: node reconfiguration #4 completed.
Nov 17 16:32:14 reticent cl_runtime: NOTICE: CMM: Votecount changed from 0 to 1 for node reticent.
Nov 17 16:32:14 reticent cl_runtime: NOTICE: CMM: Cluster members: loquacious reticent.
Nov 17 16:32:14 reticent cl_runtime: NOTICE: CMM: node reconfiguration #5 completed.
Nov 17 16:32:15 reticent cl_runtime: NOTICE: CMM: Quorum device 1

9.2 Output of scstat

-- Cluster Nodes --

Node name                   Status
---------                   ------
Cluster node: loquacious    Online
Cluster node: reticent      Online

------

-- Cluster Transport Paths --

Endpoint                          Endpoint          Status
--------                          --------          ------
Transport path: loquacious:qfe1   reticent:qfe1     Path online
Transport path: loquacious:qfe0   reticent:qfe0     Path online

------

-- Quorum Summary --

Quorum votes possible:      3
Quorum votes needed:        2
Quorum votes present:       3

-- Quorum Votes by Node --

Node Name                 Present  Possible  Status
---------                 -------  --------  ------
Node votes: loquacious    1        1         Online
Node votes: reticent      1        1         Online

-- Quorum Votes by Device --

Device Name                         Present  Possible  Status
-----------                         -------  --------  ------
Device votes: /dev/did/rdsk/d2s2    1        1         Online

------

-- Device Group Servers --

Device Group                      Primary     Secondary
------------                      -------     ---------
Device group servers: dsk/d1      -           -
Device group servers: dsk/d2      -           -
Device group servers: dsk/d3      -           -
Device group servers: dsk/d4      -           -
Device group servers: dsk/d5      -           -
Device group servers: dsk/d6      -           -
Device group servers: dsk/d7      -           -
Device group servers: dsk/d8      -           -
Device group servers: dsk/d9      -           -
Device group servers: dsk/d10     -           -
Device group servers: polarbear   reticent    loquacious

-- Device Group Spares --

Device Group                      Spare Nodes
------------                      -----------
Device group spares: dsk/d1       -
Device group spares: dsk/d2       -
Device group spares: dsk/d3       -
Device group spares: dsk/d4       -
Device group spares: dsk/d5       -
Device group spares: dsk/d6       -
Device group spares: dsk/d7       -
Device group spares: dsk/d8       -
Device group spares: dsk/d9       -
Device group spares: dsk/d10      -
Device group spares: polarbear    -

-- Device Group Inactives --

Device Group                      Inactive Nodes
------------                      --------------
Device group inactives: dsk/d1    -
Device group inactives: dsk/d2    -
Device group inactives: dsk/d3    -
Device group inactives: dsk/d4    -
Device group inactives: dsk/d5    -
Device group inactives: dsk/d6    -
Device group inactives: dsk/d7    -
Device group inactives: dsk/d8    -
Device group inactives: dsk/d9    -
Device group inactives: dsk/d10   -
Device group inactives: polarbear -

-- Device Group Transitions --

Device Group                        In Transition Nodes
------------                        -------------------
Device group transitions: dsk/d1    -
Device group transitions: dsk/d2    -
Device group transitions: dsk/d3    -
Device group transitions: dsk/d4    -
Device group transitions: dsk/d5    -
Device group transitions: dsk/d6    -
Device group transitions: dsk/d7    -
Device group transitions: dsk/d8    -
Device group transitions: dsk/d9    -
Device group transitions: dsk/d10   -
Device group transitions: polarbear -

-- Device Group Status --

Device Group                      Status
------------                      ------
Device group status: dsk/d1       Offline
Device group status: dsk/d2       Offline
Device group status: dsk/d3       Offline
Device group status: dsk/d4       Offline
Device group status: dsk/d5       Offline
Device group status: dsk/d6       Offline
Device group status: dsk/d7       Offline
Device group status: dsk/d8       Offline
Device group status: dsk/d9       Offline
Device group status: dsk/d10      Offline
Device group status: polarbear    Online

-- Multi-owner Device Groups --

Device Group        Online Status
------------        -------------

------

-- Resource Groups and Resources --

Group Name          Resources
----------          ---------
Resources: MS-RG    atlantic calendar-hasp-resource calendar-cs-resource

-- Resource Groups --

Group Name      Node Name      State
----------      ---------      -----
Group: MS-RG    loquacious     Offline
Group: MS-RG    reticent       Online

-- Resources --

Resource Name        Node Name    State    Status Message
-------------        ---------    -----    --------------
Resource: atlantic   loquacious   Offline  Offline - LogicalHostname offline.
Resource: atlantic   reticent     Online   Online - LogicalHostname online.

Resource: calendar-hasp-resource   loquacious   Offline  Offline
Resource: calendar-hasp-resource   reticent     Online   Online

Resource: calendar-cs-resource   loquacious   Offline  Offline - Stop Succeeded
Resource: calendar-cs-resource   reticent     Online   Online - Start succeeded.

------

-- IPMP Groups --

Node Name                 Group      Status   Adapter   Status
---------                 -----      ------   -------   ------
IPMP Group: loquacious    sc_ipmp0   Online   eri0      Online
IPMP Group: reticent      sc_ipmp0   Online   eri0      Online

------

9.3 Output of scstat -g

-- Resource Groups and Resources --

Group Name          Resources
----------          ---------
Resources: MS-RG    atlantic calendar-hasp-resource calendar-cs-resource

-- Resource Groups --

Group Name      Node Name      State
----------      ---------      -----
Group: MS-RG    reticent       Offline
Group: MS-RG    loquacious     Online

-- Resources --

Resource Name        Node Name    State    Status Message
-------------        ---------    -----    --------------
Resource: atlantic   reticent     Offline  Offline - LogicalHostname offline.
Resource: atlantic   loquacious   Online   Online - LogicalHostname online.

Resource: calendar-hasp-resource   reticent     Offline  Offline
Resource: calendar-hasp-resource   loquacious   Online   Online

Resource: calendar-cs-resource   reticent     Offline  Offline - Stop Succeeded
Resource: calendar-cs-resource   loquacious   Online   Online - Start succeeded.

9.4 Output of scstat -q

-- Quorum Summary --

Quorum votes possible:   3
Quorum votes needed:     2
Quorum votes present:    3

-- Quorum Votes by Node --

            Node Name       Present   Possible   Status
            ---------       -------   --------   ------
Node votes: loquacious      1         1          Online
Node votes: reticent        1         1          Online

-- Quorum Votes by Device --

              Device Name            Present   Possible   Status
              -----------            -------   --------   ------
Device votes: /dev/did/rdsk/d2s2     1         1          Online
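The vote arithmetic above is typical of a two-node cluster: each node contributes one vote and the shared disk d2 serves as a quorum device with a third vote, so a majority of two votes lets the surviving node keep the cluster running if its peer fails. As a hedged sketch (the DID device name d2 is taken from the listing above), a quorum device is registered on Sun Cluster 3.1 with scconf, run once from any one cluster node:

```shell
# Register shared DID device d2 as a quorum device:
scconf -a -q globaldev=d2

# Verify the resulting quorum configuration and vote counts:
scstat -q
```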

9.5 Output of scconf

Cluster name:                            Calendarcluster
Cluster ID:                              0x46153CB8
Cluster install mode:                    disabled
Cluster private net:                     172.16.0.0
Cluster private netmask:                 255.255.0.0
Cluster new node authentication:         unix
Cluster new node list:                   <. - Exclude all nodes>
Cluster transport heart beat timeout:    10000
Cluster transport heart beat quantum:    1000
Cluster nodes:                           loquacious reticent

Cluster node name:                       loquacious
  Node private hostname:                 clusternode1-priv

Cluster node name:                       reticent
  Node private hostname:                 clusternode2-priv

Cluster transport junctions:             switch1 switch2
Cluster transport junction:              switch1
Cluster transport junction:              switch2
Cluster transport cables
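The configuration report above is produced by scconf's print option on Sun Cluster 3.1. A minimal sketch, run on any cluster node:

```shell
# Print the cluster configuration (the report shown in section 9.5):
scconf -p

# Add verbosity flags for progressively more detail,
# such as per-transport-cable endpoints:
scconf -pvv
```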

10. References

● Sun Java System Calendar Server 6.3 Administration Guide: http://docs.sun.com/app/docs/doc/819-4654

● Sun Java Communications Suite 5 Installation Guide: http://docs.sun.com/app/docs/doc/819-7560

● Sun Cluster Software Installation Guide for Solaris OS: http://docs.sun.com/app/docs/doc/817-6543

● Sun Cluster 3.1 Software Installation Guide: http://docs.sun.com/app/docs/doc/816-3388

● Sun Cluster 3.1 System Administration Guide: http://docs.sun.com/app/docs/doc/816-3384

● Solaris Volume Manager Administration Guide: http://docs.sun.com/app/docs/doc/819-2789

11. For More Information

Here are some additional resources:

● Downloads for Sun Java Communications Suite software, related software, and software updates:

● http://www.sun.com/software/communications_suite/get.xml

● http://www.sun.com/bigadmin/hubs/comms/downloads/updates.jsp

● Sun training courses at http://www.sun.com/training/:

● Sun Java System Communication Services Administration (MSG-2379)

● Sun Cluster 3.1 Administration (ES-338)

● Sun Cluster 3.1 Advanced Administration (ES-438)

● Solaris Volume Manager Administration (ES-222)

● Open source resources: http://www.sun.com/software/opensource/learnmore.jsp

● Developer forum for Sun Java System Calendar Server: http://forum.java.sun.com/forum.jspa?forumID=737

● Documents:

● Sun Java Communications Suite 5 communications services manuals: http://www.sun.com/bigadmin/hubs/comms/library/manuals.commserv.jsp

● Sun Java Communications Suite 5 Calendar Server manuals: http://www.sun.com/bigadmin/hubs/comms/library/manuals.calendar.jsp

● Related sites, tools, and articles:

● BigAdmin Communications Suite Hub, which has resources for Sun Java Communications Suite 5 software: http://www.sun.com/bigadmin/hubs/comms

● Tools and scripts for Sun Java Communications Suite 5: http://www.sun.com/bigadmin/hubs/comms/downloads/tools.jsp

● Technical articles and white papers for Sun Java Communications Suite 5: http://www.sun.com/bigadmin/hubs/comms/library/techarticles.jsp

● What's New site for Sun Java Communications Suite 5: http://www.sun.com/bigadmin/hubs/comms/overview/whatisnew.jsp

● Configuring Sun Java System Messaging Server 6.3 With Sun Cluster 3.1 Software: http://www.sun.com/bigadmin/features/hub_articles/message_srvr_cluster.jsp

● Installing and Configuring Sun Cluster 3.1 09/04 Software for High-Availability Applications: http://www.sun.com/bigadmin/features/articles/install_cluster.html

12. Licensing Information

Unless otherwise specified, the use of this software is authorized pursuant to the terms of the license found at http://www.sun.com/bigadmin/common/berkeley_license.html.


Installing and Configuring Sun Java System Calendar Server 6.3 With Sun Cluster 3.1 Software
August 2007
Author: Durga Deep Tirunagari

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2011, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 1010