An Archived Oracle Technical Paper September 2009

Maintaining Solaris Systems with Solaris Live Upgrade and Update on Attach

Important note: this paper was originally published before the acquisition of Sun Microsystems by Oracle in 2010. The original paper is enclosed and distributed as-is. It refers to products that are no longer sold and references technologies that have since been re-named.

Maintaining Solaris™ Systems with Solaris Live Upgrade and Update on Attach

Hartmut Streppel, Sun Microsystems
Dirk Augustin, Sun Microsystems
Martin Müller, Sun Microsystems

Sun BluePrints™ Online

Part No 821-0247-10 Revision 1.0, 09/03/09


Contents

Maintaining Solaris Systems with Solaris Live Upgrade and Update on Attach...... 1
Minimizing Disruption When Upgrading the Solaris OS...... 2
Patch Management with Sun™ xVM Ops Center...... 2

Using Solaris Live Upgrade...... 3
Using a ZFS™ File System with Solaris Live Upgrade...... 3
Synchronization in Solaris Live Upgrade...... 4
Commands Overview...... 4
A Simple Solaris Live Upgrade Example...... 5
Creating and Configuring the New Boot Environment...... 5
Applying Patches to the New Boot Environment...... 5
Activating the New Boot Environment...... 6
Methods for Patching and Upgrading Solaris Cluster Nodes...... 7
Standard Upgrade...... 7
Rolling Upgrade...... 8
Dual-Partition Upgrade...... 8

Using Solaris Update on Attach...... 10
Commands Overview...... 10
Update on Attach Examples...... 11
Limitations of the Update on Attach Feature...... 12
Moving the Local Zones...... 12
Shutting Down and Exporting the Local Zones...... 12
Importing the Local Zones...... 14
Rolling Back the Updated Local Zones...... 17
Update on Attach in a Solaris Cluster Environment...... 18
The Sample Configuration...... 18
Failover Zones...... 19
Update Procedure Overview...... 20
Upgrading the First Cluster Node...... 20
Running the Update on Attach Feature...... 22
Bringing the Failover Zone Back Under Cluster Control...... 23
Upgrading the Second Node...... 25

Conclusion...... 27
About the Authors...... 27
Acknowledgments...... 27
References...... 28
Ordering Sun Documents...... 29
Accessing Sun Documentation Online...... 29


Appendix...... 30
Output of the lucreate Command...... 30
Output of the luupgrade Command...... 30
Output of the luactivate Command...... 32


Chapter 1
Maintaining Solaris Systems with Solaris Live Upgrade and Update on Attach

The improved reliability of current systems means that unplanned downtime is rare. In contrast, planned downtime is considered almost unavoidable, and it is estimated that over 90% of application downtime is planned. While some planned downtime can be attributed to hardware issues, maintaining, upgrading, and replacing application and system software typically accounts for most of the planned downtime. Solaris™ Live Upgrade and the Solaris update on attach feature can each reduce the planned downtime required for operating system and application software upgrades.

This Sun BluePrints™ article explains how to use Solaris Live Upgrade and the Solaris update on attach feature, with Solaris Zones1, Solaris Cluster, and the ZFS™ file system for software maintenance in general and the application of operating system patches in particular.

The article addresses the following topics:
• “Using Solaris Live Upgrade” on page 3 explains how to upgrade standard or clustered systems using Solaris Live Upgrade.
• “Using Solaris Update on Attach” on page 10 explains how to upgrade standard or clustered systems using the Solaris update on attach feature.

This article is targeted at Solaris system administrators, infrastructure architects, datacenter architects, and anyone who requires detailed knowledge of nondisruptive upgrade techniques and technologies in the Solaris Operating System (OS). The article expands on information currently available on upgrading the Solaris OS and assumes a familiarity with Solaris 10 OS, the ZFS file system, and Solaris Zones. In addition, it covers topics that are only relevant for those familiar with the deployment of Solaris systems clustered with Solaris Cluster software.

Note: In the case of failure, the procedures described in this Sun BluePrints article can result in data loss. To avoid unintentional disruption to production systems, the following precautions are recommended:
• Back up all data before running any of the examples or commands
• Use ZFS snapshots to save the current state prior to running any of the procedures, to help roll back in case of failure
• Test the procedures first on test systems that are identical to the production systems

1. The terms Solaris Container and Solaris Zone are interchangeable. For consistency, the term used throughout this BluePrints article is Solaris Zone, or simply zone; however, other documentation from Sun and other sources might use Solaris Container or container in similar contexts.

Minimizing Disruption When Upgrading the Solaris OS

Using traditional techniques, upgrading the operating environment of a production system causes application downtime, since the upgrade usually requires system downtime. With Solaris Live Upgrade and the update on attach feature, available as standard features of the Solaris OS, system downtime is minimized because most of the upgrade is executed without affecting the functionality of the operating environment. Downtime due to upgrade failures is also minimized when ZFS file system snapshots are used, because the system can easily be reverted to its previous state if required.

Solaris Live Upgrade enables Solaris Operating Systems (versions 2.6, 7, 8, 9, and 10) to continue to run while an administrator upgrades to a later release of the OS. It includes the capability to create and modify an inactive duplicate boot environment and perform maintenance tasks — such as installing patches — while the primary environment continues to run undisturbed. The duplicate boot environment is then designated as the active boot environment and the changes are activated by rebooting the system.

With the update on attach feature, available with Solaris 10 10/08 (Update 6) or later, a local zone can be updated automatically when it is attached to a global zone following the migration of the local zone between systems. The update on attach feature relies on Solaris Zones.

The increasing deployment of virtualization technologies enables business-critical applications to be isolated from the underlying hardware and to run transparently on different servers. This flexibility allows administrators to further reduce application downtime, both planned and unplanned, by moving applications to alternative servers when executing an upgrade of their usual platform.

Patch Management with Sun™ xVM Ops Center

Comparing and reconciling patch levels is essential to successfully upgrading and maintaining systems with multiple virtual operating system environments using Solaris Live Upgrade and the update on attach feature. This is true irrespective of whether the environments are in standalone, consolidated, virtualized, shared storage, or clustered configurations, or any combination thereof.

At the time of writing, Sun xVM Ops Center does not support the complex patch and upgrade scenarios described in this Sun BluePrints article.


Chapter 2
Using Solaris Live Upgrade

Solaris Live Upgrade enables system administrators to create and upgrade a boot environment that is initially inactive, without affecting the running system. The inactive boot environment is created with the zones it contains. Once created, the inactive boot environment is upgraded, and the necessary patches and packages are installed. The inactive boot environment can then be designated as the active boot environment, and when the system is next rebooted, it is booted using the newly created boot environment with no additional downtime.

Before running Solaris Live Upgrade and creating a new boot environment, the Solaris Live Upgrade packages that match the target version of the OS must be installed on the system to be updated, with the latest release of the appropriate patches. Further details are available in “Installation Guide: Solaris Live Upgrade and Upgrade Planning” in the Solaris installation guide relevant to the target version of Solaris. See “References” on page 28 for a link to this article for Solaris 10 5/09.

Note: It is essential that the appropriate version of Solaris Live Upgrade, including the latest applicable patches, is installed prior to running any of the examples described in this article.
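As a quick sanity check before proceeding, the presence of the Solaris Live Upgrade packages can be verified with pkginfo. This is a minimal sketch; the package names shown (SUNWlucfg, SUNWlur, and SUNWluu) are the Live Upgrade packages shipped with recent Solaris 10 releases and might differ on older media:

s10u6# pkginfo SUNWlucfg SUNWlur SUNWluu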

Two common use-cases for Solaris Live Upgrade are described in this chapter:
• A system that boots from a ZFS file system, including a detailed example
• A clustered environment that requires minimal downtime

Note: While Live Upgrade with Solaris Cluster is generally supported, it cannot be used at the time of writing with Solaris Cluster systems that boot from a ZFS root file system. This restriction is expected to be removed in a later release. See “References” on page 28 for a link to instructions for performing a Live Upgrade to a Solaris Cluster system which does not boot from a ZFS root file system.

Using a ZFS™ File System with Solaris Live Upgrade

The ZFS file system is used in this example because Solaris Live Upgrade can use ZFS file system clones for the boot environment file systems. ZFS file system clones are writable ZFS file system snapshots that can be created almost instantaneously and initially consume virtually no disk space. While Solaris Live Upgrade can be used with other file systems, creating a boot environment on a file system other than ZFS is likely to take a very long time. This is because when a ZFS file system is cloned, a file block is copied only when its contents are updated, whereas cloning any other file system requires every file system block to be physically copied to the clone when it is created.
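The speed and space efficiency of ZFS clones can be demonstrated on any data set. The following is a minimal sketch using a hypothetical data set named tank/data; Solaris Live Upgrade performs equivalent operations internally on the boot environment data sets:

host# zfs snapshot tank/data@before
host# zfs clone tank/data@before tank/data-clone
host# zfs list -o name,used,refer tank/data-clone

Both the snapshot and the clone are created almost immediately, and the clone initially reports close to zero space used.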


Synchronization in Solaris Live Upgrade

Solaris Live Upgrade maintains the administrative integrity of the system when the newly activated boot environment is booted by copying critical files and directories that have changed between the creation of the newly activated boot environment and the time the system is booted from it. These files are listed in /etc/lu/synclist. This process is called synchronization, and is meant for files such as /etc/passwd or /etc/group. Despite the synchronization process, once the alternative boot environment is created, it is not advisable to apply critical changes to the operating environment until the new boot environment is activated.
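To see which files and directories are subject to synchronization on a given system, the synchronization list can simply be inspected (a read-only check; the exact contents vary between Solaris releases):

s10u6# cat /etc/lu/synclist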

Commands Overview

Solaris Live Upgrade consists of a set of Solaris commands to manage boot environments. These commands are listed in Table 1.

Table 1. Solaris Live Upgrade commands

lucreate: Create a new boot environment based on the current or another boot environment; join or separate the file systems of a boot environment onto a new boot environment; specify separate file systems belonging to a particular zone inside the new boot environment; and create the file systems for a boot environment
luactivate: Activate the specified boot environment by making the boot environment’s root partition bootable
lucompare: Compare the contents of the current boot environment with another boot environment
lucurr: Display the name of the currently running boot environment
ludelete: Delete all records associated with a boot environment
ludesc: Create or update boot environment descriptions created with ludesc or lucreate
lufslist: List the configuration of a boot environment
lumake: Populate the file systems of a specified boot environment
lumount, luumount: Mount or unmount the boot environment’s file systems to access the boot environment’s files while the boot environment is not active
lurename: Rename the boot environment
lustatus: Display the status information for boot environments
luupgrade: Install software on a boot environment — upgrade an operating system image, extract a Solaris Flash archive onto a boot environment, add or remove a package or patch to or from a boot environment, check or obtain information about packages

Note: In the past, the lu(1M) command provided a Forms and Menu Language Interpreter (FMLI) based user interface for Solaris Live Upgrade administration. While it is still present in Solaris 10, Sun no longer recommends using the lu command.

A Simple Solaris Live Upgrade Example

In the following simple example, a ZFS file system based system and its zones are upgraded with the following steps:
• The new boot environment is created and configured
• Patches are applied to the new, still inactive boot environment
• The new boot environment is activated
• The system is rebooted

Creating and Configuring the New Boot Environment

The lucreate command, used to create the new, inactive boot environment, has a single mandatory command line option, -n, to name the boot environment. In addition, the lucreate command supports several other command line options, including an option to create the new boot environment based on the currently active boot environment or on another, previously created boot environment. The user can also specify separate file systems belonging to a particular zone inside the new boot environment, and can create the file systems for a boot environment but leave them unpopulated.

In the following example, the lucreate command is used to create the new boot environment named new_be from the currently active boot environment:

s10u6# lucreate -n new_be

Note: When lucreate is invoked for the first time in a given instance of the Solaris OS environment, the system generates a default name that is used to save the currently active boot environment. Alternatively, the -c command line option can be used to assign a user-defined name for the active boot environment.
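For example, to give the currently active boot environment an explicit name while creating the new one, both options can be combined; the names old_be and new_be are only placeholders:

s10u6# lucreate -c old_be -n new_be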

As the lucreate command runs, it reports on the various stages of creating and populating the new boot environment. See section “Output of the lucreate Command” on page 30 in the “Appendix” for a full listing of the output of the lucreate command.

Applying Patches to the New Boot Environment

The luupgrade command is used to install software on an inactive boot environment, after it is created by the lucreate command. The luupgrade command performs the following functions:
• Upgrade the operating system image on the inactive boot environment
• Add or remove packages or patches to or from the inactive boot environment

Note: A full operating system upgrade will not be discussed in this Sun BluePrints article.


The luupgrade command does not require the version of the Solaris OS it is running on to be identical to that of the Solaris OS it is installing on the boot environment. However, this capability is subject to several limitations, as described in the luupgrade(1M) manual page. The lustatus command is used to check the status of a boot environment, after luupgrade has completed.
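A quick way to make this check is to display the boot environment status; before activation, the Is Complete column for new_be should read yes (the exact column names might vary slightly between Solaris releases):

s10u6# lustatus new_be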

The following example demonstrates the use of luupgrade to install patch 139503-01 on an inactive boot environment called new_be, without disrupting the existing environment. The patch is automatically installed in the global and local zones of new_be. The directory containing the patch is specified with the -s command line option. The -s option can be used to point to a patch order file, or to a directory with patches, packages, or a Solaris OS image:

s10u6# luupgrade -t -n new_be -s /var/tmp/139503-01

As the luupgrade command runs, it reports on its actions:
• Validate the patch
• Add the patch to new_be
• Report which patches are accepted or rejected for installation on new_be
• Patch the zones in new_be
• Report on success or failure

See section “Output of the luupgrade Command” on page 30 in the “Appendix” for a full listing of the output of the luupgrade command in this example.

Activating the New Boot Environment

The luactivate command is used to activate new_be:

s10u6# luactivate new_be

The luactivate command activates new_be by making its root partition bootable. It then displays a lengthy message that warns the user to reboot only with the init or shutdown commands, and that provides recovery instructions in case the system fails to boot with new_be. For a full listing of the output of the luactivate command, see section “Output of the luactivate Command” on page 32 in the “Appendix”.

For successful activation:

• new_be cannot have mounted partitions

• The lustatus command must report new_be as complete

• new_be cannot be involved in a comparison with the lucompare(1M) command

While running the luactivate command defines the new boot environment as the active boot environment, it only becomes the running boot environment after a system reboot. In the interim, due to routine system administration actions, the previously active boot environment might diverge from the newly activated boot environment. To minimize the extent of this divergence, it is recommended that the system be rebooted as quickly as possible, and that actions that would cause the new boot environment to diverge from the currently running boot environment be avoided. To reboot, either the init or shutdown command must be used, since the activation of the new boot environment is completed as part of an orderly shutdown.
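For example, the reboot can be triggered as follows; using init (or shutdown) preserves the orderly shutdown that completes the activation, whereas commands such as reboot or halt would bypass it:

s10u6# init 6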

Note: For the duration of the reboot the users experience downtime. However, a reboot can be expected to take a relatively short time and can be scheduled for a time that minimizes disruption.

Methods for Patching and Upgrading Solaris Cluster Nodes

Solaris Cluster environments are by nature mission-critical, with higher reliability and resilience requirements compared to typical noncluster environments. There are three different approaches to upgrading and patching a clustered environment: standard upgrade, rolling upgrade, and dual-partition upgrade. To minimize downtime and improve availability and reliability, each of these approaches can be combined with Solaris Live Upgrade. This section provides an overview of these approaches.

Standard Upgrade

A standard upgrade is performed as follows:
• All resource groups and resources are shut down and brought to the unmanaged state
• The cluster is shut down
• The cluster nodes are rebooted into noncluster mode
• The upgrades are performed
• All of the nodes are rebooted into cluster mode
• The services are re-enabled

Without using Solaris Live Upgrade, this procedure requires relatively long cluster downtime, since the upgrade is executed while the applications are unavailable. However, if a new boot environment is installed and activated on each node using Solaris Live Upgrade, the only cluster downtime required is during a cluster reboot, because the upgrade itself is performed on the inactive boot environments. In addition, the risk of unplanned downtime is reduced, because an upgrade failure can be resolved by reactivating the previous boot environment and rebooting the cluster.
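A minimal sketch of the cluster-wide sequence, assuming each node already has an updated boot environment named new_be prepared with lucreate and luupgrade as described above (the cluster shutdown command shown is from Solaris Cluster 3.2):

node1# luactivate new_be
node2# luactivate new_be
node1# cluster shutdown -g0 -y

After the shutdown completes, each node is booted again and comes up in cluster mode running from the new boot environment.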


Rolling Upgrade

During a rolling upgrade, the cluster is always up and services are always available, except for the duration of their switch between cluster nodes. The following steps are performed on each node that is upgraded as part of a rolling upgrade:
• All of the services on the node to be upgraded are moved to other nodes
• The node is rebooted into noncluster mode
• The node is upgraded
• The upgraded node is rebooted into cluster mode and rejoins the cluster

Following the reboot of the first upgraded node and until all the nodes are upgraded, the cluster runs different versions of the upgraded software on different nodes. This state is only permitted to occur as an interim state for the purpose of the rolling upgrade and while it is in progress.

While a cluster node is removed from the cluster to perform the upgrade, the cluster runs with a lower level of redundancy. The duration of this higher-risk period can be minimized by upgrading each node with Solaris Live Upgrade while the cluster is running as normal. Once a node is upgraded, it can be rebooted and join the cluster so that the cluster is back to its original level of redundancy. The time the cluster runs with mixed software versions and at reduced redundancy can be minimized by performing the reboot of several nodes simultaneously. As with the standard upgrade, the risk of unplanned downtime is reduced since an upgrade failure can be resolved by reactivating the previous boot environment and rebooting the cluster.

Note: A rolling upgrade is not possible with a major Solaris OS or Solaris Cluster upgrade.
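A minimal per-node sketch of the Live Upgrade variant of the rolling upgrade described above, assuming a prepared boot environment named new_be; clnode evacuate switches all resource and device groups off the node before the reboot:

node1# clnode evacuate node1
node1# luactivate new_be
node1# init 6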

Dual-Partition Upgrade

Where a rolling upgrade is not possible — for example, when executing a major version upgrade of the Solaris OS or Solaris Cluster — the dual-partition upgrade method can be used. In this method, the cluster is split into two sets of nodes called partitions. The Solaris Cluster software helps to decide how to lay out the partitions by proposing a partition layout during the initiation of a dual-partition upgrade, such that all services can run in either partition and have access to that partition’s storage. Once the old partition takes over the HA services and the other partition is booted into noncluster mode, the other partition can be upgraded or patched.

During the reboot of the now upgraded or patched cluster partition, the Solaris Cluster software synchronizes with the old and still active partition and performs the following actions:
• Stop the services still running in the old partition
• Detach the active storage controlled by cluster resources from the old partition


• Shut down the old partition
• Attach the storage previously detached to the new partition
• Continue to boot the upgraded partition into cluster mode
• Start the HA services in the new partition

At this point, the old partition should be upgraded and rebooted into cluster mode.

Solaris Live Upgrade can be used to speed up the upgrade process, with the partitioning and reboot occurring only after each node has had an updated boot environment installed and activated. As with the other upgrade methods, the risk of unplanned downtime is reduced since an upgrade failure can be resolved by reactivating the previous boot environment and rebooting the cluster.

Note: One way that the dual-partition upgrade differs from a rolling upgrade is that the cluster is down at one point in time, but the new version of the cluster becomes available shortly afterward. The length of time that users experience a service disruption with rolling and dual-partition upgrades is similar.



Chapter 3
Using Solaris Update on Attach

Solaris Zones allow each application to run in a self-contained, isolated, movable environment. Solaris Zones provide applications with the security and reliability of a dedicated server for each application, without the need to provide this server with its own hardware platform. In addition, moving an application to a different hardware platform with the same architecture becomes a simple exercise in shutting down the application and its zone on one platform, and restarting them on the other platform2.

Each instance of the Solaris OS consists of a single global zone that controls the hardware resources, and any number of local zones that use the global zone to access these resources. When a local zone is moved between systems, the global zone on the destination system might not have the same packages and patch levels as the global zone on the source system. As a result, once the local zone is moved, it must be updated to match the destination system’s global zone. If many local zones are deployed on a system, updating them when they are moved using traditional tools requires a significant effort. Once initiated by the user, this process can be automated with the update on attach feature, which examines a local zone as it is moved to a new platform. The update on attach feature then determines the packages that need to be upgraded to match the global zone that controls the new platform, and executes the upgrade.

Note: It is essential that the latest applicable patches are installed on the target systems prior to running any of the examples described.

Commands Overview

Update on attach is a feature of Solaris Zones that is controlled by several subcommands of the zoneadm and zonecfg commands. These commands are listed in Table 2.

Table 2. Solaris Zone commands relevant to update on attach

zoneadm list: Display the name and details of zones
zoneadm detach: Detach the specified zone, as the first step in moving it from one system to another
zoneadm attach: Attach a previously detached zone
zonecfg create: Create an in-memory configuration for the specified zone

2. Moving the zone can be achieved either by restarting the zone on the new platform when the new platform has access to the storage that contains the zone’s file system — i.e., its zonepath — or by first copying the zone’s file system from the original platform to the new platform and restarting the zone.

Executing the various tasks associated with update on attach is simplified by certain features of the ZFS file system, most notably by its snapshot capability, which facilitates a quick rollback. The ZFS file system commands used in the update on attach examples are listed in Table 3.

Table 3. ZFS file system commands relevant to update on attach

zfs snapshot: Create a snapshot
zfs list: List the property information of ZFS file system data sets and snapshots
zfs rollback: Roll back a data set to its previous snapshot
zpool export: Export ZFS file system pools from the system; the devices are marked as exported, but are still considered in use by other subsystems
zpool import: List or import ZFS file system pools

Solaris Cluster resources and resource groups are managed by the clrs and clrg commands respectively. The Solaris Cluster commands used in the update on attach in a Solaris Cluster environment example are listed in Table 4.

Table 4. Solaris Cluster commands to manage resources that are relevant to update on attach

clrg switch: Changes the node or zone that is mastering a resource group
clrg status: Generates a status report for resource groups
clrg offline: Takes a resource group offline
clrg online: Brings a resource group online
clrs disable: Disables the specified resources
clrs enable: Enables the specified resources
clrs status: Generates a status report for the specified resources

Update on Attach Examples

Two examples of using the update on attach feature are described:
• Moving local zones (zone1 and zone2) that share storage from one system (oldhost) to another system (newhost), resulting in the automatic upgrade of the local zones to the patch and package level of newhost’s global zone. Subsequently, the zones are rebooted. The update is then reversed by using a ZFS snapshot taken before the move, and the zones are returned to oldhost.


• Update on attach on a failover zone tzone on a two-node cluster consisting of node1 and node2, controlled by Solaris Cluster 3.2 1/09 software. tzone’s zonepath is on a ZFS file system on shared storage. tzone is moved off of node1, the global zone on node1 is upgraded, tzone is moved back to node1, and tzone is upgraded to the patch and package level of the global zone on node1.

Limitations of the Update on Attach Feature

Update on attach is relevant only to Solaris Zones and not to other virtualization technologies. In addition, it is subject to the following limitations and prerequisites:
• The systems must run Solaris 10 10/08 (Update 6) or later, the first version of the Solaris OS to support the update on attach feature (a quick way to check the installed release is shown after this list).
• When running the update on attach feature on a local zone, it only updates packages that are required to be consistent in all zones.
• Neither the source nor target operating environments can include patches classed as Interim Diagnostic Relief (IDR) patches. When IDR patches are present in the source operating environment, they must be removed before upgrading the system3.
• Update on attach does not update application software.
• The patch level of each package and patch on newhost must be equal to or higher than its equivalent on oldhost.
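To check the first prerequisite, the installed Solaris release can be displayed on each system involved; the output should name Solaris 10 10/08 or a later update:

oldhost# cat /etc/release
newhost# cat /etc/release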

For further details of these and other limitations, please see the article “The Zones Update on Attach Feature and Patching in the Solaris 10 OS”, through the link to it in “References” on page 28.

3. For Solaris 10 5/09 or later, the IDR patches can be removed automatically by using the -b option to zoneadm when attaching the local zone in step 8 below.

Moving the Local Zones

Moving the local zones consists of the following steps, described in the sections to follow:
• Shutting down the local zones and detaching them, saving their state using a ZFS file system snapshot, and exporting the zpool containing their zonepaths.
• Importing the zpool containing the zonepaths to the destination host and attaching the zones while running the update on attach feature.
• Rolling back the zones to their previous state if the update on attach feature fails to successfully update them.

Shutting Down and Exporting the Local Zones

The process of exporting the zones consists of the following steps:

1. Run the zoneadm list command to display all of the zones that are currently installed on oldhost. The -v command line option enables verbose output, and -i and -c display all of the installed and configured zones. In this instance, there are two local zones — zone1 and zone2 — with the zonepaths /shared-zones/zone1 and /shared-zones/zone2 respectively:

oldhost# zoneadm list -vic
  ID NAME     STATUS     PATH                  BRAND    IP
   0 global   running    /                     native   shared
   3 zone1    running    /shared-zones/zone1   native   shared
   4 zone2    running    /shared-zones/zone2   native   shared

2. Shut down the local zones:

oldhost# zlogin zone1 shutdown -i 0
oldhost# zlogin zone2 shutdown -i 0

Each invocation of the shutdown command runs the shutdown scripts inside the zone, removes the zone’s runtime resources, and halts it.

3. When the state of the zone changes from running to installed, as shown by the zoneadm list command, the zones have been halted and can be detached:

oldhost# zoneadm -z zone1 detach
oldhost# zoneadm -z zone2 detach

The zoneadm detach command saves each zone’s package and patch state information into a single file. These files are used to reattach the zones using the zoneadm attach command, once each of the zones is moved.

4. Generate a recursive, read-only snapshot of the shared-zones zpool, called shared-zones@downrev. This snapshot retains the complete state of the zones’ file systems:

oldhost# zfs snapshot -r shared-zones@downrev

5. List the ZFS data sets currently known on oldhost. Note the presence of the shared-zones@downrev snapshot, which is not mounted, in addition to the shared-zones file system:

oldhost# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
...
shared-zones           4.18G  15.4G  4.14G  /shared-zones
shared-zones@downrev   46.3M      -  4.13G  -

6. Export the pool to prepare it to be moved:

oldhost# zpool export shared-zones


Note: A ZFS file system pool cannot be exported if it has a spare device that is shared with other pools and is currently used.

7. List the zones on oldhost. Only the global zone should be running:

oldhost# zoneadm list -vic
  ID NAME     STATUS       PATH                  BRAND    IP
   0 global   running      /                     native   shared
   - zone1    configured   /shared-zones/zone1   native   shared
   - zone2    configured   /shared-zones/zone2   native   shared

Before moving a local zone, the patch levels of the packages the local zone contains should be compared with those of the global zone running on the local zone’s host, to check that they are identical. This should be done as part of regular system maintenance to identify inconsistencies introduced in error. The patch levels of the global and local zones can be compared by examining the list of packages and patches for each zone, generated with the pkginfo and showrev -p commands. Alternatively, Sun xVM Ops Center can be used to automate this process.
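A minimal sketch of such a comparison, performed while the zones are still running (that is, before the detach steps above); the file names under /var/tmp are arbitrary:

oldhost# showrev -p | sort > /var/tmp/global.patches
oldhost# zlogin zone1 showrev -p | sort > /var/tmp/zone1.patches
oldhost# diff /var/tmp/global.patches /var/tmp/zone1.patches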

Note: Only the packages that are installed in the global zone on newhost and that have the SUNW_PKG_ALLZONES package parameter set to true are updated. The SUNW_PKG_ALLZONES parameter indicates that these packages must be consistent across all zones, or are in a package directory inherited by the zone. The rest of the packages can diverge and remain untouched.
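Whether a given package carries this parameter can be checked with pkgparam; SUNWcsu is used here only as an example package name:

newhost# pkgparam -v SUNWcsu | grep ALLZONES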

Given that oldhost and newhost share storage, once the zones are halted and detached from oldhost and the snapshot is prepared, the zpool containing their zonepaths can be exported from oldhost and imported on newhost, where the zones are then reattached.

Importing the Local Zones

Once the zones are detached on oldhost and the zpool is available to newhost, the zones can be attached and activated on newhost with the following steps:

1. Run the zpool import command without any parameters to see what pools are available for import:

newhost# zpool import
  pool: shared-zones
    id: 9978900745171968029
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        shared-zones  ONLINE
          c0d1        ONLINE


2. Run the zpool import command with the pool name to execute the import:

newhost# zpool import shared-zones

3. Run the zoneadm list command to determine what zones are running on this system. Only the global zone should be listed:

newhost# zoneadm list -vic
  ID NAME     STATUS     PATH   BRAND    IP
   0 global   running    /      native   shared

4. Run the zfs list command to check that the file systems that are part of the shared-zones zpool are mounted:

newhost# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
...
shared-zones           4.18G  15.4G  4.14G  /shared-zones
shared-zones@downrev   46.3M      -  4.13G  -

5. If the file systems are not mounted, run the zfs mount command to mount them. This can take one of two forms, depending on the value of the mountpoint property set by the zfs set command. If the mountpoint property was set to a specific path, run the following command:

newhost# zfs mount shared-zones

If the mountpoint property was set to legacy, run the following command:

newhost# mount -F zfs shared-zones /shared-zones

6. Create the new configured zones with the zonecfg create command. Use the -z option to specify the name of the zone, and -a to define the path for the detached zone that was moved to newhost:

newhost# zonecfg -z zone1 create -a /shared-zones/zone1
newhost# zonecfg -z zone2 create -a /shared-zones/zone2

Note: The silent success of the zonecfg command does not indicate that the transfer of the zone was successful, since the zone is validated when it is attached, not when it is configured.


7. Run the zoneadm attach command, without invoking the update on attach feature, to try to attach zone1 to newhost. In this example, oldhost has a single package — SUNWstosreg — that has not been installed on newhost, and two patches — 119963 and 120812 — that are not at the same revision level on both hosts. These discrepancies cause the zoneadm attach command to fail:

newhost# zoneadm -z zone1 attach
These packages installed on the source system are inconsistent with this system:
        SUNWstosreg: not installed (1.0,REV=2007.05.21.20.36)
These patches installed on the source system are inconsistent with this system:
        119963: version mismatch (10) (11)
        120812: version mismatch (25) (27)

8. Run the zoneadm attach command again to attach zone2 to newhost. This time invoke the update on attach feature by specifying the -u command line option:

newhost# zoneadm -z zone2 attach -u
Getting the list of files to remove
Removing 15 files
Remove 2 of 2 packages
Installing 31 files
Add 2 of 2 packages
Updating editable files
The file within the zone contains a log of the zone update.

The superfluous packages are removed and the missing packages are installed to make zone2 compatible with the package and patch state of the global zone on newhost.

Note: The time required for the update on attach feature to complete the software upgrade depends on the number of packages and patches that need to be updated. An invocation of the update on attach feature with only a few patches should normally take a short period of time — probably no more than a few minutes. However, complex operating system upgrades can take much longer.
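At this point the updated zone can be verified and booted on newhost; a minimal sketch:

newhost# zoneadm list -vic
newhost# zoneadm -z zone2 boot
newhost# zlogin zone2 uname -a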


Rolling Back the Updated Local Zones

When update on attach fails and the local zones are damaged, the local zones must be restored to their original state using the snapshot that was saved earlier4 (see “Shutting Down and Exporting the Local Zones” on page 12). The zpool with the zonepaths must first be exported from newhost:

1. Check that the snapshot exists on oldhost and that both zones are configured with the zoneadm list command:

oldhost# zoneadm list -vic
  ID NAME     STATUS       PATH                  BRAND    IP
   0 global   running      /                     native   shared
   - zone1    configured   /shared-zones/zone1   native   shared
   - zone2    configured   /shared-zones/zone2   native   shared

2. Run the zpool import command with no parameters to see whether the zpool is available for import:

oldhost# zpool import
  pool: shared-zones
    id: 9978900745171968029
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
        shared-zones  ONLINE
          c0d1        ONLINE

3. Run the zpool import command using the zpool name:

oldhost# zpool import shared-zones

4. Run a zfs list command to check that the snapshot is available on oldhost:

oldhost# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
...
shared-zones           4.19G  15.4G  4.14G  /shared-zones
shared-zones@downrev   46.3M      -  4.13G  -

4. Saving ZFS file system snapshots prior to executing a procedure that can potentially damage local zones, to help enable their roll-back to a prior state, is a recommended precaution to help minimize unplanned disruption.

5. Try to attach zone1 to oldhost using the zoneadm attach command. The command fails because upgrades that were applied by the update on attach feature to zone1 when it was attached to newhost prevent the reattachment of zone1 to oldhost:

oldhost# zoneadm -z zone1 attach
These packages installed on this system were not installed on the source system:
        SUNWstosreg (1.0,REV=2007.05.21.20.36)
These patches installed on the source system are inconsistent with this system:
        119963: version mismatch (11) (10)
        120812: version mismatch (27) (25)

6. Run the zfs rollback command on the saved snapshot to revert the shared-zones file system to its previous state and make it possible to reattach both local zones to oldhost:

oldhost# zfs rollback shared-zones@downrev
oldhost# zoneadm -z zone1 attach
oldhost# zoneadm -z zone2 attach

Update on Attach in a Solaris Cluster Environment

In the past, mission-critical applications were deployed on independent cluster environments, and there were no dependencies between the different clusters running the different applications. Today, as part of the growing need for more efficient application deployment, several mission-critical applications can be consolidated onto a single Solaris Cluster platform using Solaris Zones. This trend produces cluster deployments with many applications and failover zones.

By their nature, cluster environments require higher availability than nonclustered environments at both the application and infrastructure levels. This need implies that planned system and application downtime must be kept to the bare minimum, making it difficult to schedule planned downtime. When a cluster is shared by multiple applications this scheduling difficulty is further exacerbated by the need to coordinate planned downtime with multiple application owners. Conversely, when using the update on attach feature, a separate downtime can be set for each application and zone.

The Sample Configuration

In this example, the cluster to be patched consists of two nodes — node1 and node2. The Solaris OS version is Solaris 10 Update 5 with patches. The local zone, tzone, is controlled by the cluster; tzone is part of the resource group (RG) tzone-rg, and tzone’s zonepath is configured on shared storage in a ZFS file system mounted on /zones/tzone, as verified by running the zoneadm list command:

node1# zoneadm list -icv
  ID NAME     STATUS     PATH           BRAND    IP
   0 global   running    /              native   shared
   3 tzone    running    /zones/tzone   native   shared

Note: This example was tested using Solaris Cluster 3.2 1/09. Using update on attach to install Solaris Cluster patches is currently not supported.

Failover Zones

In a Solaris Cluster deployment with multiple applications, the applications can be switched from a node to be patched to another node in the cluster, freeing the first node to be patched or upgraded without the users experiencing any application downtime. In such a configuration, a rolling upgrade can provide optimal application availability. However, with failover zones, all zones installed on a given node must be patched simultaneously with the global zone. As a result, the downtime must be coordinated with all of the owners of applications running in failover zones. A rolling upgrade using update on attach can be used to circumvent this issue.

A failover zone is usually used to allow application owners to control their application’s operating environment, and they must manage their application within the zone. The failover zone is managed and monitored separately from the global zone. By attaching the shared storage with the failover zone’s zonepath to another cluster node, the other node can boot the failover zone and provide the service.

In a Solaris Cluster environment failover zones are configured using the HA Container agent. The HA Container agent integrates a failover zone into the Solaris Cluster environment by providing the means to start, stop, and monitor the zone. When a node with a failover zone fails, the failover zone is switched to another node. A Solaris Cluster failover zone is configured as a resource group. This resource group typically contains three resources:

• A storage resource of type HAStoragePlus, which controls the shared storage that hosts the failover zone’s file systems

• A logical IP address resource of type LogicalHostname that is used to access the failover zone

• The zone resource itself of the generic data service type (gds) used for the HA container agent, which is dependent on the storage and logical IP resources

Note: The dependency between the zone resource and the IP address and storage resources means that the zone resource can only be started if the IP address and storage resources are available.
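The dependencies configured for an existing failover zone resource can be reviewed with clrs show; tzone-rs is the zone resource used in the example that follows:

node1# clrs show -v tzone-rs | grep -i dependencies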


Update Procedure Overview

The following steps provide an overview of the update on attach process for failover zones. These steps are described in detail in the following sections:

1. Move all of the local zones to be updated with update on attach from node1 to node2 using cluster commands.

2. Upgrade the global zone on node1.

3. Reboot node1 to rejoin the cluster. The cluster now runs nodes with different patch or package versions or different Solaris OS minor releases. This intermediate state is not recommended for production systems and should only be allowed for as short a time as possible — while the upgrade is in progress.

4. For each failover zone on node2, remove the zone resource from the cluster software’s control, move the zonepath to node1 using cluster commands, attach the zone to node1, and upgrade it with the update on attach feature. To speed up the process, this can be performed in parallel for multiple zones. While the zones are updated, only the applications running in each zone suffer downtime.

5. Boot the zone once, then shut it down and instruct the cluster to regain control of it.

6. Upgrade node2.

Note: In the interests of simplicity, the steps to create ZFS snapshots to help enable roll-back if the update procedure described here fails are omitted from the description. See the section “Moving the Local Zones” on page 12 for an example of using ZFS snapshots for this purpose.

Upgrading the First Cluster Node

To upgrade node1, the resource group controlling tzone is switched to node2. node1 is then rebooted in noncluster mode and upgraded using one of the described procedures. This means that tzone is active on node2 and provides all of its services, while node1 can be upgraded without affecting the applications’ availability. Before starting this procedure, make sure that no LUN containing an affected zonepath is configured as a quorum device:

1. List the names of the Solaris Cluster resources with the clrs command to ascertain that this prerequisite is met:

node1# clrs list -v -g tzone-rg
Resource Name    Resource Type            Resource Group
-------------    -------------            --------------
tzone-ip         SUNW.LogicalHostname:2   tzone-rg
tzone-hasp       SUNW.HAStoragePlus:6     tzone-rg
tzone-rs         SUNW.gds:6               tzone-rg
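To check the quorum device prerequisite mentioned above, the configured quorum devices can also be listed; a minimal sketch using the Solaris Cluster 3.2 quorum command:

node1# clquorum status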


2. The clrg command manages Solaris Cluster data service resource groups. The clrg switch command changes the specified node or zone that is mastering a resource group. Run the clrg switch command to move resource group tzone-rg to node2:

node1# clrg switch -n node2 tzone-rg

3. After the upgrade, the global zone on node1 will run with a newer patch level than the local zone tzone, so if tzone is switched to node1 it could fail. To avoid this risk, prevent tzone from switching to node1 by setting tzone’s Nodelist property, as defined in the tzone-rg resource group, to include only the nodes that have not been upgraded yet:

node2# clrg set -p Nodelist=node2 tzone-rg

Note: The time the cluster is allowed to remain in this state should be minimized, since while tzone cannot be switched to node1, tzone has an increased risk of failure due to the reduced node count.

4. Run the zoneadm detach command to change tzone’s state from installed to configured, so that the subsequent update procedure on node1 does not attempt to update tzone. Use zoneadm list to verify that tzone’s state has in fact changed:

node1# zoneadm -z tzone detach
node1# zoneadm -z tzone list -v
  ID NAME     STATUS       PATH           BRAND    IP
   - tzone    configured   /zones/tzone   native   shared

Note: The zonepath itself has already been switched to the other node in step 2, above.

5. The detach operation creates the file /zones/tzone/SUNWdetached.xml in tzone’s now empty zonepath on node1. This file contains package and patch information about the detached zone. As the zone has already been switched, its contents are not needed. However, the existence of this file would interfere with the attach process. Remove it with the following command:

node1# rm -f /zones/tzone/SUNWdetached.xml

6. Reboot node1 in noncluster mode by using the -x option so that the global zone can be upgraded:

node1# reboot -- -x

7. Upgrade node1’s global zone and allow node1 to rejoin the cluster.


Running the Update on Attach Feature

The following steps move tzone’s zonepath to node1 and run the update on attach feature to upgrade tzone to the level of the global zone on node1:

8. Recall that node1 was removed from the list of nodes that can master tzone by removing it from the tzone Nodelist property in the tzone-rg resource group. Now return it, since the resource group is going to be switched over to node1:

node2# clrg set -p Nodelist=node1,node2 tzone-rg

9. To prevent the zone from booting when the resource group is switched over, the zone resource is disabled, shutting down tzone without detaching it. To switch over the zonepath using a cluster command, its HAStoragePlus resource must remain online together with its logical IP address, because there is an implicit dependency on the logical IP address. Check the status of the resource group and then disable the zone resource:

node2# clrg status tzone-rg

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
tzone-rg      node2        No           Online
              node1        No           Offline

node2# clrs disable tzone-rs

The IP address and the zonepath are still active while the local zone itself is disabled.

10. Detach tzone to change its state to configured so that the subsequent update procedure on node2 does not attempt to update tzone:

node2# zoneadm -z tzone detach

12. The zoneadm attach -u command, when invoked in step 14, initially looks for the /zones/tzone/SUNWdetached.xml file that contains the package and patch status of the zone. If one is found, the real package and patch status of the zone to be attached is not considered. While normally the real package and patch status of the zone and its reflection in /zones/tzone/SUNWdetached.xml are identical, there are certain cases where they are not. For this reason, the file is removed:

node2# rm -f /zones/tzone/SUNWdetached.xml


13. Run the clrg switch command to switch the tzone-rg resource group to node1 and import and mount the zpool containing tzone’s zonepath:

node2# clrg switch -n node1 tzone-rg

Note: In production environments with many failover zones, several zones can be moved and updated simultaneously to expedite the upgrade process.

14. Attach tzone to node1 using the -u option of the zoneadm attach command to run the update on attach feature. In the invocation shown here, running the update on attach feature fails since node1 contains a package with an earlier revision than that of tzone and the update on attach feature is not able to downgrade packages:

node1# zoneadm -z tzone attach -u
zoneadm: zone ‘node1-template’: ERROR: attempt to downgrade package SUNWdmcon 3.0.2,REV=2006.12.11.14.47.33 to version 3.0.2,REV=2006.10.13.15.53.40

15. After upgrading the appropriate package on node1, run the update on attach feature again; this time it succeeds:

node1# zoneadm -z tzone attach -u
Getting the list of files to remove
Removing 1115 files
Remove 785 of 785 packages
Installing 1455 files
Add 664 of 664 packages
Updating editable files
The file within the zone contains a log of the zone update.

Bringing the Failover Zone Back Under Cluster Control

At this stage, the failover zone is up to date and can be brought back into the cluster:

16. Boot tzone manually and verify that it booted successfully by logging into its console:

node1# zoneadm -z tzone boot
node1# zlogin -C tzone
tzone# ^D
node1#

Note: The first time tzone is booted it must be booted manually and not by the cluster software. This is because automatic administrative procedures that might be triggered during this first boot process can create a significant delay in its completion. This delay can be misinterpreted by the Solaris Cluster software as a failure of tzone.


17. Stop tzone:

node1# zlogin tzone shutdown -i 0

18. Check tzone’s status:

node1# clrs status -g tzone-rg

=== Cluster Resources ===

Resource Name    Node Name    State      Status Message
-------------    ---------    -----      --------------
tzone-ip         node2        Offline    Offline - LogicalHostname offline.
                 node1        Online     Online - LogicalHostname online.
tzone-hasp       node2        Offline    Offline
                 node1        Online     Online
tzone-rs         node2        Offline    Offline
                 node1        Offline    Offline

The output of the clrs status command shows that the resources tzone-ip and tzone-hasp, which represent the logical IP address and storage resources, are still online. The tzone-rs resource, which represents the failover zone itself, is still offline.

19. At this point, tzone and node1 have both been upgraded, while node2 has not. Thus, tzone and node2 are running different software versions and tzone should not be permitted to fail over to node2. To prevent tzone from failing over to node2, node2 is removed from the list of nodes that can host tzone by changing the Nodelist property in tzone’s resource group to include only node1:

node1# clrg set -p Nodelist=node1 tzone-rg

20. Enable tzone so that it will boot under cluster control:

node1# clrs enable tzone-rs

21. Check that tzone’s status is running:

node1# zoneadm -z tzone list -v
  ID NAME     STATUS     PATH           BRAND    IP
   3 tzone    running    /zones/tzone   native   shared


Upgrading the Second Node

Once all of the failover zones have been upgraded by moving them from node2 to node1 and invoking the update on attach feature, node2 can be upgraded using the same method that was used for node1.

22. Before upgrading node2, verify that the status of tzone on node2 is configured and not installed:

node2# zoneadm -z tzone list -v
  ID NAME     STATUS       PATH           BRAND    IP
   - tzone    configured   /zones/tzone   native   shared

23. Upgrade node2 as required.

24. Attach tzone to node2 using the -F option to force the operation. Following the upgrade of node2, tzone’s state remains configured on node2. Running the zoneadm attach command with the -F option changes the status to installed. While normally using -F is dangerous and not recommended, in this case it is permissible since tzone has just been upgraded correctly:

node2# zoneadm -z tzone attach -F
node2# zoneadm -z tzone list -v
  ID NAME     STATUS      PATH           BRAND    IP
   - tzone    installed   /zones/tzone   native   shared

Note: Normally, the attach command compares the state of the global zone and the local zone to be attached. However, it can be safely assumed that the process that has just been executed makes this comparison unnecessary. If the zone is disabled, switching its zonepath and re-attaching it would entail further non-essential service interruptions to the services on the zone. Running attach -F simply changes the zone’s state to installed, which is sufficient for the Solaris Cluster HA Container agent to work without disabling the zone.

25. After upgrading all of the nodes, revert the Nodelist property of tzone-rg to include both node1 and node2 to restore the cluster to its fully redundant configuration:

node1# clrg set -p Nodelist=node1,node2 tzone-rg

Now the failover zone is under full cluster control and can be switched between the nodes manually, or automatically in the event of a failure.


Preventing Upgrade Failures in Production Systems

The procedure to upgrade a cluster described here results in a short service interruption while the failover zones are switched, and while the update on attach is in progress. In addition, temporarily removing the nodes from the cluster while the upgrade is in progress creates an increased risk of failure due to reduced redundancy. If the upgrade fails, the risk is increased further since the cluster must function with fewer nodes for a longer period of time.

To minimize this risk, the upgrade should first be rehearsed on a separate test system that runs a snapshot of the current operating environment and is updated with the update on attach feature. Any failure during this rehearsal exposes, and helps resolve, issues that might otherwise cause the production upgrade to fail.
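
If the zone paths reside on ZFS datasets, one way to capture such a snapshot and move it to a test system is with ZFS send and receive. The following is a minimal sketch in which the dataset names, the test host name, and the target pool are hypothetical:

node1# zfs snapshot -r rpool/zones@pretest
node1# zfs send -R rpool/zones@pretest | ssh testhost zfs receive -d testpool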

The period of reduced redundancy can be shortened by using Live Upgrade to upgrade the individual nodes, since each node is fully functional while it is being upgraded, and is removed from the cluster only while it is rebooted.



Chapter 4 Conclusion

Virtualization technologies such as Solaris Zones create immense opportunity for a more cost-effective, efficient, flexible, and reliable datacenter environment, at the cost of increased complexity and some application interdependency. The maintenance requirements of the ever-increasing number of virtual operating system environments create challenges at both the technical and administration levels.

At the technical level, the systems must be upgraded to help ensure stability and functionality. At the administrative level, the large number of stakeholders in the interdependent applications creates serious challenges in planning and coordinating agreed maintenance downtime.

The maintenance technologies described in this Sun BluePrints article remove many of the interdependencies between applications and reduce the planned downtime required for maintaining operating environments. These technologies reduce datacenter maintenance challenges and risks while increasing the ability to meet business users' service-level requirements.

About the Authors
This Sun BluePrints article was authored by a team of specialists from the Sun Microsystems systems practice in Germany:
• Martin Müller has been working as an IT architect and datacenter ambassador for Sun Microsystems for the past 12 years. He is a specialist in Sun's SPARC® systems and the Solaris Operating System.
• Hartmut Streppel has been working as a principal field technologist and datacenter ambassador for Sun Microsystems for the past 10 years. He is an acknowledged worldwide expert in the field of high availability, business continuity, and Solaris Cluster software.
• Dirk Augustin has been working for Sun Microsystems as an SAP solution and datacenter architect for the past 9 years. He is a certified SAP consultant.

Acknowledgments
The authors would like to recognize Ralf Zenses, Lajos Hodi, and Detlef Drewanz for their technical advice.


References

Table 5. References for more information

Web Sites
• BigAdmin, System Administrator Resources and Community: http://sun.com/bigadmin/
• Solaris 10 5/09 Release and Installation Collection: http://docs.sun.com/app/docs/coll/1236.10/
• Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning: http://docs.sun.com/app/docs/doc/820-7013/
• Sun Cluster Upgrade Guide for Solaris OS (for Sun Cluster 3.2 1/09): http://docs.sun.com/app/docs/doc/820-4678/
• Sun xVM Ops Center 2.0 Wiki: http://wikis.sun.com/display/xvmOC2dot0/Home/
• Sun's ZFS Learning Center: http://sun.com/software/solaris/zfs_learning_center.jsp
• OpenSolaris Community: ZFS: http://opensolaris.org/os/community/zfs/
• Sun xVM Ops Center: http://sun.com/software/products/xvmopscenter/
• System Administration Guide: Solaris Containers-Resource Management and Solaris Zones: http://docs.sun.com/app/docs/doc/817-1592/
• How to Upgrade the Solaris OS and Sun Cluster 3.2 1/09 Software (Live Upgrade): http://docs.sun.com/app/docs/doc/820-4678/chapupgrade-5478/
• The Zones Update on Attach Feature and Patching in the Solaris 10 OS: http://sun.com/bigadmin/features/articles/zone_attach_patch.jsp
• Non-Global Zone State Model: http://docs.sun.com/app/docs/doc/817-1592/zones.intro-12/
• Performing a Live Upgrade to Sun Cluster 3.2 1/09 Software: http://docs.sun.com/app/docs/doc/820-4678/gcssh/

Sun BluePrints Articles
• Best Practices for Running Oracle Databases in Solaris Containers: http://wikis.sun.com/display/BluePrints/Best+Practices+for+Running+Oracle+Databases+in+Solaris+Containers

Other Articles
• Minimizing Downtime in SAP Environments: http://sun.com/third-party/global/sap/collateral/Minimizing_Downtime_of_SAP_Environments_WP.pdf


Ordering Sun Documents
The SunDocsSM program provides more than 250 manuals from Sun Microsystems, Inc. If you live in the United States, Canada, Europe, or Japan, you can purchase documentation sets or individual manuals through this program.

Accessing Sun Documentation Online
The docs.sun.com Web site enables you to access Sun technical documentation online. You can browse the docs.sun.com archive or search for a specific book title or subject. The URL is http://docs.sun.com.

To access Sun BluePrints Online articles, visit the Sun BluePrints Online Web site at: http://sun.com/blueprints/online.html.



Chapter 5 Appendix

Output of the lucreate Command

s10u6# lucreate -n new_be
Analyzing system configuration.
Comparing source boot environment file systems with the file system(s) you specified for the new boot environment.
Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <new_be>.
Source boot environment is <...>.
Creating boot environment <new_be>.
Cloning file systems from boot environment <...> to create boot environment <new_be>.
Creating snapshot for <...> on <...>.
Creating clone for <...> on <...>.
Setting canmount=noauto for <...> in zone <...> on <...>.
Creating snapshot for <...> on <...>.
Creating clone for <...> on <...>.
Creating snapshot for <...> on <...>.
Creating clone for <...> on <...>.
Population of boot environment <new_be> successful.
Creation of boot environment <new_be> successful.

Output of the luupgrade Command

s10u6# luupgrade -t -n new_be -s /var/tmp/1139503-01
Validating the contents of the media </var/tmp/1139503-01>.
The media contains 1 software patches that can be added.
Mounting the boot environment <new_be>.
Adding patches to the boot environment <new_be>.
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
Done!


Checking patches that you specified for installation.
Done!
Approved patches will be installed in this order:
139503-01
Preparing checklist for non-global zone check...
Checking non-global zones...
This patch passes the non-global zone check.
139503-01
Summary for zones:
Zone sparse
Rejected patches: None.
Patches that passed the dependency check: 139503-01
Zone whole
Rejected patches: None.
Patches that passed the dependency check: 139503-01
Patching global zone
Adding patches...
Checking installed patches...
Verifying sufficient file system capacity (dry run method)...
Installing patch packages...
Patch 139503-01 has been successfully installed.
See /a/var/sadm/patch/139503-01/log for details
Patch packages installed:
SUNWckr
Done!
Patching non-global zones...
Patching zone sparse
Adding patches...
Checking installed patches...
Verifying sufficient file system capacity (dry run method)...
Installing patch packages...
Patch 139503-01 has been successfully installed.
See /a/var/sadm/patch/139503-01/log for details
Patch packages installed:
SUNWckr
Done!
Patching zone whole
Adding patches...


Checking installed patches...
Verifying sufficient file system capacity (dry run method)...
Installing patch packages...
Patch 139503-01 has been successfully installed.
See /a/var/sadm/patch/139503-01/log for details
Patch packages installed:
SUNWckr
Done!
Unmounting the boot environment <new_be>.
The patch add to the boot environment <new_be> completed.

Output of the luactivate Command

s10u6# luactivate new_be
A Live Upgrade Sync operation will be performed on startup of boot environment <new_be>.

*************************************************************

The target boot environment has been activated. It will be used when you reboot.
NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You MUST USE either the init or the shutdown command when you reboot. If you do not use either init or shutdown, the system will not boot using the target boot environment.

*************************************************************

In case of a failure while booting to the target boot environment, the following process needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device (like the Solaris Install CD or Network). Examples:

   At the PROM monitor (ok prompt):
   For boot to Solaris CD: boot cdrom -s
   For boot to network: boot net -s

3. Mount the Current boot environment root slice to some directory (like /mnt). You can use the following command to mount:

   mount -Fzfs /dev/dsk/c0d0s0 /mnt

4. Run <luactivate> utility with out any arguments from the current boot environment root slice, as shown below:

   /mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and indicates the result.

6. Exit Single User mode and reboot the machine.

*************************************************************

Modifying boot archive service
Activation of boot environment <new_be> successful

Note: This example is specific to SPARC systems with a PROM monitor, running the Solaris OS.

