
Completing a Cluster Upgrade

How to Commit the Upgraded Cluster

Before You Begin

Ensure that all upgrade procedures are completed for all cluster nodes that you are upgrading.

  1. From one node, check the upgrade status of the cluster.
    phys-schost# scversions
  2. From the following table, perform the action that is listed for the output message from Step 1.

    Output message: Upgrade commit is needed.
    Action: Proceed to Step 3.

    Output message: Upgrade commit is NOT needed. All versions match.
    Action: No commitment is needed. Proceed to How to Verify the Upgrade.

    Output message: Upgrade commit cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions.
    Action: Return to the Oracle Solaris Cluster upgrade procedures that you used and upgrade the remaining cluster nodes.

    Output message: Check upgrade cannot be performed until all cluster nodes are upgraded. Please run scinstall(1m) on cluster nodes to identify older versions.
    Action: Return to the Oracle Solaris Cluster upgrade procedures that you used and upgrade the remaining cluster nodes.
  3. After all nodes have rejoined the cluster, from one node commit the cluster to the upgrade.
    phys-schost# scversions -c

    Committing the upgrade enables the cluster to utilize all features in the newer software. New features are available only after you perform the upgrade commitment.

  4. From one node, verify that the cluster upgrade commitment has succeeded.
    phys-schost# scversions
    Upgrade commit is NOT needed. All versions match.

Next Steps

Go to How to Verify the Upgrade.

How to Verify the Upgrade

Perform this procedure to verify that the cluster is successfully upgraded to Oracle Solaris Cluster 4.1 software. Perform all steps from the global zone only.

Before You Begin

Ensure that all steps in How to Commit the Upgraded Cluster are completed.

  1. On each node, assume the root role.
  2. On each upgraded node, view the installed levels of Oracle Solaris Cluster software.
    phys-schost# clnode show-rev -v

    The first line of output states which version of Oracle Solaris Cluster software the node is running. This version should match the version that you just upgraded to.

  3. From any node, verify that all upgraded cluster nodes are running in cluster mode (Online).
    phys-schost# clnode status

    See the clnode(1CL) man page for more information about displaying cluster status.

  4. From any node, view the boot environment (BE) created by the upgrade.
    # beadm list

    Record the name of the upgraded BE and any other BEs that you might want to boot back into if needed.
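
    For example, if you later need to boot back into one of those BEs, you can activate it and reboot. This is a minimal sketch that assumes a hypothetical pre-upgrade BE named sc40-be; substitute a BE name from your own beadm list output.

    phys-schost# beadm activate sc40-be
    phys-schost# init 6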

Example 5-1 Verifying Upgrade to Oracle Solaris Cluster 4.1 Software

The following example shows the commands used to verify upgrade of a two-node cluster to Oracle Solaris Cluster 4.1 software. The cluster node names are phys-schost-1 and phys-schost-2.

phys-schost# clnode show-rev -v
4.1
…
phys-schost# clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                          Status
---------                                          ------
phys-schost-1                                      Online
phys-schost-2                                      Online

Next Steps

Go to How to Finish the Upgrade.

How to Finish the Upgrade

Perform this procedure to finish the Oracle Solaris Cluster upgrade. Perform all steps from the global zone only.

Before You Begin

Ensure that all steps in How to Verify the Upgrade are completed.

  1. If you upgraded any data services that are not supplied on the product media, register the new resource types for those data services.

    Follow the documentation that accompanies the data services.
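
    For example, resource types are commonly registered with the clresourcetype command. The following is a minimal sketch that assumes a hypothetical resource type named SUNW.example; use the type name that your data service documentation specifies.

    phys-schost# clresourcetype register SUNW.example
    phys-schost# clresourcetype list -v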

  2. If necessary, reset the resource_security property.

    After upgrade, the resource_security property for the cluster is reset to COMPATIBLE. To use a different security policy for RGM resources, run the following command from one node of the cluster:

    phys-schost# cluster set -p resource_security=policy clustername

    You can alternatively use the clsetup utility from the Other Cluster Tasks menu option. For more information about the resource_security property, see the cluster(1CL) man page.
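
    If you want to confirm the setting before or after you change it, one way is to filter the global cluster properties. This sketch assumes that resource_security appears in the cluster show output.

    phys-schost# cluster show -t global | grep resource_security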

  3. Migrate resources to new resource type versions.

    You must migrate all resources to the Oracle Solaris Cluster 4.1 resource-type version to use the new features and bug fixes that are provided in this release.

    See Upgrading a Resource Type in Oracle Solaris Cluster Data Services Planning and Administration Guide, which contains procedures that use the command line. Alternatively, you can perform the same tasks by using the Resource Group menu of the clsetup utility. The process involves performing the following tasks:

    • Registering the new resource type.

    • Migrating the eligible resource to the new version of its resource type.

    • Modifying the extension properties of the resource type.


      Note - The Oracle Solaris Cluster 4.1 release might introduce new default values for some extension properties. These changes affect the behavior of any existing resource that uses the default values of such properties. If you require the previous default value for a resource, modify the migrated resource to set the property to the previous default value.
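
    As a rough command-line sketch of these tasks (the authoritative steps are in the referenced guide), assume a hypothetical resource rs-nfs whose resource type SUNW.nfs is being migrated to a hypothetical type version 4:

    phys-schost# clresourcetype register SUNW.nfs
    phys-schost# clresource set -p Type_version=4 rs-nfs
    phys-schost# clresource show -p Type_version rs-nfs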


  4. In the global zone, re-enable all disabled resources and bring online all resource groups.
    • To use the clsetup utility, perform the following steps:
      1. From any node, start the clsetup utility.
        phys-schost# clsetup

        The clsetup Main Menu is displayed.

      2. Choose the menu item, Resource Groups.

        The Resource Group Menu is displayed.

      3. Choose the menu item, Enable/Disable a Resource.
      4. Choose a resource to enable and follow the prompts.
      5. Repeat Step 4 for each disabled resource.
      6. When all resources are re-enabled, type q to return to the Resource Group Menu.
      7. Choose the menu item, Online/Offline or Switchover a Resource Group.
      8. Follow the prompts to put each resource group into the managed state and then bring the resource group online.
      9. When all resource groups are back online, exit the clsetup utility.

        Type q to back out of each submenu, or press Ctrl-C.

    • To use the command line, perform the following steps:
      1. Enable each disabled resource.
        # clresource enable resource
      2. Verify that each resource is enabled.
        # clresource status
      3. Bring online each resource group.
        # clresourcegroup online -emM resourcegroup
      4. Verify that each resource group is online.
        # clresourcegroup status
  5. If zone clusters are configured in the cluster, in each zone cluster re-enable all disabled resources and bring online all resource groups.
    # clresource enable -Z zonecluster resource
    # clresourcegroup online -eM -Z zonecluster resource-group
  6. If, before upgrade, you enabled automatic node reboot if all monitored shared-disk paths fail, ensure that the feature is still enabled.

    Also perform this task if you want to configure automatic reboot for the first time.

    1. Determine whether the automatic reboot feature is enabled or disabled.
      phys-schost# clnode show
      • If the reboot_on_path_failure property is set to enabled, no further action is necessary.
      • If the reboot_on_path_failure property is set to disabled, proceed to the next step to re-enable the property.
    2. Enable the automatic reboot feature.
      phys-schost# clnode set -p reboot_on_path_failure=enabled node
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Specifies that the node will reboot if all monitored disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.

    3. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===                          
      
      Node Name:                                      node
      …
        reboot_on_path_failure:                          enabled
      …
  7. Revalidate the upgraded cluster configuration.

    See How to Validate the Cluster in Oracle Solaris Cluster Software Installation Guide.
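
    That procedure is based on the cluster check command. As a minimal sketch, a basic verbose run of the default checks looks like the following; see the referenced procedure for additional options.

    phys-schost# cluster check -v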

  8. (Optional) Capture the ZFS root pool property information for future reference.
    phys-schost# zpool get all rootpool > filename

    Store the file in a location outside the cluster. If you make any root pool configuration changes, run this command again to capture the changed configuration. If necessary, you can use this information to restore the root pool partition configuration. For more information, see the zpool(1M) man page.

  9. (Optional) Make a backup of your cluster configuration.

    An archived backup of your cluster configuration facilitates easier recovery of your cluster configuration.

    For more information, see How to Back Up the Cluster Configuration in Oracle Solaris Cluster System Administration Guide.
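
    One way to capture the configuration is to export it to an XML file with the cluster export command; the referenced procedure describes this in detail. A minimal sketch, using a hypothetical file name on storage outside the cluster:

    phys-schost# cluster export -o /net/backup-server/cluster-config-backup.xml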

Troubleshooting

Resource-type migration failure - Normally, you migrate resources to a new resource type while the resource is offline. However, some resources need to be online for a resource-type migration to succeed. If resource-type migration fails for this reason, error messages similar to the following are displayed:

phys-schost - Resource depends on a SUNW.HAStoragePlus type resource that is not online anywhere.
(C189917) VALIDATE on resource nfsrs, resource group rg, exited with non-zero exit status.
(C720144) Validation of resource nfsrs in resource group rg on node phys-schost failed.

If resource-type migration fails because the resource is offline, use the clsetup utility to re-enable the resource and then bring its related resource group online. Then repeat migration procedures for the resource.
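
If you prefer the command line over the clsetup utility, the same recovery can be sketched with the resource and resource-group names from the example messages above (nfsrs and rg):

phys-schost# clresource enable nfsrs
phys-schost# clresourcegroup online -eM rg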

Java binaries location change - If the location of the Java binaries changed during the upgrade of Oracle Solaris software, you might see error messages similar to the following when you attempt to run the cacaoadm start command:

phys-schost# /usr/sbin/cacaoadm start
No suitable Java runtime found. Java 1.7 or higher is required.
Jan 3 17:10:26 ppups3 cacao: No suitable Java runtime found. Java 1.7 or higher is required.
Cannot locate all the dependencies

This error is generated because the start command cannot locate the current location of the Java binaries. The JAVA_HOME property still points to the directory where the previous version of Java was located, but that previous version was removed during upgrade.

To correct this problem, change the setting of JAVA_HOME in the following configuration file to use the current Java directory:

/etc/opt/SUNWcacao/cacao.properties
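
For example, you might confirm the stale value and locate the current Java installation before you edit the file. This is a minimal sketch; /usr/jdk/instances is a typical Oracle Solaris location for the JDK and might differ on your system.

phys-schost# grep JAVA_HOME /etc/opt/SUNWcacao/cacao.properties
phys-schost# ls -d /usr/jdk/instances/*

After you update JAVA_HOME in the file, run the cacaoadm start command again.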

Next Steps

The cluster upgrade is complete.