Oracle® Solaris Cluster Software Installation Guide

Updated: September 2014, E39580-02

How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML File)

Perform this procedure to configure a new global-cluster node by using an XML cluster configuration file. The new node can be a duplicate of an existing cluster node that runs the Oracle Solaris Cluster 4.2 software.

This procedure configures the following cluster components on the new node:

  • Cluster node membership

  • Cluster interconnect

  • Global devices

Before You Begin

Perform the following tasks:

  1. Ensure that the Oracle Solaris Cluster software is not yet configured on the potential node that you want to add to a cluster.
    1. Assume the root role on the potential node.
    2. Determine whether the Oracle Solaris Cluster software is configured on the potential node.
      phys-schost-new# /usr/sbin/clinfo -n
      • If the command fails, go to Step 2.

        The Oracle Solaris Cluster software is not yet configured on the node. You can add the potential node to the cluster.

      • If the command returns a node ID number, the Oracle Solaris Cluster software is already configured on the node.

        Before you can add the node to a different cluster, you must remove the existing cluster configuration information.

    3. Boot the potential node into noncluster mode.
      • SPARC:
        ok boot -x
      • x86:
        1. In the GRUB menu, use the arrow keys to select the appropriate Oracle Solaris entry and type e to edit its commands.

          For more information about GRUB-based booting, see Booting a System in Booting and Shutting Down Oracle Solaris 11.2 Systems.

        2. In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry.
        3. Add -x to the multiboot command to specify that the system boot into noncluster mode.
        4. Press Enter to accept the change and return to the boot parameters screen.

          The screen displays the edited command.

        5. Type b to boot the node into noncluster mode.

          Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command.
    4. Unconfigure the Oracle Solaris Cluster software from the potential node.
      phys-schost-new# /usr/cluster/bin/clnode remove
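
      After the unconfiguration completes, you can rerun the earlier check to confirm that the node no longer reports a node ID. The following convenience one-liner (not part of the required procedure) prints a message when the command fails, which is the expected result here:

      phys-schost-new# /usr/sbin/clinfo -n || echo "Not configured; the node can be added to the cluster"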
  2. If you are duplicating a node that runs the Oracle Solaris Cluster 4.2 software, create a cluster configuration XML file.
    1. Assume the root role on the cluster node that you want to duplicate.
    2. Export the existing node's configuration information to a file.
      phys-schost# clnode export -o clconfigfile
      -o

      Specifies the output destination.

      clconfigfile

      The name of the cluster configuration XML file. The specified file can be an existing file or a new file that the command will create.

      For more information, see the clnode(1CL) man page.

    3. Copy the cluster configuration XML file to the potential node that you will configure as a new cluster node.
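
      For example, assuming the configuration is exported to /var/tmp/clconfigfile.xml and that ssh access from the existing node to the potential node is available (the path and host names here are placeholders, not values required by the procedure):

      phys-schost# clnode export -o /var/tmp/clconfigfile.xml
      phys-schost# scp /var/tmp/clconfigfile.xml phys-schost-new:/var/tmp/clconfigfile.xml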
  3. Assume the root role on the potential node.
  4. Ensure that TCP wrappers for RPC are disabled on all nodes of the cluster.

    The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.

    1. On each node, display the status of TCP wrappers for RPC.

      TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.

      # svccfg -s rpc/bind listprop config/enable_tcpwrappers
      config/enable_tcpwrappers  boolean true
    2. If TCP wrappers for RPC are enabled on a node, disable TCP wrappers, then refresh and restart the RPC bind service.
      # svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
      # svcadm refresh rpc/bind
      # svcadm restart rpc/bind
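
      If root ssh access between the cluster nodes is available (an assumption, not a requirement of this procedure), the following sketch checks every current cluster member from a single node; otherwise, run the svccfg command on each node individually as shown above:

      phys-schost# for node in $(/usr/cluster/bin/clnode list); do
      >   ssh $node svccfg -s rpc/bind listprop config/enable_tcpwrappers
      > done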
  5. Modify or create the cluster configuration XML file as needed.
    • If you are duplicating an existing cluster node, open the file that you created with the clnode export command.

    • If you are not duplicating an existing cluster node, create a new file.

      Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory.

    • Modify the values of the XML elements to reflect the node configuration that you want to create.

      See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
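
      For example, if the file was exported from phys-schost-1 and the new node will be named phys-schost-new (both names are placeholders), a global text substitution is one rough way to update the node name; review the result afterward, because other values such as transport adapter names might also need to change:

      phys-schost-new# sed 's/phys-schost-1/phys-schost-new/g' clconfigfile > clconfigfile.new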

  6. Validate the cluster configuration XML file.
    phys-schost-new# xmllint --valid --noout clconfigfile
  7. Configure the new cluster node.
    phys-schost-new# clnode add -n sponsor-node -i clconfigfile
    -n sponsor-node

    Specifies the name of an existing cluster member to act as the sponsor for the new node.

    -i clconfigfile

    Specifies the name of the cluster configuration XML file to use as the input source.
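
    For example, with phys-schost-1 as the sponsor node and the configuration file copied to /var/tmp/clconfigfile.xml (both values are placeholders for your own environment):

    phys-schost-new# clnode add -n phys-schost-1 -i /var/tmp/clconfigfile.xml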

  8. If TCP wrappers are used in the cluster, ensure that the clprivnet0 IP addresses for all added nodes are added to the /etc/hosts.allow file on each cluster node.

    Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.

    1. On each node, display the IP addresses for all clprivnet0 devices.
      # /usr/sbin/ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      clprivnet0/N      static   ok           ip-address/netmask-length
    2. On each node, edit the /etc/hosts.allow file with the IP addresses of all clprivnet0 devices in the cluster.
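
      For example, if the ipadm output on the cluster nodes shows clprivnet0 addresses 172.16.4.1, 172.16.4.2, and 172.16.4.3 (example addresses only; use the values reported on your own nodes), the entry in /etc/hosts.allow on each node might look like the following line, which grants the rpcbind service access from those addresses:

      rpcbind: 172.16.4.1 172.16.4.2 172.16.4.3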
  9. (Optional) Enable automatic node reboot if all monitored shared-disk paths fail.

    Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
    1. Enable automatic reboot.
      phys-schost# clnode set -p reboot_on_path_failure=enabled
      -p

      Specifies the property to set.

      reboot_on_path_failure=enabled

      Enables automatic node reboot if all monitored shared-disk paths fail.

    2. Verify that automatic reboot on disk-path failure is enabled.
      phys-schost# clnode show
      === Cluster Nodes ===
      
      Node Name:                                      node
      …
      reboot_on_path_failure:                          enabled
      …
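
      To view just this property, you can also filter the output (a convenience only; the full clnode show listing above is authoritative):

      phys-schost# clnode show | grep reboot_on_path_failure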

Troubleshooting

Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.

Next Steps

If you added a node to a cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.

Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.