
Oracle® Solaris Cluster 4.3 Software Installation Guide


Updated: June 2019
 
 

Planning the Oracle Solaris Cluster Environment

This section provides guidelines for planning and preparing the components that are covered in the following subsections for Oracle Solaris Cluster software installation and configuration.

For detailed information about Oracle Solaris Cluster components, see the Oracle Solaris Cluster 4.3 Concepts Guide.

Oracle Solaris Cluster Software Version

All nodes in a cluster must run the same version of Oracle Solaris Cluster software.

Memory Requirements

The Oracle Solaris Cluster 4.3 software has the following memory and disk space requirements for every cluster node:

  • Minimum of 1.5 Gbytes of physical RAM (2 Gbytes typical)

  • Minimum of 6 Gbytes of available hard drive space

Actual physical memory and hard drive requirements are determined by the applications that are installed. Consult the application's documentation or contact the application vendor to calculate additional memory and hard drive requirements.

Licensing

Ensure that you have available all necessary license certificates before you begin software installation. Oracle Solaris Cluster software does not require a license certificate, but each node installed with Oracle Solaris Cluster software must be covered under your Oracle Solaris Cluster software license agreement.

For licensing requirements for volume-manager software and applications software, see the installation documentation for those products.

Software Updates

After installing each software product, you must also install any required software updates. For proper cluster operation, ensure that all cluster nodes maintain the same update level.

For general guidelines and procedures for applying software updates, see Chapter 11, Updating Your Software in Oracle Solaris Cluster 4.3 System Administration Guide.

Geographic Edition

If a zone cluster will be configured in an Oracle Solaris Cluster Geographic Edition (Geographic Edition) configuration, the zone cluster must meet the following requirements:

  • Each zone-cluster node must have a public-network IP address that corresponds to the zone-cluster node's hostname.

  • The zone-cluster node's public-network IP address must be accessible by all nodes in the Geographic Edition configuration's partner cluster.

  • Each zone-cluster node must have a failover IP address that maps to the hostname that corresponds to the zone-cluster name.

If you plan to use the Oracle Solaris Cluster Manager browser interface to administer Geographic Edition components, all cluster nodes must have the same root password. For more information about Oracle Solaris Cluster Manager, see Chapter 13, Using the Oracle Solaris Cluster Manager Browser Interface in Oracle Solaris Cluster 4.3 System Administration Guide.

Public-Network IP Addresses

For information about the use of public networks by the cluster, see Public Network Adapters in Oracle Solaris Cluster 4.3 Concepts Guide.

You must set up a number of public-network IP addresses for various Oracle Solaris Cluster components. The number of addresses that you need depends on which components you include in your cluster configuration. Each Oracle Solaris host in the cluster configuration must have at least one public-network connection to the same set of public subnets.

The following table lists the components that need public-network IP addresses assigned. Add these IP addresses to the following locations:

  • Any naming services that are used

  • The local /etc/inet/hosts file on each global-cluster node, after you install Oracle Solaris software

  • The local /etc/inet/hosts file on any exclusive-IP non-global zone

Table 2  Oracle Solaris Cluster Components That Use Public-Network IP Addresses

Component                               Number of IP Addresses Needed
Administrative console                  1 IP address per subnet
Global-cluster nodes                    1 IP address per node, per subnet
Zone-cluster nodes                      1 IP address per node, per subnet
Domain console network interface        1 IP address per domain
(Optional) Non-global zones             1 IP address per subnet
Console-access device                   1 IP address
Logical addresses                       1 IP address per logical host resource, per subnet

For more information about planning IP addresses, see Planning for Network Deployment in Oracle Solaris 11.3.
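
As an illustration only, the corresponding entries in the /etc/inet/hosts file on each global-cluster node might resemble the following; the hostnames and addresses shown are placeholders for your own planned values:

    192.0.2.11    phys-schost-1    # global-cluster node
    192.0.2.12    phys-schost-2    # global-cluster node
    192.0.2.21    zc1-host-1       # zone-cluster node
    192.0.2.31    schost-lh        # logical hostname resource

Add the same entries to any naming service that the cluster uses.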

Console-Access Devices

You must have console access to all cluster nodes. A service processor (SP) is used to communicate between the administrative console and the global-cluster node consoles.

For more information about console access, see the Oracle Solaris Cluster 4.3 Concepts Guide.

You can use the Oracle Solaris pconsole utility to connect to the cluster nodes. The utility also provides a master console window from which you can propagate your input to all connections that you opened. For more information, see the pconsole(1) man page that is available when you install the Oracle Solaris terminal/pconsole package.
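
For example, one way to install the utility and open parallel console connections to three hypothetical cluster nodes (substitute your own node names) is:

    # pkg install terminal/pconsole
    # pconsole phys-schost-1 phys-schost-2 phys-schost-3

Install the package on the system from which you run the utility, such as your administrative console.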

Public Network Configuration

Public networks communicate outside the cluster. Consider the following points when you plan your public network configuration:

  • Separation of public and private network – Public networks and the private network (cluster interconnect) must use separate adapters, or you must configure tagged VLAN on tagged-VLAN capable adapters and VLAN-capable switches to use the same adapter for both the private interconnect and the public network.

    Alternatively, create virtual NICs on the same physical interface and assign different virtual NICs to the private and public networks.

  • Minimum – All cluster nodes must be connected to at least one public network. Public-network connections can use different subnets for different nodes.

  • Maximum – You can have as many additional public-network connections as your hardware configuration allows.

  • Scalable services – All nodes that run a scalable service must either use the same subnet or set of subnets or use different subnets that are routable among themselves.

  • Logical addresses – Each data-service resource group that uses a logical address must have a hostname specified for each public network from which the logical address can be accessed. For additional information about data services and resources, also see the Oracle Solaris Cluster 4.3 Concepts Guide.

  • IPv4 – Oracle Solaris Cluster software supports IPv4 addresses on the public network.

  • IPv6 – See the Oracle Solaris Cluster 4 Compatibility Guide (http://www.oracle.com/technetwork/server-storage/solaris-cluster/overview/solariscluster4-compatibilityguide-1429037.pdf) for a list of data services that support IPv6 addresses on the public network.

  • Public Network Management – Each public-network adapter that is used for data-service traffic must belong to a PNM object. PNM objects include IPMP groups, link aggregations, and VNICs that are directly backed by link aggregations. If a public-network adapter is not used for data-service traffic, you do not have to configure it in a PNM object.

    Unless there are one or more non-link-local IPv6 public network interfaces in the public network configuration, the scinstall utility automatically configures a multiple-adapter IPMP group for each set of public-network adapters in the cluster that uses the same subnet. These groups are link-based with transitive probes.


    Note -  IPMP groups are created only on unused physical adapters.

    You can manually configure all interfaces that will be used for data-service traffic into IPMP groups, either before or after the cluster is established; a command sketch follows this list.

    The scinstall utility ignores adapters that are already configured in an IPMP group. You can use probe-based IPMP groups or link-based IPMP groups in a cluster. Probe-based IPMP groups, which test the target IP address, provide the most protection by recognizing more conditions that might compromise availability.

    If any adapter in an IPMP group that the scinstall utility configures will not be used for data-service traffic, you can remove that adapter from the group.

    For guidelines on IPMP groups, see Chapter 2, About IPMP Administration in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.3. To modify IPMP groups after cluster installation, follow the guidelines in How to Administer IP Network Multipathing Groups in a Cluster in Oracle Solaris Cluster 4.3 System Administration Guide and procedures in Chapter 3, Administering IPMP in Administering TCP/IP Networks, IPMP, and IP Tunnels in Oracle Solaris 11.3. For more information on link aggregations, see Chapter 2, Configuring High Availability by Using Link Aggregations in Managing Network Datalinks in Oracle Solaris 11.3.

  • Local MAC address support – All public-network adapters must use network interface cards (NICs) that support local MAC address assignment. Local MAC address assignment is a requirement of IPMP.

  • local-mac-address setting – The local-mac-address? variable must use the default value true for Ethernet adapters. Oracle Solaris Cluster software does not support a local-mac-address? value of false for Ethernet adapters.

For more information about public-network interfaces, see Oracle Solaris Cluster 4.3 Concepts Guide.
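
The following is a minimal sketch of manually creating an IPMP group from two public-network adapters, as mentioned in the Public Network Management item above. The adapter names net0 and net1, the group name sc_ipmp0, and the data address are hypothetical; substitute the values from your own configuration:

    # ipadm create-ip net0
    # ipadm create-ip net1
    # ipadm create-ipmp sc_ipmp0
    # ipadm add-ipmp -i net0 -i net1 sc_ipmp0
    # ipadm create-addr -T static -a 192.0.2.10/24 sc_ipmp0/v4

Because no test addresses are configured, this sketch creates a group that uses link-based failure detection; for probe-based groups, follow the procedures in the IPMP administration documentation referenced above.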

Quorum Server Configuration

You can use Oracle Solaris Cluster Quorum Server software to configure a machine as a quorum server and then configure the quorum server as your cluster's quorum device. You can use a quorum server instead of or in addition to shared disks and NAS filers.

Consider the following points when you plan the use of a quorum server in an Oracle Solaris Cluster configuration.

  • Network connection – The quorum-server computer must connect to your cluster through the public network on the same subnet that is used by the cluster nodes it serves. Otherwise, the quorum server might be unavailable to the cluster during a node reboot and prevent the cluster from forming.

  • Supported hardware – The supported hardware platforms for a quorum server are the same as for a global-cluster node.

  • Operating system – Oracle Solaris software requirements for Oracle Solaris Cluster software apply as well to Quorum Server software.

  • Restriction for non-global zones – In the Oracle Solaris Cluster 4.3 release, a quorum server cannot be installed and configured in a non-global zone.

  • Service to multiple clusters – You can configure a quorum server as a quorum device to more than one cluster.

  • Mixed hardware and software – You do not have to configure a quorum server on the same hardware and software platform as the cluster or clusters for which it provides quorum. For example, a SPARC based machine that runs the Oracle Solaris 10 OS can be configured as a quorum server for an x86 based cluster that runs the Oracle Solaris 11 OS.

  • Spanning tree algorithm – You must disable the spanning tree algorithm on the Ethernet switches for the ports that are connected to the cluster public network where the quorum server will run.

  • Using a cluster node as a quorum server – You can configure a quorum server on a cluster node to provide quorum for clusters other than the cluster that the node belongs to. However, a quorum server that is configured on a cluster node is not highly available.

NFS Guidelines

Consider the following points when you plan the use of Network File System (NFS) in an Oracle Solaris Cluster configuration:

  • NFS client – No Oracle Solaris Cluster node can be an NFS client of an HA for NFS exported file system that is being mastered on a node in the same cluster. Such cross-mounting of HA for NFS is prohibited. Use the cluster file system to share files among global-cluster nodes.

  • NFSv3 protocol – If you are mounting file systems on the cluster nodes from external NFS servers, such as NAS filers, and you are using the NFSv3 protocol, you cannot run NFS client mounts and the HA for NFS data service on the same cluster node. If you do, certain HA for NFS data-service activities might cause the NFS daemons to stop and restart, interrupting NFS services. However, you can safely run the HA for NFS data service if you use the NFSv4 protocol to mount external NFS file systems on the cluster nodes.

  • Locking – Applications that run locally on the cluster must not lock files on a file system that is exported through NFS. Otherwise, local locking (for example, flock or fcntl) might interfere with the ability to restart the lock manager (lockd). During restart, a blocked local process might be granted a lock that might be intended to be reclaimed by a remote client. This situation would cause unpredictable behavior.

  • NFS security features – Oracle Solaris Cluster software does not support the following options of the share_nfs(1M) command:

    • secure

    • sec=dh

    However, Oracle Solaris Cluster software does support the following security features for NFS:

    • The use of secure ports for NFS. You enable secure ports for NFS by adding the entry set nfssrv:nfs_portmon=1 to the /etc/system file on cluster nodes.

    • The use of Kerberos with NFS.

  • Fencing – Zone clusters support fencing for all supported NAS devices, shared disks, and storage arrays.

Service Restrictions

Observe the following service restrictions for Oracle Solaris Cluster configurations:

  • Routers – Do not configure cluster nodes as routers (gateways), for the following reasons:

    • Routing protocols might inadvertently broadcast the cluster interconnect as a publicly reachable network to other routers, despite the setting of the IFF_PRIVATE flag on the interconnect interfaces.

    • Routing protocols might interfere with the failover of IP addresses across cluster nodes, which impacts client accessibility.

    • Routing protocols might compromise proper functionality of scalable services by accepting client network packets and dropping them, instead of forwarding the packets to other cluster nodes.

  • NIS+ servers – Do not configure cluster nodes as NIS or NIS+ servers. There is no data service available for NIS or NIS+. However, cluster nodes can be NIS or NIS+ clients.

  • Install servers – Do not use an Oracle Solaris Cluster configuration to provide a highly available installation service on client systems.

  • RARP – Do not use an Oracle Solaris Cluster configuration to provide an rarpd service.

  • Remote procedure call (RPC) program numbers – If you install an RPC service on the cluster, the service must not use any of the following program numbers:

    • 100141

    • 100142

    • 100248

    These numbers are reserved for the Oracle Solaris Cluster daemons rgmd_receptionist, fed, and pmfd, respectively.

    If the RPC service that you install also uses one of these program numbers, you must change that RPC service to use a different program number. One way to check for conflicts is sketched after this list.

  • Scheduling classes – Oracle Solaris Cluster software does not support the running of high-priority process scheduling classes on cluster nodes. Do not run either of the following types of processes on cluster nodes:

    • Processes that run in the time-sharing scheduling class with a high priority

    • Processes that run in the real-time scheduling class

    Oracle Solaris Cluster software relies on kernel threads that do not run in the real-time scheduling class. Other time-sharing processes that run at higher-than-normal priority or real-time processes can prevent the Oracle Solaris Cluster kernel threads from acquiring needed CPU cycles.
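
As a hedged illustration of the RPC and scheduling-class restrictions above, the following commands show one way to check a node; the output, if any, depends on what is running on your system:

    # rpcinfo -p | egrep '100141|100142|100248'
    # ps -ecl | egrep ' RT '

The first command shows whether any RPC service is already registered with one of the reserved program numbers (on an established cluster node, these are the cluster daemons themselves). The second lists processes in the real-time (RT) scheduling class, which should not run on cluster nodes.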

Network Time Protocol (NTP)

Observe the following guidelines for NTP:

  • Synchronization – The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time.

  • Accuracy – The accuracy of time on individual nodes is of secondary importance to the synchronization of time among nodes. You are free to configure NTP as best meets your needs, provided that this basic synchronization requirement is met.

See the Oracle Solaris Cluster 4.3 Concepts Guide for further information about cluster time. For more information about NTP, see the ntpd(1M) man page that is delivered in the Oracle Solaris service/network/ntp package.
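
A minimal sketch of one way to meet the synchronization requirement, on a system where NTP is not already configured, is to point every cluster node at the same time source in /etc/inet/ntp.conf and enable the NTP service. The server name below is a placeholder for your own time source:

    # echo 'server ntp.example.com iburst' >> /etc/inet/ntp.conf
    # svcadm enable svc:/network/ntp:default

Apply the same configuration on every cluster node so that all nodes synchronize to the same source.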

Oracle Solaris Cluster Configurable Components

This section provides guidelines for the following Oracle Solaris Cluster components that you configure:

Global-Cluster Name

Specify a name for the global cluster during Oracle Solaris Cluster configuration. The global cluster name should be unique throughout the enterprise.

For information about naming a zone cluster, see Zone Clusters.

Global-Cluster Node Names and Node IDs

The name of a node in a global cluster is the same name that you assign to the physical or virtual host when you install it with the Oracle Solaris OS. See the hosts(4) man page for information about naming requirements.

In single-host cluster installations, the default cluster name is the name of the node.

During Oracle Solaris Cluster configuration, you specify the names of all nodes that you are installing in the global cluster. The node name must be the same as the output of the uname -n command.

A node ID number is assigned to each cluster node for intracluster use, beginning with the number 1. Node ID numbers are assigned to each cluster node in the order that the node becomes a cluster member. If you configure all cluster nodes in one operation, the node from which you run the scinstall utility is the last node assigned a node ID number. You cannot change a node ID number after it is assigned to a cluster node.

A node that becomes a cluster member is assigned the lowest available node ID number. If a node is removed from the cluster, its node ID becomes available for assignment to a new node. For example, if in a four-node cluster the node that is assigned node ID 3 is removed and a new node is added, the new node is assigned node ID 3, not node ID 5.

If you want the assigned node ID numbers to correspond to certain cluster nodes, configure the cluster nodes one node at a time in the order that you want the node ID numbers to be assigned. For example, to have the cluster software assign node ID 1 to phys-schost-1, configure that node as the sponsoring node of the cluster. If you next add phys-schost-2 to the cluster established by phys-schost-1, phys-schost-2 is assigned node ID 2.

For information about node names in a zone cluster, see Zone Clusters.

Private Network Configuration


Note -  You do not need to configure a private network for a single-host global cluster. The scinstall utility automatically assigns the default private-network address and netmask even though a private network is not used by the cluster.

Oracle Solaris Cluster software uses the private network for internal communication among nodes and among non-global zones that are managed by Oracle Solaris Cluster software. An Oracle Solaris Cluster configuration requires at least two connections to the cluster interconnect on the private network. When you configure Oracle Solaris Cluster software on the first node of the cluster, you specify the private-network address and netmask in one of the following ways:

  • Accept the default private-network address (172.16.0.0) and default netmask (255.255.240.0). This IP address range supports a combined maximum of 64 nodes and non-global zones, a maximum of 12 zone clusters, and a maximum of 10 private networks.


    Note -  The maximum number of nodes that an IP address range can support does not reflect the maximum number of nodes that the hardware or software configuration can currently support.
  • Specify a different allowable private-network address and accept the default netmask.

  • Accept the default private-network address and specify a different netmask.

  • Specify both a different private-network address and a different netmask.

If you choose to specify a different netmask, the scinstall utility prompts you for the number of nodes and the number of private networks that you want the IP address range to support. The utility also prompts you for the number of zone clusters that you want to support. The number of global-cluster nodes that you specify should also include the expected number of unclustered non-global zones that will use the private network.

The utility calculates the netmask for the minimum IP address range that will support the number of nodes, zone clusters, and private networks that you specified. The calculated netmask might support more than the supplied number of nodes, including non-global zones, zone clusters, and private networks. The scinstall utility also calculates a second netmask that would be the minimum to support twice the number of nodes, zone clusters, and private networks. This second netmask would enable the cluster to accommodate future growth without the need to reconfigure the IP address range.

The utility then asks you what netmask to choose. You can specify either of the calculated netmasks or provide a different one. The netmask that you specify must minimally support the number of nodes and private networks that you specified to the utility.


Note -  Changing the cluster private IP address range might be necessary to support the addition of nodes, non-global zones, zone clusters, or private networks.

To change the private-network address and netmask after the cluster is established, see How to Change the Private Network Address or Address Range of an Existing Cluster in Oracle Solaris Cluster 4.3 System Administration Guide. You must bring down the cluster to make these changes.

However, the cluster can remain in cluster mode if you use the cluster set-netprops command to change only the netmask. For any zone cluster that is already configured in the cluster, the private IP subnets and the corresponding private IP addresses that are allocated for that zone cluster will also be updated.
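
For example, the following is a hedged sketch of changing only the netmask from cluster mode with the cluster set-netprops command mentioned above; the netmask value shown is a placeholder for the value that you calculate for your own configuration:

    # cluster set-netprops -p private_netmask=255.255.248.0

Changing the private-network address itself still requires the cluster to be brought down, as described above.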


If you specify a private-network address other than the default, the address must meet the following requirements:

  • Address and netmask sizes – The private network address cannot be smaller than the netmask. For example, you can use a private network address of 172.16.10.0 with a netmask of 255.255.255.0. However, you cannot use a private network address of 172.16.10.0 with a netmask of 255.255.0.0.

  • Acceptable addresses – The address must be included in the block of addresses that RFC 1918 reserves for use in private networks. You can contact the InterNIC to obtain copies of RFCs or view RFCs online at http://www.rfcs.org.

  • Restriction on use of PNM objects for private-network adapters – Oracle Solaris Cluster software does not support the configuration of any PNM objects, such as link aggregation or IPMP groups, on private-network adapters for the cluster private interconnect. PNM objects can only be configured for adapters for the public network in an Oracle Solaris Cluster configuration.

  • Use in multiple clusters – You can use the same private-network address in more than one cluster provided that the clusters are on different private networks. Private IP network addresses are not accessible from outside the physical cluster.

  • Oracle VM Server for SPARC - When guest domains are created on the same physical machine and are connected to the same virtual switch, the private network is shared by such guest domains and is visible to all these domains. Proceed with caution before you specify a private-network IP address range to the scinstall utility for use by a cluster of guest domains. Ensure that the address range is not already in use by another guest domain that exists on the same physical machine and shares its virtual switch.

  • VLANs shared by multiple clusters – Oracle Solaris Cluster configurations support the sharing of the same private-interconnect VLAN among multiple clusters. You do not have to configure a separate VLAN for each cluster. However, for the highest level of fault isolation and interconnect resilience, limit the use of a VLAN to a single cluster.

  • IPv6 – Oracle Solaris Cluster software does not support IPv6 addresses for the private interconnect. The system does configure IPv6 addresses on the private-network adapters to support scalable services that use IPv6 addresses. However, internode communication on the private network does not use these IPv6 addresses.

See Planning for Network Deployment in Oracle Solaris 11.3 for more information about private networks.

Private Hostnames

The private hostname is the name that is used for internode communication over the private-network interface. Private hostnames are automatically created during Oracle Solaris Cluster configuration of a global cluster or a zone cluster. These private hostnames follow the naming convention clusternodeN-priv, where N is the numeral of the internal node ID (for example, clusternode3-priv for the node that has node ID 3). During Oracle Solaris Cluster configuration, the node ID number is automatically assigned to each node when the node becomes a cluster member. A node of the global cluster and a node of a zone cluster can both have the same private hostname, but each hostname resolves to a different private-network IP address.

After a global cluster is configured, you can rename its private hostnames by using the clsetup(1CL) utility. Currently, you cannot rename the private hostname of a zone-cluster node.

The creation of a private hostname for a non-global zone is optional. There is no required naming convention for the private hostname of a non-global zone.

Cluster Interconnect

The cluster interconnects provide the hardware pathways for private-network communication between cluster nodes. Each interconnect consists of a cable that is connected in one of the following ways:

  • Between two transport adapters

  • Between a transport adapter and a transport switch

For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster 4.3 Concepts Guide.


Note -  You do not need to configure a cluster interconnect for a single-host cluster. However, if you anticipate eventually adding more nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use.

During Oracle Solaris Cluster configuration, you specify configuration information for one or two cluster interconnects.

  • If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network. For more information, see the guidelines for tagged VLAN adapters in Transport Adapters.

  • You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and less availability. If a single interconnect fails, the cluster is at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.

You can configure additional cluster interconnects, up to six interconnects total, after the cluster is established by using the clsetup utility.

For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Oracle Solaris Cluster Hardware Administration Manual. For general information about the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster 4.3 Concepts Guide.

Transport Adapters

For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type. If your configuration is a two-host cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch.

Consider the following guidelines and restrictions:

  • IPv6 – Oracle Solaris Cluster software does not support IPv6 communications over the private interconnects.

  • Local MAC address assignment – All private network adapters must use network interface cards (NICs) that support local MAC address assignment. Link-local IPv6 addresses, which are required on private-network adapters to support IPv6 public-network addresses for scalable data services, are derived from the local MAC addresses.

  • Tagged VLAN adapters – Oracle Solaris Cluster software supports tagged Virtual Local Area Networks (VLANs) to share an adapter between the private cluster interconnect and the public network. You must use the dladm create-vlan command to configure the adapter as a tagged VLAN adapter before you configure it with the cluster.

    To configure a tagged VLAN adapter for the cluster interconnect, specify the adapter by its VLAN virtual device name. This name is composed of the adapter name plus the VLAN instance number. The VLAN instance number is derived from the formula (1000*V)+N, where V is the VID number and N is the PPA.

    As an example, for VID 73 on adapter net2, the VLAN instance number is calculated as (1000*73)+2 = 73002. You would therefore specify the adapter name as net73002 to indicate that it is part of a shared virtual LAN. A command sketch for creating such an adapter follows this list.

    For information about configuring VLAN in a cluster, see Configuring VLANs as Private Interconnect Networks in Oracle Solaris Cluster Hardware Administration Manual. For information about creating and administering VLANs, see the dladm(1M) man page and Chapter 3, Configuring Virtual Networks by Using Virtual Local Area Networks in Managing Network Datalinks in Oracle Solaris 11.3.

  • SPARC: Oracle VM Server for SPARC guest domains – For Oracle VM Server for SPARC guest domains that are configured as cluster nodes, specify adapter names by their virtual names, vnetN, such as vnet0 and vnet1. Virtual adapter names are recorded in the /etc/path_to_inst file.

  • Logical network interfaces – Logical network interfaces are reserved for use by Oracle Solaris Cluster software.
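
The following sketch shows one way to create the tagged VLAN adapter from the example in the Tagged VLAN adapters item above, assuming the hypothetical underlying adapter net2 and VID 73; confirm the link names against your own configuration before you specify the adapter to the cluster:

    # dladm create-vlan -l net2 -v 73 net73002
    # dladm show-vlan

The dladm show-vlan output lets you verify the VLAN link name and VID before you run scinstall or clsetup.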

Transport Switches

If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name.

Also specify the switch port name or accept the default name. The default port name is the same as the internal node ID number of the Oracle Solaris host that hosts the adapter end of the cable. However, you cannot use the default port name for certain adapter types.

Clusters with three or more nodes must use transport switches. Direct connection between cluster nodes is supported only for two-host clusters. If your two-host cluster is direct connected, you can still specify a transport switch for the interconnect.


Tip  -  If you specify a transport switch, you can more easily add another node to the cluster in the future.

Global Fencing

Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations. By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default global fencing setting of prefer3. With the prefer3 setting, the SCSI-3 protocol is used.

If any device is unable to use the SCSI-3 protocol, the pathcount setting should be used instead, where the fencing protocol for the shared disk is chosen based on the number of DID paths that are attached to the disk. Non-SCSI-3 capable devices are limited to two DID device paths within the cluster. Fencing can be turned off for devices which do not support either SCSI-3 or SCSI-2 fencing. However, data integrity for such devices cannot be guaranteed during split-brain situations.

In Custom Mode, the scinstall utility prompts you whether to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing in certain situations.


Caution  -  If you disable fencing in situations other than those described, your data might be vulnerable to corruption during application failover. Examine this possibility of data corruption carefully when you consider turning off fencing.


The situations in which you can disable global fencing are as follows:

  • The shared storage does not support SCSI reservations.

    If you turn off fencing for a shared disk that you then configure as a quorum device, the device uses the software quorum protocol. This is true regardless of whether the disk supports SCSI-2 or SCSI-3 protocols. Software quorum is a protocol in Oracle Solaris Cluster software that emulates a form of SCSI Persistent Group Reservations (PGR).

  • You want to enable systems that are outside the cluster to gain access to storage that is attached to the cluster.

If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster. After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol of a quorum device, you must first unconfigure the quorum device. Then set the new fencing protocol of the disk and reconfigure it as a quorum device.

For more information about fencing behavior, see Failfast Mechanism in Oracle Solaris Cluster 4.3 Concepts Guide. For more information about setting the fencing protocol of individual shared disks, see the cldevice(1CL) man page. For more information about the global fencing setting, see the cluster(1CL) man page.
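
As a hedged illustration of the settings described in this section, the following commands show one way to change the global fencing setting and to override fencing for a single shared disk; the DID device name d5 is a placeholder for one of your own devices:

    # cluster set -p global_fencing=pathcount
    # cldevice set -p default_fencing=nofencing d5

Remember that a disk that is configured as a quorum device must be unconfigured as a quorum device before its fencing protocol can be changed.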

Quorum Devices

Oracle Solaris Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster. For more information about the purpose and function of quorum devices, see Quorum and Quorum Devices in Oracle Solaris Cluster 4.3 Concepts Guide.

During Oracle Solaris Cluster installation of a two-host cluster, you can choose to have the scinstall utility automatically configure an available shared disk in the configuration as a quorum device. The scinstall utility assumes that all available shared disks are supported as quorum devices.

If you want to use a quorum server or an Oracle ZFS Storage Appliance NAS device as the quorum device, you configure it after scinstall processing is completed.

After installation, you can also configure additional quorum devices by using the clsetup utility.
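
Equivalently, quorum devices can be added from the command line with the clquorum command. In the following hedged sketch, the DID device name, quorum-server host address, port, and device name are placeholders, and the quorum server itself must already be installed and started:

    # clquorum add d3
    # clquorum add -t quorum_server -p qshost=192.0.2.50 -p port=9000 qs1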


Note -  You do not need to configure quorum devices for a single-host cluster.

If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually.

Consider the following points when you plan quorum devices:

  • Minimum – A two-host cluster must have at least one quorum device, which can be a shared disk, a quorum server, or a NAS device. For other topologies, quorum devices are optional.

  • Odd-number rule – If more than one quorum device is configured in a two-host cluster or in a pair of hosts directly connected to the quorum device, configure an odd number of quorum devices. This configuration ensures that the quorum devices have completely independent failure pathways.

  • Distribution of quorum votes – For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by nodes. Otherwise, the nodes cannot form a cluster if all quorum devices are unavailable even if all nodes are functioning.

  • Connection – You must connect a quorum device to at least two nodes.

  • SCSI fencing protocol – When a SCSI shared-disk quorum device is configured, its fencing protocol is automatically set to SCSI-2 in a two-host cluster or SCSI-3 in a cluster with three or more nodes.

  • Changing the fencing protocol of quorum devices – For SCSI disks that are configured as a quorum device, you must unconfigure the quorum device before you can enable or disable its SCSI fencing protocol.

  • Software quorum protocol – You can configure supported shared disks that do not support SCSI protocol, such as SATA disks, as quorum devices. You must disable fencing for such disks. The disks would then use the software quorum protocol, which emulates SCSI PGR.

    The software quorum protocol is also used by SCSI shared disks if fencing is disabled for such disks.

  • Replicated devices – Oracle Solaris Cluster software does not support replicated devices as quorum devices.

  • ZFS storage pools – Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost. The disk can then no longer provide a quorum vote to the cluster.

    After a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can unconfigure the quorum device, add it to the storage pool, then reconfigure the disk as a quorum device.

For more information about quorum devices, see Quorum and Quorum Devices in Oracle Solaris Cluster 4.3 Concepts Guide.

SPARC: Guidelines for Oracle VM Server for SPARC Logical Domains as Cluster Nodes

Consider the following points when you create an Oracle VM Server for SPARC logical domain on a physically clustered machine that is SPARC hypervisor capable, for use as a cluster node:

  • Supported domain types - You can configure Oracle VM Server for SPARC guest domains, I/O domains, and control domains as cluster nodes.

  • SR-IOV devices – An SR-IOV device is supported with a logical domain that is configured to run as a cluster node. See the Oracle Solaris Cluster 4 Compatibility Guide (http://www.oracle.com/technetwork/server-storage/solaris-cluster/overview/solariscluster4-compatibilityguide-1429037.pdf) for information about supported SR-IOV devices.

  • SCSI LUN requirement – The virtual shared storage device, or virtual disk back end, of an Oracle VM Server for SPARC guest domain must be a full SCSI LUN in the I/O domain. You cannot use an arbitrary virtual device.

  • Fencing – Do not export a storage LUN to more than one guest domain on the same physical machine unless you also disable fencing for that device. Otherwise, if a device is visible to two different guest domains on the same machine, the device will be fenced whenever one of the guest domains halts. The fencing of the device will panic any other guest domain that subsequently tries to access the device.

  • Network isolation – Guest domains that are located on the same physical machine but are configured in different clusters must be network isolated from each other. Use one of the following methods:

    • Configure the clusters to use different network interfaces in the I/O domain for the private network.

    • Use different network addresses for each of the clusters when you perform initial configuration of the clusters.

  • Networking in guest domains – Network packets to and from guest domains must traverse service domains to reach the network drivers through virtual switches. Virtual switches use kernel threads that run at system priority. The virtual-switch threads must be able to acquire needed CPU resources to perform critical cluster operations, including heartbeats, membership, checkpoints, and so forth. Configuring virtual switches with the mode=sc setting enables expedited handling of cluster heartbeat packets. However, the reliability of other critical cluster operations can be enhanced by adding more CPU resources to the service domain under the following workloads:

    • High-interrupt load, for example, due to network or disk I/O. Under extreme load, virtual switches can preclude system threads from running for a long time, including virtual-switch threads.

    • Real-time threads that are overly aggressive in retaining CPU resources. Real-time threads run at a higher priority than virtual-switch threads, which can restrict CPU resources for virtual-switch threads for an extended time.

  • Non-shared storage – For non-shared storage, such as for Oracle VM Server for SPARC guest-domain OS images, you can use any type of virtual device. You can back such virtual devices with any implementation in the I/O domain, such as files or volumes. However, do not copy files or clone volumes in the I/O domain for the purpose of mapping them into different guest domains of the same cluster. Such copying or cloning would lead to problems because the resulting virtual devices would have the same device identity in different guest domains. Always create a new file or device in the I/O domain, which is assigned a unique device identity, and then map the new file or device into a different guest domain.

  • Exporting storage from I/O domains – If you configure a cluster that is composed of Oracle VM Server for SPARC I/O domains, do not export its storage devices to other guest domains that also run Oracle Solaris Cluster software.

  • Oracle Solaris I/O multipathing – Do not run Oracle Solaris I/O multipathing software (MPxIO) from guest domains. Instead, run Oracle Solaris I/O multipathing software in the I/O domain or control domain and export it to the guest domains.

  • Virtual disk multipathing - Do not configure the virtual disk multipathing feature of Oracle VM Server for SPARC on a logical domain that is configured as a cluster node.

  • Live migration and cold migration restrictions and support - The live migration and cold migration features of Oracle VM Server for SPARC are not supported for logical domains that are configured to run as cluster nodes.

    However, guest domains that are configured to be managed by the HA for Oracle VM Server for SPARC data service do support cold migration and live migration. No other type of logical domain in this data service configuration is supported for cold migration or live migration.

For more information about Oracle VM Server for SPARC, see the Oracle VM Server for SPARC 3.1 Administration Guide.

Zone Clusters

A zone cluster is a cluster of Oracle Solaris non-global zones. You can use the clsetup utility or the Oracle Solaris Cluster Manager browser interface to create a zone cluster and add a network address, file system, ZFS storage pool, or storage device. You can also use a command line interface (the clzonecluster utility) to create a zone cluster, make configuration changes, and remove a zone cluster. For more information about using the clzonecluster utility, see the clzonecluster(1CL) man page. For more information about Oracle Solaris Cluster Manager, see Chapter 13, Using the Oracle Solaris Cluster Manager Browser Interface in Oracle Solaris Cluster 4.3 System Administration Guide.

Supported brands for zone clusters are solaris, solaris10, and labeled. The labeled brand is used exclusively in a Trusted Extensions environment. To use the Trusted Extensions feature of Oracle Solaris, you must configure the Trusted Extensions feature for use in a zone cluster. No other use of Trusted Extensions is supported in an Oracle Solaris Cluster configuration.

You can also specify a shared-IP zone cluster or an exclusive-IP zone cluster when you run the clsetup utility.

  • Shared-IP zone clusters work with solaris or solaris10 brand zones. A shared-IP zone cluster shares a single IP stack between all the zones on the node, and each zone is allocated an IP address.

  • Exclusive-IP zone clusters work with solaris and solaris10 brand zones. An exclusive-IP zone cluster supports a separate IP instance stack.
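
The following is a minimal, hedged sketch of creating a solaris brand zone cluster with one node by using the clzonecluster utility; all names, paths, and addresses (zc1, /zones/zc1, phys-schost-1, zc1-host-1, 192.0.2.21, net0) are placeholders, additional properties are usually required, and the clsetup utility can be used instead:

    # clzonecluster configure zc1
    create
    set zonepath=/zones/zc1
    add node
      set physical-host=phys-schost-1
      set hostname=zc1-host-1
      add net
        set address=192.0.2.21
        set physical=net0
      end
    end
    commit
    exit
    # clzonecluster install zc1
    # clzonecluster boot zc1

Repeat the add node block for each additional zone-cluster node before you commit, and run clzonecluster verify zc1 to check the configuration before you install.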

Consider the following points when you plan the creation of a zone cluster:

Global-Cluster Requirements and Guidelines

  • Global cluster – The zone cluster must be configured on a global Oracle Solaris Cluster configuration. A zone cluster cannot be configured without an underlying global cluster.

  • Cluster mode – The global-cluster node from which you create or modify a zone cluster must be in cluster mode. If any other nodes are in noncluster mode when you administer a zone cluster, the changes that you make are propagated to those nodes when they return to cluster mode.

  • Adequate private-IP addresses – The private IP-address range of the global cluster must have enough free IP-address subnets for use by the new zone cluster. If the number of available subnets is insufficient, the creation of the zone cluster fails.

  • Changes to the private IP-address range – The private IP subnets and the corresponding private IP-addresses that are available for zone clusters are automatically updated if the global cluster's private IP-address range is changed. If a zone cluster is deleted, the cluster infrastructure frees the private IP-addresses that were used by that zone cluster, making the addresses available for other use within the global cluster and by any other zone clusters that depend on the global cluster.

  • Supported devices – Devices that are supported with Oracle Solaris zones can be exported to a zone cluster. Such devices include the following:

    • Oracle Solaris disk devices (cNtXdYsZ)

    • DID devices (/dev/did/*dsk/dN)

    • Solaris Volume Manager and Solaris Volume Manager for Sun Cluster multi-owner disk sets (/dev/md/setname/*dsk/dN)

Zone-Cluster Requirements and Guidelines

  • Distribution of nodes – You cannot host multiple nodes of the same zone cluster on the same host machine. A host can support multiple zone-cluster nodes as long as each zone-cluster node on that host is a member of a different zone cluster.

  • Node creation – You must create at least one zone-cluster node at the time that you create the zone cluster. You can use the clsetup utility or the clzonecluster command to create the zone cluster. The name of the zone-cluster node must be unique within the zone cluster. The infrastructure automatically creates an underlying non-global zone on each host that supports the zone cluster. Each non-global zone is given the same zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster. For example, if you create a zone cluster that is named zc1, the corresponding non-global zone name on each host that supports the zone cluster is also zc1.

  • Cluster name – Each zone-cluster name must be unique throughout the cluster of machines that host the global cluster. The zone-cluster name cannot also be used by a non-global zone elsewhere in the cluster of machines, nor can the zone-cluster name be the same as that of a global-cluster node. You cannot use “all” or “global” as a zone-cluster name, because these are reserved names.

  • Public-network IP addresses – You can optionally assign a specific public-network IP address to each zone-cluster node.


    Note -  If you do not configure an IP address for each zone cluster node, two things will occur:
    • That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.

    • The cluster software will activate any Logical Host IP address on any NIC.


  • Private hostnames – During creation of the zone cluster, a private hostname is automatically created for each node of the zone cluster, in the same way that hostnames are created in global clusters. Currently, you cannot rename the private hostname of a zone-cluster node. For more information about private hostnames, see Private Hostnames.

  • Oracle Solaris Zones brands – All nodes of a zone cluster are configured as non-global zones of the solaris, solaris10, or labeled brand that is set with the cluster attribute. No other brand types are permitted in a zone cluster.

    For Trusted Extensions, you must use only the labeled brand.

  • Restriction of the Immutable Zones property file-mac-profile – The Oracle Solaris Zones property file-mac-profile is currently not supported by the clzonecluster command.

    In addition, do not attempt to use the Oracle Solaris zonecfg command to configure the file-mac-profile property in any underlying non-global zone of a zone cluster. Doing so might cause unexpected behavior of cluster services in that zone cluster.

  • IP type - You can create a zone cluster that is either the shared IP type or the exclusive IP type. If the IP type is not specified, a shared-IP zone cluster is created by default.

  • Global_zone=TRUE resource-type property – To register a resource type that uses the Global_zone=TRUE resource-type property, the resource-type file must reside in the /usr/cluster/global/rgm/rtreg/ directory of the zone cluster. If that resource-type file resides in any other location, the command to register the resource type is rejected.

  • File systems – You can use the clsetup utility or the clzonecluster command to add the following types of file systems for use by the zone cluster; a command sketch follows this list. A file system is exported to a zone cluster by using either a direct mount or a loopback mount. Adding a file system with the clsetup utility is done in cluster scope, which affects the entire zone cluster.

    • By direct mount:

      • UFS local file system

      • StorageTek QFS stand-alone file system

      • StorageTek QFS shared file system, only when used to support Oracle RAC

      • Oracle Solaris ZFS (exported as a data set)

      • NFS from supported NAS devices

    • By loopback mount:

      • UFS local file system

      • StorageTek QFS stand-alone file system

      • StorageTek QFS shared file system, only when used to support Oracle RAC

      • UFS cluster file system

    You configure an HAStoragePlus or ScalMountPoint resource to manage the mounting of the file system.
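
The following hedged sketch shows one way to add a loopback-mounted file system and a ZFS dataset to an existing zone cluster with the clzonecluster utility; the zone-cluster name, mount point, and dataset name are placeholders, and an HAStoragePlus or ScalMountPoint resource must still be configured afterward to manage the mount:

    # clzonecluster configure zc1
    add fs
      set dir=/global/appdata
      set special=/global/appdata
      set type=lofs
    end
    add dataset
      set name=apppool/appdata
    end
    commit
    exit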

Guidelines for Trusted Extensions in a Zone Cluster

Consider the following points when you use the Trusted Extensions feature of Oracle Solaris in a zone cluster:

  • Only zone-cluster support – In an Oracle Solaris Cluster configuration with Trusted Extensions enabled, applications must run only in a zone cluster. No other non-global zones can be used on the cluster. You must use only the clzonecluster command to create a zone cluster. Do not use the txzonemgr command to create a non-global zone on a cluster that has Trusted Extensions enabled.

  • Trusted Extensions scope – You can either enable or disable Trusted Extensions for the entire cluster configuration. When Trusted Extensions is enabled, all non-global zones in the cluster configuration must belong to one of the zone clusters. You cannot configure any other kind of non-global zone without compromising security.

  • IP addresses – Each zone cluster that uses Trusted Extensions must use its own IP addresses. The special networking feature in Trusted Extensions that enables an IP address to be shared between multiple non-global zones is not supported with Oracle Solaris Cluster software.

  • Loopback mounts – You cannot use loopback mounts that have write permissions in a zone cluster that uses Trusted Extensions. Use only direct mounts of file systems that permit write access, or use loopback mounts that have only read permissions.

  • File systems – Do not configure in the zone cluster the global device that underlies a file system. Configure only the file system itself in the zone cluster.

  • Storage device name – Do not add an individual slice of a storage device to a zone cluster. You must add the entire device to a single zone cluster. The use of slices of the same storage device in different zone clusters compromises the security of those zone clusters.

  • Application installation – Install applications only in the zone cluster or in the global cluster and then export to the zone cluster by using read-only loopback mounts.

  • Zone cluster isolation – When Trusted Extensions is used, the name of a zone cluster is a security label. In some cases, the security label itself might be information that cannot be disclosed, and the name of a resource or resource group might be a sensitive piece of information that cannot be disclosed. When an inter-cluster resource dependency or inter-cluster resource-group affinity is configured, the name of the other cluster becomes visible as well as the name of any affected resource or resource group. Therefore, before you establish any inter-cluster relationships, evaluate whether this information can be made visible according to your requirements.