This section describes how to complete the following tasks:
Before you can perform these tasks, you must have completed the Cloud Admin tasks, as described in Administering the Exalogic vDC and Setting Up the Infrastructure.
Log in to the Exalogic Control console as the CloudAdmin user.
For more information about creating this user role, see Creating the Cloud Admin User.
An account must have at least one Cloud User. You cannot remove the Cloud User from an account that has only a single Cloud User associated with it.
To remove a Cloud User from an account with multiple Cloud Users, complete the following steps:
In an Exalogic vDC, the underlying CPU hardware resources of the machine are allocated to the vServers in the form of vCPUs. By default, each vCPU consumes one CPU hardware thread.
For example, each Sun Fire X4170 M2 compute node on an Exalogic X2-2 machine consists of two 6-core sockets—that is, 12 cores per compute node. Each core supports two hardware threads. So a single X2-2 compute node can support 24 vCPUs—one vCPU per hardware thread—in the default configuration.
To improve hardware utilization and to facilitate denser consolidation of applications, starting with the Exalogic Elastic Cloud Software release 2.0.4.0.0, the Exalogic vDC can be configured to support more vCPUs than the available CPU threads—that is, the CPU resources can be oversubscribed.
Cloud Admin users can enable CPU oversubscription by increasing the vCPU-to-physical-CPU-threads ratio. For example, in a vDC that is based on a standard Exalogic X2-2 machine, when the vCPU-to-physical-CPU-threads ratio is increased from the default 1:1 to 2:1, the number of vCPUs available on each Sun Fire X4170 M2 compute node on an Exalogic X2-2 machine increases from 24 to 48. Table 10-1 shows the number of vCPUs available in the vDC at various vCPU-to-physical-CPU-threads ratios.
Table 10-1 Number of vCPUs at Different vCPU-to-Physical-CPU-threads Ratios
vCPU-to-Physical-CPU-threads Ratio | X2-2 Full | X2-2 1/2 | X2-2 1/4 | X2-2 1/8
---|---|---|---|---
1:1 (CPU oversubscription not enabled) | 720 | 384 | 192 | 96
2:1 | 1440 | 768 | 384 | 192
3:1 | 2160 | 1152 | 576 | 288
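The values in Table 10-1 follow directly from ratio x threads-per-node x node-count. A minimal sketch that recomputes the table; the node counts for the half, quarter, and eighth racks are derived from the table values (384/24 = 16, 192/24 = 8, 96/24 = 4), since only the full rack's 30 nodes is stated explicitly:

```python
# Recompute Table 10-1: vCPUs = ratio x hardware threads per node x node count.
THREADS_PER_NODE = 24  # 2 sockets x 6 cores x 2 hardware threads per core

# Node counts: 30 (full rack) is from the text; the rest are derived
# from the table values by dividing the 1:1 row by 24.
NODE_COUNTS = {"Full": 30, "1/2": 16, "1/4": 8, "1/8": 4}

for ratio in (1, 2, 3):
    row = {size: ratio * THREADS_PER_NODE * count
           for size, count in NODE_COUNTS.items()}
    print(f"{ratio}:1 -> {row}")
```

Running this reproduces each row of the table, for example `1:1 -> {'Full': 720, '1/2': 384, '1/4': 192, '1/8': 96}`.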
To enable CPU oversubscription, complete the following steps:
This section describes the following example scenarios:
An Exalogic X2-2 full rack is set up as a virtual datacenter.
At the default vCPU-to-physical-CPU-threads ratio of 1:1, 720 vCPUs (24 x 30) are available in the vDC.
The Cloud Admin determines that more than 720 vCPUs are required to support the workloads planned for the vDC.
In this scenario, the Cloud Admin can increase the number of vCPUs available in the vDC by enabling CPU oversubscription. For example, the Cloud Admin can increase the number of vCPUs available in the vDC to 1440 (2 x 24 x 30) by increasing the vCPU-to-physical-CPU-threads ratio to 2:1, as described in Configuring CPU Oversubscription.
An Exalogic X2-2 full rack is set up as a virtual datacenter.
At the default vCPU-to-physical-CPU-threads ratio of 1:1, 720 vCPUs (24 x 30) are available in the vDC.
The vCPU quotas of the accounts in the vDC are fully allocated to the vServers created by the cloud users assigned to the accounts.
The memory and storage quotas of the accounts in the vDCs are not yet fully allocated to vServers.
Cloud Users want to create more vServers, but cannot do so because the vCPU quota is fully allocated.
In this scenario, the Cloud Admin can allow Cloud Users to create more vServers by enabling CPU oversubscription. For example, the Cloud Admin can increase the number of vCPUs available in the vDC to 1440 (2 x 24 x 30) by increasing the vCPU-to-physical-CPU-threads ratio to 2:1, as described in Configuring CPU Oversubscription. Then, the Cloud Admin can increase the vCPU quota of the accounts in the vDC to enable creation of additional vServers.
An Exalogic X2-2 full rack is set up as a virtual datacenter.
CPU oversubscription has been enabled and the oversubscription ratio is currently at 2:1—that is, 1440 vCPUs (2 x 24 x 30) are available in the vDC.
The vCPU quotas of the accounts in the vDC are fully allocated to the vServers created by the cloud users assigned to the accounts.
The memory and storage quotas of the accounts in the vDCs are not yet fully allocated to vServers.
Cloud Users want to create more vServers, but cannot do so because the vCPU quota is fully allocated.
In this scenario, the Cloud Admin can allow Cloud Users to create more vServers by increasing the CPU oversubscription ratio further. For example, the Cloud Admin can increase the number of vCPUs available in the vDC to 2160 (3 x 24 x 30) by increasing the vCPU-to-physical-CPU-threads ratio to 3:1, as described in Configuring CPU Oversubscription. Then, the Cloud Admin can increase the vCPU quota of the accounts in the vDC to enable creation of additional vServers.
An Exalogic X2-2 full rack is set up as a virtual datacenter.
CPU oversubscription has been enabled and the ratio is currently at 3:1—that is, 2160 vCPUs (3 x 24 x 30) are available in the vDC.
Only 1000 of the available 2160 vCPUs are being used by vServers.
The Cloud Admin decides that an oversubscription ratio of 2:1 would suffice for this vDC.
In this scenario, the Cloud Admin can decrease the number of vCPUs available in the vDC to 1440 (2 x 24 x 30) by decreasing the vCPU-to-physical-CPU-threads ratio to 2:1.
Note:
In any of the CPU oversubscription scenarios described earlier, to ensure that application performance across the vDC remains predictable, the Cloud Admin must keep the CPU Cap at 100%; changing it is not recommended. Note that the CPU Cap feature is deprecated.
In certain situations, you may want to make an OVS node unavailable for vServer placement, such as when you want to perform maintenance on a compute node.
To make an OVS node unavailable for vServer placement, you must perform the following tasks:
Task 1: Identify and Tag the OVS Node with vserver_placement.ignore_node=true
Task 2: Migrate the vServers on the Tagged OVS Node to Other Available OVS Nodes
Identify the OVS node that you want to make unavailable for vServer placement.
Tag the OVS node with vserver_placement.ignore_node=true by doing the following:
Log in to Exalogic Control as the ELAdmin user.
From the navigation pane on the left, click Assets.
Under Assets, from the drop-down list, select Server Pools, as shown in the following screenshot:
A list of the OVS nodes is displayed.
Select the OVS node you identified in step 1, as shown in the following screenshot:
The OVS node dashboard is displayed as shown in the following screenshot:
Click the Summary tab.
From the Actions pane on the right, click Edit Tags.
The Edit Tags dialog box is displayed.
Click the plus (+) button.
Enter vserver_placement.ignore_node as the Tag Name.
Enter true as the Value. The following screenshot shows the Edit Tags dialog box after adding the vserver_placement.ignore_node=true tag:
Click the Save button. Once the job is complete, the tag should be visible in the Summary tab under the Tags table. Any new vServers that you create will be placed on nodes other than those tagged with vserver_placement.ignore_node=true.
Note:
To make the OVS node available for placement again, either delete the tag or set the value of the vserver_placement.ignore_node tag to false.
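The effect of the tag can be pictured as a simple filter over candidate nodes: any node carrying vserver_placement.ignore_node=true is skipped when placing new vServers. The following is an illustrative sketch only; the data structures and node names are hypothetical, not the actual Exalogic Control implementation:

```python
# Illustrative sketch of the placement rule, NOT the real Exalogic
# Control logic: nodes tagged vserver_placement.ignore_node=true are
# excluded from the candidate list for new vServers.

def eligible_nodes(nodes):
    """Return OVS nodes not tagged vserver_placement.ignore_node=true."""
    return [
        n for n in nodes
        if n.get("tags", {}).get("vserver_placement.ignore_node") != "true"
    ]

# Hypothetical node names for illustration.
nodes = [
    {"name": "el01cn01", "tags": {"vserver_placement.ignore_node": "true"}},
    {"name": "el01cn02", "tags": {}},
    {"name": "el01cn03"},  # untagged node
]
print([n["name"] for n in eligible_nodes(nodes)])  # ['el01cn02', 'el01cn03']
```

Setting the tag's value to false (or deleting it) makes the node pass the filter again, matching the note above.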
Identify the vServers running on the tagged OVS node by doing the following:
Log in to Exalogic Control as a user with the Cloud Admin role.
From the navigation pane on the left, click vDC Management.
Expand vDCs.
Expand MyCloud.
Expand Server Pools.
Click your server pool as shown in the following screenshot:
The server pool dashboard is displayed.
Click the Summary tab.
Scroll down to the Oracle VM Servers section as shown in the following screenshot:
Select the OVS node you tagged in Task 1: Identify and Tag the OVS Node with vserver_placement.ignore_node=true.
Under the Virtual Machines section, note the vServers hosted on the node.
Stop the vServers you identified in the previous step by following the steps described in Stopping vServers.
Start the vServers you stopped in the previous step by following the steps described in Starting vServers.
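The relocation in the two steps above works because a vServer restarted after the tag is in place is placed on an untagged node. The sequence can be sketched as follows; the stop_vserver and start_vserver callables are hypothetical stand-ins for the Exalogic Control "Stop vServer" and "Start vServer" actions, not a real API:

```python
# Illustrative sketch only: stop_vserver/start_vserver stand in for the
# Exalogic Control Stop/Start vServer actions; they are NOT a real API.

def relocate_vservers(vservers_on_tagged_node, stop_vserver, start_vserver):
    """Stop all vServers on the tagged node, then restart them; on
    restart, placement skips nodes tagged vserver_placement.ignore_node=true."""
    for vserver in vservers_on_tagged_node:
        stop_vserver(vserver)    # see Stopping vServers
    for vserver in vservers_on_tagged_node:
        start_vserver(vserver)   # see Starting vServers

# Example with recording stubs in place of the real actions:
log = []
relocate_vservers(
    ["vs1", "vs2"],
    stop_vserver=lambda v: log.append(("stop", v)),
    start_vserver=lambda v: log.append(("start", v)),
)
print(log)  # [('stop', 'vs1'), ('stop', 'vs2'), ('start', 'vs1'), ('start', 'vs2')]
```

The key point the sketch captures is the ordering: every affected vServer is stopped before any is restarted, so none is restarted onto the still-occupied tagged node.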
Note:
If an HA-enabled vServer fails, Exalogic Control considers all of the available nodes when selecting a node for restarting the failed vServer, including those that are tagged with vserver_placement.ignore_node=true. To make an OVS node unavailable to HA-enabled vServers as well, you must perform Task 3 (optional): Place the OVS Node in Maintenance Mode.
See MOS document 1594316.1 at:
https://support.oracle.com/epmos/faces/DocumentDisplay?id=1594316.1