In the following post we are going to discuss how to implement and manage complex DRS solutions.
The following are the objectives from the blueprint:
- Configure DPM, including appropriate DPM threshold
- Create DRS and DPM alarms
- Configure / Modify EVC mode on an existing DRS cluster
- Configure applicable power management settings for ESXi hosts
- Configure DRS cluster for efficient/optimal load distribution
- Properly apply virtual machine automation levels based upon application requirements
- Administer DRS / Storage DRS
- Create DRS / Storage DRS affinity and anti-affinity rules
- Configure advanced DRS / Storage DRS settings
- Configure and Manage vMotion / Storage vMotion
- Create and manage advanced resource pool configurations
Using VMware Workstation:
- Microsoft Windows Server 2012 R2 for services (DNS, DHCP, etc.)
- Installed esx0
- Installed esx1
- Installed VCSA
- vSphere 6.0 Resource Management Guide
Configure DPM, including appropriate DPM threshold:
Before we can configure DPM on our cluster, we need to enable DRS on the cluster. To do that, click on the Cluster >> Manage >> vSphere DRS >> Edit.
For your hosts to support DPM, you will need to use one of three power management protocols to bring a host out of standby mode:
- Intelligent Platform Management Interface (IPMI),
- Hewlett-Packard Integrated Lights-Out (iLO)
- Wake-on-LAN (WOL)
Each level you move the DPM Threshold slider changes the priority level of the recommendations that are executed automatically. These priority ratings are based on the amount of over- or under-utilization found in the DRS cluster and the improvement that is expected from the intended host power-state change.
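The slider's behavior can be sketched as a simple priority filter: recommendations carry a priority rating, and the threshold decides which of them are applied automatically. The dictionaries and threshold mapping below are purely illustrative, not the real vSphere API:

```python
# Illustrative sketch of DPM threshold filtering (not the actual vSphere API).
# Priority 1 is the most important; a more aggressive threshold auto-applies
# recommendations further down the priority scale.

def auto_applied(recommendations, threshold):
    """Return only the recommendations DPM would execute automatically."""
    return [r for r in recommendations if r["priority"] <= threshold]

recs = [
    {"host": "esx0", "action": "power-off", "priority": 2},
    {"host": "esx1", "action": "power-on",  "priority": 4},
]

print(auto_applied(recs, threshold=3))       # only the priority-2 recommendation
print(len(auto_applied(recs, threshold=5)))  # aggressive setting: both -> 2
```

A conservative slider position behaves like a low threshold (only the most important recommendations run on their own); an aggressive position behaves like a high one.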
Create DRS and DPM alarms:
We already discussed how to create alarms in vCenter, so I am not going to cover it again. What you want to look for in this case are alarms at the cluster level for DRS and at the host level for DPM.
Configure / Modify EVC mode on an existing DRS cluster:
When you have CPUs from different generations, you can set a CPU baseline so that vMotion works (vMotion is not compatible across hosts with different CPU feature sets).
Using EVC is the solution for the problem above. To configure EVC, click on the Cluster >> Manage >> Settings >> VMware EVC >> Edit.
Here you choose the processor vendor (Intel or AMD) and then you choose the lowest common denominator for your ESXi hosts' CPUs.
Note: you will need to power off the running VMs if you have any.
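The "lowest common denominator" idea can be sketched in a few lines: given the CPU generations present in the cluster, the baseline is the oldest one. The generation names below are examples of Intel EVC mode names, ordered oldest to newest, for illustration only:

```python
# Illustrative sketch of EVC baseline selection: the cluster baseline is the
# oldest (lowest) CPU generation among the hosts. Example Intel generation
# names, ordered oldest to newest; not an exhaustive or authoritative list.
EVC_ORDER = ["merom", "penryn", "nehalem", "westmere",
             "sandybridge", "ivybridge", "haswell"]

def evc_baseline(host_generations):
    """Return the lowest common denominator EVC mode for the given hosts."""
    return min(host_generations, key=EVC_ORDER.index)

print(evc_baseline(["haswell", "sandybridge", "ivybridge"]))  # sandybridge
```

Every host then exposes only the feature set of that baseline to its VMs, which is what makes vMotion between the mixed hosts possible.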
Configure applicable power management settings for ESXi hosts:
ESXi can use several power management features that the host hardware provides to adjust the trade-off between performance and power use.
Using a power management policy, you can choose from the following:
- High performance – The VMkernel detects certain power management features, but will not use them unless the BIOS requests them for power capping or thermal events.
- Balanced (default) – The VMkernel uses the available power management features conservatively to reduce host energy consumption with minimal compromise to performance.
- Low power – The VMkernel aggressively uses available power management features to reduce host energy consumption at the risk of lower performance.
- Custom – The VMkernel bases its power management policy on the values of several advanced configuration parameters. You can set these parameters in the vSphere Web Client Advanced Settings dialog box
To change the setting, click on the Host >> Manage >> Settings >> Power Management >> Edit.
Configure DRS cluster for efficient/optimal load distribution:
There is a great blog post by Matthew Meyer on the VMware blog site, so I will just share the link.
Thank you @mattdmeyer
Properly apply virtual machine automation levels based upon application requirements:
When enabling DRS you can configure the automation level for the entire cluster, but you can also configure the automation level per VM. To do that, you will need to make sure that “Enable individual virtual machine automation levels” is checked (it is the default).
There are 3 automation levels:
- Manual – vCenter will ONLY recommend moving resources.
- Partially Automated – VMs are automatically placed onto hosts at power-on, and vCenter will recommend migrations after that.
- Fully Automated – vCenter takes full control of re-balancing the VMs.
Then you can go to VM Overrides and configure automation at the VM level.
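The precedence is simple: a per-VM override wins over the cluster-wide setting. A minimal sketch of that lookup, with illustrative VM names and level strings:

```python
# Sketch of how a per-VM override takes precedence over the cluster-wide
# automation level. The names and data structures are illustrative only.
CLUSTER_AUTOMATION = "fullyAutomated"

VM_OVERRIDES = {
    "db-vm01":  "manual",              # e.g. a sensitive database VM
    "app-vm02": "partiallyAutomated",
}

def effective_automation(vm_name):
    """Per-VM override wins; otherwise fall back to the cluster setting."""
    return VM_OVERRIDES.get(vm_name, CLUSTER_AUTOMATION)

print(effective_automation("db-vm01"))   # manual
print(effective_automation("web-vm03"))  # fullyAutomated (cluster default)
```

This is why you would set a latency-sensitive application's VM to Manual while leaving the rest of the cluster Fully Automated.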
Administer DRS:
By working through these objectives we have already done most of the DRS administration. Administration tasks include:
- Adding/removing hosts to/from the cluster
- Adding/removing VMs to/from the cluster
- Managing power resources
- Configuring affinity rules (there is a sub objective for this task)
Create DRS affinity and anti-affinity rules:
There are four affinity rule configurations:
- VM-VM anti-affinity rule: DRS will not allow the listed VMs to be on the same ESXi host.
- VM-VM affinity rule: DRS will make sure that the listed VMs are always on the same ESXi host.
- VM-Host anti-affinity rule: DRS will not allow the listed VMs to run on certain ESXi hosts.
- VM-Host affinity rule: DRS will make sure that the listed VMs always run on certain ESXi hosts.
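The four rule types above can be summarized as a small placement checker: given a proposed VM-to-host placement, which rules would it violate? This is only an illustrative model of the rule semantics (it assumes every VM in a rule appears in the placement), not the DRS algorithm itself:

```python
# Illustrative checker for the four DRS rule types against a proposed
# placement (a dict mapping VM name -> host name). Not the DRS algorithm.

def violations(placement, rules):
    """Return the names of rules the placement would violate."""
    out = []
    for rule in rules:
        kind, vms = rule["type"], rule["vms"]
        hosts = {placement[vm] for vm in vms if vm in placement}
        if kind == "vm-vm-affinity" and len(hosts) > 1:
            out.append(rule["name"])            # VMs spread across hosts
        elif kind == "vm-vm-anti-affinity" and len(hosts) < len(vms):
            out.append(rule["name"])            # two VMs share a host
        elif kind == "vm-host-affinity" and not hosts <= set(rule["hosts"]):
            out.append(rule["name"])            # VM outside allowed hosts
        elif kind == "vm-host-anti-affinity" and hosts & set(rule["hosts"]):
            out.append(rule["name"])            # VM on a forbidden host
    return out

placement = {"web1": "esx0", "web2": "esx0", "db1": "esx1"}
rules = [
    {"name": "keep-web-apart", "type": "vm-vm-anti-affinity",
     "vms": ["web1", "web2"]},
    {"name": "db-on-esx1", "type": "vm-host-affinity",
     "vms": ["db1"], "hosts": ["esx1"]},
]
print(violations(placement, rules))  # ['keep-web-apart']
```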
To add a VM/Host DRS group, go to Cluster >> Manage >> Settings >> VM/Host Groups and then click Add.
You can switch between Host and VM group types.
Enabling/disabling a DRS affinity rule is just a checkbox when you define the rule.
To configure VM-VM affinity/anti-affinity rules, go to Cluster >> Manage >> Settings >> VM/Host Rules and then click Add.
To configure VM-Host affinity/anti-affinity rules:
After creating the groups, you can choose the third option, “Virtual Machines to Hosts”, when creating a rule.
You can also choose the “Should” option, in which case DRS treats the rule as a preference and violates it only as a last resort.
Administer Storage DRS:
This is a repeat objective; we already discussed it in the datastore cluster post, see the link below.
Configure and Manage vMotion / Storage vMotion:
vMotion allows you to migrate a running VM from one host to another; vMotion is used by all the DRS functions that we have discussed so far. vMotion requirements:
- Shared storage
- vMotion network with a minimum 1 Gb NIC (a dedicated NIC is recommended)
- Network labels used for virtual machine port groups are consistent across hosts
- Virtual devices such as CD-ROM/floppy drives must be disconnected from the host
- CPU compatibility
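The requirement list above can be expressed as a simple pre-flight check. Every field name here is made up for illustration; vCenter performs these compatibility checks for you in the migration wizard:

```python
# Hypothetical pre-check mirroring the vMotion requirement list above.
# All field names are illustrative, not a real vSphere API.

def vmotion_compatible(src, dst, vm):
    """Return (ok, reasons) for a proposed vMotion of vm from src to dst."""
    reasons = []
    if not (src["shared_storage"] and dst["shared_storage"]):
        reasons.append("no shared storage")
    if not (src["vmotion_vmk"] and dst["vmotion_vmk"]):
        reasons.append("missing vMotion VMkernel port")
    if not set(vm["port_groups"]) <= set(dst["port_groups"]):
        reasons.append("network labels missing on destination")
    if vm["local_devices_connected"]:
        reasons.append("CD-ROM/floppy still connected")
    if src["cpu_baseline"] != dst["cpu_baseline"]:
        reasons.append("CPU incompatibility (consider EVC)")
    return (len(reasons) == 0, reasons)

src = {"shared_storage": True, "vmotion_vmk": True,
       "port_groups": ["VM Network"], "cpu_baseline": "sandybridge"}
dst = {"shared_storage": True, "vmotion_vmk": True,
       "port_groups": ["VM Network"], "cpu_baseline": "sandybridge"}
vm = {"port_groups": ["VM Network"], "local_devices_connected": False}

print(vmotion_compatible(src, dst, vm))  # (True, [])
```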
Configure vMotion network:
Add a VMkernel port that supports vMotion traffic.
Configure Storage vMotion
Storage vMotion allows you to migrate a running VM's disk files from one datastore to another while the VM stays on the same ESXi host.
To migrate a running VM's disk files to another datastore, you use the same migration wizard as vMotion, but choose “Change storage only” instead of changing the compute resource.
Create and manage advanced resource pool configurations:
Basics (just a reminder, we already discussed this in the VCP posts):
Reservation: Specifies a guaranteed CPU or memory allocation for this resource pool. Defaults to 0. A nonzero reservation is subtracted from the unreserved resources of the parent (host or resource pool). The resources are considered reserved, regardless of whether virtual machines are associated with the resource pool.
Limit: Specifies the upper limit for this resource pool's CPU or memory allocation. Limits act only at the vSphere level and can cause issues for the VM because the guest OS does not know about them.
Shares: Specify shares for this resource pool with respect to the parent's total resources. Sibling resource pools share resources according to their relative share values, bounded by the reservation and limit. Shares apply ONLY in periods of contention.
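Share-based allocation during contention is just proportional division. A minimal sketch, ignoring reservations and limits for simplicity (the pool names are examples; High/Normal/Low map to the 8000/4000/2000 CPU share presets for resource pools):

```python
# Sketch of share-based CPU allocation under contention: sibling pools
# split the contended resource in proportion to their share values.
# Reservations and limits are ignored here for simplicity.

def allocate(total_mhz, pools):
    """Divide total_mhz among pools proportionally to their shares."""
    total_shares = sum(pools.values())
    return {name: total_mhz * shares / total_shares
            for name, shares in pools.items()}

# High=8000, Normal=4000, Low=2000 are the resource pool share presets.
pools = {"prod": 8000, "test": 4000, "dev": 2000}
print(allocate(14000, pools))
# {'prod': 8000.0, 'test': 4000.0, 'dev': 2000.0}
```

When there is no contention, shares are irrelevant and each pool simply gets what it asks for (up to its limit).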
Creating Resource pool:
When the Expandable Reservation option is selected, the system considers the resources available in the selected resource pool and its direct parent resource pool.
If the parent resource pool also has the Expandable Reservation option selected, it can borrow resources from its parent resource pool. Borrowing resources occurs recursively from the ancestors of the current resource pool as long as the Expandable Reservation option is selected.
If you power on a virtual machine in this resource pool, and the combined reservations of the virtual machines are larger than the reservation of the resource pool, the resource pool can use resources from its parent or ancestors.
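The recursive borrowing described above can be sketched as a simple admission check: a pool satisfies a reservation from its own unreserved capacity and, if expandable, borrows the remainder from its ancestors. The pool sizes are illustrative:

```python
# Sketch of the expandable-reservation admission check described above:
# a pool covers a reservation from its own unreserved capacity and, if it
# is expandable, borrows the shortfall from its ancestors recursively.

class Pool:
    def __init__(self, unreserved, expandable=False, parent=None):
        self.unreserved = unreserved    # capacity not yet reserved (MHz)
        self.expandable = expandable
        self.parent = parent

def can_reserve(pool, amount):
    """Can this pool admit a reservation of `amount`?"""
    if amount <= pool.unreserved:
        return True
    if pool.expandable and pool.parent is not None:
        return can_reserve(pool.parent, amount - pool.unreserved)
    return False

root = Pool(unreserved=4000)
child = Pool(unreserved=1000, expandable=True, parent=root)
print(can_reserve(child, 3000))  # True: 1000 local + 2000 borrowed from root
print(can_reserve(child, 6000))  # False: not enough even with borrowing
```

Without Expandable Reservation, the same power-on would fail as soon as the child pool's own 1000 MHz of unreserved capacity was exhausted.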
Adding VM to the resource pool:
To add a VM to the resource pool, just drag and drop the VM into the resource pool.
Thanks for reading