VCAP-DCV Deploy Objective 6.2


In this post we are going to discuss “Optimize Virtual Machine resources”.

The following are objectives from the blueprint:

  • Adjust Virtual Machine properties according to a deployment plan:
    • Network configurations
    • CPU configurations
    • Storage configurations
  • Configure Flash Read Cache reservations
  • Modify Transparent Page Sharing and large memory page settings
  • Optimize a Virtual Machine for latency sensitive workloads
  • Troubleshoot Virtual Machine performance issues based on application workload

Lab Setup:

Using VMware workstation:

  • Microsoft Windows Server 2012 R2 for services (DNS, DHCP, etc.)
  • Installed esx0
  • Installed VCSA

 Documents used:

  • vSphere Virtual Machine Administration Guide
  • vSphere Resource Management Guide
  • VMware KB 2001003

Adjust Virtual Machine properties according to a deployment plan:

Network configurations:

When you configure networking for a virtual machine, you select or change an adapter type, a network connection, and whether to connect the network when the virtual machine powers on.

The type of network adapter that is available depends on the following factors:

  • The virtual machine compatibility
  • Whether the virtual machine compatibility has been updated to the latest version for the current host.
  • The guest operating system.

Supported NICs:

  • E1000E: Emulated version of the Intel 82574 Gigabit Ethernet NIC.
  • E1000: Emulated version of the Intel 82545EM Gigabit Ethernet NIC, with drivers available in most newer guest operating systems.
  • Flexible: Identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it.
  • Vlance: Emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in 32-bit legacy guest operating systems.
  • VMXNET: Optimized for performance in a virtual machine and has no physical counterpart. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
  • VMXNET 2 (Enhanced): Based on the VMXNET adapter but provides high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads.
  • VMXNET 3: A paravirtualized NIC designed for performance. VMXNET 3 offers all the features available in VMXNET 2 and adds several new features, such as multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery.
  • SR-IOV passthrough: Representation of a virtual function (VF) on a physical NIC with SR-IOV support. The virtual machine and the physical adapter exchange data without using the VMkernel as an intermediary. This adapter type is suitable for virtual machines where latency might cause failure or that require more CPU resources.

To change VM Networking:

[Screenshot: vm_settings1 (VM network adapter settings)]
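
The screenshot above shows the Edit Settings workflow; the same change can also be scripted. Below is a minimal pyVmomi sketch, assuming a lab vCenter, VM and portgroup with made-up names (vcsa.lab.local, app01, Prod-PortGroup), that points a VM's first network adapter at a different standard portgroup and sets it to connect at power on.

```python
# Minimal pyVmomi sketch: move a VM's first NIC to another standard portgroup
# and make it connect at power on. Host, VM and portgroup names are examples.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by name using a container view
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'app01')
view.DestroyView()

# Take the first virtual NIC and point it at the new portgroup
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    deviceName='Prod-PortGroup')
nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    startConnected=True, allowGuestControl=True, connected=True)

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=nic)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
print('Reconfigure task state:', task.info.state)
Disconnect(si)
```

Changing the adapter type itself (for example Vlance to VMXNET 3) works the same way, except that the old card is removed and a new one of the desired type is added, since the type of an existing card cannot be edited in place.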

CPU configurations:

VM CPU Settings (we already discussed these features in a previous post).

You can change the following:

  • Number of vCPUs
  • Cores per socket
  • Enable/Disable CPU hot add
  • Configure resources (Reservation, limits and shares)
  • Configure Hyperthreaded Core Sharing
  • Configure Processor Scheduling Affinity
  • Change CPU Identification Mask Settings
  • Change CPU/MMU Virtualization Settings

 

[Screenshot: vm_settings2 (VM CPU settings)]
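
For reference, the same CPU settings can be driven through the API. A minimal pyVmomi sketch, reusing the connection and the vm object from the networking sketch above, with example values:

```python
# pyVmomi sketch: adjust vCPU count, cores per socket, CPU hot add and the
# CPU reservation/limit/shares of an existing VM. "vm" is the VirtualMachine
# object located as in the networking sketch; the values are only examples.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(
    numCPUs=4,                        # total number of vCPUs
    numCoresPerSocket=2,              # presented as 2 sockets x 2 cores
    cpuHotAddEnabled=True,            # changing this needs the VM powered off
    cpuAllocation=vim.ResourceAllocationInfo(
        reservation=1000,             # guaranteed MHz
        limit=-1,                     # -1 means unlimited
        shares=vim.SharesInfo(level='high')))

task = vm.ReconfigVM_Task(spec=spec)
```

Processor scheduling affinity and hyperthreaded core sharing live in the same config spec (the cpuAffinity and flags.htSharing properties) if you need to script them; the CPU identification mask and CPU/MMU virtualization settings are also part of the VM configuration but are normally left at their defaults.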

Storage configurations:

VM Storage Settings (we already discussed these features in a previous post).

You can change the following:

  • Size of the disk
  • Type (thin/thick)
  • Configure Flash Read Cache
  • Configure shares
  • Configure IOPS limits
  • Change the SCSI Controller type

[Screenshot: vm_settings3 (VM hard disk settings)]
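
Scripted, the common disk changes are edits to the VirtualDisk device. A minimal pyVmomi sketch, again assuming the vm object from the networking sketch and example sizes and limits, that grows the first disk and sets an IOPS limit and shares:

```python
# pyVmomi sketch: grow the first virtual disk and set its IOPS limit and
# shares. "vm" is located as in the networking sketch; values are examples.
from pyVmomi import vim

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

disk.capacityInKB = 60 * 1024 * 1024              # grow to 60 GB (no shrink)
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    limit=500,                                     # IOPS limit, -1 = unlimited
    shares=vim.SharesInfo(level='normal'))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```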

Configure Flash Read Cache reservations:

To accelerate virtual machine performance, you can configure virtual machines to use vSphere Flash Read Cache™.

We already discussed this feature back in the storage objectives:

VCAP-DCV Deploy Objective 2.1 – part 3

 

[Screenshot: vm_settings4 (Flash Read Cache configuration)]
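
The per-VMDK reservation is also exposed through the API. The sketch below is a rough illustration only, assuming the vSphere 5.5+ vFlashCacheConfigInfo property on the virtual disk and the vm object from the earlier sketches; verify the property names against your API version before relying on it.

```python
# Rough pyVmomi sketch (assumption: the vSphere 5.5+ VirtualDisk
# vFlashCacheConfigInfo property): give the first disk a 4 GB Flash Read
# Cache reservation. "vm" is located as in the earlier sketches.
from pyVmomi import vim

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
    reservationInMB=4096)                  # 4 GB read cache reservation

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```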

Modify Transparent Page Sharing and large memory page settings:

Use case: when several VMs are running instances of the same guest operating system, have the same applications or components loaded, or contain common data, the ESXi host can use the transparent page sharing (TPS) technique to securely eliminate redundant copies of memory pages.

 

In the host advanced settings you can use Mem.ShareScanTime and Mem.ShareScanGHz to control the rate at which the system scans memory to identify opportunities for sharing. Large memory page behavior is controlled from the same list of advanced settings (for example, Mem.AllocGuestLargePage determines whether guest memory is backed by large pages).

[Screenshot: vm_settings5 (Mem.ShareScanTime / Mem.ShareScanGHz host advanced settings)]
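
Both values are host-level advanced settings, so they can also be read (or changed) per host through the API. A minimal pyVmomi sketch that prints the current scan settings from one host, assuming the si connection from the networking sketch and an example host name:

```python
# pyVmomi sketch: read the TPS scan-rate settings from a host's advanced
# options. "si" is the connected ServiceInstance from the networking sketch;
# the host name is an example.
host = si.content.searchIndex.FindByDnsName(dnsName='esx0.lab.local',
                                            vmSearch=False)
opt_mgr = host.configManager.advancedOption

for key in ('Mem.ShareScanTime', 'Mem.ShareScanGHz'):
    for opt in opt_mgr.QueryOptions(name=key):
        print(opt.key, '=', opt.value)
```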

You can also disable sharing for individual virtual machines by setting the sched.mem.pshare.enable option to FALSE in the VM's configuration parameters.

[Screenshot: vm_settings6 (sched.mem.pshare.enable in the VM configuration parameters)]
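
Scripted, this is a single extra configuration parameter on the VM. A minimal pyVmomi sketch, again reusing the vm object from the earlier sketches:

```python
# pyVmomi sketch: disable transparent page sharing for one VM by setting the
# sched.mem.pshare.enable configuration parameter to FALSE. "vm" is located
# as in the earlier sketches; the setting takes effect at the next power-on.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key='sched.mem.pshare.enable', value='FALSE')])
task = vm.ReconfigVM_Task(spec=spec)
```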

To determine the effectiveness of memory sharing for a given workload, run the workload and use resxtop or esxtop to observe the actual savings. The information is in the PSHARE field of the memory page in interactive mode.

Optimize a Virtual Machine for latency sensitive workloads:

You can adjust the latency sensitivity of a virtual machine to optimize the scheduling delay for latency-sensitive applications. Depending on the application, you can choose from the following options:

  • Low
  • Normal
  • Medium
  • High

The High setting is intended for extremely latency-sensitive applications; when it is selected, all of the functionality and tuning that this feature provides is applied.

[Screenshot: vm_settings7 (VM latency sensitivity setting)]
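
The drop-down maps to the latencySensitivity property of the VM configuration. A minimal pyVmomi sketch that sets it to High for the vm object from the earlier sketches; because the High setting expects the VM's memory to be fully reserved, the example also locks the full memory reservation.

```python
# pyVmomi sketch: set latency sensitivity to High and reserve all guest
# memory, which the High setting requires. "vm" is located as in the earlier
# sketches. Valid levels are low, normal, medium and high.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(
    latencySensitivity=vim.LatencySensitivity(level='high'),
    memoryReservationLockedToMax=True)   # "Reserve all guest memory" checkbox
task = vm.ReconfigVM_Task(spec=spec)
```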

Troubleshoot Virtual Machine performance issues based on application workload:

There is a good VMware KB article that we can use for this objective:

Troubleshooting ESX/ESXi virtual machine performance issues (2001003):  https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2001003

 

Thanks for reading

Mordi.

 
