in blueprint - 11 Jan, 2016
by mordi
VCP6-DCV blueprint section 3: Configure and Administer Advanced vSphere 6.x Storage - Objective 3.4 – Part 1

In this post we will continue covering the storage objectives from the blueprint (almost done :-)). Again, I will NOT follow the order of the blueprint objectives, but I will cover all of them. Some of the tasks we already did in previous posts; see my posts regarding iSCSI and NFS datastores.

The following are the blueprint objectives:

  • Describe VAAI primitives for block devices and NAS
  • Differentiate VMware file system technologies
  • Upgrade VMFS3 to VMFS5
  • Compare functionality of newly created vs. upgraded VMFS5 datastores
  • Differentiate Physical Mode RDMs and Virtual Mode RDMs
  • Create a Virtual/Physical Mode RDM
  • Differentiate NFS 3.x and 4.1 capabilities
  • Compare and contrast VMFS and NFS datastore properties
  • Configure Bus Sharing
  • Configure Multi-writer locking
  • Connect an NFS 4.1 datastore using Kerberos
  • Create/Rename/Delete/Unmount VMFS datastores
  • Mount/Unmount an NFS datastore
  • Extend/Expand VMFS datastores
  • Place a VMFS datastore in Maintenance Mode
  • Select the Preferred Path/Disable a Path to a VMFS datastore
  • Enable/Disable vStorage API for Array Integration (VAAI)
  • Given a scenario, determine a proper use case for multiple VMFS/NFS datastores

VMware vSphere Storage APIs – Array Integration (VAAI):

Also referred to as hardware acceleration or hardware offload APIs, VAAI is a set of APIs that enable communication between ESXi hosts and storage devices. The APIs define a set of "storage primitives" that enable the ESXi host to offload certain storage operations to the array, which reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as storage cloning, zeroing, and so on.

VAAI Block Primitives:

The following operations rely on VMFS metadata locking and use the ATS (Atomic Test and Set) primitive:

  • Acquire on-disk locks
  • Upgrade an optimistic lock to an exclusive/physical lock.
  • Unlock a read-only/multiwriter lock
  • Acquire a heartbeat
  • Clear a heartbeat
  • Replay a heartbeat
  • Reclaim a heartbeat
  • Acquire on-disk lock with dead owner

The other block primitives are XCOPY (Extended Copy), which offloads copy operations to the array, and WRITE_SAME (Zero), which offloads the zeroing of disk blocks.

VAAI NAS Primitives:

  • Full File Clone
  • Fast File Clone/Native Snapshot Support
  • Extended Statistics
  • Reserve Space


Enable/Disable vStorage API for Array Integration (VAAI):

VAAI is enabled by default; you can only turn it OFF.

Before we turn it off, I would like to show where to check whether a storage device supports VAAI (shown as Hardware Acceleration). As you can see, none of the storage devices in my lab are supported.
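Besides the web client, hardware acceleration support can also be checked per device from the CLI. This is a quick sketch using the standard esxcli namespace (run on the ESXi host itself):

```shell
# Show the VAAI primitive support status (ATS, Clone, Zero, Delete)
# for every storage device attached to this host
esxcli storage core device vaai status get
```

Devices that do not support a primitive will report it as "unsupported" in the output.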


To turn OFF VAAI we need to set some advanced values to 0 in the CLI. You will need to do this for all three primitives: ATS (locking), XCOPY, and WRITE_SAME.

First let's list the setting and then change its value:

# esxcli system settings advanced list --option /VMFS3/HardwareAcceleratedLocking
# esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking

and then list it again to verify:

# esxcli system settings advanced list --option /VMFS3/HardwareAcceleratedLocking



You will need to continue and do the same for XCOPY and WRITE_SAME; here are the commands:


# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
# esxcli system settings advanced list --option /DataMover/HardwareAcceleratedMove   (verify)

# esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
# esxcli system settings advanced list --option /DataMover/HardwareAcceleratedInit   (verify)
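If you later want to turn VAAI back ON, the same commands apply with the value 1 (the default). A small sketch that re-enables all three primitives in one loop:

```shell
# Re-enable all three VAAI block primitives (1 is the default value)
for opt in /VMFS3/HardwareAcceleratedLocking \
           /DataMover/HardwareAcceleratedMove \
           /DataMover/HardwareAcceleratedInit; do
  esxcli system settings advanced set --int-value 1 --option "$opt"
done
```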



VMFS is VMware's proprietary file system that works on block devices. The latest version of VMFS is VMFS5; vSphere 6.x hosts can create only VMFS5 datastores (older VMFS3 datastores can still be mounted, but not created).

Characteristics of VMFS5 Datastores:

  • Greater than 2TB storage devices for each VMFS5 extent.
  • Support of virtual machines with large capacity virtual disks, or disks greater than 2TB.
  • Increased resource limits such as file descriptors.
  • Standard 1MB file system block size with support of 2TB virtual disks.
  • Greater than 2TB disk size for RDMs.
  • Support of small files of 1KB.
  • Ability to open any file located on a VMFS5 datastore in a shared mode by a maximum of 32 hosts.
  • Scalability improvements on storage devices that support hardware acceleration.
  • Default use of ATS-only locking mechanisms on storage devices that support ATS.
  • Ability to reclaim physical storage space on thin provisioned storage devices.
  • Online upgrade process that upgrades existing datastores without disrupting hosts or virtual machines that are currently running.
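You can confirm the VMFS version (and block size) of an existing datastore from the CLI with vmkfstools; the datastore name below is just a placeholder for my lab:

```shell
# Query file system attributes: VMFS version, block size, capacity, extents
vmkfstools -P /vmfs/volumes/datastore1
```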


Upgrade VMFS3 to VMFS5:

Since VMFS3 and VMFS5 have different characteristics, you will have to take that into consideration when upgrading the file system.

Here are some of the differences, taken from the VMware vSphere Storage guide.


Source: VMware vSphere Storage document


Source: VMware vSphere Storage document


In my lab I don't have VMFS3 or older, but to do the upgrade just right-click on the datastore and click Upgrade to VMFS5.
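To my knowledge the same upgrade can also be done from the CLI with vmkfstools; the volume name is a hypothetical example, and the upgrade is one-way (there is no downgrade back to VMFS3):

```shell
# In-place, non-disruptive upgrade of a VMFS3 volume to VMFS5
vmkfstools -T /vmfs/volumes/my-vmfs3-datastore
```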


Extend/Expand VMFS datastores:


If you remember, for my iSCSI datastore we created a 40GB LUN. I increased the LUN size to 55GB on the Microsoft Windows Server 2012 R2 iSCSI target, and now all I have to do is the following:

  • Right-click the datastore and increase its size
  • Choose my partition configuration
  • Rescan again on all ESXi hosts and verify the datastore size

here are the screenshots:



To extend (rather than expand) the datastore, I will need to add an additional LUN as an extent to aggregate the space; after rescanning I can increase the datastore again and verify!
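The rescan step on each host can also be done from the CLI instead of the web client; a short sketch using standard esxcli commands:

```shell
# Rescan all HBAs on this host so the grown/new LUN is detected
esxcli storage core adapter rescan --all

# Verify the VMFS datastore's new size
esxcli storage filesystem list
```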




Place a VMFS datastore in Maintenance Mode:

To put a datastore in maintenance mode, the datastore must be part of a datastore cluster with Storage DRS enabled. I don't have Storage DRS configured yet, but I will show where to enable maintenance mode.


Select the Preferred Path/Disable a Path to a VMFS datastore:

To enable/disable a path to the storage, click on the ESXi host >> Manage >> Storage >> Storage Devices >> Paths >> Enable/Disable. The preferred path can be selected when the path selection policy is set to Fixed.
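The same can be done from the CLI. The path and device identifiers below are hypothetical examples for my lab; list them first, then change the state or the preferred path:

```shell
# List all paths and their runtime names / states
esxcli storage core path list

# Disable a specific path (runtime name is an example)
esxcli storage core path set --state off --path vmhba33:C0:T0:L0

# Set the preferred path for a device that uses the Fixed PSP
# (device and path identifiers are examples)
esxcli storage nmp psp fixed deviceconfig set \
  --device naa.60003ffexample --path vmhba33:C0:T1:L0
```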



Thanks for reading




























