VCP6-DCV Blueprint Section 3: Configure and Administer Advanced vSphere 6.x Storage – Objective 3.1 – Part 2


In this post we will continue implementing storage in our lab and cover the blueprint objectives.

I will not be able to demonstrate the following subjects in my home lab, since I use VMware Workstation and don't have the actual hardware, but we can get more information from the VMware storage documentation:

  • Configure/Edit hardware/dependent hardware initiators
  • Configure FC/iSCSI/FCoE LUNs as ESXi boot devices

As for:

  • Create an NFS share for use with vSphere

I am not sure what VMware's requirements are here: do you configure the NFS share on a specific server or a NAS, or are they referring to vSAN? I will revisit this subject in a later post.

If you are reading this and know the requirements for this subject, please comment. Thanks!

Determine use cases for Fibre Channel zoning:

The way I look at zoning is the same way I look at VLANs, just on a different protocol: it basically provides access control.

Use Cases:

  • Reduces the number of targets and LUNs presented to a host.
  • Controls and isolates paths in a fabric.
  • Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying VMFS data.
  • Can be used to separate different environments, for example, a test from a production environment.

Configure iSCSI port binding:

To configure iSCSI port binding, we will have to create new VMkernel adapters. Only VMkernel adapters that are compatible with the iSCSI port binding requirements and have available physical network adapters are listed.

To do that, we first need to configure our storage server (Windows 2012 R2) with MPIO, and then change some networking in our lab to meet the requirements. After all these steps are done, we will enable port binding on all our ESXi hosts.

So let's get started.

Adding the MPIO feature on our Windows 2012 R2 storage server.


After you add the feature, go to Server Manager >> File and Storage Services >> iSCSI, click on Tools, and click on MPIO.


Click on the Discover Multi-Paths tab and check the Add support for iSCSI devices box. The system will ask for a restart; click OK to restart.
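The same two steps can also be done from an elevated PowerShell prompt. This is just a sketch using the built-in ServerManager and MPIO module cmdlets on Windows 2012 R2:

```powershell
# Install the Multipath I/O feature (equivalent of Add Roles and Features)
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI devices for MPIO (equivalent of checking the
# "Add support for iSCSI devices" box on the Discover Multi-Paths tab).
# Like the GUI path, this change requires a reboot.
Enable-MSDSMAutomaticClaim -BusType iSCSI

Restart-Computer
```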


Now let's verify.
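For those who prefer the command line, the same verification can be sketched in PowerShell (the mpclaim utility ships with the MPIO feature):

```powershell
# Confirm the MPIO feature is installed
Get-WindowsFeature -Name Multipath-IO

# Confirm iSCSI devices are set to be claimed by the Microsoft DSM
Get-MSDSMAutomaticClaimSettings

# Show current MPIO disks and their load-balancing policy
mpclaim -s -d
```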


Let's go back to our vCenter and configure the network to support port binding. As you can see from our previous post, we already have two storage port groups (I changed the names from iSCSI and NFS to Port A/B). Now we will need to assign each DPortGroup to a specific uplink. The picture below shows the current settings.


To change the current settings, click on the first storage DPortGroup (Storage-PortB), and under Teaming and failover move Uplink 2 to Unused uplinks. Click OK.


Click on the second storage DPortGroup (Storage-PortA), and under Teaming and failover move Uplink 1 to Unused uplinks. Click OK.


Now you can see that each port group is associated with a different vmnic.
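The same teaming changes can be sketched in PowerCLI. The port group and uplink names (Storage-PortA/B, Uplink 1/2) match this lab and will differ in other environments, and the vCenter name is hypothetical:

```powershell
Connect-VIServer -Server vcenter.lab.local   # hypothetical vCenter name

# Storage-PortB: keep Uplink 1 active, move Uplink 2 to unused
Get-VDPortgroup -Name 'Storage-PortB' |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'Uplink 1' -UnusedUplinkPort 'Uplink 2'

# Storage-PortA: keep Uplink 2 active, move Uplink 1 to unused
Get-VDPortgroup -Name 'Storage-PortA' |
    Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort 'Uplink 2' -UnusedUplinkPort 'Uplink 1'
```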


With the networking part out of the way, let's go back to the iSCSI software HBA, click on Network Port Binding, and click Add. Choose both port groups and click OK.


You will get a warning message that you need to rescan the adapter; click the rescan icon and the warning will disappear. Repeat this process on all ESXi hosts.
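Since the binding and rescan have to be repeated on every host, a PowerCLI loop is handy. This is a sketch: the software iSCSI adapter name (vmhba33) and the vmk names are assumptions for this lab, so check yours first with `esxcli iscsi adapter list` on each host:

```powershell
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Bind each iSCSI VMkernel port to the software iSCSI adapter
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba33'; nic = 'vmk1'})
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = 'vmhba33'; nic = 'vmk2'})

    # Rescan the HBAs so the warning about new devices clears
    Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
}
```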


Thanks for reading




