We are now in the second section of the blueprint. This section covers storage (my favorite), and in the following posts we are going to discuss how to deploy and manage a vSphere 6.x storage infrastructure.
We are going to discuss the following objectives from the blueprint:
- Determine use cases for Raw Device Mapping
- Apply storage presentation characteristics according to a deployment plan:
- VMFS re-signaturing
Using VMware Workstation:
- Microsoft Windows Server 2012 R2 for services (DNS, DHCP, etc.)
- Installed ESXi host (esx0)
- Installed VCSA
- Storage device that can support snapshots/cloning
- vSphere 6 Storage Guide
Determine use cases for Raw Device Mapping:
An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM allows a virtual machine to directly access and use the storage device. The RDM contains
metadata for managing and redirecting disk access to the physical device.
There are two modes of RDM: physical and virtual.
Physical Mode: Allows the guest operating system to access the hardware directly. Physical compatibility is useful if you are using SAN-aware applications on the virtual machine. However, a virtual machine with a physical
compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
Virtual Mode: Allows the RDM to behave as if it were a virtual disk, so you can use such features as taking snapshots, cloning, and so on. When you clone the disk or make a template out of it, the contents of the LUN are copied into a .vmdk virtual disk file. When you migrate a virtual compatibility mode RDM, you can migrate the mapping file or copy the contents of the LUN into a virtual disk.
Common use cases for RDM:
- When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
- In any MSCS clustering scenario that spans physical hosts – virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as virtual disks on a shared VMFS.
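As a sketch of how the two modes are created from the command line, vmkfstools can generate the RDM mapping file on a VMFS datastore. The device identifier and datastore paths below are placeholders for your own environment:

```shell
# Create a virtual compatibility mode RDM (-r): the RDM behaves like a
# virtual disk, so snapshots and cloning are supported.
# The naa.* identifier and paths are hypothetical examples.
vmkfstools -r /vmfs/devices/disks/naa.60060160451d2f00 \
    /vmfs/volumes/datastore1/rdm-vm/rdm-virtual.vmdk

# Create a physical compatibility mode RDM (-z): the guest passes SCSI
# commands straight to the LUN, useful for SAN-aware applications, but
# cloning, templates, and VM snapshots of this disk are not available.
vmkfstools -z /vmfs/devices/disks/naa.60060160451d2f00 \
    /vmfs/volumes/datastore1/rdm-vm/rdm-physical.vmdk
```

The generated .vmdk is only the mapping file; the virtual machine's I/O is redirected to the raw LUN itself.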
From VMware vSphere storage doc:
"When a storage device contains a VMFS datastore copy, you can mount the datastore with the existing signature or assign a new signature.
Each VMFS datastore created in a storage disk has a unique signature, also called UUID, that is stored in the file system superblock. When the storage disk is replicated or its snapshot is taken on the storage side, the resulting disk copy is identical, byte-for-byte, with the original disk. As a result, if the original storage disk contains a VMFS datastore with UUID X, the disk copy appears to contain an identical VMFS datastore, or a VMFS datastore copy, with exactly the same UUID X."
Important info before resignaturing:
- Datastore resignaturing is irreversible
- After resignaturing, the replicated VMFS copy is no longer treated as a replica
- You can resignature a spanned datastore only if all its extents are online
- If the process is interrupted, you can resume it
Note: for this task you will need a storage device that supports snapshots or cloning.
- Create a volume and mount it as a datastore
- Take a snapshot/clone of the datastore via your storage device
- Present the snapshot/clone to your ESXi host
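After presenting the copy, the host has to rescan its storage adapters before the snapshot LUN becomes visible. A minimal sketch (adapter and device names will differ on your host):

```shell
# Rescan all HBAs so the newly presented LUN copy is detected
esxcli storage core adapter rescan --all

# Verify the device is now visible to the host
# (naa.* identifiers shown here depend on your array)
esxcli storage core device list
```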
The next steps can be performed via the GUI or the CLI (I will show both):
Via the GUI: continue with the New Datastore wizard; when you reach the mount options you will see three options. Choose "Assign a new signature".
Via the CLI: to list the snapshot volumes, run the command esxcli storage vmfs snapshot list and look for the volume that can be resignatured.
Record the volume name and run the following command: esxcli storage vmfs snapshot resignature -l 'volume_name'
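Putting the CLI steps together, the whole flow looks roughly like this (the volume label is a placeholder for whatever your snapshot list shows):

```shell
# List unresolved VMFS snapshot volumes; note the "Volume Name" column
esxcli storage vmfs snapshot list

# Resignature the copy by its volume label ("my_datastore" is a placeholder)
esxcli storage vmfs snapshot resignature -l "my_datastore"

# The resignatured copy mounts under a new "snap-<hex>-<original_label>"
# style name; confirm it is mounted with:
esxcli storage filesystem list
```

If instead you want to keep the existing signature (for example, the original datastore is offline), the wizard's "Keep existing signature" option, or esxcli storage vmfs snapshot mount, mounts the copy without resignaturing.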
Thanks for reading