I recently had to reorganise the storage on one of our VSAN clusters. The hosts have H710P array controllers, which don’t have pass-thru capability, so each disk has to be created as a RAID0 Virtual Disk on the array controller.
In addition, the 2 SSD drives had been placed into a single RAID0 array, which needed breaking apart to enable the use of 2 VSAN Disk Groups (giving 2 separate failure domains instead of 1 great big one!)
On top of this, with only 3 hosts in the farm at this point in time, there was no option to fully evacuate the data from each host; instead I had to treat each server as a “failure” and allow VSAN to create new mirror copies after the reconfiguration of each host.
Here are the steps I went through:
Original – 2x SSD in RAID0, 4 separate RAID0 HDD drives – in one disk group
New – 2 separate RAID0 SSD, 10 separate RAID0 HDD drives – equally divided between 2 disk groups
Steps
- Place host in maintenance mode, with “Ensure accessibility” option.
(can only choose “Full data migration” if there are more than 3 hosts in the cluster and sufficient storage)
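For reference, the same thing can be done from PowerCLI rather than the Web Client. This is just a sketch of the equivalent command (the hostnames are placeholders, and it assumes a PowerCLI version where Set-VMHost supports the VsanDataMigrationMode parameter):

```
# Sketch only - hostnames are placeholders; assumes Set-VMHost supports
# the VsanDataMigrationMode parameter (PowerCLI 5.8+)
Connect-VIServer -Server "vcenter.domain.name"
Get-VMHost "hostname.domain.name" |
    Set-VMHost -State Maintenance -VsanDataMigrationMode EnsureAccessibility
```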
- To complete entering maintenance mode, it will be necessary to power down the NSX Controller running on this host.
- Attach remote server console (iDRAC) and reboot server
- Enter the BIOS (F2 at server boot)
- Select “Device Configuration”
- Select “Integrated RAID Controller 1: Dell PERC <PERC H710P Mini>”
- Delete the old SSD Virtual Disk:
- Select “Select Virtual Disk Operations”
- Choose the SSD disk from the “Select Virtual Disk” dropdown
- Select “Delete Virtual Disk”
- Tick the checkbox to Confirm and Select Yes
- Click Back
- Select “Create Virtual Disk”
- For each SSD to add as a VSAN SSD disk, perform the following:
- Leave RAID Level at RAID0
- Select “Select Physical Disks”
- Select Media Type “SSD”
- Select the appropriate disk from the list
- Select “Apply Changes”
- Select “OK”
- Enter a “Virtual Disk Name” of “VSAN1_SSD1” or “VSAN2_SSD1”
- Leave all other settings at default and choose “Create Virtual Disk”
- Tick the checkbox to Confirm and Select Yes
- Select “OK”
- Repeat for the other SSD
- For each HDD to add as a VSAN disk, perform the following:
- Select “Create Virtual Disk”
- Leave RAID Level at RAID0
- Leave Media Type at “HDD”
- Select the appropriate disk from the list
- Select “Apply Changes”
- Select “OK”
- Enter a “Virtual Disk Name” of “VSAN_HDD”
- Leave all other settings at default and choose “Create Virtual Disk”
- Tick the checkbox to Confirm and Select Yes
- Select “OK”
- Repeat for the other HDD drives
- Select “Back”, “Back”, “Finish” , “Finish” , “Finish” to leave the BIOS
- Allow the host to boot back up
- Allow the host to reconnect to vCenter
- Select the Cluster the host is in, and choose the Manage tab and Virtual SAN “Disk Management” subheading
- Select the disk group showing “Unhealthy” and click the “Remove Disk Group” icon.
- Select “Yes” to remove the disk group
- Launch PowerCLI and use the following script to change the disk type of the SSDs to SSD:

```
$server = "hostname.domain.name"
Connect-VIServer -Server $server -user root -password *******

$esxcli = Get-EsxCli -VMHost $server

# Find the SSDs by size (here they are the only disks between 180GB and 200GB -
# adjust the filter to match your own drives), then add an "enable_ssd" SATP
# claim rule for each one and reclaim the device so it is flagged as SSD
$localDisk = Get-ScsiLun -VmHost $server |
    where {$_.CapacityGB -lt 200 -and $_.CapacityGB -gt 180} |
    foreach {
        $canName = $_.CanonicalName
        $satp = ($esxcli.storage.nmp.device.list() |
            where {$_.Device -eq $canName}).StorageArrayType
        $esxcli.storage.nmp.satp.rule.add($null, $null, $null, $canName, $null,
            $null, $null, "enable_ssd", $null, $null, $satp, $null, $null, $null)
        $esxcli.storage.core.claiming.reclaim($canName)
    }

# Check the IsSSD flag is now set on the right devices
$esxcli.storage.core.device.list() | select Device, Size, IsSSD

Disconnect-VIServer -confirm:$false
```
(I found this on a forum post which I now can’t locate, so apologies for the missing attribution)
- Return to the Web Client and navigate to the host, select the “Manage” tab and the “Storage” and “Storage Devices” subsections. Note the “naa id” of the disks marked as SSD.
- These disks need their partition tables clearing so they can be reused by VSAN. To clear the partition table:
- SSH to the host, and login as root
- cd /vmfs/devices/disks
- Use “ls <id>” to ensure the disk is there
- Issue the command “partedUtil mklabel /vmfs/devices/disks/<id> msdos” to clear the old and incorrect GPT table
- Repeat for the other SSD.
- Return to the vCenter Web Client, select the Cluster the host is in, and choose the Manage tab and Virtual SAN “Disk Management” subheading
- Select the host and select the “Create a New Disk Group” icon
- Select an SSD and 5 HDD drives and click “OK” (if the SSDs aren’t displayed, you may need to do a storage rescan)
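The storage rescan can also be triggered from PowerCLI if you still have a session open from the earlier script (a sketch, with a placeholder hostname):

```
# Rescan all HBAs so the newly-created RAID0 virtual disks show up
Get-VMHostStorage -VMHost "hostname.domain.name" -RescanAllHba
```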
- Repeat to create a second disk group
- Ensure both disk groups are created successfully
- Return to the Hosts and Clusters view
- Take the host out of “Maintenance Mode”
- Select the cluster, and navigate to the “Monitor” tab, and the “Virtual SAN” and “Virtual Disks” subsections.
- Monitor until all entries in “Physical Disk Placement” are showing “Active” for all VM disk components. This will not start until the timer (configurable in Advanced Setting “VSAN.ClomRepairDelay”, default 60 minutes) has expired.
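If you want to check (or temporarily lower) the repair delay rather than wait out the full hour, the advanced setting can be read and changed per-host from PowerCLI. A sketch, assuming the standard Get-/Set-AdvancedSetting cmdlets; the hostname is a placeholder, and note the change generally needs the clomd service restarting on the host before it takes effect:

```
# Read the current CLOM repair delay (in minutes) on a host
Get-AdvancedSetting -Entity (Get-VMHost "hostname.domain.name") `
    -Name "VSAN.ClomRepairDelay"

# Lower it to 15 minutes (restart clomd on the host afterwards)
Get-AdvancedSetting -Entity (Get-VMHost "hostname.domain.name") `
    -Name "VSAN.ClomRepairDelay" |
    Set-AdvancedSetting -Value 15 -Confirm:$false
```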