Windows Failover Cluster VM Snapshot Issue

I configured my first WFC servers a few weeks back, having previously been at an all-Veritas Cluster Server shop. Nothing particularly special about them; in fact, two of the clusters are just two-node clusters with an IP resource acting as a VIP.

We came to configuring backups this week, and the day after the backup had run on one of the cluster nodes, I noticed that the resource had failed over to the second node in the cluster.

Digging into the event log showed a large number of NTFS warnings (event IDs 50, 98, 140), as well as errors for FailoverClustering (event IDs 1069, 1177, 1564) and Service Control Manager (event IDs 7024, 7031, 7036).

[Screenshot: Failover Clustering and NTFS errors in the event log]
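If you want to pull the same events out with PowerShell rather than scrolling through Event Viewer, a quick filter along these lines should work (the event IDs are just the ones listed above):

Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 50,98,140,1069,1177,1564,7024,7031,7036 } -MaxEvents 100 |
    Select-Object TimeCreated, ProviderName, Id, Message | Format-Table -AutoSize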

A bit of digging into KB articles such as KB1037959 reveals that snapshotting is not supported with WFC.

However, the issue seems to be caused by quiescing the VM and capturing the memory state with the snapshot. Just snapshotting the disk state does not appear to cause any issues with NTFS or Clustering in our testing, but obviously this is just a crash-consistent backup.
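For reference, taking a disk-only snapshot (no memory state, no quiescing) from PowerCLI looks something like this; the VM name is just a placeholder:

# VM name is a placeholder - this takes a crash-consistent, disk-only snapshot
Get-VM -Name "sqlclusternode01" | New-Snapshot -Name "pre-backup" -Memory:$false -Quiesce:$false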

Testing network connectivity

One of the difficulties of working in a tightly controlled datacenter environment is establishing whether something isn’t working because of firewall rules. With most things, you can just test TCP connections using the telnet client, which is a nice simple command line utility that I generally include in Windows installs purely for that purpose.

With UDP it’s a little more difficult, and when trying to confirm that firewalls were blocking UDP/1434 between MS SQL Server installations in two sites, I’ve ended up with the following.

  • Wireshark installed and running on both servers
  • PowerShell used with the following Test-Port function (a minimal sketch is shown below)
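The full Test-Port function isn’t reproduced here, but a cut-down sketch of a UDP check along the same lines might look roughly like this. The parameter names match the usage shown below; the real function does rather more:

function Test-Port {
    # Minimal sketch only - not the full Test-Port function, just enough to show the UDP send
    param(
        [string]$Computer,
        [int]$Port,
        [switch]$Udp,
        [int]$UdpTimeout = 1000
    )
    if ($Udp) {
        $client = New-Object System.Net.Sockets.UdpClient
        $client.Client.ReceiveTimeout = $UdpTimeout
        $client.Connect($Computer, $Port)
        $bytes = [Text.Encoding]::ASCII.GetBytes("ping")
        [void]$client.Send($bytes, $bytes.Length)
        # UDP is connectionless, so the send itself almost always "succeeds";
        # we try to read a reply, but a timeout looks identical to a silent firewall drop
        $open = $true
        try {
            $endpoint = New-Object System.Net.IPEndPoint([System.Net.IPAddress]::Any, 0)
            [void]$client.Receive([ref]$endpoint)
        } catch [System.Net.Sockets.SocketException] { }
        $client.Close()
    }
    else {
        $client = New-Object System.Net.Sockets.TcpClient
        $open = $client.ConnectAsync($Computer, $Port).Wait(3000) -and $client.Connected
        $client.Close()
    }
    [PSCustomObject]@{ Computer = $Computer; Port = $Port; Open = $open }
}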

With Wireshark running and a capture filter for port 1434, I have then been running test-port -comp destserver -port 1434 -udp -udptimeout 10000 and checking both Wireshark captures.

While test-port reports success (UDP is connectionless after all, so a send is generally going to be accepted), Wireshark tells a different story: packets leaving and not arriving. One for the firewall guys to resolve.

I also discovered while looking into this that on Linux there’s a way of testing both TCP and UDP connections from the command line using special files:

/dev/tcp/host/port
If host is a valid hostname or Internet address, and port is an integer port number
or service name, bash attempts to open a TCP connection to the corresponding socket.

/dev/udp/host/port
If host is a valid hostname or Internet address, and port is an integer port number
or service name, bash attempts to open a UDP connection to the corresponding socket.

For example:

rich@www:~$ cat < /dev/tcp/localhost/22
SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1
^C

See also my post here on testing network connectivity from ESXi.

NSX LoadBalancer – character “/” is not permitted in server name

[Screenshot: NSX error "character '/' is not permitted in server name"]

This was an odd error that a colleague brought to me while testing automation around the configuration of an NSX Edge.

He had created the Edge successfully and configured the Load Balancer, but when he tried to enable it, it failed with an error. When he tried enabling it through the Web Client, the above error was displayed, and the change was automatically reverted.

After a lot of digging, I discovered that the configuration for the Load Balancer had a Pool where the “IP Address / VC Container” object was a Service Group, and one of the members of that Service Group was an IPSet for the CIDR block that NSX was trying to include in the server name.

I’m not sure whether that is even a supported configuration, but I changed it to point to a Service Group that included the members of the target web farm, and the Load Balancer could then be configured successfully.

Github Desktop from behind a corporate proxy server

After having just helped a colleague get through the tortuous path of configuring Github Desktop to work through a proxy, I thought it might be worth blogging it all.

Different parts of Github Desktop require the proxy information to be provided in different ways, and without all 3 pieces of configuration, you will find that some things work, but not others.

  1. Internet Explorer proxy setting
    This *has* to be set to a specific proxy server, not an autoconfig script.
  2. .gitconfig
    This is found in your user home directory (usually C:\Users\<Username>) and requires the following lines:
    [http]
    proxy = http://<proxy-address>:<port>
    [https]
    proxy = http://<proxy-address>:<port>
  3. HTTPS_PROXY/HTTP_PROXY environment variable
    You can set this in your local environment, or in the system environment settings, as long as it’s visible to the Github Desktop processes.
    eg.
    set HTTPS_PROXY=http://<proxy-address>:<port>
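For items 2 and 3, if you’d rather script them than edit files by hand, the PowerShell equivalent is roughly the following (the proxy address is obviously a placeholder):

# Placeholder proxy address - substitute your own
$proxy = "http://proxy.example.com:8080"

# Item 2: write the [http]/[https] proxy entries into the user's .gitconfig
git config --global http.proxy $proxy
git config --global https.proxy $proxy

# Item 3: set the environment variables for the current user so Github Desktop can see them
[Environment]::SetEnvironmentVariable("HTTP_PROXY", $proxy, "User")
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", $proxy, "User")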

If a userid/password is required, it’s recommended that you run something like CNTLM to do the authentication, rather than adding the plaintext credentials to the proxy string.

Once you’ve configured all that, if you’re using Enterprise Github, you will probably need to use a Personal Access Token, rather than your password, to authenticate Github Desktop. This can be created by logging in with a browser and going to Settings / Personal Access Tokens.

I hope that helps someone out, but if not, I’m sure I’ll be using it as a reminder when I have to change it all between using it at Home and at Work…

HP Insight Remote Support WBEM problem and resolution

So this post is a little off topic. I recently had to migrate an HP Business Critical “dial home” service from an HP SIM based install, to their new HP IRS 7.x. Mostly, this was because the HP SIM version was going obsolete at the beginning of next year.

The install of IRS itself is very simple, especially compared with the HP SIM based version which took a number of days to get set up correctly. However, getting it working and monitoring the servers wasn’t quite as straightforward as expected.

Issue #1 Windows Servers

The first problem I had was getting it to monitor the Windows servers via SNMP. Tracing through, there were a number of items which needed configuring:

  • IRS Server – SNMP Service not running. This is the default setting on our standard build. Ok set to Automatic and start.
  • IRS Server – SNMP Service config. Add the chosen community string to be accepted from the target servers
  • Target Servers – SNMP Service config. Add the chosen community string to be accepted from the IRS Server.
  • Target Servers – SNMP Service config. Add the IRS server (and community string) as a trap destination
  • Target Servers – HP System Management Homepage. Change to use SNMP instead of WBEM as its data source

Once all that was in place, a test trap populated each server into the IRS console, and all was good. And yes, I know this is pretty obvious, and most of these I’d already done, but I’ve listed them all for completeness…
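If you’d rather script the target-server side of this than click through the SNMP service properties, something roughly like the following should cover the community string and trap destination (the community string and IRS server name are placeholders):

# Placeholders - substitute your own community string and IRS server name
$community = "public"
$irsServer = "irs-server.domain.local"
$snmpParams = "HKLM:\SYSTEM\CurrentControlSet\Services\SNMP\Parameters"

# Make sure the SNMP service starts automatically and is running
Set-Service -Name SNMP -StartupType Automatic
Start-Service -Name SNMP

# Accept the community string (4 = READ ONLY rights)
New-ItemProperty -Path "$snmpParams\ValidCommunities" -Name $community -PropertyType DWord -Value 4 -Force

# Add the IRS server as a trap destination for that community
$trapKey = "$snmpParams\TrapConfiguration\$community"
if (-not (Test-Path $trapKey)) { New-Item -Path $trapKey -Force | Out-Null }
New-ItemProperty -Path $trapKey -Name "1" -PropertyType String -Value $irsServer -Force

# Restart so the service picks up the registry changes
Restart-Service -Name SNMP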

Issue #2 HP-UX Servers

On the HP-UX servers, SNMP wasn’t used, but a WBEM user was already created for the HP SIM based service, and this was going to be reused.

The servers added to the IRS console without any issue, however they would not register for WBEM alerts. After a lot of investigation between me and the Unix guys (who fortunately now sit very close to me after an office re-org!) we were still at a loss, so I had to raise a support call with HP.

The lady from HP was quickly on the case, and the cause was, as they say, a doozy. When you install IRS, it apparently sets the WBEM registration URL to… localhost. D’Oh!

The necessary commands to change this, assuming an IRS server IP of 172.16.0.1, are (for WBEM and WMI respectively):

rsadmin config -set wbem.subscription.url=https://172.16.0.1:7905/wbem

This should respond:
GLOBAL wbem.subscription.url was https://localhost:7905/wbem => https://172.16.0.1:7905/wbem

and..

rsadmin config -set wmi.subscription.url=https://172.16.0.1:7905/wmi

This should respond:
GLOBAL wmi.subscription.url was https://localhost:7905/wmi => https://172.16.0.1:7905/wmi

This cured the problem and I was able to create all the necessary WBEM subscriptions.

Reconfiguring VSAN storage on Dell PERC H710P Mini Array Controller

I recently had to reorganise the storage on one of our VSAN clusters. The hosts have H710P array controllers, which don’t have pass-thru capability, so each disk has to be created as a RAID0 Virtual Disk on the array controller.

In addition, the 2 SSD drives had been placed into a single RAID0 array, which needed breaking apart, to enable the use of 2 VSAN Disk Groups (giving 2 separate failure domains instead of 1 great big one!)

On top of this, with only 3 hosts in the farm at this point in time, there was no option to fully evacuate the data from each host, so I had to treat each server as a “failure” and allow VSAN to create new mirror copies after the reconfiguration of each host.
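Because of that, each host went into maintenance mode with the “Ensure accessibility” option. For reference, the PowerCLI equivalent of that (used in step 1 below) is something like this, with the host name being a placeholder:

# Host name is a placeholder - "EnsureAccessibility" matches the "Ensure accessibility" option in the Web Client
Get-VMHost "hostname.domain.name" | Set-VMHost -State Maintenance -VsanDataMigrationMode EnsureAccessibility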

Here are the steps I went through:

Original – 2x SSD in RAID0, 4 separate RAID0 HDD drives – in one disk group

New – 2 separate RAID0 SSD, 10 separate RAID0 HDD drives – equally divided between 2 disk groups

Steps

  1. Place host in maintenance mode, with “Ensure accessibility” option.

    (can only choose “Full data migration” if there are more than 3 hosts in the cluster and sufficient storage)

  2. To complete entering maintenance mode, it will be necessary to power down the NSX Controller running on this host.
  3. Attach remote server console (iDRAC) and reboot server
  4. Enter the BIOS (F2 at server boot)
  5. Select “Device Configuration”
  6. Select “Integrated RAID Controller 1: Dell PERC < PERC H710P Mini>”
  7. Delete the old SSD Virtual Disk:
    1. Select “Select Virtual Disk Operations”
    2. Choose the SSD disk from the “Select Virtual Disk” dropdown
    3. Select “Delete Virtual Disk”
    4. Tick the checkbox to Confirm and Select Yes
    5. Click Back
  8. Select “Create Virtual Disk”
  9. For each SSD to add as a VSAN SSD disk perform the following:
    1. Leave RAID Level at RAID0
    2. Select “Select Physical Disks”
    3. Select Media Type “SSD”
    4. Select the appropriate disk from the list
    5. Select “Apply Changes”
    6. Select “OK”
    7. Enter a “Virtual Disk Name” of “VSAN1_SSD1” or “VSAN2_SSD1”
    8. Leave all other settings at default and choose “Create Virtual Disk”
    9. Tick the checkbox to Confirm and Select Yes
    10. Select “OK”
    11. Repeat for the other SSD
  10. For each HDD to add as a VSAN disk perform the following:
    1. Select “Create Virtual Disk”
    2. Leave RAID Level at RAID0
    3. Leave Media Type at “HDD”
    4. Select the appropriate disk from the list
    5. Select “Apply Changes”
    6. Select “OK”
    7. Enter a “Virtual Disk Name” of “VSAN_HDD”
    8. Leave all other settings at default and choose “Create Virtual Disk”
    9. Tick the checkbox to Confirm and Select Yes
    10. Select “OK”
    11. Repeat for the other HDD drives
  11. Select “Back”, “Back”, “Finish”, “Finish”, “Finish” to leave the BIOS
  12. Allow the host to boot back up
  13. Allow the host to reconnect into vCenter
  14. Select the Cluster the host is in, and choose the Manage tab and Virtual SAN “Disk Management” subheading
  15. Select the disk group showing “Unhealthy” and click the “Remove Disk Group” icon.
  16. Select “Yes” to remove the disk group
  17. Launch PowerCLI and use the following script to change the disk type of the SSDs to SSD:
    $server = "hostname.domain.name"
    Connect-VIServer -Server $server -user root -password *******
    $esxcli = Get-EsxCli -VMHost $server
    # Find the SSDs by capacity (between 180 and 200 GB in this environment) and tag them with the enable_ssd option
    $localDisk = Get-ScsiLun -VmHost $server | Where-Object {$_.CapacityGB -lt 200 -and $_.CapacityGB -gt 180} | ForEach-Object {
        $canName = $_.CanonicalName
        $satp = ($esxcli.storage.nmp.device.list() | Where-Object {$_.Device -eq $canName}).StorageArrayType
        $esxcli.storage.nmp.satp.rule.add($null,$null,$null,$canName,$null,$null,$null,"enable_ssd",$null,$null,$satp,$null,$null,$null)
        $esxcli.storage.core.claiming.reclaim($canName)
    }
    $esxcli.storage.core.device.list() | Select-Object Device, Size, IsSSD
    Disconnect-VIServer -Confirm:$false

    (I found this on a forum post which I now can’t locate to give the proper attribution, sorry)

  18. Return to the Web Client and navigate to the host, select the “Manage” tab and the “Storage” and “Storage Devices” subsections. Note the “naa id” of the disks marked as SSD.
    These need the partition tables clearing, so they can be reused by VSAN
  19. Clearing the partition table:
    1. SSH to the host, and login as root
    2. cd /vmfs/devices/disks
    3. Use “ls <id>” to ensure the disk is there
    4. Issue the command “partedUtil mklabel /vmfs/devices/disks/<id> msdos” to clear the old and incorrect GPT table
    5. Repeat for the other SSD.
  20. Return to the vCenter Web Client and select the Cluster the host is in, and choose the Manage tab and Virtual SAN “Disk Management” subheading
  21. Select the host and select the “Create a New Disk Group” icon
  22. Select an SSD and 5 HDD drives and click “OK” (if the SSDs aren’t displayed, you may need to do a storage rescan)
  23. Repeat to create a second disk group
  24. Ensure both disk groups are created successfully
  25. Return to the Hosts and Clusters view
  26. Take the host out of “Maintenance Mode”
  27. Select the cluster, and navigate to the “Monitor” tab, and the “Virtual SAN” and “Virtual Disks” subsections.
  28. Monitor until all entries in “Physical Disk Placement” are showing “Active” for all VM disk components. This will not start until the timer (configurable in Advanced Setting “VSAN.ClomRepairDelay”, default 60 minutes) has expired.
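As an aside, that repair delay can be checked (or tweaked) from PowerCLI rather than the Web Client. Something along these lines should work, where the cluster and host names are placeholders and the 90-minute value is only an example:

# Check the current repair delay on every host in the cluster (cluster name is a placeholder)
Get-Cluster "VSAN-Cluster" | Get-VMHost |
    ForEach-Object { Get-AdvancedSetting -Entity $_ -Name "VSAN.ClomRepairDelay" }

# Example only: raise the delay to 90 minutes on a single host
Get-AdvancedSetting -Entity (Get-VMHost "hostname.domain.name") -Name "VSAN.ClomRepairDelay" |
    Set-AdvancedSetting -Value 90 -Confirm:$false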

Removing unwanted VMware Tools modules

I had a fault raised to me a few weeks back for a Windows VM that was flagging a warning in its event log:

[Screenshot: vnetfilter warning in the Windows event log]

I set up a test VM using the same base image, and found that it also had the same issue.

Some digging around in the VMware KB turned up this article. We don’t use vShield on this particular environment, and a little more investigation showed the HGFS driver also loaded. Basically, the base VM template had a “Full” VMware Tools install instead of the normal “Typical” install.

I figured there should be a way of removing the unwanted modules, and this page seemed to imply it was possible. We don’t like to do anything interactively, so I moved straight to the command line.

start /wait setup.exe /S /v" /qn REBOOT=R ADDLOCAL=ALL REMOVE=VShield,Hgfs"

..looked like it should do the trick.

A quick trial run, though, showed it left the VShield and Hgfs modules installed.

My next attempt, deleting the vShield and vmHgfs “services” and running the same command line, also fundamentally failed. This time it actually reinstalled the drivers when I let it do an automatic tools upgrade to match the host.

My next approach was to perform an uninstall of the VMtools, so that I could do a clean install without the unnecessary modules. This of course failed because the VM was running on VMXNET3, and removing the VMtools removed the drivers and broke the link to the automation server!

The final solution I ended up with was the following set of scripted steps (a rough sketch is shown after the list):

  1. Copy the VMware Tools install files to the target VM
  2. Copy the VMXNET3 drivers to the target VM
  3. Uninstall VMware Tools (with a PowerShell script)
  4. Install the VMXNET3 drivers with the “pnputil” utility
  5. Reboot the VM
  6. Reinstall VMware Tools without the VShield and Hgfs services (using the command line shown above)
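None of the paths or names below are from the actual script; they are placeholders to sketch how steps 3 to 6 might hang together, with the reinstall at the end being the same setup.exe command shown earlier:

# Rough sketch only - paths and the driver INF name are placeholders, not the real automation
# Step 3: find the VMware Tools entry in the uninstall registry key and remove it silently
$tools = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*" |
    Where-Object { $_.DisplayName -eq "VMware Tools" }
Start-Process msiexec.exe -ArgumentList "/x $($tools.PSChildName) /qn /norestart" -Wait

# Step 4: pre-stage the VMXNET3 driver copied over earlier, so networking survives the reboot
pnputil -i -a "C:\Temp\vmxnet3\vmxnet3ndis6.inf"

# Step 5: reboot, then (step 6) rerun the "start /wait setup.exe ... REMOVE=VShield,Hgfs" line from above
Restart-Computer -Force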

I hope this helps anyone in the same predicament. If anyone has found a way of automating just the removal of the vShield and HGFS drivers, please let me know!