VFRC Cache Stats with PowerCLI

I’ve recently set up VMware Flash Read Cache on a couple of ESXi servers. They were bought with the SSDs to do this, as they only had internal disk rather than an external array, but for some reason the configuration never happened.

I’ve written a script to perform the configuration, but it’s not quite ready for release. In the meantime, when monitoring the effectiveness of the cache, I’d been using esxcli to check the stats. Enabling SSH and logging on to each host was tiresome, so I whipped up a few lines of PowerCLI to do the job:

$esxcli = Get-VMHost <hostname> | Get-EsxCli
$caches = $esxcli.storage.vflash.cache.list()
foreach ($cache in $caches) {
    $stats = $esxcli.storage.vflash.cache.stats.get($cache.Name, "vfc")
    $cache | Select-Object Name,
        @{N="CacheUsage%"; E={$stats.Cacheusagerateasapercentage}},
        @{N="HitRate";     E={$stats.Read.Cachehitrateasapercentage}}
}

The output looks something like:

Name                          CacheUsage% HitRate
----                          ----------- -------
vfc-2915015888-VM1            99          8
vfc-2910392434-VM2            99          12
vfc-2910723509-VM3            99          11
vfc-2914146967-VM4            99          11
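For reference, the esxcli equivalents I’d been running over SSH were along these lines (run on the host itself; the cache name is an example from the output above):

```shell
# List the vFlash caches on this host
esxcli storage vflash cache list
# Get stats for one cache on the vfc module
esxcli storage vflash cache stats get -m vfc -c vfc-2915015888-VM1
```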

vRO 7.x SSL Cert Configuration – Update

I wrote some time ago about the updates required for vRealize Orchestrator to use signed certificates and more secure TLS/Cipher configuration.

Now that I’m planning the 7.5 upgrades, which are a prerequisite for vSphere 6.7, I thought I’d revisit the configuration of the SFCB service (runs on port 5489) to see if I could persuade it to function with a signed certificate rather than the self-signed cert that I’d had to leave it on before.

One of the complications is that as part of the startup of SFCB, it tests accessing the service, and if that fails it will restart and test a number of times before eventually shutting the service down and exiting with an error.

After a lot of trial and error, and heading down a number of dead ends I discovered the following:

* The service logs to /opt/vmware/var/log/vami/vami-sfcb.log
* /opt/vmware/share/vami/vami_sfcb_test is a quick way to run the test that the startup script uses

And finally:
* If you’re using the same cert in the server.pem as the client.pem file, you need to make sure it has both “TLS Web Server Authentication” and “TLS Web Client Authentication” capabilities. It was the latter that my certificate was lacking before, which generated the error:
Class enumeration FAILED: Socket error: [SSL: SSLV3_ALERT_UNSUPPORTED_CERTIFICATE] sslv3 alert unsupported certificate (_ssl.c:726)
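One way to confirm a certificate carries both usages before deploying it is to inspect the Extended Key Usage extension with openssl. This is a sketch against a throwaway self-signed cert (needs openssl 1.1.1+ for -addext); point -in at your real server.pem/client.pem instead:

```shell
# Generate a throwaway cert with both EKUs, purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout test-key.pem -out test-cert.pem \
  -days 1 -subj "/CN=vro-test" \
  -addext "extendedKeyUsage=serverAuth,clientAuth"
# Inspect the Extended Key Usage extension
openssl x509 -in test-cert.pem -noout -text | grep -A1 "Extended Key Usage"
```

If "TLS Web Client Authentication" is missing from the output, SFCB will fail its startup test as described above.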

Now that I’ve generated a certificate with the correct capabilities, all the SSL ports on the Orchestrator appliance respond with CA signed certificates, which is one more thing eliminated from our vulnerability scans.

vSphere 6.5U2 – Refresh CA Certs fails when host in maintenance mode

Our internal vulnerability scans have been reporting for a while that the ESXi hosts use ‘self-signed’ certs, and while addressing this I found an odd issue.

If you try ‘Refresh CA Certificates’ from vCenter while the host is in maintenance mode, it errors out.

Take the host out of maintenance mode and it succeeds.
This is the case whether you use the Flash client, the HTML5 client, or even PowerCLI.
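For reference, the PowerCLI route goes through the CertificateManager view rather than a dedicated cmdlet. A sketch, assuming a connected vCenter session; the host name is a placeholder:

```powershell
# Sketch: refresh CA certs on one host via the vCenter CertificateManager API
# ("esx01.example.com" is a placeholder)
$certMgr = Get-View (Get-View ServiceInstance).Content.CertificateManager
$vmhost  = Get-VMHost "esx01.example.com"
$certMgr.CertMgrRefreshCACertificatesAndCRLs(@($vmhost.ExtensionData.MoRef))
```

This call fails with the host in maintenance mode, just like the clients.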

The ‘Renew Certificate’ option works fine irrespective of whether the host is in maintenance mode or not.

VMCA Cross-Signed cert time errors

I’ve been looking at updating our vCenter certificates, via the Certificate Manager utility, to use certificates signed by our internal PKI. Having not done a huge amount of work with certs, it’s a little daunting, but running through the process in a nested test lab reduces the stress factor – it’s not really an issue even if it all screws up.

The process is fairly well documented here (and here for updating the vmhost certs), but I found some odd errors that threw me.

Issue 1 – Can’t install the cert for an hour after it’s been signed.

If you try to install the certificate straight away, you get an error and it rolls back the update. Looking in the log (/var/log/vmware/vmcad/certificate-manager.log) I found this error:

2018-11-21T15:26:45.424Z INFO certificate-manager Command output :-
Using config file : /var/tmp/vmware/MACHINE_SSL_CERT.cfg
Error: 70034, VMCAGetSignedCertificatePrivate() failed
Status : Failed
Error Code : 70034
Error Message : Start Time Error

Simply waiting until the certificate is more than an hour old allows the cert to be successfully installed. No idea why…
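A quick way to check how old a cert is, is its notBefore timestamp via openssl. Shown here against a throwaway self-signed cert so the command has something to inspect; point -in at the newly signed certificate file instead:

```shell
# Throwaway cert, purely so the inspection command below has input
openssl req -x509 -newkey rsa:2048 -nodes -keyout age-key.pem -out age-cert.pem \
  -days 1 -subj "/CN=age-test"
# notBefore is the issue time; install attempts within roughly an hour of it failed for me
openssl x509 -in age-cert.pem -noout -dates
```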

Issue 2 – Can’t deploy updated certs to vmhosts until the certificate is more than 24h old

If you try to update vmhost certificates straight away, the task errors out.

Again, simply waiting until the certificate is more than 24 hours old allows certificates to be deployed successfully.


ESXi UDP Source Port Pass Vulnerability

I’ve not blogged for a while; one of the main reasons was a VCAP exam that failed to launch. I was intending to use it to recertify, and I’ve spent a significant part of this year going back and forth with certification support trying to get it resolved. Anyway, after three extensions to my cert expiry, I went and did the VCP6.5DCV Delta exam, so I’m recertified until Sept 2020 (jeepers, 2020, that’s like, the future!) and can think about other stuff again…

Our security team do very regular vulnerability scans, which keeps us on our toes with deploying patches, and tweaking baseline configs. One of the on-going vuln reports we’ve had for some time is for ESXi UDP Source Port Pass Firewall, for UDP port 53.

This issue comes about because the ESXi firewall is stateless and doesn’t know whether inbound traffic is related to an existing outbound connection. An attacker could use this to probe multiple UDP ports by setting the UDP source port of a packet to 53, so that the firewall treats it as a reply to an outbound DNS lookup.

VMware support came back with a recommendation to restrict the ‘DNS Client’ firewall rule for ESXi to only allow communication with the DNS Servers, so that any other agent (such as the vulnerability scanner) would not be able to pass through traffic to or from UDP 53.
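On a single host over SSH, that change looks roughly like this (the IP address is an example; substitute your own DNS servers):

```shell
# Stop allowing all IPs on the dns ruleset
esxcli network firewall ruleset set --ruleset-id dns --allowed-all false
# Allow only the configured DNS server (192.0.2.53 is an example address)
esxcli network firewall ruleset allowedip add --ruleset-id dns --ip-address 192.0.2.53
# Verify
esxcli network firewall ruleset allowedip list --ruleset-id dns
```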

While this is achievable through the web client, it wouldn’t be practical to update a large number of hosts that way, so I decided to look at PowerCLI.

The following code fetches a list of hosts, then cycles through each one and sets the list of allowed IPs for the DNS Client firewall rule to the IPs which have been set as that host’s DNS servers.

# Get list of hosts
$vmhosts = Get-VMHost

foreach ($vmhost in $vmhosts) {
    # Connect ESXCLI
    $EsxCli = Get-EsxCli -VMHost $vmhost

    # If allowed IPs for the dns ruleset is currently 'All', disable that setting
    if ($EsxCli.network.firewall.ruleset.allowedip.list("dns").AllowedIPAddresses -eq "All") {
        $EsxCli.network.firewall.ruleset.set($false, $true, "dns")
    }

    # Add any missing DNS server entries as allowed targets
    foreach ($dnsaddr in ($vmhost | Get-VMHostNetwork).DnsAddress) {
        if ($EsxCli.network.firewall.ruleset.allowedip.list("dns").AllowedIPAddresses -contains $dnsaddr) {
            Write-Output "$dnsaddr already in the list"
        } else {
            $EsxCli.network.firewall.ruleset.allowedip.add($dnsaddr, "dns")
        }
    }
}


Clearing old Host Profile answer files

We recently had a problem where the Fault Tolerance logging service seemed to be randomly getting assigned to the vMotion vmknic, instead of its dedicated vmknic. This obviously prevented FT state sync from occurring, a fact that I discovered in a 20 minute change window at 4.30AM 😦

I found the cause of the state sync failure by reading through the vmware.log file for the affected VM, and noticing that the sync seemed to be trying to happen between source and destination IPs on different subnets. Looking at the host IP services configuration within the cluster, I found a host which was correct (fortunately the host the FT primary was on was correct too), and used that configuration for the host of the secondary VM, which enabled sync to occur.

The problem was affecting roughly 50% of the cluster, and had apparently happened a number of times earlier and been corrected. I noticed that these hosts also had remnants of a host profile answer file – just the Hostname and vMotion interface details, whereas the hosts that were still configured correctly didn’t have any answer file settings stored in vCenter.

Easy, I thought, a bit of PowerCLI will sort that, so I had a look for cmdlets for viewing or modifying answer file settings. I drew a blank pretty much straight away. There are cmdlets for host profiles, one of which allows you to include answer files as part of applying a host profile, but nothing for viewing, modifying, or removing answer files.

So to the Views we go. A bit of searching turned up this, which was helpful, and after a bit of testing I came up with:

$hostProfileManagerView = Get-View "HostProfileManager"
$blank = New-Object VMware.Vim.AnswerFileOptionsCreateSpec

foreach ($vmhost in (Get-Cluster <cluster> | Get-VMHost | Sort-Object Name)) {
    $file = $hostProfileManagerView.RetrieveAnswerFile($vmhost.ExtensionData.MoRef)
    if ($file.UserInput.Length -gt 0) {
        $hostProfileManagerView.UpdateAnswerFile($vmhost.ExtensionData.MoRef, $blank)
        $file = $hostProfileManagerView.RetrieveAnswerFile($vmhost.ExtensionData.MoRef)
        Write-Output "$($vmhost.Name) $([string]$file.UserInput)"
    }
}

This iterates through each host in the cluster and, if the host has an answer file, replaces it with a blank one.

ESXi TLS/SSL/Cipher configuration

Anyone who’s had to configure the TLS/SSL settings for their VMware infrastructure will probably have come across William Lam’s posting on the subject. This provided a much needed script for disabling the weaker protocols on ports 443 (rhttpproxy) and 5989 (sfcb), but it leaves out the HA agent on port 8182, and doesn’t alter ciphers – we are having to remove the TLS_RSA ciphers to counter TLS ROBOT warnings.

The vSphere TLS Reconfigurator utility does fix the TLS protocols for port 8182 (HA communications), but it can only be used when the ESXi version is the same minor version as the vCenter, and none of its options will amend the ciphers being used. This was a useful posting I came across for amending the cipher list.
I did attempt to use the (new to ESXi 6.5) Advanced Setting UserVars.ESXiVPsAllowedCiphers, but it appears that this isn’t actually implemented yet. Certainly rhttpproxy ignores the setting when it starts, and I have raised an SR with VMware to investigate this.

So I thought it might be useful to list the ports that tend to crop up on a vulnerability scan and what is required to fix them, in case there are elements that you may need to configure beyond what the usual utilities and scripts are capable of, such as standalone hosts.

I have only tried these on recent ESXi 6.0U3 and 6.5U1 builds.

TCP/443 – VMware HTTP Reverse Proxy and Host Daemon

Set Advanced Settings:
UserVars.ESXiVPsDisabledProtocols to “sslv3,tlsv1,tlsv1.1”
If it’s ESXi 6.0, the following two are also needed:
UserVars.ESXiRhttpproxyDisabledProtocols to “sslv3,tlsv1,tlsv1.1”
UserVars.VMAuthdDisabledProtocols to “sslv3,tlsv1,tlsv1.1”
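Pushing these settings across many hosts can be sketched in PowerCLI, assuming a connected session (the 6.0-only settings would be handled the same way):

```powershell
# Sketch: disable weak protocols on every host (test on one host first)
foreach ($vmhost in Get-VMHost) {
    Get-AdvancedSetting -Entity $vmhost -Name "UserVars.ESXiVPsDisabledProtocols" |
        Set-AdvancedSetting -Value "sslv3,tlsv1,tlsv1.1" -Confirm:$false
}
```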
For the removal of TLS_RSA ciphers, the UserVars.ESXiVPsAllowedCiphers setting does not work; instead, manually edit /etc/vmware/rhttpproxy/config.xml and add a cipherList entry:
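As a sketch only (the exact placement and cipher string depend on your build and site policy; check the structure of your existing config.xml), the entry sits in the ssl section and looks something like:

```xml
<config>
  <vmacore>
    <ssl>
      <!-- Example list only: ECDHE key exchange, no static RSA (counters ROBOT) -->
      <cipherList>ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256</cipherList>
    </ssl>
  </vmacore>
</config>
```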


Restart the rhttpproxy service (/etc/init.d/rhttpproxy restart) or reboot the host

TCP/5989 – VMware Small Footprint CIM Broker

Edit /etc/sfcb/sfcb.cfg and add the lines:
enableTLSv1: false
enableTLSv1_1: false
enableTLSv1_2: true

Restart the sfcb / CIM service (/etc/init.d/sfcbd-watchdog restart) or reboot

From what I have seen, the default is to have SSLv3/TLSv1/TLSv1.1 disabled anyway.

TCP/8080 – VMware vSAN VASA Vendor Provider

Should be fixed by the TCP/443 settings

TCP/8182 – VMware Fault Domain Manager

Set Advanced Setting on the *Cluster*:
das.config.vmacore.ssl.protocols to “tls1.2”

Go to each host and initiate “Reconfigure for vSphere HA”
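The cluster setting can be pushed with PowerCLI – a sketch, assuming a connected session and a placeholder cluster name; check for an existing value with Get-AdvancedSetting first:

```powershell
# Sketch: add the HA (das.*) advanced option at cluster level
New-AdvancedSetting -Entity (Get-Cluster "MyCluster") -Type ClusterHA `
    -Name "das.config.vmacore.ssl.protocols" -Value "tls1.2" -Confirm:$false
```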

TCP/9080 – VMware vSphere API for IO Filters

Should be fixed by the TCP/443 settings