VMCA Cross-Signed cert time errors

I’ve been looking at using vCenter’s certificate-manager utility to install certificates signed by our internal PKI. Having not done a huge amount of work with certs, it’s a little daunting, but running through the process in a nested test lab reduces the stress factor – it’s not really an issue even if it all screws up.

The process is fairly well documented here (and here for updating the vmhost certs), but I found some odd errors that threw me.

Issue 1 – Can’t install the cert for an hour after it’s been signed.

If you try to install the certificate straight away, you get an error and the update is rolled back. Looking in the log (/var/log/vmware/vmcad/certificate-manager.log) I found this error:

2018-11-21T15:26:45.424Z INFO certificate-manager Command output :-
 Using config file : /var/tmp/vmware/MACHINE_SSL_CERT.cfg
Error: 70034, VMCAGetSignedCertificatePrivate() failed
Status : Failed
Error Code : 70034
Error Message : Start Time Error

Simply waiting until the certificate is more than an hour old allows the cert to be successfully installed. No idea why…
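If you want to check how close to that hour you are, the signed certificate’s validity start is easy to read from PowerShell – a sketch, with a hypothetical file name; the clock-skew reading of the ‘Start Time Error’ is only my guess:

# Load the newly signed machine SSL certificate (file name hypothetical)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\machine_ssl.cer")
$cert.NotBefore   # guess: the installer rejects certs whose start time is too close to 'now'
(Get-Date).ToUniversalTime() - $cert.NotBefore.ToUniversalTime()   # age of the certificate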

Issue 2 – Can’t deploy updated certs to vmhosts until the certificate is more than 24h old

If you try to update vmhost certificates straight away, the task fails with an error.

Again, simply waiting until the certificate is more than 24h old allows certificates to be deployed successfully.



ESXi UDP Source Port Pass Vulnerability

I’ve not blogged for a while; one of the main reasons was a VCAP exam that failed to launch. I was intending to use it to recertify, and have spent a significant part of this year going back and forth with certification support trying to get it resolved. Anyway, after 3 extensions to my cert expiry, I went and did the VCP6.5-DCV Delta exam, so I’m recertified until Sept 2020 (jeepers, 2020, that’s like, the future!) and can think about other stuff again…

Our security team do very regular vulnerability scans, which keeps us on our toes with deploying patches and tweaking baseline configs. One of the ongoing vuln reports we’ve had for some time is ESXi UDP Source Port Pass Firewall, for UDP port 53.

This issue comes about because the ESXi firewall is stateless and doesn’t know whether inbound traffic is related to an existing outbound connection. An attacker could use this to probe multiple UDP ports by setting the UDP source port of a packet to 53, so that the firewall treats it as a reply to an outbound DNS lookup.

VMware support came back with a recommendation to restrict the ‘DNS Client’ firewall rule for ESXi to only allow communication with the DNS Servers, so that any other agent (such as the vulnerability scanner) would not be able to pass through traffic to or from UDP 53.

While this is achievable through the web client, it wouldn’t be practical to update a large number of hosts that way, so I decided to look at PowerCLI.

The following code fetches the list of hosts, then cycles through each one and sets the list of allowed IPs for the DNS Client service to the IPs which have been set as its DNS servers.

# Get list of hosts
$vmhosts = Get-VMHost

foreach ($vmhost in $vmhosts) {
    # Connect ESXCLI for this host
    $EsxCli = Get-EsxCli -VMHost $vmhost

    # If the allowed IP list for the 'dns' ruleset is currently 'All',
    # disable allow-all (positional args: allowed-all, enabled, ruleset-id)
    if ($EsxCli.network.firewall.ruleset.allowedip.list("dns").AllowedIPAddresses -eq "All") {
        $EsxCli.network.firewall.ruleset.set($false, $true, "dns")
    }

    # Add any missing DNS server entries as allowed targets
    foreach ($dnsaddr in ($vmhost | Get-VMHostNetwork).DnsAddress) {
        if ($EsxCli.network.firewall.ruleset.allowedip.list("dns").AllowedIPAddresses -contains $dnsaddr) {
            Write-Output "$dnsaddr already in the list"
        } else {
            $EsxCli.network.firewall.ruleset.allowedip.add($dnsaddr, "dns")
        }
    }
}
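To verify the result afterwards, a quick report like this works (a sketch using the newer Get-EsxCli -V2 interface, where arguments are passed by name):

foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    # Report the allowed IP list for the dns ruleset on each host
    $allowed = $esxcli.network.firewall.ruleset.allowedip.list.Invoke(@{rulesetid = "dns"})
    "{0}: {1}" -f $vmhost.Name, ($allowed.AllowedIPAddresses -join ", ")
}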


Clearing old Host Profile answer files

We recently had a problem where the Fault Tolerance logging service seemed to be randomly getting assigned to the vMotion vmknic instead of its dedicated vmknic. This obviously prevented FT state sync from occurring, a fact that I discovered in a 20 minute change window at 4.30AM 😦

I found the cause of the state sync failure by reading through the vmware.log file for the affected VM and noticing that the sync was being attempted between source and destination IPs on different subnets. Looking at the hosts’ IP services configuration within the cluster, I found a host which was configured correctly (fortunately the host the FT primary was on was correct too), and used that for the secondary VM, which enabled sync to occur.

The problem was affecting roughly 50% of the cluster, and had apparently happened a number of times before and been corrected. I noticed that the affected hosts also had remnants of a host profile answer file – just the hostname and vMotion interface details – whereas the hosts that were still configured correctly didn’t have any answer file settings stored in vCenter.

Easy, I thought, a bit of PowerCLI will sort that, so I had a look for cmdlets for viewing or modifying answer file settings. I drew a blank pretty much straight away. There are cmdlets for host profiles, one of which allows you to include answer files as part of applying a host profile, but nothing for viewing, modifying or removing answer files.

So to the Views we go. A bit of searching turned up this, which was helpful, and after a bit of testing I came up with:

$hostProfileManagerView = Get-View "HostProfileManager"
$blank = New-Object VMware.Vim.AnswerFileOptionsCreateSpec

foreach ($vmhost in (Get-Cluster <cluster> | Get-VMhost | sort Name)) {
   $file = $hostProfileManagerView.RetrieveAnswerFile($vmhost.ExtensionData.MoRef)
   if ($file.UserInput.length -gt 0) {
     # Overwrite the stored answer file with the blank spec, then re-read to confirm
     $file = $hostProfileManagerView.UpdateAnswerFile($vmhost.ExtensionData.MoRef,$blank)
     $file = $hostProfileManagerView.RetrieveAnswerFile($vmhost.ExtensionData.MoRef)
     Write-Output "$($vmhost.Name) $([string]$file.UserInput)"
   }
}

This iterates through each host in the cluster, and if the host has an answer file, replaces it with a blank one.
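A read-only variant of the same calls is handy as a dry run, to see which hosts are carrying answer file remnants before clearing anything:

$hostProfileManagerView = Get-View "HostProfileManager"
foreach ($vmhost in (Get-Cluster <cluster> | Get-VMhost | sort Name)) {
    # Retrieve only - report hosts that still have stored answer file entries
    $file = $hostProfileManagerView.RetrieveAnswerFile($vmhost.ExtensionData.MoRef)
    if ($file.UserInput.length -gt 0) {
        Write-Output "$($vmhost.Name): $($file.UserInput.length) answer file entries"
    }
}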

ESXi TLS/SSL/Cipher configuration

Anyone that’s had to configure the TLS/SSL settings for their VMware infrastructure will probably have come across William Lam’s posting on the subject. This provided a much-needed script for disabling the weaker protocols on ports 443 (rhttpproxy) and 5989 (sfcb), but it leaves out the HA agent on port 8182, and doesn’t alter ciphers – we are having to remove the TLS_RSA ciphers to counter TLS ROBOT warnings.

The vSphere TLS Reconfigurator utility does fix the TLS protocols for port 8182 (HA communications), but it can only be used when the ESXi version is the same minor version as the vCenter, and none of its options will amend the ciphers being used. This was a useful posting I came across for amending the cipher list.
I did attempt to use the (new to ESXi 6.5) Advanced Setting – UserVars.ESXiVPsAllowedCiphers – but it appears that this isn’t actually implemented yet. Certainly rhttpproxy ignores the setting when it starts, and I have raised an SR with VMware to investigate this.

So I thought it might be useful to list the ports that tend to crop up on a vulnerability scan and what is required to fix them, in case there are elements you need to configure beyond what the usual utilities and scripts are capable of, such as standalone hosts.

I have only tried these on recent ESXi 6.0U3 and 6.5U1 builds.

TCP/443 – VMware HTTP Reverse Proxy and Host Daemon

Set Advanced Settings:
UserVars.ESXiVPsDisabledProtocols to “sslv3,tlsv1,tlsv1.1”
If it’s ESXi 6.0 the following two are also needed:
UserVars.ESXiRhttpproxyDisabledProtocols to “sslv3,tlsv1,tlsv1.1”
UserVars.VMAuthdDisabledProtocols to “sslv3,tlsv1,tlsv1.1”
For the removal of TLS_RSA ciphers, UserVars.ESXiVPsAllowedCiphers would be the obvious place, but as noted above the setting does not work. Instead, manually edit /etc/vmware/rhttpproxy/config.xml and add a cipherList entry:
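Something like the following, assuming the file keeps its usual <config><vmacore><ssl> layout (the cipher string here is only an example of an ECDHE-only list – check VMware’s current guidance for your build):

<config>
  <vmacore>
    <ssl>
      <cipherList>ECDHE+AESGCM:ECDHE+AES</cipherList>
    </ssl>
  </vmacore>
</config>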


Restart rhttpproxy service or reboot host
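Pushing the UserVars settings to a whole environment is easier from PowerCLI than the web client – a minimal sketch using the standard advanced-setting cmdlets (add the extra 6.0-only settings as needed):

# Disable the weak protocols on every connected host
$value = "sslv3,tlsv1,tlsv1.1"
foreach ($vmhost in Get-VMHost) {
    Get-AdvancedSetting -Entity $vmhost -Name "UserVars.ESXiVPsDisabledProtocols" |
        Set-AdvancedSetting -Value $value -Confirm:$false
}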

TCP/5989 – VMware Small Footprint CIM Broker

Edit /etc/sfcb/sfcb.cfg and add the lines:
enableTLSv1: false
enableTLSv1_1: false
enableTLSv1_2: true

Restart sfcb / CIM service or reboot

From what I have seen, the default is to have SSLv3/TLSv1/TLSv1.1 disabled anyway.

TCP/8080 – VMware vSAN VASA Vendor Provider

Should be fixed by the TCP/443 settings

TCP/8182 – VMware Fault Domain Manager

Set the Advanced Setting on the *cluster*:
das.config.vmacore.ssl.protocols to “tls1.2”

Go to each host and initiate “Reconfigure for vSphere HA”
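For lots of clusters, the same change can be scripted – a sketch with a hypothetical cluster name; New-AdvancedSetting’s ClusterHA type targets the HA advanced options, and ReconfigureHostForDAS_Task is the API call behind “Reconfigure for vSphere HA”:

$cluster = Get-Cluster "MyCluster"   # hypothetical name
# das.config.* options live in the cluster's HA advanced settings
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name "das.config.vmacore.ssl.protocols" -Value "tls1.2" -Force -Confirm:$false
# Re-push the HA config so the FDM agent on each host picks up the change
foreach ($vmhost in ($cluster | Get-VMHost)) {
    $vmhost.ExtensionData.ReconfigureHostForDAS_Task() | Out-Null
}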

TCP/9080 – VMware vSphere API for IO Filters

Should be fixed by the TCP/443 settings


PowerCLI shortcuts

I’ve just set up some shortcuts for connecting to our various VMware environments, as I was sick of typing out the full

connect-viserver vcsa-name.dns.name

every time.

If you want this to apply for just your userid, you can create (or edit if it already exists) %UserProfile%\Documents\WindowsPowerShell\profile.ps1

And if you want it to apply for all users, you can create (or edit) %Windir%\System32\WindowsPowerShell\v1.0\profile.ps1

I created the latter, and added lines such as:

function ENV1 {connect-viserver vcsa-name-1.dns.name}
function ENV2 {connect-viserver vcsa-name-2.dns.name}

Now to connect to a vCenter, all I have to type is ENV1
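One variation on the same idea (the function and table names are mine) keeps the server list in a single lookup table, so adding an environment is a one-line change:

$viServers = @{
    ENV1 = "vcsa-name-1.dns.name"
    ENV2 = "vcsa-name-2.dns.name"
}
function Connect-Env ([string]$Name) {
    # Look the environment up in the table and connect
    Connect-VIServer $viServers[$Name]
}

Then Connect-Env ENV1 does the same job.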
Do you have any favourite PowerShell/PowerCLI shortcuts like this?

PowerCLI prompting for credentials

One of our vCenters has been prompting for credentials when running connect-viserver ever since it was first set up, rather than passing through the signed-in user’s credentials, so I decided to look into this annoyance.

This particular vCenter instance has an external PSC, and this web page states that only the PSC needs to be joined to the domain. Indeed, you can’t add the VCSA appliance to the domain through the web interface if it has an external PSC; the option simply isn’t there.

One thing that did stand out from that web page was:

If you want to enable an Active Directory user to log in to a vCenter Server instance by using the vSphere Client with SSPI, you must join the vCenter Server instance to the Active Directory domain. For information about joining a vCenter Server Appliance with an external Platform Services Controller to an Active Directory domain, see the VMware knowledge base article at http://kb.vmware.com/kb/2118543.

I then discovered, on this web page:

If you run Connect-VIServer or Connect-CIServer without specifying the User, Password, or Credential parameters, the cmdlet searches the credential store for available credentials for the specified server. If only one credential object is found, the cmdlet uses it to authenticate with the server. If none or more than one PSCredential objects are found, the cmdlet tries to perform a SSPI authentication. If the SSPI authentication fails, the cmdlet prompts you to provide credentials.

Putting those two paragraphs together: 1) AD login with SSPI requires the VCSA to be joined to the domain, even with an external PSC, and 2) PowerCLI attempts SSPI when it finds no stored credential objects.
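As an aside, that quoted behaviour suggests another way round the prompts: put a credential in the PowerCLI credential store so SSPI is never attempted. A sketch, with hypothetical server and account names:

# Store a credential for this server; Connect-VIServer finds and uses it automatically
New-VICredentialStoreItem -Host vcsa-name.dns.name -User "DOMAIN\user" -Password "secret" | Out-Null
Get-VICredentialStoreItem -Host vcsa-name.dns.name  # confirm what is stored

I wanted pass-through authentication rather than stored passwords, though, so on with the domain join.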

The KB article in the first paragraph gives details of how to add the VCSA to the domain from the command line, so I did the following:

  • Started PowerCLI
    Ran connect-viserver command to test
    Prompts for credentials
  • Ran the likewise command to add the VCSA to the domain
    Ran connect-viserver command to test
    Prompts for credentials
  • Restarted the vCenter services
    Ran connect-viserver command to test
    Prompts for credentials
    Oh &%$&…..
  • Tested from another Windows server – started PowerCLI
    Ran connect-viserver command to test
    Loads with no prompt for credentials
  • Returned to original Windows server and restarted PowerCLI
    Ran connect-viserver command to test
    Loads with no prompt for credentials

So it would seem that you at least need to restart PowerCLI, and maybe restart the vCenter services (I’m not sure now whether that was needed), once you’ve added the VCSA to the domain.

Remediating security issues on VRO 6.6

I’ve recently had to fix a bunch of security vulnerabilities on vRealize Orchestrator 6.6, and thought it might be worth documenting for anyone else trying to fix the same issues.

It was mostly around the use of weaker protocols and self-signed certificates. I think I’ve managed to isolate the minimum work necessary to fix the issues – happy to be corrected if there are better ways of doing it, or if I’ve missed anything.

  1. Appliance interface on TCP/5480
    • SSH on to the appliance as root
    • Replace /opt/vmware/etc/lighttpd/server.pem with a signed certificate (including the certification chain if it’s a private CA) and private key.
    • Edit /opt/vmware/etc/lighttpd/lighttpd.conf and replace
        ssl.cipher-list = "HIGH:!aNULL:!ADH:!EXP:!MD5:!DH:!3DES:!CAMELLIA:!PSK:!SRP:@STRENGTH"
      with
        ssl.honor-cipher-order = "enable"
        ssl.cipher-list = "EECDH+AESGCM:EDH+AESGCM"
        ssl.use-compression = "disable"
        setenv.add-response-header += ( "Strict-Transport-Security" => "max-age=63072000; includeSubDomains; preload",
            "X-Frame-Options" => "SAMEORIGIN",
            "X-Content-Type-Options" => "nosniff")
  2. Appliance SFCB interface on TCP/5489
    • SSH onto the appliance as root
    • vi /opt/vmware/share/sfcb/genSslCert.sh and update the line
      umask 0277; openssl req -x509 -days 10000 -newkey rsa:2048 \
      to read
      umask 0277; openssl req -x509 -days 730 -newkey rsa:2048 \
    • vi /opt/vmware/etc/ssl/openssl.conf and update
      commonName=<appliance FQDN>
      and add lines
      DNS.2 = <appliance FQDN>
      DNS.3 = <appliance hostname>
      at the end
    • cd /opt/vmware/etc/sfcb/ and run the genSslCert.sh script edited above to regenerate the certificates.
  3. Update the VCO service and configuration console
    • Log in to https://vcoserver:8283/vco-controlcenter/#/control-app/certificates
    • Generate a new SSL certificate with the correct common name and organization details
    • From a root bash shell on the appliance, generate a CSR with:
      keytool -certreq -alias dunes -keypass "password" -keystore "/etc/vco/app-server/security/jssecacerts" -file "/tmp/cert.csr" -storepass "password"

      (the password is found in /var/lib/vco/keystore.password)
    • Sign the CSR with your Certification Authority
    • Copy the cert to the VCO server as /tmp/cert.cer
    • Re-import the signed certificate with:
      keytool -importcert -alias dunes -keypass "password" -file "/tmp/cert.cer" -keystore "/etc/vco/app-server/security/jssecacerts" -storepass "password"
    • Verify the keystore with:
      keytool -list -keystore "/etc/vco/app-server/security/jssecacerts" -storepass "password" 
    • To remove TLS 1.0 from the VCO services, search the service configuration files for sslEnabledProtocols= and change it to read sslEnabledProtocols="TLSv1.1, TLSv1.2"; also change the ciphers= line to remove the 3DES ciphers.
  4. Reboot the appliance
  5. Test connections with the following statements:
    openssl s_client -connect <servername>:5480 -tls1
    openssl s_client -connect <servername>:5480 -tls1_2

    openssl s_client -connect <servername>:5489 -tls1
    openssl s_client -connect <servername>:5489 -tls1_2

    openssl s_client -connect <servername>:8281 -tls1
    openssl s_client -connect <servername>:8281 -tls1_2

    openssl s_client -connect <servername>:8283 -tls1
    openssl s_client -connect <servername>:8283 -tls1_2

    The tls1 connections should now fail, and the tls1_2 connections should still work.
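If openssl isn’t to hand on a Windows box, a rough PowerShell equivalent can drive the same checks via .NET’s SslStream – a sketch that only reports whether the handshake succeeds:

function Test-Tls ([string]$Server, [int]$Port, [System.Security.Authentication.SslProtocols]$Protocol) {
    $client = New-Object System.Net.Sockets.TcpClient($Server, $Port)
    try {
        # Accept any certificate - we only care whether the protocol negotiates
        $ssl = New-Object System.Net.Security.SslStream($client.GetStream(), $false, { $true })
        $ssl.AuthenticateAsClient($Server, $null, $Protocol, $false)
        "{0}:{1} {2} OK" -f $Server, $Port, $Protocol
    } catch {
        "{0}:{1} {2} failed" -f $Server, $Port, $Protocol
    } finally {
        $client.Close()
    }
}
# e.g. Test-Tls <servername> 5480 Tls12 should succeed, Test-Tls <servername> 5480 Tls should fail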

If anyone has examples of getting SFCB to work with a CA-signed certificate I’d be interested, as I’ve tried a number of things without success. It may be down to the properties of the certificate, but the above is sufficient for my requirements at the moment.