PowerCLI code snippet to get storage driver details

This is just a brief post to share a code snippet that I built to display the storage driver in use.

The driver and its version are critical for VMware VSAN, and I needed a quick and easy way of checking them. I might revise the code at a later date to run across multiple hosts in a cluster and output the results in a table (there’s a rough sketch of that at the end of this post), but for now, here are the basics.

# Connect to vCenter and get an esxcli object for the target host
Connect-VIServer <vcname>
$esxcli = Get-EsxCli -VMHost <esxihostname>

# Pick out the adapter of interest (vmhba0 here)
$adapter = $esxcli.storage.core.adapter.list() |
    Select-Object Description,Driver,HBAName | Where-Object {$_.HBAName -match "vmhba0"}

# VIB names use hyphens where the driver name uses underscores
$driver = $adapter.Driver -replace "_", "-"

# List the VIB that provides the driver, including its version
$esxcli.software.vib.list() |
    Select-Object Name,Version,Vendor,ID,AcceptanceLevel,InstallDate,ReleaseDate,Status |
    Where-Object {$_.Name -match ($driver + "$")}

This displays output such as:

Name            : scsi-megaraid-sas
Version         : 6.603.55.00-1OEM.550.0.0.1331820
Vendor          : LSI
ID              : LSI_bootbank_scsi-megaraid-sas_6.603.55.00-1OEM.550.0.0.1331820
AcceptanceLevel : VMwareCertified
InstallDate     : 2016-05-03
ReleaseDate     :
Status          :

This works on the servers I’ve tried it on (Dell), but as usual, YMMV…
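
As a starting point for the cluster-wide version mentioned above, here’s a rough, untested sketch; the cluster name and the vmhba0 filter are assumptions to adjust for your environment:

Connect-VIServer <vcname>
Get-Cluster <clustername> | Get-VMHost | ForEach-Object {
    $esxcli = Get-EsxCli -VMHost $_
    # Find the adapter of interest and derive the VIB name from its driver
    $adapter = $esxcli.storage.core.adapter.list() |
        Where-Object { $_.HBAName -match "vmhba0" }
    $driver = $adapter.Driver -replace "_", "-"
    $vib = $esxcli.software.vib.list() |
        Where-Object { $_.Name -match ($driver + "$") }
    # Emit one row per host, ready for table output
    [PSCustomObject]@{
        Host    = $_.Name
        Driver  = $adapter.Driver
        VIB     = $vib.Name
        Version = $vib.Version
    }
} | Format-Table -AutoSize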

GitHub Desktop from behind a corporate proxy server

After having just helped a colleague through the tortuous path of configuring GitHub Desktop to work through a proxy, I thought it might be worth blogging it all.

Different parts of GitHub Desktop require the proxy information to be provided in different ways, and without all three pieces of configuration below, you will find that some things work, but not others.

  1. Internet Explorer proxy setting
    This *has* to be set to a specific proxy server, not an autoconfig (PAC) script.
  2. .gitconfig
    This is found in your user home directory (usually C:\Users\<Username>) and requires the following lines:
    [http]
    proxy = http://<proxy-address>:<port>
    [https]
    proxy = http://<proxy-address>:<port>
  3. HTTPS_PROXY/HTTP_PROXY environment variable
    You can set this in your local environment, or in the system environment settings, as long as it’s visible to the GitHub Desktop processes.
    e.g.
    set HTTPS_PROXY=http://<proxy-address>:<port>
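
For reference, here’s a minimal sketch of applying items 2 and 3 from a PowerShell prompt rather than editing things by hand, using the same placeholders:

# Write the [http]/[https] proxy entries into ~/.gitconfig
git config --global http.proxy http://<proxy-address>:<port>
git config --global https.proxy http://<proxy-address>:<port>

# Persist HTTPS_PROXY for the current user (affects newly started processes)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://<proxy-address>:<port>", "User")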

If a userid/password is required, it’s recommended that you run something like CNTLM to handle the authentication, rather than adding the plaintext credentials to the proxy string.
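
If you do go the CNTLM route, a minimal cntlm.ini looks something like this; all the values are placeholders, and the hash comes from running cntlm -H rather than storing a plaintext password:

Username    <your-userid>
Domain      <your-domain>
PassNTLMv2  <hash generated by cntlm -H>
Proxy       <proxy-address>:<port>
Listen      3128

The proxy settings above then point at http://localhost:3128 instead of the corporate proxy.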

Once you’ve configured all that, if you’re using GitHub Enterprise, you will probably need to use a Personal Access Token, rather than your password, to authenticate GitHub Desktop. This can be created by logging in with a browser and going to Settings / Personal Access Tokens.

I hope that helps someone out, but if not, I’m sure I’ll be using it as a reminder when I have to change it all between using it at home and at work…

ESXi 6.0 – Switching from persistent scratch to transient scratch

KB article 1033696 is very helpful when you want to configure persistent scratch on your USB/SD-card/PXE-booted ESXi host; however, when you want to go the other way, things can be slightly more complicated.

Consider the following situation. You have installed ESXi onto a local USB stick, and have temporarily retasked a drive from what will become your VSAN array to run up vCenter and a PSC.
On the next reboot, ESXi will see the persistent local storage, and automatically choose to run scratch on it.
From that point onwards, how do you switch back and release the disk for use by VSAN?

You can’t set the advanced configuration "ScratchConfig.ConfiguredScratchLocation" to blank (e.g. “”); that was the first thing I tried. It accepts the command, but the setting remains pointed at the VMFS location.

You can’t just unmount or delete the VMFS filesystem; it’s in use.

You can’t set the advanced configuration "ScratchConfig.ConfiguredScratchLocation" to /tmp/scratch either; it accepts the value, but on reboot it discovers the VMFS filesystem again.

Other combinations of advanced configuration settings, and editing or removing /etc/vmware/locker.conf, also failed to stop scratch from being loaded onto the VMFS filesystem at boot.
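
For anyone retracing these steps, this is a quick way to check where scratch is currently running, and where it is configured to run after the next reboot, from PowerCLI (the host name is a placeholder):

$vmhost = Get-VMHost <esxihostname>
# Where scratch is actually running right now (read-only setting)
Get-AdvancedSetting -Entity $vmhost -Name "ScratchConfig.CurrentScratchLocation"
# Where scratch is configured to run after the next reboot
Get-AdvancedSetting -Entity $vmhost -Name "ScratchConfig.ConfiguredScratchLocation"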

In the end, I was able to get around this by using storcli to offline the disk. The server could then be rebooted without mounting the VMFS filesystem, so scratch was then running from /tmp/scratch (on the ramdisk). The disk could then be brought online again, and the VMFS filesystem destroyed. I guess an alternative approach would be to point the scratch location at an NFS location, which should take precedence over a “discovered” local persistent VMFS filesystem, and allow the VMFS filesystem to be deleted.
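
If you want to try the NFS alternative, an untested PowerCLI sketch would look something like the following; the datastore and directory names are placeholders, and the directory needs to exist before the reboot:

# Point the configured scratch location at a directory on an NFS datastore,
# then reboot so the local VMFS filesystem is no longer held open by scratch
Get-AdvancedSetting -Entity (Get-VMHost <esxihostname>) -Name "ScratchConfig.ConfiguredScratchLocation" |
    Set-AdvancedSetting -Value "/vmfs/volumes/<nfs-datastore>/.locker" -Confirm:$false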

I hope that helps someone else, as I spent far more time than I should have going round in a loop, steadily losing my marbles, because there didn’t seem to be any information around about how to do it.