vSphere 6.7 – Resetting the SSL state back to zero

I have a few other tutorials on here regarding vSphere SSL certificates. I found that a variety of issues can lead to a problematic SSL state that is difficult to recover from.

This guide will show you how to get back to a stable starting point so that, once you understand the process, you can install custom SSL certificates without any problems.

This guide only covers a VCSA environment. If you have a Windows environment, the tools and paths will be different, however, the concepts are the same.

I have created some scripts to make this process simpler. When I have a moment, I will upload them to GitHub and link them HERE. (If you would find these helpful before I get that done, please contact me and I’ll get them uploaded sooner.)

Step 1:
Unregister any 3rd-Party Extensions. These will often block successful installation or updating of the PSC certificates. Here are a couple of useful example links, or refer to the documentation for your 3rd-Party Extension provider.
Remove Extensions using SSH
Remove Extensions using the MOB Browser

Step 2:
Attempt to use Certificate Manager to revert to the default / self-signed certificates. This may not work if you are having other SSL-related issues, but try it anyway.

Step 3:
Identify and remove all non-VMware Root CA’s registered in the certificate store. This can feel complicated the first time. You will need to get familiar with a few tools, hopefully you are comfortable with the Linux CLI. This was tedious enough for me that I wrote some scripts which I will reference in addition to showing you the command line utilities. The instructions for doing this will be included below.

Step 4:
If Step 2 was not successful before, attempt it again. If you can’t get Step 2 working, then installing your own certs won’t go any better.

Step 5:
If you can’t get Step 2 working, then you are going to have to parse through your logs for warnings or errors. I recommend backing up or deleting your Certificate Manager log file and running Step 2 again. This way you will only have to parse data from one run of the process.

rm /var/log/vmware/vmafd/certificate-manager.log
Or use my gencerts.sh script
grep -i 'warning\|error\|fail' /var/log/vmware/vmafd/certificate-manager.log | more

I recommend starting with your favorite search engine for errors, but feel free to reach out to me if you can’t find a solution.

Step 6:
If you can’t get to the bottom of this, then I recommend upgrading to the latest available update, installing all available patches, and then trying again from Step 2.

If this doesn’t work, then you might need to reinstall your PSC. This isn’t too difficult, actually: take a backup, run the installer (install the latest version), and redeploy using the backup files.

Viewing the Contents of the Root Certificate Store

vecs-cli usage:
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store TRUSTED_ROOTS --text

This will make it easier to read
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store TRUSTED_ROOTS --text |grep -i 'Alias\|Subject:\|Before\|After\|issuer'

Or use my listcerts.sh script

Example Output
Alias : f052cf63552bc9cb365c199b3320fa383415979f
        Issuer: CN=CA, DC=vsphere, DC=local, C=US, ST=California, O=wdl-psc-00.lab.local, OU=VMware Engineering
            Not Before: Feb 21 20:31:25 2019 GMT
            Not After : Feb 18 20:31:25 2029 GMT
        Subject: CN=CA, DC=vsphere, DC=local, C=US, ST=California, O=wdl-psc-00.lab.local, OU=VMware Engineering
Alias : 5ab252164061b935c22128f875a264fec8efd1d0
        Issuer: CN=CA, DC=vsphere, DC=local, C=US, ST=California, O=pdx-psc-00.lab.local, OU=VMware Engineering
            Not Before: Feb 27 16:20:59 2019 GMT
            Not After : Feb 24 16:20:59 2029 GMT
        Subject: CN=CA, DC=vsphere, DC=local, C=US, ST=California, O=pdx-psc-00.lab.local, OU=VMware Engineering
Alias : ff1f984a104a7c265ab6a3bd98c5b9a22c809b70
        Issuer: DC=local, DC=lab, CN=lab-PDX-DC-01-CA
            Not Before: Feb 20 17:08:18 2019 GMT
            Not After : Feb 20 17:18:18 2039 GMT
        Subject: DC=local, DC=lab, CN=lab-PDX-DC-01-CA
Alias : e6575bb7c6e3486bd4355e236e8dbefb0ddfb013
        Issuer: DC=local, DC=lab, CN=lab-PDX-DC-01-CA
            Not Before: Mar  2 18:51:15 2019 GMT
            Not After : Mar  2 19:01:15 2021 GMT
        Subject: C=US, ST=OR, L=PDX, O=Local Lab, OU=Engineering, CN=PDX-PSC-00-CA
                CA Issuers - URI:ldap:///CN=lab-PDX-DC-01-CA,CN=AIA,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=lab,DC=local?cACertificate?base?objectClass=certificationAuthority
Alias : e56a6f43a38003e101c2abfc35f0ad50de7218b9
        Issuer: DC=local, DC=lab, CN=lab-PDX-DC-01-CA
            Not Before: Feb 25 05:23:12 2019 GMT
            Not After : Feb 25 05:33:12 2021 GMT
        Subject: C=US, ST=WA, L=WDL, O=Lab.local, OU=Engineering, CN=WDL-PSC-00-CA
                CA Issuers - URI:ldap:///CN=lab-PDX-DC-01-CA,CN=AIA,CN=Public%20Key%20Services,CN=Services,CN=Configuration,DC=lab,DC=local?cACertificate?base?objectClass=certificationAuthority

In the case above you would want to identify (and delete) the aliases for everything that isn’t a VMware self-signed cert.
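If you’d rather not eyeball the output, a rough filter can pair each alias with its issuer and flag the entries that weren’t issued by the built-in vsphere.local VMCA. This is only a sketch of the idea behind my listcerts.sh; treat the output as a starting point and verify each alias by hand before deleting anything.

```shell
# Print "alias issuer" for every TRUSTED_ROOTS entry whose issuer is NOT
# the built-in VMCA (DC=vsphere). Verify each hit before deleting anything.
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store TRUSTED_ROOTS --text \
  | awk '/Alias/ {alias=$NF} /Issuer/ && !/DC=vsphere/ {print alias, $0}'
```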

The process is below or you can use my deletecert.sh script.

Backing up the Aliases
Backing up the aliases is part of deleting them. I will assume that you have a folder called /certs on your PSC host.

vecs-cli usage:
/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store TRUSTED_ROOTS --alias $ALIAS --output /certs/$ALIAS.crt

/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store TRUSTED_ROOTS --alias ff1f984a104a7c265ab6a3bd98c5b9a22c809b70 --output /certs/ff1f984a104a7c265ab6a3bd98c5b9a22c809b70.crt

Un-publishing the Alias
The alias needs to be unpublished before it is deleted or there is some risk that the certificate will be restored to the certificate store. The backup copy of the cert is used for this process.

dir-cli usage:
/usr/lib/vmware-vmafd/bin/dir-cli trustedcert unpublish --cert "/certs/$ALIAS.crt"

/usr/lib/vmware-vmafd/bin/dir-cli trustedcert unpublish --cert "/certs/ff1f984a104a7c265ab6a3bd98c5b9a22c809b70.crt"

Deleting the Alias

vecs-cli usage:
/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store TRUSTED_ROOTS --alias $ALIAS

/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store TRUSTED_ROOTS --alias ff1f984a104a7c265ab6a3bd98c5b9a22c809b70
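To tie the three operations together (back up, unpublish, delete), here is a sketch of the logic behind my deletecert.sh. It assumes the /certs folder already exists on the PSC; double-check every alias before you pass it in.

```shell
# Sketch: back up, unpublish, and delete one TRUSTED_ROOTS alias.
VECS=/usr/lib/vmware-vmafd/bin/vecs-cli
DIRCLI=/usr/lib/vmware-vmafd/bin/dir-cli

cleanup_alias() {
    local alias="$1"
    local backup="/certs/${alias}.crt"
    # 1. Back up the cert first -- the unpublish step needs the file anyway
    "$VECS" entry getcert --store TRUSTED_ROOTS --alias "$alias" --output "$backup" || return 1
    # 2. Unpublish so the directory service cannot restore the cert later
    "$DIRCLI" trustedcert unpublish --cert "$backup" || return 1
    # 3. Remove the entry from the VECS store (-y skips the confirmation prompt)
    "$VECS" entry delete --store TRUSTED_ROOTS --alias "$alias" -y
}

# Example (run on the PSC):
# cleanup_alias ff1f984a104a7c265ab6a3bd98c5b9a22c809b70
```

Note that dir-cli will prompt you for SSO administrator credentials when it runs.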

Afterward list the aliases again to make sure the one you deleted is gone.

vSphere 6.7 – Custom SSL Certificates

Note: There are 20 tutorials out on the net for installing custom SSL certificates, and I’m not going to cover that in detail. What I will cover here is all of the little things that you will need in order to be successful following one of those tutorials. They aren’t comprehensive, and they assume everything goes according to plan. In my experience that just isn’t a fair representation of what happens, especially after upgrades or after not getting it right the first time. So read this first and then go try one of the tutorials. Or if you’re stuck on one of those tutorials, hopefully this will get you out of the muck. I’ll post links to some useful tutorials and documentation at the end of this post.

Custom SSL certificates prior to vSphere 6.x were a frustrating proposition. They weren’t particularly easy to install, and once in place they caused a lot of miscellaneous operational issues. They almost always made troubleshooting more difficult, and they could cause communication issues between 3rd-party components. The way it’s done now is consistent and convenient. It’s even easy once it’s in place.

In 6.x, the Platform Services Controller issues SSL certificates to every component participating in the SSO domain. Even 3rd party plug-ins are issued an SSL cert directly from the PSC. This is done by default with a self-signed root CA certificate. All we need to do in order to have the PSC issue valid SSL certificates for our own environment is to authorize it as a signing authority in our SSL signing chain.

Check out my other blog posts and security pages on SSL basics and 80/20 rules for success.

The process is the same for 6.5 and 6.7. I have had more success with these tools working the further along I am in versions. If you’re considering an upgrade to 6.7 then do that first. If you’re on 6.5 or 6.7, install all available updates and patches first.

First of all, before you start, or if you’re having trouble: install the latest patches. Before you get into the meat of installing your own certificates, install the latest versions, updates, and patches. Did I say that enough times? There are some bugs in the SSL tools, pretty much in every version. One of them is even related to creating your CSRs, so upgrade before you even get started.

I am only covering the process for the VCSA and not a Windows VC server. I have a distributed VCSA environment in my lab with multiple sites, PSCs, and VCs. The important pieces covered here should apply equally to a Windows install, however, some of the commands and paths referenced will be different on Windows.

Step 0: Install all available upgrades and patches. (One last time).
Then make sure that the DNS CNAME and PTR records for these things are correct:
Every ESXi host

Step 1: Verify that you have an Enterprise Certificate Authority in your environment, that you are able to request certificates, and that you know how to contact the CA administrator. Also make sure that your CA configuration is up to date and using SHA256 instead of SHA1. SHA1-signed certificates will not be considered valid by most clients. This shouldn’t be an issue unless your CA has been around a long time or unless you’re starting out with an older OS to provide your CA, like Server 2012. Note that even SHA256 certificates will have an SHA1 thumbprint. Don’t worry about that while troubleshooting, it’s normal.

Step 2: Follow the instructions (in other tutorials) for creating a VCSA Signing Certificate Template for 6.5 and higher. You will need this template to correctly fill your CSR.

Step 3: Download and install OpenSSL on your workstation, whether Windows or Linux. And look up the instructions for converting PKCS12 certificates to PEM format. You might need this for Step 8.
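For reference, the PKCS#12-to-PEM conversion is essentially a one-liner. The round-trip below uses a throwaway self-signed cert just so the example is self-contained; in real life you would start from the .p12/.pfx file you already have.

```shell
# Demo setup: create a throwaway key + self-signed cert and bundle them
# into PKCS#12 (in practice, this is the .p12/.pfx you already have).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout demo.key -out demo.crt -days 1
openssl pkcs12 -export -passout pass: -inkey demo.key -in demo.crt -out demo.p12

# The conversion you actually need: PKCS#12 -> PEM.
# -nodes leaves the private key unencrypted, so protect the output file.
openssl pkcs12 -in demo.p12 -passin pass: -nodes -out demo.pem
```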

Step 4: If you have already installed custom SSL certificates on your VCSA and are having trouble, or if you are replacing existing custom SSL certificates (i.e., you don’t have a blank SSL slate), then you will want to follow the reset tutorial above to get your SSL environment back to zero.

Step 5: Log in to your all-in-one VCSA, or to your external Platform Services Controller, and run the Certificate Manager program.

Step 6: Choose “Replace SSL Certificate with Custom Signing Certificate and replace certificates.” The actual number for this varies depending on vSphere version and install type. You will need to enter an SSO admin credential.
Then choose to create a Certificate Signing Request.
When asked if you want to replace all certs, answer yes.
When asked if you want to configure the SSL configuration file, choose yes.
For this step, what you enter here is important. Enter the typical answers for Location, State, etc.
When asked about the Common Name, DO NOT ENTER THE FQDN OF YOUR PSC HOST. If you do, the whole process will fail several steps later.
This will be the name of the CA that is created. Ask your CA admin or examine an existing SSL certificate to determine if there is a naming convention. If you’re not sure, use HOSTNAME-CA.
Don’t enter an IP address for your PSC, it’s unnecessary. For the last question, asking the name of your CA, use what you entered for the CN above. I will refer to this as CA_NAME for the rest of the guide.
Save your CSR and key someplace that is easy to find like /root or /tmp.

Step 7: Get your CSR signed with the VCSA Template created in Step 2 and export it as Base64.

Step 8: Add the whole certificate chain to your certificate.
check out my guide for doing this here
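If you just want the short version: a chain file is the Base64 (PEM) certificates concatenated leaf-first, root-last. The demo below uses stand-in files so you can see the ordering; substitute your signed cert and your CA’s certs (the file names here are placeholders, not names Certificate Manager produces).

```shell
# Stand-in PEM files; replace these with your signed cert and CA certs.
printf -- '-----BEGIN CERTIFICATE-----\n(leaf)\n-----END CERTIFICATE-----\n' > vcsa-signing.crt
printf -- '-----BEGIN CERTIFICATE-----\n(root)\n-----END CERTIFICATE-----\n' > root-ca.crt

# Order matters: leaf first, intermediates (if any) in the middle, root last.
cat vcsa-signing.crt root-ca.crt > signing-chain.crt
```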

Step 9: Copy your new certificate chain file back to the PSC host

Step 10: If Certificate Manager is still up on your PSC then continue to import Custom SSL Certificate. If it’s not, rerun Certificate Manager. Choose the custom SSL option again, and then choose to import your custom certificate chain.
Provide the full path to your certificate
Provide the full path to your key file
Watch the prompts on the above two lines carefully. Notice if it accepts your cert before entering the key.

If you immediately encounter an error with the certificates, then check Steps 0, 4, 6, and 8.

It will go through a lengthy process of generating and replacing keys and restarting services. If this process fails, refer to Steps 0, 4, and 6 as the problem will almost always be in one of those places.

Step 11: If you are on 6.7u1, this step is optional. Run Certificate Manager again and choose to replace the Machine Certificate.
When asked if you want to reconfigure the SSL configuration, choose yes.
When asked for the CN, go ahead and use the FQDN now.
When asked for the CA name (the last question), use the CA_NAME identified in Step 6.

Step 12: If you are on 6.7u1, this step is optional. Run Certificate Manager again and choose to replace the Web Services Certificate.
When asked if you want to reconfigure the SSL configuration, choose yes.
When asked for the CN, use something different from what you used in Step 11, maybe web-FQDN.
When asked for the CA name (the last question), use the CA_NAME identified in Step 6.

Step 13: If you have an external PSC, then login to your vCenter Server and perform steps 11 and 12 for the vCenter Server.

Step 14: Restart the services on your vCenter Server (especially if you have an external PSC)

Step 15: Navigate to the FQDN of your vCenter Server. If you don’t have a clean SSL state, inspect the site’s SSL certificate. If you see the whole chain, then it’s probably a caching / cookie issue. Clear your cookies or restart your browser. If it still isn’t working, look at the specific error message in your browser for a clue to the problem.

Step 16: Navigate to the VM Admin URL for your PSC and VCSA appliance(s). You MIGHT find that you have a valid SSL certificate there. You MIGHT find that you don’t.
If you don’t, then follow these instructions to fix it.

Step 17: ESXi Host Certificates. Your ESXi hosts won’t accept a new certificate from the PSC until that certificate is 24 hours old. If you don’t want to wait 24 hours, you can adjust this behavior. It is an advanced vCenter Server setting (vpxd.certmgmt.certs.minutesBefore, if I recall the key correctly) and is configured in minutes. Just change it to something like 5 minutes.
When the hosts are done being added, you can change it back to the default.

Step 18: Re-register your 3rd-party plug-ins. You may have had to disable or remove 3rd-party plug-ins in order to get this far. If you did, now is when you can re-register them. Keep in mind that the certificates issued by the PSC are only for inter-service communication. The actual server management URLs of your extension servers will need their own Custom SSL certificates to secure front-end management traffic. I will hopefully be providing enough examples of those to make securing whichever ones you have a piece of cake.

If you found this article helpful, take a look at my other vSphere or SSL related posts and pages. Especially Breaking Bad SSL Habits.

I wrote this up a bit after I did the process so if anything isn’t quite right, feel free to let me know and I’ll fix it.

VMware Documentation
Creating a Signing CA Template
Replacing Default SSL Certificates with Custom Certificates

Excellent Example Video

vSphere 6.7 – Storage Class / Persistent Memory

Among the new and exciting media and IO enhancements in vSphere 6.7 is support for Storage Class Memory, also called Persistent Memory or Non-Volatile DIMMs. Not to be confused with Non-Volatile Memory, PMEM is actually RAM with some added mechanism for making it persistent, such as a battery and/or backing by NAND flash. There are a few basic types, and each manufacturer can have its own proprietary features. NVDIMMs are not drive media and are not accessed as disk devices, although a virtual device can be abstracted on top of them.

More information about PMEM can be found here: https://en.wikipedia.org/wiki/NVDIMM

Need Persistent Memory or want to do a PoC? I can do that for you, please contact me.

vSphere 6.7 offers two different mechanisms for granting NVDIMM access to virtual machines. I will briefly describe them here and go into more detail below as I demonstrate each one. The terms can get really overloaded here because PMEM is used in a couple of different contexts: PMEM is an access method for Persistent Memory, and PMEM is also an abbreviation for Persistent Memory. I apologize, I am not responsible for this ambiguity 🙂

NVDIMM – Capacity out of your Persistent Memory pool can be allocated directly to your virtual machine. With this method, your VM has a virtual NVDIMM installed. This method is night-and-day faster than the PMEM method described below. With this method, NVDIMM-aware applications can make use of the NVDIMM natively, or you can use kernel-space tools to create a file system construct on top of the memory addresses allocated to the NVDIMMs.

PMEM – Capacity out of your Persistent Memory Pool is allocated to your Virtual Machine as a VMDK and attached to a Virtual Disk Controller. The VMDK resides in the PMEM datastore which is a different construct with different rules than a regular datastore. PMEM backed virtual disks can be storage migrated out and back into PMEM. This allows PMEM space to be dynamically allocated depending on need. The PMEM shows up in the guest as a disk (or NVMe namespace) and can be used normally. Although there are some operational advantages to the PMEM method, it is not nearly as fast as the direct NVDIMM method.

Persistent Memory in the web interface: first we need to verify that there is PMEM in the ESXi host. You can do this by navigating to the Hardware Summary section or the Memory section of the ESXi host Configuration context.

The PMEM Datastore
Each ESXi host with PMEM installed will have a single PMEM Datastore. This Datastore shows up in a variety of places, but it doesn’t show up in any of the main Datastore context menus. It is mostly hidden.
The PMEM Datastore has a long randomly generated name that can’t be changed (as of 6.7u1).

The primary method for moving or creating a virtual disk in the PMEM Datastore is to assign the PMEM Storage Profile to the Virtual Disk. This will move it to (or create it in) the PMEM Datastore which is backed by Persistent Memory.

The VM must be powered down to install a PMEM disk or an NVDIMM device.
The PMEM device will perform optimally as an NVMe Namespace. See my blog on NVMe and vSphere 6.7 here

Adding a new PMEM virtual disk as an NVMe Namespace
Once the device is added, power on the VM and configure as an NVMe device.
See my blog on NVMe and vSphere 6.7 here

Migrating a VMDK off of a PMEM Device
The storage migration menu will have new PMem and Hybrid Options.
PMem – Moves all VMDK files to PMEM storage, leaving only the VM config files on a normal datastore.
Hybrid – Allows migrating non-PMem disks, leaving PMem where it is
Standard – Moves all VMDKs to normal Datastores

Choosing “Configure per disk” allows specific decisions to be made per disk. While creating a PMEM disk requires selecting a VM Storage Policy rather than a datastore, this menu is the opposite. Choose “Browse” and select the PMem radio button to move a non-PMem disk to PMem.

You can storage motion a PMEM VMDK file off of PMEM while the VM is running
You can storage motion a VMDK file on to PMEM while the VM is running
You cannot manually migrate the VMDK to PMEM by changing the VMDK Storage Profile while the VM is running
Manual or DRS vMotion works for VMs with PMEM. The VM is not pinned to a host when using PMEM. A storage migration is not required. As long as the target host has a PMEM pool with sufficient capacity, the PMEM disk will automatically migrate to the target host.

Virtual NVDIMM
The VM must be powered off to add a virtual NVDIMM. NVDIMM namespaces are configured at boot time and so hot-add is not likely to work.

The VM must be VM Hardware Version 14 (ESXi 6.7) or higher
The Guest Operating System must have native support for NVDIMM
I am using Ubuntu 18.04 here, directions for a different OS may vary.

Install LIBNVDIMM Support Command Line Utility ndctl

Check if pmem device has been created

We are lucky with Ubuntu 18: the pmem device was created by default, and we did not have to construct it.
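For reference, on Ubuntu 18.04 the install and the device check look roughly like this (the package and device names are Ubuntu-specific assumptions):

```shell
# Install the LIBNVDIMM management utility (Ubuntu/Debian package name)
command -v ndctl >/dev/null 2>&1 || sudo apt-get install -y ndctl || true

# The emulated NVDIMM should surface as a pmem block device in the guest
ls -l /dev/pmem* 2>/dev/null || echo "no pmem device visible on this machine"
```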

View the properties of the device

The mode of this device matters and you must set it properly for your intended use.
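The namespace listing is where the mode shows up. A sketch, assuming ndctl is installed in the guest:

```shell
# List namespaces with their properties; look at the "mode" field in the output
sudo ndctl list -N 2>/dev/null || echo "run this inside a guest with an NVDIMM"
```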

fsdax – Allows direct memory mappings, bypassing the page cache. Best for file systems that properly implement DAX kernel access (XFS and ext4).
devdax – Similar to fsdax; use this if you will be passing the device through to a virtual machine (which would be nested in this case).
sector – Use this mode for legacy file systems that lack full DAX support or don’t support checksums. Mostly used for small boot file systems.
raw – Just a raw device. Can be used for file systems, but without DAX support.

I will be using XFS for performance reasons and so I will change the mode from raw to fsdax.

Now we can format the pmem device and mount it normally.
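Putting the mode change, format, and mount together: this sketch is wrapped in a function so nothing runs until you call it inside the guest. The namespace (namespace0.0) and device (/dev/pmem0) names are assumptions, so check yours with ndctl list first.

```shell
pmem_setup() {
    # Reconfigure the existing namespace from raw to fsdax
    sudo ndctl create-namespace -f -e namespace0.0 --mode=fsdax &&
    # Format with XFS, then mount with DAX so I/O bypasses the page cache
    sudo mkfs.xfs /dev/pmem0 &&
    sudo mkdir -p /mnt/pmem &&
    sudo mount -o dax /dev/pmem0 /mnt/pmem
}

# Run inside the guest:
# pmem_setup
```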

For more information about managing NVDIMM Namespaces.

vMotioning VMs with NVDIMM devices

As long as the target ESXi host has a PMEM pool with enough capacity available, a standard vMotion will be successful.

vSphere 6.7 – NVMe Disk Controller

VMware has added a lot of new media and IO related enhancements to vSphere 6.7. One of those enhancements is the Virtual NVMe Disk Controller.
Also check out my blog on new Persistent Memory features in 6.7! This is exciting stuff. here

First of all, this disk controller does not require the underlying media to be NVMe. You can attach a VMDK file from any datastore to the NVMe controller.

I ran some basic IO tests comparing the performance of the NVMe controller vs the Paravirtualized SCSI controller. My initial results show that there isn’t a benefit to using the NVMe controller with VMDKs that aren’t backed by NVMe media. For virtual disks that are backed by NVMe media, I am seeing a significant advantage.

Neither my test system nor my benchmarks were optimized for ultra-high performance, yet there was a significant increase in IO with a significant reduction in disk utilization. I am going to dive deeper into this in another post and see how far I can push things. I’m also curious what I can push out of an NVMe RDM attached to the virtual NVMe controller.

Adding the NVMe controller is not difficult, however, NVMe devices are not treated the same way as SCSI devices are so there are some new considerations.

I’m using Ubuntu 18.04 for my test VM so installing the OS level prerequisites may be different for your Operating System.

IMPORTANT NOTE: Claims about performance enhancements depend greatly on the configuration of the Hardware, Hypervisor, and Guest Operating System. I will have a high-performance blog entry coming out soon that will go into more detail.

Step 1: Upgrade VM Hardware to Version 14 (ESXi 6.7 Compatible)

After clicking through the prompts, you can see that your VM is now HW Version 14, which can support NVMe (and you may notice some other cool things I will talk about soon).

Step 2: Install Virtual NVMe Controller for the VM

Step 3: Power on the VM

Step 4: Upgrade / Install VMware Tools to the latest version. (Either the VMware Guest Tools Installer or the open-vm-tools package work fine)

Step 5: Configure your VM for NVMe Support

Verify that your VM has an NVMe Controller

Install the nvme-cli utility

List NVMe Namespaces. None are listed here because I did not attach a drive to the controller yet (so I can show you hot add).
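On Ubuntu 18.04, Step 5 looks roughly like this (the PCI grep string and the package name are assumptions from my lab):

```shell
# The virtual NVMe controller appears as a PCI device in the guest
lspci 2>/dev/null | grep -i 'non-volatile' || echo "no NVMe controller visible"

# Install the NVMe management utility (Ubuntu/Debian package name)
command -v nvme >/dev/null 2>&1 || sudo apt-get install -y nvme-cli || true

# List namespaces -- this stays empty until a disk is attached to the controller
sudo nvme list 2>/dev/null || echo "nvme-cli not available on this machine"
```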

Step 6: Attach a Virtual Disk to the NVMe Controller

Step 7: Re-scan for new namespaces. Note that none are listed yet, verifying hot-add.

Identify the device ID of your NVMe Controller

Re-scan the Controller for new Namespaces
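With nvme-cli, those two steps are sketched below. /dev/nvme0 is an assumption; take the controller name from the device listing on your own VM.

```shell
# Identify the controller character device (usually /dev/nvme0)
ls /dev/nvme* 2>/dev/null || echo "no NVMe devices visible"

# Ask the controller to re-enumerate its namespaces after the hot-add
sudo nvme ns-rescan /dev/nvme0 2>/dev/null || echo "run this inside the guest"

# The new namespace should now show up
sudo nvme list 2>/dev/null || true
```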

Step 8: Create a new file system. Although some applications can use a Namespace natively, most of us will need to use a file system.

Your namespace now has a file system on it. You can mount it and use it like any other drive. If the hardware controller backing the VMDK file is NVMe, then you should see a significant performance advantage.
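For Step 8, assuming the new namespace surfaced as /dev/nvme0n1 (check with nvme list), creating and mounting a file system is the standard procedure. Wrapped in a function so nothing runs until you call it in the guest:

```shell
nvme_mkfs() {
    # Create an ext4 file system on the new namespace
    sudo mkfs.ext4 /dev/nvme0n1 &&
    # Mount it like any other block device
    sudo mkdir -p /mnt/nvme &&
    sudo mount /dev/nvme0n1 /mnt/nvme
}

# Run inside the guest:
# nvme_mkfs
```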

More information about managing NVMe Namespaces in Linux can be found here:



vCenter 6.7 Dark Theme

I recently upgraded my lab to vSphere 6.7U1, which was fairly painless compared to some of the previous migrations I’ve done. I may write a blog post on that process soon. I took good notes for a customer, so it would be quick, but I’m in no hurry because there are already plenty of blogs on that subject. There are some exciting new features in 6.7 that I will definitely be blogging about soon, though! (writing now).

Not every blog post has to be a manifesto or a super-techie deep dive. Sometimes something is just novel enough to throw out there without much content. So without further ado, I give you…. the Dark Theme.

First of all, log into the new HTML5 interface.
Things to Notice:
The Flash Web Client has not been deprecated
I have a valid SSL Certificate (Check out my SSL Security Section)

Once you’re logged in to the HTML5 interface, click on your user ID in the top right tool bar.
Then click “Switch Theme”

That’s it, Dark Theme. I was using 6.7 for a while before I knew about this feature but I prefer the look. Happy Virtualization!
