vSphere 6.7 – NVMe Disk Controller

VMware has added a lot of new media and IO related enhancements to vSphere 6.7. One of those enhancements is the Virtual NVMe Disk Controller.
Also check out my blog post on the new Persistent Memory features in 6.7. This is exciting stuff!

First of all, this disk controller does not require the underlying media to be NVMe. You can attach a VMDK file from any datastore to the NVMe controller.

I ran some basic IO tests comparing the performance of the NVMe controller vs. the Paravirtual SCSI controller. My initial results show that there isn’t a benefit to using the NVMe controller with VMDKs that aren’t backed by NVMe media. For virtual disks that are backed by NVMe media, I am seeing a significant advantage.

While neither my test system nor my benchmarks were optimized for ultra-high performance, there was a significant increase in IO with a significant reduction in disk utilization. I am going to dive deeper into this in another post and see how far I can push things. I’m also curious what I can get out of an NVMe RDM attached to the virtual NVMe controller.

Adding the NVMe controller is not difficult; however, NVMe devices are not treated the same way as SCSI devices, so there are some new considerations.

I’m using Ubuntu 18.04 for my test VM, so installing the OS-level prerequisites may be different for your operating system.

IMPORTANT NOTE: Claims about performance enhancements depend greatly on the configuration of the Hardware, Hypervisor, and Guest Operating System. I will have a high-performance blog entry coming out soon that will go into more detail.

Step 1: Upgrade VM Hardware to Version 14 (ESXi 6.7 Compatible)

After clicking through the prompts, you can see that your VM is now HW Version 14, which supports NVMe (and you may notice some other cool things I will talk about soon).

Step 2: Install Virtual NVMe Controller for the VM

Step 3: Power on the VM

Step 4: Upgrade / Install VMware Tools to the latest version. (Either the VMware Guest Tools Installer or the open-vm-tools package work fine)

Step 5: Configure your VM for NVMe Support

Verify that your VM has an NVMe Controller

Install the nvme-cli utility

List NVMe Namespaces. None are listed here because I did not attach a drive to the controller yet (so I can show you hot add).
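The Step 5 checks look something like this on my Ubuntu 18.04 guest (package names and device paths may differ on other distributions):

```shell
# Verify that the guest sees the virtual NVMe controller
lspci | grep -i "non-volatile memory"

# Install the nvme-cli utility
sudo apt-get update && sudo apt-get install -y nvme-cli

# List NVMe namespaces; with no disk attached to the controller yet,
# this returns nothing
sudo nvme list
```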

Step 6: Attach a Virtual Disk to the NVMe Controller

Step 7: Re-scan for new Namespaces (note that none are listed until the re-scan, verifying hot-add).

Identify the device ID of your NVMe Controller

Re-scan the Controller for new Namespaces
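On my guest, identifying the controller and re-scanning it look like this (`/dev/nvme0` is the typical device node for the first controller; check yours):

```shell
# Identify the NVMe controller's character device (usually /dev/nvme0)
ls /dev/nvme*

# Re-scan that controller for hot-added namespaces
sudo nvme ns-rescan /dev/nvme0

# The new namespace should now appear (e.g. /dev/nvme0n1)
sudo nvme list
```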

Step 8: Create a new file system. Although some applications can use a Namespace natively, most of us will need to use a file system.
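Creating and mounting the file system is the same as for any block device. A minimal sketch, assuming the namespace showed up as `/dev/nvme0n1` (verify with `nvme list` first):

```shell
# Create an ext4 file system on the new namespace
sudo mkfs.ext4 /dev/nvme0n1

# Mount it like any other drive
sudo mkdir -p /mnt/nvme
sudo mount /dev/nvme0n1 /mnt/nvme
```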

Your Namespace now has a file system on it. You can mount it and use it like any other drive. If the hardware backing the VMDK file is NVMe, then you should see a significant performance advantage.

More information about managing NVMe Namespaces in Linux can be found here:


vCenter 6.7 Dark Theme

I recently upgraded my lab to vSphere 6.7U1, which was fairly painless compared to some of the previous migrations I’ve done. I may write a blog post on that process soon; I took good notes for a customer, so it would be quick to write up, but I’m in no hurry because there are already plenty of blogs on that subject. There are some exciting new features in 6.7 that I will definitely be blogging about soon, though (writing now)!

Not every blog post has to be a manifesto or a super-techie deep dive. Sometimes something is just novel enough to throw out there without much content. So without further ado, I give you… the Dark Theme.

First of all, log into the new HTML5 interface.
Things to Notice:
The Flash Web Client is still available (though deprecated)
I have a valid SSL Certificate (Check out my SSL Security Section)

Once you’re logged in to the HTML5 interface, click on your user ID in the top-right toolbar.
Then click “Switch Theme”

That’s it: Dark Theme. I was using 6.7 for a while before I knew about this feature, but I prefer the look. Happy Virtualization!

How kids are bypassing Internet Content Filters and seeing pretty much anything

I remember when I was a kid set loose on a computer. It seemed like I could find my way around anything my parents or school set up to keep me out. It was just a matter of understanding how the system worked, and then you could work around it.

Systems are much more complicated and secure these days. Things like Internet content filters are centrally managed, outside the reach of users, so we have to do very clever things to bypass them. Most of the time even these clever things are blocked, and in general our users can’t access the things we don’t want them to access. I personally can usually bypass anything and access whatever I want, wherever I want, but that is a function of having decades of experience.

I was just having a conversation with my son, who attends a public middle school in our town. He told me that “kids are using cash to get to anything they want on the internet.” I assumed he meant cash, like maybe they had some sort of pay-per-access web proxy account. Not the case, though; he meant “cache,” and it’s as simple as clicking a different part of the blocked site’s hyperlink in the Google search engine.

That’s right, Google has a cached copy of the whole Internet and you can look right at it with no special tools at all. I have no idea what a “dorkmaster” is but I know that school administrators don’t want kids browsing Urban Dictionary.

By clicking on the little green down-arrow next to the hyperlink, some other options come up; in this case, Cached is the only one. Clicking on the Cached option, I am taken to a cached copy of the Urban Dictionary site, where I learn that a “dorkmaster” is pretty much the same thing as a “dungeon master” in tabletop RPG gaming.

From here I can click “Browse” and have a quick gander at the “popular” things my kids can learn at school these days.

Here is what Google has to say about its cache.

Here is the URL of the Google Cache Service

Looking at this URL, I have no idea whether it may be a requirement for other Google services that schools make use of. Is it needed for Google Drive, Google Docs, or Gmail?

I don’t know, so I’m not sure that blocking this URL is the right thing to do. But this is definitely something I am going to look into and discuss with some of my customers who operate public computers with filtering requirements. A quick search seems to indicate that these cached sites are properly categorized, so a good content filter or NGFW should be able to block them if configured properly. This approach would not prevent Google Cache from working where appropriate. A little more digging, and then I should probably have a chat with the school’s IT department.

Thoughts, comments? Do you have a good solution to this problem?

Cloud, Security, and Automation

Cloud, Security, and Automation are like three peas in a pod. They go together like peanut butter & chocolate, syrup & pancakes, or cheese & well… everything.

There is a lot of debate over what cloud means. To business leaders it tends to mean agility, intelligent frameworks, and business process automation. When they say “Cloud” they mean the fully realized promise of DevOps. Most often, well-tenured technologists hear “Cloud” and think “Managed Services” or “Hyper-scale Infrastructure as a Service.”

I can think of quite a few organizations where leadership has asked for “Cloud” and what their teams are building, while technically “Cloud,” is not what was asked for. Those projects tend to fail with nobody really understanding why. That’s why it can be really helpful to work with somebody who understands not only what “Cloud” is, but also what different people mean when they say the word. Perhaps we need a new cloud vocabulary. Like Greek, which has multiple words for love (so there is no ambiguity), perhaps we should all be much more specific about what cloud means to us.

Why Automate?

While the DevOps model and philosophy may not work for everyone, the hallmark of a successful cloud project is automation. Automation allows us to create tooling that frees its users from needing to understand what lies beneath. So instead of a queue of tasks waiting for one person with one skill set, that person templatizes their work and builds policies around the execution. The result is that anybody can now do this work programmatically, and the person who wrote the task can focus on other things. This is the concept of agility through eliminating toil. Organizations that don’t have to wait for routine tasks to complete are much more efficient, not to mention that automation frees up the time of somebody who is able to do high-value work.

Automation is Security?

Automation also brings compliance through policy. An automated task is completed the same way every time, with no missed steps or typos. Updating the automation tooling is also a good time to update process documentation. This all leads to a much more consistent and supportable environment.

Automation ideally eliminates the need for privileged access. Routine infrastructure tasks often involve sensitive processes or sensitive information. This means they need to be completed by a qualified person, and that person is one of the few with the privileged access needed to complete the tasks. Typically these people share common administrative passwords that are tightly guarded. Part of the problem here is that they become gatekeepers, and simple things can get delayed by their lack of availability. The sharing of admin and root passwords also makes for a security auditing nightmare. Sure, role-based access control can usually be configured, but that is a significant time and maintenance investment.

A good middle ground is to use one password but not allow anyone at all access to it. The password is securely stored in your automation infrastructure. Users are allowed to request processes (possibly triggering an approval workflow), and then the automation tool executes the task while logging who requested it. No need to wait on, or hand out, privileged access.
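As a minimal sketch of that pattern, assuming HashiCorp Vault as the secrets store (the post doesn’t name a specific tool, so both the tool and the secret path are illustrative):

```shell
# Hypothetical sketch: the shared admin password lives only in a secrets
# store (HashiCorp Vault assumed here); no human ever reads it.
vault kv put secret/infra/admin password="$(openssl rand -base64 24)"

# The automation platform retrieves the credential at execution time,
# runs the task, and logs who requested it; users never see the secret.
vault kv get -field=password secret/infra/admin
```

In practice the second command would be run by the automation engine’s service account, not by an end user, which is what keeps the audit trail clean.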

On-Prem, Off-Prem, DevOps, As-a-Service?

The short answer is that it doesn’t matter. The value of the “Cloud” is realized through automating tasks and providing services at a layer that abstracts the underlying platform from the end user. You can pay all the monthly fees you want, but you aren’t really going to have “Cloud” until the way those resources are consumed is programmatic.

I can help with tools that simplify some of the hyperscaler and hybrid complexities, automate tasks on or off prem, and even help with getting an on-prem infrastructure under a subscription service model. Want to know more? Please contact me.

Adventures in Consulting

It’s been about six months since I started my position at Bridge Data Solutions. While working for a tech manufacturer was fun and had its merits, I am a consultant at heart. Even though I worked someplace where I was able to be passionate about and confident in the products we sold, it got old trying to fit the same product into every situation while not having anything to add to some of the more interesting projects.

At Bridge Data, I am working in a sales capacity, but I am also my own Solutions Architect, my own Systems Engineer, and often even my own Install Tech. The different hats I’ve worn over the years have given me a broad skill set, and it’s really nice to be able to work with a customer on every aspect of their projects. I think it brings a comprehensive continuity that leads to the best outcomes.

Recently, I have been asked to head up Bridge Data’s Cloud, Security, and Automation practice. There is a synergy between these things, and the three of them are going to be more fundamental to Information Technology in the coming years than they have ever been before.

In response, we are bringing a comprehensive list of new offerings that will enable customers to reach equilibrium in the Hybrid Cloud Data Center, leverage multiple service providers, and reduce toil, all while wrapping this up in security and policy-based controls. We are also offering a Cloud Portal which will allow consumption of Cloud services while providing billing and usage insight across every Data Center or Cloud endpoint in your environment.

I am excited to be a part of this and really looking forward to helping my customers with the old challenges as well as the new.

SoftEther Episode I – Adventures in Layer 2 Tunneling

These are my adventures in Layer 2 Tunneling using SoftEther. May you find them useful!

Episode I – Adventures in Layer 2 Tunneling
Episode II – Road Warrior
Episode III – Basic Site to Site
Episode IV – Is it a Bridge? Is it a Switch?
Episode V – What about my Gateway?
Episode VI – Where to insert Layer 3

So what’s the problem? No Layer 2 connectivity between sites, and the need for a simple, fast Road Warrior VPN.

One of the biggest things that was missing from my lab was Layer 2 tunneling. “Why would you want Layer 2 connectivity between sites?” people ask and there are two answers. The first is that many of my customers have Layer 2 connections between locations and I want to be able to replicate customer environments. The second answer is because I am putting a particular focus on hybrid cloud workload portability and this feature is important in that space.

I don’t have MPLS, dark fiber, or Nexus 7k’s in my lab. The infrastructure overhead and networking costs to implement multicast and BGP on my perimeter are out of scope for a lab and whatever I do, I want it to be extensible to the cloud. So what’s the right approach?

I was looking for a novel replacement for OpenVPN Access Server and I found SoftEther. It’s Layer 2 VPN software that’s very easy to install and continues to deliver impressive features as I need them. First, it is amazingly simple for a Road Warrior setup, especially for non-static environments like a home lab. I personally use dynamic DNS so I can always find my home router, but it’s not necessary with SoftEther. With SoftEther’s built-in dynamic DNS you just register a CNAME and you can always get to your VPN server. It even supports firewall and NAT traversal, meaning that you can literally connect to wherever the server happens to be with no network configuration at all. But that’s just where this started.

SoftEther supports Site to Site Layer 2 connections. Take a look here at some of the reasons that this has not been a popular option in the past.

MTU Hell and Extra Mangling with GRE, IPSEC, and NFQUEUE
Layer 2 Tunnel with SSH Taps? Yes you can

Accomplishing a Layer 2 link that actually works well isn’t trivial, and EVPN / VXLAN are for another day. With SoftEther you just point and click using a friendly GUI or a workable CLI. You can fine-tune things and implement strong security as well.

Next Episode: Road Warrior


Thank you for stopping by.  Whether you came here deliberately, by accident, or were lured in by the smell, you are welcome.  Peruse, learn, comment, contribute, but please don’t hate.

Not all who wander are lost. — J.R.R. Tolkien
