Installing an Azure Stack PoC in a VMware Virtual Machine

Overview

Azure Stack is offered as a hyperconverged (HCI) solution by Microsoft OEM hardware partners and is supported by Microsoft. In short, Azure Stack is an on-premises Azure region. I won’t go into detail about that here; keep an eye out for a separate article on the subject.

Azure Stack requires a minimum of 4 hosts with identical storage configurations. You can, however, install an Azure Stack PoC on a single host provided that it has enough resources. This is called the Azure Stack Development Kit (ASDK). There are some firm MINIMUM requirements for installing the ASDK, and they should not be neglected.

Minimum Hardware Requirements

  • Physical Computer
  • 96GB of RAM, 16 CPU cores
  • 450GB of storage across 4 physical drives.
  • Drives must be identified as SAS connected
  • Drives must be identified as NVMe, SSD, or HDD

We’re not just setting up a hypervisor here. A fully deployed ASDK includes many virtual machines, all of which run cloud fabric services. It can take a full day to deploy all of these resources, so please, take the HW requirements seriously.

I happen to have a server in my lab that can provide this level of resources; however, I didn’t want to bother with reconfiguring the host outside of my vSphere cluster, so I was determined to deploy the ASDK as a VM. There are several other articles on the net about this topic, but no single one of them provided everything that I needed to be successful, and none that I found directly solved the last problem I had to overcome. I didn’t try this on VMware Workstation, so those tutorials may be adequate for that purpose; this one covers getting the ASDK up and running on ESXi.

Disclaimer

I’m not going to go through the setup procedure step by step. I am assuming that you have read the documentation and are aware of the process for deploying Azure Stack. Perhaps I will do a post specifically on that, but there are plenty out there already. The items below only address overcoming the barriers, not the whole process of pre-work, staging, or the actual installation steps.

Recommendations

My first recommendation is to read this article in its entirety before you start installing things. Some of the more critical items are covered last but need to be taken into account from the beginning.

Inside your ASDK guest environment, disable the Windows Update service. Installation of Azure Stack can take 24 hours, and you don’t want Windows deciding to update itself during that time.
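A minimal way to do that from an elevated PowerShell prompt inside the guest (wuauserv is the Windows Update service):

    # Stop the Windows Update service and prevent it from starting mid-deployment
    Stop-Service -Name wuauserv -Force
    Set-Service -Name wuauserv -StartupType Disabled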

I do recommend following the resource recommendations. If you are able, use RDMs or VMDKs that aren’t sharing a drive. If you’ve got SSDs or SAN-backed storage with plenty of performance, that is fine too; just keep in mind that there will be a nested hypervisor and a lot of nested VMs, all of which will need resources to perform adequately. You don’t have to have a full 96GB of RAM or 16 cores, but you want to. Give it more if you can.

Last, I recommend that you research anything that doesn’t make immediate sense. I usually go into detail and fully explain things, but I’m short on time today. If you don’t know how to do these things, look them up; nothing here is mysterious or proprietary. If you can’t find a file or setting referenced here, first familiarize yourself with the whole installation process using the most recent guides.

I’m also not going to specify line numbers for modifying files because they change somewhat frequently.

Requirement 1 – Physical Computer

Yes, the installer checks for this. I didn’t find a satisfactory way to hide from the installer the fact that the installation was running inside a virtual machine. The trick is to modify the installation scripts to disregard the fact that it’s installing in a VM.

  • Enable CPU virtualization on the VM (in vSphere, “Expose hardware assisted virtualization to the guest OS”). You will need to run a nested hypervisor, so don’t forget this step
  • Modify the asdk-installer.ps1 script. Search for “Physical” and identify the code block with an if statement referring to this being a physical host. Comment out the block, or modify it in your own way, to make sure that installation continues if the host is detected as a VM (see the sketch after this list)
  • Note: this next file doesn’t exist until you have already failed an install or have manually pre-staged all of the dependencies. Inside the ASDK boot VMDK, modify C:\CloudDeployment\Roles\PhysicalMachines\Tests\BareMetal.Tests.ps1. The same approach applies here, and there are tutorials on this already: find the check blocking installation on a VM and alter it. Just change the $true/$false flag in the if statement
  • Note: I have read that HW Version 11 is required. I have not tested with earlier versions
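I won’t reproduce the exact code since it changes between releases, but the checks in both files have roughly the shape below. The variable name and message here are hypothetical; search for “Physical” or “Virtual” to find the real block in your build.

    # Hypothetical shape of the physical-host check; names and messages
    # differ between ASDK releases.
    $isVirtualMachine = (Get-CimInstance Win32_ComputerSystem).Model -match 'Virtual'
    if ($isVirtualMachine) {
        throw 'Deployment must run on a physical machine.'
    }

    # To proceed inside a VM, comment the block out or force the condition:
    # if ($false) { throw 'Deployment must run on a physical machine.' }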

After performing these steps, installation should continue on a virtual machine. You can always rerun InstallAzureStackPOC.ps1 with the -Rerun flag, and the install will pick up where it left off, provided there are no truly serious errors.
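For example, from an elevated PowerShell prompt (in recent builds the script lives under C:\CloudDeployment\Setup):

    # Resume a failed deployment from the last successful step
    cd C:\CloudDeployment\Setup
    .\InstallAzureStackPOC.ps1 -Rerun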

Requirement 2 – Resource Minimums

I do recommend having the minimum amount of resources. Yes, I know, broken record; there are reasons! But we can’t always make that a reality, so you can alter the minimum resource requirements in these files. Note: sometimes these paths change.

  • C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml
  • C:\CloudDeployment\Configuration\Roles\Fabric\VirtualMachines\OneNodeRole.xml
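I won’t list the exact XML elements since they move around between releases, but you can locate the relevant entries quickly with something like the snippet below. The search pattern is a guess at the element names, so adjust it to what you actually find in your build.

    # Find resource-minimum entries in the role definition files
    Select-String -Pattern 'Memory|Core|Disk' -Path `
        'C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml',
        'C:\CloudDeployment\Configuration\Roles\Fabric\VirtualMachines\OneNodeRole.xml'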

Requirement 3 – Physical Hard Drives

The physical hard drive requirement isn’t a performance issue. The drives will be used to build a Storage Spaces Direct storage pool, so they will need to pass the Failover Cluster health checks for Storage Spaces Direct, which include a check for multi-writer capability. There are also drive minimums for Storage Spaces Direct: if you provide 8 devices with 2TB of storage, you will get a resilient storage pool; otherwise you will get a non-resilient storage pool (no parity or mirroring). If you have a mix of HDD and SSD, then provide a couple of small SSD devices (a minimum of 2) as a cache tier.

  • Use RDMs or make your VMDK files Eager Zeroed Thick (there is no way around this)
  • Do not enable the virtual SSD option if the devices are HDDs
  • Select “Multi-Writer” sharing for the VMDK files
  • Use the LSI Logic SAS controller
  • Set the controller’s SCSI Bus Sharing to Virtual

This will allow your disks to pass the cluster validation checks.
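If you prefer to script those settings, here is a rough PowerCLI sketch. The vCenter address, VM name, and disk size are placeholders, and I configured mine through the vSphere client, so treat this as a starting point rather than a turnkey script.

    # Rough PowerCLI sketch of the disk/controller settings above.
    Connect-VIServer -Server 'vcenter.lab.local'   # hypothetical vCenter
    $vm = Get-VM -Name 'ASDK'                      # hypothetical VM name

    # Eager-zeroed data disk on an LSI Logic SAS controller with
    # Virtual SCSI bus sharing
    $disk = New-HardDisk -VM $vm -CapacityGB 256 -StorageFormat EagerZeroedThick
    New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS -BusSharingMode Virtual

    # Flag the disk as multi-writer through the vSphere API
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $change = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $change.Operation = 'edit'
    $change.Device = $disk.ExtensionData
    $change.Device.Backing.Sharing = 'sharingMultiWriter'
    $spec.DeviceChange = @($change)
    $vm.ExtensionData.ReconfigVM($spec)

Repeat the disk steps for each of the 4 data drives.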

Requirement 4 – SAS Bus

If you’re using the LSI Logic SAS virtual SCSI adapter, then this is a non-issue. I don’t recommend fighting with the other device types.

This will probably not apply to a virtual environment, but for completeness: there is a file in which you can add acceptable bus types (you must still conform to Storage Spaces Direct requirements). Note: those instructions are deprecated and should be a non-issue for vSphere, so I have removed them. I may update this later.

Requirement 5 – Drives must be identified as NVMe, SSD, or HDD

In my ESXi 6.7 environment, SSDs are properly identified in the Windows guest as SSD drives, while HDDs are detected as SSD or “Unspecified.” If HDDs are improperly detected as SSDs, there seems to be a performance issue, as Storage Spaces Direct is an intelligent storage system and may try to use native SSD command sets. Unspecified drive types are not allowed and fail the health checks. Note: if all of your drives are SSDs and they are all detected as SSDs, then this section is probably unnecessary.

Here is where this gets problematic. As of this post, we can manually specify the media type of a drive using PowerShell; however, that cannot be done until the disk is a member of a non-primordial storage pool, and as soon as disks are removed from a storage pool, they lose their custom attributes. This creates a chicken-and-egg dilemma. Luckily, if we create the proper storage pool for the ASDK installer ourselves, it will use it and install successfully. The trick here is just knowing what to call it.

  • Create a storage pool called SU1_Pool and include all of your drives
  • Make sure that all HDDs are properly marked as HDDs and all SSDs are properly marked as SSDs
  • Change any FriendlyNames to your preference
  • See this guide for examples: Managing Storage Spaces Direct with PowerShell
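A minimal sketch of that pool creation (this assumes all poolable disks on the host belong to the ASDK, and the HDD fix-up at the end is only needed for drives that come in as “Unspecified”):

    # Create the pool the ASDK installer expects, then fix up media types.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName 'SU1_Pool' `
                    -StorageSubSystemFriendlyName 'Windows Storage*' `
                    -PhysicalDisks $disks

    # Media types can only be set once the disks are in a non-primordial pool
    Get-StoragePool -FriendlyName 'SU1_Pool' | Get-PhysicalDisk |
        Where-Object MediaType -ne 'SSD' |
        ForEach-Object { Set-PhysicalDisk -UniqueId $_.UniqueId -MediaType HDD }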

This should be enough to get you through any of the hurdles that I hit along the way. Feel free to ping me with questions or for assistance.
