Installing an Azure Stack PoC in a VMware Virtual Machine

Overview

Azure Stack is offered as an HCI solution by Microsoft OEM hardware partners and is supported by Microsoft. In short, Azure Stack is an on-premises Azure region. I won’t go into detail about that here; keep an eye out for a separate article on that.

Azure Stack requires a minimum of 4 hosts with identical storage configurations. You can, however, install an Azure Stack PoC on a single host, provided it has enough resources. This is called the ASDK (Azure Stack Development Kit). There are some firm MINIMUM requirements for installing the ASDK, and they should not be neglected.

Minimum Hardware Requirements

  • Physical Computer
  • 96GB of RAM, 16 CPU cores
  • 450GB of storage across 4 physical drives.
  • Drives must be identified as SAS connected
  • Drives must be identified as NVMe, SSD, or HDD

We’re not just setting up a hypervisor here. A fully deployed ASDK includes many virtual machines, all of which run cloud fabric services. It can take a full day to deploy all of these resources, so please take the HW requirements seriously.

I happen to have a server in my lab that can provide this level of resources; however, I didn’t want to bother with reconfiguring the host outside of my vSphere cluster, so I was determined to deploy it as a VM. There are several other articles on the net about this topic, but no single one of them provided everything that I needed to be successful, and none of them that I found directly solved the last problem I had to overcome. I didn’t try this on VMware Workstation, so those tutorials may have been adequate for that purpose, but this one will cover getting the ASDK up and running on ESXi.

Disclaimer

I’m not going to go through the setup procedure step-by-step. I am assuming that you have read the documentation and are aware of the process for deploying Azure Stack. Perhaps I will do a post specifically on that but there are plenty out there already. The items below will only address overcoming the barriers, not the whole process of pre-work, staging, or actual installation steps.

Recommendations

My first recommendation is to read this article in its entirety before you start installing things. Some of the more critical items are covered last but need to be taken into account from the beginning.

Inside your ASDK Virtual Disk environment, disable the Windows Update Service. Installation of Azure Stack can take 24 hours. You don’t want Windows deciding to update itself during this time.
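A quick way to do that from an elevated PowerShell prompt inside the guest (wuauserv is the Windows Update service name):

```powershell
# Stop the Windows Update service and keep it from starting again
# during the long ASDK deployment (run from an elevated prompt).
Stop-Service -Name wuauserv -Force
Set-Service -Name wuauserv -StartupType Disabled

# Confirm it is stopped and disabled
Get-Service -Name wuauserv | Select-Object Status, StartType
```

Remember to set it back to Manual or Automatic once the deployment completes.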

I do recommend following the resource recommendations. If you are able, use RDMs or VMDKs that aren’t sharing a drive. If you’ve got SSDs or SAN-backed storage with plenty of performance, that is fine too; just keep in mind that there will be a nested hypervisor and a lot of nested VMs, all of which will need resources to perform adequately. You don’t have to have a full 96GB of RAM or 16 cores, but you want to. Give it more if you can.

Last, I recommend that you research anything that doesn’t make sense at first. I usually go into detail and fully explain things, but I’m short on time today. If you don’t know how to do these things, look them up; nothing here is mysterious or proprietary. If you can’t find a file or setting referenced here, first familiarize yourself with the whole installation process using the most recent guides.

I’m also not going to specify line numbers for modifying files because they change somewhat frequently.

Requirement 1 – Physical Computer

Yeah, it checks. I didn’t find a satisfactory way to hide from the installer the fact that the installation was inside a virtual machine. The trick is to modify the installation scripts to disregard the fact that it’s installing in a VM.

  • Enable CPU virtualization on the VM. You will need to run a nested hypervisor, so don’t forget this step.
  • Modify the asdk-installer.ps1 script. Search for “Physical” and identify the code block with an if statement checking whether this is a physical host. Comment out the block or modify it in your own way to make sure that installation continues if the host is detected as a VM.
  • Note: this file doesn’t exist until you have already failed an install or manually pre-staged all of the dependencies. Inside the ASDK boot VMDK, modify C:\CloudDeployment\Roles\PhysicalMachines\Tests\BareMetal.Tests.ps1. The same approach applies here, and there are tutorials on this already. Find the check blocking installation on a VM and alter it; just change the $true/$false flag in the if statement.
  • Note: I have read that HW Version 11 is required. I have not tested with prior versions.
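For illustration, the check you are looking for follows this general shape. This is a hypothetical sketch, not code copied from the installer; the variable names and error message are made up, and the real code changes between ASDK builds:

```powershell
# Hypothetical shape of the physical-host check in asdk-installer.ps1 /
# BareMetal.Tests.ps1 -- illustrative only, not the installer's actual code.
$model = (Get-WmiObject -Class Win32_ComputerSystem).Model
$isVirtual = $model -match 'Virtual'

# Original intent: abort when running inside a VM
# if ($isVirtual) { throw "Deployment on a virtual machine is not supported." }

# Bypassed: force the condition false so deployment continues
if ($false) { throw "Deployment on a virtual machine is not supported." }
```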

After performing these steps, installation should continue on a virtual machine. You can always rerun InstallAzureStackPoC.ps1 with the -rerun flag, and the install will pick up where it left off, provided there are no truly serious errors.

Requirement 2 – Resource Minimums

I do recommend meeting the minimum resource requirements. Yes, I know, broken record; there are reasons! But we can’t always make that a reality, so you can alter the minimum resource requirements in these files. Note: sometimes these paths change.

  • C:\CloudDeployment\Configuration\Roles\Infrastructure\BareMetal\OneNodeRole.xml
  • C:\CloudDeployment\Configuration\Roles\Fabric\VirtualMachines\OneNodeRole.xml

Requirement 3 – Physical Hard Drives

The physical hard drive requirement isn’t a performance issue. The drives will be used to create a Storage Spaces Direct storage pool, so they will need to pass the Failover Cluster health checks for Storage Spaces Direct, which include a check for multi-writer capability. There are also drive minimums for Storage Spaces Direct. If you provide 8 devices with 2TB of storage, you will get a resilient storage pool; otherwise you will get a non-resilient storage pool (no parity or mirroring). If you have a mix of HDD and SSD, then provide a couple of small SSD devices (minimum of 2) as a cache tier.

  • Use RDMs or make your VMDK files Eager Zeroed Thick (there is no way around this)
  • Do not enable VirtualSSD options if the devices are HDD
  • Select “Multi-Writer” sharing for the VMDK files
  • Use the LSI Logic SAS controller
  • Select Virtual SCSI Bus Sharing

This will allow your disks to pass the Cluster Validation checks.

Requirement 4 – SAS Bus

If you’re using the LSI Logic SAS Virtual SCSI Adapter then this is a non-issue. I don’t recommend fighting with the other device types.

This will probably not apply to a virtual environment, but for completeness, you can add acceptable bus types here (you must still conform to Storage Spaces Direct requirements). Note: the original instructions are deprecated and should be a non-issue for vSphere, so I have removed them. I may update this later.

Requirement 5 – Drives must be identified as NVMe, SSD, or HDD

In my ESXi 6.7 environment, SSDs are properly identified in the Windows guest as SSD drives. HDDs are detected as SSD or “unspecified.” If HDDs are improperly detected as SSDs, there can be a performance issue, as Storage Spaces Direct is an intelligent storage system and may try to use native SSD command sets. Unspecified drive types are not allowed and will fail the health checks. Note: if you have all SSDs and they are all detected as SSDs, then this section is probably unnecessary.

Here is where this gets problematic. As of this post, we can manually specify the media type of a drive using PowerShell. However, that cannot be done until the disk is a member of a non-primordial storage pool, and as soon as drives are removed from a storage pool, they lose their custom attributes. This creates a chicken-and-egg dilemma. Luckily, if we create the proper storage pool for the ASDK installer ourselves, it will use it and successfully install. The trick here is just knowing what to call it.

  • Create a storage pool called SU1_Pool and include all of your drives
  • Make sure that all HDDs are properly marked as HDDs and all SSDs are properly marked as SSDs
  • Change any FriendlyNames to your preference
  • See this guide here for examples: Managing Storage Spaces Direct with PowerShell
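As a rough sketch, the pre-staging could look like the following. This assumes your poolable disks are visible to the guest and that the installer expects the pool name SU1_Pool; the blanket HDD re-typing at the end is just an example, so adjust it for your mix of media:

```powershell
# Collect the disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create the storage pool under the name the ASDK installer uses
New-StoragePool -FriendlyName SU1_Pool `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Now that the disks live in a non-primordial pool, their media type
# can be corrected. Example: mark everything that is not an SSD as HDD.
Get-StoragePool -FriendlyName SU1_Pool | Get-PhysicalDisk |
    Where-Object MediaType -ne 'SSD' |
    Set-PhysicalDisk -MediaType HDD
```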

This should be enough to get you through any of the hurdles that I hit along the way. Feel free to ping me with questions or for assistance.

How to Manage Storage Spaces Direct with PowerShell

Mini Guide

I need to get some examples documented here as reference for another post so this will be skinny but I will circle back and make it more complete later.

Getting Physical Disk Information

Get-PhysicalDisk
Get-PhysicalDisk | fl *Friendly*,*Media*,*Size*,*Serial*

Changing Physical Disk Properties

Note that Physical Disk properties can only be changed for disks that are in a non-Primordial storage pool. Once the disks are removed from a storage pool, the manually assigned properties revert.

$disk = Get-PhysicalDisk -SerialNumber 600029212*
Set-PhysicalDisk -InputObject $disk -NewFriendlyName HDD1 -MediaType HDD

Persistent Memory 101

I’ve written a few guides for Persistent Memory recently and slipped in bits and pieces of information here and there. I decided to consolidate the little things, like nomenclature, in one place. So if something in another post isn’t clear just because you haven’t read extensively, the answers should be here. Moving those things here will make them easier to find and keep them consistent. Hopefully it’s also less distracting from the content in the other posts.

Persistent Memory is different from traditional non-volatile storage in that it is actual memory-addressable memory. We’re talking about DIMMs located in DIMM sockets on a motherboard. This is actual DRAM that has an added mechanism or mechanisms for making it persistent across reboots and power outages.

While being incredibly fast, RAM does not easily lend itself to being a storage medium. Most applications function by accessing a block device, not a memory region, and these constructs just aren’t available without some assistance. DIMMs are not inherently all that serviceable, either. If you have a pool of RAM and one stick goes bad, how can this be tolerated? How can I perform a hot-memory replacement with minimal impact? To address all of these things, CPU and motherboard manufacturers have had to extend some specifications in order to do things like partition and group the DIMMs in intelligent and/or configurable ways.

Operating Systems have had to add device types and features to support an array of different access methods. Do we want to access our Persistent Memory as a traditional block device, a PMEM aware block device, or a character device that applications can access natively? All of these are possible but require different configuration steps and have different performance profiles.

Persistent Memory Nomenclature

PMEM
PMEM refers to DRAM memory spaces that are backed by a persistence mechanism, such as a battery and/or directly attached NAND used for de-staging.

NVDIMM
An NVDIMM is the physical component of Persistent Memory. NVDIMMs fit into a RAM socket on an NVDIMM compatible motherboard.

Region
A PMEM Region is a logical unit of PMEM. A region could be a single NVDIMM or a partition on a single NVDIMM. It could be all of the space on all of your NVDIMMs collectively, or it could be a partition sliced across multiple NVDIMMs. As a gross generalization, Regions are configured in BIOS/EFI and are constructed before the OS boots. I like to think of NVDIMMs as physical disks and a Region as a Logical Drive.

LIBNVDIMM
The Linux Kernel PMEM driver. This library is required for initializing NVDIMM Regions, and constructing Namespaces.

Namespace
Much like NVMe, PMEM makes use of Namespaces. A Namespace can be an entire Region or a piece of a Region. The Namespace is the basic construct the Operating System works against.

Page Cache or Buffer Cache
Read-after-write cache where blocks written to persistent media are tracked and cached in memory. This speeds up IO for disk-based media but is unnecessary for PMEM and can actually slow things down. When PMEM is used as a block device hosting a file system, the page cache is in use.

DAX
Capability that allows a file system to bypass the page cache and write directly to PMEM. Currently EXT4 and XFS have DAX support if mounted with the dax mount option.

vPMem
VMware nomenclature for passing NVDIMM directly to a VM. When using vPMEM, PMEM capacity is passed directly through to a VM as a virtual NVDIMM. The guest Operating System must support NVDIMMs.

vPMemDisk
VMware nomenclature for presenting PMEM capacity to a VM as a vmdk file connected to a virtual SCSI controller.

ndctl
User-space CLI tool for configuring PMEM namespaces.

PMDK
The Persistent Memory Development Kit provides additional tools and libraries for managing PMEM.

Character Device
Devices where the driver communicates by sending and receiving a single character at a time rather than a whole block of data. This is the type of device used by applications with native PMEM support.
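To tie the terms above together, a rough end-to-end flow on a Linux host with real NVDIMM hardware might look like this; the region and device names region0 and /dev/pmem0 are assumptions, so adjust for your system (it obviously cannot run without PMEM hardware):

```shell
# Create a namespace in fsdax mode on region0; this exposes /dev/pmem0
ndctl create-namespace --region=region0 --mode=fsdax

# Put a DAX-capable file system on the PMEM block device
mkfs.ext4 /dev/pmem0

# Mount with the dax option so the page cache is bypassed
mkdir -p /mnt/pmem
mount -o dax /dev/pmem0 /mnt/pmem

# Confirm dax shows up in the mount options
mount | grep /mnt/pmem
```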

How to break a hung lock in NetApp ONTAP 9

If the reason you need to close locked files is to stop the whole CIFS server, there are different instructions.  If you are just trying to recover a file with a stuck lock, this should help.

If you are familiar with Windows file servers, the NetApp CIFS server works the same way.  You can connect to it with the Shared Folders MMC snap-in in Windows.  The user you are logged into Windows as needs Administrator rights on the file share.

Start->Run->mmc

In the MMC

File->Add/Remove Snap-In->Shared Folders->Add

Select “Another Computer” and enter the name of the vServer.

You can leave “All” selected.

Click “Okay”

Navigate to “Open Files”

Find the file that is locked

Right-Click on the file and choose “Close Open File”

This will fix a locked file issue most of the time.

To close the file in the ONTAP Command Line, it is a lot more complicated.

You need to know the vServer Name (file server)

You need to know the Volume Name (usually the share name)

It helps to know the whole path to the file.

Login to the ONTAP Command Line

To show ALL locks (lots of output)

vserver locks show -protocol cifs

To show all locks on one vserver

vserver locks show -protocol cifs -vserver [vservername]

To show all locks on a specific volume and or path

vserver locks show -protocol cifs -vserver [vservername] -volume [volumeName] -path [ontapPathToFile]

Example

vserver locks show -vserver wdl-svm-management -volume wdl_files

wdl-ontap1::> vserver locks show -vserver wdl-svm-management -volume wdl_files  -protocol cifs                                                       
Vserver: wdl-svm-management
Volume   Object Path               LIF         Protocol  Lock Type   Client
-------- ------------------------- ----------- --------- ----------- ----------
wdl_files
         /wdl_files/               wdl-svm-management_cifs_lif1
                                               cifs      share-level 10.0.10.210
                Sharelock Mode: read-deny_none
                                                                     10.0.10.209
                Sharelock Mode: read-deny_none
         /wdl_files/home/Administrator/Security/Certificates/CAs
                                   wdl-svm-management_cifs_lif1
                                               cifs      share-level 10.0.10.209
                Sharelock Mode: read-deny_none
         /wdl_files/home/Administrator/Security/Certificates/CAs/LAB-PDX-DC-01-CA
                                   wdl-svm-management_cifs_lif1
                                               cifs      share-level 10.0.10.209
                Sharelock Mode: read-deny_none
4 entries were displayed.

To break a lock, use the break command and the full path from the output above

vserver locks break -vserver [vservername] -volume [volumename] -path [full_path]

Example:

wdl-ontap1::> vserver locks break -vserver wdl-svm-management -volume wdl_files -path /wdl_files/home/Administrator/Security/Certificates/CAs/LAB-PDX-DC-01-CA

Warning: Breaking file locks can cause applications to become unsynchronized and may lead to data corruption.
Do you want to continue? {y|n}: y
1 entry was acted on.

How to include the whole Certificate Chain in a PEM SSL Certificate

There are a few reasons that your application server might require access to a full certificate chain.  In most cases we are uploading and importing certificates in PEM format.  For the purposes of this article, we will consider PEM, X.509, and Base64 synonymous.  They are overlapping standards (think JSON vs YAML); different tools in the same process chain will refer to the same data by each of these names, so for this article just think of them as the same thing.  With all this in mind, when given the choice, choose Base64 as your export format.

If you have certificates or key files that are not in PEM format then you may need to convert them.  This is pretty simple using OpenSSL.  If you are doing a lot with SSL, make sure you have OpenSSL configured on your security workstation.  I may show examples of using OpenSSL, but documenting its use is out of scope for this article.

Some nomenclature:
Root Certificate Authority:  The top level of the certificate signing chain.  (Often kept offline for security purposes)
Trusted Root Authority:  A CA that has been configured as “Trusted” on an SSL client.  It doesn’t matter if a cert is signed, or by whom, if the client doesn’t trust the source.
Intermediate / Subordinate / Signing Authority:  A Certificate Authority which is authorized by a higher-level authority to sign certificates.  There can be multiple levels of Authorities.
Certificate Signing Request (CSR):  A request generated by a user or application, encoded with the host details required by the certificate.  A private key is also generated at the time a CSR is created.
Certificate Key:  A Private Key file (usually encrypted) that is required to unlock an SSL certificate for use.

Certificate: A PEM formatted SSL certificate text looks like this:

-----BEGIN CERTIFICATE-----
MIIDkDCCAnigAwIBAgIQTuVOyQrH5olB+fnG7NW1VjANBgkqhkiG9w0BAQsFADBHMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxEzARBgoJkiaJk/IsZAEZFgNsYWIxGTAXBgNVBAMTEGxhYi1QRFgtREMtMDEtQ0EwHhcNMTkwMjIwMTcwODE4WhcNMzkwMjIwMTcxODE4WjBHMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxEzARBgoJkiaJk/IsZAEZFgNsYWIxGTAXBgNVBAMTEGxhYi1QRFgtREMtMDEtQ0EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCwH8y2AFprKxti31lkPb0SCSyTPqE8ifusCLRYMXVwquUDASxcxBam9Ulwt3vVJ5ZW56pBF2R3pbN+BZXGheo1Zb+RWBJqr45O14NjTRTtdhqrE2Xfs0cye7
-----END CERTIFICATE-----

There, with all of that out of the way: your application has requested that the certificate you provide contain the entire signing chain.  So what do you do?  In some cases you might be asked to supply the certificate and the chain separately; in that case, you will still need to build the chain.  In most cases, you will be asked to provide the certificate and the chain in one PEM certificate file.

First you need to identify your certificate chain.  You can sometimes download the whole chain from your CA; that chain may or may not be in PEM format and may need to be converted using OpenSSL.  You may have an easier method to get YOUR chain, but I’ll show how to build the chain by hand.


Above we see the certificate chain for the SSL certificate issued for mysite.lab.local. The certificate was signed by lab-WDL-DC1-CA, which is subordinate to lab-PDX-DC-01-CA. You can also call lab-WDL-DC1-CA an Intermediate CA.

Most of the time, an application like a web server will only need the certificate itself and the associated private key file. Sometimes the application will require a full chain. There are different reasons. The SSL certificate might be used for bi-directional communication and needs the full chain so it knows to trust other servers signed in the chain. Or the application might act as a signing authority itself and needs knowledge of the whole chain.

In any case, if you have to provide the whole chain, you are generally only given the option of uploading one PEM file. In that case, you will want to structure it in this way.

-----BEGIN CERTIFICATE-----
If you are including the server cert in the chain, it goes here
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
The last CA in the chain (the one that signed the server cert) goes here
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Intermediate / Subordinate CAs go here, one after the other, in ascending order
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
The Root CA certificate goes here
-----END CERTIFICATE-----

So based on the image of the certificate chain above, a valid chain including the certificate would look like this.

-----BEGIN CERTIFICATE-----
MIIF1TCCBL2gAwIBAgITcQAAACz2nO0ua9rYBwABAAAALDANBgkqhkiG9w0BAQsFADBHMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxEzARBgoJkiaJk/IsZAEZFgNsYWIxGTAXBgNVBAMTEGxhYi1QRFgtREMtMDEtQ0EwHhcNMTkwMzA3MjMyMTMwWhcNMjEwMzA2MjMyMTMwWjCBjzELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAldBMQwwCgYDVQQHzi7KK5j6hL4/fvccfbcjdB3TEwECtOmMVIZuycdslGs90ET9WxxOqsheQY0rUCL6hxD+gAAAAAAAAAJQVv/+qnW2hwQKAApEgghsYWItb2N1bYISbGFiLW9jdWcnZpY2VzLENOPUNvbmZpZ3VyYXRpb24sREM9bGFiLERDPWxvY2FsP2NBQ
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Tj1sYWItUERYLURDLTAxLUNBKDEpLENOPXBkeC1kYy0wMSxDTj1DRFAsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2VydmljZXMsQ049Q29uZmlndXJhdGlvbixEQz1sYWIsREM9bG9jYWw/Y2VydGlmaWNhdGVSZXZvY2F0aW9uTGlzdD9iYXNlP29iamVjdENsYXNzPWNSTERpc3RyaWJ1dGlvblBvaW50MIHABggrBgEFBQcBAQSBszCBsDCBrQYIKwYBBQUHMAKGgaBsZGFwOi8vL0NOPWxhYi1QRFgtREMtMDEtQ0EsQ049QUlBLENOPVB1YmxpYyUyMEtleSUyMFNlcnZpY2VzLENOPVNlcnZpY2VzLENOPUNvbmZpZ3VyYXRpb24sREM9bGFiLERDPWxvY2FsP2NBQ2VydGlmaWNhdGU/YmFzPAOI6gOgCWA8D9u677tURcgQfXuYOnve
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDkDCCAnigAwIBAgIQTuVOyQrH5olB+fnG7NW1VjANBgkqhkiG9w0BAQsFADBHMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxEzARBgoJkiaJk/IsZAEZFgNsYWIcxeLNihMSOLARu5/1gUZgAPucZJWvIRYBP9LOcjTUJPxvkX9pcFzswtzmdSU3sa7vr0lJhpA==ENsYXNzPWNSTERpc3RyaWJ1dGlvblBvaW50MIHABggrBgEFBQcBAQSBszCBsDCBrQYIKwYBBQUHMAKGgaBsZGFwOi8vL0NOPWxhYi1QRFgtREMtMDEtQ0EsQ049QUlBLENOPVB1YmxpYyUyMEtleSUyMFNlcnZpY2VzLENOPVNlcnZpY2VzLENOPUNvbmZpZ3VyYXRpb24sREM9bGFiLERDPWxvY2FsP2NBQ2VydGlmaWNhdGU/Y
-----END CERTIFICATE-----
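If you want to sanity-check a chain you have assembled, OpenSSL can build and verify a throwaway three-tier chain. Everything below is a self-contained demo with made-up subject names (Demo Root CA, mysite.lab.local), not your real CA; it needs OpenSSL 1.1.1 or newer for the -addext flag:

```shell
# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj "/CN=Demo Root CA" -days 2 -addext "basicConstraints=critical,CA:TRUE"

# 2. Intermediate CA signed by the root (it must also be marked CA:TRUE)
printf "basicConstraints=critical,CA:TRUE\n" > ca.ext
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Demo Intermediate CA"
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -out int.crt -days 2 -extfile ca.ext

# 3. Leaf certificate signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=mysite.lab.local"
openssl x509 -req -in leaf.csr -CA int.crt -CAkey int.key -CAcreateserial \
  -out leaf.crt -days 2

# 4. Assemble the full-chain PEM: leaf first, then intermediate, then root
cat leaf.crt int.crt root.crt > fullchain.pem

# 5. Verify the leaf against the chain; prints "leaf.crt: OK" on success
openssl verify -CAfile root.crt -untrusted int.crt leaf.crt
```

The order in step 4 matches the template above: server certificate, issuing CA, then up the chain to the root.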

Adventures in Consulting

It’s been about 6 months since I started my position at Bridge Data Solutions. While working for a tech manufacturer was fun and had its merits, I am a consultant at heart. Even though I worked someplace where I was able to be passionate about and confident in the products we sold, it did get old working to fit the same product into every situation while also not having anything to add to some of the more interesting projects.

At Bridge Data, I am working in a sales capacity, but I am also my own Solutions Architect, my own Systems Engineer, and often even my own install tech. The different hats I’ve worn over the years have given me a broad skill set, and it’s really nice to be able to work with a customer on every aspect of their projects. I think it brings a comprehensive continuity that leads to the best outcomes.

Recently, I have been asked to head up Bridge Data’s Cloud, Security, and Automation practice. There is a synergy between these things, and the three of them are going to be more fundamental to Information Technology in the coming years than they ever have been before.

In response, we are bringing a comprehensive list of new offerings that will enable customers to reach equilibrium in the hybrid cloud data center, leverage multiple service providers, and reduce toil, all while wrapping this up in security and policy-based controls. We are also offering a Cloud Portal which allows consumption of cloud services while providing billing and usage insight across every data center or cloud endpoint in your environment.

I am excited to be a part of this and really looking forward to helping my customers with the old challenges as well as the new.

SoftEther Episode I – Adventures in Layer 2 Tunneling

These are my adventures in Layer 2 Tunneling using SoftEther. May you find them useful!

Episode I – Adventures in Layer 2 Tunneling
Episode II – Road Warrior
Episode III – Basic Site to Site
Episode IV – Is it a Bridge? Is it a Switch?
Episode V – What about my Gateway?
Episode VI – Where to insert Layer 3

So what’s the problem? No Layer 2 connectivity between sites, and the need for a simple, fast Road Warrior VPN.

One of the biggest things that was missing from my lab was Layer 2 tunneling. “Why would you want Layer 2 connectivity between sites?” people ask and there are two answers. The first is that many of my customers have Layer 2 connections between locations and I want to be able to replicate customer environments. The second answer is because I am putting a particular focus on hybrid cloud workload portability and this feature is important in that space.

I don’t have MPLS, dark fiber, or Nexus 7k’s in my lab. The infrastructure overhead and networking costs to implement multicast and BGP on my perimeter are out of scope for a lab and whatever I do, I want it to be extensible to the cloud. So what’s the right approach?

I was looking for a novel replacement for OpenVPN Access Server and I found SoftEther. It’s Layer 2 VPN software that’s very easy to install and continues to deliver impressive features as I need them. First, it is amazingly simple for a Road Warrior setup, especially for non-static environments like a home lab. I personally use dynamic DNS so I can always find my home router, but that’s not necessary with SoftEther: with SoftEther’s dynamic DNS, you just register a CNAME with SoftEther.net and you can always get to your VPN server. It even supports firewall and NAT traversal, meaning that you can literally connect to wherever the server happens to be with no network configuration at all. But that’s just where this started.

SoftEther supports Site to Site Layer 2 connections. Take a look here at some of the reasons that this has not been a popular option in the past.

MTU Hell and Extra Mangling with GRE, IPSEC, and NFQUEUE
Layer 2 Tunnel with SSH Taps? Yes you can

Accomplishing a Layer 2 link that actually works well isn’t trivial, and EVPN/VXLAN are for another day. With SoftEther you just point and click using a friendly GUI or a workable CLI. You can fine-tune things and implement strong security as well.

Next Episode: Road Warrior

Welcome

Thank you for stopping by.  Whether you came here deliberately, by accident, or were lured in by the smell, you are welcome.  Peruse, learn, comment, contribute, but please don’t hate.

Not all who wander are lost. — J.R.R. Tolkien
