When configuring a VSAN cluster, it is recommended to disable heartbeat datastores in your cluster. This ensures that if only the VSAN network fails, vSphere HA will still restart the VMs on another host in the cluster (more info on why heartbeat datastores should be disabled can be found in the VSAN Stretched Cluster guide here).
Now, when datastore heartbeats are disabled on your cluster, you may then see the following warning message on your hosts:
This is because vSphere HA requires a minimum of two shared datastores between all hosts in a vSphere HA enabled cluster for datastore heartbeating (more info in the following VMware KB: https://kb.vmware.com/kb/2004739).
So if the only shared storage available is VSAN, then you may want to remove this warning. To do that:
- In the vSphere Web Client, right click your cluster and select Settings
- Under vSphere HA go to Edit
- Under Advanced Options add the following Configuration Parameter: das.ignoreInsufficientHbDatastore
- For its value, enter true
- Disable then re-enable HA for your cluster to apply the changes.
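If you prefer PowerCLI over the Web Client, the steps above can be sketched as follows. This is a hedged sketch only — "MyCluster" is a placeholder for your own cluster name:

```powershell
# Hedged PowerCLI sketch — "MyCluster" is a placeholder cluster name.
$cluster = Get-Cluster -Name "MyCluster"

# Add the advanced option so HA tolerates having fewer than two heartbeat datastores
New-AdvancedSetting -Entity $cluster -Type ClusterHA `
    -Name "das.ignoreInsufficientHbDatastore" -Value "true" -Confirm:$false

# Disable then re-enable HA on the cluster to apply the change
Set-Cluster -Cluster $cluster -HAEnabled:$false -Confirm:$false
Set-Cluster -Cluster $cluster -HAEnabled:$true -Confirm:$false
```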
Of course, if you don’t want VMs to fail over to another host in your cluster in the event the VSAN network is unavailable, then you will need to configure another non-VSAN datastore to use for heartbeats.
I came across a minor but annoying problem today after performing ESXi upgrades in my lab using VUM. Each of my ESXi hosts has a local 8GB boot disk (non-flash), and at install time a local 4GB FAT32 partition was created automatically for scratch. However, for some reason, after upgrading the hosts to 6.0 U2 (from U1) this partition was removed on some of my hosts (but not all), resulting in this ugly warning message in the web client for the affected hosts:
The fix was simple. In the web client:
- Create a new VMFS datastore on the boot disk (I made it 3 GB). E.g. “esx-01b_local”
- Create a new directory on the datastore for scratch. E.g. “.locker-esx-01b”
- Change the advanced property “ScratchConfig.ConfiguredScratchLocation” to the VMFS volume that was created. E.g. “/vmfs/volumes/esx-01b_local/.locker-esx-01b”
- Reboot the host (obviously make sure you’re in maintenance mode first!).
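For reference, the last two steps can also be done from PowerCLI. A hedged sketch only — the host and datastore names below are the examples from this post, so adjust them for your environment:

```powershell
# Hedged PowerCLI sketch — host and path names match the examples above.
$vmhost = Get-VMHost -Name "esx-01b*"

# Point scratch at the directory created on the new local VMFS datastore
Get-AdvancedSetting -Entity $vmhost -Name "ScratchConfig.ConfiguredScratchLocation" |
    Set-AdvancedSetting -Value "/vmfs/volumes/esx-01b_local/.locker-esx-01b" -Confirm:$false

# Enter maintenance mode first, then reboot for the change to take effect
Set-VMHost -VMHost $vmhost -State Maintenance
Restart-VMHost -VMHost $vmhost -Confirm:$false
```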
As for why this occurred, I’ll put it down to the fact that my lab runs in a nested environment, which made the ESXi upgrade process very slow (each ESXi upgrade took almost 30 minutes to complete).
Please note that this fix applies to local non-flash storage. If you are installing ESXi on Flash/SD cards, the above fix is not recommended, because Flash and SD cards can have a limited number of read/write cycles available (and having scratch on them can wear them out pretty quickly). In that case, you should use either a separate local datastore on an HDD/SSD (but not VSAN, as VSAN is not supported for this), or a remote syslog server.
Now for the exciting conclusion on how to isolate your vSphere Replication and NFC traffic! (for Part 1 click here)
Note: You will need to modify the vSphere Replication appliances directly to configure the static networks on each new vNIC. Making these changes requires modifying config files on the appliance, which is not officially supported, so do this at your own risk!
Continue reading “Isolating vSphere Replication NFC Traffic – Part 2”
So recently I have been working on a customer engagement to automate the migration of their ESXi hosts between vCenters, with no downtime to the VMs running on the hosts. There are plenty of blog posts on the internet on how to do this and I won’t post my completed script (over 1500 lines of gross-ness), however one tricky problem I came across was how to migrate the vCenter tag assignments across with the VMs and other inventory objects. Continue reading “vCenter Tag Assignment Migrations with PowerCLI”
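As a rough illustration of the approach (not my full script), tag assignments can be exported to CSV from the source vCenter and re-created on the target once the categories and tags exist there. A hedged sketch, assuming active connections to both vCenters and matching tag/category names on the target:

```powershell
# Hedged sketch — assumes you are connected to the source vCenter first.
# Export every tag assignment as entity/category/tag name triples.
Get-TagAssignment |
    Select-Object @{N='Entity';E={$_.Entity.Name}},
                  @{N='Category';E={$_.Tag.Category.Name}},
                  @{N='Tag';E={$_.Tag.Name}} |
    Export-Csv -Path "tag-assignments.csv" -NoTypeInformation

# Then, connected to the target vCenter (with categories and tags already re-created):
Import-Csv "tag-assignments.csv" | ForEach-Object {
    $tag = Get-Tag -Name $_.Tag -Category $_.Category
    New-TagAssignment -Tag $tag -Entity (Get-Inventory -Name $_.Entity)
}
```

Note that matching inventory objects by name alone is fragile if names are not unique across the inventory, which is part of what made this tricky in practice.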
For part 2 click here!
As many of you would already know, vSphere Replication 6.0 introduced the ability to isolate the replication traffic from all other traffic in your datacentre. For many security-conscious organisations, this was a big deal: they can now ensure that replication traffic is not routed the wrong way.
This is fine for traffic between the source site and target, however what isn’t entirely clear is how to also isolate the NFC traffic between the vSphere Replication Server appliance and ESXi hosts. Continue reading “Isolating vSphere Replication NFC traffic – Part 1”
So I have finally got around to creating a tech blog site, primarily focused on my findings working as a senior consultant at VMware. Here you’ll find what will hopefully be some useful info in the world of virtualization and cloud.
Oh, and I might even throw in some other stuff too, cause that could be fun.