HP LeftHand OS 11.0 delayed

It seems HP has delayed the release of LeftHand OS 11.0 to the 30th of November, as stated on their StoreVirtual page:


So, some more days to wait. I want vSphere 5.5 support! =]

HP LeftHand OS 11.0 available NOW!

Today I noticed that LeftHand OS 11.0 has been released!

Strangely, I have not received an e-mail notifying me of the new version, nor have I seen any tweets about it. Maybe I missed them, or I am among the first to notice the available downloads.

Check out the following page for the available downloads at hp.com:

Some notes from that page:

  • LeftHand OS 11.0 and the Storage Replication Adapter (SRA) version 11.0 for VMware Site Recovery Manager (SRM) 5.0.1 and 5.1 are not yet VMware certified. Once certified, the SRA 11.0 will support LeftHand OS 10.0 and higher. The certification status will be reflected on the StoreVirtual Compatibility Matrix and VMware Compatibility Guide.
  • When using HP Insight Control for vCenter Server, Insight Control for vCenter 7.2.3 is required for LeftHand OS 11.0 support.
  • When using HP Insight Remote Support, Insight Remote Support 7.0.8 Content Level Update 1 (CLU 1) is required for LeftHand OS 11.0 support. CLU 1 will be available in December 2013.

Please see my other articles for a feature overview and things you need to prepare before upgrading. Note that the upgrade article is based on LHOS 10.5. I don’t think much has changed in the upgrade path; if anything has, I will update that article accordingly.

Update: There are indeed some changes in the upgrade process. A new feature called Online Upgrades lets you upgrade your existing storage nodes to LeftHand OS 11.0 without downloading the software separately. This method will be available around the 11th of November. You can still upgrade the traditional way.

Upgrading to LeftHand OS 10.5

Last night I performed an upgrade from SANiQ 9.5 to SANiQ *ahem* LeftHand OS 10.5 on 16 HP LeftHand P4500 G2 storage nodes, and I want to share a couple of things I learned from this process.

Before actually upgrading, I spent some time analysing the possible risks and impact.

HP states that when using the CMC (Centralized Management Console) no downtime whatsoever should occur. This is possible because CMC never simultaneously reboots storage nodes that are responsible for the same LUN (which is, of course, protected by Network RAID-10).
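To make that constraint concrete, here is a small sketch of such a rolling-reboot schedule. This is purely illustrative and not HP's actual implementation; the LUN names, node names, and replica placement below are made-up examples. The idea is simply that a node may only reboot in a given wave if no Network RAID-10 LUN would have both of its replicas offline at once.

```python
# Illustrative sketch (NOT HP's actual algorithm): schedule node reboots
# so that no Network RAID-10 LUN ever loses both replicas at the same time.

# Hypothetical replica placement: LUN name -> the pair of nodes holding its copies.
lun_replicas = {
    "lun-a": {"node1", "node2"},
    "lun-b": {"node2", "node3"},
    "lun-c": {"node3", "node4"},
}

def safe_to_reboot(node, rebooting):
    """A node may join the current reboot wave only if no LUN it serves
    already has another replica-holding node in that wave."""
    return all(
        not (node in nodes and rebooting & nodes)
        for nodes in lun_replicas.values()
    )

def rolling_upgrade(nodes):
    """Greedy schedule: group the nodes into successive reboot waves."""
    remaining = list(nodes)
    waves = []
    while remaining:
        wave = set()
        for node in list(remaining):
            if safe_to_reboot(node, wave):
                wave.add(node)
                remaining.remove(node)
        waves.append(wave)
    return waves

waves = rolling_upgrade(["node1", "node2", "node3", "node4"])
# With the placement above, node1 and node3 can reboot together,
# then node2 and node4 - every LUN keeps one replica online throughout.
```

With more nodes this simply yields more (or larger) waves, which is why an upgrade of a 16-node cluster can proceed without any LUN ever going offline.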

The chance of data loss was practically nil, and other people's reported experiences with upgrading these storage nodes in combination with VMware were nothing but positive.

Still, we wanted to take no risk at all and scheduled an extra backup right before upgrading the nodes. The backup ran after regular office hours (6 PM), so that if disaster struck, the least amount of user data would be lost. Running all seven Veeam backup jobs at the same time took approximately 5 hours to complete, and after that I was good to go.

I started the upgrade process around 11 PM and actively monitored all of our systems. Not a single error or warning appeared, and no downtime was experienced (except on the storage nodes themselves, of course, while they were rebooting).

The HP FOM (Failover Manager) was upgraded first, followed by the storage nodes. They all power cycled, and some had to restripe before the process continued. After all nodes had rebooted and upgraded, CMC installed another patch on all systems, after which they power cycled once more. The whole process took about 5 hours to complete.


I performed a check after the upgrade completed and concluded that only minor issues occurred:

  • The SQL service on two VMs was stopped; I am not sure whether this was a coincidence or caused by the upgrade. Starting the services manually resolved it.
  • Some ESXi hosts briefly lost disk access, but access resumed automatically shortly afterwards.
  • One VM was marked as ‘inaccessible’. Removing it from the inventory and re-adding it solved this.

So, no major issues, but it did take quite some time to complete.

Oh, and you should increase the Bandwidth Priority of your Management Group inside CMC to raise the speed at which your nodes restripe. I changed this from the default of 16 MB/sec to 40 MB/sec to decrease the total time needed to restripe.
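To illustrate why that setting matters, here is a back-of-the-envelope estimate. The amount of data to restripe below is an assumed example figure, not a measurement from this cluster, but it shows how directly the bandwidth priority translates into restripe duration:

```python
# Rough restripe-time estimate at different bandwidth priorities.
# The data volume is an assumed example, not an actual measured figure.
restripe_data_gb = 2048  # hypothetical amount of data to restripe

def restripe_hours(bandwidth_mb_per_sec):
    """Estimated hours to move restripe_data_gb at the given rate."""
    seconds = restripe_data_gb * 1024 / bandwidth_mb_per_sec
    return seconds / 3600

default_rate = restripe_hours(16)  # at the 16 MB/sec default: roughly 36 hours
raised_rate = restripe_hours(40)   # at 40 MB/sec: roughly 15 hours
```

Of course the real restripe competes with production I/O for that bandwidth, which is exactly why HP ships a conservative default and why raising it is best done during off-hours.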

My conclusion is that CMC is a great tool for performing an unattended upgrade of storage clusters. I would trust the tool even without running a backup prior to the process. Still, I would recommend running the upgrade during off-hours because of the path failovers, restriping, and possible latency spikes.