Today VMware announced the release of vSphere 4.1, the next generation of their ever-evolving hypervisor. To be honest, it would be hard to miss this announcement judging by the flurry of “first to blog” posts I’ve seen on RSS feeds and Twitter today (see the list at the end of this post). I’d advise you to read the press release (which can be found here) as I don’t typically do product announcements on behalf of vendors; as usual, however, I’ll be picking out the things I find relevant.
VMware claim significant increases in the number of VMs that can be supported via VMware vCenter Server – now up to 10,000 concurrently powered-on virtual machines. This is a *huge* number and I’m sure other folks out there will point to installations scaling to this size, but really, this is a big number. It confirms to me that virtualisation is the only way forward for supporting large server installations. Ironically, I was looking today at the history of VM (IBM’s VM, that is), which is now 38 years old. I remember managing 8-10 MVS systems under VM in the early 90s. Things have certainly scaled since then.
Quality of Service
New features in vSphere 4.1 enable more granular control over shared storage and network resources. Whilst I don’t know the detail, I can appreciate that the ability to prioritise any shared workload is a must as environments scale. This means not reverting to the blunt instrument of physical partitioning, or partitioning without due consideration of load prioritisation. I mention this specifically as it has increased relevance to the storage environment, where partitioning and multi-tenancy solutions are still crudely implemented. For example, MultiStore with FlexShare only prioritises workload; it doesn’t guarantee QoS.
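As I say, I don’t know how VMware have implemented this, but prioritisation schemes of this kind are typically proportional-share allocators: each workload gets a share weight, and the contended resource is divided in proportion to those weights. Here’s a toy sketch of that general idea (my own illustration in plain Python; the function name, share values and IOPS figures are invented, and this is not VMware’s actual implementation):

```python
# Toy proportional-share allocator: divide a fixed pool of IOPS between
# workloads according to their share weights. Illustration only -- this is
# not how vSphere's resource controls are actually implemented.
def allocate_iops(total_iops, shares):
    """shares: dict mapping workload name -> share weight."""
    total_shares = sum(shares.values())
    return {name: total_iops * weight / total_shares
            for name, weight in shares.items()}

# Three VMs contending for 6,000 IOPS with high/normal/low share weights.
allocation = allocate_iops(6000, {"vm_high": 2000,
                                  "vm_normal": 1000,
                                  "vm_low": 500})
# vm_high receives twice vm_normal's allocation, and four times vm_low's.
```

The point of a scheme like this is that the ratios only bite under contention; when the pool isn’t saturated, everyone gets what they ask for. That’s a far less blunt instrument than carving up the hardware physically.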
There are a number of storage improvements within vSphere 4.1.
- SIOC (Storage I/O Control) – Provides I/O prioritisation of virtual machines running on shared storage across a cluster of ESX servers.
- Full support for 8Gb/s Fibre Channel HBAs
- Support for iSCSI hardware offload with Broadcom’s NetXtreme II iSCSI HBA
- Boot from SAN for ESX
- vStorage API for Array Integration (VAAI) support
The last two are probably the most interesting. Booting ESX(i) from SAN enables stateless deployment, with all of the metadata for running the hypervisor separated from the hardware. This means ESX(i) installations can be booted from a SAN disk on any machine and that the hypervisor can be moved around either the local data centre or a remote data centre – failover of the hypervisor itself. Ultimately the resources of hypervisor and guests need to be entirely hardware independent. Boot from SAN goes some way to enabling this.
VAAI support has a number of separate components: Full Copy, Hardware Assisted Locking and Block Zeroing.
- Full Copy – this feature offloads the copying of a virtual machine to the underlying storage hardware and has a number of benefits; in VDI this significantly benefits templated deployments; for Storage vMotion it reduces the amount of CPU, memory and I/O overhead between storage and hypervisor. Most interesting is the concept of using XCOPY to copy virtual machines between arrays.
- Hardware Assisted Locking – this feature reduces the overhead of SCSI reservations on VMFS volumes by providing “atomic test and set” functionality at the block level. Although this will improve VMFS performance, it also opens the door to federated storage infrastructures where individual VMDKs can be reserved to a host. This is complementary to the features seen in VPLEX.
- Block Zeroing – why write zeros when you can mark the data as empty and throw the writes away? This is exactly what Block Zeroing does; don’t write streams of data, simply tell the storage which blocks are zeros and move on. The quoted benefit for this is eagerzeroedthick VMDKs, which can be created much faster when Block Zeroing is supported (something the 3Par InServ has been doing for some time).
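To make the three primitives concrete, here’s a toy model of an array exposing them (my own sketch in plain Python – the class and method names are invented, and this is not a real VAAI or SCSI interface). The idea in each case is the same: the host sends one small command describing the work, and the array does the heavy lifting internally:

```python
# Toy model of the three VAAI primitives. Illustration only -- the class and
# method names are invented, not any real VAAI/SCSI API.
class ToyArray:
    def __init__(self):
        self.data = {}    # block number -> 512 bytes; absent reads as zeros
        self.locks = {}   # block number -> owning host, for the ATS example

    def read(self, block):
        # Unallocated blocks read back as zeros (thin-provisioning style).
        return self.data.get(block, b"\x00" * 512)

    # Full Copy (XCOPY-style): copy a range entirely inside the array, so
    # the host never reads the data up and writes it back down.
    def full_copy(self, src, dst, count):
        for i in range(count):
            payload = self.data.get(src + i)
            if payload is None:
                self.data.pop(dst + i, None)  # a copy of a zero block stays zero
            else:
                self.data[dst + i] = payload

    # Hardware Assisted Locking: an atomic compare-and-swap on a lock,
    # replacing a whole-LUN SCSI reservation with a per-block operation.
    def atomic_test_and_set(self, block, expected_owner, new_owner):
        if self.locks.get(block) == expected_owner:
            self.locks[block] = new_owner
            return True
        return False

    # Block Zeroing: record "these blocks are zero" instead of accepting a
    # stream of zero-filled writes from the host.
    def zero_blocks(self, start, count):
        for i in range(count):
            self.data.pop(start + i, None)

array = ToyArray()
array.data[0] = b"\x01" * 512
array.full_copy(0, 100, 1)                         # clone inside the array
locked = array.atomic_test_and_set(5, None, "esx1")  # esx1 takes the lock
array.zero_blocks(0, 1)                            # block 0 now reads as zeros
```

Note how the `zero_blocks` sketch never touches the data at all – it just deallocates, which is why thin-provisioning arrays like the 3Par InServ can make eagerzeroedthick creation so cheap.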
EMC and 3Par have already announced VAAI support today, but with no immediate availability. 3Par VAAI support is available from September; EMC VAAI support depends on platform: CLARiiON requires FLARE 30 (currently not GA) and VMAX requires an update to Enginuity due in 4Q2010 – no mention of VPLEX. Of the other vendors, Netapp have a vague press release with no specifics on support timescales or code versions. Hitachi have also announced support on the AMS platform but have not provided timescales or code versions.
vSphere 4.1 pushes the virtualisation boundaries further on; the storage integrations are a welcome step forward in reducing resource overheads and improving data mobility. It’s a shame that vendors have announced support today without any of it being available from day one. Those of you rushing to download ESX(i) 4.1 will have to wait just that little bit longer to test it with your storage systems.
Here’s a roundup of some of the posts so far.