This is one of a series of posts discussing the new features in Windows Server 2012, due to ship this year and currently in public beta release as Windows Server “8”. This post reviews the new Storage Spaces feature.
Storage Spaces is a feature that takes standard JBODs (Just a Bunch of Disks) connected to a Windows Server “8” host and allows them to be used to create pools of storage. The pools can then be used to create volumes on the server. The benefit of using Storage Spaces is that it enables advanced features such as resiliency and space optimisation. It also forms a basis for using directly connected disks with the Windows hypervisor, Hyper-V. The concept behind Storage Spaces is not new. Many operating systems have logical volume managers (LVMs) and, in fact, Windows already offers some volume management features: disks can be partitioned and recombined to create mirrored and parity-protected RAID volumes. However, Storage Spaces does things differently, using disk pools, which provide significant advances over the disk management functions available today.
Storage Spaces implements a new Storage Management Provider with the introduction of the Storage Spaces Controller. This new bus type adheres to the architecture we saw when the iSCSI software initiator was introduced into Windows and implements the new “spaceport” driver. This creates new disk drive types within the operating system and provides the framework around which features such as thin provisioning and de-duplication can be implemented.
Storage Spaces comes deployed by default with Windows Server “8”. It is part of the Storage Services Role, which itself is part of File and Storage Services. Configuration is performed under the “File and Storage Services” option of Server Manager, which can be launched from the Windows toolbar at the bottom of the screen. By default, all unallocated disks are added to the “Primordial Pool” until they are used.
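To make the primordial pool behaviour concrete, here is a minimal sketch in plain Python (not a real Windows API; the `Disk` and `PoolManager` classes are purely illustrative) of how unallocated disks sit in a default pool until a named pool claims them:

```python
class Disk:
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb

class PoolManager:
    def __init__(self, disks):
        # All unallocated disks start in the primordial pool.
        self.pools = {"Primordial": list(disks)}

    def create_pool(self, pool_name, disk_names):
        primordial = self.pools["Primordial"]
        # Disks move out of the primordial pool once they are used.
        self.pools[pool_name] = [d for d in primordial if d.name in disk_names]
        self.pools["Primordial"] = [d for d in primordial
                                    if d.name not in disk_names]

# Ten 1TB disks, five of which are claimed by a named pool.
mgr = PoolManager([Disk(f"PhysicalDisk{i}", 1024) for i in range(10)])
mgr.create_pool("5-Way Pool", {f"PhysicalDisk{i}" for i in range(5)})
print(len(mgr.pools["Primordial"]))  # 5 disks remain unallocated
```

The key point the sketch captures is that a disk belongs to exactly one pool at a time: once allocated, it is no longer visible as unallocated capacity.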
In my testing I created two pools from a set of ten available disks. Each disk was a 1TB LUN from a Drobo 1200i currently under test in the lab. In the 5-Way Pool, I added five disks, four of which were defined for general use, with the fifth defined as a hot spare. The hot spare performs exactly that function; it is available should one of the primary disks fail. Within my two pools I created both parity (similar to RAID-5) and mirrored LUNs. Although there was an odd number of disks in the pool (deliberately chosen), the allocation on the mirrored disk was evenly spaced across all available devices. Clearly, LUNs defined by the Storage Spaces Controller are split into blocks and mirrored across all devices to get the best performance from the available disks.
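The even allocation across an odd number of disks can be illustrated with a hypothetical sketch (this is my guess at the placement scheme, not Microsoft's actual algorithm): split the LUN into blocks and place each block's two copies on different disks in rotation.

```python
def place_mirrored_blocks(num_blocks, disks):
    """Round-robin placement of mirrored block copies across disks.

    Works for an odd number of disks: each block gets two copies on
    two different disks, rotating through the pool.
    """
    placement = {d: 0 for d in disks}
    n = len(disks)
    for b in range(num_blocks):
        primary = disks[(2 * b) % n]        # first copy
        secondary = disks[(2 * b + 1) % n]  # second copy, always a different disk
        placement[primary] += 1
        placement[secondary] += 1
    return placement

disks = ["Disk1", "Disk2", "Disk3", "Disk4", "Disk5"]
counts = place_mirrored_blocks(10, disks)
print(counts)  # each of the five disks holds the same number of block copies
```

With five disks and ten blocks, every disk ends up holding four block copies, which matches the evenly spaced allocation observed in the test: mirroring at block level rather than whole-disk level spreads both capacity and I/O across the pool.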
As a test of the mirroring resiliency, I picked a disk and removed it forcefully from the host by deleting the LUN on the Drobo. This caused Windows to set the physical disk state to a warning mode, highlighting that both the volume and the disk had issues (see the yellow triangles). The pool was then rebalanced across the remaining two disks and the LUN remained accessible.
Pros and Cons
Storage Spaces is certainly a move forward from the facilities provided under Disk Management in prior Windows versions. However, the implementation seems very much a version 1.0 so far. On the plus side, it provides simple mirroring, RAID, dynamic groups and thin provisioning. On the negative side, there are still a number of flaws:
- It seems impossible to track a LUN’s SCSI ID through to the “PhysicalDisk” name for disks that are part of Storage Spaces. Once added to a Storage Spaces pool, disks disappear from general view in Disk Management.
- There doesn’t appear to be a way to remove a failed disk from a disk pool, so the pool shows a permanent failed status.
- There’s no indication of RAID rebuild activity when a disk fails and is recovered onto other free space. The pool status remains “unknown”.
- When additional capacity is added to a pool, existing allocations don’t appear to be balanced across the available new space.
- The wizards incorrectly include hot spare space and failed/missing disks in a pool’s available capacity, even though that space isn’t directly available for allocation. This can cause LUN creation to fail.
- A pool may have free space available but be unable to honour the requested resiliency type (e.g. only one disk with free capacity when mirroring is requested). This shows as insufficient free space rather than the true error.
- It doesn’t appear to be possible to drain or evacuate a physical disk in order to remove it from a pool.
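The capacity-reporting flaw above is worth spelling out, since it explains why LUN creation can fail even when the wizard shows enough space. A small sketch (field names and sizes are illustrative, not from any real API) contrasts the buggy accounting with what can actually be allocated:

```python
def reported_free_gb(disks):
    # Flawed accounting: every disk in the pool counts toward free space,
    # including the hot spare and any failed disks.
    return sum(d["free_gb"] for d in disks)

def allocatable_free_gb(disks):
    # Correct accounting: only healthy, non-spare disks can hold new LUNs.
    return sum(d["free_gb"] for d in disks
               if d["healthy"] and not d["hot_spare"])

pool = [
    {"free_gb": 1024, "healthy": True,  "hot_spare": False},
    {"free_gb": 1024, "healthy": True,  "hot_spare": False},
    {"free_gb": 1024, "healthy": False, "hot_spare": False},  # failed disk
    {"free_gb": 1024, "healthy": True,  "hot_spare": True},   # hot spare
]

print(reported_free_gb(pool))     # 4096: what the wizard shows
print(allocatable_free_gb(pool))  # 2048: what can actually be allocated
```

Any LUN request sized between the two figures passes the wizard’s check and then fails at allocation time, which matches the behaviour observed in testing.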