A recent post from Martin “The Bod” Glassborow got me thinking about the whole process of LUN consolidation. I’ve done lots of migrations where people quake at the thought of changing the LUN size from one array to another. Now I almost always want to change LUN sizes, as the vendor-specific ones (8.43GB, 13.59GB and so on) are both painful and wasteful.
There’s another good reason to standardise on LUNs. If you’ve implemented a good dual-vendor strategy and sorted out your firmware and driver stack, then you can position yourself to take storage from any of your preferred vendors. There’s nothing better than having all of your vendors sweating on that next 500TB purchase when they know you can take your storage from any of EMC, HDS, HP or IBM.
If LUNs and the I/O stack are all standardised, you can move data around too. The difficult part, as alluded to in Martin’s post, is restacking the data.
Here’s the problem: SAN storage is inherently block-based and the underlying hardware has no idea how you will lay out your data. Have a look at the following diagram. Each LUN, from the SAN’s perspective, is divided into blocks and each block has a logical block address (LBA). The array simply services requests from the host for a block of data and reads or writes it on demand. It is the operating system that determines how the file system is laid out on the underlying storage. Each volume has a standard location (or a standard method of calculating the location) for what was called the VTOC (Volume Table of Contents), also known as the FAT (File Allocation Table) in DOS and the MFT (Master File Table) in NTFS. There are similar constructs for other operating systems such as Linux, but I’m not 100% certain of the terminology so won’t risk the wrath of getting it wrong.
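To make the block-versus-file distinction concrete, here’s a minimal sketch (hypothetical names and a 512-byte block size assumed) of all the array ever sees: a byte range from the host resolved into logical block addresses, with no notion of the files above it.

```python
# The array services (LBA, length) requests only -- it has no idea
# whether the blocks hold a file, free space or file system metadata.

BLOCK_SIZE = 512  # bytes per logical block; an assumed, typical value

def byte_range_to_lbas(offset: int, length: int) -> range:
    """Map a host byte range onto the logical block addresses it touches."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    return range(first, last + 1)

# A 4KB read starting at byte 1,048,576 touches LBAs 2048..2055
print(list(byte_range_to_lbas(1_048_576, 4096)))
```

It’s the operating system, not the array, that decides which of those LBAs hold the VTOC/FAT/MFT and which hold file data.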
The layout of data on a file system is not a trivial task. Apart from keeping track of files, there’s the requirement to track free space and to be able to recreate the file index in the case of corruption, so some kind of journalling is likely to be implemented. There are also features such as compression, single instancing and encryption, all of which add to the complexity of understanding exactly how file data is laid out on disk.
Now think about how multiple LUNs are currently connected together. This is achieved with either a volume manager (such as VxVM), supplied as a separate product, or a native LVM (logical volume manager). All of these tools spread the “logical” volume across multiple LUNs and format each LUN with information that enables the volume to be recreated if the LUNs are moved to another host. VxVM achieves this by keeping a private area on each LUN which contains the metadata needed to rebuild the logical volume. Each LUN can be divided into subdisks and then recombined into a logical volume, as shown in this diagram.
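The subdisk idea can be sketched in a few lines. This is a hedged illustration, not VxVM’s actual on-disk format: the LUN names, private area size and subdisk geometry below are all invented for the example. Each LUN reserves a private metadata region, and the remaining space is carved into subdisks that are stitched into one logical address space.

```python
# Hypothetical concatenated volume: each LUN keeps a private metadata
# area at the start, and its subdisk contributes the blocks after it.

PRIVATE_AREA = 1024  # blocks reserved per LUN for volume metadata (assumed)

# (lun_id, subdisk_start_block, subdisk_length_blocks)
subdisks = [
    ("lun0", PRIVATE_AREA, 8192),
    ("lun1", PRIVATE_AREA, 8192),
]

def logical_to_physical(logical_block: int):
    """Resolve a logical volume block to (LUN, physical block on that LUN)."""
    remaining = logical_block
    for lun, start, length in subdisks:
        if remaining < length:
            return (lun, start + remaining)
        remaining -= length
    raise ValueError("block beyond end of volume")

print(logical_to_physical(8192))  # first block of the second subdisk
```

The point is that the mapping only exists in the volume manager’s metadata; read a LUN in isolation and all you see is an opaque run of blocks plus a private region you’d have to parse to reconstruct the picture.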
So a physical LUN from an array may contain a whole or partial segment of a host volume, including LVM metadata. Determining which part it holds, and whether all the other parts are on the same array (and where), is a tricky task – and we’re expecting the transmission protocol (i.e. the fabric) to work all of this out “on the fly”, as it were.
My thought would be: why bother with a fabric-based consolidation tool? Products like VxVM provide a wide set of commands for volume migration; although not automated, they certainly make the migration task simpler. I’ve seen some horrendous VxVM implementations which would require some pretty impressive logic to deconstruct and reconstruct a volume. However, life is not that simple, and host-based migrations aren’t always easy to execute, so a product could well be commercially viable, even if the first implementation was an offline version which couldn’t cope with host I/O at the same time.
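The offline case is conceptually simple, which is why it makes a plausible first product. As a hedged sketch (file-backed stand-ins for LUNs; a real migration would work against raw device paths and cope with host I/O), the core loop is just a block copy with end-to-end verification:

```python
# Offline migration sketch: copy source "LUN" to target and verify.
# File paths stand in for raw devices purely for illustration.

import hashlib

CHUNK = 1 << 20  # copy in 1MB chunks

def migrate(src_path: str, dst_path: str) -> bool:
    """Copy src to dst; return True if checksums of both sides match."""
    src_sum, dst_sum = hashlib.sha256(), hashlib.sha256()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while chunk := src.read(CHUNK):
            src_sum.update(chunk)
            dst.write(chunk)
    # Re-read the target to confirm what actually landed on "disk"
    with open(dst_path, "rb") as dst:
        while chunk := dst.read(CHUNK):
            dst_sum.update(chunk)
    return src_sum.digest() == dst_sum.digest()
```

Everything hard about the real product lives outside this loop: quiescing or tracking host I/O, and understanding the volume manager metadata so the copied blocks reassemble into a usable volume on the new array.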
Funny, what’s required sounds a bit like a virtualisation product – perhaps the essence of this is already coded in SVC, UVM or Incipient?