
Review: Sun Storage 7000 Unified Storage System – Part I


It’s clear from recent technology announcements that storage is moving towards being a commodity offering.  Modular arrays are gaining in popularity as the underlying technology becomes more reliable.  Look at the hard disk drive; SATA devices are now capable of 1.2 million hours MTBF.  Vendors like 3Par, EqualLogic/Dell and Compellent are increasing their market share as customers look to reduce both hardware acquisition costs and the effort of managing monolithic arrays.

Centralised storage is now almost ubiquitous in the datacentre.  This demand has driven the availability of lower-cost, higher-capacity devices more than ever before.  With protocols such as CIFS, NFS and iSCSI, centralised storage doesn’t have to be complex, and storage arrays are following the direction of servers by moving towards commodity hardware.
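
To show just how simple the client side has become, here’s a rough sketch (the hostname and share name are purely illustrative) of mounting an NFS export from an array of this kind on a Linux or Solaris host:

    # Linux client: mount the array's NFS export
    mount -t nfs filer01:/export/projects /mnt/projects

    # Solaris client: the same operation with the Solaris mount syntax
    mount -F nfs filer01:/export/projects /mnt/projects

One mount command and the storage is available to the host; no proprietary client software is needed.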

In November 2008, Sun announced their entry into the commodity storage market with the Sun Storage 7000 Unified Storage System (USS) series (aka Amber Road).  Over the last month, I’ve been reviewing the 7210 array (the mid-range offering) and, even as an early product release, I can say I like it.

The Proposition

Sun’s proposition is pretty simple; with USS they are providing highly scalable storage solutions built on commodity hardware and open software components.  A mix of technologies is used to enable the use of low-cost SATA drives, supplemented by SSD for read and write optimisation.  The software stack is built on OpenSolaris and ZFS; Sun claim that the combination of ZFS, flash and SATA drives yields the best price/capacity and price/performance metrics in the industry today.  Unlike many other storage vendors, Sun are taking the approach of offering all current and future software features as part of the standard hardware cost.  This extends to the lifetime of the technology, so as new software features are made available in future releases, the customer can simply upgrade the USS and take advantage of them at no extra cost.
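
To make the ZFS “hybrid storage pool” idea a little more concrete, here’s a rough sketch of how a pool combining SATA data drives with SSD log and cache devices would be assembled on a plain OpenSolaris host using the standard ZFS commands; the appliance hides all of this behind its management interface, and the device names below are purely illustrative:

    # capacity layer: two mirrored pairs of SATA drives
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

    # SSD as a separate intent log device (accelerates synchronous writes)
    zpool add tank log c2t0d0

    # SSD as a second-level read cache (L2ARC)
    zpool add tank cache c2t1d0

    # confirm the resulting layout
    zpool status tank

The attraction of this design is that the SSDs only ever hold transient data (the intent log and a read cache), so the permanent capacity can live entirely on cheap SATA drives.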

The Product Range

There are currently three models in the product range; the 7110, 7210 and 7410.  The 7110 entry-level model comes with up to 2TB of storage in a 2U rack-mounted form-factor.  This is achieved by using 2.5″ drives; up to 14x 146GB 10K models.  Front-end host connectivity is through four 1Gbps Ethernet ports.  The 7210 mid-range offering accommodates up to 48x 1TB 3.5″ SATA drives, plus two 18GB SSD drives for logged writes (more on this later).  Connectivity is also provided through four 1Gbps Ethernet ports.  The high-end model (7410) offers up to 576 1TB drives, eight 18GB SSD drives for write cache and six 100GB SSD drives for read cache.  Front-end connectivity is again provided by four 1Gbps Ethernet ports.

Hardware

So, the 7210 model I trialled came with the standard 48x 1TB drives and two 18GB SSDs.  As a comparison, I’ve racked it next to my ageing Clariion array.  This shows how things have changed over time!  The Clariion has a mere 360GB (10x 36GB drives) and takes over twice the space.  To give an idea of how the components are laid out, see the next graphic, which is a partial screenshot of the web administration GUI.  Drives are laid out in a vertical fashion, with all of the server components (memory, processors and ports) at the back of the unit.  Drives are replaced by pulling the 7210 forward in the rack and raising the top cover, which hinges upwards to provide access.

Although the 7210 can’t currently be expanded, there are options in place to allow this in the future.  The motherboard of the array supports up to three PCIe slots, and the higher-specification 7410 already supports expansion arrays accommodating 24x 1TB drives.

Now, the USS is effectively a server acting as a storage array.  This is nothing new; the Clariion I mentioned earlier has two clusters running embedded NT4, and plenty of other vendors sell similar technology.  From a hardware perspective, what’s more interesting is the use of solid state to drive performance out of the commodity SATA drives in the array.  In the next post, I’ll be looking at this and how it integrates into ZFS.

About Chris M Evans

Chris M Evans has worked in the technology industry since 1987, starting as a systems programmer on the IBM mainframe platform, while retaining an interest in storage. After working abroad, he co-founded an Internet-based music distribution company during the .com era, returning to consultancy in the new millennium. In 2009 Chris co-founded Langton Blue Ltd (www.langtonblue.com), a boutique consultancy firm focused on delivering business benefit through efficient technology deployments. Chris writes a popular blog at http://blog.architecting.it, attends many conferences and invitation-only events and can be found providing regular industry contributions through Twitter (@chrismevans) and other social media outlets.