
“Doing more with less” has become the reigning mantra among IT professionals, across both technical and management functions. This makes sense given today’s perfect storm of exponential data growth and lean budgeting.

Here’s a brief recap of the scalability characteristics needed to maximize storage ROI and streamline management, freeing IT teams from tedious manual procedures.

Given today’s diversity of use cases and environments, a detailed analysis is required to estimate scalable storage-driven cost savings. However, a good storage vendor should be able to present you with a realistic (and compelling) total cost of ownership (TCO).

Let’s look at a scenario that highlights two approaches to scaling through which a best-in-class scalable storage solution can deliver ROI benefits.

Imagine a rapidly growing small-enterprise technology company with 6,000 current employees worldwide. The company’s primary critical applications include VDI (4,500 desktops), MS-Exchange (6,000 mailboxes), and 70+ SQL databases.

A small (and busy) IT team manages the company’s entire storage infrastructure, working closely with the staff that administers virtualization, database platforms, and e-mail.

The company’s MS-Exchange installation is the sole tenant of a Nimble Storage CS500 Array (~70TB effective capacity with ES1-H45 disk expansion shelf).

Using InfoSight, Nimble Storage’s cloud-connected support and management system, the IT team can see that the Exchange volumes currently occupy 92% of the array’s total effective capacity, with the remainder projected to be consumed within 8 weeks. However, the array is running well below its maximum throughput, with IOPS and cache to spare.
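To put that projection in concrete terms, here’s a quick back-of-the-envelope sketch (in Python) of the runway math that InfoSight automates. The weekly growth rate below is a placeholder assumption, not a figure from the scenario:

```python
# Rough capacity-runway estimate, illustrating the projection InfoSight
# performs automatically. The growth rate below is an assumed placeholder.

effective_capacity_tb = 70.0   # ~70 TB effective (CS500 + ES1-H45 shelf)
used_fraction = 0.92           # Exchange volumes at 92% of effective capacity
weekly_growth_tb = 0.7         # assumed linear growth rate, TB per week

free_tb = effective_capacity_tb * (1 - used_fraction)
weeks_remaining = free_tb / weekly_growth_tb

print(f"Free capacity: {free_tb:.1f} TB")
print(f"Projected weeks until full: {weeks_remaining:.0f}")
# With these assumptions: 5.6 TB free / 0.7 TB per week -> roughly 8 weeks
```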

To support its rapidly expanding SQL platform, the company will deploy another Nimble Storage array – a CS300 (without an expansion shelf). The database team’s revised performance spec lists 40,000 IOPS as the minimum requirement, yet SQL would occupy only 35% of the array’s effective capacity.

One array needs more capacity; the other, more IOPS. The storage team can follow one of two viable scaling approaches.

Option 1: Upgrade Individual Arrays

Nimble Storage enables independent scaling of performance and capacity, so the problem can be solved with a simple upgrade to each array.

For the MS-Exchange array, capacity can easily be expanded by adding an ES1 disk expansion shelf, a 3U appliance that connects to the array and grows effective capacity by up to 68 TB per shelf.

For the SQL deployment, the Nimble CS300’s throughput can be tripled with a simple controller upgrade, converting it to a CS500 without any downtime.

The benefit of this approach is that performance and capacity upgrades can be tailored to each array’s needs, quickly resolving bottlenecks without disruption and at the lowest incremental cost.
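As a rough sanity check on Option 1, the sketch below adds the shelf’s capacity to the Exchange array and applies the 3x performance factor to the SQL array. The capacity figures come from the scenario; the CS300’s baseline IOPS is an illustrative assumption, not a published spec:

```python
# Back-of-the-envelope check for Option 1 (independent upgrades).
# Capacity figures follow the scenario; the IOPS baseline is assumed.

# Exchange array: capacity upgrade
exchange_effective_tb = 70.0    # current effective capacity (~70 TB)
es1_shelf_tb = 68.0             # up to 68 TB added per ES1 expansion shelf

# SQL array: performance upgrade
cs300_iops = 15_000             # assumed baseline -- not a published spec
upgrade_factor = 3              # controller upgrade roughly triples throughput
sql_required_iops = 40_000      # database team's minimum requirement

exchange_after = exchange_effective_tb + es1_shelf_tb
sql_after = cs300_iops * upgrade_factor

print(f"Exchange effective capacity after shelf: {exchange_after:.0f} TB")
print(f"SQL IOPS after controller upgrade: {sql_after:,} "
      f"(requirement {sql_required_iops:,} met: {sql_after >= sql_required_iops})")
```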

Option 2: Cluster Arrays in a Scale-Out Configuration

Pooling all storage resources into a cluster gives the Exchange array access to the SQL array’s excess capacity, while SQL can draw on the majority of the combined IOPS. Most importantly, all of this can be achieved without spending a dime.

Configuring the two-node cluster requires slightly more effort, but the work is limited to setting up host connections and rebalancing the volumes across the newly defined storage pool. In return, it greatly extends the ROI of the current storage infrastructure and defers upgrade costs for several quarters. In addition, consolidating storage resources eliminates the previous silos, making everything easier to manage from a single console.
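For comparison, here’s an equally rough view of the pooled resources under Option 2. The Exchange array’s capacity and utilization figures follow the scenario; the CS300’s effective capacity and both arrays’ IOPS ceilings are illustrative assumptions:

```python
# Back-of-the-envelope view of Option 2 (two-array scale-out pool).
# Values marked "assumed" are illustrative, not published specs.

arrays = {
    "CS500 (Exchange)": {"effective_tb": 70.0, "iops": 45_000},  # IOPS assumed
    "CS300 (SQL)":      {"effective_tb": 33.0, "iops": 15_000},  # both assumed
}

pool_tb = sum(a["effective_tb"] for a in arrays.values())
pool_iops = sum(a["iops"] for a in arrays.values())

exchange_used_tb = 70.0 * 0.92   # Exchange volumes at 92% of the CS500
sql_used_tb = 33.0 * 0.35        # SQL needs ~35% of its array's capacity

print(f"Pooled effective capacity: {pool_tb:.0f} TB "
      f"(in use: {exchange_used_tb + sql_used_tb:.0f} TB)")
print(f"Pooled IOPS ceiling: {pool_iops:,} -- SQL can draw on the majority of it")
```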

Nimble is an example of highly flexible storage scalability, but even so it can be difficult to determine the optimal approach. With its powerful data sciences engine, InfoSight generates guidance on the optimal way to scale cache, compute, and capacity within a single array or across a scale-out cluster, and presents that guidance to the customer with product-level specifics on upgrades and expansion shelf additions. Whether scaling involves individual performance and capacity upgrades or clustering arrays, critical applications continue to run without disruption.

Conclusion

Simply stated, the storage solution that scales seamlessly, simply, and most cost-effectively is the one that keeps you and your IT team (as well as your entire organization) safely afloat in what has become a vast and turbulent sea of data.

Seven C’s of Scalable Storage Blog

Part 1: Capacity, Cache, and Compute
Part 2: Clustering, Configuration, and Continuity

Written by:
Sean Roth