
Anyone familiar with storage today understands the notion of thin provisioning storage capacity. The idea is that hosts and applications don't immediately (if ever) use all the space allocated to them, so you can logically "over-allocate" a virtualized pool of storage capacity – increasing utilization and reducing the amount of physical capacity you have to buy.
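The mechanics can be sketched in a few lines of code. This is a minimal illustration with hypothetical names (`ThinPool`, `provision`, `write`), not any real storage API: volumes are promised logical sizes whose sum exceeds the physical pool, and physical space is consumed only when data is actually written.

```python
class ThinPool:
    def __init__(self, physical_gb, overcommit_ratio=3.0):
        self.physical_gb = physical_gb
        self.logical_limit = physical_gb * overcommit_ratio
        self.logical_allocated = 0.0   # sum of sizes promised to volumes
        self.physical_used = 0.0       # space actually written

    def provision(self, size_gb):
        # Promises may exceed physical capacity, up to the overcommit limit.
        if self.logical_allocated + size_gb > self.logical_limit:
            raise RuntimeError("overcommit limit reached")
        self.logical_allocated += size_gb

    def write(self, gb):
        # Physical space is consumed only on write.
        if self.physical_used + gb > self.physical_gb:
            raise RuntimeError("pool out of physical space")
        self.physical_used += gb

pool = ThinPool(physical_gb=100)   # 100 GB of real capacity
pool.provision(80)                 # promise 80 GB to app A
pool.provision(80)                 # promise 80 GB to app B: still fine
pool.write(30)                     # only 30 GB actually consumed
print(pool.logical_allocated, pool.physical_used)  # 160.0 30.0
```

The interesting failure mode, of course, is when writes approach the physical limit faster than expected – which is why real thin-provisioned systems monitor utilization and alert well before the pool fills.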

If you take a moment to think about it, you realize that the principle of thin provisioning is quite ubiquitous. Bank ATMs (or branches, for that matter) store enough cash to meet the typical daily withdrawal needs of a predicted fraction of their local client base, but obviously not enough for everyone to withdraw all their deposits at once. Doing so allows banks to put the rest of the deposits to work in loans and investments, funding account services. Car-sharing services like Zipcar are increasingly popular in big cities with limited parking, or on college campuses where many students can't afford cars. For occasional users they offer flexible pick-up and usage options on demand (say, for a couple of hours), at lower cost than car ownership or traditional rentals. Fractional aircraft ownership services like NetJets provide companies and executives the flexibility (and luxury) of private air travel at a fraction of the cost of owning private jets.


In each case, the asset operator leverages knowledge of usage patterns to enable efficient and fairly predictable sharing with far fewer assets than a "100% reserved" model would require. In other words, the operator "thin provisions" an expensive asset to maximize its utilization and therefore reduce its cost, making it affordable to a much bigger user group.

Back to storage – it's common knowledge today that flash offers big performance gains, but the problem is that it comes at a high cost per GB of capacity (about 20x-100x higher than commodity disk, depending on the grade of flash). Capacity reduction techniques help reduce flash cost somewhat, but many of them are also available on disk-based systems, so a huge cost gap remains. This is why deployment of flash is typically limited to the narrow sliver of applications where one can justify the high cost per GB. This is unfortunate, because a much broader pool of applications could benefit by leveraging flash intelligently. What's worse (as most users intuitively know), this expensive investment is underutilized, because a big percentage of data blocks within the application pool aren't being accessed at any given point in time (80%-95% are inactive for most applications). Think about inactive tables in databases, old emails or inactive VMs. Even worse, think about capacity used by snapshots or replication copies: would you ever consider storing several days (let alone weeks) of snapshots in expensive flash storage?
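Some back-of-the-envelope arithmetic shows why these percentages matter. The calculation below uses illustrative prices (the $/GB figures are assumptions; only the 20x-100x ratio and the 80%-95% inactive-data range come from the discussion above):

```python
# Illustrative cost comparison: all-flash vs. hybrid sized to the hot set.
capacity_gb = 100_000            # 100 TB usable capacity
disk_cost_per_gb = 0.05          # assumed commodity-disk price
flash_cost_per_gb = 1.50         # assumed ~30x disk (within the 20x-100x range)
active_fraction = 0.10           # 10% hot data (i.e., 90% inactive)

# All-flash: every GB pays the flash premium.
all_flash = capacity_gb * flash_cost_per_gb

# Hybrid: disk for all capacity, plus a flash pool sized to the active data.
hybrid = (capacity_gb * disk_cost_per_gb
          + capacity_gb * active_fraction * flash_cost_per_gb)

print(f"all-flash: ${all_flash:,.0f}, hybrid: ${hybrid:,.0f}")
# all-flash: $150,000, hybrid: $20,000
```

Even with a generous 10% flash pool, the hybrid configuration costs a fraction of the all-flash one – the gap only widens as the inactive percentage climbs toward 95%.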
But what if you could take advantage of this knowledge that only a fraction of the data blocks need high performance at a given point in time? Rather than "thick provisioning" performance like in all-flash systems, what if you had a way to share flash across a broader pool of applications in a way that was responsive enough to handle all their performance needs, but at a much lower cost per GB?

Well that’s what a hybrid flash/disk solution can offer, if it can deliver on a couple of counts:

  • The flash pool needs to be large enough to cover the performance needs (active data) of the relevant applications. The flash pool size could even be configurable depending on the performance needs, and the level of assurance required. Think of this as equivalent to buying more fractional shares in the jet fleet – all the way up to 100% in the extreme case.
  • Data placement in flash needs to be truly dynamic – capable of adapting to workload changes or hot spots within milliseconds to seconds, rather than hours or days.
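To make the second point concrete, here is a toy sketch of dynamic data placement – not CASL itself, whose internals aren't described here, but a generic hot-set cache with hypothetical names (`HybridStore`, `read`, `write`). Blocks are promoted to the flash tier on access, so a workload's hot set migrates into flash within a few accesses rather than on an hourly or daily tiering schedule:

```python
from collections import OrderedDict

class HybridStore:
    def __init__(self, flash_blocks):
        self.flash = OrderedDict()     # block_id -> data, kept in LRU order
        self.flash_blocks = flash_blocks
        self.disk = {}                 # disk backs every block

    def write(self, block_id, data):
        self.disk[block_id] = data

    def read(self, block_id):
        if block_id in self.flash:
            # Flash hit: refresh recency and serve from the fast tier.
            self.flash.move_to_end(block_id)
            return self.flash[block_id], "flash"
        # Miss: serve from disk and promote the block immediately.
        data = self.disk[block_id]
        self.flash[block_id] = data
        if len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)   # evict the coldest block
        return data, "disk"

store = HybridStore(flash_blocks=2)
for b in ("a", "b", "c"):
    store.write(b, f"data-{b}")
store.read("a")              # first read comes from disk, block promoted
_, tier = store.read("a")    # second read is served from flash
print(tier)                  # flash
```

A real implementation would also weigh access frequency (not just recency) and handle sequential scans that would otherwise flush the hot set, but the essential behavior – placement adapting per access rather than per scheduled migration – is the same.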

A combination of these characteristics enables predictably high performance and low latency, at a far lower cost than the "all flash/thick provisioned" scenario. Essentially, this is what CASL does: it thin-provisions flash (read) performance and reduces its effective cost, so that a much broader universe of applications can benefit from it.

Written by:
Ajay Singh