As discussed in an earlier blog, the concept of thin provisioning in storage isn’t limited to capacity. The intelligent use of flash can also “thin provision” high performance across a wide variety of workloads. The benefits are obvious: flash delivers great performance, but it remains expensive (per GB, 15x to 50x more than commodity low-RPM disk). Data reduction techniques such as compression and dedupe narrow the gap somewhat, but RAID and over-provisioning (to deal with garbage collection) significantly dilute those savings, leaving flash too expensive for the vast majority of workloads.

Like the examples cited earlier (ATMs, car sharing services, and NetJets) that efficiently “thin provision” an expensive asset, such a system must have some key attributes:

  • First and foremost, it must be able to deliver meaningful savings to make “thin provisioning” worthwhile;
  • It must provide deep insight into usage patterns; and
  • It should be able to adapt to changes quickly.

Let’s examine how the Nimble Adaptive Flash platform delivers on these criteria.

Meaningful Savings through Thin Provisioning Flash

Thanks to the powerful telemetry of InfoSight™ (which collects 10 million counters per day, per system), we can tell precisely how each workload is performing (IOPS, bandwidth, and latency), and precisely how that correlates with CPU, cache, and disk IO usage. Our data scientists then aggregate data across tens of thousands of volumes (where we know exactly which application is running, thanks in part to pre-tuned application profiles selected by customers). This aggregation yields two powerful insights:

  1. The median amount of flash required for almost all applications to deliver high hit-rates (and low latency) is only 3 to 14 percent of the data set.
  2. The required flash percentage actually declines as data sets grow. For example, a 100GB Oracle volume might need 10GB of flash for low read latencies, but a 1TB Oracle volume doesn’t need 100GB. Rather, it probably needs less than 50GB.
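The second insight can be sketched with a simple model. The sublinear curve below is hypothetical – a made-up power-law fit to the two figures cited above (10GB of flash for a 100GB volume, under 50GB for a 1TB volume) – and is only meant to illustrate how the required flash fraction shrinks as the data set grows:

```python
# Illustrative (hypothetical) sublinear sizing model. The coefficient
# and exponent are NOT Nimble's actual model -- they are fitted to the
# two example figures in the text purely for illustration.
def flash_needed_gb(dataset_gb: float, coeff: float = 0.4, exponent: float = 0.7) -> float:
    """Estimate usable flash (GB) needed for low read latency."""
    return coeff * dataset_gb ** exponent

print(flash_needed_gb(100))    # ~10 GB for a 100 GB volume (10%)
print(flash_needed_gb(1000))   # ~50 GB for a 1 TB volume (~5%), not a linear 100 GB
```

Any exponent below 1.0 produces this behavior: the flash *percentage* falls even as the absolute flash requirement grows.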

Remember: this is a comparison of usable flash capacities. Because we don’t need RAID or over-provisioning for our flash layer, we save another 40 percent of raw flash (which, expressed relative to usable flash, is a savings of roughly 66 percent). Plus, of course, inactive data blocks such as snapshot blocks don’t need to be held in flash at all; depending on snapshot retention requirements, that saves another 20 to 30 percent of flash. Add it all up and Nimble’s Adaptive Flash platform delivers big savings – more than data reduction techniques could reliably provide.
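The raw-versus-usable arithmetic above is easy to verify. Assuming RAID and over-provisioning consume about 40 percent of raw flash in a conventional design (so only 60 percent of raw capacity is usable), normalizing to one unit of usable flash gives:

```python
# Back-of-envelope check of the 40%-raw / 66%-usable claim.
# Assumption: RAID + over-provisioning leave only 60% of raw flash usable.
usable_fraction = 0.60        # usable/raw in a conventional design
target_usable_tb = 1.0        # normalize to 1 TB of usable flash

raw_conventional = target_usable_tb / usable_fraction  # ~1.67 TB raw needed
raw_without_overhead = target_usable_tb                # raw == usable

raw_saved = raw_conventional - raw_without_overhead    # ~0.67 TB
print(f"raw flash saved: {raw_saved / raw_conventional:.0%}")       # 40% of raw
print(f"relative to usable: {raw_saved / target_usable_tb:.0%}")    # ~67% of usable
```

The same 0.67TB of avoided raw flash is 40 percent of the conventional raw requirement, but two-thirds of the usable capacity – hence the two figures in the text.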


Deep Insight into Usage Patterns

InfoSight isn’t just leveraged by Nimble’s support team and data scientists; our customers have ready access to system headroom indicators and latency history via an easy-to-read heat map (without ever installing any monitoring or analytics tools). The InfoSight drill-down tabs provide clear visibility into latency contributors and cache usage per workload – by time of day, day of week, and IO pattern. Customers can also obtain precise upgrade recommendations as they add more workloads to the system. Nimble employees, in turn, have access to sizing tools, based on installed-base distributions, to size systems appropriately for new deployments based on the intended workload.
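The kind of aggregation behind such a heat map is straightforward to sketch. This is not InfoSight’s implementation – just a minimal illustration, assuming telemetry arrives as (timestamp, latency) samples, of bucketing latency by day of week and hour of day into the cells a UI would render as a 7 x 24 grid:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sketch: average per-workload latency per (weekday, hour) cell.
# `samples` is assumed telemetry: (timestamp, latency_ms) tuples.
def latency_heatmap(samples):
    buckets = defaultdict(list)
    for ts, latency_ms in samples:
        buckets[(ts.weekday(), ts.hour)].append(latency_ms)
    # Average each cell; a UI would render this as a 7 x 24 grid.
    return {cell: sum(vals) / len(vals) for cell, vals in buckets.items()}

samples = [
    (datetime(2014, 6, 2, 9, 15), 0.6),   # Monday, 09:xx
    (datetime(2014, 6, 2, 9, 45), 0.8),   # Monday, 09:xx
    (datetime(2014, 6, 3, 14, 5), 1.2),   # Tuesday, 14:xx
]
grid = latency_heatmap(samples)
print(grid[(0, 9)])   # 0.7 ms -- average Monday-morning latency
```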

Adapt to Changes Quickly

Unlike a tiered flash architecture, Nimble’s Adaptive Flash adjusts to workload changes quickly (within seconds, not hours) and with fine-grained efficiency (hot data is managed at 4KB granularity, not megabytes or gigabytes). If you need more flash as new workloads are added to the system, you can add it non-disruptively when needed, increasing the flash-to-disk ratio dynamically.
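To make the granularity point concrete, here is a minimal sketch of a block-level cache. The eviction policy shown (plain LRU) is an assumption for illustration – Nimble’s actual heat-tracking heuristics are more sophisticated – but it shows what “managing hot data at 4KB granularity” means: admission and eviction decisions happen per 4KB block, not per volume or per multi-MB extent.

```python
from collections import OrderedDict

BLOCK = 4096  # 4 KB granularity: decisions are per block, not per LUN

class BlockCache:
    """Illustrative LRU cache keyed by 4 KB block number (hypothetical)."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block number -> cached data

    def read(self, offset: int) -> bytes:
        blk = offset // BLOCK
        if blk in self.blocks:                # hit: promote to most-recent
            self.blocks.move_to_end(blk)
            return self.blocks[blk]
        data = self._read_from_disk(blk)      # miss: fill from the disk tier
        self.blocks[blk] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict the coldest block
        return data

    def _read_from_disk(self, blk: int) -> bytes:
        return b"\x00" * BLOCK                # stand-in for the disk layer
```

Because each cache entry is a single 4KB block, a hot 8KB database row costs two blocks of flash rather than the whole volume, and a cooling block can be evicted within seconds of the workload shifting.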

And what if you want the guarantee of a “thick provisioned” system? What if sub-millisecond average latency every minute of the day is not good enough for a particular workload – what if you want every single read operation to have sub-millisecond latency? Adaptive Flash can give you that, too: just dial the flash ratio all the way up to 100 percent of the data set to “turn off thin provisioning.” The current Adaptive Flash platform supports up to 64TB of usable flash per storage pool. And when I say usable, I mean usable – no assumptions about data reduction rates. Now, it’s true that for transactional workloads we typically average more than 2x savings (thanks to inline compression), which would allow over 120TB of data to be held in flash. Moreover, you still don’t need to waste flash on snapshot data, or on RAID and over-provisioning, because a persistent copy of all writes lives on disk.
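The capacity figures above are consistent with each other. Taking the stated 64TB of usable flash per pool and the “more than 2x” inline compression average for transactional workloads as inputs:

```python
# Back-of-envelope check: usable flash x compression ratio = data held.
usable_flash_tb = 64       # per storage pool, per the text
compression_ratio = 2.0    # "more than 2x" average for transactional data

effective_data_tb = usable_flash_tb * compression_ratio
print(effective_data_tb)   # 128 TB -- i.e., "over 120TB" of data in flash
```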

Thin Provisioning Flash for Live Data

Nimble Adaptive Flash delivers predictable, high performance while conserving expensive flash through “thin provisioning.” It gives end users deep visibility, so they know exactly how their system behaves and can grow on demand. It adapts to workload changes quickly and dynamically, and it lets users add resources non-disruptively at any point. Finally, if you want to disable “thin provisioning” and get a “dedicated flash” experience, we allow that for any storage pool while still using flash more efficiently than a “tiered” or “flash only” system. Combined, these benefits allow the Nimble Adaptive Flash platform to deliver a compelling blend of efficiency, visibility, and control.