By Richard Jooss – Product Management
One of the thorniest problems facing enterprise IT architects today is how best to deploy flash in their data centers – they need a cost-efficient way to deliver the higher performance that flash promises. This challenge is made even more difficult by the ever-changing and often opaque demands that various applications place on their storage systems.
From day one, Nimble’s Adaptive Flash platform has done a great job of providing flash-like performance without the expense of flash-only solutions by leveraging the proprietary CASL (Cache Accelerated Sequential Layout) data layout engine. Today’s release of Nimble OS 2.3 gives users more flexibility by allowing them to choose an all-flash service level for specific workloads and volumes.
By default, our arrays automatically move data in and out of cache based on how “hot” it is, but the addition of all-flash service levels, or “volume pinning” as the techies call it, lets users override those cache admission and eviction algorithms. When one or more volumes are pinned, every block for those volumes is read into the flash layer and is never evicted, and every write also goes into flash. This means every read for a pinned volume is served from cache, guaranteeing consistently low, flash-level latencies. Note that writes to a pinned volume also go to spinning disk, because we want all the advantages of all-flash without the disadvantages: RAID protection is provided by the spinning disk, so there is no need to spend expensive flash on it, and snapshot data is likewise held on spinning disk rather than consuming flash. CASL delivers up to 10,000+ writes per second per 7.2K RPM HDD (hard disk drive), so the HDDs are not a limiting factor.
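To make the behavior concrete, here is a minimal Python sketch of a hybrid cache with volume pinning. This is a toy model under assumed semantics, not Nimble's CASL implementation: pinned volumes' blocks are admitted on every read and write and are never evicted, unpinned blocks follow simple LRU, and every write also lands on disk, which holds the authoritative copy.

```python
from collections import OrderedDict

class FlashCache:
    """Toy model of a hybrid flash/disk cache with volume pinning.

    Hypothetical sketch -- not Nimble's CASL implementation. Pinned
    volumes' blocks are never evicted; unpinned blocks are evicted in
    LRU order. All writes also go to disk (the RAID-protected copy).
    """

    def __init__(self, capacity):
        self.capacity = capacity          # max blocks held in flash
        self.pinned_volumes = set()
        self.cache = OrderedDict()        # (volume, block) -> data, LRU first
        self.disk = {}                    # authoritative copy of every write

    def pin(self, volume):
        self.pinned_volumes.add(volume)

    def write(self, volume, block, data):
        self.disk[(volume, block)] = data  # writes always reach spinning disk
        self._admit(volume, block, data)   # ...and the flash layer

    def read(self, volume, block):
        key = (volume, block)
        if key in self.cache:              # cache hit: flash latency
            self.cache.move_to_end(key)
            return self.cache[key]
        data = self.disk[key]              # miss: read from disk...
        self._admit(volume, block, data)   # ...and admit into flash
        return data

    def _admit(self, volume, block, data):
        self.cache[(volume, block)] = data
        self.cache.move_to_end((volume, block))
        while len(self.cache) > self.capacity:
            victim = self._lru_unpinned()
            if victim is None:             # everything left is pinned
                break                      # (over-pinning exceeds capacity)
            del self.cache[victim]

    def _lru_unpinned(self):
        for key in self.cache:             # OrderedDict iterates LRU-first
            if key[0] not in self.pinned_volumes:
                return key
        return None
```

For example, with a two-block cache and volume "A" pinned, writing a third block evicts the least-recently-used block of the unpinned volume "B" while A's block stays resident, so every subsequent read of A is a cache hit.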
In talking with customers, the two most common questions about volume pinning are whether it will increase the performance of their arrays, and whether it will make their systems more efficient. The answer to both questions is “probably not,” as the arrays are already very effective at keeping the right data in cache.
Volume pinning is really about volume level performance, not system level performance or efficiency. Volume pinning allows one to specify that a particular volume is more important than others and, therefore, that every read for that volume should come from cache – even if that means other volumes will get a lower cache hit rate.
A key question to ask is which volumes or applications you should pin, because there’s no generic answer. As the graphic shows, Nimble’s data scientists have determined, based on thousands of deployed systems, that the typical working set is small compared to the dataset size across applications. So, in general, there is no specific application that requires pinning. SAP HANA is one exception to this rule; Nimble does recommend pinning for SAP HANA to get the absolute best restart times for this critical application.
It makes sense to pin an application when it is important enough to justify all-flash and is more important than the other applications running on the system. From a technical perspective there are also a few interesting cases for pinning. It can make sense to pin a volume that is accessed only sporadically over a longer period. For example, if a volume is accessed only weekly for reporting, much of its data may be pushed out of cache between accesses, and pinning prevents this. Another case is an application that demands low latency for all IO even though it doesn’t perform a large amount of IO; from a system efficiency perspective, this data will probably not stay in cache because of the low access rate. Pinning lets the user ensure it stays in cache, not for system-wide efficiency but for the efficiency of that important application. My colleague Nick Dyer explores these and other issues in greater depth in a related post on the NimbleConnect community.
Volume pinning is a great enhancement to Nimble’s Adaptive Flash platform, giving the user the ability to selectively prioritize particular applications with an all-flash service level without forcing a flash-only approach across the board.