Nimble recently completed a survey of 599 respondents regarding business drivers and challenges in deploying VDI. While the interest in VDI continues to grow, costs and performance were flagged as the biggest storage-related challenges impeding VDI deployments.

Those of us who are familiar with VDI would agree that this is not all that surprising. Storage performance largely determines the responsiveness of virtual desktops, and if the user experience suffers, users will not accept VDI. And in these times of tightening budgets, costs are always subject to close scrutiny.

On closer analysis, though, you realize that addressing cost and performance together is a paradox for traditional storage solutions. Let’s see why.

VDI has some unique workload characteristics. At steady state, VDI behaves predictably, with IOPS tied closely to the profile of the desktop workloads being run. However, in the course of a normal day, VDI infrastructure also goes through boot storms and login storms (periods when many desktop users try to boot or log in at the same time), which cause a peak in read I/Os. There are also virus scans and OS upgrades that occur from time to time, which trigger a spike in writes.
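To make the fluctuation concrete, here is a back-of-the-envelope sketch of steady-state demand versus a boot-storm peak. All of the figures (desktop count, per-desktop IOPS, fraction of desktops booting at once) are illustrative assumptions, not measured numbers:

```python
# Illustrative VDI sizing sketch -- every figure below is an assumption.
desktops = 500
steady_iops_per_desktop = 10    # assumed steady-state demand per desktop
boot_iops_per_desktop = 300     # assumed read IOPS while an OS boots
boot_fraction = 0.2             # assume 20% of desktops boot simultaneously

steady_total = desktops * steady_iops_per_desktop
boot_peak = int(desktops * boot_fraction * boot_iops_per_desktop)

print(f"steady state: {steady_total} IOPS")   # 5000 IOPS
print(f"boot storm:   {boot_peak} IOPS")      # 30000 IOPS
```

Even with these modest assumptions, the boot-storm peak is several times the steady-state load, which is exactly the gap a storage system must absorb.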

Traditional storage wisdom recommends throwing flash and expensive high-RPM drives at the problem to provision for these peak scenarios.

Wouldn’t that then cause storage costs to shoot up? So how does one get around this conundrum?

It would seem the crux of the problem comes down to efficiency. The Oxford dictionary defines efficiency as achieving maximum productivity with minimum wasted effort or expense. In other words, if we can meet the performance demands of VDI “efficiently,” that would by definition mitigate the cost challenge.

While efficiency has been used in the storage industry predominantly in the context of capacity efficiency (i.e., $/GB), it is equally critical to focus on performance efficiency (i.e., $/IOPS). A combination of the two would result in “affordable performance.” Unfortunately, most storage systems today tend to be optimized for one or the other, not both.
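The two metrics pull in different directions, which a simple comparison makes visible. The array names, capacities, IOPS figures, and prices below are made-up assumptions purely for illustration:

```python
# Hypothetical arrays -- capacities, IOPS, and prices are all assumptions.
arrays = {
    # name: (usable_capacity_gb, deliverable_iops, price_usd)
    "capacity_optimized":    (100_000,  10_000,  80_000),
    "performance_optimized": ( 20_000, 100_000, 120_000),
}

# Compute both efficiency metrics for each array.
efficiency = {
    name: (price / gb, price / iops)   # ($/GB, $/IOPS)
    for name, (gb, iops, price) in arrays.items()
}

for name, (per_gb, per_iops) in efficiency.items():
    print(f"{name}: ${per_gb:.2f}/GB, ${per_iops:.2f}/IOPS")
```

With these assumed numbers, the capacity-optimized array wins on $/GB ($0.80 vs $6.00) but loses badly on $/IOPS ($8.00 vs $1.20), mirroring the point that systems tend to be optimized for one metric or the other.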

But wait—that’s not all. We just talked about the tendency of VDI I/Os to fluctuate throughout the day. Clearly, performance efficiency goes beyond $/IOPS alone. A truly efficient solution needs to deliver performance when it is needed without incurring high overhead, i.e., “adaptive” performance. In essence, what you really need is “affordable” and “adaptive” performance.
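The cost of statically provisioning for the peak can be sketched with the same kind of illustrative numbers (again, all assumptions): if an array is sized for a short daily boot storm, most of that paid-for performance sits idle the rest of the day.

```python
# Illustrative utilization math -- all figures are assumptions.
provisioned_iops = 30_000   # statically sized for the boot-storm peak
steady_iops = 5_000         # assumed typical demand the rest of the day
storm_hours = 1             # assume ~1 hour/day near peak demand

# Time-weighted average demand over a 24-hour day.
avg_demand = (steady_iops * (24 - storm_hours) + provisioned_iops * storm_hours) / 24
utilization = avg_demand / provisioned_iops

print(f"average utilization: {utilization:.0%}")  # roughly 20%
```

Under these assumptions, roughly four-fifths of the provisioned performance goes unused on an average day, which is the waste that adaptive performance aims to eliminate.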

Unfortunately, traditional tiered solutions fall short in this regard, owing to the complexity and overhead associated with data classification, data movement, and the granularity of that movement. One could construct a system that is highly optimized around $/GB and $/IOPS but has data trapped in a low-performance tier, rendering the system unresponsive to fluctuating workload demands.

Can you architect a system that truly delivers “affordable” and “adaptive” performance? The answer is yes, and it comes down not just to what resources the storage solution leverages for performance, but how it leverages them.

Let’s shift gears and look at real-world examples of customers who have successfully deployed VDI, and what has worked for them. One approach is to start out with VDI by consolidating virtual desktop workloads with other workloads over the same storage infrastructure.

No longer do you have to purchase siloed infrastructure that has to be managed separately, thus cutting both capex and opex.

Of course, the storage array needs to be able to deliver “adaptive, affordable performance” and simplified management to effectively handle the consolidated workload.

And once you are ready to scale up to higher desktop numbers, rinse, repeat.