VMware first proposed Virtual Volumes (VVols) at VMworld in 2012 as a way to make storage more VM-centric – “making the VMDK a first class citizen in the storage world,” as their architect Cormac Hogan described it. Ever since their demo at last year’s VMworld I’ve been fielding a lot of questions about VVols, most frequently these three:

  • What problems do VVols solve?
  • How do VVols solve them?
  • Are all VVol implementations equivalent?

What’s the Problem?

Application needs in a virtualized environment are on the rise – regardless of whether the application is mission-critical, business-critical, or development and test. As always, the basic storage needs are availability, capacity, performance, and recoverability.

Mission-critical apps have greater needs in these areas than development and test workloads. However, it’s important to remember that an organization’s storage infrastructure must support every application’s basic needs, both today and tomorrow.

Mapping between an application’s capacity, performance, availability, and recovery service level agreements (SLAs) / objectives (SLOs) and the underlying storage solution is typically done manually. For every application, storage administrators must carve out datastores to accommodate specific sets of requirements, such as the number of input/output operations per second (IOPS), the number of snapshots and replications per hour or minute, the number of snapshots to retain, and block size optimization.

In addition, the storage administrator must be knowledgeable about what’s been provisioned, what needs to be provisioned, and what can be provisioned. VMware’s vSphere storage features like Storage I/O Control (SIOC), Storage Distributed Resource Scheduler (SDRS), and Profile-Driven Storage alleviate some of these challenges. But until now, it hasn’t been possible to automate these functions in a truly policy-driven way. This is the huge problem that VVols are designed to solve.

How Do VVols Address This Problem?

VVols provide a protocol-agnostic, policy-driven framework that allows application needs to be captured in the form of a “policy.” That policy, in turn, is matched against the storage platform’s capabilities. This kind of mapping allows correct placement of a specific workload on a specific set of storage containers, with proper monitoring and enforcement of policy compliance. Problem solved.
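To make the idea concrete, here is a minimal sketch of policy-to-capability matching. The names and fields below are hypothetical illustrations, not VMware’s actual VASA or SPBM APIs; the point is only the general shape of the mechanism: containers advertise capabilities, a policy captures application requirements, and placement is the set of containers that satisfy every requirement.

```python
# Illustrative sketch only -- the names below are hypothetical and do not
# reflect VMware's actual VASA/SPBM interfaces.

# Capabilities each storage container advertises to vSphere.
containers = [
    {"name": "gold",   "max_iops": 50000, "snapshots_per_hour": 60, "replication": True},
    {"name": "silver", "max_iops": 10000, "snapshots_per_hour": 4,  "replication": False},
]

# Policy captured from the application's SLAs/SLOs.
policy = {"min_iops": 20000, "snapshots_per_hour": 12, "replication": True}

def compliant(container, policy):
    """Return True if the container can satisfy every policy requirement."""
    return (container["max_iops"] >= policy["min_iops"]
            and container["snapshots_per_hour"] >= policy["snapshots_per_hour"]
            and (container["replication"] or not policy["replication"]))

# Valid placements are the containers that meet all requirements.
placements = [c["name"] for c in containers if compliant(c, policy)]
print(placements)  # only "gold" meets all three requirements
```

The same comparison can run continuously after placement, which is what turns a one-time provisioning decision into ongoing compliance monitoring.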

Are All VVol Implementations Equivalent?

Given that each virtual machine’s virtual disk (VMDK) is a first-class citizen on the storage array in a VVol world, the architecture needs to accommodate the unique workload needs of each and every type of application. Block size, for example, is unique across different applications – Microsoft recommends 32KB for its Exchange mailbox database, 16KB for logs, and 8KB for SQL Server.

A “one block size fits all” approach in traditional storage solutions is possible, but less than ideal. It means that the architecture can be optimized for one application workload but not others. In virtualized environments mixed workloads are the norm. These days, incumbent vendors are accommodating all kinds of workloads without properly optimizing for each of them.

It’s the same with data protection and recovery. Array-based snapshots must be highly efficient so that backup and recovery are instantaneous for each virtual machine. This is where the disadvantages of the “copy-on-write” snapshot technology used by traditional vendors become much more apparent. The extra copies needed for preserving snapshots often result in significant space constraints and performance degradation. Many vendors have taken a layered approach, so that new “advanced” features like redirect-on-write are layered atop the old architecture. But this approach just adds further overhead, and complicates storage provisioning to accommodate VVol integration. So no, not all VVol implementations are equivalent.
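The overhead difference described above can be sketched with a toy model (not any vendor’s actual implementation): after a snapshot is taken, the first overwrite of a block under copy-on-write costs a read plus two writes, while redirect-on-write only repoints metadata and performs a single write.

```python
# Toy model contrasting the I/O cost of copy-on-write vs. redirect-on-write
# snapshots for a single block overwrite. Purely illustrative.

def copy_on_write(blocks, snapshot, addr, data):
    """Preserve the old block by copying it aside, then overwrite in place.
    Cost: one read + two writes for the first overwrite after a snapshot."""
    io = 0
    if addr not in snapshot:
        snapshot[addr] = blocks[addr]  # read the old block, write the copy aside
        io += 2
    blocks[addr] = data                # overwrite the original location
    io += 1
    return io

def redirect_on_write(blocks, snapshot, addr, data):
    """Leave the old block untouched; write the new data to a fresh location
    and repoint metadata. Cost: one write."""
    if addr not in snapshot:
        snapshot[addr] = blocks[addr]  # metadata pointer update only, no data copy
    blocks[addr] = data                # the write is logically redirected
    return 1

blocks = {0: "v1"}
print(copy_on_write(dict(blocks), {}, 0, "v2"))      # 3 I/Os
print(redirect_on_write(dict(blocks), {}, 0, "v2"))  # 1 I/O
```

Multiplied across every VM’s steady-state write load, that per-write penalty is exactly the space and performance degradation the paragraph above describes.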

In future blog posts we’ll go into more detail on some of the work we’re doing to integrate with VVols, especially at the application optimization level.

Written by:
Wendel Yu