
By Jeff Feierfeil – Product Management

For IT architects in corporate data centers and commercial data service providers, the key storage issues center around the integrity, availability and scalability of different data storage systems – and the security of the data they store.

These concerns have increased substantially with the growth of cloud and mobile computing, spurring many IT leaders to call in their security and compliance teams to assess the measures being taken to protect corporate data, including how well their storage systems are safeguarded. This is already happening in all kinds of organizations, especially in finance, health care, the legal industry, and all levels of government.

One of the most basic ways of securing data is to implement a storage solution that provides “encryption of data at rest” – encryption of data stored in persistent storage media such as disks and tapes. The threat vector here is the theft of the entire storage array and/or its HDDs (hard disk drives) and SSDs (solid state drives). Encryption ensures the secrecy of the stored data when any number of drives (HDDs or SSDs) are stolen or removed from a storage system, or when the entire array is stolen.

Nimble Storage now has more than 5,500 customers, and from talking with them about data security, we discovered that a variety of factors are causing them to look at encrypting their data:

  • New mandates coming from the corporate security office or a parent company requiring that all content be encrypted due to the nature of the data (e.g., health records) or another internal company standard. This could be required in scenarios where storage is located in a remote (lights-out) data center or a co-lo (co-location facility) where IT staff may not be physically present and existing security safeguards are insufficient.
  • Check-box compliance policies might be needed to satisfy regulatory or funding requirements. For example, health care providers may be required to encrypt data at rest to qualify for government funding. Customers working in the defense or Federal government sectors may have no choice but to encrypt, oftentimes with much more stringent requirements such as FIPS (more on this later).
  • Ensuring data secrecy not only when a storage system or a drive within it is stolen, but also for returned or scrapped material such as a failed drive. Optional “non-return” disk maintenance options can mitigate this, but come with additional expense.
  • The ability to instantly and irrecoverably shred (destroy) data at a system or volume level by obliterating encryption keys is essential for customers who have unique security requirements for specific data sets, or service providers who want to be able to do this for an individual client requesting guaranteed data erasure.
  • Ensuring data secrecy against “sniffing” on replication streams over the WAN (wide area network), even in the presence of a VPN (virtual private network). Typically, customers use VPNs to prevent external sniffing, but this only secures the session. Additional measures may require sending the data already encrypted, further preventing any potential data sniffing. This can be achieved by having the storage system send the encrypted data either independently of encrypting the data at rest, or sending the data in its original encrypted format.
  • Ensuring data secrecy against “sniffing” on reads and writes over the SAN (storage area network). While this is a more comprehensive measure than simply encrypting the data at rest on the storage system, it is also more heavy-handed in terms of its cost, performance impact and compromises on some of the more advanced data management aspects of modern storage systems. This is typically achieved with either a secure channel such as IPsec, host-side encryption such as an encrypting file system or volume encryptor (e.g., BitLocker), or an application encryption feature such as database encryption. A secure channel is complementary to encryption of data at rest. Host-side encryption, on the other hand, would remove the need for storage-side encryption, but customers often avoid it because of inefficient implementations (e.g., reduced performance) and, more importantly, because it thwarts data reduction mechanisms such as compression and deduplication.
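The last point – that encrypting on the host thwarts data reduction – is easy to demonstrate. The sketch below uses a deliberately toy keystream cipher (SHA-256 in counter mode, not secure, purely a stand-in for real encryption) to compare the two orderings: compress-then-encrypt, as a storage array can do, versus encrypt-then-compress, as happens when the host encrypts first.

```python
import hashlib
import zlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (SHA-256 in counter mode). NOT secure --
    it only stands in for real encryption here, since real ciphertext
    is likewise indistinguishable from random noise."""
    out = bytearray()
    for i in range(0, len(data), 32):
        keystream = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], keystream))
    return bytes(out)

plaintext = b"the quick brown fox jumps over the lazy dog " * 256  # very compressible

# Storage-side order: compress first, then encrypt -> savings preserved.
compress_then_encrypt = toy_encrypt(b"key", zlib.compress(plaintext))

# Host-side order: encrypt first, then compress -> savings destroyed,
# because the ciphertext has no exploitable redundancy left.
encrypt_then_compress = zlib.compress(toy_encrypt(b"key", plaintext))
```

Running this, the compress-then-encrypt result is a small fraction of the original size, while the encrypt-then-compress result is essentially as large as the plaintext.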

As Nimble has moved further and further into the enterprise, encryption of data at rest has become an increasingly important requirement for the customers we’re serving. As a result, we’ve added it into our latest Nimble OS release, version 2.3.

Based on conversations with people at a number of enterprise customers, we set an ambitious set of design and implementation goals for encryption of data at rest:

  • Must be simple (no expensive hardware upgrade costs)
  • Should follow the same Nimble licensing/pricing model by being available at zero cost to new and existing customers without any license, via a simple, non-disruptive software upgrade
  • Must be supported on all platforms, including legacy hardware to the extent technically possible
  • Must support encryption of data on both HDDs and SSDs within the array and external shelves
  • Must preserve compression, or any future data reduction technologies
  • Should work seamlessly with existing advanced data management features such as snapshots, replication and zero-copy cloning
  • Should make the encryption process easy to manage, without significant performance impact
  • Should be extensible enough to accommodate multi-tenant environments
  • Should incorporate additional security features or certifications for enterprises and entities with heightened security requirements

These detailed goals helped us right from the outset, as we confronted the problem of how the encryption would actually be done. It turns out there are two basic approaches to encryption, and the enterprise data storage industry is a mixed bag. Most vendors have chosen to take a pretty basic approach, called Full Disk Encryption (FDE), though a few have chosen a more complex but more feature-rich “CPU-based” solution built around controllers with on-board central processing units.

At Nimble, we looked at both and realized that FDE made things easy from an implementation standpoint – you just order newer disks – but it compromised on many of the objectives we had set out to meet. Aspects such as cost (requiring new FDE-capable HDDs and/or SSDs), support for legacy non-FDE platforms, limited management capabilities (e.g., key management) and FIPS (Federal Information Processing Standards) compliance made FDE look like a lower-value proposition for our customers. Proponents of FDE will point out its performance advantage over a CPU-based approach, but that’s not nearly as much of a factor today thanks to modern microprocessor technology, and FDE lacks many of the additional flexibilities offered by a CPU-based system.

Taking a CPU-based approach to encryption allows Nimble to take advantage of the encryption capabilities built into the CPUs on the arrays we’ve been shipping for the last few years. In addition to meeting the objectives we initially set out to address, this lets us do some interesting and beneficial new things:

  • Encryption capability is instantly available as part of a simple software upgrade, enabling it on the whole system or per volume. Contrast this with the heavy-handed approach of purchasing new FDE-enabled HDDs/SSDs and then needing to migrate data to these new drives.
  • An encryption key per volume, as opposed to per disk, provides per-volume choice – you can pick and choose which volumes deserve encryption. This also enables per-volume or per-tenant shredding: simply deleting a volume removes access to the key associated with that volume.
  • Encrypted data is preserved over replication. Specifically, data which is encrypted on the primary array is replicated in an encrypted format to its downstream partner, allowing an extra layer of security that cannot be achieved with FDE.
  • Encryption that preserves the benefits of space reduction. Encryption done post-compression preserves space savings and works seamlessly with space-efficient snapshots and zero-copy clones, as well as volumes that are striped across multi-array pools in a cluster.
  • Easier integration with third-party key management, allowing for a fuller and richer set of key management capabilities.
  • Candidacy for FIPS certification, since we control the encryption library and it can be fully audited and validated by an independent party.
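The per-volume shredding described above can be sketched in a few lines. The `VolumeKeyTable` class and toy keystream cipher below are illustrative inventions, not Nimble’s implementation; the point is only that destroying a 32-byte key makes an arbitrarily large volume unreadable, instantly and without rewriting any data.

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (illustration only -- not secure).
    out = bytearray()
    for i in range(0, len(data), 32):
        ks = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], ks))
    return bytes(out)

class VolumeKeyTable:
    """Hypothetical per-volume key table: shredding destroys only the key."""

    def __init__(self) -> None:
        self._keys: dict[str, bytes] = {}

    def create_volume(self, name: str) -> None:
        self._keys[name] = os.urandom(32)  # fresh data key per volume

    def write(self, name: str, data: bytes) -> bytes:
        return toy_encrypt(self._keys[name], data)

    def read(self, name: str, blob: bytes) -> bytes:
        return toy_encrypt(self._keys[name], blob)  # same keystream decrypts

    def shred(self, name: str) -> None:
        # Deleting the key makes every block of the volume unreadable,
        # without touching the (possibly huge) data itself.
        del self._keys[name]

vault = VolumeKeyTable()
vault.create_volume("tenant-a")
blob = vault.write("tenant-a", b"confidential records")
assert vault.read("tenant-a", blob) == b"confidential records"
vault.shred("tenant-a")  # guaranteed erasure: the key is gone
```

After `shred()`, any attempt to read "tenant-a" fails, which is exactly the per-tenant guaranteed-erasure behavior a service provider would want.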

To implement this architecture, we started with a FIPS-certified version of OpenSSL, leveraging the on-board AES-NI (Intel’s Advanced Encryption Standard New Instructions) instruction set on the newer Sandy Bridge CPUs. The built-in AES-NI support provides such high encryption throughput that the actual impact of doing the encryption is negligible.

These newer multi-core processors execute the heavyweight AES instructions in hardware, so performance concerns are no longer an issue. In fact, the graph below demonstrates the power of the newer processors by comparing workloads with AES-NI enabled versus disabled. Clearly, the case for leveraging modern processors in favor of richer encryption features is compelling.

[Graph: workload throughput with AES-NI enabled vs. disabled]

Encryption is done using AES-256 in XTS cipher mode, a cipher mode designed specifically for storage; both are FIPS 140-2 approved algorithms. The data encryption keys are stored locally in a table encrypted using a master key, which is in turn protected by a pass-phrase that the user creates when initializing encryption for the first time.
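That key hierarchy – a pass-phrase protecting a master key, which protects the per-volume data keys – can be sketched as follows. The KDF, its parameters, and the XOR-based wrap are illustrative assumptions to keep the sketch dependency-free, not Nimble’s actual scheme; a real system would use an authenticated key wrap.

```python
import hashlib
import os

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    # Derive a key-encryption key from the pass-phrase. The KDF and its
    # parameters here are illustrative, not Nimble's actual choices.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

salt = os.urandom(16)
master_key = os.urandom(32)

# The master key is persisted only in wrapped (encrypted) form.
# XOR with the derived KEK stands in for a real authenticated wrap.
kek = derive_kek("correct horse battery staple", salt)
wrapped_master = bytes(m ^ k for m, k in zip(master_key, kek))

# When the pass-phrase is re-entered, the KEK is re-derived and the
# master key recovered, which in turn unlocks the data-key table.
recovered = bytes(w ^ k for w, k in
                  zip(wrapped_master, derive_kek("correct horse battery staple", salt)))
assert recovered == master_key
```

Note that only the wrapped form ever touches persistent storage; without the pass-phrase, the stored table of data keys is just ciphertext.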

Two modes of operation relative to the keys can be selected when initializing encryption services – Available and Secure. They differ only in how the array behaves when it boots up after a clean or unclean shutdown, or reboots. Highly security-conscious customers may choose Secure mode, giving up the convenience of not having to manually enter the pass-phrase after a shutdown or reboot. The upside is that it locks access to all volumes by hosts until the pass-phrase is entered, thus thwarting any attempt to access a stolen array. This can also be a temporarily secure way of shipping an array to a remote location without concern of theft in transit. Available mode is the more common one, and has no strict requirement for entering the pass-phrase when the array reboots. This mode is useful for operation in a physically secure location, and enables “lights-out” operation. Data still remains encrypted on the HDDs/SSDs, but the array boots completely, continuing to allow the same hosts to access volumes as before.
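The two modes boil down to a small state machine. The class below is a hypothetical model of the behavior just described, not Nimble’s code: in both modes the data on disk stays encrypted; the modes differ only in whether host access after a boot is gated on the pass-phrase.

```python
class EncryptedArray:
    """Hypothetical model of the Available vs. Secure boot behavior."""

    def __init__(self, mode: str, passphrase: str):
        assert mode in ("available", "secure")
        self.mode = mode
        self._passphrase = passphrase  # a real array keeps only a derived key
        self._unlocked = False

    def boot(self) -> None:
        # Available mode: the array unlocks itself after a reboot.
        # Secure mode: volumes stay locked until the pass-phrase arrives.
        self._unlocked = (self.mode == "available")

    def enter_passphrase(self, attempt: str) -> None:
        if attempt == self._passphrase:
            self._unlocked = True

    def host_io_allowed(self) -> bool:
        # Data is encrypted on the HDDs/SSDs in both modes; this only
        # gates whether hosts can reach their volumes.
        return self._unlocked

arr = EncryptedArray("secure", "s3cret")
arr.boot()
assert not arr.host_io_allowed()   # locked until the pass-phrase is entered
arr.enter_passphrase("s3cret")
assert arr.host_io_allowed()
```

A stolen Secure-mode array boots into the locked state, which is what makes it safe to ship one through untrusted hands.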

Encryption also enhances the security of replicated data. One aspect of security that is often overlooked is the secure transmission of data over the wire and residing at a target site in a DR (disaster recovery) scenario. With Nimble replication, blocks are sent to the downstream partner in encrypted form and the keys for volumes are transmitted securely using a wrapping key shared between the replication partners. The wrapping key is generated using a secure transaction over SSL (secure sockets layer) that is authenticated using the partner’s shared secret.

Looking toward the future, the Nimble data security architecture is ready for many additional encryption features. For example, FIPS 140-2 validation is currently under way for the cryptographic module we use, and we’ll be publishing our certification soon.

Many Nimble customers, primarily those in federal, state, or local government, but also some in finance and healthcare, have these stringent requirements. We’re currently well into the FIPS 140-2 level-1 certification process, with Nimble now on the “Cryptographic Module Validation Program FIPS 140-1 and FIPS 140-2 Modules In Process” list.

For organizations seeking to improve their data security, Nimble’s encryption of data at rest is a welcome new capability.
