Why Converge Primary and Backup Storage?

As Varun mentioned in his introduction, since Nimble’s inception we have spent hundreds of hours meeting with IT organizations of all sizes. We engaged them in an open dialogue to understand their challenges, listening closely to the issues they raised about storage and backup. These candid discussions gave us a deeper understanding of their daily challenges, as well as some chronic pain points. Among the most pervasive were rapid growth in storage requirements and expense, the associated growth in the cost and complexity of backup, and a less-than-ideal level of disaster recovery preparedness. To elaborate:

  • Even during economic slowdowns, most companies continue to experience rapid growth in primary storage capacity requirements. This in turn drives the need for expensive high-performance primary storage, typically powered by high-RPM drives. Although there is awareness of flash storage, most consider it a high-end solution reserved for only the most pressing performance challenges. A broad cross-section of customers is also frustrated with vendors’ pricing models, which they view as excessive.
  • Despite spending heavily on backup, including the adoption of disk-based backup technologies, most IT groups say the backup process remains painfully resource intensive. The daily process is based on identifying changed data on application servers and periodically copying large quantities off to backup devices (and even larger quantities each weekend). All of this relies on many moving parts from disparate vendors, and it is one of the less reliable processes IT teams manage. It also puts a severe load on servers, networks, and administrators, forcing teams to designate long backup windows. Some organizations do use technologies like snapshots for short-term recovery, but most are very limited in how many they can retain because of the capacity snapshots consume on expensive primary storage.
  • Many, if not most, organizations believe their disaster recovery plan is inadequate but struggle to improve it. This is partly because they find it hard to justify investments in something management perceives as a low-probability event. But some of the gaps are certainly due to the cost and complexity of common DR solutions, and to practical constraints like limited WAN bandwidth. As a result, many organizations tie their DR scheme to the backup process (either by shipping tapes offsite or by replicating their backups). However, the resource intensity of the backup process means many organizations can afford only daily backups, limiting the recovery points to which an application can be restored. It also means the DR copy is in a backup format, which slows down and complicates the actual recovery.

Although myriad technologies try to address one aspect or another of this picture, few existing solutions attempt to fundamentally simplify it. However, in our conversations we heard over and over again that most organizations were open to considering new approaches that cohesively addressed their chronic pain points, even if those approaches deviated from conventional wisdom in some respects. In related blog posts such as this one, we describe how we shaped our converged storage and backup approach to accomplish just that.