RAID is Certainly not Dead But Its Future Looks Small

The month of October saw a sizeable uptick in the readership of a blog entry that appeared nearly two years ago on DCIG's website on the topic of data loss on SATA storage systems. While this blog entry received a fair amount of interest when it was first published, exactly what prompted a resurgence of interest in the topic this month is unclear. Maybe it is just an anomaly driven by the whimsical interests of Internet users who, for whatever reason, are searching on this topic, finding this blog entry and reading it. However, it may be a more ominous indication that the SATA disk drives which became popular in enterprises 2 – 3 years ago are wearing out, and that the traditional RAID technologies used to protect them are failing. If so, users are looking for information as to why RAID, in some circumstances, is not doing the job in their environment.

The death of RAID (or at least RAID 5) has been forecast by some analysts before. But even now, when I look at the features of new storage arrays, the number of RAID options they support is always prominently mentioned. A good example: earlier this week Overland Storage announced its new SnapSAN S1000, which offers at least 10 different ways that RAID can be configured (including RAID 5) on a storage array that starts under $10K in price. So do not tell me that RAID is dead or even on its last legs.

But there is no disputing that the capacities of SATA disk drives are expected to cross the 4, 8, 16 and 32 TB thresholds over the next decade. As that occurs, it becomes questionable whether current RAID technologies are adequate to protect disk drives of that size. If the increased interest in DCIG's 2008 blog entry is any indication, the answer would appear to be no.

So am I predicting the death of RAID? Clearly I am not. RAID technology is as much a part of the storage landscape as tape, and odds are that innovation will continue to occur in RAID that keeps it a relevant technology for the foreseeable future. Yet it was clear from speaking with a few users and storage providers in attendance at Storage Networking World (SNW) in Dallas, TX, earlier this month that new approaches to protecting data stored on larger capacity SATA disk drives are going to be needed in the next decade to meet their anticipated needs.

One company that I met with at length while at SNW was Amplidata. It is already innovating in this space to overcome two of the better known limitations of RAID:

The increasing length of time to rebuild larger capacity drives. Rebuild times for 2 TB drives are already known to take four hours or longer to complete. The arithmetic alone dictates this: even at a sustained 140 MB/sec, rewriting 2 TB of data takes roughly four hours, and I have heard that in some cases, depending on how busy the storage system is, a rebuild of a drive this size can take days to finish.

The need to keep all disks in a RAID group spinning, so no power savings can be realized. Spin down is likely to become more important in the years to come as more data is archived to disk. Intelligently managing data placement is likely to become a function of the storage array, as opposed to the software, in order to facilitate the spin down of these drives.

What Amplidata's AmpliStor does is distribute and store data redundantly across a large number of disks. The algorithm that AmpliStor uses first puts the data into an object and then stores the data across multiple disks in the AmpliStor system.
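To make the dispersal idea concrete, here is a minimal, self-contained Python sketch of a (k, n) erasure code built on polynomial interpolation over a small prime field. It is purely illustrative of the general technique, not Amplidata's actual algorithm: k data bytes become n fragments spread across n disks, and any k surviving fragments are enough to rebuild the original.

```python
# Toy (k, n) erasure code over GF(257), illustrating the general dispersal
# idea behind systems like AmpliStor (a sketch of the concept only, not
# Amplidata's algorithm): k data symbols become n fragments, and ANY k
# surviving fragments suffice to rebuild the original data.

P = 257  # prime modulus; one byte of data fits in each field element

def _lagrange_eval(points, x):
    """Evaluate the unique degree < k polynomial through `points` at x (mod P)."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(data, n):
    """Turn k data bytes into n fragments, each a (position, value) pair."""
    k = len(data)
    points = list(zip(range(1, k + 1), data))        # data = values at x = 1..k
    parity = [(x, _lagrange_eval(points, x)) for x in range(k + 1, n + 1)]
    return points + parity

def decode(fragments, k):
    """Rebuild the k data bytes from ANY k surviving fragments."""
    assert len(fragments) >= k, "too few fragments survive"
    pts = fragments[:k]
    return bytes(_lagrange_eval(pts, x) for x in range(1, k + 1))

data = b"RAID"                                 # k = 4 data bytes
frags = encode(data, n=7)                      # 7 fragments: survives any 3 losses
survivors = [frags[i] for i in (1, 3, 4, 6)]   # pretend 3 of 7 disks failed
assert decode(survivors, k=4) == data
```

This "any k of n" property is what lets such a system ride through multiple failed or spun-down disks: as long as k fragments remain readable somewhere, nothing needs to be rebuilt from a single surviving mirror or parity stripe.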
By storing the data as an object, Amplidata can reconstruct the original data from any of the disks on which the data within the object resides. This technique eliminates the growing concern about the rebuild times associated with large disk drives, since the original data can be retrieved and reconstructed even if one, two or more disks fail. Also, should disk drives in the system be spun down to save energy, they do not need to be spun up to retrieve needed data, since the data can be retrieved and reconstructed from other spinning disks on the system.

While it is unlikely that AmpliStor or its underlying technology will be widely adopted in the next few years, the simple fact is that the increasing capacities of disk drives will eventually make technologies like the one embedded inside AmpliStor a prerequisite in almost any high capacity enterprise storage system. So in the same way that enterprise storage vendors started to adopt RAID 6 about five years ago to prevent the loss of data should two SATA drives fail, look for some variation of the technology that Amplidata has implemented in AmpliStor to find its way into enterprise storage systems over the next decade to prevent the loss of data on these ever larger disk drives. At the same time, expect RAID to find a new home on smaller storage arrays, where the level of protection and speed of recovery that RAID provides should be more than adequate.

My introduction to HP's Converged Infrastructure

As 3PAR is integrated into HP, there is a lot of new stuff for us to figure out. One of the most important concepts at HP is Converged Infrastructure (CI). The basic idea of CI is to maximize a customer's investment in technology by consolidating resources into common, modular building blocks. 3PAR customers are already accustomed to this idea from our InServ storage systems, but CI goes far beyond 3PAR's storage vision by including server and network technologies. It's a big idea with huge implications for product engineering, manufacturing, maintenance and support – and it raises the importance of software in data center solutions.

InMage vContinuum Taps into VMware to Provide a Near Zero Impact Backup and Recovery Solution for SMBs

Small and medium businesses (SMBs) are rapidly moving towards virtualizing their physical servers using VMware. But as they do so, they are also looking to minimize the cost, complexity and overhead that the backup of VMware servers introduces while increasing their ability to recover their newly virtualized applications. It is these concerns that InMage's new vContinuum software addresses by using a new technique to tap into VMware that provides near zero impact backups with near real time recoveries.

Right now, Gartner estimates that as many as 28% of all physical servers run virtualization, with that percentage expected to grow to 50% by 2012.
Further, it is expected that server virtualization among SMBs (companies with fewer than 999 employees and less than $500 million in revenue) will grow much faster than the overall market during this same period. The problem these SMBs encounter as they adopt server virtualization is automating the non-disruptive protection and recovery of VMware's guest OSes. While backup software providers have adapted to take advantage of new backup techniques as well as new features found in VMware vSphere, these approaches still have the following shortcomings:

Array based snapshots. Snapshots move backup overhead off the host but require the deployment of an external storage array of the same or a similar kind – not always a cost-effective or practical option for SMBs.

VMware vStorage APIs for Data Protection. vSphere's vStorage APIs can create snapshots on the vSphere host without the need for external storage. However, backing up the snapshot still creates overhead on the primary storage used by the VMware host.

vSphere Changed Block Tracking (CBT). CBT was also introduced with VMware vSphere 4.0 and tracks changes to blocks on a VM since its last backup. While it eliminates the need for external storage, it is not enabled by default, it incurs server overhead when turned on, and it is not widely supported by backup applications.

VMware Site Recovery Manager (SRM). Primarily used for highly available and clustered configurations, SRM requires compatible arrays at both the local and remote sites, with separate management of replication and recovery policies. This means additional learning curves and load on already constrained IT resources, plus the storage requirements alone usually put the cost of this solution beyond the reach of most SMBs.

Each of these techniques introduces varying levels of cost, complexity or overhead – or, in some cases, a combination of all three – in order to be implemented in a VMware environment. It is these challenges that InMage's new vContinuum software addresses. vContinuum is based upon InMage's existing and proven enterprise data protection software that is already in use by many enterprise organizations and resold by enterprise storage providers. However, InMage has built vContinuum specifically for VMware implementations in SMB environments in the following ways:

First, vContinuum integrates with VMware at the hypervisor level to discover, provision and manage protection policies at VM granularity.
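InMage has not published the internals of this discovery step, but as a rough illustration of what hypervisor-level discovery looks like, the sketch below uses the open-source pyVmomi bindings for the vSphere API to enumerate every VM in an inventory, along with facts a protection policy could key off of (such as whether CBT is already enabled). The host name and credentials are placeholders, and this is not InMage's code.

```python
# Illustrative only: enumerate VMs at the vSphere API level with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab sketch only: skip cert checks
si = SmartConnect(host="vcenter.example.com",    # placeholder vCenter/ESX host
                  user="administrator",
                  pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the whole inventory and collect every VirtualMachine object.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Per-VM facts a protection policy could be provisioned against,
        # e.g. power state and whether Changed Block Tracking is enabled.
        cbt = vm.config.changeTrackingEnabled if vm.config else None
        print(vm.name, vm.runtime.powerState, "CBT:", cbt)
finally:
    Disconnect(si)
```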