RAID is Certainly not Dead But Its Future Looks Small

The month of October saw a sizeable uptick in the readership of a blog entry that appeared nearly two years ago on DCIG’s website on the topic of data loss on SATA storage systems. While this blog entry received a fair amount of interest when it was first published, exactly what prompted a resurgence of interest in this topic this month is unclear.

Maybe it is just an anomaly driven by the whimsical interests of Internet users who, for whatever reason, are searching on this topic, finding this blog entry and then reading it.

However, it may be a more ominous indication that SATA disk drives, which became popular in enterprises 2 – 3 years ago, are wearing out and that the traditional RAID technologies used to protect them are failing. As a result, users are looking for information as to why RAID, in some circumstances, is not doing the job in their environment.

The death of RAID (or at least RAID 5) has previously been forecast by some analysts. But even now, when I look at the features of new storage arrays, the number of RAID options that they support is always prominently mentioned.

A good example came earlier this week when Overland Storage announced its new SnapSAN S1000. It offers at least 10 different ways that RAID can be configured (including RAID 5) on a storage array that starts under $10K in price, so do not tell me that RAID is dead or even on its last legs.

But there is no disputing that the capacities of SATA disk drives are expected to cross the 4, 8, 16 and 32 TB thresholds over the next decade. As that occurs, it becomes questionable whether current RAID technologies are adequate to protect drives of this size. If the increased interest in DCIG's 2008 blog entry is any indication, the answer would appear to be no.

So am I predicting the death of RAID? Clearly I am not. RAID technology is as much a part of the storage landscape as tape and odds are that innovation will continue to occur in RAID that will make it a relevant technology for the foreseeable future.

Yet it was clear from speaking to a few users and storage providers in attendance at Storage Networking World (SNW) in Dallas, TX, earlier this month, that new approaches to protecting data stored on larger capacity SATA disk drives are going to be needed in the next decade in order to meet their anticipated needs.

One company that I met with at length while at SNW was Amplidata. It is already innovating in this space to overcome two of the better-known limitations of RAID:

- The increasing length of time to rebuild larger capacity drives. Rebuild times for 2 TB drives are already known to take four hours or longer to complete, though I have heard that in some cases, depending on how busy the storage system is, a rebuild of a drive of this size can take days to finish. (A quick back-of-the-envelope check follows this list.)
- The need to keep all disks in a RAID group spinning, so no power savings can be realized. Spin down is likely to become more important in the years to come as more data is archived to disk, and intelligently managing data placement is likely to become a function of the storage array itself in order to facilitate the spin down of these drives.
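To put the rebuild-time claim in the first point above into perspective, here is a quick back-of-the-envelope calculation. The ~140 MB/s sustained rebuild rate is an assumption for an uncontended 7,200 RPM SATA drive; on a busy array the effective rate is far lower, which is how rebuilds stretch into days.

```python
# Rough rebuild-time arithmetic; the rebuild rate is an assumption and real
# rates vary with drive type and how busy the array is.
def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    capacity_mb = capacity_tb * 1_000_000   # decimal TB -> MB, as drive vendors count
    return capacity_mb / rebuild_mb_per_s / 3600

for tb in (2, 4, 8, 16, 32):
    # ~140 MB/s assumed uncontended rate; 2 TB works out to roughly 4 hours
    print(f"{tb:>2} TB drive: ~{rebuild_hours(tb, 140):.1f} hours at full speed")
```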

So what Amplidata's AmpliStor does is distribute and store data redundantly across a large number of disks. The algorithm that AmpliStor uses first puts the data into an object and then stores the data across multiple disks in the AmpliStor system. By storing the data as an object, Amplidata can reconstruct the original data from a subset of the disks on which the object's data resides.

This technique eliminates the growing concerns about the rebuild times associated with large disk drives since the original data can be retrieved and reconstructed even if one, two or even more disks fail. Also, should disk drives in the system be spun down to save energy, they do not need to be spun up to retrieve needed data since the data can be retrieved and reconstructed from other spinning disks on the system.
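For readers who want to see the general idea in action, below is a small, purely illustrative Python sketch of an (n, k) erasure code over a prime field: each object is split into fragments plus parity fragments spread across n "disks," and any k of them are enough to rebuild the original data. This is a toy built to show the concept, not Amplidata's algorithm; every name and parameter in it is an assumption.

```python
# Toy (n, k) erasure-coding sketch: any k of n fragments reconstruct the data.
# Illustrative only -- this is not AmpliStor's implementation.

P = 257  # small prime field; each data symbol is one byte (0-255)

def _lagrange_eval(points, x):
    """Evaluate the unique degree < k polynomial through `points` at `x` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * (x - xj)) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data: bytes, k: int, n: int):
    """Split `data` into groups of k symbols and emit n fragments ("disks")."""
    fragments = [[] for _ in range(n)]
    for off in range(0, len(data), k):
        group = list(data[off:off + k])
        group += [0] * (k - len(group))              # pad the final group
        pts = [(x + 1, group[x]) for x in range(k)]  # systematic: first k disks hold raw data
        for disk in range(n):
            fragments[disk].append(_lagrange_eval(pts, disk + 1))
    return fragments

def decode(available, k, length):
    """Rebuild the original bytes from any k surviving (disk_index, fragment) pairs."""
    out = []
    for g in range(len(available[0][1])):
        pts = [(idx + 1, frag[g]) for idx, frag in available[:k]]
        out.extend(_lagrange_eval(pts, x + 1) for x in range(k))
    return bytes(out[:length])

if __name__ == "__main__":
    k, n = 4, 7                                        # tolerates up to 3 lost fragments
    payload = b"object data spread across many disks"
    frags = encode(payload, k, n)
    survivors = [(i, frags[i]) for i in (6, 2, 5, 0)]  # any k fragments will do
    assert decode(survivors, k, len(payload)) == payload
```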

While it is unlikely that AmpliStor or its underlying technology will be widely adopted in the next few years, the simple fact is that increasing disk drive capacities will eventually make technologies like the one embedded in AmpliStor a prerequisite in almost any high capacity enterprise storage system.

So in the same way that enterprise storage vendors started to adopt RAID 6 about five years ago to prevent the loss of data should two SATA drives fail, look for some variation of the technology that Amplidata has implemented in its AmpliStor to begin to find its way into enterprise storage systems over the next decade to prevent the loss of data on these ever larger disk drives. At the same time, expect RAID to find a new home on smaller storage arrays where the level of protection and speed of recovery that RAID provides should be more than adequate.

My introduction to HP's Converged Infrastructure

As 3PAR is integrated into HP, there is a lot of new stuff for us to figure out. One of the most important concepts at HP is Converged Infrastructure (CI). The basic idea of CI is to maximize a customer's investment in technology by consolidating resources in common, modular building blocks. 3PAR customers are already accustomed to the idea from our InServ storage systems, but CI goes far beyond 3PAR's storage vision by including server and network technologies. It's a big idea with huge implications for product engineering, manufacturing, maintenance and support – and it raises the importance of software in data center solutions.

InMage vContinuum Taps into VMware to Provide a Near Zero Impact and Recovery Solution for SMBs

Small and medium businesses (SMBs) are rapidly moving towards virtualizing their physical servers using VMware. But as they do so, they are also looking to minimize the cost, complexity and overhead that protecting VMware servers introduces while increasing their ability to recover their newly virtualized applications. It is these concerns that InMage's new vContinuum software addresses by using a new technique to tap into VMware that provides near zero impact with near real time recoveries.

Right now Gartner estimates that as many as 28% of all physical servers currently run virtualization with that percentage expected to grow to 50% by 2012. Further, it is expected that server virtualization among SMBs (companies with fewer than 999 employees and less than $500 million in revenue) will grow much faster than the overall market during this same period of time.

The problem that these SMBs are encountering as they adopt server virtualization is automating the non-disruptive protection and recovery of VMware's guest OSes. While backup software providers have adapted to take advantage of new backup techniques as well as new features found in VMware vSphere, these approaches still have the following shortcomings:

- Array based snapshots. Snapshots move backup overhead off the host but require the deployment of an external storage array of the same or similar kind, which is not always a cost-effective or practical option for SMBs.
- VMware vStorage for Data Protection. vSphere's new vStorage APIs can create snapshots on the vSphere host without the need for external storage. However, backing up the snapshot still creates overhead on the primary storage used by the VMware host.
- vSphere Changed Block Tracking (CBT). CBT was also included with VMware vSphere 4.0 and tracks changes to blocks on a VM since its last backup. While it eliminates the need for external storage, it is not enabled by default, it incurs server overhead when turned on, and it is not widely supported by backup applications.
- VMware Site Recovery Manager (SRM). Primarily used for highly available and clustered configurations, it requires compatible arrays on both local and remote sites with separate management of replication and recovery policies. This means additional learning curves and load on already constrained IT resources, plus the storage requirements alone usually put the cost of this solution beyond the reach of most SMBs.

Each of these techniques introduces varying levels of cost, complexity or overhead, or, in some cases, a combination of all three, in order to be implemented in a VMware environment. It is these challenges that InMage's new vContinuum software addresses. vContinuum is based upon InMage's existing and proven enterprise data protection software that is already in use by many enterprise organizations and resold by enterprise storage providers. However, InMage has specifically built vContinuum for VMware implementations in SMB environments in the following ways:

- First, vContinuum integrates with VMware at the hypervisor level to discover, provision and manage protection policies at a VM granularity. Using the vCLI, vContinuum does what InMage refers to as a "tap" of the protected virtual machine to track, copy and record the writes of each VM. To avoid creating overhead on the vSphere server, vContinuum uses the memory of the vSphere server to cache writes before they are copied and stored on the vContinuum server.
- Second, vContinuum protects all of the writes of a VM, including its boot and application volumes. The vContinuum tap captures all writes to files and volumes assigned to that VM, but it excludes writes to page files associated with that VM and can also be configured to exclude writes to other volumes associated with that VM deemed "not valuable or needed." (A simple sketch of this filtering idea follows this list.)
- Third, it can protect data regardless of the storage type. vSphere implementations may use DAS, NAS or SAN, though in SMB environments DAS, NAS and iSCSI SANs are the most likely storage options. Since vContinuum operates at the guest level, as long as the storage is under vSphere's control, vContinuum can protect any data written to any of these volumes.
- Fourth, vContinuum leverages InMage's application integration so application consistent recovery points can be created. The ability of InMage's DR software to create application consistent recovery points is carried forward into vContinuum so that as Exchange, Oracle, SQL Server or SharePoint VMs are protected, their data is protected in an application consistent state. Stored this way, recoveries are more akin to a "point and click" operation as opposed to needing to restore a crash consistent copy of the application and then having to apply redo logs to the image in order to recover the application.
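As promised above, here is a hedged Python sketch of the write-filtering idea from the second point: capture a VM's write records, drop page-file and excluded-volume traffic, and batch what remains before shipping it off-host. It is not InMage's implementation; the record format, file names and buffer sizes are all hypothetical.

```python
# Illustrative write filter and staging buffer -- NOT InMage's code.
from dataclasses import dataclass

@dataclass
class WriteRecord:
    vm: str
    volume: str        # e.g. "C:", "D:"
    path: str          # guest file the write landed in
    data: bytes

PAGEFILE_NAMES = {"pagefile.sys", "swapfile.sys"}   # guest swap files to skip

def filter_writes(records, excluded_volumes=frozenset()):
    """Yield only the writes worth replicating for recovery purposes."""
    for rec in records:
        name = rec.path.rsplit("\\", 1)[-1].lower()
        if name in PAGEFILE_NAMES:
            continue                    # swap traffic has no recovery value
        if rec.volume in excluded_volumes:
            continue                    # operator marked this volume "not needed"
        yield rec

class WriteBuffer:
    """In-memory staging area, flushed in batches to the protection target."""
    def __init__(self, flush_bytes=4096):
        self.pending, self.size, self.flush_bytes = [], 0, flush_bytes

    def add(self, rec, ship):
        self.pending.append(rec)
        self.size += len(rec.data)
        if self.size >= self.flush_bytes:   # batch to keep per-write overhead low
            ship(self.pending)
            self.pending, self.size = [], 0

if __name__ == "__main__":
    writes = [
        WriteRecord("vm01", "C:", "C:\\pagefile.sys", b"x" * 4096),   # filtered out
        WriteRecord("vm01", "D:", "D:\\db\\orders.mdf", b"y" * 8192), # kept and shipped
    ]
    buf = WriteBuffer()
    for rec in filter_writes(writes, excluded_volumes={"E:"}):
        buf.add(rec, ship=lambda batch: print(f"shipping {len(batch)} write(s)"))
```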

But maybe most importantly what InMage does with vContinuum is make it affordable for SMBs while continuing to provide them the near real time recoverability for which its software is known. vContinuum is priced per vSphere server in a tiered model, so an SMB can protect as many VMs on a vSphere server as that server can host.

Further, using the granular levels of protection and recoverability that vContinuum provides, SMBs can pick and choose the exact files, volumes or VMs they want to protect and/or recover on a vSphere server. Or they can just protect and/or recover the entire vSphere server with all of its VMs.

InMage has an established history of providing enterprise caliber data protection software that is available from leading enterprise storage providers including Hitachi Data Systems (HDS), Pillar Data Systems and Xiotech. vContinuum is built upon that software foundation to meet the specific implementation, functionality and price requirements of SMB virtualized server environments. In so doing, InMage vContinuum eliminates the cost, complexity, and overhead that are commonly found in VMware backup solutions while providing SMBs the granular backups and near real time recoveries that they seek at a price they can afford.

New Intelligent Monitoring Framework in Storage Foundation HA 5.1 SP1 Introduces Proactive Detection of Failed Processes for Faster Recoveries

Downtime is rarely an option for mission critical applications and while many strides have been made over the last decade to ensure uninterrupted application availability, some gaps in providing protection still remain. One of these is reducing the time it takes to detect when a failure has occurred so a recovery on a secondary server can be promptly initiated and successfully completed. It is expediting these server recoveries that the new Intelligent Monitoring Framework introduced in Storage Foundation HA 5.1 Service Pack 1 (SP1) accomplishes.

Many organizations are more than happy with the speed of failover on the clustering solution that they use to support mission critical applications as these clustering solutions enable failovers to occur in mere minutes. But for some of these mission critical applications minutes of downtime can prove unacceptable. Yet closing this failover gap from over a minute to a minute or less has proven to be a challenge for two reasons.

First, clustering solutions poll the services on a server to detect if one of its services has failed. However depending on what service has failed and when it fails, it could take up to a minute before the failure of the service is even detected so a failover of the server cannot be initiated until that occurs. This time it takes to detect the service failure is a major contributor to the minute or more of application downtime.

Second, the polling could theoretically be sped up so it checks for failed services on the operating system more frequently, but this increases overhead on the server. This could in turn slow application performance, which is also unacceptable to these end users. These trade-offs have to date left users in this stalemate.

This brings us to the introduction of the Intelligent Monitoring Framework (IMF) in Storage Foundation HA 5.1 SP1 for Veritas Cluster Server (VCS), specifically its Mount and Oracle agents. IMF is an extension to the existing VCS agent framework that has been enhanced in SP1 to detect state change notifications on the operating system within seconds and without increasing the overhead on the application server.

To accomplish this, the VCS agent no longer polls the operating system as it did in the past looking for notifications that a process has died or is in a “hung” state. Rather the SP1 VCS agent takes a more passive role and interfaces directly with the operating system kernel on each application server via APIs in that operating system kernel.

Now when a process on the application server dies or "hangs," the operating system generates an alert that is exposed by its API and then automatically and nearly instantaneously captured by the VCS agent. This eliminates the need for the VCS agent to continually poll for these changes, as the VCS agent is proactively notified by the server's operating system, which also expedites notifications to the VCS agent.

The prior technique of polling required the VCS agent to go through and check each process on each server’s operating system for failures via a round robin process that could take up to 60 seconds or longer. Using this new technique of monitoring notifications, as soon as an event occurs on the OS, the VCS agent is immediately notified and a failover to the secondary server can then be initiated.
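The difference between these two detection models is easy to see in a generic POSIX example. The sketch below assumes nothing about VCS or IMF internals; it simply contrasts a polling loop, whose worst-case detection delay equals its interval, with a SIGCHLD handler that the kernel fires the instant the monitored process exits.

```python
# Generic polling-vs-notification sketch (POSIX only); not the VCS/IMF code.
import os
import signal
import subprocess
import time

# --- Model 1: polling (shown for comparison, not called below) --------------
def poll_until_exit(proc, interval=10.0):
    """Worst-case detection delay equals `interval`, and every pass costs
    cycles on the monitored host."""
    while proc.poll() is None:
        time.sleep(interval)
    return proc.returncode

# --- Model 2: asynchronous notification --------------------------------------
def on_child_exit(signum, frame):
    pid, status = os.waitpid(-1, os.WNOHANG)
    print(f"notified immediately: pid {pid} exited with status {status}")

signal.signal(signal.SIGCHLD, on_child_exit)   # register before starting the child
child = subprocess.Popen(["sleep", "3"])       # stand-in for a monitored service
signal.pause()                                 # wakes only when the kernel signals
```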

IMF’s technique of capturing alerts generated by the operating system also comes into play once the failover on the secondary physical server has started. Now if a specific process should “hang” on the secondary server during a failover, IMF will again be alerted as to which process or processes on that secondary server is causing the failover to “hang.” Alerts can then be generated and corrective action taken so the failover to the secondary server can be completed.

In the last few years the need for mission critical applications to maintain a constant state of availability has grown more important such that even a minute of downtime is too long for some applications. IMF in Storage Foundation HA 5.1 SP1 takes that concern off the table with its pro-active detection of process failures while also reducing the overhead associated with monitoring these failures. In so doing, Storage Foundation HA 5.1 SP1 gives organizations the faster recoveries and lower server overhead that they seek by merely upgrading to the latest version of an application that they already know and trust.

Kubisys Thin Capture Helps One Company Stay in Compliance with PCI DSS Standards while Mitigating Risk

The situation confronting a VMware and Windows architect that I recently spoke with is probably one to which many system administrators can relate. On one hand, he had a requirement to make patches and updates to his company's systems to keep them in compliance with PCI DSS regulations. On the other, making such changes could result in system downtime and disrupt his company's operations (i.e., stop its flow of income). To resolve this dilemma, he turned to a new technology called the Thin Capture appliance from Kubisys.

The Payment Card Industry Data Security Standard (PCI DSS) is a compliance standard with which all companies that accept credit card payments must comply. These standards comprise twelve (12) requirements, two of which are: developing and maintaining secure systems and applications (#6) and regularly testing them (#11).

Satisfying these two PCI DSS requirements fell to this VMware and Windows architect with whom I spoke. These two requirements called for him to apply hot fixes shortly after their release to ensure his servers were protected against the latest security threats.

It was applying these hot fixes that put him in a quandary. A Microsoft hot fix could break his production point-of-sale (POS) software as occasionally it was not compatible with the hot fix. This could result in an outage that would negatively impact his company’s daily sales and disrupt its back end supply chain.

To avoid that scenario, he tried using VMware to create clones of his Windows virtual machines (VMs) to test these hot fixes before applying them. He knew that by using clones, he could avoid down-time to the production application, and thoroughly test the hot fixes. But he discovered that creating clones and then conducting these tests could take up to a week to complete which then would have left his company out of compliance with the PCI DSS standards and vulnerable to security threats.

Uncertain as to what to do next, he turned to one of his preferred solutions providers and described these issues. His provider suggested he check out the Kubisys Thin Capture™ appliance so he invited Kubisys onsite to explain how it worked and do a demo.

In a previous blog entry I described the Kubisys Thin Capture appliance but, in brief, the Thin Capture appliance communicates with Windows servers and initiates snapshots on the Windows production server using the Windows native Volume Shadow Copy Service (VSS). The snapshot remains on the Windows servers with only needed data accessed by the Kubisys appliance.

This snapshot is used as the source by the Thin Capture Appliance so OS patches or application software upgrades can be tested against it. As the test runs and additional data is needed to complete the test, the Thin Capture appliance accesses the snapshot on the production server over the IP network and only moves the data that is needed in order to complete the test.

This was exactly what he needed as it solved problems for him on multiple levels. During his testing he was able to:

- Discover his physical and virtual machines inside of his firewall and recreate them on the Kubisys Thin Capture appliance because of its AD integration
- Reconfigure his firewall to grant the Kubisys Thin Capture appliance secure access to his web-facing POS systems
- Test application upgrades and OS patches with near real-time copies of his production systems
- Identify and troubleshoot problems prior to applying fixes, patches or upgrades to his system
- Run snapshots of the application on the Kubisys Thin Capture appliance so he could provide screen shots of the application without touching the application to prove needed updates and patches had been applied so his company could remain in compliance

Possibly the biggest benefit he realized was that it eliminated his dependency on VMware clones. While many may say VMware clones can provide these types of benefits, this VMware architect says that is not the case. He still had to change networking protocols to do these types of tests using VMware clones plus assign storage to each clone. Further, he was dependent on his networking and storage teams to do tasks such as DNS changes and assigning and reclaiming IP addresses and storage capacity.

The Kubisys Thin Capture appliance had its own internal storage and could host application servers. So once he deployed it he only had to bother his network team once to do the initial network configuration and he did not have to bother his storage team at all. He told me, "Using the Kubisys Thin Capture appliance I am now self-sufficient and do not need to bother anyone to test applications. This has reduced my PCI compliance testing setup time from days to an hour or less."

Now some of you may wonder why I did not cite the name of the person or the company in this blog entry, but hopefully the reasons are obvious. This is a real problem that many companies that support mission-critical, revenue generating applications and are subject to PCI DSS compliance standards deal with on a regular basis.

However, it would not be prudent for anyone to say on the record that they are "out of compliance," even if they are only out of compliance for brief periods of time until testing is done. Saying this "on the record" could put them in hot water with auditors.

However, companies go through this balancing act all of the time. They are often put in the undesirable position of having to balance the need to apply fixes, patches or upgrades to mission critical applications to remain in compliance with PCI DSS standards against the risk of bringing the company to its knees should they only find out after the patch is applied that it is incompatible with the application and causes it to fail.

Server virtualization has clearly helped in minimizing the risks associated with applying fixes, patches and upgrades but, as this VMware architect discovered, it still has its limitations. This is why the Kubisys Thin Capture appliance could quickly become a must-have solution in environments that are concerned about making the changes that they need to remain in compliance with security standards such as PCI DSS without putting their mission critical applications unnecessarily at risk.

To get more details about this company’s experience with the Kubisys Thin Capture appliance, you may download a DCIG Case Study on this Kubisys implementation here.

8-bit ISA Cards, 11 Year Old Computer Systems and Other Hazards of Server Virtualization

I always enjoy going to the quarterly Omaha VMware User Group (VMUG) meetings if for no other reason than that I never know who I am going to meet or what I am going to learn. This meeting was no exception. While more sparsely attended than the last meeting (~60 people in attendance), the stories they shared illustrated to me that most organizations are still years away from fully virtualizing all servers in their data centers.

Overall I was impressed by the progress that attendees are making in virtualizing their environments though, considering this is a VMUG event, it only makes sense that these attendees would be the first ones on board with any new VMware release. It appeared the majority of them were using vSphere 4.0 and about 20% were already using vSphere 4.1 in some capacity in their environment.

The speed of adoption and who was adopting did catch me somewhat off guard. In talking to one IT director of a church in Lincoln, NE, who was in attendance, he said that adoption of server virtualization in non-profits has been extremely aggressive.

At a recent conference that he attended, fully 85% of the IT directors of churches in attendance had already virtualized their environments and the other 15% were planning to do so. However, he partly attributed the speed of adoption to the hefty discounts that server virtualization providers offer to 501(c)(3) organizations (about 70% off of the retail price).

Yet it became clear in talking to the individuals in attendance that while most wanted to virtualize every system that they managed as quickly as they could, there are obstacles that will preclude most of these organizations from virtualizing a number of their applications for the foreseeable future.

For instance:

- A network administrator for the city of Omaha lamented that he has an 11 year old computer system running a city payroll program and cannot get funds to upgrade it to a more current system. He found this particularly ironic in light of the fact that the city of Omaha just instituted a city wide 2% tax on all restaurant sales to cover a shortfall in the pension fund for firemen and policemen. However, the computer system that spits out those pension checks is six years out of date with no funds currently budgeted for an upgrade.
- The IT director of the church in Lincoln, NE, tells me that the phone system his church uses still runs on a DOS-based PC and its interface with the church switchboard is an 8-bit ISA card. Most of the parts for the PC are no longer available and while he has been finding them as needed on eBay, due to the age of the parts even eBay is drying up as a source.
- Video surveillance is becoming more critical for some in attendance, but virtualizing video surveillance applications really isn't practical due to the number of feeds coming in and the write traffic that these feeds can generate, so it looks like this will remain a stand alone app for the time being.

Other reasons that organizations might delay server virtualization, or at least that might give them some pause, have to do with troubleshooting issues in VMware itself. The session I attended was presented by Nathan Small, a staff engineer for VMware who handles escalated support calls related to storage. Nathan's name is one you might want to remember as he shared that the storage support team at VMware takes double the number of #1 priority calls of any other support team at VMware.

In listening to his presentation on VMware Advanced Root Cause Analysis, it is not surprising that his team gets so many calls related to storage. Most of his presentation focused on knowing what log files were important and then decoding the cryptic messages stored in them.

While many of these log files were located in the /var/log directory in vSphere, what error messages were stored in specific log files could vary according to what version of vSphere you were running. vSphere 4.1 introduced some new log files, so iSCSI error messages that may have been written to the vmkernel.log file in vSphere 4.0 may now be written to the vmkiscsid.log file in vSphere 4.1.

He also warned those companies that were running both vSphere and ESXi that the names of the error logs in those operating systems were not the same. While vSphere writes messages to the vmkernel.log file, in ESXi that file is simply called “messages.”

ESXi administrators will also want to think twice about rebooting a VMware ESXi host without first saving the logs. In the case of the vmkiscsid.log file on the ESXi server, that file is cleared on an ESXi reboot. So while a reboot may solve the immediate problem, if one goes back to determine the root cause of the problem, it may be impossible to diagnose whether the problem is somehow related to the iSCSI driver.
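For those who want to hold on to those logs before rebooting, something as simple as the following sketch does the job. The log file names and the datastore path are assumptions that vary by ESX/ESXi version and environment; ESXi's own vm-support bundle is the more complete option.

```python
# Archive key ESXi logs to persistent storage before a reboot clears them.
# Paths are assumptions; adjust to a datastore that exists on your host.
import os
import tarfile
import time

LOGS = ["/var/log/vmkernel.log", "/var/log/vmkiscsid.log", "/var/log/messages"]
DEST_DIR = "/vmfs/volumes/datastore1/log-archive"   # hypothetical datastore path

def archive_logs(logs=LOGS, dest_dir=DEST_DIR):
    os.makedirs(dest_dir, exist_ok=True)
    bundle = os.path.join(dest_dir,
                          f"esxi-logs-{time.strftime('%Y%m%d-%H%M%S')}.tgz")
    with tarfile.open(bundle, "w:gz") as tar:
        for path in logs:
            if os.path.exists(path):    # file names vary by ESX/ESXi version
                tar.add(path)
    return bundle

if __name__ == "__main__":
    print("saved", archive_logs())
```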

All in all, most of the users in attendance were glad they had virtualized their environment and could not envision going back to a physical environment. If anything, they were looking forward to the day when their 11 year old payroll system and DOS-based phone systems with 8-bit ISA cards were a thing of the past. But as VMware's Small illustrated, using VMware is not without its caveats, especially when it comes to decoding the error messages in VMware's log files.

If You Are Talking Enterprise Server Virtualization You Better Include IBM’s Unified Storage Solution in that Conversation

Any time that anyone in any size business starts to talk about how to improve IT efficiency while driving down costs the topic of server virtualization inevitably comes up. But enterprise companies need to take that conversation to another level and make sure they talk about selecting the right networked storage solution to support their virtualized servers as the wrong storage solution may negate whatever benefits that server virtualization provides. To avoid that scenario any conversation that an enterprise shop has around virtual servers and storage should include a unified storage solution such as the IBM N series throughout.

The decision to deploy server virtualization in any size organization is usually driven by the following three factors:

- It decreases hardware and software costs
- It improves IT efficiency in delivering new applications
- It improves resource utilization

But this is where the similarities between server virtualization deployments in small and large businesses end. Small businesses may get by with commodity storage in support of their virtual servers. But enterprise organizations need to virtualize hundreds, thousands or even tens of thousands of servers, which demands they deploy storage solutions that solve enterprise level problems.

Due to the size and scope of these virtualized server implementations, networked storage solutions become the default solution. But these storage systems are not created equally and differ in their abilities to:

- Allocate storage capacity to virtual machines (VMs) as they need it
- Allocate the right type of storage capacity and/or performance to VMs that need it
- Efficiently and effectively use their internal storage capacity
- Internally scale to meet increasing storage and performance demands of VMs
- Make available the right storage networking protocol to each VM
- Provide the appropriate data protection options that VM environments need

It is because networked storage solutions differ so dramatically that the initial savings from server virtualization are sometimes consumed and even exceeded by the costs associated with deploying a networked storage solution that lacks these options. It is for these reasons that it is imperative when enterprise organizations select a networked storage solution that it have the right cross section of features.

While there are many such networked storage systems available, a hybrid networked storage system known as unified storage emerges as a favored choice for virtual servers for the following two reasons:

- Concurrent support of multiple storage protocols. Concurrent support of SAN (FC, FCoE, iSCSI) and NAS (CIFS, NFS) protocols sets unified storage systems apart from other networked storage solutions. By supporting all of these protocols, organizations may consolidate all of their storage onto a single system, plus it gives them the flexibility to use the most appropriate protocol for each application.
- Avoids the costs of deploying multiple storage systems. If an organization uses separate storage systems for SAN and NAS, the storage each one contains needs to be managed separately and there is no option to share excess capacity on one with the other. Unified storage systems free organizations to create a single logical pool of storage that can be allocated to either NAS or SAN attached application servers.

However, in enterprise environments a networked storage solution needs to do more than just support multiple protocols or create a single logical pool of storage. Multi-protocol support certainly helps to lower storage networking costs, improve IT efficiency and increase storage utilization, but it is only one piece of the larger storage puzzle.

An enterprise unified storage solution needs to support other features for it to truly meet all of the requirements that an enterprise organization will have. Features it should offer include:

- Snapshots for faster backups and recoveries. Server virtualization changes the conversation around data protection, and traditional backup methods eventually hit a wall. Snapshot functionality is a prerequisite in these environments and, for it to be considered enterprise class, it should also integrate with leading enterprise backup software.
- Deduplication and thin provisioning for efficient storage utilization. Server virtualization tends to result in storage over provisioning and spawn virtual server sprawl. Deduplication and thin provisioning are two key technologies that the storage system must support to mitigate the impact of these tendencies.
- Multiple tiers of storage. Server virtualization results in the aggregation of applications that create new demands for the right type of storage for each application to accommodate capacity, performance or both. To deliver on this, the storage system must provide multiple tiers of storage with different price points and performance features.
- Scalable, so as storage or performance requirements grow, so can the storage system. The challenging aspect of managing virtual server environments is that organizations can rarely if ever predict what they will need their storage system to provide more of: capacity, performance or both. To accommodate the unpredictability of these environments, the storage system must be able to scale in either of these directions.

In this respect, IBM’s unified storage solution, the N series, has already demonstrated why it is such a good fit for virtual server deployments and explains why more businesses are adopting it to meet these exact needs. It offers:

- Deduplication and thin provisioning for primary storage
- Concurrent multi-protocol support so businesses can cost-effectively connect their servers – virtual or physical – using whatever SAN or NAS protocol is most appropriate for that application
- Multiple tiers of storage
- Clustering technology so organizations can scale out capacity, performance or both
- Snapshot support that integrates with leading enterprise backup software
- VMware certification

Two of the features that specifically stand out on the IBM N series are its Deduplication and Snapshot features. In the IBM portfolio of storage solutions, the N series is the only one that can deduplicate data on primary storage which is already being identified as a “must-have” feature for virtualized environments.

Preliminary DCIG research has found that early adopters of deduplicating data in virtualized environments have achieved data reduction ratios of 20:1 or greater. So businesses are essentially leaving money on the table if they do not use storage systems with this technology.

The N series' Snapshot feature provides an equally compelling argument for adoption in virtualized environments. Storage system based snapshots are becoming the preferred mechanism for protecting virtual machines since they eliminate backup windows and impact on VMs.

While the N series Snapshot feature also provides this functionality, it stands apart from many other competing solutions as it creates VM snapshots without needing extra storage capacity. Further, because of the N series integration with many leading enterprise backup software products, its snapshots can be discovered and either backed up to tape or used as a primary source for recovery.

Conversations around server virtualization often start with better delivering on IT efficiency and cutting costs. It is for those reasons that companies invite IBM to the table since it has such a strong legacy in providing server hardware as well as service and support.

But conversations about enterprise server virtualization deployments inevitably turn to storage. It is when they do that businesses need to make networked storage in general and unified storage specifically a part of that conversation. Further, they need to make sure IBM continues to have a seat at the table as that discussion occurs.

DCIG’s research has shown that the IBM N series is one of the most compelling unified storage solutions for virtual server deployments within businesses. Yes, it provides the next generation unified storage interface that companies need for their virtualized server deployments. But it just as importantly provides the underlying scale-out storage architecture that will help ensure a server virtualization deployment is not just a flash in the pan when it comes to lowering costs and improving IT efficiency. Rather it is designed to continue to deliver these benefits that companies expect when they talk about their server virtualization deployment in the years to come.

Iomega Bundles Capacity and Performance in New External SSD Drive

Iomega, the anchor company in the Consumer and Small Business Products division of storage giant EMC, last week introduced an External SSD Flash Drive designed for business and “prosumer” users. Boasting USB 3.0, built-in encryption, and a suite of backup and security software, the drive is the vanguard of a new breed of rugged and compact external storage. Although expensive by consumer standards, business and pro users will welcome its combination of features and performance.

Flash memory-based solid state drives (SSDs) are nothing new, of course, but neither are they as compact and portable as thumb-sized USB flash drives. Further, portable hard disk drives, including those made by Iomega, are hot sellers at retail for consumers looking to add capacity to their PCs. But every one of these products entails a trade-off in terms of usability, performance, and price.

SATA SSDs are fast and capacious but difficult for end-users to install and use. External drives are easier to connect, but their performance is limited by the USB 2.0 bus, although multiple companies, including Iomega, have recently launched USB 3.0 portable and desktop HDDs for use with high end desktop computers and laptop models now shipping with USB 3.0 ports. This has led to the development of two distinct portable device categories: Small and cheap flash drives and large and slow portable hard disk drives.

Iomega’s new USB 3.0 SSD attempts to combine the best features of both device categories in a single package. Their SSD drive line features generous capacity points (64, 128, or 256 GB) and high performance (USB 3.0 is roughly 10 times faster than common USB 2.0 ports.)

Although pricey compared to portable flash drives and hard drives at $229, $399, and $749, respectively, this new external device is competitive with internal SSDs plus it offers the same capacity and performance as an internal SSD without the hassle of SATA or PCI installation.

This new drive is, as they say, neither fish nor fowl. It is a new category of storage and will therefore carve out a new market niche. Iomega clearly believes that it will be attractive to businesses, creative professionals and early adopters, as they have bundled it with their corporate-friendly Protection Suite features:

- Encryption
- “v.Clone” disk imaging
- QuickProtect and Roxio Retrospect Express backup software
- Trend Micro Internet Security
- Home Online Backup

This feature set is similar to Iomega’s existing eGo portable hard drive offerings, and shows that the company expects to attract similar business and upscale individual customers. The built-in 256-bit AES encryption is a critical feature for a portable device, especially a fast SSD priced at hundreds of dollars. Loss and theft of portable drives is common, and this expensive device will be a tempting target.

Anyone spending this much on an external drive will be storing valuable and sensitive data on it. Although the loss of such a drive will be disappointing, the fact that the data it contained is secure will be reassuring to buyers. The AES encryption, though a hardware feature, requires installing a client application on the PC for access.

Portable hard disk drives are susceptible to physical damage as well, and this is another area where Iomega’s SSD shines. Solid state flash storage is almost impervious to shock, and the company claims the drive and its metal case will survive a drop of 10 feet. A generous 3-year warranty demonstrates their faith in the product, and experience shows that SSDs are exceptionally rugged.

Apple Macintosh users are left out in the cold at this point, however. The bundled software, including the encryption client, is Windows-only. One cannot fault Iomega for this, however, since Apple has not yet released a computer with a USB 3.0 port. Although the drive is backwards compatible with USB 2.0, the performance will disappoint.

It's too bad that Apple is lagging, too, since their customers are exactly the sort of upscale professionals that would be interested in (and could afford) a portable SSD like this. Demanding applications like video editing would fly with over 300 MB/s of real-world read and write performance, but only PC users will be able to use the drive. For those without a USB 3.0 port, Iomega does offer USB 3.0 adapters for PC users (an ExpressCard for laptops or a PCI Express card for desktops), priced at just $39.99. These adapters are Windows-only as well, however.

Although expensive, Iomega’s External SSD Flash Drive combines portability, performance, capacity, and durability unmatched by existing SSDs, flash drives, and portable hard disk drives. Corporations looking to equip their mobile professionals with a rugged and reliable external storage solution should consider investing in drives like this rather than constantly replacing failed mechanical hard disks. The built-in AES encryption is a huge benefit for these organizations, as is the rest of the software bundle. Mac users will just have to wait for Apple to get on the USB 3.0 bandwagon.

Predictable Scaling with the HP X9000 (built on Ibrix software)

I'm only about 6 months behind many of the world's leading independent storage bloggers on learning about HP's storage products, so I've been eager to catch up to them. Imagine my delight this morning when I picked up Greg Knieriemen's tweet on the most recent report from ESG on our X9000 Scale-Out NAS systems. Thanks to Brian Garret and Vinny Choinski of ESG for their straightforward analysis.

I was somewhat familiar with Ibrix as a software product that powered NAS clusters, but the new ESG Labs report helped me grasp HP's vision for the X9000 storage appliances much better.

Interested readers should view the report to see the results as well as the methodology that was used. There were three test beds covering throughput, content delivery and file creation metrics culled from a mix of X9000 configurations. The X9320 is a storage appliance with internal disks and the X9300 is a gateway version of the product that connects to external SAN storage. Another model, the 9720, which is the super-sized version of the 9320 (a full 42U rack), was not used in the tests.

3PAR customers will be familiar with the processing architecture of the X9000. The granular "head unit" of the X9000 system is called a couplet, and is a pair of fault-tolerant NAS heads. This is similar to 3PAR's storage system architecture where nodes are added in pairs.

But the surprising thing about scalability for the X9000 is not necessarily how large it can grow, but how effectively it can also be employed in much smaller environments. As the ESG Labs report concludes:

Who would have guessed that companies overwhelmed by Word and PowerPoint archives could benefit from the same solution as those burdened by 100-TB annual growth of genome sequencing data? Who knew that a NAS file system developed for high-performance computing could evolve into a graceful, cost-effective scale-out solution with predictable and near-linear performance for small and large files and exotic and everyday applications? The challenges that scale-out NAS solves are much more “everyday” than “lunatic fringe,” and the X9000 makes it consumable by almost anyone. If you are facing file system growth and complexity challenges, you should consider the X9000. It’s affordable, includes commercial features like snapshots and replication, and lets NFS and CIFS work on the same file system. You can buy a scale-out architecture that will grow with you and meet the needs of your business without interruption. The Fusion segmented file system, combined with HP’s servers and storage (not to mention HP’s buying power and supply-chain advantage), brings what started as a niche solution to the masses.

Symantec Extends Web-based Benefits of Its Operations Readiness Tool for Storage Foundation and NetBackup Administrators

Ask almost any system administrator what he or she spends the majority of their working day doing and the response almost always includes managing changes and updates to their systems. It is for this reason that over three (3) years ago Symantec introduced its complimentary web-based Symantec Operations Readiness Tool (formerly known as Veritas Operations Services). Now with this month's latest release, Symantec extends the benefits that the Operations Readiness Tool provides to Storage Foundation, NetBackup and Storage Foundation for Windows users.

The original purpose for the Symantec Operations Readiness Tool is simple: enable system administrators to work smarter, not harder, by providing them access to a web-based tool with a centralized repository of knowledge and best practices that reduces the time they spend planning for and managing changes in their Storage Foundation environment.

Change always involves risk and the last thing any system administrator wants to introduce into their mission critical environments is risk. However, these are typically the environments in which Symantec Storage Foundation (SF) and Veritas Cluster Server (VCS) run. So when changes are made to their environment, or patches or upgrades are released for either SF or VCS, it introduces an element of uncertainty as to whether there will be any impact to production applications.

To address these concerns and minimize the time that system administrators have to spend planning and managing these changes, the Symantec Operations Readiness Tool website was created. It enables system administrators to proactively and confidently apply needed patches and upgrades to their Storage Foundation and VCS implementations as well as make it possible for organizations to verify that other planned changes to their environment would work as intended.

In this respect, the Operations Readiness Tool website has succeeded as, since its creation, over 95% of Operations Readiness Tool users report that they can proactively meet these objectives using this website. Examples of tasks that they use it for include identifying:

- Which Storage Foundation (SF) patches a virtual or physical machine lacks
- What system or environmental risks exist and specific recommendations on how to mitigate them
- What SF patches, VCS agents, or system level updates need to be applied in order to ensure that a new and/or existing application works or performs as intended
- Which VCS agents they need based upon the mix of applications discovered on the server
- What new storage systems are supported by the latest version of Symantec Storage Foundation's Array Support Library (ASL)

To perform these different tasks the Operations Readiness Tool website provides a data collector. This software collects, analyzes and reports on what changes are required on datacenter servers in order to prepare for installations or upgrades, find and mitigate risks, or inventory deployed products and licenses. It then produces a report that lists the optimizations that a system administrator should make on each server.

Enterprise organizations that already use the Operations Readiness Tool find that one of the biggest benefits they derive from it is a consistent approach to preparing for installations and upgrades across their server environment. Normally this preparation is a complex, time consuming task reserved for senior level administrators or system architects to perform.

But using the Operations Readiness Tool website, they reduce the time spent on this task, standardize the procedure and even potentially convert these tasks into ones that junior level administrators can perform. Further, these reports can be passed along to executive managers so they have a sense of the time and work that is required to prepare each system in advance of its software installation or upgrade as well as how much time that they are now saving using the Operations Readiness Tool as opposed to doing these tasks manually.

In this month's release, Symantec takes some additional steps to improve its support for Storage Foundation for Windows as well as NetBackup. The Operations Readiness Tool currently has one data collector script that works across all versions of UNIX, and now, with this latest Operations Readiness Tool update, many of the same features are available for Windows SF deployments as well. The Windows data collector could previously do pre-installation checks, but it has been enhanced to do licensing and upgrade checks in addition to risk assessments, which puts its functionality about on par with the UNIX data collector.

Support for NetBackup in this release of the Operations Readiness Tool is still in its early stages. In this release backup administrators can leverage the Operations Readiness Tool to gather up to date information on what NetBackup products are deployed in their environment.

Going forward, backup administrators can expect to leverage the Operations Readiness Tool to manage and support their NetBackup environments in much the same way that UNIX and Windows systems administrators leverage it to support their SF and VCS environments.

Symantec’s introduction of its Operations Readiness Tool to support Storage Foundation deployments over three years ago alleviated an ongoing management concern for administrators by helping make patch and upgrade management on individual servers a much simpler activity. Since then enterprise organizations have found it useful for a multitude of other purposes to include standardizing the application of fixes, patches and upgrades as well as managing changes in their environment.

This month's upgrade to the Operations Readiness Tool continues to help enterprises in their efforts by broadening its support for Windows environments as well as for NetBackup. In so doing, both NetBackup and Windows SF administrators can look forward to better leveraging the Operations Readiness Tool to reduce the time that they spend on managing their systems while gaining the levels of efficiency and effectiveness that their UNIX system administrative counterparts have enjoyed for years.

The Symantec Operations Readiness Tools website can be found at https://sort.symantec.com.

Imation and BDT Products Tease SNW Attendees with Forthcoming 8 Slot RDX Disk Library

SMBs are being confronted with some tough choices right now when it comes to backup and recovery. While most want to use disk as their primary backup target, trying to balance recovery time objectives (RTOs), getting their data offsite and still keeping their costs under control makes this a fine line to walk. However an interesting answer to this problem was jointly presented to me last week at SNW by Imation and BDT Products.

Using a disk-based backup target is becoming almost the de facto solution for SMBs to adopt. But simply adopting disk does not eliminate many of the problems these organizations have regarding disk-based backup. Specific problems that they can encounter include:

- They only have one site, so backing up solely to a disk appliance leaves them exposed should the site be impacted by a catastrophic event like a tornado or flood. This could destroy all of their backup data.
- New removable disk options like RDX cartridges give them the benefits of disk with the portability of tape to which they were accustomed. However, they re-introduce the requirement to handle disk cartridges and take them offsite daily, weekly and monthly.
- Backing up to the cloud is becoming more popular until they actually have to recover data from the cloud. While recovering individual files is usually not a problem, trying to recover large amounts of data in a short time takes some of the shine off of the cloud's glow. Further, doing the initial backup of all of an organization's data to the cloud has been known to choke WAN connections and take days or even weeks to complete.

It is this problem of how to best introduce disk into SMB environments while still addressing all of these concerns that a new product, which Imation and BDT Products are bringing to market in early 2011, is designed to solve, and SMBs should find it of interest. Though it is currently just in beta and only a pre-ship model was available for viewing at SNW last week, the concept that it introduces merits attention since it hits on many of the hot buttons that SMBs have.

What Imation and BDT Products demonstrated at SNW was an 8 slot disk library populated with RDX disk drives. While that in itself is nothing to get excited about, what made it interesting was that it gives users three (3) options for presenting itself to backup software: an RDX disk target; an LTO; or a raw disk volume. Further, each slot in the disk library can be uniquely presented to the backup software. So the application for this disk library in SMB environments is this: they now have a solution that functions much like tape autoloaders do but, with 8 slots in this disk library that can each function as a separate and distinct backup target, it can conceivably be configured as follows (a small scheduling sketch appears after the list):

Slots 1 – 4 sequentially configured as the target for daily backups (M – Th)
Slots 5 – 8 sequentially configured as the target for the weekly full backups (every F)
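To make that rotation concrete, here is a minimal sketch of how such a slot schedule might be expressed. It is purely hypothetical; the slot numbers and weekday mapping are my own assumptions for illustration, not anything Imation or BDT Products have published.

    # Hypothetical slot rotation for an 8 slot RDX disk library (illustrative only).
    # Slots 1-4 take the Monday-Thursday backups; slots 5-8 rotate the Friday fulls.
    DAILY_SLOTS = {"Mon": 1, "Tue": 2, "Wed": 3, "Thu": 4}
    WEEKLY_FULL_SLOTS = [5, 6, 7, 8]  # rotate one per week

    def slot_for_backup(weekday: str, week_of_month: int) -> int:
        """Return the library slot to use as the backup target for a given day."""
        if weekday == "Fri":
            # Cycle through slots 5-8 so four weekly fulls are retained at any time.
            return WEEKLY_FULL_SLOTS[(week_of_month - 1) % len(WEEKLY_FULL_SLOTS)]
        return DAILY_SLOTS[weekday]

    if __name__ == "__main__":
        print(slot_for_backup("Wed", 2))  # -> 3
        print(slot_for_backup("Fri", 3))  # -> 7

The point is simply that each slot behaves as its own backup target, so a familiar tape-style rotation maps directly onto the library.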

Used in this manner, organizations can eliminate the need to handle media every day and even minimize the requirement to take media offsite. But how is that possible since the RDX disk cartridges are still in the disk library?

This concern is addressed by a new relationship that Imation announced with Nine Technology last week. While the crux of that announcement centered on using RDX cartridges as a backup target to eliminate the bandwidth overhead associated with the initial backup to Nine Technology’s backup storage cloud, there is another potential application here.

Many SMBs are reluctant to switch out their existing backup software, especially to replace it with backup software that no one has ever heard of before. However what makes this relationship with Nine Technology interesting is that Nine Technology’s cloud backup software can potentially be loaded on this forthcoming 8 slot disk library appliance from BDT Products.

Used in this way, organizations can deploy this new disk library, configure it as a backup target using their existing backup software and then, once the initial backup seeding is complete, configure Nine Technology’s cloud backup software to copy each night’s backups to the cloud.

Now using this combination of RDX cartridges, Nine Technology’s backup software and 8 slot disk library, SMBs:

Get the performance of disk-based backup
Have data stored locally for fast recoveries
Have the option to move any day’s backup data offsite at any time should they choose to avoid the latency of recoveries from the cloud
Schedule backup data to be moved to the cloud daily, weekly or monthly
Can continue to use established tape (now disk) rotation processes if they are working
Avoid disrupting current backup routines
Use the RDX technology as a means to seed the initial cloud backup and avoid choking the WAN
Minimize the amount of time they have to spend handling media

In fairness, this is all hypothetical but in talking with representatives from Imation and BDT Products, they said there was no reason this could not be done. Further, since both Imation and BDT Products are the ones typically called upon to manufacture these types of products for better known brands such as Dell and HP, this solution or some variation thereof may find its way into the marketplace sometime in 2011.

Finally, Imation and BDT Products are exercising some restraint in pricing this product. While they did not commit to any prices, when I asked whether this 8 slot disk library would be priced around $10,000, they indicated that the retail price would be significantly less than that.

All in all, this looks like a pretty cool little solution that SMBs can expect to come to market in 2011 assuming Imation and BDT Products continue to execute and Dell and HP decide to bring them to market.

Note: SMBs for the purpose of this blog entry are defined as small and midsize businesses that back up no more than 1 terabyte of data nightly.

Virtual wired-dude demo of 3PAR management

My friend JR, an SE at 3PAR (now HP), made this demo showing our autonomic management capabilities. It was a bit long, so I scrunchified it and now it makes him sound like he was completely caffeined out when he made it. That's what friends are for, right JR?

FalconStor Regroups, Refocuses under New CEO McNeil

To say that FalconStor has had some struggles over the past few weeks would probably be a bit of an understatement. Any time a company’s CEO abruptly resigns with “certain improper payments” cited as the reason for his departure, it can leave a company floundering and seeking direction. However, having had an opportunity to chat with FalconStor’s new CEO, Jim McNeil, over dinner at SNW this past week, it is clear he is already helping FalconStor move past that departure and regroup and refocus under his leadership.

I did speak to McNeil briefly about Huai’s departure and while he could not and did not comment extensively on it, he did say that he was looking forward to all of the facts coming out. As many of the details cannot yet be disclosed, speculation is running rampant as to what did occur, which is only adding fuel to the fire. However he expressed confidence that once FalconStor is in a position to share all of the details, the situation will not be nearly as bad as many are making it out to be.

We then turned our attention to the question, “Where does FalconStor go from here?” Despite only being on the job a couple of weeks, McNeil already had some pretty good answers. Prior to being named CEO, McNeil had joined FalconStor as its Chief Strategy Officer so in this respect he enters the CEO position with some clarity on where he wants to take FalconStor.

To that end he wants FalconStor to be laser focused on data protection. That has been FalconStor’s sweet spot since its inception and it already offers a number of software products that support that initiative including Continuous Data Protector (CDP), File-interface Deduplication System (FDS) and Virtual Tape Library (VTL). Further, it has relationships with many storage providers to OEM these products.

Yet because FalconStor is a software company and uses the same virtualization engine under the covers to deliver all of these products, it has developed a reputation of being a “jack of all trades, master of none.” By this I mean that if an end user does a side-by-side comparison of FalconStor with almost any competitive product in the market, FalconStor can fill in all of the checkboxes and say it can deliver that functionality in almost all areas. But because FalconStor can be adapted to do almost anything, it is difficult to point to one specific solution for which FalconStor has developed a reputation as the “go-to” provider.

Either he took my point to heart or other people have made the same comment to him, as McNeil posted a blog entry on FalconStor’s website yesterday. In it, he enumerated three ways in which the current backup process is broken and how he plans to take FalconStor down the path of what he refers to as “service-oriented data protection” or SODP.

He borrows some of the terminology already appearing in the industry around the concept of “vBlocks” as it applies to building virtual storage infrastructures and applies that to backup. This redefines backup from the reactive activity it is today to one that is part of the initial as well as the ongoing build out of the virtual infrastructure at a service level.

He says, “We can begin to implement data backup, retention and archival rules on a service-by-service basis. Were you ever asked if your DR solution was commensurate with your SLA? Now you can not only answer the question but also deliver the goods. In summary, a little perspective can go a long way. By thinking about the solution in the same way that our customers think about their service-delivery challenges, we are one step closer to delivering an operational model that fits in with the bigger picture.”

As part of “delivering an operational model that fits in with the bigger picture”, I also asked McNeil about what he plans to do to make it easier to deploy FalconStor’s software. One of the criticisms of FalconStor in the past has been that it can be difficult to configure and that it often requires professionals with high levels of skill to implement it.

McNeil responded by saying that FalconStor has already taken a couple of steps in that direction to address those concerns. First, it does offer its software in the form of an appliance so the software comes preconfigured. This eliminates or at least minimizes the need for a professional services engagement to configure the software and set it up in a user’s environment.

However he went on to stress that FalconStor still plans to remain a software company and play in enterprise environments. So while it can and should take steps to make the installation of its storage software on any hardware platform more of a turnkey experience, it will always give users the flexibility to configure it for optimal use in their environment.

So to say that FalconStor is out of the woods and has turned the corner with McNeil at the helm is probably premature. But it appears to me that he has a good grasp of the challenges that FalconStor faces and what steps he needs to take to correct them. Hopefully he will be given the chance to execute on them and have the “interim” label removed from his title.

HP Shares Details about the Future of 3PAR within its StorageWorks Division

Now that the acquisition of 3PAR by HP is a done deal, there are three big questions on the minds of many. How will 3PAR’s InServ Storage Servers fit into HP’s overall storage portfolio? Is HP’s relationship with HDS over? Does HP keep its EVA line of storage? These are some of the questions I was able to get answered this week when I met with Craig Nunes, the new HP Director of StorageWorks Marketing at Storage Networking World (SNW) 2010.

First, in regards to where 3PAR fits into the HP StorageWorks portfolio, Nunes referenced a presentation given by Dave Donatelli, HP’s Executive Vice President and General Manager, Enterprise Storage, Servers and Networks. Donatelli views 3PAR as THE architecture for the next decade, one that delivers advanced features today that customers desire.

He positions 3PAR as addressing a variety of markets from the midrange to the enterprise to the cloud. On that point, Nunes indicated the 3PAR family is largely complementary within the HP storage line-up in the context of addressing application workloads, which break down into two broad classifications, predictable and unpredictable.

Predictable workloads are those one might associate with traditional enterprise deployments like SAP or a Microsoft Exchange workgroup, whereby capacity growth and workload type are largely understood and resistant to large, unforecast changes.

Unpredictable workloads are those most often found in virtualized environments where capacity demand and workload type can vary widely since content and applications are often coming from outside the enterprise (as is the case with social networking or cloud hosting) or within the enterprise in a more dynamic form (as is the case with large scale server virtualization deployments). In this case, workload demands can peak and change at almost any time.

Prior to HP’s acquisition of 3PAR, the XP and EVA storage solutions that HP offered could best be described as a fit for predictable workloads across a broad range of applications and deployments. However with the acquisition of 3PAR and its high end T-Class and midrange F-Class models, HP now has an offering for the growing number of virtual environments with unpredictable workloads.

So in terms of where 3PAR fits in HP’s current storage stack, it can probably best be summed up as follows:

HP XP (HDS)

3PAR T-Class

3PAR F-Class

HP EVA

HP P4000 (formerly Lefthand Networks)

Seeing that lineup somewhat answers the next two questions as to what HP plans to do with its current XP and EVA lines of storage. The short answer is that for now all of HP’s current storage offerings are still on the table.

While there is arguably some overlap between the HP XP and 3PAR T-Class, one area where the HP XP still has a distinct advantage over all of HP’s other storage offerings, 3PAR or otherwise, is its mainframe connectivity. Further, to the best of my knowledge, HP has no plans in the near term to invest in providing mainframe connectivity for the 3PAR T-Class though in the long term, who knows?

So in all likelihood, HP’s relationship with Hitachi Ltd will not go away and the HP XP will continue to be a part of the HP StorageWorks portfolio for the foreseeable future. But my gut feeling is that HP will more aggressively push 3PAR storage in all of its enterprise accounts and only bring up the HP XP in accounts that need mainframe connectivity or where installed base preference exists for the XP.

A good measure of this will be next week on Wednesday when I attend the Q4 2010 VMware User Group (VMUG) in Omaha where HP’s technical team will be in attendance and presenting at the event. I will be curious to see what storage HP will be pushing but my bet is already on 3PAR as I was just contacted yesterday by a former 3PAR sales rep (now HP) who will coincidentally be in town at the same time as the event.

As to the future of the HP EVA, long term I see the EVA continuing to fit below the 3PAR F-Class and alongside the iSCSI-based HP P4000. The EVA installed base is massive and for those who appreciate the ease of use of the EVA and are deploying in more traditional, predictable workload environments, the EVA will continue to rule.

One other minor question that people have also been wondering is, “Will the 3PAR name stick around?” While no one really knows for sure, my guess is probably not. HP wants its brand on everything and while 3PAR was fairly well known in the enterprise storage space, outside of that space, not so much.

So my sense is that, based upon what HP did in re-branding Lefthand Networks as the “P-Series”, the 3PAR brand will suffer the same fate sometime in the near future. However I can see the 3PAR “T-Class” and “F-Class” model designations for its InServ Storage Servers persevering as those seem to fit within the HP storage branding philosophy.

Introducing The Clandestine Airplane Cinema and good things happening at SNIA

SNW in Dallas was very educational and fun – an excellent show and there are some informative Infosmack interviews in the works that people might want to check out.

You might also want to go directly to the SNIA site to catch up on the latest developments in the Cloud Storage Initiative (CSI) and the Green Storage Initiative (GSI).

The Cloud Storage Initiative is developing the means to create and transfer metadata for data stored in the cloud. This is a huge deal because it promises to alleviate one of the largest concerns about cloud storage, which is portability of data among different cloud storage service and IAAS providers.

The Green Storage Initiative introduced a new power efficiency program called SNIA Emerald, which is providing power consumption measurements for storage. There are many challenges involved with this sort of work and I give the folks working on this at SNIA a lot of credit for making progress through such a thorny topic. SNIA Emerald is an excellent example of how SNIA is providing leadership for the entire storage industry.

Tape is Back in the Storage Conversation at Fall SNW 2010

Of all the topics that I thought I might be writing about after my first day in attendance at the fall Storage Networking World (SNW) conference in 2010, I did not think tape would be it. In fact, it was not even on my radar screen walking into the show. But after meeting with the Ultrium LTO team yesterday at SNW, it is clear that tape is back in the storage conversation and those arguing for its broader adoption and continued use have much more to talk about than its power savings, larger capacities and faster speeds.

Over the past few years (OK, the last decade), tape has been about as exciting to write about as sliced bread. (My apologies to all you sliced bread lovers out there.) Every other year or so I would meet with the LTO team and they would tell me about how they had doubled their capacity and increased the speeds of the tape drives but other than that nothing had fundamentally changed.

Oh sure, they may have added WORM capabilities 5-7 years ago that two insurance companies in downtown NYC now use (speaking tongue in cheek now) and encryption which many more financial institutions should be using. However for the most part it was thanks for the update, thanks for the coffee and I’ll see you in two years when you tell me the same thing again.

This year was completely different. Yes, we spent a minute (if that) discussing the increases in capacity and performance that LTO-5 offers (though I don’t even recall what they are as I write this) but more importantly we spent some time discussing the new use cases for tape that LTO-5’s new Linear Tape File System (LTFS) technology creates.

The major (and I would almost classify it as breakthrough) advance is that LTFS changes tape from application dependent to application independent. Up to now, any time someone wanted to store or retrieve data from tape, it required a third party backup application such as CommVault® Simpana® 9 or Symantec NetBackup, which made tape anything but user friendly.

Oh sure, there are some UNIX folks that know UNIX well enough to mount and unmount tapes and can then run the tar commands to write data to these tapes. But, hello, how much business value does that really add in this day and age and who really has time to do that? Let’s try zero. With so little end user or application friendliness, tape had become extremely application dependent.

However with LTFS, tape cartridges look just like the C: or D: drive on your PC. LTFS formats the tape so an operating system can recognize it such that files can be dragged and dropped from your hard drive to the tape and vice versa.

To store and retrieve data from tape, no special application or even any special knowledge is needed. All you need to know is your basic copy and paste. Hello usability. Granted, this currently only works on Linux and Mac with Windows functionality on the way, and a special LTFS driver has to be loaded on the operating system, but the important point here is that it switches tape from “hard to use” to “easy to use.”
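To illustrate just how ordinary working with an LTFS-mounted cartridge becomes, here is a minimal sketch. The mount point and file names are hypothetical; it simply assumes the LTFS driver has already mounted a tape as a normal directory.

    import shutil
    from pathlib import Path

    # Hypothetical mount point where the LTFS driver has presented the tape cartridge
    # as an ordinary file system (the actual path depends on the OS and driver).
    TAPE_MOUNT = Path("/mnt/ltfs_tape")

    def archive_to_tape(source_file: str) -> Path:
        """Copy a file onto the LTFS-mounted tape exactly as if it were a disk volume."""
        destination = TAPE_MOUNT / Path(source_file).name
        shutil.copy2(source_file, destination)  # plain file copy; no backup application involved
        return destination

    if __name__ == "__main__":
        archived = archive_to_tape("/data/video/raw_footage_001.mov")
        print(f"Archived to {archived}")

In other words, once the cartridge is mounted, storing a file to tape is the same operation as storing it to any other drive letter or mount point.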

As to why LTFS puts tape back in the storage conversation, it’s simple. By tape adopting some of the usability features of disk, it can once again be used to store large data files that really are not well suited for disk such as audio, image and video files. These file types neither compress nor deduplicate well and, 30 days after they are created, may rarely be accessed and need to be archived.

This is actually where tape’s features of low power consumption and longevity come into play. By storing this data out to tape, the data is just as accessible as it was on disk (though the initial access to the file may take longer due to the need for the tape to reposition) without introducing disk’s costs.

LTFS also addresses another historical concern of tape: data migration from prior generations of tape to current ones. How many users have old tapes sitting around either because it is too much work to migrate the data to newer, higher capacity tapes, or because no one knows what application stored the data in the first place so the old tapes cannot even be read, let alone copied to new cartridges?

Using LTFS, as future generations of LTO are released, all one has to do is put an LTO-5 tape and an LTO-7 tape in two separate tape drives and easily move the data from the LTO-5 tape cartridge to the new LTO-7 tape cartridge. Data migration can then occur without any application dependencies plus you can potentially reduce the total number of tapes under management by as much as 75% since LTO-7 is forecast to have 4x as much storage capacity as LTO-5.
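That 75% figure falls directly out of the 4x capacity forecast. As a quick back-of-the-envelope check (using LTO-5’s 1.5 TB native capacity and treating the 4x multiplier as the only assumption carried over from above):

    # Back-of-the-envelope check of the tape consolidation claim.
    lto5_native_tb = 1.5                  # LTO-5 native capacity
    lto7_native_tb = lto5_native_tb * 4   # forecast: 4x LTO-5

    data_tb = 120                         # example archive size (arbitrary illustration)
    lto5_cartridges = data_tb / lto5_native_tb   # 80 cartridges
    lto7_cartridges = data_tb / lto7_native_tb   # 20 cartridges

    reduction = 1 - lto7_cartridges / lto5_cartridges
    print(f"{reduction:.0%} fewer cartridges")   # -> 75% fewer cartridges

The exact archive size does not matter; any data set needs one quarter as many cartridges once per-cartridge capacity quadruples.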

I never believed for one minute that tape was dead but over the last decade it certainly appeared to have become irrelevant. LTFS changes all of that. Not only has LTFS put tape back in the storage conversation, it makes it possible for tape to even potentially be considered for use as a storage device in the cloud.

Further, since LTFS now looks like disk, why can’t it sit behind a deduplication solution such as one from FalconStor that can use any type of disk but now store deduplicated data to tape? There are probably reasons why it can’t work right now but who is to say that cannot change in the future?

So is LTFS ready for prime time today? No, as there are still a lot of particulars that need to be worked out. But the fact that I am talking about it, examining new ways that tape can be used, and that there are already practical use cases for LTFS as it stands today means that other people are likely having the same thoughts as me about the new possibilities that tape offers, as opposed to relegating it to the path to nowhere.

Working Socially in Barcelona and the future of EVA

(Gaudi's La Pedrera, in Barcelona)

There was an HP marketing event in Barcelona last week for European, Middle Eastern and African journalists, analysts and social media people. I was invited to attend and met a lot of people I hadn't met before – both from within HP as well as people from the storage blogosphere. The event was not exclusive to storage, but covered servers and networking products too – and emphasized HP's vision for doing more work with fewer products, which is the underlying philosophy of HP's Converged Infrastructure (CI).

The social media attendees were given front row seating during the product presentations. I'm sure it seemed a bit odd to the presenters that the first two rows of attendees were often heads-down engaged with the Twittersphere, but that's certainly indicative of the way social technologies are changing marketing. I give the HP marketing team a lot of credit for putting social media front and center during these sessions.

The social media people assembled there included influential bloggers: Ilja Coolen, Alessandro Perilli, Chris Evans, Greg Knieriemen, Paul Miller, Chris Mellor (yes, Chris Mellor from The Register, who spans both traditional journalism and social media), and HP insiders: Tom Augenthaler, Kristie Popp, Chris Purcell, Andy Bryant, Lee Johns and Calvin Zito. The group dynamic was terrific – a lot of energetic conversations and humor.

Barcelona is known for its fine restaurants – each of them better than the last – and we certainly ate well. My recommendation is a small restaurant called El Clandestino. Andy Bryant found it on Google and although it perplexed our taxi drivers by being on an almost invisible street on the maps, it was worth the effort.

One of the big questions people had before this event was what would happen to HP's EVA product line after the 3PAR acquisition. The video below, taken at Montjuic on my way out of town (more on that later) discusses that topic.

Veritas Operations Manager 3.1 Brings New Automation to Storage Management

One of the trends in the upcoming decade in storage management is already taking shape: automation. This trend is driven in large part by the new storage management features that were introduced in the previous decade to address specific application challenges but that have, in the process, created their own set of challenges to manage. It is these challenges that today’s release of Veritas Operations Manager 3.1 is designed to address.

Complexity has found its way into data centers over the last decade as a result of the introduction of new technologies. However, enterprise networked storage systems in all of their different forms with all of their different options have probably contributed as much to this increase in complexity as any other technology in the data center.

Consider just some of the options that are available on enterprise storage arrays that make managing storage more complex:

Different tiers of disk with different capacities and speeds
Different types of RAID to address specific availability, cost and performance concerns
Thin provisioning to minimize storage allocation
Multiple storage networking interfaces (FC, Ethernet, InfiniBand)

In addition, there are over 30 providers of enterprise storage arrays who each implement features on their storage arrays in different flavors. As a result, organizations deploying multiple storage arrays can easily create silos of storage capacity, lose track of how much storage is on specific arrays and what capacity is utilized on them, and over-allocate capacity to prevent applications from running out of space and experiencing outages. It is no wonder that managing storage has become so complex!

This complexity is in part what Symantec’s Veritas Storage Foundation High Availability (HA) has previously addressed by enabling organizations to pool these storage resources, optimize storage utilization, provide a common means for managing storage and prevent downtime.

But as enterprise organizations continue to consolidate data centers, accelerate their adoption of server virtualization and increase the number of servers (physical or virtual) under Storage Foundation’s management, they need to automate the processes associated with the ongoing management of Storage Foundation, as well.

So though Storage Foundation provides a consistent means to manage the storage presented to it by multiple storage arrays across multiple operating systems, administrators still had to figure out:

How the underlying LUN presented to Storage Foundation is constructed (what RAID group, performance level, etc.)
Additional features present in the LUN (thin provisioning, SCSI-3 reservation)
Types of storage to be provisioned to the application (high performance, high capacity, or highly available)

It is these three challenges that Veritas Operations Manager, bundled free with Storage Foundation, addresses in the following ways.

First, it eliminates the need for the administrator to determine how each LUN presented by a storage array to Storage Foundation is constructed beneath the covers. Storage Foundation can communicate with the attached storage array to gather information about the LUN: its RAID configuration, the type of disk it is using and other special features it may offer such as thin provisioning or a SCSI-3 reservation. This information is then gathered and provided to Veritas Operations Manager.

Second, using Veritas Operations Manager, administrators get a complete view of how much and what type of storage is assigned to each physical or virtual server. Equipped with that information, they can use Veritas Operations Manager to create storage templates (for example, Gold, Silver and Bronze) that automatically put these LUNs into the appropriate storage pools based upon the template definitions.

Third, these templates are then applied to all of the servers that are under Veritas Operation Manager’s control so that as new LUNs are presented and discovered by Storage Foundation on each of these servers, the template automatically detects the properties of that LUN and puts it into the correct storage pool on that server.

In this way, as an application needs more storage of a specific type, Storage Foundation can be set to automatically pull a LUN from the appropriate pool and assign it to that application without the need for administrative intervention.
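As a rough mental model of how such template-driven placement works, here is an illustrative sketch. It is my own simplification, not Symantec’s implementation; the attribute names and tier rules are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Lun:
        """Properties an array might report for a LUN (names are illustrative only)."""
        name: str
        raid_level: str        # e.g. "RAID10", "RAID5", "RAID6"
        disk_type: str         # e.g. "SSD", "FC", "SATA"
        thin_provisioned: bool

    # Hypothetical template definitions, evaluated in priority order (Gold matched first).
    TEMPLATES = [
        ("Gold",   lambda l: l.disk_type in ("SSD", "FC") and l.raid_level == "RAID10"),
        ("Silver", lambda l: l.raid_level in ("RAID5", "RAID10")),
        ("Bronze", lambda l: True),   # catch-all pool
    ]

    def assign_pool(lun: Lun) -> str:
        """Place a newly discovered LUN into the first storage pool whose rule it satisfies."""
        for pool, rule in TEMPLATES:
            if rule(lun):
                return pool
        return "Unclassified"

    if __name__ == "__main__":
        print(assign_pool(Lun("lun01", "RAID10", "FC", False)))   # -> Gold
        print(assign_pool(Lun("lun02", "RAID5", "SATA", True)))   # -> Silver

The takeaway is that once the LUN’s underlying properties are reported up from the array, pool placement becomes a rules lookup rather than a manual decision by the administrator.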

Veritas Operations Manager can also work in reverse. If Storage Foundation detects that all of the space on a thinly provisioned LUN has been freed with its Thin Reclamation API, it can reclaim that LUN and put it back into the appropriate storage pool for future use.

Veritas Operations Manager 3.1 does not eliminate the need for server and storage administrators to communicate with application owners so they understand what the application requirements are. But once those application requirements are established, Veritas Operations Manager does provide them with the visibility that they need into their storage infrastructure so they can create a catalog of servers and services that, once applied, can automate the deployment and reclamation of storage to applications in their environment.

CommVault and Symantec Square Off in the Battle for Backup

Last week Josef Pfeiffer, a Symantec NetBackup product manager, posted a comment in response to a blog entry that I wrote regarding the CommVault® Simpana® 9 release. In his comment, he touched on one of the new debates in the enterprise battle for backup by posing the following question, “Why not upgrade to the latest release (NetBackup 7) and get more functionality rather than settle for less features and a big migration that may or may not work?”

Now as to whether or not Simpana 9 has fewer features than NetBackup 7 as Pfeiffer claims, I am unsure as I have never done an in-depth side-by-side comparison of each and every feature in the two products. However it is fairly clear that a number of the features CommVault announced in Simpana 9 have been available in NetBackup for some time. This includes source side deduplication, broad support for storage array based snapshots and capacity based licensing, as Pfeiffer mentions.

In this sense, CommVault only puts itself back on equal footing with NetBackup by introducing these features into Simpana 9. Further, there are other features for which CommVault does not yet have a good answer.

For instance, CommVault does media server based deduplication (as NetBackup does) but CommVault still does not offer integration with target-based deduplication solutions such as what Symantec’s OpenStorage API (OST) offers for EMC Data Domain, or what EMC NetWorker now provides with its DD Boost technology. Whether or not CommVault plans to offer that type of functionality in the future is still unclear.

But in speaking with CommVault customers, the reason they cite, and why I see CommVault continuing to have success in the battle for backup, is not entirely the number of features that CommVault offers. Rather it goes more to how CommVault delivers the features it does support, first by automating the deployment of these features and then by automating their management after they are deployed.

This brings to mind a conversation that I had with a few members of EMC’s Backup and Recovery Services (BRS) team a few weeks ago. DCIG and SMB Research are jointly putting together a Virtual Server Backup Software Buyer’s Guide and we were discussing a few of the questions in the survey to which EMC was responding.

During that conversation, EMC made a statement that struck me. It said, “We can respond ‘yes’ to almost any one of these questions in the survey because the functionality is inherently present in the product. It is more of a matter of how you want those features delivered.”

As a former end user and now a business owner, I know how I want features presented. I want a button to push (in a GUI) or, even better yet, a policy that automatically pushes the button for me and warns me when pushing the button is not working or will cause something else to fail.

Just having a feature present in the product that requires me to build scripts is only valuable when the business problem that I am having makes it imperative that I learn how to implement that script so I can leverage that feature and get back to running my business. Further, I will likely only implement that feature on that one application that needs its functionality.

I think this illustrates the biggest differentiator between CommVault and Symantec. As of right now, CommVault is doing a better job of automating the delivery and management of the features it currently supports than Symantec. So even though Symantec may offer more technical features than CommVault and has even offered these features for a longer period of time, based upon what I have learned in talking to former NetBackup users who have switched to CommVault, NetBackup’s features are not always as easy to automate.

I had one such conversation with Herbalife’s Principal IT Engineer, Andy Hansen, and blogged about it a little over a year ago. The three specific areas he pointed to as his reasons for switching to CommVault had little to do with a side-by-side comparison of the features of the two products. In fact, he found Symantec and CommVault comparable in the features that mattered in his environment. Rather his reasons for switching to CommVault included:

Using CommVault, backup went from a position requiring an FTE to a task that could be managed as part of another FTE’s responsibilities
CommVault automated the backup and recovery of his Oracle databases
No extra backup reporting software was necessary

Now did some of his success have something to do with using disk instead of tape as his primary backup target? I would argue yes. But what ultimately swayed his decision to select CommVault over Symantec was the degree to which CommVault had integrated and automated the delivery and then the ongoing management of its features into its core product (at that time, Simpana 8.)

I also deduce that Hansen is still pleased with his CommVault implementation as a video clip of him speaking appeared on last week’s Simpana 9 launch webcast and virtual show.

So is NetBackup ahead of Simpana 9 in the features that it offers? Uncertain but for the sake of argument, let’s say yes. But are the two close enough in the number of features that they offer that intangibles such as automation and the ease of management after software deployment are starting to take priority in users’ minds over feature functionality? I would say absolutely yes.

In fact, my gut also tells me that Symantec recognizes this shift in customer sentiment is occurring and that it is part of the reason Symantec is putting so much emphasis on “Working smarter, not harder.” Further, having attended Symantec Vision this past spring, and based on conversations I have had with Symantec since and its ongoing announcements, it is also clear that this initiative is taking priority at Symantec and that it is making good progress on delivering on it.

So in trying to answer Pfeiffer’s question as to whether enterprise organizations should upgrade or migrate their backup software, it really depends on what they are trying to accomplish. But my sense is that more organizations are starting to place a premium on software that automates and simplifies the initial deployment and/or upgrade and then the ongoing management.

This means that decisions as to whether to upgrade current backup software or migrate to another product are now heavily influenced by how well the backup software provider can explain, and then prove during testing, that it can automate the deployment and management of tasks that are now frequently done manually.

VM6 Software Offers Virtualized SMB Environments an HA Solution Designed (and Priced) Just for Them

What small and midsize business (SMB) – and when I say SMB, I mean an SMB with 20 servers or less – wouldn’t kill to put those servers into a highly available (HA) and virtualized configuration? But to do so generally means deploying VMware, VMware Site Recovery Manager (SRM), some sort of networked storage solution and a bona fide expert to set it up and then manage this configuration. Now thanks to VM6 Software, the costs and complexity associated with doing that are no longer a prerequisite.

SMBs rarely have it easy when it comes to getting access to virtualized solutions that meet their needs. While there is a lot of buzz (rightfully so) around VMware and the impact it is having on reducing costs while improving availability in enterprise organizations, it still has to be about 10x easier to find an IT guy who knows how to set up and configure a Microsoft Windows Server than it is to find a VMware administrator to perform the same task. So the whole idea of an SMB deploying VMware and then hosting all of its applications on a server virtualization platform which none of its staff has ever managed before is more than just a stretch for these folks.

So if they want to pursue server virtualization right now and still stay with an operating system they know and understand, then that means Microsoft Windows 2008 R2 with Hyper-V. But if they are going to do that, that creates the need for HA and some sort of clustering solution because you can’t have all of your applications virtualized and then have the underlying server supporting them fail. That also means the SMB is going to need some sort of external storage solution to host this data which also has to be configured as highly available.

So the first two things that every SMB thinks about when that configuration is described to them are the cost and the complexity associated with deploying it. And when they calculate what 20 physical servers cost versus what this new configuration costs, plus factor in the extra risks and unknowns associated with deploying it, it is only natural that many of them stick with what they know.

However this is the exact problem that VM6 Software’s VMex solves (yes, its name bears a striking similarity to another well known vendor’s product). Here is what makes it different from other solutions and why SMBs may find it even more appealing than VMware.

First, VMex is designed to work with Microsoft Windows 2008 R2 with Hyper-V. This means that SMBs can stay with an operating system that their IT staff know and understand.

Second, VMex functions as clustering software. SMBs can purchase two physical servers, install Microsoft Windows 2008 R2 with Hyper-V on each of them and then install VMex at the parent level on Hyper-V. In this way, should one physical server fail, VMex will fail all of the guest OSes on that server over to the secondary one.

Equally important, the introduction of VMex does not negate the ability of Windows administrators to take advantage of the Live Migration feature found in Hyper-V. So in this respect, VMex and Hyper-V complement each other nicely with VMex providing a failover solution in the event of the failure of an entire physical server while Hyper-V enables the movement of a guest OS from one Hyper-V server to another.
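To make the failover role concrete, here is a minimal sketch of the kind of decision a cluster layer like this has to make. It is not VM6’s code; the heartbeat timeout, function names and host names are assumptions invented purely to illustrate the concept of restarting a failed node’s guests on the surviving node.

    import time

    # Illustrative heartbeat-based failover check (not VM6's implementation).
    HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before a node is declared failed

    def node_failed(last_heartbeat: float, now: float) -> bool:
        """A node is considered failed once its heartbeat goes silent for too long."""
        return (now - last_heartbeat) > HEARTBEAT_TIMEOUT

    def failover(guests_by_node: dict, failed: str, survivor: str) -> None:
        """Move every guest OS from the failed node onto the surviving node."""
        guests_by_node[survivor].extend(guests_by_node.pop(failed, []))

    if __name__ == "__main__":
        guests = {"host-a": ["exchange-vm", "sql-vm"], "host-b": ["file-vm"]}
        if node_failed(last_heartbeat=time.time() - 30, now=time.time()):
            failover(guests, failed="host-a", survivor="host-b")
        print(guests)   # -> {'host-b': ['file-vm', 'exchange-vm', 'sql-vm']}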

Third, and maybe most important, VMex virtualizes the disk associated with each physical server and creates a virtual SAN or V-SAN out of it. VMex essentially inserts itself between Windows Hyper-V and its storage capacity such that the storage is no longer under the control of Hyper-V but under the control of VMex.

Once these disks are virtualized and under the control of VMex, features such as snapshots and replication can be introduced such that as data is written by a guest OS to a disk on one server, VMex can intercept those write I/Os and asynchronously copy those writes over to the other server. This functionality is significant since by making internal disks function like a SAN, organizations can do either Live Migrations of VMs using Hyper-V or entire server failovers from one server to another using VMex without deploying any external storage array.
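To picture what that write interception and asynchronous copy might look like conceptually, here is a minimal sketch. It is not VM6’s implementation; the queue-based design and names are assumptions made purely to illustrate the idea of acknowledging the local write immediately and shipping a copy to the peer node in the background.

    import queue
    import threading

    class AsyncMirror:
        """Illustrative write interceptor: commit locally, replicate to a peer in the background."""

        def __init__(self, local_store: dict, peer_store: dict):
            self.local = local_store
            self.peer = peer_store
            self.pending = queue.Queue()
            threading.Thread(target=self._replicate, daemon=True).start()

        def write(self, block_id: int, data: bytes) -> None:
            self.local[block_id] = data         # local write completes immediately
            self.pending.put((block_id, data))  # a copy is shipped to the peer asynchronously

        def _replicate(self) -> None:
            while True:
                block_id, data = self.pending.get()
                self.peer[block_id] = data      # stand-in for sending the block over the network
                self.pending.task_done()

    if __name__ == "__main__":
        node_a, node_b = {}, {}
        mirror = AsyncMirror(node_a, node_b)
        mirror.write(42, b"guest OS block")
        mirror.pending.join()                   # wait for replication to drain (demo only)
        print(node_b[42])                       # -> b'guest OS block'

The design point being illustrated is simply that the guest OS sees the latency of its local disk, while a second copy of each write accumulates on the other server to support failover or Live Migration.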

This configuration should also not be confused with Virtual SAN Appliances (VSAs) that other providers offer. While these other providers enable the creation of virtual SANs using internal disk, they also first require the creation of a guest OS on the Hyper-V server on which their virtualization software is installed. This works OK for low performance applications but can negatively impact performance for applications like Microsoft Exchange and SQL Server since the traffic has to make an extra hop through this VSA.

VMex’s technique avoids that. Since it is installed at the OS layer and not on a guest OS, high performance applications like Microsoft Exchange and SQL Server do not have to route their traffic through a VSA. Rather it follows the same path it has always taken through the OS out to the storage such that the primary factor limiting performance of a read or write is the disk drive’s speed.

VM6 Software’s VMex certainly should not be confused with enterprise clustering solutions such as Symantec’s Veritas Cluster Server. However, with a price starting at $2595 per server, the math for SMBs who possess 20 servers or fewer and who are looking to virtualize this environment starts to make sense with VMex. By sticking with technology they know and understand (Microsoft Windows 2008 R2 with Hyper-V), reducing their server count from 20 to as few as 2 and potentially eliminating the need for external storage, they suddenly have a very potent and cost effective virtualization solution for their environment.