Lenovo TruScale and Nutanix Enterprise Cloud Accelerate Enterprise Transformation

Digital transformation is an enterprise imperative. Enabling that transformation is the focus of Lenovo’s TruScale data center infrastructure services. The combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Cloud is the Transformation Trigger

Many enterprises are seeking to go to the cloud, or at least to gain the benefits associated with the cloud. These benefits include:

Pay-as-you-go operational costs instead of large capital outlays
Agility to rapidly deploy new applications
Flexibility to adapt to changing business requirements

For many IT departments, the trigger for serious consideration of a move to the cloud is when the CFO no longer wants to approve IT acquisitions. Unfortunately, the journey to the cloud often comes with a loss of control over both costs and data assets. Thus many enterprise IT leaders are seeking a path to cloud benefits without sacrificing control of costs and data.

TruScale Brings True Utility Computing to Data Center Infrastructure

The Lenovo Data Center Group focused on the needs of these enterprise customers by asking themselves:

What are customers trying to do?
What would be a winning consumption model for customers?

The answer they came up with is Lenovo TruScale Infrastructure Services.

Nutanix invited DCIG analysts to attend the recent .NEXT conference. While there we met with many participants in the Nutanix ecosystem, including sitting down for an interview with Laura Laltrello, VP and GM of Lenovo Data Center Services. This article, and DCIG’s selection of Lenovo TruScale as one of three Best of Show products at the conference, is based largely on that interview.

As noted in the DCIG Best of Show at Nutanix .NEXT article, TruScale brings true utility computing to the data center. Lenovo bills TruScale clients a monthly management fee plus a utilization charge based on the power consumed by the Lenovo-managed IT infrastructure. Clients can commit to a certain level of usage and be billed a lower rate for that baseline. This is similar to reserved instances on Amazon Web Services, except that customers pay only for actual usage, not reserved capacity.
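The billing model works out to simple arithmetic. The sketch below illustrates it with invented rates and a hypothetical committed baseline; Lenovo’s actual rate card is not public in this article:

```python
def truscale_monthly_bill(kwh_used: float,
                          committed_kwh: float,
                          base_rate: float,
                          committed_rate: float,
                          management_fee: float) -> float:
    """Illustrative TruScale-style bill: a flat management fee plus a
    power-based utilization charge.  Usage up to the committed baseline
    is billed at the (lower) committed rate; only actual usage beyond
    the commitment is billed at the standard rate.  All rates here are
    hypothetical, not Lenovo's."""
    committed_portion = min(kwh_used, committed_kwh) * committed_rate
    overage_portion = max(kwh_used - committed_kwh, 0.0) * base_rate
    return management_fee + committed_portion + overage_portion

# A site that committed to 10,000 kWh but drew 12,000 kWh pays the
# discounted rate on the commitment and the standard rate on the rest;
# a site that drew only 8,000 kWh pays for 8,000 kWh, not the commitment.
over = truscale_monthly_bill(12_000, 10_000, base_rate=0.40,
                             committed_rate=0.30, management_fee=1_500)
under = truscale_monthly_bill(8_000, 10_000, base_rate=0.40,
                              committed_rate=0.30, management_fee=1_500)
```

Note the asymmetry with AWS reserved instances: in the second case the customer is billed for 8,000 kWh of actual usage, not the 10,000 kWh commitment.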

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

Data center workloads tie directly to revenue.
They want IT to focus on enabling digital transformation, not infrastructure management.
They need to retain possession or secure control of their data.

Lenovo TruScale Offers Everything as a Service

TruScale can manage everything as a service, including both hardware and software. Lenovo works with its customers to figure out which licensing programs make the most sense for the customer. Where feasible, TruScale includes software licensing as part of the service.

Lenovo Monitors and Manages Data Center Infrastructure

TruScale does not require companies to install any extra software. Instead, it gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.
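Server management processors such as Lenovo’s XClarity Controller expose power telemetry through standard interfaces like DMTF Redfish, whose `/redfish/v1/Chassis/{id}/Power` resource reports `PowerConsumedWatts` per `PowerControl` entry. As an illustration of how sampled readings become billable energy (the sampling cadence and helper functions are assumptions, not Lenovo’s implementation):

```python
def consumed_watts(redfish_power_payload: dict) -> float:
    """Read the instantaneous draw from a Redfish /Chassis/{id}/Power
    response: each PowerControl entry carries PowerConsumedWatts."""
    controls = redfish_power_payload.get("PowerControl", [])
    return sum(c.get("PowerConsumedWatts", 0.0) for c in controls)

def energy_kwh(samples, interval_seconds: float) -> float:
    """Integrate evenly spaced wattage samples into kilowatt-hours."""
    watt_seconds = sum(samples) * interval_seconds
    return watt_seconds / 3_600_000  # 1 kWh = 3.6 million watt-seconds

# One hour of one-minute samples at a steady 500 W draw is 0.5 kWh.
payload = {"PowerControl": [{"PowerConsumedWatts": 500}]}
samples = [consumed_watts(payload) for _ in range(60)]
half_kwh = energy_kwh(samples, interval_seconds=60)  # -> 0.5
```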

Lenovo uses the data it collects to trigger support interventions. Lenovo services professionals handle all routine maintenance including installing firmware updates and replacing failed components to ensure maximum uptime. Thus, Lenovo manages data center infrastructure below the application layer.

Lenovo Provides Continuous Infrastructure (and Cost) Visibility

Lenovo also uses the data it collects to provide near real-time usage data to customers via a dashboard. This dashboard graphically presents performance against key metrics, including actual versus budgeted spend. In short, Lenovo’s approach to utility data center computing provides a distinctive and easy means to deploy and manage infrastructure across its entire lifecycle.

Lenovo Integrates with Nutanix Prism

Lenovo TruScale infrastructure services cover the entire range of Lenovo ThinkSystem and ThinkAgile products. The software-defined infrastructure products include pre-integrated solutions for Nutanix, Azure HCI, Azure Stack and VMware.

Lenovo has taken extra steps to integrate its products with Nutanix. These include:

ThinkAgile XClarity Integrator for Nutanix is available via the Nutanix Calm marketplace. It works in concert with Prism to integrate server data and alerts into the Prism management console.

ThinkAgile Network Orchestrator is an industry-first integration between Lenovo switches and Prism. It reduces error and downtime by automatically changing physical switch configurations when changes are made to virtual Nutanix networks.
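Conceptually, the Network Orchestrator’s job is a reconciliation loop: compute the difference between the VLANs the virtual networks need and what each physical switch port actually carries, then push only the delta. A hypothetical sketch of that core computation (data shapes and names are invented for illustration; Lenovo’s actual Prism/switch integration is not shown):

```python
def reconcile_vlans(virtual_networks: dict, switch_port_vlans: dict) -> dict:
    """Given the VLAN each virtual network uses and the VLANs currently
    trunked on each switch port, return the per-port changes needed so
    every virtual-network VLAN is present and stale VLANs are pruned.
    Purely illustrative of the orchestration concept."""
    desired = set(virtual_networks.values())
    changes = {}
    for port, vlans in switch_port_vlans.items():
        current = set(vlans)
        add, remove = desired - current, current - desired
        if add or remove:
            changes[port] = {"add": sorted(add), "remove": sorted(remove)}
    return changes

# Adding virtual network "vm-net-b" (VLAN 200) means eth1 needs VLAN 200
# trunked and the now-unused VLAN 300 removed; eth2 is already correct.
delta = reconcile_vlans({"vm-net-a": 100, "vm-net-b": 200},
                        {"eth1": [100, 300], "eth2": [100, 200]})
```

The value of doing this automatically is exactly what the article describes: the physical configuration can never drift out of step with the virtual one, which is where manual errors and downtime usually come from.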

Nutanix Automates the Application Layer

Nutanix software simplifies the deployment and management of enterprise applications at scale. The following graphic, taken from the opening keynote, lists each Nutanix component and summarizes its function. (Source: Nutanix)

The Nutanix .NEXT conference featured many customers telling how Nutanix has transformed their data center operations. Their statements about Nutanix include:

“stable and reliable virtual desktop infrastructure”

“a private cloud with all the benefits of public, under our roof and able to keep pace with our ambitions”

“giving me irreplaceable time and memories with family”

“simplicity, ease of use, scale”

Lenovo TruScale + Nutanix = Accelerated Enterprise Transformation

I was not initially a fan of the term “digital transformation.” It felt like yet another slogan that really meant, “Buy more of my stuff.” But practical applications of machine learning and artificial intelligence are here now and truly do present significant new opportunities (or threats) for enterprises in every industry. Consequently, and more than at any time in the past, the IT department has a crucial role to play in the success of every company.

Enterprises need their IT departments to transition from being “Information Technology” departments to “Intelligent Transformation” departments. TruScale and Nutanix each enable such a transition by freeing up IT staff to focus on the business rather than on technology. Together, the combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Transform and thrive.

Disclosure: As noted above, Nutanix invited DCIG analysts to attend the .NEXT conference. Nutanix covered most of my travel expenses. However, neither Nutanix nor Lenovo sponsored this article.

Updated on 5/24/2019.

HYCU-X Piggybacks on Existing HCI Platforms to Put Itself in the Scale-out Conversation

Vendors are finding multiple ways to enter the scale-out hyper-converged infrastructure (HCI) backup conversation. Some acquire other companies, as StorageCraft did in early 2017 with its acquisition of ExaBlox. Others build their own, as Cohesity and Commvault did. Yet among these many iterations of scale-out, HCI-based backup systems, HYCU’s decision to piggyback its new HYCU-X on top of existing HCI offerings, starting with Nutanix’s AHV HCI Platform, represents one of the more insightful ways to deliver backup using a scale-out architecture.

That HYCU and Nutanix were inextricably linked before the HYCU-X announcement almost goes without saying. HYCU was first to market in June 2017 with a backup solution specifically targeted at and integrated with the Nutanix AHV HCI Platform. Since then, HYCU has been a leader in providing backup solutions targeted at Nutanix AHV environments.

In coming out with HYCU-X, HYCU addresses an overlooked segment in the HCI backup space. Companies looking for a scale-out secondary storage system to use as their backup solution typically had to go with a product that was:

1. New to the backup market;
2. New to the HCI market; or
3. New to both the backup and HCI markets.

Of these three, a backup provider that falls into the second or third category, being in any way new to the HCI market, is less than ideal. Unfortunately, this is where most backup products fall, as the HCI market itself is still relatively new and maturing.

However, this scenario puts these vendors in a tenuous position when it comes to optimizing their backup product. They must continue to improve and upgrade their backup solution even as they try to build and maintain an emerging and evolving HCI platform that supports it. This is not an ideal situation for most backup providers as it can sap their available resources.

By initially delivering HYCU-X on Nutanix’s AHV Platform, HYCU avoids having to create and maintain separate teams to build separate backup and HCI solutions. Rather, HYCU can rely upon Nutanix’s pre-existing and proven AHV HCI Platform and focus on optimizing HYCU-X for this role as a scale-out HCI backup platform. In so doing, both HYCU and Nutanix can continue to deliver features and functions that work in as little as one click.

Now could companies use Nutanix or other HCI platforms as a scale-out storage target without HYCU-X? Perhaps. But with HYCU-X, companies get the backup engine they need to manage the snapshot and replication features natively found on the HCI platform.

By HYCU starting with Nutanix, companies can leverage the Nutanix AHV HCI Platform as a backup target. They can then use HYCU-X to manage the data once it lands there. Further, companies can then potentially use HYCU-X to back up other applications in their environment.
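Managing the platform’s native snapshot and replication features ultimately means applying policy to snapshots the platform already creates. A minimal, purely illustrative sketch of the retention half of that job (the function and policy parameters are invented for this example and do not depict HYCU-X’s or Nutanix’s actual APIs):

```python
from datetime import datetime, timedelta

def snapshots_to_expire(snapshots, keep_latest: int,
                        max_age: timedelta, now: datetime) -> list:
    """Given (snapshot_id, created_at) pairs, return the ids to delete:
    anything older than max_age, except that the newest keep_latest
    snapshots are always retained regardless of age."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    protected = {sid for sid, _ in ordered[:keep_latest]}
    return [sid for sid, created in ordered
            if sid not in protected and now - created > max_age]

# With a 7-day window and the two newest always kept, only the
# 23-day-old snapshot is eligible for expiry.
expired = snapshots_to_expire(
    [("s1", datetime(2019, 5, 1)),
     ("s2", datetime(2019, 5, 20)),
     ("s3", datetime(2019, 5, 23))],
    keep_latest=2, max_age=timedelta(days=7), now=datetime(2019, 5, 24))
```

The point of the sketch is the division of labor the article describes: the HCI platform produces and replicates the snapshots; the backup engine supplies the policy brain on top.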

While some may argue that using Nutanix instead of purpose-built scale-out secondary HCI solutions from other backup providers will cost more, the feedback that HYCU has received from its current and prospective customer base suggests the opposite is true. Companies find that by the time they deploy these other providers’ backup and HCI solutions, their costs could exceed the costs of a Nutanix solution running HYCU-X.

The scale-out backup HCI space continues to gain momentum for good reason. Companies want the ease of management, flexibility, and scalability that these solutions provide, along with their promise of making disaster recovery much simpler to adopt and easier to manage over time.

By HYCU piggybacking initially on the Nutanix AHV HCI Platform to deliver a scale-out backup solution, companies get the reliability and stability of one of the largest, established HCI providers and access to a backup solution that runs natively on the Nutanix AHV HCI Platform. That will be a hard combination to beat.

VMware Shows New Love for Public Clouds and Containers

In recent months and years, many have come to question VMware’s commitment to public clouds and containers used by enterprise data centers (EDCs). No one disputes that VMware has a solid footprint in EDCs and that it is in no immediate danger of being displaced. However, many have wondered how or if it will engage with public cloud providers such as Amazon as well as how it would address threats posed by Docker. At VMworld 2017, VMware showed new love for these two technologies that should help to alleviate these concerns.

Public cloud offerings such as those available from Amazon and container technologies such as Docker’s have captured the fancy of enterprise organizations, and for good reasons. Public clouds provide an ideal means for organizations of all sizes to practically create hybrid private-public clouds for disaster recovery and failover. Similarly, container technologies expedite and simplify application testing and development, and give organizations new options to deploy applications into production with even fewer resources and less overhead than virtual machines require.

However, the rapid adoption and growth of these two technologies among enterprises in the last few years had left VMware somewhat on the outside looking in. While VMware had its own public cloud offering, vCloud Air, it did not compete very well with the likes of Amazon Web Services (AWS) and Microsoft Azure, as vCloud Air was primarily a virtualization platform. This feature gap probably led to VMware’s decision to create a strategic alliance with Amazon in October 2016 to run its vSphere-based cloud services on AWS, and its subsequent decision in May 2017 to divest itself of vCloud Air altogether and sell it to OVH.

This strategic partnership between AWS and VMware became a reality at VMworld 2017 with the announcement of the initial availability of VMware Cloud on AWS. Using VMware Cloud Foundation, administrators can use a single interface to manage their vSphere deployments whether they reside locally or in Amazon’s cloud. The main caveat is that this service is currently only available in the AWS US West region. VMware expects to roll this program out throughout the rest of AWS’s regions worldwide in 2018.

VMware’s pricing for this offering is as follows:

Region: US West (Oregon)

                          On-Demand (hourly)   1 Year Reserved*   3 Year Reserved*
List Price ($ per host)   $8.3681              $51,987            $109,366
Effective Monthly**       $6,109               $4,332             $3,038
Savings Over On-Demand                         30%                50%

*Coming Soon. Pricing Option Available at Initial Availability: Redeem HPP or SPP credits for on-demand consumption model.

**Effective monthly pricing is shown to help you calculate the amount of money that a 1-year and 3-year term commitment will save you over on-demand pricing. When you purchase a term commitment, you are billed for every hour during the entire term that you select, regardless of whether the instances are running or not. Source: VMware
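The "Effective Monthly" row follows directly from the list prices, and the savings percentages can be reproduced with a few lines of arithmetic (the 730-hour month is a common cloud-pricing convention and an assumption here, not something VMware states):

```python
HOURS_PER_MONTH = 730  # assumed convention: 8,760 hours/year over 12 months

on_demand_hourly = 8.3681
one_year_list, three_year_list = 51_987, 109_366

monthly_on_demand = on_demand_hourly * HOURS_PER_MONTH  # ~$6,109
monthly_1yr = one_year_list / 12                        # ~$4,332
monthly_3yr = three_year_list / 36                      # ~$3,038

# ~29% (the table rounds this to 30%) and ~50% respectively
savings_1yr = 1 - monthly_1yr / monthly_on_demand
savings_3yr = 1 - monthly_3yr / monthly_on_demand
```

Note that because the term commitments bill every hour of the term whether instances run or not, these savings only materialize for hosts that would otherwise run continuously on demand.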

The other big news coming out of VMworld was its response to the threat/opportunity presented by container technologies. To tackle this issue, it partnered with Pivotal Software, Inc., and collaborated with Google Cloud to offer the new Pivotal Container Service (PKS) that combines the Pivotal Cloud Foundry and VMware’s software-defined data center infrastructure offerings.


One of the major upsides of this offering is a defined, supported code level that enterprises can use for testing and development. Container technologies are experiencing a tremendous amount of change and innovation. While this may foretell great things for container platforms, this degree of innovation makes it difficult for enterprises to do predictable and meaningful application testing and development when the underlying code base is changing so swiftly.

By Google, Pivotal, and VMware partnering to deliver this platform, enterprises have access to a more predictable, stable, and supported container code base than what they might obtain independently. Further, they can have more confidence that the platform on which they test their code will work in VMware environments in the months and years to come.

VMware’s commitment to public cloud and container providers has been somewhat unclear over the past few years. But what VMware made clear at this year’s VMworld is that it no longer views cloud and container providers such as Amazon and Google as threats. Rather, it finally embraced what its customers already understood: VMware excels at virtualization, and Amazon and Google excel at cloud and container technologies. At VMworld 2017, it admitted to itself and the whole world that if you cannot beat them, join them, which was the right move for VMware and the customers it seeks to serve.

DCIG Announces Calendar of Planned Buyer’s Guide Releases in the First Half of 2015

At the beginning of 2014, I started the year with the theme: “it’s an exciting time to be part of the DCIG team.” This was due to the explosive growth we saw in website visits and the popularity of our Buyer’s Guides. That hasn’t changed. DCIG Buyer’s Guides continue to grow in popularity, but what’s even more exciting is the diversity of our new products and services. This year’s theme is diversity: DCIG is expanding…again…in different directions.

In the past year, we have added a number of offerings to our repertoire of products and services. In addition to producing our popular Buyer’s Guides and well-known blogs, we now offer Competitive Research Services, Executive Interviews, Executive White Papers, Lead Generation, Special Reports and Webinars. Even more unique, DCIG now offers an RFP/RFI Analysis Software Suite. This suite gives anyone (vendor, end-user or technology reseller) the ability to license the same software that DCIG uses internally to develop its Buyer’s Guides. In this way, you may use the software to do your internal technology assessments with your own scores and rankings so that the results align more closely with your specific business needs.

While we diversify our portfolio, it’s important to note that we also increased our Buyer’s Guide publication output by nearly 40% to thirteen (13) over our 2013 publications. We also contracted for over 30 Competitive Advantage reports in 2014. This success is largely due to a well-planned timeline, more clearly defined processes, and the addition of new analysts. The team is busy, and here is a sneak peek at the Buyer’s Guides that they are currently working on during the first half of 2015 (in order of target release date):

Hybrid Storage Array: A Hybrid Storage Array is a physical storage appliance that dynamically places data in a storage pool that combines flash memory and HDD storage (and in some cases NVRAM and/or DRAM) resources by intelligently caching data and metadata and/or by automatically moving data from one performance tier to another. The design goal of a hybrid storage array is typically to provide the sub-2-millisecond response times associated with flash memory storage arrays with capacity and cost similar to HDD-based arrays.

SDS Server SAN: A new Buyer’s Guide for DCIG, the SDS Server SAN is a collection of servers combining compute, memory and internal DAS storage, which enables organizations to remove the need for external storage in a virtualized environment. The SDS Server SAN software provides the glue between the compute and storage portions of the environment, allowing for clustering of not only the virtual host but the underlying file system as well. SDS Server SANs typically bundle compute, storage and hypervisors, employing SSDs as a storage caching tier, SAS and/or SATA HDDs for data storage, and support for one or more hypervisors.

Hybrid Cloud Backup Appliance: A Hybrid Cloud Backup Appliance is a physical appliance that comes prepackaged with server, storage and backup software. What sets this Buyer’s Guide apart from the Integrated Backup Appliances guide is that the Hybrid Cloud Backup Appliance must support backup both locally and to cloud providers. In this new Buyer’s Guide for DCIG, DCIG evaluates which cloud provider or providers the appliance natively supports, the options it offers to back up to the cloud, and even what options are available to recover data and/or applications with a cloud provider.

Private Cloud Storage Array: A Private Cloud Storage Array is a physical storage appliance located behind an organization’s firewall that enables the delivery of storage as a service to end users within an enterprise. Private cloud storage brings the benefits of public cloud storage to the enterprise—rapid provisioning/de-provisioning of storage resources through self-service tools and automated management, scalability, and REST API support for cloud-native apps—while still meeting corporate data protection, security and compliance requirements.

Flash Memory Storage Array: The Flash Memory Storage Array Buyer’s Guide is a refresh of the 2014 edition. A flash memory storage array is a solid-state storage system that contains multiple flash memory drives instead of hard disk drives.

Unified Communications: Another new guide for DCIG. Unified communications (UC) is any system that integrates real-time and non-real-time enterprise communication services such as voice, messaging, instant messaging, presence, audio and video conferencing, and mobility features. The purpose of UC is to provide a consistent user interface and experience across multiple devices and media types.

Watch the latter half of the year as DCIG plans to refresh Buyer’s Guides on the following topics:

Big Data Tape Library
Deduplicating Backup Appliance
High End Storage Array
Integrated Backup Appliance
Midrange Unified Storage
SDS Storage Virtualization
Virtual Server Backup Software

We also have other topics that we are evaluating as the basis for new Buyer’s Guides so look for announcements on their availability in the latter half of this year.


DCIG Updating the Private Cloud Storage Array Buyer’s Guide

DCIG is in the process of researching the Private Cloud Storage Array marketplace with the intention of publishing the DCIG 2015-16 Private Cloud Storage Array Buyer’s Guide in March/April 2015. This will be an update to the DCIG 2013 Private Cloud Storage Array Buyer’s Guide. Since the publication of the 2013 edition, nearly every vendor has come out with new models and new vendors have arrived on the scene, warranting a fresh snapshot of this dynamic marketplace.

The purpose of this courtesy notice is five-fold:

1. To inform prospective storage purchasers and storage vendors that DCIG intends to publish the DCIG 2015-16 Private Cloud Storage Array Buyer’s Guide in March/April 2015.
2. To describe the appeal of private cloud storage while clarifying DCIG’s definition of private cloud storage.
3. To disclose DCIG’s inclusion criteria and enumerate the products identified in our preliminary research.
4. To give individuals an opportunity to inform DCIG of additional products that may qualify for inclusion in the guide. (While there is no charge to have qualifying products included in any DCIG Buyer’s Guide, the DCIG analyst team working on the guide ultimately determines which products will and will not be included.)
5. To give notice of key dates for participants.

The Appeal of Private Cloud Storage

Private cloud storage brings the benefits of public cloud storage to the enterprise—scalability, rapid provisioning/de-provisioning of storage resources through self-service tools, and REST API support—while still meeting corporate data protection, security and compliance requirements.

One of the first things that comes to mind when considering private cloud storage is scalability. Organizations can start small with as much capacity and performance as currently required and then scale as business needs dictate. Additional nodes can be added to the existing storage environment in real-time, providing expanded processing and storage capacities as well as adding a level of redundancy.

Private cloud storage arrays typically provide an orchestration and/or automated provisioning interface. Routine administration tasks such as the provisioning and decommissioning of resources can be automated. This frees the storage administrator to focus their time and energy on critical business-related issues rather than the daily maintenance of the array.

Private cloud storage arrays also offer a level of resource management granularity that traditional storage arrays do not. Multi-tenancy allows a single infrastructure to be used by multiple “customers,” reducing costs by sharing resources. Utilization reporting and billing functionality ensures that customers are only paying for what they use. Combining these two features, multi-tenancy and reporting and billing, provides a powerful one-two punch for organizations that charge back (or show back) for storage resources.
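Mechanically, that one-two punch reduces to metering usage per tenant and pricing it. A minimal show-back sketch (the tenant names and the flat per-GB-month rate are invented for illustration; real arrays expose this through their reporting and billing features):

```python
from collections import defaultdict

def showback(usage_records, rate_per_gb_month: float) -> dict:
    """Aggregate (tenant, gb_months) utilization records into a
    per-tenant charge: each tenant pays only for what it used."""
    totals = defaultdict(float)
    for tenant, gb_months in usage_records:
        totals[tenant] += gb_months
    return {t: round(gb * rate_per_gb_month, 2) for t, gb in totals.items()}

# Two metering intervals for "finance" plus one for "hr", priced at a
# hypothetical $0.05 per GB-month.
charges = showback([("finance", 500), ("hr", 120), ("finance", 250)],
                   rate_per_gb_month=0.05)
```

Whether the resulting numbers drive an actual charge-back invoice or just a show-back report is an organizational choice; the metering pipeline is the same either way.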

DCIG Definition of Private Cloud Storage Array

A Private Cloud Storage Array is a physical storage appliance located inside an organization’s firewall that enables the delivery of storage as a service to end users within an enterprise. Private cloud storage brings the benefits of public cloud storage to the enterprise—scalability, rapid provisioning/de-provisioning of storage resources through self-service tools, and REST API support—while still meeting corporate data protection, security and compliance requirements.

Inclusion Criteria

Must be a scale-out architecture.
Must function as a true cluster or storage grid; usually a minimum of three nodes is required to function as a highly available configuration.
Must support the insertion and removal of individual nodes without service interruption in order to dynamically grow/shrink allocated storage capacity, CPU resources and cache as business needs dictate.
Must support one or more of the following storage networking protocols: CIFS and NFS; iSCSI and/or FCoE.
Must integrate with orchestration and/or automated provisioning tools, generally accomplished through support for a REST API.
Must provide reporting and billing functionality, or integration with 3rd party billing tools.
Must primarily function using storage local to the device and/or its direct peers (more than a cloud storage gateway).
Must be available as an appliance with a single SKU that includes its own hardware and software.
Must be generally available on March 1, 2015.

Products That Appear to Meet the Inclusion Criteria

Cleversafe dsNet Appliances
Coraid EtherDrive
Dell Compellent SC8000
DataDirect Networks Web Object Scaler (WOS)
EMC Isilon NL-Series
EMC Isilon S-Series
EMC Isilon X-Series
Fujitsu Storage ETERNUS CD10000
Gridstore GS-1000-2
Gridstore GS-1100-4
Gridstore GS-3000
HDS NAS 4100
HP StoreAll 9320
HP StoreAll 9730
HP StoreAll 8800
HP StoreVirtual 4330 Storage
HP StoreVirtual 4530 Storage
HP StoreVirtual 4630 Storage
HP StoreVirtual 4730 Storage
Huawei OceanStor N8500
IBM SONAS
Kaminario K2
NEC HYDRAstor
NetApp FAS2552
NetApp FAS2554
NetApp FAS8000 Series
Nimble Storage CS700
Nimbus Data E-Class Flash Memory
Nutanix
Overland Storage SnapScale X4
Pivot3 vSTAC Data
Pivot3 vSTAC Watch
Scale Computing SCr Storage Node
Scale Computing HC3 Platform (HC4000)
SolidFire SF9010

Key Dates for Participants (DCIG reserves the right to revise this schedule)

1/28/15 – 2/6/15: Products may be proposed for inclusion in the Buyer’s Guide
2/9/15 – 2/16/15: Vendor survey review period
2/16/15: Last date for vendor updates prior to competitive scoring
3/9/15 – 3/13/15: Vendor data sheet review period
3/13/15: Last date for vendor updates to be reflected on data sheets
3/28/15 – 4/4/15: Anticipated publication date

DCIG Updating the Private Cloud Storage Array Buyer’s Guide

DCIG is in the process of researching the Private Cloud Storage Array marketplace with the intention of publishing the DCIG 2015-16 Private Cloud Storage Array Buyers Guide in March/April 2015. This will be an update to theDCIG 2013 Private Cloud Storage Array Buyers Guide. Since the publication of 2013 edition, nearly every vendor has come out with new models and new vendors have arrived on the scene warranting a fresh snapshot of this dynamic marketplace.

The purpose of this courtesy notice is five-fold:

1. To inform prospective storage purchasers and storage vendors that DCIG intends to publish the DCIG 2015-16 Private Cloud Storage Array Buyer’s Guide in March/April 2015.
2. To describe the appeal of private cloud storage while clarifying DCIG’s definition of private cloud storage.
3. To disclose DCIG’s inclusion criteria and enumerate the products identified in our preliminary research.
4. To give individuals an opportunity to inform DCIG of additional products that may qualify for inclusion in the guide. (While there is no charge to have qualifying products included in any DCIG Buyer’s Guide, the DCIG analyst team working on the guide ultimately determines which products will and will not be included.)
5. To give notice of key dates for participants.

The Appeal of Private Cloud Storage

Private cloud storage brings the benefits of public cloud storage to the enterprise—scalability, rapid provisioning/de-provisioning of storage resources through self-service tools and REST API support—while still meeting corporate data protection, security and compliance requirements.

One of the first things that comes to mind when considering private cloud storage is scalability. Organizations can start small with as much capacity and performance as currently required and then scale as business needs dictate. Additional nodes can be added to the existing storage environment in real-time, providing expanded processing and storage capacities as well as adding a level of redundancy.

Private cloud storage arrays typically provide an orchestration and/or automated provisioning interface. Routine administration tasks such as the provisioning and decommissioning of resources can be automated. This frees storage administrators to focus their time and energy on critical business-related issues rather than the daily maintenance of the array.
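As a sketch of what such automation can look like, the snippet below provisions a volume through a generic REST endpoint. The URL, payload fields and token handling are illustrative assumptions, not any particular vendor’s API.

```python
import json
import urllib.request

# Hypothetical endpoint -- real private cloud arrays each expose their own REST schema.
API_BASE = "https://array.example.internal/api/v1"

def build_volume_request(name, size_gb, tenant):
    """Assemble the (assumed) JSON body for a volume-provisioning call."""
    return {"name": name, "sizeGB": size_gb, "tenant": tenant}

def provision_volume(name, size_gb, tenant, token):
    """POST the provisioning request and return the array's JSON response."""
    body = json.dumps(build_volume_request(name, size_gb, tenant)).encode()
    req = urllib.request.Request(
        f"{API_BASE}/volumes",
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

An orchestration tool would call `provision_volume` from a self-service workflow, and a matching DELETE request against the same endpoint would handle decommissioning.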

Private cloud storage arrays also offer a level of resource management granularity that traditional storage arrays do not. Multi-tenancy allows a single infrastructure to be used by multiple “customers” reducing costs by sharing resources. Utilization reporting and billing functionality ensures that customers are only paying for what they use. Combining these two features, multi-tenancy and reporting and billing, provides a powerful one-two punch for organizations that charge back (or show back) for storage resources.
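A minimal sketch of the chargeback idea follows: meter each tenant’s consumption and bill only for what was actually used. The rate and usage figures are invented for illustration.

```python
# Assumed flat rate; real billing tools support tiers, reservations and discounts.
RATE_PER_GB_MONTH = 0.05

def monthly_charges(usage_gb_by_tenant, rate=RATE_PER_GB_MONTH):
    """Compute each tenant's bill from its metered capacity usage."""
    return {tenant: round(gb * rate, 2) for tenant, gb in usage_gb_by_tenant.items()}

usage = {"engineering": 12000, "finance": 3500}  # GB consumed this month
print(monthly_charges(usage))
```

The same per-tenant usage data feeds a "show back" report when an organization wants visibility without actual internal billing.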

DCIG Definition of Private Cloud Storage Array

A Private Cloud Storage Array is a physical storage appliance located inside an organization’s firewall that enables the delivery of storage as a service to end users within an enterprise. Private cloud storage brings the benefits of public cloud storage to the enterprise—scalability, rapid provisioning/de-provisioning of storage resources through self-service tools and REST API support—while still meeting corporate data protection, security and compliance requirements.

Inclusion Criteria

- Must be a scale-out architecture
- Must function as a true cluster or storage grid; usually a minimum of three nodes is required to function as a highly available configuration
- Must support the insertion and removal of individual nodes without service interruption in order to dynamically grow/shrink allocated storage capacity, CPU resources and cache as business needs dictate
- Must support one or more of the following storage networking protocols: CIFS, NFS, iSCSI, Fibre Channel and/or FCoE
- Must integrate with orchestration and/or automated provisioning tools, generally accomplished through support for a REST API
- Must provide reporting and billing functionality, or integration with 3rd party billing tools
- Must primarily function using storage local to the device and/or its direct peers (more than a cloud storage gateway)
- Must be available as an appliance sold as a single SKU that includes its own hardware and software
- Must be generally available by March 1, 2015

Products That Appear to Meet the Inclusion Criteria

- Cleversafe dsNet Appliances
- Coraid EtherDrive
- Dell Compellent SC8000
- DataDirect Networks Web Object Scaler (WOS)
- EMC Isilon NL-Series
- EMC Isilon S-Series
- EMC Isilon X-Series
- Fujitsu Storage ETERNUS CD10000
- Gridstore GS-1000-2
- Gridstore GS-1100-4
- Gridstore GS-3000
- HDS NAS 4100
- HP StoreAll 9320
- HP StoreAll 9730
- HP StoreAll 8800
- HP StoreVirtual 4330 Storage
- HP StoreVirtual 4530 Storage
- HP StoreVirtual 4630 Storage
- HP StoreVirtual 4730 Storage
- Huawei OceanStor N8500
- IBM SONAS
- Kaminario K2
- NEC HYDRAstor
- NetApp FAS2552
- NetApp FAS2554
- NetApp FAS8000 Series
- Nimble Storage CS700
- Nimbus Data E-Class Flash Memory
- Nutanix
- Overland Storage SnapScale X4
- Pivot3 vSTAC Data
- Pivot3 vSTAC Watch
- Scale Computing SCr Storage Node
- Scale Computing HC3 Platform (HC4000)
- SolidFire SF9010

Key Dates for Participants (DCIG reserves the right to revise this schedule)

- 1/28/15 – 2/6/15: Products may be proposed for inclusion in the Buyer’s Guide
- 2/9/15 – 2/16/15: Vendor survey review period
- 2/16/15: Last date for vendor updates prior to competitive scoring
- 3/9/15 – 3/13/15: Vendor data sheet review period
- 3/13/15: Last date for vendor updates to be reflected on data sheets
- 3/28/15 – 4/4/15: Anticipated publication date

Facebook’s Disaggregated Racks Strategy Provides an Early Glimpse into Next Gen Data Center Infrastructures

Few organizations, regardless of their size, can claim to have 1.35 billion users, manage the upload and ongoing storage of 930 million photos a day, or be responsible for the transmission of 12 billion messages daily. Yet these are the challenges that Facebook’s data center IT staff routinely encounter. To respond to them, Facebook is turning to a disaggregated racks strategy to create a next gen cloud computing data center infrastructure that delivers the agility, scalability and cost-effectiveness it needs to meet its short and long term compute and storage needs.

At this past week’s Storage Visions conference in Las Vegas, NV, Facebook’s Capacity Management Engineer, Jeff Qin, delivered a keynote that provided valuable insight into how uber-large enterprise data center infrastructures may need to evolve to meet their unique compute and storage requirements. As these data centers may ingest hundreds of TBs of data daily that must be managed, manipulated and often analyzed in near real-time conditions, even the most advanced server, networking and storage architectures that exist today break down.

Qin explained that in Facebook’s early days it also started out using the technologies that most enterprises use today. However, the high volumes of data it ingests, coupled with end-user expectations that the data be processed quickly and securely and then managed and retained for years (and possibly forever), exposed the shortcomings of these approaches. Facebook quickly recognized that buying more servers, networking and storage and then scaling them out and/or up resulted in costs and overhead that became onerous. Further, Facebook recognized that the available CPU, memory and storage capacity resources contained in each server and storage node were not being used efficiently.

To implement an architecture that most closely aligns with its needs, Facebook is currently implementing a Disaggregated Rack strategy. At a high level, this approach entails deploying CPU, memory and storage in separate and distinct pools. Facebook then creates virtual servers tuned to each specific application’s requirements by pulling and allocating resources from these pools to each virtual server. The objective when creating each of these custom application servers is to utilize 90% of the allocated resources, using them as optimally as possible.
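The allocation logic Qin described can be sketched roughly as follows; the 90% target comes from the talk, while the pool sizes, function and field names are my own illustration.

```python
TARGET_UTILIZATION = 0.90  # goal: ~90% of allocated resources actually in use

def allocate(pools, demand):
    """Carve one application's virtual server out of shared resource pools.

    Each resource grant is sized so the application's expected demand sits
    at roughly the target utilization of what it was given, instead of the
    low utilization typical of fixed per-server configurations.
    """
    grant = {}
    for resource, needed in demand.items():
        amount = needed / TARGET_UTILIZATION
        if amount > pools.get(resource, 0):
            raise ValueError(f"pool exhausted: {resource}")
        pools[resource] -= amount  # draw down the shared pool
        grant[resource] = amount
    return grant

# Example: a web tier asking for 90 cores and 360 GB of RAM.
pools = {"cpu_cores": 1024.0, "memory_gb": 8192.0, "storage_tb": 500.0}
vm = allocate(pools, {"cpu_cores": 90.0, "memory_gb": 360.0})
```

Returning resources to the pools when a virtual server is torn down is the other half of the scheme, which is what lets the same hardware be recomposed for the next application.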

Facebook expects that by taking this approach it can, over time, save in the neighborhood of $1 billion. While Qin did not provide an exact road map as to how Facebook would achieve these savings, he provided enough hints in his other comments in his keynote that one could draw some conclusions as to how they would be achieved.

For example, Facebook already only acquires what it refers to as “vanity free” servers and storage. By this, one may deduce that it does not acquire servers from the likes of Dell or HP or storage from the likes of EMC, HDS or NetApp (though Qin did mention Facebook initially bought from these types of companies). Rather, it now largely buys its own servers and configures them itself to meet its specific processing and storage needs. It also appears that Facebook may already be buying the component parts that make up servers and storage, such as the underlying CPUs, memory, HDDs and network cabling, to create its next gen cloud computing data center. Qin did say that what he was sharing at Storage Visions represented roughly a two-year strategy for Facebook, so exactly how far down the path it is toward implementing it is unclear.

Having presented that vision for Facebook, the presentations at Storage Visions for the remainder of that day and the next largely showed why this is the future at many large enterprise data centers but also why it will take some time to come to fruition. For instance, there were presentations on next generation interconnect protocols such as PCI Express, InfiniBand, iWARP and RoCE (RDMA over Converged Ethernet).

These high performance, low latency protocols are needed to deliver the levels of performance between these various pools of resources that enterprises will need. As resources are disaggregated, their ability to achieve the same levels of performance as within servers or storage arrays diminishes, since there is more distance and communication required between them. While performance benchmarks of 700 nanoseconds are already being achieved using some of these protocols, these are in dedicated, point-to-point environments and not in switched fabric networks.

Further, there was very little discussion as to what type of cloud operating system would overlay all of these components so as to make the creation and ongoing management of these application-specific virtual servers across these pools of resources possible. Even assuming such an OS did exist, tools that manage its performance and underlying components would still need to be developed and tested before such an OS could realistically be deployed in most production environments.

Facebook’s Qin provided a compelling early look into what the next generation of cloud computing may look like in enterprise data centers. However, the rest of the sessions at Storage Visions also provided a glimpse into just how difficult the task will be for Facebook to deliver on this ideal, as many of the technologies needed are still in their infancy, if they exist at all.

Today it is Really All About the Integrated Solution

As I attended sessions at Microsoft TechEd 2014 last week and talked with people in the exhibit hall, a number of themes emerged, including “mobile first, cloud first,” hybrid cloud, migration to the cloud, disaster recovery as a service, and flash memory storage as a game-changer in the data center. But as I reflect on the entire experience, a statement made by John Loveall, Principal Program Manager for Microsoft Windows Server, during one of his presentations sums up the overall message of the conference: “Today it is really all about the integrated solution.”

The rise of the pre-integrated appliance in enterprise IT has certainly not gone unnoticed by DCIG. Indeed, we have developed multiple buyer’s guides to help businesses understand the marketplace for these appliances and accelerate informed purchase decisions.

The new IT service imperative is time to deployment. Once a business case has been made for implementing a new service, every week that passes before the service is in production is viewed by the business as a missed revenue growth or cost savings opportunity—because that is what it is. The opportunity costs associated with IT staff researching, purchasing, integrating and testing all the components of a solution in many cases outweigh any potential cost savings.

An appliance-based approach to IT shrinks the time to deployment. The key elements of a rapidly deployable appliance-based solution include pre-configured hardware and software that has been pre-validated to work well together and then tested prior to being shipped to the customer. In many cases the appliance vendor also provides a simplified management tool that facilitates the rapid deployment and management of the service.

Some vendors in the TechEd exhibit hall that exemplified this appliance-based approach included DataOn, HVE ConneXions, InMage, Nutanix and Violin Memory.

DataOn was previewing their next-generation Cluster-in-a-Box. Although the DataOn booth was showing their products pre-configured with Windows Server 2012 R2 and Storage Spaces, they also support other operating environments and are Nexenta certified. Nutanix takes a similar approach to deliver what they call a “radically simple converged infrastructure”.

I met David Harmon, President of HVE ConneXions, at the Huawei booth. HVE is using Huawei networking gear in combination with HVE’s own flash memory appliances to deliver VMware View-based virtual desktops to clients at a cost of around $200 per desktop. He told me of a pilot implementation where two HVE staff converted a 100-computer lab of Windows XP desktops to Windows 7 virtual desktops in just two days.

InMage Systems was showing their InMage 4000 all-in-one purpose-built backup and disaster recovery appliance that can also provide public and private cloud migration. I spoke with Joel Ferman, VP of Marketing, who told me that their technology is used by Cisco, HP and Sunguard AS; and that they had never lost a head-to-head proof of concept for either backup or disaster recovery. InMage claims their solution can be deployed in less than a day with no downtime. The appliance won the Windows IT Pro Best of TechEd 2014 award in the Backup & Recovery category.

Violin Memory was displaying their Windows Flash Array, an appliance that ships with Windows Storage Server 2012 R2 pre-installed. The benefits of this appliance-based approach were explained by Eric Herzog, Violin Memory’s CMO, this way: “Customers do not need to buy Windows Storage Server, they do not need to buy blade servers, nor do they need to buy the RDMA 10-gig-embedded NICs. Those all come prepackaged in the array ready to go and we do Level 1 and Level 2 support on Windows Server 2012 R2.”

Today it is really all about the integrated solution. In many cases, the opportunity to speed the time to deployment is the deciding factor in selecting an appliance-based solution. In other cases, the availability of a pre-configured appliance puts sophisticated capabilities within reach of smaller IT departments composed primarily of IT generalists who lack the specialized technical skills required to assemble such solutions on their own. In either case, the ultimate benefit is that businesses gain the IT capabilities they need with a minimum investment of time.

This is the second in a series of blog entries based on my experience at Microsoft TechEd 2014. The first entry focused on how Microsoft’s inclusion of Storage Spaces software in Windows Server 2012 R2 paves the way for server SAN, and how Microsoft Azure Site Recovery and StorSimple promote hybrid cloud adoption.

DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide. In this Buyer’s Guide, DCIG weights, scores and ranks 10 converged infrastructure solutions from six (6) different providers. Like previous DCIG Buyer’s Guides, this Buyer’s Guide provides the critical information that organizations of all sizes need when selecting a converged infrastructure solution to help expedite application deployments and then simplify their long-term management.

The need for a “datacenter in a box,” or “cluster in a can,” as some like to refer to converged infrastructure offerings, is growing and expanding into many new markets. Originally geared mainly toward small to mid-sized businesses (SMBs), which often lack either highly trained specialists or sufficient IT staff to keep up with the many demands of the workplace, converged infrastructures are now finding their way into enterprise companies.

The market for converged infrastructure systems is expected to grow as high as $8 billion in revenue for calendar year 2013. While this may seem insignificant when compared to the $114 billion in revenue garnered by general infrastructure, its projections for future growth fall in line with the high expectations for converged infrastructures expressed in this Guide. The converged infrastructure market is expected to grow 50 percent over the next three years, compared to only a 1.2 percent growth for the general infrastructure market.
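Taking the article’s projections at face value, the gap those growth rates imply over three years is easy to compute; treating both percentages as simple three-year totals is my assumption.

```python
converged_2013 = 8e9     # projected converged infrastructure revenue
general_2013 = 114e9     # general infrastructure revenue

# 50% total growth for converged vs 1.2% for general infrastructure.
converged_2016 = converged_2013 * 1.50   # -> $12.0B
general_2016 = general_2013 * 1.012      # -> ~$115.4B
```

Even after that growth, converged infrastructure would remain roughly a tenth the size of the general infrastructure market, which is why the growth rate rather than the absolute revenue is the headline figure.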

Bundled and sold as a single SKU, converged infrastructures offer the full scope of hardware and software that an organization needs to essentially do a turnkey deployment. Converged infrastructures package multiple technologies together in a single unit to include compute, storage and networking, along with a bundle of software for automation, management and orchestration. Converged infrastructures are becoming THE solution for enterprise organizations that have satellite or remote offices that need a consistent, consolidated IT implementation which may be easily managed and maintained remotely.

The benefits of implementing converged infrastructures into an organization are numerous, but they can be summarized as follows:

- Offer help to overworked and understaffed IT teams. Converged infrastructures do not require as much research, planning and time to implement as buying each piece separately. They present a validated and tested configuration whose key Return on Investment (ROI) lies in the staff budgeting benefits that come from ease of implementation.
- Improvements to IT staff workflow. Take the following example scenario: in a non-converged environment, storage and networking each need to be configured for specific servers and the company’s network. In a larger company that has a segmented IT department, the network team needs to become involved to provision what is necessary both internally and externally—and that is only the first step.

In a converged environment an IT department makes the decision that it needs 50 virtual machines (VMs) with some Exchange and SQL Server applications hosted on the solution as well. In this case, the converged infrastructure provider puts together an integrated, right-sized solution with all of the necessary server, storage and networking components.

This converged solution may then be set up via a wizard-based GUI on the front end, and VMs can be provisioned from there. Because all parts of the package are provided by one manufacturer, an organization does not need to become its own integrator.

- Backup and recovery software included. Converged infrastructures often offer native backup and recovery software that can deduplicate and compress backup data, so there is no need to purchase a separate backup tool and no need for IT staff to test and implement this software. Converged infrastructure solutions from larger vendors may even replicate data from a converged infrastructure solution to a non-converged one and vice versa. These features are especially desirable for organizations that have virtual environments located at regional or national data centers that they previously constructed themselves.
- Improved business continuity. Converged infrastructure solutions are often architected to take advantage of the numerous failover and high availability features found on today’s enterprise hypervisors. By deploying each application as a VM on the converged infrastructure solution, it immediately has access to and can take advantage of features such as High Availability, Distributed Resource Scheduler (DRS), vMotion and others. In this scenario, access to this functionality can be presumed, as opposed to having to ask IT staff to dedicate time to test and implement these features.

It is in this context that DCIG presents its 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide. As prior Buyer’s Guides have done, this Buyer’s Guide puts at the fingertips of organizations a comprehensive list of converged infrastructure solutions and the features they offer in the form of detailed, standardized data sheets that can assist them in this important buying decision.

The 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide accomplishes the following objectives:

- Evaluates converged infrastructure solutions with a starting list price of US$50,000 or less
- Provides an objective, third-party evaluation of converged infrastructures that weights, scores and ranks their features from an end user’s viewpoint
- Includes recommendations on how to best use the Buyer’s Guide
- Scores and ranks the features on each converged infrastructure based upon criteria that matter most to end users so they can quickly know which products are the most appropriate for them to use and under what conditions
- Provides data sheets for 10 converged infrastructures from six (6) different providers so end users can do quick comparisons of the features that are supported and not supported on each product
- Provides insight into which features on a converged infrastructure will result in improved performance
- Gives any organization the ability to request competitive bids from different providers of converged infrastructures that are apples-to-apples comparisons

The DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide evaluates the following 10 solutions that include (in alphabetical order):

- Pivot3 vSTAC Edge Appliance
- Pivot3 vSTAC R2S Appliance
- Pivot3 VDI R2 P Cubed Appliance
- Pivot3 vSTAC R2S P Cubed Appliance
- Pivot3 vSTAC Watch R2
- Quanta CB220
- Riverbed Granite Core/Edge
- Scale Computing HC3 Hyperconvergence
- Simplivity Omnicube CN-2000
- Zenith Infotec TigerCloud

The Pivot3 vSTAC R2S P Cubed Appliance and Pivot3 vSTAC R2S Appliance shared the Best-in-Class ranking among the converged infrastructures evaluated in this Buyer’s Guide. Both top-scoring products showed an innate flexibility in a highly competitive space, and the Pivot3 models assemble all the pieces necessary for organizations to expect them to remain near the top of this space.

The DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide is immediately available through the DCIG analyst portal for subscribing users by following this link.