September 2018

Welcome to the Age of Composable Infrastructure

Kaminario unveils software-defined, composable storage solutions for the modern datacenter.

In this issue

Kaminario is Changing the Way IT Deploys Enterprise-Class Solid State Storage Capability

Figure 1. Magic Quadrant for Solid-State Arrays¹

Research from Gartner: Critical Capabilities for Solid-State Arrays

Kaminario is Changing the Way IT Deploys Enterprise-Class Solid State Storage Capability

Software Defined Meets Solid State Storage

Kaminario has been in the business of delivering all-flash enterprise-class storage since 2011. Relying on a unique software-defined architecture, the Kaminario K2 has been rated among the most capable solid-state storage platforms by Gartner since 2014. Key to its differentiation, the ability to scale out for performance and scale up in capacity has made the Kaminario K2 one of the leading platforms for building cloud-scale application infrastructure. With the majority of Kaminario's business coming from SaaS, Consumer Internet, or Cloud Service Providers, the company has seen, first hand, the evolving requirements for cloud-scale storage infrastructure.

No Compromises on Technical Capability

Kaminario's software-defined architecture delivers true enterprise-class capability in a highly flexible, extremely cost-efficient storage solution leveraging 100% industry-standard hardware. Ranked a Leader in Gartner's Magic Quadrant for the past two years, Kaminario's unique technology solution competes favorably with traditional storage array solutions. The Kaminario K2 was ranked highest for analytics and high-performance computing, and within the top three for all use cases, in the 2018 Gartner Critical Capabilities report.

Figure 1. Magic Quadrant for Solid-State Arrays¹

Source: Gartner (July 2018)

The Composable Storage Paradigm

Kaminario is committed to the vision of delivering composable storage solutions that deliver a new level of flexibility and control to SAN storage infrastructure. Unlike traditional scale-up or scale-out storage arrays, Kaminario can scale in any direction. This lets IT organizations optimize their storage infrastructure for the specific performance, capacity, and cost-efficiency needs of their applications. This highly flexible software architecture leverages standard hardware building blocks that can be easily scaled up, scaled out, scaled down, or scaled in as business needs dictate.

Enterprise-class Data Services

While highly differentiated in its delivery model, Kaminario delivers the true enterprise-class data services demanded by world-class IT organizations. Highly efficient data reduction, native replication utilities, robust integration support, and true enterprise-class availability put Kaminario in the same class of solutions as traditional storage arrays.

In addition to the rich data services, Kaminario also delivers the unique ability to support mixed workloads and deliver consistently low latency for a wide array of applications, such as real-time analytics, transactional processing, and high-performance computing.

Advanced Analytics, Management, Automation, and Orchestration

Kaminario's storage platform is complemented by a world-class analytics, management, and automation platform called Clarity. Clarity provides Kaminario customers with a rich toolset for monitoring, planning, and automating common storage management tasks. Kaminario Flex is an orchestration platform that can dynamically control storage resources, delivering true autonomous datacenter management capabilities.

The Economics and Agility of Software-Defined Storage

Kaminario delivers its software-defined solution as either a pre-integrated appliance or as a usage-based software model. In either case, fully integrated solutions are available, on demand, through a strategic alliance with Tech Data (NYSE: TECD), a Fortune 100 global IT distributor. The combination of world-class software-defined storage with best-of-breed hardware infrastructure distribution delivers the economics and flexibility that only hyperscale datacenters achieve.

As cloud-scale operators strive to emulate the software-defined datacenter efficiencies of the public cloud, Kaminario is the ideal partner for building highly flexible, highly cost-efficient software-defined storage infrastructure.

Source: Kaminario

1 Gartner's Magic Quadrant for Solid-State Arrays, G00338339, Analysts: Valdis Filks, John Monroe, Joseph Unsworth, Santhosh Rao, 23 July 2018

Research from Gartner: Critical Capabilities for Solid-State Arrays

Solid-state arrays have become faster, smaller and more reliable, and they’re a safe business decision, due to guarantees that can’t be obtained in other areas of infrastructure. Here, Gartner analyzes 18 SSA product families across high-impact use cases for infrastructure and operations leaders.

Key Findings

■ Solid-state arrays are highly secure and are insulated from the Spectre and Meltdown security exposures, because arrays do not allow application code or user code access to the storage software.

■ High-performance, nonvolatile memory express (NVMe), Peripheral Component Interconnect Express (PCIe) solid-state drive use in solid-state arrays is increasing; however, external NVMe over Fabrics (NVMe-oF) host connections are rarely used, and often only with proprietary NVMe-oF implementations.

■ To improve density and performance and create integrated software stack advantages, vendors are designing and offering their own flash modules, rather than using only standard SSDs.

■ Many vendors are quoting prices for SSAs by effective or expected capacity, rather than raw capacity.

Recommendations

Infrastructure and operations leaders focused on building and sustaining dependable infrastructure should:

■ Use SSAs to reduce the fault domain of an outage or security exposure and improve the level of security in their data centers.

■ Request that the offer contain the raw capacity, and use the raw capacity to compare offers at a price per raw terabyte, when purchasing or comparing the price of SSAs.

■ Request guarantees that keep financial comparisons valid, with no-cost remedies for any claims of effective storage array capacity, performance or any other promise. This is because effective capacity and performance are workload- and data-dependent and can be variable.

■ Select SSAs when a project, service, application or infrastructure requires low latency, consistent performance, high availability and a small environmental footprint.

Strategic Planning Assumptions

In 2022, artificial intelligence (AI)/machine learning will represent more than 25% of solid-state array (SSA) workloads, which is an increase from fewer than 5% in 2018.

By 2020, 30% of SSAs will be based on nonvolatile memory express (NVMe) technology, which is an increase from fewer than 5% in 2017.

What You Need to Know

Customer satisfaction with SSAs is high. Nevertheless, SSAs are still improving in performance, automation and predictive/preemptive support. Performance improvements are due to the increased adoption of the NVMe protocols used by internal SSDs and backplanes. This is slowly expanding to front-end NVMe-oF, which uses Fibre Channel NVMe (FC-NVMe) or high-speed Ethernet host/storage network connections. This, in turn, is causing transformational changes in the storage market, because the performance differences between internal and external storage have narrowed enough to allow external storage to be used as slow memory.

SSAs offer transparency and guarantees that can't be obtained from other storage, hyperconverged, integrated or server solutions.

The sophistication of array software that integrates with applications, public cloud services and hypervisors enables high levels of storage automation, as long as APIs are developed, published and supported by all parties. This simplifies array administration, storage provisioning via APIs and patching. Service outages and preemptive maintenance and support tasks are reduced, due to end-to-end application-to-array AI and machine learning analytics used to monitor, predict problems and suggest configuration changes or patches before a problem occurs. The overall advantages of storage disaggregation are that upgrades or expansions are not tied to servers, along with the obvious security benefits of storage arrays. Customers can select SSAs to create an agile and flexible IT infrastructure that can be isolated from server issues, reliably used, and nondisruptively upgraded and expanded for several years.

I&O leaders who need a secure, fast, reliable and flexible storage infrastructure can use this research to select an SSA that best meets their needs. The SSAs analyzed in this research are diverse: scale-up, scale-out and scale-in. Some are purpose-built, and some are evolutions from previous disk array designs. Nevertheless, the market offers many more products from SSA vendors that have not been included in this research. All of the SSAs in this research have received overall scores above 3.0, which signifies that they meet or exceed all of the requirements of an SSA.
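The raw-versus-effective comparison behind the recommendations above can be sketched with some hypothetical quotes. All vendor names, prices and ratios below are invented for illustration; they are not taken from the Gartner research.

```python
# Compare SSA offers at price per raw terabyte, rather than on the
# vendor-claimed "effective" capacity, which depends on a workload- and
# data-dependent reduction ratio.

def price_per_tb(price_usd: float, tb: float) -> float:
    """Dollars per terabyte for a given capacity figure."""
    return price_usd / tb

offers = [
    # (offer, price_usd, raw_tb, claimed_reduction_ratio) -- illustrative only
    ("Offer A", 500_000, 100, 4.0),   # marketed as "400TB effective"
    ("Offer B", 450_000, 150, 2.0),   # marketed as "300TB effective"
]

for name, price, raw_tb, ratio in offers:
    effective_tb = raw_tb * ratio  # only valid if the ratio holds for your data
    print(f"{name}: ${price_per_tb(price, raw_tb):,.0f}/raw TB, "
          f"${price_per_tb(price, effective_tb):,.0f}/claimed effective TB")
```

On claimed effective capacity, Offer A looks cheaper ($1,250 vs. $1,500 per effective TB), but per raw terabyte Offer B is cheaper ($3,000 vs. $5,000). Because the reduction ratio is workload- and data-dependent, only the raw figure is directly comparable, which is why the recommendation above also asks for no-cost remedies backing any effective-capacity claim.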

This research analyzes which products are poised for scalable performance and capacity, coupled with a smooth transition to NVMe technology, to achieve sustained, predictable performance. Manageability and interoperability from the core to the cloud to the edge, with predictive/prescriptive performance, cost and issue resolution, define the characteristics of leading SSA products.

Analysis

Critical Capabilities Use-Case Graphics

Figure 1. Vendors’ Product Scores for the Online Transaction-Processing Use Case

Source: Gartner (August 2018)

Figure 2. Vendors’ Product Scores for the Server Virtualization Use Case

Source: Gartner (August 2018)

Figure 3. Vendors’ Product Scores for the High-Performance Computing Use Case

Source: Gartner (August 2018)

Figure 4. Vendors’ Product Scores for the Analytics Use Case

Source: Gartner (August 2018)

Figure 5. Vendors’ Product Scores for the Virtual Desktop Infrastructure Use Case

Source: Gartner (August 2018)

Vendors

EMC Unity All-Flash

This series of arrays had a hardware upgrade in May 2017. The new Unity-F model numbers are the Unity-F 350/450/550 and 650. In addition, a significant software update, v4.3, became available for the Unity All-Flash family in December 2017, which added deduplication functionality. Overall, the December 2017 software update, plus additional loyalty service programs, improved the product's competitiveness in the marketplace. The Unity All-Flash systems are unified arrays that support block and file protocols. The maximum raw capacity of the Unity All-Flash series has nearly doubled, with raw capacity scalability from 1.1TB to 16PB.

All Unity All-Flash models now offer selectable deduplication, in addition to the existing data compression functionality for block and file data. The effective capacity has increased significantly. Synchronous replication is available only for block storage; file-based volumes or storage in the Unity All-Flash can only be replicated with asynchronous replication. It requires only 2U of rack space when configured with 100TB of raw storage capacity. Even though the Unity All-Flash has traditionally been positioned as a midrange array, it scales to a high maximum capacity of 16PB. This is higher than the traditionally positioned high-end Dell VMAX All-Flash, which scales to only 4PB of raw capacity.

Because the Unity All-Flash is a dual-controller array, and the VMAX All-Flash F is a multinode/scale-out array, it is expected to be rare for a Unity All-Flash to be used and configured as a 16PB system. Nevertheless, this is the case for most SSAs that have maximum raw capacity specifications of more than 5PB, and the Unity All-Flash is successfully used in high-end environments for high-availability applications and workloads. Another recent upgrade and competitive feature that's available with the Unity All-Flash is the CloudIQ option. This is the predictive support offering, which exploits analytics to produce recommendations and warnings for the array.

Traditional fault and exception monitoring is also included with CloudIQ, when enabled and used with the Unity All-Flash models. The Unity All-Flash software is available in a software-defined storage (SDS) option that enables customers to make servers or virtual machines (VMs) appear as storage arrays. This provides flexibility and options to customers in the ways that the Unity All-Flash storage features can be implemented and exploited. All of the software features of Unity All-Flash are included in the base price of the array. There are no bundles or extra software options. Similar to the Dell VMAX All-Flash arrays, the same guarantees, remedies and "pay as you go" offerings (e.g., Flex on Demand) are provided for the Unity All-Flash. These are part of the "Future-Proof Storage Loyalty" program, which covers 4:1 storage efficiency guarantees, as well as trade-in and migration offers.

Dell EMC VMAX All-Flash

The VMAX All-Flash has an extensive and proven set of enterprise features (excluding deduplication). Therefore, Dell leads with VMAX All-Flash arrays in high-performance, high-availability data centers. Dell has also used the VMAX All-Flash to move its storage array sales to flash from disk arrays, because most customers purchase more VMAX All-Flash arrays than disk-based VMAX arrays. The models in the VMAX All-Flash family have not changed significantly since the last upgrade in 2017, and the series consists of the VMAX 250F, 450F, 850F and 950F, which range from 15TB to 4.4PB. The system is built from individual V-bricks, which are analogous to nodes or controllers. The V-bricks contain a controller, cache and storage, and can be upgraded in 13TB increments. However, for flexibility in the VMAX All-Flash, different SSD capacities can be mixed and matched. Compression is also flexible: it can be enabled or disabled at the volume group level.

A significant new VMAX All-Flash feature in 2017 was RecoverPoint support, which enables heterogeneous replication among any other storage arrays that support or connect to RecoverPoint. Secure snapshots protect from malicious or accidental deletion. The VMAX All-Flash is still a physically large system and requires more rack space than many competitors' products. For example, a 100TB raw capacity VMAX All-Flash requires 10U of rack space, compared with many other vendors' offerings, which require only 2U of rack space to provide the same raw storage capacity. Power and cooling requirements are also relatively higher for the VMAX All-Flash.

Dell has the "Future-Proof Storage Loyalty" program, which covers a 4:1 storage efficiency guarantee, trade-in, migration and all-in-one software pricing bundles. Customers that don't want to buy an SSA and prefer flexible consumption can use this program's pay-as-you-go, "Flex on Demand" monthly operational payment method as part of the Dell "Data Centre Utility" program. The software for this is packaged in two bundles (F and FX), which are included in the initial array purchase. However, individual features can still be purchased separately.

A new version of the VMAX All-Flash SSA, the PowerMax, became available on 7 May 2018. It offers new features, such as deduplication, faster internal NVMe connections and the ability to use storage class memory; however, it was not included, because it has only recently become available.

Fujitsu Storage Eternus AF Series

The Fujitsu AF250 S2 and AF650 S2 series of SSAs were updated in January 2018. This was mainly a performance improvement, with a larger controller cache, faster controllers, and faster and more efficient 32Gbps FC connections. The S2 series replaced the S1 arrays (which are no longer available); however, backward compatibility between the S2 and S1 generations is maintained for the administration graphical user interface (GUI), replication and clustering. Compatibility between the generations enables simple storage migrations from the S1 to the S2; however, data-in-place controller or array upgrades from the S1 to the S2 can't be done. Capacity upgrade flexibility is good, because the size of the SSDs in the arrays can be mixed, as required by the customer.

Overall, the Eternus S2 arrays have competitive scalability, with the maximum raw capacity of the AF650 S2 being 2.9PB. The S2 series continues to be one of the most fully featured SSAs, because the S2 models offer all of the standard and latest array features with a great degree of flexibility. In-line compression and data reduction are mature and have been available for more than 18 months. These data reduction features can be selectively enabled or disabled on a per-volume basis, as per workload requirements. Similarly, quality of service (QoS) provides performance control at the logical unit number (LUN)/volume level, so that different applications with different service levels can be consolidated in the array.

All of the array software features are available in an all-inclusive offer, in which the software features are included in the base price of the array. High availability can be achieved with the Eternus clustering feature. Current S2 models do not offer an NVMe back end or an NVMe-oF front end, nor do they support integration with Amazon, Azure or storage cloud interfaces.

Hitachi VSP F Series

Hitachi VSP F series consists of four products (F400, F600, F800 and F1500) that address a broad range of performance and scalability requirements. Vantara develops its own flash modules (FMDs), based on NAND flash components sourced from multiple NAND vendors. In 2017, Hitachi Vantara introduced support for SSDs, in addition to FMDs, thus offering greater architecture flexibility. The VSP platform does not support NVMe SSDs. All platforms continue to operate with an internal SAS interface. Software capabilities are delivered through its time-tested flagship OS, Hitachi SVOS. The Hitachi SVOS offers all major data services, such as snapshots, cloning, synchronous and asynchronous replication, as well as metro clustering. SVOS 7.0 and later support selective compression and deduplication; however, this may cause a performance overhead on the system. This can be avoided by leveraging the FMD-based, hardware-assisted compression feature offered by the array.

Hitachi Vantara VSP F series offers a data reduction guarantee of 2:1 on its arrays and a 100% uptime guarantee. Enterprises can use VSP's native cloud tiering capabilities by creating policies to move aging data to Amazon Web Services (AWS) S3 or Microsoft Azure. The VSP platform supports all major hypervisors, OSs, backup software and cloud management platforms (CMPs). Hitachi also provides a storage plug-in for containers that provides persistent storage for Mesosphere DC/OS and Docker containers, as well as data services, such as snapshots, cloning, replication and monitoring. It provides a comprehensive list of APIs that integrate with CMPs, such as OpenStack, VMware and ServiceNow. Hitachi offers two software packages, Foundation and Advanced, with the latter including remote replication, advanced management and active-active support. The Foundation package is bundled with all VSP F series. Additional software can be purchased a la carte or as part of the Advanced package.

Overall, Hitachi Vantara's strategy for the VSP F arrays is conservative and practical, with a 2:1 data reduction estimate and NVMe support when the protocol and devices become more mature. However, a new refreshed series (not analyzed here), VSP F350, F370, F700 and F900, became generally available in May 2018. It also included a new version of SVOS, called SVOS RF, which updated array memory management and solid-state performance.

HPE 3PAR StoreServ All-Flash Arrays

The HPE 3PAR StoreServ portfolio has an extensive range of products, starting with its entry-level 8200 series. It progresses higher, with more compute resources and memory, with the 9450, which became available in June 2017, and extends to the 20000 series. This portfolio provides capacities ranging from 2.4TB to 8PB of raw capacity. Given its architecture, the product is able to efficiently use cost-effective standard SSDs to drive an aggressive overall system price, and it can also be purchased via a consumption-based pricing model. Well-established thin-provisioning capabilities complement selective deduplication, which proceeds after the initial zero-block, bit-pattern detection. The compression uses an Express Scan method that identifies redundant or incompressible data and avoids wasting compute resources. HPE's Get Thinner Guarantee promises a minimum 75% data reduction ratio, compared with a fully provisioned volume without storage efficiency features.

HPE integrated InfoSight capabilities into the StoreServ portfolio in 2018, although the recommendation engine is forthcoming. InfoSight is a premier predictive analytics platform that provides AI-driven management for proactive issue resolution, with visibility beyond storage into the infrastructure and application layer. HPE 3PAR StoreServ has extensive ecosystem support that now includes cloud integration via HPE Cloud Bank Storage. This feature leverages deduplication and compression to provide efficient import/export of data interoperable with the AWS and Azure clouds, as well as private clouds.

A common management interface, improved QoS features, and granular performance and health monitoring are simple to use. 3PAR high availability is comprehensive, with HPE Persistent Cache, Persistent Port, Peer Persistence, and Asynchronous and Synchronous Remote Copy features. This is supported by a high-availability guarantee of six nines (99.9999%) and a 12-month availability guarantee. However, to qualify for these reliability/high-availability guarantees, customers must have at least four nodes and a more-expensive, mission-critical support contract. HPE offers investment protection with its ability to nondisruptively migrate to NVMe PCIe technology, support for storage class memory (e.g., 3D XPoint), and a three-year technology refresh extension business program.

HPE Nimble Storage All-Flash Series

The Nimble Storage AF series from HPE remains an easy-to-use array predicated on its predictive analytics for automation and cost-effective use of flash technology. The AF Series leverages the common architecture of the NimbleOS (aka Cache Accelerated Sequential Layout [CASL]) file system, with its hybrid arrays that resonate with users for their simplicity and competitive pricing. The same NimbleOS administrative GUI is used to manage solid-state and hybrid Nimble offerings, which can also interoperate and replicate data between each other. The product is available as a single array or a scaled-out/federated cluster. The product provides data access via block storage protocols, such as FC and iSCSI, but does not have file protocols. The CASL file system allows the use of cost-effective, consumer-grade SSD technology. Its scale remains the same as in 2017, ranging from
5.7TB to 553TB in a single array, and from 23TB to 2.2PB in a four-array cluster. This is supported by selectable in-line deduplication and compression in NimbleOS software that further enhances the array's effective capacity.

Its leading differentiating feature is InfoSight Predictive Analytics. This remote monitoring and support tool leverages AI to provide proactive problem management and resolution, such as for performance problems caused by misconfigurations. The monitoring and diagnostics are based on extensive telemetry data that is analyzed to resolve and anticipate upcoming issues beyond storage and across the hardware infrastructure and application layer. This supports automation of Level 1 and Level 2 support issues, providing simpler planning and greater manageability for users. The AF series can integrate with cloud environments through its Cloud Volumes feature.

InfoSight also provides visibility into cloud environments and proactive recommendations for users to help optimize on- and off-premises storage. Customer-friendly business programs covering performance, maintenance extensions and future-proofing for next-generation technologies (including NVMe PCIe and storage-class memory) support high customer satisfaction levels.

Huawei OceanStor Dorado V3 Series

OceanStor Dorado V3 became generally available in March 2017 and comprises the Dorado 5000 V3 and OceanStor Dorado 6000 V3. The Dorado V3 is Huawei's flagship flash array and is positioned for mission-critical, ultralow-latency, block-based workloads. Huawei manufactures its own SSDs and sources its NAND from multiple suppliers, thus enabling flexibility in array design at aggressive price points. The Dorado V3 series can scale out to 16 controllers and as much as 36.8PB in capacity, making it one of the largest scale-out block storage systems available. 100GbE and 32Gb FC interfaces are offered as part of the array, thus facilitating faster throughput from the host to the array. The Dorado 5000 V3 is also offered with an internal NVMe SSD-based configuration. However, migration to this platform must be performed using a Huawei-provided migration tool or by engaging professional services, because the platform can't be added to an existing Dorado cluster.

The Dorado platform offers in-line compression and deduplication, along with other data services, such as cloning, snapshots, and synchronous and asynchronous replication. File services are not supported by the Dorado V3. The platform also features eBackup, a tool that provides backup and recovery integration with AWS S3 and Huawei Cloud. Huawei offers a workload-specific data reduction guarantee program, ranging from 1.6:1 for databases to 5:1 for VDI workloads. Dorado V3 also features eService, a cloud-based analytics platform for performance and capacity management. All data services, including replication, are included in the base software license, thus simplifying the buying cycle.

Huawei OceanStor F V5 Series

OceanStor F V5 series is a unified storage platform, positioned primarily for use cases such as file sharing and analytics. OceanStor F V5 is offered in four
variants (18000F V5, 6000F V5, 5000F V5 and 2000F V5) that address a wide range of scalability and performance requirements. All four systems support SAS-based SSDs ranging from 600GB to 15.36TB, in TLC or MLC form. However, NVMe SSDs are not supported. The OceanStor F V5 series provides all essential data services, including in-line compression, in-line deduplication, snapshots, cloning, integrated backup, QoS and storage migration. However, each of these licenses is priced separately, thus complicating the buying experience.

QoS can be configured at the port level, as well as at the CPU level, providing better control over application performance. Along with secure multitenancy capabilities, QoS makes the OceanStor F V5 series an attractive solution for service provider and shared services environments, particularly those with large file storage requirements. Data reduction technologies, such as compression and deduplication, can be disabled or enabled at the LUN level. The platform supports a broad range of backup software, OSs and hypervisors, as well as CMPs; however, it does not support Docker plug-ins. The array provides native integration with the AWS S3 and Swift-based object storage platforms for backup and archiving.

IBM FlashSystem A9000 Series

IBM FlashSystem A9000 Series consists of two models: FlashSystem A9000 and FlashSystem A9000R, which is the rack version. FlashSystem A9000 has seen considerable improvements in its capacity and a sizable reduction in its power and cooling with the evolution of the 900 just a bunch of flash (JBOF). IBM's hardware is predicated on its own flash module technology, which it has developed internally. The capacity improvement came through use of advanced flash technology and is optimized to enhance performance and increase reliability, while using less-expensive, consumer-grade flash technology. The A9000 has a maximum raw capacity of 258TB, and FlashSystem A9000R can achieve 1032TB across a fully populated rack, with six enclosures.

FlashSystem A9000 includes IBM Hyper-Scale Manager and Hyper-Scale Mobility software, which enables central management and data mobility across 144 individual A9000 storage systems. The system has compression and deduplication, with guarantees based on IBM providing an initial analysis via workload diagnosis. FlashSystem A9000 is used by customers as a low-latency SSA for high-performance applications on single servers. However, it has support for, and can be used as, shared storage area network (SAN) high-performance storage. Conversely, FlashSystem A9000R is positioned for big data and analytics. Given its high performance and reliability, this product features robust security that now includes IBM Security Key Lifecycle Manager (SKLM).

The FlashSystem A9000 architecture inherits its software from the XIV, and its GUI continues to improve for simpler administration and greater granularity of storage capacity and performance monitoring. The FlashSystem also has cloud integration when using IBM's software-defined Spectrum Accelerate and Spectrum Virtualize for IBM cloud environments. This also requires IBM Spectrum Connect, which is free for IBM customers; however, Accelerate and Virtualize come at additional cost.
IBM Storwize All-Flash Series

IBM's Storwize series of SSAs consists of the Storwize V5030F and V7000F. Both became generally available in September 2016, and the systems can scale up to larger capacities than the A9000. The V-series is positioned primarily for virtualized storage infrastructure consisting of general-purpose database workloads and back-office environments. Unlike the A9000 FlashSystem products, these offerings use industry-standard SSD technology to achieve a maximum of 11.6PB of raw capacity. In addition, the Storwize series can be clustered to reach its maximum architecture capacity limit of 32PB. Based on the application data, the system detects the suitability for data reduction, allowing it to be flexible to the LUN level. In-line compression is available, and in-line deduplication became generally available on 8 May 2018 and, thus, has yet to be field-validated. IBM guarantees a base 2:1 compression savings ratio for the Storwize series. However, if customers use IBM's "Comprestimator" tool, and it results in an estimated 5:1 compression ratio, then IBM will guarantee this extra saving. It remains to be seen whether this program will be enhanced, now that deduplication is available.

The Storwize series has strong ecosystem support, extensive API integration and support for hypervisors and the main data protection vendors. Through IBM's software-defined offering, Spectrum Virtualize, users can have cloud integration. For backup and archiving to the cloud, IBM offers Transparent Cloud Tiering, which provides snapshots to AWS S3 or OpenStack Swift Object Storage. This support extends to IBM Cloud and AWS, or private clouds implementing compatible on-premises solutions, with other public cloud integration planned. The Storwize series offers synchronous and asynchronous replication and HyperSwap capabilities for high-availability, active-to-active requirements. The Storwize arrays provide AES/XTS 256 encryption for data at rest.

Kaminario K2

The sixth-generation Kaminario K2 family, announced and available in February 2017, was augmented with the new K2.N NVMe-based series. This was announced during the fourth quarter of 2017, but became available in May 2018. Both product families feature FC, iSCSI and NVMe-oF host connectivity, and reinforce an excellent track record of flexible product innovation. Customers who own prior-generation K2 arrays can mix and match these with new sixth-generation K2 nodes, enabling simple product migration and investment protection. Kaminario has enhanced its software offerings, with its VisionOS storage management suite now complemented by Flex (automation and orchestration) and Clarity (analytics and machine learning). The company continues to offer "Assured Capacity, Availability, Performance, Scale, Maintenance and SSD Life" programs (Kaminario ForeSight).

Combined, the new products provide a base for diversely "composable" storage, with proven scale-up and scale-out capabilities from 7.4TB to 4PB, managed by a simple GUI that is intuitive and easy to use, even by nonstorage administrators. The sixth-generation K2 and K2.N also have the latest high-speed external interconnects: 32Gbps FC and 25/50/100GbE. Deduplication is selectable, but
compression cannot be disabled. The K2 supports asynchronous replication; however, synchronous replication is not available. System security is good, with Advanced Encryption Standard (AES) encryption at the SSD level and key management.

NetApp AFF A-Series

The NetApp AFF A-Series product line consists of five products: the A200, A300, A700, A700s and A800. In 2017, NetApp continued to make incremental investments in its product portfolio and introduced several new features and enhancements through the latest update to its OS; ONTAP 9.3 is the current version. ONTAP now offers deeper integration with the cloud. The FabricPool feature can natively tier to AWS S3, Microsoft Azure, or NetApp StorageGRID. CloudSync facilitates file synchronization from on-premises NFS servers to AWS S3. ONTAP Cloud helps manage storage entities within AWS, Azure, Google and IBM Cloud, with strong integration with on-premises NetApp AFF instances. This is particularly useful for use cases such as disaster recovery and application migration to and from the cloud.

NetApp has further built on its central systems monitoring platform, AutoSupport, by introducing ActiveIQ. ActiveIQ is used for configuration issues, capacity management and performance optimization. In May 2018, NetApp released and is now shipping the AFF A800, the vendor's first end-to-end, NVMe-based SSA that connects internal SSDs via NVMe. The AFF A800 is a dual-controller system that supports as many as 48 internal NVMe drives. The AFF A800 supports NVMe/FC and 32Gb FC fabric, as well as 100Gb Ethernet connections. NetApp also released the next version of its OS, ONTAP 9.4, for all AFF systems. Highlights include support for NVMe/FC for the AFF A800, A700, A700s and A300, cloud tiering, SMB multichannel, as well as the introduction of new REST APIs. The AFF A800 can be added to an existing NetApp cluster, thus facilitating seamless migration of data from existing systems to the new NVMe-based AFF A800.

NetApp also introduced workload-specific data reduction guarantees that ensure as much as 7:1 data reduction for certain workloads. The AFF Series integrates with a broad range of backup software, CMPs, public clouds and hypervisors, including containers. The AFF Series has quickly been able to offer and support large-size SSD configurations up to 30TB that reduce rack footprint significantly; however, compared with smaller-capacity SSDs, these are known to increase RAID rebuild times.

NetApp SF Series

NetApp has continued to enhance its SolidFire product line and integrated it with its existing ONTAP-based AFF Series and management software. The scale-out array is built using industry-standard hardware; it can scale to 100 nodes, although a minimum of four is required. SolidFire's Element OS software delivers the required data services and administration capabilities. The stand-alone Element storage software layer is also used in NetApp HCI solutions to provide the SDS storage services, scale and high availability. SolidFire systems are positioned for DevOps, private cloud and service provider environments that can start small and scale out over time.

Four SolidFire models in the portfolio — the SF4805, SF9605, SF19210 and SF38410 — can be mixed and matched, and they can also be used with older models. SolidFire has, to date, offered good investment protection and backward compatibility. The SF38410 was announced in 2017. It is available with 3.84TB SSDs and can scale to 38.4TB raw capacity per node. All products provide SSDs that are based on the SATA/SAS interface and do not support NVMe. Workload-specific storage efficiency guarantee programs ensure further reduction in net storage footprint, ranging from 1.3:1 to 7.2:1. SolidFire systems support a broad range of hypervisors and containers, including Docker, as well as integrating with the snapshot capabilities of leading backup software vendors.

SolidFire systems also offer native backup to S3- or Swift-compatible object storage platforms. Unlike traditional RAID implementations, SolidFire implements a proprietary data protection algorithm called Helix, which spreads data evenly across all nodes of the cluster. This increases high availability, reduces rebuild time significantly and spreads the load evenly across the cluster during the rebuild activity. SolidFire supports synchronous, asynchronous and snapshot-based replication. All software and hardware are bundled, and systems are sold on a per-node basis.

Pure Storage M and X Series

The Pure Storage M and X families of SSAs share the same controller software and administration software. The FlashArray//M series has been available since 2015, with controller, software and SSD enhancements during 2017. In April 2017, the newer FlashArray//X70 array became available. The FlashArray//M scales to 512TB raw, and the FlashArray//X series scales to 878TB raw. The main difference between the M and X models is that the X series has faster internal NVMe connections and also uses DirectFlash storage modules, instead of standard SSDs. The DirectFlash storage modules are designed by Pure Storage, and they enable Pure Storage to control the format, capacity, performance, density and software in the storage media layer. The DirectFlash modules range from 2.2TB to 18.3TB, and the DirectFlash software has increased integration with the Purity Operating Environment, which offloads traditional, SSD-based software functions, such as garbage collection, encryption and error correction.

The FlashArray//X series will replace the FlashArray//M series over time, but backward compatibility and simple "nonforklift" migrations from the //M-series to the //X-series are available. Customers with the //M-series can do a nondisruptive //M-series controller upgrade to the //X-series controllers, which effectively makes an existing //M-series a new //X-series.

Pure Storage continues to offer an all-inclusive, array-based storage software licensing model, plus performance, availability and effective capacity guarantees. To provide customer investment protection, there are also support, maintenance, upgrade and NVMe-ready offerings via the "Evergreen" subscription services. New, on-premises, pay-as-you-go services, such as storage as a service via the Pure Storage Evergreen Storage Service (ES2), are now available in the U.S.; Europe, the Middle East and Africa (EMEA); and the Asia/Pacific (APAC) region. This enables customers to pay for their own private data center solid-state storage via monthly operational charges.

However, customers are charged for the effective storage capacity they use, not the raw capacity. Customer satisfaction with Pure Storage arrays is high. High availability, plus fast application failover, can be provided via synchronous replication and active-active stretched clusters. External NVMe-oF and FC-NVMe connections to hosts are not yet available.

Pure Storage FlashBlade

The file-storage-based Pure Storage FlashBlade has been available for 18 months, and customer adoption, market acceptance, performance and simple administration have been good. The initial positioning of the FlashBlade was for traditional file and object storage workloads. However, due to the high levels of parallelism in the scale-in multiple controller/blade design, the FlashBlade has also become suitable for high-performance (i.e., low-latency, high-bandwidth) workloads and applications, such as near real-time analytics, AI and machine learning. Nevertheless, the FlashBlade can still be used for traditional file and object storage workloads.

During the past 12 months, significant new enhancements to the FlashBlade have included SMB, IPv6, LDAP/NFS and S3 object support, and snapshots. The HTTP/HTTPS, SMB and NFS protocols share a single namespace, and a RESTful API has become available. Similar to the X-series SSAs' DirectFlash modules, the FlashBlades have been designed and engineered by Pure Storage. This provides similar benefits in terms of density, storage software offload and disaggregation, which ultimately give Pure Storage the ability to fine-tune the format, NAND packaging, performance and reliability. The minimum number of blades that a customer can purchase is seven. Each blade is 8.8TB, 17TB or 52.8TB, and the system scales from 61TB to 792TB in four rack units. The midsize 17TB blade, which provides a midsize FlashBlade configuration, became available in 2017, and blades can be nondisruptively added or removed. Administration of the FlashBlade shares the same look and feel as the M and X family of arrays and the Pure1 administration suite, which can manage all //M, //X and FlashBlade arrays together with a centralized point of control, monitoring and reporting.

Due to the nature of analytics, AI and machine learning workloads, which require high input/output operations per second (IOPS) and parallel access, the FlashBlade has been successful in these new high-performance data analytics workloads. Pairing this highly parallelized, internal scale-out, solid-state storage with the highly parallelized Nvidia DGX-1 GPU-based servers has led to a partnership between the vendors. This led to the creation of a joint integrated system, the AIRI, which is a ready-made AI system in a rack.

The Pure Storage FlashBlade continues to successfully push SSAs into new segments and applications. All Pure Storage guarantees, Evergreen maintenance and pay-as-you-grow programs are available for the FlashBlade, and storage features are included in the base price. However, data deduplication is not supported; only compression is offered.

Tintri EC6000 Series

Tintri continues to deliver on its promise of a simple-to-use SSA product line that's extensively integrated into hypervisors. The vendor's product line was upgraded in July and August 2017, when the new EC6000 series of flash arrays became generally

available. However, the official public announcement was made in September 2017. The older T5000 models can no longer be purchased. The new EC6000 series provides more scale, performance and capacity than the previous T5000 series. The smallest model in the EC6000 series, the EC6030, starts at 6.2TB, and the line expands to the largest model, the EC6090, which has 129TB of raw capacity. This is a significant capacity increase over the previous T5000 series.

However, compared with competitors' arrays, the raw capacity of the Tintri arrays is quite small. This is due to Tintri's positioning of the arrays into high-data-reduction environments and workloads, in which the effective, post-data-reduction storage capacity is several times larger than the raw capacity. Rather than buying and implementing a single, large SSA with the ability to scale, Tintri's recommended deployment method is to install many small, modular EC6000 systems, which are managed as one entity with the Tintri Global Center Advanced administration software.

Host connections can be upgraded to optional 4 x 40GigE ports, which can also be used for asynchronous replication. However, the deep integration of the Tintri arrays into the hypervisors via NFS does not work with physical servers at a block LUN or volume level, but only with virtualized servers running hypervisors, at the logical virtual machine (VM) level. This implementation enables Tintri arrays to quickly restore whole VMs, rather than individual volumes. An additional benefit of this VM software-level snapshot implementation is the large number of snapshots — as many as 128 snapshots per VM, 128,000 VM snapshots per array or 1,000,000 file-system-level snapshots.

Western Digital IntelliFlash HD, N and T Series

The Western Digital IntelliFlash T-Series is a well-established SSA, with a proven product track record of providing extensive features, ease of use/ownership and reliability, with FC, iSCSI, NFS, CIFS and SMB 3.0 support. IntelliFlash is also available as a high-density (HD) family, as well as an N-Series NVMe-based family. Both allow performance and QoS to be selected and managed at the cache, port, volume and LUN level across block-and-file protocols. Today's IntelliFlash T-/HD-/N-Series models scale in raw capacity from 11.5TB to 1.3PB; individually, the N-Series arrays scale in raw capacity from 19TB to 184TB. However, IntelliFlash provides a cluster mode that enables eight arrays to be clustered together; therefore, in these configurations, scalability is increased to a maximum of 10.4PB (T-/HD-Series) or 1.47PB (N-Series) for all protocols.

All IntelliFlash arrays are physically and logically space-efficient. Compression and deduplication are both in-line and individually selectable at the volume level across file-and-block protocols, and can provide up to 10:1 effective capacities in certain applications. T-Series customers can nondisruptively upgrade to the N-Series. IntelliFlash arrays offer all-inclusive storage licensing, fully featured storage services and instrumentation, and scale-out clustering, which provide higher flexibility than most other SSAs. Cloud-based analytics is used to provide problem prediction and trending within the IntelliCare support program. Guaranteed upgrades are provided by the Lifetime Storage program, and effective capacity guarantees are part of the Flash 5 commitment program. Data is encrypted on the SSDs and managed by internal key management.

X-IO ISE 900 Series

X-IO launched its fourth-generation product, the ISE 900 series, in September 2017, replacing its ISE 800 series, which had been released nearly 2.5 years earlier. The array remains oriented toward its core competency of low-latency, higher-performance use cases and workloads. It now boasts new capabilities, with a lower price point. The array is now based on 3D TLC SSD technology, allowing it to be more economical, and it scales from 9.6TB to 230TB raw in a 2U node supported by 16Gb Fibre Channel connections. Also, data reduction has been enhanced with a new, patented deduplication ability that claims to use considerably fewer CPU resources and less DRAM, thereby increasing its cost-efficiency. Compression is also available and continues to be refined and optimized.

Deduplication and compression are selectable per volume, but both are enabled and disabled together. Snapshots and asynchronous replication round out the other notable feature improvements. The ISE 900 series lacks native cloud integration and synchronous replication, and lags in terms of cloud-based predictive analytics, compared with leading providers. Although some of the manageability features may still be evolving, X-IO does have competitive business programs centered on efficiency and customer satisfaction, with a lowest-price guarantee, clear maintenance costs and flexible controller upgrades to ensure investment protection. The controller upgrades do not provide a pathway to NVMe technology, because that is available via the separate Axellio product line.

Context

The popularity and adoption of SSAs continue. This is because of the SSAs' performance advantage, as well as simpler administration, extensive guarantee and upgrade programs, security, improved reliability compared with HDDs, and high availability. Smaller, denser storage, feature licensing and purchasing methods make the overall value proposition more than speeds and feeds.

The reduced environmental requirements of SSAs, such as power and cooling, also have incidental and important advantages over general-purpose arrays, servers with internal storage and other HDD-based storage systems. Due to these benefits, as well as administration GUIs, which have been designed with ease of use in mind, storage administration overhead has been reduced, and storage provisioning tasks can now be performed by nonstorage specialists. As a result, less time needs to be spent performing detailed configuration, performance tuning and problem determination tasks, due to the sophisticated controller software that, in some instances, can be purchased separately as SDS offerings.

Product/Service Class Definition

The following descriptions and criteria classify SSA architectures by their externally visible characteristics, rather than vendor claims or other nonproduct criteria that may be influenced by short-term trends in the SSA storage market.

SSA

The SSA category is a subcategory of the broader external-controller-based (ECB) storage market. SSAs are scalable, dedicated solutions based solely on solid-state semiconductor technology for data storage that can never be configured with HDD technology. The SSA category is distinct from SSD-only racks in ECB storage arrays. An SSA must be a stand-alone product denoted with a specific name and model number, which typically (but not always) includes an OS and data management software optimized for solid-state technology.

To be considered an SSA, the storage software management layer should enable most, if not all, of the following benefits:

■■ High availability

■■ Enhanced-capacity efficiency — perhaps through thin provisioning, compression or data deduplication

■■ Data management

■■ Automated tiering within SSD technologies

■■ Perhaps, other advanced software capabilities — such as application-specific and OS-specific acceleration, based on the unique workload requirements of the data type being processed

Scale-Up Architectures

■■ Front-end connectivity, internal, and back-end bandwidth are fixed or scale to packaging constraints, independent of capacity.

■■ Logical volumes, files or objects are fragmented and spread across user-defined collections, such as solid-state pools, groups or redundant array of independent disk (RAID) sets.

■■ Capacity, performance and throughput are limited by physical packaging constraints, such as the number of slots in a backplane and/or interconnect constraints.

Scale-Out Architectures

■■ Capacity, performance, throughput and connectivity scale with the number of nodes in the system.

■■ Logical volumes, files or objects are fragmented and spread across multiple storage nodes to protect against hardware failure and improve performance.

■■ Scalability is limited by software and networking architectural constraints, not physical packaging or interconnect limitations.

Unified Architectures

■■ These can simultaneously support one or more block, file and/or object protocols — such as Fibre Channel (FC), iSCSI, Network File System (NFS), SMB (aka CIFS), Fibre Channel over Ethernet (FCoE) and InfiniBand.

■■ Gateway and integrated data flow implementations are included.

■■ These architectures can be implemented as scale-up or scale-out arrays.

Gateway implementations provision block storage to gateways implementing network-attached storage (NAS) and object storage protocols. Gateway-style implementations run separate NAS and SAN microcode loads on virtualized or physical servers. As a result, they have different thin-provisioning, autotiering, snapshot and remote copy features, which are not interoperable. By contrast, integrated or unified storage implementations use the same primitives, independent of the protocol. This enables them to create snapshots that span SAN and NAS storage, and to dynamically allocate server cycles, bandwidth and cache, based on QoS algorithms and/or policies.

Mapping the strengths and weaknesses of these different storage architectures to various use cases should begin with an overview of each architecture's strengths and weaknesses and an understanding of workload requirements (see Table 1).

Table 1. SSA Architectures

Scale-Up

Strengths:
■■ Mature architectures: reliable, cost-competitive, large ecosystems
■■ Independently upgrade host connections and back-end capacity
■■ May offer shorter recovery point objectives (RPOs) over asynchronous distances

Weaknesses:
■■ Performance and bandwidth do not scale with capacity
■■ Limited compute power can make a high impact
■■ Electronics failures and microcode updates may be high-impact events

Scale-Out

Strengths:
■■ IOPS and GB/sec scale with capacity
■■ Nondisruptive load balancing
■■ Greater fault tolerance than scale-up architectures
■■ Use of commodity components

Weaknesses:
■■ Electronics costs are high, relative to back-end storage costs

Unified

Strengths:
■■ Maximal deployment flexibility
■■ Comprehensive storage efficiency features

Weaknesses:
■■ Performance may vary by protocol (block versus file)

Source: Gartner (August 2018)

Critical Capabilities Definition

Each critical capability consists of two parts. There is an overview section, which is limited to 300 characters or fewer, and a general section, which can be as long as the author wishes. The overview section must be one paragraph, with no bullets or numbered lists. This overview is what will appear when the reader hovers over each critical capability on the interactive display; remember, each critical capability name cannot be more than 35 characters, including spaces.

Performance

This collective term is often used to describe IOPS, bandwidth (MB/second) and response times (milliseconds per input/output [I/O]) that are visible to attached servers.

Storage Efficiency

This refers to the ability of the platform to support storage efficiency technologies, such as compression, deduplication and thin provisioning, as well as improve utilization rates, while reducing storage acquisition and ownership costs.

RAS

Reliability, availability and serviceability (RAS) refers to a design philosophy that consistently delivers high availability by building systems with reliable components and "derating" components to increase their mean time between failure (MTBF).

Systems are designed to tolerate marginal components and hardware, with microcode designs that minimize the number of critical failure modes, serviceability features that enable nondisruptive microcode updates, diagnostics that minimize human error when troubleshooting, and nondisruptive repair activities. User-visible features can include tolerance of multiple disk and/or node failures, fault-isolation techniques, and built-in protection against data corruption, as well as other techniques, such as snapshots and replication (see Note 2), to meet customers' RPOs and recovery time objectives (RTOs).

Scalability

This refers to the storage system's ability to grow capacity, as well as performance and host connectivity. The concept of usable scalability links capacity growth and system performance to SLAs and application needs. (Capacities are total raw capacity and are not usable, unless otherwise stated.)

Ecosystem

Ecosystem refers to the ability of the platform to support multiple protocols, OSs, third-party independent software vendor (ISV) applications, APIs and multivendor hypervisors.

Multitenancy and Security

This refers to the ability of a storage system to support diverse workloads, isolate workloads from each other, and provide user access controls and auditing capabilities that log changes to the system configuration.

Manageability

Manageability refers to the automation, management, monitoring and reporting tools and programs supported by the platform.

The tools and programs can include single-pane management consoles, as well as monitoring and reporting tools designed to assist support personnel in seamlessly managing systems and monitoring system use and efficiency. They can also be used to anticipate and correct system alarms and fault conditions before, or soon after, they occur.

Use Cases

Each use case consists of two parts. There is an overview section, which is limited to 175 characters or fewer, and a general section, which can be as long as the author wishes. The overview section must be one paragraph, with no bullets or numbered lists. This overview is what will appear on each use-case tab on the interactive display; remember, each use case name cannot be more than 50 characters, including spaces.

Online Transaction Processing

Online transaction processing (OLTP) is closely affiliated with business-critical applications, such as database management systems (DBMSs). DBMSs require 24/7 availability and subsecond transaction response times; hence, the greatest emphasis is on performance and RAS features. Manageability and storage efficiency are important, because they enable the storage system to scale with data growth, while staying within budgetary constraints.

Server Virtualization

This use case encompasses business-critical applications, back-office and batch workloads, and development. The need to deliver low I/O response times to large numbers of VMs or desktops that generate cache-unfriendly workloads, while providing 24/7 availability, causes performance and storage efficiency to be heavily weighted, followed closely by RAS.

High-Performance Computing

HPC clusters comprise large numbers of servers and storage arrays, which combine to deliver high computing densities and aggregated throughput. Commercial HPC environments are characterized by the need for high throughput and parallel read-and-write access to large volumes of data. Performance, scalability and RAS are important considerations for this use case.

Analytics

This use case applies to all analytic applications that are packaged or provide business intelligence (BI) capabilities for a specific domain or business problem. Analytics does not apply only to storage consumed by big data applications using map/reduce technologies (see definition in "Hype Cycle for Data Science, 2016").

Virtual Desktop Infrastructure

VDI is the practice of hosting a desktop OS within a VM running on a centralized server. It is a variation of the client/server computing model, and is sometimes referred to as server-based computing (SBC). Performance and storage efficiency (in-line data reduction) features are heavily weighted

for this use case, for which SSAs are emerging as popular alternatives. The performance weighting was reduced by 5%, and manageability was increased by 5%; manageability has become a relatively greater concern and priority in this use case than performance.

Vendors Added and Dropped

Added
Huawei

Dropped
None

Inclusion Criteria

■■ The vendor's SSA product must have been in general availability (GA) by 4 March 2018. This does not include product trial or early-ship programs, such as beta systems, directed availability or dates of product announcements.

■■ The specific SSA vendor must have recognizable, combined SSA sales revenue of at least $15 million during the 12 months prior to 31 March 2018. This is based on documented information provided to Gartner by each vendor, so that Gartner can verify this as clear and convincing proof beyond reasonable doubt.

■■ The Gartner SSA Magic Quadrant and Critical Capabilities authors choose, and reserve the right, to include one or two products from each vendor in the CC, based on the authors' analysis of the market and client interest. Because this is forward-looking research, based not solely on past sales and revenue but on future market direction, technologies and customer demands, it might not include the vendor's SSA product with the highest sales revenue or units sold. Similar products based on common platforms/software, or offerings that Gartner believes will merge or be combined into one, such as NVMe-capable products, will have one entry in this research, because of their common technology and features. However, SSA products not included in the Critical Capabilities research may be mentioned and analyzed in the Magic Quadrant.

■■ The ultimate decision for product, series or model inclusion will be decided by Gartner, as determined by Gartner client interest, as well as Gartner's view of the present and future direction of the market, technology and user requirements.

■■ Just a Bunch of Flash or SSDs (JBOF/S) will not be included within the Critical Capabilities research. This research covers arrays; therefore, only SSAs with internal controllers providing storage features or high-level data services, such as thin provisioning, data reduction features, replication and snapshots, will be included.

The SSAs evaluated in this research include scale-up, scale-out and unified storage architectures. Because these arrays have different availability characteristics, performance profiles, scalability, ecosystem support, pricing and warranties, they enable users to tailor solutions against operational needs, planned new application deployments, forecast growth rates and asset management strategies.

Although this SSA Critical Capabilities research represents vendors with dedicated systems that meet our inclusion criteria, the application workload ultimately governs which solutions should be considered, regardless of the criteria involved. Some vendors may still warrant investigation based on application workload needs for their SSD-only offerings. The following providers and products were considered for this research, but did not meet the inclusion criteria, despite offering SSD-only configuration options to existing products:

■■ American Megatrends

■■ Apeiron

■■ DataDirect Networks (DDN)

■■ Excelero

■■ E8

■■ Pavilion Data Systems

■■ Pivot3

■■ Nexsan

■■ Nimbus Data

■■ VAST Data

■■ Vexata

■■ Violin Systems

Table 2. Weighting for Critical Capabilities in Use Cases

Critical Capabilities          OLTP    Server Virt.   HPC     Analytics   VDI
Performance                    28%     20%            44%     36%         25%
Storage Efficiency             14%     20%            4%      15%         30%
RAS                            20%     15%            15%     13%         15%
Scalability                    8%      10%            20%     18%         4%
Ecosystem                      10%     10%            3%      4%          8%
Multitenancy and Security      5%      5%             4%      4%          5%
Manageability                  15%     20%            10%     10%         13%
Total                          100%    100%           100%    100%        100%

(Columns: Online Transaction Processing, Server Virtualization, High-Performance Computing, Analytics, Virtual Desktop Infrastructure.)

As of August 2018
Source: Gartner (August 2018)

This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighted in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

These are the product and service ratings for the solid-state arrays.

Table 3. Product/Service Rating on Critical Capabilities

(Each row lists a product's ratings for: Performance / Storage Efficiency / RAS / Scalability / Ecosystem / Multitenancy and Security / Manageability.)

Dell EMC Unity All-Flash: 3.9 / 3.9 / 3.6 / 3.7 / 3.8 / 3.7 / 3.8
Dell EMC VMAX All-Flash: 4.0 / 2.8 / 3.9 / 3.7 / 4.2 / 3.8 / 3.7
Fujitsu Storage Eternus AF Series: 3.9 / 4.3 / 3.7 / 3.4 / 3.6 / 3.6 / 3.6
Hitachi VSP F Series Arrays: 4.2 / 3.7 / 4.0 / 4.0 / 4.0 / 3.8 / 3.6
HPE 3PAR StoreServ All-Flash: 4.0 / 3.9 / 3.7 / 4.2 / 3.7 / 3.6 / 4.0
HPE Nimble Storage All-Flash: 3.9 / 4.4 / 3.7 / 3.7 / 3.7 / 3.8 / 4.5
Huawei OceanStor Dorado V3 Series: 3.9 / 3.5 / 3.5 / 3.4 / 3.6 / 3.4 / 3.5
Huawei OceanStor F V5 Series: 3.8 / 3.5 / 3.6 / 4.2 / 3.5 / 3.5 / 3.5
IBM FlashSystem A9000 Series: 4.2 / 3.9 / 3.7 / 3.7 / 3.6 / 3.9 / 3.8
IBM Storwize All-Flash Series: 3.8 / 3.0 / 3.7 / 3.9 / 3.9 / 3.9 / 3.9
Kaminario K2: 4.5 / 4.1 / 3.7 / 4.0 / 3.5 / 3.3 / 3.9
NetApp AFF A-Series: 4.1 / 4.0 / 3.7 / 3.9 / 4.1 / 3.6 / 3.9
NetApp SF Series: 3.8 / 3.8 / 3.9 / 4.0 / 3.5 / 3.9 / 4.1
Pure Storage FlashBlade: 4.7 / 2.8 / 3.6 / 3.8 / 2.8 / 2.9 / 4.0
Pure Storage M and X Series: 4.4 / 4.5 / 3.8 / 3.4 / 3.7 / 3.3 / 4.2
Western Digital IntelliFlash HD, N and T Series: 4.0 / 4.5 / 3.5 / 3.7 / 3.8 / 3.4 / 3.7
Tintri EC6000 Series: 3.7 / 4.4 / 3.5 / 3.4 / 3.6 / 3.4 / 4.1
X-IO ISE 900 Series: 4.0 / 3.7 / 3.4 / 3.2 / 3.2 / 3.4 / 3.1

As of August 2018
Source: Gartner (August 2018)

Table 4 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case. To determine an overall score for each product/service in the use cases, multiply the ratings in Table 3 by the weightings shown in Table 2.
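As a worked sketch of that calculation: a use-case score is simply the weighted sum of a product's seven critical-capability ratings. The values below are taken from Tables 2 and 3 of this research (the choice of the OLTP use case and the X-IO product is arbitrary, and the two-decimal rounding is an assumption about how Gartner presents the scores).

```python
# Table 4 score = sum over capabilities of (Table 2 weighting x Table 3 rating).

# OLTP weightings from Table 2
oltp_weights = {
    "Performance": 0.28,
    "Storage Efficiency": 0.14,
    "RAS": 0.20,
    "Scalability": 0.08,
    "Ecosystem": 0.10,
    "Multitenancy and Security": 0.05,
    "Manageability": 0.15,
}

# X-IO ISE 900 Series ratings from Table 3
xio_ratings = {
    "Performance": 4.0,
    "Storage Efficiency": 3.7,
    "RAS": 3.4,
    "Scalability": 3.2,
    "Ecosystem": 3.2,
    "Multitenancy and Security": 3.4,
    "Manageability": 3.1,
}

# Weighted sum across the seven critical capabilities
score = sum(oltp_weights[c] * xio_ratings[c] for c in oltp_weights)
print(round(score, 2))  # 3.53, matching the X-IO OLTP entry in Table 4
```

Repeating this for every product and use case reproduces Table 4 from Tables 2 and 3.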

Table 4. Product Score in Use Cases

(Each row lists a product's scores for: Online Transaction Processing / Server Virtualization / High-Performance Computing / Analytics / Virtual Desktop Infrastructure.)

Dell EMC Unity All-Flash: 3.79 / 3.80 / 3.79 / 3.80 / 3.82
Dell EMC VMAX All-Flash: 3.75 / 3.67 / 3.85 / 3.72 / 3.58
Fujitsu Storage Eternus AF Series: 3.79 / 3.80 / 3.74 / 3.79 / 3.89
Hitachi VSP F Series Arrays: 3.94 / 3.89 / 4.03 / 3.98 / 3.90
HPE 3PAR StoreServ All-Flash: 3.89 / 3.91 / 3.97 / 3.95 / 3.89
HPE Nimble Storage All-Flash: 3.98 / 4.05 / 3.90 / 3.96 / 4.07
Huawei OceanStor Dorado V3 Series: 3.61 / 3.58 / 3.66 / 3.63 / 3.60
Huawei OceanStor F V5 Series: 3.66 / 3.65 / 3.79 / 3.75 / 3.62
IBM FlashSystem A9000 Series: 3.88 / 3.86 / 3.94 / 3.92 / 3.90
IBM Storwize All-Flash Series: 3.71 / 3.67 / 3.79 / 3.70 / 3.58
Kaminario K2: 3.99 / 3.97 / 4.13 / 4.10 / 4.02
NetApp AFF A-Series: 3.94 / 3.94 / 3.96 / 3.96 / 3.95
NetApp SF Series: 3.86 / 3.87 / 3.88 / 3.87 / 3.84
Pure Storage FlashBlade: 3.76 / 3.65 / 4.08 / 3.89 / 3.60
Pure Storage M and X Series: 4.06 / 4.07 / 4.03 / 4.07 / 4.16
Western Digital IntelliFlash HD, N and T Series: 3.85 / 3.89 / 3.83 / 3.89 / 3.98
Tintri EC6000 Series: 3.77 / 3.84 / 3.66 / 3.75 / 3.90
X-IO ISE 900 Series: 3.53 / 3.48 / 3.60 / 3.59 / 3.58

As of August 2018
Source: Gartner (August 2018)

Evidence
Data has been gathered from client interactions during the past year, vendor briefings and references, and detailed questionnaire responses from and review calls with all profiled vendors.

Critical Capabilities Methodology
This methodology requires analysts to identify the critical capabilities for a class of products or services. Each capability is then weighted in terms of its relative importance for specific product or service use cases. Next, products/services are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities for each use case is then calculated for each product/service.

"Critical capabilities" are attributes that differentiate products/services in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.

In defining the product/service category for evaluation, the analyst first identifies the leading uses for the products/services in this market. What needs are end users looking to fulfill when considering products/services in this market? Use cases should match common client deployment scenarios. These distinct client scenarios define the Use Cases.

The analyst then identifies the critical capabilities. These capabilities are generalized groups of features commonly required by this class of products/services. Each capability is assigned a level of importance in fulfilling that particular need; some sets of features are more important than others, depending on the use case being evaluated.

Each vendor's product or service is evaluated in terms of how well it delivers each capability, on a five-point scale. These ratings are displayed side-by-side for all vendors, allowing easy comparisons between the different sets of features.

Ratings and summary scores range from 1.0 to 5.0:

1 = Poor or Absent: most or all defined requirements for a capability are not achieved
2 = Fair: some requirements are not achieved
3 = Good: meets requirements
4 = Excellent: meets or exceeds some requirements
5 = Outstanding: significantly exceeds requirements

To determine an overall score for each product in the use cases, the product ratings are multiplied by the weightings to come up with the product score in use cases.

The critical capabilities Gartner has selected do not represent all capabilities for any product and, therefore, may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making a product/service decision.

Source: Gartner Research Note G00338538, Valdis Filks, John Monroe, Joseph Unsworth, Santhosh Rao, 6 August 2018

Welcome to the Age of Composable Infrastructure is published by Kaminario. Editorial content supplied by Kaminario is independent of Gartner analysis. All Gartner research is used with Gartner's permission, and was originally published as part of Gartner's syndicated research service available to all entitled Gartner clients. © 2018 Gartner, Inc. and/or its affiliates.
All rights reserved. The use of Gartner research in this publication does not indicate Gartner's endorsement of Kaminario's products and/or strategies. Reproduction or distribution of this publication in any form without Gartner's prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity" on its website.

Contact us
For more information contact us at: