Research Report: Strategic Planning for Software Defined/Standard Cloud Environments

Executive Summary

"Software defined" computing is all about abstracting systems, storage, and networking hardware so that these resources can be programmed through software – by users as well as vendors – to perform new functions. This software defined approach enables enterprises to tightly integrate application service level objectives with the underlying infrastructure, allowing dynamic, automated reconfiguration of resources based on application/workload requirements or business-based policies. It also enables enterprises to improve Quality of Service (QoS) levels by software defining network and storage hardware behaviors, giving them the ability to introduce new availability, security, or other features instead of having to wait until their vendors supply them.

By designing and implementing a software defined strategy, IT organizations will be able to:

 Significantly drive down capital expenses by making better use of hardware;
 Significantly drive down operating expenses by automating provisioning and other management functions;
 Improve IT responsiveness (speed) by making infrastructure more agile (while also minimizing time-to-value); and
 Eliminate vendor lock-in by running your own software instead of proprietary software to drive your hardware.

But once your systems, storage, and network resources have been software defined, they will also need to be integrated with one another (otherwise your information systems will consist of software defined silos). At Clabby Analytics, we believe that these resources will be tied together into dynamic working pools through an evolving cloud standard known as OpenStack – as well as through associated standards such as TOSCA and Open Services for Lifecycle Collaboration (OSLC).

In this Research Report, Clabby Analytics describes some of the activities that we see taking place in software defined systems, software defined networks (SDN), and software defined storage (SDS). We also describe what OpenStack is, and explain why we believe that it will become a major integration point for the software defined data centers of the future. The way we see it, enterprise information technology (IT) executives need to articulate a combined "software defined/standard cloud" strategy – a roadmap for building a programmable, constantly-in-flux, dynamically efficient information infrastructure.

Strategic Planning for Software Defined/Standard Cloud Environments

The Big Picture in Software Defined Environments and Cloud Standards

In short, Clabby Analytics is advocating that your enterprise: 1) virtualize and automatically provision systems/networks/storage; 2) articulate a software defined strategy; and 3) integrate this environment as part of a standard cloud. In the following graphic we describe the elements that your organization needs to include in its software defined cloud strategy.

Figure 1: Building a Software Defined/Standard Cloud Environment

Source: Clabby Analytics, September, 2013


Software Defined: The Basic Concepts

The basic concept of "software defined" calls for the abstraction of hardware resources (systems, networks, and storage) so that new features and functions – programmable through software that addresses the underlying, abstracted hardware – can be added at will.

A Closer Look at Software Defined Servers

Why would an enterprise want to "software define" its servers? The answer is: to utilize a server's resources in different ways to better serve certain workloads. For example, a workload may not need access to a lot of memory or storage – nor need advanced security and high availability – and it may not require a fast processor. In this case, the server hardware could be abstracted and software could be written to provide very basic computing services to that workload. (The IT buyer would not have to buy an expensive, enterprise-class server to handle this workload – a low-cost, software defined, barebones system could handle it.)

As we see it, systems makers and independent software vendors (ISVs) are already actively "software defining" systems environments (mainframes, RISC machines, and x86-based servers). These traditional servers are being tuned and "defined" to handle specific workloads optimally. As an example, consider IBM's PureData System for Hadoop, PureData System for Analytics, PureData System for Operational Analytics, and PureData System for Transactions – or IBM's PureSystems family (see this report). The resources on these servers have been redefined by software to deliver optimized services for specific workloads.

Another example of how vendors are software defining server environments can be found in IBM's Platform Computing environment. A big challenge facing IT executives today is how to correctly deploy and manage large cloud server environments. Platform Computing's Symphony offering abstracts the underlying infrastructure, creating a hardware-independent interface to the underlying abstracted resources. Using this interface, the hardware can be "instructed" how to behave and what services to deliver – so services such as business processes or automation functions can be overlaid onto the underlying hardware. This is the heart of what software defined systems are all about.

At present, we see little need for IT executives to write a lot of software defined programs in order to software define traditional server environments. Vendors are already doing this for the user community. We do, however, see a need to software define tomorrow’s microservers (such as Advanced RISC Machine – ARM servers) because these server environments are not as mature as traditional servers.

As we described in this report, new microserver systems designs are starting to come to market. The ecosystem that surrounds these servers is in its infancy – so there is plenty of need to “software define” how the system resources in these microservers will be used.

A Closer Look at Software Defined Networks

In sharp contrast to the server world, the networking world is just beginning to be virtualized and software defined. From a virtualization perspective, much of the action we have seen to date has been in the area of virtual overlay networks. And, from a software defined perspective, much of the action has centered around an evolving standard known as OpenFlow.

An Example of Network Virtualization: DOVE

Distributed overlay networks provide an excellent example of how networks can be virtualized. Think about a network from a bandwidth perspective: is your organization making maximum use of the bandwidth available to it? Probably not. Now consider virtual overlay networks such as IBM's SDN for Virtual Environments, which is based on distributed overlay virtual Ethernet (DOVE) technology. DOVE networks can use existing networking infrastructure, and don't require special controllers or OpenFlow-enabled switches, to virtualize a network and extract unused bandwidth. Using a DOVE/SDN approach, network managers/administrators can introduce optimized network services, can let applications take control of the available network bandwidth, and can abstract the physical network – and thus build a high-utilization networked environment that can be tailored to meet specific workload requirements. For instance, new-generation Hadoop (Big Data) applications as well as video services (see this report) have different bandwidth requirements than traditional transaction processing or business applications. Using software defined principles, users can define their networks in such a manner as to more easily accommodate diverse workloads.
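DOVE itself is configured through IBM's SDN VE tooling rather than through code we can reproduce here, but the general idea of programmatically carving virtual networks out of shared infrastructure can be sketched with open APIs. The following minimal Python sketch is our own illustration – it assumes the openstacksdk library and a clouds.yaml profile named "demo", neither of which is part of IBM's DOVE offering – and simply creates a tenant overlay network and subnet for a bandwidth-hungry workload.

# Hypothetical sketch: creating a tenant overlay network through open APIs.
# Assumes the openstacksdk library and a clouds.yaml entry named "demo";
# neither is part of IBM's DOVE/SDN VE product.
import openstack

conn = openstack.connect(cloud="demo")  # credentials come from clouds.yaml

# Create an isolated virtual network for a bandwidth-hungry workload
# (e.g., Hadoop); the physical network underneath is untouched.
net = conn.network.create_network(name="hadoop-overlay")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="hadoop-overlay-subnet",
    ip_version=4,
    cidr="10.20.0.0/24",
)

print(f"Overlay network {net.name} ({net.id}) with subnet {subnet.cidr} is ready")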

Formal Standards Activity: OpenFlow and OpenDaylight

From our perspective, the networking market is tightly controlled by a few large vendors that don't necessarily benefit from virtualization (because they won't sell as much hardware if IT buyers are able to exploit volumes of unused bandwidth). Further, these vendors have proprietary locks on their hardware (for instance, most of these vendors tightly control their switch control planes – and this helps drive the sale of their proprietary management software because it knows how to work with those control planes). Network users, on the other hand, are demanding that vendors provide virtualization facilities in order to help enterprises make use of unused bandwidth – and network users also want proprietary network controls (such as ownership of switch control planes) torn down.

User demands for better virtualization facilities and open switch technologies are changing vendor strategies in the network market. Cisco, the market's 800-pound gorilla, is being forced to adopt an SDN strategy – and is being forced to open its control plane thanks to OpenFlow. Meanwhile, other vendors that don't have as large a market share as Cisco see an opportunity to gain share by offering open, standards-based network solutions. Cisco, Juniper, and Big Switch all have their own SDN product sets, yet come at the network virtualization issue from different perspectives. EMC, HP, and IBM are aggressively trying to get a foothold in SDN in order to open new inroads into the network marketplace – as well as to find ways to integrate their systems and storage products into integrated, software defined pools. For a more in-depth overview of the SDN marketplace, please see our soon-to-be-released report (November 2013) at ClabbyAnalytics.com. For information on one vendor's specific SDN strategy (IBM's), please see our new "IBM Software Defined Networking: Two Approaches to Network Virtualization and Centralized Network Management" report located here.

As for evolving standards, one standard (OpenFlow) is being designed to help software define network switches (we describe OpenFlow in more depth in this report). OpenFlow decouples a switch's control plane from the switch hardware and gives it to a software-based controller, providing a programmatic interface into network equipment. OpenFlow enables network administrators to set up and manage individual switches from a central console rather than manually configuring each and every physical switch. For cloud infrastructures this can improve flexibility and efficiency, and can also enable the use of commodity switches. Further, virtual machines can be provisioned and deprovisioned quickly, along with the network infrastructure required to support them.

This is a “big deal” because OpenFlow enables end users to take control of their network devices, thus breaking down vendor lock-in while enabling enterprises to build their own network functions into their network devices to meet their own specific requirements.
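To make the controller model concrete, here is a minimal sketch of an OpenFlow 1.3 controller application. It is written with the open source Ryu framework – our choice for illustration, not something the report's vendors ship – and installs a single "table-miss" rule on each switch that connects, instructing the switch to hand unmatched packets to the controller. That rule is the usual starting point for building custom forwarding, QoS, or security behavior in software.

# Minimal OpenFlow 1.3 controller app using the open source Ryu framework
# (illustrative only; vendor controllers expose similar capabilities).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        # Called once for every switch that connects to the controller.
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything; send unmatched packets to the controller so that
        # software (not the switch vendor) decides how they are forwarded.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS,
                                             actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=0,
                                            match=match, instructions=inst))

Run under ryu-manager against any OpenFlow 1.3-capable switch, this same pattern extends to the kinds of user-defined network functions described above.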

Another important standards effort related to SDN is the OpenDaylight initiative. OpenDaylight is an open community of developers and business professionals dedicated to supporting an open source controller framework and a collaborative decision-making process. The goal of this project is to further SDN adoption and innovation through the formation of a common SDN framework for building applications.
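To give a feel for what "a common SDN framework for building applications" looks like from a developer's perspective, the sketch below queries an OpenDaylight controller's northbound REST (RESTCONF) interface for the topology it has learned. The host name, port, path, and default credentials shown are assumptions that vary by OpenDaylight release; treat this as the shape of the API rather than a recipe.

# Illustrative query of an OpenDaylight controller's northbound REST API.
# The URL, port and default admin/admin credentials are assumptions that
# differ across OpenDaylight releases; adjust for your deployment.
import requests

ODL_URL = "http://opendaylight.example.com:8181"
TOPOLOGY_PATH = "/restconf/operational/network-topology:network-topology"

resp = requests.get(ODL_URL + TOPOLOGY_PATH,
                    auth=("admin", "admin"),
                    headers={"Accept": "application/json"},
                    timeout=10)
resp.raise_for_status()

# Each topology entry lists the nodes (switches) and links the controller
# currently sees; SDN applications build on this view rather than talking
# to individual switches directly.
for topology in resp.json()["network-topology"]["topology"]:
    nodes = topology.get("node", [])
    print(f"Topology {topology['topology-id']}: {len(nodes)} nodes")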

A Closer Look at Software Defined Storage

To get storage costs under control, IT storage managers have largely focused on deduplication and performance tuning – and on creating storage tiers, using advanced management products to improve systems/storage/network integration while reducing management costs. Storage managers have focused to a lesser degree on storage virtualization, automated provisioning, and centralized heterogeneous management.

As for software defined storage (SDS), IT buyers have not yet done enough to virtualize their heterogeneous storage. This failure to virtualize heterogeneous storage means that storage arrays are often underutilized within many enterprises. Enterprises should take steps now to virtualize storage in order to reduce storage costs.

As for software defining storage, we are starting to see SDS products come to market that abstract storage and make it programmable through software in order to deliver new management efficiencies, QoS feature/functions and additional services. For example, IBM, EMC and Red Hat offer such products today.

IBM and Software Defined Storage

IBM's SAN Volume Controller (SVC) is an early example of the SDS concept (it has been available on the market for over ten years – and there have been over 40,000 shipments of SVC engines running in more than 10,000 SVC storage environments). Customers using SVC already have a "storage hypervisor" (code that enables storage to be abstracted), independent of the underlying hardware and protocols. This allows the pooling of heterogeneous storage resources across an enterprise infrastructure comprised of a broad mix of servers and hypervisors (see Figure 2). This foundation also enables additional software defined services such as:


 Centralized management of shared storage resources, self-service administration, security, and monitoring with IBM's SmartCloud Entry solution;
 Automated self-service provisioning for management of application and middleware patterns with IBM SmartCloud Provisioning;
 Coordination of complex tasks and workflows – applying resources as needed for dynamic workloads and applications – with IBM SmartCloud Orchestrator; and
 Operational, infrastructure, and business analytics that improve operational efficiency and provide new business insights.

Figure 2 – IBM Software Defined Storage Architecture Based on OpenStack

Source: IBM Corporation, October, 2013

Programmable storage accessed via open APIs will encourage a broad range of solution providers to develop additional storage services such as application optimization, replication, encryption, storage management, metadata management, backup, archiving, and others.

IBM has also demonstrated an unwavering commitment to open standards as a Platinum Sponsor of OpenStack and as a key contributor (the #1 contributor to the core of the Havana release, with a focus on metering and orchestration of application patterns, and the #5 contributor overall across all OpenStack projects).

EMC and Software Defined Storage

Another example of how software defined storage can be implemented can be found in EMC's ViPR product offering. EMC's ViPR reminds us of the OpenFlow approach to software defined networks in that a ViPR controller is used to abstract underlying storage (just as an OpenFlow controller is used to abstract network switches). Here's how it works:


 EMC has built an SDS controller known as the ViPR Controller that abstracts underlying storage at the control plane level (not at the data plane or path levels). Abstracting the control plane enables ViPR to present storage services without turning the underlying storage into just a bunch of disks (JBOD). This means that ViPR Controller users can still use all the native capabilities of the arrays, such as SRDF and Unisphere – but new features can be added using software defined storage programmatic interfaces. Also note that abstracting the control plane means no performance degradation.
 The ViPR Controller provides provisioning, self-service, reporting, and automation functions across block- and file-oriented storage systems. It can abstract EMC's VMAX, VNX, VPLEX, RecoverPoint, and Isilon environments, as well as third-party storage (NetApp at present – others at a later date) and commodity storage. In addition to RecoverPoint block replication, the ViPR user interface allows users to provision EMC FAST and EMC and third-party snapshots (leveraging snapshot technologies inherent to the arrays – such as TimeFinder for VMAX).
 Once storage is abstracted, the resulting virtual storage pools can be leveraged as a logical scale-out layer for hosting a range of data services.
 The first release of the ViPR Controller includes an Object Data Service that supports several application programming interfaces (APIs), including Amazon's S3, OpenStack Swift, and the EMC Atmos APIs. ViPR users (including EMC customers and its business partners) can write applications to these APIs and the applications will run on the ViPR platform. Objects can be stored and accessed on ViPR-managed file arrays (see the sketch following this list).
 A future release of ViPR will add support for storing objects on EMC Atmos, EMC Centera, additional third-party arrays, and commodity hardware. EMC is exposing the northbound ViPR API and supporting a developer community to facilitate the development of additional third-party data services.
 ViPR also supports Hyper-V with an add-in to System Center.
 An HDFS data service will be available later in 2013, with plans for additional data services in 2014 and beyond.
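Because the Object Data Service speaks Amazon's S3 API, applications written against a generic S3 client can, in principle, simply point at a ViPR endpoint. The sketch below is our own hypothetical illustration using the widely available boto3 client; the endpoint URL, access keys, and bucket name are placeholders, not EMC-supplied values.

# Hypothetical example of talking to an S3-compatible object service
# (such as ViPR's Object Data Service) with the generic boto3 client.
# The endpoint URL, access keys and bucket name below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://vipr-object.example.com",  # placeholder data-service endpoint
    aws_access_key_id="OBJECT_USER_KEY",
    aws_secret_access_key="OBJECT_USER_SECRET",
)

s3.create_bucket(Bucket="reports")
s3.put_object(Bucket="reports", Key="q3/summary.txt", Body=b"quarterly summary")

obj = s3.get_object(Bucket="reports", Key="q3/summary.txt")
print(obj["Body"].read().decode())

The same few lines run unchanged against Amazon S3 or any other S3-compatible object store, which is precisely the portability argument for exposing open APIs on top of abstracted storage.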

We also note that EMC’s ViPR supports the evolving open standard for cloud environments – the OpenStack environment – using a Cinder plugin. This is important because ultimately systems, storage and network services will all need to be integrated in order to ensure that workloads get the resources that they need to execute computing jobs. As you will see later in this report, we believe that OpenStack (as well as a few other open services) will become that integration point.
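To illustrate why a Cinder plugin matters, the sketch below provisions a block volume through OpenStack's standard block storage API using the openstacksdk library. The cloud profile name and the "vipr-backed" volume type are our own placeholders; the point is that the application code is identical no matter which software defined storage backend the operator has registered behind Cinder.

# Provisioning block storage through OpenStack's standard Cinder API.
# The cloud name and the "vipr-backed" volume type are illustrative; the
# application code stays the same regardless of the SDS backend plugged in.
import openstack

conn = openstack.connect(cloud="demo")

volume = conn.block_storage.create_volume(
    name="app01-data",
    size=100,                  # GB
    volume_type="vipr-backed"  # maps to whatever backend the operator registered
)
conn.block_storage.wait_for_status(volume, status="available",
                                   failures=["error"], wait=300)
print(f"Volume {volume.name} ({volume.id}) is ready on backend-defined storage")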

The whole emphasis in EMC's ViPR environment from an SDS perspective is to encourage the creation of objects that can exploit the underlying ViPR controller to deliver new services. Figure 3 (next page) shows the relationships between EMC's Object Data Service, the ViPR controller, and underlying abstracted storage.


Figure 3 – EMC’s ViPR Environment

Source: EMC Corporation, September, 2013

Red Hat and Software Defined Storage

Another example of software defined storage can be found at Red Hat. In a recent Clabby Analytics report entitled "Hats Off to Red Hat Storage Server – Open Source Software Defined Storage," we described how Red Hat is using software defined techniques to improve storage array performance while lowering storage costs. Red Hat has defined a storage infrastructure – built using open source components such as the GlusterFS community release, oVirt (the open source virtualization platform), and the XFS file system – that Red Hat claims (citing an IDC study) can deliver up to 52% savings in storage hardware costs and up to 20% savings in operational costs. In addition to creating an open source-based SDS infrastructure, Red Hat has tied Red Hat Storage Server (RHSS) into OpenStack (as we have been recommending throughout this report), using the OpenStack application programming interfaces to tie its storage offerings to the evolving standard cloud.

Integrating These Environments: Use the Standard Cloud

The final step in implementing a software defined environment (SDE) strategy is to integrate systems, storage, networks, and services such that workloads can get dynamic, automated access to pools of systems, storage, and network resources.

We are seeing this type of workflow integration activity taking place in various standards organizations, using standards such as Open Services for Lifecycle Collaboration (OSLC), the OASIS TOSCA standard (Topology and Orchestration Specification for Cloud Applications), and OpenStack. These open cloud standards essentially provide three layers of services: OpenStack is the foundational infrastructure layer; TOSCA provides platform services and patterns that will enable the portability of workloads between environments; and OSLC is a composition layer that links services and facilitates easy integration of heterogeneous tools and processes.

OpenStack

OpenStack is an evolving standard for building heterogeneous cloud environments. It is freely available under the open source Apache 2.0 license; enterprises can run it, build on it, and/or submit changes back to the project. At present, the OpenStack community includes over 830 developers who have contributed code to the open source OpenStack cloud standardization effort.

OpenStack standards support will enable choice at the component, management, and application layers, as well as provide consistent management of OpenStack-enabled infrastructure components. Open APIs will encourage the development of a broad range of services and applications that can exploit the software defined architecture. OpenStack is the key to software defined environments, because customers will need interoperability between vendor solutions to take full advantage of the management, flexibility, and efficiency benefits that those environments offer.

Although some major vendors have not signed on to OpenStack to enable their cloud environments, we believe that over time OpenStack standards will dominate the cloud marketplace. The basis for this belief is that we see clouds becoming too complex to deploy and manage; we see non-interoperable cloud silos; and we know that users want simplicity and interoperability. Ultimately, the value that users receive from clouds is found in the services those clouds deliver, not in the cloud plumbing.

With OpenStack as the basis for cloud interoperability, IT buyers can tie their software defined systems, storage and networks together using common cloud standards. Accordingly, we see OpenStack as an essential integration point for the SDE.
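As a simplified illustration of that integration point, the sketch below composes compute, network, and block storage resources through a single set of open APIs (the openstacksdk library). Every name, image, flavor, and size in it is a placeholder of ours; the point is that the same few calls apply whether the pools underneath are IBM-, EMC-, Red Hat-, or commodity-based.

# One workload, three software defined resource pools, one open API.
# All names, image/flavor identifiers and sizes are placeholders.
import openstack

conn = openstack.connect(cloud="demo")

# Network pool: a tenant overlay network for the workload.
net = conn.network.create_network(name="workload-net")
conn.network.create_subnet(network_id=net.id, ip_version=4, cidr="10.30.0.0/24")

# Storage pool: a data volume from whichever SDS backend is configured.
vol = conn.block_storage.create_volume(name="workload-data", size=50)

# Compute pool: a server attached to the overlay network.
image = conn.compute.find_image("my-cloud-image")   # placeholder image name
flavor = conn.compute.find_flavor("m1.medium")      # placeholder flavor name
server = conn.compute.create_server(
    name="workload-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)

# Attach the pooled volume once the server is active.
conn.attach_volume(server, vol)
print(f"{server.name} is running with {vol.size} GB of pooled storage attached")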

OSLC

OSLC is an evolving cloud integration standard that promotes lifecycle integration tools that can be used to simplify integration across heterogeneous environments. The idea behind OSLC is that, by using a common standard for lifecycle integration, workloads can live where they were created, yet be linked to other programs and used where needed.

TOSCA

The evolving OASIS TOSCA standard (Topology and Orchestration Specification for Cloud Applications) defines the relationship of an application/workload to underlying cloud services. TOSCA is focused on enabling a compose-once, deploy-anywhere architecture for applications, using an approach that helps express application requirements in a language that IT infrastructure understands. This will enable cloud-based infrastructure and resources to adapt on the fly to workload and application requirements. TOSCA is supported by Capgemini, CA Technologies, Cisco, Citrix, EMC, IBM, NetApp, PWC, Red Hat, SAP AG, Software AG, Virtunomic, and WSO2.

Summary Observations

For over a decade Clabby Analytics has reported on the progress being made in server virtualization and provisioning. At this juncture the benefits of server virtualization are well known: better utilization (which serves to improve return-on-investment [ROI]); lower software licensing costs; lower high availability costs; lower management costs; and simplified testing. And these cost savings can be tremendous. One of our reports (found here) describes how one organization expects to save $2.3 billion in capital expense (CapEx) and operational expense (OpEx) costs – largely by virtualizing and automatically provisioning its server environment.

But there is a lot more money that can be saved by virtualizing networks and storage; by software defining these environments; and by integrating software defined systems, storage, and networks into a standards-based cloud environment. To pursue these savings, we are advocating that enterprises architect a "software defined/standard cloud" strategy.


Although server virtualization is a well-known entity, much work remains to be done in virtualizing networks and storage. We are starting to see strong progress in network virtualization with the advent of overlay networks that can use existing network infrastructure to exploit unused network bandwidth and deliver new network services. We are also seeing progress in software defining networks with the growing adoption of the OpenFlow standard.

In storage, we believe that enterprises need to become more aggressive in virtualizing their heterogeneous storage. Once this is done, enterprises should plan on abstracting that storage, making it programmable through open APIs, and providing new functions and automated services that will drive down storage acquisition and management costs as well as provide new ways for applications to take advantage of storage. IT planners should also start to follow the SDS efforts being made in the vendor community (see the IBM, EMC, and Red Hat examples herein). Efforts such as these have the potential to very significantly drive down storage costs.

Our key message in all of our virtualization/software defined reports to date is that enterprises that are interested in truly maximizing information systems ROI need to virtualize and automatically provision their systems, storage, and network infrastructure in order to increase overall resource utilization. Further, enterprises need to abstract underlying hardware in order to make it user programmable so that new functionality can be developed and vendor lock-in can be eliminated. This is what “software defined environments” are all about.

But there is more to building a software defined environment than programming abstracted resources. Each silo (the server silo, the storage silo, and the network silo) needs to be integrated with the others such that they all work in tandem, in resource pools, to maximize resource usage. We see new standards efforts gaining traction that will help enterprises perform this integration. A new standard for building heterogeneous clouds – OpenStack – is evolving quickly. Another standard that simplifies application and process flow across clouds, OSLC, is also evolving. And the OASIS standards organization is working on the TOSCA standard (Topology and Orchestration Specification for Cloud Applications), which will be used to help enterprises define the relationship of an application/workload to underlying cloud services.

In order to maximize return-on-investment (ROI) in information systems – and in order to minimize vendor lock-in – enterprise information technology executives need to formulate a "software defined environment/standard cloud" strategy. Implementing an SDE/standard cloud strategy will not only reduce CapEx through better utilization, it will also reduce OpEx through centralized, policy-driven, automated management (fewer administrative resources are required as a result). Application performance and adherence to service level agreements (SLAs) will also improve as resources are pooled and managed automatically as part of a dynamic, constantly-in-flux infrastructure that finds unused resources and returns them to resource pools for re-use by new workloads. This continuous optimization of resources will dramatically improve responsiveness to business needs while improving operational efficiency.

Clabby Analytics
http://www.clabbyanalytics.com
Telephone: 001 (207) 846-6662

© 2013 Clabby Analytics
All rights reserved
October, 2013

Clabby Analytics is an independent technology research and analysis organization. Unlike many other research firms, we advocate certain positions — and encourage our readers to find counter opinions — then balance both points-of-view in order to decide on a course of action. Other research and analysis conducted by Clabby Analytics can be found at: www.ClabbyAnalytics.com.