March 2020

How Software-Defined Servers Will Drive the Future of Infrastructure and Operations

In this issue

Introduction

How Software-Defined Servers Will Drive the Future of Infrastructure and Operations

Research from Gartner: Top 10 Technologies That Will Drive the Future of Infrastructure and Operations

About TidalScale

Introduction

SIXTEEN YEARS AGO, InformationWeek published a prescient call to arms for building an intelligent IT infrastructure. “The mounting complexity of today’s IT infrastructure,” cautioned the author, is having a “draining effect…on IT resources.”1

If the need for flexible, on-demand IT infrastructure was obvious back in 2004, imagine where we find ourselves today. Businesses now run on data. They analyze it to uncover opportunities, identify efficiencies, and define their competitive advantage. But that dependency comes with real-world challenges, particularly with data volumes doubling every year2 and IoT data growth outpacing enterprise data by 50X.3

Talk about “mounting complexity.”

The “draining effect” on IT resources observed 16 years ago is hitting IT operations where they live—both in their ability to meet SLAs and in their efforts to do more within limited budgets. Legacy platforms fail to keep up with growing and unpredictable workloads. Traditional approaches to scaling force IT departments into the same old system sizing, purchasing, and deployment cycles that can last months, even years.

Today’s CIOs are right to ask: If my largest servers can’t handle my SAP HANA, Oracle Database, data analytics or other in-memory workloads, can I really afford to commit a year or more to simply bringing a new system online? And does that do anything to solve my need for a more flexible, intelligent infrastructure?

More and more CIOs are realizing that the answer to both questions is a resounding, “No.”

This paper explores how new technologies are redefining the IT landscape—not because they’re bright and shiny new tech objects, but because they’re a crucial part of the agile, scalable and intelligent infrastructures that modern enterprises need. It describes how one acclaimed breakthrough—software-defined servers—aligns with many of the most exciting and influential technology trends that will drive the future of Infrastructure and Operations.

Read this report to learn how software-defined servers turn traditionally fixed on-premise resources and cloud resources into fluid, on-demand assets that combine to create virtual servers of any size—scaling beyond physical hardware boundaries, entirely on demand and with zero OS or application modifications. Discover how enterprises rely on this disruptive, award-winning technology to reduce IT infrastructure complexity, maximizing the business value of their data while increasing application performance and driving down costs.

Time is fleeting, data is exploding, and complex new problems demand better solutions.

Gary Smerdon
CEO, TidalScale

1 https://www.informationweek.com/applications/building-an-intelligent-it-infrastructure/d/d-id/1028655
2 https://techjury.net/stats-about/big-data-statistics/
3 https://insidebigdata.com/2017/02/16/the-exponential-growth-of-data/

How Software-Defined Servers Will Drive the Future of Infrastructure and Operations

Agile. Scalable. Intelligent.

These are the defining characteristics of IT infrastructures that all enterprises must adopt if they hope to compete in a world where change is accelerating. Too many organizations are weighed down by applications, processes and hardware designed for a more predictable age.

And as more applications and services move to cloud and open-source solutions, where does that migration leave IT environments with significant investments in legacy hardware and technology? Large-scale hardware solution vendors believe they have an answer. They’re introducing composable (or converged, or hyperconverged) architectures built around their own proprietary platforms. Some are further along than others, but they all share a similar goal: to solidify the grip on IT environments that they’ve enjoyed for years, and sometimes decades.

The Future Is in Breakthrough Technologies

New technologies, however, promise to disrupt the world of vendor lock-in and proprietary solutions. Analysts view them as the key to modernizing IT.

In Top 10 Technologies That Will Drive the Future of Infrastructure and Operations (Gartner: Arun Chandrasekaran and Andrew Lerner, October 29, 2019), a Gartner research report featured later in this paper, the authors short-list a range of emerging technologies and offerings that they expect will help create the I&O environments CIOs are seeking.

Figure 1. Top 10 Technologies That Will Have the Greatest Impact on Infrastructure and Operations1

Source: Gartner (October 2019)

These are all worthy choices for a top 10 list, even if many are still in the proof-of-concept stage. Far from complete solutions, they aim to solve a specific problem or set of problems. This makes sense, because the task of modernizing IT requires more than a single solution. It requires an array of technologies that work together to create a fluid, on-demand environment. And the more they use industry-standard technologies, the better.

Software-Defined Servers and the Modern Datacenter

One breakthrough in particular adds value to the aspects of I&O that nearly all of these technologies address. That breakthrough is TidalScale software-defined server technology. TidalScale software combines multiple commodity servers into virtual systems of any size. TidalScale software virtualizes all resources (cores, memory and IO) resident in those servers and combines them to create a software-defined server.

1 Top 10 Technologies That Will Drive the Future of Infrastructure and Operations, 29 October 2019, G00430091, Arun Chandrasekaran, Andrew Lerner

This virtual server appears as a single system to the OS and application software.

By creating software-defined servers on demand, TidalScale does for rigid, inflexible servers what software-defined solutions have done for storage and networking. The result is modern, agile and cost-effective IT infrastructure. (See How Software-Defined Servers Modernize IT, below.)

TidalScale and Gartner's Top Tech Trends

How software-defined server technology impacts Gartner's chosen technologies reveals its broad application throughout a modern datacenter.

TREND: Artificial Intelligence for IT Operations (AIOps) Platforms

AI is becoming a major force throughout all of computing, but the focus of most AIOps implementations is on applications embedded with AI. The goal is to achieve greater utilization at the application level. This is valuable, to be sure, but it misses an opportunity to infuse AI capabilities in a way that's arguably more foundational.

WHERE TIDALSCALE FITS: TidalScale has taken that foundational approach by integrating machine learning (ML)—a key enabling component of AI—at the server (rather than application) level. To optimize application performance, TidalScale uses ML to automatically locate cores, memory and I/O where they will deliver the optimal performance for the application in use. For instance, if the application is memory-intensive, then the ML algorithms in a TidalScale software-defined server migrate the system's memory in real time to where the application needs it most. The system also monitors and learns from its performance, enabling the TidalScale platform to optimize itself over time.

Because TidalScale software works with any application without requiring a single software modification, TidalScale's ML capabilities are complementary to—and work seamlessly with—AI that exists at the application level.
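To make the idea of server-level ML concrete, here is a deliberately simplified sketch of locality-driven memory placement: pages that are repeatedly touched by threads on a remote node get migrated toward that node. This is illustrative only; TidalScale's actual algorithms are proprietary, and every name below (Page, MIGRATE_THRESHOLD, the migration stand-in) is invented for the example.

```python
# Hypothetical sketch of locality-driven page migration. Not TidalScale's
# real algorithm: it only illustrates the general heuristic of moving
# memory toward the node that uses it most.
from collections import Counter
from dataclasses import dataclass, field

MIGRATE_THRESHOLD = 0.75  # fraction of recent accesses from one remote node


@dataclass
class Page:
    home_node: int
    access_counts: Counter = field(default_factory=Counter)  # node -> accesses

    def record_access(self, node: int) -> None:
        self.access_counts[node] += 1

    def rebalance(self) -> None:
        total = sum(self.access_counts.values())
        if not total:
            return
        node, hits = self.access_counts.most_common(1)[0]
        # Migrate the page when a remote node dominates recent accesses.
        if node != self.home_node and hits / total >= MIGRATE_THRESHOLD:
            self.home_node = node  # stand-in for a real page migration
        self.access_counts.clear()  # start a fresh observation window
```

A production system would also weigh migration cost against expected benefit and learn thresholds over time, which is where the machine learning comes in.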

TREND: Container Management (Orchestration)

The use of containers is growing so popular that Gartner predicts that by 2022, 75% of global organizations will run containerized applications in production. This makes orchestration software an attractive offering. But even as containers grow in popularity, challenges exist in three primary areas:

■ Mobility (live migration of executing processes is difficult or impossible)

■ Orchestration (adding and removing containers while servers are running can increase latency)

■ Security (most environments lack hardware-enforced separation between multiple containers running on the same server)

WHERE TIDALSCALE FITS: TidalScale works seamlessly as part of containerized and DevOps environments. In addition, it addresses the three main challenges of containers:

■ TidalScale solves container mobility challenges by utilizing the hardware below the OS kernel to transparently move resources across nodes to wherever they're needed, including cores, memory, and networking resources. All of these resources can be configured and mobilized as needed.

■ Container orchestration is simpler because TidalScale's control panel allows for point-and-click provisioning of physical resources at the rack level. For instance, the ability to pool all server resources in a rack while maintaining locality makes it easier to orchestrate storage, CPU, memory and related container components than on the individual server level.

■ TidalScale software leverages a data center control plane that preserves the security of physical resource access. Everything managed by TidalScale exists outside of the guest operating system and on a network that is not available to the guest. Containers running in the guest are another step further removed from access to core datacenter components.

TREND: DevOps Toolchain

Most organizations today use an array of tools to streamline application development and delivery. Unintegrated tools, however, can make deployment and management a headache.

WHERE TIDALSCALE FITS: As a software provider, TidalScale understands the benefits and challenges of DevOps toolchain solutions. From a solution provider perspective, TidalScale is a part of many DevOps environments because customers rely on TidalScale to provision resources to developers when and where they're needed.

NCS Analytics, a Colorado-based provider of data analytics solutions for governments and financial institutions serving high-risk industries, relies on TidalScale for its own DevOps environment. Analysts using R, Python and other tools create software-defined servers on demand—servers large enough for them to accelerate the processing of development models by as much as 240X compared to local or traditional cloud infrastructure servers. They are also able to iterate models 25X more often than before. The result is a more productive and responsive team. Using TidalScale, just three NCS analysts are able to produce output that would require a team of nine using traditional servers. (Learn more: www.tidalscale.com/customers)
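As an illustration of the on-demand provisioning workflow described above, the hypothetical snippet below requests a right-sized software-defined server through a REST-style control-plane API. The endpoint, payload fields, token and response shape are all invented for this sketch; TidalScale's actual interfaces may differ.

```python
# Hypothetical illustration only: the URL, fields and response shape are
# placeholders, not a documented TidalScale API.
import requests


def create_software_defined_server(cores: int, memory_gb: int, api_token: str) -> str:
    """Request a right-sized virtual server composed from pooled physical nodes."""
    resp = requests.post(
        "https://control-plane.example.com/v1/servers",  # placeholder URL
        headers={"Authorization": f"Bearer {api_token}"},
        json={"cores": cores, "memory_gb": memory_gb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["server_id"]  # assumed response shape


# e.g. a data scientist sizing a server for a large in-memory model run:
# server_id = create_software_defined_server(cores=192, memory_gb=4096,
#                                            api_token="...")
```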

TREND: Edge Computing

Local processing of data is attractive because there's simply not enough bandwidth to send data to the cloud for real-time (or even timely) processing. Today, the most extreme examples are self-driving cars and other applications that take data-driven actions in a fraction of a second. And today's extreme examples are likely to become tomorrow's standard. Edge computing is a key enabler that is still tightly linked to flexible core infrastructure needs.

WHERE TIDALSCALE FITS: Organizations will require more capacity, more compute and more memory to draw insights and value from data. A flexible I&O infrastructure helps organizations rely on edge computing applications. They can accommodate new ideas and use cases as they emerge, rather than forcing IT teams to predict needs two to three years into the future to size traditional server procurements. TidalScale software-defined servers are an ideal foundation for turning previously fixed assets (X86-based servers) into a fluid pool of resources capable of supporting edge applications as they evolve.

TREND: Hybrid Cloud

Between cost optimization and availability, weaving public and private cloud environments has plenty going for it. But orchestrating these environments can be complicated, and in I&O, complicated usually means costly.

WHERE TIDALSCALE FITS: Hybrid cloud environments are attractive in multiple scenarios. Organizations may want public cloud access to certain platforms that are not available on premise. Or they may want to establish a high-availability or disaster recovery environment in the public cloud. Hybrid cloud configurations provide the ability to install resources on premise where constraints require, seamlessly integrated with cloud resources where flexibility allows.

TidalScale software-defined servers allow organizations to define the resources they need, entirely on demand, both in private and public cloud environments. Because TidalScale creates software-defined servers from industry-standard X86-based systems—prevalent in data centers and IaaS environments alike—there's no need to worry if one hybrid half mirrors the other. (And for organizations using memory-intensive applications like Oracle Database or SAP HANA, there's another plus: Creating software-defined servers with standard rack systems from public cloud providers is generally more affordable than leasing large-memory configurations.)

Mohawk Industries, a Fortune 500 maker of flooring products, uses TidalScale software-defined servers for QA and disaster recovery. Mohawk provisions lower-cost copies of traditional large systems from commodity two-socket servers. This allows Mohawk to meet its system mirroring needs at half the I&O cost of traditional servers. (Learn more: www.tidalscale.com/customers)

TREND: Next-Generation Memory

Inexpensive DRAM and fast flash memory? Or slow DRAM and expensive flash? After 20 years of discussion and hype, the marketplace may finally weigh in on the prospects of next-generation memory as actual implementations begin to emerge.

WHERE TIDALSCALE FITS: When next-generation memory achieves significant market penetration, TidalScale will be uniquely positioned to support it and add value. TidalScale hides the complexity of these next-generation solutions, which operating systems and applications view as much slower and less predictable than DRAM. TidalScale also provides an excellent option for I&O environments that prefer to continue using traditional memory platforms they know and trust, scaling memory far beyond what individual CPUs can directly address.

How Software-Defined Servers Modernize IT

Software-defined servers offer enterprises an easy and affordable way to add flexibility to existing IT infrastructure, both on-premise and in the cloud. Software-defined servers bring the benefits of virtualization to scale: where traditional virtualization enables multiple system images to be deployed on a single server, TidalScale software-defined servers combine multiple physical servers into a single virtual machine, providing scale-up capacity for running a standard OS and application stack with no code changes.

Freed from the physical limitations and cost burdens of traditional scale-up servers, organizations find TidalScale helps them modernize their IT environments in multiple ways. (See Figure 2.)

Meeting the Needs of CIOs Today—and in the Future

Software-defined servers offer enterprises an easy and affordable way to add flexibility, scalability and intelligence to existing IT infrastructure, both on-premise and in the cloud. With CIOs expected to make IT ever more responsive to the demands of the business, software-defined servers have a crucial role to play there as well.

Figure 2. How Software-Defined Servers Will Drive the Future of Infrastructure and Operations

Source: TidalScale

Today's CIOs are expected to meet competing agendas: they're tasked with supporting revenue generation for the business (which tends to require investment in greater IT capacity), while at the same time containing IT infrastructure costs (which favors making more creative use of existing resources).

Traditionally, those two agendas have been in conflict. TidalScale software-defined servers provide a way to bridge that gap: Organizations can scale IT infrastructure capacity to meet the revenue-growth demands of the business, even while driving down infrastructure costs, because they're able to achieve business goals using industry-standard servers instead of large, proprietary scale-up systems.

What technologies will define the future of I&O? Those that help CIOs meet the needs of both the business and IT—making it easier, faster and less costly to handle workloads that are growing increasingly large and unpredictable.

With TidalScale software-defined servers, that future is already here. (See Figure 3.)

Learn more at www.tidalscale.com.

Figure 3. Software-Defined Servers: Helping CIOs Meet Top-Line Directives While Reducing TCO

Source: TidalScale

Research from Gartner

Top 10 Technologies That Will Drive the Future of Infrastructure and Operations

Enabling innovation through an agile, scalable and intelligent infrastructure is vital for digital organizations. We highlight the 10 most compelling early-stage technologies that enterprise architecture and technology innovation leaders should use to drive infrastructure innovation through 2024.

Key Findings

■ There continues to exist significant technical debt in infrastructure and operations (I&O) due to complex legacy applications and outdated processes, which cripple agility, heighten business risks and often result in bloated costs.

■ There is a clear market shift in innovation — from legacy IT infrastructure vendors to cloud providers and open-source communities — that is heralding the next wave of infrastructure innovation.

■ Technology innovation leaders struggle to increase their ability to absorb, deploy and scale technological innovations.

Recommendations

Enterprise architecture and technology innovation leaders driving innovation should:

■ Ideate with business leaders and I&O leaders on potential use cases where these technologies can be applied to enhance business value.

■ Develop a decision map of which of these technologies are relevant within their organization and a timeline for their implementation.

■ Improve the scale and efficiency of innovation by identifying internal champions both within I&O and business units to lead these efforts, incentivize them, measure accurately the success/failure and iterate toward continuous improvement.

■ Invest beyond technology, as technology isn't a silver bullet and needs to be complemented by process automation, skills training and cultural hacks to ensure that these projects don't fail.

Analysis

The needs of digital business are forcing technology innovation leaders to rethink infrastructure strategies for next-generation workloads. To succeed in the digital business era, innovation leaders need to drive "creative destruction," often willing to fundamentally rethink the technology architecture, deployment environment and operating models. We have analyzed many transformative technologies and selected the 10 that we believe will have the greatest impact on I&O (see Figure 1). These technologies are all highly transformative and will mature within the next five years. As an enterprise architecture (EA) and technology innovation leader, you must assess how your organization could exploit these technologies.

In this research, we define each technology, predict its market adoption, assess why you should care and list some of the imperatives that you should be aware of.

Figure 1. Top 10 Technologies That Will Have the Greatest Impact on Infrastructure and Operations

Source: Gartner (October 2019)

Although this research focuses on individual technologies, you should be alert for opportunities that arise when two or more of these trends combine to enable new capabilities.

Technology No. 1: Artificial Intelligence for IT Operations (AIOps) Platforms

Description: AIOps platforms combine big data and machine learning to support several primary IT operations functions. They do so through the scalable ingestion and analysis of the ever-increasing volume, variety and velocity of data that IT operations generate. These platforms enable the concurrent use of multiple data sources, data collection methods, and analytical and presentation technologies.

Why EA and Technology Innovation Leaders Should Care: AIOps platforms bring:

■ Agility and productivity gains. They do so by analyzing both IT and business data, yielding insights on user interaction, business activity and supporting IT system behavior.

■ Service improvement and cost reduction. They do so by significantly cutting the time and effort required to identify the cause of uptime and performance issues. Behavior-prediction-informed forecasting can support resource optimization efforts.

■ Risk mitigation. They do so by analyzing monitoring, configuration and service desk data. They identify anomalies from both operations and security perspectives.

■ Competitive differentiation/disruption. They do so through superior responsiveness to market and end-user demand based on machine-based analysis of shifts.

Prediction: By 2022, at least 25% of large enterprises will be using artificial intelligence for IT operations (AIOps) platforms and digital experience monitoring (DEM) technology exclusively to monitor the nonlegacy segments of their IT estates, up from 2% in 2018.

Maturity: Emerging; market adoption is 5% to 20%.

AIOps platform vendors have a broad range of capabilities, such as event correlation, anomaly detection, root cause and optimization analysis, that continues to grow. However, vendors differ in their data-ingest and out-of-the-box use cases made available with minimal configuration. Building your own AIOps platform requires a high degree of operational, analytical and data science skills.

What You Should Know: AIOps skills and IT operations maturity are the usual inhibitors in ensuring quick time to value when using these tools, followed by data quality as an emerging challenge for some of the more mature deployments. The market is experiencing broad "analytics" confusion, intentional obfuscation by vendors and rising costs associated with increasing data volumes. Cultural barriers exist and end users must acquire new skills to use AIOps platforms. Before planning to adopt an AIOps platform, decide whether a monitoring tool that uses machine learning would be a sufficient alternative.
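To ground the AIOps discussion, here is a minimal sketch of the kind of statistical anomaly detection such platforms automate at scale: flag metric samples that deviate sharply from a recent baseline. Real platforms correlate many telemetry streams and apply far richer models; the function name and thresholds below are our own invention.

```python
# Minimal sketch of baseline-deviation anomaly detection over one metric
# stream; illustrates only the core statistical idea behind AIOps tooling.
import statistics


def find_anomalies(samples: list[float], window: int = 30,
                   z_max: float = 3.0) -> list[int]:
    """Return indexes of samples more than z_max standard deviations
    from the mean of the preceding window of samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(samples[i] - mean) / stdev > z_max:
            anomalies.append(i)
    return anomalies


# e.g. latencies_ms = [12.1, 11.8, ...]
# find_anomalies(latencies_ms) -> indexes of latency spikes worth alerting on
```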

Sample Vendors: BigPanda; BMC; Devo; Elastic (Elasticsearch); Loom Systems; Moogsoft; Splunk; StackState; Sumo Logic.

Technology No. 2: Compute Accelerators

Description: Compute accelerators are composed of:

■ Graphics processing unit (GPU) accelerators. These use a GPU to accelerate highly parallel compute-intensive portions of workloads in conjunction with a CPU.

■ Deep neural network (DNN) application-specific integrated circuits (ASICs). These purpose-specific processors accelerate DNN computations.

■ Field-programmable gate array (FPGA) accelerators. These server-based reconfigurable computing accelerators deliver extreme high performance by enabling programmable hardware-level application acceleration.

Why EA and Technology Innovation Leaders Should Care: Compute accelerators bring extreme performance and power efficiency. GPU accelerators can deliver extreme performance for highly parallel compute-intensive workloads in HPC, DNN training and inferencing. GPU computing is also available as a cloud service and may be economical for applications where utilization is low but time-to-market needs are high. DNN ASICs will enable neural-network-based systems to address more opportunities in your business through improved cost and performance. Use cases that can benefit from DNNs include speech-to-text, image recognition and natural-language processing. FPGA accelerators can enable dramatic performance improvements within significantly smaller energy consumption footprints than comparable mainstream technologies. FPGA accelerators are well suited to artificial intelligence (AI) inference workloads, as they excel in low-precision processing capabilities in energy-efficient footprints.

Prediction: Through 2022, computational resources used in AI will increase by at least 4x from 2018, making AI the top category of workloads driving infrastructure decisions.

Maturity: GPUs are most mature (5% to 20% market adoption) followed by ASICs and FPGAs, which are adolescent and early mainstream, respectively (1% to 5% market adoption).

GPU subsystems are actively deployed in HPC and AI. In AI, DNN technologies are maturing quickly, and most of the DNN frameworks support GPU acceleration. Gartner describes GPU accelerators as a mature mainstream technology.

Gartner expects that the market for DNN ASICs will mature quickly, possibly within the three-year depreciation horizon of new systems. FPGAs are typically configured using hardware programming languages that are very complex to use, which has held back widespread adoption of FPGA accelerators. Gartner describes FPGA accelerators as an early mainstream technology.

What You Should Know: Widespread use of DNN ASICs will require the standardization of neural network architectures and support across diverse DNN frameworks. Choose DNN ASICs that offer or support the broadest set of DNN frameworks to deliver the business value faster.

For GPU accelerators, selecting the right GPU compute platforms that offer the most mature software stack can be challenging. Optimize infrastructure costs by evaluating cloud-hosted GPU environments for proof-of-concept and prototype phases. Identify the costs associated with the skills necessary for FPGAs and with the programming challenges of FPGAs. Use cloud-based FPGA services to accelerate development.

Sample Vendors: We have broken down our list by accelerator type:

■ GPU accelerators: AMD; Cray; ; Hewlett Packard Enterprise; IBM; ; NVIDIA; Supermicro

■ DNN ASICs: ; ; Graphcore; Intel; SambaNova Systems; Wave Computing

■ FPGA accelerators: Amazon Web Services; Baidu; Intel; Azure; Xilinx
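A brief sketch of the GPU offload pattern described in this section: move a highly parallel computation to an accelerator when one is present, and fall back to the CPU otherwise. PyTorch is our choice for the example; the research names no specific framework.

```python
# Minimal CPU-vs-GPU offload sketch using PyTorch (our assumption; any
# GPU-aware framework follows the same pattern).
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)  # large, highly parallel workload
b = torch.randn(4096, 4096, device=device)
c = a @ b  # matrix multiply runs on the accelerator when one is available
print(f"matmul ran on: {c.device}")
```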

Technology No. 3: Container Management (Orchestration)

Description: Container management software supports the management of containers at scale in production environments. It includes container runtimes, container orchestration, job scheduling and resource management. Container management software brokers the communication between the continuous integration/continuous deployment pipeline and the infrastructure via APIs. It also aids in life cycle management of containers.

Why EA and Technology Innovation Leaders Should Care: Container runtimes make it easier to exploit container functions, such as providing integration with DevOps tools and workflows. Containers provide productivity and agility benefits, including:

■ The ability to accelerate and simplify the application life cycle

■ Workload portability between different environments

■ Improved efficiency of resource utilization

Container management software also makes it easier to achieve scalability and production readiness, and to optimize the environment to meet business SLAs.

Prediction: By 2022, more than 75% of global organizations will be running containerized applications in production, which is a significant increase from fewer than 30% in 2018.

Maturity: Adolescent; 5%-20% market adoption.

Grassroots adoption of containers from individual developers has been significant. These developers will use containers with increasing frequency in development and testing, particularly for Linux. Gartner expects container management software to remain an adolescent technology through 2020 and possibly 2021.

What You Should Know: Explore container technology for packaging and deploying Linux applications, and beware that Windows containers lag behind in maturity and ecosystem integration. Container management may be appropriate for you if your organization:

■ Is DevOps-oriented or plans to become so

■ Has high-volume, scale-out applications with a willingness to adopt microservices architecture or large-scale batch workloads

■ Has aspirational goals of increased software velocity and immutable infrastructure

■ Prioritizes provisioning and scaling time to be critical (measured in seconds, not minutes like VMs)

Sample Vendors: Amazon Web Services; D2IQ; Docker; Google Cloud Platform; IBM; Microsoft Azure; Pivotal; Rancher Labs; Red Hat; VMware.
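As a concrete taste of container orchestration driven through an API, the sketch below uses the official Kubernetes Python client (one representative orchestrator; the research does not prescribe a tool) to create a three-replica Deployment. The image, labels and resource requests are illustrative.

```python
# Sketch: programmatic orchestration with the official Kubernetes Python
# client. Creates a three-replica nginx Deployment in the default namespace.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig for cluster access

container = client.V1Container(
    name="web",
    image="nginx:1.25",  # illustrative image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "500m", "memory": "512Mi"}),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
```

A CI/CD pipeline would drive calls like this through the same API, which is the "brokering" role the description above assigns to container management software.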

Technology No. 4: DevOps Toolchain

Description: A DevOps toolchain is composed of tools for supporting DevOps pipeline activity and providing fast feedback as part of the software development life cycle. It typically covers six main activities: plan, create, verify, release, configure and monitor. Pipeline activities have started with discrete tools for various steps, but vendors are delivering solutions across the application development and delivery cycle.

Why EA and Technology Innovation Leaders Should Care: A well-designed, integrated and automated DevOps toolchain enables development and operations team members to work together, with common objectives and metrics. This helps to ensure quality, on-time application delivery to the business. Of particular interest to operations teams are tools that do continuous configuration automation (such as Chef, Puppet, Ansible and SaltStack), APM/infrastructure management (such as Cisco AppDynamics, Datadog, Dynatrace, Elastic, New Relic and Splunk) and release automation tools (such as CloudBees, Microsoft and XebiaLabs).

Prediction: By 2022, at least 30% of enterprises will adopt a standard set of tools across their DevOps practices, up from less than 10% in 2018.

Maturity: Adolescent; 5%-20% market adoption.

The market is rapidly evolving through acquisitions, the emergence of open-source and new commercial products, and the continued development of cloud architecture. Competition between the largest vendors, particularly of cloud platforms, will continue to disrupt the market.

What You Should Know: DevOps toolchains can include dozens of unintegrated tools, which makes automation a technically complex and arduous task. We recommend that IT organizations develop a toolchain strategy that establishes business objectives, identifies practices to achieve those objectives and then selects tools to support those practices. If your organization supports more than a single unified delivery platform (such as cloud, mainframe, mobile apps and packaged applications), you'll require a variety of tools from multiple vendors. The toolchain should focus on removing execution barriers and automating the development and continuous delivery process. Remember that even open-source tools aren't free. There's a cost attached to learning, integrating and (especially when they're integrated) replacing them.

Sample Vendors: Atlassian; CloudBees; GitLab; Harness; Microsoft; Red Hat; ServiceNow; XebiaLabs.

Technology No. 5: Edge Computing

Description: Edge computing is a distributed computing topology that places information processing close to the things or people that produce or consume that information. It aims to:

■ Keep traffic and processing local and off the center of the network

■ Reduce latency and unnecessary traffic

■ Establish a hub for interconnection between interested peers

■ Establish a hub for the data thinning of complex media types or computationally heavy loads

Why EA and Technology Innovation Leaders Should Care: Edge computing solves many urgent issues, such as unacceptable latency and bandwidth and cost limitations, given a massive increase in edge-located data. Edge computing will enable aspects of the Internet of Things (IoT) and digital business well into the very near future.

Prediction: By 2022, more than 50% of enterprise-generated data will be created and processed outside the data center or cloud.

Maturity: Adolescent; 5%-20% market adoption.

Most of the technology for creating the physical infrastructure of edge is readily available. However, widespread application of the topology and explicit application and networking architectures are common only in vertical applications, such as retail and manufacturing. The increase in demand for the IoT and the proliferation of its use cases has dramatically increased interest in edge technologies and architectures. This is because edge computing is the IoT's accepted topological design pattern (namely, the "where" a "thing" is placed in an overall architecture). The still-nascent state of non-IoT edge applications has prevented wider adoption.

What You Should Know: Don't risk being left behind — begin considering edge design patterns in your medium- to longer-term infrastructure architectures. Expect to do more planning and experimentation with data thinning and cloud interconnection than with applications, such as client-facing web properties and branch office solutions. You must become familiar with an emerging application model, in which edge gateways and hubs serve as the linchpins for deploying heterogeneous, multicloud and multiendpoint applications.

Sample Vendors: Akamai; Amazon; Cisco; Cloudflare; HPE; Microsoft; Pixeom; Vapor IO; Verizon; ZEDEDA.
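The "data thinning" pattern mentioned above lends itself to a short sketch: aggregate raw readings at the edge gateway and forward only compact summaries to the core. All names below are invented for illustration.

```python
# Illustrative edge "data thinning": summarize a burst of raw sensor
# readings into one compact record before forwarding it to the cloud.
import statistics
import time
from dataclasses import dataclass


@dataclass
class Reading:
    sensor_id: str
    value: float


def summarize(readings: list[Reading]) -> dict:
    """Thin a window of raw readings into a single summary record."""
    values = [r.value for r in readings]
    return {
        "sensor_id": readings[0].sensor_id,
        "window_end": time.time(),
        "count": len(values),
        "mean": statistics.fmean(values),
        "max": max(values),
        "min": min(values),
    }


# Instead of shipping thousands of raw samples per minute to the cloud,
# the edge gateway sends one summary dict per sensor per window.
```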

Technology No. 6: Hybrid Cloud

Description: Hybrid cloud is the integration of private and public cloud services to support parallel, integrated or complementary tasks. Hybrid cloud computing needs integration between two or more internal or external environments at the data, process, management or security layers.

Why EA and Technology Innovation Leaders Should Care: Hybrid cloud offers enterprises the best of both worlds — the cost optimization, agility, flexibility, scalability and elasticity benefits of public cloud, in conjunction with the control, compliance, security and reliability of private cloud. The solutions that hybrid cloud provides include service integration, availability/disaster recovery, cross-service security, policy-based workload placement and runtime optimization, and cloud service composition and dynamic execution (for example, cloudbursting).

Prediction: By 2022, more than 80% of organizations will have deployed a hybrid cloud or multicloud model for their IT needs.

Maturity: Adolescent; 5%-20% market adoption.

Organizations have started adopting foundational technologies (such as cloud management tools and continuous integration tools), but full-blown implementations still are riddled with complexity. Leading cloud IaaS providers have either released products or made announcements for hybrid and multicloud deployments in the past few years. Among the most notable of these are Azure Stack from Microsoft, VMC from AWS/VMware, Anthos from Google and Outposts from AWS. Gartner expects this to be an area of intense competition and innovation in the coming years.

What You Should Know: When using hybrid cloud computing services, establish security, management, and governance guidelines and standards to coordinate the use of these services with internal (or external) applications and services to form a hybrid environment. Approach sophisticated cloudbursting and dynamic execution cautiously because these are the least mature and most problematic hybrid approaches. To encourage experimentation and cost savings, and to prevent inappropriately risky implementations, create guidelines/policies on the appropriate use of the different hybrid cloud models.

Sample Vendors: Amazon Web Services (AWS); Google; Hewlett Packard Enterprise (HPE); IBM; Microsoft; Nutanix; OpenStack; Rackspace; Flexera (RightScale); VMware.
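A toy sketch of the cloudbursting idea referenced in this section: run work on private capacity until utilization crosses a threshold, then overflow to public cloud. The threshold and both submit functions are placeholders, not any vendor's API.

```python
# Toy cloudbursting placement decision; everything here is a placeholder
# illustrating the policy, not a production scheduler.
PRIVATE_CAPACITY_LIMIT = 0.80  # burst when the private pool is >80% utilized


def place_workload(job: str, private_utilization: float) -> str:
    """Keep jobs on private capacity until it runs hot, then burst."""
    if private_utilization < PRIVATE_CAPACITY_LIMIT:
        return submit_private(job)      # placeholder: private-cloud scheduler
    return submit_public_cloud(job)     # placeholder: public-cloud API call


def submit_private(job: str) -> str:
    return f"{job} -> private"


def submit_public_cloud(job: str) -> str:
    return f"{job} -> public"


print(place_workload("nightly-batch", private_utilization=0.92))
```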

Technology No. 7: Intent-Based Networking

Description: An intent-based networking system (IBNS) provides:

■ Translation and validation: It can take a higher-level business policy as input from end users and convert it to the required network configuration.

■ Automation: It can configure appropriate network changes across existing network infrastructure.

■ State awareness: The system ingests real-time network status for systems under its control.

■ Assurance and dynamic optimization: The system continuously validates that business intent is being met and can take corrective action when it isn't.

Why EA and Technology Innovation Leaders Should Care: Intent-based networking can transform network operations. IBNSs improve network agility and availability and support unified intent and policy across heterogenous infrastructures. When the technology matures, a full IBNS implementation will reduce the time to deliver network infrastructure services to business leaders by 50% to 90%. It will also reduce the number and duration of outages by at least 50%. IBNSs also:

■ Reduce operating expenditure

■ Optimize performance

■ Cut dedicated tooling costs

■ Enhance documentation

■ Improve compliance

Prediction: By 2022, more than 1,500 large enterprises will use intent-based networking systems in production, up from less than 15 today.

Maturity: Emerging; less than 1% market adoption.

Unfortunately, the technology is massively overhyped, and things such as automation and programmability are "washed" with intent. However, real-world enterprise adoption of IBNSs was nascent as of early 2019, and Gartner estimated fewer than 50 full deployments. Through 2019, early rollouts are likely to be in larger-scale environments for well-defined and specific use cases, such as spine/leaf data center networks. We don't expect the number of commercial enterprise deployments will reach the hundreds until the end of 2020. Adoption will be pragmatic, associated with new build-outs or network refresh initiatives.

What You Should Know: IBNSs require the ability to abstract and model network behavior, which has proved difficult, particularly in multivendor environments. IBNSs also require a substantial cultural shift in the design and operation of networks. This will create barriers to adoption in many risk-averse organizations. Choose an IBNS that supports multivendor network infrastructures and extends into public cloud environments to support a broader range of use cases and avoid vendor lock-in. Deploy IBNSs in phases.

Sample Vendors: Apstra; Cisco; Forward Networks; Gluware; Huawei; Intentionet; Juniper Networks; NetYCE; Veriflow Systems.
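To illustrate the "translation and validation" step at the heart of intent-based networking, here is a toy sketch that turns one high-level intent into candidate per-device rules. Real IBNS products model the entire network and validate continuously; the intent schema and output format here are invented.

```python
# Toy intent-to-configuration translation; illustrative only.
from dataclasses import dataclass


@dataclass
class Intent:
    app: str
    reachable_from: str   # e.g. a user group such as "branch-users"
    max_latency_ms: int


def render_config(intent: Intent, devices: list[str]) -> dict[str, list[str]]:
    """Translate one business intent into candidate device-level rules."""
    rules = [
        f"permit {intent.reachable_from} -> {intent.app}",
        f"path-constraint latency <= {intent.max_latency_ms}ms",
    ]
    return {device: rules for device in devices}


config = render_config(Intent("crm", "branch-users", 40), ["edge-1", "spine-1"])
# An IBNS would then validate this against live network state, push it,
# and continuously check that the stated intent is still being met.
```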

Technology No. 8: Next-Generation Memory

Description: Next-generation memory is a type of nonvolatile memory capable of displacing DRAM in servers. It has density and manufacturing cost close to flash memory, but is fast enough to augment DRAM, or even replace it over time.

Why EA and Technology Innovation Leaders Should Care: Next-generation memory enables a five to 10 times increase in fast, local storage capacity. This means scale-up computing systems can perform faster or handle larger analytics workloads. The nonvolatile storage will be significantly faster than any solid-state drive (SSD) but limited by the PCIe interface when used as storage. Alternatively, this type of memory can provide greater consolidation, reducing costs by shrinking the data center space required. When used as persistent memory, it has the potential to accelerate the adoption of in-memory computing architectures.

Prediction: By 2022, 20% of CPU socket servers with two or more sockets will ship with 3D XPoint NVDIMMs, up from less than 1% in 2018.

Maturity: Emerging; 1%-5% market adoption.

Gartner describes next-generation memory as an adolescent technology. Next-generation memory techniques include Hewlett Packard Enterprise's (HPE's) ion migration memristor and Intel-Micron's 3D XPoint phase change and spin-transfer torque memory. Only these three technologies have a viable opportunity between the cost, reliability and performance attributes of DRAM and flash memory. Of the three, Intel-Micron's 3D XPoint technology is the furthest along and will probably hold off the competition through 2020. Reducing the cost of keeping data in the "main memory" and simplifying high-availability architecture can accelerate the mainstream organization adoption of in-memory computing architectures.

What You Should Know: The 3D XPoint memory cell technology is shipping in less demanding Peripheral Component Interconnect Express (PCIe) and NVM Express (NVMe) solid-state storage devices, where memory management is far simpler. These devices — because of their low write latency — exhibit high transaction rates compared with flash-based SSDs. Fully exploiting these performance attributes requires changes to software and drivers.

Sample Vendors: Dell; Hewlett Packard Enterprise; Inspur; Intel; Lenovo; Supermicro.

Technology No. 9: NVMe and NVMe-oF

Description: Nonvolatile memory express (NVMe) and nonvolatile memory express over fabrics (NVMe-oF) are host controller and network protocols that are taking advantage of the parallel-access and low-latency features of solid-state storage and the PCIe bus. NVMe-oF extends access to nonvolatile memory (NVM) remote storage subsystems across a network.

Why EA and Technology Innovation Leaders Should Care: NVMe and NVMe-oF offerings can have a dramatic impact on business use cases where low-latency requirements are critical to the bottom line. Though requiring potential infrastructure enhancements, the clear benefits these technologies can provide will immediately attract high-performance computing and OLTP use cases where clients can quickly show a positive ROI. The easiest way to consume these technologies will be through acquisition of modern solid-state arrays (SSAs) that support these.

Prediction: By 2022, over 20% of new SSA shipments will utilize NVMe-oF to optimize performance, latency and effective bandwidth, up from 1% today.

Maturity: Emerging; 1%-5% market adoption.

NVMe is a fast-growing storage protocol that is being used internally within SSAs and servers. However, NVMe-oF, which requires a storage network, is still emerging and developing at different rates, depending on the network encapsulation method. There are many NVMe-oF offerings that use fifth-generation and/or sixth-generation Fibre Channel (FC) NVMe available, but adoption of NVMe-oF within 25/40/50/100 Gigabit Ethernet is slower. In November 2018, the NVMe standards body ratified NVMe/TCP as a new transport mechanism. In the future, it's likely that TCP/IP will evolve to be an important data center transport for NVMe.

What You Should Know: Buyers should clearly identify workloads where the scalability and performance of NVMe-based SSAs and NVMe-oF justify the premium cost of an end-to-end NVMe-deployed SSA, such as AI/ML or transaction processing. Next, identify appropriate potential array, network interface card (NIC)/host bus adapter (HBA) and network fabric suppliers to verify that interoperability testing has been performed and that reference customers are available. During the next 12 months, most SSA vendors will offer SSAs with internal NVMe storage, followed by support of NVMe-oF connectivity to the compute hosts.

Sample Vendors: Dell EMC; Excelero; IBM; Kaminario; NetApp; Pavilion Data Systems; Pure Storage.

Technology No. 10: Serverless Computing

Description: Serverless computing is a model of IT service delivery. It uses the underlying enabling resources as an opaque, almost unlimited, shared pool that is continuously available without advance provisioning. The price of the pool is included in the cost of the consumed IT service. Serverless computing automatically provisions and operates the runtime environment.

Why EA and Technology Innovation Leaders Should Care: Serverless computing enables organizations to build applications quickly and deploy them at a large scale. It's suitable when a quick response and dynamic scalability are vital. It can enable organizations to exploit new application architectures, such as microservices patterns, which can bring competitive differentiation. Serverless computing can be economical for variable workloads because it provisions and consumes infrastructure resources only when necessary. Serverless computing is transformational in terms of flexibility and reduced operating costs.

Prediction: More than 30% of global enterprises will have deployed serverless computing technologies by 2022, which is an increase from fewer than 5% today.

Maturity: Emerging; 1%-5% market adoption.

Serverless delivery of IT services has gained attention since Amazon popularized its Amazon Web Services (AWS) Lambda function platform as a service (fPaaS). However, the significance of serverless computing, as demonstrated by the leading vendors (including Amazon, Google and Microsoft), extends beyond functions. Many PaaS capabilities will be delivered with serverless characteristics, and some are already (such as Amazon Athena). Serverless fPaaS capabilities are also starting to converge on Kubernetes, with the Knative project having potential to be the underlying abstraction to enable it. However, on-premises implementations are uncommon because of data integration and scalability challenges.

What You Should Know: Serverless computing represents a broad set of technologies, but the most transformative and widely used flavor of serverless computing is fPaaS. With fPaaS, fine-grained units of custom application logic are packaged as functions, which are executed when triggered by events. Serverless computing demands substantial transformation management. Instead of managing physical infrastructures, you'll have to:

■ Take an application-centric view and understand the interdependencies between application components and whether their design enhances scalability, reliability, security and performance.

■ Factor in API gateway, network egress and other costs. Exceptions such as workloads with heavy invocations can make API gateway costs high.

■ Revise data classification policies and controls because objects in a content store can now also represent code, as well as data.

■ Rethink IT operations from infrastructure management to application governance to ensure you can secure, monitor and debug, and meet application SLAs.

Sample Vendors: Amazon; Cloudflare; Google; IBM; Iguazio; Microsoft Azure; Oracle; Pivotal; Red Hat.
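Because fPaaS is the flavor of serverless the research highlights, here is a minimal AWS Lambda-style function handler. The (event, context) handler signature is standard for Lambda's Python runtime; the event shape and response format below are illustrative.

```python
# Minimal AWS Lambda-style handler: one fine-grained unit of logic,
# executed when the platform delivers a triggering event. The platform
# provisions and scales the runtime; no servers are managed here.
import json


def handler(event, context):
    """Invoked per event; 'event' carries the trigger's payload."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```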

Evidence

Gartner analysts combed through hundreds of Hype Cycle profiles to select early-stage technologies that would offer significant long-term benefits to customers over the next five-year period. Detailed discussions on the list were also held within Gartner research communities with participation from a broad set of analysts.

Market adoption is expressed as a percentage of all companies that have adopted that technology.

Explanation of Maturity Levels:

Table 1. Maturity Levels

Emerging
Status: ■ Commercialization by vendors ■ Pilots and deployments by industry leaders
Products/Vendors: ■ First generation ■ High price ■ Much customization

Adolescent
Status: ■ Maturing technology capabilities and process understanding ■ Uptake beyond early adopters
Products/Vendors: ■ Second generation ■ Less customization

Early mainstream
Status: ■ Proven technology ■ Vendors, technology and adoption rapidly evolving
Products/Vendors: ■ Third generation ■ More out-of-box methodologies

Mature mainstream
Status: ■ Robust technology ■ Not much evolution in vendors or technology
Products/Vendors: ■ Several dominant vendors

Source: Gartner (October 2019)

Source: Gartner Research Note G00430091, Arun Chandrasekaran, Andrew Lerner, 29 October 2019

About TidalScale

TidalScale is the industry leader in software-defined server technology. TidalScale's software solution aggregates the memory, cores, and I/O of multiple physical servers to create a virtual, software-defined server. These virtual servers enable in-memory performance for database and analytics workloads, eliminating the need for costly scale-up solutions or complex scale-out infrastructure. They are built upon standard commodity hardware, require no changes to applications and operating systems, and can be deployed within minutes on premises or in the cloud. TidalScale has been optimized for Oracle Database and SAP HANA and was listed among the 20 most promising Oracle and SAP solution providers in 2019 by CIO Review. The company has earned numerous awards, including the Red Herring Top 100 Global Award, and was named a Gartner Cool Vendor and an IDC Innovator in 2017. TidalScale is privately held with backing from , HWVP, Sapphire Ventures, , SK Hynix, Forte Ventures, Citrix and Samsung. Learn more at www.tidalscale.com.

Photography Credits
Title cover: Photo by Shifaaz Shamoon on Unsplash
Page 3: Photo by Boris Bobrov on Unsplash
Page 10: Photo by Krzysztof Kowalik on Unsplash
Page 23: Photo by Guilherme Stecanella on Unsplash

How Software-Defined Servers Will Drive the Future of Infrastructure and Operations is published by TidalScale. Editorial content supplied by TidalScale is independent of Gartner analysis. All Gartner research is used with Gartner's permission, and was originally published as part of Gartner's syndicated research service available to all entitled Gartner clients. © 2020 Gartner, Inc. and/or its affiliates. All rights reserved. The use of Gartner research in this publication does not indicate Gartner's endorsement of TidalScale's products and/or strategies. Reproduction or distribution of this publication in any form without Gartner's prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity" on its website.

Contact us
For more information contact us at: www.tidalscale.com