How Microsoft Azure Can Meet Enterprise Cloud Customer Demand

Contents

Introduction
What do enterprise cloud customers want?
The challenge for cloud service providers
The solution for Microsoft Azure
1. Optimizing infrastructure for improved performance
2. Transforming applications with artificial intelligence-powered innovations
3. Harnessing the potential of data from across the organization
4. Balancing security and control with the need for agility
Next steps

Introduction

Builders of cloud solutions know that the needs of enterprise customers change rapidly. Each challenge requires a new, innovative engineering response. By working closely with customers, Intel has gained valuable insight into what enterprises want from their cloud service providers (CSPs). We'd now like to share this information with you, as we feel it could be of interest to you as a Microsoft Azure cloud engineer. We've mapped the trends we've identified to the joint Microsoft Azure and Intel® portfolio to show how, together, we can meet future customer demands.

What do enterprise cloud customers want?

Economic uncertainty is the number one challenge enterprises face as we begin the new decade, yet many remain under pressure from investors to drive top-line revenue growth. As we know, organizations that can respond quickly to the market stand to gain the most financially. Meanwhile, customers now expect seamless and simple cross-channel experiences, compelling enterprises to tear down their traditional siloes in favor of a holistic approach.

Combined, these pressures mean business agility is more important than ever. When asked about their top priorities, 71 percent of enterprises said they needed greater agility to react to changing customer needs and speed to market, while 88 percent cited revenue acceleration. Meanwhile, just under half (47 percent) said cost reduction was a top priority, while nearly a third (29 percent) wanted to better manage regulatory and compliance risks and increase customer satisfaction.¹

IT transformation is central to achieving these goals, and the good news is that CIOs see the cloud as a key enabler of IT infrastructure and its modernization. Companies have, on average, around half of their workloads running on public and private cloud platforms. This figure is expected to rise to three-quarters by 2022, with roughly two-thirds of those workloads housed in shared public platforms within data centers built out by the major CSPs.²
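As a quick sanity check on those adoption figures, the arithmetic implied by the last sentence can be worked through in a few lines of Python. The percentages come from the text above; the variable names are ours:

```python
# Derived arithmetic only -- the 75% and two-thirds figures come from the
# adoption forecast quoted in the text above.
cloud_share_2022 = 0.75     # share of workloads on cloud platforms by 2022
public_fraction = 2 / 3     # of those, housed in shared public platforms

share_in_public = cloud_share_2022 * public_fraction
print(round(share_in_public, 2))  # -> 0.5
```

In other words, the forecast implies roughly half of *all* enterprise workloads sitting in CSP-operated public platforms by 2022.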

The challenge for cloud service providers

Despite the trend towards the cloud, enterprises are analyzing whether they'll be able to capture the agility they need simply by shifting applications to cloud platforms. For success, CIOs acknowledge that they must first fully reassess the infrastructure stack and the way it works. Improving performance, powering innovation with artificial intelligence (AI), harnessing cross-company data, and enhancing security are all fundamental in delivering greater business agility and better-quality customer experiences.

The solution for Microsoft Azure

To help enterprises achieve business agility, enterprise cloud customers can look to Microsoft Azure and Intel to modernize their IT infrastructure stack in four ways. Here we highlight how cloud innovations from Intel serve a crucial role in each of these processes.

1. Optimizing infrastructure for improved performance

Running Azure virtual machines (VMs) on 2nd generation Intel® Xeon® Scalable processors lets you optimize application performance for web and data services, desktop virtualization, and business applications moving to Azure. For example, Microsoft Azure's new general-purpose and memory-optimized Dv4 and Ev4 VM families, based on the 2nd generation Intel® Xeon® Platinum 8272CL processor, deliver up to approximately 20 percent CPU performance improvement compared to their predecessors, the Dv3 and Ev3 VM families.³ Meanwhile, Azure Ddsv4-series VMs powered by 2nd gen Intel Xeon Platinum 8272CL processors can complete up to 51 percent more web transactions per second than the Ds_v3-series VMs.⁴

2nd gen Intel Xeon Scalable processors feature Intel® Deep Learning Boost (Intel® DL Boost), a set of embedded instructions designed to accelerate AI deep learning use cases such as image recognition, object detection, speech recognition and language translation. Intel DL Boost extends Intel® Advanced Vector Extensions 512 (Intel® AVX-512) with a new Vector Neural Network Instruction (VNNI) that significantly increases deep learning inference performance over previous generations.⁵

Intel® Optane™ persistent memory (PMem), a feature also introduced with the 2nd gen Intel Xeon Scalable processors, improves VM density for virtual desktop infrastructure (VDI) by up to 22 percent without degrading the end-user experience.⁶ This has the potential to help CSPs like Azure improve scalability and lower the total cost of ownership (TCO) of in-demand, high-performance VDI workloads. If you have a workload that requires a large memory footprint, the first step toward accessing Intel Optane PMem is to ask your Intel account team for a demonstration. Bring your engineering colleagues along to the demo, and the Intel team can workshop how best to provide you with it.

2. Transforming applications with artificial intelligence-powered innovations

As enterprises get better at building AI algorithms, and data models become highly accurate and highly performant, developer workflows become the bottleneck constraining productivity. This doesn't need to be the case. Intel® FPGAs, the Intel® Distribution of OpenVINO™ toolkit and ONNX Runtime bring the efficient deployment of AI at the edge via Azure Stack Edge into reach for more enterprises.

Intel FPGAs can be reconfigured for different types of machine learning models. This flexibility makes it easy for enterprises to accelerate applications based on the most optimal numerical precision and memory model. Because FPGAs are reconfigurable, customers can stay current with the requirements of rapidly changing AI algorithms.

The Intel Distribution of OpenVINO toolkit provides developers with improved neural network performance on a variety of Intel® processors and helps them further unlock cost-effective, real-time applications.
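VNNI fuses the 8-bit multiply with 32-bit accumulate pattern at the heart of INT8 inference into a single instruction. The sketch below illustrates that arithmetic in plain Python; the tensor values and scale factors are invented for illustration, and real toolchains (such as the OpenVINO toolkit) choose scales by calibration:

```python
# Illustrative sketch of INT8 quantized inference arithmetic -- the
# multiply-accumulate pattern Intel DL Boost (VNNI) executes in hardware.
# All values and scale factors below are invented for the example.

def quantize(values, scale):
    """Map float values to int8 with a simple symmetric scheme."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(activations, weights):
    """Accumulate int8 products into a wide integer sum, as a
    VNNI-style fused multiply-add does (int8 x int8 -> int32)."""
    return sum(a * w for a, w in zip(activations, weights))

act_f = [0.5, -1.0, 2.0, 0.25]   # hypothetical activations
wgt_f = [1.0, 0.5, -0.5, 2.0]    # hypothetical weights
act_scale, wgt_scale = 0.25, 0.5

acc = int8_dot(quantize(act_f, act_scale), quantize(wgt_f, wgt_scale))
result = acc * act_scale * wgt_scale   # dequantize back to float
print(result)  # -> -0.5
```

Because the products and sums stay in narrow integer types, four times as many operands fit in each AVX-512 register compared with float32, which is where the inference speedup comes from.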
Also, customers can speed machine learning model inferencing across Intel® hardware using ONNX Runtime, an open-source project founded by Microsoft and supported by Intel. ONNX Runtime supports machine learning optimization across the nGraph deep learning compiler, the Intel Distribution of OpenVINO toolkit and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN).

3. Harnessing the potential of data from across the organization

Three-quarters of enterprise-generated data is created and processed outside a traditional data center or cloud,⁷ and many enterprises want to speed time to results by processing this data close to its source. The ability to capture and harness data from across the organization is crucial.

Intel provides CPUs that enable Azure to meet any enterprise data processing need: from high-performance 2nd gen Intel Xeon Scalable processors with built-in AI acceleration to power the most demanding Industrial Internet of Things (IIoT) use cases, to power-efficient Intel® processors.

Using Intel® Arria® 10 FPGAs, Azure Stack Edge enables customers to accelerate data inferencing and transfer it over the Internet to Azure Cloud in real time for deep analytics or to re-train and improve user models at scale.

4. Balancing security and control with the need for agility

Finally, enterprises will be looking to Microsoft Azure to provide the best aspects of a cloud-to-edge digitally connected world (universal access, pooled resources, improved efficiency and agility) while still helping maintain control and trust.

It's common for data to be encrypted while it is in transit over the network and while it's at rest in storage, but typically data is in an unencrypted state when it is in use, in memory. This leaves it open to attack from malicious insiders, hackers and malware, and other third parties. The challenges are how to prevent the threat posed by malicious insiders with administrative privileges or direct access to hardware on which data is being processed; how to protect against hackers and malware that exploit bugs in the operating system, application or hypervisor; and how to prevent other third parties accessing data without consent.

Azure's DCsv2 confidential computing VMs run on servers powered by Intel® Xeon® E-2286G processors with Intel® Software Guard Extensions (Intel® SGX) in a completely virtualized, cloud-based environment. Ecosystem partners like Anjuna, Fortanix, and Scone are already harnessing Intel SGX to deliver more secure applications running on Microsoft Azure confidential computing DCsv2-series VMs.

Intel SGX offers hardware-based memory encryption that isolates specific application code and data in memory. Intel SGX allows user-level code to allocate private regions of memory, called enclaves, which are designed to be protected from processes running at higher privilege levels. Only SGX offers such a granular level of control and protection. Because Intel SGX hardware helps protect data and keeps it encrypted while the CPU is processing it, even the operating system and hypervisor cannot access it, nor can anyone with physical access to the server, protecting it against malicious insiders.

Intel is bringing Intel SGX to the mainstream server platform with its upcoming 3rd generation Intel® Xeon® Scalable processors.⁸ Intel SGX, along with new features that include Intel® Total Memory Encryption (Intel® TME), Intel® Platform Firmware Resilience (Intel® PFR), and new cryptographic accelerators, will strengthen the platform and help improve the overall confidentiality and integrity of data.
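To make the "encrypted at rest and in transit, but not in use" gap concrete, here is a deliberately simplified Python sketch. The XOR routine is a stand-in for a real cipher such as AES-GCM (it is not secure), and the record and key are hypothetical; the point is where plaintext must exist on a CPU without enclave support:

```python
# Conceptual sketch of the "three states of data" discussed above.
# toy_encrypt is NOT real cryptography -- it stands in for AES-style
# encryption purely to show where plaintext appears.

def toy_encrypt(data: bytes, key: int) -> bytes:
    # Stand-in for a real cipher; XOR is involutive, so applying it
    # twice with the same key recovers the original bytes.
    return bytes(b ^ key for b in data)

record = b"salary=123456"            # hypothetical sensitive record
key = 0x5A                           # hypothetical key

at_rest = toy_encrypt(record, key)   # encrypted in storage
in_transit = at_rest                 # stays encrypted on the wire

# To compute on the data, a conventional CPU must first decrypt it:
in_use = toy_encrypt(in_transit, key)  # plaintext in memory -- the gap
print(in_use.decode())               # anyone who can read memory sees this
```

This is the gap Intel SGX addresses: enclave pages remain encrypted in RAM and are decrypted only inside the CPU package, so a privileged OS, hypervisor, or operator with physical access cannot simply read the in-use plaintext out of memory.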

Next steps

Intel technologies are helping Microsoft Azure to transform enterprise customers' IT infrastructure and drive greater business agility. Get in touch with your Intel account team today to arrange a demo of Intel Optane PMem, and to discuss how Intel can support the cloud workloads you are defining and building.

Visit https://www.intel.com/content/www/us/en/now/microsoft-cloud.

Notices and disclaimers

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others. 1120/JS//PDF 345034-001EN

References

1. https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/unlocking-business-acceleration-in-a-hybrid-cloud-world#

2. https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/cloud-adoption-to-accelerate-it-modernization

3. https://azure.microsoft.com/en-us/blog/new-general-purpose-and-memoryoptimized-azure-virtual-machines-with-intel-now-available/

4. Testing by Principled Technologies as of September 2020. See https://www.principledtechnologies.com/Intel/Xeon-8272CL-Microsoft-Azure-WordPress-0920 for details.

5. 14x inference throughput improvement on Intel® Xeon® Platinum 8280 processor with Intel® Deep Learning Boost (Intel® DL Boost). Tested by Intel as of 2/20/2019: 2-socket Intel® Xeon® Platinum 8280 processor, 28 cores, HT On, Turbo On, Total Memory 384 GB (12 slots/32GB/2933 MHz), BIOS: SE5C620.86B.0D.01.0271.120720180605 (ucode: 0x200004d), Ubuntu 18.04.1 LTS, kernel 4.15.0-45-generic, SSD 1x sda INTEL SSDSC2BA80 SSD 745.2GB, nvme1n1 INTEL SSDPE2KX040T7 SSD 3.7TB, Deep Learning Framework: Intel® Optimization for Caffe* version 1.1.3 (commit hash: 7010334f159da247db3fe3a9d96a3116ca06b09a), ICC version 18.0.1, MKL-DNN version v0.17 (commit hash: 830a10059a018cd2634d94195140cf2d8790a75a), model: https://github.com/intel/caffe/blob/master/models/intel_optimized_models/int8/resnet50_int8_full_conv.prototxt, BS=64, synthetic data, 4 instances/2 sockets, Datatype: INT8. Vs. tested by Intel as of July 11, 2017: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via intel_pstate driver, 384GB DDR4-2666 ECC RAM, CentOS* release 7.3.1611 (Core), kernel 3.10.0-514.10.2.el7.x86_64, SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with environment variables KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56, CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe (https://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with the "caffe time --forward_only" command, training measured with the "caffe time" command. For "ConvNet" topologies, a synthetic dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models/resnext_50, Intel® C++ Compiler ver. 17.0.2 20170213, Intel® MKL small libraries version 2018.0.20170425. Caffe run with "numactl -l".

6. https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/IntelOptaneDC-PMEM-memory-mode-.pdf

7. https://princetondg.com/perspectives/5g-flexible-dc-infrastructure/#:~:text=Research%20agency%20Gartner%20predicts%20that,percent%20in%202018%20%5B10%5D.

8. https://newsroom.intel.com/news-releases/intel-xeon-scalable-platform-built-most-sensitive-workloads/, https://azure.microsoft.com/en-us/blog/azure-and-intel-commit-to-delivering-next-generation-confidential-computing/
