BOOTSTORM AND VM DISK ACCESS: NVMe vs. SATA Technical Report


Antonio Cisternino, Maurizio Davini
The University of Pisa, IT Centre
TR-03-16

Abstract

As solid state drive (SSD) technology matures, the performance it delivers is constrained by old disk controller technologies. The NVMe protocol on the PCIe interface allows you to bypass the controller by attaching a disk drive directly to the bus. In this report we present the results of comparing NVMe and SATA drive performance in the context of virtualization. In the experiment we observed how different kinds of drive react to a VM bootstorm and to intense simultaneous drive access from all the running VMs. Results show that an NVMe interface offers significant benefits over SATA: bootstorm time is on average 75% slower when using SATA; moreover, when all the VMs perform disk access using diskspd simultaneously, NVMe outperforms SATA by a factor greater than four, offering on average more than 250MB/s bandwidth to each VM.

Executive Summary

NVMe SSD drives provide more performance in terms of bandwidth, and lower latency, than SAS and SATA based counterparts (see [5]). However, it is always difficult to understand whether this increased performance will correspond to a real benefit in a real-world application. In this paper we have focused our attention on two important aspects of virtualization: bootstorm and disk performance from a VM standpoint. We present the results of our experiments to test the performance of virtual machines running on Microsoft Hyper-V with virtual drives stored on PCIe® and SATA SSD drives (20nm MLC). The solid state technology used in both drives is the same, so the tests allow an appreciation of the effective difference that the interface makes in terms of performance. The performed tests aim at understanding the potential impact of adopting controller-less disks on a virtualization platform and consist of:

• checking the impact of the PCIe interface when a bootstorm¹ occurs;
• verifying the throughput that can be achieved concurrently by using the open source diskspd tool [7] on all the virtual machines running on a single hypervisor.

The results have shown that employing PCIe drives with NVMe offers significant improvements in the virtualization scenarios examined. In particular, we observed a speed-up of 80% in the bootstorm of 50 virtual machines, but the improvement already shows up when booting just 5 VMs simultaneously. Once the VMs finished the boot they started generating sequential and random access on the virtual disk drive using the diskspd tool [7]; we measured the average bandwidth provided by a PCIe drive and compared it with a SATA drive. Results show that when 50 VMs access the virtual drives, stored on the same physical SSD drive, random access on a PCIe interface delivers on average 273MB/s to every VM, whereas a SATA interface is capable of providing only 90MB/s in the same condition. Also in this case we noticed significant benefits with a smaller number of virtual machines in execution.

¹ i.e. when, in certain conditions, the system has to start a large number of virtual machines simultaneously.

NB: Other names and brands may be claimed as the property of others.
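The bootstorm scenario examined here can be reproduced on a Hyper-V host by powering on all test VMs back to back. The following is a minimal PowerShell sketch of such a trigger; the VM name prefix and the results folder are illustrative assumptions, not details taken from the report.

```powershell
# Hypothetical sketch: trigger a bootstorm by starting all test VMs at once on the Hyper-V host.
# The "TestVM" name prefix and C:\results (exposed to guests as a network share) are assumptions.
$vms = Get-VM -Name 'TestVM*' | Where-Object { $_.State -eq 'Off' }

Get-Date -Format o | Out-File 'C:\results\storm-start.txt'   # record when the storm begins
$vms | Start-VM                                              # power-on requests issued back to back

Write-Host ("Issued power-on for {0} VMs" -f $vms.Count)
# Boot completion is detected inside each guest by the startup script described in the
# Introduction below, which writes a timestamp file to the shared results folder.
```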
Introduction

This report is the third in a series dedicated to evaluating solid state drive (SSD) performance. In the first report [5] we compared the performance of SATA and NVMe PCIe SSD connections using HammerDB, showing that the NVMe protocol used for attaching drives directly to the PCIe bus offers significant improvements over a SATA controller in terms of bandwidth and latency under a database-like workload. In the second report [6] we tested the impact of the PCIe interface with respect to SATA when aggregating drives using different forms of software-defined RAID: Microsoft Storage Spaces [9] and Intel RSTe [8]; experiments showed the superior software RAID performance of the PCIe interface, capable of delivering up to 10GB/s when aggregating four drives using RAID0.

We are interested in exploring how changing the hardware interface used for accessing an SSD drive affects the performance of a server acting as hypervisor for a number of virtual machines. We focused our attention on two important areas for virtualization: bootstorm (i.e. simultaneous boot of many virtual machines) and virtual disk benchmarking (i.e. all VMs performing a disk benchmark simultaneously). Tests have been measured by combining the performance counters of the hypervisor host, running Windows Server 2012 R2, with the output generated by each VM running the diskspd tool [7]. We used Windows 10 Enterprise as the guest OS executed by the virtual machines.

We monitored several performance counters, including CPU, memory, power absorbed, disk throughput and disk queue length. Virtual drives have been tested for 40 seconds executing diskspd to generate sequential access with 128KB blocks for read and write, and random access with 4KB blocks for read and write.

Measuring boot time is not easy since VMs are opaque to the hypervisor. We configured guest VMs for auto-logon and to execute a PowerShell script as a startup script: a file indicating the current date and time is generated before starting the disk test procedure. We considered the boot of all VMs complete by taking the latest boot completion time as indicated by the generated files. Virtual machines stored the files containing the test results on a network share on the hypervisor, available on an internal virtual network defined on Hyper-V. We used a DHCP server to assign IP addresses to the virtual machines, and the IP of every virtual machine has been used as an identifier. Since every VM test generates six files and seven additional files contain the beginning time and the performance counters recorded for the experiments, the experiment running 50 VMs generates 307 files. We analysed the 2536 files generated by repeating the experiment for different VM numbers and drive kinds using an F# script designed to parse text files and access the binary log format generated by the perfmon tool.
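A minimal sketch of the kind of guest-side startup script described above is shown below. The report states only the duration (40 seconds) and the block sizes (128KB sequential, 4KB random); the exact diskspd flags, target file size, thread counts, queue depths and share paths are not given and are shown here purely as illustrative assumptions.

```powershell
# Hypothetical guest startup script matching the methodology described above.
# Paths, file size, thread/queue-depth settings and file names are assumptions.
$ip     = (Get-NetIPAddress -AddressFamily IPv4 |
           Where-Object { $_.IPAddress -ne '127.0.0.1' -and $_.IPAddress -notlike '169.254*' } |
           Select-Object -First 1).IPAddress
$share  = '\\hypervisor\results'        # internal network share exposed by the Hyper-V host
$target = 'C:\diskspd\testfile.dat'     # test file on the VM's virtual drive

# Record boot completion time before starting the disk test procedure.
Get-Date -Format o | Out-File "$share\boot-$ip.txt"

# Four diskspd runs of 40s each: sequential 128KB read/write and random 4KB read/write.
& diskspd.exe -c10G -d40 -b128K -t2 -o8 -Sh -L          $target > "$share\seq-read-$ip.txt"
& diskspd.exe -c10G -d40 -b128K -t2 -o8 -w100 -Sh -L    $target > "$share\seq-write-$ip.txt"
& diskspd.exe -c10G -d40 -b4K -r -t4 -o32 -Sh -L        $target > "$share\rnd-read-$ip.txt"
& diskspd.exe -c10G -d40 -b4K -r -t4 -o32 -w100 -Sh -L  $target > "$share\rnd-write-$ip.txt"
```

Here `-d` sets the test duration, `-b` the block size, `-r` selects random access, `-w100` makes the run write-only, and `-Sh` disables software and hardware caching so the drive itself is exercised; the VM's IP address is used in the file names as its identifier, as in the report.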
Bootstorm Analysis

A bootstorm happens when a hypervisor starts a number of virtual machines simultaneously. Operating system boot generates a significant amount of disk operations that may trash the performance, leading to a very slow start of the virtual machines and of the services delivered by them. A possible mitigation is a deferred power-on of less critical virtual machines.

Previous experiments have already presented the superiority of the NVMe interface over SATA, so it comes as no surprise that the boot duration is shorter when the VM virtual drives reside on the NVMe drive. However, what the experiment has clearly shown is that the benefits are significant starting from the bootstorm of just five virtual machines, where we found the boot time on NVMe taking half the time of the SATA drive. With 50 virtual machines the boot time is 251s when using NVMe and 452s when using SATA. The disk bandwidth usage in the graph explains the benefits: after five VMs the SATA drive reaches its maximum capacity, delaying the requests made by booting VMs.

[Figure: Boot duration (s) and disk bandwidth (bytes/s) versus number of VMs (1-50), NVMe vs. SATA]

During the bootstorm we monitored power absorbed, memory and CPU usage.
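As a rough illustration of how a bootstorm duration such as the 251s/452s figures above can be derived, here is a hedged PowerShell sketch that applies the rule stated in the Introduction (the storm ends at the latest per-VM boot completion time); the file names and folder layout follow the hypothetical sketches shown earlier and are not taken from the report.

```powershell
# Hypothetical analysis sketch: bootstorm duration = latest guest boot-completion
# timestamp minus the storm start time recorded on the host.
# 'C:\results' and the boot-*.txt naming follow the earlier illustrative sketches.
$stormStart = [datetime](Get-Content 'C:\results\storm-start.txt')

$latestBoot = Get-ChildItem 'C:\results\boot-*.txt' |
              ForEach-Object { [datetime](Get-Content $_.FullName) } |
              Sort-Object |
              Select-Object -Last 1

'{0:N0} s from storm start to last VM boot completion' -f ($latestBoot - $stormStart).TotalSeconds
```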