vTurbo: Accelerating Virtual Machine I/O Processing Using Designated Turbo-Sliced Core


vTurbo: Accelerating Virtual Machine I/O Processing Using Designated Turbo-Sliced Core

Cong Xu, Sahan Gamage, Hui Lu, Ramana Kompella, Dongyan Xu
Department of Computer Science, Purdue University

Abstract

In a virtual machine (VM) consolidation environment, it has been observed that CPU sharing among multiple VMs will lead to I/O processing latency because of the CPU access latency experienced by each VM. In this paper, we present vTurbo, a system that accelerates I/O processing for VMs by offloading I/O processing to a designated core. More specifically, the designated core – called turbo core – runs with a much smaller time slice (e.g., 0.1ms) than the cores shared by production VMs. Most of the I/O IRQs for the production VMs will be delegated to the turbo core for more timely processing, hence accelerating the I/O processing for the production VMs. Our experiments show that vTurbo significantly improves the VMs' network and disk I/O throughput, which consequently translates into application-level performance improvement.

1 Introduction

Cloud computing is arguably one of the most transformative trends in recent times. Many enterprises and businesses are increasingly migrating their applications to public cloud offerings such as Amazon EC2 [1] and Microsoft Azure [8]. By purchasing or leasing cloud servers with a pay-as-you-go charging model, enterprises benefit from significant cost savings in running their applications, both in terms of capital expenditure (e.g., reduced server costs) and operational expenditure (e.g., management staff). On the other hand, cloud providers generate revenue by achieving good performance for their "tenants" while maintaining reasonable cost of operation.

One of the key factors influencing the cost of cloud platforms is server consolidation—the ability to host multiple virtual machines (VMs) in the same physical server. If cloud providers can increase the level of server consolidation, i.e., pack more VMs into each physical machine, they can generate more revenue from their infrastructure investment and possibly pass cost savings on to their customers. Two main resources typically dictate the level of server consolidation: memory and CPU. Memory is strictly partitioned across VMs, although there are techniques (e.g., memory ballooning [29]) for dynamically adjusting the amount of memory available to each VM. CPU can also be strictly partitioned across VMs, given the trend of ever increasing numbers of cores per physical host. However, given that each core is quite powerful, another major scaling factor comes from allocating multiple VMs per core. While the ever increasing core count in modern systems may suggest the possibility of a dedicated core per VM, that is not likely to happen any time soon, as evidenced in current cloud computing environments such as Amazon EC2, where a 3GHz CPU may be shared by three small instances.

In practice, CPU sharing among VMs can be quite complicated. Each VM is typically assigned one or more virtual CPUs (vCPUs), which are scheduled by the hypervisor onto physical CPUs (pCPUs) ensuring proportional fairness. The number of vCPUs is usually larger than the number of pCPUs, which means that, even if a vCPU is ready for execution, it may not find a free pCPU immediately and thus needs to wait for its turn, causing CPU access latency. If a VM is running I/O-intensive applications, this latency can have a significant negative impact on application performance. While several efforts [30, 12, 17, 28, 19, 25] have made this observation in the past, and have in fact provided solutions to improve VMs' I/O performance, the improvements are still moderate compared to the available I/O capacity because they do not explicitly focus on reducing the most important and common component of the I/O processing workflow—namely, IRQ processing latency.

To explain this more clearly, let us look at I/O processing in modern OSes today. There are typically two basic stages involved: (1) device interrupts are processed synchronously in an IRQ context in the kernel, and the data (e.g., network data, disk reads) is buffered in kernel buffers; (2) the application eventually copies the data from the kernel buffer to its user-level buffer in an asynchronous fashion whenever it gets scheduled by the process scheduler. If the OS were running directly on a physical machine, or if there were a dedicated CPU for a given VM, the IRQ processing component would get scheduled almost instantaneously by preempting the currently running process. However, for a VM that shares a CPU with other VMs, IRQ processing may be significantly delayed because the VM may not be running when the I/O event (e.g., network packet arrival) occurs.

IRQ processing delay can affect both network and disk I/O performance significantly. For example, in the case of TCP, incoming packets are staged in the shared memory between the hypervisor (or privileged domain) and the guest OS, which delays ACK generation and can result in significant throughput degradation. For UDP flows, there is no such time-sensitive ACK generation that governs the throughput. However, since there is limited buffer space in the shared memory (ring buffer) between the guest OS and the hypervisor, it may fill up quickly, leading to packet loss. IRQ processing delay can also impact disk write performance. Applications often just write to memory buffers and return; the kernel threads handling disk I/O flush the data in memory to the disk in the background. As soon as one block write is done, the IRQ handler schedules the next write, and so on. If IRQ processing is delayed, write throughput will be significantly reduced.
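The UDP failure mode described above—a fixed-size ring buffer between hypervisor and guest that overflows while the guest's vCPU is off the CPU—is easy to see in a toy discrete-time simulation. The sketch below is ours, not the paper's; every parameter (buffer size, arrival and drain rates, slice length, VM count) is invented purely for illustration.

```python
# Toy discrete-time simulation of the ring-buffer overflow described above.
# All numbers are illustrative, not measurements from the paper. Packets
# arrive steadily, but the guest's IRQ handler can drain the shared ring
# only while its vCPU holds the pCPU; during other VMs' slices the ring
# fills up and packets are dropped.

RING_CAPACITY = 256        # slots in the hypervisor<->guest ring buffer
ARRIVAL_PER_MS = 10        # packets arriving per millisecond
DRAIN_PER_MS = 40          # packets the IRQ handler can drain per ms on CPU
SLICE_MS = 30              # each VM's time slice
NUM_VMS = 4                # VMs sharing the pCPU round-robin

def simulate(total_ms: int = 1200) -> tuple[int, int]:
    ring, dropped, arrived = 0, 0, 0
    for t in range(total_ms):
        arrived += ARRIVAL_PER_MS
        ring += ARRIVAL_PER_MS
        if ring > RING_CAPACITY:             # ring overflow => packet loss
            dropped += ring - RING_CAPACITY
            ring = RING_CAPACITY
        if (t // SLICE_MS) % NUM_VMS == 0:   # our VM currently holds the pCPU
            ring = max(0, ring - DRAIN_PER_MS)
    return arrived, dropped

if __name__ == "__main__":
    arrived, dropped = simulate()
    print(f"{dropped}/{arrived} packets dropped "
          f"({100.0 * dropped / arrived:.1f}%)")
```

With these made-up parameters the guest is off the CPU for 90ms out of every 120ms, so roughly half the arriving packets overflow the ring—even though the IRQ handler has ample drain capacity whenever it does run.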
Unfortunately, none of the existing efforts explicitly tackle this problem. Instead, they propose indirect approaches that moderately shorten IRQ processing latency, hence achieving only modest improvement. Further, because of the specific design choices made in those approaches, the IRQ processing latency cannot be fundamentally eliminated (e.g., made negligible) by any of the designs, meaning that they cannot achieve close-to-optimal performance. For instance, the vSlicer approach [30] schedules I/O-intensive VMs more frequently using smaller micro time slices, which implicitly lowers the IRQ processing latency, but not significantly. It also does not work under all scenarios. For example, if two I/O latency-sensitive VMs and two non-latency-sensitive VMs share the same pCPU, the worst-case IRQ processing latency will be about 30ms, which is still non-trivial, even though it is better than without vSlicer (which would be 90ms). Similarly, another approach called vBalance [10] proposes routing the IRQ to the vCPU that is scheduled for the corresponding VM. This may work well for SMP VMs that have more than one vCPU, but it will not improve performance for single-vCPU VMs. Even in the SMP case, it improves the chances that at least one vCPU is scheduled; but fundamentally it does not eliminate IRQ processing latency because each vCPU contends for the physical CPU independently.

To solve this problem more fundamentally, we aim to make the IRQ processing latency for a CPU-sharing VM almost similar to the scenario where the VM is given a dedicated core. As outlined in the abstract, vTurbo delegates a VM's I/O IRQ processing to a designated turbo core that runs with a much smaller time slice; the other regular kernel threads are scheduled on a regular core with regular slicing, just as exists today. Since the IRQ handlers are executed by the turbo core, they are handled almost synchronously, with orders-of-magnitude smaller latency. For example, assuming 5 VMs and a 0.1ms quantum for the turbo core, an IRQ request is processed within 0.4ms, compared to 120ms (assuming a 30ms time slice for regular cores).
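The worst-case numbers quoted above (0.4ms on the turbo core vs. 120ms on a regular core, and 90ms in the vSlicer comparison) all follow from the same back-of-the-envelope round-robin model: a just-arrived IRQ may have to wait for every other VM sharing the core to exhaust its full time slice. A minimal sketch of that arithmetic (the function name and the strict round-robin assumption are ours, not the paper's):

```python
# Worst-case IRQ processing latency under strict round-robin CPU sharing:
# the IRQ arrives just as the target VM yields the core, so it waits for
# every other VM to burn a full time slice. Simplified model; the quantum
# values below follow the paper's examples.

def worst_case_irq_latency_ms(num_vms: int, time_slice_ms: float) -> float:
    return (num_vms - 1) * time_slice_ms

if __name__ == "__main__":
    print(worst_case_irq_latency_ms(4, 30.0))  # 4 VMs, 30ms slices   -> 90.0
    print(worst_case_irq_latency_ms(5, 30.0))  # 5 VMs, 30ms slices   -> 120.0
    print(worst_case_irq_latency_ms(5, 0.1))   # 5 VMs, 0.1ms quantum -> 0.4
```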
The turbo core is accessible to all VMs in the system. If a VM runs only CPU-bound processes, it may choose not to use this core, since performance there is not likely to be good due to frequent context switches. Even if a VM chooses to schedule a CPU-bound process on the turbo core, it has virtually no impact on other VMs' turbo-core access latency, thus providing good isolation between VMs.

We ensure fair sharing among VMs with differential requirements between regular/turbo cores because, otherwise, it would motivate VMs to push more processing to the turbo core. Thus, for example, if there are two VMs—VM1 requesting 100% of the regular core, and VM2 requesting 50% regular and 50% turbo cores—the regular core will be split 75-25%, while VM2 obtains the full 50% of the turbo core, thus equalizing the total CPU usage for both VMs (a worked version of this example appears after the contribution list below). We also note that, while we mention one turbo core in the system, our design seamlessly allows multiple turbo cores, driven by the I/O processing load of all VMs in the host. This makes our design extensible to higher-bandwidth networks (10Gbps and beyond) and higher disk I/O bandwidths that require significant IRQ processing beyond what a single core can provide.

To summarize, the main contributions of this paper are as follows:

(1) We propose a new class of high-frequency scheduling CPU core named turbo core and a new class of vCPU called turbo vCPU. The turbo vCPU, pinned on turbo core(s), is used for timely processing of I/O IRQs, thus accelerating I/O processing for VMs in the same physical host.

(2) We develop a simple but effective VM scheduling policy that gives general CPU cores and turbo cores time slices of different magnitudes.
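The 75-25% split in the fair-sharing example above can be reproduced by handing out the host's total CPU capacity in equal per-VM shares and debiting each VM's turbo-core consumption from its share. The solver below is our own illustration of that single example under this equal-share assumption; it is not the actual vTurbo scheduling policy, which the paper develops later.

```python
# Illustrative solver for the fair-sharing example: split 100% of a regular
# core among VMs so that each VM's total CPU (regular + turbo) comes out
# equal. Assumes each VM's turbo usage stays below its overall fair share;
# this is a sketch, not vTurbo's real scheduler.

def split_regular_core(turbo_usage: dict[str, float]) -> dict[str, float]:
    """turbo_usage maps VM name -> percent of the turbo core it consumes."""
    n = len(turbo_usage)
    total = 100.0 + sum(turbo_usage.values())  # all CPU capacity to hand out
    fair_total = total / n                      # equal total per VM
    return {vm: fair_total - t for vm, t in turbo_usage.items()}

if __name__ == "__main__":
    # VM1 uses no turbo core; VM2 uses 50% of it.
    print(split_regular_core({"VM1": 0.0, "VM2": 50.0}))
    # -> {'VM1': 75.0, 'VM2': 25.0}; VM2's total is 25 + 50 = 75, equal to VM1's.
```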
Recommended publications
  • Hyper-V Performance Comparison: Microsoft Windows Server 2008 SP2 and R2 with Intel Xeon Processor X5570
    TEST REPORT, JULY 2009. Hyper-V performance comparison: Microsoft Windows Server 2008 SP2 and R2 with Intel Xeon processor X5570- and E5450-based servers.

    Executive summary: Microsoft® Corporation (Microsoft) and Intel® Corporation (Intel) commissioned Principled Technologies® (PT) to measure Hyper-V R2 virtualization performance improvements using vConsolidate on the following Microsoft operating systems and Intel processors:
    • Microsoft Windows Server® 2008 Enterprise Edition SP2 with Hyper-V on Intel® Xeon® processor E5450
    • Microsoft Windows Server 2008 Enterprise Edition SP2 with Hyper-V on Intel Xeon processor X5570
    • Microsoft Windows Server 2008 Enterprise Edition R2 with Hyper-V on Intel Xeon processor X5570

    Figure 1 shows the vConsolidate results for all three configurations with the optimum number of vConsolidate work units, consolidation stack units (CSUs). The Intel Xeon processor X5570-based server with the optimum number of CSUs (eight) delivered 37.1 percent more vConsolidate performance when running Microsoft Windows Server 2008 Enterprise Edition R2 than when running Microsoft Windows Server 2008 Enterprise Edition SP2.

    KEY FINDINGS:
    • The Intel Xeon processor X5570-based server with the optimum number of CSUs delivered 37.1 percent more vConsolidate performance when running Microsoft Windows Server 2008 Enterprise Edition R2 than when running Microsoft Windows Server 2008 Enterprise Edition SP2. (See Figure 1.)
    • Microsoft Windows Server 2008 Enterprise Edition SP2 running on the Intel Xeon processor X5570-based server with the optimum number of CSUs delivered 98.7 percent more vConsolidate performance than it did running on the Intel Xeon processor E5450-based server with the optimum number of CSUs. (See Figure 1.)
  • Intel x86 Considered Harmful
    Intel x86 considered harmful. Joanna Rutkowska, October 2015. Version: 1.0

    Contents:
    1 Introduction (Trusted, Trustworthy, Secure?)
    2 The BIOS and boot security (BIOS as the root of trust. For everything; Bad SMM vs. Tails; How can the BIOS become malicious?; Write-Protecting the flash chip; Measuring the firmware: TPM and Static Root of Trust; A forgotten element: an immutable CRTM; Intel Boot Guard; Problems maintaining long chains of trust; UEFI Secure Boot?; Intel TXT to the rescue!; The broken promise of Intel TXT; Rescuing TXT: SMM sandboxing with STM; The broken promise of an STM?; Intel SGX: a next generation TXT?; Summary of x86 boot (in)security)
    3 The peripherals (Networking devices & subsystem as attack vectors; Networking devices as leaking apparatus; Sandboxing the networking devices; Keeping networking devices outside of the TCB; Preventing networking from leaking out data; The USB as an attack vector; The graphics subsystem; The disk controller and storage subsystem; The audio
  • Performance Best Practices for Vmware Workstation Vmware Workstation 7.0
    Performance Best Practices for VMware Workstation. VMware Workstation 7.0.

    This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, see http://www.vmware.com/support/pubs. EN-000294-00.

    You can find the most up-to-date technical documentation on the VMware Web site at http://www.vmware.com/support/. The VMware Web site also provides the latest product updates. If you have comments about this documentation, submit your feedback to: [email protected]

    Copyright © 2007–2009 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMware, Inc., 3401 Hillview Ave., Palo Alto, CA 94304. www.vmware.com

    Contents: About This Book (Terminology; Intended Audience; Document Feedback; Technical Support and Education Resources; Online and Telephone Support; Support Offerings; VMware Professional Services); 1 Hardware for VMware Workstation (CPUs for VMware Workstation; Hyperthreading; Hardware-Assisted Virtualization; Hardware-Assisted CPU Virtualization (Intel VT-x and AMD AMD-V)
  • TVS-ECx80+ Edge Cloud Turbo vNAS Series: The Best Storage Solution for Edge Cloud. Use Your Turbo vNAS as a PC
    TVS-ECx80+ Edge Cloud Turbo vNAS Series — The Best Storage Solution for Edge Cloud. Use your Turbo vNAS as a PC.

    Three keys to super-boost system performance (flagship model TVS-EC1080+): Intel® Quad-Core Xeon® E3-1245 v3 3.4GHz and 32GB RAM; built-in dual-port 10GbE and 256GB mSATA modules (10GbE network adapter, DDR3 memory up to 32GB, mSATA module); 4K/2K support.
    • Supports Intel® Quad-Core Xeon® E3-1245 v3 3.4GHz / Intel® Core™ i3 Dual-Core processors integrated with Intel HD Graphics P4600
    • Inbuilt two 10GbE ports reaching over 2000 MB/s throughput and 223,000 IOPS
    • Scale-up storage to 400 TB raw capacity
    • Powered by QTS 4.1.2 with new HD Station 2.0 for 4K/2K video transcoding and playback
    • Q'center centralized management system for managing multiple QNAP Turbo vNAS units
    • Use the NAS as a PC with exclusive QvPC Technology
    • Designed for file management, backup and disaster recovery
    • NAS and iSCSI-SAN unified storage solution for server virtualization

    Models in the TVS-ECx80+ Turbo vNAS Series: TVS-EC1080+-E3, TVS-EC1080-E3, TVS-EC880-E3, TVS-EC1080-i3.

    Hybrid Enterprise Cloud Storage Architecture: With the advent of cloud computing, it is inevitable for enterprises to increase their investments in cloud services. However, enterprises are reducing IT expenses to maximize the return on investment (ROI). In addition to controlling rising costs, IT administrators must weigh many considerations when facilitating a cloud environment. They need to incorporate new technology into existing systems without impacting the stability and performance of the system and the user experience.
  • Oracle Databases on VMware Best Practices Guide: Provides Best Practice Guidelines for Deploying Oracle Databases on VMware vSphere®
    VMware Hybrid Cloud Best Practices Guide for Oracle Workloads, Version 1.0, May 2016.

    © 2016 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304. www.vmware.com

    Contents: 1. Introduction; 2. vSphere; 3. VMware Support for Oracle Databases on vSphere (3.1 VMware Oracle Support Policy; 3.2 VMware Oracle Support Process); 4. Server Guidelines (4.1 General Guidelines; 4.2 Hardware Assisted Virtualization)
  • SAP Solutions on VMware vSphere: Guidelines Summary and Best Practices
    SAP on VMware Best Practices, Version 1.1, December 2015.

    © 2015 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at http://www.vmware.com/download/patents.html. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. VMware, Inc., 3401 Hillview Ave, Palo Alto, CA 94304. www.vmware.com

    Contents: 1. Overview; 2. Introduction (2.1 Support; 2.2 SAP; 2.3 vCloud Suite; 2.4 Deploying SAP with vCloud Suite); 3. Virtualization Overview
  • Configuring a Failover Cluster on a Dell PowerEdge VRTX: A Principled Technologies Deployment Guide
    A Principled Technologies deployment guide commissioned by Dell Inc.

    Table of Contents: Introduction; About the components (About the Dell PowerEdge VRTX; About the Dell PowerEdge M620 server nodes; About the Intel Xeon processor E5 family; About Microsoft Windows Server 2012 Hyper-V); We show you how — Configuring a failover cluster on the Dell PowerEdge VRTX (Preparing the Chassis Management Controller; Networking overview; Setting up the Live Migration network in the Dell CMC; Installing and configuring Windows Server 2012; Creating storage access and installing Hyper-V; Creating virtual switches for the public/VM and Live Migration networks; Setting up the failover cluster
  • Tintri VMstore™ for Microsoft® Hyper-V®
    SOLUTION BRIEF: Tintri VMstore™ for Microsoft® Hyper-V®. VM-Aware Intelligent Infrastructure Optimized to Power Hyper-V.

    Tintri VMstore is designed from the ground up for virtualized environments and the cloud. Global enterprises have deployed hundreds of thousands of VMs on Tintri storage systems, supporting business critical applications such as Microsoft SQL Server, Exchange, SharePoint, SAP, VDI, and Active Directory. VMstore is optimized for superior performance and reliability in Hyper-V environments. With native Microsoft Server Message Block (SMB) 3.0 and integration with Microsoft System Center Virtual Machine Manager (SCVMM), VMstore enables key capabilities for enterprise workloads such as Transparent Failover and High Availability (HA). Consolidate your Hyper-V applications onto scalable, performant, and easy-to-use intelligent infrastructure. When the data that drives your Hyper-V applications resides on VMstore, your operations are dramatically simplified. Go from pilot to production with a few clicks of a button. Enjoy a simplified administrative experience so you can focus on your business. Experience Different!

    Manageability:
    • Visibility — Root-cause latency across storage, host, network and hypervisor
    • Actionable — Real-time, per-VM analytics and historical data to troubleshoot and resolve issues
    • Automation — Set and forget, end-to-end via PowerShell and REST API

    Performance:
    • Per-VM isolation, VM-level QoS with max and min IOPS guarantee
    • <1ms latency — With all-flash you get sub-millisecond latencies and predictable performance at high loads
    • Utilization — No more over-provisioning. Fully utilize capacity and curb the VM sprawl

    Value:
    • Efficiency — 6x smaller storage footprint and 60x annual management reduction

    Integration with ODX: To simplify deployment, VMstore supports auto-discovery of hosts. VMstore also offers VM-level visibility and control, which dramatically enhances user experience when virtualizing mission-critical Microsoft enterprise applications and helps accelerate private cloud deployments.
  • VMware Workstation
    Troubleshooting Notes: VMware™ Workstation, Version 3.2.

    Please note that you will always find the most up-to-date technical documentation on our Web site at http://www.vmware.com/support/. The VMware Web site also provides the latest product updates. VMware, Inc., 3145 Porter Drive, Bldg. F, Palo Alto, CA 94304. www.vmware.com

    Copyright © 1998–2002 VMware, Inc. All rights reserved. U.S. Patent No. 6,397,242 and patents pending. VMware, the VMware boxes logo, GSX Server and ESX Server are trademarks of VMware, Inc. Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation. Linux is a registered trademark of Linus Torvalds. All other marks and names mentioned herein may be trademarks of their respective companies. Revision: 20020905.

    Table of Contents: Troubleshooting VMware Workstation (How to Troubleshoot VMware Workstation; Three Layers of Troubleshooting; Installing VMware Workstation and a Guest Operating System; First Steps in Diagnosing Problems with a Virtual Machine; Steps to Diagnosing Problems in the Next Layer — VMware Workstation; Steps to Diagnosing Problems in the Bottom Layer — the Host Machine; Where to Find More Information; Requesting Support); Installation and Configuration Issues (Troubleshooting Installation and Configuration; Unpacking
  • Performance Evaluation Using Docker and Kubernetes Disha Yaduraiah
    FRIEDRICH-ALEXANDER-UNIVERSITÄT ERLANGEN-NÜRNBERG, TECHNISCHE FAKULTÄT, DEPARTMENT INFORMATIK, Lehrstuhl für Informatik 10 (Systemsimulation).

    Performance Evaluation using Docker and Kubernetes. Master's thesis (Masterarbeit) in Computational Engineering, submitted by Disha Yaduraiah, prepared at the Lehrstuhl für Informatik 10, Prof. Dr. Ulrich Rüde, Chair of System Simulation.

    Dr. Andrew John Hewett, Senior Software Architect, Siemens Healthcare GmbH Services, Erlangen, Germany; Dr.-Ing. habil. Harald Köstler, ERASMUS Koordinator, Department of Computer Science, Erlangen, Germany. Advisor (Aufgabensteller): Dr.-Ing. habil. Harald Köstler. Supervisor (Betreuer): Dipl.-Inform. Christian Godenschwager. Working period (Bearbeitungszeitraum): 01 Dec 2017 – 01 June 2018.

    Declaration (Erklärung): I declare that the work is entirely my own and was produced with no assistance from third parties. I certify that the work has not been submitted in the same or any similar form for assessment to any other examining body and all references, direct and indirect, are indicated as such and have been cited accordingly. Erlangen, 30 May 2018, Disha Yaduraiah.

    Abstract: Containerization technology has now made it possible to run multiple applications on the same servers with easy packaging and shipping ability. Unlike the old techniques of hardware virtualization, containers rest on top of a single Linux instance, leaving a light-weight image containing applications.
  • Building High Performance Storage for Hyper-V Cluster on Scale-Out File Servers Using Violin Windows Flash Arrays
    Building High Performance Storage for Hyper-V Cluster on Scale-Out File Servers using Violin Windows Flash Arrays. Danyu Zhu, Liang Yang, Dan Lovinger. A Microsoft White Paper, published October 2014.

    This document is provided "as-is." Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes. © 2014 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Server, and Hyper-V are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Violin Memory is a registered trademark of Violin Memory, Inc. in the United States. The names of other actual companies and products mentioned herein may be the trademarks of their respective owners.

    Summary: This white paper demonstrates the capabilities and performance of the Violin Windows Flash Array (WFA), a next-generation All-Flash Array storage platform. With the joint efforts of Microsoft and Violin Memory, WFA provides built-in high performance, availability and scalability through the tight integration of Violin's All Flash Array and a Microsoft Windows Server 2012 R2 Scale-Out File Server Cluster. The following results highlight the scalability, throughput, bandwidth, and latency that can be achieved from the platform presented in this report, using two Violin WFA-64 arrays in a Scale-Out File Server Cluster in a virtualized environment — Throughput: linear scale to over 2 million random read IOPS or 1.6 million random write IOPS.
  • 10th Gen Intel® Core™ Processor Families Datasheet, Vol. 1
    10th Generation Intel® Core™ Processor Families Datasheet, Volume 1 of 2. Supporting 10th Generation Intel® Core™ Processor Families, Intel® Pentium® Processors, Intel® Celeron® Processors for U/Y Platforms, formerly known as Ice Lake. July 2020, Revision 005, Document Number: 341077-005.

    Legal Lines and Disclaimers: You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Intel technologies may require enabled hardware, specific software, or services activation. Check with your system manufacturer or retailer. The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps. Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or visiting www.intel.com/design/literature.htm.