Forgoing Hypervisor Fidelity for Measuring Virtual Machine Performance
Oliver R. A. Chick
Gonville and Caius College

This dissertation is submitted for the degree of Doctor of Philosophy

ABSTRACT

For the last ten years there has been rapid growth in cloud computing, which has largely been powered by virtual machines. Understanding the performance of a virtual machine is hard: There is limited access to hardware counters, techniques for probing have a higher probe effect than on physical machines, and performance is tightly coupled with the hypervisor's scheduling decisions. Yet the need for measuring virtual machine performance is high, as virtual machines are slower than physical machines and have highly-variable performance.

Current performance-measurement techniques demand hypervisor fidelity: They execute the same instructions on a virtual machine and a physical machine. Whilst fidelity has historically been considered an advantage, as it allows the hypervisor to be transparent to virtual machines, the use case of hypervisors has changed from multiplexing access to a single mainframe across an institution to forming a building block of the cloud.

In this dissertation I reconsider the argument for hypervisor fidelity and show the advantages of software that co-operates with the hypervisor. I focus on producing software that explains the performance of virtual machines by forgoing hypervisor fidelity. To this end, I develop three methods of exposing the hypervisor interface to performance measurement tools: (i) Kamprobes is a technique for probing virtual machines that uses unprivileged instructions rather than interrupt-based techniques. I show that this brings the time required to fire a probe in a virtual machine to within twelve cycles of native performance. (ii) Shadow Kernels is a technique that uses the hypervisor's memory management unit so that an operating system kernel can have per-process specialisation, which can be used to selectively fire probes, with low overheads (835 ± 354 cycles per page) and minimal operating system changes (340 LoC). (iii) Soroban uses machine learning on the hypervisor's scheduling activity to report the virtualisation overhead in servicing requests and can distinguish between latency caused by high virtual machine load and latency caused by the hypervisor.

Understanding the performance of a machine is particularly difficult when executing in the cloud due to the combination of the hypervisor and other virtual machines. This dissertation shows that it is worthwhile forgoing hypervisor fidelity to improve the visibility of virtual machine performance.

DECLARATION

This dissertation is my own work and contains nothing which is the outcome of work done in collaboration with others, except where specified in the text. This dissertation is not substantially the same as any that I have submitted for a degree or diploma or other qualification at any other university. This dissertation does not exceed the prescribed limit of 60 000 words.

Oliver R. A. Chick
November 30, 2015

ACKNOWLEDGEMENTS

This work was principally supported by the Engineering and Physical Sciences Research Council [grant number EP/K503009/1] and by internal funds from the University of Cambridge Computer Laboratory.
I should like to pay personal thanks to Dr Andrew Rice and Dr Ripduman Sohan for their countless hours of supervision and technical expertise, without which I would have been unable to conduct my research. Further thanks to Dr Ramsey M. Faragher for encouragement and help in wide-ranging areas.

Special thanks to Lucian Carata and James Snee for their efforts in coding reviews and being prudent collaborators, as well as Dr Jeunese A. Payne, Daniel R. Thomas, and Diana A. Vasile for proof reading this dissertation. My gratitude goes to Prof. Andy Hopper for his support for the Resourceful project.

All members of the DTG, especially Daniel R. Thomas and other inhabitants of SN14, have provided me with both wonderful friendships and technical assistance, which has been invaluable throughout my Ph.D. Final thanks naturally go to my parents for their perpetual support.

CONTENTS

1  Introduction  15
   1.1  Defining ‘forgoing hypervisor fidelity’  16
   1.2  Limitations of hypervisor fidelity in performance measurement tools  17
   1.3  The case for forgoing hypervisor fidelity in performance measurement tools  18
   1.4  Kamprobes  20
   1.5  Shadow Kernels  21
   1.6  Soroban  22
   1.7  Scope of thesis  23
        1.7.1  Xen hypervisor  23
        1.7.2  GNU/Linux operating system  23
        1.7.3  Paravirtualised guests  24
        1.7.4  x86-64  24
   1.8  Overview  25

2  Background  27
   2.1  Historical justification for hypervisor fidelity  28
   2.2  Contemporary uses for virtualisation  29
   2.3  Virtualisation performance problems  33
        2.3.1  Privileged instructions  33
        2.3.2  I/O  33
        2.3.3  Networking  34
        2.3.4  Increased contention  34
        2.3.5  Locking  34
        2.3.6  Unpredictable timing  35
        2.3.7  Summary  35
   2.4  The changing state of hypervisor fidelity  35
        2.4.1  Historical changes to hypervisor fidelity  35
        2.4.2  Recent changes to hypervisor fidelity  36
        2.4.3  Current state of hypervisor fidelity  38
               2.4.3.1  Installing guest additions  38
               2.4.3.2  Moving services into dedicated domains  38
               2.4.3.3  Lack of transparency of HVM containers  39
               2.4.3.4  Hypervisor/operating system semantic gap  39
        2.4.4  Summary  39
   2.5  Rethinking operating system design for hypervisors  40
   2.6  Virtual machine performance measurement  41
        2.6.1  Kernel probing  41
        2.6.2  Kernel specialisation  42
        2.6.3  Performance interference  43
               2.6.3.1  Measurement  43
               2.6.3.2  Modelling  44
               2.6.3.3  Summary  45
   2.7  Application to a broader context  46
        2.7.1  Containers  46
        2.7.2  Microkernels  47
   2.8  Summary  47

3  Kamprobes: Probing designed for virtualised operating systems  49
   3.1  Introduction  50
   3.2  Current probing techniques  51
        3.2.1  Linux: Kprobes  51
        3.2.2  Windows: Detours  52
        3.2.3  FreeBSD, NetBSD, OS X: DTrace function boundary tracers  53
        3.2.4  Summary  53
   3.3  Experimental evidence against virtualising current probing techniques  54
        3.3.1  Cost of virtualising Kprobes  54
        3.3.2  Cost of virtualised interrupts  57
        3.3.3  Other causes of slower performance when virtualised  58
   3.4  Kamprobes design  59
   3.5  Implementation  60
        3.5.1  Kamprobes API  60
        3.5.2  Kernel module  61
        3.5.3  Changes to the x86-64 instruction stream  61
               3.5.3.1  Inserting Kamprobes into an instruction stream  61
               3.5.3.2  Kamprobe wrappers  62
   3.6  Evaluation  69
        3.6.1  Inserting probes  69
        3.6.2  Firing probes  71
        3.6.3  Kamprobes executing on bare metal  74
   3.7  Evaluation summary  75
   3.8  Discussion  76
        3.8.1  Backtraces  76
        3.8.2  FTrace compatibility  76
        3.8.3  Instruction limitations  76
        3.8.4  Applicability to other instruction sets and ABIs  76
   3.9  Conclusion  77
4  Shadow kernels: A general mechanism for kernel specialisation in existing operating systems  79
   4.1  Introduction  80
   4.2  Motivation  82
        4.2.1  Shadow Kernels for probing  82
        4.2.2  Per-process kernel profile-guided optimisation  84
        4.2.3  Kernel optimisation and fast-paths  84
        4.2.4  Kernel updates  85
   4.3  Design and implementation  86
        4.3.1  User space API  86
        4.3.2  Linux kernel module  87
               4.3.2.1  Module insertion  88
               4.3.2.2  Initialisation of a shadow kernel  88
               4.3.2.3  Adding pages to the shadow kernel  89
               4.3.2.4  Switching shadow kernel  89
               4.3.2.5  Interaction with other kernel modules  90
   4.4  Evaluation  91
        4.4.1  Creating a shadow kernel  91
        4.4.2  Switching shadow kernel  93
               4.4.2.1  Switching time  93
               4.4.2.2  Effects on caching  95
        4.4.3  Kamprobes and Shadow Kernels  97
        4.4.4  Application to web workload  102
        4.4.5  Evaluation summary  103
   4.5  Alternative approaches  103
   4.6  Discussion  105
        4.6.1  Modifications required to kernel debuggers  105
        4.6.2  Software guard extensions  105
   4.7  Conclusion  106

5  Soroban: Attributing latency in virtualised environments  107
   5.1  Introduction  108
   5.2  Motivation  109
        5.2.1  Performance monitoring  110
        5.2.2  Virtualisation-aware timeouts  110
        5.2.3  Dynamic allocation  111
        5.2.4  QoS-based, fine-grained charging  111
        5.2.5  Diagnosing performance anomalies  112
   5.3  Sources of virtualisation overhead  112
   5.4  Effect of virtualisation overhead on end-to-end latency  116
   5.5  Attributing latency  118
        5.5.1  Justification of Gaussian processes  121
        5.5.2  Alternative approaches  122
   5.6  Choice of feature vector elements  123
   5.7  Implementation  126
        5.7.1  Xen modifications  126
               5.7.1.1  Exposing scheduler data  126
               5.7.1.2  Sharing scheduler data between Xen and its virtual machines  127
        5.7.2  Linux kernel module  127
        5.7.3  Application modifications  128
               5.7.3.1  Soroban API …
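The abstract compresses an instruction-level argument that Chapter 3 makes in full: Kprobes-style probing patches a probe site with a one-byte int3 breakpoint, whose trap is routed through the hypervisor when virtualised, whereas a Kamprobes-style probe overwrites the site with an unprivileged five-byte relative jmp to a wrapper, so firing a probe never has to leave the guest. The C sketch below illustrates only this encoding difference; it is a minimal illustration, not the Kamprobes implementation, and its addresses are invented.

    /* Hypothetical sketch contrasting interrupt-based probing with the
     * jump-based idea summarised in the abstract.  A Kprobes-style probe
     * patches the probe site with a one-byte int3 breakpoint (0xCC), whose
     * trap is routed through the hypervisor when virtualised; a
     * Kamprobes-style probe instead patches in an unprivileged five-byte
     * relative jmp to a wrapper.  Addresses here are invented. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Encode an x86-64 `jmp rel32` that jumps from `from` to `to`. */
    static void encode_rel32_jmp(uint8_t buf[5], uintptr_t from, uintptr_t to)
    {
        /* The displacement is relative to the end of the 5-byte instruction. */
        int32_t rel = (int32_t)(to - (from + 5));
        buf[0] = 0xE9;                    /* jmp rel32 opcode */
        memcpy(&buf[1], &rel, sizeof rel);
    }

    int main(void)
    {
        uint8_t patch[5];
        uintptr_t site    = 0xffffffff81000000u;  /* made-up probe site */
        uintptr_t wrapper = 0xffffffff81000a00u;  /* made-up wrapper address */

        encode_rel32_jmp(patch, site, wrapper);

        printf("interrupt-based probe site: cc (int3: trap via hypervisor)\n");
        printf("jump-based probe site:     ");
        for (int i = 0; i < 5; i++)
            printf(" %02x", patch[i]);
        printf(" (jmp rel32: stays in guest)\n");
        return 0;
    }

The rel32 displacement is computed relative to the end of the five-byte instruction; that detail aside, the point is that the patched site transfers control with an ordinary unprivileged jump, which is the property that lets probe firing approach native performance.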
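Shadow Kernels likewise relies on remapping rather than rewriting: the hypervisor's memory management unit can back the same kernel-text address with a different physical page for each process, so probes can be enabled for one process without perturbing others. The sketch below is a user-space analogue of that idea, offered under stated assumptions: a Linux memfd object and mmap with MAP_FIXED stand in for hypervisor page-table switching, and nothing here is the dissertation's actual kernel module or API.

    /* Hypothetical user-space analogue of the Shadow Kernels mechanism:
     * keep two variants of the "same" page and switch which one backs a
     * fixed virtual address by remapping, not by rewriting the contents. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        const char *stock  = "stock page: probes disabled";
        const char *shadow = "shadow page: probes enabled";

        /* One memory object holding two variants of the "same" page. */
        int fd = memfd_create("shadow-page", 0);
        if (fd < 0 || ftruncate(fd, (off_t)(2 * page)) != 0)
            return 1;
        pwrite(fd, stock, strlen(stock) + 1, 0);
        pwrite(fd, shadow, strlen(shadow) + 1, (off_t)page);

        /* Map the stock variant at some virtual address. */
        char *va = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, 0);
        if (va == MAP_FAILED)
            return 1;
        printf("before switch: %s\n", va);

        /* "Switch shadow kernel": rebind the same virtual address to the
         * other backing page, the analogue of the hypervisor flipping a
         * page-table entry for one process. */
        if (mmap(va, page, PROT_READ, MAP_SHARED | MAP_FIXED, fd,
                 (off_t)page) == MAP_FAILED)
            return 1;
        printf("after switch:  %s\n", va);
        return 0;
    }

The switch is a single rebinding of a virtual address, so its cost scales with the number of pages remapped rather than with the amount of specialised code, in line with the per-page overhead the abstract reports.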
