Large Pages and Lightweight Memory Management in Virtualized Environments: Can You Have It Both Ways?

Binh Pham§   Ján Veselý§   Gabriel H. Loh†   Abhishek Bhattacharjee§
§ Department of Computer Science, Rutgers University   † AMD Research, Advanced Micro Devices, Inc.
{binhpham, jan.vesely, abhib}@cs.rutgers.edu   [email protected]

MICRO 2015, Waikiki, Hawaii, USA. Copyright 2015 ACM 978-1-4503-4034-2/15/12 ...$15.00.

ABSTRACT

Large pages have long been used to mitigate address translation overheads on big-memory systems, particularly in virtualized environments where TLB miss overheads are severe. We show, however, that far from being a panacea, large pages are used sparingly by modern virtualization software. This is because large pages often preclude lightweight memory management, which can outweigh their Translation Lookaside Buffer (TLB) benefits. For example, they reduce opportunities to deduplicate memory among virtual machines in overcommitted systems, interfere with lightweight memory monitoring, and hamper the agility of virtual machine (VM) migrations. While many of these problems are particularly severe in overcommitted systems with scarce memory resources, they can (and often do) exist generally in cloud deployments. In response, virtualization software often (though it does not have to) splinters guest operating system (OS) large pages into small system physical pages, sacrificing address translation performance for overall system-level benefits. We introduce simple hardware that bridges this fundamental conflict, using speculative techniques to group contiguous, aligned small page translations such that they approach the address translation performance of large pages. Our Generalized Large-page Utilization Enhancements (GLUE) allow system hypervisors to splinter large pages for agile memory management, while retaining almost all of the TLB performance of unsplintered large pages.

Categories and Subject Descriptors

C.1.0 [Processor Architectures]: General

Keywords

Virtual Memory, Virtualization, TLB, Speculation

1. INTRODUCTION

With cloud computing, virtualization technologies (e.g., KVM, Xen, ESX, Docker, Hyper-V, and others) are aggressively employed by companies like Amazon or Rackspace to consolidate diverse workloads (encapsulated in virtual machines or VMs), ensuring high utilization of physical systems while achieving good performance.

The judicious use of large pages [26,38,39] is particularly critical to the performance of cloud environments. Large pages can boost address translation performance in hypervisor-based virtualization and containers [11]. Specifically, in hypervisor-based virtualization, the hypervisor and guests maintain separate page tables, and therefore require high-latency two-dimensional page table walks on Translation Lookaside Buffer (TLB) misses. Past work has shown that this is often the primary contributor to the performance difference between virtualized and bare-metal systems [15,18]. Large pages (e.g., 2MB pages instead of baseline 4KB pages in x86-64) counter these overheads by dramatically reducing the number of page table entries (PTEs), increasing TLB hit rates and reducing miss latencies. Even containers (which do not require two-dimensional page table walks) depend on large pages to improve TLB reach, which is otherwise inadequate to cover the hundreds of GB to several TB of memory present on the systems containers are often deployed on [18].

Unfortunately, however, the coarser granularity of large pages can curtail lightweight and agile system memory management. For example, large pages can reduce consolidation in real-world cloud deployments where memory resources are overcommitted, by precluding opportunities for page sharing [20,42]. They can also interfere with a hypervisor's ability to perform lightweight guest memory usage monitoring and to effectively allocate and place data in non-uniform memory access systems [19,44], and can hamper operations like agile VM migration [41]. As a result, virtualization software often (though it does not have to) chooses to splinter, or break, large pages into smaller baseline pages. While these decisions may be appropriate for overall performance, as they improve consolidation ratios and memory management, they present a lost opportunity in terms of reducing address translation overheads.

This paper proposes hardware that bridges this fundamental conflict to reclaim the address translation performance lost to large page splintering. Specifically, we observe that the act of splintering a large page is usually performed to achieve finer-grained memory management rather than to fundamentally alter virtual or physical address spaces. Therefore, the vast majority of constituent small pages retain the original contiguity and alignment, in both virtual and physical address spaces, that allowed them to be merged into large pages in the first place. In response, we propose Generalized Large-page Utilization Enhancements (GLUE) to identify splintered large-page-sized memory regions. GLUE augments standard TLBs to store information that identifies these contiguous, aligned, but splintered regions. GLUE then uses TLB speculation to identify the constituent translations. Small system physical pages are speculated by interpolating around the information stored about a single speculative large-page translation in the TLB. Speculations are verified by page table walks, now removed from the processor's critical path of execution, effectively converting the performance of correct speculations into TLB hits. GLUE is accurate, software-transparent, readily implementable, and allows large pages to be compatible, rather than at odds, with lightweight memory management. Specifically, our contributions are:

• We characterize the prevalence of page splintering in virtualized environments. We find that large pages are conflicted with lightweight memory management across a range of hypervisors (e.g., ESX, KVM), across architectures (e.g., ARM, x86-64), and across container-based technologies.

• We propose interpolation-based TLB speculation to leverage splintered but well-aligned system physical page allocation, improving performance by an average of 14% across our workloads. This represents almost all of the address translation overheads in the virtualized systems we study.

• We investigate design trade-offs and splintering characteristics to explain the benefits of GLUE. We show the robustness of GLUE, which improves performance in every single workload considered.

2. BACKGROUND

Virtualization and TLB overheads: In hypervisor-based virtualized systems with two-dimensional page table support, guests maintain page tables mapping guest virtual pages (GVPs) to guest physical pages (GPPs), which are then converted by the hypervisor to system physical pages (SPPs) via a nested or extended page table [11]. TLBs cache frequently used GVP-to-SPP mappings.

Large pages and address translation: To counter increased TLB overheads in virtual servers, virtualization vendors encourage using large pages aggressively [15]. OSes construct large pages by mapping a sufficient number of contiguous baseline virtual page frames to contiguous physical page frames, aligned at boundaries determined by the size of the large page [4]. For example, x86-64 2MB large pages require 512 contiguous 4KB baseline pages, aligned at 2MB boundaries. Large pages replace multiple baseline TLB entries with a single large-page entry (increasing capacity), and reduce the number of levels in the page table (reducing miss penalty).

In hypervisor-based virtualization, these benefits are magnified because they apply to two page tables. While a large page reduces the number of page table walk memory references from four to three in the native case, the reduction is from twenty-four to fifteen for virtual machines. However, because TLBs cache guest virtual to system physical translations directly, a "true" large page is one that is large in both the guest and the hypervisor page table.

3. MOTIVATION AND OUR APPROACH

Real-system virtualization overheads: We begin by profiling address translation overheads on hypervisor-based virtualization technologies. Figure 1 quantifies address translation overheads (normalized to total runtime) when running a Ubuntu 12.04 server (3.8 kernel) as the guest operating system.

[Figure 1: Percent of execution time spent on address translation (fraction of runtime) for applications (astar, cactusADM, canneal, data caching, data analytics, GemsFDTD, graph500, gups, mcf, mummer, omnetpp, software testing, tigr, xalancbmk, and their average) on a Linux VM on VMware's ESX server, running on an x86-64 architecture. Overheads are 18% on average despite the fact that the OS uses both 4KB and 2MB pages.]

Containers also suffer address translation overheads (even though they use standard one-dimensional page table walks), primarily because TLB reach is unable to match main memory capacities. Our work focuses on hypervisor-based virtualization as it presents the greater challenge for address translation. However, we have also characterized page splintering on containers and present these results.
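The walk counts cited in Section 2 (four versus three memory references natively, twenty-four versus fifteen under virtualization) follow from a simple counting argument: each guest page table level yields a guest physical address that itself requires a full host walk. The following sketch reproduces that arithmetic; the function names are ours, for illustration only:

```python
def walk_refs_native(levels):
    # one memory reference per page table level
    return levels

def walk_refs_2d(guest_levels, host_levels):
    # each guest level needs a full host walk (host_levels references)
    # plus one reference to read the guest PTE itself; the final guest
    # physical data address needs one more host walk
    return guest_levels * (host_levels + 1) + host_levels

# x86-64 with 4KB pages: 4-level guest and host tables
#   walk_refs_native(4) -> 4, walk_refs_2d(4, 4) -> 24
# 2MB large pages in both dimensions skip one level on each side:
#   walk_refs_native(3) -> 3, walk_refs_2d(3, 3) -> 15
```

This also shows why a "true" large page matters: the reduction to fifteen references requires the skipped level in both the guest and the hypervisor page table.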
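The contiguity and alignment conditions for constructing a large page (Section 2) can be checked mechanically. A minimal sketch, assuming page-frame numbers rather than byte addresses (the names are ours, not from the paper):

```python
BASE_PAGES_PER_LARGE = 512  # 2MB large page / 4KB baseline page (x86-64)

def can_form_large_page(gvp_base, spps):
    """True if 512 baseline GVP->SPP mappings can merge into one 2MB page."""
    if len(spps) != BASE_PAGES_PER_LARGE:
        return False
    # virtual and physical bases must both sit on a 2MB boundary...
    if gvp_base % BASE_PAGES_PER_LARGE or spps[0] % BASE_PAGES_PER_LARGE:
        return False
    # ...and the system physical frames must be contiguous
    return all(spps[i] == spps[0] + i for i in range(BASE_PAGES_PER_LARGE))
```

Splintering breaks the single large-page mapping back into 512 baseline PTEs, but, as observed above, it usually leaves these conditions intact.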
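GLUE's interpolation-based speculation exploits exactly that retained contiguity: a splintered small page usually keeps its position within the original 2MB region, so a constituent SPP can be guessed from a single speculative large-page entry. A rough sketch under our own naming (not the paper's hardware design):

```python
PAGES_PER_REGION = 512  # 4KB pages per 2MB region

def speculate_spp(gvp, spec_large_spp_base):
    # interpolate: assume the small page keeps the GVP's offset
    # within its (splintered) 2MB region
    offset = gvp % PAGES_PER_REGION
    return spec_large_spp_base + offset

# The speculated translation is used immediately; a page table walk
# verifies it off the critical path, and a mismatch squashes the
# speculative work, so correct speculations behave like TLB hits.
```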
