Toward Reconfigurable Kernel Datapaths with Learned Optimizations

Yiming Qiu (Rice University), Hongyi Liu (Rice University), Thomas Anderson (University of Washington), Yingyan Lin (Rice University), Ang Chen (Rice University)

Abstract

Today's computing systems pay a heavy "OS tax", as kernel execution accounts for a significant amount of their resource footprint. This is not least because today's kernels abound with hardcoded heuristics that are designed with unstated assumptions, which rarely generalize well for diversifying applications and device technologies.

We propose the concept of reconfigurable kernel datapaths, which enables kernels to self-optimize dynamically. In this architecture, optimizations are computed from empirical data using machine learning (ML), and they are integrated into the kernel in a safe and systematic manner via an in-kernel virtual machine. This virtual machine implements the reconfigurable match table (RMT) abstraction, where tables are installed into the kernel at points where performance-critical events occur, matches look up the current execution context, and actions encode context-specific optimizations computed by ML, which may further vary from application to application. Our envisioned architecture will support both offline and online learning algorithms, as well as varied kernel subsystems. An RMT verifier will check program well-formedness and model efficiency before admitting an RMT program to the kernel. An admitted program can be interpreted in bytecode or just-in-time compiled to optimize the kernel datapaths.

CCS Concepts

• Computing methodologies → Machine learning; • Software and its engineering → Operating systems.

Keywords

Operating system kernels; Machine learning; RMT

ACM Reference Format:
Yiming Qiu, Hongyi Liu, Thomas Anderson, Yingyan Lin, and Ang Chen. 2021. Toward Reconfigurable Kernel Datapaths with Learned Optimizations. In Workshop on Hot Topics in Operating Systems (HotOS '21), May 31-June 2, 2021, Ann Arbor, MI, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3458336.3465288

1 Introduction

Operating system kernels are being stressed from above and below. As a general-purpose resource manager, the OS kernel needs to support different applications, and it needs to multiplex different types of hardware platforms. Of late, both applications and hardware platforms have been diversifying rapidly. On the applications side, for instance, container or microservice workloads are latency-sensitive, while MapReduce-like data processing jobs are throughput-oriented with intensive IO requirements (e.g., for bulk synchronization, checkpointing, or recovery). Home user applications (e.g., document or photo editing software) are yet another class, with their own complex disk IO patterns [25] and frequent interactions with the cloud. This diversity ensures that no one-size-fits-all optimization strategy exists that would simultaneously work well for all scenarios.

Likewise, hardware technologies are developing faster than the software system stack [7], with characteristics that differ from generation to generation, and from vendor to vendor within each generation. The best IO scheduling algorithms for hard disks will inevitably underperform for both SSDs and density-optimized shingled disks. To complicate this picture even further, devices are becoming smarter, enclosing embedded controllers that run proprietary algorithms for local management. Having this uncontrolled, blackbox code running inside the devices may confound even the best-tuned kernel optimizations.

The confluence of these two trends calls for a fundamental rethink of how the OS kernel should specialize for a particular scenario in order to perform well, and how these specializations should generalize to unseen scenarios that may arise. Two recent approaches can be viewed as approximating this goal. The kernel bypass approach argues that resource management is best left to the application: userland applications are given direct access to network cards or disks (e.g., using DPDK/SPDK), and they implement their own optimizations as needed. Alternatively, eBPF allows an application to dynamically inject constrained code into the kernel for customization, aiming to achieve similar effects. However, neither approach answers the question of what optimizations should be implemented when. Applications may not have sufficient knowledge about the entire software and hardware stack (or even about their own behaviors) to implement good optimizations, and any changes may be invalidated by new hardware. When individual applications choose their own strategies, the kernel also loses the centralized view needed for cross-application optimizations.

Our vision: Reconfigurable kernel datapaths. In this paper, we advocate a fundamentally different approach, one that draws inspiration from two lines of recent work: the increasingly powerful set of machine learning (ML) techniques, and the efforts in specializing network stacks with reconfigurable match tables (RMT). Our key idea is to develop reconfigurable kernel datapaths, where the mechanisms are based on an RMT-style architecture in the kernel, and the policies are learned using ML. The OS kernel dynamically discovers the best policies for each scenario in the form of an RMT program, and enforces these policies by configuring the in-kernel virtual machine. By bringing this programmable yet lightweight primitive into the OS kernel, we provide an architecture that allows for varied types of adaptivity. By harnessing the power of ML, we can eliminate many of the best-effort heuristics that abound in today's kernel datapaths, and enable optimizations to generalize to unseen applications, workloads, or hardware platforms.
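To make this architecture concrete, the following is a minimal user-space C sketch of what an RMT-style kernel table might look like. It is only an illustration under assumed names: the types rmt_key, rmt_action, rmt_lookup, and the specific context fields and parameters are hypothetical, not an existing kernel interface. A table is attached to a performance-critical event (here, block I/O submission is assumed), the match keys on a digest of the current execution context, and the action carries ML-computed parameters that the datapath applies in place of a hardcoded heuristic.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical match key: a digest of the execution context at a
 * performance-critical event (here, submitting a block I/O request). */
struct rmt_key {
    uint32_t cgroup_id;        /* which workload issued the request      */
    uint16_t io_size_bucket;   /* quantized request size (e.g., log2)    */
    uint8_t  rw_ratio_bucket;  /* quantized recent read/write mix        */
    uint8_t  pad;              /* keep the key densely packed for memcmp */
};

/* Hypothetical action: context-specific parameters computed by ML,
 * applied by the datapath instead of one global heuristic. */
struct rmt_action {
    uint16_t readahead_pages;  /* readahead window for this context */
    uint8_t  queue_depth;      /* device queue-depth hint           */
};

struct rmt_entry {
    struct rmt_key    key;
    struct rmt_action action;
    int               valid;
};

#define RMT_TABLE_SIZE 256

struct rmt_table {
    struct rmt_entry  entries[RMT_TABLE_SIZE];
    struct rmt_action default_action;  /* fallback when nothing matches */
};

/* Exact-match lookup invoked at the instrumented event; a real design
 * could also support ternary or range matches, as RMT switches do. */
static const struct rmt_action *
rmt_lookup(const struct rmt_table *t, const struct rmt_key *k)
{
    for (size_t i = 0; i < RMT_TABLE_SIZE; i++) {
        const struct rmt_entry *e = &t->entries[i];
        if (e->valid && memcmp(&e->key, k, sizeof(*k)) == 0)
            return &e->action;
    }
    return &t->default_action;
}
```

In this sketch, the learning component would only install or update entries; the datapath itself remains a bounded table lookup, which is what would make it amenable to verification and to bytecode interpretation or just-in-time compilation as outlined in the abstract.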
Application-specific kernel optimizations and extensions were well explored in the 1990s. Exokernel [19] argues for eliminating OS abstractions entirely and leaving their implementations to the applications. SPIN [9], on the other hand, allows applications to inject safe code into the kernel for dynamic extension. Both share similar limitations with their modern equivalents, kernel bypass and eBPF injection. In contrast, a key goal of our approach is to automatically identify kernel optimizations via ML-based reconfiguration, so that applications no longer have to specialize the kernel in one-off manners.

Research challenges. Realizing this vision raises several research challenges, which we discuss in the remainder of this paper.

Today's systems pay a heavy OS tax: it has been reported that kernel execution accounts for 20% of data center CPU cycles [28], while data centers represent 1% of worldwide electricity consumption [1]. Therefore, improving the efficiency of OS kernels has significant implications for a wide range of deployment scenarios.

2 Motivation

Machine learning techniques have produced early but successful results in computer systems, replacing well-tuned index structures for data retrieval [29], predicting hardware device state for better management [24], and managing C++ object memory efficiently [39]. Zhang and Huang [54] have argued that ML should be applied to the OS kernel as well. Our idea is inspired by this work; we propose a systematic approach to integrating ML into the kernel via an RMT virtual machine.

2.1 Envisioned benefits

We believe that reconfigurable kernel datapaths have the potential to unleash four classes of benefits that are hard to achieve in today's OS kernels.

#1. Lean monitoring: Operating system kernels employ a large set of runtime monitors that aim to characterize current workloads and activate different built-in heuristics. These monitoring events, however, introduce cache pollution and runtime overhead, and in some cases they work by intentionally causing some performance degradation. An example of the latter is the CPU scheduler on a NUMA machine: in order to detect memory affinity, the scheduler needs to monitor a thread's page-level access pattern. Linux does this by periodically unmapping a process's pages, so that the kernel can trap the resulting page faults and record access locations. By introducing ML, we can potentially enable the kernel to reduce the amount of monitoring it needs. For instance, a feature selection process using feature importance ranking [33] may allow the kernel to forego monitoring events that contribute little useful information, as the sketch below illustrates.
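The following is a rough sketch of this feature-selection idea, not the paper's actual mechanism: it assumes an offline-trained model exports an importance score for each monitored event, and monitors whose scores fall below a threshold are switched off, trading a little model accuracy for lower monitoring overhead. The monitor names, scores, and threshold are made up for illustration.

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical runtime monitors, each with an importance score exported
 * by an offline feature-importance ranking (cf. [33]); scores sum to ~1. */
struct monitor {
    const char *name;
    double      importance;
    int         enabled;
};

static struct monitor monitors[] = {
    { "numa_page_unmap_faults", 0.03, 1 },  /* the costly NUMA sampling above */
    { "per_thread_iowait",      0.41, 1 },
    { "llc_miss_sampling",      0.34, 1 },
    { "ctx_switch_histogram",   0.22, 1 },
};

/* Disable monitors that contribute little information to the model. */
static void prune_monitors(double threshold)
{
    for (size_t i = 0; i < sizeof(monitors) / sizeof(monitors[0]); i++) {
        if (monitors[i].importance < threshold) {
            monitors[i].enabled = 0;
            printf("disable %s (importance %.2f)\n",
                   monitors[i].name, monitors[i].importance);
        }
    }
}

int main(void)
{
    prune_monitors(0.05);  /* e.g., stop unmapping pages for NUMA sampling */
    return 0;
}
```

In a real system, the threshold would itself be chosen to bound the loss in prediction quality, and pruning decisions would be revisited as workloads shift.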
#2. Better configurations: The wide range of heuristics and configuration parameters in the OS kernel may not be optimal, and tuning kernel parameters to achieve better configurations is itself a challenging task. Moreover, heuristics are activated only after a bootstrapping phase (e.g., is this particular thread I/O bound? then increase its scheduling priority). In our design, ML algorithms should be able to explore a broader range of decision-making strategies, resulting in better configurations.
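To illustrate how a learned strategy could stand in for the bootstrapping heuristic above, here is a hypothetical sketch in which all field names, buckets, and values are invented: a small decision tree over quantized per-thread statistics is flattened into match-action entries of the kind sketched earlier, so that a thread's priority adjustment is looked up from context rather than hardcoded.

```c
#include <stdint.h>
#include <stddef.h>

/* Quantized per-thread features the learned policy conditions on. */
struct sched_key {
    uint8_t iowait_bucket;  /* 0..3: recent fraction of time blocked on I/O */
    uint8_t runq_bucket;    /* 0..3: recent run-queue delay                 */
};

struct sched_action {
    int8_t nice_delta;      /* priority adjustment to apply */
};

struct sched_rule {
    struct sched_key    key;
    struct sched_action action;
};

/* A learned decision tree such as
 *   iowait high && runq delay high -> boost strongly
 *   iowait high                    -> boost mildly
 *   otherwise                      -> leave unchanged
 * flattened into explicit table entries (values are illustrative only). */
static const struct sched_rule sched_rules[] = {
    { { 3, 3 }, { -5 } },
    { { 3, 2 }, { -2 } },
    { { 2, 3 }, { -2 } },
};

/* Datapath lookup: returns the priority delta for the current context. */
static int8_t sched_policy_lookup(struct sched_key k)
{
    for (size_t i = 0; i < sizeof(sched_rules) / sizeof(sched_rules[0]); i++) {
        if (sched_rules[i].key.iowait_bucket == k.iowait_bucket &&
            sched_rules[i].key.runq_bucket   == k.runq_bucket)
            return sched_rules[i].action.nice_delta;
    }
    return 0;  /* default: no adjustment */
}
```

Because the policy is expressed as data rather than code, it could in principle be re-learned and replaced at runtime, and vetted by the RMT verifier before installation.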
