RCU Usage in the Linux Kernel: One Decade Later


Paul E. McKenney (Linux Technology Center, IBM Beaverton)
Silas Boyd-Wickizer (MIT CSAIL)
Jonathan Walpole (Computer Science Department, Portland State University)

Abstract

Read-copy update (RCU) is a scalable high-performance synchronization mechanism implemented in the Linux kernel. RCU's novel properties include support for concurrent reading and writing, and highly optimized inter-CPU synchronization. Since RCU's introduction into the Linux kernel over a decade ago its usage has continued to expand. Today, most kernel subsystems use RCU. This paper discusses the requirements that drove the development of RCU, the design and API of the Linux RCU implementation, and how kernel developers apply RCU.

1 Introduction

The first Linux kernel to include multiprocessor support is not quite 20 years old. This kernel provided support for concurrently running applications, but serialized all execution in the kernel using a single lock. Concurrently executing applications that frequently invoked the kernel performed poorly.

Today the single kernel lock is gone, replaced by highly concurrent kernel subsystems. Kernel-intensive applications that would have performed poorly on dual-processor machines 15 years ago now scale and perform well on multicore machines with many processors [3].

Kernel developers have used a variety of techniques to improve concurrency, including fine-grained locks, lock-free data structures, per-CPU data structures, and read-copy update (RCU), the topic of this paper. Uses of the RCU API have increased from none in 2002 to over 6500 in 2013 (see Figure 1). Most major Linux kernel subsystems use RCU as a synchronization mechanism.
Linus Torvalds characterized a recent RCU-based patch to the virtual file system “as seriously good stuff” because developers were able to use RCU to remove bottlenecks affecting common workloads [25]. RCU is not unique to Linux (see [7, 13, 19] for other examples), but Linux's wide variety of RCU usage patterns is, as far as we know, unique among the commonly used kernels. Understanding RCU is now a prerequisite for understanding the Linux implementation and its performance.

The success of RCU is, in part, due to its high performance in the presence of concurrent readers and updaters. The RCU API facilitates this with two relatively simple primitives: readers access data structures within RCU read-side critical sections, while updaters use RCU synchronization to wait for all pre-existing RCU read-side critical sections to complete. When combined, these primitives allow threads to concurrently read data structures, even while other threads are updating them.

This paper describes the performance requirements that led to the development of RCU, gives an overview of the RCU API and implementation, and examines how kernel developers have used RCU to optimize kernel performance. The primary goal is to provide an understanding of the RCU API and how to apply it.

The remainder of the paper is organized as follows. Section 2 explains the important requirements for production-quality RCU implementations. Section 3 gives an overview of RCU's API and overall design. Section 4 introduces a set of common usage patterns that cover most uses of RCU, illustrating how RCU is used in Linux. In some cases its use as a replacement for existing synchronization mechanisms introduces subtle semantic problems. Section 5 discusses these problems and commonly applied solutions. Section 6 highlights the importance of RCU by documenting its use in Linux over time and by specific subsystems. Section 7 discusses related work and Section 8 presents conclusions.

Figure 1: The number of uses of the RCU API in Linux kernel code from 2002 to 2014.

2 RCU Requirements

RCU fulfills three requirements dictated by the kernel: (1) support for concurrent readers, even during updates; (2) low computation and storage overhead; and (3) deterministic completion time. The first two are performance-related requirements, while the third is important for real-time response and software-engineering reasons. This section describes the three requirements, and the next two sections describe how RCU is designed to fulfill these requirements and how kernel developers use RCU.

The primary RCU requirement is support for concurrent reading of a data structure, even during updates. The Linux kernel uses many data structures that are read and updated intensively, especially in the virtual file system (VFS) and in networking. For example, the VFS caches directory entry metadata – each known as a dentry – for recently accessed files. Every time an application opens a file, the kernel walks the file path and reads the dentry for each path component out of the dentry cache. Since applications might access many files, some only once, the kernel is frequently loading dentrys into the cache and evicting unused dentrys. Ideally, threads reading from the cache would not interfere with each other or be impeded by threads performing updates.

The second RCU requirement is low space and execution overhead. Low space overhead is important because the kernel must synchronize access to millions of kernel objects. On a server in our lab, for example, the kernel caches roughly 8 million dentrys. An overhead of more than a few bytes per dentry is an unacceptable storage overhead.

Low execution overhead is important because the kernel accesses data structures frequently using extremely short code paths. The SELinux access vector cache (AVC) [21] is an example of a performance-critical data structure that the kernel might access several times during a system call. In the absence of spinning to acquire a lock or waiting to fulfill cache misses, each read from the AVC takes several hundred cycles. Incurring a single cache miss, which can cost hundreds of cycles, would double the cost of accessing the AVC. RCU must coordinate readers and updaters in a way that provides low-overhead synchronization in the common case.

A third requirement is deterministic completion times for read operations. This is critical to real-time response [11], but also has important software-engineering benefits, including the ability to use RCU within non-maskable interrupt (NMI) handlers. Accessing shared data within NMI handlers is tricky, because a thread might be interrupted within a critical section. Using spin locks can lead to deadlock, and lock-free techniques utilizing optimistic concurrency control lead to non-deterministic completion times if the operation must be retried many times.

Related synchronization primitives, such as read-write locks, Linux local-global locks, and transactional memory, do not fulfill the requirements discussed here. None of these primitives provide concurrent read and write operations to the same data. They all impose a storage overhead. Even the storage overhead for a read-write lock, which is a single integer, is unacceptable for some cases. Read-write locks and local-global locks use expensive atomic instructions or memory barriers during acquisition and release.

The next section describes an RCU design and API that provides concurrent reads and writes, low space and execution overhead, and deterministic execution times.

3 RCU Design

RCU is a library for the Linux kernel that allows kernel subsystems to synchronize access to shared data in an efficient manner.
The core of RCU is based on two primitives: RCU read-side critical sections, which we will refer to simply as RCU critical sections, and RCU synchronization. A thread enters an RCU critical section by calling rcu_read_lock and completes an RCU critical section by calling rcu_read_unlock. A thread uses RCU synchronization by calling synchronize_rcu, which guarantees not to return until all the RCU critical sections executing when synchronize_rcu was called have completed. synchronize_rcu does not prevent new RCU critical sections from starting, nor does it wait for RCU critical sections that started after synchronize_rcu was called.
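To make these two primitives concrete, the following sketch, ours rather than the paper's, shows the read-mostly pattern they enable: readers traverse an RCU-protected pointer inside a critical section, while an updater publishes a replacement and waits for pre-existing readers before freeing the old version; the dentry example below develops the same idea further. The config structure and function names are hypothetical; rcu_dereference, rcu_assign_pointer, and rcu_dereference_protected are the kernel's pointer-publication helpers, which this paper has not yet introduced.

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    /* Hypothetical read-mostly configuration object. */
    struct config {
        int threshold;
    };

    static struct config __rcu *cur_config;

    /* Reader: may run concurrently with an update; never blocks. */
    int read_threshold(void)
    {
        struct config *c;
        int t;

        rcu_read_lock();                  /* enter RCU critical section */
        c = rcu_dereference(cur_config);  /* fetch RCU-protected pointer */
        t = c ? c->threshold : 0;
        rcu_read_unlock();                /* leave RCU critical section */
        return t;
    }

    /* Updater: publish a new version, then reclaim the old one.
     * Assumes callers serialize updates with some external lock. */
    int set_threshold(int t)
    {
        struct config *newc, *oldc;

        newc = kmalloc(sizeof(*newc), GFP_KERNEL);
        if (!newc)
            return -ENOMEM;
        newc->threshold = t;

        oldc = rcu_dereference_protected(cur_config, 1);
        rcu_assign_pointer(cur_config, newc); /* publish new version */
        synchronize_rcu(); /* wait for pre-existing RCU critical sections */
        kfree(oldc);       /* no reader can still hold oldc */
        return 0;
    }

On most architectures the read side here involves no atomic instructions or memory barriers, which is what makes patterns like this cheap enough for the dentry and AVC workloads described above.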
Developers can use RCU critical sections and RCU synchronization to build data structures that allow concurrent reading, even during updates. To illustrate one possible use, consider how to safely free the memory associated with a dentry when an application removes a file. One way to implement this is for a thread to acquire a pointer to a dentry only from the directory cache and to always execute in an RCU critical section when manipulating a dentry. When an application removes a file, the kernel, possibly in parallel with dentry readers, removes the dentry from the directory cache, then calls synchronize_rcu; once synchronize_rcu returns, no pre-existing reader can still hold a pointer to the dentry, so the kernel can safely free it.

    void rcu_read_lock()
    {
        preempt_disable[cpu_id()]++;
    }

    void rcu_read_unlock()
    {
        preempt_disable[cpu_id()]--;
    }

    void synchronize_rcu(void)
    {
        for_each_cpu(int cpu)
            run_on(cpu);
    }

Figure 2: A simplified version of the Linux RCU implementation.

In the simplified implementation of Figure 2, rcu_read_lock and rcu_read_unlock increment and decrement a per-CPU counter that disables preemption, so a thread cannot be preempted inside an RCU critical section. Because a CPU therefore executes a context switch only when no RCU critical section is running on it, by the time the thread calling synchronize_rcu has briefly executed on every CPU, all pre-existing critical sections have completed. Notice that the cost of synchronize_rcu is independent of the number of times threads execute rcu_read_lock and rcu_read_unlock.

In practice Linux implements synchronize_rcu by waiting for all CPUs in the system to pass through a context switch, instead of scheduling a thread on each CPU. This design optimizes the Linux RCU implementation for low-cost RCU critical sections, but at the cost of delaying synchronize_rcu callers longer than necessary. In principle, a writer waiting for a particular reader need only wait for that reader to complete an RCU critical section. The reader, however, must communicate to the writer that the RCU critical section is complete. The Linux RCU implementation essentially batches reader-to-writer communication by waiting for context switches. When possible, writers can use an asynchronous version of synchronize_rcu, call_rcu, which invokes a specified callback after all CPUs have passed through at least one context switch. The Linux RCU implementation tries to amortize the cost of detecting context switches over many synchronize_rcu and call_rcu operations.
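To ground this discussion, here is a deliberately simplified sketch, ours and not from the paper, of how the dentry-style pattern looks with the kernel's RCU list primitives: readers traverse an RCU-protected list inside a critical section, and an updater unlinks an entry and reclaims it either synchronously with synchronize_rcu or asynchronously with call_rcu. The cache_entry type and function names are hypothetical stand-ins for a dentry and the directory cache.

    #include <linux/list.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    /* Hypothetical cache entry standing in for a dentry. */
    struct cache_entry {
        struct list_head list;
        struct rcu_head rcu;      /* used by the call_rcu variant */
        unsigned long key;
    };

    static LIST_HEAD(cache);
    static DEFINE_SPINLOCK(cache_lock);  /* serializes updaters only */

    /* Reader: traversal and use of entries stay inside one RCU
     * critical section; readers never block updaters or each other. */
    bool cache_contains(unsigned long key)
    {
        struct cache_entry *e;
        bool found = false;

        rcu_read_lock();
        list_for_each_entry_rcu(e, &cache, list) {
            if (e->key == key) {
                found = true;
                break;
            }
        }
        rcu_read_unlock();
        return found;
    }

    /* Synchronous reclamation: unlink, wait for pre-existing
     * readers, then free. */
    void cache_remove_sync(struct cache_entry *e)
    {
        spin_lock(&cache_lock);
        list_del_rcu(&e->list);   /* readers may still be traversing e */
        spin_unlock(&cache_lock);

        synchronize_rcu();        /* all pre-existing readers are done */
        kfree(e);
    }

    /* Asynchronous reclamation: call_rcu returns immediately and
     * frees e after every CPU passes through a context switch. */
    static void cache_free_cb(struct rcu_head *head)
    {
        kfree(container_of(head, struct cache_entry, rcu));
    }

    void cache_remove_async(struct cache_entry *e)
    {
        spin_lock(&cache_lock);
        list_del_rcu(&e->list);
        spin_unlock(&cache_lock);

        call_rcu(&e->rcu, cache_free_cb);
    }

The choice between the two removal paths mirrors the trade-off described above: synchronize_rcu blocks the updater for a full grace period, while call_rcu lets it proceed immediately and allows the implementation to batch the deferred frees with other callbacks.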