COS 318: Operating Systems Overview


COS 318: Operating Systems: Overview
Jaswinder Pal Singh and a Fabulous Course Staff
Computer Science Department, Princeton University
http://www.cs.princeton.edu/courses/cos318/
9/16/19

Important Times
• Precepts:
  - Mon: 7:30-8:20pm, 104 CS Building
  - Tues: 7:30-8:20pm, 105 CS Building
  - This week (today and tomorrow): tutorial on assembly programming and kernel debugging
• Project 1
  - Design review: next Mon/Tues, 3:00-7:00pm (sign up online), 010 Friend
  - Project 1 due: Sunday 9/29 at 11:55pm
• Immediate to-do:
  - Make sure you have your project partner

Today
• Overview of OS functionality
• Overview of OS components
• Interacting with the OS
• Booting a computer

Hardware of a Typical Computer
• One or more CPUs
• Memory, attached to the CPUs through a chipset
• An I/O bus off the chipset, connecting ROM, the network, and other I/O devices

An Overview of HW Functionality
• Executing machine code (CPU, cache, memory)
  - Instructions for ALU, branch, and memory operations
  - Instructions for communicating with I/O devices
• Performing I/O operations
  - I/O devices and the CPU can execute concurrently
  - Every device controller is in charge of one device type
  - Every device controller has a local buffer
  - The CPU moves data between main memory and the local buffers
  - I/O takes place between a device and the local buffer of its device controller
  - The device controller uses an interrupt to inform the CPU that it is done
• Protection
  - Timer, paging (e.g., TLB), mode bit (e.g., kernel/user)

Software in a Typical Computer
• In memory: applications; libraries and runtime systems; the operating system
• In ROM: the BIOS
• On storage and the network: the OS, apps, and data

Typical Unix OS Structure
• Four layers: Application, Libraries, Portable OS Layer, Machine-dependent layer
• Application: user function calls, written and compiled by programmers
• The Application and Libraries run at user level; the Portable OS Layer and the Machine-dependent layer run at kernel level.
• Libraries:
  - Written by elves
  - Objects pre-compiled
  - Defined in headers
  - Input to the linker
  - Invoked like functions
  - May be "resolved" when the program is loaded

Quick Review: How an Application Is Created
• foo.c -> gcc -> foo.s -> as -> foo.o
  bar.c -> gcc -> bar.s -> as -> bar.o
  foo.o + bar.o + libc.a ... -> ld -> a.out
• gcc can compile, assemble, and link together
• The compiler (part of gcc) compiles a program into assembly
• The assembler compiles assembly code into a relocatable object file
• The linker links object files into an executable
• For more information:
  - Read the man pages for a.out, elf, ld, and nm
  - Read the ELF document

Application: How It Is Executed
• On Unix, the "loader" does the job:
  - Read an executable file
  - Lay out the code, data, heap, and stack
  - Dynamically link to shared libraries
  - Prepare for the OS kernel to run the application
• Pipeline: *.o, *.a -> ld -> a.out -> loader (plus shared libraries at load time)

What an Executable Application Looks Like
• Four segments, in an address space running from 0 up to 2^n - 1: code at the bottom, then initialized data, then the heap growing upward toward the stack, which grows downward from the top
  - Code/text: instructions
  - Data: global variables
  - Stack
  - Heap
• Why:
  - Separate code and data?
  - Have the stack and heap grow toward each other?

Responsibilities for the Segments
• Stack
  - Laid out by?
  - Allocated/deallocated by?
  - Are local names absolute or relative?
• Heap
  - Who sets the starting address?
  - Allocated/deallocated by?
  - How do application programs manage it?
• Global data/code
  - Who allocates?
  - Who defines names and references?
  - Who translates references?
  - Who relocates addresses?
  - Who lays them out in memory?

Typical Unix OS Structure
• The Portable OS Layer holds the "guts" of system calls.
Must Support Multiple Applications
• In multiple windows: browser, shell, PowerPoint, Word, ...
• From the command line, run multiple applications:
  % ls -al | grep '^d'
  % foo &
  % bar &

Multiple Application Processes
• Each process has its own application and libraries at user level; all of them share one Portable OS Layer and one Machine-dependent layer in the kernel.

OS Service Examples
• Examples that are not provided at user level:
  - System calls: file open, close, read, and write
  - Control of the CPU, so that users won't cause problems such as while (1);
  - Protection: keep user programs from crashing the OS, and keep user programs from crashing each other
• Examples that are provided at user level:
  - Read the time of day
  - Protected user-level activities

Typical Unix OS Structure
• The Machine-dependent layer handles: bootstrap; system initialization; interrupt and exception handling; I/O device drivers; memory management; mode switching; processor management.

Today
• Overview of OS functionality
• Overview of OS components
• Interacting with the OS
• Booting a computer

OS Components
• A resource manager for each HW resource:
  - CPU: processor management
  - RAM: memory management
  - Disk: file system and secondary-storage management
  - I/O device management (keyboards, mouse, ...)
• Additional services:
  - Networking
  - Window manager (GUI)
  - Command-line interpreters (e.g., the shell)
  - Resource allocation and accounting
  - Protection: keep user programs from crashing the OS, and keep user programs from crashing each other

Processor Management
• Goals:
  - Overlap between I/O and CPU computation
  - Time sharing
  - Multiple-CPU allocation
• Issues:
  - Do not waste CPU resources
  - Synchronization and mutual exclusion
  - Fairness and deadlock

Memory Management
• Goals:
  - Support for programs to run and to be written more easily
  - Allocation and management
  - Transfers from and to secondary storage
• Issues:
  - Efficiency and convenience
  - Fairness
  - Protection
• The memory hierarchy (approximate access time, relative to a register):
  - Register: 1x
  - L1 cache: 2-4x
  - L2 cache: ~10x
  - L3 cache: ~50x
  - DRAM: ~200-500x
  - Disks: ~30M x
  - Archive storage: >1000M x

File System
• Goals:
  - Manage disk blocks
  - Map between files and disk blocks
• Typical file system calls:
  - Open a file with authentication
  - Read/write data in files
  - Close a file
• Issues:
  - Reliability
  - Safety
  - Efficiency
  - Manageability

I/O Device Management
• Goals:
  - Interactions between devices and applications
  - Ability to plug in new devices
• Structure: user processes sit above file system services and library support, which in turn sit above per-device drivers and the I/O devices themselves
• Issues:
  - Diversity of devices, third-party hardware
  - Efficiency
  - Fairness
  - Protection and sharing

Window Systems
• Goals:
  - Interacting with a user
  - Interfaces to examine and manage apps and the system
• Issues:
  - Inputs from keyboard, mouse, touch screen, ...
  - Display output from applications and systems
  - Where is the window system?
• Where is the window system?
  - All in the kernel (Windows)
  - All at user level
  - Split between user and kernel (Unix)

Summary
• Overview of OS functionality:
  - Layers of abstraction
  - Services to applications
  - Resource management
• Overview of OS components:
  - Processor management
  - Memory management
  - I/O device management
  - File system
  - Window system
  - ...

Today
• Overview of OS functionality
• Overview of OS components
• Interacting with the OS
• Booting a computer

How the OS Is Invoked
• Exceptions
  - Normal or program error: faults, traps, aborts
  - Special software-generated: INT 3
  - Machine-check exceptions
• Interrupts
  - Hardware (raised by external devices)
  - Software: INT n
• System calls
• See Intel documentation, volume 3, for details

Interrupts
• Raised by external events
• The interrupt handler is in the kernel; the interrupted process eventually resumes
• A way to:
  - Switch the CPU to another process
  - Overlap I/O with CPU computation
  - Handle other long-latency events

Interrupts and Exceptions (1)
  Vector  Mnemonic  Description                          Type
  0       #DE       Divide error (by zero)               Fault
  1       #DB       Debug                                Fault/trap
  2                 NMI interrupt                        Interrupt
  3       #BP       Breakpoint                           Trap
  4       #OF       Overflow                             Trap
  5       #BR       BOUND range exceeded                 Trap
  6       #UD       Invalid opcode                       Fault
  7       #NM       Device not available                 Fault
  8       #DF       Double fault                         Abort
  9                 Coprocessor segment overrun          Fault
  10      #TS       Invalid TSS (Task State Segment); kernel/HW bug

Interrupts and Exceptions (2)
  Vector  Mnemonic  Description                          Type
  11      #NP       Segment not present                  Fault
  12      #SS       Stack-segment fault                  Fault
  13      #GP       General protection                   Fault
  14      #PF       Page fault                           Fault
  15                Reserved                             Fault
  16      #MF       Floating-point error (math fault)    Fault
  17      #AC       Alignment check                      Fault
  18      #MC       Machine check                        Abort
  19-31             Reserved
  32-255            User defined                         Interrupt

System Calls
• The operating system's API
  - The interface between an application and the operating system kernel
• Categories of system calls:
  - Process management
  - Memory management
  - File management
  - Device management
  - Communication

How Many System Calls?
• 6th Edition Unix: ~45
• POSIX: ~130
• FreeBSD: ~130
• Linux: ~250
• Windows 7: >900

System Call Mechanism
• Assumptions:
  - User code can be arbitrary
  - User code cannot modify kernel memory
• Design issues:
  - The user program makes a system call with parameters
  - The call mechanism switches to kernel mode
  - The kernel, whose entry code sits in protected memory, executes the system call
  - It returns to the user program with results

OS Kernel: Trap Handler
• The interrupt, trap, and syscall vector is a table set up by the OS kernel, holding pointers to the code to run on different events:
  - Hardware device interrupts go through the processor's interrupt vector table to interrupt service routines
  - System calls go through the system call dispatcher and a syscall table to the system services
  - HW and SW exceptions go to their own handlers
• Example handler: handleTimerInterrupt() { ..