Interface and Execution Models in the Fluke Kernel


Interface and Execution Models in the Fluke Kernel

Bryan Ford, Mike Hibler, Jay Lepreau, Roland McGrath, Patrick Tullmann
Department of Computer Science, University of Utah
Technical Report UUCS-98-013, August 1998

Abstract

We have defined and implemented a new kernel API that makes every exported operation fully interruptible and restartable, thereby appearing atomic to the user. To achieve interruptibility, all possible states in which a thread may become blocked for a "long" time are completely representable as valid kernel API calls, without needing to retain any kernel internal state.

This API provides important functionality. Since all kernel operations appear atomic, services such as transparent checkpointing and process migration that need access to the complete and consistent state of a process can be implemented by ordinary user-mode processes. Atomic operations also enable applications to provide reliability in a more straightforward manner.

This API also allows novel kernel implementation techniques and evaluation of existing techniques, which we explore in this paper. Our new kernel's single source implements either the "process" or the "interrupt" execution model on both uni- and multiprocessors, depending only on a configuration option affecting a small amount of code. Our kernel structure avoids the major complexities of traditional implementations of the interrupt model, neither requiring ad hoc saving of state, nor limiting the operations (such as demand-paged memory) that can be handled by the kernel. Finally, our interrupt model configuration can support the process model for selected components, with the attendant flexibility benefits.

We report preliminary measurements comparing fully, partially and non-preemptible configurations of both process and interrupt model implementations. We find that the interrupt model has a modest speed edge in some benchmarks, maximum latency varies nearly three orders of magnitude, average latency varies by a factor of six, and memory use favors the interrupt model as expected, but not by a large amount. We find that the overhead for restarting the most costly kernel operation ranges from 2-8%.

1 Introduction

This paper attempts to bring to light an important and useful control-flow property of OS kernel interface semantics that has been neglected in prevailing systems, and to distinguish this interface property from the control-flow properties of an OS kernel implementation. An essential issue of operating system design and implementation is when and how one thread can block and relinquish control to another, and how the state of a thread suspended by blocking or preemption is represented in the system. This crucially affects both the kernel interface that represents these states to user code, and the fundamental internal organization of the kernel implementation. A central aspect of this internal structure is the execution model in which the kernel handles processor traps, hardware interrupts, and system calls. In the process model, which is used by traditional monolithic kernels such as BSD, Linux, and Windows NT, each thread of control in the system has its own kernel stack. In the interrupt model, used by systems such as V [7], QNX [14], and Aegis [12], the kernel uses only one kernel stack per processor; for typical uniprocessor kernels, just one kernel stack, period. A thread in a process-model kernel retains its kernel stack state when it sleeps, whereas in an interrupt-model kernel threads must manually save any important kernel state before sleeping. This saved kernel state is often known as a continuation [10], since it allows the thread to "continue" where it left off.

In this paper we draw attention to the distinction between an interrupt-model kernel implementation, which is a kernel that uses only one kernel stack per processor by manually saving implicit kernel state for sleeping threads, and an "atomic" kernel API, which is an API designed so that sleeping threads need no such implicit kernel state at all. These two kernel properties are related but fall on orthogonal dimensions, as illustrated in Figure 1. In a purely atomic API, all possible states in which a thread may sleep for a noticeable amount of time are cleanly visible and exportable to user mode. For example, the state of a thread involved in any system call is always well-defined, complete, and immediately available for examination or modification by other threads; this is true even if the system call is long-running and consists of many stages. In general, this means that all system calls and exception handling mechanisms must be cleanly interruptible and restartable, in the same way that the instruction sets of modern processor architectures are cleanly interruptible and restartable. For purposes of readability, in the rest of this paper we will refer to APIs with these properties as "atomic," as well as the properties themselves.

[Figure 1: a two-axis diagram plotting kernels by execution model (interrupt vs. process) against API model; the plotted points include Fluke (interrupt-model and process-model configurations), ITS, V (original, and as modified by Carter), Mach (original, and as modified by Draves), and BSD. Caption: The kernel implementation and API model continuums. V was originally a pure interrupt-model kernel but was later modified to be partly process-model; Mach was a pure process-model kernel later modified to be partly interrupt-model.]

We have developed a new kernel called Fluke which exports a purely atomic API. This API allows the complete state of any user-mode thread to be examined and modified by other user-mode threads without being arbitrarily delayed. In unpublished work that we were not aware of until very recently, the MIT ITS [11] system implemented an API with a similar property, some 30 years ago. Several other systems came close to providing this property but still had a few situations in which thread state was not always extractable. Supporting a purely atomic API slightly widens the kernel interface due to the need to export additional state information. However, such an API provides the atomicity property that gives important robustness advantages, making it easier to build fault-tolerant systems [24].¹ It also simplifies implementation of a pure interrupt-model kernel by eliminating the need to store implicit state in continuations.

In addition, our kernel supports both internal execution models through a build-time configuration option affecting only a small fraction of the source, enabling the first "apples-to-apples" comparison between them. Our kernel demonstrates that the two models are not as fundamentally different as they have been considered to be in the past; however, they each have strengths and weaknesses. Some processor architectures have an inherent bias towards the process model (e.g., a 5-10% kernel entry/exit performance difference on the x86). It is also easier to make process-model kernels fully preemptible. Full preemptibility comes at a cost, but this cost is associated with preemptibility, not with the process model itself. Process-model kernels tend to use more per-thread kernel memory, but this is a problem in practice only if the kernel is liberal in its use of stack space and thus requires large kernel stacks, or if the system uses a very large number of threads. Thus, we show that although an atomic API is highly beneficial, the kernel's internal execution model is less important: the interrupt-based organization has a slight size advantage, whereas the process-based organization has somewhat more flexibility.

Finally, contrary to conventional wisdom, our kernel demonstrates that it is practical to use legacy process-model code even within interrupt-model kernels, and even on architectures such as the x86 that make it difficult. The key is to run the legacy code in user mode but in the kernel's address space.

Our key contributions in this work are:

• To present a kernel supporting a pure atomic API and demonstrate the advantages and drawbacks of this approach.

• To explore the relationship between an "atomic API" and the kernel's execution model.

• To present the first "apples-to-apples" comparison between the two kernel implementation models using a kernel that supports both, revealing that the models are not as different as commonly believed.

• To show that it is practical to use process-model legacy code in an interrupt-model kernel, and to present several techniques for doing so.

Notes

This research was supported in part by the Defense Advanced Research Projects Agency, monitored by the Department of the Army under contract number DABT63-94-C-0058, and the Air Force Research Laboratory, Rome Research Site, USAF, under agreement number F30602-96-2-0269. Contact information: [email protected]. Dept. of Computer Science, 50 S. Central Campus Drive, Rm. 3190, University of Utah, SLC, UT 84112-9205. http://www.cs.utah.edu/projects/flux/.

¹ Some examples of this are: (i) This property is similar to "chained transactions" [Gray93], which allow a single transaction to progress through intermediate stages while building up state, but is able to roll back only to the last stage. [...] nested transactions, but yield significant benefits. (ii) It is well known that providing atomicity at lower layers allows higher layers to be written more simply. (iii) ITS exploited an atomic API for a number of properties; it particularly made it easy to write user-mode schedulers, as one could set the state of a thread at any time [4, 3]. (iv) Large telecomm applications use an "auditor," a daemon that periodically wakes up and tests each of the critical data structures in the system to see whether it violates any assertions; for critical OSes one could think of applying the same concept to the kernel. (v) One could both debug and detect deadlock [...]
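The continuation technique described in the introduction can be made concrete with a small sketch. This is an illustrative model in Python, not Fluke's actual internals, and every name in it (Thread, read_start, read_finish, wakeup) is invented for the example: a thread in an interrupt-model kernel that must sleep saves the state it will need into an explicit continuation and simply returns, surrendering the shared kernel stack; the wakeup path later re-enters the thread at the saved continuation function.

```python
# Illustrative sketch (invented names, not Fluke's real internals):
# in an interrupt-model kernel, a thread that must sleep saves the
# state it will need into an explicit "continuation" and gives up the
# shared kernel stack; the wakeup path re-enters the thread at the
# saved continuation function instead of resuming a saved stack.

class Thread:
    def __init__(self):
        self.continuation = None   # (resume function, saved state) or None
        self.result = None

    @property
    def blocked(self):
        return self.continuation is not None

def read_start(thread, byte_count):
    """Begin a 'read' that must block: manually save the kernel state
    the operation will need, then just return (the stack is reused)."""
    thread.continuation = (read_finish, {"byte_count": byte_count})

def read_finish(thread, saved):
    """Continuation: complete the read using only the saved state."""
    thread.result = saved["byte_count"]
    thread.continuation = None

def wakeup(thread):
    """Wakeup path: re-enter the thread at its continuation."""
    resume, saved = thread.continuation
    resume(thread, saved)

t = Thread()
read_start(t, 128)   # thread blocks mid-operation...
wakeup(t)            # ...and later continues where it left off
```

A process-model kernel needs none of this bookkeeping: read_start could block in place, with byte_count and the return path kept implicitly on the thread's private kernel stack, which is precisely the per-thread state an interrupt-model kernel does not have anywhere to keep.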
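The atomic-API property, that a blocked thread's state is always representable as a valid, re-issuable kernel call, can be sketched the same way. Again this is a hedged illustration with invented names (ThreadState, long_copy, restart), not the Fluke API: a long-running, multi-stage operation rolls its progress forward into a fresh call description at every stage boundary, so interrupting it loses nothing and exposes no kernel-internal state.

```python
# Hedged sketch of the "atomic API" idea: a thread blocked inside a
# long-running, multi-stage operation always looks like a thread about
# to (re)issue a well-defined call, so another user-mode thread can
# extract its complete state at any stage and later restart it.

class ThreadState:
    def __init__(self):
        self.next_call = None  # complete, re-issuable description of the call

def long_copy(state, src, dst, done=0, max_stages=None):
    """Multi-stage copy: after each stage the remaining work is
    re-expressed as a fresh call, so the thread can be stopped and
    examined at any stage boundary without losing implicit state."""
    CHUNK, stages = 4, 0
    while done < len(src):
        dst.extend(src[done:done + CHUNK])
        done += CHUNK
        stages += 1
        # Roll progress forward into the externally visible thread state.
        state.next_call = (long_copy, {"src": src, "dst": dst, "done": done})
        if max_stages is not None and stages >= max_stages:
            return "interrupted"   # restartable: nothing implicit was lost
    state.next_call = None
    return "done"

def restart(state):
    """Any thread (e.g. a checkpointer) can re-issue the saved call."""
    fn, kwargs = state.next_call
    return fn(state, **kwargs)

st = ThreadState()
src, dst = list(range(10)), []
long_copy(st, src, dst, max_stages=1)   # interrupted after one stage
restart(st)                             # completes from the saved call
```

This is the sense in which the paper's system calls are interruptible and restartable "in the same way that the instruction sets of modern processor architectures are": an observer sees either an operation that has not started or a complete, restartable description of the remainder, never a half-hidden intermediate state.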