Limited Preemptive Scheduling in Real-Time Systems

Total pages: 16

File type: PDF, size: 1020 KB

Mälardalen University Press Dissertations No. 199

LIMITED PREEMPTIVE SCHEDULING IN REAL-TIME SYSTEMS

Abhilash Thekkilakattil, 2016
School of Innovation, Design and Engineering, Mälardalen University

Academic dissertation to be publicly defended, for the degree of Doctor of Technology in Computer Science at the School of Innovation, Design and Engineering, on Friday 27 May 2016 at 13:15 in room Gamma, Mälardalen University, Västerås. Faculty opponent: Associate Professor Reinder Bril, Eindhoven University of Technology.

Copyright © Abhilash Thekkilakattil, 2016
ISBN 978-91-7485-254-7
ISSN 1651-4238
Printed by Arkitektkopia, Västerås, Sweden

Abstract

Preemptive and non-preemptive scheduling paradigms typically introduce undesirable side effects when scheduling real-time tasks, mainly in the form of preemption overheads and blocking, that potentially compromise timeliness guarantees. The high preemption overheads in preemptive real-time scheduling may imply high resource utilization, often requiring significant over-provisioning, e.g., pessimistic Worst Case Execution Time (WCET) approximations. Non-preemptive scheduling, on the other hand, can be infeasible even for tasksets with very low utilization, due to the blocking on higher priority tasks, e.g., when one or more tasks have WCETs greater than the shortest deadline. Limited preemptive scheduling facilitates the reduction of both preemption-related overheads and blocking by deferring preemptions to favorable locations in the task code.

In this thesis, we investigate the feasibility of limited preemptive scheduling of real-time tasks on uniprocessor and multiprocessor platforms. We derive schedulability tests for global limited preemptive scheduling under both the Earliest Deadline First (EDF) and Fixed Priority Scheduling (FPS) paradigms. The tests are derived in the context of two major mechanisms for enforcing limited preemptions, viz., deferring preemption for a specified duration (i.e., Floating Non-Preemptive Regions) and deferring preemption to the next specified location in the task code (i.e., Fixed Preemption Points). Moreover, two major preemption approaches are considered, viz., waiting for the lowest priority job to become preemptable (i.e., a Lazy Preemption Approach (LPA)) and preempting the first executing lower priority job that becomes preemptable (i.e., an Eager Preemption Approach (EPA)).

Evaluations using synthetically generated tasksets indicate that adopting an eager preemption approach is beneficial in terms of schedulability in the context of global FPS. Further evaluations simulating different global limited preemptive scheduling algorithms expose runtime anomalies with respect to the observed number of preemptions, indicating that limited preemptive scheduling may not necessarily reduce the number of preemptions in multiprocessor systems. We then theoretically quantify the sub-optimality (the worst-case performance) of limited preemptive scheduling on uniprocessor and multiprocessor platforms using resource augmentation, e.g., the processor speed-up factors required to achieve optimality. Finally, we propose a sensitivity-analysis-based methodology to control the preemptive behavior of real-time tasks using processor speed-up, in order to satisfy multiple preemption-behavior-related constraints. The results presented in this thesis facilitate the analysis of limited preemptively scheduled real-time tasks on uniprocessor and multiprocessor platforms.

Faculty Opponent: Reinder Bril, Technische Universiteit Eindhoven, Netherlands
Examiners: Giorgio Buttazzo, Scuola Superiore Sant'Anna di Pisa, Italy; Gerhard Fohler, Universität Kaiserslautern, Germany; Liliana Cucu-Grosjean, INRIA Paris-Rocquencourt, France
Reserve: Damir Isovic, Mälardalens Högskola, Sweden
Supervisors: Sasikumar Punnekkat, Mälardalens Högskola, Sweden; Radu Dobrin, Mälardalens Högskola, Sweden
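To make the thesis's Fixed Preemption Points mechanism concrete, here is a minimal illustrative sketch in C — not code from the thesis, and all identifiers (preemption_requested, yield_to_scheduler, do_chunk_*) are hypothetical. The task is divided into non-preemptive regions, and a pending preemption takes effect only at the explicit points between them:

    #include <stdbool.h>

    /* Hypothetical task chunks: each is a non-preemptive region. */
    extern void do_chunk_1(void);
    extern void do_chunk_2(void);
    extern void do_chunk_3(void);

    /* Hypothetical kernel hook performing the actual context switch. */
    extern void yield_to_scheduler(void);

    /* Set by the scheduler (e.g., from a timer interrupt) when a
     * higher-priority job arrives; honored only at preemption points. */
    static volatile bool preemption_requested = false;

    /* A fixed preemption point: the only place this task may be
     * preempted, so the length of each chunk bounds the blocking the
     * task can impose on higher-priority jobs. */
    static inline void preemption_point(void)
    {
        if (preemption_requested) {
            preemption_requested = false;
            yield_to_scheduler();   /* task resumes here afterwards */
        }
    }

    void task_body(void)
    {
        do_chunk_1();        /* non-preemptive region 1 */
        preemption_point();  /* deferred preemption may happen here */
        do_chunk_2();        /* non-preemptive region 2 */
        preemption_point();
        do_chunk_3();        /* final non-preemptive region */
    }

The lazy/eager distinction then concerns the scheduler's side of this protocol: a lazy scheduler waits for the lowest priority job to reach such a point, whereas an eager one preempts the first lower priority job that does.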
Recommended publications
  • NUMA-Aware Thread Migration for High Performance NVMM File Systems
    NUMA-Aware Thread Migration for High Performance NVMM File Systems. Ying Wang, Dejun Jiang and Jin Xiong, SKL Computer Architecture, ICT, CAS; University of Chinese Academy of Sciences. {wangying01, jiangdejun, [email protected]

    Abstract—Emerging Non-Volatile Main Memories (NVMMs) provide persistent storage and can be directly attached to the memory bus, which allows building file systems on non-volatile main memory (NVMM file systems). Since file systems are built on memory, NUMA architecture has a large impact on their performance due to the presence of remote memory accesses and imbalanced resource usage. Existing works migrate threads and thread data on DRAM to solve these problems. Unlike DRAM, NVMM introduces extra latency and lifetime limitations. This results in expensive data migration for NVMM file systems on NUMA architecture. In this paper, we argue that NUMA-aware thread migration without migrating data is desirable for NVMM file systems. We propose NThread, a NUMA-aware thread migration module for NVMM file systems.

    … out considering the NVMM usage on NUMA nodes. Besides, application threads accessing the file system rely on the default operating system thread scheduler, which migrates threads considering only CPU utilization. This brings remote memory accesses and resource contention to application threads when reading and writing files, and thus reduces the performance of NVMM file systems. We observe that when performing file reads/writes from 4 KB to 256 KB on an NVMM file system (NOVA [47] on NVMM), the average latency of accessing the remote node increases by 65.5% compared to accessing the local node. The average bandwidth is reduced by …
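    As a rough sketch of the idea in this excerpt — moving a thread to the NUMA node that holds its data instead of migrating the data — the standard Linux libnuma API can re-pin the calling thread. This is an illustration under an assumed node layout, not NThread's actual implementation:

        /* Build with: gcc migrate.c -lnuma */
        #include <numa.h>
        #include <stdio.h>

        /* Pin the calling thread to the CPUs of one NUMA node so that
         * its accesses to memory (or NVMM) attached to that node stay
         * local. Only the thread moves; no data is migrated. */
        static int migrate_self_to_node(int node)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "libnuma: NUMA not supported here\n");
                return -1;
            }
            return numa_run_on_node(node);  /* 0 on success */
        }

        int main(void)
        {
            /* Assumed for illustration: the file pages we access live
             * on node 1, so running there avoids remote accesses. */
            if (migrate_self_to_node(1) == 0)
                printf("thread now runs on NUMA node 1\n");
            return 0;
        }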
  • Integrating Preemption Threshold Scheduling and Dynamic Voltage Scaling for Energy Efficient Real-Time Systems
    Integrating Preemption Threshold Scheduling and Dynamic Voltage Scaling for Energy Efficient Real-Time Systems. Ravindra Jejurikar (Centre for Embedded Computer Systems, University of California Irvine, Irvine, CA 92697, USA; jezz@cecs.uci.edu) and Rajesh Gupta (Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA 92093, USA; gupta@cs.ucsd.edu).

    Abstract. Preemption threshold scheduling (PTS) enables designing scalable real-time systems. PTS not only decreases the run-time overhead of the system, but can also be used to decrease the number of threads and the memory requirements of the system. In this paper, we combine preemption threshold scheduling with dynamic voltage scaling to enable energy-efficient scheduling in real-time systems. We consider scheduling with task priorities defined by the Earliest Deadline First (EDF) policy. We present an algorithm to compute preemption thresholds for tasks with given static slowdown factors. The proposed algorithm improves upon known algorithms in terms of time complexity. Experimental results show that preemption threshold scheduling reduces context switches by 90% on average, even in the presence of task slowdown. Further, we describe a dynamic slack reclamation technique that, working in conjunction with PTS, yields on average 10% additional energy savings.

    1 Introduction. With increasing mobility and proliferation of embedded systems, low power consumption is an important aspect of embedded systems design. Generally speaking, the processor consumes a significant portion of the total energy, primarily due to increased computational demands. Scaling the processor frequency and voltage based on the performance requirements can lead to considerable energy savings.
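    The preemption rule that PTS adds on top of regular priority scheduling is compact enough to state in code; the sketch below is a generic illustration (hypothetical struct task, higher-number-is-higher-priority convention), not the paper's algorithm for computing thresholds:

        #include <stdbool.h>

        /* Hypothetical task descriptor; larger number = higher priority. */
        struct task {
            int priority;              /* nominal (e.g., EDF-assigned) priority */
            int preemption_threshold;  /* >= priority; in effect while running  */
        };

        /* Core PTS rule: a ready task preempts the running task only if
         * its priority exceeds the running task's preemption threshold,
         * not merely the running task's nominal priority. */
        static bool should_preempt(const struct task *ready,
                                   const struct task *running)
        {
            return ready->priority > running->preemption_threshold;
        }

    Setting each threshold equal to the task's own priority recovers fully preemptive scheduling, while setting all thresholds to the maximum priority yields non-preemptive scheduling; intermediate values trade blocking against context-switch overhead.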
  • What Is an Operating System III — 2.1 Components II
    What is an Operating System III — 2.1 Components II

    An operating system (OS) is software that manages computer hardware and software resources and provides common services for computer programs. The operating system is an essential component of the system software in a computer system. Application programs usually require an operating system to function.

    Memory management. Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time-share, each program must have independent access to memory.

    Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaved program to crash the system.

    Memory protection enables the kernel to limit a process's access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
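    On systems with paging, this protection is directly observable from user space. A small POSIX sketch (assuming a Unix-like OS) asks the kernel to revoke write access to a page, after which any write to it faults:

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            long page = sysconf(_SC_PAGESIZE);

            /* One anonymous page, initially readable and writable. */
            char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                return 1;

            strcpy(p, "hello");               /* write is allowed */

            /* Ask the kernel to revoke write permission on the page. */
            if (mprotect(p, page, PROT_READ) != 0)
                return 1;

            printf("%s\n", p);                /* reads still succeed */
            /* p[0] = 'H';  <- would now fault: the MMU raises SIGSEGV */

            munmap(p, page);
            return 0;
        }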
  • Chapter 1. Origins of Mac OS X
    Chapter 1. Origins of Mac OS X

    "Most ideas come from previous ideas." — Alan Curtis Kay

    The Mac OS X operating system represents a rather successful coming together of paradigms, ideologies, and technologies that have often resisted each other in the past. A good example is the cordial relationship that exists between the command-line and graphical interfaces in Mac OS X. The system is a result of the trials and tribulations of Apple and NeXT, as well as their user and developer communities. Mac OS X exemplifies how a capable system can result from the direct or indirect efforts of corporations, academic and research communities, the Open Source and Free Software movements, and, of course, individuals.

    Apple has been around since 1976, and many accounts of its history have been told. If the story of Apple as a company is fascinating, so is the technical history of Apple's operating systems. In this chapter,[1] we will trace the history of Mac OS X, discussing several technologies whose confluence eventually led to the modern-day Apple operating system.

    [1] This book's accompanying web site (www.osxbook.com) provides a more detailed technical history of all of Apple's operating systems.

    1.1. Apple's Quest for the[2] Operating System

    [2] Whereas the word "the" is used here to designate prominence and desirability, it is an interesting coincidence that "THE" was the name of a multiprogramming system described by Edsger W. Dijkstra in a 1968 paper.

    It was March 1988. The Macintosh had been around for four years.
  • Resource Access Control in Real-Time Systems
    Resource Access Control in Real-time Systems. Advanced Operating Systems (M), Lecture 8.

    Lecture outline:
    • Definitions of resources
    • Resource access control for static systems
    • Basic priority inheritance protocol
    • Basic priority ceiling protocol
    • Enhanced priority ceiling protocols
    • Resource access control for dynamic systems
    • Effects on scheduling
    • Implementing resource access control

    Resources. A system has ρ types of resource R1, R2, …, Rρ. Each resource comprises n_k indistinguishable units; plentiful resources have no effect on scheduling and so are ignored. Each unit of resource is used in a non-preemptive and mutually exclusive manner; resources are serially reusable. If a resource can be used by more than one job at a time, we model that resource as having many units, each used mutually exclusively. Access to resources is controlled using locks: jobs attempt to lock a resource before starting to use it, and unlock the resource afterwards; the time the resource is locked is the critical section. If a lock request fails, the requesting job is blocked; a job holding a lock cannot be preempted by a higher priority job needing that lock. Critical sections may nest if a job needs multiple simultaneous resources.

    Contention for Resources. Jobs contend for a resource if they try to lock it at once. [Figure: EDF schedule of J1, J2, and J3 sharing a resource protected by locks, illustrating priority inversion; blue shading indicates critical sections.]
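    POSIX exposes the basic priority inheritance protocol discussed in this lecture directly. A minimal sketch creates a mutex with the PTHREAD_PRIO_INHERIT protocol, so a lock holder temporarily inherits the priority of the highest-priority thread it blocks:

        #include <pthread.h>

        /* Create a mutex using the basic priority inheritance protocol:
         * while a thread holds the lock, it runs at the highest priority
         * of any thread blocked on it, which limits how long a high-
         * priority job can be delayed by a lower-priority lock holder. */
        static int init_pi_mutex(pthread_mutex_t *m)
        {
            pthread_mutexattr_t attr;
            int err;

            if ((err = pthread_mutexattr_init(&attr)) != 0)
                return err;
            if ((err = pthread_mutexattr_setprotocol(&attr,
                                                     PTHREAD_PRIO_INHERIT)) != 0) {
                pthread_mutexattr_destroy(&attr);
                return err;
            }
            err = pthread_mutex_init(m, &attr);
            pthread_mutexattr_destroy(&attr);
            return err;
        }

    The priority ceiling variant is available analogously through PTHREAD_PRIO_PROTECT together with pthread_mutexattr_setprioceiling().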
  • A Hardware Preemptive Multitasking Mechanism Based on Scan-Path Register Structure for FPGA-Based Reconfigurable Systems
    A Hardware Preemptive Multitasking Mechanism Based on Scan-path Register Structure for FPGA-based Reconfigurable Systems. S. Jovanovic, C. Tanougast and S. Weber, Université Henri Poincaré, LIEN, 54506 Vandoeuvre-lès-Nancy, France. [email protected]

    Abstract. In this paper, we propose a hardware preemptive multitasking mechanism which uses a scan-path register structure and allows identifying the total register size of a task for FPGA-based reconfigurable systems. The main objective of this preemptive mechanism is to suspend a hardware task having low priority, replace it by a high-priority task, and restart it at another time (and/or from another area of the FPGA in FPGA-based designs). The main advantages of the proposed method are that it provides an attractive way of saving and restoring the context of a hardware task without freezing other tasks during preemption phases, and a small area overhead. We show its feasibility by designing a simple computing example as well as one of the …

    … all the registers as well as all memories used in the circuit and to determine the starting point of the restoration process. On the other hand, the goal of the restoration is to restore the task's context of processing as it was before its interruption, in order to continue the execution previously stopped. Several techniques allowing hardware preemption are proposed in the literature; we discuss the pros and cons of the most commonly used approaches in the following. The readback approach for saving and storing the context of an outgoing task is one of the approaches most used in FPGA-based designs. No additional hardware structures, simple implementation, no extra design effort and no extra hardware consumption are the most important advantages of this approach.
  • Memory and Cache Contention Denial-of-Service Attack in Mobile Edge Devices
    Applied Sciences, Article: Memory and Cache Contention Denial-of-Service Attack in Mobile Edge Devices. Won Cho and Joonho Kong*, School of Electronic and Electrical Engineering, Kyungpook National University, Daegu 41566, Korea; [email protected]. *Correspondence: [email protected]

    Abstract: In this paper, we introduce a memory and cache contention denial-of-service attack and its hardware-based countermeasure. Our attack can significantly degrade the performance of benign programs by hindering their shared resource accesses. It can be achieved with simple C-based malicious code, while degrading the performance of the benign programs by 47.6% on average. As another side effect, our attack also increases the energy consumption of the system by 2.1× on average, which may cause shorter battery life in mobile edge devices. We also propose detection and mitigation techniques for thwarting our attack. By analyzing L1 data cache miss request patterns, we effectively detect the malicious program mounting the memory and cache contention denial-of-service attack. For mitigation, we propose using instruction fetch width throttling techniques to restrict the malicious accesses to the shared resources. When employing our malicious program detection with the instruction fetch width throttling technique, we recover the system performance and energy by 92.4% and 94.7%, respectively, which means that the adverse impacts from the malicious programs are almost removed.

    Keywords: memory and cache contention; denial of service attack; shared resources; performance; energy
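    The excerpt does not reproduce the paper's malicious code; the sketch below is only our generic illustration of this class of attack, with the buffer and cache-line sizes as assumptions. Striding through a buffer far larger than the last-level cache at line granularity forces continual misses that evict victims' data and occupy the shared memory controller:

        #include <stdint.h>
        #include <stdlib.h>

        #define BUF_SIZE  (256UL * 1024 * 1024)  /* assumed >> LLC size */
        #define LINE_SIZE 64                     /* typical line size (assumed) */

        int main(void)
        {
            volatile uint8_t *buf = malloc(BUF_SIZE);
            if (!buf)
                return 1;

            /* Touch one byte per cache line, endlessly: nearly every
             * access misses in the shared last-level cache, evicting
             * co-running programs' data and saturating the memory
             * controller with requests. */
            for (;;)
                for (size_t i = 0; i < BUF_SIZE; i += LINE_SIZE)
                    buf[i]++;
        }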
  • A Case for NUMA-Aware Contention Management on Multicore Systems
    A Case for NUMA-aware Contention Management on Multicore Systems. Sergey Blagodurov, Sergey Zhuravlev, Mohammad Dashti, and Alexandra Fedorova, Simon Fraser University

    Abstract. On multicore systems, contention for shared resources occurs when memory-intensive threads are co-scheduled on cores that share parts of the memory hierarchy, such as last-level caches and memory controllers. Previous work investigated how contention could be addressed via scheduling. A contention-aware scheduler separates competing threads onto separate memory hierarchy domains to eliminate resource sharing and, as a consequence, to mitigate contention. However, all previous work on contention-aware scheduling assumed that the underlying system is UMA (uniform memory access latencies, single memory controller). Modern multicore systems, however, are NUMA, which means that they feature non-uniform memory access latencies and multiple memory controllers.

    … performance of individual applications or threads by as much as 80% and the overall workload performance by as much as 12% [23]. Unfortunately, studies of contention-aware algorithms focused primarily on UMA (Uniform Memory Access) systems, where there are multiple shared LLCs, but only a single memory node equipped with the single memory controller, and memory can be accessed with the same latency from any core. However, new multicore systems increasingly use the Non-Uniform Memory Access (NUMA) architecture, due to its decentralized and scalable nature. In modern NUMA systems, there are multiple memory nodes, one per memory domain (see Figure 1). Local nodes can be accessed in less time than remote ones, and each node has its own memory controller. When we ran the best known contention-aware sched…
  • Thread Evolution Kit for Optimizing Thread Operations on CE/IoT Devices
    Thread Evolution Kit for Optimizing Thread Operations on CE/IoT Devices. Geunsik Lim, Student Member, IEEE, Donghyun Kang, and Young Ik Eom

    Abstract—Most modern operating systems have adopted the one-to-one thread model to support fast execution of threads in both multi-core and single-core systems. This thread model, which maps kernel-space and user-space threads in a one-to-one manner, supports quick thread creation and termination in high-performance server environments. However, the performance of time-critical threads is degraded when multiple threads are run on low-end CE devices with limited system resources. When a CE device runs many threads to support diverse application functionalities, low-level hardware specifications often lead to significant resource contention among the threads trying to obtain system resources. As a result, the operating system encounters challenges, such as excessive thread context-switching overhead, execution delay of time-critical threads, and a lack of …

    … the threads running on CE/IoT devices often unintentionally spend a significant amount of time holding the CPU, and the frequency of context switches rapidly increases due to the limited system resources, degrading the performance of the system significantly. In addition, since CE/IoT devices usually have limited memory space, they may suffer from the segmentation fault [16] problem incurred by memory shortages as the number of threads increases and they remain running for a long time. Some engineers have attempted to address the challenges of IoT environments such as smart homes by using better hardware specifications for CE/IoT devices [3], [17]–[21].
  • Computer Architecture Lecture 12: Memory Interference and Quality of Service
    Computer Architecture, Lecture 12: Memory Interference and Quality of Service. Prof. Onur Mutlu, ETH Zürich, Fall 2017 (1 November 2017)

    Summary of last week's lectures:
    • Control dependence handling: the problem and six solutions
    • Branch prediction
    • Trace caches
    • Other methods of control dependence handling: fine-grained multithreading, predicated execution, multi-path execution

    Agenda for today:
    • Shared vs. private resources in multi-core systems
    • Memory interference and the QoS problem
    • Memory scheduling
    • Other approaches to mitigate and control memory interference

    Quick summary papers:
    • "Parallelism-Aware Batch Scheduling: Enhancing both Performance and Fairness of Shared DRAM Systems"
    • "The Blacklisting Memory Scheduler: Achieving High Performance and Fairness at Low Cost"
    • "Staged Memory Scheduling: Achieving High Performance and Scalability in Heterogeneous Systems"
    • "Parallel Application Memory Scheduling"
    • "Reducing Memory Interference in Multicore Systems via Application-Aware Memory Channel Partitioning"

    Shared resource design for multi-core systems; the memory system as a shared resource. Resource sharing concept — idea: instead of dedicating a hardware resource to one hardware context, allow multiple contexts to use it (example resources: functional units, pipeline, caches, buses, memory). Why? Resource sharing improves utilization/efficiency and hence throughput: when a resource is left idle by one thread, another thread can use it, with no need to replicate shared data. It also reduces communication latency. For example, …
  • Preemptive Multitasking on Atmel® AVR® Microcontroller
    Advances in Information Science and Computer Engineering

    Preemptive Multitasking on Atmel® AVR® Microcontroller. Habibur Rahman and Senthil Arumugam Muthukumaraswamy, School of Engineering & Physical Sciences, Heriot-Watt University Dubai Campus, Dubai International Academic City, United Arab Emirates. [email protected]; [email protected]

    Abstract: This paper demonstrates the need for multitasking, a scenario where multitasking is the only solution, and how it can be achieved on an 8-bit AVR® microcontroller. This project explains how to create a simple kernel in a single C file and execute any number of tasks in a multithreaded fashion. It first explains how the AVR® engine works and how it switches between different tasks using a preemptive scheduling algorithm, with the flexibility of blocking a task or allowing it more execution time based on its priority level. The code written for this project is basically C; however, the kernel code consists mostly of assembly functions called from C. The development environment is Atmel Studio®. The code is written so that it can be ported to any 8-bit AVR® microcontroller; this project demonstrates the results both in simulation and on the ATmega8A device.

    Key-Words: AVR®, Microcontroller, Threading, Preemptive Scheduling, Multitasking.

    1 Introduction. Microcontroller development has been exponential over the past two decades. Development in terms of speed, transistor density, multi-core embedding, as well as the number of peripheral subsystems on an SoC (System on Chip), is common, and development tools and compilers have been created that allow writing code for microcontrollers at a higher level of abstraction …

    … architecture, and will eventually lead to downloading and compiling hundreds of files according to the provided operating system instructions, as well as spending time learning how to use it. But if a user who has a basic understanding of the C language and a very good understanding of microcontrollers wants to implement multitasking … an easy task.
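    The paper's kernel source is not included in the excerpt. As a hedged sketch of the typical shape of such a kernel (our assumption, not the paper's code), the C-visible part reduces to a task control block per thread and a scheduler called from the timer interrupt, with the register save/restore done in assembly, as the excerpt describes:

        #include <stdint.h>

        #define MAX_TASKS 4

        /* Per-task control block: saved stack pointer plus scheduling state. */
        struct tcb {
            uint8_t *sp;        /* stack pointer saved at the last switch */
            uint8_t  priority;  /* larger value = higher priority         */
            uint8_t  blocked;   /* nonzero while waiting (not runnable)   */
        };

        static struct tcb tasks[MAX_TASKS];
        static uint8_t num_tasks;
        static uint8_t current;   /* index of the running task */

        /* Called from the periodic timer interrupt: choose the highest-
         * priority runnable task. The surrounding save/restore of the
         * 32 AVR registers, SREG, and the stack pointer is assembly. */
        uint8_t schedule_next(void)
        {
            uint8_t best = current;
            for (uint8_t i = 0; i < num_tasks; i++) {
                if (tasks[i].blocked)
                    continue;                 /* skip waiting tasks */
                if (tasks[best].blocked ||
                    tasks[i].priority > tasks[best].priority)
                    best = i;
            }
            current = best;
            return best;                      /* switch to tasks[best] */
        }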
  • UNIX Evolution and Standardization
    Chapter 1. UNIX Evolution and Standardization

    This chapter introduces UNIX from a historical perspective, showing how the various UNIX versions have evolved over the years since the very first implementation in 1969 to the present day. The chapter also traces the history of the different attempts at standardization that have produced widely adopted standards such as POSIX and the Single UNIX Specification. The material presented here is not intended to document all of the UNIX variants, but rather describes the early UNIX implementations along with those companies and bodies that have had a major impact on the direction and evolution of UNIX.

    A Brief Walk through Time. There are numerous events in the computer industry that have occurred since UNIX started life as a small project in Bell Labs in 1969. UNIX history has been largely influenced by Bell Labs' Research Editions of UNIX, AT&T's System V UNIX, Berkeley's Software Distribution (BSD), and Sun Microsystems' SunOS and Solaris operating systems. The following list shows the major events that have happened throughout the history of UNIX. Later sections describe some of these events in more detail.

    1969. Development on UNIX starts in AT&T's Bell Labs.
    1971. 1st Edition UNIX is released.
    1973. 4th Edition UNIX is released. This is the first version of UNIX that had the kernel written in C.
    1974. Ken Thompson and Dennis Ritchie publish their classic paper, "The UNIX Timesharing System" [RITC74].