QNX® Neutrino® Realtime Operating System

Total pages: 16 · File type: PDF · Size: 1,020 KB

QNX® Neutrino® Realtime Operating System
Programmer's Guide
For QNX Neutrino 6.3

© 2004, QNX Software Systems Ltd.

QNX Software Systems Ltd.
175 Terence Matthews Crescent
Kanata, Ontario K2M 1W8
Canada
Voice: 613-591-0931
Fax: 613-591-3579
Email: [email protected]
Web: http://www.qnx.com/

© QNX Software Systems Ltd. 2004. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise) without the prior written permission of QNX Software Systems Ltd. Although every precaution has been taken in the preparation of this book, we assume no responsibility for any errors or omissions, nor do we assume liability for damages resulting from the use of the information contained in this book.

Technical support options: To obtain technical support for any QNX product, visit the Technical Support section in the Support area on our website (www.qnx.com). You'll find a wide range of support options, including our free web-based QNX Developer's Network.

QNX, Momentics, Neutrino, and Photon are registered trademarks of QNX Software Systems Ltd. All other trademarks and registered trademarks belong to their respective owners.

Contents

About This Book  xvii
  Note to Windows users  xx
  Recommended reading  xx

1  Compiling and Debugging  1
  Choosing the version of the OS  3
  Conforming to standards  4
  Header files in include  7
  Self-hosted or cross-development  7
    A simple example  8
    Self-hosted  10
    Cross-development with network filesystem  10
    Cross-development with debugger  11
    Cross-development, deeply embedded  11
  Using libraries  14
    Static linking  15
    Dynamic linking  15
    Runtime loading  15
    Static and dynamic libraries  15
    Platform-specific library locations  17
  Linking your modules  18
    Creating shared objects  19
  Debugging  20
    Debugging in a self-hosted environment  20
    Debugging in a cross-development environment  21
    The GNU debugger (gdb)  22
    The process-level debug agent  22
  A simple debug session  29
    Configure the target  30
    Compile for debugging  30
    Start the debug session  30
    Get help  31
    Sample boot image  33

2  Programming Overview  35
  Process model  37
    An application as a set of processes  38
  Processes and threads  39
    Some definitions  39
  Priorities and scheduling  41
    Priority range  41
    BLOCKED and READY states  42
    The ready queue  43
    Suspending a running thread  45
    When the thread is blocked  45
    When the thread is preempted  45
    When the thread yields  46
    Scheduling algorithms  46
    FIFO scheduling  47
    Round-robin scheduling  48
  Why threads?  49
  Summary  50

3  Processes  51
  Starting processes — two methods  53
    Process creation  53
    Concurrency  54
    Using fork() and forkpty()  55
    Inheriting file descriptors  55
  Process termination  56
    Normal process termination  57
    Abnormal process termination  57
    Effect of parent termination  59
  Detecting process termination  59

4  Writing a Resource Manager  71
  What is a resource manager?  73
    Why write a resource manager?  74
    Under the covers  77
    The types of resource managers  82
  Components of a resource manager  83
    iofunc layer  83
    resmgr layer  84
    dispatch layer  85
    thread pool layer  87
  Simple device resource manager examples  87
    Single-threaded device resource manager example  88
    Multi-threaded device resource manager example  94
  Data carrying structures  97
    The Open Control Block (OCB) structure  98
    The attribute structure  99
    The mount structure  105
  Handling the _IO_READ message  107
    Sample code for handling _IO_READ messages  108
    Ways of adding functionality to the resource manager  112
  Handling the _IO_WRITE message  116
    Sample code for handling _IO_WRITE messages  117
  Methods of returning and replying  119
    Returning with an error  120
    Returning using an IOV array that points to your data  120
    Returning with a single buffer containing data  121
    Returning success but with no data  121
    Getting the resource manager library to do the reply  122
    Performing the reply in the server  122
    Returning and telling the library to do the default action  124
  Handling other read/write details  125
    Handling the xtype member  125
    Handling pread*() and pwrite*()  127
    Handling readcond()  129
  Attribute handling  129
    Updating the time for reads and writes  130
  Combine messages  131
    Where combine messages are used  131
    The library's combine-message handling  133
  Extending Data Control Structures (DCS)  139
    Extending the OCB and attribute structures  139
    Extending the mount structure  142
  Handling devctl() messages  142
    Sample code for handling _IO_DEVCTL messages  145
  Handling ionotify() and select()  149
    Sample code for handling _IO_NOTIFY messages  153
  Handling private messages and pulses  160
  Handling open(), dup(), and close() messages  163
  Handling client unblocking due to signals or timeouts  164
  Handling interrupts  166
    Sample code for handling interrupts  167
  Multi-threaded resource managers  169
    Multi-threaded resource manager example  169
    Thread pool attributes  171
    Thread pool functions  174
  Filesystem resource managers  175
    Considerations for filesystem resource managers  175
    Taking over more than one device  175
    Handling directories  177
  Message types  183
    Connect messages  183
    I/O messages  183
  Resource manager data structures  184

5  Transparent Distributed Processing Using Qnet  187
  What is Qnet?  189
  Benefits of Qnet  189
    What works best  190
    What type of application is well-suited for Qnet?  191
    Qnet drivers  191
  How does it work?  192
  Locating services using GNS  196
  Quality of Service (QoS) and multiple paths  205
  Designing a system using Qnet  208
    The product  208
    Developing your distributed system  209
    Configuring the data cards  209
    Configuring the controller card  210
    Enhancing reliability via multiple transport buses  211
    Redundancy and scalability using multiple controller cards  213
  Autodiscovery vs static  214
  When should you use Qnet, TCP/IP or NFS?  215
  Writing a driver for Qnet  218

6  Writing an Interrupt Handler  223
  Overview  225
  Attaching and detaching interrupts  225
  The Interrupt Service Routine (ISR)  226
  Advanced topics  236
    Interrupt environment  236
    Ordering of shared interrupts  237
    Interrupt latency  237
    Atomic Operations  237

7  Heap Analysis: Making Memory Errors a Thing of the Past  239
  Introduction  241
  Dynamic Memory Management  241
  Heap Corruption  242
    Common sources  244
  Detecting and Reporting Errors  246
    Using the malloc debug library  247
    What's checked?  250
    Controlling the level of checking  251
  Manual Checking (Bounds Checking)  255
    Getting pointer information  256
    Getting the heap buffer size  257
  Memory Leaks  258
    Tracing  258
    Causing a trace and giving results  259
    Analyzing dumps  260
  Compiler Support  261
    C++ issues  261
    Bounds checking GCC  263
  Summary  264

A  Freedom from Hardware and Platform Dependencies  265
  Common problems  267
    I/O space vs memory-mapped  267
    Big-endian vs little-endian  268
    Alignment and structure packing  269
    Atomic operations  270
  Solutions  270
    Determining endianness  270
    Swapping data if required  271
    Accessing unaligned data  272
    Examples  273
    Accessing I/O ports  276

B  Conventions for Makefiles and Directories  279
  Structure  281
  Makefile structure  283
    The recurse.mk file  283
    Macros  284
  Directory structure  286
    The project level  286
    The section level (optional)  286
    The OS level  286
    The CPU level  287
    The variant level  287
  Specifying options  287
    The common.mk file  287
    The variant-level makefile  288
    Recognized variant names  288
  Using the standard macros and include files  290
    The qconfig.mk include file  291
    The qrules.mk include file  294
    The qtargets.mk include file  298
  Advanced topics  299
    Collapsing unnecessary directory levels  300
    Performing partial builds  301
    More uses for LIST  302
    GNU configure  303

C  Developing SMP Systems  309
  Introduction  311
  Building an SMP image  311
  The impact of SMP  312
    To SMP or not to SMP  312
    Processor affinity  312
    SMP and synchronization primitives  313
    SMP and FIFO scheduling  313
    SMP and interrupts  313
    SMP and atomic operations  314
  Designing with SMP in mind  315
    Use the SMP primitives  315
    Assume that threads really do run concurrently  316
    Break the problem down  316

D  Using GDB  319
  GDB commands  321
    Command syntax  321
    Command completion  322
    Getting help  324
  Running programs under GDB  327
    Compiling for debugging  328
    Setting the target  328
    Starting your program  329
    Your program's arguments  330
    Your program's environment  331
    Your program's input and output  332
    Debugging an already-running process  333
    Killing the child process  334
    Debugging programs with multiple threads  334
    Debugging programs with multiple processes  336
  Stopping and continuing  337
    Breakpoints, watchpoints, and exceptions  337
    Continuing and stepping  352
    Signals  357
    Stopping and starting multithreaded programs  359
  Examining the stack  360
    Stack frames  361
    Backtraces  362
    Selecting a frame  363
    Information about a frame  365
    MIPS machines and the function stack  366
  Examining source files  367
    Printing source lines  367
    Searching source files  369
    Specifying source directories  370
    Source and machine code  371
  Shared libraries  373
  Examining data  374
    Expressions  375
    Program variables  376
    Artificial arrays  378
    Output formats  379
    Examining memory  381
    Automatic display  383
    Print settings  385
    Value history  392
    Convenience variables  394
    Registers  396
    Floating point hardware  398
  Examining the symbol table …
Recommended publications
  • Kernel-Scheduled Entities for FreeBSD
    Kernel-Scheduled Entities for FreeBSD. Jason Evans, [email protected]. November 7, 2000.

    Abstract: FreeBSD has historically had less than ideal support for multi-threaded application programming. At present, there are two threading libraries available. libc_r is entirely invisible to the kernel and multiplexes threads within a single process. The linuxthreads port, which creates a separate process for each thread via rfork(), plus one thread to handle thread synchronization, relies on the kernel scheduler to multiplex "threads" onto the available processors. Both approaches have scaling problems: libc_r does not take advantage of multiple processors and cannot avoid blocking when doing I/O on "fast" devices, while the linuxthreads port requires one process per thread, putting strain on the kernel scheduler as well as requiring significant kernel resources. In addition, thread switching can only be as fast as the kernel's process switching. This paper summarizes various methods of implementing threads for application programming, then goes on to describe a new threading architecture for FreeBSD, based on what are called kernel-scheduled entities, or scheduler activations.

    1 Background: FreeBSD has been slow to implement application threading facilities, perhaps because threading is difficult to implement correctly and because of FreeBSD's general reluctance to build substandard solutions to problems. Instead, two technologies originally developed by others have been integrated. libc_r is based on Chris Provenzano's userland pthreads implementation, and was significantly reworked by John Birrell and integrated into the FreeBSD source tree. Since then, numerous other people have improved and expanded libc_r's capabilities. At this time, libc_r is a high-quality userland implementation of POSIX threads.
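    Both libraries present the same POSIX threads API to applications; they differ only in how threads are mapped onto the kernel. As a point of reference, here is a minimal standard pthreads program (not from the paper) that either implementation would have to support:

        /* Minimal POSIX threads usage: the application-level API that
         * libc_r (userland M:N) and the linuxthreads port (one kernel
         * process per thread) both implement, despite their very
         * different kernel footprints. */
        #include <pthread.h>
        #include <stdio.h>

        static void *worker(void *arg)
        {
            printf("thread %d running\n", *(int *)arg);
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[4];
            int id[4];

            for (int i = 0; i < 4; i++) {
                id[i] = i;
                pthread_create(&tid[i], NULL, worker, &id[i]);
            }
            for (int i = 0; i < 4; i++)
                pthread_join(tid[i], NULL);   /* wait for all workers */
            return 0;
        }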
  • 2. Virtualizing CPU (2)
    Operating Systems, 2. Virtualizing the CPU. Pablo Prieto Torralbo, Department of Computer Engineering and Electronics. This material is published under Creative Commons BY-NC-SA 4.0.

    2.1 Virtualizing the CPU: processes

    What is a process? A running program.
    • Program: static code and static data sitting on the disk.
    • Process: a dynamic instance of a program. You can have multiple instances (processes) of the same program (or none), and users usually run more than one program at a time: a web browser, mail program, music player, a game…
    • The process is the OS's abstraction for execution (often called a job, task, and so on) and is the unit of scheduling.

    A program consists of:
    • Code: machine instructions.
    • Data: variables stored and manipulated in memory: initialized (global) variables, dynamically allocated data (malloc, new), and stack variables (function arguments, C automatic variables).

    What is added to a program to become a process?
    • DLLs: libraries not compiled or linked with the program (probably shared with other programs).
    • OS resources: open files…

    [Diagram: preparing a program. An editor produces source code; the compiler/assembler produces object code; the linker combines it with other objects and dynamic libraries into an executable program file; the loader places the executable into memory as a process.]

    [Diagram: process creation. The process address space holds code, static data, OS resources/DLLs, heap, and stack; the OS reads the program from disk and places it into the address space of the process, eagerly or lazily.]

    Process state: each process has an execution state, which indicates what it is currently doing (see ps -l, top):
    • Running (R): executing on the CPU; this is the process that currently controls the CPU. How many processes can be running simultaneously?
    • Ready/Runnable (R): ready to run and waiting to be assigned the CPU by the OS; it could run, but another process has the CPU. (Ready and running are the same state, TASK_RUNNING, in Linux.)
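    To make the program-versus-process distinction concrete, here is a small C sketch (not from the slides) in which one program gives rise to two processes via fork():

        /* One program, two processes: fork() duplicates the running
         * process, illustrating that a process is a dynamic instance
         * of a program. */
        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();

            if (pid == 0) {
                printf("child:  pid=%d\n", getpid());  /* new process, same code */
            } else if (pid > 0) {
                printf("parent: pid=%d, child=%d\n", getpid(), pid);
                wait(NULL);                            /* reap the child */
            } else {
                perror("fork");
            }
            return 0;
        }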
  • Processor Affinity Or Bound Multiprocessing?
    Processor Affinity or Bound Multiprocessing? Easing the Migration to Embedded Multicore Processing. Shiv Nagarajan, Ph.D., QNX Software Systems, [email protected].

    Abstract: Thanks to higher computing power and system density at lower clock speeds, multicore processing has gained widespread acceptance in embedded systems. Designing systems that make full use of multicore processors remains a challenge, however, as does migrating systems designed for single-core processors. Bound multiprocessing (BMP) can help with these designs and these migrations. It adds a subtle but critical improvement to symmetric multiprocessing (SMP) processor affinity: it implements runmask inheritance, a feature that allows developers to bind all threads in a process or even a subsystem to a specific processor without code changes.

    Sidebar: The QNX® Neutrino® RTOS has been multiprocessor capable since 1997, and is deployed in hundreds of thousands of multicore processors in embedded environments. It supports AMP, SMP — and BMP.

    Introduction: Effective use of multicore processors can profoundly improve the performance of embedded systems and applications ranging from industrial control systems to multimedia platforms. Designing systems that make effective use of multiprocessors remains a challenge, however. The problem is not just one of making full use of the new multicore environments to optimize performance. Developers must not only guarantee that the systems they themselves write will run correctly on multicore processors, but they must also ensure that applications written for single-core processors will not fail or cause other applications to fail when they are migrated to a multicore environment. To complicate matters further, these new and legacy applications can contain third-party code, which developers may not be able to change.
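    The paper describes BMP at the concept level. As an illustrative sketch only, assuming the QNX Neutrino ThreadCtl() runmask interface of that era, binding the calling thread to a specific CPU looks roughly like this; BMP's runmask inheritance then extends such a binding to newly created threads without further application code:

        /* Sketch: bind the calling thread to CPU 0 on QNX Neutrino
         * using a runmask (bit n set = may run on CPU n). Assumes the
         * 6.3-era _NTO_TCTL_RUNMASK command; runmask *inheritance* is
         * an OS feature and needs no code change in the application. */
        #include <sys/neutrino.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned runmask = 0x1;   /* CPU 0 only */

            if (ThreadCtl(_NTO_TCTL_RUNMASK,
                          (void *)(uintptr_t)runmask) == -1) {
                perror("ThreadCtl(_NTO_TCTL_RUNMASK)");
                return 1;
            }
            /* ... the calling thread now executes only on CPU 0 ... */
            return 0;
        }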
  • CPU Scheduling
    CPU Scheduling. CS 416: Operating Systems Design, Department of Computer Science, Rutgers University. http://www.cs.rutgers.edu/~vinodg/teaching/416

    What and why? What is processor scheduling, and why do it?
    • At first, to share an expensive resource: multiprogramming.
    • Now, to perform concurrent tasks, because the processor is so powerful.
    • The future looks like the past plus the present: computing utilities (large data/processing centers) use multiprogramming to maximize resource utilization, and systems are still powerful enough for each user to run multiple concurrent tasks.

    Assumptions:
    • A pool of jobs contends for the CPU.
    • Jobs are independent and compete for resources (this assumption is not true for all systems/scenarios).
    • The scheduler mediates between jobs to optimize some performance criterion.
    • In this lecture, we talk about processes and threads interchangeably, and we assume a single-threaded CPU.

    Multiprogramming example: processes A and B each alternate between 1-second bursts of computation and idle periods waiting for input, then stop; run alone, each takes 10 seconds.
    • Run back to back, total time = 20 seconds. Throughput = 2 jobs in 20 seconds = 0.1 jobs/second. Average waiting time = (0 + 10)/2 = 5 seconds.
    • Run overlapped (context switch to B while A idles, and back to A), total time = 11 seconds. Throughput = 2 jobs in 11 seconds = 0.18 jobs/second. Avg. …
  • CPU Scheduling
    Chapter 5: CPU Scheduling. Operating System Concepts, 10th Edition. Silberschatz, Galvin and Gagne, © 2018.

    Outline: Basic Concepts; Scheduling Criteria; Scheduling Algorithms; Thread Scheduling; Multi-Processor Scheduling; Real-Time CPU Scheduling; Operating Systems Examples; Algorithm Evaluation.

    Objectives:
    • Describe various CPU scheduling algorithms.
    • Assess CPU scheduling algorithms based on scheduling criteria.
    • Explain the issues related to multiprocessor and multicore scheduling.
    • Describe various real-time scheduling algorithms.
    • Describe the scheduling algorithms used in the Windows, Linux, and Solaris operating systems.
    • Apply modeling and simulations to evaluate CPU scheduling algorithms.

    Basic concepts:
    • Maximum CPU utilization is obtained with multiprogramming.
    • CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait (a CPU burst followed by an I/O burst).
    • The CPU burst distribution is of main concern.

    [Figure: histogram of CPU-burst times, showing a large number of short bursts and a small number of longer bursts.]

    CPU scheduler: the CPU scheduler selects from among the processes in the ready queue and allocates a CPU core to one of them. The queue may be ordered in various ways. CPU scheduling decisions may take place when a …
  • CPU Scheduling
    CS 422/522 Design & Implementation of Operating Systems, Lecture 11: CPU Scheduling (10/6/16). Zhong Shao, Dept. of Computer Science, Yale University. Acknowledgement: some slides are taken from previous versions of the CS 422/522 lectures taught by Prof. Bryan Ford and Dr. David Wolinsky, and also from the official set of slides accompanying the OSPP textbook by Anderson and Dahlin.

    CPU scheduler:
    • Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
    • CPU scheduling decisions may take place when a process: (1) switches from running to waiting state; (2) switches from running to ready state; (3) switches from waiting to ready; (4) terminates.
    • Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive.

    Main points:
    • Scheduling policy: what to do next when there are multiple threads ready to run (or multiple packets to send, or web requests to serve, or …). A toy round-robin simulation follows this excerpt.
    • Definitions: response time, throughput, predictability.
    • Uniprocessor policies: FIFO, round robin, optimal; multilevel feedback as an approximation of optimal.
    • Multiprocessor policies: affinity scheduling, gang scheduling.
    • Queueing theory: can you predict or improve a system's response time?

    Example: you manage a web site that suddenly becomes wildly popular. Do you buy more hardware? Implement a different scheduling policy? Turn away some users, and if so, which ones? How much worse will performance get if the web site becomes even more popular?

    Definitions: Task/Job: a user request, e.g., a mouse …
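    To make round robin concrete, here is a toy C simulation (illustrative burst lengths and quantum, not from the lecture) that reports when each job completes under a fixed time slice:

        /* Toy round-robin simulator: three jobs with known CPU demand
         * and a fixed quantum; prints each job's completion time.
         * Purely illustrative: real schedulers face unknown burst
         * lengths and I/O blocking. */
        #include <stdio.h>

        int main(void)
        {
            int remaining[3] = {5, 3, 8};   /* CPU time each job still needs */
            int done = 0, clock = 0, quantum = 2;

            while (done < 3) {
                for (int j = 0; j < 3; j++) {
                    if (remaining[j] == 0)
                        continue;
                    int slice = remaining[j] < quantum ? remaining[j] : quantum;
                    clock += slice;          /* job j runs for one slice */
                    remaining[j] -= slice;
                    if (remaining[j] == 0) {
                        printf("job %d finishes at t=%d\n", j, clock);
                        done++;
                    }
                }
            }
            return 0;
        }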
  • A Novel Hard-Soft Processor Affinity Scheduling for Multicore Architecture Using Multiagents
    European Journal of Scientific Research, ISSN 1450-216X, Vol. 55 No. 3 (2011), pp. 419-429. © EuroJournals Publishing, Inc. 2011. http://www.eurojournals.com/ejsr.htm

    A Novel Hard-Soft Processor Affinity Scheduling for Multicore Architecture using Multiagents. G. Muneeswari, Research Scholar, Computer Science and Engineering Department, RMK Engineering College, Anna University-Chennai, Tamilnadu, India, [email protected]; K. L. Shunmuganathan, Professor & Head, Computer Science and Engineering Department, RMK Engineering College, Anna University-Chennai, Tamilnadu, India, [email protected].

    Abstract: A multicore architecture, otherwise called an SoC, consists of a large number of processors packed together on a single chip and uses hyperthreading technology. This increase in processor cores brings new advancements in simultaneous and parallel computing. Apart from enormous performance enhancement, the multicore platform adds many challenges to critical task execution on the processor cores, which is a crucial part from the operating system scheduling point of view. In this paper we combine the AMAS theory of multiagent systems with affinity-based processor scheduling, which is best suited for critical task execution on multicore platforms. This hard-soft processor affinity scheduling algorithm promises to minimize the average waiting time of the non-critical tasks in the centralized queue and avoids context switching of critical tasks: we assign hard affinity to critical tasks and soft affinity to non-critical tasks. The algorithm is applicable to systems whose ready queue contains both critical and non-critical tasks. We modified and simulated the Linux 2.6.11 kernel process scheduler to incorporate this scheduling concept.
  • A Study on Setting Processor or CPU Affinity in Multi-Core Architecture for Parallel Computing
    International Journal of Science and Research (IJSR), ISSN (Online): 2319-7064. Index Copernicus Value (2013): 6.14 | Impact Factor (2013): 4.438.

    A Study on Setting Processor or CPU Affinity in Multi-Core Architecture for Parallel Computing. Saroj A. Shambharkar, Kavikulguru Institute of Technology & Science, Ramtek-441 106, Dist. Nagpur, India.

    Abstract: In the early days of single-core systems, a great deal of work was done by the single processor: scheduling in the kernel, servicing interrupts, running tasks concurrently, and so on. Performance could degrade for time-critical applications. There are real-time applications, such as weather forecasting, car crash analysis, and military and defense applications, where speed is important. To improve speed, adding multiple cores or a multi-core architecture is not sufficient; it is important to use them efficiently. In a multi-core architecture, parallelism is required to make effective use of the hardware present in the system. It is important that application developers take care with processor affinity, which may affect the performance of their application. When a multi-core system runs a multithreaded application, the migration of a process or thread from one core to another risks losing cache locality or memory locality. The main motivation behind setting processor affinity is to resist the migration of processes or threads in multi-core systems.

    Keywords: Cache Locality, Memory Locality, Multi-core Architecture, Migration, Ping-Pong effect, Processor Affinity.

    1. Processor Affinity …
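    On Linux, the kind of explicit core binding the paper studies is typically done with sched_setaffinity(). The following sketch (an assumed-typical usage, not code from the paper) pins the calling process to core 0 so the scheduler cannot migrate it:

        /* Sketch: pin the calling process to core 0 on Linux to avoid
         * the migration/"ping-pong" effect the paper describes.
         * Assumes the glibc sched_setaffinity() interface. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(0, &set);               /* allow core 0 only */

            /* pid 0 means "the calling process" */
            if (sched_setaffinity(0, sizeof(set), &set) == -1) {
                perror("sched_setaffinity");
                return 1;
            }
            /* ... cache and memory locality are now preserved on core 0 ... */
            return 0;
        }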
  • Towards Transparent CPU Scheduling, by Joseph T. Meehean
    Towards Transparent CPU Scheduling. By Joseph T. Meehean. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the University of Wisconsin–Madison, 2011. © Copyright by Joseph T. Meehean 2011. All Rights Reserved.

    To Heather for being my best friend; Joe Passamani for showing me what's important; Rick and Mindy for reminding me; Annie and Nate for taking me out; Greg and Ila for driving me home.

    Acknowledgments. "Nothing of me is original. I am the combined effort of everybody I've ever known." — Chuck Palahniuk (Invisible Monsters)

    My committee has been indispensable. Their comments have been enlightening and valuable. I would especially like to thank Andrea for meeting with me every week whether I had anything worth talking about or not. Her insights and guidance were instrumental. She immediately recognized the problems we were seeing as originating from the CPU scheduler; I was incredulous. My wife was a great source of support during the creation of this work. In the beginning, it was her and I against the world, and my success is a reflection of our teamwork. Without her I would be like a stray dog: dirty, hungry, and asleep in the afternoon. I would also like to thank my family for providing a constant reminder of the truly important things in life. They kept faith that my PhD madness would pass without pressuring me to quit or sandbag. I will do my best to live up to their love and trust. The thing I will miss most about leaving graduate school is the people.
  • Processor Scheduling
    Processor Scheduling.

    Background:
    • The previous lecture introduced the basics of concurrency: processes and threads (definition, representation, management).
    • We now understand how a programmer can spawn concurrent computations.
    • The OS now needs to partition one of the central resources, the CPU, between these concurrent tasks.

    Scheduling:
    • The scheduler is the manager of the CPU resource. It makes allocation decisions: it chooses to run certain processes over others from the ready queue.
      ◦ Zero threads: just loop in the idle loop.
      ◦ One thread: just execute that thread.
      ◦ More than one thread: now the scheduler has to make a resource allocation decision.
    • Threads alternate between performing I/O and performing computation.
    • In general, the scheduler runs:
      ◦ when a process switches from running to waiting,
      ◦ when a process is created or terminated,
      ◦ when an interrupt occurs.
    • In a non-preemptive system, the scheduler waits for a running process to explicitly block, terminate, or yield.
    • In a preemptive system, the scheduler can interrupt a process that is running.
    • The scheduling algorithm determines how jobs are scheduled.

    [State diagram: New → Ready → Running → Terminated, with Running ↔ Waiting and Running → Ready transitions.]

    Scheduling evaluation metrics. There are many possible quantitative criteria for evaluating a scheduling algorithm (a worked computation follows this list):
    • CPU utilization: percentage of time the CPU is not idle.
    • Throughput: completed processes per time unit.
    • Turnaround time: submission to completion.
    • Waiting time: time spent on the ready queue.
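    These metrics are easy to compute for a concrete schedule. The sketch below (illustrative burst lengths, not numbers from the lecture) derives waiting and turnaround time for three jobs run FIFO, all arriving at t=0:

        /* Worked example: waiting and turnaround time under FIFO for
         * three jobs arriving at t=0 with known burst lengths. */
        #include <stdio.h>

        int main(void)
        {
            int burst[3] = {24, 3, 3};
            int start = 0, total_turnaround = 0, total_wait = 0;

            for (int j = 0; j < 3; j++) {
                int finish = start + burst[j];
                printf("job %d: waits %2d, turnaround %2d\n", j, start, finish);
                total_wait += start;          /* waiting = time on ready queue */
                total_turnaround += finish;   /* arrival at t=0, so turnaround = finish */
                start = finish;
            }
            printf("avg waiting %.1f, avg turnaround %.1f\n",
                   total_wait / 3.0, total_turnaround / 3.0);
            return 0;
        }

    Running the three jobs in the opposite order would shrink the average waiting time considerably, which is exactly the kind of trade-off these metrics expose.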
  • Module 6: CPU Scheduling
    Chapter 5: CPU Scheduling. Da-Wei Chang, CSIE, NCKU. Source: Abraham Silberschatz, Peter B. Galvin, and Greg Gagne, "Operating System Concepts", 10th Edition, Wiley.

    Outline: Basic Concepts; Scheduling Criteria; Scheduling Algorithms; Multiple-Processor Scheduling; Thread Scheduling; Operating Systems Examples; Algorithm Evaluation.

    Basic concepts:
    • Scheduling is a basis of multiprogramming: switching the CPU among processes improves CPU utilization.
    • CPU–I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait.

    [Figures: alternating sequence of CPU and I/O bursts; histogram of CPU-burst times. The CPU burst distribution shows a large number of short CPU bursts and a small number of long CPU bursts: I/O-bound processes have many short CPU bursts and few long ones, while CPU-bound processes have more long CPU bursts.]

    CPU scheduler:
    • The short-term scheduler selects among the processes in memory that are ready to execute, and allocates the CPU to one of them.
    • CPU scheduling decisions may take place when a process: (1) switches from running to waiting state (I/O, wait for child); (2) switches from running to ready state (timer expires); (3) switches from waiting to ready (I/O completion); (4) terminates.

    Non-preemptive vs. preemptive scheduling:
    • Non-preemptive (cooperative) scheduling: scheduling takes place only under circumstances 1 and 4; the process holds the CPU until termination or waiting for I/O. Used by MS Windows 3.1 and Mac OS (before Mac OS X). Does not require the specific hardware support (e.g., a timer) needed for preemptive scheduling.
    • Preemptive scheduling: scheduling takes place …
  • The Joy of Scheduling
    The Joy of Scheduling. Jeff Schaffer, Steve Reid, QNX Software Systems. [email protected], [email protected].

    Abstract: The scheduler is at the heart of the operating system: it governs when everything — system services, applications, and so on — runs. Scheduling is especially important in realtime systems, where tasks are expected to run in a timely and deterministic manner. In these systems, if the designer doesn't have complete control of scheduling, unpredictable and unwanted system behavior can and will occur. Software developers and system designers should understand how a particular OS scheduler works in order to be able to understand why something doesn't execute in a timely manner. This paper describes some of the more commonly used scheduling algorithms and how scheduling works. This knowledge can help developers debug and correct scheduling problems and create more efficient systems.

    In realtime operating systems, the scheduler must also make sure that the tasks meet their deadlines. A "task" can be a few lines of code in a program, a process, or a thread of execution within a multithreaded process.

    Common scheduling algorithms: A very simple system that repeatedly monitors and processes a few pieces of data doesn't need an elaborate scheduler; in contrast, the scheduler in a complex system must be able to handle many competing demands for the processor. Because the needs of different computer systems vary, there are different scheduling algorithms.

    Cyclic executive / run to completion: In a cyclic executive, the system loops through a set of tasks. Each task runs until it's finished, at which point the next one is run.
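    A cyclic executive needs almost no scheduler code at all. The following minimal C sketch (task names are placeholders, not from the paper) loops through a fixed task table, each task running to completion before the next begins:

        /* Sketch of a cyclic executive: loop forever through a fixed
         * task table; each task runs to completion before the next
         * one starts. Task bodies are placeholders. */
        #include <stddef.h>

        static void read_sensors(void)   { /* poll inputs */ }
        static void process_data(void)   { /* compute */ }
        static void update_outputs(void) { /* drive actuators */ }

        int main(void)
        {
            void (*task[])(void) = { read_sensors, process_data, update_outputs };
            const size_t ntasks = sizeof(task) / sizeof(task[0]);

            for (;;) {                   /* the executive: one fixed cycle */
                for (size_t i = 0; i < ntasks; i++)
                    task[i]();           /* run to completion */
            }
        }

    The simplicity is also the weakness the paper goes on to address: a long-running task delays every other task in the cycle, which is why more complex systems need priority-based, preemptive schedulers.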