Key Concepts and Topics


Unit 2 (2.1 Processes), Chapter 3

Key Concepts and Topics: Process, Process States (New, Running, Waiting, Ready, Terminated), Process Control Block

A process is the unit of work in a system. Such a system consists of a collection of concurrently executing processes, some of which are OS processes and the rest of which are user processes. As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states: New, Running, Waiting, Ready, and Terminated.

New: The process is being created.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
Ready: The process is waiting to be assigned to a processor.
Terminated: The process has finished execution.

Each process is represented in the OS by a process control block (PCB), also called a task control block.

Key Concepts and Topics: Process Scheduling, Process Scheduler, Job Scheduler, CPU Scheduler, Medium-Term Scheduler, Swapping, Context Switch, Scheduling Queues

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process for program execution on the CPU. Often, in a batch system, more processes are submitted than can be executed immediately. These processes are spooled to a mass-storage device, where they are kept for later execution. The job scheduler selects processes from this pool and loads them into memory for execution. The CPU scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them. The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove a process from memory and thus reduce the degree of multiprogramming.

Removing a process from memory and later re-introducing it into memory so that its execution continues where it left off is called swapping. The process is swapped out and later swapped in by the medium-term scheduler. Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.

Key Concepts and Topics (Scheduling Queues): Job Queue, Ready Queue, Device Queue, Degree of Multiprogramming

As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list: a ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue. The list of processes waiting for a particular I/O device is called a device queue.
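Because the scheduling queues are linked lists of PCBs, a short C sketch can make the structure concrete. This is a hypothetical, much-simplified PCB: the field names and the fixed-size register array are illustrative assumptions, not the layout of any real kernel.

```c
#include <stddef.h>

/* A much-simplified, hypothetical process control block (PCB).
 * Real PCBs also record open files, memory-management information,
 * accounting data, I/O status, and more. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier               */
    enum proc_state state;            /* New, Ready, Running, ...         */
    unsigned long   program_counter;  /* saved instruction pointer        */
    unsigned long   registers[16];    /* saved CPU register contents      */
    struct pcb     *next;             /* next PCB in its scheduling queue */
};

/* The ready-queue header keeps pointers to the first and final PCBs. */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Append a PCB at the tail of the ready queue. */
void ready_enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head, or NULL if the queue is empty. */
struct pcb *ready_dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL)
            q->tail = NULL;
        p->next = NULL;
    }
    return p;
}
```

A device queue can be represented the same way; a PCB migrates between the ready queue and a device queue as its process changes state.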
The long-term scheduler executes much less frequently; minutes may separate the creation of one new process and the next. It controls the degree of multiprogramming (the number of processes in memory).

Key Concepts and Topics: Process Creation, fork(), Process Termination

In Unix, the root process is called init; in Windows, it is the System Idle Process. This process is the parent or ancestor of all other processes. New child processes are created by another process (the parent process). A new process is created by the fork() system call. The new process consists of a copy of the address space of the original process. A process terminates [ends] when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call.

Study questions:
1. What are the main features of processes?
2. What information is included in a PCB?
3. What data structures are involved in process scheduling?
4. What is the rationale for each kind of scheduler: long-term, short-term, and medium-term?
5. How do you use fork() to create a process?

1. What are the main features of processes?
The main features of processes are scheduling, creation, and termination. Scheduling: the objective of multiprogramming is to have some process running at all times, to maximize CPU utilization; the objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. Creation: starting a process. Termination: finishing [ending] a process.

2. What information is included in a PCB?
The process control block is a data structure in the operating system kernel containing the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system." The role of the PCBs is central in process management: they are accessed and/or modified by most OS utilities, including those involved with scheduling, memory and I/O resource access, and performance monitoring. It can be said that the set of PCBs defines the current state of the operating system. Data structuring for processes is often done in terms of PCBs; for example, pointers to other PCBs inside a PCB allow the creation of the queues of processes in the various scheduling states ("ready", "blocked", etc.) mentioned previously.

3. What data structures are involved in process scheduling?
The scheduling queues: the job queue, the ready queue, and the device queues. These queues are generally implemented as linked lists of PCBs.

4. What is the rationale for each kind of scheduler: long-term, short-term, and medium-term?
Long-term scheduling is heavily influenced by resource-allocation considerations, especially memory management. Short-term (CPU) scheduling is the selection of one process from the ready queue. Medium-term scheduling rests on the idea that it can sometimes be advantageous to remove a process from memory and thus reduce the degree of multiprogramming.

5. How do you use fork() to create a process?
A new process is created by the fork() system call. The new process consists of a copy of the address space of the original process. This mechanism allows the parent process to communicate easily with its child process. Both processes continue execution at the instruction after the fork(), with one difference: the return code from fork() is zero [0] in the new (child) process, whereas the (nonzero) process identifier of the child is returned to the parent.
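The following is a minimal sketch of this fork()/exit() pattern in C on a POSIX system; the printed messages are illustrative, and error handling is kept to the bare minimum.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();               /* duplicate the calling process */

    if (pid < 0) {                    /* fork() failed                 */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {            /* child: fork() returned 0      */
        printf("child: my pid is %d\n", (int)getpid());
        exit(EXIT_SUCCESS);           /* child asks the OS to delete it */
    } else {                          /* parent: fork() returned the child's pid */
        wait(NULL);                   /* wait for the child to terminate */
        printf("parent: child %d has terminated\n", (int)pid);
    }
    return 0;
}
```

Both copies execute the same code after fork(); the differing return value is what lets each copy determine whether it is the parent or the child.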
Key Concepts and Topics: Process Cooperation, Interprocess Communication, Shared-Memory, Message-Passing

A process is cooperating if it can affect or be affected by the other processes executing in the system. Processes executing concurrently in the operating system may be either independent processes or cooperating processes. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Message passing sends a message to a process and relies on the process and the supporting infrastructure to select and invoke the actual code to run. Message passing differs from conventional programming, where a process, subroutine, or function is directly invoked by name. Message passing is key to some models of concurrency and object-oriented programming.

Key Concepts and Topics: Producer-Consumer Problem, Bounded and Unbounded Buffer, Direct and Indirect Communication, Naming

The producer-consumer problem provides a useful metaphor for the client-server paradigm. We generally think of a server as a producer and a client as a consumer. For example, a web server produces HTML files and images, which are consumed by the client web browser requesting the resource. The unbounded buffer places no practical limit on the size of the buffer: the consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size; in this case the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Processes that want to communicate must have a way to refer to each other; this is accomplished by assigning an identity and/or identifier to each process, i.e., naming.

Key Concepts and Topics: Mailbox or Port, Synchronization, Buffering, Zero, Bounded, and Unbounded Capacity

Messages are sent to and received from mailboxes, called ports in Mach. A process can communicate with another process via a number of different mailboxes, but two [2] processes can communicate only if they have a shared mailbox. Communication between processes takes place through calls to send() and receive() primitives. There are different design options for implementing each primitive: message passing may be either blocking or nonblocking, also known as synchronous and asynchronous. A buffer, of course, is a memory area that stores data being transferred between two [2] devices or between a device and an application. With zero capacity, the queue has a maximum length of zero; thus the link cannot have any messages waiting in it, and the sender must block until the recipient receives the message. With unbounded capacity, the queue's length is potentially infinite; thus any number of messages can wait in it.
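Returning to the bounded buffer described above, the following is a minimal C sketch of the classic circular-buffer formulation for two cooperating processes. BUFFER_SIZE and the item type are illustrative assumptions; in a real program the structure would be placed in a shared-memory region (for example, one created with shm_open() and mmap()), and the busy-waiting shown here would normally be replaced by proper synchronization.

```c
#define BUFFER_SIZE 10                 /* fixed size: this is a bounded buffer */

typedef struct { int value; } item;    /* illustrative item type */

/* Shared between the producer and the consumer. The buffer is treated as
 * circular and holds at most BUFFER_SIZE - 1 items, so that "full" and
 * "empty" can be distinguished using only the in/out indices. */
struct bounded_buffer {
    item buffer[BUFFER_SIZE];
    int  in;     /* index of the next free slot */
    int  out;    /* index of the next full slot */
};

/* Producer: must wait while the buffer is full, then inserts an item. */
void produce(struct bounded_buffer *b, item next)
{
    while (((b->in + 1) % BUFFER_SIZE) == b->out)
        ;                              /* buffer full: producer waits */
    b->buffer[b->in] = next;
    b->in = (b->in + 1) % BUFFER_SIZE;
}

/* Consumer: must wait while the buffer is empty, then removes an item. */
item consume(struct bounded_buffer *b)
{
    while (b->in == b->out)
        ;                              /* buffer empty: consumer waits */
    item next = b->buffer[b->out];
    b->out = (b->out + 1) % BUFFER_SIZE;
    return next;
}
```

An unbounded buffer would simply drop the "buffer full" check: the producer never waits, while the consumer still waits when no items are available.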