Scheduling, Part 2

Scheduling, Part 2
Prof. Dr. Margarita Esponda, Freie Universität Berlin, WS 2011/2012

Multilevel Feedback Queue Scheduling

Example with four priority classes:

- New processes enter the highest-priority queue, with quantum = 4.
- Below it follow queues with quantum = 8 and quantum = 16.
- The lowest-priority queue is scheduled FCFS.

Over time, processes migrate downward: lower priority, but a larger quantum.

Parameters of a multilevel feedback queue scheduler:

- Number of queues.
- Scheduling algorithm for each queue.
- When processes may move between queues.
- Where newly started processes are placed.
- How often scheduling information is updated.
- Which feedback is used.

BSD 5.2 Scheduler ("multilevel feedback queue scheduler")

✴ Dynamic priorities.
✴ nice value: the niceness of a thread toward the others, on a scale from -20 to 20. A thread with nice = -20 ("I am not nice and want more CPU time") has its priority decay slowed; a thread with nice = 20 ("I am very nice and give CPU time away") has its priority decay accelerated.

Scheduling classes:

    Priority    Class       Thread type
    0 - 63      ITHD        Bottom-half kernel (interrupt)
    64 - 127    KERN        Top-half kernel
    128 - 159   REALTIME    Real-time user
    160 - 223   TIMESHARE   Time-sharing user
    224 - 255   IDLE        Idle user

✴ 64 distinct priorities in the time-sharing class, from 160 (PRI_MIN_TIMESHARE) to 223 (PRI_MAX_TIMESHARE).
✴ Every fourth time quantum (roughly every 40 milliseconds) the priorities are recalculated.
✴ Priorities decay linearly with the CPU time consumed.
✴ Negative nice values delay the priority decay of compute-intensive threads.
✴ I/O-intensive threads are favored.

Priority calculation in BSD 5.2

Priorities are computed from the nice and recent_cpu components of the thread structure:

    user_priority = PRI_MIN_TIMESHARE + recent_cpu / 4 + 2 * nice

where

- nice is set by the user,
- recent_cpu is approximated with an exponentially weighted moving average,
- and PRI_MIN_TIMESHARE = 160.

Calculating a thread's recent_cpu:

✴ On each clock tick, recent_cpu is incremented by 1 if the thread is currently running.
✴ Once per second, a correction is applied that depends on the average number of threads in the system (avg_load):

    recent_cpu_t = (2 * avg_load_t) / (2 * avg_load_t + 1) * recent_cpu_{t-1} + nice

avg_load is the average number of threads (run queue + wait queue) over the last 60 seconds:

    avg_load_t = (1/60) * sum from i = t-60 to t of ( length(run_queue_i) + length(wait_queue_i) )

When a thread has just left the waiting state, its recent_cpu is recomputed as:

    recent_cpu_t = ( (2 * avg_load_t) / (2 * avg_load_t + 1) )^sleep_time * recent_cpu_{t-1} + nice

Example: assume there is only a single thread, with nice = 0, and let T_i be the clock ticks it accumulated (its execution time) during interval i. With one thread, avg_load = 1 and the decay factor is 2/3 ≈ 0.66:

    recent_cpu_0 = (2/3) * T_0 = 0.66 * T_0
    recent_cpu_1 = 0.66 * (T_1 + 0.66 * T_0) = 0.66 * T_1 + 0.44 * T_0
    recent_cpu_2 = 0.66 * T_2 + 0.44 * T_1 + 0.30 * T_0
    recent_cpu_3 = 0.66 * T_3 + 0.44 * T_2 + 0.30 * T_1 + 0.20 * T_0
    recent_cpu_4 = 0.66 * T_4 + 0.44 * T_3 + 0.30 * T_2 + 0.20 * T_1 + 0.13 * T_0

After five recalculations, the CPU consumption from five seconds ago is almost forgotten.
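The arithmetic above is compact enough to sketch directly. The following C program is a minimal illustration of the three formulas, not the kernel code: the standalone functions and the use of floating point are assumptions made for readability (the kernel updates per-thread fields with fixed-point integer arithmetic).

    #include <stdio.h>

    #define PRI_MIN_TIMESHARE 160
    #define PRI_MAX_TIMESHARE 223

    /* user_priority = PRI_MIN_TIMESHARE + recent_cpu/4 + 2*nice,
       clamped to the time-sharing range 160..223. */
    static int user_priority(double recent_cpu, int nice)
    {
        int pri = PRI_MIN_TIMESHARE + (int)(recent_cpu / 4.0) + 2 * nice;
        if (pri < PRI_MIN_TIMESHARE) pri = PRI_MIN_TIMESHARE;
        if (pri > PRI_MAX_TIMESHARE) pri = PRI_MAX_TIMESHARE;
        return pri;
    }

    /* Once-per-second correction: decay by 2*load/(2*load + 1), add nice. */
    static double decay_recent_cpu(double recent_cpu, double avg_load, int nice)
    {
        double f = (2.0 * avg_load) / (2.0 * avg_load + 1.0);
        return f * recent_cpu + nice;
    }

    /* On wakeup: apply the decay factor once for each second slept,
       instead of recomputing it every second while the thread was asleep. */
    static double wakeup_recent_cpu(double recent_cpu, double avg_load,
                                    int sleep_time, int nice)
    {
        double f = (2.0 * avg_load) / (2.0 * avg_load + 1.0);
        double p = 1.0;
        for (int i = 0; i < sleep_time; i++)
            p *= f;                            /* f ^ sleep_time */
        return p * recent_cpu + nice;
    }

    int main(void)
    {
        /* With a single thread, avg_load = 1 and the factor is 2/3 = 0.66,
           matching the worked example above. */
        double rc = 100.0;                     /* 100 accumulated ticks */
        rc = decay_recent_cpu(rc, 1.0, 0);     /* about 66.6 */
        printf("recent_cpu = %.1f -> priority %d\n", rc, user_priority(rc, 0));
        rc = wakeup_recent_cpu(rc, 1.0, 5, 0); /* after sleeping 5 s */
        printf("after sleep: recent_cpu = %.1f -> priority %d\n",
               rc, user_priority(rc, 0));
        return 0;
    }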
Lottery Scheduling

- Developed by Waldspurger and Weihl in 1994.
- The basic idea is to give a process lottery tickets for the various resources it needs (e.g., CPU time).
- Who gets a particular resource is decided by drawing a lottery ticket.
- Processes with higher priority receive extra tickets.

Interesting properties:

• Cooperating processes can exchange tickets.
• A client process that has just blocked can hand its tickets over to the server process.
• Server processes need no tickets when no client processes are present.
• Some problems can be solved more simply than with other methods.
• All processes hold a minimum number of tickets, so no process can starve.

"Hybrid Lottery Scheduling" (D. Petrou, J. Milford, and G. Gibson)

• Combines the priority classes of BSD scheduling with lottery scheduling.
• Ticket allocations are adapted dynamically with the help of ticket currencies.
• The overhead is larger and the implementation more complex.

Recommended reading: David Petrou, John W. Milford, and Garth A. Gibson: Implementing Lottery Scheduling: Matching the Specializations in Traditional Schedulers, 1999.

Stride Scheduling

✴ Waldspurger and Weihl, 1995.
✴ Deterministic fair-share scheduling.
✴ Tickets instead of lottery tickets.
✴ More tickets mean a stronger claim to CPU time.
✴ Resources are allocated in direct proportion to the number of tickets; e.g., a process holding 10 tickets gets twice as much CPU time within a given time span as one holding only 5.
✴ All processes always receive the same CPU quantum.
✴ A time interval called the stride, which a process must wait between CPU allocations, is computed.

The simple concept:

- Processes reserve CPU capacity in the form of tickets.
- A ticket represents a minimal unit of CPU time.
- Each process has a pass value, which is computed dynamically.
- The pass value grows inversely proportional to the reserved CPU capacity.
- The increment of the pass value is called the stride:

      stride_{T_i} = stride_1 / tickets_{T_i}        (example: stride_1 = 20)

Initially the pass value is 0 for all processes. The scheduler always selects the process with the smallest pass value, producing, for example, the allocation sequence (one quantum per slot):

    A B C C C A C C A B C

What happens when new processes are started or terminated? A pass value must be computed for them. The system maintains a global pass value, which is updated by adding a global stride:

    global_stride = stride_1 / global_tickets
    pass_{T_new} = global_pass + stride_{T_new}

Processes also have a remain component, computed as:

    remain_{T_i} = pass_{T_i} - global_pass

Processes that are restarted from the wait list only have to add their remain component to the global pass value to obtain their own pass value. When a process is preempted, its pass value is updated with:

    pass_{T_i} = pass_{T_i} + (time_consumed / time_allocated) * stride_{T_i}

(Figure: initial state of the stride queues — processes A, B, and C held in a queue ordered by pass value, with the resulting allocation sequence A B C C B C A C B C and the pass value after each allocation: 15, 10, 6, 12, 20, 18, 30, 24, 30, 30.)
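Two short sketches may help make these fair-share schemes concrete. The first is a toy C illustration of the lottery draw described above (not the authors' implementation; the process table and helper names are hypothetical): one ticket number is drawn uniformly at random, and a walk over the cumulative ticket ranges finds the winner, so over many quanta each process wins in proportion to its tickets.

    #include <stdio.h>
    #include <stdlib.h>

    struct proc { const char *name; int tickets; };

    /* Draw one ticket uniformly and return the index of the winner. */
    static int draw_winner(const struct proc *p, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += p[i].tickets;
        int winner = rand() % total;       /* the drawn ticket number */
        int counter = 0;
        for (int i = 0; i < n; i++) {
            counter += p[i].tickets;       /* walk the cumulative ranges */
            if (winner < counter)
                return i;
        }
        return n - 1;                      /* not reached */
    }

    int main(void)
    {
        struct proc procs[] = { { "A", 10 }, { "B", 5 }, { "C", 1 } };
        int wins[3] = { 0, 0, 0 };
        for (int quantum = 0; quantum < 16000; quantum++)
            wins[draw_winner(procs, 3)]++;
        /* Expected shares: A about 10/16, B about 5/16, C about 1/16. */
        for (int i = 0; i < 3; i++)
            printf("%s won %d of 16000 quanta\n", procs[i].name, wins[i]);
        return 0;
    }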
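The second sketch covers the stride loop just described, under the example's convention stride_1 = 20: the process with the smallest pass value runs next and is then advanced by its stride. The names are again hypothetical, and a real implementation would use a priority queue and a large integer constant instead of floating point.

    #include <stdio.h>

    #define STRIDE1 20.0   /* as in the example above */

    struct sproc { const char *name; int tickets; double pass; };

    /* Return the index of the process with the smallest pass value. */
    static int min_pass(const struct sproc *p, int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (p[i].pass < p[best].pass)
                best = i;
        return best;
    }

    int main(void)
    {
        /* A process with 10 tickets gets twice the CPU of one with 5. */
        struct sproc p[] = { { "A", 10, 0.0 }, { "B", 5, 0.0 }, { "C", 2, 0.0 } };
        for (int quantum = 0; quantum < 10; quantum++) {
            int i = min_pass(p, 3);
            printf("quantum %2d -> %s\n", quantum, p[i].name);
            p[i].pass += STRIDE1 / p[i].tickets;   /* advance by its stride */
        }
        return 0;
    }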
Earlier Linux Scheduling: the O(1) Scheduler

- SMP (symmetric multiprocessing).
- Processor affinity.
- Load balancing.
- Support for interactive threads.

Priorities and quanta:

- Real-time range: priorities 0 - 99 (higher priorities), quantum 200 ms.
- nice range: priorities 100 - 140 (lower priorities), quantum 10 ms.

Higher priorities have a longer time quantum; lower priorities have a shorter one. Real-time tasks always get a fixed priority. Processes (threads) in the nice range have dynamic priorities, which may change by ±5.

Linux kept two lists sorted by priority: active tasks and expired tasks, each an array indexed [0] through [140]. Tasks that are preempted, or whose quantum has expired, are moved to the array of expired tasks and their priority is recalculated. When the list of active tasks is empty, the two lists are swapped.

Completely Fair Scheduler

The newer Linux scheduler. All processes in the ready state, whatever their priorities, are managed in a single red-black tree. Each CPU has its own scheduler with its own red-black tree. Processes are sorted into the tree by a so-called virtual time, which the scheduler recomputes each time.

Example: processes A, B, and C with weights 4, 3, and 1 (sum = 8) receive 4/8, 3/8, and 1/8 of the CPU time, respectively.

The per-task scheduling state is kept in a sched_entity structure:

    struct sched_entity {
        struct load_weight load;
        struct rb_node     run_node;
        struct list_head   group_node;
        ...
    };

(Figure: a red-black tree ordered by virtual runtime — tasks to the left need more CPU time, tasks to the right need less; insertion and lookup take O(log n).)

Multithreading Architecture of Solaris

(Figure: the many-to-many thread model.)

Thread Scheduling

Thread models:

- one-to-one
- many-to-one
- many-to-many

Contention scopes:

- PCS (process-contention scope): user threads compete for the CPU within their process.
- SCS (system-contention scope): kernel threads; under SCS scheduling, all threads in the system compete for CPU time.

Threads are mapped onto so-called LWPs (lightweight processes) inside the kernel.

Pthread Scheduling

Two creation attributes:

- PTHREAD_SCOPE_PROCESS: scheduling with PCS.
- PTHREAD_SCOPE_SYSTEM: scheduling with SCS.

Linux and Mac OS support only PTHREAD_SCOPE_SYSTEM.
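A minimal C example of selecting the contention scope through the standard POSIX attribute calls; the small wrapper program around them is illustrative.

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        puts("worker running under system contention scope");
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;
        int rc;

        pthread_attr_init(&attr);

        /* PCS: compete only with threads of the same process. This
           typically fails with ENOTSUP on Linux and macOS. */
        rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS);
        if (rc != 0)
            fprintf(stderr, "PTHREAD_SCOPE_PROCESS not supported (error %d)\n", rc);

        /* SCS: compete with all threads in the system. */
        rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
        if (rc != 0)
            fprintf(stderr, "PTHREAD_SCOPE_SYSTEM failed (error %d)\n", rc);

        pthread_create(&tid, &attr, worker, NULL);
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }

Compile with cc -pthread; on Linux the PCS call reports an error, confirming that only system contention scope is available there.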
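Finally, returning to the Completely Fair Scheduler section above, its weighting idea can be sketched in a few lines: each task's virtual runtime advances at a rate inversely proportional to its weight, and the task with the smallest virtual runtime runs next. This is a toy illustration, not the kernel algorithm — a linear scan stands in for the red-black tree, and all names are hypothetical.

    #include <stdio.h>

    #define BASE_WEIGHT 1.0   /* plays the role of the nice-0 weight */

    struct task { const char *name; double weight; double vruntime; double cpu_ms; };

    /* Return the index of the task with the smallest virtual runtime. */
    static int min_vruntime(const struct task *t, int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (t[i].vruntime < t[best].vruntime)
                best = i;
        return best;
    }

    int main(void)
    {
        /* Weights 4, 3, 1 as in the example: expected shares 4/8, 3/8, 1/8. */
        struct task t[] = { { "A", 4, 0, 0 }, { "B", 3, 0, 0 }, { "C", 1, 0, 0 } };
        for (int tick = 0; tick < 8000; tick++) {
            int i = min_vruntime(t, 3);
            t[i].cpu_ms += 1.0;                   /* run for 1 ms */
            t[i].vruntime += 1.0 * BASE_WEIGHT / t[i].weight;
        }
        for (int i = 0; i < 3; i++)
            printf("%s: %.0f ms (%.2f of total)\n",
                   t[i].name, t[i].cpu_ms, t[i].cpu_ms / 8000.0);
        return 0;
    }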