LAST LECTURE (Slide 1)
Scheduling Algorithms:
➜ FIFO
➜ Shortest job next
➜ Shortest remaining job next
➜ Highest Response Rate Next (HRRN)
  - no starvation

ROUND-ROBIN (Slide 2)
➜ Scheduled thread is given a time slice
➜ Running thread is preempted upon clock interrupt and returned to the ready queue
[Figure: Gantt charts of processes A–E under round-robin with quantum = 1 and quantum = 4, time axis 0–20]

PERFORMANCE OF ROUND-ROBIN SCHEDULING (Slide 3)
Performance of round-robin scheduling:
➜ Average waiting time: not optimal
➜ Performance depends heavily on the size of the time quantum:
  - too short: overhead for context switches becomes too expensive
  - too large: degenerates to FCFS policy
  - rule of thumb: about 80% of all bursts should be shorter than 1 time quantum

PRIORITIES (Slide 4)
➜ Each thread is associated with a priority
➜ Basic mechanism to influence scheduler decisions:
  • Scheduler will always choose a thread of higher priority over one of lower priority
  • Implemented via multiple FCFS ready queues (one per priority)
➜ Lower-priority threads may suffer starvation
  • adapt priority based on a thread's age or execution history
➜ Priorities can be defined internally or externally
  - internal: e.g., memory requirements, I/O-bound vs CPU-bound
  - external: e.g., importance of thread, importance of user

PRIORITIES (Slide 5)
Priority queueing:
[Figure: queue headers with their runnable processes, from Priority 4 (highest priority) down to Priority 1 (lowest priority)]
➜ Priorities influence access to resources, but do not guarantee a certain fraction of the resource (CPU etc.)!

FEEDBACK SCHEDULING (Slide 7)
Feedback scheduling:
[Figure: processes are admitted to queue RQ0 and demoted through queues RQ1 … RQn; each queue feeds the processor, which releases finished processes]
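The time-quantum trade-off on Slide 3 can be made concrete with a small simulation. This is a sketch, not the slides' own example: the burst times and quantum values below are made up, and a context switch is counted whenever a different thread takes over the CPU.

```python
# Round-robin sketch: measure average waiting time and context switches
# for different time quanta. Burst times are illustrative only.
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin; return (average waiting time, context switches)."""
    remaining = dict(bursts)
    ready = deque(name for name, _ in bursts)
    waiting = {name: 0 for name, _ in bursts}
    switches, prev = 0, None
    while ready:
        name = ready.popleft()
        if prev is not None and name != prev:
            switches += 1           # a different thread takes over the CPU
        prev = name
        run = min(quantum, remaining[name])
        for other in ready:
            waiting[other] += run   # everyone else in the ready queue waits
        remaining[name] -= run
        if remaining[name] > 0:
            ready.append(name)      # quantum expired: back to the ready queue

    avg_wait = sum(waiting.values()) / len(waiting)
    return avg_wait, switches

jobs = [("A", 4), ("B", 3), ("C", 5)]
print(round_robin(jobs, 1))   # short quantum: many context switches
print(round_robin(jobs, 4))   # long quantum: few switches, approaches FCFS
```

With quantum 1 this workload incurs 10 context switches; with quantum 4 only 2, which is the overhead argument behind the "too short" rule of thumb.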
FEEDBACK SCHEDULING (Slide 6)
Feedback scheduling:
[Figure: Gantt charts of processes A–E under feedback scheduling with q = 1 and with q = 2^i, time axis 0–20]
➜ Penalize jobs that have been running longer
➜ q = 2^i: longer time slice for lower priority (reduces starvation)

LOTTERY SCHEDULING (Slide 8)
➜ A process gets "lottery tickets" for various resources
➜ More lottery tickets imply better access to a resource
Advantages:
➜ Simple
➜ Highly responsive
➜ Allows cooperating processes/threads to implement individual scheduling policies (exchange of tickets)

LOTTERY SCHEDULING (Slide 9)
Example (taken from Embedded Systems Programming): four processes are running concurrently
➜ Process A: 15% of CPU time
➜ Process B: 25% of CPU time
➜ Process C: 5% of CPU time
➜ Process D: 55% of CPU time
How many tickets should each process get to achieve this?
Number of tickets in proportion to CPU time, e.g., with 20 tickets overall:
➜ Process A: 15% of tickets: 3
➜ Process B: 25% of tickets: 5
➜ Process C: 5% of tickets: 1
➜ Process D: 55% of tickets: 11

TRADITIONAL UNIX SCHEDULING (SVR3, 4.3 BSD) (Slide 10)
Objectives:
➜ support for time sharing
➜ good response time for interactive users
➜ support for low-priority background jobs
Strategy:
➜ Multilevel feedback using round robin within priorities
➜ Priorities are recomputed once per second
  • Base priority divides all processes into fixed bands of priority levels
  • Priority adjustment is capped to keep processes within their bands
➜ Favours I/O-bound over CPU-bound processes

TRADITIONAL UNIX SCHEDULING (Slide 11)
Note: UNIX traditionally uses a counter-intuitive priority representation (higher value = less priority)
Bands:
➜ Decreasing order of priority:
  • Swapper
  • Block I/O device control
  • File manipulation
  • Character I/O device control
  • User processes

TRADITIONAL UNIX SCHEDULING (Slide 12)
Advantages:
➜ relatively simple, effective
➜ works well for single-processor systems
Disadvantages:
➜ significant overhead in large systems (recomputing priorities)
➜ response time not guaranteed
➜ non-preemptive kernel: a lower-priority process in kernel mode can delay a high-priority process

NON-PREEMPTIVE VS PREEMPTIVE KERNEL (Slide 13)
➜ Kernel data structures have to be protected
➜ Basically, two ways to solve the problem:
  • Non-preemptive: disable all (most) interrupts while in kernel mode, so no other thread can get into kernel mode while in a critical section
    - Priority inversion possible
    - Coarse-grained
    - Works only for uniprocessors
  • Preemptive: just lock the kernel data structure that is currently being modified
    - More fine-grained
    - Introduces additional overhead, can reduce throughput

MULTIPROCESSOR SCHEDULING (Slide 15)
[Figure: (a) tightly coupled multiprocessor (CPUs sharing memory), (b) loosely coupled multiprocessor (CPU + local memory nodes on an interconnect), (c) distributed system (complete systems connected via the Internet)]
(b) Loosely coupled multiprocessor
  • Each processor has its own memory and I/O channels
  • Generally called a distributed-memory multiprocessor
(c) Distributed system
  • complete computer systems connected via a wide-area network
  • communicate via message passing

MULTIPROCESSOR SCHEDULING
What kinds of systems and applications are there?
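The 20-ticket example on Slide 9 is easy to check by simulation. This is a minimal sketch: drawing the winning ticket with `random` is just one way to implement the lottery, and the round count and seed are arbitrary choices.

```python
# Lottery-scheduling sketch for the slide's example:
# 20 tickets split A: 3, B: 5, C: 1, D: 11.
import random

TICKETS = {"A": 3, "B": 5, "C": 1, "D": 11}

def draw_winner(tickets, rng):
    """Draw one winning ticket and return the process holding it."""
    winner = rng.randrange(sum(tickets.values()))  # ticket number in [0, total)
    for proc, count in tickets.items():
        if winner < count:
            return proc
        winner -= count                            # skip this process's range

def simulate(tickets, rounds, seed=0):
    """Run many lotteries; return each process's observed share of wins."""
    rng = random.Random(seed)
    wins = {p: 0 for p in tickets}
    for _ in range(rounds):
        wins[draw_winner(tickets, rng)] += 1
    return {p: wins[p] / rounds for p in wins}

shares = simulate(TICKETS, 100_000)
# shares approximates {"A": 0.15, "B": 0.25, "C": 0.05, "D": 0.55}
```

Over many rounds the observed CPU shares converge on the 15/25/5/55% targets, which is the proportional-share property the slide is after.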
MULTIPROCESSOR SCHEDULING (Slide 14)
Classification of multiprocessor systems:
[Figure: (a) tightly coupled multiprocessor (CPUs sharing memory), (b) loosely coupled multiprocessor (CPU + local memory nodes on an interconnect), (c) distributed system (complete systems connected via the Internet)]
(a) Tightly coupled multiprocessing
  • Processors share main memory and are controlled by a single operating system; called a symmetric multiprocessor (SMP) system

PARALLELISM (Slide 16)
Independent parallelism:
➜ Separate applications/jobs
➜ No synchronization
➜ Parallelism improves throughput and responsiveness
➜ Parallelism doesn't affect the execution time of (single-threaded) programs

Coarse and very coarse-grained parallelism:
➜ Synchronization among processes is infrequent
➜ Good for loosely coupled multiprocessors
  • Can be ported to a multiprocessor with little change
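Independent parallelism is mostly a structural property: jobs share the machine but never synchronise with each other. The sketch below illustrates that structure with a worker pool; the jobs and pool size are invented for the example, and note that CPython threads do not actually speed up CPU-bound work, so this shows the shape of independent parallelism rather than a real throughput gain.

```python
# Independent parallelism sketch: unrelated jobs, no shared state,
# no synchronisation between them -- only the pool is shared.
from concurrent.futures import ThreadPoolExecutor

def job(n):
    """One independent job; it never communicates with the other jobs."""
    return sum(range(n))

def run_serial(sizes):
    return [job(n) for n in sizes]

def run_parallel(sizes, workers=4):
    # The pool improves throughput across jobs; a single job still takes
    # exactly as long as it would alone (the slide's last bullet).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(job, sizes))

sizes = [10_000, 20_000, 30_000]
assert run_parallel(sizes) == run_serial(sizes)  # same results, any schedule
```

Because the jobs never interact, the scheduler may interleave or migrate them freely, which is why this class of workload is the easiest case for multiprocessor scheduling.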
PARALLELISM (Slide 17)
Medium-grained parallelism:
➜ Parallel processing within a single application
  • Application runs as a multithreaded process
➜ Threads usually interact frequently
➜ Good for SMP systems
➜ Unsuitable for loosely coupled systems

Fine-grained parallelism:
➜ Highly parallel applications
  • e.g., parallel execution of loop iterations
➜ Very frequent synchronisation
➜ Works well only on special hardware
  • vector computers, symmetric multithreading (SMT) hardware

MULTIPROCESSOR SCHEDULING (Slide 18)
Multiprocessor scheduling: which process should be run next, and where? Who decides which thread runs on which processor?
We discuss:
➜ Tightly coupled multiprocessing
➜ Very coarse to medium-grained parallelism
➜ Homogeneous systems (all processors have the same specs and access to devices)
Design issues:
➜ How to assign processes/threads to the available processors?
➜ Multiprogramming on individual processors?
➜ Which scheduling strategy?
➜ Scheduling dependent processes

ASSIGNMENT OF THREADS TO PROCESSORS (Slide 19)
➜ Treat processors as a pooled resource and assign threads to processors on demand
  • Permanently assign threads to a processor
    - Dedicated short-term queue for each processor
    ✔ Low overhead
    ✖ A processor could be idle while another processor has a backlog
  • Dynamically assign threads to processors
    ✖ higher overhead
    ✖ poor locality
    ✔ better load balancing

ASSIGNMENT OF THREADS TO PROCESSORS (Slide 20)
Master/slave architecture:
➜ Key kernel functions always run on a particular processor
➜ Master is responsible for scheduling
➜ Slave sends service requests to the master
✔ simple
✔ one processor has control of all resources, no synchronisation needed
✖ Failure of the master brings down the whole system
✖ Master can become a performance bottleneck

ASSIGNMENT OF THREADS TO PROCESSORS (Slide 21)
Peer architecture:
➜ Operating system can execute on any processor
➜ Each processor does self-scheduling
➜ Complicates the operating system
  - Make sure no two processors schedule the same thread
  - Synchronise access to resources

LOAD SHARING: TIME SHARING (Slide 22)
[Figure: a global priority-ordered ready queue (priorities 7 down to 0) shared by all CPUs; as CPU 4 and then CPU 12 go idle, they pick the highest-priority ready threads A and B]
➜ Load is distributed evenly across the processors
➜ Use a global ready queue
➜ Proper symmetric multiprocessing
  • Threads are not assigned to a particular processor
  • Scheduler picks any ready thread (according to the scheduling policy)
  • Actual scheduling policy is less important than on a uniprocessor
➜ No centralized scheduler required

LOAD SHARING: TIME SHARING (Slide 23)
Disadvantages of time sharing:
➜ Central queue needs mutual exclusion
  • Potential race condition when several CPUs are trying to pick a thread from the ready queue
  • May be a bottleneck blocking processors
➜ Preempted threads are unlikely to resume execution on the same processor
  • Cache use is less efficient, bad locality
➜ Different threads of the same process are unlikely to execute in parallel
  • Potentially high intra-process communication latency

GANG SCHEDULING (Slide 24)
➜ Simultaneous scheduling of the threads that make up a single process
➜ Useful for applications whose performance severely degrades when any part of the application is not running
  • e.g., threads often need to synchronise with each other
[Figure: threads A0/A1 and B0/B1 alternating on CPU 0 and CPU 1 over time 0–600; request/reply pairs between the threads show the latency incurred when partner threads are not co-scheduled]

LOAD SHARING: SPACE SHARING (Slide 26)
Combined time and space sharing:
[Figure: a table of repeating time slots 0–7 assigning the thread groups A0–A5, B0–B2, C0–C2, D0–D4, and E0–E6 to CPUs, so each group's threads run together]

SMP SUPPORT IN MODERN GENERAL-PURPOSE OSs
Maximum number of processors supported:
➜ Solaris 8.0: up to 128
➜ Linux 2.4: up to 32
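The global-ready-queue idea on Slides 22 and 23 can be sketched with worker threads standing in for CPUs. This is an illustration under assumptions: `queue.Queue` supplies the mutual exclusion the slides warn about (in a real kernel this is a spinlock on the run queue, and the contended lock is exactly the potential bottleneck), and the task names are invented.

```python
# Load-sharing sketch: one global ready queue, any idle "processor"
# (worker thread) picks the next ready task.
import queue
import threading

def run_load_sharing(tasks, n_cpus=4):
    ready = queue.Queue()            # the global ready queue (internally locked)
    for t in tasks:
        ready.put(t)
    executed = []
    log_lock = threading.Lock()      # protects the shared execution log

    def cpu():
        while True:
            try:
                task = ready.get_nowait()   # contended access to the global queue
            except queue.Empty:
                return                       # nothing left: this CPU goes idle
            with log_lock:
                executed.append(task)

    cpus = [threading.Thread(target=cpu) for _ in range(n_cpus)]
    for c in cpus:
        c.start()
    for c in cpus:
        c.join()
    return executed

done = run_load_sharing([f"T{i}" for i in range(20)])
assert sorted(done) == sorted(f"T{i}" for i in range(20))  # each task ran exactly once
```

Because the queue's internal lock serialises every dispatch, each task is handed to exactly one CPU; the same serialisation is why the slides note the central queue can block processors as the CPU count grows.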