Cooperative Kernels: GPU Multitasking for Blocking Algorithms (Extended Version)

Tyler Sorensen, Hugues Evrard, Alastair F. Donaldson
Imperial College London, London, UK

arXiv:1707.01989v1 [cs.PL] 6 Jul 2017

ABSTRACT

There is growing interest in accelerating irregular data-parallel algorithms on GPUs. These algorithms are typically blocking, so they require fair scheduling. But GPU programming models (e.g. OpenCL) do not mandate fair scheduling, and GPU schedulers are unfair in practice. Current approaches avoid this issue by exploiting scheduling quirks of today's GPUs in a manner that does not allow the GPU to be shared with other workloads (such as graphics rendering tasks). We propose cooperative kernels, an extension to the traditional GPU programming model geared towards writing blocking algorithms. Workgroups of a cooperative kernel are fairly scheduled, and multitasking is supported via a small set of language extensions through which the kernel and scheduler cooperate. We describe a prototype implementation of a cooperative kernel framework implemented in OpenCL 2.0 and evaluate our approach by porting a set of blocking GPU applications to cooperative kernels and examining their performance under multitasking. Our prototype exploits no vendor-specific hardware, driver or compiler support, thus our results provide a lower bound on the efficiency with which cooperative kernels can be implemented in practice.

CCS CONCEPTS

• Software and its engineering → Multiprocessing / multiprogramming / multitasking; Semantics; • Computing methodologies → Graphics processors;

KEYWORDS

GPU, cooperative multitasking, irregular parallelism

[Figure 1: Cooperative kernels can flexibly resize to let other tasks, e.g. graphics, run concurrently.]

1 INTRODUCTION

The Needs of Irregular Data-parallel Algorithms. Many interesting data-parallel algorithms are irregular: the amount of work to be processed is unknown ahead of time and may change dynamically in a workload-dependent manner. There is growing interest in accelerating such algorithms on GPUs [2–5, 7, 8, 12, 14, 15, 17, 20, 22, 26, 27, 30, 32]. Irregular algorithms usually require blocking synchronization between workgroups, e.g. many graph algorithms use a level-by-level strategy, with a global barrier between levels; work stealing algorithms require each workgroup to maintain a queue, typically mutex-protected, to enable stealing by other workgroups.

To avoid starvation, a blocking concurrent algorithm requires fair scheduling of workgroups. For example, if one workgroup holds a mutex, an unfair scheduler may cause another workgroup to spin-wait forever for the mutex to be released. Similarly, an unfair scheduler can cause a workgroup to spin-wait indefinitely at a global barrier so that other workgroups do not reach the barrier.

A Degree of Fairness: Occupancy-bound Execution. The current GPU programming models—OpenCL [13], CUDA [19] and HSA [11]—specify almost no guarantees regarding scheduling of workgroups, and current GPU schedulers are unfair in practice. Roughly speaking, each workgroup executing a GPU kernel is mapped to a hardware compute unit.¹ The simplest way for a GPU driver to handle more workgroups being launched than there are compute units is via an occupancy-bound execution model [7, 27] where, once a workgroup has commenced execution on a compute unit (it has become occupant), the workgroup has exclusive access to the compute unit until it finishes execution. Experiments suggest that this model is widely employed by today's GPUs [2, 7, 20, 27].

¹ In practice, depending on the kernel, multiple workgroups might map to the same compute unit; we ignore this in our current discussion.

The occupancy-bound execution model does not guarantee fair scheduling between workgroups: if all compute units are occupied then a not-yet-occupant workgroup will not be scheduled until some occupant workgroup completes execution. Yet the execution model does provide fair scheduling between occupant workgroups, which are bound to separate compute units that operate in parallel. Current GPU implementations of blocking algorithms assume the occupancy-bound execution model, which they exploit by launching no more workgroups than there are available compute units [7].

Resistance to Occupancy-bound Execution. Despite its practical prevalence, none of the current GPU programming models actually mandate occupancy-bound execution. Further, there are reasons why this model is undesirable. First, the execution model does not enable multitasking, since a workgroup effectively owns a compute unit until the workgroup has completed execution. The GPU cannot be used meanwhile for other tasks (e.g. rendering). Second, energy throttling is an important concern for battery-powered devices [31]. In the future, it will be desirable for a mobile GPU driver to power down some compute units, suspending execution of associated occupant workgroups, if the battery level is low.

Our assessment, informed by discussions with a number of industrial practitioners who have been involved in the OpenCL and/or HSA standardisation efforts (including [10, 23]), is that GPU vendors (1) will not commit to the occupancy-bound execution model they currently implement, for the above reasons, yet (2) will not guarantee fair scheduling using preemption. This is due to the high runtime cost of preempting workgroups, which requires managing thread-local state (e.g. registers, program location) for all workgroup threads (up to 1024 on Nvidia GPUs), as well as shared memory, the workgroup local cache (up to 64 KB on Nvidia GPUs). Vendors instead wish to retain the essence of the simple occupancy-bound model, supporting preemption only in key special cases.

For example, preemption is supported by Nvidia's Pascal architecture [18], but on a GTX Titan X (Pascal) we still observe starvation: a global barrier executes successfully with 56 workgroups, but deadlocks with 57 workgroups, indicating unfair scheduling.

Our Proposal: Cooperative Kernels. To summarise: blocking algorithms demand fair scheduling, but for good reasons GPU vendors will not commit to the guarantees of the occupancy-bound execution model. We propose cooperative kernels, an extension to the GPU programming model that aims to resolve this impasse.

A kernel that requires fair scheduling is identified as cooperative, and written using two additional language primitives, offer_kill and request_fork, placed by the programmer. Where the cooperative kernel could proceed with fewer workgroups, a workgroup can execute offer_kill, offering to sacrifice itself to the scheduler. This indicates that the workgroup would ideally continue executing, but that the scheduler may preempt the workgroup; the cooperative kernel must be prepared to deal with either scenario. Where the cooperative kernel could use additional resources, a workgroup can execute request_fork to indicate that the kernel is prepared to proceed with the existing [...]

[...] where a processing unit would temporarily give up its hardware resource. We deviate from this design as, in the case of a global barrier, adopting yield would force the cooperative kernel to block completely when a single workgroup yields, stalling the kernel until the given workgroup resumes. Instead, our offer_kill allows a kernel to make progress with a smaller number of workgroups, with workgroups potentially joining again later via request_fork.

Figure 1 illustrates sharing of GPU compute units between a cooperative kernel and a graphics task. Workgroups 2 and 3 of the cooperative kernel are killed at an offer_kill to make room for a graphics task. The workgroups are subsequently restored to the cooperative kernel when workgroup 0 calls request_fork. The gather time is the time between resources being requested and the application surrendering them via offer_kill. To satisfy soft real-time constraints, this time should be low; our experimental study (Sec. 5.4) shows that, in practice, the gather time for our applications is acceptable for a range of graphics workloads.

The cooperative kernels model has several appealing properties:

(1) By providing fair scheduling between workgroups, cooperative kernels meet the needs of blocking algorithms, including irregular data-parallel algorithms.
(2) The model has no impact on the development of regular (non-cooperative) compute and graphics kernels.
(3) The model is backwards-compatible: offer_kill and request_fork may be ignored, and a cooperative kernel will behave exactly as a regular kernel does on current GPUs.
(4) Cooperative kernels can be implemented over the occupancy-bound execution model provided by current GPUs: our prototype implementation uses no special hardware/driver support.
(5) If hardware support for preemption is available, it can be leveraged to implement cooperative kernels efficiently, and cooperative kernels can avoid unnecessary preemptions by allowing the programmer to communicate "smart" preemption points.

Placing the primitives manually is straightforward for the representative set of GPU-accelerated irregular algorithms we have [...]
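To make the intended use of the two primitives concrete, the following OpenCL-style sketch shows where they might be placed in a level-by-level algorithm of the kind mentioned above, which synchronises at a global barrier between levels. It is a minimal illustration, not code from the paper: offer_kill and request_fork are the proposed extensions described in this introduction, while global_barrier, the kernel arguments and the work distribution are assumptions made purely for this sketch.

```c
/* Sketch only: offer_kill() and request_fork() are the cooperative-kernel
 * extensions proposed in this paper; global_barrier() stands in for an
 * inter-workgroup barrier and is not standard OpenCL. All signatures and
 * the work distribution below are assumptions made for illustration. */
__kernel void level_by_level(__global const int *in_frontier,
                             __global int *out_frontier,
                             const int frontier_size,
                             const int num_levels) {
  for (int level = 0; level < num_levels; level++) {
    /* The algorithm can proceed with fewer workgroups, so each workgroup
     * offers itself to the scheduler at the start of a level. */
    offer_kill();

    /* Newly forked workgroups can join usefully at this point, so the
     * kernel also signals that it could absorb additional compute units. */
    request_fork();

    /* Strided processing of the current level. A real cooperative kernel
     * would re-query how many workgroups are active after the calls above
     * and redistribute work accordingly; that detail is elided here. */
    for (int v = (int)get_global_id(0); v < frontier_size;
         v += (int)get_global_size(0)) {
      out_frontier[v] = in_frontier[v]; /* stand-in for real per-level work */
    }

    /* Blocking synchronisation between levels: every active workgroup must
     * reach this point, which is why the kernel needs fair scheduling. */
    global_barrier();
  }
}
```

The placement follows the intent described above: offer_kill sits where the kernel can make progress with fewer workgroups, request_fork sits where extra workgroups could immediately contribute, and the global barrier marks the blocking synchronisation that makes fair scheduling necessary in the first place.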