Based on: 2004 Deitel & Associates, Inc.

Operating Systems

Computer Science Department
Prepared by Dr. Suleyman Al-Showarah

1.9 2000 and Beyond

Middleware is computer software that provides services to software applications beyond those available from the operating system.
• Middleware
  – Links two separate applications

  – Often over a network and between incompatible machines
    – Particularly important for Web services

  – Simplifies communication across multiple architectures

Middleware: software that acts as a bridge between an operating system or database and applications, especially on a network.

A Web service is a method of communication between two electronic devices over a network.
• Web services
  – Encompass a set of related standards
  – Ready-to-use pieces of software on the Internet
  – Enable any two applications to communicate and exchange data

1.10 Application Bases

• Application base
  – Combination of hardware and operating system used to develop applications
• Developers and users are unwilling to abandon an established application base
  – Increased financial cost and time spent relearning

What does "application base" mean? The application base is the directory which contains all the files related to a .NET application, including the executable file (.exe) that loads into the initial or default application domain.

1.11 Operating System Environments

• Operating systems intended for high-end environments
  – Special design requirements and hardware support needs
    – Large main memory
    – Special-purpose hardware
    – Large numbers of processes

• Embedded systems
  – Characterized by a small set of specialized resources
  – Provide functionality to devices such as cell phones and PDAs
  – Efficient resource management is key to building a successful operating system

PDAs

A personal digital assistant (PDA), also known as a handheld PC or personal data assistant, is a mobile device that functions as a personal information manager.

A real-time system is one that must process information and produce a response within a specified time.
• Real-time systems
  – Require that tasks be performed within a particular (often short) time frame
  – Example: the autopilot feature of an aircraft must constantly adjust speed, altitude and direction
  – Such actions cannot wait indefinitely, and sometimes cannot wait at all
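To make the timing constraint concrete, here is a rough sketch (my own, not from the slides) of a periodic control loop that checks whether each iteration produced its response within its deadline. The 10 ms period and the empty control_step() body are hypothetical placeholders.

    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS 10000000L  /* hypothetical 10 ms deadline per iteration */

    /* Placeholder for the real-time work (e.g., reading sensors and
       adjusting speed, altitude and direction). */
    static void control_step(void) { }

    int main(void) {
        for (int i = 0; i < 5; i++) {
            struct timespec start, end;
            clock_gettime(CLOCK_MONOTONIC, &start);

            control_step();

            clock_gettime(CLOCK_MONOTONIC, &end);
            long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                            + (end.tv_nsec - start.tv_nsec);

            /* A hard real-time system must never take this branch. */
            if (elapsed_ns > PERIOD_NS)
                printf("iteration %d missed its deadline\n", i);
            else
                printf("iteration %d responded in time (%ld ns)\n", i, elapsed_ns);
        }
        return 0;
    }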

In computing, a virtual machine (VM) is an emulation of a particular computer system. A virtual machine is an operating system (OS) or application environment that is installed on software which imitates dedicated hardware.

• Virtual machines (VMs)
  – Software abstraction of a computer
  – Often execute on top of a native operating system
• Virtual machine operating system
  – Manages resources provided by the virtual machine

• Applications of virtual machines
  – Allow multiple instances of an operating system to execute concurrently
  – Emulation: software or hardware mimics the functionality of hardware or software not present in the system
  – Promote portability

What is the difference between simulation and emulation?

Simulation: a simulation is a system that behaves similarly to something else but is implemented in an entirely different way. It provides the basic behaviour of a system but may not adhere to all of the rules of the system being simulated. It is there to give you an idea of how something works.

Emulation: an emulation is a system that behaves exactly like something else and adheres to all of the rules of the system being emulated. It is effectively a complete replication of another system, right down to being binary compatible with the emulated system's inputs and outputs, but operating in a different environment from that of the original system. The rules are fixed and cannot be changed, or the system fails.
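To make the distinction concrete, the following minimal sketch (my own, not from the slides) shows the heart of an emulator: a fetch-decode-execute loop that reproduces, rule for rule, the behaviour of a made-up three-instruction machine. The opcodes are invented for illustration.

    #include <stdio.h>

    /* Invented opcodes for a toy machine. */
    enum { OP_LOAD = 0, OP_ADD = 1, OP_HALT = 2 };

    int main(void) {
        /* A tiny "guest" program: load 5, add 7, halt. */
        int program[] = { OP_LOAD, 5, OP_ADD, 7, OP_HALT };
        int pc = 0;   /* emulated program counter */
        int acc = 0;  /* emulated accumulator register */

        /* The emulator reproduces the guest machine's rules exactly:
           every opcode behaves as the (imaginary) hardware defines it. */
        for (;;) {
            int op = program[pc++];
            if (op == OP_LOAD)      acc = program[pc++];
            else if (op == OP_ADD)  acc += program[pc++];
            else if (op == OP_HALT) break;
        }
        printf("accumulator = %d\n", acc);  /* prints 12 */
        return 0;
    }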

1.12 Operating System Components and Goals

• Computer systems have evolved
  – Early systems contained no operating system
  – Later systems gained multiprogramming and timesharing

In computing, time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multitasking.
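As a rough illustration of the idea (my own sketch, not from the slides), a round-robin loop gives each task a fixed quantum of processor time in turn, so several users' tasks all make steady progress. The quantum of 3 ticks and the task lengths are arbitrary.

    #include <stdio.h>

    #define QUANTUM 3  /* arbitrary time slice, in abstract "ticks" */

    int main(void) {
        /* Remaining CPU demand for three hypothetical tasks. */
        int remaining[] = { 7, 4, 9 };
        int n = 3, active = 3;

        /* Round-robin: each pass gives every unfinished task one quantum,
           so all users see the machine make progress on their work. */
        while (active > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0) continue;
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                remaining[i] -= slice;
                printf("task %d runs for %d tick(s), %d left\n",
                       i, slice, remaining[i]);
                if (remaining[i] == 0) active--;
            }
        }
        return 0;
    }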

  – Personal computers and, finally, truly distributed systems
  – Filled new roles as demand changed and grew

1.12.1 Core Operating System Components

• Shell, or command interpreter: allows the user to enter commands
• Kernel: the software that contains the core components of the operating system
• Typical operating system components include:
  – Processor scheduler
  – Memory manager
  – I/O manager
  – Interprocess communication (IPC) manager
  – File system manager (see Home Work 2 below)

Interprocess communication (IPC)

Interprocess communication (IPC) is a set of programming interfaces that allows a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time.
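As one concrete illustration (my own sketch, not from the slides), the classic UNIX pipe is an IPC mechanism: below, a parent process writes a message that a concurrently running child process reads. This assumes a POSIX system.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                 /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) return 1;

        pid_t pid = fork();        /* create a second process */
        if (pid == 0) {            /* child: receive the message */
            char buf[64] = {0};
            close(fd[1]);
            read(fd[0], buf, sizeof(buf) - 1);
            printf("child received: %s\n", buf);
            close(fd[0]);
        } else {                   /* parent: send the message */
            const char *msg = "hello via IPC";
            close(fd[0]);
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            wait(NULL);            /* reap the child */
        }
        return 0;
    }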

Home Work (2): What is the file system manager?

1.12.2 Operating System Goals
• Users expect certain properties of operating systems:
  – Efficiency
  – Robustness
  – Scalability
  – Extensibility
  – Portability
  – Security
  – Protection
  – Interactivity
  – Usability

1.13 Operating System Architectures

1.13.1 Monolithic Architecture
• Monolithic operating system
  – Every component is contained in the kernel
  – Any component can directly communicate with any other
  – Tends to be highly efficient
  – Disadvantage: difficulty determining the source of subtle errors

A monolithic kernel is an operating system architecture in which the entire operating system works in kernel space and alone is in supervisor mode.

In a hardware system: an electronic hardware system, such as a multi-core processor, is called "monolithic" if its components are integrated together in a single integrated circuit.

In a software system: a software system is called "monolithic" if it has a monolithic architecture, in which functionally distinguishable aspects (for example data input and output, data processing, error handling, and the user interface) are not architecturally separate components but are all interwoven.

A multi-core processor is a single computing component with two or more independent actual processing units (called "cores"), which are the units that read and execute program instructions.

In computing, the kernel is a computer program that manages input/output requests from software and translates them into data-processing instructions for the central processing unit and other electronic components of a computer. The kernel is a fundamental part of a modern computer's operating system.

In computers, parallel processing is the processing of program instructions by dividing them among multiple processors with the objective of running a program in less time.

1.13.2 Layered Architecture
• Layered approach to operating systems
  – Tries to improve on monolithic kernel designs
  – Groups components that perform similar functions into layers
    – Each layer communicates only with the layers immediately above and below it
    – Processes' requests might pass through many layers before completion
  – System throughput can be less than with monolithic kernels
    – Additional methods must be invoked to pass data and control
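A minimal sketch (my own, not from the slides) of why layering costs throughput: a request must cross every layer boundary before completion, and each hop is an extra method invocation. The three layers and their behaviour are hypothetical.

    #include <stdio.h>

    /* Each layer may talk only to the layer directly below it. */
    static int hardware_read(int block)    { return block * 100; /* fake data */ }
    static int device_layer(int block)     { printf("device layer\n");      return hardware_read(block); }
    static int filesystem_layer(int block) { printf("file system layer\n"); return device_layer(block); }
    static int syscall_layer(int block)    { printf("system call layer\n"); return filesystem_layer(block); }

    int main(void) {
        /* A user request crosses every layer boundary before completion. */
        int data = syscall_layer(42);
        printf("user process got: %d\n", data);
        return 0;
    }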

1.13.3 Microkernel Architecture

• Microkernel operating system architecture
  – Provides only a small number of services
    – Attempt to keep the kernel small and scalable
  – High degree of modularity
    – Extensible, portable and scalable
  – Increased level of intermodule communication
    – Can degrade system performance

In software engineering, extensibility (not to be confused with forward compatibility) is a system design principle where the implementation takes future growth into consideration. It is a systemic measure of the ability to extend a system and the level of effort required to implement the extension.

Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged in order to accommodate that growth.

Portability in high-level computer programming is the usability of the same software in different environments. The prerequisite for portability is a generalized abstraction between the application logic and system interfaces.

In computer science, abstraction is a technique for managing the complexity of computer systems. It works by establishing a level of complexity at which a person interacts with the system, suppressing the more complex details below the current level.

In computer science, a microkernel (also known as a μ-kernel) is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).

Structure of monolithic and microkernel-based operating systems, respectively.

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

A process with two threads of execution, running on a single processor
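The figure's situation can be reproduced with POSIX threads. The sketch below (my own, assuming a POSIX system; compile with gcc -pthread) starts two threads inside one process; the scheduler interleaves them on the processor, and both share the process's memory.

    #include <stdio.h>
    #include <pthread.h>

    /* Both threads share the process's global data. */
    static int shared_counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        const char *name = arg;
        for (int i = 0; i < 3; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;
            printf("%s: counter = %d\n", name, shared_counter);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        /* One process, two threads of execution; the scheduler
           interleaves them on a single processor. */
        pthread_create(&t1, NULL, worker, "thread 1");
        pthread_create(&t2, NULL, worker, "thread 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }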

1.13.4 Networked and Distributed Operating Systems

A network OS is used to manage networked computers: there is a server, and one or more computers are managed by that server. In a college, for example, there might be one dedicated server that manages the individual computers and laptops.

A distributed OS is one where all of the connected computers can share in tasks. For instance, if you and a friend are connected using a distributed OS, you can run a program that actually resides on the other person's computer. This is the reason a distributed OS needs more RAM and a high-speed processor.

• Network operating system
  – Runs on one computer
  – Allows its processes to access resources on remote computers
• Distributed operating system
  – A single operating system that manages resources on more than one computer system
  – Goals include:
    – Transparent performance
    – Scalability
    – Fault tolerance
    – Consistency

In a distributed system, failure transparency refers to the extent to which errors and subsequent recoveries of hosts and services within the system are invisible to users and applications. For example, if a server fails but users are automatically redirected to another server and never notice the failure, the system is said to exhibit high failure transparency.

Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components.

In computer science, consistency models are used in distributed systems like distributed shared memory systems or distributed data stores (such as file systems, databases, or optimistic replication systems). The system supports a given model if operations on memory follow specific rules. The data consistency model specifies a contract between programmer and system, wherein the system guarantees that if the programmer follows the rules, memory will be consistent and the results of memory operations will be predictable.
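As a toy illustration of failure transparency (my own sketch, not from the slides), the service() entry point below silently redirects a request to a backup when the primary fails, so the calling application never observes the failure. Both "servers" are simulated as local functions.

    #include <stdio.h>

    /* Simulated servers: return 0 on success, -1 on failure. */
    static int primary_server(int request, int *reply) {
        (void)request; (void)reply;
        return -1;  /* pretend the primary host has crashed */
    }

    static int backup_server(int request, int *reply) {
        *reply = request * 2;  /* hypothetical service: double the input */
        return 0;
    }

    /* The client calls this one entry point; failover happens inside,
       so errors and recoveries stay invisible to the application. */
    static int service(int request, int *reply) {
        if (primary_server(request, reply) == 0) return 0;
        return backup_server(request, reply);  /* transparent redirect */
    }

    int main(void) {
        int reply;
        if (service(21, &reply) == 0)
            printf("reply = %d (the failure was never visible here)\n", reply);
        return 0;
    }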