

Ans 1 a). Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. For example, a chat program used on the World Wide Web could be designed so that chat participants communicate with one another by exchanging messages. A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size. If only fixed-sized messages can be sent, the system-level implementation is straightforward. This restriction, however, makes the task of programming more difficult. Conversely, variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler. This is a common tradeoff seen throughout operating-system design. If processes P and Q want to communicate, they must send messages to and receive messages from each other; a communication link must exist between them. This link can be implemented in a variety of ways. Here are several methods for logically implementing a link and the send()/receive() operations:

• Direct or indirect communication
• Synchronous or asynchronous communication
• Automatic or explicit buffering

We look at issues related to each of these features next.

1 Naming

Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication. Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:

• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.

This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver process must name the other to communicate. A variant of this scheme employs asymmetry in addressing. Here, only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send() and receive() primitives are defined as follows:

• send(P, message)—Send a message to process P.
• receive(id, message)—Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification. For example, POSIX message queues use an integer value to identify a mailbox. In this scheme, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they have a shared mailbox, however. The send() and receive() primitives are defined as follows:

• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
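As a small sketch, indirect communication can be modeled in C with a kernel pipe standing in for mailbox A. The pipe, the fork-based process layout, and the message "hello" are illustrative assumptions, not a specific mailbox API:

```c
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Indirect communication through one "mailbox": a pipe created by the
   kernel. The child performs send(A, message); the parent performs
   receive(A, message). Returns 0 and fills `out` on success. */
int mailbox_demo(char *out, size_t outlen) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) return -1;

    if (fork() == 0) {               /* sender process */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);   /* send(A, "hello") */
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                    /* receiver process */
    ssize_t n = read(fd[0], out, outlen);     /* receive(A, message) */
    close(fd[0]);
    wait(NULL);
    return n > 0 ? 0 : -1;
}
```

Both processes name the same link (the pipe), not each other, which is the defining property of indirect communication.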

2 Synchronization

Communication between processes takes place through calls to send() and receive() primitives. There are different design options for implementing each primitive. Message passing may be blocking or nonblocking, also known as synchronous and asynchronous.

• Blocking send. The sending process is blocked until the message is received by the receiving process or by the mailbox.
• Nonblocking send. The sending process sends the message and resumes operation.
• Blocking receive. The receiver blocks until a message is available.
• Nonblocking receive. The receiver retrieves either a valid message or a null.

Different combinations of send() and receive() are possible. When both send() and receive() are blocking, we have a rendezvous between the sender and the receiver.

3 Buffering

Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Basically, such queues can be implemented in three ways:

• Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue (either the message is copied or a pointer to the message is kept), and the sender can continue execution without waiting. The link's capacity is finite, however. If the link is full, the sender must block until space is available in the queue.
• Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.
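The bounded-capacity case can be sketched as a small circular buffer. The capacity of 3 and the return-code convention are illustrative choices; a real kernel would block the caller instead of returning -1:

```c
#define CAP 3   /* bounded capacity: at most CAP messages may wait */

struct queue { int msgs[CAP]; int head, count; };

/* send: place the message in the queue unless it is full.
   Returns 0 on success, -1 if the sender would have to block. */
int qsend(struct queue *q, int msg) {
    if (q->count == CAP) return -1;            /* link full: sender must wait */
    q->msgs[(q->head + q->count) % CAP] = msg; /* append at the tail */
    q->count++;
    return 0;
}

/* receive: take the oldest waiting message.
   Returns 0 on success, -1 if no message is available. */
int qrecv(struct queue *q, int *msg) {
    if (q->count == 0) return -1;              /* empty: receiver must wait */
    *msg = q->msgs[q->head];
    q->head = (q->head + 1) % CAP;
    q->count--;
    return 0;
}
```

Setting CAP to 0 would give the zero-capacity (rendezvous) behavior; removing the CAP check would approximate unbounded capacity.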

The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as systems with automatic buffering.

Ans b) Following are some of the important functions of an operating system:

 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a large array of words or bytes where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An operating system does the following activities for memory management −

 Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
 In multiprogramming, the OS decides which process will get memory, when, and how much.
 Allocates memory when a process requests it.
 De-allocates memory when a process no longer needs it or has terminated.

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An operating system does the following activities for processor management −

 Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when a process no longer requires it.

Device Management

An Operating System manages device communication via their respective drivers. It does the following activities for device management −

 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device, when, and for how much time.
 Allocates devices efficiently.
 De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.

An Operating System does the following activities for file management −

 Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −

 Security − By means of passwords and other similar techniques, it prevents unauthorized access to programs and data.
 Control over system performance − Recording delays between requests for a service and responses from the system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging and error-detecting aids.
 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.

OR

Ans a) Client-server model

 A client-server networking model is a model in which computers such as servers provide network services to other computers, such as clients, to perform user-based tasks.
 Application programs using the client-server model should follow the strategies given below:

[Figure: a computer network diagram of clients communicating with a server via the Internet.]

 An application program known as a client program, running on the local machine, requests a service from an application program known as a server program, running on the remote machine.
 A client program runs only when it requests a service from the server, while the server program runs all the time, as it does not know when its services will be required.
 A server provides a service for many clients, not just for a single client. Therefore, we can say that client-server follows a many-to-one relationship: many clients can use the service of one server.
 Services are required frequently, and many users have a specific client-server application program. For example, a client-server application program allows the user to access files, send e-mail, and so on. If the services are more customized, then we should have one generic application program that allows the user to access the services available on the remote computer.

Client

A client is a program that runs on the local machine, requesting a service from the server. A client program is a finite program: it is started by the user and terminates when the service is complete.

Server

A server is a program that runs on the remote machine, providing services to clients. When a client requests a service, the server opens the door to the incoming request, but it never initiates a service itself.

A server program is an infinite program: when it starts, it runs indefinitely unless a problem arises. The server waits for incoming requests from clients, and when a request arrives, it responds to it.
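The request/response pattern described above can be sketched in C with socketpair() and fork(); the "echo:" reply format and the single-exchange layout are invented for illustration, not a real protocol:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

/* One request/response exchange: the child plays the server, waiting
   for an incoming request; the parent plays the client, initiating it. */
int request(const char *req, char *resp, size_t n) {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return -1;

    if (fork() == 0) {                 /* server: never initiates, only replies */
        char buf[64];
        close(sv[0]);
        ssize_t r = read(sv[1], buf, sizeof buf - 1);
        buf[r] = '\0';
        char out[80];
        snprintf(out, sizeof out, "echo:%s", buf);
        write(sv[1], out, strlen(out) + 1);
        close(sv[1]);
        _exit(0);
    }

    close(sv[1]);                      /* client: sends the request, then waits */
    write(sv[0], req, strlen(req));
    read(sv[0], resp, n);
    close(sv[0]);
    wait(NULL);
    return 0;
}
```

A real server would loop on accept() over a TCP socket to serve many clients; the socketpair keeps the sketch self-contained.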

Advantages of Client-server networks:

 Centralized: Centralized back-up is possible in client-server networks, i.e., all the data is stored on the server.
 Security: These networks are more secure, as all the shared resources are centrally administered.
 Performance: The use of a dedicated server increases the speed of sharing resources. This increases the performance of the overall system.
 Scalability: We can increase the number of clients and servers separately, i.e., a new element or node can be added to the network at any time.

Disadvantages of Client-Server network:

 Traffic congestion is a big problem in client/server networks. When a large number of clients send requests to the same server, it may cause traffic congestion.
 It lacks the robustness of a peer network: if the server is down, client requests cannot be met.
 A client/server network can be demanding. Sometimes regular computer hardware cannot serve a certain number of clients; in such situations, specific hardware is required on the server side to complete the work.
 Sometimes resources exist on the server but not on the client. For example, if the application is a web application, we cannot print directly without first opening the print-preview window in the browser.

Ans b) Clock synchronization in a distributed system:

Clock synchronization deals with understanding the temporal ordering of events produced by concurrent processes. It is useful for synchronizing senders and receivers of messages, controlling joint activity, and serializing concurrent access to shared objects. The goal is that multiple unrelated processes running on different machines should be in agreement with, and be able to make consistent decisions about, the ordering of events in a system. For these kinds of events, we introduce the concept of a logical clock, one where the clock need not have any bearing on the time of day but rather is able to create event sequence numbers that can be used for comparing sets of events, such as messages, within a distributed system. Another aspect of clock synchronization deals with synchronizing time-of-day clocks among groups of machines. In this case, we want to ensure that all machines can report the same time. One algorithm that performs clock synchronization is NTP.

Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). NTP can usually maintain time to within tens of milliseconds over the public Internet and can achieve better than one-millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is usually described in terms of a client-server model, but it can as easily be used in peer-to-peer relationships where both peers consider the other to be a potential time source. Implementations send and receive timestamps using the User Datagram Protocol (UDP) on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange.
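NTP's core calculation uses four timestamps from one round trip: t0 (client sends), t1 (server receives), t2 (server replies), t3 (client receives). The standard offset and delay formulas can be written directly; the sample values in the usage note are made up:

```c
/* Estimated offset of the server's clock relative to the client's,
   assuming the network delay is symmetric in both directions. */
double ntp_offset(double t0, double t1, double t2, double t3) {
    return ((t1 - t0) + (t2 - t3)) / 2.0;
}

/* Total round-trip network delay, excluding the server's processing time. */
double ntp_delay(double t0, double t1, double t2, double t3) {
    return (t3 - t0) - (t2 - t1);
}
```

For example, with t0 = 0, t1 = 6, t2 = 8, t3 = 4 (server clock ahead), the estimated offset is 5 time units and the round-trip delay is 2.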

Ans 2a) A system is secure if its resources are used and accessed as intended under all circumstances.

In reality, different problems are faced in system security, such as:

Breach of Confidentiality: This type of violation involves unauthorized reading of data.

Breach of Integrity: This includes unauthorized modification of data, which may, for example, result in passing of liability to an innocent party or modification of the source code of a commercial application.

Breach of Availability: This involves unauthorized destruction of data. Some crackers would rather wreak havoc and gain status or bragging rights than gain financially.

Theft of Service: This violation involves unauthorized use of resources. For example, an intruder may install a daemon on a system that acts as a file server.

Denial of Service: This violation involves preventing legitimate use of the system.

A network threat concerning RAID systems: while transferring data across the network, some bits may get lost; parity bits generated by a Hamming code allow such errors to be detected and corrected. Bits can be lost in transfer for many reasons, such as stealing of data, masquerading, etc. Other network threats include:

a) Viruses
b) Worms
c) Trojan programs
d) Logic bombs
e) Backdoors
f) Denial-of-service attacks

Network threats can be considered at several levels:

Level 1: Front-end servers, those available to both internal and external users, must be protected against unauthorized access. Typically these systems are email and web servers.

Level 2: Back-end systems (such as users' workstations and internal database servers) must be protected to ensure the confidentiality, accuracy and integrity of data.

Level 3: The corporate network must be protected against intrusion, denial-of-service attacks and unauthorized access.

Ans2b) Disk management and disk formatting:

Disk Management: The operating system is responsible for several aspects of disk management, such as disk initialization, booting from disk, and bad-block recovery. Disk formatting is an aspect that comes under disk management. Disk formatting is the process of preparing a data storage device, such as a hard disk drive, solid-state drive, floppy disk, or USB flash drive, for initial use. In some cases, the formatting operation may also create one or more new file systems. The first part of the formatting process, which performs basic medium preparation, is often referred to as "low-level formatting".

Partitioning is the common term for the second part of the process, making the data storage device visible to an operating system. The third part of the process, usually termed "high-level formatting", most often refers to the process of generating a new file system. In some operating systems, all or parts of these three processes can be combined or repeated at different levels, and the term "format" is understood to mean an operation in which a new disk medium is fully prepared to store files.

BOOT BLOCK: The boot block is usually the first block on the hard disk. ... Whenever the computer is turned on, a program in the BIOS chip detects the bootable partitions and selects the appropriate bootstrap loader to boot the correct operating system into memory.

BAD BLOCK: A bad sector is a cluster of storage on the hard drive that appears not to be working properly. The operating system may have tried to read data from this sector and found that the error-correcting code (ECC) did not match the contents of the sector, which suggests that something is wrong. Such sectors may be marked as bad, but they can sometimes be repaired by overwriting the drive with zeros or, in the old days, by performing a low-level format. Windows' Disk Check tool can also repair such bad sectors.
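The ECC mismatch check described above can be sketched with a deliberately weak stand-in code: a one-byte XOR parity over the sector. Real drives use far stronger codes (e.g. Reed-Solomon); the XOR choice is purely illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for a sector's error-detecting code: XOR of all bytes.
   The check works like real ECC: recompute the code over the data and
   compare it with the code stored alongside the sector. */
uint8_t sector_code(const uint8_t *data, size_t n) {
    uint8_t c = 0;
    for (size_t i = 0; i < n; i++) c ^= data[i];
    return c;
}

/* Returns 1 if the stored code matches the data (sector looks good),
   0 if they disagree (candidate bad sector). */
int sector_ok(const uint8_t *data, size_t n, uint8_t stored) {
    return sector_code(data, n) == stored;
}
```

When sector_ok() returns 0, the OS would mark the sector bad and remap it to a spare.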

SWAP SPACE MANAGEMENT: Swap space helps the computer's operating system pretend that it has more RAM than it actually has. It is also called a swap file. This interchange of data between virtual memory and real memory is called swapping, and the space on disk is called "swap space".

OR

Ans 2a) A linear list is the simplest and easiest directory structure to set up. Details of the files in a directory are stored in a special type of file (called a directory). Sorting the list makes searches faster, at the expense of more complex insertions and deletions.

1. Linear List

In this algorithm, all the files in a directory are maintained as a singly linked list. Each file entry contains the pointers to the data blocks assigned to it and a pointer to the next file in the directory.

Characteristics

1. When a new file is created, the entire list is checked to see whether the new file name matches an existing file name. If it does not, the file can be created at the beginning or at the end of the list. Searching for a unique name is therefore a big concern, because traversing the whole list takes time.
2. The list needs to be traversed for every operation (creation, deletion, updating, etc.) on the files, so the system becomes inefficient.

2. Hash Table

To overcome the drawbacks of the singly-linked-list implementation of directories, there is an alternative approach: the hash table. This approach uses a hash table along with the linked lists.

A key-value pair for each file in the directory is generated and stored in the hash table. The key is determined by applying the hash function to the file name, and the value points to the corresponding file stored in the directory. Searching now becomes efficient because the entire list is no longer scanned on every operation. Only the hash-table entry for the key is checked, and if an entry is found, the corresponding file is fetched using the value.
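A minimal sketch of the hash-table directory in C. The bucket count, the djb2-style hash, and the inode-number return convention are illustrative assumptions, not any real file system's layout:

```c
#include <string.h>

#define NBUCKET 8

struct entry { const char *name; int inode; struct entry *next; };
struct dir { struct entry *bucket[NBUCKET]; };

/* Hash the file name to pick a bucket (djb2-style multiply-and-add). */
static unsigned hashname(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % NBUCKET;
}

/* Insert at the head of the chosen bucket's chain: no full-list scan. */
void dir_add(struct dir *d, struct entry *e) {
    unsigned h = hashname(e->name);
    e->next = d->bucket[h];
    d->bucket[h] = e;
}

/* Lookup: only one chain is searched, not the entire directory list. */
int dir_lookup(const struct dir *d, const char *name) {
    for (struct entry *e = d->bucket[hashname(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0) return e->inode;
    return -1;   /* not found */
}
```

The chains handle collisions, which is why the linked list is still needed alongside the table.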

Reliability and integrity in file system

Regular file operations often involve changing several disk blocks. For example, in creating a file in Unix, we must write the file information to the inode, add the file's entry to its directory, and possibly create the first data block of the file.

In moving a file from one directory to another, we must delete the entry from the first directory and insert the entry into the second. But what if the computer crashes between two of these operations? Then the filesystem could enter an inconsistent state, confusing the OS to the point that the filesystem can't be used at all.

One significant responsibility of a file system is to ensure that, regardless of the actions by programs accessing the data, the structure remains consistent. This includes actions taken if a program modifying data terminates abnormally or neglects to inform the file system that it has completed its activities. This may include updating the metadata, the directory entry and handling any data that was buffered but not yet updated on the physical storage media. Other failures which the file system must deal with include media failures or loss of connection to remote systems. In the event of an operating system failure or "soft" power failure, special routines in the file system must be invoked similar to when an individual program fails.

The file system must also be able to correct damaged structures. These may occur as a result of an operating system failure for which the OS was unable to notify the file system, power failure or reset. The file system must also record events to allow analysis of systemic issues as well as problems with specific files or directories.
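One common remedy for the crash problem described above is write-ahead logging (journaling): record the intended change before applying it, then replay the log after a crash. A toy sketch, not any real file system's on-disk format; the slot/value records and array sizes are invented:

```c
/* Toy redo log: each record says "set slot i to value v". Records are
   appended to the log *before* being applied to the "disk" array, so a
   crash between the two steps can be repaired by replaying the log. */
struct rec { int slot, value; };

struct fs {
    int disk[4];          /* the on-disk state */
    struct rec log[8];    /* the journal */
    int nlog;
};

void fs_set(struct fs *f, int slot, int value, int crash_before_apply) {
    f->log[f->nlog].slot = slot;       /* 1. log the intent */
    f->log[f->nlog].value = value;
    f->nlog++;
    if (crash_before_apply) return;    /* simulated crash: disk not updated */
    f->disk[slot] = value;             /* 2. apply to disk */
}

/* Recovery: replay every logged record. Re-applying an already-applied
   record is harmless because the records are idempotent. */
void fs_recover(struct fs *f) {
    for (int i = 0; i < f->nlog; i++)
        f->disk[f->log[i].slot] = f->log[i].value;
}
```

Real journals also checkpoint and truncate the log; this sketch only shows why logging intent first keeps the structure repairable.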

Ans 2b) Distributed shared memory, with its advantages and disadvantages. A distributed shared memory (DSM) is a mechanism allowing end-users' processes to access shared data without using inter-process communication. A distributed-memory system, often called a multicomputer, consists of multiple independent processing nodes with local memory modules, which are connected by a general interconnection network.

Advantages of Distributed Shared Memory:

 Hides data movement and provides a simpler abstraction for sharing data. Programmers don't need to worry about memory transfers between machines as they do when using the message-passing model.
 Allows the passing of complex structures by reference, simplifying algorithm development for distributed applications.
 Takes advantage of "locality of reference" by moving the entire page containing the referenced data, rather than just the piece of data.
 Cheaper to build than multiprocessor systems. The idea can be implemented using normal hardware and does not require anything complex to connect the shared memory to the processors.
 Larger memory sizes are available to programs, by combining the physical memory of all nodes. This large memory will not incur disk latency due to swapping, as in traditional distributed systems.
 An unlimited number of nodes can be used, unlike multiprocessor systems, where main memory is accessed via a common bus, limiting the size of the system.
 Programs written for shared-memory multiprocessors can be run on DSM systems.

Disadvantages

 Generally slower to access than non-distributed shared memory.
 Must provide additional protection against simultaneous accesses to shared data.

Ans 3a) Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed considering UNIX compatibility; its functionality list is quite similar to that of UNIX.

Components of Linux System

The Linux operating system has primarily three components:

 Kernel − The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It consists of various modules, and it interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.

 System Libraries − System libraries are special functions or programs using which application programs or system utilities access the kernel's features. These libraries implement most of the functionalities of the operating system and do not require the kernel module's code access rights.

 System Utility − System utility programs are responsible for specialized, individual-level tasks.

Kernel Mode vs User Mode

Kernel component code executes in a special privileged mode called kernel mode, with full access to all resources of the computer. This code represents a single process, executes in a single address space, and does not require any context switch; hence it is very efficient and fast. The kernel runs each process and provides system services to processes, including protected access to hardware. Support code that is not required to run in kernel mode is placed in the system libraries. User programs and other system programs work in user mode, which has no access to system hardware or kernel code. User programs and utilities use the system libraries to access kernel functions for the system's low-level tasks.

Basic Features

Following are some of the important features of the Linux operating system.

 Portable − Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

 Open Source − Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

 Multi-User − Linux is a multiuser system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.

 Multiprogramming − Linux is a multiprogramming system, meaning multiple applications can run at the same time.

 Hierarchical File System − Linux provides a standard file structure in which system files/ user files are arranged.

 Shell − Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.

 Security − Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture

The following illustration shows the architecture of a Linux system:

The architecture of a Linux System consists of the following layers −

 Hardware layer − Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).

 Kernel − It is the core component of Operating System, interacts directly with hardware, provides low level services to upper layer components.

 Shell − An interface to kernel, hiding complexity of kernel's functions from users. The shell takes commands from the user and executes kernel's functions.

 Utilities − Utility programs that provide the user with most of the functionalities of an operating system.

Ans 3b) The shell provides you with an interface to the UNIX system. It gathers input from you and executes programs based on that input. When a program finishes executing, it displays that program's output.

A shell is an environment in which we can run our commands, programs, and shell scripts. There are different flavors of shells, just as there are different flavors of operating systems. Each flavor of shell has its own set of recognized commands and functions.

Shell Types:

In UNIX there are two major types of shells:

1. The Bourne shell. If you are using a Bourne-type shell, the default prompt is the $ character.

2. The C shell. If you are using a C-type shell, the default prompt is the % character.

There are again various subcategories for Bourne Shell which are listed as follows:

 Bourne shell (sh)

 Korn shell (ksh)

 Bourne Again shell (bash)

 POSIX shell (sh)

The different C-type shells follow:

 C shell (csh)

 TENEX/TOPS C shell (tcsh)

The original UNIX shell was written in the mid-1970s by Stephen R. Bourne while he was at AT&T Bell Labs in New Jersey.

The Bourne shell was the first shell to appear on UNIX systems, thus it is referred to as "the shell".

The Bourne shell is usually installed as /bin/sh on most versions of UNIX. For this reason, it is the shell of choice for writing scripts to use on several different versions of UNIX.

There are three main uses for the shell:

 Interactive use
 Customization of your UNIX session
 Programming

1 Interactive Use

When the shell is used interactively, the system waits for you to type a command at the UNIX prompt. Your commands can include special symbols that let you abbreviate filenames or redirect input and output.

2 Customization of Your UNIX Session

A UNIX shell defines variables to control the behavior of your UNIX session. Setting these variables will tell the system, for example, which directory to use as your home directory, or the file in which to store your mail. Some variables are preset by the system; you can define others in start-up files that are read when you log in. Start-up files can also contain UNIX commands or special shell commands. These will be executed every time you log in.

3 Programming

UNIX shells provide a set of special (or built-in) commands that can be used to create programs called shell scripts. In fact, many built-in commands can be used interactively like UNIX commands, and UNIX commands are frequently used in shell scripts. Scripts are useful for executing a series of individual commands. This is similar to BATCH files in MS-DOS. Scripts can also execute commands repeatedly (in a loop) or conditionally (if-else), as in many high-level programming languages.

OR

Ans 3a)

Network File System is a role service and feature included with the File and Storage Services server role in Windows Server. Network File System (NFS) provides a file-sharing solution for enterprises that have heterogeneous environments that include both Windows and non-Windows computers.

Feature description

Using the NFS protocol, you can transfer files between computers running Windows and other non-Windows operating systems, such as Linux or UNIX.

NFS in Windows Server includes Server for NFS and Client for NFS. A computer running Windows Server can use Server for NFS to act as an NFS file server for other non-Windows client computers. Client for NFS allows a Windows-based computer running Windows Server to access files stored on a non-Windows NFS server.

Inter Process Communication

A process can be of two types:

 Independent process
 Co-operating process

An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Though one might think that processes running independently execute very efficiently, in practice there are many situations where the co-operative nature can be utilised for increasing computational speed, convenience and modularity. Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways:

1. Shared Memory
2. Message passing

Ans 3b) Memory Management

The memory management subsystem is one of the most important parts of the operating system. Since the early days of computing, there has been a need for more memory than exists physically in a system. Strategies have been developed to overcome this limitation and the most successful of these is virtual memory. Virtual memory makes the system appear to have more memory than is physically present by sharing it among competing processes as they need it.

Virtual memory does more than just make your computer's memory go farther. The memory management subsystem provides:

Large Address Spaces

The operating system makes the system appear as if it has a larger amount of memory than it actually has. The virtual memory can be many times larger than the physical memory in the system.

Protection

Each process in the system has its own virtual address space. These virtual address spaces are completely separate from each other and so a process running one application cannot affect another. Also, the hardware virtual memory mechanisms allow areas of memory to be protected against writing. This protects code and data from being overwritten by rogue applications.

Memory Mapping

Memory mapping is used to map image and data files into a process' address space. In memory mapping, the contents of a file are linked directly into the virtual address space of a process.

Fair Physical Memory Allocation

The memory management subsystem allows each running process in the system a fair share of the physical memory of the system.

Shared Virtual Memory

Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory. For example there could be several processes in the system running the bash command shell. Rather than have several copies of bash, one in each process's virtual address space, it is better to have only one copy in physical memory and all of the processes running bash share it. Dynamic libraries are another common example of executing code shared between several processes.

Shared memory can also be used as an Inter Process Communication (IPC) mechanism, with two or more processes exchanging information via memory common to all of them. Linux supports the Unix System V shared memory IPC.
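A minimal sketch of System V shared memory IPC (the function name and the abbreviated error handling are ours): the parent writes into a segment, and the forked child reads the same bytes through the shared mapping:

```c
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

/* Parent creates and writes a shared segment; the forked child reads it.
 * Returns 1 if the child saw exactly the parent's message. */
int shm_roundtrip(const char *msg) {
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    if (id == -1) return 0;
    char *mem = (char *)shmat(id, NULL, 0);
    if (mem == (char *)-1) return 0;
    strcpy(mem, msg);                      /* "producer" writes */
    pid_t pid = fork();
    if (pid == 0) {                        /* child sees the same segment */
        int ok = strcmp(mem, msg) == 0;    /* "consumer" reads */
        shmdt(mem);
        _exit(ok ? 0 : 1);
    }
    int status;
    waitpid(pid, &status, 0);
    shmdt(mem);
    shmctl(id, IPC_RMID, NULL);            /* remove the segment */
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

Unlike the pipe example, no copying through the kernel occurs after setup: both processes address the same physical pages.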

Thread management in Linux

Linux has a unique implementation of threads. To the Linux kernel, there is no concept of a thread: Linux implements all threads as standard processes. The Linux kernel does not provide any special scheduling semantics or data structures to represent threads. Instead, a thread is merely a process that shares certain resources with other processes. Each thread has a unique task_struct and appears to the kernel as a normal process (which just happens to share resources, such as an address space, with other processes).

This approach to threads contrasts greatly with operating systems such as Windows or Sun Solaris, which have explicit kernel support for threads (and sometimes call threads lightweight processes). The name "lightweight process" sums up the difference in philosophies between Linux and other systems. To these other operating systems, threads are an abstraction that provides a lighter, quicker execution unit than the heavy process. To Linux, threads are simply a manner of sharing resources between processes (which are already quite lightweight).

For example, assume you have a process that consists of four threads. On systems with explicit thread support, there might exist one process descriptor that in turn points to the four different threads. The process descriptor describes the shared resources, such as an address space or open files. The threads then describe the resources they alone possess. Conversely, in Linux, there are simply four processes and thus four normal task_struct structures. The four processes are set up to share certain resources. Threads are created like normal tasks, with the exception that the clone() system call is passed flags corresponding to the specific resources to be shared:

clone(CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0);
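The clone() call above can be exercised directly on Linux. The following hedged sketch (the worker function, stack size, and helper name are illustrative, not part of any standard API) shows that a child created with CLONE_VM writes into the parent's address space:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>

/* Child function: runs in the shared address space and writes through
 * the pointer it was given, i.e. into the parent's local variable. */
static int worker(void *arg) {
    *(int *)arg = 42;
    return 0;
}

/* Linux-specific: create a clone() child sharing the address space.
 * Returns 42 if the child's write was visible to the parent. */
int clone_shares_memory(void) {
    int value = 0;
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack) return -1;
    /* On most architectures the stack grows downward, so pass its top.
     * SIGCHLD lets the parent reap the child with waitpid(). */
    pid_t pid = clone(worker, stack + stack_size,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD,
                      &value);
    if (pid == -1) { free(stack); return -1; }
    waitpid(pid, NULL, 0);
    free(stack);
    return value;
}
```

To the kernel, the child is just another task_struct; only the passed flags make it behave like a thread of the parent.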

Ans 4 a) File Allocation Table (FAT) is a file system that was created by Microsoft in 1977.

FAT is still in use today as the preferred file system for floppy drive media and portable, high-capacity storage devices like flash drives and other solid-state memory devices like SD cards.

FAT was the primary file system used in all of Microsoft's consumer operating systems from MS-DOS through Windows ME. Even though FAT is still a supported option on Microsoft's newer operating systems, NTFS is the primary file system used these days.

The File Allocation Table file system has seen advancements over time, primarily due to the need to support larger hard disk drives and larger file sizes.

Here is more detail on the different versions of the FAT file system:

FAT12 (12-bit File Allocation Table)

The first widely used version of the FAT file system, FAT12, was introduced in 1980, right along with the first versions of DOS.

FAT12 was the primary file system for Microsoft operating systems up through MS-DOS 3.30 but was also used in most systems up through MS-DOS 4.0. FAT12 is still the file system used on the occasional floppy disk you'll find today.

FAT12 supports drive sizes and file sizes of up to 16 MB using 4 KB clusters or 32 MB using 8 KB ones, with a maximum number of 4,084 files on a single volume (when using 8KB clusters).

File names under FAT12 cannot exceed the maximum character limit of 8 characters, plus 3 for the extension. A number of file attributes were first introduced in FAT12, including hidden, read-only, system, and volume label.

Note: FAT8, introduced in 1977, was the first true version of the FAT file system, but it saw limited use, and only on some terminal-style computer systems of the time.

FAT16 (16-bit File Allocation Table)

The second implementation of FAT was FAT16, first introduced in 1984 in PC DOS 3.0 and MS-DOS 3.0.

A slightly improved version of FAT16, called FAT16B, was the primary file system for MS-DOS 4.0 up through MS-DOS 6.22. Beginning with MS-DOS 7.0, a further improved version, called FAT16X, was used instead.

Depending on the operating system and the cluster size used, the maximum drive size a FAT16-formatted drive can be ranges from 2 GB up to 16 GB, the latter only in Windows NT 4 with 256 KB clusters.

File sizes on FAT16 drives max out at 4 GB with Large File Support enabled, or 2 GB without it.

The maximum number of files that can be held on a FAT16 volume is 65,536. Just like with FAT12, file names were limited to 8+3 characters, but this was extended to 255 characters starting with Windows 95.

The archive file attribute was introduced in FAT16.

FAT32 (32-bit File Allocation Table)

FAT32 is the latest version of the FAT file system. It was introduced in 1996 for Windows 95 OSR2 / MS-DOS 7.1 users and was the primary file system for consumer Windows versions through Windows ME.

FAT32 supports basic drive sizes up to 2 TB or even as high as 16 TB with 64 KB clusters.

Like with FAT16, file sizes on FAT32 drives max out at 4 GB with Large File Support turned on or 2 GB without it. A modified version of FAT32, called FAT32+, supports files close to 256 GB in size!

Up to 268,173,300 files can be contained on a FAT32 volume, so long as it's using 32 KB clusters.

exFAT (Extended File Allocation Table)

exFAT, first introduced in 2006, is yet another file system created by Microsoft, although it's not the "next" FAT version after FAT32. exFAT is primarily intended to be used on portable media devices like flash drives, SDHC and SDXC cards, etc. exFAT officially supports portable media storage devices up to 512 TiB in size but theoretically could support drives as large as 64 ZiB, which is considerably larger than any media available as of this writing.

Native support for 255 character filenames and support for up to 2,796,202 files per directory are two noteworthy features of the exFAT system.

The exFAT file system is supported by almost all versions of Windows (older ones with optional updates), Mac OS X (10.6.5+), as well as on many TV, media, and other devices.

NTFS (NT file system; sometimes New Technology File System) is the file system that the Windows NT operating system uses for storing and retrieving files on a hard disk. NTFS is the Windows NT equivalent of the Windows 95 file allocation table (FAT) and the OS/2 High Performance File System (HPFS). However, NTFS offers a number of improvements over FAT and HPFS in terms of performance, extendibility, and security.

Notable features of NTFS include:

• Use of a b-tree directory scheme to keep track of file clusters
• Information about a file's clusters and other data is stored with each cluster, not just in a governing table (as in FAT)
• Support for very large files (up to 2^64 bytes, or approximately 16 exabytes, in size)
• An access control list (ACL) that lets a server administrator control who can access specific files
• Integrated file compression
• Support for names based on Unicode
• Support for long file names as well as "8 by 3" names
• Data security on both removable and fixed disks

How NTFS Works

When a hard disk is formatted (initialized), it is divided into partitions, or major divisions of the total physical hard disk space. Within each partition, the operating system keeps track of all the files that are stored by that operating system. Each file is actually stored on the hard disk in one or more clusters, or disk spaces of a predefined uniform size. Using NTFS, the sizes of clusters range from 512 bytes to 64 kilobytes. Windows NT provides a recommended default cluster size for any given drive size. For example, for a 4 GB drive, the default cluster size is 4 KB. Note that clusters are indivisible. Even the smallest file takes up one cluster, and a 4.1 KB file takes up two clusters (or 8 KB) on a 4 KB cluster system.

The selection of the cluster size is a trade-off between efficient use of disk space and the number of disk accesses required to access a file. In general, using NTFS, the larger the hard disk the larger the default cluster size, since it's assumed that a system user will prefer to increase performance (fewer disk accesses) at the expense of some amount of space inefficiency.
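The cluster rounding described above amounts to a ceiling division; these helper functions (the names are ours, for illustration only) reproduce the 4.1 KB-file example:

```c
/* Clusters are indivisible: a file always occupies a whole number of
 * clusters, so its on-disk footprint is rounded up to a cluster
 * boundary. (A zero-byte file would yield 0 here; per the text, the
 * smallest non-empty file still takes one full cluster.) */
unsigned long clusters_needed(unsigned long file_bytes,
                              unsigned long cluster_bytes) {
    return (file_bytes + cluster_bytes - 1) / cluster_bytes;  /* ceiling */
}

unsigned long on_disk_bytes(unsigned long file_bytes,
                            unsigned long cluster_bytes) {
    return clusters_needed(file_bytes, cluster_bytes) * cluster_bytes;
}
```

For a 4.1 KB file (about 4199 bytes) on 4 KB clusters, clusters_needed gives 2, so the file occupies 8192 bytes on disk, matching the example in the text.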

When a file is created using NTFS, a record about the file is created in a special file, the Master File Table (MFT). The record is used to locate a file's possibly scattered clusters. NTFS tries to find contiguous storage space that will hold the entire file (all of its clusters).

Each file contains, along with its data content, a description of its attributes (its metadata).

b) 1. Introduction

A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If the process has multiple threads of control, it can do more than one task at a time. Figure 1 illustrates the difference between a traditional single-threaded process and a multithreaded process.

Figure 1 Single- and multithreaded processes.

2 Threading Concepts

Many software packages that run on modern desktop PCs are multithreaded. An application typically is implemented as a separate process with several threads of control. A web browser might have one thread display images or text while another thread retrieves data from the network. A word processor may have a thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.

In certain situations a single application may be required to perform several similar tasks. For example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have several (perhaps hundreds) of clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time. The amount of time that a client might have to wait for its request to be serviced could be enormous.

One solution is to have the server run as a single process that accepts requests. When the server receives a request, it creates a separate process to service that request. In fact, this process-creation method was in common use before threads became popular. Process creation is very heavyweight, as was shown in the previous chapter. If the new process will perform the same tasks as the existing process, why incur all that overhead? It is generally more efficient for one process that contains multiple threads to serve the same purpose. This approach would multithread the web-server process. The server would create a separate thread to listen for client requests; when a request was made, rather than creating another process, it would create another thread to service the request.

Threads also play a vital role in remote procedure call (RPC) systems. RPCs allow interprocess communication by providing a communication mechanism similar to ordinary function or procedure calls.
Typically, RPC servers are multithreaded. When a server receives a message, it services the message using a separate thread. This allows the server to service several concurrent requests.

3 User and Kernel Level Threads

Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads.

1. User threads: User threads are supported above the kernel and are implemented by a thread library at the user level. The library provides support for thread creation, scheduling, and management with no support from the kernel. Because the kernel is unaware of user-level threads, all thread creation and scheduling are done in user space without the need for kernel intervention. Therefore, user-level threads are generally fast to create and manage; they have drawbacks, however. For instance, if the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block, even if other threads are available to run within the application. User-thread libraries include POSIX Pthreads, C-threads, and Solaris 2 UI-threads.

2. Kernel threads: Kernel threads are supported directly by the operating system: the kernel performs thread creation, scheduling, and management in kernel space. Because thread management is done by the operating system, kernel threads are generally slower to create and manage than are user threads. However, since the kernel is managing the threads, if a thread performs a blocking system call, the kernel can schedule another thread in the application for execution. Also, in a multiprocessor environment, the kernel can schedule threads on different processors. Most contemporary operating systems, including Windows NT, Solaris 2, and BeOS, support kernel threads.

4 The Classical Thread Model

One way of looking at a process is that it is a way to group related resources together. A process has an address space containing program text and data, as well as other resources. These resources may include open files, child processes, pending alarms, signal handlers, accounting information, and more. By putting them together in the form of a process, they can be managed more easily. The other concept a process has is a thread of execution, usually shortened to just thread.
The thread has a program counter that keeps track of which instruction to execute next. It has registers, which hold its current working variables. It has a stack, which contains the execution history, with one frame for each procedure called but not yet returned from. Although a thread must execute in some process, the thread and its process are different concepts and can be treated separately. Processes are used to group resources together; threads are the entities scheduled for execution on the CPU.

What threads add to the process model is to allow multiple executions to take place in the same process environment, to a large degree independent of one another. Having multiple threads running in parallel in one process is analogous to having multiple processes running in parallel in one computer. In the former case, the threads share an address space and other resources. In the latter case, processes share physical memory, disks, printers, and other resources. Because threads have some of the properties of processes, they are sometimes called lightweight processes. The term multithreading is also used to describe the situation of allowing multiple threads in the same process. Some CPUs have direct hardware support for multithreading and allow thread switches to happen on a nanosecond time scale.

In Figure 2(a) we see three traditional processes. Each process has its own address space and a single thread of control. In contrast, in Figure 2(b) we see a single process with three threads of control. Although in both cases we have three threads, in Figure 2(a) each of them operates in a different address space, whereas in Figure 2(b) all three of them share the same address space.

Figure 2. (a) Three processes each with one thread. (b) One process with three threads.

Different threads in a process are not as independent as different processes. All threads have exactly the same address space, which means that they also share the same global variables. Since every thread can access every memory address within the process's address space, one thread can read, write, or even wipe out another thread's stack. There is no protection between threads because (1) it is impossible, and (2) it should not be necessary. Unlike different processes, which may be from different users and which may be hostile to one another, a process is always owned by a single user, who has presumably created multiple threads so that they can cooperate, not fight. In addition to sharing an address space, all the threads can share the same set of open files, child processes, alarms, and signals, and so on, as shown in Figure 3.

Figure 3. The first column lists some items shared by all threads in a process. The second one lists some items private to each thread.
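The sharing described above can be demonstrated with POSIX threads; in this sketch (the counter, iteration count, and function names are illustrative), two threads update one global variable, which must therefore be protected by a mutex:

```c
#include <pthread.h>

static long counter = 0;                     /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the same global 100,000 times. Without the
 * mutex, the two read-modify-write sequences could interleave and
 * lose updates; with it, the final count is exact. */
static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long run_two_threads(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;        /* 200000: both threads saw one address space */
}
```

Separate processes running the same code would each get a private copy of counter; it is precisely the shared address space that makes the mutex necessary here.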

It is important to realize that each thread has its own stack, as illustrated in Figure 4. Each thread's stack contains one frame for each procedure called but not yet returned from. This frame contains the procedure's local variables and the return address to use when the procedure call has finished. For example, if procedure X calls procedure Y and Y calls procedure Z, then while Z is executing, the frames for X, Y, and Z will all be on the stack. Each thread will generally call different procedures and thus have a different execution history. This is why each thread needs its own stack.

Figure 4. Each thread has its own stack.

Process Scheduling

1 Basic Concepts

In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled. The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no useful work is accomplished. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues. Every time one process has to wait, another process can take over use of the CPU.

Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.

2 CPU Scheduler

A process migrates among the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler.

Often, in a batch system, more processes are submitted than can be executed immediately. These processes are spooled to a mass-storage device (typically a disk), where they are kept for later execution. The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution. The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling, known as the medium-term scheduler. The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove processes from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming. Later, the process can be reintroduced into memory, and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out, and is later swapped in, by the medium-term scheduler.

OR

b)

Environmental Subsystems

Environmental subsystems are user-mode processes layered over the native Windows XP executive services to enable Windows XP to run programs developed for other operating systems, including 16-bit Windows, MS-DOS, and POSIX. Each environmental subsystem provides a single application environment.

Windows XP uses the Win32 API subsystem as the main operating environment, and thus this subsystem starts all processes. When an application is executed, the Win32 API subsystem calls the VM manager to load the application's executable code. The memory manager returns a status to Win32 indicating the type of executable. If it is not a native Win32 API executable, the Win32 API environment checks whether the appropriate environmental subsystem is running; if the subsystem is not running, it is started as a user-mode process. The subsystem then takes control over the application startup.

The environmental subsystems use the LPC facility to provide operating-system services to client processes. The Windows XP subsystem architecture keeps applications from mixing API routines from different environments. For instance, a Win32 API application cannot make a POSIX system call, because only one environmental subsystem can be associated with each process. Since each subsystem is run as a separate user-mode process, a crash in one has no effect on other processes. The exception is Win32 API, which provides all keyboard, mouse, and graphical display capabilities. If it fails, the system is effectively disabled and requires a reboot.

The Win32 API environment categorizes applications as either graphical or character based, where a character-based application is one that thinks interactive output goes to a character-based (command) window. Win32 API transforms the output of a character-based application to a graphical representation in the command window.
This transformation is easy: whenever an output routine is called, the environmental subsystem calls a Win32 routine to display the text. Since the Win32 API environment performs this function for all character-based windows, it can transfer screen text between windows via the clipboard. This transformation works for MS-DOS applications, as well as for POSIX command-line applications.

MS-DOS Environment

The MS-DOS environment does not have the complexity of the other Windows XP environmental subsystems. It is provided by a Win32 API application called the virtual DOS machine (VDM). Since the VDM is a user-mode process, it is paged and dispatched like any other Windows XP application. The VDM has an instruction-execution unit to execute or emulate 486 instructions. The VDM also provides routines to emulate the MS-DOS ROM BIOS and "int 21" software-interrupt services and has virtual device drivers for the screen, keyboard, and communication ports. The VDM is based on MS-DOS 5.0 source code; it allocates at least 620 KB of memory to the application.

The Windows XP command shell is a program that creates a window that looks like an MS-DOS environment. It can run both 16-bit and 32-bit executables. When an MS-DOS application is run, the command shell starts a VDM process to execute the program. If Windows XP is running on an IA32-compatible processor, MS-DOS graphical applications run in full-screen mode, and character applications can run full screen or in a window.

Not all MS-DOS applications run under the VDM. For example, some MS-DOS applications access the disk hardware directly, so they fail to run on Windows XP because disk access is restricted to protect the file system. In general, MS-DOS applications that directly access hardware will fail to operate under Windows XP. Since MS-DOS is not a multitasking environment, some applications have been written in such a way as to "hog" the CPU.
For instance, the use of busy loops can cause time delays or pauses in execution. The scheduler in the kernel dispatcher detects such delays and automatically throttles the CPU usage, but this may cause the offending application to operate incorrectly.

16-Bit Windows Environment

The Win16 execution environment is provided by a VDM that incorporates additional software (WOW32 for 16-bit applications); this software provides the Windows 3.1 kernel routines and stub routines for window-manager and graphical-device-interface (GDI) functions. The stub routines call the appropriate Win32 API subroutines, converting, or thunking, 16-bit addresses into 32-bit addresses. Applications that rely on the internal structure of the 16-bit window manager or GDI may not work, because the underlying Win32 API implementation is, of course, different from true 16-bit Windows.

WOW32 can multitask with other processes on Windows XP, but it resembles Windows 3.1 in many ways. Only one Win16 application can run at a time, all applications are single threaded and reside in the same address space, and all share the same input queue. These features imply that an application that stops receiving input will block all the other Win16 applications, just as in Windows 3.x, and one Win16 application can crash other Win16 applications by corrupting the address space. Multiple Win16 environments can coexist, however, by using the command start /separate win16application from the command line. There are relatively few 16-bit applications that users need to continue to run on Windows XP, but some of them include common installation (setup) programs. Thus, the WOW32 environment continues to exist primarily because a number of 32-bit applications cannot be installed on Windows XP without it.

32-Bit Windows Environment on IA64

The native environment for Windows on IA64 uses 64-bit addresses and the native IA64 instruction set.
To execute IA32 programs in this environment requires a thunking layer to translate 32-bit Win32 API calls into the corresponding 64-bit calls, just as 16-bit applications require translation on IA32 systems. Thus, 64-bit Windows supports the WOW64 environment. The implementations of 32-bit and 64-bit Windows are essentially identical, and the IA64 processor provides direct execution of IA32 instructions, so WOW64 achieves a higher level of compatibility than WOW32.

Win32 Environment

The main subsystem in Windows XP is the Win32 API. It runs Win32 API applications and manages all keyboard, mouse, and screen I/O. Since it is the controlling environment, it is designed to be extremely robust. Several features of the Win32 API contribute to this robustness. Unlike processes in the Win16 environment, each Win32 process has its own input queue. The window manager dispatches all input on the system to the appropriate process's input queue, so a failed process does not block input to other processes. The Windows XP kernel also provides preemptive multitasking, which enables the user to terminate applications that have failed or are no longer needed.

The Win32 API also validates all objects before using them, to prevent crashes that could otherwise occur if an application tried to use an invalid or wrong handle. The Win32 API subsystem verifies the type of the object to which a handle points before using the object. The reference counts kept by the object manager prevent objects from being deleted while they are still being used and prevent their use after they have been deleted.

To achieve a high level of compatibility with Windows 95/98 systems, Windows XP allows users to specify that individual applications be run using a shim layer, which modifies the Win32 API to better approximate the behavior expected by old applications. For example, some applications expect to see a particular version of the system and fail on new versions.
Frequently, applications have latent bugs that become exposed due to changes in the implementation. For example, using memory after freeing it may cause corruption only if the order of memory reuse by the heap changes; or an application may make assumptions about which errors can be returned by a routine or about the number of valid bits in an address. Running an application with the Windows 95/98 shims enabled causes the system to provide behavior much closer to Windows 95/98, though with reduced performance and limited interoperability with other applications.

POSIX Subsystem

The POSIX subsystem is designed to run POSIX applications written to follow the POSIX standard, which is based on the UNIX model. POSIX applications can be started by the Win32 API subsystem or by another POSIX application. POSIX applications use the POSIX subsystem server PSXSS.EXE, the POSIX dynamic link library PSXDLL.DLL, and the POSIX console session manager POSIX.EXE. Although the POSIX standard does not specify printing, POSIX applications can use printers transparently via the Windows XP redirection mechanism. POSIX applications have access to any file system on the Windows XP system; the POSIX environment enforces UNIX-like permissions on directory trees. Due to scheduling issues, the POSIX system in Windows XP does not ship with the system but is available separately for professional desktop systems and servers. It provides a much higher level of compatibility with UNIX applications than previous versions of NT. Of the commonly available UNIX applications, most compile and run without change with the latest version.

Logon and Security Subsystems

Before a user can access objects on Windows XP, that user must be authenticated by the logon service, WINLOGON. WINLOGON is responsible for responding to the secure attention sequence (Control-Alt-Delete). The secure attention sequence is a required mechanism for keeping an application from acting as a Trojan horse.
Only WINLOGON can intercept this sequence in order to put up a logon screen, change passwords, and lock the workstation. To be authenticated, a user must have an account and provide the password for that account. Alternatively, a user logs on by using a smart card and personal identification number, subject to the security policies in effect for the domain.

The local security authority subsystem (LSASS) is the process that generates access tokens to represent users on the system. It calls an authentication package to perform authentication using information from the logon subsystem or network server. Typically, the authentication package simply looks up the account information in a local database and checks to see that the password is correct. The security subsystem then generates the access token for the user ID containing the appropriate privileges, quota limits, and group IDs. Whenever the user attempts to access an object in the system, such as by opening a handle to the object, the access token is passed to the security reference monitor, which checks privileges and quotas. The default authentication package for Windows XP domains is Kerberos. LSASS also has the responsibility for implementing security policy, such as requiring strong passwords.

Ans 5a) Data compression is the process of modifying, encoding, or converting the bit structure of data in such a way that it consumes less space on disk.

It enables reducing the storage size of one or more data instances or elements. Data compression is also known as source coding or bit-rate reduction.

Data compression enables sending a data object or file quickly over a network or the Internet and helps optimize physical storage resources.

There are two general types of compression algorithms:

1. Lossless compression

2. Lossy compression

Lossless Compression

Lossless compression compresses the data in such a way that when data is decompressed it is exactly the same as it was before compression i.e. there is no loss of data. A lossless compression is used to compress file data such as executable code, text files, and numeric data, because programs that process such file data cannot tolerate mistakes in the data.

Lossless compression will typically not compress a file as much as lossy compression techniques and may take more processing power to accomplish the compression.
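As an illustrative sketch, Python's zlib module (a lossless DEFLATE implementation) demonstrates the round trip — the decompressed bytes are bit-for-bit identical to the original:

```python
import zlib

# Repetitive data compresses well; the round trip restores it exactly,
# which is why executables and text files can tolerate lossless compression.
original = b"AAAABBBBCCCCDDDD" * 64
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), len(compressed))   # compressed size is much smaller
print(restored == original)             # True: no information lost
```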

Lossy Compression

Lossy compression is the one that does not promise that the data received is exactly the same as the data sent, i.e. some data may be lost.

This is because a lossy algorithm removes information that it cannot later restore.

Lossy algorithms are used to compress still images, and audio.

Lossy algorithms typically achieve much better compression ratios than the lossless algorithms.
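A toy sketch of the lossy idea: quantizing 8-bit samples down to 4 bits halves the storage, but the discarded detail can never be restored. The sample values here are made up for illustration:

```python
# Keep only the top 4 bits of each 8-bit sample (lossy quantization).
samples = [0, 17, 130, 200, 255]
encoded = [s >> 4 for s in samples]    # 4-bit values: half the bits
decoded = [q << 4 for q in encoded]    # approximate reconstruction

print(encoded)             # [0, 1, 8, 12, 15]
print(decoded)             # [0, 16, 128, 192, 240]: close, but not equal
print(decoded == samples)  # False: information was permanently lost
```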

b) PROCESS MANAGEMENT

2.1 Introduction A process can be thought of as a program in execution. A process will need certain resources, such as CPU time, memory, files, and I/O devices, to accomplish its task. These resources are allocated to the process either when it is created or while it is executing. A process is the unit of work in most systems. Such a system consists of a collection of processes: Operating-system processes execute system code, and user processes execute user code. All these processes may execute concurrently. Although traditionally a process contained only a single thread of control as it ran, most modern operating systems now support processes that have multiple threads. The operating system is responsible for the following activities in connection with process and thread management: the creation and deletion of both user and system processes; the scheduling of processes; and the provision of mechanisms for synchronization, communication, and deadlock handling for processes.

2.2 The Process Model In this model, all the runnable software on the computer, sometimes including the operating system, is organized into a number of sequential processes, or just processes for short. A process is just an instance of an executing program, including the current values of the program counter, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, of course, the real CPU switches back and forth from process to process, but to understand the system, it is much easier to think about a collection of processes running in (pseudo) parallel than to try to keep track of how the CPU switches from program to program. This rapid switching back and forth is called multiprogramming.

Figure 2-1. (a) Multiprogramming of four programs. (b) Conceptual model of four independent, sequential processes. (c) Only one program is active at once.

In Fig. 2-1 (a) we see a computer multiprogramming four programs in memory. In Fig. 2-1 (b) we see four processes, each with its own flow of control (i.e., its own logical program counter), and each one running independently of the other ones. Of course, there is only one physical program counter, so when each process runs, its logical program counter is loaded into the real program counter. When it is finished (for the time being), the physical program counter is saved in the process' stored logical program counter in memory. In Fig. 2-1 (c) we see that viewed over a long enough time interval, all the processes have made progress, but at any given instant only one process is actually running.

2.3 Process Creation Operating systems need some way to create processes. In very simple systems, or in systems designed for running only a single application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be needed be present when the system comes up. In general-purpose systems, however, some way is needed to create and terminate processes as needed during operation. We will now look at some of the issues.

There are four principal events that cause processes to be created: 1. System initialization. 2. Execution of a process creation system call by a running process. 3. A user request to create a new process. 4. Initiation of a batch job.

When an operating system is booted, typically several processes are created. Some of these are foreground processes, that is, processes that interact with (human) users and perform work for them. Others are background processes, which are not associated with particular users, but instead have some specific function. For example, one background process may be designed to accept incoming e-mail, sleeping most of the day but suddenly springing to life when incoming e-mail arrives. Another background process may be designed to accept incoming requests for Web pages hosted on that machine, waking up when a request arrives to service the request. Processes that stay in the background to handle some activity such as e-mail, Web pages, news, printing, and so on are called daemons. Large systems commonly have dozens of them. In UNIX, the ps program can be used to list the running processes. In Windows, the task manager can be used.

In addition to the processes created at boot time, new processes can be created afterward as well. Often a running process will issue system calls to create one or more new processes to help it do its job. Creating new processes is particularly useful when the work to be done can easily be formulated in terms of several related, but otherwise independent, interacting processes. For example, if a large amount of data is being fetched over a network for subsequent processing, it may be convenient to create one process to fetch the data and put them in a shared buffer while a second process removes the data items and processes them. On a multiprocessor, allowing each process to run on a different CPU may also make the job go faster.
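The fetch-and-process example above can be sketched with Python's multiprocessing module, using a queue as the shared buffer. The "fetched" items are simulated stand-ins for a real network transfer:

```python
from multiprocessing import Process, Queue

def fetcher(buf):
    # Simulates fetching data items over a network into the shared buffer.
    for item in range(5):
        buf.put(item)
    buf.put(None)  # sentinel: no more data

def worker(buf, results):
    # Removes items from the shared buffer and processes them.
    while True:
        item = buf.get()
        if item is None:
            break
        results.put(item * 2)

if __name__ == "__main__":
    buf, results = Queue(), Queue()
    p1 = Process(target=fetcher, args=(buf,))
    p2 = Process(target=worker, args=(buf, results))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print([results.get() for _ in range(5)])  # [0, 2, 4, 6, 8]
```

On a multiprocessor the two processes genuinely run on different CPUs; on a uniprocessor the OS multiprograms them as described above.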

In interactive systems, users can start a program by typing a command or (double) clicking an icon. Taking either of these actions starts a new process and runs the selected program in it. In command-based UNIX systems running X, the new process takes over the window in which it was started. In Windows, when a process is started it does not have a window, but it can create one (or more) and most do. In both systems, users may have multiple windows open at once, each running some process. Using the mouse, the user can select a window and interact with the process, for example, providing input when needed.

The last situation in which processes are created applies only to the batch systems found on large mainframes. Here users can submit batch jobs to the system (possibly remotely). When the operating system decides that it has the resources to run another job, it creates a new process and runs the next job from the input queue in it.

2.4 Process Termination After a process has been created, it starts running and does whatever its job is. However, nothing lasts forever, not even processes. Sooner or later the new process will terminate, usually due to one of the following conditions: 1. Normal exit (voluntary). 2. Error exit (voluntary). 3. Fatal error (involuntary). 4. Killed by another process (involuntary).

Most processes terminate because they have done their work. When a compiler has compiled the program given to it, the compiler executes a system call to tell the operating system that it is finished. This call is exit in UNIX and ExitProcess in Windows. Screen-oriented programs also support voluntary termination. Word processors, Internet browsers and similar programs always have an icon or menu item that the user can click to tell the process to remove any temporary files it has open and then terminate.
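A minimal Python sketch of voluntary termination: the child program (a stand-in for a real compiler) finishes its work and calls exit, and the parent observes the exit status:

```python
import subprocess
import sys

# The child finishes its work and voluntarily calls exit(0),
# analogous to exit in UNIX or ExitProcess in Windows.
child_program = "import sys; sys.exit(0)"
rc = subprocess.run([sys.executable, "-c", child_program]).returncode
print(rc)  # 0 conventionally signals normal, voluntary termination
```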

The second reason for termination is that the process discovers a fatal error. For example, if a user types the command cc foo.c to compile the program foo.c and no such file exists, the compiler simply exits. Screen-oriented interactive processes generally do not exit when given bad parameters. Instead they pop up a dialog box and ask the user to try again.

The third reason for termination is an error caused by the process, often due to a program bug. Examples include executing an illegal instruction, referencing nonexistent memory, or dividing by zero. In some systems (e.g., UNIX), a process can tell the operating system that it wishes to handle certain errors itself, in which case the process is signaled (interrupted) instead of terminated when one of the errors occurs.

The fourth reason a process might terminate is that the process executes a system call telling the operating system to kill some other process. In UNIX this call is kill. The corresponding Win32 function is TerminateProcess. In both cases, the killer must have the necessary authorization to do in the killee. In some systems, when a process terminates, either voluntarily or otherwise, all processes it created are immediately killed as well. Neither UNIX nor Windows works this way, however.
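On a POSIX system this can be sketched in Python: os.kill sends a termination signal to another process, and a negative return code in the parent indicates death by signal. (The sleeping child here is illustrative; the parent is authorized because it owns the child.)

```python
import os
import signal
import subprocess
import sys

# Start a child that would otherwise sleep for a minute.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

# Kill it, as kill() does in UNIX or TerminateProcess does in Win32.
os.kill(child.pid, signal.SIGTERM)
child.wait()

# On POSIX, a negative return code means the process was killed by a signal.
print(child.returncode)
```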

2.5 Process States and Transitions As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:

New: The process is being created.

Running: Instructions are being executed.

Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).

Ready: The process is waiting to be assigned to a processor.

Terminated: The process has finished execution.

Figure 2.2 Diagram of process state.
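The legal transitions among these states can be sketched as a small table; the transition set below is an illustration of the standard five-state model:

```python
# Each state maps to the set of states it may legally move to.
TRANSITIONS = {
    "new": {"ready"},                               # admitted by the OS
    "ready": {"running"},                           # dispatched by the scheduler
    "running": {"ready", "waiting", "terminated"},  # interrupt, I/O wait, or exit
    "waiting": {"ready"},                           # I/O completion or event arrival
    "terminated": set(),                            # no further transitions
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]

print(can_transition("running", "waiting"))  # True: process starts waiting for I/O
print(can_transition("waiting", "running"))  # False: must pass through ready first
```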

OR a) A video server is a computer-based device (also called a "host") dedicated to delivering video.

Unlike personal computers, which are multi-application devices, a video server is designed for one purpose: provisioning video, often for broadcasters. A professional-grade video server performs recording, storage, and playout of multiple video streams without any degradation of the video signal. Broadcast-quality video servers often store hundreds of hours of compressed audio and video (in different codecs), play out multiple synchronised simultaneous streams of video, and offer quality interfaces such as SDI for digital video and XLR for balanced analog audio, AES/EBU digital audio and also Time Code. A genlock input is usually provided as a means of synchronizing with the house reference clock, thereby avoiding the need for timebase correction or frame synchronizers.

Video servers usually offer some type of control interface allowing them to be driven by broadcast automation systems that incorporate sophisticated broadcast programming applications. Popular protocols include VDCP and the 9-Pin Protocol.

They can optionally allow direct to disk recording using the same codec that is used in various post-production video editing software packages to prevent any wasted time in transcoding.

In the TV broadcast industry, a server is a device used to store broadcast-quality images that allows several users to edit stories using the images it contains simultaneously.

The video server can be used in a number of contexts, some of which include:

 News: providing short news video clips as part of a news broadcast as seen on networks (like CNN and Fox News).
 Production: enhance live events with instant replays, slow motion and highlights (sport production) (see OB Vans)
 Instruction: delivering course material in video format.
 Public Access: delivering city specific information to residents over a cable system.
 Surveillance: deliver real-time video images of protected sites.
 Entertainment: deliver film trailers or music videos.

Features

Typically, a video server can do the following:

 Ingest of different sources: video cameras (multiple angles), satellite data feeds, disk drives and other video servers. This can be done in different codecs.
 Temporary or definitive storage of these video feeds.
 Maintain a clear structure of all stored media with appropriate metadata to allow fast search: name, remarks, rating, date, time code, etc.
 Video editing of the different clips
 Transfer those clips to other video servers or playout directly (via IP interface or SDI)

Generally, they have several bidirectional channels (record and ingest) for video and audio. Perfect synchronisation is necessary between those channels to manage the feeds.

Video Surveillance

In the surveillance context, an IP video server converts analog video signals into IP video streams. The IP video server can stream digitized video over IP networks in the same way that an IP Camera can. Because an IP Video server uses IP protocols, it can stream video over any network that IP can use, including via a modem for access over a phone or ISDN connection. With the use of a video server attached to an analog camera, the video from an existing surveillance system can be converted and networked into a new IP surveillance system.

In the video security industry a video server is a device to which one or more video sources can be attached. Video servers are used to give existing analog systems network connectivity. Video servers are essentially transmission/ telemetry / monitoring devices. Viewing is done using a web browser or in some cases supplied software. These products also allow the upload of images to the internet or direct viewing from the internet. In order to upload to the internet an account with an ISP (internet service provider) may be required.

Phone apps that send security video feed from security video servers directly to smartphones are another recent security video server application innovation. This allows users to view security video server feed from anywhere they can use their smartphone.

b) Microsoft Windows CE (now officially known as Windows Embedded Compact and previously also known as Windows Embedded CE, sometimes abbreviated WinCE, codenamed Pegasus) is an operating system developed by Microsoft for embedded systems. Windows CE is a distinct operating system and kernel, rather than a trimmed-down version of desktop Windows.[7] It is not to be confused with Windows Embedded Standard, which is an NT-based componentized version of desktop Microsoft Windows.

Microsoft licenses Windows CE to OEMs and device makers. The OEMs and device makers can modify and create their own user interfaces and experiences, with Windows CE providing the technical foundation to do so.

The current version of Windows Embedded Compact supports Intel x86 and compatibles and ARM processors with Board Support Packages (BSP) directly. The MIPS and SHx architectures have kernel support.

Features

Windows CE is optimized for devices that have minimal storage; a Windows CE kernel may run in under a megabyte of memory. Devices are often configured without disk storage, and may be configured as a "closed" system that does not allow for end-user extension (for instance, it can be burned into ROM). Windows CE conforms to the definition of a real-time operating system, with a deterministic interrupt latency. From Version 3 and onward, the system supports 256 priority levels and uses priority inheritance for dealing with priority inversion. The fundamental unit of execution is the thread. This helps to simplify the interface and improve execution time.

Microsoft has stated that the "CE" is not an intentional initialism, but many believe CE stands for "Consumer Electronics" or "Compact Edition". Microsoft says the letters instead imply a number of Windows CE design precepts, including "Compact, Connectable, Compatible, Companion, and Efficient." The first version, known during development under the codename "Pegasus", featured a Windows-like GUI and a number of Microsoft's popular applications, all trimmed down for the smaller storage, memory, and speed of the palmtops of the day.

Since then, Windows CE has evolved into a component-based, embedded, real-time operating system. It is no longer targeted solely at hand-held computers.[12] Many platforms have been based on the core Windows CE operating system, including Microsoft's AutoPC, Pocket PC 2000, Pocket PC 2002, Windows Mobile 2003, Windows Mobile 2003 SE, Windows Mobile 5.0, Windows Mobile 6, Smartphone 2002, Smartphone 2003, and many industrial devices and embedded systems. Windows CE even powered select games for the Dreamcast, was the operating system of the Gizmondo handheld, and can partially run on modified game consoles.

A distinctive feature of Windows CE compared to other Microsoft operating systems is that large parts of it are offered in source code form. First, source code was offered to several vendors, so they could adjust it to their hardware. Then products like Platform Builder (an integrated environment for Windows CE OS image creation and integration, or customized operating system designs based on CE) offered several components in source code form to the general public. However, a number of core components that do not need adaptation to specific hardware environments (other than the CPU family) are still distributed in binary only form.

Development tools

Visual Studio

Microsoft Visual Studio 2012 supports development for Windows Embedded Compact 2013.

Microsoft Visual Studio 2008 and earlier support projects for older releases of Windows CE / Windows Mobile, producing executable programs and platform images either as an emulator or attached by cable to an actual mobile device. A mobile device is not necessary to develop a CE program. The .NET Compact Framework supports a subset of the .NET Framework with projects in C# and VB.NET, but not Managed C++. "Managed" applications employing the .NET Compact Framework also require devices with significantly larger memories (8 MB or more) while unmanaged applications can still run successfully on smaller devices. In Visual Studio 2010, the Windows Phone Developer Tools are used as an extension, allowing apps to be designed and tested within Visual Studio.

Free Pascal and Lazarus

Free Pascal introduced the Windows CE port in Version 2.2.0, targeting ARM and x86 architectures. Later, the Windows CE header files were translated for use with Lazarus, a rapid application development (RAD) software package based on Free Pascal. Windows CE applications are designed and coded in the Lazarus integrated development environment (IDE) and compiled with an appropriate cross-compiler.

Platform Builder

This is used for building the platform (BSP + Kernel), device drivers (shared source or custom made) and also the application. This is a one-step environment to get the system up and running. One can also use Platform Builder to export an SDK for the target (SuperH, x86, MIPS, ARM etc.) to be used with another associated tool set named below.

Others

Embedded Visual C++ (eVC) — a tool for development of embedded applications for Windows CE. It can be used standalone using the SDK exported from Platform Builder, or using Platform Builder's Platform Manager connectivity setup.

CodeGear Delphi Prism — runs in Visual Studio, also supports the .NET Compact Framework and thus can be used to develop mobile applications. It employs the Oxygene compiler created by RemObjects Software, which targets .NET, the .NET Compact Framework, and Mono. Its command-line compiler is available free of charge.

Basic4ppc — a programming language similar to Visual Basic — targets the .NET Compact Framework and supports Windows CE and Windows Mobile devices.

GLBasic — a very easy to learn and use BASIC dialect that compiles for many platforms, including Windows CE and Windows Mobile. It can be extended by writing inline C/C++ code.

LabVIEW — a graphical programming language, supporting many platforms, including Windows CE.

AutoHotkey — a port of the open source macro-creation and automation software utility available for Windows CE. It allows the construction of macros and simple GUI applications developed by systems analyst Jonathan Maxian Timkang.

Timeline of Windows CE Development

Often Windows CE, Windows Mobile, and Pocket PC are used interchangeably, in part due to their common origin. This practice is not entirely accurate. Windows CE is a modular/componentized operating system that serves as the foundation of several classes of devices. Some of these modules provide subsets of other components' features (e.g. varying levels of windowing support; DCOM vs COM), others are separate (Bitmap or TrueType font support), and others add additional features to another component. One can buy a kit (the Platform Builder) which contains all these components and the tools with which to develop a custom platform. Applications such as Excel Mobile/Pocket Excel are not part of this kit. The older Handheld PC version of Pocket Word and several other older applications are included as samples, however.

Windows Mobile is best described as a subset of platforms based on a Windows CE underpinning. Currently, Pocket PC (now called Windows Mobile Classic), SmartPhone (Windows Mobile Standard), and Pocket PC Phone Edition (Windows Mobile Professional) are the three main platforms under the Windows Mobile umbrella. Each platform uses different components of Windows CE, plus supplemental features and applications suited for their respective devices.

Pocket PC and Windows Mobile are Microsoft-defined custom platforms for general PDA use, consisting of a Microsoft-defined set of minimum profiles (Professional Edition, Premium Edition) of software and hardware that is supported. The rules for manufacturing a Pocket PC device are stricter than those for producing a custom Windows CE-based platform. The defining characteristics of the Pocket PC are the touchscreen as the primary human interface device and its extremely portable size.

CE v3.0 is the basis for Pocket PC 2002. A successor to CE v3.0 is CE.net. "PocketPC [is] a separate layer of code on top of the core Windows CE OS... Pocket PC is based on Windows CE, but it's a different offering." And licensees of Pocket PC are forbidden to modify the WinCE part.

The SmartPhone platform is a feature-rich OS and interface for cellular phone handsets. SmartPhone offers productivity features to business users, such as email, and multimedia abilities for consumers. The SmartPhone interface relies heavily on joystick navigation and PhonePad input. Devices running SmartPhone do not include a touchscreen interface. SmartPhone devices generally resemble other cellular handset form factors, whereas most Phone Edition devices use a PDA form factor with a larger display.

Versions

Released November 18, 1996. Codename "Pegasus" and "Alder".

1.0
 Devices named "handheld PC" (HPC)
 4 MB ROM minimum
 2 MB RAM minimum

1.01 version (1.0a) — added Japanese language support.

 Unsupported as of December 31, 2001.

Released September 29, 1997. Codename "Birch".

 Devices named "Palm-sized PC"
 Real-time deterministic task scheduling
 Architectures: ARM, MIPS, PowerPC, StrongARM, SuperH and x86
 32-bit color screens
 SSL 2.0 and SSL 3.0
 Unsupported as of September 30, 2002 for Windows CE 2.11 and 2.0; September 30, 2005 for Windows CE 2.12.

2.11 version (Palm-Size PC 1.1) — changed screen resolution to QVGA, added handwriting recognition.

2.11 version (Palm-Size PC 1.2) — based on Windows CE H/PC 2.11 kernel, removed Pocket Office.

Handheld PC 2.11 version (Handheld PC Professional) — added small versions of , improved MS Office document formats support.

Released June 15, 2000. Codename "Cedar" and "Galileo".

3.0
 Major recode that made CE hard real time down to the microsecond level
 Base for the Pocket PC 2000, Handheld PC 2000, Pocket PC 2002 and Smartphone 2002
 Priority levels were increased from 8 to 256
 Object store was increased from 65,536 to 4.19 million allowed objects
 Restricted access to critical APIs or restricting write access to parts of the registry
 Unsupported as of October 9, 2007.

Released January 7, 2002. Codename "Talisker/Jameson/McKendric".

4.x
 Integrated with .NET Compact Framework
 Driver structure changed greatly, new features added
 Base for "Pocket PC 2003"[15]
  and support[15][22]
 HID devices and standardized keyboards support
 TLS (SSL 3.1), IPsec L2TP VPN, or Kerberos[15]
 Pocket Office was reduced to Wordpad
 Separation into two editions — Core (only shell) and Professional (with Microsoft Accessories)
 In addition to the older PocketIE browser, Internet Explorer Mobile was available with near 100% page compatibility to its IE 5.5 desktop cousin.
 With Windows CE.net 4.2, a new shell was provided with Internet Explorer integration
 Unsupported as of July 10, 2012 for Windows CE 4.0; January 8, 2013 for Windows CE 4.1; July 9, 2013 for Windows CE 4.2.

Released in August 2004. Adds many new features. Codename "Macallan".

5.x
 Added automatic reporting for manufacturers
 Direct3D Mobile, a COM-based version of Windows XP's DirectX multimedia API
 DirectDraw for 2D graphics and DirectShow for camera and video digitisation support
 Remote Desktop Protocol (RDP) support
 In this version Wordpad has been eliminated too
 The "Pro" version contains the Internet Explorer browser and Windows Media Player 9
 Supported until October 14, 2014.

Released in September 2006. Codename "Yamazaki".

6.0
 Process address space is increased from 32 MB to 2 GB
 Number of processes has been increased from 32 to 32,768
 User mode and kernel mode device drivers are possible
 512 MB physically managed memory
 Device.exe, filesys.exe, GWES.exe have been moved to kernel mode
 Cellcore
 SetKMode and set process permissions no longer possible
 System call performance improved
 Supported until April 10, 2018.

Released in March 2011.

7.0
 Multi-core CPU support (SMP)
 Wi-Fi Positioning System
 Bluetooth 3.0 + HS support
 DLNA (Digital Living Network Alliance)
 DRM technology
 Media Transfer Protocol
 Windows Phone 7 IE with Flash 10.1 support
 NDIS 6.1 support
 UX C++ XAML API using technologies like Windows Presentation Foundation and Silverlight for attractive and functional user interfaces
 Modernized graphics based on OpenGL ES 2.0
 Advanced touch and gesture input
 Kernel support for 3 GB physical RAM and support for ARMv7 assembly
 Supported until April 13, 2021.

2013
 Released in June 2013
 DHCPv6 client with stateful/stateless address configuration.
 L2TP/IPsec over IPv6 for VPN connectivity.
 Snapshot boot.
 Improved XAML data binding and Expression Blend support.
 OOM model improvements from 7.
 HTML help viewer added.
 Supported until October 10, 2023.

Java Card refers to a software technology that allows Java-based applications (applets) to be run securely on smart cards and similar small devices. Java Card is the tiniest of Java platforms targeted for embedded devices. Java Card gives the user the ability to program the devices and make them application specific. It is widely used in SIM cards (used in GSM mobile phones) and ATM cards.[citation needed] The first Java Card was introduced in 1996 by Schlumberger's card division which later merged with Gemplus to form Gemalto. Java Card products are based on the Java Card Platform specifications developed by Sun Microsystems (later a subsidiary of Oracle Corporation). Many Java card products also rely on the GlobalPlatform specifications for the secure management of applications on the card (download, installation, personalization, deletion).

The main design goals of the Java Card technology are portability and security.

Portability

Java Card aims at defining a standard smart card computing environment allowing the same Java Card applet to run on different smart cards, much like a Java applet runs on different computers. As in Java, this is accomplished using the combination of a virtual machine (the Java Card Virtual Machine) and a well-defined runtime library, which largely abstracts the applet from differences between smart cards. Portability remains mitigated by issues of memory size, performance, and runtime support (e.g. for communication protocols or cryptographic algorithms).

Security

Java Card technology was originally developed for the purpose of securing sensitive information stored on smart cards. Security is determined by various aspects of this technology:

Data encapsulation

Data is stored within the application, and Java Card applications are executed in an isolated environment (the Java Card VM), separate from the underlying operating system and hardware.

Applet Firewall

Unlike other Java VMs, a Java Card VM usually manages several applications, each one controlling sensitive data. Different applications are therefore separated from each other by an applet firewall, which restricts and checks access by one applet to the data elements of another.

Cryptography

Commonly used symmetric key algorithms like DES, Triple DES, AES, and asymmetric key algorithms such as RSA, elliptic curve cryptography are supported as well as other cryptographic services like signing, key generation and key exchange.
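As an illustration only, Python's standard library can stand in for the card's signing and key-generation services. HMAC-SHA256 here is a substitute for the card's DES/AES-based MACs (those ciphers are not in the Python stdlib); the message is made up:

```python
import hashlib
import hmac
import secrets

# Key generation: a fresh 128-bit symmetric key.
key = secrets.token_bytes(16)

# Signing (a MAC): the card would authenticate a command this way.
msg = b"debit 10 EUR"
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Verification in constant time, as the terminal (or card) would do.
expected = hmac.new(key, msg, hashlib.sha256).digest()
print(hmac.compare_digest(tag, expected))  # True
```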

Applet

The applet is a state machine which processes only incoming command requests and responds by sending data or response status words back to the interface device.
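A hypothetical sketch of this command/response state machine in Python. The command names, PIN check, and data are illustrative, not a real Java Card API; the status words follow the ISO 7816 convention (0x9000 success, 0x6982 security status not satisfied):

```python
SW_OK, SW_DENIED = 0x9000, 0x6982

class ToyApplet:
    """A state machine that only reacts to incoming command requests."""

    def __init__(self, pin):
        self.pin = pin
        self.verified = False  # applet state, updated by commands

    def process(self, command, data=None):
        # Each command returns (status word, optional response data).
        if command == "VERIFY":
            self.verified = (data == self.pin)
            return (SW_OK if self.verified else SW_DENIED, None)
        if command == "READ":
            if not self.verified:
                return (SW_DENIED, None)  # data access gated on prior VERIFY
            return (SW_OK, b"secret")
        return (SW_DENIED, None)          # unknown command

applet = ToyApplet(pin="1234")
print(applet.process("READ"))            # denied: not yet verified
print(applet.process("VERIFY", "1234"))  # success: state changes to verified
print(applet.process("READ"))            # success: data is released
```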

Java Card 3.0

The version 3.0 of the JavaCard specification (draft released in March 2008) is separated in two editions: the Classic Edition and the Connected Edition.

 The Classic Edition is an evolution of the Java Card Platform Version 2.2.2 and supports traditional card applets on more resource-constrained devices.
 The Connected Edition provides a new virtual machine and an enhanced execution environment with network-oriented features. Applications can be developed as classic card applets requested by APDU commands or as servlets using HTTP to support web-based schemes of communication (HTML, REST, SOAP ...) with the card. The runtime supports volatile objects (garbage collection), multithreading, inter-application communication facilities, persistence, transactions, and card management facilities.

c) Multimedia file systems

Multimedia Applications and Systems are getting more and more involved in our everyday lives. Their main purpose is to deal with various media types like pictures, video data, audio data and text. Video and audio belong to continuous media data. Pictures and text belong to discrete media data. When most people refer to multimedia, they generally mean the combination of two or more continuous media. In practice, the two media are normally audio and video, that is, sound plus moving pictures.

Why do we need multimedia file systems?

The challenge for multimedia systems is media types that need to be played continuously. That means that the data to be played has to arrive in real time (or at least by a certain strict deadline). Continuous media data differs from discrete data, but not only in its real time characteristics. A challenge for these systems is also the synchronization of pictures with the corresponding sound. Since these can be two different data streams, it is important to synchronize them before showing them on the monitor.

Another difference from discrete data is the file size. Video and audio need much more storage space than text data, and the multimedia file system has to organize this data on disk in a way that efficiently uses the limited storage.

d) Symbian is an open-source (EPL) mobile operating system (OS) and computing platform designed for smartphones and currently maintained by Accenture. Symbian was originally developed by Symbian Ltd. as a descendant of Psion's EPOC and runs exclusively on ARM processors, although an unreleased x86 port existed. The current form of Symbian is an open-source platform developed by the Symbian Foundation in 2009, as the successor of the original Symbian OS. Symbian was used by many major mobile phone brands, like Samsung, Motorola, Sony Ericsson, and above all by Nokia. It was the most popular smartphone OS on a worldwide average until the end of 2010, when it was overtaken by Android.

Symbian rose to fame from its use with the S60 platform built by Nokia, first released in 2002 and powering most Nokia smartphones. UIQ, another Symbian platform, ran in parallel, but these two platforms were not compatible with each other. Symbian^3 was officially released in Q4 2010 as the successor of S60 and UIQ, first used in the Nokia N8, to provide a single platform for the OS. In May 2011 an update, Symbian Anna, was officially announced, followed by Nokia Belle (previously Symbian Belle) in August 2011.

On 11 February 2011, Nokia announced that it would use Microsoft's Windows Phone OS as its primary smartphone platform, and Symbian would be its franchise platform[clarification needed], dropping Symbian as its main smartphone OS of choice. On 22 June 2011 Nokia made an agreement with Accenture for an outsourcing program. Accenture will provide Symbian-based software development and support services to Nokia through 2016; about 2,800 Nokia employees became Accenture employees as of October 2011. The transfer was completed on 30 September 2011. The Nokia 808 PureView is officially the last Symbian smartphone.

Symbian features pre-emptive multitasking and memory protection, like other operating systems (especially those created for use on desktop computers). EPOC's approach to multitasking was inspired by VMS and is based on asynchronous server-based events.

Symbian OS was created with three systems design principles in mind:

1. the integrity and security of user data is paramount
2. user time must not be wasted
3. all resources are scarce

To best follow these principles, Symbian uses a microkernel, has a request-and-callback approach to services, and maintains separation between user interface and engine. The OS is optimised for low-power battery-based devices and for ROM-based systems (e.g. features like XIP and re-entrancy in shared libraries). Applications, and the OS itself, follow an object-oriented design: Model-View-Controller (MVC).

Later OS iterations diluted this approach in response to market demands, notably with the introduction of a real-time kernel and a platform security model in versions 8 and 9.

There is a strong emphasis on conserving resources, exemplified by Symbian-specific programming idioms like descriptors and a cleanup stack. Similar methods exist to conserve storage space. Further, all Symbian programming is event-based, and the central processing unit (CPU) is switched into a low-power mode when applications are not directly dealing with an event. This is done via a programming idiom called active objects. Similarly, the Symbian approach to threads and processes is driven by reducing overheads.

Operating system

The Symbian system model contains the following layers, from top to bottom:

 UI Framework Layer
 Application Services Layer
  o Java ME
 OS Services Layer
  o generic OS services
  o communications services
  o multimedia and graphics services
  o connectivity services
 Base Services Layer
 Kernel Services & Hardware Interface Layer

The Base Services Layer is the lowest level reachable by user-side operations; it includes the File Server and User Library, a Plug-In Framework which manages all plug-ins, Store, Central Repository, DBMS and cryptographic services. It also includes the Text Window Server and the Text Shell: the two basic services from which a completely functional port can be created without the need for any higher layer services.

Ans d) Symbian has a microkernel architecture, which means that only the minimum necessary is within the kernel, to maximise robustness, availability and responsiveness. It contains a scheduler, memory management and device drivers, but other services like networking, telephony and filesystem support are placed in the OS Services Layer or the Base Services Layer. The inclusion of device drivers means the kernel is not a true microkernel. The EKA2 real-time kernel, which has been termed a nanokernel, contains only the most basic primitives and requires an extended kernel to implement any other abstractions.

Symbian is designed to emphasise compatibility with other devices, especially file systems. Early development of EPOC led to adopting FAT as the internal file system, and this remains, but an object-oriented persistence model was placed over the underlying FAT to provide a POSIX-style interface and a streaming model. The internal data formats rely on using the same APIs that create the data to run all file manipulations. This has resulted in data-dependence and associated difficulties with changes and data migration.

There is a large networking and communication subsystem, which has three main servers called: ETEL (EPOC telephony), ESOCK (EPOC sockets) and C32 (responsible for serial communication). Each of these has a plug-in scheme. For example, ESOCK allows different ".PRT" protocol modules to implement various networking protocol schemes. The subsystem also contains code that supports short-range communication links, such as Bluetooth, IrDA and USB.

There is also a large volume of user interface (UI) code. Only the base classes and substructure were contained in Symbian OS, while most of the actual user interfaces were maintained by third parties. This is no longer the case. The three major UIs — S60, UIQ and MOAP — were contributed to Symbian in 2009. Symbian also contains graphics, text layout and font rendering libraries.

All native Symbian C++ applications are built up from three framework classes defined by the application architecture: an application class, a document class and an application user interface class. These classes create the fundamental application behaviour. The remaining needed functions, the application view, data model and data interface, are created independently and interact solely through their APIs with the other classes.

Many other things do not yet fit into this model — for example, SyncML, Java ME providing another set of APIs on top of most of the OS and multimedia. Many of these are frameworks, and vendors are expected to supply plug-ins to these frameworks from third parties (for example, Helix Player for multimedia codecs). This has the advantage that the APIs to such areas of functionality are the same on many phone models, and that vendors get a lot of flexibility. But it means that phone vendors needed to do a great deal of integration work to make a Symbian OS phone.

Symbian includes a reference user-interface called "TechView." It provides a basis for starting customisation and is the environment in which much Symbian test and example code runs. It is very similar to the user interface from the personal organiser and is not used for any production phone user interface.

Version history

EPOC16, originally simply named EPOC, was the operating system developed by Psion in the late 1980s and early 1990s for Psion's "SIBO" (SIxteen Bit Organisers) devices. All EPOC16 devices featured an 8086-family processor and a 16-bit architecture. EPOC16 was a single-user pre-emptive multitasking operating system, written in Intel 8086 assembly language and C and designed to be delivered in ROM. It supported a simple programming language called Open Programming Language (OPL) and an integrated development environment (IDE) called OVAL. SIBO devices included the MC200, MC400, Series 3 (1991–98), Series 3a, Series 3c, Series 3mx, Siena, Workabout and Workabout mx. The MC400 and MC200, the first EPOC16 devices, shipped in 1989.

EPOC16 featured a primarily 1-bit-per-pixel, keyboard-operated graphical interface; the hardware for which it was designed originally had pointer input in the form of a digitiser panel.

In the late 1990s, the operating system was referred to as EPOC16 to distinguish it from Psion's then-new EPOC32 OS. The first version of EPOC32, Release 1, appeared on the Psion Series 5 ROM v1.0 in 1997. Later, ROM v1.1 featured Release 3. (Release 2 was never publicly available.) These were followed by the Psion Series 5mx, Revo / Revo plus, Psion Series 7 / netBook and netPad (all of which featured Release 5).

The EPOC32 operating system, at the time simply referred to as EPOC, was later renamed Symbian OS. Adding to the confusion with names, before the change to Symbian, EPOC16 was often referred to as SIBO to distinguish it from the "new" EPOC. Despite the similarity of the names, EPOC32 and EPOC16 were completely different operating systems, EPOC32 being written in C++ from a new codebase with development beginning during the mid-1990s.

EPOC32 (releases 1 to 5): EPOC32 was a pre-emptive multitasking, single-user operating system with memory protection, which encourages the application developer to separate their program into an engine and an interface. The Psion line of PDAs came with a graphical user interface called EIKON, specifically tailored for handheld machines with a keyboard (thus looking perhaps more similar to desktop GUIs than palmtop GUIs). However, one of EPOC's characteristics is the ease with which new GUIs can be developed based on a core set of GUI classes, a feature which was widely explored from the Ericsson R380 onwards.

EPOC32 was originally developed for the ARM family of processors, including the ARM7, ARM9, StrongARM and Intel's XScale, but can be compiled towards target devices using several other processor types.

During the development of EPOC32, Psion planned to license EPOC to third-party device manufacturers and spun off its software division as Psion Software. One of the first licensees was the short-lived Geofox, which halted production with less than 1,000 units sold. Ericsson marketed a rebranded Psion Series 5mx called the MC218, and later created the EPOC Release 5.1 based smartphone, the R380. Oregon Scientific also released a budget EPOC device, the Osaris (notable as the only EPOC device to ship with Release 4).

Work started on the 32-bit version in late 1994.

The Series 5 device, released in June 1997, used the first iterations of the EPOC32 OS, codenamed "Protea", and the "Eikon" graphical user interface.

The Oregon Scientific Osaris was the only PDA to use ER4.

The Psion Series 5mx, Psion Series 7, Psion Revo, Diamond Mako, Psion netBook and Ericsson MC218 were released in 1999 using ER5. A phone project, the Philips Illium/Accent, was announced at CeBIT but did not achieve a commercial release. This release has been retrospectively dubbed Symbian OS 5.

The first phone using ER5u, the Ericsson R380, was released in November 2000. It was not an 'open' phone: software could not be installed. Notably, a number of never-released Psion prototypes for next-generation PDAs, including a Bluetooth Revo successor codenamed "Conan", used ER5u. The 'u' in the name refers to the fact that it supported Unicode.

In June 1998, Psion Software became Symbian Ltd., a major joint venture between Psion and phone manufacturers Ericsson, Motorola, and Nokia. As of Release 6, EPOC became known simply as Symbian OS, envisioned as the base for a new range of smartphones. This release is sometimes called ER6. Psion transferred 130 key staff to the new company and retained a 31% shareholding in the spin-off.

Symbian OS 6.0 and 6.1: The first 'open' Symbian OS phone, the Nokia 9210 Communicator, was released in June 2001. Bluetooth support was added. Almost 500,000 Symbian phones were shipped in 2001, rising to 2.1 million the following year. Development of different UIs was made generic with a "reference design strategy" for either 'smartphone' or 'communicator' devices, subdivided further into keyboard- or tablet-based designs. Two reference UIs (DFRDs, or Device Family Reference Designs) were shipped: Quartz and Crystal. The former was merged with Ericsson's 'Ronneby' design and became the basis for the UIQ interface; the latter reached the market as the Nokia Series 80 UI. Later DFRDs were Sapphire, Ruby, and Emerald. Only Sapphire came to market, evolving into the Pearl DFRD and finally the Nokia Series 60 UI, a keypad-based 'square' UI for the first true smartphones. The first of these was the Nokia 7650 smartphone (featuring Symbian OS 6.1), which was also the first with a built-in camera, with VGA (0.3 Mpx = 640×480) resolution. Other notable S60 Symbian 6.1 devices are the short-lived Sendo X and the Siemens SX1, the first and last Symbian phone from Siemens.

Despite these efforts to be generic, the UI was clearly split between competing companies: Crystal or Sapphire was Nokia, Quartz was Ericsson. DFRD was abandoned by Symbian in late 2002, as part of an active retreat from UI development in favour of 'headless' delivery. Pearl was given to Nokia, Quartz development was spun off as UIQ Technology AB, and work with Japanese firms was quickly folded into the MOAP standard.

Symbian OS 7.0 and 7.0s: First shipped in 2003. This is an important Symbian release which appeared with all contemporary user interfaces, including UIQ (Sony Ericsson P800, P900, P910, Motorola A925, A1000), Series 80 (Nokia 9300, 9500), Series 90 (Nokia 7710) and Series 60 (Nokia 3230, 6260, 6600, 6670, 7610), as well as several FOMA phones in Japan. It also added EDGE support and IPv6. Java support was changed from pJava and JavaPhone to one based on the Java ME standard.

One million Symbian phones were shipped in Q1 2003, with the rate increasing to one million a month by the end of 2003.

Symbian OS 7.0s was a version of 7.0 specially adapted to have greater backward compatibility with Symbian OS 6.x, partly for compatibility between the Communicator 9500 and its predecessor, the Communicator 9210.

In 2004, Psion sold its stake in Symbian. The same year, the first worm for mobile phones using Symbian OS, Cabir, was developed; it used Bluetooth to spread itself to nearby phones. See Cabir and Symbian OS threats.

Symbian OS 8.0: First shipped in 2004, one of its advantages would have been a choice of two different kernels (EKA1 or EKA2). However, the EKA2 kernel version did not ship until Symbian OS 8.1b. The kernels behave more or less identically from the user side, but are internally very different. EKA1 was chosen by some manufacturers to maintain compatibility with old device drivers, while EKA2 was a real-time kernel. 8.0b was deproductised in 2003.

Also included were new APIs to support CDMA, two-way data streaming, DVB-H, and OpenGL ES with vector graphics and direct screen access.

Symbian OS 8.1: An improved version of 8.0, available in 8.1a and 8.1b versions, with EKA1 and EKA2 kernels respectively. The 8.1b version, with EKA2's single-chip phone support but no additional security layer, was popular among Japanese phone companies desiring the real-time support but not allowing open application installation.

The first and perhaps the most famous smartphone featuring Symbian OS 8.1a was the Nokia N90 in 2005, Nokia's first in the Nseries.

Symbian OS 9.0: Used for internal Symbian purposes only. It was de-productised in 2004. 9.0 marked the end of the road for EKA1; 8.1a is the final EKA1 version of Symbian OS. Symbian OS has generally maintained reasonable binary code compatibility. In theory the OS was BC from ER1 to ER5, then from 6.0 to 8.1b. Substantial changes were needed for 9.0, related to tools and security, but this should be a one-off event. The move from requiring ARMv4 to requiring ARMv5 did not break backwards compatibility.

Symbian OS 9.1: Released early 2005. It includes many new security-related features, including a platform security module facilitating mandatory code signing. The new ARM EABI binary model means developers need to retool, and the security changes mean they may have to recode. S60 platform 3rd Edition phones have Symbian OS 9.1. Sony Ericsson shipped the M600 and P990 based on Symbian OS 9.1. The earlier versions had a defect where the phone hangs temporarily after the owner sent a large number of SMSes; on 13 September 2006, Nokia released a small program to fix this defect. Support for Bluetooth 2.0 was also added. Symbian 9.1 introduced capabilities and a Platform Security framework. To access certain APIs, developers have to sign their application with a digital signature. Basic capabilities are user-grantable and developers can self-sign them, while more advanced capabilities require certification and signing via the Symbian Signed program, which uses independent 'test houses' and phone manufacturers for approval. For example, file writing is a user-grantable capability, while access to multimedia device drivers requires phone manufacturer approval. A TC TrustCenter ACS Publisher ID certificate is required by the developer for signing applications.

Symbian OS 9.2: Released Q1 2006. Support for OMA Device Management 1.2 (was 1.1.2). Vietnamese language support. S60 3rd Edition Feature Pack 1 phones have Symbian OS 9.2.

Nokia phones with Symbian OS 9.2 include the Nokia E71, Nokia E90, Nokia N81 and Nokia 5700.

Symbian OS 9.3: Released on 12 July 2006. Upgrades include improved memory management and native support for Wi-Fi 802.11 and HSDPA. The Nokia E72, Nokia 5730 XpressMusic, Nokia N79, Nokia E52, Nokia E75, Nokia 5320 XpressMusic, Sony Ericsson P1 and others feature Symbian OS 9.3.

Symbian OS 9.4: Announced in March 2007. Provides the concept of demand paging, which is available from v9.3 onwards. Applications should launch up to 75% faster. Additionally, SQL support is provided by SQLite. Ships with the Samsung i8910 Omnia HD, Nokia N97 mini, Nokia 5800 XpressMusic, Nokia 5530 XpressMusic, Nokia 5228, Nokia 5233, Nokia 5235, Nokia C6-00, Nokia X6, Sony Ericsson Satio, Sony Ericsson Vivaz, Sony Ericsson Vivaz Pro and Micromax X265.

Used as the basis for Symbian^1, the first Symbian platform release, it is also better known as S60 5th Edition, as that is the bundled interface for the OS.

Symbian^2: A version of Symbian used only by Japanese manufacturers, which started selling in the Japanese market in May 2010. This version was not used by Nokia.

Symbian^3 (Symbian OS 9.5): An improvement over the previous S60 5th Edition, featuring single-touch menus in the user interface as well as a new Symbian OS kernel with hardware-accelerated graphics. Further improvements came in the first half of 2011, including a portrait qwerty keyboard, a new browser and split-screen text input.

Symbian Anna: Nokia and Symbian announced that updates to the Symbian^3 interface would be delivered gradually, as they became available. Symbian^4, the previously planned major release, was discontinued, and some of its intended features were incorporated into Symbian^3 in successive releases, starting with Symbian Anna.

Nokia Belle (Symbian OS 10.1): In the summer of 2011, videos showing an early leaked version of Symbian Belle (the original name of Nokia Belle) running on a Nokia N8 were published on YouTube. On 24 August 2011, Nokia announced it officially for three new smartphones: the Nokia 600 (later replaced by the Nokia 603), Nokia 700, and Nokia 701.

Nokia officially renamed Symbian Belle to Nokia Belle in a company blog post.

Nokia Belle adds to the Anna improvements with a pull-down status/notification bar, deeper near-field communication integration, free-form re-sizable homescreen widgets, and six homescreens instead of the previous three. As of 7 February 2012, the Nokia Belle update is available for most phone models through Nokia Suite, coming later to Australia. Users can check availability on the Nokia homepage.

On 1 March 2012, Nokia announced a Feature Pack 1 update for Nokia Belle, which would be available as an update for the Nokia 603, 700 and 701 (excluding others), and natively on the Nokia 808 PureView.

The latest software release for Nokia 1st generation Symbian Belle smartphones (Nokia N8, C7, C6-01, Oro, 500, X7, E7, E6) is Nokia Belle Refresh (111.040.1511).

In October 2012, the Nokia Belle Feature Pack 2, widely considered the last major update for Symbian, was released for Nokia 603, 700, 701, and 808 PureView.