Unit 1 (Chapters 1 & 2)

Key Concepts and Topics:

Central Processing Unit | Multi-programming | Data fetch cycle

The CPU is the part of a computer system that is commonly referred to as the "brains" of a computer. The CPU is also known as the processor or microprocessor and is responsible for executing a sequence of stored instructions called a program.

Multitasking has the same meaning as multiprogramming but in a more general sense, as it refers to having multiple programs, processes, tasks, or threads running at the same time. The term is used in modern operating systems when multiple tasks share a common processing resource (e.g., CPU and memory).

An instruction cycle (sometimes called a fetch–decode–execute cycle) is the basic operational process of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction dictates, and carries out those actions.
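The cycle above can be sketched as a loop over a toy machine. The instruction set (LOAD/ADD/HALT) and the single accumulator register are invented for illustration; real CPUs decode binary opcodes and have many registers.

```python
# Minimal sketch of a fetch-decode-execute loop for a toy machine.
# LOAD/ADD/HALT are hypothetical opcodes, not any real instruction set.

def run(program):
    """Execute a list of (opcode, operand) pairs; return the accumulator."""
    pc = 0      # program counter: address of the next instruction
    acc = 0     # accumulator register
    while True:
        opcode, operand = program[pc]   # fetch the instruction at pc
        pc += 1                         # advance to the next instruction
        if opcode == "LOAD":            # decode, then execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            return acc

result = run([("LOAD", 5), ("ADD", 3), ("HALT", None)])
print(result)  # 8
```

Each pass of the loop is one instruction cycle: fetch at the program counter, advance the counter, decode the opcode, and execute it.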

Memory and RAM, DRAM | Job pool and Job scheduling | File

DRAM is a type of memory that is typically used for data or program code that a computer processor needs to function. DRAM is a common type of RAM used in personal computers (PCs), workstations and servers. Random access allows the PC processor to access any part of the memory directly rather than having to proceed sequentially from a starting place. RAM is located close to a computer’s processor and enables faster access to data than storage media such as hard disk drives and solid-state drives.

The concept of a "job pool" comes from batch processing systems, where jobs are queued to be executed when resources are available. The job pool contains both jobs that are currently executing and jobs that have been scheduled but are not yet executing; when a job is executing, it is fully present in memory. Job scheduling is the process of allocating system resources to many different tasks by an operating system (OS). The system handles prioritized job queues that are awaiting CPU time, and it must determine which job to take from which queue and how much time to allocate to it.
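A minimal sketch of prioritized job scheduling, assuming jobs carry a numeric priority (lower number = higher priority); time slices and preemption are omitted, and the class and job names are illustrative only:

```python
import heapq

class Scheduler:
    """Dispatch jobs from prioritized queues: lowest priority number first,
    FIFO within the same priority (the seq counter breaks ties)."""

    def __init__(self):
        self._queue = []
        self._seq = 0

    def submit(self, priority, name):
        heapq.heappush(self._queue, (priority, self._seq, name))
        self._seq += 1

    def dispatch(self):
        priority, _, name = heapq.heappop(self._queue)
        return name

s = Scheduler()
s.submit(2, "backup")
s.submit(1, "interactive-shell")
s.submit(2, "report")
order = [s.dispatch() for _ in range(3)]
print(order)  # ['interactive-shell', 'backup', 'report']
```

The heap stands in for the prioritized job queues mentioned above; a real scheduler would also decide how much CPU time each dispatched job receives.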

A file is a collection of data or information that has a name, called the filename. Almost all information stored in a computer must be in a file. There are many different types of files: data files, text files, program files, directory files, and so on. Different types of files store different types of information; for example, program files store programs, whereas text files store text.

Firmware, ROM, EEPROM | Time sharing or multitasking | Mass storage

Firmware is programming that's written to a hardware device's nonvolatile memory ROM. Hardware makers use embedded firmware to control the functions of various hardware devices and systems, much like a computer's operating system (OS) controls the function of software applications. EEPROM (electrically erasable programmable read-only memory) is user-modifiable read-only memory (ROM) that can be erased and reprogrammed (written to) repeatedly through the application of higher than normal electrical voltage. Unlike EPROM chips, EEPROMs do not need to be removed from the computer to be modified. However, an EEPROM chip has to be erased and reprogrammed in its entirety, not selectively. It also has a limited life - that is, the number of times it can be reprogrammed is limited to tens or hundreds of thousands of times. In an EEPROM that is frequently reprogrammed while the computer is in use, the life of the EEPROM can be an important design consideration.

Time-sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. Processor time shared among multiple users simultaneously is termed time-sharing.

Mass storage refers to the storage of large amounts of data in a persisting and machine-readable fashion.

Input / Output (I/O) devices | Interactive computer system | Caching

I/O (input/output), pronounced "eye-oh," describes any operation, program, or device that transfers data to or from a computer. Typical I/O devices are printers, hard disks, keyboards, and mice. In fact, some devices are basically input-only devices (keyboards and mice); others are primarily output-only devices (printers); and others provide both input and output of data (hard disks, diskettes, writable CD-ROMs).

In computer science, interactive computing refers to software which accepts input from humans as it runs. Interactive software includes most popular programs, such as word processors or spreadsheet applications.

Caching is a very general technique for improving computer system performance. Based on the principle of locality of reference, it is used in a computer's primary storage hierarchy, its operating system, networks, and databases.
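As an illustration of exploiting locality of reference, here is a sketch of one common replacement policy, a least-recently-used (LRU) cache. The class and capacity are illustrative, not any particular system's design:

```python
from collections import OrderedDict

class LRUCache:
    """Keep recently used items; evict the least recently used when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # ordered oldest -> most recently used

    def get(self, key):
        if key not in self.data:
            return None                  # miss: caller goes to slow storage
        self.data.move_to_end(key)       # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes "b" the least recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None (miss)
print(cache.get("a"))  # 1 (hit)
```

The policy pays off precisely when accesses show temporal locality: items used recently are likely to be used again soon, so keeping them cached avoids the slower device.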

Instruction-Execution Cycle | Response time | Cache management

An instruction cycle (sometimes called a fetch–decode–execute cycle) is the basic operational process of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction dictates, and carries out those actions.

The elapsed time between the end of an inquiry or demand on a computer system and the beginning of a response; for example, the length of the time between an indication of the end of an inquiry and the display of the first character of the response at a user terminal.

Because caches are small, cache management is an important design problem: careful selection of the cache size and of a replacement policy can result in greatly increased performance. The system must also decide when cached data are written back to the slower storage they mirror, so that the cache and that storage remain consistent.

Instruction Register | Process | I/O subsystem

In computing, an instruction register (IR) is the part of a CPU's control unit that holds the instruction currently being executed or decoded.

A process is an instance of a program running in a computer. It is close in meaning to task, a term used in some operating systems. In UNIX and some other operating systems, a process is started when a program is initiated (either by a user entering a shell command or by another program).

The I/O subsystem is connected to the rest of the components. It is in charge of interacting with all of the devices that are responsible for either entering information into the system or storing and displaying information coming from the system.

Storage Device Hierarchy | Virtual memory | Protection

A storage device hierarchy consists of a group of storage devices that have different costs for storing data, different amounts of data stored, and different speeds of accessing the data.

Virtual memory is a memory management capability of an OS that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily transferring data from random access memory (RAM) to disk storage. Virtual address space is increased using active memory in RAM and inactive memory in hard disk drives (HDDs) to form contiguous addresses that hold both the application and its data.
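A toy sketch of the idea above, assuming a FIFO page-replacement policy and a simulated backing store; real virtual-memory hardware uses page tables and more sophisticated replacement, so every name here is illustrative:

```python
from collections import deque

class VirtualMemory:
    """A few physical frames back a larger virtual space; touching a
    non-resident page triggers a page fault and a FIFO eviction."""

    def __init__(self, frames):
        self.frames = frames
        self.resident = deque()   # pages currently in RAM, oldest first
        self.faults = 0

    def touch(self, page):
        if page in self.resident:
            return "hit"
        self.faults += 1                  # page fault: fetch from disk
        if len(self.resident) == self.frames:
            self.resident.popleft()       # evict oldest page to disk
        self.resident.append(page)
        return "fault"

vm = VirtualMemory(frames=2)
outcomes = [vm.touch(p) for p in [0, 1, 0, 2, 1]]
print(outcomes)   # ['fault', 'fault', 'hit', 'fault', 'hit']
print(vm.faults)  # 3
```

The reference string 0, 1, 0, 2, 1 fits five pages of access into two frames: the process "uses" more memory than physically exists, at the cost of page faults.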

Protection, then, is any mechanism for controlling the access of processes or users to the resources defined by a computer system. This mechanism must provide means to specify the controls to be imposed and to enforce the controls. Protection can improve reliability by detecting latent errors at the interfaces between component subsystems.

SCSI | Swapping | Security

The Small Computer System Interface (SCSI) is a set of parallel interface standards developed by the American National Standards Institute (ANSI) for attaching printers, disk drives, scanners and other peripherals to computers. SCSI (pronounced "skuzzy") is supported by all major operating systems.

A process must be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.

Security protects the integrity of the information stored in the system from unauthorized access, malicious destruction or alteration, and accidental introduction of inconsistency.

Direct Memory Access | Interrupt driven | Network OS

Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPU’s execution load.

If the CPU does not poll the control bit, but instead receives an interrupt when the device is ready for the next byte, the data transfer is said to be interrupt driven.

A network operating system is an operating system that provides features such as file sharing across the network, along with a communication scheme that allows different processes on different computers to exchange messages.

Device Driver | Trap or exception | Real-time OS

The device controller is responsible for moving the data between the peripheral devices that it controls and its local buffer storage. Typically, operating systems have a device driver for each device controller. This device driver understands the device controller and provides the rest of the operating system with a uniform interface to the device.

A trap (or an exception) is a software-generated interrupt caused either by an error (for example, division by zero or invalid memory access) or by a specific request from a user program that an operating-system service be performed. The interrupt-driven nature of an operating system defines that system’s general structure.

A real-time operating system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application.

Multiprocessor System (Parallel system) | Dual-mode | Hand-held System

Such systems have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory, and peripheral devices. Multiprocessor systems first appeared prominently in servers and have since migrated to desktop and laptop systems.

The dual mode of operation provides us with the means for protecting the operating system from errant users—and errant users from one another. We accomplish this protection by designating some of the machine instructions that may cause harm as privileged instructions.

Mobile computing refers to computing on handheld smartphones and tablet computers, which offer several unique features.

Symmetric Multi-Processor | User-mode | Multi-media system

SMP (symmetric multiprocessing) is the processing of programs by multiple processors that share a common operating system and memory. In symmetric (or "tightly coupled") multiprocessing, the processors share memory and the I/O bus or data path. A single copy of the operating system is in charge of all the processors.

When the computer system is executing on behalf of a user application, the system is in user-mode.

A multimedia system handles multimedia data, such as binary files containing audio or audio/video (A/V) information.

Uniform Memory Access | Kernel mode | Client-server

UMA is defined as the situation in which access to any RAM from any CPU takes the same amount of time.

Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system. Crashes in kernel mode are catastrophic; they will halt the entire PC. In user mode, by contrast, the executing code has no ability to directly access hardware or reference memory.

Denoting a computer system in which a central server provides data to a number of networked workstations.

Non-Uniform Memory Access | Privileged instructions | Peer-to-peer

Systems in which memory access times vary significantly are known collectively as non-uniform memory access (NUMA) systems, and without exception, they are slower than systems in which memory and CPUs are located on the same motherboard.

A privileged instruction is a processor op-code (assembler instruction) which can only be executed in "supervisor" (or Ring-0) mode. These types of instructions tend to be used to access I/O devices and protected data structures from the Windows kernel.

In this model, clients and servers are not distinguished from one another. Instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service. Peer-to-peer systems offer an advantage over traditional client-server systems. In a client-server system, the server is a bottleneck; but in a peer-to-peer system, services can be provided by several nodes distributed throughout the network. To participate in a peer-to-peer system, a node must first join the network of peers. Once a node has joined the network, it can begin providing services to, and requesting services from, other nodes in the network.

Multiple Computing Cores | Timer | Open Source OS

Placing multiple computing cores on the same CPU chip allows the chip to execute several tasks in parallel. For example, Intel has produced a 48-core processor for research in cloud computing; each core has an x86 architecture.

A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example,1/60 second) or variable (for example, from 1 millisecond to 1 second). A variable timer is generally implemented by a fixed-rate clock and a counter.
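The fixed-rate-clock-plus-counter design described above can be sketched as follows; the class and method names are illustrative, and the "interrupt" is just a flag here rather than a real hardware signal:

```python
class VariableTimer:
    """A variable timer built from a fixed-rate clock and a counter:
    the OS loads the counter, each clock tick decrements it, and the
    timer interrupt fires when the counter reaches zero."""

    def __init__(self):
        self.counter = 0
        self.interrupted = False

    def set(self, ticks):
        self.counter = ticks
        self.interrupted = False

    def clock_tick(self):                # invoked by the fixed-rate clock
        if self.counter > 0:
            self.counter -= 1
            if self.counter == 0:
                self.interrupted = True  # raise the timer interrupt

timer = VariableTimer()
timer.set(3)
timer.clock_tick()
timer.clock_tick()
print(timer.interrupted)  # False: only 2 of 3 ticks have elapsed
timer.clock_tick()
print(timer.interrupted)  # True: interrupt raised on the 3rd tick
```

Varying the value loaded into the counter is what makes the timer "variable" even though the underlying clock rate is fixed.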

In general, open source refers to any program whose source code is made available for use or modification as users or other developers see fit. Open source software is usually developed as a public collaboration and made freely available.

Blade Server | Process management | Linux

A blade server is a server architecture that houses multiple server modules ("blades") in a single chassis. It is widely used in datacenters to save space and improve system management. Either self-standing or rack mounted, the chassis provides the power supply, and each blade has its own CPU, RAM and storage.

Process management is the operating system's set of activities for managing processes: creating and deleting both user and system processes, scheduling processes, and providing mechanisms for process synchronization, communication, and deadlock handling.

Linux is an open-source operating system modelled on UNIX.

Clustered System | Program counter | BSD Unix

A clustered system consists of a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

A program counter is a register in a computer processor that contains the address (location) of the instruction being executed at the current time. As each instruction gets fetched, the program counter increases its stored value by 1.

BSD UNIX has a longer and more complicated history than Linux. It started in 1978 as a derivative of AT&T’s UNIX. Releases from the University of California at Berkeley (UCB) came in source and binary form, but they were not open source because a license from AT&T was required.

Beowulf Clusters | Memory management | Solaris

Beowulf Clusters are designed to solve high-performance computing tasks. A Beowulf Cluster consists of commodity hardware such as personal computers connected via a LAN. No single specific software package is required to construct a cluster. Rather the nodes use a set of open-source software libraries to communicate with one another. Thus there are a variety of approaches to constructing a Beowulf Cluster.

Memory management is the process of controlling and coordinating computer memory, assigning portions called blocks to various running programs to optimize overall system performance. Memory management resides in hardware, in the OS (operating system), and in programs and applications.

Solaris is the computer operating system that Sun Microsystems provides for its family of Scalable Processor Architecture-based processors as well as for Intel-based processors. Sun has historically dominated the large UNIX workstation market.

Storage-Area Networks | Instruction fetch cycle

A storage area network (SAN) is a network which provides access to consolidated, block level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached devices.

An instruction cycle (sometimes called a fetch–decode–execute cycle) is the basic operational process of a computer. It is the process by which a computer retrieves a program instruction from its memory, determines what actions the instruction dictates, and carries out those actions.

Practice Exercises

1. What are the three main purposes of an operating system?

• To provide an environment for a computer user to execute programs in a convenient and efficient manner.

• To allocate the separate resources of the computer as needed to solve the problem given. The allocation process should be as fair and efficient as possible.

• As a control program, it serves two major functions: (1) supervision of the execution of user programs to prevent errors and improper use of the computer, and (2) management of the operation and control of I/O devices.

2. We have stressed the need for an operating system to make efficient use of the computing hardware. When is it appropriate for the operating system to forsake this principle and to waste resources? Why is such a system not really wasteful?

• Single-user systems should maximize use of the system for the user. A GUI might "waste" CPU cycles, but it optimizes the user's interaction with the system.

3. What is the main difficulty that a programmer must overcome in writing an operating system for a real-time environment?

• The main difficulty is keeping the operating system within the fixed time constraints of a real-time system. If the system does not complete a task within its time frame, it may cause a breakdown of the entire system it is running. Therefore, when writing an OS for a real-time system, the writer must ensure that the scheduling schemes do not allow response time to exceed the time constraints.

4. Keeping in mind the various definitions of operating system, consider whether the operating system should include applications such as web browsers and mail programs. Argue both that it should and that it should not, and support your answers.

• An argument in favor of including popular applications with the OS is that if the application is embedded within the OS, it is likely to be better able to take advantage of features in the kernel and therefore have performance advantages over an application that runs outside of the kernel. Arguments against embedding applications within the OS typically dominate, however: (1) the applications are applications, not part of an OS; (2) any performance benefits of running within the kernel are offset by the security vulnerabilities; and (3) it may lead to a bloated OS.

5. How does the distinction between kernel mode and user mode function as a rudimentary form of protection (security) system?

• The distinction between kernel mode and user mode provides a rudimentary form of protection in the following manner. Certain instructions could be executed only when the CPU is in kernel mode. Similarly hardware devices could be accessed only when the program is executing in kernel mode. Control over when interrupts could be enabled or disabled is also possible only when the CPU is in kernel mode. Consequently, the CPU has very limited capability when executing in user mode, thereby enforcing protection of critical resources.

6. Which of the following instructions should be privileged?

• Set value of timer

• Read the clock

• Clear memory

• Issue a trap instruction

• Turn off interrupts

• Modify entries in device-status table

• Switch from user to kernel mode

• Access I/O device

The following operations need to be privileged: Set value of timer, Clear memory, Turn off Interrupts, Modify entries in device-status table, Access I/O device. The rest can be performed in user mode.

7. Some early computers protected the operating system by placing it in a memory partition that could not be modified by either the user job or the operating system itself. Describe two difficulties that you think could arise with such a scheme.

• The data required by the OS (passwords, access controls, accounting information, and so on) would have to be stored in or passed through unprotected memory and thus be accessible to unauthorized users.

• Because the partition could not be modified even by the operating system itself, the OS could not be updated, nor could it keep any changing data (such as tables and queues) in its own protected area.

8. Some CPU’s provide more than two modes of operation. What are two possible uses of these multiple modes?

• Although most systems only distinguish between user and kernel modes, some CPUs have supported multiple modes. Multiple modes could be used to provide a finer-grained security policy. For example, rather than distinguishing between just kernel mode and user mode, you could distinguish between different types of user mode. Perhaps users belonging to the same group could execute each other's code. The machine would go into a specified mode when one of these users was running code; when the machine was in this mode, a member of the group could run code belonging to anyone else in the group. Another possibility would be to provide different distinctions within kernel mode. For example, a specific mode could allow USB device drivers to run. This would mean that USB devices could be serviced without having to switch to full kernel mode, thereby essentially allowing USB device drivers to run in a quasi-user/kernel mode.

9. Timers could be used to compute the current time. Provide a short description of how this could be accomplished.

• A program could use the following approach to compute the current time using timer interrupts. The program could set a timer for some time in the future and go to sleep. When it is awakened by the interrupt, it could update its local state, which it is using to keep track of the number of interrupts it has received thus far. It could then repeat this process of continually setting timer interrupts and updating its local state when the interrupts are actually raised.
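The approach above can be sketched as follows, assuming a hypothetical 10 ms timer period; the "interrupts" are simulated as plain method calls:

```python
TICK_MS = 10    # assumed timer period between interrupts (illustrative)

class Clock:
    """Derive the current time from a boot time plus a count of timer
    interrupts, as described in the answer above."""

    def __init__(self, boot_time_ms):
        self.boot_time_ms = boot_time_ms
        self.ticks = 0

    def timer_interrupt(self):   # interrupt handler: just update local state
        self.ticks += 1

    def current_time_ms(self):
        return self.boot_time_ms + self.ticks * TICK_MS

clk = Clock(boot_time_ms=1_000)
for _ in range(250):             # simulate 250 timer interrupts firing
    clk.timer_interrupt()
print(clk.current_time_ms())     # 1000 + 250 * 10 = 3500
```

The accuracy of such a clock is limited by the timer period: time is only known to the nearest tick.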

10. Give two reasons why caches are useful. What problems do they solve? What problems do they cause? If a cache can be made as large as the device for which it is caching (for instance, a cache as large as a disk), why not make it that large and eliminate the device?

• Caches are useful when two or more components need to exchange data, and the components perform transfers at differing speeds. Caches solve the transfer problem by providing a buffer of intermediate speed between the components. If the fast device finds the data it needs in the cache, it need not wait for the slower device. The data in the cache must be kept consistent with the data in the components. If a component has a data value change, and the datum is also in the cache, the cache must also be updated. This is especially a problem on multiprocessor systems where more than one process may be accessing a datum. A component may be eliminated by an equal-sized cache, but only if: (a) the cache and component have equivalent state-saving capacity (that is, if the component retains its data when electricity is removed, the cache must retain data as well), and (b) the cache is affordable, because faster storage tends to be more expensive.

11. Distinguish between the client-server and peer-to-peer models of distributed systems.

• The client-server model firmly distinguishes the roles of the client and server. Under this model, the client requests services that are provided by the server. The peer-to-peer model does not have such strict roles. In fact, all nodes in the system are considered peers and thus may act as either clients or servers, or both. A node may request a service from another peer, or the node may in fact provide such a service to other peers in the system. For example, let's consider a system of nodes that share cooking recipes. Under the client-server model, all recipes are stored with the servers. If a client wishes to access a recipe, it must request the recipe from the specified server. Using the peer-to-peer model, a peer node could ask other peer nodes for the specified recipe. The node (or perhaps nodes) with the requested recipe could provide it to the requesting node. Notice how each peer could act as both a client and a server.

12. In a multiprogramming and time-sharing environment, several users share the system simultaneously. This situation can result in various security problems. a.) What are two such problems?

• Stealing or copying a user's files.

• Writing over another program's area in memory; using system resources without proper accounting; causing the printer to mix output by sending data while some other user's file is printing.

b.) Can we ensure the same degree of security in a time-shared machine as in a dedicated machine? Explain.

• Probably not, since any protection scheme devised by a human can also be broken by a human, and the more complex the scheme is, the more difficult it is to be confident of its correct implementation.

13. The issue of resource utilization shows up in different forms in different types of operating systems. List what resources must be managed carefully in the following settings.

a. Mainframe or minicomputer systems: memory and CPU resources, storage, network bandwidth.

b. Workstations connected to servers: memory and CPU resources.

c. Mobile computers: power consumption, memory resources.

14. Under what circumstances would users be better off using a time-sharing system than a PC or a single-user workstation?

• When there are few other users, the task is large, and the hardware is fast, time-sharing makes sense. The full power of the system can be brought to bear on the user's problem, and the problem can be solved faster than on a personal computer. Another case occurs when lots of other users need resources at the same time. A personal computer is best when the job is small enough to be executed reasonably on it and when performance is sufficient to execute the program to the user's satisfaction.

15. Describe the differences between symmetric and asymmetric multiprocessing. What are three advantages and one disadvantage of multiprocessor systems?

• Symmetric multiprocessing (SMP) has no master-slave relationship: all processors are treated as equals, each can perform all tasks, including I/O, and each runs an identical copy of the operating system code. Asymmetric multiprocessing, by contrast, uses a master-slave arrangement: one master processor assigns work to the remaining slave processors and checks that everything is working properly, and I/O is usually handled by the master only.

The advantages and the disadvantage of multiprocessor systems are:

• They can save money by not duplicating power supplies, housings, and peripherals.

• They can execute programs more quickly and can have increased reliability.

• They provide greater throughput (the amount of work completed per unit time).

• A disadvantage is that they are more complex in both hardware and software than single-processor systems.

16. How do clustered systems differ from multiprocessor systems? What is required for two machines belonging to a cluster to cooperate to provide a highly available service?

• Clustered systems are typically constructed by combining multiple computers into a single system to perform a computational task distributed across the cluster. Multiprocessor systems on the other hand could be a single physical entity comprising multiple CPUs. A clustered system is less tightly coupled than a multiprocessor system. Clustered systems communicate using messages, while processors in a multiprocessor system could communicate using shared memory. In order for two machines to provide a highly available service, the state on the two machines should be replicated and should be consistently updated. When one of the machines fails, the other could then take over the functionality of the failed machine.

17. Consider a computing cluster consisting of two nodes running a database. Describe two ways in which the cluster software can manage access to the data on the disk. Discuss the benefits and disadvantages of each.

• Consider the following two alternatives: asymmetric clustering and parallel clustering. With asymmetric clustering, one host runs the database application with the other host simply monitoring it. If the server fails, the monitoring host becomes the active server. This is appropriate for providing redundancy; however, it does not utilize the potential processing power of both hosts. With parallel clustering, the database application can run in parallel on both hosts. The difficulty in implementing parallel clusters is providing some form of distributed locking mechanism for files on the shared disk.

18. How are network computers different from traditional personal computers? Describe some usage scenarios in which it is advantageous to use network computers.

• A network computer relies on a centralized computer for most of its services. It can therefore have a minimal operating system to manage its resources. A personal computer, on the other hand, has to be capable of providing all of the required functionality in a stand-alone manner without relying on a centralized server. Scenarios where administrative costs are high and where sharing leads to more efficient use of resources are precisely those settings where network computers are preferred.

19. What is the purpose of interrupts? How does an interrupt differ from a trap? Can traps be generated intentionally? If so, for what purpose?

• An interrupt is a hardware-generated change of flow within the system. An interrupt handler is summoned to deal with the cause of the interrupt; control is then returned to the interrupted context and instruction. A trap is a software-generated interrupt. An interrupt can be used to signal the completion of an I/O operation, obviating the need for device polling. A trap can be used to call operating-system routines or to catch arithmetic errors.

20. Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPU's execution load.

a. How does the CPU interface with the device to coordinate the transfer?

b. How does the CPU know when the memory operations are complete?

c. The CPU is allowed to execute other programs while the DMA controller is transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.

The CPU can initiate a DMA operation by writing values into special registers that can be independently accessed by the device. The device initiates the corresponding operation once it receives a command from the CPU. When the device is finished with its operation, it interrupts the CPU to indicate the completion of the operation. Both the device and the CPU can be accessing memory simultaneously. The memory controller provides access to the memory bus in a fair manner to these two entities. A CPU might therefore be unable to issue memory operations at peak speeds since it has to compete with the device in order to obtain access to the memory bus.

21. Some computer systems do not provide a privileged mode of operation in hardware. Is it possible to construct a secure operating system for these computer systems? Give arguments both that it is and that it is not possible.

• An operating system for a machine of this type would need to remain in control (or monitor mode) at all times. This could be accomplished by two methods:

a. Software interpretation of all user programs (as in some BASIC, Java, and LISP systems). The software interpreter would provide, in software, what the hardware does not provide.

b. Requiring that all programs be written in high-level languages so that all object code is compiler-produced. The compiler would generate (either in-line or by function calls) the protection checks that the hardware is missing.

22. Many SMP systems have different levels of caches; one level is local to each processing core, and another level is shared among all processing cores. Why are caching systems designed this way?

 The different levels are based on access speed as well as size. In general, the closer the cache is to the CPU, the faster the access. However, faster caches are typically more costly. Therefore, smaller, faster caches are placed local to each CPU, while larger, slower caches are shared among several processors.

23. Consider an SMP system similar to the one shown in Figure 1.6. Illustrate with an example how data residing in memory could in fact have a different value in each of the local caches.

 Say processor 1 reads data A with value 5 from main memory into its local cache. Similarly, processor 2 reads data A into its local cache as well. Processor 1 then updates A to 10. However, since A resides in processor 1’s local cache, the update only occurs there and not in the local cache for processor 2.
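The scenario above can be illustrated with a toy model; the dictionaries are stand-ins for the per-processor hardware caches, and no coherence protocol is modeled:

```python
# Two private caches each hold their own copy of location A, so an
# update by one processor is invisible to the other.

main_memory = {"A": 5}
cache_p1 = {}
cache_p2 = {}

def read(cache, addr):
    if addr not in cache:             # cache miss: fetch from main memory
        cache[addr] = main_memory[addr]
    return cache[addr]

def write(cache, addr, value):
    cache[addr] = value               # the write hits only the local cache

read(cache_p1, "A")       # processor 1 caches A = 5
read(cache_p2, "A")       # processor 2 caches A = 5
write(cache_p1, "A", 10)  # processor 1 updates only its own copy

print(cache_p1["A"], cache_p2["A"], main_memory["A"])  # 10 5 5
```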

24. Discuss, with examples, how the problem of maintaining coherence of cached data manifests itself in the following processing environments:

Single-processor systems: The memory needs to be updated when a processor issues updates to cached values. These updates can be performed immediately or in a lazy manner.

Multiprocessor systems: Different processors might be caching the same memory location in their local caches. When updates are made, the other cached copies need to be invalidated or updated.

Distributed systems: Consistency of cached memory values is not an issue. However, consistency problems might arise when a client caches file data.
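The multiprocessor case can be sketched with a toy write-invalidate scheme; the dictionaries standing in for per-core caches are, of course, a drastic simplification of a real coherence protocol:

```python
# When one cache writes a location, every other cache's copy is
# invalidated, so a later read refetches the new value from memory.

main_memory = {"X": 1}
caches = [dict(), dict()]     # one private cache per processor

def read(cpu, addr):
    if addr not in caches[cpu]:
        caches[cpu][addr] = main_memory[addr]
    return caches[cpu][addr]

def write(cpu, addr, value):
    main_memory[addr] = value          # write-through, for simplicity
    caches[cpu][addr] = value
    for i, c in enumerate(caches):     # invalidate every other copy
        if i != cpu:
            c.pop(addr, None)

read(0, "X"); read(1, "X")   # both caches hold X = 1
write(0, "X", 7)             # CPU 0 updates; CPU 1's copy is invalidated
print(read(1, "X"))          # 7 (refetched from memory)
```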

25. Describe a mechanism for enforcing memory protection in order to prevent a program from modifying the memory associated with other programs.

 The processor could keep track of what locations are associated with each process and limit access to locations that are outside of a program’s extent. Information regarding the extent of a program’s memory could be maintained by using base and limit registers and by performing a check for every memory access.
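A minimal sketch of the base/limit check described above (the region values are arbitrary examples):

```python
# The check performed, conceptually in hardware, on every memory
# reference: legal addresses satisfy base <= addr < base + limit.

def check_access(addr, base, limit):
    return base <= addr < base + limit

# Suppose a process owns the region [300, 420): base = 300, limit = 120.
print(check_access(350, base=300, limit=120))   # True: inside the extent
print(check_access(500, base=300, limit=120))   # False: would trap to the OS
print(check_access(299, base=300, limit=120))   # False: below the base
```

An access that fails the check would raise a trap, transferring control to the operating system rather than completing the reference.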

26. Which network configuration, LAN or WAN, would best suit the following environments?

a. A campus student union. LAN

b. Several campus locations across a statewide university system. WAN

c. A neighborhood. LAN or WAN

27. Describe some of the challenges of designing operating systems for mobile devices compared with designing operating systems for traditional PCs.

 The greatest challenges in designing mobile operating systems include:

 Less storage capacity means the operating system must manage memory carefully.

 The operating system must also manage power consumption carefully.

 Less processing power plus fewer processors mean the operating system must use the available CPU resources efficiently.

28. What are some advantages of peer-to-peer systems over client-server systems?

 Computers on a peer-to-peer network are easy to install and configure.

 All resources and content are shared by all the peers, unlike a client-server architecture, where the server provides all the content and resources.

 P2P is more reliable because the central dependency is eliminated: failure of one peer does not affect the functioning of the other peers. In a client-server network, if the server goes down, the whole network is affected.

 There is no need for a full-time system administrator; every user administers their own machine and controls their own shared resources.

 The overall cost of building and maintaining this type of network is comparatively low.

29. Describe some distributed applications that would be appropriate for a peer-to-peer system.

 Peer-to-peer systems suit distributed applications in which each node can act as both a client and a server: file-sharing networks such as Gnutella and BitTorrent, distributed chat and messaging systems, and collaborative applications in which peers exchange data directly without a central server.

30. Identify several advantages and several disadvantages of open-source operating systems. Include the type of people who would find each aspect to be an advantage or a disadvantage.

 Open-source operating systems have the advantages of having many people working on them, many people debugging them, ease of access and distribution, and rapid update cycles. For students and programmers, there is a clear advantage in being able to view and modify the source code. Typically, open-source operating systems are free for some forms of use, usually requiring payment only for support services.

Commercial operating-system companies usually do not like the competition that open-source operating systems bring, because these features are difficult to compete against. Some open-source operating systems do not offer paid support programs, and some companies avoid open-source projects because they need paid support, so that they have some entity to hold accountable if there is a problem or they need help fixing an issue. Finally, some complain that a lack of discipline in the coding of open-source operating systems means that backward compatibility is lacking, making upgrades difficult, and that the frequent release cycle exacerbates these issues by forcing users to upgrade often.