9. Distributed Processing

Exception and Exception Handlers

Types of Bugs -
Logic Errors – Errors in program logic due to a poor understanding of the problem and the solution procedure.
Syntax Errors – Errors that arise due to a poor understanding of the language.

Exceptions are runtime anomalies or unusual conditions that a program may encounter while executing, e.g. divide by zero, access to an array out of bounds, or running out of memory or disk space.

When a program encounters an exceptional condition, it should be identified and dealt with effectively.

Exception Handling: It is a mechanism to detect and report an "exceptional circumstance" so that appropriate action can be taken. It involves the following tasks:
1. Find the problem (hit the exception)
2. Inform that an error has occurred (throw the exception)
3. Receive the error information (catch the exception)
4. Take corrective action (handle the exception)

#include <iostream>
using namespace std;

int main()
{
    int x, y;
    cout << "Enter values of x and y: ";
    cin >> x >> y;
    try {
        if (x != 0)
            cout << "y/x is = " << y / x;
        else
            throw x;                      // throw the exception
    }
    catch (int i) {                       // catch and handle it
        cout << "Exception caught: x = " << i;
    }
    return 0;
}

The same exception may be thrown multiple times in the try block. There may be many different exceptions thrown from the same try block, and there can be multiple catch blocks following the same try block, each handling a different exception (a fuller sketch appears at the end of this subsection). A single catch block can also handle all possible types of exceptions:

catch(...)
{
    // Statements for processing all exceptions
}

Propagating an Exception: If a handler for an exception is not defined at the place where the exception occurs, the exception is propagated so that it can be handled in the calling subprogram. If not handled there, it is propagated further. If no subprogram/program provides a handler, the entire program is terminated and a standard language-defined handler is invoked.
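To illustrate multiple catch blocks and the catch-all handler together, here is a small self-contained sketch (the thrown values are invented for illustration); the first handler whose parameter type matches the thrown value is selected, and catch(...) receives anything left over:

#include <iostream>
using namespace std;

void test(int k)
{
    try {
        if (k == 1) throw 42;      // int exception
        if (k == 2) throw 3.14;    // double exception
        if (k == 3) throw 'c';     // char: no matching handler below
    }
    catch (int i)    { cout << "caught int: " << i << "\n"; }
    catch (double d) { cout << "caught double: " << d << "\n"; }
    catch (...)      { cout << "caught some other exception\n"; }
}

int main()
{
    test(1);   // caught int: 42
    test(2);   // caught double: 3.14
    test(3);   // caught some other exception
    return 0;
}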

COROUTINES: Coroutines are subprogram components that generalize subroutines to allow multiple entry points and the suspending and resuming of execution at certain locations.

Here we allow subprograms to return to their calling program before completing execution. Such subprograms are termed coroutines. When a coroutine receives control from another subprogram, it executes partially and is then suspended when it returns control. At a later point, the calling program may resume execution of the coroutine from the point at which execution was previously suspended. If A calls subprogram B as a coroutine, B executes awhile and returns control to A, just as any ordinary subprogram would do. When A again passes control to B, B resumes execution from the point at which it last returned control.

Comparison of Coroutines with Subroutines
1. The lifespan of subroutines is dictated by last in, first out (the last subroutine called is the first to return); the lifespan of coroutines is dictated by their use and need.
2. The start of a subroutine is its only entry point. Coroutines may have multiple entry points.
3. A subroutine has to complete execution before it returns control. Coroutines may suspend execution and return control to the caller.

Example: Let there be a consumer-producer relationship where one routine creates items and adds them to a queue and the other removes items from the queue and uses them.

Implementation of Coroutines
Only one activation of each coroutine exists at a time. A single location, called the resume point, is reserved in the activation record to save the old ip value of the CIP (current instruction pointer) when a resume instruction transfers control to another coroutine.

Execution of "resume B" in coroutine A involves the following steps:
1. The current value of the CIP is saved in the resume point location of the activation record for A.
2. The ip value in the resume point location of B's activation record is fetched and assigned to the CIP, so that subprogram B resumes at the proper location.
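The resume-point mechanism can be sketched in ordinary C++ by keeping the saved "ip" as an explicit state variable and switching on it at entry. The following minimal sketch is illustrative only (the class and member names are invented): each return suspends the coroutine, and the switch on resume_point re-enters the body where it left off, just as the CIP is restored from the resume point.

#include <iostream>

// Sketch: a hand-rolled coroutine. The member 'resume_point'
// plays the role of the resume-point slot in the activation
// record: it records where execution stopped, and the switch
// at entry jumps back there on the next resume.
struct Producer {
    int resume_point = 0;   // saved "ip" value
    int i = 0;              // locals live across suspensions

    // Returns true while there is more work to do.
    bool resume() {
        switch (resume_point) {
        case 0:
            for (i = 1; i <= 3; ++i) {
                std::cout << "produced item " << i << "\n";
                resume_point = 1;   // save where to continue
                return true;        // suspend: control returns to caller
        case 1:;                    // the next resume re-enters here
            }
            resume_point = 2;
        case 2:
            return false;           // coroutine finished
        }
        return false;
    }
};

int main() {
    Producer b;
    while (b.resume())                      // caller A repeatedly resumes B
        std::cout << "back in caller\n";    // A's work interleaves with B's
}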

Subprogram Scheduling

Normally, execution of a subprogram is assumed to be initiated immediately upon its call.

Subprogram scheduling relaxes the above condition.

Scheduling Techniques:
1. Schedule a subprogram to be executed before or after other subprograms:
      call B after A
2. Schedule a subprogram to be executed when a given Boolean expression is true:
      call X when Y = 7 and Z > 0
3. Schedule subprograms on the basis of a simulated time scale (a sketch follows below):
      call B at time = Currenttime + 50
4. Schedule subprograms according to a priority designation:
      call B with priority 5

Languages: GPSS,
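Technique 3 is the basis of discrete-event simulation languages such as GPSS. As a rough sketch of how "call B at time = Currenttime + 50" could be implemented (the agenda and function names are invented for illustration, not taken from any language's runtime), pending calls can be kept in a priority queue ordered by activation time:

#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

// Sketch: a simulated-time scheduler. Events sit in a priority
// queue ordered by activation time; the main loop repeatedly
// advances Currenttime to the earliest pending event.
struct Event {
    double time;                    // simulated activation time
    std::function<void()> action;   // subprogram to run
    bool operator>(const Event& e) const { return time > e.time; }
};

std::priority_queue<Event, std::vector<Event>, std::greater<Event>> agenda;
double Currenttime = 0;

// "call <subprogram> at time = Currenttime + delay"
void call_at(double delay, std::function<void()> subprogram) {
    agenda.push({Currenttime + delay, std::move(subprogram)});
}

void B() { std::printf("B runs at time %.0f\n", Currenttime); }

int main() {
    call_at(50, B);                 // call B at time = Currenttime + 50
    call_at(20, [] { std::printf("A runs at time %.0f\n", Currenttime); });
    while (!agenda.empty()) {       // event loop: earliest event first
        Event e = agenda.top(); agenda.pop();
        Currenttime = e.time;       // advance the simulated clock
        e.action();
    }
}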

Parallel Programming

Computer systems capable of executing several programs concurrently are now quite common. A multiprocessor system has several central processing units (CPUs) sharing a common memory. A distributed or parallel computer system has several computers (possibly hundreds), each with its own memory and CPU, connected with communication links into a network in which each can communicate with the others. In such systems, many tasks may execute concurrently. Operating systems that support multiprogramming and time sharing provide this sort of concurrent execution for separate user programs.

Principles of Parallel Programming Languages

Parallel programming constructs add complexity to language design because several processors may be accessing the same data simultaneously. To consider parallelism in programming languages, the following five concepts must be addressed:

1. Variable definitions: Variables may be either mutable or definitional. Mutable variables are the common variables declared in most sequential languages: values may be assigned to them and changed during program execution. A definitional variable may be assigned a value only once. The virtue of such a variable is that there is no synchronization problem: once it is assigned a value, any task may access the variable and obtain the correct value.

2. Parallel composition: In a sequential program, execution proceeds from one statement to the next. In addition to the sequential and conditional statements of sequential programming languages, we need to add the parallel statement, which causes additional threads of control to begin executing (see the sketch after this list).

3. Program structure: Parallel programs generally follow one of the following two execution models:
(a) They may be transformational, where the goal is to transform the input data into an appropriate output value. Parallelism is applied to speed up the process, such as multiplying a matrix rapidly by multiplying several sections of it in parallel.
(b) They may be reactive, where the program reacts to external stimuli called events. Real-time and command-and-control systems are examples of reactive systems. An operating system and a transaction processing system, such as a reservation system, are typical examples of reactive systems. They are characterized by generally having nondeterministic behavior, because it is never explicit exactly when an event will occur.

4. Communication: Parallel programs must communicate with one another. Such communication will typically be via shared memory, with common data objects accessed by each parallel program, or via messages, where each parallel program has its own copy of the data object and passes data values among the other parallel programs.

5. Synchronization: A parallel program must be able to order the execution of its various threads of control. Although nondeterministic behavior is appropriate for many applications, in some cases an ordering must be imposed. For example, it is possible to design a compiler where the scanner and parser execute in parallel, but you have to make sure that the scanner has read in the next token before the parser operates on it. The communication mechanism described earlier will generally implement this.
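As a minimal illustration of the parallel statement of concept 2 and the ordering requirement of concept 5, the following sketch uses standard C++ threads as a stand-in for a true parallel statement (it is not a construct from any particular parallel language):

#include <iostream>
#include <thread>

// Sketch: a "parallel statement". Two threads of control are
// started, run concurrently, and are joined before execution
// continues past the statement.
int main() {
    int a = 0, b = 0;

    std::thread t1([&] { a = 40 + 2; });   // first parallel arm
    std::thread t2([&] { b = 7 * 6; });    // second parallel arm

    t1.join();   // joining imposes the required ordering:
    t2.join();   // both arms finish before a and b are read

    std::cout << a << ' ' << b << "\n";    // prints "42 42"
}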
Semaphores

A semaphore is a data object used for synchronization between tasks. A semaphore consists of two parts:
(1) an integer counter, whose value is always positive or zero, that is used to count the number of signals sent but not yet received; and
(2) a queue of tasks that are waiting for signals to be sent.
In a binary semaphore, the counter may only have the values zero and one. In a general semaphore, the counter may take on any positive integer value.

Two primitive operations are defined for a semaphore data object P:

signal(P): When executed by a task A, this operation tests the value of the counter in P; if it is zero and the task queue is nonempty, then the first task in the task queue is removed from the queue and its execution is resumed; if it is not zero or if the queue is empty, then the counter is incremented by one (indicating a signal has been sent but not yet received). In either case, execution of task A continues after the signal operation is complete.

wait(P): When executed by a task B, this operation tests the value of the counter in P; if it is nonzero, then the counter value is decremented by one (indicating that B has received a signal) and task B continues execution; if it is zero, then task B is inserted at the end of the task queue for P and execution of B is suspended (indicating that B is waiting for a signal to be sent).

signal and wait both have simple semantics that require the principle of atomicity: each operation completes execution before any other concurrent operation can access its data. Atomicity prevents certain classes of undesirable nondeterministic events from occurring.
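The signal/wait semantics map directly onto a counter plus a queue of suspended tasks. Here is a minimal sketch of a general semaphore using the standard C++ mutex and condition variable to obtain the required atomicity (the class name and interface are illustrative, not from the notes):

#include <condition_variable>
#include <mutex>

// Sketch: a general (counting) semaphore with the signal/wait
// semantics described above. The mutex makes each operation
// atomic; the condition variable plays the role of the task
// queue of suspended waiters.
class Semaphore {
    int counter;                 // signals sent but not yet received
    std::mutex m;
    std::condition_variable q;   // "task queue" of waiting tasks
public:
    explicit Semaphore(int initial = 0) : counter(initial) {}

    // signal(P): wake a waiter if any, otherwise count the signal.
    void signal() {
        std::lock_guard<std::mutex> lock(m);
        ++counter;               // record the signal
        q.notify_one();          // resume the first waiting task, if any
    }

    // wait(P): receive a signal, suspending until one is available.
    void wait() {
        std::unique_lock<std::mutex> lock(m);
        q.wait(lock, [this] { return counter > 0; });  // suspend while zero
        --counter;               // signal received
    }
};

Note that the sketch folds the two branches of signal into one: incrementing the counter and notifying has the same net effect as directly resuming a waiter, because a woken task immediately decrements the counter.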
Software Architecture

Most programming languages are based on a discrete model of execution. That is, the program begins execution, and then, in some order:
1. Reads relevant data from the local file system,
2. Produces some answers,
3. Writes data to the local file system.

This description clearly shows the presence of two classes of data: temporary or transient data, whose lifetime is the execution of the program, and persistent data, whose lifetime transcends the execution of the program. Traditionally, transient data are the objects defined by the given programming language and encompass most of the objects described earlier. Persistent data have generally been in the realm of database systems or file systems. Programs execute by reading persistent data into transient data within the program and then rewriting the transient data back into persistent storage when execution terminates.

Persistent Data and Transaction Systems

For most applications, the separation of data into transient and persistent objects is satisfactory and has served us well for many years. However, there are applications where this model can be very inefficient. Consider a reservation system (e.g., airline or hotel). Here we would like the reservation program to be executing continually, because reservation calls may occur at any time. However, in this case it is inefficient to read in the persistent data describing the state of reservations, update this state with the new information, and then rewrite the reservations back to the persistent store.

We could redo the reservation system by having the entire database be an array in the reservation program and simply have the program loop waiting for new requests. This would avoid the necessity of rereading the data each time we needed to update the reservation database. The problem is that if the system failed (e.g., by programming error or power loss to the computer), all transient data would be lost. We would lose the contents of our reservation system.

This problem can be solved by the use of a persistent programming language. In this case, there is no distinction between transient and persistent data. All changes to transient program variables are immediately reflected in changes to the persistent database, and there is no need to first read data into transient storage. The two concepts are the same, resulting in a reliable storage system even if the system should fail during execution. We refer to a persistent programming language as one that does not explicitly mention data movement (e.g., between persistent and transient storage).
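For contrast with the persistent-language model just described, here is a sketch of the traditional transient/persistent round trip for a single reservation count (the file name and record format are invented for illustration); it is exactly this per-request read/update/rewrite cycle that a persistent programming language eliminates:

#include <fstream>
#include <iostream>

// Sketch: the traditional discrete model. Persistent state is
// read into a transient variable, updated, and rewritten on
// every request.
int main() {
    int seatsReserved = 0;

    // 1. Read persistent data into transient storage.
    std::ifstream in("reservations.dat");
    in >> seatsReserved;          // stays 0 if the file is missing
    in.close();

    // 2. Produce an answer / update the transient state.
    ++seatsReserved;              // one new reservation
    std::cout << "seats reserved: " << seatsReserved << "\n";

    // 3. Rewrite transient data back to the persistent store.
    std::ofstream out("reservations.dat");
    out << seatsReserved << "\n";
}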

Persistence requires a few additional design constraints: 1. Need a mechanism to indicate an object is persistent. 2. Need a mechanism to address a persistent object.

3. Need to synchronize simultaneous access to an individual persistent object.
4. Need to check type compatibility of persistent objects.

Networks and Client-Server Computing

Programs typically are written to access local data stores, produce answers, and terminate. The computer is considered a machine to be used to solve a given problem. As machines became faster and more expensive, multiprogramming systems were developed and terminals were added as a way to allow more individuals to access a computer at the same time.

Today virtually all computers have the ability to communicate with other computers using high-speed communication lines. Protocols such as X.25 and TCP/IP allow for the reliable transfer of information between two such machines. We call processors linked by communication lines "a computer network". In tightly coupled systems, the execution of several programs is handled in parallel using multiprocessor architectures and hardware techniques such as pipelining. The use of communication lines allows for the development of loosely coupled systems. In such systems, each processor has its own memory and disk storage but communicates with other processors to transfer information. Systems developed on such architectures are called distributed systems.

Distributed systems can be centralized, where a single processor does the scheduling and informs the other machines as to the tasks to execute, or distributed (peer-to-peer), where each machine is an equal and the process of scheduling is spread among all of the machines. In a centralized system, a single processor will inform another processor of the task to execute; when that task is completed, the assigned processor will indicate that it is ready to do another task.

In client server computing, the clients request a resource and the server provides that resource. A server may serve multiple clients at the same time, while a client is in contact with only one server. Both the client and server usually communicate via a computer network, but sometimes they may reside in the same system. An illustration of the client server system is given as follows:

Figure: Client Server Architecture

Characteristics of Client Server Computing
The salient points for client server computing are as follows:

1. Client server computing works with a system of request and response. The client sends a request to the server and the server responds with the desired information (a minimal sketch appears after these lists).
2. The client and server should follow a common communication protocol so they can easily interact with each other. All the communication protocols are available at the application layer.
3. A server can only accommodate a limited number of client requests at a time, so it uses a system based on priority to respond to the requests.
4. Denial of Service attacks hinder a server's ability to respond to authentic client requests by inundating it with false requests.
5. An example of a client server computing system is a web server. It returns web pages to the clients that requested them.

Difference between Client Server Computing and Peer to Peer Computing
The major differences between client server computing and peer to peer computing are as follows:

1. In client server computing, a server is a central node that services many client nodes. On the other hand, in a peer to peer system, the nodes collectively use their resources and communicate with each other.
2. In client server computing, the server is the one that communicates with the other nodes. In peer to peer computing, all the nodes are equal and share data with each other directly.
3. Client server computing is believed to be a subcategory of peer to peer computing.
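As a rough sketch of the request-and-response cycle of characteristic 1, the following single program forks a client from a server and performs one exchange over TCP. POSIX sockets are assumed; the port, address, and messages are invented for illustration, and error handling is omitted:

#include <arpa/inet.h>
#include <cstdio>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

// Sketch: one request-response exchange over TCP. The parent
// process acts as the server, the forked child as the client.
int main() {
    // Server: create a listening socket on 127.0.0.1:5555.
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    bind(srv, (sockaddr*)&addr, sizeof addr);
    listen(srv, 1);

    if (fork() == 0) {                        // child: the client
        int cli = socket(AF_INET, SOCK_STREAM, 0);
        connect(cli, (sockaddr*)&addr, sizeof addr);
        const char* req = "GET page";         // 1. client sends a request
        send(cli, req, strlen(req), 0);
        char buf[128] = {};
        recv(cli, buf, sizeof buf - 1, 0);    // 4. client reads the response
        printf("client got: %s\n", buf);
        close(cli);
        return 0;
    }

    int conn = accept(srv, nullptr, nullptr); // 2. server accepts the client
    char buf[128] = {};
    recv(conn, buf, sizeof buf - 1, 0);       // 3. server reads the request
    char resp[160];
    snprintf(resp, sizeof resp, "response to '%s'", buf);
    send(conn, resp, strlen(resp), 0);        //    ...and responds
    close(conn);
    close(srv);
    wait(nullptr);                            // reap the client process
}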

Advantages of Client Server Computing

1. All the required data is concentrated in a single place, i.e. the server. So it is easy to protect the data and provide authorization and authentication.
2. The server need not be located physically close to the clients, yet the data can be accessed efficiently.
3. It is easy to replace, upgrade or relocate the nodes in the client server model because all the nodes are independent and request data only from the server.
4. All the nodes, i.e. clients and server, may not be built on similar platforms, yet they can easily facilitate the transfer of data.

Disadvantages of Client Server Computing

1. If all the clients simultaneously request data from the server, it may get overloaded. This may lead to congestion in the network.
2. If the server fails for any reason, then none of the requests of the clients can be fulfilled. This leads to failure of the client server network.
3. The costs of setting up and maintaining a client server model are quite high.