9. Distributed Processing: Exception and Exception Handlers, Type of Bugs
Type of Bugs

Logic Errors – errors in program logic, arising from a poor understanding of the problem and of the solution procedure.

Syntax Errors – errors that arise from a poor understanding of the language.

Exceptions are runtime anomalies or unusual conditions that a program may encounter while executing, e.g. divide by zero, access to an array out of bounds, or running out of memory or disk space. When a program encounters an exceptional condition, it should be identified and dealt with effectively.

Exception Handling: a mechanism to detect and report an "exceptional circumstance" so that appropriate action can be taken. It involves the following tasks:
1. Find the problem (hit the exception)
2. Inform that an error has occurred (throw the exception)
3. Receive the error information (catch the exception)
4. Take corrective action (handle the exception)

    #include <iostream>
    using namespace std;

    int main()
    {
        int x, y;
        cout << "Enter values of x and y";
        cin >> x >> y;
        try {
            if (x != 0)
                cout << "y/x is = " << y / x;
            else
                throw(x);
        }
        catch (int i) {
            cout << "Divide by zero exception caught";
        }
    }

try – a block containing the sequence of statements which may generate an exception.

throw – when an exception is detected, it is thrown using a throw statement.

catch – a block that catches the exception thrown by the throw statement and handles it appropriately; the catch block immediately follows the try block.

The same exception may be thrown multiple times in the try block, and many different exceptions may be thrown from the same try block. There can be multiple catch blocks following the same try block, each handling a different exception. A single block can also handle all possible types of exceptions:

    catch (...)
    {
        // Statements for processing all exceptions
    }

Propagating an Exception: If a handler for an exception is not defined at the place where the exception occurs, the exception is propagated so that it can be handled in the calling subprogram. If it is not handled there, it is propagated further. If no subprogram or program provides a handler, the entire program is terminated and the standard language-defined handler is invoked.
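To make multiple handlers and propagation concrete, here is a small sketch that is not part of the original notes; the function names divide and run are invented for illustration. The try block is followed by several catch blocks, with catch (...) as a catch-all, and an exception with no matching handler in run would propagate to its caller.

    #include <iostream>
    #include <stdexcept>
    using namespace std;

    int divide(int y, int x)
    {
        if (x == 0)
            throw runtime_error("divide by zero");  // no handler here, so it propagates to the caller
        return y / x;
    }

    void run()
    {
        try {
            cout << divide(10, 0) << endl;
        }
        catch (int i) {                      // handles exceptions of type int only
            cout << "int exception caught: " << i << endl;
        }
        catch (const runtime_error& e) {     // handles the runtime_error thrown by divide()
            cout << "runtime_error caught: " << e.what() << endl;
        }
        catch (...) {                        // catch-all: handles any other exception type
            cout << "unknown exception caught" << endl;
        }
    }

    int main()
    {
        run();   // if run() had no matching handler, the exception would propagate here;
                 // with no handler in main() either, the program would be terminated
        return 0;
    }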
COROUTINES: Coroutines are subprogram components that generalize subroutines to allow multiple entry points and the suspending and resuming of execution at certain locations. Here we allow subprograms to return to their calling program before completing execution; such subprograms are termed coroutines. When a coroutine receives control from another subprogram, it executes partially and is then suspended when it returns control. At a later point, the calling program may resume execution of the coroutine from the point at which execution was previously suspended. If A calls subprogram B as a coroutine, B executes awhile and returns control to A, just as any ordinary subprogram would do. When A again passes control to B via a resume B, B again executes awhile and returns control to A, again like an ordinary subprogram. Thus, to A, B appears to be an ordinary subprogram. However, the situation looks the same when viewed from subprogram B: B, in the middle of its execution, resumes execution of A; A executes awhile and returns control to B; B continues execution awhile and returns control to A; and so on. From subprogram B, A appears much like an ordinary subprogram. The name coroutine derives from this symmetry. Rather than a parent-child or caller-callee relationship between the two subprograms, the two appear more as equals: two subprograms swapping control back and forth as each executes, with neither clearly controlling the other (Figure 11.1).

Comparison of Coroutines with Subroutines
1. The lifespan of subroutines is dictated by last in, first out (the last subroutine called is the first to return); the lifespan of coroutines is dictated by their use and need.
2. The start of a subroutine is its only entry point; coroutines may have multiple entry points.
3. A subroutine has to complete execution before it returns control; coroutines may suspend execution and return control to the caller.

Example: Let there be a consumer-producer relationship where one routine creates items and adds them to a queue and the other removes items from the queue and uses them. (A sketch simulating this with an explicit resume point follows the next subsection.)

Implementation of Coroutine
Only one activation of each coroutine exists at a time. A single location, called the resume point, is reserved in the activation record to save the old ip value of the CIP when a resume instruction transfers control to another coroutine. Execution of resume B in coroutine A involves the following steps:
1. The current value of the CIP is saved in the resume point location of the activation record for A.
2. The ip value in the resume point location of B's activation record is fetched and assigned to the CIP, so that subprogram B resumes at the proper location.
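The following is a rough illustration only, not taken from the notes: it simulates the resume-point mechanism in plain C++ by recording, for a hand-written producer coroutine, the point at which execution should continue on the next resume. The names Producer, resume, resume_point, and next_item are invented for the example.

    #include <iostream>
    #include <queue>
    using namespace std;

    // A hand-rolled coroutine: each call to resume() continues from the
    // statement recorded in resume_point, with local state preserved
    // across suspensions (mimicking the saved ip value described above).
    struct Producer {
        int resume_point = 0;   // where execution continues on the next resume
        int next_item = 0;      // state that survives between suspensions

        // Returns the produced item, or -1 once the coroutine is finished.
        int resume()
        {
            switch (resume_point) {
            case 0:                        // first entry point
                next_item = 1;
                resume_point = 1;          // suspend; continue at case 1 next time
                return next_item;
            case 1:                        // later entry point
                if (next_item < 3) {
                    ++next_item;
                    return next_item;      // suspend again at the same point
                }
                resume_point = 2;          // no more items to produce
                return -1;
            default:
                return -1;
            }
        }
    };

    int main()
    {
        Producer producer;
        queue<int> buffer;

        // The consumer repeatedly resumes the producer; the producer picks up
        // where it left off rather than starting from the beginning each time.
        for (int item = producer.resume(); item != -1; item = producer.resume())
            buffer.push(item);

        while (!buffer.empty()) {
            cout << "consumed " << buffer.front() << endl;
            buffer.pop();
        }
        return 0;
    }

In a language with real coroutine support (SIMULA, or C++20 coroutines), the saving and restoring of the resume point is performed by the language runtime rather than by hand.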
Subprogram Scheduling
Normally, execution of a subprogram is assumed to begin immediately upon its call. Subprogram scheduling relaxes this condition.

Scheduling Techniques:
1. Schedule a subprogram to be executed before or after other subprograms, e.g.
   call B after A
2. Schedule a subprogram to be executed when a given Boolean expression is true, e.g.
   call X when Y = 7 and Z > 0
3. Schedule subprograms on the basis of a simulated time scale, e.g.
   call B at time = Currenttime + 50
4. Schedule subprograms according to a priority designation, e.g.
   call B with priority 5

Languages: GPSS, SIMULA

Parallel Programming
Computer systems capable of executing several programs concurrently are now quite common. A multiprocessor system has several central processing units (CPUs) sharing a common memory. A distributed or parallel computer system has several computers (possibly hundreds), each with its own memory and CPU, connected by communication links into a network in which each can communicate with the others. In such systems, many tasks may execute concurrently. Operating systems that support multiprogramming and time sharing provide this sort of concurrent execution for separate user programs.

Principles of Parallel Programming Languages
Parallel programming constructs add complexity to language design because several processors may be accessing the same data simultaneously. To consider parallelism in programming languages, the following five concepts must be addressed:

1. Variable definitions: Variables may be either mutable or definitional. Mutable variables are the common variables declared in most sequential languages; values may be assigned to them and changed during program execution. A definitional variable may be assigned a value only once. The virtue of such a variable is that there is no synchronization problem: once it has been assigned a value, any task may access the variable and obtain the correct value.

2. Parallel composition: In a sequential program, execution proceeds from one statement to the next. In addition to the sequential and conditional statements of sequential programming languages, we need to add the parallel statement, which causes additional threads of control to begin executing.

3. Program structure: Parallel programs generally follow one of the following two execution models:
(a) They may be transformational, where the goal is to transform the input data into an appropriate output value. Parallelism is applied to speed up the process, such as multiplying a matrix rapidly by multiplying several sections of it in parallel.
(b) They may be reactive, where the program reacts to external stimuli called events. Real-time and command-and-control systems are examples of reactive systems; an operating system and a transaction processing system, such as a reservation system, are also typical examples. Reactive systems are characterized by generally nondeterministic behavior, because it is never explicit exactly when an event will occur.

4. Communication: Parallel programs must communicate with one another. Such communication will typically be via shared memory, with common data objects accessed by each parallel program, or via messages, where each parallel program has its own copy of the data object and passes data values among the other parallel programs.

5. Synchronization: Parallel tasks coordinate with one another through operations such as signal and wait on a semaphore P (a counter together with a queue of waiting tasks); a sketch of these two operations appears at the end of this section.

signal(P): When executed by a task A, this operation checks the task queue in P; if some task is suspended waiting for a signal on P, the first task in the queue is removed and its execution is resumed; if the queue is empty, then the counter is incremented by one (indicating a signal has been sent but not yet received). In either case, execution of Task A continues after the signal operation is complete.

wait(P): When executed by a task B, this operation tests the value of the counter in P; if it is nonzero, then the counter value is decremented by one (indicating that B has received a signal) and Task B continues execution; if it is zero, then Task B is inserted at the end of the task queue for P and execution of B is suspended (indicating that B is waiting for a signal to be sent).

signal and wait both have simple semantics that require the principle of atomicity: each operation completes execution before any other concurrent operation can access its data. Atomicity prevents certain classes of undesirable nondeterministic events from occurring.

Software Architecture
Most programming languages are based on a discrete model of execution; that is, the program begins execution and then, in some order,
1. reads relevant data from the local file system,
2. produces some answers, and
3. writes data to the local file system.
This description clearly shows the presence of two classes of data: temporary or transient data, whose lifetime is the execution of the program, and persistent data, whose lifetime extends beyond the execution of any single program.
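As promised above, here is a minimal sketch of the signal/wait semantics, written against the C++ standard thread-support library; it is an illustration only, and the class name Semaphore is invented here. The counter is explicit, while the queue of waiting tasks is managed implicitly by the condition variable; signal always increments the counter and a woken task decrements it, which has the same net effect as the description above. The mutex provides the required atomicity.

    #include <condition_variable>
    #include <mutex>

    // A counting semaphore mirroring the signal/wait description above:
    // an integer counter plus a queue of tasks waiting for a signal.
    class Semaphore {
    public:
        explicit Semaphore(int initial = 0) : counter(initial) {}

        // signal(P): wake one waiting task if any; the counter is incremented
        // so that a signal sent before any wait is not lost.
        void signal()
        {
            std::lock_guard<std::mutex> lock(m);
            ++counter;
            cv.notify_one();
        }

        // wait(P): if the counter is nonzero, decrement it and continue;
        // otherwise suspend until some task executes signal().
        void wait()
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return counter > 0; });
            --counter;
        }

    private:
        std::mutex m;
        std::condition_variable cv;
        int counter;   // number of signals sent but not yet received
    };

C++20 also provides std::counting_semaphore, which offers the same behaviour directly through its acquire and release operations.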