COMP 114 Prasun Dewan
11. Assertions
Assertions are useful to convince ourselves that our programs work correctly. They can be used in formal correctness proofs. A more practical application is to encode them in programs to catch errors. They extend the notion of unchecked or runtime exceptions.
Language-supported vs. application-specific assertions
Assertions are statements about properties of programs. We have been making them implicitly. Each time we cast an object to a type, we assert that the object is of that type. For example, in the class cast below:
((String) nextElement())
we assert that the object returned by nextElement() is a String. In this example, the programming language understands this assertion and provides a way to express it. However, a programming language can provide us only with a fixed number of application-independent assertions. Some assertions are application-specific. For example, we might wish to say that the string returned by nextElement() is guaranteed to begin with a letter. The programming language does not understand the notion of letters (though the class Character does) and thus cannot provide us with a predefined way to express this assertion. More important, even if it did understand letters, there are innumerable assertions involving them that would have to be burnt into the language. For example, we may wish to say that the second element of the string is guaranteed to be a letter, or the third element of the string is guaranteed to be a letter, and so on. What is needed, then, is a mechanism to express arbitrary assertions. We will study such a mechanism here.
Assertions vs. Exceptions
Assertions are related to exceptions in that a failed assertion results in an exception being thrown. For example, if our class cast is wrong, a ClassCastException is thrown.
To better understand this relationship and the need for assertions, let us go back to our discussion of exceptions. We said there were two reasons why exceptions can occur:
Unexpected User Input: Users did not follow the rules we expected them to. Example: illegal token.
Unexpected Internal Inconsistency: The program has an internal inconsistency/error, which was detected during program execution. Example: illegal subscript.
We have addressed the first kind of exception in great depth. So let us focus now on the second kind of exceptions. Why detect and catch these exceptions? Will it not be fine to simply look at the output of test runs to identify these inconsistencies?
There are four reasons why the answer is no:
Finite test runs: We can do only a finite number of test runs, but there can be an infinite number of outputs produced by a program. We may not be able to catch every (externally manifested) error unless we observe every possible output. For example, an inconsistent string value may not be printed because its output has been suppressed by some option.
Late detection: By looking at the output, we identify an error when it causes an error in the output, not when it occurs. Thus, late detection can make it difficult to find the cause of errors. For example, storing an inconsistent string may cause erroneous output when the string is matched, not when it is stored.
Complex output: We might know what the relationship is between input and correct output but not know the absolute contents of the output. For example, if we were asked to compute 123.345 * 789.123, we may not know the result, but we do know that dividing it by 789.123 should yield 123.345. Multiplication is of course supported by the programming language, so we don't need to worry about it working. Consider a variation of this problem where we are asked to compute the prime factors of a number such as 12345. We may not know the value of these factors, but we do know that the number is divisible by each prime factor.
No external manifestation: Some errors may not be manifested in the output – they instead cause the program to be unduly inefficient. For example, duplicate items may be stored in a set.

Copyright Prasun Dewan, 2003.
Java helps us catch many of these errors by making several compile-time and runtime checks. For instance, it does type checking at compile time and subscript checking at runtime.
But these checks are not sufficient to catch all possible internal inconsistencies that may arise in a program. In particular, they do not catch program-specific internal inconsistencies, which cannot be checked by the language.
Assertions
What exactly do we mean by an internal inconsistency? It is the failure of an assertion. An assertion is a statement regarding the state of a program. The state of a program consists of the values of (a) the program counter and (b) the variables of the program. The program counter, strictly, tells us which machine instruction is currently executing. For our purposes, it tells us which Java statement is currently executing.
When we make an assertion, we say that some Boolean expression involving the variables of a program holds true when program control is at a particular statement.
For instance, we can say:
{ y = 10/x; z = 5}
assert (y == 10/x) & (z == 5)
to indicate that right after executing these two statements, the value of z is 5 and the value of y is 10/x. The assertion is not a Java construct – it has been italicized to indicate that. Later, we will look at mechanisms for specifying assertions. For now, an assertion is simply the keyword assert followed by a Java Boolean expression.
An assertion need not name all the variables in a program – only those on which it imposes constraints. If it does not name a variable, then it does not care about the value of that variable, which can be anything.
Recording Variables
Consider another statement:
x = x + 1;
How do we assert that after execution of this statement, the new value of x should be one more than the previous value of x? The assertion:
assert x == x + 1
will not do this since this equality will never hold true. If we wish to make an assertion after a statement that involves previous values of variables modified by the statement, we need to create recording variables for storing these previous values:
oldX = x;
x = x + 1;
assert x == oldX + 1
A recording variable is not a variable of the program about which we are asserting. It is a read-only variable created by us solely to make assertions about the program. An assertion can refer to program variables and to additional assertion variables created for assertions. A recording variable is one form of an assertion variable. We will later see other forms of assertion variables.
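As a preview of how such a check might be expressed in running code (the general mechanism is developed later in this chapter), here is a minimal Java sketch; it assumes x is an instance variable and uses a plain unchecked exception, since we have not yet built an assertion facility:
void incrementX () {
    int oldX = x;                       // recording variable: remembers the previous value of x
    x = x + 1;
    // check the assertion x == oldX + 1, failing loudly if it does not hold
    if (!(x == oldX + 1))
        throw new RuntimeException("assertion failed: x == oldX + 1");
}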
Role of Assertions
Debugging
We motivated assertions as mechanisms for finding bugs early in the program, and the assertions above show how they can help. If we can make Java check an assertion, then we get an immediate warning when the assertion is false, rather than later when the bug manifests itself in the output. Thus assertions are useful when your program is buggy.
Documentation
Assertions are also useful when the program is correct. They serve as a useful documentation tool. From an assertion about some statement we learn something about the effect of the statement, which we might not have deduced otherwise. Moreover, we can gain this knowledge without looking at the code of the statement. For example, the following assertion captures the essence of the code it follows:
// detailed prime factor computation code {….}
assert (number % prime) == 0
Specification and Formal Correctness
Assertions are also useful for specifying what a program should implement. For instance, the following assertion can be provided as a specification of the desired behavior of a statement before the statement is implemented:
//specification
//assertion after desired statement
assert (y == 10/x) & (z == 5)
We now have a pretty precise idea of what is to be implemented. In fact, once we code an implementation of the specification:
//implementation of desired statement { y = 10/x; z = 5}
we can formally prove that this implementation meets the specification if we can formally state the semantics of the Java constructs used in the implementation. If we get time, we will see how we might go about doing so.
Preconditions and Postconditions
The assertion above tells us something about the program state after the statements block finishes execution. It is, thus, a postcondition of the statement block. A postcondition is an assertion about a statement that holds true right after the statement is executed. By a statement we mean not only a single statement such as an individual assignment, but also a statement block such as the one above, or a method, or a loop.
Similarly, a precondition is an assertion about a statement that is expected to be true before the statement is executed. For instance, the following may be a precondition of the statement block above:
assert x != 0
{ y = 10/x; z = 5 }
Why define two different assertions, a precondition and a postcondition, for a statement? Is the precondition of a statement not the postcondition of the previous statement? It is useful to define both assertions when we try to define consistency for the statement in isolation from other statements in the program. A precondition tells us the program state a statement expects before execution in order to operate correctly, that is, to produce the postcondition; a postcondition tells us what the statement does to the program state if it operates correctly. We can view the pre and post conditions as a contract: the postcondition tells us what the contractor must deliver to the customer, and the precondition tells us what he needs to successfully produce the deliverable.
We will place both preconditions and postconditions before the statement, using the keywords pre and post instead of assert:
pre x != 0;
post y == 10/x & z == 5;
{y = 10/x; z = 5;}
It is possible for a statement to be correct but for its precondition to be not met because some preceding statement is incorrect:
x = 0;
pre x != 0;
post y == 10/x & z == 5;
{y = 10/x; z = 5;}
However, the pre and post conditions are still correct, because assuming the precondition is met, the postcondition is guaranteed.
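Though pre and post are not Java constructs, we can approximate them with explicit runtime checks around the statement block. The sketch below assumes x, y, and z are already declared and uses a plain unchecked exception, since the assertion classes are developed only later in the chapter:
// pre x != 0
if (!(x != 0))
    throw new RuntimeException("precondition failed: x != 0");
y = 10/x; z = 5;
// post y == 10/x & z == 5
if (!((y == 10/x) & (z == 5)))
    throw new RuntimeException("postcondition failed: y == 10/x & z == 5");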
Weakest Precondition, Strongest Postcondition
Here is an alternative pair of pre and post conditions for the statement above:
pre x > 0;
post y == 10/x & z == 5;
{y = 10/x; z = 5;}
These are correct, because:
(x > 0) => (x != 0)
and thus the postcondition is guaranteed with this precondition also. An assertion that implies another is considered stronger than the latter. In general, we can always replace a precondition with a stronger one.
Which of the above preconditions should be used to describe the effect of the statement? We prefer the first precondition since it is the weaker one, which means it makes fewer assumptions about the state of the program when it reaches the statement. The goal of the precondition in this example is to disallow division by zero. So the only constraint we would like to place on the value of x is that it not be 0. If we place the constraint that it be greater than zero, then we do preclude the illegal value of x, but we also preclude many valid values of x, such as –1, that can be processed by the program to produce the desired postcondition. The stronger precondition, (a) used for documentation, is not an accurate description of the constraints assumed by the program and, (b) used for debugging, would make the program fail when it should not. Thus always choose the weakest precondition.
Similarly, the following pair is correct:
pre x != 0;
post z == 5;
{y = 10/x; z = 5;}
The reason is that:
(y == 10/x) & (z == 5) => (z == 5)
In general, one can always replace a correct postcondition with a weaker one. Again, which one of the two postconditions above should one choose?
The first postcondition is a more detailed specification of the actual effect of the statement on the program state and hence, from the documentation point of view, is the preferable one. From the debugging point of view also it is preferable, since a successful test of a weaker assertion does not test all aspects of the statement. Thus, from both the documentation and debugging points of view, we should associate a statement with the weakest valid precondition and the strongest valid postcondition. Even from the specification/correctness point of view, this is the most useful choice, since the program can be shown to satisfy a greater number of specifications.
Perhaps an analogy will help to motivate weakest preconditions and strongest postconditions. Consider a course you are taking, and its advertised prerequisites and objectives. We should not make the advertised prerequisites any stronger than what is actually necessary for the course, otherwise some students who could take the course are not allowed to. Similarly, we do not want the advertised objectives to be weaker than the ones that are actually met in the course, otherwise students would not have proof (for another course or an employer) of everything they have learnt in this course, and may be stopped from taking a step they should have been allowed to take.
As it turns out, the postcondition above is not the strongest one possible, as it says nothing about the value of x. We know from the precondition that x != 0. The statement does not change its value. Thus, the strongest possible postcondition is given below:
pre x != 0
{z = 5; y = 10/x;}
post (z == 5) & (y == 10/x) & (x != 0)
As we see above, the pre and post conditions of a statement are related. For instance, we cannot guarantee y == 10/x or x != 0 without assuming x != 0. Thus, for the same statement, we can define more than one pair of (weakest precondition, strongest postcondition).
In other words, our choice of the strongest postcondition is related to our choice of the weakest precondition, and by strengthening one, we can strengthen the other. For instance, for our example statement, we can identify (at least) two pairs of weakest precondition and strongest postcondition:
pre x > 0
post (z == 5) & (y == 10/x) & (x > 0) & (y > 0)
{z = 5; y = 10/x;}
pre x != 0
post (z == 5) & (y == 10/x) & (x != 0)
{z = 5; y = 10/x;}
How do we choose the “right” pair? We can fix one and then pick the other. What you do depends on why these assertions are being made.
Your intuition might tell you that we should pick the precondition first and then the postcondition, since the precondition is the one that is tested first. In fact, it is the opposite that we typically want to do.
In general, we write code to solve some problem. We can imagine the process of solving this problem to consist of three steps:
define a postcondition describing a correct solution to the problem
identify the weakest possible precondition that can lead to the postcondition
write a program that satisfies this precondition and postcondition.
We can imagine these three steps to be carried out sequentially, in which case we choose the postcondition first and then the precondition. However, in reality, this is sometimes an iterative process, involving a negotiation between the software producer and client. We may come up with a postcondition, find the weakest precondition for it is too strong, weaken the postcondition to weaken the precondition, and so on. It is traditional to assume the ideal process and always work backwards from postconditions to weakest preconditions.
In either case, if the pre and post conditions of a statement are being made to specify/debug/document a program, then the following two principles should be followed. First, the precondition should be the weakest one needed for correct operation of the statement, that is, for an operation that does not raise an exception. Thus, in the example above, the precondition is:
x != 0
and not:
x > 0
Second, two statements that are not equivalent should not have the same pre and post conditions. Consider the following pre and post conditions about the example above:
pre true
post true
{ y = 10/x; z = 5}
These are not the best we can derive, as they also apply to the following non-equivalent statement:
pre true
post true
{ y = 10/x;}
Similarly, the following are also not the best pre and post conditions:
pre false
post false
{ y = 10/x; z = 5}
as they also apply to a non-equivalent statement:
pre false
post false
{ y = 10/x; }
Finally, the following pre and post conditions are also not the best:
pre (x > 0)
post (y == 10/x) & (z == 5) & (x > 0)
{ y = 10/x; z = 5}
as they apply to the following non-equivalent statement:
pre (x > 0)
post (y == 10/x) & (z == 5) & (x > 0)
{ y = 10/x; z = 5; x = abs(x) + 1}
From this reasoning, the following are the best:
pre (x != 0)
post (y == 10/x) & (z == 5) & (x != 0)
{ y = 10/x; z = 5}
However, if assertions are being made to prove the program containing the statement correct, then the precondition can be stronger than the best one from the debugging/documentation/specification point of view. This allows us to derive pre and post conditions of the containing program/statement.
Consider the following pre and post conditions of a pair of statements:
pre true
post (x == 1)
{x = 1}
pre (x == 1)
post (y == 10/x) & (z == 5) & (x == 1)
{y = 10/x; z = 5}
We can use them to derive the following pre and post conditions of the composition of the two statements:
pre true
post (y == 10/x) & (z == 5) & (x == 1)
{x = 1; y = 10/x; z = 5}
This means that what we want to assert about a statement (for proving programs correct) may depend on what is executed before or after it, thereby implying that we need the ability to easily change the preconditions/postconditions for the code. As we shall see below, we can use some of the modularity techniques for easily changing the assertions we make about a program.
Weakest and strongest possible conditions
What is the weakest possible precondition we can come up with? It is a precondition that is always true regardless of the state of the program. The boolean expression:
true
describes such a precondition. This precondition is implied by any other assertion, p:
p => true
We can read this as saying: if p is true then true is true. But true is always true. So the above implication always holds regardless of what p is.
Another way to understand why this implication is always true comes from the following equivalence relation:
p => q
is the same as:
(not p) or q
Why they are equivalent can be deduced by considering when they are false. The first expression is false iff:
(p == true) and (q != true)
The second expression is false iff:
((not p) == false) and (q == false)
which is the same as the expression above.
Thus:
p => true
is the same as:
(not p) or true
which is true for every p.
Similarly, the following is always true for all boolean expressions p:
false => p
It says if false is true, then p is true. Since false is never true, we can say it implies anything. Promising something when false is true is like promising something "when hell freezes over".
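The equivalence between p => q and (not p) or q can also be captured in a small, hypothetical Java helper (not part of any library), which makes the two special cases above easy to check mechanically:
// p => q is the same as (!p) || q
static boolean implies (boolean p, boolean q) {
    return !p || q;
}
// implies(p, true) is true for every p, and implies(false, q) is true for every q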
Does it ever make sense to assert false? Consider the following code:
switch (c) {
    case 'a': …
    case 'b': …
    default: assert false;
}
Here the assertion is being made to indicate that the program should never reach the default case, that is, the only values c should have are the characters 'a' and 'b'.
Invariants
Sometimes a condition is part of both the precondition and postcondition of a statement. Such a condition is called an invariant of the statement. In our example statement, x != 0 is an invariant:
pre x != 0
post (y == 10/x) & (z == 5) & (x != 0)
{ y = 10/x; z = 5}
We will separate an invariant from the precondition and postcondition of a statement:
inv x != 0
post (y == 10/x) & (z == 5)
{ y = 10/x; z = 5}
False is the invariant of all statements. However, if we are using an invariant for debugging or specification purposes, we should never assert such an invariant for a reachable statement, as it conveys no useful information. Invariants should never involve the use of recording variables: recording variables describe how program variables change, while invariants describe how these variables do not change.
Invariants are particularly interesting when the statement they are describing is a loop. Consider the following loop:
sum = 0;
j = 0;
while (j < n) {
    j++;
    sum = sum + j;
}
We can define the following invariant for the loop:
(sum == 0 + 1 + … + j) & (j <= n) & (j >= 0)
The first subexpression says that the value of sum at entry and exit of the loop is the sum of the first j integers, where j is the current value of the index variable. In the second subexpression, note that the upper bound on j is different from the upper bound imposed by the loop test.
How do we know this assertion is the loop invariant? If we get to formal semantics of programming language constructs, we will prove this assertion using an induction argument. For now, we can try different values of j to convince ourselves about the validity of the assertion.
Invariants have proved to be useful in understanding and debugging loops, which tend to be notoriously difficult to understand and debug.
Note that a statement invariant is supposed to be true only at entry and exit of the statement. It need not be true in the middle of execution of the statement. For instance, in the example above, the assertion about sum is not true after execution of the first statement in the loop body, but it is still an invariant of the statement.
Asserting about Methods and Classes
It is useful to define preconditions, postconditions, and invariants not only for statements, but also for methods and classes.
The precondition, postcondition, and invariant of a method are the precondition, postcondition, and invariant of the body of the method, that is, the statement block executed by the method:
invariant x != 0
post (y == 10/x) & (z == 5) & (x != 0)
void m () {
    y = 10/x;
    z = 5;
}
If some condition is a precondition/postcondition/invariant of all public methods in a class, then it is a precondition/postcondition/invariant of the class:
invariant y == x + 1
public class C {
    int x = 0;
    int y = 1;
    public void incrementXAndY () {
        incrementX();
        incrementY();
    }
    public int getX() { return x; }
    public int getY() { return y; }
    void incrementX() { x++; }
    void incrementY() { y++; }
}
The invariant is not true after execution of incrementX() or before the execution of incrementY(), but it is true before and after both public methods. We can associate non-public methods with their own assertions, but these do not manifest themselves in the assertions of the class, since they have to do with the implementation of the class and not its specification.
We considered earlier another class specification tool, interfaces, which also specify public properties of classes. However, interfaces specify syntactic properties of the classes that implement them: the form of the headers of the public methods of the class. They do not say anything about the semantics of a method, that is, what the method body actually does. We rely on these syntactic properties to give us semantic information. For instance, the instance method named incrementXAndY() probably increments the values of x and y. However, associating it with the assertion:
oldX = x; oldY = y;
post: x == oldX + 1; y == oldY + 1;
leaves no scope for ambiguity.
Expressing Assertions
How do we express the Boolean expression of an assertion? Three main methods have been used:
Write them informally using a natural language such as English. These are easy to read but potentially ambiguous (e.g., "all elements of this array are either odd or positive" or "all array elements are not odd").
Write them in a programming language such as Java or Turing. These are executable and can thus be used to raise runtime exceptions. They can be supported by a library or by special language constructs. In either case, they are language-dependent, and can thus not be used as a specification of an application before we have decided the programming language in which it will be coded. Perhaps more important, there is no programming language that allows us to conveniently express all the kinds of assertions we would like to make. Some languages do a better job than others; for instance, Turing offers more support for assertions than Java.
Write them in a mathematical language. Mathematical languages tend to be thorough, carefully thought out, and "time-tested". However, they are not executable and thus cannot be used for testing.
Propositional Calculus and Quantifiers
There exists a mathematical language, called Propositional Calculus, developed for expressing the kind of assertions we would like to make. Propositional calculus describes a language for creating propositions. A proposition is a Boolean expression. The expression can refer to Boolean constants, true and false. In addition, it can refer to propositional variables. These variables include the variables of the program about which we are making assertions. They can also include assertion variables, such as the recording variables we saw earlier.
Thus, the following are propositions:
true
false
x > 6
The result of a proposition is determined by the values assigned or bound to the propositional variables to which it refers. Thus, if x has been assigned the value 7, then the last proposition is true.
The calculus defines three Boolean operations: not, and, and or. All other Boolean operations, such as xor, can be described in terms of these operations. It does not define arithmetic and relational operations such as +, square, and >; these are defined by the algebra on which the calculus is based.
For the purposes of this course, we will assume that the logical and arithmetic operations are defined by Java. Moreover, we will assume that the Boolean operations are also defined by Java. This will allow us to use Java syntax for these operators, which will make it easier to convert propositional assertions into something Java can automatically check. Thus, we will consider the following to be a proposition in propositional calculus: (x > 6) & (y > 6)
Also, we will use standard mathematical functions such as min, max, and summation (Σ) when Java does not directly implement these operations.
By this definition, we have already been using propositional calculus for expressing assertions, since all of our assertions have used either Java operations or standard mathematical operations.
However, we have not used the full power of propositional calculus. A proposition can be a simple proposition or a quantified proposition. A simple proposition is a Boolean expression involving individual variables of the program we are asserting about. The assertions we have seen so far are simple since they involve statements about individual variables such as x and y.
We may want to make assertions about collections of variables. For instance, we might want to say: all elements of b are not null, or at least one element of b is not null. Quantified propositions allow us to express such assertions formally:
∀j: 0 <= j < b.size() : b.elementAt(j) != null
∃j: 0 <= j < b.size() : b.elementAt(j) != null
The symbols ∀ and ∃ are called quantifiers in Propositional Calculus. The former is called a universal quantifier while the latter is called an existential quantifier. The j in the examples is a variable quantified by the quantifier preceding it. The term:
0 <= j < b.size()
describes the domain of the quantified variable. A domain is a collection of values such as b.elementAt(0), …, b.elementAt(b.size() – 1). The Boolean expression:
b.elementAt(j) != null
describes a (simple or quantified) proposition in which the quantified variable j can be used as a propositional variable in addition to the program variables. We will call this proposition the sub-proposition of the quantified proposition. As we see above, this subproposition is evaluated for each element of the domain.
Thus, the general form of a quantified assertion is:
Qx : D(x) : P(x)
where Q is either ∀ or ∃, x is a quantified variable, D(x) describes its domain using x, and P(x) is a predicate or Boolean expression involving x (and other variables).
Both quantifiers bind x to each value of D(x) and calculate P(x) with this bound value. If Q is ∀, then the quantified assertion is true if P(x) is true for each element, x, of D(x). If Q is ∃, then the quantified assertion is true if P(x) is true for at least one element.
Thus, the first assertion above is true if for every value of j in the range 0 <= j < b.size(), b.elementAt(j) != null. The second assertion is true if there exists a value of j in that range for which b.elementAt(j) != null.
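Before developing a general mechanism in the next section, here is a minimal hand-coded evaluation of the two quantified assertions, assuming b is a java.util.Vector:
// forAll j: 0 <= j < b.size() : b.elementAt(j) != null
boolean allNonNull = true;
for (int j = 0; j < b.size(); j++)
    if (b.elementAt(j) == null) { allNonNull = false; break; }

// thereExists j: 0 <= j < b.size() : b.elementAt(j) != null
boolean someNonNull = false;
for (int j = 0; j < b.size(); j++)
    if (b.elementAt(j) != null) { someNonNull = true; break; }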
Expressing Assertions in Java
How do we translate the assertions we have written so far into something Java can execute and verify?
First, we need a general exception to raise if an assertion fails:
package assertions;
public class AnAssertionFailedException extends RuntimeException {
    public AnAssertionFailedException () {}
    public AnAssertionFailedException (String initValue) {
        super (initValue);
    }
}
Figure 1: The Assertion Failed Exception
We are making it an unchecked exception because of our earlier rule of making all exceptions raised by internal inconsistencies unchecked exceptions. This way we do not force a careful programmer who can eliminate such inconsistencies to catch the associated exceptions.
For specific assertions, it may be useful to subclass this exception. In our examples, we will simply take the lazy approach of using the message field to indicate the kind of assertion.
We can now write an overloaded assert method that takes a proposition as an argument and raises this exception in case of error:
package assertions;
import java.util.Enumeration;
public class AnAsserter {
    public static void assert (boolean proposition, String message) throws AnAssertionFailedException {
        if (!proposition)
            throw new AnAssertionFailedException (message);
    }
    public static void assert (boolean proposition) throws AnAssertionFailedException {
        assert (proposition, "");
    }
}
Figure 2: The Assert Methods
(Why make this method static?)
What we have so far is sufficient for making simple assertions involving Java operations. If we need to refer to non-Java operations such as Σ, we must write Java methods that implement these operations:
public static int sum (int from, int to) {
    …
}
We can imagine building a library of such functions.
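A minimal sketch of one such library function is shown below; it simply adds the integers in the closed range from..to:
// returns from + (from + 1) + ... + to; returns 0 if from > to
public static int sum (int from, int to) {
    int total = 0;
    for (int k = from; k <= to; k++)
        total = total + k;
    return total;
}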
A statement's preconditions and postconditions can be encoded by calling assert before and after the statement. A loop invariant can be encoded by calling assert at the beginning and end of the loop body:
sum = 0;
j = 0;
//inv (sum == 0 + 1 + … + j) & (j <= n) & (j >= 0)
while (j < n) {
    assert (j <= n && j >= 0 && sum == sum(0, j));
    j++;
    sum = sum + j;
    assert (j <= n && j >= 0 && sum == sum(0, j));
}
A class invariant can be encoded by making it an invariant of all public methods. A public method invariant, in turn, can be encoded by calling assert at the beginning and at each exit point of the method body. In fact, it may be useful to create a separate proxy method for each public method that only asserts and calls the original real method to do the real computation, as shown below:
//inv y == x + 1
public class C {
    int x = 0;
    int y = 1;
    void assertClassInvariant() {
        AnAsserter.assert (y == x + 1);
    }
    void internalIncrementXAndY () {
        incrementX();
        incrementY();
    }
    public void incrementXAndY () {
        assertClassInvariant();
        internalIncrementXAndY();
        assertClassInvariant();
    }
    int internalGetX() {
        return x;
    }
    public int getX() {
        assertClassInvariant();
        int retVal = internalGetX();
        assertClassInvariant();
        return retVal;
    }
    …
}
Here each of the original public methods has become an internal method with a new name. The old name is used for a new proxy public method that asserts the precondition, calls the original method, and asserts the postcondition.
Proxy methods separate assertions from the rest of the program. They also make it easy to assert postconditions in methods that have multiple exit points – putting a postcondition after each exit point is much more work than putting one in a proxy method that calls the original. To illustrate, assume the original method is:
char toGrade (int score) {
    if (score >= 50)
        return 'P';
    else
        return 'F';
}
Then rather than introducing recording variables and postconditions for both arms of the if, we can do so once in a proxy method:
char assertingToGrade (int score) {
    char grade = toGrade(score);
    AnAsserter.assert (grade == 'F' || grade == 'P');
    return grade;
}
Proxy methods force us to come up with alternative names for methods. We will see that a more general idea, proxy classes, provides more modularity and does not have the naming problem.
It would be nice if Java, like some other languages such as Turing, understood preconditions, postconditions, and invariants – relieving us from the tedium of translating them into multiple assertions.
Implementing Quantified Assertions
Consider now quantified propositions:
Qx:D(x):P(x)
Ideally, what we would like are quantifier functions that automatically do the quantification for us and return boolean results.
In order to write such functions, we need to be able to encode information about the three parts of a quantified proposition: the quantifier, the domain, and the sub-proposition.
We will encode the quantifier in the name of the quantifying function, defining separate methods for the two quantifiers:
forAll (domain, subProposition)
thereExists (domain, subProposition)
Consider, now, the domain. In general, the domain could be arbitrary. It could be an array, a vector, a hashtable, or any other collection of elements. If we want to provide a single function that works for arbitrary collections, we need all of these collections to implement a single interface for enumerating all elements of the domain.2 Java's Enumeration interface serves exactly this purpose, and we will, therefore, use it as the type of the object representing the domain.
Thus, we have a partial implementation of the two methods:
package assertions;
import java.util.Enumeration;
public class AQuantifier {
    public static boolean forAll (Enumeration domain, …) {
        while (domain.hasMoreElements())
            …
    }
2 Unless we do something like ObjectEditor does, using method names, which in retrospect is a better idea.
    public static boolean thereExists (Enumeration domain, …) {
        while (domain.hasMoreElements())
            …
    }
}
The two functions use the domain argument to look at each of the elements of the domain. A user of the functions is responsible for mapping an actual collection to an enumeration object:
AnAsserter.assert(AQuantifier.forAll(b.elements(), …), "Some element of b is null");
AnAsserter.assert(AQuantifier.thereExists(b.elements(), …), "All elements of b are null");
The code above uses … to indicate that we still have to work out how a subproposition is to be encoded.
Encoding the sub-proposition is trickier. Unlike the proposition argument of assert, it is not an evaluated boolean expression. Instead, it is a function of the quantified variable that can return a different value each time the variable is bound to a new member of the domain.
We could imagine sending a string encoding the Boolean expression, which forAll() and thereExists() evaluate. That makes the quantifier methods complex, as they must do some of the work of a compiler/interpreter. More important, the methods cannot access the non-public variables referenced by the expression. A more reasonable choice is to pass a Boolean function as a parameter to each of the two methods, which they can call. For example, we can write the following function:
boolean isNotNull(Object element) {
    return element != null;
}
and execute the following assertions:
AnAsserter.assert(AQuantifier.forAll(b.elements(), isNotNull), "Some element of b is null");
AnAsserter.assert(AQuantifier.thereExists(b.elements(), isNotNull), "All elements of b are null");
Here, in addition to the domain object, a reference to the function is passed. The two quantifier methods now pass each element of the domain to the function received as a parameter:
package assertions;
import java.util.Enumeration;
public class AQuantifier {
    public static boolean forAll (Enumeration domain, (Object -> boolean) subProposition) {
        while (domain.hasMoreElements())
            if (!subProposition (domain.nextElement()))
                return false;
        return true;
    }
    public static boolean thereExists (Enumeration domain, (Object -> boolean) subProposition) {
        while (domain.hasMoreElements())
            if (subProposition (domain.nextElement()))
                return true;
        return false;
    }
}
The forAll() function returns false if the subproposition evaluates to false for any element, and true otherwise. The thereExists() function, on the other hand, returns true if the subproposition evaluates to true for any element of the domain, and false otherwise.
As it turns out, Java does not support method parameters, though some languages such as C do. The reason is that Java would have to worry about global variables accessed by the method that are not accessible to the method receiving the parameter. But, as we know, Java does allow object parameters, and an object is a collection of methods (and data). So instead of defining a function parameter that evaluates the sub-proposition, we can define an object parameter on which we can invoke a method that evaluates the sub-proposition. In other words, we pass, instead of a subproposition function, a subproposition object on which the function can be invoked. If each subproposition object implements the same interface, then the quantifier methods can be defined in terms of this interface.
Thus, we now put the function we wrote above in the class of the subproposition object:
package visitors;
import assertions.ElementChecker;
public class ANonNullChecker implements ElementChecker {
    public boolean visit(Object element) {
        return (element != null);
    }
}
The interface of the class defines the function header, which is shared by the classes of arbitrary subproposition objects:
package assertions;
public interface ElementChecker {
    public boolean visit (Object element);
}
We have called the function “visit” because it processes or visits the element passed to it as an argument.
In our implementation of the quantifier methods, we do not call the function directly as we did in the previous version. Instead, we call the function on the subproposition object passed as a parameter, which visits each element of the domain:
package assertions;
import java.util.Enumeration;
public class AQuantifier {
    public static boolean forAll (Enumeration domain, ElementChecker subProposition) {
        while (domain.hasMoreElements())
            if (!subProposition.visit(domain.nextElement()))
                return false;
        return true;
    }
    public static boolean thereExists (Enumeration domain, ElementChecker subProposition) {
        while (domain.hasMoreElements())
            if (subProposition.visit(domain.nextElement()))
                return true;
        return false;
    }
}
Similarly, instead of passing the function directly as a parameter, we pass an object encapsulating it:
AnAsserter.assert(AQuantifier.forAll(b.elements(), new ANonNullChecker()), "Some element of b is null");
AnAsserter.assert(AQuantifier.thereExists(b.elements(), new ANonNullChecker()), "All elements of b are null");
Let us complicate matters by considering how the following assertion can be made:
∀j: 0 <= j < b.size() : b.elementAt(j) != a.elementAt(0)
How should the visitor method, which receives only the domain element as an argument, reference a.elementAt(0)? This problem arises because such assertions reference variables other than the one describing the domain. In general, a subproposition P(x) can not only reference the quantified variable, x, but also any other visible variable. Thus, its general form is:
P(x, v1, …, vn)
We can imagine adding another argument to the visit function that contains an array of the values of v1, …, vn. The caller of the quantifier functions passes this array to them, and they in turn pass it to the visitor function. However, a simpler approach is possible, which relies on the fact that the same values are used in each visit to a domain element. As a result, the caller of the quantifier function can pass them to the constructor used to create the subproposition object, which can then store them in instance variables of the object. The visit() function can refer to these instance variables instead of the array parameter. This also allows us to use meaningful names for these values.
Thus, for the above assertion, we implement the following class:
package visitors;
import assertions.ElementChecker;
public class AnInequalityChecker implements ElementChecker {
    Object testObject;
    public AnInequalityChecker(Object theTestObject) {
        testObject = theTestObject;
    }
    public boolean visit(Object element) {
        return !element.equals(testObject);
    }
}
and use it as follows to implement the assertion:
AnAsserter.assert(AQuantifier.forAll(b.elements(), new AnInequalityChecker(a.elementAt(0))), "Some element of b is equal to a.elementAt(0)");
The asserter passes the value of a.elementAt(0) to the constructor of the subproposition object. This value is stored in an instance variable. The visit function now becomes an impure function, using both its parameter and the value of the instance variable.
Visitor Pattern
We have seen above an example of the visitor pattern, which has several components:
A collection interface C providing a way to access elements of type T. In our example, the interface was java.util.Enumeration, but in general it can be any interface providing a way to access the elements.
A visitor interface defining a visitor method that takes an argument of type T. In our example, the method returned a Boolean result. In general, its return type can be arbitrary, including void:
public interface V {public T2 m (T p);}
A visitor method can perform many useful activities involving the element, such as printing the elements, gathering statistics about them, and modifying them.
One or more traverser methods that use the collection and visitor interfaces to pass one or more collection elements to the visitor method:
traverser1 (C c, V v) { …v.m(element of C)…}
One or more implementations of the visitor interface whose constructors take as arguments external variables that need to be accessed by the visitor method:
public class AV1 implements V {
    public AV1 (T1 p1, …, Tn pN) { …}
    public T2 m (T p) { … }
}
A client or user of the pattern instantiates the visitor implementations and passes them as arguments to a traverser method.
The following figure shows the general pattern and its implementation in the example above.
[Figure 3 diagram: a Vector collection with elements of type Object, the forAll() and thereExists() traversers, the ElementChecker visitor interface, and its implementations ANonNullChecker and AnInequalityChecker.]
Figure 3 General Visitor Pattern and Example
A client of the pattern passes a traverser an instance of the visitor and collection interfaces:
traverser1(c, new AV1(a1, …, aN))
This pattern is useful in a compiler. As mentioned before, a compiler builds a tree representing the program. Several kinds of operations can be done on the tree nodes, such as printing, code generation, and type checking. Instead of bundling all of these complex operations in the tree object, it is more modular to put them in separate visitor classes. The tree or some other class can implement one or more traverser methods that take as arguments instances of these classes and call their visitor methods on the tree nodes.
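As an illustration only (the names below are hypothetical and not part of any real compiler), a tree-printing visitor and its traverser might look like this:
// hypothetical visitor interface for tree nodes; TreeNode is assumed to exist
public interface NodeVisitor {
    public void visit (TreeNode node);
}
// one visitor class: prints each node it is given
public class APrintingVisitor implements NodeVisitor {
    public void visit (TreeNode node) {
        System.out.println(node);
    }
}
// a traverser method in the tree class, assumed to keep its nodes in a Vector called nodes
public void traverse (NodeVisitor visitor) {
    for (int i = 0; i < nodes.size(); i++)
        visitor.visit((TreeNode) nodes.elementAt(i));
}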
Assertion support in Java 1.4
After Java was first designed, James Gosling, its inventor, said at a conference that his biggest mistake was not supporting assertions. Java's extensive runtime checking reduces the impact of this mistake – it automatically catches many inconsistencies that would have to be explicitly asserted in other programming languages. The most common assertion I have seen for C code is subscript checking:
AnAsserter.assert ((j >= 0) & (j < MAXELEMENTS));
a[j] = ..;
This and many other internal errors are automatically checked by Java. However, as demonstrated by the examples above, there is still a need for programmer-defined assertions.
Though it does not support preconditions, postconditions, invariants, and quantifiers, Java (version 1.4) now supports basic assertions. It defines the AssertionError exception, which is like the AnAssertionFailedException implemented by us. In addition, it defines the assert statement.
Why bother with language support when we can easily provide library support to do the same? Java's language support allows a programmer to turn assertion checking on or off without changing the source code. This is useful as we might want assertions to be turned off when performing slow or time-critical computations.
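For example, the two forms of the Java 1.4 assert statement are shown below; assertion checking is off by default and is enabled at runtime with the -ea (enableassertions) flag:
assert x != 0;                                           // throws AssertionError if x == 0
assert (y == 10/x) & (z == 5) : "postcondition failed";  // the second expression becomes the error message
// compile with the 1.4 source level and run with assertions enabled:
//   javac -source 1.4 MyClass.java
//   java -ea MyClass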
Proxies
It is also possible to use library code to easily turn off code that asserts about method pre and post conditions. The idea is to put assertion code in separate classes that act as proxies for the actual classes. Factories, factory selectors, and factory methods can then be used to choose either the asserting proxies or the regular classes. This is an extension of the proxy method idea – instead of mixing regular and proxy methods in one class, we separate them into different classes.
To illustrate, consider again the counter example reproduced below:
package models;
public class ACounter implements Counter {
    int counter = 0;
    public void add (int amount) {
        counter += amount;
    }
    public int getValue() {
        return counter;
    }
}
Instead of putting the asserting proxy methods in the counter, we put them in a separate subclass. The proxy methods now use super to call the real methods in the superclass:
package models;
import assertions.AnAsserter;
public class ACheckedCounter extends ACounter {
    public void add (int amount) {
        int oldCounter = counter;
        super.add(amount);
        AnAsserter.assert(counter == (oldCounter + amount),
            "Counter added incorrectly." + " Old Counter:" + oldCounter +
            " Amount:" + amount + " New Counter:" + counter);
    }
    public int getValue() {
        return counter;
    }
}
We are seeing here the general notion of a proxy class – a class that can be used instead of some other class, called its base or subject, without requiring a user of the class to be aware of the replacement. More precisely, a class, P, is a proxy of a subject class, C, if an instance of P has all the public instance methods of C, and each (new) instance method declared in P calls the corresponding method in C, doing some optional preprocessing before the call and some optional postprocessing after it:
m(..) // declared in P
    … // optional preprocessing
    … // call m(..) implemented in C
    … // optional postprocessing
In this example, class ACheckedCounter is a proxy for class ACounter. We can use it instead of the base class whenever we want to check the assertion implemented by it.
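A hypothetical factory method, sketched below, is one way to let the rest of the program obtain either kind of counter without naming the proxy class directly; the class name and debugging flag are illustrative only:
public class ACounterFactory {
    static boolean checkAssertions = true;   // could be set from a configuration option
    public static Counter createCounter () {
        if (checkAssertions)
            return new ACheckedCounter();    // asserting proxy
        else
            return new ACounter();           // regular class
    }
}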
Proxies are very much in fashion today, as people create proxies for web servers that add functionality without requiring existing web browsers to be aware of the extensions. Proxies can be used to log information, cache data, and provide access control for the base objects.
Delegating Proxy
We created the proxy class above by creating an IS-A relationship between it and the original class, that is, by making the former extend the latter. An alternative is to create a HAS-A relationship between the proxy and the original class, that is, by declaring an instance variable in the former that stores an instance of the latter.
The code below shows the alternative approach:
package models;
import assertions.AnAsserter;
public class ADelegatingCheckedCounter implements Counter {
    Counter counter;
    public ADelegatingCheckedCounter (Counter theCounter) {
        counter = theCounter;
    }
    public void add (int amount) {
        int oldVal = counter.getValue();
        counter.add(amount);
        int newVal = counter.getValue();
        AnAsserter.assert(newVal == oldVal + amount,
            "New counter:" + newVal + " != old counter:" + oldVal + " + amount:" + amount);
    }
    public int getValue() {
        return counter.getValue();
    }
}
The instance variable, counter, initialized in the constructor, stores a reference to an instance of the original class (or any other class implementing the Counter interface). As before, the proxy class implements the same interface as the subject class. It implements the getValue() method by simply forwarding, or delegating, the call to the subject. It implements the add() method by recording the value of the original counter, delegating to the subject, and then asserting. Thus, the behavior of this class is identical to the behavior of the original proxy class.
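A user of the delegating proxy simply wraps an existing counter and then uses it through the Counter interface, for instance:
Counter counter = new ADelegatingCheckedCounter(new ACounter());
counter.add(5);                      // the proxy checks the postcondition after delegating
System.out.println(counter.getValue());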
Using the HAS-A relationship to reuse code is called delegation, as calls to reused methods are delegated to the reused class. The figure below shows the difference between the inheriting and delegating proxies. It uses standard OMT (Object Modeling Technique) conventions for showing the IS-A and HAS-A relationships: arrows with triangles and diamonds, respectively. It also uses the non-standard convention of showing the implements relationship as arrows with squares.
[Figure 4 diagram: in the inheriting version, the Proxy Class IS-A Real Subject Class, which implements the Subject Interface; in the delegating version, the Proxy Class HAS-A Real Subject Class and itself implements the Subject Interface.]
Figure 4 Inheriting vs. Delegating Proxy
Inheritance vs. Delegation
Delegation is not tied to the proxy pattern – it is a general alternative to inheritance, as shown in the figure below:
[Figure 5 diagram: the Reused Class/Reused Interface and Reusing Class/Reusing Interface, related by IS-A in the inheritance version and HAS-A in the delegation version.]
Figure 5 Delegation as an alternative to inheritance
In both cases, a class is reused by extending its interface. The difference is in the relationship between the reusing and reused class: IS-A in the case of inheritance and HAS-A in the case of delegation.
The proxy example illustrates the differences between the two approaches and the consequences of these differences. Let us look at them in depth.
Let us assume that we would like to create two classes, S and U. S contains some code that is shared or reused by U and other classes. The interface implemented by U is an extension of the interface implemented by S. In the inheritance-based solution, U IS-A S. In the delegation-based solution, U HAS-A S.
Single vs. Multiple Composed Instances
In inheritance, a single instance, of U, is created for the members (variables and methods) defined in U and S. In delegation, two instances, of U and S, called the delegator and the delegate, are created for the members of U and S, as shown in the figure below.
[Figure 6 diagram: in inheritance, a single instance holds the members of both the Reusing Class and the Reused Class; in delegation, the Reused Class instance is a physical component of the Reusing Class instance.]
Figure 6 Single vs. Multiple Instances in Inheritance and Delegation
The arrow with a circle is the standard (OMT) notation for denoting a physical component. Similarly, the square box with rounded corners is the standard notation for instances. In the figure, it shows that the delegate is a physical component of the delegator. The delegate must be bound to the delegator through a constructor or init method, while this step is not necessary in the inheritance approach, as shown in the figure below.
[Figure 7 diagram: in the delegation approach, the Reusing Class declares a constructor or init method that receives the delegate; in the inheritance approach, no such step is needed.]
Figure 7 Passing the delegate reference to the delegator
This is illustrated by the constructor of the delegating proxy:
public ADelegatingCheckedCounter (Counter theCounter) {
    counter = theCounter;
}
The inheriting proxy defined no constructor, as it did not need an equivalent step.
Automatically reused methods vs. delegating stubs
In the inheritance approach, if U does not override some method, m, of S, then m is automatically reused. In the delegation approach, U must define a "stub" for method m that does nothing else but call method m on the delegate, as shown in the figure below. The method getValue() in the delegating proxy is an example of such a stub:
public int getValue() {
    return counter.getValue();
}
[Figure 8 diagram: in inheritance, the Reused Class's method m is inherited directly; in delegation, the Reusing Class declares an m stub that calls delegate.m().]
Figure 8 Delegation requires stubs for methods not overridden in the inheritance approach
The task of defining stubs can be very tedious, especially when a large number of methods of the reused class are used directly.
Overridden Methods
Methods that are indeed overridden in the inheritance approach require an equivalent amount of work in the delegation approach. In both cases the new method is implemented in the reusing class.
[Figure 9 diagram: in both approaches, the Reused Class declares the original method m and the Reusing Class declares the overriding method m.]
Figure 9 Overridden methods in the two approaches are handled similarly
Sometimes an overriding or some other method in U must call some method in S. In the inheritance approach, super is used as the target of the call, while in the delegation approach the delegate is the target, as shown in the figure below.
Super vs. delegate calls
[Figure 10 diagram: a method n in the Reusing Class calls the reused method m via super.m() in the inheritance approach and via delegate.m() in the delegation approach.]
Figure 10 Super vs. delegate calls
This is illustrated by comparing the two implementations of the overridden method, add(), in the two classes.
In the inheritance approach, super is the target of the call to the add() method of the reused class:
public void add (int amount) {
    int oldCounter = counter;
    super.add(amount);
    AnAsserter.assert(counter == (oldCounter + amount),
        "Counter added incorrectly." + " Old Counter:" + oldCounter +
        " Amount:" + amount + " New Counter:" + counter);
}
In the delegation approach, on the other hand, the delegate is the target:
public void add (int amount) {
    int oldVal = counter.getValue();
    counter.add(amount);
    int newVal = counter.getValue();
    AnAsserter.assert(newVal == oldVal + amount,
        "New counter:" + newVal + " != old counter:" + oldVal + " + amount:" + amount);
}
Instead of super, a reference to the delegate is used as the target of the call.
Callbacks
Sometimes the reused class, S, must call methods in the reusing class, U, not because there is a symbiotic relationship in which each class reuses the other, but because these calls are needed to reuse the service provided by S. These calls in the reverse direction are called callbacks. Typically they allow U to customize the service provided by S. The figure below shows the difference between calls and callbacks.
[Figure 11 diagram: the shared class S declares m1 and the user class U declares m2; U calls m1 (a call) and S calls m2 (a callback).]
Figure 11 Calls vs. Callbacks
[Figure 12 diagram: in inheritance, the reused class's method m calls this.n(), which dynamically dispatches to the method n overridden in the reusing class; in delegation, the reused class calls delegator.n() on a reference to the delegator.]
Figure 12 Callbacks as overriding vs. delegator methods
As shown in the figure above, in the inheritance approach, callbacks are methods that override concrete methods or implement abstract methods inherited from the superclass. In either case, the calls are made, implicitly or explicitly, on the this variable. Dynamic dispatch of calls to these methods ensures that it is the implementations in the inheriting class that are used. In the delegation approach, on the other hand, these calls are made directly using a reference to the delegator. The shared class is passed this reference in a constructor or init method, as shown in the figure below.
[Figure 13 diagram: in the callback case, the delegation approach requires the Reused Class's constructor or init method to receive a reference to the delegator, while the inheritance approach does not.]
Figure 13 Difference in constructor/init methods in the callback case
Factory methods are examples of callbacks. Consider the class we saw earlier that defines abstract factory methods:
public abstract class AnAbstractTitleToCourseMapper implements TitleToCourseMapper {
    public AnAbstractTitleToCourseMapper() {
        fillCourses();
    }
    abstract RegularCourse getRegularCourse();
    abstract FreshmanSeminar getFreshmanSeminar();
    void fillCourses() {
        RegularCourse introProg = getRegularCourse();
        FreshmanSeminar legoRobots = getFreshmanSeminar();
        ...
    }
    public String getTitle() {
        return titleToCourseMapper.getTitle();
    }
    ...
}
The following concrete class implemented these methods:
public class ATitleToCourseMapper extends AnAbstractTitleToCourseMapper {
    RegularCourse getRegularCourse() {
        return new ARegularCourse();
    }
    FreshmanSeminar getFreshmanSeminar() {
        return new AFreshmanSeminar();
    }
}
Here the abstract class is a shared class used by the concrete subclass. The method fillCourses() in the shared class makes calls to the abstract methods defined by it. When this method is invoked on an instance of the concrete class, the implementations of the abstract methods in the concrete class are used. Thus, these implementations are examples of callbacks.
In the delegation approach, the following concrete delegate class replaces the abstract class:
public class ATitleToCourseMapperDelegate implements TitleToCourseMapper {
    CourseFactory delegator;
    public ATitleToCourseMapperDelegate(CourseFactory theDelegator) {
        delegator = theDelegator;
        fillCourses();
    }
    void fillCourses() {
        RegularCourse introProg = delegator.getRegularCourse();
        FreshmanSeminar legoRobots = delegator.getFreshmanSeminar();
        ...
    }
    public String getTitle() {
        return titleToCourseMapper.getTitle();
    }
    ...
}

Its constructor receives an extra argument referencing the delegator, which is used in the callbacks. These callbacks are implemented by the following class:

public class ATitleToCourseMapperDelegator implements TitleToCourseMapper, CourseFactory {
    public RegularCourse getRegularCourse() {
        return new ARegularCourse();
    }
    public FreshmanSeminar getFreshmanSeminar() {
        return new AFreshmanSeminar();
    }
    public String getTitle() {
        return titleToCourseMapper.getTitle();
    }
    ...
}
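The text does not show how the delegate and delegator are composed. One plausible wiring, assuming the titleToCourseMapper field referenced in getTitle() above holds the delegate, is for the delegator to create the delegate in its constructor, passing itself as the target of the callbacks:

public ATitleToCourseMapperDelegator() {
    // the delegate receives a reference to this delegator for its callbacks
    titleToCourseMapper = new ATitleToCourseMapperDelegate(this);
}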
Since the delegate class is not a superclass of the delegator class, the headers of the callbacks cannot be declared as abstract methods. Since we wanted to use an interface to type the delegator, we are forced to make them public. If we had declared the delegator class in the same package as the delegate class and used that class to type its instances, we could have given them reduced access, such as default or protected access. In either case, the delegation approach forces us to make a compromise.
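For concreteness, the reduced-access alternative might look roughly like the sketch below, assuming both classes are declared in the same package. Because the delegate types its reference with the delegator's class rather than the CourseFactory interface, the callbacks can be given default access:

public class ATitleToCourseMapperDelegate implements TitleToCourseMapper {
    ATitleToCourseMapperDelegator delegator; // class, not interface, used as the type

    public ATitleToCourseMapperDelegate (ATitleToCourseMapperDelegator theDelegator) {
        delegator = theDelegator;
        fillCourses();
    }
    void fillCourses() {
        // the callbacks no longer need to be part of a public interface
        RegularCourse introProg = delegator.getRegularCourse();
        FreshmanSeminar legoRobots = delegator.getFreshmanSeminar();
        ...
    }
    ...
}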
Accessing instance variables
A similar issue arises when the reusing class, U, accesses instance variables of the shared class, S. Consider the string database class from the inheritance chapter:

public class AStringDatabase extends AStringHistory implements StringDatabase {
    public void deleteElement (String element) {
        shiftUp(indexOf(element));
    }
    public int indexOf (String element) {
        int index = 0;
        while ((index < size) && !element.equals(contents[index]))
            index++;
        return index;
    }
    void shiftUp (int startIndex) {
        int index = startIndex;
        while (index + 1 < size) {
            contents[index] = contents[index + 1];
            index++;
        }
        size--;
    }
    public boolean member(String element) {
        return indexOf(element) < size;
    }
    public void clear() {
        size = 0;
    }
}
It inherits from the string history class and accesses the instance variables, contents and size, of the inherited class. A delegation-based equivalent is given below:

package collections;
import enums.StringEnumeration;
public class ADelegatingStringDatabase implements StringDatabase {
    AStringHistory stringHistory = new AStringHistory();
    public void deleteElement (String element) {
        shiftUp(indexOf(element));
    }
    public int indexOf (String element) {
        int index = 0;
        while ((index < stringHistory.size) && !element.equals(stringHistory.contents[index]))
            index++;
        return index;
    }
    void shiftUp (int startIndex) {
        int index = startIndex;
        while (index + 1 < stringHistory.size) {
            stringHistory.contents[index] = stringHistory.contents[index + 1];
            index++;
        }
        stringHistory.size--;
    }
    public boolean member(String element) {
        return indexOf(element) < stringHistory.size;
    }
    public void clear() {
        stringHistory.size = 0;
    }
    public int size() {
        return stringHistory.size();
    }
    public String elementAt (int index) {
        return stringHistory.elementAt(index);
    }
    public StringEnumeration elements() {
        return stringHistory.elements();
    }
    public void addElement(String element) {
        stringHistory.addElement(element);
    }
}
Like other delegating classes, it defines stubs for forwarding methods to the delegate. This class is different from other delegating classes in two ways. It creates the delegate rather than receiving a reference to it in a constructor or init method. Moreover, it types the delegate using its class rather than its interface. As it is declared in the same package as the delegate class, using the class name as a type allows it to refer to the non-private variables, contents and size, of the delegate. To refer to them, it prefixes the variable name with a reference to the delegate:

stringHistory.size

An alternative would have been to type the delegate using its interface and declare public methods in the interface to get and set the size and contents variables. As mentioned before, both are unfortunate compromises required by the delegation approach.
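The interface-based alternative might look roughly like the following sketch. The existing operations are approximated from the forwarding methods above, and the accessor names getContents() and setSize() are illustrative assumptions, not methods from the original interface:

public interface StringHistory {
    void addElement (String element);
    String elementAt (int index);
    int size();
    StringEnumeration elements();
    // extra public accessors that a delegating database would need
    String[] getContents();
    void setSize (int newSize);
}

The delegating database could then type its delegate as StringHistory and write, for example, stringHistory.setSize(0) in clear(), at the cost of exposing the history's internal state to every client of the interface.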
Advantages of Inheritance
We can summarize the advantages of inheritance over delegation:
- There is no need to instantiate multiple classes.
- There is no need to compose the delegator and delegate.
- There is no need to define stub methods that forward calls; the more methods that are reused, the more effort is involved in defining the stubs.
- No compromises need to be made to give the reusing class access to variables of the shared class, or to give the shared class access to callbacks of the reusing class.
Thus, the delegation-based approach is more painful because of the need to define stubs and compose instances. The equivalent steps in the inheritance solution are automated for us by the language. For instance, when a shared method, m, is invoked on an instance of U, the language automatically forwards this invocation request to S. It is reassuring to know that inheritance offers benefits over delegation, because otherwise our study of inheritance would have been in vain!
However, in several situations, delegation offers advantages over inheritance, discussed below.
Substituting Reused Class
Suppose we wish to change the reused class from S to another class, T, implementing the same interface. In the delegation approach, all we have to do is compose an instance of the previous reusing class, U, with an instance of T. In inheritance, since the "composition" occurs at compile time, we must define another reusing class, V, to use T. This is shown in the figure below.
(Figure: in the delegation approach, the same reusing class can be composed with either reused class implementing the reused interface; in the inheritance approach, a second reusing class must be defined to extend the second reused class.)
Figure 14 Substitution of reused class does not require new reusing class in delegation
To illustrate, suppose we wish to build a proxy for another implementation of the counter interface. In the inheritance approach, we must create a new proxy class that is identical to the previous proxy except that its extends clause names the new rather than the old counter class:

package models;
import assertions.AnAsserter;
public class ACheckedCounter2 extends ACounter2 {
    public void add (int amount) {
        int oldCounter = counter;
        super.add(amount);
        AnAsserter.assert(counter == oldCounter + amount,
            "New counter:" + counter + " != old counter:" + oldCounter + " + amount:" + amount);
    }
}
In the delegation approach, we keep the same proxy class, as its constructor can accept an instance of any class that implements the counter interface:

public ADelegatingCheckedCounter (Counter theCounter) {
    counter = theCounter;
}
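For example, the same checked proxy could wrap either counter implementation; only the composition changes (assuming both counter classes have parameterless constructors):

Counter checkedCounter = new ADelegatingCheckedCounter(new ACounter());
Counter checkedCounter2 = new ADelegatingCheckedCounter(new ACounter2()); // substituted reused class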
Multiple Concurrent Reuse
Suppose the instance variables and methods of the shared class, S, must be shared simultaneously by two reusing classes, U and V. In the delegation approach, we can compose an instance of S with both an instance of U and an instance of V. In the inheritance approach, on the other hand, we must make V a subclass of U, or vice versa. This is shown in the figure below.
(Figure: in the delegation approach, two reusing classes each hold a reference to the same reused class; in the inheritance approach, one reusing class must be made a subclass of the other.)
Figure 15 Concurrent reuse of shared class
This means that, in the inheritance approach, for every possible combination of classes wishing to reuse S concurrently, we must create such an inheritance chain. Moreover, it is not possible to dynamically add or remove a user of S. Since the composition is done dynamically in the delegation approach, we can create and change it at runtime. Furthermore, it is not clear whether V should extend U or vice versa, which means that the IS-A relationship between the two classes is not fundamental.
This is illustrated by considering the binding of counter views and controllers to a counter model. In the MVC chapter, we used delegation to do the binding. For instance, to create a user interface to a counter that supports console-based input and both console-based and message-based views simultaneously, we used delegation to create the composition, as shown in the following figure.
(Figure: the delegation-based composition, in which ACounterConsoleView, ACounterController, and ACounterJOptionView all reference the ACheckedCounter model and are composed by the facades AConsoleControllerAndView and AConsoleControllerAndViewAndJOptionView.)
Figure 16 Sharing combination defined dynamically using delegation
The figure below shows an inheritance chain supporting the same user interface.
(Figure: the inheritance chain ACounter, ACounterWithConsoleView, ACounterWithConsoleAndJOptionView, ACounterWithConsoleAndJOptionViewAndController.)
Figure 17 Inheritance chain supporting the same combination
The inheritance chain shows another technique to separate the model, view, and controller into different classes. The delegation approach was given before, so we will not consider its detailed implementation again. Let us consider the implementation of the inheritance approach to illustrate the difference concretely.
The console view does not need to be notified about calls to the add() method. It simply overrides the add method, calls the overridden method, and then displays the new counter.

package views;
import models.ACounter;
public class ACounterWithConsoleView extends ACounter {
    void appendToConsole(int counterValue) {
        System.out.println("Counter: " + counterValue);
    }
    public void add (int amount) {
        super.add(amount);
        appendToConsole(getValue());
    }
}
The message view is implemented similarly, overriding the add() method of the console view rather than the model.

package views;
import models.ACounter;
import javax.swing.JOptionPane;
public class ACounterWithConsoleAndJOptionView extends ACounterWithConsoleView {
    void displayMessage(int counterValue) {
        JOptionPane.showMessageDialog(null, "Counter: " + counterValue);
    }
    public void add (int amount) {
        super.add(amount);
        displayMessage(getValue());
    }
}
The controller is a subclass of the message view that calls the add() method implemented by it, as shown below:

package controllers;
import views.ACounterWithConsoleAndJOptionView;
import java.io.IOException;
public class ACounterWithConsoleAndJOptionViewAndController extends ACounterWithConsoleAndJOptionView implements CounterWithConsoleViewAndController {
    public void processInput() {
        while (true) {
            int nextInput = Console.readInt();
            if (nextInput == 0) return;
            add(nextInput);
        }
    }
}
The decision to make the class implementing the controller a subclass of the class implementing the message view, and the latter a subclass of the class implementing the console view, is arbitrary. Neither of these two relationships is fundamental.
This example shows the advantages of the inheritance approach. There is no need to define facades or notify views. Instantiating the controller automatically composes the configuration. Overriding the write method (here, add()) is a substitute for notification. Thus, if there is only one combination of views and controllers, the inheritance approach is much easier to use. The problem arises when more than one combination is needed. Each of these combinations must be anticipated. Moreover, these combinations result in code duplication. Assume that we wish to remove the message view from the combination above. The delegation approach simply requires us to do one less composition, not using the top-level facade. The inheritance approach requires us to create a new controller. This is shown in the figure below.
(Figure: removing the message view. In the delegation approach, AConsoleControllerAndView simply composes ACounterConsoleView and ACounterController with the observable counter; in the inheritance approach, a new class, ACounterWithConsoleViewAndController, must be defined as a subclass of ACounterWithConsoleView.)
Figure 18 Changing the composition requires new controller in the inheritance approach
The new controller is a duplicate of the previous one except that it inherits from a different class:

package controllers;
import views.ACounterWithConsoleView;
import java.io.IOException;
public class ACounterWithConsoleViewAndController extends ACounterWithConsoleView implements CounterWithConsoleViewAndController {
    public void processInput() {
        while (true) {
            int nextInput = Console.readInt();
            if (nextInput == 0) return;
            add(nextInput);
        }
    }
}
Distribution
Another advantage of delegation arises from the fact that separate instances are created for the reused and reusing classes. In a distributed environment, these can be placed at two different locations. For example, a view can be on the local client machine and the model can be on a remote server machine. The inheritance approach does not offer this flexibility, as a single instance is created that combines the instance members declared in the reused and reusing classes. This is shown in the figure below.
(Figure: with inheritance, the data and methods of both classes are in one instance at one location; with delegation, the reused-class instance and the reusing-class instance can be at two different locations, such as client and server.)
Figure 19 Delegate and delegator can be in two different locations
Reusing multiple classes
Sometimes a class must reuse more than one class. In the inheritance approach, this means making the reusing class a subclass of multiple reused classes, while in the delegation approach it means giving the reusing class references to more than one delegate. This is shown in the figure below.
(Figure: with inheritance, the reusing class would need two superclasses; with delegation, it holds a reference to each of the two reused classes.)
Figure 20 Multiple superclasses vs. multiple delegates
In languages such as Java that do not support multiple inheritance, it is not possible to use the inheritance approach to reuse two classes. The delegation approach does not have this problem.
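A hypothetical sketch of delegation-based reuse of two classes is given below. ALoggedCounter is an invented name; it reuses the counter and string history classes from earlier chapters through two delegates, something a Java class could not do by extending both:

public class ALoggedCounter implements Counter {
    Counter counter = new ACounter();            // first delegate
    AStringHistory log = new AStringHistory();   // second delegate

    public void add (int amount) {
        counter.add(amount);
        log.addElement("add " + amount);         // reuses the second class
    }
    public int getValue() {
        return counter.getValue();
    }
}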
Advantages of Delegation
To summarize, delegation offers several advantages over inheritance:
- The reused class can be changed without changing the reusing class.
- Multiple reusing classes can share the variables of a reused class simultaneously.
- Variables and methods of the reusing and reused classes can be on separate computers.
- It works in single-inheritance languages.
Making the tradeoff
Since both delegation and inheritance have their pros and cons, we must look carefully at the situation before making the choice. When the benefits of delegation do not apply, inheritance should be the alternative, as it is easier to use. Consider the delegating string database. It is bound to a single string history class, and there is no obvious advantage to separating the instance methods of the history and database classes onto separate computers. There is the disadvantage that a large number of forwarding methods must be defined. Thus, in this case, inheritance is the preferred approach. In the case of the counter model, views, and controller, we need the ability to compose the objects in multiple ways and possibly put the model on a server and the views and controller on a client machine. Thus, the delegation approach should be used in this case.
Sometimes it is clear that the inheritance approach is wrong.
When the best name we can choose for a reusing class has the form SWithU, where S is the reused class and U the reusing functionality, then we know that S should have a reference to U rather than the reusing class extending S. Thus, from the name ACounterWithConsoleView, we know that the best relationship between the two classes is HAS-A rather than IS-A.
Similarly, when the IS-A relationship between two classes can be reversed, then we know it is the wrong relationship between the two classes. For example, when we create the multiple view
counter by making the console view class an extension of the message view, or the reverse, then we know that IS-A is the wrong relationship between the two.
Sometimes the reusing class does not need all of the public methods defined by the shared class. In this situation, IS-A is clearly the wrong relationship between the two. Consider using a vector to create a string history. If we make the string history an extension of Vector, defining new methods that add and retrieve strings instead of arbitrary objects, then we also inherit the methods that add and remove arbitrary objects. We can override these methods to do nothing, but that is a lot of work and, more important, deceives a user of the class who looks at the public methods to understand its behavior. The fundamental problem here is that we do not want a string history to be usable wherever a vector can be.
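A sketch of the HAS-A alternative is given below. The class name AVectorStringHistory is invented; the point is that only string-specific operations are exposed, so a string history cannot be used wherever a Vector can:

import java.util.Vector;
public class AVectorStringHistory {
    Vector contents = new Vector(); // the reused class, hidden from clients

    public void addElement (String element) {
        contents.addElement(element);
    }
    public String elementAt (int index) {
        return (String) contents.elementAt(index);
    }
    public int size() {
        return contents.size();
    }
}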
This problem can arise even if the reusing class needs all the methods of the reused class. Consider the figure below public ACartesianPoint implements public ASquare extends APoint Point { implements Square { int x, y; Point center; public APoint(int theX, int theY) int sideLength; { x = theX; y = theY;} public ASquare( Point theCenter, int theSideLength) { public int getX() { return x } center = theCenter;sideLength = theSideLength; public int getY() { return y } } public double getRadius() { public int getX() {return center.getX();} return Math.sqrt(x*x + y*y); } public int getY() {return center.getY();} .public double getAngle() { public int getRadius() {return center.getRadius();} return Math.atan(y/x);} public int getAngle() {return center.getAngle();} } public int getSideLength() {return sideLength;} }
public ASquare extends APoint implements Square { int sideLength; public ASquare (int theX, int theY, int theSideLength) { x = theX; y = theY; sideLength = theSideLength} public int getSideLength() {return sideLength}; }
Figure 21 ASquare IS-A ACartesianPoint vs. HAS-A ACartesianPoint
It shows two ways in which the code of a class defining a point can be used to create one defining a square. In both cases, the point class is used to represent the center of the square. In the inheritance case, the square class inherits from the point class, while in the delegation approach it has a reference to an instance of the point class. In both cases, the square class defines an additional variable and property to represent the length of each side of the square.
In the inheritance approach, the square class inherits all of the methods of the point class. Yet this approach is wrong, as we don't want to use a square wherever we can use a point; that is, ASquare IS NOT ACartesianPoint. The practical disadvantage of this approach is that the inheriting class is obscure (it is not clear that the point part represents the center), and we must create a separate square class to reuse a point class that stores the polar rather than Cartesian representation of a point.
Making the tradeoff in everyday software
Authors of objects we use every day have had to wrestle with the choice between delegation and inheritance. In Java 1.0, a programmer used a widget class by inheriting from it, overriding the method called in the widget when an action was performed on it. The overriding method would call super to let the widget class process the action and then add application-specific functionality, such as incrementing a counter when an increment button is pressed. In later versions of Java, the delegation approach is used. A widget user has a reference to an instance of the widget class and is notified (via the observer pattern) when an action is performed on the widget. In response to the notification, it performs application-specific functionality.
(Figure: in Java 1.0, the widget user extends the widget class and overrides the action-performed method, which calls super; in later versions, the widget class makes a delegate call to the action-performed method declared by the widget user.)
Figure 22 Inheriting a vs. delegating to a widget class
The main reason for changing to the delegation approach was to overcome the problem of single inheritance in Java. With the inheritance approach, it was not possible for a class that extended a non-widget class to use a widget, or for a class to use two different widget classes. The other advantages of delegation also apply, such as the ability to transparently reuse another widget class implementing the same functionality.
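A sketch of the delegation-based style using a real Swing widget is given below. AnIncrementButton is an invented class, and the Counter is the one from earlier chapters; the widget user registers a listener (the callback object) with the button instead of subclassing the widget:

import javax.swing.JButton;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
public class AnIncrementButton {
    Counter counter; // the application object to update
    JButton button = new JButton("Increment");

    public AnIncrementButton (Counter theCounter) {
        counter = theCounter;
        // delegation: the widget calls back the listener when an action is performed
        button.addActionListener(new ActionListener() {
            public void actionPerformed (ActionEvent event) {
                counter.add(1); // application-specific functionality
            }
        });
    }
}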
Java provides a special class, Observable, that implements the observer pattern. It was intended to be the superclass of any observable class. However, such an object could not then be a subclass of any other class. For example, it was not possible for AStringSet, an extension of AStringDatabase, to use this class. While this class still exists in Java, Java now also provides another class, PropertyChangeSupport, which provides a delegation-based implementation of the observer pattern.
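A sketch of a counter using PropertyChangeSupport is shown below. The class and property names are invented, but the PropertyChangeSupport methods used are part of the standard java.beans package; because the support object is a delegate rather than a superclass, the counter remains free to extend any other class:

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
public class AnObservableCounter {
    int counter;
    PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);

    public void addPropertyChangeListener (PropertyChangeListener listener) {
        propertyChangeSupport.addPropertyChangeListener(listener);
    }
    public void add (int amount) {
        int oldValue = counter;
        counter += amount;
        // notify observers through the delegate
        propertyChangeSupport.firePropertyChange("counter", oldValue, counter);
    }
    public int getValue() {
        return counter;
    }
}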
Similarly, when you study threads, you will see that a thread can be defined either by inheriting from the Thread class or by implementing the Runnable interface and giving a Thread instance a reference to the Runnable. The Runnable version did not exist in version 1.0.
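The two styles are sketched below with invented class names; both use the standard Thread and Runnable types:

// inheritance-based: the thread class IS-A Thread
class APrintingThread extends Thread {
    public void run() {
        System.out.println("running in a thread");
    }
}
// delegation-based: a Thread holds a reference to the Runnable
class APrintingRunnable implements Runnable {
    public void run() {
        System.out.println("running in a thread");
    }
}
public class ThreadStyles {
    public static void main (String[] args) {
        new APrintingThread().start();
        new Thread(new APrintingRunnable()).start();
    }
}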
In general, the fashion today is to use delegation rather than inheritance. The discussion above has given the reasons, techniques, and drawbacks for doing so.