
Dharmendra Mishra

Partially Ordered Sets

A relation R on a set A is called a partial order if R is reflexive, antisymmetric, and transitive. The set A together with the partial order R is called a partially ordered set, or simply a poset, and we will denote this poset by (A, R). If there is no possibility of confusion about the partial order, we may refer to the poset simply as A, rather than (A, R).
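For finite examples, the three defining properties can be checked by brute force. The following sketch (the helper name is_partial_order is my own) tests them for a relation given as a set of ordered pairs, using the poset of subsets of S = {1, 2} under inclusion (as in Example 1 below):

```python
from itertools import product

def is_partial_order(A, R):
    """Check that the relation R (a set of ordered pairs) on the set A
    is reflexive, antisymmetric, and transitive."""
    reflexive = all((a, a) in R for a in A)
    antisymmetric = all(not ((a, b) in R and (b, a) in R and a != b)
                        for a, b in product(A, repeat=2))
    transitive = all((a, c) in R
                     for a, b in product(A, repeat=2) if (a, b) in R
                     for c in A if (b, c) in R)
    return reflexive and antisymmetric and transitive

# The subset relation on the collection of all subsets of S = {1, 2}
# is a partial order.
A = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
R = {(x, y) for x in A for y in A if x <= y}  # x <= y tests "x is a subset of y"
print(is_partial_order(set(A), R))  # True
```

Replacing `x <= y` by strict inclusion `x < y` destroys reflexivity, so the check then fails, mirroring Example 5.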

EXAMPLE 1

Let A be a collection of subsets of a set S. The relation ⊆ of set inclusion is a partial order on A, so (A, ⊆) is a poset.

EXAMPLE 2

Let Z⁺ be the set of positive integers. The usual relation ≤ (less than or equal to) is a partial order on Z⁺, as is ≥ (greater than or equal to).

EXAMPLE 3

The relation of divisibility (a R b if and only if a | b) is a partial order on Z⁺.

EXAMPLE 4

Let R be the set of all equivalence relations on a set A. Since R consists of subsets of A × A, R is a partially ordered set under the partial order of set containment. If R and S are equivalence relations on A, the same property may be expressed in relational notation as follows.

R ⊆ S if and only if x R y implies x S y for all x, y in A. Then (R, ⊆) is a poset.

EXAMPLE 5

The relation < on Z + is not a partial order, since it is not reflexive.

EXAMPLE 6

Let R be a partial order on a set A, and let R⁻¹ be the inverse relation of R. Then R⁻¹ is also a partial order. To see this, we recall the characterizations of reflexive, antisymmetric, and transitive given in Section 4.4. If R has these three properties, then ∆ ⊆ R, R ∩ R⁻¹ ⊆ ∆, and R² ⊆ R. By taking inverses, we have

∆ = ∆⁻¹ ⊆ R⁻¹, R⁻¹ ∩ (R⁻¹)⁻¹ = R⁻¹ ∩ R ⊆ ∆, and (R⁻¹)² ⊆ R⁻¹, so, by Section 4.4, R⁻¹ is reflexive, antisymmetric, and transitive. Thus R⁻¹ is also a partial order. The poset (A, R⁻¹) is called the dual of the poset (A, R), and the partial order R⁻¹ is called the dual of the partial order R.

The most familiar partial orders are the relations ≤ and ≥ on Z and R. For this reason, when speaking in general of a partial order R on a set A, we shall often use the symbols ≤ or ≥ for R. This makes the properties of R more familiar and easier to remember. Thus the reader may see the symbol ≤ used for many different partial orders on different sets. Do not mistake this to mean that these relations are all the same or that they are the familiar relation ≤ on Z or R. If it becomes absolutely necessary to distinguish partial orders from one another, we may also use symbols such as ≤1, ≤’, ≥1, ≥’, and so on to denote partial orders.

We will observe the following convention. Whenever (A, ≤) is a poset, we will always use the symbol ≥ for the partial order ≤⁻¹, and thus (A, ≥) will be the dual poset. Similarly, the dual of the poset (A, ≤₁) will be denoted by (A, ≥₁), and the dual of the poset (B, ≤′) will be denoted by (B, ≥′). Again, this convention is to remind us of the familiar dual posets (Z, ≤) and (Z, ≥), as well as the posets (R, ≤) and (R, ≥).

If (A, ≤) is a poset, the elements a and b of A are said to be comparable if

a ≤ b or b ≤ a.

Observe that in a partially ordered set every pair of elements need not be comparable. For example, consider the poset in Example 3. The elements 2 and 7 are not comparable, since 2 ∤ 7 and 7 ∤ 2. Thus the word “partial” in partially ordered set means that some elements may not be comparable. If every pair of elements in a poset A is comparable, we say that A is a linearly ordered set, and the partial order is called a linear order. We also say that A is a chain.
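In the divisibility poset of Example 3, comparability is easy to test directly; this small sketch (the function name is mine) mirrors the 2-and-7 example:

```python
def comparable(a, b):
    """In the divisibility poset on Z+ (Example 3), a and b are
    comparable when a divides b or b divides a."""
    return b % a == 0 or a % b == 0

print(comparable(2, 4))  # True: 2 divides 4
print(comparable(2, 7))  # False: neither divides the other
```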

EXAMPLE 7

The poset of Example 2 is linearly ordered.

The following theorem is sometimes useful since it shows how to construct a new poset from given posets.

Theorem 1

If (A, ≤) and (B, ≤) are posets, then (A × B, ≤) is a poset, with partial order ≤ defined by

(a, b) ≤ (a’, b’) if a ≤ a’ in A and b ≤ b’ in B.

Note that the symbol ≤ is being used to denote three distinct partial orders. The reader should find it easy to determine which of the three is meant at any time.

Proof

If (a, b) ∈ A × B, then (a, b) ≤ (a, b) since a ≤ a in A and b ≤ b in B, so ≤ satisfies the reflexive property on A × B. Now suppose that (a, b) ≤ (a′, b′) and (a′, b′) ≤ (a, b), where a, a′ ∈ A and b, b′ ∈ B. Then

a ≤ a’ and a’ ≤ a in A and

b ≤ b’ and b’ ≤ b in B.

Since A and B are posets, the antisymmetry of the partial orders on A and B implies that

a = a’ and b = b’.

Hence ≤ satisfies the antisymmetry property in A × B.

Finally, suppose that

(a, b) ≤ (a′, b′) and (a′, b′) ≤ (a″, b″), where a, a′, a″ ∈ A and b, b′, b″ ∈ B. Then

a ≤ a’ and a’ ≤ a’’, so a ≤ a’’, by the transitive property of the partial order on A. Similarly,

b ≤ b′ and b′ ≤ b″,


so b ≤ b’’, by the transitive property of the partial order on B. Hence

(a, b) ≤ (a″, b″).

Consequently, the transitive property holds for the partial order on A × B, and we conclude that A × B is a poset.

The partial order ≤ defined on the Cartesian product A × B is called the product partial order.
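The defining condition of the product partial order can be written as a predicate that takes the two coordinate orders as parameters. A sketch (function and variable names are my own; the component orders chosen here are divisibility and the usual ≤ on Z⁺):

```python
def product_leq(p, q, leq_A, leq_B):
    """(a, b) <= (a', b') in the product order on A x B exactly when
    a <= a' in A and b <= b' in B."""
    (a, b), (a2, b2) = p, q
    return leq_A(a, a2) and leq_B(b, b2)

divides = lambda x, y: y % x == 0  # divisibility order on Z+
usual = lambda x, y: x <= y        # usual order on Z+

print(product_leq((2, 1), (4, 3), divides, usual))  # True: 2 | 4 and 1 <= 3
print(product_leq((2, 1), (3, 3), divides, usual))  # False: 2 does not divide 3
```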

If (A, ≤) is a poset, we say that a < b if a ≤ b but a ≠ b. Suppose now that (A, ≤) and (B, ≤) are posets. In Theorem 1 we defined the product partial order on A × B. Another useful order on A × B, denoted by <, is defined as follows.

(a, b) < (a’, b’) if a < a’ or if a = a’ and b < b’.

This ordering is called lexicographic, or “dictionary,” order. The ordering is by the elements in the first coordinate, except in case of “ties,” when attention passes to the second coordinate. If (A, ≤) and (B, ≤) are linearly ordered sets, then the lexicographic order < on A × B is also a linear order.

EXAMPLE 8

Let A = R, with the usual ordering ≤. Then the plane R² = R × R may be given lexicographic order. This is illustrated in Figure 1. We see that the plane is linearly ordered by lexicographic order. Each vertical line has the usual order, and points on one line are less than points on a line farther to the right. Thus, in Figure 1, P₁ < P₂, P₁ < P₃, and P₂ < P₃.

Lexicographic ordering is easily extended to Cartesian products A₁ × A₂ × … × Aₙ as follows:

(a₁, a₂, …, aₙ) < (a′₁, a′₂, …, a′ₙ) if and only if

a₁ < a′₁, or
a₁ = a′₁ and a₂ < a′₂, or
a₁ = a′₁, a₂ = a′₂, and a₃ < a′₃, or
…
a₁ = a′₁, a₂ = a′₂, …, aₙ₋₁ = a′ₙ₋₁, and aₙ < a′ₙ.

Thus the first coordinate dominates except for equality, in which case we consider the second coordinate. If equality holds again, we pass to the next coordinate, and so on.
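Python's built-in tuple comparison happens to implement exactly this rule when each coordinate carries its usual order, so the definition can be checked directly:

```python
# Python compares tuples lexicographically: the first coordinate dominates,
# and ties pass attention to the next coordinate, matching the definition above.
print((1, 5) < (2, 0))        # True:  1 < 2 in the first coordinate
print((1, 5) < (1, 7))        # True:  tie in the first, then 5 < 7
print((3, 2, 9) < (3, 2, 1))  # False: tie, tie, then 9 > 1
```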

EXAMPLE 9

Let S = {a, b, …, z} be the ordinary alphabet, linearly ordered in the usual way (a < b, b < c, …, y < z). Then Sⁿ = S × S × … × S (n factors) can be identified with the set of all words having length n. Lexicographic order on Sⁿ is the same as dictionary listing. This fact accounts for the name of the ordering.

Thus park < part, help < hind, jump < mump. The third is true since j < m; the second since h = h, e < i; and the first is true since p = p, a = a, r = r, k < t.

If S is a poset, we can extend lexicographic order to S* (see Section 1.3) in the following way.

If x = a₁a₂…aₙ and y = b₁b₂…bₖ are in S* with n ≤ k, we say that x < y if (a₁, …, aₙ) ≤ (b₁, …, bₙ) in Sⁿ under the lexicographic ordering of Sⁿ (and x ≠ y).


In the previous paragraph, we used the fact that the n-tuple (a₁, a₂, …, aₙ) ∈ Sⁿ and the string a₁a₂…aₙ ∈ S* are really the same sequence of length n, written in two different notations. The notations differ for historical reasons, and we will use them interchangeably depending on context.

EXAMPLE 10 Let S be {a, …, z}, ordered as usual. Then S* is the set of all possible “words” of any length, whether such words are meaningful or not.

Thus we have

help < helping

in S*, since help ≤ help in S⁴. Similarly, we have

helper < helping

since helper < helping in S⁶. As the example help < helping shows, this order includes prefix order; that is, any word is greater than all of its prefixes (beginning parts). This is also the way that words occur in the dictionary. Thus we have dictionary ordering again, but this time for words of any finite length.
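When S is the usual alphabet, Python's built-in string comparison implements this dictionary order on S*, including the prefix rule:

```python
# Python string comparison is dictionary order on S*, prefixes included.
print("park" < "part")     # True: p = p, a = a, r = r, k < t
print("jump" < "mump")     # True: j < m
print("help" < "helping")  # True: a word exceeds each of its prefixes
```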

Since a partial order is a relation, we can look at the digraph of any partial order on a finite set. We shall find that the digraphs of partial orders can be represented in a simpler manner than those of general relations. The following theorem provides the first result in this direction.

Theorem 2

The digraph of a partial order has no cycle of length greater than 1.

Proof

Suppose that the digraph of the partial order ≤ on the set A contains a cycle of length n ≥ 2.

Then there exist distinct elements a₁, a₂, …, aₙ ∈ A such that

a₁ ≤ a₂, a₂ ≤ a₃, …, aₙ₋₁ ≤ aₙ, aₙ ≤ a₁.

By the transitivity of the partial order, used n − 1 times, we have a₁ ≤ aₙ. By antisymmetry, aₙ ≤ a₁ and a₁ ≤ aₙ imply that aₙ = a₁, a contradiction to the assumption that a₁, a₂, …, aₙ are distinct.

Hasse Diagrams

Let A be a finite set. Theorem 2 has shown that the digraph of a partial order on A has only cycles of length 1. Indeed, since a partial order is reflexive, every vertex in the digraph of the partial order is contained in a cycle of length 1. To simplify matters, we shall delete all such cycles from the digraph. Thus the digraph shown in Figure 2(a) would be drawn as shown in Figure 2(b).


(a) (b)

Figure 2

(a) (b)

Figure 3 Figure 4

We shall also eliminate all edges that are implied by the transitive property. Thus, if a ≤ b and b ≤ c, it follows that a ≤ c. In this case, we omit the edge from a to c; however, we do draw the edges from a to b and from b to c. For example, the digraph shown in Figure 3(a) would be drawn as shown in Figure 3(b). We also agree to draw the digraph of a partial order with all edges pointing upward, so that arrows may be omitted from the edges. Finally, we replace the circles representing the vertices by dots. Thus the diagram shown in Figure 4 gives the final form of the digraph shown in Figure 2(a). The resulting diagram of a partial order, much simpler than its digraph, is called the Hasse diagram of the partial order or of the poset. Since the Hasse diagram completely describes the associated partial order, we shall find it to be a very useful tool.
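For a finite poset, the Hasse-diagram edges are exactly what remains after deleting loops and transitively implied edges: the pairs a < b with no element strictly between them. A brute-force sketch (function name mine), applied to a divisibility poset such as {1, 2, 3, 4, 12}:

```python
def hasse_edges(A, leq):
    """Edges of the Hasse diagram of (A, leq): pairs a < b with no
    element c strictly between (loops and transitive edges removed)."""
    lt = lambda x, y: leq(x, y) and x != y
    return {(a, b) for a in A for b in A
            if lt(a, b) and not any(lt(a, c) and lt(c, b) for c in A)}

divides = lambda x, y: y % x == 0
print(sorted(hasse_edges({1, 2, 3, 4, 12}, divides)))
# [(1, 2), (1, 3), (2, 4), (3, 12), (4, 12)]
```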

EXAMPLE 11

Let A = {1, 2, 3, 4, 12}. Consider the partial order of divisibility on A. That is, if a and b ∈ A, a ≤ b if and only if a | b. Draw the Hasse diagram of the poset (A, ≤).

Solution

The Hasse diagram is shown in Figure 5. To emphasize the simplicity of the Hasse diagram, we show in Figure 6 the digraph of the poset of Figure 5.


Figure 5 Figure 6

EXAMPLE 12

Let S = {a, b, c} and A = P(S). Draw the Hasse diagram of the poset A with the partial order ⊆ (set inclusion).

Solution

We first determine A, obtaining

A = { ∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, {a, b, c} }

The Hasse diagram can then be drawn as shown in Figure 7.

Figure 7 Figure 8

Observe that the Hasse diagram of a finite linearly ordered set is always of the form shown in Fig 8.

It is easily seen that if (A, ≤) is a poset and (A, ≥) is the dual poset, then the Hasse diagram of (A, ≥) is just the Hasse diagram of (A, ≤) turned upside down.

EXAMPLE 13

Figure 9 (a) shows the Hasse diagram of a poset ( A, <), where

A = {a, b, c, d, e, f}

Figure 9(b) shows the Hasse diagram of the dual poset (A, ≥). Notice that, as stated, each of these diagrams can be constructed by turning the other upside down.


Figure 9

Topological Sorting

If A is a poset with partial order ≤, we sometimes need to find a linear order ≺ for the set A that is merely an extension of the given partial order, in the sense that if a ≤ b, then a ≺ b. The process of constructing such a linear order is called topological sorting.

EXAMPLE 14

Give a topological sorting for the poset whose Hasse diagram is shown in Figure 10.

Solution

The partial order ≺ whose Hasse diagram is shown in Figure 11(a) is clearly a linear order. It is easy to see that every pair in ≤ is also in the order ≺, so ≺ is a topological sorting of the partial order ≤. Figures 11(b) and (c) show two other solutions to this problem.
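A topological sorting can be computed by repeatedly extracting a minimal element, in the style of Kahn's algorithm. A sketch (names mine), run on the divisibility poset {1, 2, 3, 4, 12}; ties between incomparable minimal elements are broken arbitrarily, which is why several different sortings can solve the same problem:

```python
def topological_sort(A, leq):
    """Return a linear arrangement of A extending the partial order leq,
    by repeatedly removing a minimal element (Kahn-style)."""
    remaining, order = set(A), []
    while remaining:
        # a minimal element has nothing strictly below it among the rest
        m = next(a for a in remaining
                 if not any(leq(b, a) and b != a for b in remaining))
        order.append(m)
        remaining.remove(m)
    return order

divides = lambda x, y: y % x == 0
order = topological_sort({1, 2, 3, 4, 12}, divides)
print(order[0])  # 1: the unique minimal element must come first
```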

Figure 10 Figure 11

Isomorphism

Let (A, ≤) and (A′, ≤′) be posets and let f : A → A′ be a one-to-one correspondence between A and A′. The function f is called an isomorphism from (A, ≤) to (A′, ≤′) if, for any a and b in A, a ≤ b if and only if f(a) ≤′ f(b).

If f : A → A′ is an isomorphism, we say that (A, ≤) and (A′, ≤′) are isomorphic posets.

EXAMPLE 15


Let A be the set Z⁺ of positive integers, and let ≤ be the usual partial order on A (see Example 2). Let A′ be the set of positive even integers, and let ≤′ be the usual partial order on A′. The function f : A → A′ given by

f(a) = 2a is an isomorphism from (A, ≤ ) to (A’, ≤’).

First, f is one to one since, if f(a) = f(b), then 2a = 2b, so a = b. Next, Dom(f) = A, so f is everywhere defined. Moreover, if c ∈ A′, then c = 2a for some a ∈ Z⁺; therefore, c = f(a). This shows that f is onto, so f is a one-to-one correspondence. Finally, if a and b are elements of A, then it is clear that a ≤ b if and only if 2a ≤ 2b. Thus f is an isomorphism.

Suppose that f : A → A′ is an isomorphism from a poset (A, ≤) to a poset (A′, ≤′). Suppose also that B is a subset of A and that B′ = f(B) is the corresponding subset of A′. Then we see from the definition of isomorphism that the following principle must hold.




Languages, Grammars, Machines

1. INTRODUCTION

One may view a digital computer as a “machine” M with the following properties. At each step in time, M is in a certain “internal state”, M reads some “input”, M prints some “output” (which depends only on its internal state and the input), and then, possibly, M changes its internal state. There are various ways to formalize the notion of a machine M, and there exists a hierarchy of such machines.

One may also define a “computable” function in terms of a special kind of a machine M (Turing machine) which yields a nonnegative integer output M (n) for any given nonnegative integer input n. Since most data may be coded by such integers, the machine M can handle most kinds of data. This chapter is devoted to these and related topics.

2. ALPHABET, WORDS, FREE SEMIGROUP

Consider a nonempty set A of symbols. A word or string w on the set A is a finite sequence of its elements. For example, the sequences

u = ababb and v = accbaaa are words on A = {a, b, c}. When discussing words on A, we frequently call A the alphabet, and its elements are called letters. We will also abbreviate our notation and write a² for aa, a³ for aaa, and so on. Thus, for the above words, u = abab² and v = ac²ba³.

The empty sequence of letters, denoted by λ (Greek letter lambda) or ∈ (Greek letter epsilon), or 1, is also considered to be a word on A, called the empty word. The set of all words on A is denoted by A*.

The length of a word u, written |u| or l(u), is the number of elements in its sequence of letters. For the above words u and v, we have l(u) = 5 and l(v) = 7. Also, l(λ) = 0, where λ is the empty word.

Remark : Unless otherwise stated, the alphabet A will be finite, the symbols u, v, w will be reserved for words on A, and the elements of A will come from the letters a, b, c.

Concatenation :

Consider two words u and v on alphabet A. The concatenation of u and v, written uv, is the word obtained by writing down the letters of u followed by the letters of v. For example, for the above words u and v, we have

uv = ababbaccbaaa = abab²ac²ba³. As with letters, we define u² = uu, u³ = uuu, and in general, uⁿ⁺¹ = uuⁿ, where u is a word.


Clearly, for any words u, v, w, the words (uv)w and u(vw) are identical; they simply consist of the letters of u, v, w written down one after the other. Also, adjoining the empty word before or after a word u does not change the word u. In other words:

Theorem 1: The concatenation operation for words on an alphabet A is associative. The empty word λ is an identity element for the operation.

(Generally speaking, the operation is not commutative, e.g., uv ≠ vu for the above words u and v.)

Subwords, Initial Segments

Consider any word u = a₁a₂…aₙ on an alphabet A. Any sequence w = aⱼaⱼ₊₁…aₖ is called a subword of u. In particular, the subword w = a₁a₂…aₖ, beginning with the first letter of u, is called an initial segment of u. In other words, w is a subword of u if u = v₁wv₂, and w is an initial segment of u if u = wv. Observe that λ and u are both subwords of u since u = λu.

Consider the word u = abca. The subwords and initial segments of u follow: (1) Subwords: λ, a, b, c, ab, bc, ca, abc, bca, abca. (2) Initial segments: λ, a, ab, abc, abca.

Observe that the subword w = a appears in two places in u. The word ac is not a subword of u even though all its letters belong to u.
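Subwords and initial segments are just the contiguous slices and prefixes of a string, so both sets are easy to enumerate. A sketch (function names mine) for u = abca:

```python
def subwords(u):
    """All subwords of u (contiguous blocks of letters), including the empty word ''."""
    return {u[i:j] for i in range(len(u)) for j in range(i + 1, len(u) + 1)} | {""}

def initial_segments(u):
    """All initial segments (prefixes) of u, including ''."""
    return {u[:k] for k in range(len(u) + 1)}

print(sorted(subwords("abca"), key=lambda w: (len(w), w)))
# ['', 'a', 'b', 'c', 'ab', 'bc', 'ca', 'abc', 'bca', 'abca']
print("ac" in subwords("abca"))  # False: ac is not a subword
print(sorted(initial_segments("abca"), key=len))
# ['', 'a', 'ab', 'abc', 'abca']
```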

Free Semigroup, Free Monoid

Let F denote the set of all nonempty words from an alphabet A, with the operation of concatenation. As noted above, the operation is associative. Thus F is a semigroup; it is called the free semigroup over A or the free semigroup generated by A. One can easily show that F satisfies the right and left cancellation laws. However, F is not commutative when A has more than one element. We will write F_A for the free semigroup over A when we want to specify the set A.

Now let M = A* be the set of all words from A including the empty word λ. Since λ is an identity element for the operation of concatenation, M is a monoid, called the free monoid over A.

3. LANGUAGES

A language L over an alphabet A is a collection of words on A. Recall that A* denotes the set of all words on A. Thus a language L is simply a subset of A*.

EXAMPLE 1. Let A = {a,b}. The following are languages over A.


(a) L₁ = {a, ab, ab², …}          (c) L₃ = {aᵐbᵐ : m > 0}
(b) L₂ = {aᵐbⁿ : m > 0, n > 0}    (d) L₄ = {bᵐabⁿ : m ≥ 0, n ≥ 0}

One may verbally describe these languages as follows.

(a) L₁ consists of all words beginning with an a followed by zero or more b’s.
(b) L₂ consists of all words beginning with one or more a’s followed by one or more b’s.
(c) L₃ consists of all words beginning with one or more a’s followed by the same number of b’s.
(d) L₄ consists of all words with exactly one a.

Operation on Languages

Suppose L and M are languages over an alphabet A. Then the “concatenation” of L and M, denoted by LM, is the language defined as follows:

LM = {uv : u ∈ L, v ∈ M }

That is, LM denotes the set of all words which come from the concatenation of a word from L with a word from M. For example, suppose

L₁ = {a, b²},  L₂ = {a², ab, b³},  L₃ = {a², a⁴, a⁶, …}

Then

L₁L₂ = {a³, a²b, ab³, b²a², b²ab, b⁵}

L₁L₃ = {a³, a⁵, a⁷, …, b²a², b²a⁴, b²a⁶, …}

L₁L₁ = {a², ab², b²a, b⁴}

Clearly, the concatenation of languages is associative since the concatenation of words is associative.

Powers of a language L are defined as follows:

L⁰ = {λ}, L¹ = L, L² = LL, Lᵐ⁺¹ = LᵐL for m ≥ 1.

The unary operation L* (read “L star”) on a language L, called the Kleene closure of L (because Kleene proved Theorem 2 of this chapter), is defined as the infinite union

L* = L⁰ ∪ L¹ ∪ L² ∪ … = ⋃_{k≥0} Lᵏ

The definition of L* agrees with the notation A*, which consists of all words over A. Some texts define L⁺ to be the union of L¹, L², …; that is, L⁺ is the same as L* but without the empty word λ.
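Concatenation and the powers Lᵐ translate directly into set comprehensions; L* itself is infinite whenever L contains a nonempty word, so only a bounded approximation L⁰ ∪ … ∪ Lⁿ can be computed. A sketch (function names mine), reusing the finite L₁ and L₂ from above:

```python
def concat(L, M):
    """LM = {uv : u in L, v in M}."""
    return {u + v for u in L for v in M}

def kleene_upto(L, n):
    """Bounded approximation of L*: the union L^0 U L^1 U ... U L^n,
    with the empty word written ''."""
    result, power = {""}, {""}    # L^0 = {lambda}
    for _ in range(n):
        power = concat(power, L)  # L^(m+1) = L^m L
        result |= power
    return result

L1, L2 = {"a", "bb"}, {"aa", "ab", "bbb"}  # L1 = {a, b^2}, L2 = {a^2, ab, b^3}
print(sorted(concat(L1, L2)))
# ['aaa', 'aab', 'abbb', 'bbaa', 'bbab', 'bbbbb']   (= {a^3, a^2 b, ab^3, b^2 a^2, b^2 ab, b^5})
print(sorted(kleene_upto({"a"}, 3)))  # ['', 'a', 'aa', 'aaa']
```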


4 REGULAR EXPRESSIONS, REGULAR LANGUAGES

Let A be a (nonempty) alphabet. This section defines a regular expression r over A and a language L(r) over A associated with the regular expression r. The expression r and its corresponding language L(r) are defined inductively as follows.

Definition:

Each of the following is a regular expression over an alphabet A.

(1) The symbol “λ” (empty word) and the pair “( )” (empty expression) are regular expressions.

(2) Each letter a in A is a regular expression.

(3) If r is a regular expression, then (r*) is a regular expression.

(4) If r₁ and r₂ are regular expressions, then (r₁ ∨ r₂) is a regular expression.

(5) If r₁ and r₂ are regular expressions, then (r₁r₂) is a regular expression.

All regular expressions are formed in this way.

Observe that a regular expression r is a special kind of word (string) which uses the letters of A and the five symbols

( ) * ∨ λ

We emphasize that no other symbols are used for regular expressions.

Definition:

The language L(r ) over A defined by a regular expression r over A is as follows:

(1) L( λ) = { λ} and L( ) = ∅, the empty set.

(2) L(a) = {a}, where a is a letter in A.

(3) L(r*) = (L(r))* [the Kleene closure of L(r)].

(4) L(r₁ ∨ r₂) = L(r₁) ∪ L(r₂) (the union of the languages).

(5) L(r₁r₂) = L(r₁)L(r₂) (the concatenation of the languages).

Remark:

Parentheses will be omitted from regular expressions when possible. Since the concatenation of languages and the union of languages are associative, many of the parentheses may be omitted. Also, by adopting the convention that “*” takes precedence over concatenation, and concatenation takes precedence over “∨”, other parentheses may be omitted.

Definition:

Let L be a language over A. Then L is called a regular language over A if there exists a regular expression r over A such that L = L(r).

EXAMPLE 2.

Let A = {a, b}. Each of the following is a regular expression r and its corresponding language L(r):

(a) Let r = a*. Then L(r) consists of all powers of a, i.e., all words in a (including the empty word λ).

(b) Let r = aa*. Then L (r) consists of all positive powers of a, i.e., all words in a excluding the empty word.

(c) Let r = a ∨ b*. Then L(r) consists of the word a or any word in b, that is, L(r) = {a, λ, b, b², b³, …}.

(d) Let r = (a ∨ b)*. Note L(a ∨ b) = {a} ∪ {b} = A; hence L(r) = A*, all words over A.

(e) Let r = (a ∨ b)*bb. Then L(r) consists of the concatenation of any word in A* with bb, that is, all words ending in b².

(f) Let r = a ^ b. Then L(r) does not exist, since r is not a regular expression. (Note that ^ is not one of the symbols used for regular expressions.)

EXAMPLE 3.

Consider the following languages over A = {a, b}:

(a) L₁ = {aᵐbⁿ : m > 0, n > 0}
(b) L₂ = {bᵐabⁿ : m > 0, n > 0}
(c) L₃ = {aᵐbᵐ : m > 0}

Find a regular expression r over A = {a, b} such that Lᵢ = L(r) for i = 1, 2, 3.

(a) L₁ consists of those words beginning with one or more a’s followed by one or more b’s; hence we can set r = aa*bb*. Note that r is not unique; for example, r = a*abb* is another solution.

(b) L₂ consists of all words which begin with one or more b’s, followed by a single a, followed by one or more b’s; that is, all words with exactly one a which is neither the first nor the last letter. Hence r = bb*abb* is a solution.

(c) L₃ consists of all words beginning with one or more a’s followed by the same number of b’s. There exists no regular expression r such that L₃ = L(r); that is, L₃ is not a regular language. The proof of this fact appears in Example 7.
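For experimentation, Python's re module can stand in for the regular expressions above, with “∨” written as “|” and “*” and concatenation carried over unchanged; the patterns below are direct transcriptions of the solutions r = aa*bb* and r = bb*abb* (the choice of re and fullmatch is mine, not part of the text):

```python
import re

r1 = re.compile(r"aa*bb*")   # L1 = {a^m b^n : m > 0, n > 0}
r2 = re.compile(r"bb*abb*")  # L2 = {b^m a b^n : m > 0, n > 0}

print(bool(r1.fullmatch("aaabb")))  # True
print(bool(r1.fullmatch("bba")))    # False
print(bool(r2.fullmatch("bbabb")))  # True
```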

5 FINITE STATE AUTOMATA

A finite state automaton (FSA) or, simply, an automaton M consists of five parts:

(1) A finite set S of (internal) states.

(2) A finite set (alphabet) A of inputs.


(3) A subset Y of S (whose elements are called accepting or “yes” states).

(4) An initial state s₀ in S.

(5) A next-state function F from S × A into S.

Such an automaton M is denoted by M = (A, S, Y, s₀, F) when we want to indicate its five parts. (The plural of automaton is automata.)

Some texts define the next-state function F : S × A → S in (5) by means of a collection of functions fₐ : S → S, one for each a ∈ A; that is, each input a may be viewed as causing a change in the state of the automaton M. Setting fₐ(s) = F(s, a) shows that both definitions are equivalent.

EXAMPLE 4:

The following defines an automaton M with two input symbols and three states:

(1) A = {a, b}, input symbols.

(2) S = {s₀, s₁, s₂}, internal states.

(3) Y = {s₀, s₁}, “yes” states.

(4) s₀, the initial state.

(5) Next-state function F : S × A → S defined by

F(s₀, a) = s₀, F(s₁, a) = s₀, F(s₂, a) = s₂
F(s₀, b) = s₁, F(s₁, b) = s₂, F(s₂, b) = s₂

The next-state function F is frequently given by means of a table as in Fig. 1

F    a    b

s₀   s₀   s₁

s₁   s₀   s₂

s₂   s₂   s₂

Fig. 1

State Diagram of an Automaton M

An automaton M is usually defined by means of its state diagram D=D(M) rather than by listing its five parts. The state diagram D = D(M) is a labeled directed graph as follows.

(1) The vertices of D(M) are the states in S, and an accepting state is denoted by means of a double circle.


(2) There is an arrow (directed edge) in D(M) from state sⱼ to state sₖ labeled by an input a if F(sⱼ, a) = sₖ or, equivalently, if fₐ(sⱼ) = sₖ.

(3) The initial state s₀ is indicated by means of a special arrow which terminates at s₀ but has no initial vertex.

For each vertex sⱼ and each letter a in the alphabet A, there will be an arrow leaving sⱼ which is labeled by a; hence the outdegree of each vertex is equal to the number of elements in A. For notational convenience, we label a single arrow by all the inputs which cause the same change of state, rather than having an arrow for each such input.

The state diagram D = D(M) of the automaton M in Example 4 appears in Fig. 2. Note that both a and b label the arrow from s₂ to s₂, since F(s₂, a) = s₂ and F(s₂, b) = s₂. Note also that the outdegree of each vertex is 2.

Fig. 2

Language L(M) Determined by an Automaton M

Each automaton M with input alphabet A defines a language over A, denoted by L(M), as follows.

Let w = a₁a₂…aₘ be a word on A. Then w determines a sequence of states

s₀ → s₁ → s₂ → … → sₘ

where s₀ is the initial state and F(sᵢ₋₁, aᵢ) = sᵢ for i ≥ 1. In other words, w determines the path

P = (s₀, a₁, s₁, a₂, s₂, …, aₘ, sₘ)

in the state diagram D(M).

We say that M recognizes the word w if the final state sₘ is an accepting state in Y. The language L(M) is the collection of all words from A which are accepted by M.

EXAMPLE 5

(a) Determine whether or not the automaton M in Fig.2 accepts the word w where:

(1) w = ababba; (2) w = baab; (3) w = λ.

(1) Use Fig. 2 and the word w = ababba to obtain the path

P: s₀ --a--> s₀ --b--> s₁ --a--> s₀ --b--> s₁ --b--> s₂ --a--> s₂

The final state s₂ is not in Y; hence w is not accepted by M.


(2) The word w = baab determines the path

P: s₀ --b--> s₁ --a--> s₀ --a--> s₀ --b--> s₁

The final state s₁ is in Y; hence w is accepted by M.

(3) Here the final state is the initial state s₀, since w is the empty word. Since s₀ is in Y, λ is accepted by M.

(b) Describe the language L(M) of the automaton M in Fig. 2.

L(M) will consist of all words w from A which do not have two successive b’s. This comes from the following facts:

(1) We can enter the state s₂ only after two successive b’s.

(2) We can never leave s₂.

(3) The state s₂ is the only rejecting (nonaccepting) state.
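The state-by-state tracing of Example 5 can be automated. The sketch below (function name and string state labels are my own) encodes the next-state table of Fig. 1 as a Python dictionary and runs a word through it:

```python
def accepts(F, s0, Y, w):
    """Run the automaton M = (A, S, Y, s0, F) on the word w and report
    whether the final state is an accepting ("yes") state."""
    s = s0
    for a in w:
        s = F[(s, a)]
    return s in Y

# Next-state table of Fig. 1 (the automaton of Example 4).
F = {("s0", "a"): "s0", ("s0", "b"): "s1",
     ("s1", "a"): "s0", ("s1", "b"): "s2",
     ("s2", "a"): "s2", ("s2", "b"): "s2"}
Y = {"s0", "s1"}

print(accepts(F, "s0", Y, "ababba"))  # False: the path ends in s2
print(accepts(F, "s0", Y, "baab"))    # True
print(accepts(F, "s0", Y, ""))        # True: the empty word ends in s0
```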

The fundamental relationship between regular languages and automata is contained in the following theorem (whose proof lies beyond the scope of this text).

Theorem 2.

(Kleene): A language L over an alphabet A is regular if and only if there is a finite state automaton M such that L = L(M).

The star operation L* on a language L is sometimes called the Kleene closure of L since Kleene first proved the above basic result.

EXAMPLE 6:

Let A = {a, b}. Construct an automaton M which will accept precisely those words from A which end in two b’s.

Since b² is accepted, but not λ or b, we need three states: s₀, the initial state, s₁, and s₂, with an arrow labeled b going from s₀ to s₁ and one from s₁ to s₂. Also, s₂ is an accepting state, but not s₀ and s₁. This gives the graph in Fig. 3(a). On the other hand, if there is an input a, then we want to go back to s₀, and if we are in s₂ and there is a b, then we want to stay in s₂. These additional conditions give the required automaton M, which is shown in Fig. 3(b).

Fig. 3

Pumping Lemma

Suppose an automaton M over A has k states, and suppose w = a₁a₂…aₙ is a word over A accepted by M such that |w| = n > k, the number of states. Let


P = (s₀, s₁, …, sₙ) be the corresponding sequence of states determined by the word w. Since n > k, two of the states in P must be equal, say sᵢ = sⱼ where i < j. Let

x = a₁a₂…aᵢ,  y = aᵢ₊₁…aⱼ,  z = aⱼ₊₁…aₙ

As shown in Fig. 4, xy ends in sᵢ = sⱼ; hence xyᵐ also ends in sᵢ. Thus, for every m, wₘ = xyᵐz ends in sₙ, which is an accepting state.

Fig. 4

The above discussion proves the following important result.

Theorem 3

(Pumping Lemma) : Suppose M is an automaton over A such that:

(i) M has k states. (ii) M accepts a word w from A where |w| > k.

Then w = xyz, where y is not empty and, for every positive m, wₘ = xyᵐz is accepted by M.

The next example gives an application of the Pumping Lemma.

EXAMPLE 7

Show that the language L = {aᵐbᵐ : m positive} is not regular.

Suppose L is regular. Then, by Theorem 2, there exists a finite state automaton M which accepts L. Suppose M has k states. Let w = aᵏbᵏ. Then |w| > k. By the Pumping Lemma (Theorem 3), w = xyz where y is not empty and w₂ = xy²z is also accepted by M. If y consists of only a’s or only b’s, then w₂ will not have the same number of a’s as b’s. If y contains both a’s and b’s, then w₂ will have a’s following b’s. In either case w₂ does not belong to L, which is a contradiction. Thus L is not regular.

6 GRAMMARS

Figure 5 shows the grammatical construction of a specific sentence. Observe that there are:

1) Various variables, e.g., (sentence), (noun phrase), . . . ; 2) various terminal words, e.g., “The”, “boy”, . . . ; 3) a beginning variable, (sentence); and 4) various substitutions or productions, e.g.,

(sentence) → (noun phrase) (verb phrase) (object phrase) → (article) (noun) (noun) → apple

The final sentence only contains terminals, although both variables and terminals appear in its construction by the productions. This intuitive description is given in order to motivate the following definition of a grammar and the language it generates.


(sentence)

(noun phrase) (verb phrase)

(article) (noun phrase) (verb) (object phrase)

(adjective) (noun) (article) (noun)

The little boy ate an apple.

Fig. 5

A phrase structure grammar or, simply, a grammar G consists of four parts:

(1) A finite set (vocabulary) V.

(2) A subset T of V whose elements are called terminals; the elements of N = V \ T are called non-terminals or variables.

(3) A nonterminal symbol S called the start symbol.

(4) A finite set P of productions. A production is an ordered pair (α, β), usually written α → β, where α and β are words in V*. Each production in P must contain at least one nonterminal on its left side.

Such a grammar G is denoted by G = G(V, T, S, P) when we want to indicate its four parts.

The following notation, unless otherwise stated or implied, will be used for our grammars. Terminals will be denoted by italic lowercase Latin letters, a, b, c, …, and nonterminals will be denoted by italic capital Latin letters, A, B, C, …, with S as the start symbol. Also, Greek letters, α, β, …, will denote words in V*, that is, words in terminals and nonterminals. Furthermore, we will write

α → (β₁, β₂, …, βₖ) instead of α → β₁, α → β₂, …, α → βₖ.

Remark :

Frequently, we will define a grammar G by only giving its productions, assuming implicitly that S is the start symbol and that the terminals and nonterminals of G are only those appearing in the productions.

EXAMPLE 8

The following defines a grammar G:

V = {A, B, S, a, b}, T = {a, b}, and P consists of the five productions

(1) S → AB, (2) A → Aa, (3) B → Bb, (4) A → a, (5) B → b,

with S as the start symbol. The productions may be abbreviated as follows:


S → AB, A → (Aa, a), B → (Bb, b)

Language L (G) of a Grammar G

Suppose w and w’ are words over the vocabulary set V of a grammar G. We write

w ⇒ w' if w' can be obtained from w by using one of the productions; that is, if there exist words u and v such that w = uαv and w' = uβv, and there is a production α → β. We write

w ⇒⇒ w' or w ⇒* w' if w' can be obtained from w using a finite number of productions.

Now let G be a grammar with terminal set T. The language of G, denoted by L(G), consists of all words over T that can be obtained from the start symbol S by the above process; that is,

L(G) = {w ∈ T* : S ⇒⇒ w}

EXAMPLE 9

Consider the grammar G in Example 8. Observe that w = a^2b^4 can be obtained from the start symbol S as follows:

S ⇒ AB ⇒ AaB ⇒ aaB ⇒ aaBb ⇒ aaBbb ⇒ aaBbbb ⇒ aabbbb = a^2b^4

Here we used the productions 1, 2, 4, 3, 3, 3, 5, respectively. Thus we can write S ⇒⇒ a^2b^4. Hence w = a^2b^4 belongs to L(G). More generally, the production sequence 1, 2 (r times), 4, 3 (s times), 5 will produce the word w = a^(r+1)b^(s+1), where r and s are nonnegative integers. On the other hand, no sequence of productions can produce an a after a b. Accordingly,

L(G) = {a^m b^n : m and n positive integers}

That is, the language L (G) of the grammar G consists of all words which begin with one or more a’s followed by one or more b’s.
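The derivation in Example 9 can be replayed by plain string rewriting. A minimal sketch in Python, with productions keyed by the numbering of Example 8; the leftmost-occurrence rule and the helper name are assumptions made for illustration:

```python
# Productions of Example 8, numbered as in the text.
productions = {1: ("S", "AB"), 2: ("A", "Aa"), 3: ("B", "Bb"),
               4: ("A", "a"), 5: ("B", "b")}

def apply_production(word, prod):
    """Rewrite the leftmost occurrence of the left side by the right side."""
    lhs, rhs = prod
    i = word.index(lhs)
    return word[:i] + rhs + word[i + len(lhs):]

w = "S"
for p in [1, 2, 4, 3, 3, 3, 5]:      # the production sequence used in Example 9
    w = apply_production(w, productions[p])
assert w == "aabbbb"                 # = a^2 b^4
```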

EXAMPLE 10.

Find the language L(G) over {a, b, c} generated by the grammar G with productions S → aSb, aS → Aa, Aab → c.

First we must apply the first production one or more times to obtain the word w = a^n S b^n where n > 0. To eliminate S, we must apply the second production to obtain the word w' = a^m Aab b^m where m = n - 1 ≥ 0. Now we can only apply the third production to finally obtain the word w'' = a^m c b^m, m ≥ 0. Accordingly,

L(G) = {a^m c b^m : m nonnegative}

That is, L (G) consists of all words with the same nonnegative number of a’s and b’s separated by a c.

Types of grammars

Grammars are classified according to the kinds of production which are allowed. The following grammar classification is due to Noam Chomsky.


A Type 0 grammar has no restrictions on its productions. Types 1, 2, and 3 are defined as follows.

(1) A grammar G is said to be of Type 1 if every production is of the form α → β, where |α| ≤ |β|, or of the form α → λ.

(2) A grammar G is said to be of Type 2 if every production is of the form A → β, i.e., where the left side is a single nonterminal.

(3) A grammar G is said to be of Type 3 if every production is of the form A → a or A → aB (i.e., the left side is a single nonterminal and the right side is a single terminal or a terminal followed by a nonterminal), or of the form S → λ.

Observe that the grammars form a hierarchy; that is, every Type 3 grammar is a Type 2 grammar, every Type 2 grammar is a Type 1 grammar, and every Type 1 grammar is a Type 0 grammar.

Grammars are also classified in terms of context-sensitive, context-free, and regular as follows.

(a) A grammar G is said to be context-sensitive if the productions are of the form

αAα' → αβα'.

The name “context-sensitive” comes from the fact that we can replace the variable A by β in a word only when A lies between α and α’.

(b) A grammar G is said to be context-free if the productions are of the form

A → β. The name "context-free" comes from the fact that we can now replace the variable A by β regardless of where A appears.

(c) A grammar G is said to be regular if the productions are of the form

A → a, A → aB, S → λ

Observe that a context-free grammar is the same as a Type 2 grammar, and a regular grammar is the same as a Type 3 grammar.
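The classification by production shape can be mechanized. The helper below is a hypothetical sketch, not from the text: words are Python strings, nonterminals are single capital letters, λ is the empty string, and a production with |α| > |β| is reported as Type 0.

```python
def production_type(lhs, rhs, nonterminals, start="S"):
    """Return the highest Chomsky type (0-3) admitting the production lhs -> rhs."""
    if len(lhs) == 1 and lhs in nonterminals:
        if rhs == "" and lhs == start:
            return 3                              # S -> lambda is allowed in Type 3
        if len(rhs) == 1 and rhs not in nonterminals:
            return 3                              # A -> a
        if (len(rhs) == 2 and rhs[0] not in nonterminals
                and rhs[1] in nonterminals):
            return 3                              # A -> aB
        return 2                                  # left side a single nonterminal
    return 1 if 0 < len(lhs) <= len(rhs) else 0   # Type 1 requires |alpha| <= |beta|

N = {"S", "A", "B"}
assert production_type("A", "aB", N) == 3
assert production_type("S", "aSb", N) == 2        # context-free but not regular
assert production_type("aS", "Aa", N) == 1        # from Example 10
```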

A fundamental relationship between regular grammars and finite automata follows.

Theorem 4:

A language L can be generated by a regular (Type 3) grammar G if and only if there exists a finite automaton M which accepts L.

Thus a language L is regular iff L = L(r) where r is a regular expression, iff L = L(M) where M is a finite automaton, iff L = L(G) where G is a regular grammar. (Recall that iff is an abbreviation for "if and only if".)

EXAMPLE 11.

Consider the language L = {a^n b^n : n > 0}.

(a) Find a context-free grammar G which generates L.

Clearly the grammar G with the following productions will generate L:


S → ab, S → aSb

Note that G is a context-free grammar since each left side is a nonterminal.

(b) Find a regular grammar G which generates L.

By Example 7, L is not a regular language. Thus L cannot be generated by a regular grammar.

Derivation trees of context-free grammars

Consider a context-free (Type 2) grammar G. Any derivation of a word w in L(G) can be represented graphically by means of an ordered, rooted tree T, called a derivation or parse tree. For example, let G be the context-free grammar with the following productions:

S → aAB, A → Bba, B → bB, B → c

The word w = acbabc can be derived from S as follows:

S ⇒ aAB ⇒ a(Bba)B ⇒ acbaB ⇒ acba(bB) ⇒ acbabc

One can draw a derivation tree T of the word w as indicated in Fig. 6. Specifically, we begin with S as the root and then add branches to the tree according to the productions used in the derivation of w. This yields the completed tree T shown in Fig. 6(e). The sequence of leaves from left to right in T is the derived word w. Also, any nonleaf in T is a variable, say A, and the immediate successors (children) of A form a word α where A → α is the production of G used in the derivation of w.

Fig. 6

Backus-Naur form

There is another notation, called the Backus-Naur form, which is sometimes used for describing the productions of a context-free (Type 2) grammar. Specifically:

(i) ::= is used instead of →.

(ii) Every nonterminal is enclosed in angle brackets ⟨ ⟩.


(iii) All productions with the same nonterminal left-hand side are combined into one statement with all the right-hand sides listed on the right of ::=, separated by vertical bars.

For instance, the productions A → aB, A → b, A → BC are combined into the one statement:

⟨A⟩ ::= a⟨B⟩ | b | ⟨B⟩⟨C⟩
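Expanding a combined statement back into individual productions is a matter of splitting on ::= and on the vertical bars. A hypothetical helper, using ASCII angle brackets for the nonterminals:

```python
def bnf_to_productions(stmt):
    """Split one BNF statement into (left side, right side) pairs."""
    lhs, rhs = stmt.split("::=")
    return [(lhs.strip(), alt.strip()) for alt in rhs.split("|")]

assert bnf_to_productions("<A> ::= a<B> | b | <B><C>") == [
    ("<A>", "a<B>"), ("<A>", "b"), ("<A>", "<B><C>")]
```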

Machines and Grammars

Theorem 4 tells us that the regular languages correspond to the finite state automata (FSA). There are also machines, more powerful than the FSA, which correspond to the other grammars.

(a) Pushdown Automata : A pushdown automaton P is similar to a FSA except that P has an auxiliary stack which provides an unlimited amount of memory for P. A language L is recognized by a pushdown automaton P if and only if L is context-free.

(b) Linear Bounded Automata : A linear bounded automaton B is more powerful than a pushdown automaton. Such an automaton B uses a tape which is linearly bounded by the length of the input word w. A language L is recognized by a linear bounded automaton B if and only if L is context-sensitive.

(c) Turing Machine : A Turing machine M, named after the British mathematician Alan Turing, uses an infinite tape; it is able to recognize every language L that can be generated by any phrase structure grammar G. In fact, a Turing machine M is one of a number of equivalent ways to define the notion of a "computable" function.

The discussion of the pushdown automata and the linear bounded automata lies beyond the scope of this text. We will discuss Turing machines in Section 13.8.

13.7 FINITE STATE MACHINES

A Finite state machine (FSM) is similar to a finite state automaton (FSA) except that the finite state machine prints an output using an output alphabet distinct from the input alphabet. The formal definition follows.

A finite state machine (or complete sequential machine) M consists of six parts:

(1) A finite set A of input symbols.
(2) A finite set S of "internal" states.
(3) A finite set Z of output symbols.
(4) An initial state s0 in S.
(5) A next-state function f from S × A into S.
(6) An output function g from S × A into Z.

Such a machine M is denoted by M = M(A, S, Z, s0, f, g) when we want to indicate its six parts.

EXAMPLE 12

The following defines a finite state machine M with two input symbols, three internal states, and three output symbols.

(1) A = {a, b}. (2) S = {s0, s1, s2}. (3) Z = {x, y, z}. (4) Initial state s0.


(5) Next-state function f: S × A → S defined by

f(s0, a) = s1, f(s1, a) = s2, f(s2, a) = s0
f(s0, b) = s2, f(s1, b) = s1, f(s2, b) = s1

(6) Output function g: S × A → Z defined by

g(s0, a) = x, g(s1, a) = x, g(s2, a) = z
g(s0, b) = y, g(s1, b) = z, g(s2, b) = y

State Table and State Diagram of a Finite State Machine

There are two ways of representing a finite state machine M in compact form. One way is by a table called the state table of the machine M, and the other way is by a labeled directed graph called the state diagram of the machine M.

The state table combines the next-state function f and the output function g into a single table which represents the function F: S × A → S × Z defined by

F(si, aj) = (f(si, aj), g(si, aj))

For instance, the state table of the machine M in Example 12 is pictured in Fig. 7(a). The states are listed on the left of the table, with the initial state first, and the input symbols are listed on top of the table. The entry in the table is a pair (sk, zr) where sk = f(si, aj) is the next state and zr = g(si, aj) is the output symbol. One assumes that there are no output symbols other than those appearing in the table.

Fig. 7

The state diagram D = D(M) of a finite state machine M = M(A, S, Z, s0, f, g) is a labeled directed graph. The vertices of D are the states of M. Moreover, if

F(si, aj) = (sk, zr), that is, f(si, aj) = sk and g(si, aj) = zr, then there is an arc (arrow) from si to sk which is labeled with the pair aj, zr. We usually put the input symbol aj near the base of the arrow (near si) and the output symbol zr near the center of the arrow. We also label the initial state s0 by drawing an extra arrow into s0. For instance, the state diagram of the machine M in Example 12 appears in Fig. 7(b).

Input and Output Tapes

The above discussion of a finite state machine M does not show the dynamic quality of M. Suppose M is given a string (word) of input symbols, say

u = a1a2…am

We visualize these symbols on an “input tape”; the machine M “reads” these input symbols one by one and, simultaneously, changes through a sequence of states

v = s0s1s2…sm

23/90 Dharmendra Mishra

where s0 is the initial state, while printing a string (word) of output symbols

w = z1z2…zm

on an "output tape". Formally, the initial state s0 and the input string u determine the strings v and w by

si = f(si-1, ai) and zi = g(si-1, ai)

EXAMPLE 13

Consider the machine M in Fig. 7, that is, the machine of Example 12. Suppose the input is the word

u = abaab

We calculate the sequence v of states and the output word w from the state diagram as follows. Beginning at the initial state s0, we follow the arrows which are labeled by the input symbols:

s0 →(a, x) s1 →(b, z) s1 →(a, x) s2 →(a, z) s0 →(b, y) s2

This yields the following sequence v of states and output word w:

v = s0s1s1s2s0s2 and w = xzxzy
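This run can be checked mechanically. The tables below transcribe f and g from Example 12; run is a hypothetical helper implementing the recurrences si = f(si-1, ai) and zi = g(si-1, ai):

```python
# Next-state and output functions of the machine M of Example 12.
f = {("s0", "a"): "s1", ("s1", "a"): "s2", ("s2", "a"): "s0",
     ("s0", "b"): "s2", ("s1", "b"): "s1", ("s2", "b"): "s1"}
g = {("s0", "a"): "x", ("s1", "a"): "x", ("s2", "a"): "z",
     ("s0", "b"): "y", ("s1", "b"): "z", ("s2", "b"): "y"}

def run(u, state="s0"):
    """Return the state sequence v and the output word w for the input word u."""
    v, w = [state], []
    for a in u:
        w.append(g[(state, a)])
        state = f[(state, a)]
        v.append(state)
    return v, "".join(w)

v, w = run("abaab")
assert w == "xzxzy"
assert v == ["s0", "s1", "s1", "s2", "s0", "s2"]
```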

Binary addition

This subsection describes a finite state machine M which can do binary addition. By adding 0’s at the beginning of our numbers, we can assume that our numbers have the same number of digits. If the machine is given the input

1101011 + 0111011, then we want the output to be the binary sum

10100110

Specifically, the input is the string of pairs of digits to be added:

11, 11, 00, 11, 01, 11, 10, b (the pairs of digits taken from right to left, least significant first, where b denotes a blank space), and the output should be the string

0, 1, 1, 0, 0, 1, 0, 1

We also want the machine to enter a state called “ stop” when the machine finishes the addition.

The input symbols are

A = {00, 01, 10, 11, b}

And the output symbols are


Z = {0, 1, b}

The machine that we “construct” will have three states:

S = {carry (c), no carry (n), stop (s)}

Here n is the initial state. The machine is shown in Fig. 8.
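Since Fig. 8 cannot be reproduced here, the following sketch simulates the behavior the figure encodes: state n (no carry) or c (carry), digit pairs fed least significant first, and a final blank that flushes the carry. The function name and the encoding are assumptions:

```python
def add_binary(x, y):
    """Simulate the carry/no-carry machine on two binary strings."""
    n = max(len(x), len(y))
    x, y = x.zfill(n), y.zfill(n)            # pad with leading 0's, as in the text
    state, out = "n", []                     # "n" = no carry, "c" = carry
    for a, b in zip(reversed(x), reversed(y)):   # least significant pair first
        s = int(a) + int(b) + (1 if state == "c" else 0)
        out.append(s % 2)
        state = "c" if s >= 2 else "n"
    if state == "c":                         # the blank input: print carry, stop
        out.append(1)
    return "".join(str(d) for d in reversed(out))

assert add_binary("1101011", "0111011") == "10100110"
```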

In order to show the limitations of our machines, we state the following theorem.

Theorem 5:

There is no finite state machine M which can do binary multiplication.

If we limit the size of the numbers that we multiply, then such machines do exist. Computers are important examples of finite machines which multiply numbers, but the numbers are limited as to their size.

Fig. 8




Directed Graphs

1. INTRODUCTION

Directed graphs are graphs in which the edges are one-way. Such graphs are frequently more useful in various dynamical systems such as digital computers or flow systems. However, this added feature makes it more difficult to determine certain properties of the graph. That is, processing such graphs may be similar to traveling in a city with many one-way streets.

Directed graphs have already appeared in Chapter 3 on relations. One may view certain directed graphs as (binary) relations. For this reason, some texts discuss directed graphs in the context of relations. In fact, here we will present an efficient algorithm for finding the transitive closure of a relation.

This chapter gives the basic definitions and properties of directed graphs. Many of the definitions will be similar to those in the preceding chapter on (nondirected) graphs. However, for pedagogical reasons, this chapter will be mainly independent from the preceding chapter.

2. DIRECTED GRAPHS

A directed graph G or digraph (or simply graph) consists of two things:

(i) A set V whose elements are called vertices, nodes, or points.

(ii) A set E of ordered pairs (u,v) of vertices called arcs or directed edges or simply edges.

We will write G(V, E) when we want to emphasize the two parts of G. We will also write V(G) and E(G) to denote, respectively, the set of vertices and the set of edges of G. (If it is not explicitly stated, the context usually determines whether or not a graph G is a directed graph.)

Suppose e = (u,v) is a directed edge in a digraph G. Then the following terminology is used :

(a) e begins at u and ends at v.
(b) u is the origin or initial point of e, and v is the destination or terminal point of e.
(c) v is a successor of u.
(d) u is adjacent to v, and v is adjacent from u.

If u = v, then e is called a loop.

The set of all successors of a vertex u is important; it is denoted and formally defined by

succ(u) = {v ∈ V : there exists an edge (u, v) ∈ E}


It is called the successor list or adjacency list of u.

A picture of a directed graph G is a representation of G in the plane. That is, each vertex u of G is represented by a dot (or small circle), and each (directed) edge e=(u,v) is represented by an arrow or directed curve from the initial point u of e to the terminal point v. One usually presents a digraph G by its picture rather than explicitly listing its vertices and edges.

If the edges and/or vertices of a directed graph G are labeled with some type of data, then G is called a labeled directed graph.

A directed graph G (V, E) is said to be finite if its set V of vertices and its set E of edges are finite.

EXAMPLE 1

(a) Consider the directed graph G pictured in Fig. 1. It consists of four vertices and seven edges as follows:

V (G) = {A, B, C, D}

E(G) = {e1, …, e7} = {(A, D), (B, A), (B, A), (D, B), (B, C), (D, C), (B, B)}

The edges e2 and e3 are said to be parallel since they both begin at B and end at A. The edge e7 is a loop since it begins and ends at B.

(b) Suppose three boys, A, B, C, are throwing a ball to each other such that A always throws the ball to B, but B and C are just as likely to throw the ball to A as they are to each other. Figure 2 illustrates this dynamical system, where the edges are labeled with the respective probabilities: A throws the ball to B with probability 1; B throws the ball to A and to C each with probability 1/2; and C throws the ball to A and to B each with probability 1/2.

Fig. 1 Fig. 2

Subgraphs

Let G = G(V, E) be a directed graph, and let V' be a subset of the set V of vertices of G. Suppose E' is a subset of E such that the endpoints of the edges in E' belong to V'. Then H(V', E') is a directed graph, and it is called a subgraph of G. In particular, if E' contains all the edges in E whose endpoints belong to


V', then H(V', E') is called the subgraph of G generated or determined by V'. For example, consider the graph G = G(V, E) in Fig. 1. Let

V' = {B, C, D} and E' = {e4, e5, e6, e7} = {(D, B), (B, C), (D, C), (B, B)}

Then H(V', E') is the subgraph of G determined by the vertex set V'.

3. BASIC DEFINITIONS

This section discusses the questions of degrees of vertices, paths, and connectivity in a directed graph.

Degrees

Suppose G is a directed graph. The outdegree of a vertex v of G, written outdeg(v), is the number of edges beginning at v, and the indegree of v, written indeg(v), is the number of edges ending at v. Since each edge begins and ends at a vertex, we immediately obtain the following theorem.

Theorem 1:

The sum of the outdegrees of the vertices of a digraph G equals the sum of the indegrees of the vertices, which equals the number of edges in G.

A vertex v with zero indegree is called a source, and a vertex u with zero outdegree is called a sink.

EXAMPLE 2

Consider the graph G in Fig. 1. We have:

outdeg(A) = 1, outdeg(B) = 4, outdeg(C) = 0, outdeg(D) = 2
indeg(A) = 2, indeg(B) = 2, indeg(C) = 2, indeg(D) = 1

As expected, the sum of the outdegrees equals the sum of the indegrees, which equals the number, 7, of edges. The vertex C is a sink since no edge begins at C. The graph has no sources.
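Theorem 1 is easy to confirm by direct counting over the edge list of Fig. 1 (transcribed from Example 1); a sketch:

```python
# Edges of the graph G of Fig. 1 (parallel edges and the loop included).
E = [("A", "D"), ("B", "A"), ("B", "A"), ("D", "B"),
     ("B", "C"), ("D", "C"), ("B", "B")]
V = ["A", "B", "C", "D"]

outdeg = {v: sum(1 for (u, _) in E if u == v) for v in V}
indeg = {v: sum(1 for (_, w) in E if w == v) for v in V}

assert outdeg == {"A": 1, "B": 4, "C": 0, "D": 2}
assert indeg == {"A": 2, "B": 2, "C": 2, "D": 1}
# Theorem 1: both sums equal the number of edges.
assert sum(outdeg.values()) == sum(indeg.values()) == len(E)
```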

Paths

Let G be a directed graph. The concepts of path, simple path, trail, and cycle carry over from nondirected graphs to G, except that the directions of the edges must agree with the direction of the path. Specifically:

(I) A (directed) path P in G is an alternating sequence of vertices and directed edges, say,

P = (v0, e1, v1, e2, v2, …, en, vn)

such that each edge ei begins at vi-1 and ends at vi. If there is no ambiguity, we denote P by its sequence of vertices or its sequence of edges.

28/90 Dharmendra Mishra

(ii) The length of the path P is n, its number of edges.
(iii) A simple path is a path with distinct vertices. A trail is a path with distinct edges.
(iv) A closed path has the same first and last vertices.
(v) A spanning path contains all the vertices of G.
(vi) A semipath is the same as a path except that the edge ei may begin at vi-1 or vi and end at the other vertex. Semitrails and semisimple paths are defined analogously.

A vertex v is reachable from a vertex u if there is a path from u to v. If v is reachable from u, then (by eliminating redundant edges) there must be a simple path from u to v.

EXAMPLE 3

Consider the graph G in Fig 1.

(a) The sequence P1 = (D, C, B, A) is a semipath but not a path since (C, B) is not an edge; that is, the direction of e5 = (B, C) does not agree with the direction of P1.

(b) The sequence P2 = (D, B, A) is a path from D to A since (D, B) and (B, A) are edges. Thus A is reachable from D.

Connectivity

There are three types of connectivity in a directed graph G:

(i) G is strongly connected or strong if, for any pair of vertices u and v in G, there is a path from u to v and a path from v to u, that is, each is reachable from the other.

(ii) G is unilaterally connected or unilateral if, for any pair of vertices u and v in G, there is a path from u to v or a path from v to u; that is, one of them is reachable from the other.

(iii) G is weakly connected or weak if there is a semipath between any pair of vertices u and v in G.

Let G' be the (nondirected) graph obtained from a directed graph G by allowing all edges in G to be nondirected. Clearly, G is weakly connected if and only if the graph G' is connected. Observe that strongly connected implies unilaterally connected, and that unilaterally connected implies weakly connected. We say that G is strictly unilateral if it is unilateral but not strong, and we say that G is strictly weak if it is weak but not unilateral.

Connectivity can be characterized in terms of spanning paths as follows:


Theorem 2:

Let G be a finite directed graph. Then:

(i) G is strong if and only if G has a closed spanning path.
(ii) G is unilateral if and only if G has a spanning path.
(iii) G is weak if and only if G has a spanning semipath.

EXAMPLE 4

Consider the graph G in Fig. 1. It is weakly connected since the underlying nondirected graph is connected. There is no path from C to any other vertex (i.e., C is a sink), so G is not strongly connected. However, P = (B, A, D, C) is a spanning path, so G is unilaterally connected.
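These connectivity checks can be carried out from the successor lists of Fig. 1 with a simple depth-first reachability test; the helper names are assumptions made for illustration:

```python
# Successor lists of the graph of Fig. 1.
succ = {"A": {"D"}, "B": {"A", "B", "C"}, "C": set(), "D": {"B", "C"}}

def reachable(u, v):
    """Depth-first search: is there a path from u to v?"""
    seen, stack = set(), [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        if x not in seen:
            seen.add(x)
            stack.extend(succ[x])
    return False

V = ["A", "B", "C", "D"]
pairs = [(u, v) for u in V for v in V if u != v]
strong = all(reachable(u, v) and reachable(v, u) for u, v in pairs)
unilateral = all(reachable(u, v) or reachable(v, u) for u, v in pairs)
assert not strong          # C is a sink, so G is not strongly connected
assert unilateral          # e.g. the spanning path (B, A, D, C)
```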

Graphs with sources and sinks appear in many applications (e.g., flow diagrams and networks). A sufficient condition for such vertices to exist follows.

Theorem 3:

Suppose a finite directed graph G is cycle-free, that is, contains no (directed) cycles. Then G contains a source and a sink.

Proof: Let P = (v0, v1, …, vn) be a simple path of maximum length, which exists since G is finite. Then the last vertex vn is a sink; otherwise an edge (vn, u) would either extend P or form a cycle if u = vi for some i. Similarly, the first vertex v0 is a source.

4 ROOTED TREES

Recall that a tree graph is a connected cycle-free graph, that is, a connected graph without any cycles. A rooted tree T is a tree graph with a designated vertex r called the root of the tree. Since there is a unique simple path from the root r to any other vertex v in T, this determines a direction to the edges of T. Thus T may be viewed as a directed graph. We note that any tree may be made into a rooted tree by simply selecting one of the vertices as the root.

Consider a rooted tree T with root r. The length of the path from the root r to any vertex v is called the level (or depth) of v, and the maximum vertex level is called the depth of the tree. Those vertices with degree 1, other than the root r, are called the leaves of T, and a directed path from a vertex to a leaf is called a branch.

One usually draws a picture of a rooted tree T with the root at the top of the tree. Figure 3 shows a rooted tree T with root r and 10 other vertices. The tree has five leaves, d, f, h, i, and j. Observe that:

level(a) = 1, level(f) = 2, level(j) = 3

Furthermore, the depth of the tree is 3.


Fig. 3

The fact that a rooted tree T gives a direction to the edges means that we can give a precedence relationship between the vertices. Specifically, we will say that a vertex u precedes a vertex v, or that v follows u, if there is a (directed) path from u to v. In particular, we say that v immediately follows u if (u, v) is an edge; that is, if v follows u and is adjacent to u. We note that every vertex v, other than the root, immediately follows a unique vertex, but that v can be immediately followed by more than one vertex. For example, in Fig. 3, the vertex j follows c but immediately follows g. Also, both i and j immediately follow g.

5 SEQUENTIAL REPRESENTATION OF DIRECTED GRAPHS

There are two main ways of maintaining a directed graph G in the memory of a computer. One way, called the sequential representation of G, is by means of its adjacency matrix A. The other way, called the linked representation of G, is by means of linked lists of neighbors. This section covers the first representation and shows how the adjacency matrix A of G can be used to easily answer certain questions of connectivity in G. The linked representation will be covered in a later section.

Digraphs and Relations, Adjacency Matrix

Let G(V, E) be a simple directed graph, that is, a graph without parallel edges. Then E is simply a subset of V × V, and hence E is a relation on V. Conversely, if R is a relation on a set V, then G(V, R) is a simple directed graph. Thus the concepts of relations on a set and simple directed graphs are one and the same. In fact, in Chapter 2, we already introduced the directed graph corresponding to a relation on a set.

Suppose G is a simple directed graph with m vertices, and suppose the vertices of G have been ordered and are called v1, v2, …, vm. Then the adjacency matrix A = [aij] of G is the m × m matrix defined as follows:

aij = 1 if there is an edge (vi, vj), and aij = 0 otherwise.

Such a matrix A, which contains entries of only 0 or 1, is called a bit matrix or a Boolean matrix.


The adjacency matrix A of the graph G does depend on the ordering of the vertices of G; that is, a different ordering of the vertices may result in a different adjacency matrix. However, the matrices resulting from two different orderings are closely related in that one can be obtained from the other by simply interchanging rows and columns. Unless otherwise stated, we assume that the vertices of our matrix have a fixed ordering.

Remark 1: The adjacency matrix A = [a ij ] may be extended to directed graphs with parallel edges by setting

aij = the number of edges beginning at v i and ending at v j

Then the entries of A will be nonnegative integers. Conversely, every m × m matrix A with nonnegative integer entries uniquely determines a directed graph with m vertices.

Remark 2: If G is an undirected graph, then the adjacency matrix A of G will be a symmetric matrix, i.e., aij = aji for every i and j. This follows from the fact that each undirected edge {u, v} corresponds to two directed edges (u, v) and (v, u).

EXAMPLE 5

Consider the directed graph G in Fig. 4 with vertices X, Y, Z, W. Suppose the vertices are ordered as follows:

v1 = X, v2 = Y, v3 = Z, v4 = W

Then the adjacency matrix A of G follows:

A =
0 0 0 1
1 0 1 1
1 0 0 1
1 0 1 0

Note that the number of 1’s in A is equal to the number (8) of edges.
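Building A from an edge list makes the dependence on the chosen vertex ordering explicit. A sketch, with the edges of Fig. 4 read off from the matrix above:

```python
# Vertices of Fig. 4 in the chosen order, and its edges (from the matrix A above).
V = ["X", "Y", "Z", "W"]
E = [("X", "W"), ("Y", "X"), ("Y", "Z"), ("Y", "W"),
     ("Z", "X"), ("Z", "W"), ("W", "X"), ("W", "Z")]

idx = {v: i for i, v in enumerate(V)}
A = [[0] * 4 for _ in range(4)]
for u, w in E:
    A[idx[u]][idx[w]] = 1

assert A == [[0, 0, 0, 1], [1, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 0]]
assert sum(map(sum, A)) == 8       # number of 1's = number of edges
```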


Fig. 4

Consider the powers A, A^2, A^3, … of the adjacency matrix A = [aij] of a graph G. We will use the notation:

ak(i, j) = the ij entry in the matrix A^k

Note that a1(i, j) = aij gives the number of paths of length 1 from vertex vi to vertex vj. One can show that a2(i, j) gives the number of paths of length 2 from vi to vj. In fact, we prove in Problem 9.14 the following general result.

Proposition 1:

Let A be the adjacency matrix of a graph G. Then ak(i, j), the ij entry in the matrix A^k, gives the number of paths of length k from vi to vj.

EXAMPLE 6

Consider again the graph G in Fig. 4 whose adjacency matrix A is given in Example 5. The powers A^2, A^3, and A^4 of A follow:

A^2 =
1 0 1 0
2 0 1 2
1 0 1 1
1 0 0 2

A^3 =
1 0 0 2
3 0 2 3
2 0 1 2
2 0 2 1

A^4 =
2 0 2 1
5 0 3 5
3 0 2 3
3 0 1 4

Observe that a2(4, 1) = 1, so there is a path of length 2 from v4 to v1. Also, a3(2, 3) = 2, so there are two paths of length 3 from v2 to v3; and a4(2, 4) = 5, so there are five paths of length 4 from v2 to v4. (Here, v1 = X, v2 = Y, v3 = Z, v4 = W.)
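Proposition 1 can be verified numerically for this graph with a plain triple-loop matrix product (matmul is a hypothetical helper):

```python
def matmul(A, B):
    """Ordinary matrix product of two square integer matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 0, 0, 1], [1, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 0]]
A2 = matmul(A, A)
A3 = matmul(A2, A)
A4 = matmul(A3, A)

# The ij entry of A^k counts the paths of length k (Proposition 1).
assert A2[3][0] == 1      # one path of length 2 from v4 to v1
assert A3[1][2] == 2      # two paths of length 3 from v2 to v3
assert A4[1][3] == 5      # five paths of length 4 from v2 to v4
```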

Remarks:


Suppose A is the adjacency matrix of a graph G, and suppose we now define the matrix Br as follows:

Br = A + A^2 + A^3 + … + A^r

Then the ij entry of the matrix Br gives the number of paths of length r or less from vertex vi to vertex vj.

Path Matrix

Let G = G(V, E) be a simple directed graph with m vertices v1, v2, …, vm. The path matrix or reachability matrix of G is the m-square matrix P = [pij] defined as follows:

pij = 1 if there is a path from vi to vj, and pij = 0 otherwise.

(The next subsection shows that the path matrix P may be viewed as the transitive closure of the relation E on V.)

Suppose now that there is a path from vertex vi to vertex vj in a graph G with m vertices. Then there must be a simple path from vi to vj when vi ≠ vj, or there must be a cycle from vi to vj when vi = vj. Since G has m vertices, such a simple path must have length m - 1 or less, and such a cycle must have length m or less. This means that there is a nonzero ij entry in the matrix

Bm = A + A^2 + A^3 + … + A^m

where A is the adjacency matrix of G. Accordingly, the path matrix P and Bm have the same nonzero entries. We state this result formally.

Proposition 2:

Let A be the adjacency matrix of a graph G with m vertices, and let

Bm = A + A^2 + A^3 + … + A^m

Then the path matrix P and B m have the same nonzero entries.

Recall that a directed graph G is said to be strongly connected if, for any pair of vertices u and v in G, there is a path from u to v and from v to u. Accordingly, G is strongly connected if and only if the path matrix P of G has no zero entries. This fact together with Proposition 2 gives the following result.

Proposition 3:

Let A be the adjacency matrix of a graph G with m vertices, and let

Bm = A + A^2 + A^3 + … + A^m

Then G is strongly connected if and only if B m has no zero entries.

EXAMPLE 7


Consider the graph G with m = 4 vertices in Fig. 4, where we let v1 = X, v2 = Y, v3 = Z, v4 = W. Adding the matrices A, A^2, A^3, A^4 of Examples 5 and 6, we obtain the following matrix B4; replacing the nonzero entries in B4 by 1, we obtain the path (reachability) matrix P of the graph G:

B4 =
4 0 3 4
11 0 7 11
7 0 4 7
7 0 4 7

P =
1 0 1 1
1 0 1 1
1 0 1 1
1 0 1 1

Examining the matrix B4 or P, we see zero entries; hence G is not strongly connected. In particular, we see that the vertex v2 = Y is not reachable from any of the other vertices.
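The passage from A to B4 to P is mechanical: sum the first four powers, then replace every nonzero entry by 1. A sketch (matmul is a hypothetical helper):

```python
def matmul(A, B):
    """Ordinary matrix product of two square integer matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 0, 0, 1], [1, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 0]]
B, power = [row[:] for row in A], A
for _ in range(3):                    # B4 = A + A^2 + A^3 + A^4
    power = matmul(power, A)
    B = [[B[i][j] + power[i][j] for j in range(4)] for i in range(4)]

P = [[1 if entry else 0 for entry in row] for row in B]
assert B == [[4, 0, 3, 4], [11, 0, 7, 11], [7, 0, 4, 7], [7, 0, 4, 7]]
assert P == [[1, 0, 1, 1]] * 4
# A zero entry means G is not strongly connected (Proposition 3).
assert any(0 in row for row in P)
```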

Remark:

The adjacency matrix A and the path matrix P of a graph G may be viewed as logical (Boolean) matrices where 0 represents "False" and 1 represents "True". Thus the logical operations of ∧ (AND) and ∨ (OR), whose values appear in Fig. 5, may be applied to the entries of A and P. These operations will be used in the next section.

∧ | 0 1        ∨ | 0 1
0 | 0 0        0 | 0 1
1 | 0 1        1 | 1 1

(a) AND. (b) OR.

Fig. 5

Transitive Closure and the Path Matrix

Let R be a relation on a finite set V with m elements. As noted above, the relation R may be identified with the simple directed graph G = G(V, R), which we discussed in the unit on relations and digraphs. Recall that

R^2 = {(u, v) : there exists w ∈ V such that (u, w) ∈ R and (w, v) ∈ R}

In other words, R^2 consists of all pairs (u, v) such that there is a path of length 2 from u to v. Similarly:

R^k = {(u, v) : there is a path of length k from u to v}

The transitive closure R* of the relation R on V may now be viewed as the set of ordered pairs (u, v) such that there is a path from u to v in the graph G. Accordingly, the path matrix P of G = G(V, R) is precisely the adjacency matrix of the graph G' = G'(V, R*) which corresponds to the transitive closure R*. Furthermore, by the above discussion, we need only look at simple paths of length m - 1 or less and cycles of length m or less. Accordingly, we have the following result which characterizes the transitive closure R* of R.


Theorem 4.

Let R be a relation on a set V with m elements. Then :

(i) R* = R ∪ R^2 ∪ … ∪ R^m is the transitive closure of R.
(ii) The path matrix P of G(V, R) is the adjacency matrix of G'(V, R*).

The linked list representation of a directed graph is the same as for an undirected graph, except that the direction of each edge must be taken into account when forming the adjacency (successor) list of each vertex.

6 WARSHALL’S ALGORITHM; SHORTEST PATHS

Let G be a directed graph with m vertices v1, v2, …, vm. Suppose we want to find the path matrix P of the graph G. Warshall gave an algorithm which is more efficient than calculating the powers of the adjacency matrix A. This algorithm is described in this section, and a similar algorithm is used to find shortest paths in G when G is weighted.

Warshall’s Algorithm

First we define m-square Boolean matrices P0, P1, …, Pm as follows. Let Pk[i, j] denote the ij entry of the matrix Pk. Then we define:

Pk[i, j] = 1 if there is a simple path from vi to vj which does not use any other vertices except possibly v1, v2, …, vk; otherwise Pk[i, j] = 0.

That is,

P0[i, j] = 1 if there is an edge from vi to vj

P1[i, j] = 1 if there is a simple path from vi to vj which does not use any other vertex except possibly v1

P2[i, j] = 1 if there is a simple path from vi to vj which does not use any other vertices except possibly v1 and v2

And so on.

Observe that the first matrix P0 = A, the adjacency matrix of G. Furthermore, since G has only m vertices, the last matrix Pm = P, the path matrix of G.

Warshall observed that Pk[i, j] = 1 can occur only if one of the following two cases occurs.

36/90 Dharmendra Mishra

(1) There is a simple path from v i to v j which does not use any other vertices except possibly v 1, v 2,….,v k-1; hence

Pk-1[i, j] = 1

(2) There is a simple path from v i to v k and a simple path from v k to v j where each simple path does not use any other vertices except possibly v 1, v 2,….v k-1; hence

Pk-1[i, k] = 1 and P k-1[k, j] = 1

These two cases are pictured as follows :

(1) vi → … → vj;  (2) vi → … → vk → … → vj

Here

→ … → denotes part of a simple path which does not use any other vertices except possibly v 1, v 2,….,v k-1. Accordingly, the elements of P k can be obtained by

Pk[i, j] = Pk−1[i, j] ∨ (Pk−1[i, k] ∧ Pk−1[k, j])

where we use the logical operations ∧ (AND) and ∨ (OR). In other words, we can obtain each entry in the matrix Pk by looking at only three entries in the matrix Pk−1. Warshall's algorithm follows.

Algorithm (Warshall's Algorithm) : A directed graph G with M vertices is maintained in memory by its adjacency matrix A. This algorithm finds the (Boolean) path matrix P of the graph G.

Step 1. Repeat for I, J = 1, 2, …, M: [Initializes P.]
If A[I, J] = 0, then: Set P[I, J] := 0;
Else: Set P[I, J] := 1.
[End of loop.]

Step 2. Repeat Steps 3 and 4 for K = 1, 2, …, M: [Updates P.]

Step 3. Repeat Step 4 for I = 1,2,….,M:

Step 4. Repeat for J = 1, 2, …, M: Set P[I, J] := P[I, J] ∨ (P[I, K] ∧ P[K, J]). [End of loop.] [End of Step 3 loop.] [End of Step 2 loop.]

Step 5. Exit.
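Steps 1–5 translate directly into a short sketch (ours, not the original text; the function name is an assumption):

```python
def warshall(A):
    """Warshall's algorithm: returns the Boolean path matrix P of the
    digraph whose adjacency matrix is A (Steps 1-5 above)."""
    m = len(A)
    P = [[1 if A[i][j] else 0 for j in range(m)] for i in range(m)]  # Step 1
    for k in range(m):                      # Step 2: pivot vertex v_k
        for i in range(m):                  # Step 3
            for j in range(m):              # Step 4: P[i][j] OR (P[i][k] AND P[k][j])
                P[i][j] = P[i][j] or (P[i][k] and P[k][j])
    return P
```

Note that each entry of P is updated from only three entries, exactly as the recurrence for Pk states.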

37/90 Dharmendra Mishra

Shortest-path Algorithm

Let G be a directed graph with m vertices, v1, v2, …, vm. Suppose G is weighted; that is, suppose each edge e of G is assigned a nonnegative number w(e) called the weight or length of e. Then G may be maintained in memory by its weight matrix W = (wij) defined by

Wij = w(e) if there is an edge e from vi to vj
Wij = 0 if there is no edge from vi to vj

The path matrix P tells us whether or not there are paths between the vertices. Now we want to find a matrix Q which tells us the lengths of the shortest paths between the vertices or, more exactly, a matrix Q = (q ij ) where

qij = length of the shortest path from v i to v j

Next we describe a modification of Warshall's algorithm which efficiently finds the matrix Q.

Here we define a sequence of matrices Q0, Q1, …, Qm (analogous to the above matrices P0, P1, …, Pm) where Qk[i, j], the ij entry of Qk, is defined as follows:

Qk[i, j] = the smaller of the length of the preceding path from vi to vj or the sum of the lengths of the preceding paths from vi to vk and from vk to vj.

More exactly,

Qk[i, j] = MIN (Q k-1[i, j], Q k-1[i, k] + Q k-1[k, j])

The initial matrix Q0 is the same as the weight matrix W except that each 0 in W is replaced by ∞ (or a very, very large number). The final matrix Qm will be the desired matrix Q.

EXAMPLE 8

Figure 6 shows a weighted graph G and its weight matrix W, where we assume v1 = R, v2 = S, v3 = T, v4 = U.


W =
7 5 0 0
7 0 0 2
0 3 0 0
4 0 1 0

Fig. 6

Suppose we apply the modified Warshall's algorithm to our weighted graph G. We will obtain the matrices Q0, Q1, Q2, Q3, and Q4 in Fig. 7. (To the right of each matrix Qk in Fig. 7 we show the matrix of paths which corresponds to the lengths in the matrix Qk.) The entries in the matrix Q0 are the same as in the weight matrix W except that each 0 in W is replaced by ∞ (a very large number). We indicate how the circled entries are obtained.

Q1[4, 2] = MIN(Q0[4, 2], Q0[4, 1] + Q0[1, 2]) = MIN(∞, 4 + 5) = 9
Q2[1, 3] = MIN(Q1[1, 3], Q1[1, 2] + Q1[2, 3]) = MIN(∞, 5 + ∞) = ∞
Q3[4, 2] = MIN(Q2[4, 2], Q2[4, 3] + Q2[3, 2]) = MIN(9, 3 + 1) = 4
Q4[3, 1] = MIN(Q3[3, 1], Q3[3, 4] + Q3[4, 1]) = MIN(10, 5 + 4) = 9

The last matrix Q 4 = Q, the desired shortest path matrix.

Q0 =
7 5 ∞ ∞     RR  RS  -   -
7 ∞ ∞ 2     SR  -   -   SU
∞ 3 ∞ ∞     -   TS  -   -
4 ∞ 1 ∞     UR  -   UT  -

Q1 =
7 5  ∞ ∞    RR  RS   -   -
7 12 ∞ 2    SR  SRS  -   SU
∞ 3  ∞ ∞    -   TS   -   -
4 9  1 ∞    UR  URS  UT  -

Q2 =
7  5  ∞ 7     RR   RS   -   RSU
7  12 ∞ 2     SR   SRS  -   SU
10 3  ∞ 5     TSR  TS   -   TSU
4  9  1 11    UR   URS  UT  URSU

Q3 =
7  5  ∞ 7    RR   RS   -   RSU
7  12 ∞ 2    SR   SRS  -   SU
10 3  ∞ 5    TSR  TS   -   TSU
4  4  1 6    UR   UTS  UT  UTSU


Q4 =
7 5 8 7    RR    RS    RSUT  RSU
6 6 3 2    SUR   SUTS  SUT   SU
9 3 6 5    TSUR  TS    TSUT  TSU
4 4 1 6    UR    UTS   UT    UTSU

(Note that the second row improves in the last step: Q4[2, 1] = MIN(7, 2 + 4) = 6 via the path SUR, and Q4[2, 2] = MIN(12, 2 + 4) = 6 via the path SUTS.)

Fig. 7
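The whole computation of Example 8 can be reproduced with a short sketch of the modified algorithm (our illustration; the function name is an assumption). Zeros in W mean "no edge" and become infinity:

```python
INF = float('inf')

def shortest_paths(W):
    """Modified Warshall: Q[i][j] ends as the length of the shortest
    path from v_i to v_j; 0 entries of W (no edge) become INF."""
    m = len(W)
    Q = [[W[i][j] if W[i][j] else INF for j in range(m)] for i in range(m)]
    for k in range(m):
        for i in range(m):
            for j in range(m):
                # Q_k[i][j] = MIN(Q_{k-1}[i][j], Q_{k-1}[i][k] + Q_{k-1}[k][j])
                Q[i][j] = min(Q[i][j], Q[i][k] + Q[k][j])
    return Q

# weight matrix of Example 8 (v1 = R, v2 = S, v3 = T, v4 = U)
W = [[7, 5, 0, 0],
     [7, 0, 0, 2],
     [0, 3, 0, 0],
     [4, 0, 1, 0]]
Q = shortest_paths(W)
```

Running this gives Q = [[7, 5, 8, 7], [6, 6, 3, 2], [9, 3, 6, 5], [4, 4, 1, 6]]; for instance Q[2][1] = 6 corresponds to the path S → U → R of length 2 + 4.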

7 DIRECTED CYCLE-FREE GRAPHS, TOPOLOGICAL SORT

Let S be a directed graph such that: (1) each vertex vi of S represents a task, and (2) each (directed) edge (u, v) of S means that task u must be completed before beginning task v. Suppose such a graph S contains a cycle, say

P = (u, v, w, u)

This means we must complete task u before beginning v, we must complete task v before beginning w, and we must complete w before beginning task u. Thus we cannot begin any of the three tasks in the cycle. Accordingly, such a graph S, representing tasks and a prerequisite relation, cannot have any cycles; in other words, such a graph S must be cycle-free, or acyclic. A directed acyclic (cycle-free) graph is called a dag for short. Fig. 8 is an example of such a graph.

(a) (b)

Fig. 8

A fundamental operation on a dag S is to process the vertices one after the other so that vertex u is always processed before vertex v whenever (u, v) is an edge. Such a linear ordering T of the vertices of S, which may not be unique, is called a topological sort. Figure 9 shows two topological sorts of the graph S in Fig. 8. We have included the edges of S in Fig. 9 to show that they agree with the direction of the linear ordering.


Fig. 9 Two topological sorts.

The following is the main theoretical result in this section.

Theorem 5 :

Let S be a finite directed cycle-free graph. Then there exists a topological sort T of the graph S.

Note that the theorem states only that a topological sort exists. We now give an algorithm which will find a topological sort. The main idea of the algorithm is that any vertex (node) N with zero indegree may be chosen as the first element in the sort T. The algorithm essentially repeats the following two steps until S is empty:

(1) Find a vertex N with zero indegree. (2) Delete N and its edges from the graph S.

We use an auxiliary queue QUEUE to temporarily hold all vertices with zero indegree. The algorithm follows.

Algorithm 9.9 : The algorithm finds a topological sort T of a directed cycle- free graph S.

Step 1. Find the indegree INDEG (N) of each vertex N of S.

Step 2. Insert in QUEUE all vertices with zero indegree.

Step 3. Repeat Steps 4 and 5 until QUEUE is empty.


Step 4. Remove and process the front vertex N of QUEUE.

Step 5. Repeat for each neighbor M of the vertex N.

(a) Set INDEG(M) := INDEG(M) − 1. [This deletes the edge from N to M.]

(b) If INDEG (M) = 0, add M to QUEUE.

[End of loop.] [End of Step 3 loop.]

Step 6. Exit.
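Algorithm 9.9 (often called Kahn's algorithm) can be sketched as follows; this is our illustration, assuming the graph is given as a dict mapping each vertex to its successor list:

```python
from collections import deque

def topological_sort(S):
    """Steps 1-6 above: repeatedly process a vertex of indegree zero
    and decrement the indegrees of its neighbors."""
    indeg = {v: 0 for v in S}
    for v in S:
        for m in S[v]:
            indeg[m] += 1                           # Step 1: count indegrees
    queue = deque(v for v in S if indeg[v] == 0)    # Step 2
    T = []
    while queue:                                    # Step 3
        n = queue.popleft()                         # Step 4: process front vertex
        T.append(n)
        for m in S[n]:                              # Step 5
            indeg[m] -= 1                           # (a) delete edge (n, m)
            if indeg[m] == 0:                       # (b)
                queue.append(m)
    return T
```

If the graph contains a cycle, the returned list T is shorter than the vertex set, which gives a simple cycle test as a by-product.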

EXAMPLE 9

Suppose Algorithm 9.9 is applied to the graph S in Fig. 8. We obtain the following sequence of the elements of QUEUE and the sequence of vertices being processed.

Vertex: B    E    G   D    A   F   C
QUEUE:  GEB  DGE  DG  FAD  FA  CF  C

Fig. 10 shows the graph S as each of the first three vertices, B, E, and G, and its edges are deleted from S. The topological sort T is the sequence of processed vertices, that is:

T: B, E, G, D, A, F, C

Fig. 10




Relations

A relation, as its name implies, shows the relationship between at least two entities or objects. In this unit we will discuss relations, their properties, their representation, and some operations on relations.

Before discussing relations in detail, we should know about product sets.

1) Product Set

If A and B are two nonempty finite sets, then the product set or Cartesian product A X B is the set of all ordered pairs (a, b) with a ∈ A and b ∈ B.

A X B = { (a, b) / a ∈ A and b ∈ B }

Eg. A = { 1, 2, 3} B = { 4, 5}

A X B = { (1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5) }

B X A = { (4, 1) , (4, 2), (4, 3), (5, 1), (5, 2), (5, 3) }

Note that A X B ≠ B X A.

Let the cardinality of a set A be denoted by n(A) or |A|. Then

n(A X B) = n(A) X n(B)

This can be verified from the above example: n(A) = 3 and n(B) = 2, so n(A X B) = 3 X 2 = 6.

2) Relations

Usually relations are defined on sets; the defining condition specifies how the elements of the sets are related, and we group together those ordered pairs whose elements satisfy the relation R.

Let A and B are two sets. Then

A X B = { (a, b) / a ∈ A and b ∈ B }

On the set A X B we can define some relation R; then the set R contains only those pairs which satisfy the defined relation.

Eg.: Let A = { 2, 3 }

B = { 1, 4, 9, 10 }

A X B = { (2, 1), (2, 4), (2, 9), (2, 10), (3, 1), (3, 4), (3, 9), (3, 10) }. Here the relation is given as

2 R 4 and 3 R 9, read as "2 is related to 4" and "3 is related to 9", where 2, 3 ∈ A and 4, 9 ∈ B. Here we can say that if a ∈ A and b ∈ B, then the above relation is b = a².

So the relation R defined on A X B is "b is the square of a". So R = { (a, b) / (a, b) ∈ A X B and b = a² }.

Thus the relation R is defined as the set which contains the ordered pairs (a, b), where every (a, b) ∈ A X B satisfies b = a². So the relation set R is obtained as


R = { (2, 4), (3, 9) } ⊆ A X B

Similarly, relations of many types can be defined as sets, such as b < a, b = a, a > b, a + 5 < b, etc.,

where a ∈ A and b ∈ B, and A and B are two sets. From the above discussion we can understand that a relation R defined on two sets A and B is a subset of A X B:

R ⊆ A X B

3) Domain & Range of a Relation

Let R be a relation defined from a set A to a set B. Then the domain of R, Dom(R), is a subset of A, namely the set of all first elements of the pairs that make up R.

Similarly, the range of R, Ran(R), is a subset of B, namely the set of all second elements of the pairs that make up R.

We can explain it with example.

Let A = { 1, 2, 3, 4}

B = { 5, 6, 7, 10}

A X B = { (1, 5), (1, 6), (1, 7), (1, 10), (2, 5), (2, 6), (2, 7), (2, 10), (3, 5), (3, 6), (3, 7), (3, 10), (4, 5), (4, 6), (4, 7), (4, 10) }

Relation R = { (a, b) / (a, b) ∈ A X B and (a + 4 = b) }

So R = { (1, 5), (2, 6), (3, 7) }.

Dom(R) = { 1, 2, 3 } ⊆ A and Ran(R) = { 5, 6, 7 } ⊆ B.
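The example above can be computed directly (our illustration; the variable names are ours):

```python
A = {1, 2, 3, 4}
B = {5, 6, 7, 10}

# the relation R = {(a, b) ∈ A X B : a + 4 = b}
R = {(a, b) for a in A for b in B if a + 4 == b}

dom = {a for (a, b) in R}   # first elements  -> Dom(R)
ran = {b for (a, b) in R}   # second elements -> Ran(R)
```

Here (4, 8) is excluded because 8 ∉ B, so the domain is a proper subset of A.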

4) R- Relative set

If R is a relation from A to B, where A and B are two sets, and x ∈ A, we define R(x), the R-relative set of x, to be the set of all elements y in B with the property that x is related to y, i.e., x R y. Similarly, the R-relative set of A is

R(A) = { y ∈ B / x R y for some x in A }

Let A = {1, 2, 3} and B = {1, 2, 4, 5}, and let R be the relation x + 2 ≤ y, where x ∈ A and y ∈ B. Then R = { (1, 4), (1, 5), (2, 4), (2, 5), (3, 5) }.

R relative set of 2 is

R(2) = {4, 5}, since in the relation set 2 R 4 and 2 R 5.

R relative set of set A

R(A) = {4, 5}, since in the above relation set only 4 and 5 are related to some of the elements of A.
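R-relative sets are easy to compute from the pair set (our sketch; the helper name is an assumption):

```python
A = {1, 2, 3}
B = {1, 2, 4, 5}

# the relation x + 2 <= y from the example above
R = {(x, y) for x in A for y in B if x + 2 <= y}

def relative_set(R, xs):
    """R(A1): every y related to at least one x in the subset xs."""
    return {y for (x, y) in R if x in xs}
```

For instance, relative_set(R, {2}) gives R(2) and relative_set(R, A) gives R(A).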

• Let R be a relation from A to B, and let A1 and A2 be subsets of A.


Theorem 1. Then we can prove the following:

(a) If A1 ⊆ A2, then R(A1) ⊆ R(A2).
(b) R(A1 ∪ A2) = R(A1) ∪ R(A2).
(c) R(A1 ∩ A2) ⊆ R(A1) ∩ R(A2).

Proof :

a) If y ∈ R(A1), then x R y for some x ∈ A1. Since A1 ⊆ A2, we have x ∈ A2; thus y ∈ R(A2).

b) If y ∈ R(A1 ∪ A2), then by the definition of the R-relative set, x R y for some x in A1 ∪ A2. If x is in A1, then we must have y ∈ R(A1). By the same argument, if x is in A2, then we must have y ∈ R(A2). In either case, y ∈ R(A1) ∪ R(A2). Thus we have shown that

R(A1 ∪ A2) ⊆ R(A1) ∪ R(A2).  (1)

We know that A1 ⊆ A1 ∪ A2 and A2 ⊆ A1 ∪ A2, so by part (a),

R(A1) ⊆ R(A1 ∪ A2) and R(A2) ⊆ R(A1 ∪ A2), hence R(A1) ∪ R(A2) ⊆ R(A1 ∪ A2).  (2)

From (1) and (2), since X ⊆ Y and Y ⊆ X imply X = Y, we conclude

R(A1 ∪ A2) = R(A1) ∪ R(A2).

c) Using the same notation we can prove R(A1 ∩ A2) ⊆ R(A1) ∩ R(A2). Left as an exercise.

5) Matrix Representation of Relation

We can represent a relation between two finite sets with a matrix as follows. If A = {a1, a2, …, am} and B = {b1, b2, …, bn} are finite sets containing m and n elements, respectively, and R is a relation from A to B, we can represent R by the m × n matrix MR = [mij].

The matrix M R is defined as.

mij = 1 if (ai, bj) ∈ R
mij = 0 if (ai, bj) ∉ R

That is, each entry of the m × n matrix is filled with 1 if (ai, bj) is an element of the relation R, and with 0 if (ai, bj) is not a member of the relation.

Example

A = { 1, 2, 4, 6 } B = { 1, 4, 5, 16, 25, 36 }

Let R be a relation from A to B and relation is defined as y = x 2 where x ∈ A and y ∈ B. So Relation Set

R = { (1,1), (2,4), (4,16) (6,36) }.

Matrix Representation for the above Relation is

n(A) = 4 and n(B) = 6, so we have a 4 X 6 matrix MR:

        B
        1   4   5   16  25  36
    1   1   0   0   0   0   0
A   2   0   1   0   0   0   0
    4   0   0   0   1   0   0
    6   0   0   0   0   0   1

(2, 4) ∈ R, so the corresponding entry is 1.

Example 2

Let MR be the matrix of a relation; prepare the relation set from the matrix, taking the elements of A as a1, a2, …, an and those of B as b1, b2, …, bm.

MR =
1 0 0 1 0
0 0 1 0 0
0 1 1 0 0

Here n(A) = 3 and n(B) = 5.

Here the relation set is R = { (a1, b1), (a1, b4), (a2, b3), (a3, b2), (a3, b3) }.
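Both directions of this correspondence, relation to matrix and matrix back to relation, can be sketched as follows (our illustration; the function names are assumptions):

```python
def matrix_of(R, A, B):
    """M_R: entry (i, j) is 1 exactly when (a_i, b_j) is in R."""
    return [[1 if (a, b) in R else 0 for b in B] for a in A]

def relation_of(M):
    """Inverse direction: recover the (1-based) index pairs where M has a 1."""
    return {(i + 1, j + 1) for i, row in enumerate(M)
            for j, v in enumerate(row) if v}
```

Applied to the first example (y = x² with A = {1, 2, 4, 6}, B = {1, 4, 5, 16, 25, 36}), matrix_of reproduces the 4 X 6 matrix above; applied to the matrix of Example 2, relation_of recovers the listed index pairs.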

6) The Digraph of a Relation

A relation can also be represented as a digraph. If a relation is defined on two different sets, then the digraph for that relation is usually a bipartite graph; more about this appears below.


A digraph is a pictorial representation of a relation, usually one defined on a single set. Let A be a finite set with n elements. Then we can define a relation R on A X A, with R ⊆ A X A. In order to represent such a relation from A to A, we can draw a directed graph (digraph). A graph is a set of vertices along with a set of edges. Each vertex (node) in the graph represents an element of the set A, and an edge <vi, vj> between two vertices indicates that vi is related to vj, where vi and vj are elements of A.

Eg : 1

A = { 1, 2, 3, 4 }. A relation R defined on A to A satisfies R ⊆ A X A. Relation R = { (a, b) / a ∈ A and b ∈ A, a < b }.

R = { (1,2), (1,3) (1,4), (2,3) (2,4) (3,4) }

Fig. 1

Fig. 1 is the digraph of the relation R.

Eg : 2

Prepare the Relation set from the given Diagram

Fig. 2

A = { 1, 2, 3, 4, 5 }

R = { (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 2), (2, 3), (2, 4), (2, 5), (3, 3), (3, 4), (3, 5), (4, 4), (4, 5), (5, 5) }

R = { (a, b) / (a, b) ∈ A X A, a ≤ b }.

Note :

Suppose a relation R is defined on two different sets A and B, i.e., the relation is defined from A to B. Then its digraph is said to be a bipartite graph.

A = { 1,2,3 }

B = { a, b, c}.

A relation R defined on A X B is given:

R = { (1, a) (2, c), (3, b), (1,c), (3, a) }.

So the graph for the above relation is represented below.

Here the nodes (vertices) are divided into two sets, {1, 2, 3} and {a, b, c}, and we can also see that there is no relationship among elements within the same set. So we can partition the graph into exactly two halves; these types of graphs are called bipartite graphs. [We will discuss the properties of graphs in the coming unit on graph theory.]

7) Paths in Relations and Digraphs

Path lengths in a relation R can be explained easily with the help of the digraph of the relation.

Let A be the set A = { a, b, c, d, e },

and let R be a relation from A to A given by

R = { (a, a), (a, b), (b, c), (c, d), (c, e), (d, e) }.

The diagraph for the above relation is

Fig. 3

Here the elements of the relation set R are a R a, a R b, b R c, c R d, c R e, and d R e, which appear in the graph as the edges <a, a>, <a, b>, <b, c>, <c, d>, <c, e>, and <d, e>; each edge is a path of length 1 from one vertex to another. Now observe the sequence b R c, c R d (b related to c and c related to d): each contributes a path of length 1, and together they relate b to d through the sequence b R c, c R d, so the path length from b to d is 2 (represented with a dotted line).

So, suppose R is a relation on a set A. A path of length n in R from a to b is a finite sequence

π : a, x1, x2, …, xn−1, b

such that a R x1, x1 R x2, …, xn−1 R b, where x1, x2, …, xn−1 are intermediate nodes in the digraph through which we can reach b from a. This is denoted a Rⁿ b, meaning that the expression generates a sequence starting from a and ending at b through n − 1 intermediate elements.

The notation Rⁿ(x) consists of all vertices that can be reached from x by a path of length n.

The notation R∞(x) consists of all vertices that can be reached from x by some path in R.

From the above digraph of the relation defined on A to A, we can list the paths of different lengths.

R = { (a, a) (a, b), (b, c) (c, d) (c, e) (d, e) }

Paths of length n = 1:

a R a, a R b, b R c, c R d, c R e, d R e

Path length n = 2

a R² a – a R a and a R a
a R² b – a R a and a R b
a R² c – a R b and b R c
b R² e – b R c and c R e
b R² d – b R c and c R d
c R² e – c R d and d R e

Path length n = 3


a R³ a – a R a, a R a and a R a
a R³ b – a R a, a R a and a R b
a R³ c – a R a, a R b and b R c
a R³ d – a R b, b R c and c R d
a R³ e – a R b, b R c and c R e
b R³ e – b R c, c R d and d R e

Path of length n = 4

a R⁴ a – a R a, a R a, a R a and a R a
a R⁴ b – a R a, a R a, a R a and a R b
a R⁴ c – a R a, a R a, a R b and b R c
a R⁴ d – a R a, a R b, b R c and c R d
a R⁴ e – a R a, a R b, b R c and c R e

Note that for n > 3 there is no path of length n from any vertex other than a (only a has a self-loop from which long paths can start).

But if n is sufficiently large, it is difficult to prepare the paths of length n by the above method. In order to make this task easy, we use another method, the matrix method, to calculate R¹, R², …, Rⁿ.

We have already discussed the matrix representation MR of a relation R. With the help of this matrix we can calculate the paths of any length in the relation.

MR² represents the matrix which gives the relation whose paths have length 2. Similarly, MRⁿ is the matrix for paths of length n.

Here MR² = MR ⊙ MR, where MR is the original relation matrix with paths of length 1. Likewise

MR³ = MR ⊙ MR ⊙ MR

MRⁿ = MR ⊙ MR ⊙ … ⊙ MR (n times)

where the operator ⊙ is the Boolean matrix product, which compares entries at corresponding relative positions. Guidelines for preparing MRⁿ:

1) Let MR be the relation matrix with n rows and n columns, giving paths of length one; here MR = [mij], 1 ≤ i ≤ n and 1 ≤ j ≤ n.

2) Let MR² be the relation matrix with paths of length 2, MR² = [nij], 1 ≤ i ≤ n and 1 ≤ j ≤ n.

3) In MR², each entry is

nij = 1 if and only if the i-th row and the j-th column of MR contain a 1 in the same relative position;
nij = 0 if there is no 1 in any common relative position of the i-th row and j-th column of MR.

4) MR³ = MR² ⊙ MR, and each entry of MR³ = [nij] is

nij = 1 if there is a 1 in the same relative position of the i-th row of MR² and the j-th column of MR;
nij = 0 otherwise.

5) Repeat the step for MR⁴, MR⁵, …, MRⁿ; that is, MRᵏ = MRᵏ⁻¹ ⊙ MR.

Example 1

Let A = { a, b, c, d, e } and R = { (a, a), (a, b), (b, c), (c, d), (c, e), (d, e) }.

MR =
1 1 0 0 0
0 0 1 0 0
0 0 0 1 1
0 0 0 0 1
0 0 0 0 0

MR² = MR ⊙ MR:

1 1 0 0 0     1 1 0 0 0     1 1 1 0 0
0 0 1 0 0     0 0 1 0 0     0 0 0 1 1
0 0 0 1 1  ⊙  0 0 0 1 1  =  0 0 0 0 1
0 0 0 0 1     0 0 0 0 1     0 0 0 0 0
0 0 0 0 0     0 0 0 0 0     0 0 0 0 0

Here the 1st row of MR and the 1st column of MR contain a 1 in the same relative position, so MR²(1, 1) = 1. Similarly, the 2nd row of MR and the 4th column of MR contain a 1 in the same relative position (position 3), so the entry in the 2nd row, 4th column of MR² is 1. In this way we can prepare MRⁿ.
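The Boolean product rule of steps 3 and 4 can be sketched directly (our illustration; the function name is an assumption):

```python
def bool_product(X, Y):
    """Boolean product: entry (i, j) is 1 iff the i-th row of X and the
    j-th column of Y share a 1 in the same relative position."""
    n = len(X)
    return [[int(any(X[i][k] and Y[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

# M_R for R = {(a,a), (a,b), (b,c), (c,d), (c,e), (d,e)}
MR = [[1, 1, 0, 0, 0],
      [0, 0, 1, 0, 0],
      [0, 0, 0, 1, 1],
      [0, 0, 0, 0, 1],
      [0, 0, 0, 0, 0]]

MR2 = bool_product(MR, MR)   # paths of length 2
```

The entry MR2[0][2] = 1, for instance, records the length-2 path a → b → c, i.e., a R² c.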

8) Computer Representation of Relations.

A relation R can be represented in a computer by two methods: matrix representation and linked-list representation. We discuss these same representations in graph theory. Any given relation R can be represented as a digraph, as we have discussed, and this digraph can in turn be represented as a matrix; in graph theory we call it the adjacency matrix. Preparation of the adjacency matrix is as given below.

1) Let R be a relation defined on a set A with n(A) = m, so we have an m × m matrix MR.

2) Any entry of MR = [mij] is

mij = 1 if there is an edge from the i-th vertex to the j-th vertex, i.e., the i-th element of the set A is related to the j-th element of the set A;
mij = 0 if there is no edge from the i-th vertex to the j-th vertex.

For the given graph

Fig. 5

         a  b  c  d  e
     a   1  1  0  0  0
     b   0  0  1  0  0
MR = c   0  0  0  1  1
     d   0  0  0  0  1
     e   0  0  0  0  0

Adjacency list [ linked list representation]

The digraph of a relation can also be represented as a linked list. A linked list is made of storage cells, each consisting of at least one information field and one pointer field pointing to the next node (storage cell):

INFO FIELD | POINTER is one such organization.

The linked-list representation of the above digraph is


a : a → b \
b : c \
c : d → e \
d : e \
e : \ (nil)

Here the \ in the pointer field indicates that there is no node after it, i.e., the end of the list.
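The adjacency-list idea can be sketched with a dict mapping each vertex to its successor list (our illustration of the representation, not the original storage-cell layout):

```python
# adjacency list for the digraph of R = {(a,a), (a,b), (b,c), (c,d), (c,e), (d,e)}
adj = {
    'a': ['a', 'b'],
    'b': ['c'],
    'c': ['d', 'e'],
    'd': ['e'],
    'e': [],             # nil: no outgoing edges
}

# the adjacency matrix can be rebuilt from the lists
verts = 'abcde'
M = [[1 if v in adj[u] else 0 for v in verts] for u in verts]
```

Rebuilding M this way recovers exactly the matrix MR shown above, so the two representations carry the same information.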

9) Properties of Relations

In many applications to computer science and applied mathematics, we deal with relations on a set A rather than relations from A to B. Moreover, these relations often satisfy certain properties that will be discussed in this section.

Reflexive and Irreflexive Relations

A relation R on a set A is reflexive if (a, a) ∈ R for all a ∈ A, that is, if a R a for all a ∈ A. A relation R on a set A is irreflexive if (a, a) ∉ R for every a ∈ A.

Thus R is reflexive if every element a ∈ A is related to itself, and it is irreflexive if no element is related to itself.

a) Let ∆ = { (a, a) / a ∈ A }, so that ∆ is the relation of equality on the set A. Then ∆ is reflexive, since (a, a) ∈ ∆ for all a ∈ A.

b) Let R = { (a, b) ∈ A X A / a ≠ b }, so that R is the relation of inequality on the set A. Then R is irreflexive, since (a, a) ∉ R for all a ∈ A.

c) Let A = {1, 2, 3}, and let R = {(1, 1), (1, 2)}. Then R is not reflexive since (2, 2) ∉ R and (3, 3) ∉ R. Also, R is not irreflexive, since (1, 1) ∈ R.

d) Let A be a nonempty set. Let R = ∅ ⊆ A X A, the empty relation. Then R is not reflexive, since (a, a) ∉ R for all a ∈ A (the empty set has no elements). However, R is irreflexive.

We can identify a reflexive or irreflexive relation by its matrix as follows. The matrix of a reflexive relation must have all 1's on its main diagonal, while the matrix of an irreflexive relation must have all 0's on its main diagonal.

Similarly, we can characterize the digraph of a reflexive or irreflexive relation as follows. A reflexive relation has a cycle of length 1 at every vertex, while an irreflexive relation has no cycles of length 1. Another useful way of saying the same thing uses the equality relation ∆ on a set A: R is reflexive if and only if ∆ ⊆ R, and R is irreflexive if and only if ∆ ∩ R = ∅.

Finally, we may note that if R is reflexive on a set A, then Dom (R) = Ran (R) = A.

Symmetric, Asymmetric, and Antisymmetric Relations

A relation R on a set A is symmetric if whenever a R b, then b R a. It then follows that R is not symmetric if we have some a and b ∈ A with a R b but not b R a. A relation R on a set A is asymmetric if whenever a R b, then not b R a. It then follows that R is not asymmetric if we have some a and b ∈ A with both a R b and b R a.

A relation R on a set A is antisymmetric if whenever a R b and b R a, then a = b. The contrapositive of this definition is that R is antisymmetric if whenever a ≠ b, then not a R b or not b R a. It follows that R is not antisymmetric if we have a and b in A, a ≠ b, and both a R b and b R a.

Given a relation R, we shall want to determine which properties hold for R. Keep in mind the following remark : A property fails to hold in general if we can find one situation where the property does not hold.

Now we will see how we can solve some problems using these properties. Before doing problems, we clarify some notation:

Z is the set of integers, both positive and negative; Z⁺ is the set of positive integers; Z⁻ is the set of negative integers; R is the set of real numbers.

Example I

Let A= Z, the set of integers, and let

R = { (a, b) ∈ A X A / a < b },

So that R is the relation less than. Is R symmetric, asymmetric, or antisymmetric?

Solution

Symmetry: If a < b, then it is not true that b < a, so R is not symmetric.
Asymmetry: If a < b, then b ≮ a (b is not less than a), so R is asymmetric.
Antisymmetry: If a ≠ b, then we can never have both a < b and b < a, so R is antisymmetric.

Example 2

Let A = {1, 2, 3, 4} and let

R = { (1, 2), (2, 2), (3, 4), (4, 1) }.

Then R is not symmetric, since (1,2) ∈ R, but (2,1) ∉ R. Also, R is not asymmetric, since (2,2) ∈ R. Finally, R is antisymmetric, since if a ≠ b, either (a, b) ∉ R or (b, a) ∉ R.

Example – 3

Let A = Z +, the set of positive integers, and let

R = { (a, b) ∈ A X A / a divides b }.

Is R symmetric, asymmetric, or antisymmetric?

Solution

If a | b, it does not follow that b | a, so R is not symmetric. For example, 2 | 4 but 4 ∤ 2 (2 divides 4, but 4 does not divide 2).

If a = b = 3, say, then a R b and b R a, so R is not asymmetric. If a \ b and b \ a, then a = b, so R is antisymmetric.

We now relate the symmetric, asymmetric, and antisymmetric properties of a relation to properties of its matrix. The matrix MR = [mij] of a symmetric relation satisfies the property that

if mij = 1, then mji = 1.

Moreover, if mij = 0, then mji = 0. Thus MR is a matrix such that each pair of entries, symmetrically placed about the main diagonal, are either both 0 or both 1. It follows that MR = (MR)ᵀ, so that MR is a symmetric matrix.


The matrix MR = [mij] of an asymmetric relation R satisfies the property that

If m ij = 1, then m ji = 0.

If R is asymmetric, it follows that mii = 0 for all i; that is, the main diagonal of the matrix MR consists entirely of 0's. This must be true since the asymmetric property implies that if mii = 1, then mii = 0, which is a contradiction.

Finally, the matrix MR = [mij] of an antisymmetric relation R satisfies the property that if i ≠ j, then mij = 0 or mji = 0.
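The three matrix conditions just stated translate into short checks (our sketch; the function names are assumptions):

```python
def is_symmetric(M):
    """m_ij = m_ji for all i, j."""
    n = len(M)
    return all(M[i][j] == M[j][i] for i in range(n) for j in range(n))

def is_asymmetric(M):
    """m_ij = 1 implies m_ji = 0; forces a zero diagonal."""
    n = len(M)
    return all(not (M[i][j] and M[j][i]) for i in range(n) for j in range(n))

def is_antisymmetric(M):
    """For i != j, m_ij = 0 or m_ji = 0; the diagonal is unrestricted."""
    n = len(M)
    return all(not (M[i][j] and M[j][i])
               for i in range(n) for j in range(n) if i != j)
```

Note that asymmetry is antisymmetry plus a zero main diagonal, matching the discussion above.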

Example 4

Consider the matrices below, each of which is the matrix of a relation, as indicated.

Relations R 1 and R 2 are symmetric since the matrices M R1 and M R2 are symmetric matrices. Relation R 3 is antisymmetric, since no symmetrically situated, off-diagonal positions of M R3 both contain 1’s. Such positions may both have 0’s, however, and the diagonal elements are unrestricted. The relation R 3 is not asymmetric because M R3 has 1’s on the main diagonal.

Relation R 4 has none of the three properties: M R4 is not symmetric. The presence of the 1’s in positions 4, 1 and 1, 4 of M R4 violates both asymmetry and antisymmetry.

Finally, R 5 is antisymmetric but not asymmetric, and R 6 is both asymmetric and antisymmetric.

[Matrices MR1 (a), MR2 (b), MR3 (c), MR4 (d), MR5 (e), MR6 (f)]

We now consider the diagraphs of these three types of relations. If R is an asymmetric relation, then the digraph of R cannot simultaneously have an edge from vertex i to vertex j and an edge from vertex j to vertex i. This is true for any i and j, and in particular if i equals j. Thus there can be no cycles of length 1, and all edges are “one-way streets.”

If R is an antisymmetric relation, then for different vertices i and j there cannot be both an edge from vertex i to vertex j and an edge from vertex j to vertex i.

When i = j, no condition is imposed. Thus there may be cycles of length 1, but again all edges are "one way."


We consider the digraphs of symmetric relations in more detail.

The digraph of a symmetric relation R has the property that if there is an edge from vertex i to vertex j, then there is an edge from vertex j to vertex i. Thus, if two vertices are connected by an edge, they must always be connected in both directions. Because of this, it is possible and quite useful to give a different representation of a symmetric relation. We keep the vertices as they appear in the digraph, but if two vertices a and b are connected by edges in each direction, we replace these two edges with one undirected edge, or a "two-way street." This undirected edge is just a single line without arrows and connects a and b. The resulting diagram will be called the graph of the symmetric relation.

Example 5

Let A = { a, b, c, d, e } and let R be the symmetric relation given by

R = { (a, b), (b, a), (a, d), (c, a), (b, c), (c, b), (b, e), (e, b), (e, d), (d, e), (c, d), (d, c) }

The usual digraph of R is shown in Figure 6(a), while Figure 6(b) shows the graph of R. Note that each undirected edge corresponds to two ordered pairs in the relation R.

(a) (b)

Figure 6

An undirected edge between a and b, in the graph of a symmetric relation R, corresponds to a set {a, b} such that (a, b) ∈ R and (b, a) ∈ R. Sometimes we will also refer to such a set {a, b} as an undirected edge of the relation R and call a and b adjacent vertices.

A symmetric relation R on a set A is called connected if there is a path from any element of A to any other element of A. This simply means that the graph of R is all in one piece. In Figure 7 we show the graphs of two symmetric relations. The graph in Figure 7(a) is connected, whereas that in Figure 7(b) is not.

Transitive Relations

We say that a relation R on a set A is transitive if whenever a R b and b R c, then a R c. It is often convenient to say what it means for a relation to be not transitive. A relation R on A is not transitive if there exist a, b, and c in A so that a R b and b R c, but not a R c. If such a, b, and c do not exist, then R is transitive.


(a) (b)

Figure 7

Eg: 1

Let A = Z, the set of integers, and let R be the relation less than. To see whether R is transitive, we assume that a R b and b R c. Thus a < b and b < c. It then follows that a < c, so a R c. Hence R is transitive.

Eg: 2

Let A = {1, 2, 3, 4 } and let

R = { (1, 2), (1, 3), (4, 2) }.

Is R transitive?

Solution

Since there are no elements a, b, and c in A such that a R b and b R c but not a R c, we conclude that R is transitive.

A relation R is transitive if and only if its matrix M R = [ m ij ] has the property

If m ij = 1 and m jk = 1, then m ik = 1.

The left-hand side of this statement simply means that (MR)² has a 1 in position i, k. Thus the transitivity of R means that if (MR)² has a 1 in any position, then MR must have a 1 in the same position. In particular, if (MR)² = MR, then R is transitive. The converse is not true.

Eg: 3

Let A = {1, 2, 3} and let R be the relation on A whose matrix is

MR =
1 1 1
0 0 1
0 0 1

Show that R is transitive.

Solution

By direct computation, (MR)² = MR; therefore, R is transitive.
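The matrix criterion for transitivity can be sketched as a check that every 1 of the Boolean square also appears in MR (our illustration; the function name is an assumption):

```python
def is_transitive(M):
    """R is transitive iff: whenever the Boolean square of M_R has a 1
    at (i, k), M_R also has a 1 at (i, k)."""
    n = len(M)
    return all(not any(M[i][j] and M[j][k] for j in range(n)) or M[i][k]
               for i in range(n) for k in range(n))
```

For the matrix of Eg. 3 this returns True, while for the "less than by one step" relation {(1, 2), (2, 3)} it returns False, since the length-2 path from 1 to 3 has no matching length-1 edge.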


To see what transitivity means for the digraph of a relation, we translate the definition of transitivity into geometric terms.

If we consider particular vertices a and c, the conditions a R b and b R c mean that there is a path of length 2 in R from a to c; in other words, a R² c. Therefore, we may rephrase the definition of transitivity as follows: If a R² c, then a R c; that is, R² ⊆ R (as subsets of A X A). In other words, if a and c are connected by a path of length 2 in R, then they must be connected by a path of length 1.

We can slightly generalize the foregoing geometric characterization of transitivity as follows.

Theorem 1

A relation R is transitive if and only if it satisfies the following property: If there is a path of length greater than 1 from vertex a to vertex b, there is a path of length 1 from a to b (that is, a is related to b). Algebraically stated, R is transitive if and only if Rⁿ ⊆ R for all n ≥ 1.

It will be convenient to have a restatement of some of these relational properties in terms of R-relative sets. We list these statements without proof.

Theorem 2

Let R be a relation on a set A. Then

a) Reflexivity of R means that a ∈ R (a) for all a in A.

b) Symmetry of R means that a ∈ R (b) if and only if b ∈ R (a).

c) Transitivity of R means that if b ∈ R (a) and c ∈ R (b), then c ∈ R (a).

10 Equivalence Relations

A relation R on a set A is called an equivalence relation if it is reflexive, symmetric, and transitive.

EXAMPLE 1

Let A be the set of triangles in the plane and let R be the relation on A defined as follows:

R = { (a, b) ∈ A × A | a is congruent to b }.

It is easy to see that R is an equivalence relation.

EXAMPLE 2

Let A = {1, 2, 3, 4 } and let

R = { (1, 1), (1, 2), (2, 1), (2, 2), (3, 4), (4, 3), (3, 3), (4, 4) }.

It is easy to verify that R is an equivalence relation.

EXAMPLE 3

Let A = Z, the set of integers, and let R be defined by a R b if and only if a ≤ b. Is R an equivalence relation?

Solution

Since a ≤ a, R is reflexive. If a ≤ b, it need not follow that b ≤ a, so R is not symmetric. Incidentally, R is transitive, since a ≤ b and b ≤ c imply that a ≤ c. We see that R is not an equivalence relation.
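For a finite relation given as a set of ordered pairs, the three defining properties can be checked directly. A small sketch (the function names are ours, not from the text), applied to the relation of Example 2:

```python
def is_reflexive(A, R):
    """Every element of A is related to itself."""
    return all((a, a) in R for a in A)

def is_symmetric(R):
    """Whenever a R b, also b R a."""
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    """Whenever a R b and b R c, also a R c."""
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

def is_equivalence(A, R):
    return is_reflexive(A, R) and is_symmetric(R) and is_transitive(R)

# The relation of Example 2 passes all three tests.
A = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 4), (4, 3), (3, 3), (4, 4)}
```

For infinite sets such as Z, of course, only a proof (as in the examples) can establish the properties; the code applies only to finite relations.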

EXAMPLE 4

Let A = Z and let


R = { (a, b) ∈ A × A | a and b yield the same remainder when divided by 2 }.

In this case, we call 2 the modulus and write a ≡ b (mod 2), read “ a is congruent to b mod 2.”

Show that congruence mod 2 is an equivalence relation.

Solution

First, clearly a ≡ a (mod 2). Thus R is reflexive.

Second, if a ≡ b (mod 2), then a and b yield the same remainder when divided by 2, so b ≡ a (mod 2). R is symmetric.

Finally, suppose that a ≡ b (mod 2) and b ≡ c (mod 2). Then a, b, and c yield the same remainder when divided by 2. Thus, a ≡ c (mod 2). Hence congruence mod 2 is an equivalence relation.

EXAMPLE 5

Let A = Z and let n ∈ Z +. We generalize the relation defined in Example 4 as follows. Let

R = { (a, b) ∈ A × A | a ≡ b (mod n) }.

That is, a ≡ b (mod n) if and only if a and b yield the same remainder when divided by n. Proceeding exactly as in Example 4, we can show that congruence mod n is an equivalence relation.

We note that if a ≡ b (mod n), then a = qn + r and b = tn + r, so a − b = (q − t)n is a multiple of n. Thus, a ≡ b (mod n) if and only if n | (a − b).
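Both formulations of congruence mod n — equal remainders, and n dividing a − b — can be compared computationally. A small sketch (the function names are ours):

```python
def congruent_remainder(a, b, n):
    """a ≡ b (mod n) defined via equal remainders on division by n.
    Python's % always gives a remainder in 0..n-1 for positive n,
    so this is correct for negative integers as well."""
    return a % n == b % n

def congruent_divides(a, b, n):
    """The equivalent formulation: n divides a - b."""
    return (a - b) % n == 0

# The two formulations agree on a sample of integers.
samples = range(-10, 11)
agree = all(congruent_remainder(a, b, 5) == congruent_divides(a, b, 5)
            for a in samples for b in samples)
```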

Partitions of a Set

A partition or quotient set of a nonempty set A is a collection P of nonempty subsets of A such that:

1) Each element of A belongs to one of the sets in P.

2) If A1 and A2 are distinct elements of P, then A1 ∩ A2 = ∅.

For example, a set may be divided into five blocks, giving P = { A1, A2, A3, A4, A5 }, where each of A1, A2, . . ., A5 is an individual set.

Eg : 1

Let A = { 1, 2, 3, 4, 5, 6 }

A1 = { 1, 3 }, A2 = { 2, 5, 6 }, A3 = { 4 }


Here P = { A1, A2, A3 }.

• Equivalence Relation and Partitions

The following result shows that if P is a partition of a set A then P can be used to construct an equivalence relation on A.

Theorem 1

Let P be a partition of a set A. Recall that the sets in P are called the blocks of P. Define the relation R on A as follows: a R b if and only if a and b are members of the same block.

Then R is an equivalence relation on A.

Proof

(a) If a ∈ A, then clearly a is in the same block as itself, so a R a.

(b) If a R b, then a and b are in the same block; so b R a.

(c) If a R b and b R c, then a, b, and c must all lie in the same block of P. Thus a R c.

Since R is reflexive, symmetric, and transitive, R is an equivalence relation. R will be called the equivalence relation determined by P.

EXAMPLE 6

Let A = { 1, 2, 3, 4 } and consider the partition P = { {1, 2, 3}, {4} } of A. Find the equivalence relation R on A determined by P.

Solution

The blocks of P are { 1, 2, 3 } and { 4 }. Each element in a block is related to every other element in the same block and only to those elements. Thus, in this case,

R = { (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3), (4, 4) }.
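The construction of Theorem 1 can be carried out mechanically: relate two elements exactly when they lie in a common block. A sketch (the function name is ours) using the partition of Example 6:

```python
def relation_from_partition(P):
    """Build the equivalence relation determined by a partition P:
    a R b iff a and b lie in the same block."""
    return {(a, b) for block in P for a in block for b in block}

# The partition of Example 6.
P = [{1, 2, 3}, {4}]
R = relation_from_partition(P)
# R contains all 9 pairs on {1, 2, 3} plus (4, 4), matching Example 6.
```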

If P is a partition of A and R is the equivalence relation determined by P, then the blocks of P can easily be described in terms of R. If A1 is a block of P and a ∈ A1, we see by definition that A1 consists of all elements x of A with a R x. That is, A1 = R(a). Thus the partition P is { R(a) | a ∈ A }. In words, P consists of all distinct R-relative sets that arise from elements of A. For instance, in Example 6 the blocks { 1, 2, 3 } and { 4 } can be described, respectively, as R(1) and R(4). Of course, { 1, 2, 3 } could also be described as R(2) or R(3), so this way of representing the blocks is not unique.

The foregoing construction of equivalence relations from partitions is very simple. We might be tempted to believe that few equivalence relations could be produced in this way. The fact is, as we will now show, that all equivalence relations on A can be produced from partitions.

Lemma 1

Let R be an equivalence relation on a set A, and let a ∈ A and b ∈ A. Then a R b if and only if R (a) = R (b).

Proof

First suppose that R (a) = R (b). Since R is reflexive, b ∈ R (b); therefore, b ∈ R (a), so a R b.


• A lemma is a theorem whose main purpose is to aid in proving some other theorem.

Conversely, suppose that a R b. Then note that

(1) b ∈ R(a), by definition. Therefore, since R is symmetric,

(2) a ∈ R(b), by Theorem 2(b) of Section 9.

We must show that R (a) = R (b). First, choose an element x ∈ R (b). Since R is transitive, the fact that x ∈ R (b), together with (1), implies by Theorem 2 (c) of Section 4.4 that x ∈ R (a). Thus R (b) ⊆ R (a). Now choose y ∈ R (a). This fact and (2) imply, as before, that y ∈ R (b). Thus R (a) ⊆ R (b), so we must have R (a) = R (b).

Note the two – part structure of the lemma’s proof. Because we want to prove a biconditional, p ⇔ q, we must show q ⇒ p as well as p ⇒ q.

We now prove our main result.

Theorem 2

Let R be an equivalence relation on A, and let P be the collection of all distinct relative sets R (a) for a in A. Then P is a partition of A, and R is the equivalence relation determined by P.

Proof

According to the definition of a partition, we must show the following two properties:

(a) Every element of A belongs to some relative set.

(b) If R (a) and R (b) are not identical, then R (a) ∩ R (b) = ∅.

Now property (a) is true, since a ∈ R (a) by reflexivity of R. To show property (b) we prove the following equivalent statement:

If R (a) ∩ R (b) ≠ ∅, then R (a) = R (b).

To prove this, we assume that c ∈ R (a) ∩ R (b). Then a R c and b R c.

Since R is symmetric, we have c R b. Then a R c and c R b, so, by transitivity of R, a R b. Lemma 1 then tells us that R(a) = R(b). We have now proved that P is a partition. By Lemma 1 we see that a R b if and only if a and b belong to the same block of P. Thus P determines R, and the theorem is proved.

Note the use of the contrapositive in this proof.

If R is an equivalence relation on A, then the sets R (a) are traditionally called equivalence classes of R. Some authors denote the class R (a) by [a].

The partition P constructed in Theorem 2 therefore consists of all equivalence classes of R, and this partition will be denoted by A/R. Recall that partitions of A are also called quotient sets of A, and the notation A/R reminds us that P is the quotient set of A that is constructed from and determines R.

EXAMPLE 7

Let R be the relation defined in Example 2. Determine A / R.


Solution

From Example 2 we have R (1) = { 1, 2 } = R (2). Also, R (3) = { 3, 4 } = R (4). Hence A / R = { { 1, 2 }, { 3, 4 } }.

EXAMPLE 8

Let R be the equivalence relation defined in Example 4. Determine A/R.

Solution

R(0) = { . . ., −4, −2, 0, 2, 4, . . . }, the set of even integers, since each yields a remainder of 0 when divided by 2.

R(1) = { . . ., −5, −3, −1, 1, 3, 5, 7, . . . }, the set of odd integers, since each yields a remainder of 1 when divided by 2. Hence A/R consists of the set of even integers and the set of odd integers.

From Examples 7 and 8 we can extract a general procedure for determining the partition A/R for A finite or countable. The procedure is as follows:

Step 1: Choose any element of A and compute the equivalence class R (a).

Step 2: If R (a) ≠ A, choose an element b, not included in R (a), and compute the equivalence class R (b).

Step 3: If A is not the union of previously computed equivalence classes, then choose an element x of A that is not in any of those equivalence classes and compute R (x).

Step 4: Repeat step 3 until all elements of A are included in the computed equivalence classes. If A is countable, this process could continue indefinitely. In that case, continue until a pattern emerges that allows you to describe or give a formula for all equivalence classes.
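For a finite A, the four-step procedure above can be sketched directly in code (the function name is ours); applied to the relation of Example 7 it recovers the blocks {1, 2} and {3, 4}:

```python
def quotient_set(A, R):
    """Compute A/R: repeatedly pick an element not yet covered
    (steps 1 and 3), form its equivalence class R(a) (step 2),
    and stop once every element is covered (step 4)."""
    remaining = set(A)
    classes = []
    while remaining:
        a = remaining.pop()
        cls = {x for x in A if (a, x) in R}   # the class R(a)
        classes.append(cls)
        remaining -= cls
    return classes

# The equivalence relation of Example 2 / Example 7.
A = {1, 2, 3, 4}
R = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 4), (4, 3), (3, 3), (4, 4)}
```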


Operations on Relations

Now that we have investigated the classification of relations by properties they do or do not have, we next define some operations on relations.

Let R and S be relations from a set A to a set B. Then, if we remember that R and S are simply subsets of A × B, we can use set operations on R and S. For example, the complement of R, written R̄, is referred to as the complementary relation. It is, of course, a relation from A to B that can be expressed simply in terms of R:

a R̄ b if and only if (a, b) ∉ R.

We can also form the intersection R ∩ S and the union R ∪ S of the relations R and S. In relational terms, we see that a R ∩ S b means that a R b and a S b. All our set-theoretic operations can be used in this way to produce new relations.

A different type of operation on a relation R from A to B is the formation of the inverse, usually written R⁻¹. The relation R⁻¹ is a relation from B to A (the reverse order from R) defined by

b R -1 a if and only if a R b.

It is clear from this that (R -1)-1 = R. It is not hard to see that Dom (R -1) = Ran (R) and Ran (R -1) = Dom (R). We leave these simple facts for the reader to check.


EXAMPLE 1

Let A = { 1, 2, 3, 4 } and B = {a, b, c } Let

R = { (1, a), (1, b), (2, b), (2, c), (3, b), (4, a) }

And S = { (1, b), (2, c), (3, b), (4, b) }

Compute (a) R̄; (b) R ∩ S; (c) R ∪ S; and (d) R⁻¹.

Solution

a) We first find

A X B = { (1, a), (1, b), (1, c), (2, a), (2, b), (2, c), (3, a), (3, b), (3, c), (4, a), (4, b), (4, c) }.

Then the complement of R in A × B is

R̄ = { (1, c), (2, a), (3, a), (3, c), (4, b), (4, c) }.

(b) We have R ∩ S = { (1, b), (3, b), (2, c) }

(c) We have

R ∪ S = { (1, a), (1, b), (2, b), (2, c), (3, b), (4, a), (4, b) }.

(d) Since (x, y) ∈ R -1 if and only if (y, x ) ∈ R, we have

R-1 = { ( a, 1), ( b, 1), ( b, 2), ( c, 2), ( b, 3), ( a, 4) }
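With relations stored as sets of pairs, all four operations of this example are one-line set computations. A sketch (variable names are ours) reproducing parts (a)–(d):

```python
A = {1, 2, 3, 4}
B = {'a', 'b', 'c'}
R = {(1, 'a'), (1, 'b'), (2, 'b'), (2, 'c'), (3, 'b'), (4, 'a')}
S = {(1, 'b'), (2, 'c'), (3, 'b'), (4, 'b')}

# (a) the complement, taken relative to A x B
complement = {(x, y) for x in A for y in B} - R
# (b), (c) intersection and union are ordinary set operations
intersection = R & S
union = R | S
# (d) the inverse reverses every pair
inverse = {(y, x) for (x, y) in R}
```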

EXAMPLE 2

Let A = R. Let R be the relation ≤ on A and let S be ≥. Then the complement of R is the relation >, since a R̄ b means that a ≤ b is false, that is, a > b. Similarly, the complement of S is <. On the other hand, R⁻¹ = S, since for any numbers a and b,

a R⁻¹ b if and only if b R a if and only if b ≤ a if and only if a ≥ b.

Similarly, we have S⁻¹ = R. Also, we note that R ∩ S is the relation of equality, since a (R ∩ S) b if and only if a ≤ b and a ≥ b if and only if a = b. Since, for any a and b, a ≤ b or a ≥ b must hold, we see that R ∪ S = A × A; that is, R ∪ S is the universal relation in which any a is related to any b.

EXAMPLE 3

Let A = { a, b, c, d, e } and let R and S be two relations on A whose corresponding digraphs are shown in Figure. Then the reader can verify the following facts.

R̄ = { (a, a), (b, b), (a, c), (b, a), (c, b), (c, d), (c, e), (c, a), (d, b), (d, a), (d, e), (e, b), (e, a), (e, d), (e, c) }

R-1 = { (b, a), (e, b), (c, c), (c, d), (d, d), (d, b), (c, b), (d, a), (e, e), (e, a) }

R ∩ S = { (a, b), (b, e), (c, c) }.

62/90 Dharmendra Mishra

Figure 7: The digraphs of R (left) and S (right).

EXAMPLE 4

Let A = { 1, 2, 3 } and let R and S be relations on A. Suppose that the matrices of R and S are

MR = | 1 0 1 |     MS = | 0 1 1 |
     | 0 1 1 |          | 1 1 0 |
     | 0 0 0 |          | 0 1 0 |

Then we can verify that

M R̄ = | 0 1 0 |     M R⁻¹ = | 1 0 0 |
      | 1 0 0 |            | 0 1 0 |
      | 1 1 1 |            | 1 1 0 |

M R∩S = | 0 0 1 |     M R∪S = | 1 1 1 |
        | 0 1 0 |            | 1 1 1 |
        | 0 0 0 |            | 0 1 0 |

Example 4 illustrates some general facts. Recalling the operations on Boolean matrices, we can show that if R and S are relations on a set A, then

M R∩S = MR ∧ MS,
M R∪S = MR ∨ MS,
M R⁻¹ = (MR)ᵀ.

Moreover, if M is a Boolean matrix, we define the complement M̄ of M as the matrix obtained from M by replacing every 1 in M by a 0 and every 0 by a 1. Thus, if

M = | 1 0 0 |
    | 0 1 1 |
    | 1 0 0 |

then

M̄ = | 0 1 1 |
    | 1 0 0 |
    | 0 1 1 |

We can also show that if R is a relation on a set A, then M R̄ = M̄R; that is, the matrix of the complementary relation is the complement of the matrix of R.

We know that a symmetric relation is a relation R such that MR = (MR)ᵀ, and since (MR)ᵀ = M R⁻¹, we see that R is symmetric if and only if R = R⁻¹.

We now prove a few useful properties about combinations of relations.

Theorem 1

Suppose that R and S are relations from A to B.

(a) If R ⊆ S, then R⁻¹ ⊆ S⁻¹.
(b) If R ⊆ S, then S̄ ⊆ R̄.
(c) (R ∩ S)⁻¹ = R⁻¹ ∩ S⁻¹ and (R ∪ S)⁻¹ = R⁻¹ ∪ S⁻¹.
(d) The complement of R ∩ S is R̄ ∪ S̄, and the complement of R ∪ S is R̄ ∩ S̄.

Proof

Parts (b) and (d) are special cases of general set properties. We now prove part (a). Suppose that R ⊆ S and let (a, b) ∈ R⁻¹. Then (b, a) ∈ R, so (b, a) ∈ S. This, in turn, implies that (a, b) ∈ S⁻¹. Since each element of R⁻¹ is in S⁻¹, we are done.

We next prove part (c). For the first part, suppose that (a, b) ∈ (R ∩ S)⁻¹. Then (b, a) ∈ R ∩ S, so (b, a) ∈ R and (b, a) ∈ S. This means that (a, b) ∈ R⁻¹ and (a, b) ∈ S⁻¹, so (a, b) ∈ R⁻¹ ∩ S⁻¹. The converse containment can be proved by reversing the steps. A similar argument works to show that (R ∪ S)⁻¹ = R⁻¹ ∪ S⁻¹.

The relations R̄ and R⁻¹ can be used to check whether R has the properties of relations that we presented earlier. For instance, we saw that R is symmetric if and only if R = R⁻¹. Here are some other connections between operations on relations and properties of relations.

Theorem 2

Let R and S be relations on a set A.

(a) If R is reflexive, so is R⁻¹.
(b) If R and S are reflexive, then so are R ∩ S and R ∪ S.
(c) R is reflexive if and only if R̄ is irreflexive.


Proof

Let ∆ be the equality relation on A. We know that R is reflexive if and only if ∆ ⊆ R. Clearly, ∆ = ∆⁻¹, so if ∆ ⊆ R, then ∆ = ∆⁻¹ ⊆ R⁻¹ by Theorem 1, so R⁻¹ is also reflexive. This proves part (a). To prove part (b), we note that if ∆ ⊆ R and ∆ ⊆ S, then ∆ ⊆ R ∩ S and ∆ ⊆ R ∪ S. To show part (c), we note that a relation S is irreflexive if and only if S ∩ ∆ = ∅. Then R is reflexive if and only if ∆ ⊆ R if and only if ∆ ∩ R̄ = ∅ if and only if R̄ is irreflexive.

EXAMPLE 5

Let A = { 1, 2, 3 } and consider the two reflexive relations

R = { (1, 1), (1, 2), (1, 3), (2, 2), (3, 3) }

And

S = { (1, 1), (1, 2), (2, 2), (3, 2), (3, 3) }.

Then

(a) R⁻¹ = { (1, 1), (2, 1), (3, 1), (2, 2), (3, 3) }; R and R⁻¹ are both reflexive.
(b) R̄ = { (2, 1), (2, 3), (3, 1), (3, 2) } is irreflexive, while R is reflexive.
(c) R ∩ S = { (1, 1), (1, 2), (2, 2), (3, 3) } and R ∪ S = { (1, 1), (1, 2), (1, 3), (2, 2), (3, 2), (3, 3) } are both reflexive.

Theorem 3

Let R be a relation on a set A. Then

(a) R is symmetric if and only if R = R⁻¹.
(b) R is antisymmetric if and only if R ∩ R⁻¹ ⊆ ∆.
(c) R is asymmetric if and only if R ∩ R⁻¹ = ∅.

Theorem 4

Let R and S be relations on A.

(a) If R is symmetric, so are R⁻¹ and R̄.
(b) If R and S are symmetric, so are R ∩ S and R ∪ S.

Proof

If R is symmetric, R = R⁻¹ and thus (R⁻¹)⁻¹ = R = R⁻¹, which means that R⁻¹ is also symmetric. Also, (a, b) ∈ (R̄)⁻¹ if and only if (b, a) ∈ R̄ if and only if (b, a) ∉ R if and only if (a, b) ∉ R⁻¹ = R if and only if (a, b) ∈ R̄, so R̄ is symmetric and part (a) is proved. The proof of part (b) follows immediately from Theorem 1(c).

EXAMPLE 6

Let A = { 1, 2, 3 } and consider the symmetric relations

R = { (1,1), (1, 2), (2, 1), (1, 3), (3, 1) }


And

S = { (1, 1), (1, 2), (2, 1), (2, 2), (3, 3) }.

Then

(a) R⁻¹ = { (1, 1), (2, 1), (1, 2), (3, 1), (1, 3) } and R̄ = { (2, 2), (2, 3), (3, 2), (3, 3) }; R⁻¹ and R̄ are symmetric.
(b) R ∩ S = { (1, 1), (1, 2), (2, 1) } and R ∪ S = { (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (3, 1), (3, 3) }, which are both symmetric.

Theorem 5

Let R and S be relations on A.

(a) (R ∩ S)² ⊆ R² ∩ S².
(b) If R and S are transitive, so is R ∩ S.
(c) If R and S are equivalence relations, so is R ∩ S.

Proof

We prove part (a) geometrically. We have a (R ∩ S)² b if and only if there is a path of length 2 from a to b in R ∩ S. Both edges of this path lie in R and in S, so a R² b and a S² b, which implies that a (R² ∩ S²) b. To show part (b), recall from Section 4.4 that a relation T is transitive if and only if T² ⊆ T. If R and S are transitive, then R² ⊆ R and S² ⊆ S, so (R ∩ S)² ⊆ R² ∩ S² [by part (a)] ⊆ R ∩ S, so R ∩ S is transitive. We next prove part (c). Relations R and S are each reflexive, symmetric, and transitive. The same properties hold for R ∩ S by Theorems 2(b), 4(b), and 5(b), respectively. Hence R ∩ S is an equivalence relation.

EXAMPLE 7

Let R and S be equivalence relations on a finite set A, and let A/R and A/S be the corresponding partitions (see Section 4.5). Since R ∩ S is an equivalence relation, it corresponds to a partition A/(R ∩ S). We now describe A/(R ∩ S) in terms of A/R and A/S. Let W be a block of A/(R ∩ S) and suppose that a and b belong to W. Then a (R ∩ S) b, so a R b and a S b. Thus a and b belong to the same block, say X, of A/R and to the same block, say Y, of A/S. This means that W ⊆ X ∩ Y. The steps in this argument are reversible; therefore, W = X ∩ Y. Thus we can directly compute the partition A/(R ∩ S) by forming all possible intersections of blocks in A/R with blocks in A/S.

Closures

If R is a relation on a set A, it may well happen that R lacks some of the important relational properties discussed in Section 4.4, especially reflexivity, symmetry, and transitivity. If R does not possess a particular property, we may wish to add pairs to R until we get a relation that does have the required property. Naturally, we want to add as few new pairs as possible, so what we need to find is the smallest relation R1 on A that contains R and possesses the property we desire. Sometimes R1 does not exist. If a relation such as R1 does exist, we call it the closure of R with respect to the property in question.

EXAMPLE 8

Suppose that R is a relation on a set A, and R is not reflexive. This can only occur because some pairs of the diagonal relation ∆ are not in R. Thus R1 = R ∪ ∆ is the smallest reflexive relation on A containing R; that is, the reflexive closure of R is R ∪ ∆.


EXAMPLE 9

Suppose now that R is a relation on A that is not symmetric. Then there must exist pairs (x, y) in R such that (y, x) is not in R. Of course, (y, x) ∈ R⁻¹, so if R is to be symmetric we must add all pairs from R⁻¹; that is, we must enlarge R to R ∪ R⁻¹. Clearly, (R ∪ R⁻¹)⁻¹ = R ∪ R⁻¹, so R ∪ R⁻¹ is the smallest symmetric relation containing R; that is, R ∪ R⁻¹ is the symmetric closure of R.

If A = { a, b, c, d } and R = { (a, b), (b, c), (a, c), (c, d) }, then R⁻¹ = { (b, a), (c, b), (c, a), (d, c) }, so the symmetric closure of R is

R ∪ R -1 = { (a, b), (b, a), (b, c), (c, b), (a, c), (c, a), (c, d), (d, c) }.
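The symmetric closure is a one-line computation on a set of pairs. A sketch (the function name is ours) using the relation of Example 9:

```python
def symmetric_closure(R):
    """Smallest symmetric relation containing R: R together with R^-1."""
    return R | {(b, a) for (a, b) in R}

# The relation of Example 9; its closure is the eight-pair set above.
R = {('a', 'b'), ('b', 'c'), ('a', 'c'), ('c', 'd')}
closure = symmetric_closure(R)
```

Applying the closure twice changes nothing, as a closure operation should behave.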

The symmetric closure of a relation R is very easy to visualize geometrically. All edges in the digraph of R become “two-way streets” in R ∪ R⁻¹. Thus the digraph of the symmetric closure of R is simply the digraph of R with all edges made bidirectional. We show in Figure 8(a) the digraph of the relation R of Example 9. Figure 8(b) shows the digraph of the symmetric closure R ∪ R⁻¹.

Figure 8: (a) the digraph of R; (b) the digraph of R ∪ R⁻¹.

The transitive closure of a relation R is the smallest transitive relation containing R. We will discuss the transitive closure in the next section.

Composition

Now suppose that A, B, and C are sets, R is a relation from A to B, and S is a relation from B to C. We can then define a new relation, the composition of R and S, written S ∘ R. The relation S ∘ R is a relation from A to C and is defined as follows. If a is in A and c is in C, then a (S ∘ R) c if and only if for some b in B, we have a R b and b S c. In other words, a is related to c by S ∘ R if we can get from a to c in two stages: first to an intermediate vertex b by relation R and then from b to c by relation S. The relation S ∘ R might be thought of as “S following R,” since it represents the combined effect of two relations, first R, then S.

EXAMPLE 10

Let A = { 1, 2, 3, 4 }, R = { (1, 2), (1, 1), (1, 3), (2, 4), (3, 2) }, and S = { (1, 4), (1, 3), (2, 3), (3, 1), (4, 1) }. Since (1, 2) ∈ R and (2, 3) ∈ S, we must have (1, 3) ∈ S ∘ R. Similarly, since (1, 1) ∈ R and (1, 4) ∈ S, we see that (1, 4) ∈ S ∘ R. Proceeding in this way, we find that S ∘ R = { (1, 4), (1, 3), (1, 1), (2, 1), (3, 3) }.
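The definition translates directly into code: collect (a, c) whenever some b links them. A sketch (the function name is ours) that reproduces Example 10:

```python
def compose(S, R):
    """S o R: a (S o R) c iff a R b and b S c for some b.
    Note the order: R is applied first, then S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

# The relations of Example 10.
R = {(1, 2), (1, 1), (1, 3), (2, 4), (3, 2)}
S = {(1, 4), (1, 3), (2, 3), (3, 1), (4, 1)}
# compose(S, R) reproduces the set found in Example 10.
```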

The following result shows how to compute relative sets for the composition of two relations.

Theorem 7

Let R be a relation from A to B and let S be a relation from B to C. Then, if A 1 is any subset of A, we have

(S ° R) (A 1) = S (R (A 1) ). (1)

67/90 Dharmendra Mishra

Proof

If an element z ∈ C is in (S ∘ R)(A1), then x (S ∘ R) z for some x in A1. By the definition of composition, this means that x R y and y S z for some y in B. Thus y ∈ R(x), so z ∈ S(R(x)). Since {x} ⊆ A1, Theorem 1(a) of Section 4.4 tells us that S(R(x)) ⊆ S(R(A1)). Hence z ∈ S(R(A1)), so (S ∘ R)(A1) ⊆ S(R(A1)).

Conversely, suppose that z ∈ S(R(A1)). Then z ∈ S(y) for some y in R(A1) and, similarly, y ∈ R(x) for some x in A1. This means that x R y and y S z, so x (S ∘ R) z. Thus z ∈ (S ∘ R)(A1), so S(R(A1)) ⊆ (S ∘ R)(A1). This proves the theorem.

EXAMPLE 11

Let A = {a, b, c } and let R and S be relations on A whose matrices are

MR = | 1 0 1 |     MS = | 1 0 0 |
     | 1 1 1 |          | 0 1 1 |
     | 0 1 0 |          | 1 0 1 |

We see from the matrices that

(a, a) ∈ R and (a, a) ∈ S, so (a, a) ∈ S ∘ R;
(a, c) ∈ R and (c, a) ∈ S, so (a, a) ∈ S ∘ R;
(a, c) ∈ R and (c, c) ∈ S, so (a, c) ∈ S ∘ R.

It is easily seen that (a, b) ∉ S ∘ R since, if we had (a, x) ∈ R and (x, b) ∈ S, then matrix MR tells us that x would have to be a or c; but matrix MS tells us that neither (a, b) nor (c, b) is an element of S.

We see that the first row of M S∘R is 1 0 1. The reader may show by similar analysis that

M S∘R = | 1 0 1 |
        | 1 1 1 |
        | 0 1 1 |

We note that M S∘R = MR ⊙ MS (verify this).

Example 11 illustrates a general and useful fact. Let A, B, and C be finite sets with n, p, and m elements, respectively, let R be a relation from A to B, and let S be a relation from B to C. Then R and S have Boolean matrices MR and MS with respective sizes n × p and p × m. Thus MR ⊙ MS can be computed, and it equals M S∘R. To see this, let A = {a1, . . ., an}, B = {b1, . . ., bp}, and C = {c1, . . ., cm}. Also, suppose that MR = [rij], MS = [sij], and M S∘R = [tij]. Then tij = 1 if and only if (ai, cj) ∈ S ∘ R, which means that for some k, (ai, bk) ∈ R and (bk, cj) ∈ S. In other words, rik = 1 and skj = 1 for some k between 1 and p. This condition is identical to the condition needed for MR ⊙ MS to have a 1 in position i, j, and thus M S∘R and MR ⊙ MS are equal.
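The identity M S∘R = MR ⊙ MS can be checked computationally. A sketch (the helper names are ours) that rebuilds the matrices of Example 10 and forms the Boolean product:

```python
def bool_product(M, N):
    """Boolean matrix product M (.) N: entry (i, j) is 1 iff
    M[i][k] = 1 and N[k][j] = 1 for some k."""
    p = len(N)
    return [[int(any(M[i][k] and N[k][j] for k in range(p)))
             for j in range(len(N[0]))]
            for i in range(len(M))]

def matrix_of(R, rows, cols):
    """Boolean matrix of a relation R between two ordered index lists."""
    return [[int((a, b) in R) for b in cols] for a in rows]

# The relations of Example 10 on A = {1, 2, 3, 4}.
R = {(1, 2), (1, 1), (1, 3), (2, 4), (3, 2)}
S = {(1, 4), (1, 3), (2, 3), (3, 1), (4, 1)}
V = [1, 2, 3, 4]
M_SR = bool_product(matrix_of(R, V, V), matrix_of(S, V, V))
# M_SR is the matrix of S o R; its 1s give exactly the pairs of S o R.
```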

In the special case where R and S are equal, we have S ∘ R = R² and M S∘R = M R² = MR ⊙ MR, as was shown earlier.


EXAMPLE 12

Let us redo Example 10 using matrices. We see that

MR = | 1 1 1 0 |     MS = | 0 0 1 1 |
     | 0 0 0 1 |          | 0 0 1 0 |
     | 0 1 0 0 |          | 1 0 0 0 |
     | 0 0 0 0 |          | 1 0 0 0 |

Then

MR ⊙ MS = | 1 0 1 1 |
          | 1 0 0 0 |
          | 0 0 1 0 |
          | 0 0 0 0 |

so

S ∘ R = { (1, 1), (1, 3), (1, 4), (2, 1), (3, 3) },

as we found before. In cases where the number of pairs in R and S is large, the matrix method is much more reliable.

Theorem 7

Let A, B, C, and D be sets, R a relation from A to B, S a relation from B to C and T a relation from C to D. Then

T ∘ (S ∘ R) = (T ∘ S) ∘ R.

Proof

The relations R, S, and T are determined by their Boolean matrices MR, MS, and MT, respectively. As we showed after Example 11, the matrix of a composition is the Boolean matrix product; that is, M S∘R = MR ⊙ MS. Thus

M T∘(S∘R) = M S∘R ⊙ MT = (MR ⊙ MS) ⊙ MT.

69/90 Dharmendra Mishra

Similarly,

M (T∘S)∘R = MR ⊙ M T∘S = MR ⊙ (MS ⊙ MT).

Since Boolean matrix multiplication is associative [see Exercise 37 of Section 1.5], we must have

(MR ⊙ MS) ⊙ MT = MR ⊙ (MS ⊙ MT), and therefore

M T∘(S∘R) = M (T∘S)∘R.

Then

T ∘ (S ∘ R) = (T ∘ S) ∘ R,

since these relations have the same matrices.

The proof illustrates the advantages of having several ways to represent a relation. Here using the matrix of the relation produces a simple proof.

In general, R ∘ S ≠ S ∘ R, as shown in the following example.

EXAMPLE 13

Let A = { a, b }, R = { (a, a), (b, a), (b, b) }, and S = { (a, b), (b, a), (b, b) }. Then S ∘ R = { (a, b), (b, a), (b, b) }, while R ∘ S = { (a, a), (a, b), (b, a), (b, b) }.

Theorem 8

Let A, B, and C be sets, R a relation from A to B, and S a relation from B to C. Then (S ∘ R)⁻¹ = R⁻¹ ∘ S⁻¹.

Proof

Let c ∈ C and a ∈ A. Then (c, a) ∈ (S ∘ R)⁻¹ if and only if (a, c) ∈ S ∘ R, that is, if and only if there is a b ∈ B with (a, b) ∈ R and (b, c) ∈ S. Finally, this is equivalent to the statement that (c, b) ∈ S⁻¹ and (b, a) ∈ R⁻¹; that is, (c, a) ∈ R⁻¹ ∘ S⁻¹.




Graph Theory

A graph is one of the nonlinear data structures. There are two types of graphs: directed and undirected. In an undirected graph, no direction is specified for the edges (each edge can be traversed in both directions), whereas in a directed graph every edge has a specified direction.

Here we discuss these in detail. First we discuss undirected graphs, then directed graphs.

Undirected Graph

GRAPHS AND MULTIGRAPHS

A graph G consists of two things:

(i) A set V = V(G) whose elements are called vertices, points, or nodes of G.
(ii) A set E = E(G) of unordered pairs of distinct vertices called edges of G.

We denote such a graph by G (V, E) when we want to emphasize the two parts of G.

Vertices u and v are said to be adjacent if there is an edge e = {u, v}. In such a case, u and v are called the endpoints of e, and e is said to connect u and v. Also, the edge e is said to be incident on each of its endpoints u and v.

Graphs are pictured by diagrams in the plane in a natural way. Specifically, each vertex v in V is represented by a dot (or small circle), and each edge e = {v 1, v 2} is represented by a curve which connects its endpoints v 1 and v 2. For example, Fig 1 (a) represents the graph G (V, E) where:

(i) V consists of vertices A, B, C, D.
(ii) E consists of edges e1 = {A, B}, e2 = {B, C}, e3 = {C, D}, e4 = {A, C}, e5 = {B, D}.


In fact, we will usually denote a graph by drawing its diagram rather than explicitly listing its vertices and edges.

Fig. 1: (a) Graph; (b) Multigraph

Multigraphs

Consider the diagram in Fig. 1(b). The edges e4 and e5 are called multiple edges since they connect the same endpoints, and the edge e6 is called a loop since its endpoints are the same vertex. Such a diagram is called a multigraph; the formal definition of a graph permits neither multiple edges nor loops. Thus a graph may be defined to be a multigraph without multiple edges or loops.

Remark:

Some texts use the term graph to include multigraphs and use the term simple graph to mean a graph without multiple edges and loops.

Degree of a vertex

The degree of a vertex v in a graph G, written deg(v), is equal to the number of edges in G which contain v, that is, which are incident on v. Since each edge is counted twice in counting the degrees of the vertices of G, we have the following simple but important result.

Theorem 1:

The sum of the degrees of the vertices of a graph G is equal to twice the number of edges in G.

Consider, for example, the graph in Fig. 1(a). We have

deg(A) = 2, deg(B) = 3, deg(C) = 3, deg(D) = 2.

The sum of the degrees equals 10, which, as expected, is twice the number of edges. A vertex is said to be even or odd according as its degree is an even or an odd number. Thus A and D are even vertices whereas B and C are odd vertices.

Theorem 1 also holds for multigraphs where a loop is counted twice towards the degree of its endpoint. For example, in Fig 1 (b) we have deg (D) = 4 since the edge e 6 is counted twice; hence D is an even vertex. A vertex of degree zero is called an isolated vertex.
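Theorem 1 is easy to verify computationally for the graph of Fig. 1(a). A sketch (the function name is ours); a loop, if one is present, is counted twice toward its endpoint's degree, as the multigraph convention requires:

```python
def degrees(vertices, edges):
    """Degree of each vertex: the number of incident edges.
    Each edge contributes to both endpoints, so a loop (u == v)
    adds 2 to the degree of its single endpoint."""
    deg = {v: 0 for v in vertices}
    for (u, v) in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# The graph of Fig. 1(a): edges e1..e5.
V = ['A', 'B', 'C', 'D']
E = [('A', 'B'), ('B', 'C'), ('C', 'D'), ('A', 'C'), ('B', 'D')]
deg = degrees(V, E)
# The degree sum is 10 = 2 * 5 edges, as Theorem 1 predicts.
```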

Finite Graphs, Trivial Graph

A multigraph is said to be finite if it has a finite number of vertices and a finite number of edges. Observe that a graph with a finite number of vertices must automatically have a finite number of edges and so must be finite. The finite graph with one vertex and no edges, i.e., a single point, is called the trivial graph. Unless otherwise specified, the multigraphs in this book shall be finite.

SUBGRAPHS, ISOMORPHIC AND HOMEOMORPHIC GRAPHS

This section will discuss important relationships between graphs.

Subgraphs

Consider a graph G = G(V, E). A graph H = H(V′, E′) is called a subgraph of G if the vertices and edges of H are contained in the vertices and edges of G, that is, if V′ ⊆ V and E′ ⊆ E. In particular:

(i) A subgraph H(V’, E’) of G (V, E) is called the subgraph induced by its vertices V’ if its edge set E’ contains all edges in G whose endpoints belong to vertices in H.

(ii) If v is a vertex in G, then G - v is the subgraph of G obtained by deleting v from G and deleting all edges in G which contain v.

(iii) If e is an edge in G, then G – e is the subgraph of G obtained by simply deleting the edge e from G.


Isomorphic Graphs

Graphs G(V, E) and G*(V*, E*) are said to be isomorphic if there exists a one-to-one correspondence f: V → V* such that {u, v} is an edge of G if and only if {f(u), f(v)} is an edge of G*. Normally, we do not distinguish between isomorphic graphs (even though their diagrams may “look different”). Fig. 2 shows ten graphs pictured as letters. We note that A and R are isomorphic graphs. Also, F and T, K and X, and M, S, V, and Z are isomorphic graphs.

A F K M R S T V X Z

Fig. 2

Homeomorphic graphs

Given a graph G, we may obtain a new graph by dividing an edge of G with additional vertices. Two graphs G and G* are said to be homeomorphic if they can be obtained from the same graph, or from homeomorphic graphs, by this method. The graphs (a) and (b) in Fig. 3 are not isomorphic, but they are homeomorphic, since they can be obtained from the graph (c) by adding appropriate vertices.

Fig. 3

PATHS, CONNECTIVITY

A path in a multigraph G consists of an alternating sequence of vertices and edges of the form

v0, e1, v1, e2, v2, . . ., en−1, vn−1, en, vn

where each edge ei contains the vertices vi−1 and vi (which appear on the two sides of ei in the sequence). The number n of edges is called the length of the path. When there is no ambiguity, we denote a path by its sequence of vertices (v0, v1, . . ., vn). The path is said to be closed if v0 = vn. Otherwise, we say the path is from v0 to vn, or between v0 and vn, or connects v0 to vn.

A simple path is a path in which all vertices are distinct. (A path in which all edges are distinct will be called a trail.) A cycle is a closed path in which all vertices are distinct except v0 = vn. A cycle of length k is called a k-cycle. In a graph, any cycle must have length 3 or more.

EXAMPLE 1

Consider the graph G in Fig. 4(a) and the following sequences:

α = (P4, P1, P2, P5, P1, P2, P3, P6)
β = (P4, P1, P5, P2, P6)
γ = (P4, P1, P5, P2, P3, P5, P6)
δ = (P4, P1, P5, P3, P6)

The sequence α is a path from P4 to P6, but it is not a trail since the edge {P1, P2} is used twice. The sequence β is not a path since there is no edge {P2, P6}. The sequence γ is a trail since no edge is used twice; but it is not a simple path since the vertex P5 is used twice. The sequence δ is a simple path from P4 to P6; but it is not the shortest path (with respect to length) from P4 to P6. The shortest path from P4 to P6 is the simple path (P4, P5, P6), which has length 2.
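The classifications in Example 1 can be checked mechanically from the definitions. The sketch below uses an edge set for Fig. 4(a) inferred from the discussion in this example (the figure itself is not reproduced here, so the edge list is an assumption):

```python
# Edge set of the graph G in Fig. 4(a), inferred from Example 1's text
# (an assumption; the original figure is not reproduced here).
E = {frozenset(e) for e in [
    ("P4", "P1"), ("P1", "P2"), ("P2", "P5"), ("P1", "P5"),
    ("P2", "P3"), ("P3", "P6"), ("P3", "P5"), ("P5", "P6"), ("P4", "P5"),
]}

def is_path(seq):
    """Every consecutive pair of vertices must be an edge of G."""
    return all(frozenset(p) in E for p in zip(seq, seq[1:]))

def is_trail(seq):
    """A path whose edges are all distinct."""
    edges = [frozenset(p) for p in zip(seq, seq[1:])]
    return is_path(seq) and len(edges) == len(set(edges))

def is_simple_path(seq):
    """A path whose vertices are all distinct."""
    return is_path(seq) and len(seq) == len(set(seq))

alpha = ["P4", "P1", "P2", "P5", "P1", "P2", "P3", "P6"]
beta  = ["P4", "P1", "P5", "P2", "P6"]
gamma = ["P4", "P1", "P5", "P2", "P3", "P5", "P6"]
delta = ["P4", "P1", "P5", "P3", "P6"]
```

Running the checks reproduces the conclusions of the example: α is a path but not a trail, β is not a path, γ is a trail but not simple, and δ is a simple path.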

Fig. 4

By eliminating unnecessary edges, it is not difficult to see that any path from a vertex u to a vertex v can be replaced by a simple path from u to v. We state this result formally.

Theorem 2:

There is a path from a vertex u to a vertex v if and only if there exists a simple path from u to v.

Connectivity, Connected Components

A graph G is connected if there is a path between any two of its vertices. The graph in Fig. 4 (a) is connected but the graph in Fig. 4 (b) is not connected since, for example, there is no path between vertices D and E.

Suppose G is a graph. A connected subgraph H of G is called a connected component of G if H is not contained in any larger connected subgraph of G. It is intuitively clear that any graph G can be partitioned into its connected components. For example, the graph G in Fig. 4(b) has three connected components, the subgraphs induced by the vertex sets {A, C, D}, {E, F}, and {B}.

The vertex B in Fig. 4(b) is called an isolated vertex since B does not belong to any edge or, in other words, deg(B) = 0. Therefore, as noted, B by itself forms a connected component of the graph.
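The partition into connected components can be computed directly from the definition by growing a component from each unvisited vertex. A minimal sketch in Python; the edge set used here for Fig. 4(b) is hypothetical, chosen only to be consistent with the components named above:

```python
def connected_components(vertices, edges):
    """Partition the vertex set into connected components by
    repeatedly growing a component from an unvisited vertex."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for start in vertices:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)   # unvisited neighbors
        seen |= comp
        components.append(comp)
    return components

# A hypothetical edge set consistent with Fig. 4(b):
comps = connected_components(
    ["A", "B", "C", "D", "E", "F"],
    [("A", "C"), ("C", "D"), ("E", "F")])
```

This yields the three components {A, C, D}, {E, F}, and {B}, with the isolated vertex B forming a component by itself.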


Remark: Formally speaking, assuming any vertex u is connected to itself, the relation "u is connected to v" is an equivalence relation on the vertex set of a graph G, and the equivalence classes of the relation form the connected components of G.

Distance and Diameter

Consider a connected graph G. The distance between vertices u and v in G, written d(u, v), is the length of the shortest path between u and v. The diameter of G, written diam(G), is the maximum distance between any two vertices in G. For example, in Fig. 5(a), d(A, F) = 2 and diam(G) = 3, whereas in Fig. 5(b), d(A, F) = 3 and diam(G) = 4.
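Distance and diameter can be computed with a breadth-first search from each vertex. A minimal sketch; the 6-cycle used at the end is a hypothetical example, not the graph of Fig. 5:

```python
from collections import deque

def distance(adj, u, v):
    """Length of a shortest path between u and v, found by BFS."""
    dist = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return dist[x]
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return None  # u and v lie in different components

def diameter(adj):
    """Maximum distance between any two vertices of a connected graph."""
    vs = list(adj)
    return max(distance(adj, u, v) for u in vs for v in vs)

# A small hypothetical example: a 6-cycle, where every vertex is
# within distance 3 of every other vertex.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

For the 6-cycle, distance(cycle6, 0, 3) is 3 and diameter(cycle6) is 3, as expected for a cycle of even length n (diameter n/2).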

Cutpoints and Bridges

Let G be a connected graph. A vertex v in G is called a cutpoint if G − v is disconnected. (Recall that G − v is the graph obtained from G by deleting v and all edges containing v.) An edge e of G is called a bridge if G − e is disconnected. (Recall that G − e is the graph obtained from G by simply deleting the edge e.) In Fig. 5(a), the vertex D is a cutpoint and there are no bridges. In Fig. 5(b), the edge e = {D, F} is a bridge. (Its endpoints D and F are necessarily cutpoints.)

THE BRIDGES OF KÖNIGSBERG, TRAVERSABLE MULTIGRAPHS

In the Königsberg bridge problem there are four land areas A, B, C, D and seven bridges, as shown in Fig. 6(a).

Question: Beginning anywhere and ending anywhere, can a person walk through town crossing all seven bridges but not crossing any bridge twice? The people of Königsberg wrote to the celebrated Swiss mathematician L. Euler about this question. Euler proved in 1736 that such a walk is impossible. He replaced the islands and the two sides of the river by points and the bridges by curves, obtaining Fig. 6(b).

Fig. 6 (a) The bridges of Königsberg (b) Euler's graphical representation

Observe that Fig. 6(b) is a multigraph. A multigraph is said to be traversable if it "can be drawn without any breaks in the curve and without repeating any edges", that is, if there is a path which includes all vertices and uses each edge exactly once. Such a path must be a trail (since no edge is used twice) and will be called a traversable trail. Clearly a traversable multigraph must be finite and connected. Fig. 7(b) shows a traversable trail of the multigraph in Fig. 7(a). (To indicate the direction of the trail, the diagram misses touching vertices which are actually traversed.) Now it is not difficult to see that the walk through Königsberg is possible if and only if the multigraph in Fig. 6(b) is traversable.

Fig. 7

We now show how Euler proved that the multigraph in Fig. 6(b) is not traversable and hence that the walk through Königsberg is impossible. Recall first that a vertex is even or odd according as its degree is an even or an odd number. Suppose a multigraph is traversable and that a traversable trail does not begin or end at a vertex P. We claim that P is an even vertex. For whenever the traversable trail enters P by an edge, there must always be an edge not previously used by which the trail can leave P. Thus the edges in the trail incident with P must appear in pairs, and so P is an even vertex. Therefore if a vertex Q is odd, the traversable trail must begin or end at Q. Consequently, a multigraph with more than two odd vertices cannot be traversable. Observe that the multigraph corresponding to the Königsberg bridge problem has four odd vertices. Thus one cannot walk through Königsberg so that each bridge is crossed exactly once.

Euler actually proved the converse of the above statement, which is contained in the following theorem and corollary. A graph G is called an Eulerian graph if there exists a closed traversable trail, called an Eulerian trail.

Theorem 3 (Euler):

A finite connected graph is Eulerian if and only if each vertex has even degree.

Corollary 4:

Any finite connected graph with exactly two odd vertices is traversable. A traversable trail may begin at either odd vertex and will end at the other odd vertex.
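Theorem 3 and Corollary 4 reduce the traversability question to counting odd vertices. A small sketch; the edge list for the Königsberg multigraph follows the usual labeling (A the island; B, C, D the other land areas), which is an assumption about Fig. 6(b):

```python
from collections import Counter

def odd_vertices(edge_list):
    """Vertices of odd degree in a multigraph given as a list of edges."""
    deg = Counter()
    for u, v in edge_list:
        deg[u] += 1
        deg[v] += 1
    return {v for v, d in deg.items() if d % 2 == 1}

def classify(edge_list):
    """For a connected multigraph: Eulerian if 0 odd vertices,
    traversable (by an open trail) if exactly 2, else not traversable."""
    k = len(odd_vertices(edge_list))
    if k == 0:
        return "Eulerian"
    if k == 2:
        return "traversable"
    return "not traversable"

# The seven Königsberg bridges in their usual labeling (an assumption
# about Fig. 6(b); A is the island, B, C, D the other land areas):
koenigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]
```

Here all four vertices have odd degree (A has degree 5; B, C, D each have degree 3), so the multigraph is not traversable, exactly as Euler argued.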

Hamiltonian Graphs

The above discussion of Eulerian graphs emphasized traveling edges; here we concentrate on visiting vertices. A Hamiltonian circuit in a graph G, named after the nineteenth-century Irish mathematician William Hamilton (1805-1865), is a closed path that visits every vertex in G exactly once. (Such a closed path must be a cycle.) If G does admit a Hamiltonian circuit, then G is called a Hamiltonian graph. Note that an Eulerian circuit traverses every edge exactly once, but may repeat vertices, while a Hamiltonian circuit visits each vertex exactly once but need not use every edge. Fig. 8 gives an example of a graph which is Hamiltonian but not Eulerian, and vice versa.


Fig. 8

Although it is clear that only connected graphs can be Hamiltonian, there is no simple criterion to tell us whether or not a graph is Hamiltonian, as there is for Eulerian graphs. We do have the following sufficient condition, which is due to G. A. Dirac.

Theorem 5:

Let G be a connected graph with n vertices. Then G is Hamiltonian if n ≥ 3 and n/2 ≤ deg(v) for each vertex v in G.

LABELED AND WEIGHTED GRAPHS

A graph G is called a labeled graph if its edges and/or vertices are assigned data of one kind or another. In particular, G is called a weighted graph if each edge e of G is assigned a nonnegative number w(e) called the weight or length of e. Fig. 9 shows a weighted graph where the weight of each edge is given in the obvious way. The weight (or length) of a path in such a weighted graph G is defined to be the sum of the weights of the edges in the path. One important problem in graph theory is to find a shortest path, that is, a path of minimum weight (length), between any two given vertices. The length of a shortest path between P and Q in Fig. 9 is 14; one such path is

(P, A 1, A 2, A 5, A 3, A 6, Q)

The reader can try to find another shortest path. Algorithms for evaluating shortest paths between two points will be discussed in a coming section.
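One standard method for evaluating shortest paths in a weighted graph is Dijkstra's algorithm, sketched below with a priority queue. The small weighted graph at the end is hypothetical, not the graph of Fig. 9:

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path lengths from source in a weighted graph.
    adj maps each vertex to a list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was found already
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A small hypothetical weighted graph (not the graph of Fig. 9):
g = {"P": [("A", 2), ("B", 5)],
     "A": [("P", 2), ("B", 1), ("Q", 7)],
     "B": [("P", 5), ("A", 1), ("Q", 3)],
     "Q": [("A", 7), ("B", 3)]}
```

For this sample graph, the shortest P-to-Q length is 6, along the path (P, A, B, Q).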

Fig. 9

COMPLETE, REGULAR, AND BIPARTITE GRAPHS

There are many different types of graphs. This section considers three of them: complete, regular, and bipartite graphs.

Complete Graphs

A graph G is said to be complete if every vertex in G is connected to every other vertex in G. Thus a complete graph G must be connected. The complete graph with n vertices is denoted by Kn. Figure 10 shows the graphs K1 through K6.


Fig. 10

Regular Graphs

A graph G is regular of degree K, or K-regular, if every vertex has degree K. In other words, a graph is regular if every vertex has the same degree.

The connected regular graphs of degree 0, 1, or 2 are easily described. The connected 0-regular graph is the trivial graph with one vertex and no edges. The connected 1-regular graph is the graph with two vertices and one edge connecting them. The connected 2-regular graph with n vertices is the graph which consists of a single n-cycle.

See Fig 11.

The 3-regular graphs must have an even number of vertices, since the sum of the degrees of the vertices is an even number (Theorem 1). Figure 12 shows two connected 3-regular graphs with six vertices. In general, regular graphs can be quite complicated. For example, there are nineteen connected 3-regular graphs with ten vertices. We note that the complete graph with n vertices, Kn, is regular of degree n − 1.

Fig. 12 Fig. 13


Bipartite Graphs

A graph G is said to be bipartite if its vertices V can be partitioned into two subsets M and N such that each edge of G connects a vertex of M to a vertex of N. By a complete bipartite graph, we mean that each vertex of M is connected to each vertex of N; this graph is denoted by Km,n, where m is the number of vertices in M and n is the number of vertices in N, and, for standardization, we will assume m ≤ n. Figure 13 shows the graphs K2,3 and K3,3. Clearly the graph Km,n has mn edges.

TREE GRAPHS

A graph T is called a tree if T is connected and T has no cycles. Examples of trees are shown in Fig. 14. A forest G is a graph with no cycles; hence the connected components of a forest G are trees. (A graph without cycles is said to be cycle-free.) The tree consisting of a single vertex with no edges is called the degenerate tree.

Fig. 14

Consider a tree T. Clearly there is only one simple path between any two vertices of T; otherwise, the two paths would form a cycle. Also:

(a) Suppose there is no edge {u, v} in T and we add the edge e = {u, v} to T. Then the simple path from u to v in T and e will form a cycle; hence T is no longer a tree.

(b) On the other hand, suppose there is an edge e = {u, v} in T, and we delete e from T. Then T is no longer connected (since there cannot be a path from u to v); hence T is no longer a tree.

Theorem 6 :

Let G be a graph with n > 1 vertices. Then the following are equivalent:

(i) G is a tree.
(ii) G is cycle-free and has n − 1 edges.
(iii) G is connected and has n − 1 edges.

This theorem also tells us that a finite tree T with n vertices must have n − 1 edges. For example, the tree in Fig. 14(a) has 9 vertices and 8 edges, and the tree in Fig. 14(b) has 13 vertices and 12 edges.

Spanning Trees

A subgraph T of a connected graph G is called a spanning tree of G if T is a tree and T includes all the vertices of G. Fig. 15 shows a connected graph G and spanning trees T1, T2, and T3 of G.


Fig. 15

Minimum Spanning Trees

Suppose G is a connected weighted graph. That is, each edge of G is assigned a nonnegative number called the weight of the edge. Then any spanning tree T of G is assigned a total weight obtained by adding the weights of the edges in T. A minimal spanning tree of G is a spanning tree whose total weight is as small as possible.

In order to construct a minimal spanning tree, we have two methods: Kruskal's algorithm and Prim's algorithm. We will discuss these algorithms in detail.

Kruskal's algorithms, which follow, enable us to find a minimal spanning tree T of a connected weighted graph G, where G has n vertices. (In which case T must have n − 1 edges.)

Kruskal's Algorithm (two versions)

Algorithm (Kruskal, first version): The input is a connected weighted graph G with n vertices.

Step 1. Arrange the edges of G in order of decreasing weights.

Step 2. Proceeding sequentially, delete each edge that does not disconnect the graph until n − 1 edges remain.

Step 3. Exit.

Another method follows (Kruskal's algorithm, second version).

Algorithm (Kruskal, second version): The input is a connected weighted graph G with n vertices.

Step 1. Arrange the edges of G in order of increasing weights.

Step 2. Starting with only the vertices of G and proceeding sequentially, add each edge which does not result in a cycle until n − 1 edges are added.

Step 3. Exit.

The weight of a minimal spanning tree is unique, but the minimal spanning tree itself is not. Different minimal spanning trees can occur when two or more edges have the same weight. In such a case, the arrangement of the edges in Step 1 of either version is not unique and hence may result in different minimal spanning trees, as illustrated in the following example.
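The second version of Kruskal's algorithm can be sketched with a union-find structure to detect cycles; the union-find details are an implementation choice, not part of the algorithm as stated. The edge list is the one used in the following example (Fig. 16):

```python
def kruskal(vertices, weighted_edges):
    """Kruskal's algorithm (second version): sort edges by increasing
    weight and add each edge that does not create a cycle, using a
    union-find structure to detect cycles."""
    parent = {v: v for v in vertices}

    def find(v):                          # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]] # path halving
            v = parent[v]
        return v

    tree, total = [], 0
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                      # edge joins two components
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
            if len(tree) == len(vertices) - 1:
                break
    return tree, total

# The edge list of the weighted graph in Fig. 16 (Example 2):
edges = [(7, "A", "C"), (4, "A", "E"), (7, "A", "F"), (8, "B", "C"),
         (3, "B", "D"), (7, "B", "E"), (5, "B", "F"), (6, "C", "E"),
         (4, "D", "F")]
tree, total = kruskal("ABCDEF", edges)
```

Running this on the edge list reproduces the result of Example 2: five edges are selected and the total cost is 24.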


Example 2

Here we will see how to apply Kruskal's algorithm to a given graph. Kruskal's second version is used to solve the problem.

Figure 16 (a weighted graph on the vertices A, B, C, D, E, F; its edge weights appear in the edge list below)

Prepare the edge list:

<A, C> 7, <A, E> 4, <A, F> 7, <B, C> 8, <B, D> 3, <B, E> 7, <B, F> 5, <C, E> 6, <D, F> 4

Here <A, C> is selected but <C, A> is not, because it is redundant. Sort the list based on weight:

<B, D> 3, <A, E> 4, <D, F> 4, <B, F> 5, <C, E> 6, <A, C> 7, <A, F> 7, <B, E> 7, <B, C> 8

Select the edges one by one and build up the graph, as shown in Fig. 17 (a)-(e).

Fig. 17. Total cost is 24.


The diagram shows the step-by-step building of the minimum spanning tree in Fig. 17 (a), (b), (c), (d), (e). In Fig. 17(c) we do not select <B, F> 5 because it causes a cycle. Similarly, we do not select <A, C> 7 because that also forms a cycle.

Prim's Algorithm

This is another method to construct a minimum spanning tree. For the given graph we perform the following operations.

Algorithm (Prim):

Step 1. Select any vertex v and mark it as visited.

Step 2. Examine all edges from the selected (visited) vertices to the unselected vertices (only direct edges are considered).

Step 3. Select the least-cost edge from a selected vertex to an unselected vertex, and mark the newly selected vertex as visited (put a √ mark).

Step 4. Repeat Steps 2 and 3 until the tree contains n − 1 edges (until all vertices have been visited).
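The steps above can be sketched with a priority queue (heap) of candidate edges; the heap is an implementation convenience, not part of the algorithm as stated. The adjacency lists below are those of the weighted graph of Fig. 16:

```python
import heapq

def prim(adj, start):
    """Prim's algorithm: grow a tree from `start`, always adding the
    least-cost edge from a visited vertex to an unvisited one."""
    visited = {start}
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    tree, total = [], 0
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue                      # edge would form a cycle
        visited.add(v)
        tree.append((u, v, w))
        total += w
        for x, wx in adj[v]:              # new candidate edges
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return tree, total

# Adjacency lists for the weighted graph of Fig. 16:
adj = {"A": [("C", 7), ("E", 4), ("F", 7)],
       "B": [("C", 8), ("D", 3), ("E", 7), ("F", 5)],
       "C": [("A", 7), ("B", 8), ("E", 6)],
       "D": [("B", 3), ("F", 4)],
       "E": [("A", 4), ("B", 7), ("C", 6)],
       "F": [("A", 7), ("B", 5), ("D", 4)]}
tree, total = prim(adj, "A")
```

Starting from A, the algorithm selects five edges with total cost 24, matching the result found by Kruskal's algorithm (the minimal weight is unique even though the tree itself need not be).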

(The weighted graph of Fig. 16 is used again.)

Step-by-step construction:

Fig. 18 (a)-(e) shows the step-by-step construction: first A and E are marked as visited in the original graph, then A, B, C, E, then A, B, C, D, E, and finally all vertices are selected. Total cost = 24.

PLANAR GRAPHS


A graph or multigraph which can be drawn in the plane so that its edges do not cross is said to be planar. Although the complete graph with four vertices, K4, is usually pictured with crossing edges as in Fig. 20(a), it can also be drawn with noncrossing edges as in Fig. 20(b); hence K4 is planar. Tree graphs form an important class of planar graphs. This section introduces the reader to these important graphs.

Fig. 20 Fig. 21

Maps, Regions

A particular planar representation of a finite planar multigraph is called a map. We say that the map is connected if the underlying multigraph is connected. A given map divides the plane into various regions. For example, the map in Fig. 21 with six vertices and nine edges divides the plane into five regions. Observe that four of the regions are bounded, but the fifth region, outside the diagram, is unbounded. Thus there is no loss in generality in counting the number of regions if we assume that our map is contained in some large rectangle rather than in the entire plane.

Observe that the border of each region of a map consists of edges. Sometimes the edges will form a cycle, but sometimes not. For example, in Fig. 21 the borders of all the regions are cycles except for r3. However, if we move counterclockwise around r3 starting, say, at the vertex C, then we obtain the closed path

(C, D, E, F, E, C), where the edge {E, F} occurs twice. By the degree of a region r, written deg(r), we mean the length of the cycle or closed walk which borders r. We note that each edge either borders two regions or is contained in a region, in which case it occurs twice in any walk along the border of the region. Thus we have a theorem for regions which is analogous to Theorem 1 for vertices.

Theorem 7:

The sum of the degrees of the regions of a map is equal to twice the number of edges.

The degrees of the regions of Fig 21 are:

deg(r1) = 3, deg(r2) = 3, deg(r3) = 5, deg(r4) = 4, deg(r5) = 3

The sum of the degrees is 18, which, as expected, is twice the number of edges.

For notational convenience we shall picture the vertices of a map with dots or small circles, or we shall assume that any intersections of lines or curves in the plane are vertices.

Euler’s Formula

Euler gave a formula which connects the number V of vertices, the number E of edges, and the number R of regions of any connected map. Specifically:

Theorem 8 (Euler): V − E + R = 2.

(We omit the proof of Theorem 8.)


Observe that in Fig. 21, V = 6, E = 9, and R = 5, so, as expected by Euler's formula,

V – E + R = 6 – 9 + 5 = 2

We emphasize that the underlying graph of a map must be connected in order for Euler’s formula to hold.

Let G be a connected planar multigraph with three or more vertices, so G is neither K1 nor K2. Let M be a planar representation of G. It is not difficult to see that (1) a region of M can have degree 1 only if its border is a loop, and (2) a region of M can have degree 2 only if its border consists of two multiple edges. Accordingly, if G is a graph, not a multigraph, then every region of M must have degree 3 or more. This comment together with Euler's formula is used to prove the following result on planar graphs.

Theorem 9:

Let G be a connected planar graph with p vertices and q edges, where p ≥ 3. Then q ≤ 3p − 6.

Note that the theorem is not true for K 1 where p = 1 and q = 0, and is not true for K 2 where p = 2 and q = 1.

Proof: Let r be the number of regions in a planar representation of G. By Euler’s formula,

p − q + r = 2

Now the sum of the degrees of the regions equals 2q by Theorem 7. But each region has degree 3 or more; hence

2q ≥ 3r

Thus r ≤ 2q/3. Substituting this in Euler's formula gives

2 = p − q + r ≤ p − q + 2q/3,  or  2 ≤ p − q/3

Multiplying the inequality by 3 gives 6 ≤ 3p − q, that is, q ≤ 3p − 6, which gives us our result.

Nonplanar Graphs, Kuratowski’s Theorem

We give two examples of nonplanar graphs. Consider first the utility graph; that is, three houses A1, A2, A3 are to be connected to outlets for water, gas and electricity, B1, B2, B3, as in Fig. 22(a). Observe that this is the graph K3,3 and it has p = 6 vertices and q = 9 edges. Suppose the graph is planar. By Euler's formula a planar representation has r = 5 regions. Observe that no three vertices are connected to each other; hence the degree of each region must be 4 or more and so the sum of the degrees of the regions must be 20 or more. By Theorem 7 this sum equals 2q, so the graph must have 10 or more edges. This contradicts the fact that the graph has q = 9 edges. Thus the utility graph K3,3 is nonplanar.


Fig. 22

Consider next the star graph in Fig. 22(b). This is the complete graph K5 on p = 5 vertices and has q = 10 edges. If the graph were planar, then by Theorem 9,

10 = q ≤ 3p – 6 = 15 – 6 = 9

which is impossible. Thus K5 is nonplanar.
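Both nonplanarity arguments above are instances of simple edge-count bounds, which can be checked mechanically. The bound q ≤ 2p − 4 used below for bipartite graphs is the sharpening implicit in the K3,3 argument (every region would have degree 4 or more, so 2q ≥ 4r):

```python
def planar_edge_bound(p, q):
    """Necessary (not sufficient) condition from Theorem 9: a connected
    planar graph with p >= 3 vertices has at most 3p - 6 edges."""
    return q <= 3 * p - 6

def bipartite_planar_edge_bound(p, q):
    """Sharper bound for bipartite planar graphs: every region has
    degree >= 4, so 2q >= 4r, which gives q <= 2p - 4."""
    return q <= 2 * p - 4

# K5 has p = 5 vertices and q = C(5, 2) = 10 edges.
k5_ok = planar_edge_bound(5, 10)
# K3,3 has p = 6 vertices and q = 3 * 3 = 9 edges.
k33_ok = bipartite_planar_edge_bound(6, 9)
```

Both checks fail, confirming that K5 and K3,3 are nonplanar, while a planar graph such as K4 (p = 4, q = 6) passes the bound.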

For many years mathematicians tried to characterize planar and nonplanar graphs. This problem was finally solved in 1930 by the Polish mathematician K. Kuratowski. The proof of this result, stated below, lies beyond the scope of this text.

Theorem 10 (Kuratowski): A graph is nonplanar if and only if it contains a subgraph homeomorphic to K3,3 or K5.

REPRESENTING GRAPHS IN COMPUTER MEMORY

There are two standard ways of maintaining a graph G in the memory of a computer. One way, called the sequential representation of G, is by means of its adjacency matrix A. The other way, called the linked representation or adjacency structure of G, uses linked lists of neighbors. Matrices are usually used when the graph G is dense, and linked lists are usually used when G is sparse. (A graph G with m vertices and n edges is said to be dense when n = O(m²) and sparse when n = O(m) or even O(m log m).)

Regardless of the way one maintains a graph G in memory, the graph G is normally input into the computer by its formal definition, that is, as a collection of vertices and a collection of pairs of vertices (edges).

Adjacency Matrix

Suppose G is a graph with m vertices, and suppose the vertices have been ordered, say v1, v2, ..., vm. Then the adjacency matrix A = [aij] of the graph G is the m × m matrix defined by

aij = 1 if vi is adjacent to vj
aij = 0 otherwise

Figure 23(b) contains the adjacency matrix of the graph G in Fig. 23(a), where the vertices are ordered A, B, C, D, E. Observe that each edge {vi, vj} of G is represented twice, by aij = 1 and aji = 1. Thus, in particular, the adjacency matrix is symmetric.


    A B C D E
A   0 1 0 1 0
B   1 0 1 0 1
C   0 1 0 0 0
D   1 0 0 0 1
E   0 1 0 1 0

(a) (b) Fig. 23

The adjacency matrix A of a graph G does depend on the ordering of the vertices of G, that is, a different ordering of the vertices yields a different matrix. However, any two such adjacency matrices are closely related in that one can be obtained from the other by simply interchanging rows and columns. On the other hand, the adjacency matrix does not depend on the order in which the edges (pairs of vertices) are input into the computer.

There are variations of the above representation. If G is a multigraph, then we usually let aij denote the number of edges {vi, vj}. Moreover, if G is a weighted graph, then we may let aij denote the weight of the edge {vi, vj}.
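Building the matrix from an edge list is a direct transcription of the definition. A minimal sketch; the edges of Fig. 23(a) are read off the matrix in Fig. 23(b):

```python
def adjacency_matrix(vertices, edges):
    """Build the (symmetric) adjacency matrix of a graph: each edge
    {vi, vj} is recorded twice, as a[i][j] = 1 and a[j][i] = 1."""
    index = {v: i for i, v in enumerate(vertices)}
    m = len(vertices)
    a = [[0] * m for _ in range(m)]
    for u, v in edges:
        i, j = index[u], index[v]
        a[i][j] = a[j][i] = 1
    return a

# The graph of Fig. 23(a): its edges are read off the matrix in Fig. 23(b).
A = adjacency_matrix("ABCDE",
                     [("A", "B"), ("A", "D"), ("B", "C"),
                      ("B", "E"), ("D", "E")])
```

The result reproduces the matrix of Fig. 23(b), and it is symmetric, as noted above.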

Linked Representation of a Graph G

Let G be a graph with m vertices. The representation of G in memory by its adjacency matrix A has a number of major drawbacks. First of all, it may be difficult to insert or delete vertices in G. The reason is that the size of A may need to be changed and the vertices may need to be reordered, so there may be many, many changes in the matrix. Furthermore, suppose the number of edges is O(m) or even O(m log m), that is, suppose G is sparse. Then the matrix A will contain many zeros; hence a great deal of memory space will be wasted. Accordingly, when G is sparse, G is usually represented in memory by some type of linked representation, also called an adjacency structure, which is described below by means of an example.

Consider the graph G in Fig. 24(a). Observe that G may be equivalently defined by the table in Fig. 24(b), which shows each vertex in G followed by its adjacency list, i.e., its list of adjacent vertices (neighbors). Here the symbol ∅ denotes an empty list. This table may also be presented in the compact form

G = [A: B, D; B: A, C, D; C: B; D: A, B; E: ∅]

where a colon ":" separates a vertex from its list of neighbors, and a semicolon ";" separates the different lists.

Remark:

Observe that each edge of a graph G is represented twice in an adjacency structure; that is, any edge, say {A, B}, is represented by B in the adjacency list of A, and also by A in the adjacency list of B. The graph G in Fig. 24(a) has four edges, and so there must be 8 vertices in the adjacency lists. On the other hand, each vertex in an adjacency list corresponds to a unique edge in the graph G.


Vertex   Adjacency list
A        B, D
B        A, C, D
C        B
D        A, B
E        ∅

(a) (b)

Fig. 24

The linked representation of a graph G, which maintains G in memory by using its adjacency lists, is shown in Fig. 24. Here a number of linked lists are maintained.

GRAPH ALGORITHMS

This section discusses two important graph algorithms which systematically examine the vertices and edges of a graph G. One is called a depth-first search (DFS) and the other is called a breadth-first search (BFS). Other graph algorithms will be discussed in the next chapter in connection with directed graphs. Any particular graph algorithm may depend on the way G is maintained in memory. Here we assume G is maintained in memory by its adjacency structure. Our test graph G with its adjacency structure appears in Fig. 25.

Vertex Adjacency list

A        B, C, D
B        A, E, F
C        A, F
D        A
E        B, F, G
F        B, C, E
G        E, H
H        G

(a) (b)

Fig. 25

During the execution of our algorithms, each vertex (node) N of G will be in one of three states called the status of N, as follows:

STATUS = 1: (Ready state) The initial state of the vertex N.

STATUS = 2: (Waiting state) The vertex N is on a (waiting) list, waiting to be processed.

STATUS = 3: (Processed state) The vertex N has been processed.

The waiting list for the depth-first search will be a (modified) STACK, whereas the waiting list for the breadth-first search will be a QUEUE.

Depth-first Search

The general idea behind a depth-first search beginning at a starting vertex A is as follows. First we process the starting vertex A. Then we process each vertex N along a path P which begins at A; that is, we process a neighbor of A, then a neighbor of a neighbor of A, and so on. After coming to a "dead end", that is, to a vertex with no unprocessed neighbor, we backtrack on the path P until we can continue along another path P'. And so on. The backtracking is accomplished by using a STACK to hold the initial vertices of future possible paths. We also need a field STATUS which tells us the current status of any vertex so that no vertex is processed more than once. The algorithm follows.

Algorithm (Depth-first Search): This algorithm executes a depth-first search on a graph G beginning with a starting vertex A.

Step 1. Initialize all vertices to the ready state (STATUS = 1.)

Step 2. Push the starting vertex A onto STACK and change the status of A to the waiting state (STATUS = 2).

Step 3. Repeat Steps 4 and 5 until STACK is empty.

Step 4. Pop the top vertex N of STACK. Process N, and set STATUS (N) =3, the processed state.

Step 5. Examine each neighbor J of N.

(a) If STATUS (J) = 1 (ready state), push J onto STACK and reset STATUS (J) = 2 (waiting state).

(b) If STATUS (J) = 2 (waiting state), delete the previous J from the STACK and push the current J onto STACK.

(c) If STATUS (J) = 3 (processed state), ignore the vertex J. [End of Step 3 loop.]

Step 6. Exit

The above algorithm will process only those vertices which are connected to the starting vertex A, that is, the connected component including A. Suppose one wants to process all the vertices in the graph G. Then the algorithm must be modified so that it begins again with another vertex (which we call B) that is still in the ready state (STATUS = 1). This vertex B can be obtained by traversing through the list of vertices.

Remark :

The structure STACK in the above algorithm is not technically a stack since, in Step 5(b), we allow a vertex J to be deleted and then inserted at the front of the stack. (Although it is the same vertex J, it usually represents a different edge in the adjacency structure.) If we do not move J in Step 5(b), then we obtain an alternate form of DFS.

Example

Suppose the DFS algorithm is applied to the graph in Fig. 25. The vertices are processed in the following order:

A, B, E, F, C, G, H, D.

Specifically, Fig. 26(a) shows the sequence of waiting lists in STACK and the vertices being processed. We have used the slash / to indicate that a vertex is deleted from the waiting list. Each vertex, excluding A, comes from an adjacency list and hence corresponds to an edge of the graph. These edges form a spanning tree of G, which is pictured in Fig. 26(b). The numbers indicate the order in which the edges are added to the tree, and the dashed lines indicate backtracking.


Vertex STACK

(start)  A
A        B, C, D
B        E, F, C, D
E        F, G, F/, C, D
F        C, G, C/, D
C        G, D
G        H, D
H        D
D        ∅

(The top of STACK is at the left; a slashed vertex has been deleted per Step 5(b).)

(a) (b)

Fig. 26
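The trace in Fig. 26(a) can be reproduced in code. A minimal sketch of the algorithm above; pushing neighbors in reverse list order (so that the first-listed neighbor ends up on top of STACK) is an assumption made to match the trace:

```python
def dfs(adj, start):
    """Depth-first search following the algorithm above.  STATUS is
    tracked as 'ready' / 'waiting' / 'processed'; per Step 5(b), a
    waiting vertex is moved to the top of the stack when reached again."""
    status = {v: "ready" for v in adj}
    stack = [start]                       # top of STACK = end of list
    status[start] = "waiting"
    order = []
    while stack:
        n = stack.pop()                   # Step 4: pop and process N
        order.append(n)
        status[n] = "processed"
        # reversed() so the first neighbor in the list ends up on top
        for j in reversed(adj[n]):
            if status[j] == "ready":
                stack.append(j)
                status[j] = "waiting"
            elif status[j] == "waiting":
                stack.remove(j)           # delete the previous occurrence
                stack.append(j)           # push the current one
    return order

# The adjacency structure of the test graph in Fig. 25:
adj = {"A": ["B", "C", "D"], "B": ["A", "E", "F"], "C": ["A", "F"],
       "D": ["A"], "E": ["B", "F", "G"], "F": ["B", "C", "E"],
       "G": ["E", "H"], "H": ["G"]}
```

Running dfs(adj, "A") yields the processing order A, B, E, F, C, G, H, D stated in the example.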

Breadth-first Search

The general idea behind a breadth-first search beginning at a starting vertex A is as follows. First we process the starting vertex A. Then we process all the neighbors of A. Then we process all the neighbors of neighbors of A. And so on. Naturally we need to keep track of the neighbors of each vertex, and we need to guarantee that no vertex is processed twice. This is accomplished by using a QUEUE to hold vertices that are waiting to be processed, and by a field STATUS which tells us the current status of a vertex. The algorithm follows.

Algorithm (Breadth-first Search): This algorithm executes a breadth-first search on a graph G beginning with a starting vertex A.

Step 1. Initialize all vertices to the ready state (STATUS = 1.)

Step 2. Put the starting vertex A in QUEUE and change the status of A to the waiting State (STATUS = 2).

Step 3. Repeat Steps 4 and 5 until QUEUE is empty.

Step 4. Remove the front vertex N of QUEUE. Process N, and set STATUS (N) = 3, the processed state.

Step 5. Examine each neighbor J of N.

(a) If STATUS (J) =1 (ready state), add J to the rear of QUEUE and reset STATUS (J) = 2 (waiting state).

(b) If STATUS (J) = 2 (waiting state) or STATUS (J) = 3 (processed state), ignore the vertex J.

Step 6. Exit.

Again, the above algorithm will process only those vertices which are connected to the starting vertex A, that is, the connected component including A. Suppose one wants to process all the vertices in the graph G. Then the algorithm must be modified so that it begins again with another vertex (which we call B) that is still in the ready state (STATUS = 1). This vertex B can be obtained by traversing through the list of vertices.


Example

Suppose the BFS algorithm is applied to the graph in Fig. 25. The vertices are processed in the following order:

A, D, C, B, F, E, G, H

Specifically, Fig. 27(a) shows the sequence of waiting lists in QUEUE and the vertices being processed. Again, each vertex, excluding A, comes from an adjacency list and hence corresponds to an edge of the graph. These edges form a spanning tree of G, which is pictured in Fig. 27(b). Again the numbers indicate the order in which the edges are added to the tree. Observe that this spanning tree is different from the one in Fig. 26(b), which came from a depth-first search.

Vertex QUEUE

(start)  A
A        B, C, D
D        B, C
C        F, B
B        E, F
F        E
E        G
G        H
H        ∅

(The front of QUEUE is at the right; new vertices are added at the rear, on the left.)

(a) (b)

Fig. 27
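The BFS trace in Fig. 27(a) can be reproduced the same way. A minimal sketch; as with the DFS example, neighbors are enqueued in reverse list order, an assumption made to match the processing order shown:

```python
from collections import deque

def bfs(adj, start):
    """Breadth-first search following the algorithm above, using a
    FIFO queue and a STATUS field so no vertex is processed twice."""
    status = {v: "ready" for v in adj}
    queue = deque([start])
    status[start] = "waiting"
    order = []
    while queue:
        n = queue.popleft()               # Step 4: remove the front vertex
        order.append(n)
        status[n] = "processed"
        # reversed() reproduces the processing order of the trace above
        for j in reversed(adj[n]):
            if status[j] == "ready":
                queue.append(j)           # Step 5(a): add to the rear
                status[j] = "waiting"
    return order

# The adjacency structure of the test graph in Fig. 25:
adj = {"A": ["B", "C", "D"], "B": ["A", "E", "F"], "C": ["A", "F"],
       "D": ["A"], "E": ["B", "F", "G"], "F": ["B", "C", "E"],
       "G": ["E", "H"], "H": ["G"]}
```

Running bfs(adj, "A") yields the processing order A, D, C, B, F, E, G, H stated in the example.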


