LATTICES AND BOOLEAN ALGEBRAS

E-content for M.A./M.Sc. and Integrated B.Sc.-M.Sc.

Prepared by Dr. Aftab Hussain Shah, Assistant Professor, Department of

JANUARY 1, 2015 CENTRAL UNIVERSITY OF KASHMIR Srinagar

Contents

Chapter-1: The Concept of an Order
Chapter-2: Lattices
Chapter-3: Modular, Distributive and Complemented Lattices
Chapter-4: Boolean Algebras and their Applications

Chapter 1 THE CONCEPT OF AN ORDER

The present chapter provides introductory definitions, examples and results on ordered sets which will be used in the subsequent chapters. This chapter consists of five sections. Section 1.1 is on ordered sets and their examples. In Section 1.2 we discuss Hasse diagrams, a special type of diagram used to represent ordered sets. Sections 1.3 and 1.4 deal with results and examples on order-preserving and residuated mappings between ordered sets. The chapter ends with some important results and examples on isomorphism of ordered sets.

1.1 Concept of an order

In this section we go through the definition of a partial order relation on a set, read simply as an “order”, and discuss partially ordered sets (ordered sets) in detail with the help of various examples. The section ends with some important results on ordered sets.

Definition 1.1.1: A binary relation on a non-empty set 퐸 is a subset 푅 of the Cartesian product set 퐸 × 퐸 = {(푥, 푦) | 푥, 푦 ∈ 퐸}. For any (푥, 푦) ∈ 퐸 × 퐸, (푥, 푦) ∈ 푅 means that 푥 is related to 푦 under 푅, and we denote this by 푥푅푦.

Definition 1.1.2: An equivalence relation on 퐸 is a binary relation 푅 that is: (a) Reflexive [for all 푥 ∈ 퐸, (푥, 푥) ∈ 푅]; (b) Symmetric [for all 푥, 푦 ∈ 퐸, if (푥, 푦) ∈ 푅 then (푦, 푥) ∈ 푅]; (c) Transitive [for all 푥, 푦, 푧 ∈ 퐸, if (푥, 푦) ∈ 푅 and (푦, 푧) ∈ 푅 then (푥, 푧) ∈ 푅]. For any relation 푅 on 퐸, the dual of 푅, denoted by 푅푑, is defined by: (푥, 푦) ∈ 푅푑 if and only if (푦, 푥) ∈ 푅. One can easily see that if 푅 is symmetric then 푅 = 푅푑. Here we shall be particularly interested in the situation where property (b) is replaced by the following property: (d) Antisymmetry [for all 푥, 푦 ∈ 퐸, if (푥, 푦) ∈ 푅 and (푦, 푥) ∈ 푅 then 푥 = 푦]. One can easily verify that if 푅 is reflexive and antisymmetric then 푅 ∩ 푅푑 = 푖푑퐸, where 푖푑퐸 denotes the equality relation on 퐸. Indeed: (푥, 푦) ∈ 푅 ∩ 푅푑 if and only if (푥, 푦) ∈ 푅 and (푥, 푦) ∈ 푅푑; if and only if (푥, 푦) ∈ 푅 and (푦, 푥) ∈ 푅; if and only if 푥 = 푦 (since 푅 is reflexive and antisymmetric);

if and only if (푥, 푦) ∈ 푖푑퐸. Hence 푅 ∩ 푅푑 = 푖푑퐸. Notation: We usually denote 푅 by the symbol ≤ and write the expression (푥, 푦) ∈ ≤ in the equivalent form 푥 ≤ 푦, which we read as “푥 is less than or equal to 푦”. With this notation we now define an order on a set 퐸.

Definition 1.1.3: We say ≤ is an order on 퐸 if and only if: (a) Reflexivity: [For all 푥 ∈ 퐸, 푥 ≤ 푥.] (b) Antisymmetry: [For all 푥, 푦 ∈ 퐸, if 푥 ≤ 푦 and 푦 ≤ 푥 then 푥 = 푦.] (c) Transitivity: [For all 푥, 푦, 푧 ∈ 퐸, if 푥 ≤ 푦 and 푦 ≤ 푧 then 푥 ≤ 푧.]
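The three axioms of Definition 1.1.3 can be checked mechanically when 퐸 is finite. The following Python sketch (the function name `is_partial_order` is my own, not from the text) tests a relation given as a set of pairs:

```python
# Sketch: check the three order axioms of Definition 1.1.3 on a finite set.

def is_partial_order(E, R):
    """Return True if R (a set of pairs) is reflexive, antisymmetric
    and transitive on the finite set E."""
    reflexive = all((x, x) in R for x in E)
    antisymmetric = all(not ((x, y) in R and (y, x) in R and x != y)
                        for x in E for y in E)
    transitive = all((x, z) in R
                     for x in E for y in E for z in E
                     if (x, y) in R and (y, z) in R)
    return reflexive and antisymmetric and transitive

E = {1, 2, 3}
leq = {(x, y) for x in E for y in E if x <= y}   # the usual order on numbers
sym = {(1, 2), (2, 1), (1, 1), (2, 2), (3, 3)}   # symmetric, not antisymmetric
print(is_partial_order(E, leq))  # True
print(is_partial_order(E, sym))  # False (fails antisymmetry)
```

The same helper is reusable for the divisibility and inclusion examples that follow.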

Definition 1.1.4: By an ordered set we shall mean a set 퐸 on which there is defined an order ≤, and we denote it by (퐸; ≤). Other common terminology for an order is a partial order, and for an ordered set, a partially ordered set or a poset.

Example 1.1.5: On every set 퐸 the relation of equality is an order.

Solution: Let 퐸 be an arbitrary set and define, for all 푥, 푦 ∈ 퐸, 푥 ≤ 푦 if and only if 푥 = 푦. Reflexivity: For all 푥 ∈ 퐸 we have 푥 = 푥, so 푥 ≤ 푥 for all 푥 ∈ 퐸. Antisymmetry: For any 푥, 푦 ∈ 퐸, if 푥 ≤ 푦 and 푦 ≤ 푥 then in particular 푥 = 푦 (by definition of ≤). Thus ≤ is antisymmetric. Transitivity: For any 푥, 푦, 푧 ∈ 퐸, if 푥 ≤ 푦 and 푦 ≤ 푧 then 푥 = 푦 and 푦 = 푧, so 푥 = 푧 and hence 푥 ≤ 푧. This proves that ≤ is transitive.

Example 1.1.6: On the set ℙ(퐸) of all subsets of a non-empty set 퐸, the relation ⊆ of set inclusion, defined by 퐴 ≤ 퐵 if and only if 퐴 ⊆ 퐵, is an order.

Solution: Reflexivity: For any 퐴 ∈ ℙ(퐸), since 퐴 ⊆ 퐴 we have 퐴 ≤ 퐴; therefore ⊆ is reflexive. Antisymmetry: For any 퐴, 퐵 ∈ ℙ(퐸), let 퐴 ≤ 퐵 and 퐵 ≤ 퐴; then 퐴 ⊆ 퐵 and 퐵 ⊆ 퐴, which gives 퐴 = 퐵. Therefore ⊆ is antisymmetric. Transitivity: For any 퐴, 퐵, 퐶 ∈ ℙ(퐸), let 퐴 ≤ 퐵 and 퐵 ≤ 퐶; then 퐴 ⊆ 퐵 and 퐵 ⊆ 퐶, which gives 퐴 ⊆ 퐶, therefore 퐴 ≤ 퐶. Thus ⊆ is transitive. Hence ⊆ defines an order on ℙ(퐸).
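For a small set 퐸 the claim of Example 1.1.6 can be verified exhaustively. A Python sketch (building ℙ(퐸) with `frozenset`s; all names here are my own):

```python
from itertools import combinations

E = {'a', 'b'}
# Build the power set P(E) as a list of frozensets.
P = [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]

R = {(A, B) for A in P for B in P if A <= B}   # A <= B on frozensets is inclusion
reflexive = all((A, A) in R for A in P)
antisymmetric = all(A == B for (A, B) in R if (B, A) in R)
transitive = all((A, C) in R for (A, B) in R for (B2, C) in R if B == B2)
print(reflexive, antisymmetric, transitive)  # True True True
```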

Example 1.1.7: On the set ℕ of natural numbers the relation | of divisibility, defined by 푚 ≤ 푛 if and only if 푚|푛, is an order.

Solution: To show that (ℕ; |) is an ordered set we must show that | is reflexive, antisymmetric and transitive.

Reflexivity: For any 푚 ∈ ℕ we have 푚 = 1 ∙ 푚, so 푚|푚, thus 푚 ≤ 푚 and hence ≤ is reflexive. Antisymmetry: For any 푚, 푛 ∈ ℕ, suppose 푚 ≤ 푛 and 푛 ≤ 푚, that is, 푚|푛 and 푛|푚; since 푚 and 푛 are positive this gives 푚 = 푛. Thus ≤ is antisymmetric. Transitivity: For 푚, 푛, 푝 ∈ ℕ, suppose 푚 ≤ 푛 and 푛 ≤ 푝, that is, 푚|푛 and 푛|푝; then 푚|푝, i.e., 푚 ≤ 푝. Thus ≤ is transitive. So | is an order on ℕ.
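The antisymmetry step above depends on working in ℕ: on the integers, 푚|푛 and 푛|푚 only force 푚 = ±푛. A small Python illustration (the helper `divides` is my own):

```python
def divides(m, n):
    """m | n, for nonzero m."""
    return n % m == 0

# On the positive integers, m | n and n | m force m == n,
# so distinct naturals are never mutually divisible:
assert not (divides(2, 6) and divides(6, 2))

# On Z antisymmetry fails: 2 | -2 and -2 | 2 although 2 != -2,
# which is why the example is stated on N rather than Z.
print(divides(2, -2), divides(-2, 2))  # True True
```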

Example 1.1.8: If (푃; ≤) is an ordered set and 푄 is a subset of 푃, then the relation ≤푄 defined on 푄 by 푥 ≤푄 푦 if and only if 푥 ≤ 푦 is an order on 푄.

Solution: We need to show that the relation ≤푄 defined on 푄 is an order. Reflexivity: For any 푥 ∈ 푄 we have 푥 ∈ 푃 (since 푄 ⊆ 푃), and since 푃 is an ordered set, 푥 ≤ 푥, that is 푥 ≤푄 푥. Thus ≤푄 is reflexive on 푄.

Antisymmetry: Let 푥, 푦 ∈ 푄 be such that 푥 ≤푄 푦 and 푦 ≤푄 푥; then 푥 ≤ 푦 and 푦 ≤ 푥, so 푥 = 푦 (since ≤ is an order on 푃).

This proves that ≤푄 is antisymmetric.

Transitivity: Suppose for any 푥, 푦, 푧 ∈ 푄; 푥 ≤푄 푦 and 푦 ≤푄 푧; if and only if 푥 ≤ 푦 and 푦 ≤ 푧 if and only if 푥 ≤ 푧 (since ≤ is an order on 푃)

if and only if 푥 ≤푄 푧.

This proves that ≤푄 is transitive.

We often write ≤푄 simply as ≤ and say that 푄 inherits the order ≤ from 푃. Thus, for example, the set 퐸푞푢 퐸 of all equivalence relations on 퐸 inherits the order ⊆ from ℙ(퐸 × 퐸).

Example 1.1.9: The set of even positive integers may be ordered in the usual way or by divisibility.

Example 1.1.10: If (퐸1; ≤1), (퐸2; ≤2), . . . , (퐸푛; ≤푛) are ordered sets then the Cartesian product set 퐸1 × 퐸2 × ⋯ × 퐸푛 can be given the Cartesian order ≤ defined by: for any (푥1, 푥2, . . . , 푥푛), (푦1, 푦2, . . . , 푦푛) ∈ 퐸1 × 퐸2 × ⋯ × 퐸푛;

(푥1, 푥2, . . . , 푥푛) ≤ (푦1, 푦2, . . . , 푦푛) if and only if 푥푖 ≤푖 푦푖 for all 푖 = 1,2, … , 푛.

Solution: Reflexivity: Let (푥1, 푥2, . . . , 푥푛) ∈ 퐸1 × 퐸2 × ⋯ × 퐸푛, where 푥푖 ∈ 퐸푖 for all 푖 = 1,2, … , 푛. By reflexivity of each (퐸푖; ≤푖) we have 푥푖 ≤푖 푥푖 for all 푖 = 1,2, … , 푛, and so (푥1, 푥2, . . . , 푥푛) ≤ (푥1, 푥2, . . . , 푥푛) (by definition of ≤). This shows that ≤ is reflexive.

Antisymmetry: Let (푥1, 푥2, . . . , 푥푛), (푦1, 푦2, . . . , 푦푛) ∈ 퐸1 × 퐸2 × ⋯ × 퐸푛 and suppose (푥1, 푥2, . . . , 푥푛) ≤ (푦1, 푦2, . . . , 푦푛) and (푦1, 푦2, . . . , 푦푛) ≤ (푥1, 푥2, . . . , 푥푛). Then 푥푖 ≤푖 푦푖 and 푦푖 ≤푖 푥푖 for all 푖 = 1,2, … , 푛, so by antisymmetry of each (퐸푖; ≤푖) we get 푥푖 = 푦푖 for all 푖, that is (푥1, 푥2, . . . , 푥푛) = (푦1, 푦2, . . . , 푦푛). This shows that ≤ is antisymmetric.

Transitivity: For any (푥1, 푥2, . . . , 푥푛), (푦1, 푦2, . . . , 푦푛), (푧1, 푧2, . . . , 푧푛) ∈ 퐸1 × 퐸2 × ⋯ × 퐸푛, if (푥1, 푥2, . . . , 푥푛) ≤ (푦1, 푦2, . . . , 푦푛) and (푦1, 푦2, . . . , 푦푛) ≤ (푧1, 푧2, . . . , 푧푛), then 푥푖 ≤푖 푦푖 and 푦푖 ≤푖 푧푖 for all 푖 = 1,2, … , 푛, so by transitivity of each (퐸푖; ≤푖) we get 푥푖 ≤푖 푧푖 for all 푖; hence (푥1, 푥2, . . . , 푥푛) ≤ (푧1, 푧2, . . . , 푧푛), proving that ≤ is transitive.

Definition 1.1.11: The order defined above is called the Cartesian order.
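The componentwise comparison of the Cartesian order can be sketched in Python with one comparison function per coordinate (all names here are my own):

```python
def cart_leq(xs, ys, leqs):
    """Cartesian (componentwise) order: x_i <=_i y_i for every i.
    leqs supplies one comparison function per coordinate."""
    return all(leq(x, y) for leq, x, y in zip(leqs, xs, ys))

num = lambda a, b: a <= b          # usual order on numbers
div = lambda a, b: b % a == 0      # divisibility order

print(cart_leq((1, 2), (3, 6), [num, div]))   # True: 1 <= 3 and 2 | 6
print(cart_leq((1, 4), (3, 6), [num, div]))   # False: 4 does not divide 6
```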

Example 1.1.12: Let 퐸 and 퐹 be ordered sets. Then the set 푀푎푝(퐸, 퐹) of all mappings 푓: 퐸 → 퐹 can be ordered by defining 푓 ≤ 푔 if and only if 푓(푥) ≤ 푔(푥) for all 푥 ∈ 퐸.

Solution: Reflexivity: Let 푓 ∈ 푀푎푝(퐸, 퐹). For every 푥 ∈ 퐸 we have 푓(푥) ∈ 퐹, and since 퐹 is an ordered set, 푓(푥) ≤ 푓(푥); hence 푓 ≤ 푓, showing that ≤ is reflexive on 푀푎푝(퐸, 퐹). Antisymmetry: For any 푓, 푔 ∈ 푀푎푝(퐸, 퐹), suppose 푓 ≤ 푔 and 푔 ≤ 푓; then 푓(푥) ≤ 푔(푥) and 푔(푥) ≤ 푓(푥) for all 푥 ∈ 퐸. Since 푓(푥), 푔(푥) ∈ 퐹 and 퐹 is an ordered set, antisymmetry in 퐹 gives 푓(푥) = 푔(푥) for every 푥 ∈ 퐸; thus 푓 = 푔. Hence ≤ is antisymmetric. Transitivity: For any 푓, 푔, ℎ ∈ 푀푎푝(퐸, 퐹), suppose 푓 ≤ 푔 and 푔 ≤ ℎ; then 푓(푥) ≤ 푔(푥) and 푔(푥) ≤ ℎ(푥) for all 푥 ∈ 퐸. By transitivity in 퐹 we get 푓(푥) ≤ ℎ(푥) for every 푥 ∈ 퐸, that is 푓 ≤ ℎ. Hence ≤ is transitive. Thus 푀푎푝(퐸, 퐹) forms an ordered set with respect to the order defined above.

Definition 1.1.13: We say that elements 푥, 푦 of an ordered set (퐸; ≤) are comparable if either 푥 ≤ 푦 or 푦 ≤ 푥. We denote this by writing 푥 ∥ 푦.

Definition 1.1.14: If all elements of an ordered set (퐸, ≤) are comparable then we say that 퐸 forms a chain, or that ≤ is a total order.

Definition 1.1.15: We say 푥, 푦 ∈ (퐸, ≤) are incomparable, and write 푥 ∦ 푦, if neither 푥 ≤ 푦 nor 푦 ≤ 푥. If all pairs of distinct elements of 퐸 are incomparable, then clearly ≤ is equality, in which case we say that 퐸 forms an antichain.

Example 1.1.16: The sets ℕ, ℤ, ℚ, ℝ of natural numbers, integers, rationals and real numbers form chains under their usual order ≤.

Example 1.1.17: In Example 1.1.6, the singleton subsets of 퐸 form an antichain in ℙ(퐸) under the inherited inclusion order.

Example 1.1.18: Let (푃1; ≤1) and (푃2; ≤2) be ordered sets. We prove that the relation ≤ defined on 푃1 × 푃2 by: (푥1, 푦1) ≤ (푥2, 푦2) if and only if either 푥1 <1 푥2, or 푥1 = 푥2 and 푦1 ≤2 푦2, is an order, called the lexicographic order on 푃1 × 푃2. We also show that ≤ is a total order if and only if ≤1 and ≤2 are total orders.

Solution: Reflexivity: Suppose that (푥, 푦) ∈ 푃1 × 푃2. Then 푥 ∈ 푃1 and 푦 ∈ 푃2, and since 푃1 and 푃2 are ordered sets we have 푥 = 푥 and 푦 ≤2 푦; hence, by definition of ≤, (푥, 푦) ≤ (푥, 푦). This shows that ≤ is reflexive.

Antisymmetry: Let (푥1, 푦1), (푥2, 푦2) be any elements of 푃1 × 푃2 with (푥1, 푦1) ≤ (푥2, 푦2) and (푥2, 푦2) ≤ (푥1, 푦1). We show that (푥1, 푦1) = (푥2, 푦2).

Now (푥1, 푦1) ≤ (푥2, 푦2) if and only if either 푥1 <1 푥2, or 푥1 = 푥2 and 푦1 ≤2 푦2,

and (푥2, 푦2) ≤ (푥1, 푦1) if and only if either 푥2 <1 푥1, or 푥2 = 푥1 and 푦2 ≤2 푦1.

The following cases arise:

Case 1: 푥1 <1 푥2 and 푥2 <1 푥1. By antisymmetry of (푃1; ≤1) this would give 푥1 = 푥2, contradicting 푥1 <1 푥2; so this case is not possible.

Case 2: 푥1 <1 푥2 and 푥2 = 푥1 and 푦2 ≤2 푦1. These cannot hold simultaneously, so we omit this case also.

Case 3: 푥1 = 푥2 and 푦1 ≤2 푦2 and 푥2 <1 푥1. Again, these cannot hold simultaneously, so we do not consider this case. We are therefore left with the following case:

Case 4: 푥1 = 푥2 and 푦1 ≤2 푦2 and 푥2 = 푥1 and 푦2 ≤2 푦1. By antisymmetry of ≤2 we conclude that 푦1 = 푦2, and since 푥1 = 푥2 we get (푥1, 푦1) = (푥2, 푦2). So we conclude from the above cases that ≤ is antisymmetric.

Transitivity: Suppose (푥1, 푦1), (푥2, 푦2), (푥3, 푦3) ∈ 푃1 × 푃2 and let (푥1, 푦1) ≤ (푥2, 푦2) and (푥2, 푦2) ≤ (푥3, 푦3). We show that (푥1, 푦1) ≤ (푥3, 푦3).

Now (푥1, 푦1) ≤ (푥2, 푦2) if and only if either 푥1 <1 푥2, or 푥1 = 푥2 and 푦1 ≤2 푦2,

and (푥2, 푦2) ≤ (푥3, 푦3) if and only if either 푥2 <1 푥3, or 푥2 = 푥3 and 푦2 ≤2 푦3. The following cases arise:

Case 1: 푥1 <1 푥2 and 푥2 <1 푥3. By the transitivity of (푃1; ≤1) this implies푥1 <1 푥3. Therefore by definition of ≤ we have (푥1, 푦1) ≤ (푥3, 푦3).

Case 2: 푥1 <1 푥2 and 푥2 = 푥3 and 푦2 ≤2 푦3; this implies 푥1 <1 푥3, so (푥1, 푦1) ≤ (푥3, 푦3). Case 3: 푥1 = 푥2 and 푦1 ≤2 푦2 and 푥2 <1 푥3. This implies 푥1 <1 푥3, so by definition of ≤ we have (푥1, 푦1) ≤ (푥3, 푦3).

Case 4: 푥1 = 푥2 and 푦1 ≤2 푦2 and 푥2 = 푥3 and 푦2 ≤2 푦3. Then 푥1 = 푥3 and, by transitivity of ≤2, 푦1 ≤2 푦3. Therefore (푥1, 푦1) ≤ (푥3, 푦3). So we conclude from the above cases that ≤ is transitive.

Now suppose that (푃1 × 푃2; ≤) is totally ordered and let (푥1, 푦1), (푥2, 푦2) ∈ 푃1 × 푃2. Then either (푥1, 푦1) ≤ (푥2, 푦2) or (푥2, 푦2) ≤ (푥1, 푦1). This means that: (i) 푥1 <1 푥2, or 푥1 = 푥2 and 푦1 ≤2 푦2; or (ii) 푥2 <1 푥1, or 푥2 = 푥1 and 푦2 ≤2 푦1.

To see that ≤1 is total, let 푥1, 푥2 ∈ 푃1 and pick any 푦 ∈ 푃2. Applying the above to (푥1, 푦) and (푥2, 푦) gives 푥1 <1 푥2, 푥2 <1 푥1 or 푥1 = 푥2; in every case 푥1 and 푥2 are comparable, so ≤1 is a total order. Similarly, for 푦1, 푦2 ∈ 푃2 pick any 푥 ∈ 푃1; applying the above to (푥, 푦1) and (푥, 푦2) gives 푦1 ≤2 푦2 or 푦2 ≤2 푦1, so ≤2 is a total order. Conversely, suppose that ≤1 and ≤2 are total orders and let (푥1, 푦1), (푥2, 푦2) ∈ 푃1 × 푃2. We consider the following cases:

Case 1: 푥1 ≠ 푥2. Since ≤1 is total, either 푥1 <1 푥2 or 푥2 <1 푥1; in the first case (푥1, 푦1) ≤ (푥2, 푦2) and in the second (푥2, 푦2) ≤ (푥1, 푦1), by definition of ≤.

Case 2: 푥1 = 푥2. Since ≤2 is total, either 푦1 ≤2 푦2 or 푦2 ≤2 푦1, so (푥1, 푦1) ≤ (푥2, 푦2) or (푥2, 푦2) ≤ (푥1, 푦1), by definition of ≤. Thus ≤ is a total order.
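The lexicographic comparison of Example 1.1.18 can be sketched as follows (Python; the names and the integer component orders are my own). The last check illustrates totality when both component orders are total:

```python
def lex_leq(p, q, lt1, leq2):
    """Lexicographic order on P1 x P2:
    (x1, y1) <= (x2, y2)  iff  x1 <1 x2, or x1 = x2 and y1 <=2 y2."""
    (x1, y1), (x2, y2) = p, q
    return lt1(x1, x2) or (x1 == x2 and leq2(y1, y2))

lt = lambda a, b: a < b
leq = lambda a, b: a <= b

print(lex_leq((1, 9), (2, 0), lt, leq))   # True: 1 < 2, second entries ignored
print(lex_leq((1, 9), (1, 3), lt, leq))   # False: equal firsts, but 9 <= 3 fails

# With two total component orders every pair is comparable:
pairs = [(a, b) for a in (1, 2) for b in (1, 2)]
print(all(lex_leq(p, q, lt, leq) or lex_leq(q, p, lt, leq)
          for p in pairs for q in pairs))  # True
```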

Example 1.1.19: Let 푃1 and 푃2 be disjoint sets, ≤1 an order on 푃1 and ≤2 an order on 푃2. Prove that the following defines an order on 푃1 ∪ 푃2: 푥 ≤ 푦 if and only if either 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦.

Solution: Reflexivity: Take any 푥 ∈ 푃1 ∪ 푃2; then 푥 ∈ 푃1 or 푥 ∈ 푃2. If 푥 ∈ 푃1, then by reflexivity of ≤1 on 푃1, 푥 ≤1 푥, and if 푥 ∈ 푃2, then by reflexivity of ≤2 on 푃2, 푥 ≤2 푥. In either case, by definition of ≤, we get 푥 ≤ 푥, proving the reflexivity of ≤.

Antisymmetry: Let 푥, 푦 ∈ 푃1 ∪ 푃2 and suppose 푥 ≤ 푦 and 푦 ≤ 푥. This means that: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦; and also 푥, 푦 ∈ 푃1 and 푦 ≤1 푥, or 푥, 푦 ∈ 푃2 and 푦 ≤2 푥. The following cases can arise:

Case 1: 푥, 푦 ∈ 푃1 , 푥 ≤1 푦 and 푥, 푦 ∈ 푃1, 푦 ≤1 푥, since (푃1, ≤1) is an ordered set therefore by antisymmetry of ≤1 we have 푥 = 푦.

Case 2: 푥, 푦 ∈ 푃1, 푥 ≤1 푦 and 푥, 푦 ∈ 푃2, 푦 ≤2 푥. Since 푃1 and 푃2 are disjoint, this case is not possible. Case 3: 푥, 푦 ∈ 푃2, 푥 ≤2 푦 and 푥, 푦 ∈ 푃2, 푦 ≤2 푥. Since (푃2, ≤2) is an ordered set, antisymmetry of ≤2 gives 푥 = 푦.

Case 4: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푦, 푥 ∈ 푃1 푦 ≤1 푥, which is again not possible as 푃1 and 푃2 are disjoint. Hence, we conclude that ≤ is antisymmetric.

Transitivity: Let 푥, 푦, 푧 ∈ 푃1 ∪ 푃2 and suppose 푥 ≤ 푦 and 푦 ≤ 푧. This means that: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦; and 푦, 푧 ∈ 푃1 and 푦 ≤1 푧, or 푦, 푧 ∈ 푃2 and 푦 ≤2 푧. We have to show that 푥 ≤ 푧. The following cases arise:

Case 1: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, and 푦, 푧 ∈ 푃1 and 푦 ≤1 푧. Since (푃1; ≤1) is an ordered set, transitivity of ≤1 gives 푥, 푧 ∈ 푃1 and 푥 ≤1 푧, and thus 푥 ≤ 푧.

Case 2: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦 and 푦, 푧 ∈ 푃2 and 푦 ≤2 푧. But this implies푦 ∈ 푃1 ∩ 푃2, which is not possible as 푃1 ∩ 푃2 = ∅. Thus this case cannot arise.

Case 3: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푦, 푧 ∈ 푃1 and 푦 ≤1 푧, but again this forces that 푦 ∈ 푃1 ∩ 푃2 which is again not possible as 푃1 ∩ 푃2 = ∅. So, we reject this case.

Case 4: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푦, 푧 ∈ 푃2 and 푦 ≤2 푧, by transitivity of (푃2; ≤2) we have 푥, 푧 ∈ 푃2 and 푥 ≤2 푧 and so 푥 ≤ 푧.

Thus we conclude from above cases that ≤ is transitive. Thus 푃1 ∪ 푃2 is an ordered set under the above defined order.

Example 1.1.20: Let 푃1 and 푃2 be disjoint sets, ≤1 an order on 푃1 and ≤2 an order on 푃2. We show that the following defines an order on 푃1 ∪ 푃2:

푥 ≤ 푦 if and only if either 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦, or 푥 ∈ 푃1 and 푦 ∈ 푃2.

The resulting ordered set is called the vertical sum or the linear sum of 푃1 and 푃2 and is denoted by 푃1⨁푃2.

Solution: Reflexivity: Take any 푥 ∈ 푃1 ∪ 푃2, then 푥 ∈ 푃1 or 푥 ∈ 푃2. Since (푃1; ≤1) and (푃2; ≤2) are ordered sets we have 푥 ≤1 푥 or 푥 ≤2 푥; in either case we have 푥 ≤ 푥. So ≤ defined above on 푃1 ∪ 푃2 is reflexive.

Antisymmetry: Suppose 푥, 푦 ∈ 푃1 ∪ 푃2 with 푥 ≤ 푦 and 푦 ≤ 푥 we have to show that 푥 = 푦.

Now 푥 ≤ 푦 if and only if either 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦, or 푥 ∈ 푃1 and 푦 ∈ 푃2; and 푦 ≤ 푥 if and only if either 푥, 푦 ∈ 푃1 and 푦 ≤1 푥, or 푥, 푦 ∈ 푃2 and 푦 ≤2 푥, or 푦 ∈ 푃1 and 푥 ∈ 푃2. The following cases can arise:

Case 1: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, and 푥, 푦 ∈ 푃1 and 푦 ≤1 푥; antisymmetry of ≤1 on 푃1 forces 푥 = 푦.

Case 2: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦 and 푥, 푦 ∈ 푃2 and 푦 ≤2 푥. But then 푥, 푦 ∈ 푃1 ∩ 푃2; which is not possible as 푃1 and 푃2 are disjoint, so we omit this case.

Case 3: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, and 푦 ∈ 푃1 and 푥 ∈ 푃2. But then 푥 ∈ 푃1 ∩ 푃2, which is again not possible as 푃1 and 푃2 are disjoint, so we omit this case also.

Case 4: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푥, 푦 ∈ 푃1 and 푦 ≤1 푥. But then again 푥, 푦 ∈ 푃1 ∩ 푃2; which is also not possible as 푃1 and 푃2 are disjoint, so we again reject this case.

Case 5: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푥, 푦 ∈ 푃2 and 푦 ≤2 푥, thus antisymmetry of ≤2 on 푃2 forces 푥 = 푦.

Case 6: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푦 ∈ 푃1 and 푥 ∈ 푃2 which is again not possible as 푃1 and 푃2 are disjoint.

Case 7: 푥 ∈ 푃1 and 푦 ∈ 푃2 and 푦 ∈ 푃1and 푥 ∈ 푃2. But then 푥, 푦 ∈ 푃1 ∩ 푃2 which is not possible as 푃1 and 푃2 are disjoint.

Case 8: 푥 ∈ 푃1 and 푦 ∈ 푃2 and 푥, 푦 ∈ 푃2 and 푦 ≤2 푥. But then 푥 ∈ 푃1 ∩ 푃2 which is not possible.

Case 9: 푥 ∈ 푃1 and 푦 ∈ 푃2 and 푥, 푦 ∈ 푃1 and 푦 ≤1 푥. But then again 푥, 푦 ∈ 푃1 ∩ 푃2 which is also not possible, so we reject this case.

So, we conclude from above cases that antisymmetry holds.

Transitivity: Let 푥, 푦, 푧 ∈ 푃1 ∪ 푃2 and suppose that 푥 ≤ 푦 and 푦 ≤ 푧 we have to show that 푥 ≤ 푧.

Now 푥 ≤ 푦 if and only if either 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦, or 푥 ∈ 푃1 and 푦 ∈ 푃2,

and 푦 ≤ 푧 if and only if either 푦, 푧 ∈ 푃1 and 푦 ≤1 푧, or 푦, 푧 ∈ 푃2 and 푦 ≤2 푧, or 푦 ∈ 푃1 and 푧 ∈ 푃2.

Again following cases can arise:

Case 1: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦 and 푦, 푧 ∈ 푃1 and 푦 ≤1 푧, thus transitivity of ≤1on 푃1 forces 푥 ≤1 푧 and therefore 푥, 푧 ∈ 푃1 and 푥 ≤ 푧.

Case 2: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, and 푦, 푧 ∈ 푃2 and 푦 ≤2 푧. But then 푦 ∈ 푃1 ∩ 푃2, which is not possible as 푃1 and 푃2 are disjoint, so we reject this case. Case 3: 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, and 푦 ∈ 푃1 and 푧 ∈ 푃2. Then 푥 ∈ 푃1 and 푧 ∈ 푃2, so by definition 푥 ≤ 푧.

Case 4: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푦, 푧 ∈ 푃1 and 푦 ≤1 푧. But then again 푦 ∈ 푃1 ∩ 푃2 which is also not possible.

Case 5: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦, and 푦, 푧 ∈ 푃2 and 푦 ≤2 푧; transitivity of ≤2 on 푃2 forces 푥 ≤2 푧, and therefore 푥, 푧 ∈ 푃2 and 푥 ≤ 푧.

Case 6: 푥, 푦 ∈ 푃2 and 푥 ≤2 푦 and 푦 ∈ 푃1and 푧 ∈ 푃2. This implies that 푦 ∈ 푃1 ∩ 푃2 which is not possible as 푃1 and 푃2 are disjoint, so we reject this case.

Case 7: 푥 ∈ 푃1 and 푦 ∈ 푃2 and 푦, 푧 ∈ 푃1 and 푦 ≤1 푧. But then 푦 ∈ 푃1 ∩ 푃2 which is again not possible, so we reject this case also.

Case 8: 푥 ∈ 푃1 and 푦 ∈ 푃2 and 푦, 푧 ∈ 푃2 and 푦 ≤2 푧. But then 푥 ∈ 푃1 and 푧 ∈ 푃2, so by definition 푥 ≤ 푧.

Case 9: 푥 ∈ 푃1 and 푦 ∈ 푃2, and 푦 ∈ 푃1 and 푧 ∈ 푃2. But this implies 푦 ∈ 푃1 ∩ 푃2, which is not possible as 푃1 and 푃2 are disjoint, so we reject this case also. From all the cases above we conclude that transitivity holds.
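The three-clause definition of Example 1.1.20 can be sketched directly (Python; the names and the alphabetical component orders are my own choices for illustration):

```python
def linear_sum_leq(x, y, P1, P2, leq1, leq2):
    """Vertical (linear) sum order on the disjoint union P1 u P2:
    within each part use its own order; everything in P1 sits below P2."""
    if x in P1 and y in P1:
        return leq1(x, y)
    if x in P2 and y in P2:
        return leq2(x, y)
    return x in P1 and y in P2

P1, P2 = {'a', 'b'}, {'c'}
leq_alpha = lambda u, v: u <= v   # alphabetical order on each part

print(linear_sum_leq('a', 'c', P1, P2, leq_alpha, leq_alpha))  # True: P1 below P2
print(linear_sum_leq('c', 'a', P1, P2, leq_alpha, leq_alpha))  # False
```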

The next result shows that the notion of an order carries over from any relation to its dual. Theorem 1.1.21: If 푅 is an order on 퐸 then so is its dual.

Proof: Suppose that 푅 is an order on 퐸. We have to show that 푅푑 is also an order on 퐸. Reflexivity: Let 푥 ∈ 퐸; then (푥, 푥) ∈ 푅, and hence (푥, 푥) ∈ 푅푑. Antisymmetry: Let 푥, 푦 ∈ 퐸 with (푥, 푦) ∈ 푅푑 and (푦, 푥) ∈ 푅푑. Now (푥, 푦) ∈ 푅푑 means (푦, 푥) ∈ 푅, and (푦, 푥) ∈ 푅푑 means (푥, 푦) ∈ 푅; since 푅 is antisymmetric, we have 푥 = 푦. Transitivity: Suppose 푥, 푦, 푧 ∈ 퐸 are such that (푥, 푦) ∈ 푅푑 and (푦, 푧) ∈ 푅푑; we have to show (푥, 푧) ∈ 푅푑. Now (푥, 푦) ∈ 푅푑 means (푦, 푥) ∈ 푅, and (푦, 푧) ∈ 푅푑 means (푧, 푦) ∈ 푅. Since 푅 is transitive, (푧, 푦) ∈ 푅 and (푦, 푥) ∈ 푅 give (푧, 푥) ∈ 푅, that is (푥, 푧) ∈ 푅푑. Thus transitivity holds, which proves the theorem.

Notation: We shall denote the dual of an order ≤ on 퐸 by the symbol ≥, which we read as “greater than or equal to”. The ordered set (퐸; ≥) is called the dual of (퐸; ≤) and is often written 퐸푑. As a consequence of Theorem 1.1.21 we can assert that to every statement that concerns an order on a set 퐸 there is a dual statement that concerns the corresponding dual order on 퐸.

Principle of Duality: To every theorem that concerns an ordered set 퐸 there corresponds a theorem that concerns the dual ordered set 퐸푑, obtained by replacing each statement that involves ≤ by its dual.
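Theorem 1.1.21, together with the earlier identity 푅 ∩ 푅푑 = 푖푑퐸, can be checked on a small example (a Python sketch; `is_order` is my own helper):

```python
E = {1, 2, 3}
R = {(x, y) for x in E for y in E if x <= y}   # the usual order as a set of pairs
Rd = {(y, x) for (x, y) in R}                  # the dual relation

def is_order(S):
    """Reflexive, antisymmetric and transitive on E."""
    return (all((x, x) in S for x in E)
            and all(x == y for (x, y) in S if (y, x) in S)
            and all((x, z) in S for (x, y) in S for (y2, z) in S if y == y2))

print(is_order(R), is_order(Rd))          # True True
print(R & Rd == {(x, x) for x in E})      # True: R n Rd = id_E
```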

Definition 1.1.22: If(퐸; ≤ ) is an ordered set, then by the top element or maximum element or greatest element of 퐸, we mean an element 푥 ∈ 퐸 such that 푦 ≤ 푥 for every 푦 ∈ 퐸.

Note: A top element, when it exists, is unique: if 푥 and 푦 are both top elements of 퐸 then on the one hand 푦 ≤ 푥 and on the other hand 푥 ≤ 푦, whence by the antisymmetry of ≤ we have 푥 = 푦.

Definition 1.1.23: By a bottom element or minimum element we mean an element 푧 ∈ 퐸 such that 푧 ≤ 푦 for every 푦 ∈ 퐸.

Note: A bottom element, when it exists, is unique: if 푥 and 푧 are both bottom elements of 퐸 then on the one hand 푥 ≥ 푧 and on the other hand 푧 ≥ 푥, whence by the antisymmetry of ≥ we have 푥 = 푧.

Definition 1.1.24: An ordered set that has both a top element and bottom element is said to be bounded.

Note: We shall use the notation 푥 < 푦 to mean 푥 ≤ 푦 and 푥 ≠ 푦. Note that the relation < thus defined is transitive but is not an order, since it fails to be reflexive. In other words, a strict order is characterized by transitivity together with the fact that 푥 < 푦 and 푦 < 푥 can never hold simultaneously.

Lemma 1.1.25: Let (퐸, ≤) be an ordered set and 푥1, 푥2, … , 푥푛 ∈ 퐸. If 푥1 ≤ 푥2 ≤ ⋯ ≤ 푥푛 ≤ 푥1 then 푥1 = 푥2 = ⋯ = 푥푛. Proof: Suppose 푥1 ≤ 푥2 ≤ ⋯ ≤ 푥푛 ≤ 푥1. For each 푖, transitivity applied to 푥푖 ≤ 푥푖+1 ≤ ⋯ ≤ 푥푛 ≤ 푥1 gives 푥푖 ≤ 푥1, while transitivity applied to 푥1 ≤ 푥2 ≤ ⋯ ≤ 푥푖 gives 푥1 ≤ 푥푖. By antisymmetry 푥푖 = 푥1 for every 푖, and hence 푥1 = 푥2 = ⋯ = 푥푛.

Definition 1.1.26: In an ordered set (퐸; ≤) we say that 푥 is covered by 푦 (or that 푦 covers 푥) if 푥 < 푦 and there is no 푎 ∈ 퐸 such that 푥 < 푎 < 푦. We denote this by 푥 ≺ 푦. The set of pairs (푥, 푦) such that 푦 covers 푥 is called the covering relation of (퐸; ≤).

Example 1.1.27: The covering relation of the partial ordering {(푎, 푏): ‘푎 divides 푏’} on {1, 2, 3, 4, 6, 12} consists of the following pairs: (1,2), (1,3), (2,4), (2,6), (3,6), (4,12), (6,12).
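The covering pairs of Example 1.1.27 can be recomputed by brute force: 푦 covers 푥 when 푥 strictly divides 푦 and no third element lies strictly between them (a Python sketch; names my own):

```python
E = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0   # the divisibility order

# y covers x: x < y in the order, with nothing strictly between them.
covers = [(x, y) for x in E for y in E
          if x != y and leq(x, y)
          and not any(a not in (x, y) and leq(x, a) and leq(a, y) for a in E)]
print(sorted(covers))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

These pairs are exactly the edges one would draw in the Hasse diagram of the next section.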

1.2 Hasse diagrams

This section deals with an important way of working with partially ordered sets. We are going to see how an order on a set can be represented by a diagram; such diagrams are called Hasse diagrams and are defined below.

Definition 1.2.1: A diagram representing an ordered set is called a Hasse diagram if: (1) elements are represented by points, and for any two elements 푥, 푦 with 푥 ≺ 푦 (that is, 푦 covers 푥), the point for 푥 is joined to the point for 푦 by an increasing line segment.

(2) The following procedure is used while drawing the Hasse diagram of an ordered set 푆: (i) Since the partial ordering is reflexive, each vertex of 푆 is related to itself; for convenience, all such loops are deleted in a Hasse diagram. (ii) Since the partial ordering is transitive, if 푎 ≤ 푏 and 푏 ≤ 푐 it follows that 푎 ≤ 푐; the edge from 푎 to 푐 implied by transitivity is therefore omitted. (iii) If a vertex 푎 is connected to a vertex 푏 drawn above it, then 푏 is an immediate successor of 푎, so the arrows may be omitted.

Example 1.2.2: Let 퐸 = {1,2,3,4,6,12} be the set of positive integral divisors of 12. Then the Hasse diagram of (퐸; ≤) where ≤ is divisibility order on 퐸 is as follows.

Example 1.2.3: Let 푋 = {푎, 푏, 푐} and 퐸 = ℙ(푋). Then the Hasse diagram of (퐸; ≤), where ≤ is the inclusion order on ℙ(푋), can be drawn as below:

The Hasse diagram of 퐸푑 = (퐸; ⊇) is obtained by turning the above diagram upside down and is as follows:

Example 1.2.4: We draw the Hasse diagram on sets of 3,4 and 5 elements by taking different orders on them.

Solution: Set of 3 elements: (i) Let 퐴 = {푎, 푏, 푐}. If we take the usual order ≤ on 퐴 then we get the Hasse diagram below;

(ii) If we take the set of positive integral divisors of 4 and order it by divisibility, then we obtain the Hasse diagram as;

Set of 4 elements: (i) {푎, 푏, 푐, 푑} under usual order;

(ii). Set of positive integral divisors of 6, when ordered by divisibility;

Set of 5 elements: (i) {푎, 푏, 푐, 푑, 푒} with the usual order:

(ii). Set of positive integral divisors of 16 if ordered by divisibility;

Example 1.2.5: We draw the Hasse diagram for the set of positive integral divisors of 210 when ordered by divisibility. Solution: The set of positive divisors of 210 is 푆 = {1, 2, 3, 5, 6, 7, 10, 14, 15, 21, 30, 35, 42, 70, 105, 210},

Hasse diagram of above set when ordered by divisibility is given by;
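As a quick check on the solution, the divisors of 210 can be enumerated directly (Python):

```python
# Enumerate the positive divisors of 210 by trial division.
n = 210
S = [d for d in range(1, n + 1) if n % d == 0]
print(S)        # [1, 2, 3, 5, 6, 7, 10, 14, 15, 21, 30, 35, 42, 70, 105, 210]
print(len(S))   # 16
```

Since 210 = 2 · 3 · 5 · 7 is a product of four distinct primes, its divisor poset has 2⁴ = 16 elements.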

Example 1.2.6: Let 푃1 and 푃2 be the ordered sets with Hasse diagrams;

We draw the Hasse diagrams of 푃1 × 푃2 and 푃2 × 푃1 under the Cartesian order.

Solution: Since 푃1 = {푎, 푏, 푐} and 푃2 = {푥, 푦}, therefore;

푃1 × 푃2 = {(푎, 푥), (푎, 푦), (푏, 푥), (푏, 푦), (푐, 푥), (푐, 푦)} and its Hasse diagram under the Cartesian order is given by;

Now, 푃2 × 푃1 = {(푥, 푎), (푥, 푏), (푥, 푐), (푦, 푎), (푦, 푏), (푦, 푐)} and its Hasse diagram under Cartesian order is given by;

From the above we conclude that 푃1 × 푃2 and 푃2 × 푃1 have the same Hasse diagram, except for the order of the components of the vertices.

1.3 Order Preserving and Order Reversing Mappings

Ordered sets can be related to each other in different ways. In this section we look at how ordered sets can be related by defining mappings between them in such a way that the order is preserved or reversed; such mappings are given special names accordingly. The section ends with some important results on order-preserving mappings.

Definition 1.3.1: If (퐴, ≤1) and (퐵, ≤2) are ordered sets, then we say that a mapping 푓: 퐴 → 퐵 is isotone (or monotone or order preserving) if;

for all 푥, 푦 ∈ 퐴, 푥 ≤1 푦 implies 푓(푥) ≤2 푓(푦); and is antitone (or order reversing) if, for all 푥, 푦 ∈ 퐴, 푥 ≤1 푦 implies 푓(푥) ≥2 푓(푦).

Example 1.3.2: If 퐸 is a non-empty set and 퐴 ⊆ 퐸 then 푓퐴: ℙ(퐸) → ℙ(퐸) given by 푓퐴(푋) = 퐴 ∩ 푋 is isotone, and if 푋´ is the complement of 푋 in 퐸, then the assignment 푋 ↦ 푋´ defines an antitone mapping on ℙ(퐸).

Solution: Let 푋, 푌 ∈ ℙ(퐸) be such that 푋 ⊆ 푌; we show that 푓퐴(푋) ⊆ 푓퐴(푌), i.e., 퐴 ∩ 푋 ⊆ 퐴 ∩ 푌. Let 푥 ∈ 퐴 ∩ 푋; then 푥 ∈ 퐴 and 푥 ∈ 푋, therefore 푥 ∈ 퐴 and 푥 ∈ 푌 (since 푋 ⊆ 푌), which implies 푥 ∈ 퐴 ∩ 푌. Thus 푓퐴(푋) ⊆ 푓퐴(푌), showing that 푓퐴 is isotone. Now we show that 푓(푋) = 푋´ is antitone. Let 푋, 푌 ∈ ℙ(퐸) be such that 푋 ⊆ 푌; we have to show that 푌´ ⊆ 푋´. Let 푥 ∈ 푌´; then 푥 ∉ 푌, therefore 푥 ∉ 푋 (since 푋 ⊆ 푌), which implies 푥 ∈ 푋´. Thus 푌´ ⊆ 푋´, showing that 푓 is antitone.

Example 1.3.3: Given 푓: 퐸 → 퐹, consider the induced direct image map 푓→ ∶ ℙ(퐸) → ℙ(퐹) defined for every 푋 ⊆ 퐸 by 푓→(푋) = {푓(푥) | 푥 ∈ 푋}, and the induced inverse image map 푓←: ℙ(퐹) → ℙ(퐸) defined for every 푌 ⊆ 퐹 by 푓←(푌) = {푥 ∈ 퐸 | 푓(푥) ∈ 푌}. Each of these mappings is isotone.

Solution: Let 퐶, 퐷 ∈ ℙ(퐸) be such that 퐶 ⊆ 퐷. We claim that 푓→(퐶) ⊆ 푓→(퐷). By definition, 푓→(퐶) = {푓(푥) | 푥 ∈ 퐶} and 푓→(퐷) = {푓(푦) | 푦 ∈ 퐷}. Let 푓(푥) ∈ 푓→(퐶) with 푥 ∈ 퐶; since 퐶 ⊆ 퐷 we have 푥 ∈ 퐷, so by definition 푓(푥) ∈ 푓→(퐷). Thus 퐶 ⊆ 퐷 gives 푓→(퐶) ⊆ 푓→(퐷), proving the claim. Now we show that 푓←: ℙ(퐹) → ℙ(퐸), defined for every 푌 ⊆ 퐹 by 푓←(푌) = {푥 ∈ 퐸 | 푓(푥) ∈ 푌}, is isotone. Let 푋, 푌 ∈ ℙ(퐹) be such that 푋 ⊆ 푌; we have to show that 푓←(푋) ⊆ 푓←(푌). Let 푧 ∈ 푓←(푋); then 푧 ∈ 퐸 with 푓(푧) ∈ 푋, so 푓(푧) ∈ 푌 (as 푋 ⊆ 푌), and hence 푧 ∈ 푓←(푌). Thus 푓←(푋) ⊆ 푓←(푌), showing that 푓← is isotone.
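Both induced maps of Example 1.3.3 are easy to experiment with when 푓 is stored as a dictionary (a Python sketch; the dictionary and helper names are my own):

```python
f = {1: 'a', 2: 'a', 3: 'b'}            # a map E -> F, stored as a dict

def f_direct(X):
    """f->(X) = {f(x) | x in X}."""
    return {f[x] for x in X}

def f_inverse(Y):
    """f<-(Y) = {x in E | f(x) in Y}."""
    return {x for x in f if f[x] in Y}

# Both maps preserve inclusion (are isotone):
print(f_direct({1}) <= f_direct({1, 3}))          # True: {'a'} within {'a','b'}
print(f_inverse({'a'}) <= f_inverse({'a', 'b'}))  # True: {1,2} within {1,2,3}
```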

We shall now give a natural interpretation of isotone mappings. For this purpose, we require the following notations.

Definition 1.3.4: (i) By a down-set of an ordered set 퐸 we shall mean a subset 퐷 of 퐸 with the property that if 푥 ∈ 퐷 and 푦 ∈ 퐸 is such that 푦 ≤ 푥, then 푦 ∈ 퐷. We include the empty subset of 퐸 as a down-set. By a principal down-set we shall mean a down-set of the form 푥↓ = {푦 ∈ 퐸 | 푦 ≤ 푥}, i.e., the down-set of 퐸 generated by 푥. (ii) By an up-set of an ordered set 퐸 we shall mean a subset 푈 of 퐸 with the property that if 푥 ∈ 푈 and 푦 ∈ 퐸 is such that 푦 ≥ 푥, then 푦 ∈ 푈; a principal up-set is an up-set of the form 푥↑ = {푦 ∈ 퐸 | 푦 ≥ 푥}, i.e., the up-set of 퐸 generated by 푥.

Example 1.3.5: In the chain 푄+ of positive rational numbers the set 퐷 = {푞 ∈ 푄+ | 푞² ≤ 2} is a down-set that is not principal.

Solution: Clearly 퐷 ⊆ 푄+. Let 푥 ∈ 퐷, so that 푥² ≤ 2, and let 푦 ∈ 푄+ be such that 푦 ≤ 푥; then 푦² ≤ 푥² ≤ 2, so 푦² ≤ 2, which gives 푦 ∈ 퐷. Thus 퐷 is a down-set. Now suppose to the contrary that 퐷 is a principal down-set, say 퐷 = 푥0↓ for some 푥0 ∈ 푄+, that is, {푞 ∈ 푄+ | 푞² ≤ 2} = {푦 ∈ 푄+ | 푦 ≤ 푥0}. Then 푥0 ∈ 퐷, so 푥0² ≤ 2; and since no rational satisfies 푥0² = 2, in fact 푥0² < 2. But then there is a rational 푞 with 푥0 < 푞 and 푞² < 2, so 푞 ∈ 퐷 while 푞 ≰ 푥0, a contradiction. Thus 퐷 is not a principal down-set.

The next result shows that the intersection and union of two down-sets is again a down-set.

Proposition 1.3.6: If 퐴 and 퐵 are down-sets of an ordered set 퐸 then so are 퐴 ∩ 퐵 and 퐴 ∪ 퐵. Proof: Let 퐴 and 퐵 be down-sets of an ordered set 퐸, and let 푥 ∈ 퐴 ∩ 퐵 and 푦 ∈ 퐸 with 푦 ≤ 푥. Then 푥 ∈ 퐴 and 푥 ∈ 퐵. Since 퐴 is a down-set, 푦 ∈ 퐴; also, since 퐵 is a down-set, 푦 ∈ 퐵. Thus 푦 ∈ 퐴 ∩ 퐵, which shows that 퐴 ∩ 퐵 is a down-set. Now we show that 퐴 ∪ 퐵 is a down-set. Let 푥 ∈ 퐴 ∪ 퐵 and 푦 ∈ 퐸 with 푦 ≤ 푥. Then 푥 ∈ 퐴 or 푥 ∈ 퐵. If 푥 ∈ 퐴 then, since 퐴 is a down-set, 푦 ∈ 퐴; if 푥 ∈ 퐵 then, since 퐵 is a down-set, 푦 ∈ 퐵. In either case 푦 ∈ 퐴 ∪ 퐵, showing that 퐴 ∪ 퐵 is a down-set.

Note: The above result does not hold in general for principal down-sets. For example, in the four-element ordered set in which 푎 and 푏 both lie below each of 푐 and 푑, we have 푐↓ ∩ 푑↓ = {푎, 푏, 푐} ∩ {푎, 푏, 푑} = {푎, 푏} = 푎↓ ∪ 푏↓, which is not a principal down-set. Isotone mappings are characterized by the following properties:

Theorem 1.3.7: If 퐸 and 퐹 are ordered sets and 푓 ∶ 퐸 → 퐹 is any mapping, then the following statements are equivalent: (1) 푓 is isotone; (2) the inverse image of every principal down-set of 퐹 is a down-set of 퐸; (3) the inverse image of every principal up-set of 퐹 is an up-set of 퐸.

Proof: (1) ⇒ (2): Suppose that 푓 ∶ 퐸 → 퐹 is isotone, let 푦 ∈ 퐹 and let 퐴 = 푓←(푦↓); then 퐴 = {푥 ∈ 퐸 | 푓(푥) ∈ 푦↓} = {푥 ∈ 퐸 | 푓(푥) ≤ 푦}. (i) If 퐴 is empty then 퐴 is clearly a down-set, so suppose that 퐴 is non-empty and let 푥 ∈ 퐴. Then for every 푧 ∈ 퐸 with 푧 ≤ 푥 we have 푓(푧) ≤ 푓(푥) ≤ 푦 (since 푓 is isotone), so 푓(푧) ≤ 푦 by transitivity. This implies 푓(푧) ∈ 푦↓, and thus 푧 ∈ 퐴 (by (i)). So by definition 퐴 is a down-set of 퐸. (2) ⇒ (1): For any 푥 ∈ 퐸 we have 푓(푥) ∈ (푓(푥))↓, therefore 푥 ∈ 푓←((푓(푥))↓). By (2) this is a down-set of 퐸, so if 푦 ∈ 퐸 is such that 푦 ≤ 푥 then 푦 ∈ 푓←((푓(푥))↓), which implies 푓(푦) ∈ (푓(푥))↓, so by definition 푓(푦) ≤ 푓(푥). Thus 푓 is isotone. (1) ⇔ (3): This follows from the above by the principle of duality.

1.4 Residuated Mappings

In view of the above natural result we now investigate under what conditions the inverse image of the principal down-set is also a principal down-set. The outcome will be the type of mapping that will play an important role in the sequel.

Theorem 1.4.1: If 퐸 and 퐹 are ordered sets, then the following conditions concerning푓 ∶ 퐸 → 퐹 are equivalent. (1) The inverse image under f of every principal down-set of 퐹 is a principal down-set of 퐸.

(2) 푓 is isotone and there is an isotone mapping 푔: 퐹 → 퐸 such that 푔 ∘ 푓 ≥ 푖푑퐸 and 푓 ∘ 푔 ≤ 푖푑퐹.

Proof: (1) ⇒ (2): If (1) holds then, by Theorem 1.3.7, 푓 is isotone, and for every 푦 ∈ 퐹 there exists 푥 ∈ 퐸 such that 푓←(푦↓) = 푥↓, that is, {푧 ∈ 퐸 | 푓(푧) ∈ 푦↓} = 푥↓, which means {푧 ∈ 퐸 | 푓(푧) ≤ 푦} = 푥↓.

Claim: For every 푦 ∈ 퐹 this element 푥 is unique. Indeed, suppose 푥0 ∈ 퐸 is also such that 푓←(푦↓) = 푥0↓. Then 푥↓ = 푥0↓, so 푥 ∈ 푥0↓ and 푥0 ∈ 푥↓, that is, 푥 ≤ 푥0 and 푥0 ≤ 푥, and by antisymmetry 푥 = 푥0. So we can define a mapping 푔: 퐹 → 퐸 by setting 푔(푦) = 푥. Claim: 푔: 퐹 → 퐸 defined in this way is isotone.

To see this, let 푦1, 푦2 ∈ 퐹 with 푦1 ≤ 푦2, and write 푔(푦1) = 푥1, 푔(푦2) = 푥2; we show that 푥1 ≤ 푥2. If 푥 ∈ 푦1↓ then 푥 ≤ 푦1 ≤ 푦2, so 푥 ∈ 푦2↓; hence 푦1↓ ⊆ 푦2↓, and since inverse images preserve inclusion, 푓←(푦1↓) ⊆ 푓←(푦2↓), that is, 푥1↓ ⊆ 푥2↓. Now 푥1 ∈ 푥1↓ ⊆ 푥2↓, so 푥1 ≤ 푥2 (by definition of down-set), and therefore 푔(푦1) ≤ 푔(푦2). Moreover, for every 푦 ∈ 퐹 we have 푔(푦) ∈ (푔(푦))↓ = 푓←(푦↓), which implies 푓(푔(푦)) ∈ 푦↓, so 푓(푔(푦)) ≤ 푦 for all 푦 ∈ 퐹; thus 푓 ∘ 푔 ≤ 푖푑퐹. Also, for every 푥 ∈ 퐸 we have 푓(푥) ≤ 푓(푥), so 푥 ∈ 푓←((푓(푥))↓) = (푔(푓(푥)))↓, that is, 푥 ≤ 푔(푓(푥)); thus 푔 ∘ 푓 ≥ 푖푑퐸. Hence (1) ⇒ (2) holds. (2) ⇒ (1): Suppose that (2) holds. Then for all 푥 ∈ 퐸 and 푦 ∈ 퐹 with 푓(푥) ≤ 푦 we have 푥 ≤ 푔(푓(푥)) ≤ 푔(푦) (since 푔 is isotone). Conversely, if 푥 ≤ 푔(푦) then, since 푓 is isotone, 푓(푥) ≤ 푓(푔(푦)) ≤ 푦. It follows that 푓(푥) ≤ 푦 if and only if 푥 ≤ 푔(푦); that is, 푥 ∈ 푓←(푦↓) if and only if 푥 ∈ (푔(푦))↓, which gives 푓←(푦↓) = (푔(푦))↓. Thus (2) ⇒ (1) holds.

Definition 1.4.2: A mapping 푓: 퐸 → 퐹 that satisfies either of the equivalent conditions of Theorem 1.4.1 is said to be residuated.

Note: If 푓: 퐸 → 퐹 is residuated, then the isotone mapping 푔: 퐹 → 퐸 satisfying 푔 ∘ 푓 ≥ 푖푑퐸 and 푓 ∘ 푔 ≤ 푖푑퐹 is unique. To see this, suppose 푔 and 푔∗ are two isotone mappings satisfying the above properties. Then 푔 = 푖푑퐸 ∘ 푔 ≤ (푔∗ ∘ 푓) ∘ 푔 = 푔∗ ∘ (푓 ∘ 푔) ≤ 푔∗ ∘ 푖푑퐹 = 푔∗, which implies 푔 ≤ 푔∗. Similarly, 푔∗ = 푖푑퐸 ∘ 푔∗ ≤ (푔 ∘ 푓) ∘ 푔∗ = 푔 ∘ (푓 ∘ 푔∗) ≤ 푔 ∘ 푖푑퐹 = 푔, so 푔∗ ≤ 푔. Hence 푔 = 푔∗. We shall denote this unique 푔 by 푓+ and call it the residual of 푓. Thus 푓+ ∘ 푓 ≥ 푖푑퐸 and 푓 ∘ 푓+ ≤ 푖푑퐹.

Example 1.4.3: If 푓: 퐸 → 퐹 then the direct image map 푓→ ∶ 푃(퐸) → 푃(퐹) is residuated with residual 푓+ = 푓← ∶ 푃(퐹) → 푃(퐸).

Solution: We are given 푓: 퐸 → 퐹, and we know that 푓→ ∶ 푃(퐸) → 푃(퐹) is defined for every 푋 ⊆ 퐸 by 푓→(푋) = {푓(푥) | 푥 ∈ 푋}. (i) Also 푓← ∶ 푃(퐹) → 푃(퐸) is defined for every 푌 ⊆ 퐹 by 푓←(푌) = {푥 ∈ 퐸 | 푓(푥) ∈ 푌}. (ii) We have to show that 푓← is the residual of 푓→, in other words that 푓→ is residuated. Both maps are clearly isotone with respect to ⊆. Now for any 푋 ∈ 푃(퐸) we have (푓← ∘ 푓→)(푋) = 푓←(푓→(푋)) = {푥 ∈ 퐸 | 푓(푥) ∈ 푓→(푋)} ⊇ 푋 (by (i) and (ii)), so 푓← ∘ 푓→ ≥ 푖푑푃(퐸). Similarly, for any 푌 ∈ 푃(퐹) we have (푓→ ∘ 푓←)(푌) = {푓(푥) | 푓(푥) ∈ 푌} ⊆ 푌, so 푓→ ∘ 푓← ≤ 푖푑푃(퐹). Therefore by definition 푓→ is residuated with residual 푓+ = 푓←.
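The residuation inequalities above can be checked exhaustively on a small finite example. The following is a minimal sketch; the sets E, F and the map f are illustrative choices, not taken from the text.

```python
# Finite check of Example 1.4.3: for f: E -> F, the direct image map on P(E)
# is residuated with residual the inverse image map. E, F, f are arbitrary
# illustrative choices.
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

E = {1, 2, 3}
F = {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}           # an arbitrary map E -> F

def f_arrow(X):                         # direct image f->(X) = {f(x) | x in X}
    return frozenset(f[x] for x in X)

def f_back(Y):                          # inverse image f<-(Y) = {x | f(x) in Y}
    return frozenset(x for x in E if f[x] in Y)

# f<- ∘ f-> >= id on P(E)  and  f-> ∘ f<- <= id on P(F)
assert all(X <= f_back(f_arrow(X)) for X in powerset(E))
assert all(f_arrow(f_back(Y)) <= Y for Y in powerset(F))
print("f-> is residuated with residual f<-")
```

Since the check ranges over every subset, it verifies both residuation inequalities for this particular f, not just on sample inputs.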

Example 1.4.4: If 퐸 is any set and 퐴 ⊆ 퐸 then 휆퐴: 푃(퐸) → 푃(퐸) defined by 휆퐴(푋) = 퐴 ∩ 푋 is residuated, with residual 휆퐴+ given by 휆퐴+(푌) = 푌 ∪ 퐴′.

Solution: We are given 휆퐴: 푃(퐸) → 푃(퐸) defined by 휆퐴(푋) = 퐴 ∩ 푋. (i) We have to show that 휆퐴 is residuated with residual 휆퐴+ defined by 휆퐴+(푌) = 푌 ∪ 퐴′. (ii) Both maps are clearly isotone. Now for any 푋 ∈ 푃(퐸) we have (휆퐴 ∘ 휆퐴+)(푋) = 휆퐴(푋 ∪ 퐴′) (by (ii)) = 퐴 ∩ (푋 ∪ 퐴′) (by (i)) = (퐴 ∩ 푋) ∪ (퐴 ∩ 퐴′) = (퐴 ∩ 푋) ∪ ∅ = 퐴 ∩ 푋 ⊆ 푋. From this we get 휆퐴 ∘ 휆퐴+ ≤ 푖푑푃(퐸). Similarly, (휆퐴+ ∘ 휆퐴)(푋) = (퐴 ∩ 푋) ∪ 퐴′ ⊇ 푋, so 휆퐴+ ∘ 휆퐴 ≥ 푖푑푃(퐸). Thus by definition 휆퐴 is residuated with residual 휆퐴+.

Theorem 1.4.5: If 푓: 퐸 → 퐹 is residuated then 푓 ∘ 푓+ ∘ 푓 = 푓 and 푓+ ∘ 푓 ∘ 푓+ = 푓+.

Proof: Since 푓: 퐸 → 퐹 is residuated, 푓 is isotone. Hence 푓 ∘ 푓+ ∘ 푓 ≥ 푓 ∘ 푖푑퐸 = 푓 (since 푓+ ∘ 푓 ≥ 푖푑퐸) and 푓 ∘ 푓+ ∘ 푓 ≤ 푖푑퐹 ∘ 푓 = 푓 (since 푓 ∘ 푓+ ≤ 푖푑퐹), from which the first equality holds. Now 푓+ ∘ 푓 ∘ 푓+ ≤ 푓+ ∘ 푖푑퐹 = 푓+ and 푓+ ∘ 푓 ∘ 푓+ ≥ 푖푑퐸 ∘ 푓+ = 푓+, so 푓+ ∘ 푓 ∘ 푓+ = 푓+. This proves the theorem. The next result shows that the residual of a composition is the composition of the residuals.

Theorem 1.4.6: If 푓: 퐸 → 퐹 and 푔: 퐹 → 퐺 are residuated mappings then so is 푔 ∘ 푓: 퐸 → 퐺 and (푔 ∘ 푓)+ = 푓+ ∘ 푔+.

Proof: Clearly 푔 ∘ 푓 and 푓+ ∘ 푔+ are isotone. Moreover, (푓+ ∘ 푔+) ∘ (푔 ∘ 푓) = 푓+ ∘ (푔+ ∘ 푔) ∘ 푓 ≥ 푓+ ∘ 푖푑퐹 ∘ 푓 = 푓+ ∘ 푓 ≥ 푖푑퐸 and (푔 ∘ 푓) ∘ (푓+ ∘ 푔+) = 푔 ∘ (푓 ∘ 푓+) ∘ 푔+ ≤ 푔 ∘ 푖푑퐹 ∘ 푔+ = 푔 ∘ 푔+ ≤ 푖푑퐺. Thus 푔 ∘ 푓 is residuated and, by the uniqueness of residuals, (푔 ∘ 푓)+ = 푓+ ∘ 푔+.
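This composition law can also be checked by brute force on a small powerset lattice. The sketch below reuses the maps λ of Example 1.4.4 (the ground set and the choices of A and B are ours), computing each residual directly as the largest X with h(X) ⊆ Y.

```python
# Finite check of Theorem 1.4.6 on P(E): the residual of a composition is the
# composition of the residuals. We compose two maps λ_S(X) = S ∩ X, whose
# residual is Y ↦ Y ∪ (E \ S). E, A, B are illustrative choices.
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

E = frozenset({1, 2, 3, 4})
A, B = frozenset({1, 2, 3}), frozenset({2, 3, 4})

lam  = lambda S: (lambda X: S & X)          # λ_S
lamp = lambda S: (lambda Y: Y | (E - S))    # its residual

comp = lambda X: lam(B)(lam(A)(X))          # λ_B ∘ λ_A

def residual(h):
    # brute-force residual: the largest X with h(X) ⊆ Y (unique, since the
    # candidates are closed under union)
    return lambda Y: max((X for X in powerset(E) if h(X) <= Y), key=len)

for Y in powerset(E):
    # (λ_B ∘ λ_A)+ = λ_A+ ∘ λ_B+
    assert residual(comp)(Y) == lamp(A)(lamp(B)(Y))
print("residual of the composition = composition of the residuals")
```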

Corollary 1.4.7: For every ordered set 퐸, the set Res 푬 of residuated mappings 푓: 퐸 → 퐸 forms a semigroup, as does the set Res+ 푬 of residual mappings 푓+: 퐸 → 퐸.

Proof: If 푓, 푔 ∈ Res 퐸 then by Theorem 1.4.6 푓 ∘ 푔 ∈ Res 퐸, and composition of mappings is associative: for 푓, 푔, ℎ ∈ Res 퐸, (푓 ∘ 푔) ∘ ℎ = 푓 ∘ (푔 ∘ ℎ). Thus Res 퐸 is a semigroup. Dually, Res+ 퐸 is a semigroup.

Example 1.4.8: If 푓: 퐸 → 퐸 is residuated then 푓 = 푓+ if and only if 푓² = 푖푑퐸.

Solution: Suppose that 푓: 퐸 → 퐸 is residuated and 푓 = 푓+. Since 푓 is residuated, the isotone map 푓+: 퐸 → 퐸 satisfies 푓 ∘ 푓+ ≤ 푖푑퐸 and 푓+ ∘ 푓 ≥ 푖푑퐸. Since 푓 = 푓+ we get 푓 ∘ 푓 = 푓 ∘ 푓+ ≤ 푖푑퐸 and also 푓 ∘ 푓 = 푓+ ∘ 푓 ≥ 푖푑퐸, which implies 푓 ∘ 푓 = 푖푑퐸; thus 푓² = 푖푑퐸. Conversely, suppose that 푓² = 푖푑퐸, that is 푓 ∘ 푓 = 푖푑퐸. Then 푓 = 푖푑퐸 ∘ 푓 ≤ (푓+ ∘ 푓) ∘ 푓 = 푓+ ∘ 푓² = 푓+, which implies 푓 ≤ 푓+. Also 푓+ = 푖푑퐸 ∘ 푓+ = (푓 ∘ 푓) ∘ 푓+ = 푓 ∘ (푓 ∘ 푓+) ≤ 푓 ∘ 푖푑퐸 = 푓, so 푓+ ≤ 푓. Thus 푓 = 푓+.

1.5 Isomorphism of Ordered Sets

We now consider the notion of an isomorphism of ordered sets. Besides requiring 푓: 퐸 ⟶ 퐹 to be a bijection, we certainly also want 푓−1: 퐹 ⟶ 퐸 to be isotone. Note that simply choosing 푓 to be an isotone bijection is not enough; for example, consider the ordered sets with the following Hasse diagrams:

Then the mapping 푓: 퐸 → 퐹 given by 푓(푥) = β, 푓(푦) = γ and 푓(푧) = α is an isotone bijection, but 푓−1 is not isotone since α < β while 푓−1(α) = 푧 ∥ 푥 = 푓−1(β).

Definition 1.5.1: By an order isomorphism from an ordered set 퐸 to an ordered set 퐹 we mean an isotone bijection 푓: 퐸 → 퐹 whose inverse is also isotone. Note: From the above results we can see that the notion of an order isomorphism is equivalent to that of a bijection 푓 that is residuated and whose inverse 푓−1: 퐹 → 퐸 is also isotone (it is then the residual of 푓). If there is an order isomorphism 푓: 퐸 → 퐹 then we say that 퐸 and 퐹 are (order) isomorphic, and we write 퐸 ≃ 퐹.

Theorem 1.5.2: Ordered sets 퐸 and 퐹 are isomorphic if and only if there is a surjective mapping 푓: 퐸 → 퐹 such that 푥 ≤ 푦 if and only if 푓(푥) ≤ 푓(푦).

Proof: Suppose 퐸 ≃ 퐹. Then by definition there is an isotone bijection 푓: 퐸 → 퐹 (in particular surjective) with 푓−1 isotone. Since 푓 is isotone, 푥 ≤ 푦 gives 푓(푥) ≤ 푓(푦); and since 푓−1 is isotone, 푓(푥) ≤ 푓(푦) gives 푥 ≤ 푦. Hence 푥 ≤ 푦 if and only if 푓(푥) ≤ 푓(푦). Conversely, suppose that such a surjective mapping 푓 exists. Then 푓 is also injective: if 푓(푥) = 푓(푦) then from 푓(푥) ≤ 푓(푦) we obtain 푥 ≤ 푦 and from 푓(푦) ≤ 푓(푥) we obtain 푦 ≤ 푥, so 푥 = 푦. Thus 푓 is a bijection. By hypothesis 푓 is isotone, and so is 푓−1: for 푥 ≤ 푦 in 퐹 can be written as 푓(푓−1(푥)) ≤ 푓(푓−1(푦)), which by hypothesis implies 푓−1(푥) ≤ 푓−1(푦).

Notation: We shall say that 퐸 and 퐹 are dually isomorphic if 퐸 ≃ 퐹푑, or equivalently 퐹 ≃ 퐸푑. In the particular case where 퐸 ≃ 퐸푑, we say that 퐸 is self-dual.

Example 1.5.3: Let Sub 푍 be the set of subgroups of the additive abelian group 푍, and order Sub 푍 by set inclusion. Then (ℕ; |) is dually isomorphic to (Sub 푍; ⊆) under the assignment 푛 ⟼ 푛푍. In fact, since every subgroup of 푍 is of the form 푛푍 for some 푛 ∈ ℕ, this assignment is surjective. Also, we have 푛푍 ⊆ 푚푍 if and only if 푚|푛. Note that if we include zero in ℕ, then 0 is the top element of (ℕ; |) and it corresponds to the trivial subgroup {0}. The result therefore follows by the above theorem. We end this section, and thereby this chapter, by giving some examples of isomorphisms and dual isomorphisms.

Proposition 1.5.4: If 퐸 and 퐹 are ordered sets, then under the Cartesian order 퐸 × 퐹 ≃ 퐹 × 퐸.

Proof: Define 푓 ∶ 퐸 × 퐹 → 퐹 × 퐸 by 푓(푎, 푏) = (푏, 푎). 풇 is surjective: if (푏, 푎) ∈ 퐹 × 퐸 then 푏 ∈ 퐹 and 푎 ∈ 퐸, so (푎, 푏) ∈ 퐸 × 퐹 and 푓(푎, 푏) = (푏, 푎). Now for (푎, 푏), (푎′, 푏′) ∈ 퐸 × 퐹 we have (푎, 푏) ≤ (푎′, 푏′) if and only if 푎 ≤ 푎′ in 퐸 and 푏 ≤ 푏′ in 퐹, if and only if 푏 ≤ 푏′ in 퐹 and 푎 ≤ 푎′ in 퐸, if and only if (푏, 푎) ≤ (푏′, 푎′) in 퐹 × 퐸, if and only if 푓(푎, 푏) ≤ 푓(푎′, 푏′). Therefore, by Theorem 1.5.2, 퐸 × 퐹 ≃ 퐹 × 퐸.

Example 1.5.5: Prove that (퐸 × 퐹)푑 ≃ 퐸푑 × 퐹푑.

Proof: Define 푓: (퐸 × 퐹)푑 ⟶ 퐸푑 × 퐹푑 by 푓(푥, 푦) = (푥, 푦). Then
(푥1, 푦1) ≤ (푥2, 푦2) in (퐸 × 퐹)푑
if and only if (푥2, 푦2) ≤ (푥1, 푦1) in 퐸 × 퐹
if and only if 푥2 ≤ 푥1 in 퐸 and 푦2 ≤ 푦1 in 퐹
if and only if 푥1 ≤ 푥2 in 퐸푑 and 푦1 ≤ 푦2 in 퐹푑
if and only if (푥1, 푦1) ≤ (푥2, 푦2) in 퐸푑 × 퐹푑.
So 푓 is isotone, and the same chain of equivalences shows that 푓−1 is isotone. Also 푓 is clearly surjective. Therefore (퐸 × 퐹)푑 ≃ 퐸푑 × 퐹푑.

Example 1.5.6: Prove that (푃(퐸); ⊆) is self-dual.

Proof: We show that (푃(퐸); ⊆) is isomorphic to its dual (푃(퐸); ⊇). Define a map 푓 ∶ (푃(퐸); ⊆) → (푃(퐸); ⊇) by 푓(푋) = 푋′ (where 푋′ is the complement of 푋). 풇 is injective: let 퐴, 퐵 ∈ 푃(퐸) with 푓(퐴) = 푓(퐵), that is 퐴′ = 퐵′ (by the definition of 푓); then (퐴′)′ = (퐵′)′, thus 퐴 = 퐵. 풇 is surjective: for any 퐴′ ∈ 푃(퐸) we have (퐴′)′ = 퐴 ∈ 푃(퐸) with 푓(퐴) = 퐴′. 풇 is isotone: suppose 퐴, 퐵 ∈ (푃(퐸); ⊆) with 퐴 ⊆ 퐵; we show that 퐴′ ⊇ 퐵′. Let 푥 ∈ 퐵′; then 푥 ∉ 퐵, which implies 푥 ∉ 퐴 (as 퐴 ⊆ 퐵), so 푥 ∈ 퐴′. Hence 퐴′ ⊇ 퐵′, and 푓 is isotone. 풇−ퟏ is isotone: 푓−1: (푃(퐸); ⊇) → (푃(퐸); ⊆) is given by 푓−1(푋′) = 푋. Suppose 퐴′, 퐵′ ∈ (푃(퐸); ⊇) with 퐴′ ⊇ 퐵′; we show that 퐴 ⊆ 퐵. Let 푧 ∈ 퐴; then 푧 ∉ 퐴′, which implies 푧 ∉ 퐵′ (as 퐵′ ⊆ 퐴′), so 푧 ∈ 퐵. Thus 퐴 ⊆ 퐵, showing that 푓−1 is isotone. Hence (푃(퐸); ⊆) ≃ (푃(퐸); ⊇).

Example 1.5.7: Let 2 denote the two-element chain 0 < 1. Prove that the mapping 푓: ℙ({1, 2, … , 푛}) → 2ⁿ given by 푓(푋) = (푥1, … , 푥푛), where 푥푖 = 1 if 푖 ∈ 푋 and 푥푖 = 0 otherwise, for each 푋 ⊆ {1, 2, … , 푛}, is an order isomorphism.

Proof: 풇 is isotone: let 푋, 푌 ∈ ℙ({1, 2, … , 푛}) and let 푓(푋) = (푥1, … , 푥푛), 푓(푌) = (푦1, … , 푦푛). Then
푋 ⊆ 푌 if and only if (for all 푖) 푖 ∈ 푋 implies 푖 ∈ 푌
if and only if (for all 푖) 푥푖 = 1 implies 푦푖 = 1
if and only if (for all 푖) 푥푖 ≤ 푦푖
if and only if 푓(푋) ≤ 푓(푌) in 2ⁿ.
To show that 푓 is onto, take 푥 = (푥1, 푥2, … , 푥푛) ∈ 2ⁿ; then 푥 = 푓(푋) where 푋 = {푖 | 푥푖 = 1}. So 푓 is onto, and by Theorem 1.5.2, 푓 is an order isomorphism.
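For a small 푛 the isomorphism of Example 1.5.7 can be verified exhaustively; the following sketch (with the illustrative choice 푛 = 3) checks both the order condition and bijectivity.

```python
# Finite check of Example 1.5.7: the characteristic-vector map from
# (P({1..n}), ⊆) to the product chain 2^n, ordered componentwise, is an order
# isomorphism. n = 3 is an illustrative choice.
from itertools import combinations

n = 3
universe = range(1, n + 1)
subsets = [frozenset(c) for r in range(n + 1) for c in combinations(universe, r)]

def f(X):
    # characteristic vector (x_1, ..., x_n) with x_i = 1 iff i ∈ X
    return tuple(1 if i in X else 0 for i in universe)

leq = lambda u, v: all(a <= b for a, b in zip(u, v))  # componentwise order on 2^n

# X ⊆ Y iff f(X) ≤ f(Y), and f is a bijection onto 2^n
assert all((X <= Y) == leq(f(X), f(Y)) for X in subsets for Y in subsets)
assert len({f(X) for X in subsets}) == 2 ** n
print("P({1..3}) is order-isomorphic to 2^3")
```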

Chapter 2

Lattices

Many important properties of an ordered set 푃 are expressed in terms of the existence of certain upper bounds or lower bounds of subsets of 푃. Two of the most important classes of ordered sets defined in this way are lattices and complete lattices. In this chapter we present the basic theory of such ordered sets and also consider them as algebraic structures. We also discuss a special type of lattice called the down-set lattice, and the mappings which preserve the operations of lattices. The chapter ends with some important results and examples on complete lattices.

2.1 Semilattices and Lattices

In this section we will go through the definitions of semilattices and lattices and discuss a lattice as an algebraic structure. The section ends with some important results on lattices. If 퐸 is an ordered set and 푥 ∈ 퐸 then the canonical embedding of 푥↓ into 퐸, that is, the restriction to 푥↓ of the identity mapping on 퐸, is clearly isotone: if 푖푥 ∶ 푥↓ → 퐸 is defined by 푖푥(푦) = 푦, then 푦 ≤ 푧 implies 푖푥(푦) = 푦 ≤ 푧 = 푖푥(푧). We shall now see when such an embedding is residuated. This has important consequences as far as the structure of 퐸 is concerned.

Theorem 2.1.1: If 퐸 is an ordered set then the following are equivalent: (1) for every 푥 ∈ 퐸 the canonical embedding of 푥↓ into 퐸 is residuated; (2) the intersection of any two principal down-sets is a principal down-set.

Proof: (1) ⟺ (2): For each 푥 ∈ 퐸, let 푖푥: 푥↓ → 퐸 be the canonical embedding. By the definition of a residuated mapping, (1) holds if and only if for all 푥, 푦 ∈ 퐸 there exists 훼 = max{푧 ∈ 푥↓ | 푧 = 푖푥(푧) ≤ 푦}. We claim that this is equivalent to the existence of 훼 ∈ 퐸 such that 푥↓ ∩ 푦↓ = 훼↓. Indeed, suppose such a maximum 훼 exists. If 푧 ∈ 훼↓ then 푧 ≤ 훼 ≤ 푥 and 푧 ≤ 훼 ≤ 푦, so 푧 ∈ 푥↓ ∩ 푦↓; hence 훼↓ ⊆ 푥↓ ∩ 푦↓. Conversely, if 푘 ∈ 푥↓ ∩ 푦↓ then 푘 ≤ 푥 and 푘 ≤ 푦, so 푘 ∈ {푧 ∈ 푥↓ | 푧 ≤ 푦} and therefore 푘 ≤ 훼, that is 푘 ∈ 훼↓. Thus 푥↓ ∩ 푦↓ = 훼↓, which is (2).

Definition 2.1.2: If 퐸 satisfies either of the equivalent conditions of the above theorem then we shall denote by 푥 ∧ 푦 the element 훼 such that 푥↓ ∩ 푦↓ = 훼↓, and call 푥 ∧ 푦 the meet of 푥 and 푦. In this situation we shall say that 퐸 is a meet semilattice.

We can of course develop the duals of the above, obtaining in this way the notion of a join semilattice, which is characterized by the intersection of any two principal up-sets being a principal up-set; the element 훽 such that 푥↑ ∩ 푦↑ = 훽↑ is denoted by 푥 ∨ 푦 and called the join of 푥 and 푦.

Definition 2.1.3: The minimum of a subset 푆 of a partially ordered set (퐸; ≤) is an element of 푆 which is less than or equal to every other element of 푆.

Proposition 2.1.4: Every chain is a meet semilattice in which 푥 ∧ 푦 = min{푥, 푦}.

Proof: Let 퐶 be any chain and let 푥, 푦 ∈ 퐶; then either 푥 ≤ 푦 or 푦 ≤ 푥. Without loss of generality suppose that 푥 ≤ 푦, so that min{푥, 푦} = 푥. If 푧 ∈ 푥↓ then 푧 ≤ 푥 ≤ 푦, so 푧 ∈ 푦↓; hence 푥↓ ⊆ 푥↓ ∩ 푦↓. Also 푥↓ ∩ 푦↓ ⊆ 푥↓, so 푥↓ ∩ 푦↓ = 푥↓. Thus by the definition of meet, 푥 ∧ 푦 = 푥 = min{푥, 푦}.

Example 2.1.5: (ℕ; |) is a meet semilattice in which 푚 ∧ 푛 = hcf{푚, 푛}. Solution: Let 푚, 푛 ∈ ℕ and let ℎ = hcf{푚, 푛}. For any 푧 ∈ ℕ we have 푧 ∈ 푚↓ ∩ 푛↓ if and only if 푧 | 푚 and 푧 | 푛, which holds if and only if 푧 | ℎ, that is 푧 ∈ ℎ↓. Thus 푚↓ ∩ 푛↓ = ℎ↓, so by definition 푚 ∧ 푛 = hcf{푚, 푛}. Meet-semilattices and join-semilattices can also be characterized in a purely algebraic way, which we shall now describe.

Proposition 2.1.6: The meet semilattice ( 퐸 ; ∧ ) is a commutative idempotent semigroup.

Proof: Define a composition on (퐸; ∧) by 푥 ∙ 푦 = 푥 ∧ 푦. Since 푥↓ ∩ (푦↓ ∩ 푧↓) = (푥↓ ∩ 푦↓) ∩ 푧↓, we have 푥 ∧ (푦 ∧ 푧) = (푥 ∧ 푦) ∧ 푧, that is 푥 ∙ (푦 ∙ 푧) = (푥 ∙ 푦) ∙ 푧; therefore ∧ is associative. Also 푥↓ ∩ 푦↓ = 푦↓ ∩ 푥↓ implies 푥 ∧ 푦 = 푦 ∧ 푥, that is 푥 ∙ 푦 = 푦 ∙ 푥, showing that ∧ is commutative. Since 푥↓ ∩ 푥↓ = 푥↓, we have 푥 ∧ 푥 = 푥, that is 푥² = 푥, so ∧ is idempotent. Thus (퐸; ∧) is a commutative idempotent semigroup.

Proposition 2.1.7: The join semilattice (퐸; ∨) is a commutative idempotent semigroup.

Proof: The proof follows by applying the principle of duality to Proposition 2.1.6.

The following result shows that a converse of Propositions 2.1.6 and 2.1.7 holds: every commutative idempotent semigroup gives rise to a meet semilattice and to a join semilattice.

Theorem 2.1.8: Every commutative idempotent semigroup can be ordered in such a way that it forms a meet semilattice.

Proof: Suppose that 퐸 is a commutative idempotent semigroup, the law of composition being denoted by juxtaposition. Define a relation ≤ on 퐸 by 푥 ≤ 푦 if and only if 푥푦 = 푥. We first show that ≤ is an order. Reflexivity: since 퐸 is idempotent we have 푥푥 = 푥² = 푥 for every 푥 ∈ 퐸, so 푥 ≤ 푥. Antisymmetry: if 푥 ≤ 푦 and 푦 ≤ 푥 then, by commutativity, 푥 = 푥푦 = 푦푥 = 푦. Transitivity: if 푥 ≤ 푦 and 푦 ≤ 푧 then 푥 = 푥푦 and 푦 = 푦푧, so 푥 = 푥푦 = 푥(푦푧) = (푥푦)푧 = 푥푧, whence 푥 ≤ 푧. Now if 푥, 푦 ∈ 퐸 then (푥푦)푥 = 푥푥푦 = 푥푦, so 푥푦 ≤ 푥; interchanging the roles of 푥 and 푦, also (푥푦)푦 = 푥푦푦 = 푥푦, so 푥푦 ≤ 푦. Therefore 푥푦 ∈ 푥↓ ∩ 푦↓. Suppose now that 푧 ∈ 푥↓ ∩ 푦↓; then 푧 = 푧푥 and 푧 = 푧푦, so 푧(푥푦) = (푧푥)푦 = 푧푦 = 푧, which gives 푧 ≤ 푥푦. Hence 푥푦 is the top element of 푥↓ ∩ 푦↓, that is 푥↓ ∩ 푦↓ = (푥푦)↓. Thus 퐸 is a meet semilattice in which 푥 ∧ 푦 = 푥푦.
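The construction in Theorem 2.1.8 can be illustrated on a concrete commutative idempotent semigroup. In the sketch below (the example is our choice) the divisors of 12 under gcd induce exactly the divisibility order.

```python
# Finite illustration of Theorem 2.1.8: the divisors of 12 under gcd form a
# commutative idempotent semigroup, and the induced order x ≤ y iff
# gcd(x, y) == x is divisibility, with x ∧ y = gcd(x, y).
from math import gcd

D = [1, 2, 3, 4, 6, 12]                 # divisors of 12, closed under gcd

# commutative idempotent semigroup axioms
assert all(gcd(x, x) == x for x in D)
assert all(gcd(x, y) == gcd(y, x) for x in D for y in D)
assert all(gcd(gcd(x, y), z) == gcd(x, gcd(y, z)) for x in D for y in D for z in D)

leq = lambda x, y: gcd(x, y) == x       # the order of Theorem 2.1.8
# the induced order coincides with divisibility
assert all(leq(x, y) == (y % x == 0) for x in D for y in D)
print("induced order is divisibility; x ∧ y = gcd(x, y)")
```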

Theorem 2.1.9: Every commutative idempotent semigroup can be ordered in such a way that it forms a join semilattice.

Proof: Suppose that 퐸 is a commutative idempotent semigroup, the law of composition being denoted by juxtaposition. Define a relation ≤ on 퐸 by 푥 ≤ 푦 if and only if 푥푦 = 푦. This is the dual of the order of Theorem 2.1.8, so ≤ is an order. If 푥, 푦 ∈ 퐸 then 푥(푥푦) = 푥푥푦 = 푥푦, so 푥 ≤ 푥푦; interchanging the roles of 푥 and 푦, also 푦 ≤ 푥푦. Therefore 푥푦 ∈ 푥↑ ∩ 푦↑. Suppose now that 푧 ∈ 푥↑ ∩ 푦↑; then 푥 ≤ 푧 and 푦 ≤ 푧, so 푥푧 = 푧 and 푦푧 = 푧. Hence (푥푦)푧 = 푥(푦푧) = 푥푧 = 푧, so 푥푦 ≤ 푧. Thus 푥푦 = sup{푥, 푦}, and 퐸 is a join semilattice in which 푥 ∨ 푦 = 푥푦.

Example 2.1.10: If 푃 and 푄 are meet semilattices then the set of isotone mappings from 푃 to 푄 forms a meet semilattice with respect to the order defined by 푓 ≤ 푔 if and only if 푓(푥) ≤ 푔(푥) for all 푥 ∈ 푃.

Solution: Let Map(푃, 푄) denote the set of isotone mappings from 푃 to 푄, ordered as above. Given 푓, 푔 ∈ Map(푃, 푄), define ℎ: 푃 → 푄 by ℎ(푥) = 푓(푥) ∧ 푔(푥). If 푥 ≤ 푦 in 푃 then 푓(푥) ≤ 푓(푦) and 푔(푥) ≤ 푔(푦), so ℎ(푥) = 푓(푥) ∧ 푔(푥) ≤ 푓(푦) ∧ 푔(푦) = ℎ(푦); thus ℎ is isotone and ℎ ∈ Map(푃, 푄). Clearly ℎ ≤ 푓 and ℎ ≤ 푔. Moreover, if 푙 ∈ Map(푃, 푄) satisfies 푙 ≤ 푓 and 푙 ≤ 푔, then for every 푥 ∈ 푃 we have 푙(푥) ≤ 푓(푥) and 푙(푥) ≤ 푔(푥), whence 푙(푥) ≤ 푓(푥) ∧ 푔(푥) = ℎ(푥); so 푙 ≤ ℎ. Thus ℎ = inf{푓, 푔}, and Map(푃, 푄) is a meet semilattice in which (푓 ∧ 푔)(푥) = 푓(푥) ∧ 푔(푥).

Definition 2.1.11: If 퐸 is an ordered set and 퐹 is a subset of 퐸 then 푥 ∈ 퐸 is said to be lower bound of 퐹 if for all 푦 ∈ 퐹, 푥 ≤ 푦 and an upper bound of 퐹 if for all 푦 ∈ 퐹 , 푦 ≤ 푥. In what follows we shall denote the set of lower bounds of 퐹 in 퐸 by 퐹↓, and the set of upper bounds of 퐹 in 퐸 by 퐹↑.

Remark 2.1.12:

(i) We note here that the notation 퐹↓ is used to denote the down-set generated by 퐹, namely {푥 ∈ 퐸 | there exists 푎 ∈ 퐹 with 푥 ≤ 푎} and 퐹↑ to denote the upset generated by 퐹. In particular we have {푥}↓ = 푥↓and {푥}↑ = 푥↑.

(ii) Note that, since 퐹↓ and 퐹↑ denote the sets of lower bounds and upper bounds respectively, these sets may be empty. For the whole set 퐸: if 퐸 has a top element 1 then, since 푥 ≤ 1 for every 푥 ∈ 퐸, we have 퐸↑ = {1}; otherwise 퐸↑ = ∅. Similarly, if 퐸 has a bottom element 0 then 퐸↓ = {0}; otherwise 퐸↓ = ∅.

(iii) Note that if 퐹 = ∅ then every 푥 ∈ 퐸 vacuously satisfies 푦 ≤ 푥 for every 푦 ∈ 퐹. Thus ∅↑ = 퐸 and similarly ∅↓ = 퐸.

Definition 2.1.13: If 퐸 is an ordered set and 퐹 is a subset of 퐸 then by the infimum or greatest lower bound of 퐹 we mean the top element, when such exists, of the set 퐹↓ of lower bounds of 퐹. We denote this by inf퐸 퐹, or simply inf 퐹 if there is no confusion.

Since ∅↓ = 퐸, we see that inf퐸 ∅ exists if and only if 퐸 has a top element 1, in which case inf퐸 ∅ = 1. It is immediate from what has gone before that a meet semilattice can be described as an ordered set in which every pair of elements 푥, 푦 has a greatest lower bound; here we have inf{푥, 푦} = 푥 ∧ 푦. A simple inductive argument shows that for every finite subset {푥1, 푥2, … , 푥푛} of a meet semilattice, inf{푥1, 푥2, … , 푥푛} exists and equals 푥1 ∧ 푥2 ∧ … ∧ 푥푛.

Definition 2.1.14: Let 퐸 be an ordered set and 퐹 a subset of 퐸. An element 푚 ∈ 퐸 is called the least upper bound of 퐹, or sup 퐹, if 푚 is an upper bound of 퐹 and 푚 ≤ 푛 whenever 푛 is an upper bound of 퐹. On many posets it is possible to define binary operations by using the greatest lower bound and the least upper bound of two elements.

Definition 2.1.15: A lattice is an ordered set (퐸, ≤) which, with respect to its order, is both a meet semilattice and a join semilattice. Thus a lattice is an ordered set in which every pair of elements, and hence every finite subset, has an infimum and a supremum. We often denote a lattice by (퐸; ∧, ∨, ≤).

Remarks 2.1.16:

(1) Let 퐸 be an ordered set. If 푥, 푦 ∈ 퐸 and 푥 ≤ 푦 then (푥, 푦)푢 = 푦↑ and (푥, 푦)푙 = 푥↓ (where (푥, 푦)푢 denotes the set of upper bounds of {푥, 푦} and (푥, 푦)푙 the set of lower bounds of {푥, 푦}). Since the least element of 푦↑ is 푦 and the greatest element of 푥↓ is 푥, we have 푥 ∨ 푦 = 푦 and 푥 ∧ 푦 = 푥 whenever 푥 ≤ 푦. In particular, since ≤ is reflexive, we have 푥 ∨ 푥 = 푥 and 푥 ∧ 푥 = 푥. (2) In an ordered set 퐸, the lub of {푥, 푦} may fail to exist for two different reasons: (a) because 푥 and 푦 have no common upper bound; (b) because they have upper bounds but no least upper bound. For example, consider the following Hasse diagrams:

Here (푎, 푏)푢 = ∅ and hence 푎 ∨ 푏 does not exist.

Here (푎, 푏)푢 = {푐, 푑} and thus 푎 ∨ 푏 does not exist, as (푎, 푏)푢 has no least element. (3) Let 퐿 be a lattice; then for all 푎, 푏, 푐, 푑 ∈ 퐿: (i) 푎 ≤ 푏 implies 푎 ∨ 푐 ≤ 푏 ∨ 푐 and 푎 ∧ 푐 ≤ 푏 ∧ 푐; (ii) 푎 ≤ 푏 and 푐 ≤ 푑 imply 푎 ∨ 푐 ≤ 푏 ∨ 푑 and 푎 ∧ 푐 ≤ 푏 ∧ 푑. (4) Let 퐿 be a lattice, let 푎, 푏, 푐 ∈ 퐿 and assume 푏 ≤ 푎 ≤ 푏 ∨ 푐. Since 푐 ≤ 푏 ∨ 푐 we have (푏 ∨ 푐) ∨ 푐 = 푏 ∨ 푐 (by (1)). Thus 푏 ∨ 푐 ≤ 푎 ∨ 푐 ≤ (푏 ∨ 푐) ∨ 푐 = 푏 ∨ 푐, whence 푎 ∨ 푐 = 푏 ∨ 푐.

Lemma 2.1.17: (Connecting lemma) Let 퐿 be a lattice and 푎, 푏 ∈ 퐿. Then the following are equivalent: (i) 푎 ≤ 푏; (ii) 푎 ∨ 푏 = 푏; (iii) 푎 ∧ 푏 = 푎.

Proof: We have already shown in the above remark that (i) implies both (ii) and (iii). Now assume (ii) holds; then 푏 = 푎 ∨ 푏 is an upper bound of {푎, 푏}, so in particular 푎 ≤ 푏 and (i) holds. Likewise, we can show that (iii) implies (i).

Theorem 2.1.18: A set 퐸 can be given the structure of lattice if and only if it can be endowed with two laws of composition (푥, 푦) ↦ 푥 ⋒ 푦 and (푥, 푦) ↦ 푥 ⋓ 푦 such that

(1) (퐸; ⋒) and (퐸; ⋓) are commutative semigroups; (2) the following absorption laws hold: for all 푥, 푦 ∈ 퐸, 푥 ⋒ (푥 ⋓ 푦) = 푥 = 푥 ⋓ (푥 ⋒ 푦).

Proof: Suppose that 퐸 is a lattice. Then by definition (퐸; ≤) is both a meet semilattice and a join semilattice, so it has two laws of composition satisfying (1), namely (푥, 푦) ↦ 푥 ∧ 푦 and (푥, 푦) ↦ 푥 ∨ 푦. To show that (2) holds: 푥 ≤ sup{푥, 푦} = 푥 ∨ 푦, so by the connecting lemma 푥 ∧ (푥 ∨ 푦) = inf{푥, 푥 ∨ 푦} = 푥. Also 푥 ≥ inf{푥, 푦} = 푥 ∧ 푦, so by the connecting lemma 푥 ∨ (푥 ∧ 푦) = 푥. Thus 푥 ∧ (푥 ∨ 푦) = 푥 = 푥 ∨ (푥 ∧ 푦), which proves (2). Conversely, suppose that 퐸 has two laws of composition ⋒ and ⋓ that satisfy (1) and (2). Using (2) twice we have 푥 ⋓ 푥 = 푥 ⋓ [푥 ⋒ (푥 ⋓ 푥)] = 푥, and similarly 푥 ⋒ 푥 = 푥; thus both operations are idempotent. Hence (퐸; ⋒) and (퐸; ⋓) are commutative idempotent semigroups, and by Theorems 2.1.8 and 2.1.9 each induces a semilattice order. In order to show that (퐸; ⋒, ⋓) is a lattice with ⋒ as ∧ and ⋓ as ∨, we must show that the two orders coincide; in other words, that 푥 ⋒ 푦 = 푥 is equivalent to 푥 ⋓ 푦 = 푦. Now if 푥 ⋓ 푦 = 푦 then by the absorption law 푥 = 푥 ⋒ (푥 ⋓ 푦) = 푥 ⋒ 푦, so 푥 ⋒ 푦 = 푥; and if 푥 ⋒ 푦 = 푥 then by the absorption law 푦 = (푥 ⋒ 푦) ⋓ 푦 = 푥 ⋓ 푦. Thus we see that 퐸 is a lattice in which 푥 ≤ 푦 is described equivalently by 푥 ⋒ 푦 = 푥 or by 푥 ⋓ 푦 = 푦.
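As a concrete check of the hypotheses of Theorem 2.1.18, the sketch below verifies commutativity, associativity and the absorption laws for the lattice (ℙ(E); ∩, ∪) on a small illustrative ground set.

```python
# Finite check of the two conditions of Theorem 2.1.18 for the concrete
# lattice (P(E); ∩, ∪), with E = {1, 2, 3} an illustrative choice.
from itertools import combinations

E = [1, 2, 3]
P = [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]

for x in P:
    for y in P:
        assert x & y == y & x and x | y == y | x        # (1) commutativity
        assert x & (x | y) == x == x | (x & y)          # (2) absorption laws
        for z in P:
            assert (x & y) & z == x & (y & z)           # (1) associativity of ∩
            assert (x | y) | z == x | (y | z)           # (1) associativity of ∪
print("(P(E); ∩, ∪) satisfies the hypotheses of Theorem 2.1.18")
```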

Lemma 2.1.17: Every chain is a lattice.

Proof: Let (푃; ≤) be a chain, fix 푎, 푏 ∈ 푃 and without loss of generality assume that 푎 ≤ 푏. By reflexivity of ≤ we have 푎 ≤ 푎, so 푎 is a lower bound of the set {푎, 푏}; and if 푐 ∈ 푃 is any other lower bound of {푎, 푏} then 푐 ≤ 푎. It follows that 푎 = inf{푎, 푏}. Now we show that 푏 = sup{푎, 푏}. By reflexivity 푏 ≤ 푏, and by our assumption 푎 ≤ 푏; hence 푏 is an upper bound of {푎, 푏}. If 푘 ∈ 푃 is any other upper bound of {푎, 푏} then in particular 푏 ≤ 푘, and therefore sup{푎, 푏} = 푏. This shows that (푃; ≤) is a lattice.

Definition 2.1.18: A lattice 퐿 is said to be a bounded lattice if it has both a top element, denoted by 1, and a bottom element, denoted by 0.

Example 2.1.19: For every set 퐸, ( ℙ (퐸 );∩,∪, ⊆) is a bounded lattice.

Solution: For any two elements 푆 and 푇 in ℙ(퐸) we have 푆 ⊆ 푆 ∪ 푇 and 푇 ⊆ 푆 ∪ 푇; thus 푆 ∪ 푇 is an upper bound of 푆 and 푇. If 푅 is any other upper bound then 푆 ⊆ 푅 and 푇 ⊆ 푅, which gives 푆 ∪ 푇 ⊆ 푅; so sup{푆, 푇} = 푆 ∪ 푇. Also 푆 ∩ 푇 ⊆ 푆 and 푆 ∩ 푇 ⊆ 푇, so 푆 ∩ 푇 is a lower bound of 푆 and 푇. If 퐿 is any other lower bound of 푆 and 푇 then 퐿 ⊆ 푆 and 퐿 ⊆ 푇, which gives 퐿 ⊆ 푆 ∩ 푇; so inf{푆, 푇} = 푆 ∩ 푇. Since ∅ ⊆ 푆 for every 푆 ∈ ℙ(퐸), ∅ is the bottom element of ℙ(퐸); and since 푆 ⊆ 퐸 for every 푆 ∈ ℙ(퐸), 퐸 is the top element. Therefore ℙ(퐸) is a bounded lattice.

Example 2.1.20: For every infinite set 퐸, let ℙ푓(퐸) denote the set of finite subsets of 퐸. Then ℙ푓(퐸) is a lattice with no top element.

Solution: Since 퐸 is an infinite set, 퐸 ∉ ℙ푓(퐸). Let 퐴, 퐵 ∈ ℙ푓(퐸); then 퐴 ⊆ 퐴 ∪ 퐵 and 퐵 ⊆ 퐴 ∪ 퐵, so 퐴 ∪ 퐵 is an upper bound of 퐴 and 퐵. If 푅 is any other upper bound then 퐴 ⊆ 푅 and 퐵 ⊆ 푅, which gives 퐴 ∪ 퐵 ⊆ 푅; so sup{퐴, 퐵} = 퐴 ∪ 퐵. Similarly we can show that inf{퐴, 퐵} = 퐴 ∩ 퐵. To show that ℙ푓(퐸) has no top element, suppose to the contrary that 푍 is the top element of ℙ푓(퐸). Since 퐸 is infinite and 푍 is finite, there exists 푥 ∈ 퐸 ∖ 푍. Then 푋 = 푍 ∪ {푥} ∈ ℙ푓(퐸) but 푋 ⊈ 푍, contradicting our hypothesis. Thus ℙ푓(퐸) has no top element.

Example 2.1.21: (ℕ ∪ {0}; ∣) is a bounded lattice. Solution: Let 푚, 푛 ∈ ℕ ∪ {0}. Then 푚 ∣ lcm(푚, 푛) and 푛 ∣ lcm(푚, 푛), so lcm(푚, 푛) is an upper bound of {푚, 푛}. If 푘 is any upper bound of {푚, 푛} then 푚 ∣ 푘 and 푛 ∣ 푘, which implies lcm(푚, 푛) ∣ 푘. Therefore by definition sup{푚, 푛} = lcm(푚, 푛). In a similar manner we can show that inf{푚, 푛} = hcf(푚, 푛). Also, since 1 divides every natural number, 1 is the bottom element; and since every natural number divides 0, 0 is the top element. Thus ℕ ∪ {0} is bounded. This proves that (ℕ ∪ {0}; ∣) is a bounded lattice.
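A finite spot-check of this example (the sample of numbers is an illustrative choice; the full lattice is ℕ ∪ {0} under divisibility):

```python
# Spot-check of Example 2.1.21: sup = lcm, inf = hcf (gcd), 1 divides
# everything (bottom element), everything divides 0 (top element).
from math import gcd

divides = lambda a, b: b % a == 0 if a != 0 else b == 0   # a | b, with a | 0 for all a
lcm = lambda a, b: a * b // gcd(a, b) if a and b else 0   # 0 acts as the top

sample = [1, 2, 3, 4, 6, 9, 12, 0]
for m in sample:
    for n in sample:
        if m and n:
            assert divides(m, lcm(m, n)) and divides(n, lcm(m, n))  # lcm is an upper bound
            assert divides(gcd(m, n), m) and divides(gcd(m, n), n)  # gcd is a lower bound
        assert divides(1, n)    # 1 is the bottom element
        assert divides(n, 0)    # every n divides 0, so 0 is the top element
print("sup = lcm, inf = hcf; bottom 1, top 0")
```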

Example 2.1.22: If 푉 is a vector space and Sub 푉 denotes the set of subspaces of 푉, then (Sub 푉; ⊆) forms a lattice with inf{퐴, 퐵} = 퐴 ∩ 퐵 and sup{퐴, 퐵} = 퐴 + 퐵 = {푎 + 푏 | 푎 ∈ 퐴 and 푏 ∈ 퐵}.

Solution: Suppose 퐴, 퐵 ∈ Sub 푉; then 퐴 ∩ 퐵 ∈ Sub 푉. Since 퐴 ∩ 퐵 ⊆ 퐴 and 퐴 ∩ 퐵 ⊆ 퐵, the subspace 퐴 ∩ 퐵 is a lower bound of 퐴 and 퐵. If 푀 is any other lower bound of 퐴 and 퐵 then 푀 ⊆ 퐴 and 푀 ⊆ 퐵, which implies 푀 ⊆ 퐴 ∩ 퐵. Therefore inf{퐴, 퐵} = 퐴 ∩ 퐵. Also, since 퐴 ⊆ 퐴 + 퐵 and 퐵 ⊆ 퐴 + 퐵, 퐴 + 퐵 is an upper bound of 퐴 and 퐵. If 푀 is any other upper bound of 퐴 and 퐵 then 퐴 ⊆ 푀 and 퐵 ⊆ 푀, and since 푀 is a subspace this gives 퐴 + 퐵 = {푎 + 푏 ∣ 푎 ∈ 퐴, 푏 ∈ 퐵} ⊆ 푀. Thus sup{퐴, 퐵} = 퐴 + 퐵. Hence (Sub 푉; ∩, +, ⊆) is a lattice. Sub 푉 is also bounded, with bottom element {0}, the zero subspace, and top element 푉.

Example 2.1.23: If 퐿, 푀 are lattices then the set of isotone mappings 푓 ∶ 퐿 → 푀 forms a lattice in which 푓 ∧ 푔 and 푓 ∨ 푔 are given by the prescriptions (푓 ∧ 푔)(푥) = 푓(푥) ∧ 푔(푥) and (푓 ∨ 푔)(푥) = 푓(푥) ∨ 푔(푥).

Solution: Let 푆 = {푓 ∶ 퐿 → 푀 ∣ for all 푥, 푦 ∈ 퐿, 푥 ≤ 푦 implies 푓(푥) ≤ 푓(푦)}. Given 푓, 푔 ∈ 푆, define ℎ: 퐿 → 푀 by ℎ(푥) = 푓(푥) ∧ 푔(푥). As in Example 2.1.10, ℎ is isotone, ℎ ≤ 푓, ℎ ≤ 푔, and any 푙 ∈ 푆 with 푙 ≤ 푓 and 푙 ≤ 푔 satisfies 푙(푥) ≤ 푓(푥) ∧ 푔(푥) = ℎ(푥) for all 푥 ∈ 퐿, so 푙 ≤ ℎ. Hence 푓 ∧ 푔 exists and (푓 ∧ 푔)(푥) = 푓(푥) ∧ 푔(푥). Likewise we can show that (푓 ∨ 푔)(푥) = 푓(푥) ∨ 푔(푥). Thus 푆 is a lattice.

Proposition 2.1.24: The set 푁(퐺) of normal subgroups of a group 퐺 forms a lattice in which sup{퐻, 퐾} = 퐻퐾 = {ℎ푘 ∣ ℎ ∈ 퐻 and 푘 ∈ 퐾} and inf{퐻, 퐾} = 퐻 ∩ 퐾, where 퐻, 퐾 ∈ 푁(퐺).

Proof: Let 퐻, 퐾 ∈ 푁(퐺); then clearly 퐻 ∩ 퐾 ∈ 푁(퐺). Now 퐻 ∩ 퐾 ⊆ 퐻 and 퐻 ∩ 퐾 ⊆ 퐾, so 퐻 ∩ 퐾 is a lower bound of 퐻 and 퐾. If 푊 is any other lower bound of 퐻 and 퐾 then 푊 ⊆ 퐻 and 푊 ⊆ 퐾, which implies 푊 ⊆ 퐻 ∩ 퐾. Thus inf{퐻, 퐾} = 퐻 ∩ 퐾. To prove that the supremum also exists, we claim that 퐻퐾 = {ℎ푘 ∣ ℎ ∈ 퐻 and 푘 ∈ 퐾} is a normal subgroup of 퐺. Since 퐾 is normal we have 퐻퐾 = 퐾퐻, from which it follows that 퐻퐾 is a subgroup of 퐺. Moreover, if 푔 ∈ 퐺 and 푥 ∈ 퐻퐾, then 푥 = ℎ푘 for some ℎ ∈ 퐻 and 푘 ∈ 퐾, and 푔푥푔−1 = 푔(ℎ푘)푔−1 = (푔ℎ푔−1)(푔푘푔−1) ∈ 퐻퐾. This proves that 퐻퐾 ∈ 푁(퐺). Also 퐻, 퐾 ⊆ 퐻퐾, so 퐻퐾 is an upper bound of 퐻 and 퐾; and if 퐿 ∈ 푁(퐺) is any upper bound of 퐻 and 퐾 then 퐻 ⊆ 퐿 and 퐾 ⊆ 퐿, which implies 퐻퐾 ⊆ 퐿. Thus 퐻퐾 is the smallest normal subgroup containing both 퐻 and 퐾, so sup{퐻, 퐾} = 퐻퐾 = {ℎ푘 ∣ ℎ ∈ 퐻 and 푘 ∈ 퐾}. Thus 푁(퐺) forms a lattice.

Example 2.1.25: We draw the Hasse diagram of the subgroup lattice of the alternating group 𝒜4.

Proof: 𝒜4 is the alternating group on 4 letters, that is, the group of all even permutations of {1, 2, 3, 4}: 𝒜4 = {(1), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3), (1 2 3), (1 3 2), (1 2 4), (1 4 2), (1 3 4), (1 4 3), (2 3 4), (2 4 3)}, which totals 12 elements. By Lagrange's theorem the subgroups of 𝒜4 can only have order 1, 2, 3, 4, 6 or 12. The subgroups of order 1 and order 12 are the trivial ones. The subgroups of order 2 are {1, (1 2)(3 4)}, {1, (1 3)(2 4)}, {1, (1 4)(2 3)}. The subgroups of order 3 are {1, (2 3 4), (2 4 3)}, {1, (1 3 4), (1 4 3)}, {1, (1 2 4), (1 4 2)} and {1, (1 2 3), (1 3 2)}. The only subgroup of order 4 is {1, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)}, and 𝒜4 has no subgroup of order 6. Now let Sub 𝒜4 denote the set of subgroups of 𝒜4; then Sub 𝒜4 consists of {1}, the three subgroups of order 2, the four subgroups of order 3, the subgroup of order 4, and 𝒜4 itself. Thus the subgroup lattice of the alternating group 𝒜4 is as follows:

(Hasse diagram with top element 𝒜4 and bottom element 1.)

2.2 Down-set lattices

If 퐸 is an ordered set and 퐴, 퐵 are down sets of 퐸 then clearly so also are 퐴 ∩ 퐵 and 퐴 ∪ 퐵. Also 퐴 ∩ 퐵 is the largest down-set contained in both 퐴 and 퐵, therefore 푖푛푓{퐴, 퐵} = 퐴 ∩ 퐵 and similarly 푠푢푝 {퐴, 퐵} = 퐴 ∪ 퐵; thus the set of down-sets of 퐸 forms a lattice. We shall denote this lattice by 풪(퐸 ). We recall from the definition of a down-set that we include the empty subset as a down set. Thus the lattice 풪(퐸 ) is bounded with top element 퐸 and bottom element ∅.
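Down-sets of a small finite ordered set can be enumerated by brute force; the following sketch (the poset is an illustrative choice) confirms that 풪(퐸) is closed under ∩ and ∪, contains ∅ and 퐸, and respects the counting bounds proved later in this section.

```python
# Brute-force enumeration of the down-set lattice O(E) for a small ordered
# set (an illustrative choice): a < c, b < c, b < d.
from itertools import combinations

E = ['a', 'b', 'c', 'd']
covers = {('a', 'c'), ('b', 'c'), ('b', 'd')}

# reflexive order; the poset has height 1, so no extra transitivity is needed
below = lambda x, y: x == y or (x, y) in covers

def is_down_set(S):
    # S is a down-set iff y ∈ S and x ≤ y imply x ∈ S
    return all(x in S for y in S for x in E if below(x, y))

subsets = [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]
O = [S for S in subsets if is_down_set(S)]

assert frozenset() in O and frozenset(E) in O
assert all(A & B in O and A | B in O for A in O for B in O)  # a sublattice of P(E)
assert len(E) + 1 <= len(O) <= 2 ** len(E)                   # n + 1 ≤ |O(E)| ≤ 2^n
print(f"|O(E)| = {len(O)}")   # → |O(E)| = 8
```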

Example 2.2.1: Consider the set with the following Hasse diagram as shown below

Here we have 푎↓ = {푎}, 푏↓ = {푏}, 푐↓ = {푎, 푐}, 푑↓ = {푎, 푑}, 푒↓ = {푏, 푒}. Therefore 풪(퐸) = {∅, {푎}, {푏}, {푎, 푐}, {푎, 푏, 푑}, {푏, 푒}, 퐸}. Thus the Hasse diagram of 풪(퐸) is given as:

Down-set lattices will be of considerable interest to us later. For the moment we shall consider how to compute the cardinality of 풪(퐸) when the ordered set 퐸 is finite. Upper and lower bounds for this are provided by the following result:

Theorem 2.2.2: If 퐸 is a finite ordered set with |퐸| = 푛 then 푛 + 1 ≤ |풪(퐸)| ≤ 2ⁿ.

Proof: Since 풪(퐸) ⊆ ℙ(퐸) we have |풪(퐸)| ≤ |ℙ(퐸)| = 2ⁿ. For the lower bound, note that for each 푥 ∈ 퐸 the principal down-set 푥↓ belongs to 풪(퐸), and these are pairwise distinct: if 푥↓ = 푦↓ then 푥 ∈ 푦↓ and 푦 ∈ 푥↓, so 푥 ≤ 푦 and 푦 ≤ 푥, whence 푥 = 푦. Together with ∅, which is a down-set distinct from every 푥↓, this gives at least 푛 + 1 down-sets. Thus 푛 + 1 ≤ |풪(퐸)| ≤ 2ⁿ. Both bounds are attained: if 퐸 is an 푛-element chain 푥1 < 푥2 < ⋯ < 푥푛 then the down-sets are precisely ∅, 푥1↓, … , 푥푛↓, so |풪(퐸)| = 푛 + 1; and if 퐸 is an 푛-element anti-chain then every subset of 퐸 is a down-set, so 풪(퐸) = ℙ(퐸) and |풪(퐸)| = 2ⁿ.

In certain cases |풪(퐸)| can be calculated by using an ingenious algorithm that we shall now describe. For this purpose, we shall denote by 퐸 ∖ {푥} the ordered set obtained from 퐸 by deleting the element 푥, together with the comparabilities resulting from transitivity through 푥.

Example 2.2.3: Consider the lattice 퐿 given by the following Hasse diagram.

Then 퐿 ∖ {푥} can be obtained by deleting the element 푥 and the related comparabilities. Thus the Hasse diagram of 퐿 ∖ {푥} is:

Definition 2.2.4: We shall also use the notation 푥↕ to denote the cone through 푥, namely the set of elements that are comparable to 푥; formally, 푥↕ = 푥↓ ∪ 푥↑ = {푦 ∈ 퐸 ∶ 푦 ∦ 푥}.

Example 2.2.5: If 퐿 is as in the above example then 퐿 ∖ 푥↕ is a singleton: 푥↕ = {푦 ∈ 퐿 ∶ 푦 ∦ 푥} = {푏, 푐, 푑, 푒, 푥} and 퐿 = {푎, 푏, 푐, 푑, 푒, 푥}, thus 퐿 ∖ 푥↕ = {푎}, which is a singleton.

Definition 2.2.6: We say that 푥 ∈ 퐸 is maximal if there is no 푦 ∈ 퐸 such that 푦 > 푥.

Definition 2.2.7: An element 푥 ∈ 퐸 is called a minimal element of 퐸 if there exists no element 푦 ∈ 퐸 such that 푦 < 푥. Clearly a top (bottom) element can be characterized as a unique maximal (minimal) element.

We now give an alternative formula for |풪(퐸)| via the concepts of cone, maximal and minimal elements described above.

Theorem 2.2.8: (Berman–Köhler) If 퐸 is a finite ordered set then, for any 푥 ∈ 퐸, |풪(퐸)| = |풪(퐸 ∖ 푥)| + |풪(퐸 ∖ 푥↕)|.

Proof: Let 푋 be a non-empty down-set of 퐸 and let 푆 = {푥1, … , 푥푘} be the set of maximal elements of 푋. Then 푆 is an anti-chain (the 푥푖 are pairwise incomparable) and, since 퐸 is finite, 푋 = 푆↓; so 푆 determines 푋 uniquely. Conversely, if 퐹 is any anti-chain in 퐸 (that is, 푥 ∥ 푦 for all distinct 푥, 푦 ∈ 퐹) then 퐹 is the set of maximal elements of the down-set 퐹↓. Thus the non-empty down-sets of 퐸 correspond bijectively to the non-empty anti-chains in 퐸. Counting ∅ as an anti-chain, we thus see that |풪(퐸)| is the number of anti-chains in 퐸. For any given element 푥 of 퐸, this can be expressed as the number of anti-chains that contain 푥 plus the number of those that do not contain 푥. Now if an anti-chain 퐴 contains 푥 then it contains no other element of the cone 푥↕; for if 퐴 contained an element 푦 ≠ 푥 of 푥↕ then 푥 ≤ 푦 or 푦 ≤ 푥, which is a contradiction since 퐴 is an anti-chain. Thus 퐴 ∖ {푥} is an anti-chain in 퐸 ∖ 푥↕ = {푦 ∈ 퐸 ∶ 푦 ∥ 푥} (note that 푦 ∉ 푥↕ means 푦 ∉ 푥↓ and 푦 ∉ 푥↑, that is 푦 ≰ 푥 and 푥 ≰ 푦), and conversely every anti-chain in 퐸 ∖ 푥↕ together with 푥 forms an anti-chain of 퐸 containing 푥. Hence the number of anti-chains that contain 푥 is precisely |풪(퐸 ∖ 푥↕)|. Likewise, the anti-chains that do not contain 푥 are exactly the anti-chains of 퐸 ∖ 푥, so their number is precisely |풪(퐸 ∖ 푥)|. Thus the result follows.
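The Berman–Köhler formula yields a simple recursion for |풪(퐸)|: delete any element 푥 and recurse on 퐸 ∖ 푥 and on the elements incomparable to 푥. The Python sketch below is our own illustration (the poset encoding via a `leq` predicate is an assumption); the recursion is checked against brute-force enumeration on the divisors of 12:

```python
from itertools import combinations

def count_down_sets_brute(elements, leq):
    """Count down-sets directly by testing every subset."""
    elems = list(elements)
    return sum(
        all(y in set(subset) for x in subset for y in elems if leq(y, x))
        for r in range(len(elems) + 1)
        for subset in combinations(elems, r)
    )

def count_down_sets_bk(elements, leq):
    """Berman-Kohler recursion: |O(E)| = |O(E minus x)| + |O(E minus the cone of x)|."""
    elems = list(elements)
    if not elems:
        return 1  # the empty poset has exactly one down-set, the empty set
    x = elems[0]
    rest = [e for e in elems if e != x]
    # E minus the cone of x: the elements incomparable to x
    incomparable = [e for e in rest if not leq(e, x) and not leq(x, e)]
    return count_down_sets_bk(rest, leq) + count_down_sets_bk(incomparable, leq)

divides = lambda a, b: b % a == 0
E = [1, 2, 3, 4, 6, 12]  # divisors of 12 under divisibility
print(count_down_sets_bk(E, divides), count_down_sets_brute(E, divides))  # 10 10
```

The recursion avoids enumerating all 2⁶ subsets, and for this poset both methods agree on 10 down-sets.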

Example 2.2.9: We now draw the Hasse diagram of the lattice of down-sets of each of the following ordered sets;

Solution: Let 퐸 be the ordered set given by the Hasse diagram (1).

Then 풪(퐸) = {∅, {푎}, {푎, 푏}, {푎, 푏, 푐}}; thus the Hasse diagram of the down-sets of 퐸 is given as;

Let 퐹 be the ordered set given by Hasse diagram (2). Then 풪(퐹) = {∅, {푎}, {푎, 푏}, {푎, 푏, 푐}, {푎, 푏, 푐, 푑, 푒}, {푎, 푑}}; thus the Hasse diagram of the down-sets of 퐹 is;

Similarly, the Hasse diagram of the down-sets of (3) is;

Hasse diagram of down sets of (4) is;

Example 2.2.10: Let (푃1; ≤1) and (푃2; ≤2) be the ordered sets represented by diagrams (a) and (b) respectively. Draw the Hasse diagram of the down-sets of 푃1 ∪ 푃2.

Define an order on 푃1 ∪ 푃2 by: 푥 ≤ 푦 if and only if either 푥, 푦 ∈ 푃1 and 푥 ≤1 푦, or 푥, 푦 ∈ 푃2 and 푥 ≤2 푦.

Note that 푃1 ∪ 푃2 = {푎, 푏, 푥, 푦, 푧}. We have 푎↓ = {푎}, 푏↓ = {푎, 푏}, 푥↓ = {푥}, 푦↓ = {푥, 푦}, 푧↓ = {푥, 푧}. The down-set lattice 풪(푃1 ∪ 푃2) is {∅, {푥}, {푎}, {푎, 푏}, {푥, 푦}, {푥, 푧}}. Thus the required Hasse diagram is;

2.3 Sublattices

As we saw in the previous section, important substructures of an ordered set are the down-sets and the principal down-sets. In this section we consider another type of substructure of semilattices.

Definition 2.3.1: By a ∧-subsemilattice of a meet semilattice 퐿 we mean a non-empty subset 퐸 of 퐿 that is closed under the meet operation, in the sense that if 푥, 푦 ∈ 퐸 then 푥 ∧ 푦 ∈ 퐸. A ∨-subsemilattice of a join semilattice is defined dually.

Definition 2.3.2: By a sublattice of a lattice we mean a subset that is both a meet-subsemilattice and a join-subsemilattice.

Example 2.3.3: If 푉 is a vector space and 푆푢푏 푉 denotes the set of subspaces of 푉, then (푆푢푏 푉; ⊆) is easily seen to be an ordered set under inclusion. Suppose 퐴, 퐵 ∈ 푆푢푏 푉. We have 퐴 ∩ 퐵 ⊆ 퐴 and 퐴 ∩ 퐵 ⊆ 퐵, so 퐴 ∩ 퐵 is a lower bound of {퐴, 퐵}. Let 푊 be any subspace of 푉 such that 푊 ⊆ 퐴 and 푊 ⊆ 퐵; then 푊 ⊆ 퐴 ∩ 퐵. Therefore 퐴 ∩ 퐵 is the biggest subspace contained in both 퐴 and 퐵, so 퐴 ∩ 퐵 = 푖푛푓{퐴, 퐵}. Thus 푆푢푏 푉 is a meet-subsemilattice of the lattice ℙ(푉).

Example 2.3.4: For every ordered set 퐸, the lattice 풪(퐸 ) of down sets of 퐸 is the sublattice of the lattice ℙ(퐸 ), since 풪(퐸 ) ⊆ ℙ(퐸 ) and for any 퐴, 퐵 ∈ 풪 (퐸 ); clearly 퐴 ∩ 퐵 ⊆ 퐸. Let 푥 ∈ 퐴 ∩ 퐵 and 푦 ∈ 퐸 with the property that 푦 ≤ 푥 then 푥 ∈ 퐴 and 푥 ∈ 퐵 and 푦 ∈ 퐸 with 푦 ≤ 푥. Since 퐴 and 퐵 are down sets, by definition of down-set 푦 ∈ 퐴 and 푦 ∈ 퐵; this gives 푦 ∈ 퐴 ∩ 퐵. Thus 퐴 ∩ 퐵 ∈ 풪 (E). Likewise we can show that 퐴 ∪ 퐵 ∈ 풪(퐸 ). Thus 풪(퐸 ) is a sublattice of ℙ(퐸 ).

Particularly important sublattices of a lattice are those defined as:

Definition 2.3.5: By an ideal of a lattice 퐿 we shall mean a sublattice 퐽 of 퐿 that is also a down-set. Dually, by a filter of 퐿 we mean a sublattice that is also an up-set. Next we prove that the set of ideals of a lattice is again a lattice.

Theorem 2.3.6: If 퐿 is a lattice then, ordered by set inclusion, the set 픗(퐿) of ideals of 퐿 forms a lattice in which the lattice operations are given by

푖푛푓{퐽, 퐾} = 퐽 ∩ 퐾; 푠푢푝{퐽, 퐾} = {푥 ∈ 퐿 ∶ 푥 ≤ 푗 ∨ 푘 for some 푗 ∈ 퐽 and 푘 ∈ 퐾}.

Proof: Let 퐽, 퐾 be ideals of 퐿 and let 푎, 푏 ∈ 퐽 ∩ 퐾; then 푎, 푏 ∈ 퐽 and 푎, 푏 ∈ 퐾. Since 퐽 and 퐾 are ideals of 퐿 we have 푎 ∧ 푏, 푎 ∨ 푏 ∈ 퐽 and 푎 ∧ 푏, 푎 ∨ 푏 ∈ 퐾, so 푎 ∧ 푏 ∈ 퐽 ∩ 퐾 and 푎 ∨ 푏 ∈ 퐽 ∩ 퐾. Also, if 푎 ∈ 퐿 and 푏 ∈ 퐽 ∩ 퐾 with 푎 ≤ 푏, then 푎 ∈ 퐽 ∩ 퐾. Thus 퐽 ∩ 퐾 is also an ideal of 퐿, and if 푊 is any other ideal such that 푊 ⊆ 퐽 and 푊 ⊆ 퐾 then 푊 ⊆ 퐽 ∩ 퐾. Hence 푖푛푓{퐽, 퐾} exists in 픗(퐿) and is 퐽 ∩ 퐾. Let 푀 = {푥 ∈ 퐿 ∶ 푥 ≤ 푗 ∨ 푘 for some 푗 ∈ 퐽 and 푘 ∈ 퐾}. We claim that 푀 is an ideal of 퐿. Suppose 푥1, 푥2 ∈ 푀; then 푥1 ≤ 푗1 ∨ 푘1 and 푥2 ≤ 푗2 ∨ 푘2 for some 푗1, 푗2 ∈ 퐽 and 푘1, 푘2 ∈ 퐾. Now 푥1 ∧ 푥2 ≤ 푥1 ≤ 푗1 ∨ 푘1, so 푥1 ∧ 푥2 ∈ 푀. Also 푥1 ∨ 푥2 ≤ (푗1 ∨ 푘1) ∨ (푗2 ∨ 푘2) = (푗1 ∨ 푗2) ∨ (푘1 ∨ 푘2), where 푗1 ∨ 푗2 ∈ 퐽 and 푘1 ∨ 푘2 ∈ 퐾; this implies 푥1 ∨ 푥2 ∈ 푀. This shows that 푀 is a sublattice of 퐿. To show that 푀 is a down-set of 퐿, suppose 푎 ∈ 푀 and 푧 ≤ 푎 for some 푧 ∈ 퐿. Since 푎 ∈ 푀 we have 푎 ≤ 푗 ∨ 푘 for some 푗 ∈ 퐽 and 푘 ∈ 퐾; this implies 푧 ≤ 푗 ∨ 푘, whence 푧 ∈ 푀. Thus 푀 ∈ 픗(퐿). Now we claim that 푀 = 푠푢푝{퐽, 퐾}. Suppose 푗 ∈ 퐽; then 푗 ≤ 푗 ∨ 푘 for any 푘 ∈ 퐾, so 푗 ∈ 푀 and 퐽 ⊆ 푀. In a similar manner we can show that 퐾 ⊆ 푀. This shows that 푀 is an upper bound of 퐽 and 퐾. Let 푁 ∈ 픗(퐿) be such that 퐽 ⊆ 푁 and 퐾 ⊆ 푁; we show that 푀 ⊆ 푁. Let 푚 ∈ 푀; then 푚 ≤ 푗 ∨ 푘 for some 푗 ∈ 퐽 and 푘 ∈ 퐾. Since 푗 ∈ 퐽 ⊆ 푁 and 푘 ∈ 퐾 ⊆ 푁 we have 푗, 푘 ∈ 푁, and since 푁 is a sublattice, 푗 ∨ 푘 ∈ 푁. But 푁 is also a down-set and 푚 ≤ 푗 ∨ 푘 ∈ 푁; this gives 푚 ∈ 푁, so 푀 ⊆ 푁. Thus 푀 = 푠푢푝{퐽, 퐾}.

Note: By the above theorem the ideal lattice 픗(퐿) is a meet-subsemilattice of 풪(퐿), since the intersection of two ideals is again an ideal. It is not a sublattice, since the union of two ideals need not be an ideal. This situation, in which a subsemilattice of a given lattice 퐿 that is not a sublattice of 퐿 nevertheless forms a lattice with respect to the same order as 퐿, is quite common in lattice theory. Another instance of this was seen in Example 2.1.21, where the set 푆푢푏 푉 of subspaces of a vector space 푉 forms a lattice in which 푖푛푓{퐴, 퐵} = 퐴 ∩ 퐵 and 푠푢푝{퐴, 퐵} = 퐴 + 퐵, so that (푆푢푏 푉; ⊆) forms a lattice that is a ∩-subsemilattice, but not a sublattice, of (ℙ(푉); ⊆). As we shall now see, a further instance is provided by a closure mapping on a lattice.

Definition 2.3.7: An isotone mapping 푓: 퐸 → 퐸 is a closure on 퐸 if 푓 = 푓² ≥ 푖푑퐸, and a dual closure if 푓 = 푓² ≤ 푖푑퐸.

Theorem 2.3.8: Let 퐿 be a lattice and let 푓: 퐿 → 퐿 be a closure. Then 퐼푚푓 is a lattice in which the lattice operations are given by 푖푛푓{푎, 푏} = 푎 ∧ 푏 and 푠푢푝{푎, 푏} = 푓(푎 ∨ 푏).

Proof: Suppose that 푓: 퐿 → 퐿 is a closure and let 푥 ∈ 퐼푚푓; then 푥 = 푓(푦) for some 푦 ∈ 퐿, so 푓(푥) = 푓²(푦) = 푓(푦) = 푥 (by the definition of a closure). Consequently 퐼푚푓 = {푥 ∈ 퐿 ∶ 푓(푥) = 푥}. If 푎, 푏 ∈ 퐼푚푓 then 푓(푎) = 푎 and 푓(푏) = 푏. Since 푓 = 푓² ≥ 푖푑퐿 we have
푎 ∧ 푏 ≤ 푓(푎 ∧ 푏). (1)
Also 푎 ∧ 푏 ≤ 푎 and 푎 ∧ 푏 ≤ 푏 and 푓 is isotone, so 푓(푎 ∧ 푏) ≤ 푓(푎) and 푓(푎 ∧ 푏) ≤ 푓(푏); this gives
푓(푎 ∧ 푏) ≤ 푓(푎) ∧ 푓(푏) = 푎 ∧ 푏 (by the connecting lemma). (2)
Combining (1) and (2) we get 푓(푎 ∧ 푏) = 푎 ∧ 푏; thus 푎 ∧ 푏 ∈ 퐼푚푓 and it follows that 퐼푚푓 is a meet-subsemilattice of 퐿. As for the supremum in 퐼푚푓 of 푎, 푏 ∈ 퐼푚푓: since 푓² = 푓 we have 푓(푎 ∨ 푏) ∈ 퐼푚푓, and since 푓 ≥ 푖푑퐿 we have 푓(푎 ∨ 푏) ≥ 푎 ∨ 푏 ≥ 푎, 푏, so 푓(푎 ∨ 푏) ∈ 퐼푚푓 is an upper bound of {푎, 푏}. Suppose now that 푐 = 푓(푐) ∈ 퐼푚푓 is any other upper bound of {푎, 푏} in 퐼푚푓; then 푎 ≤ 푐 and 푏 ≤ 푐, so 푎 ∨ 푏 ≤ 푐, and by the isotonicity of 푓 we obtain 푓(푎 ∨ 푏) ≤ 푓(푐) = 푐. Thus in the subset 퐼푚푓 the upper bound 푓(푎 ∨ 푏) is less than or equal to every upper bound of {푎, 푏}. Consequently 푠푢푝{푎, 푏} exists in 퐼푚푓 and is 푓(푎 ∨ 푏).
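Theorem 2.3.8 can be traced on a concrete closure. The Python sketch below uses the interval-hull closure on subsets of {0, …, 4}; this particular closure is our own illustrative choice, not taken from the text. The union of two intervals need not be an interval, so the supremum inside 퐼푚푓 is 푓(푎 ∨ 푏) rather than 푎 ∨ 푏, while the infimum is the ordinary intersection:

```python
def f(a):
    """Closure on P({0,...,4}): interval hull (isotone, idempotent, f(a) >= a)."""
    if not a:
        return frozenset()
    return frozenset(range(min(a), max(a) + 1))

a, b = frozenset({0, 1}), frozenset({3, 4})
assert f(f(a)) == f(a) and a <= f(a)  # f is idempotent and above the identity

# a | b = {0,1,3,4} is not an interval, hence not in Im f;
# the supremum of a and b inside Im f is the closure of the union:
print(f(a | b) == frozenset({0, 1, 2, 3, 4}))  # True
# the infimum agrees with the ambient power-set lattice (intersection):
print(f(a) & f(b) == frozenset())              # True
```

Note that 퐼푚푓 here (the intervals, together with ∅) is a ∩-subsemilattice but not a sublattice of ℙ({0, …, 4}), exactly the phenomenon described in the note above.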

2.4 Lattice morphisms
In this section we discuss lattice morphisms. These are the morphisms of ordered sets that preserve the operations of meet and join.

Definition 2.4.1: If 퐿 and 푀 are join semilattices then 푓: 퐿 → 푀 is said to be a join morphism if 푓( 푥 ∨ 푦 ) = 푓(푥) ∨ 푓(푦) for all 푥, 푦 ∈ 퐿. Dually if 퐿 and 푀 are meet semilattices then 푓: 퐿 → 푀 is said to be a meet morphism if 푓( 푥 ∧ 푦 ) = 푓(푥) ∧ 푓(푦) for all 푥, 푦 ∈ 퐿.

Definition 2.4.2: If 퐿 and 푀 are lattices then 푓: 퐿 → 푀 is a lattice morphism if it is both a join morphism and a meet morphism.

Definition 2.4.3: If 퐿 and 푀 are join semilattices then a mapping 푓: 퐿 → 푀 is said to be a complete join morphism if for every family (푥훼)훼∈퐼 of elements of 퐿 such that ⋁훼∈퐼 푥훼exists in 퐿, ⋁훼∈퐼 푓(푥훼)exists in 푀 and 푓(∨훼∈퐼 푥훼 ) = ∨훼∈퐼 푓(푥훼). The notion of complete meet morphism is defined dually.

Theorem 2.4.4: If 퐿 and 푀 are join semilattices then every residuated mapping 푓: 퐿 → 푀 is a complete join morphism.

Proof: Suppose that (푥훼)훼∈퐼 is a family of elements of 퐿 such that 푥 = ⋁훼∈퐼 푥훼 exists in 퐿. For each 훼 ∈ 퐼 we have 푥 ≥ 푥훼, and since 푓 is residuated it is isotone, so 푓(푥) ≥ 푓(푥훼) for each 훼 ∈ 퐼; thus 푓(푥) is an upper bound of {푓(푥훼) ∶ 훼 ∈ 퐼}. Now suppose 푦 ≥ 푓(푥훼) for each 훼 ∈ 퐼. Since the residual 푓⁺ is isotone we have 푓⁺(푦) ≥ 푓⁺(푓(푥훼)), and since 푓 is residuated, 푓⁺∘푓 ≥ 푖푑퐿; therefore 푓⁺(푦) ≥ 푥훼 for each 훼 ∈ 퐼, and so 푓⁺(푦) ≥ ⋁훼∈퐼 푥훼 = 푥. Since 푓∘푓⁺ ≤ 푖푑푀, it follows that 푦 ≥ 푓(푓⁺(푦)) ≥ 푓(푥). Thus we see that ⋁훼∈퐼 푓(푥훼) exists and is 푓(푥).

Definition2.4.5: We shall say that lattices 퐿 and 푀 are isomorphic if they are isomorphic as ordered sets.

Theorem 2.4.6: Lattices 퐿 and 푀 are isomorphic if and only if there is a bijection 푓: 퐿 → 푀 that is a ∨-morphism.

Proof: Suppose 퐿 ≃ 푀; then by definition there is an order isomorphism 푓: 퐿 → 푀, which is a residuated bijection. So by the above theorem 푓 is a join morphism. Conversely, suppose that 푓: 퐿 → 푀 is a bijection and a join morphism. Then 푥 ≤ 푦 if and only if 푥 ∨ 푦 = 푦 (by the connecting lemma), if and only if 푓(푥 ∨ 푦) = 푓(푦) (since 푓 is injective), if and only if 푓(푥) ∨ 푓(푦) = 푓(푦) (since 푓 is a join morphism), if and only if 푓(푥) ≤ 푓(푦) (by the connecting lemma). So 퐿 is isomorphic to 푀.

Example 2.4.7: Let 퐿 be a lattice. Then every isotone mapping from 퐿 to an arbitrary lattice 푀 is a lattice morphism if and only if 퐿 is a chain.

Proof: Suppose that 퐿 is a chain and let 푓: 퐿 → 푀 be isotone; we show that 푓 is a lattice morphism. Let 푥, 푦 ∈ 퐿. Then either 푥 ≤ 푦 or 푦 ≤ 푥; without loss of generality suppose 푥 ≤ 푦. Then 푓(푥) ≤ 푓(푦), which implies 푓(푥) ∨ 푓(푦) = 푓(푦). Also 푥 ≤ 푦 implies 푥 ∨ 푦 = 푦, so 푓(푥 ∨ 푦) = 푓(푦) = 푓(푥) ∨ 푓(푦). Similarly we can show that 푓(푥 ∧ 푦) = 푓(푥) ∧ 푓(푦). So 푓 is a join morphism and a meet morphism; thus 푓 is a lattice morphism. To prove the converse, suppose that 퐿 is not a chain; then there exist 푎, 푏 ∈ 퐿 with 푎 ∥ 푏, that is 푎 ≰ 푏 and 푏 ≰ 푎. Let 푀 be the three-element chain 0 < 1 < 2 and define 푓: 퐿 → 푀 by 푓(푥) = 2 if 푥 ≥ 푎 ∨ 푏; 푓(푥) = 1 if 푥 ≥ 푎 or 푥 ≥ 푏 but 푥 ≱ 푎 ∨ 푏; and 푓(푥) = 0 otherwise. Then 푓 is isotone, but since 푎 ∥ 푏 we have 푎 ≱ 푎 ∨ 푏 and 푏 ≱ 푎 ∨ 푏, so 푓(푎 ∨ 푏) = 2 while 푓(푎) ∨ 푓(푏) = 1 ∨ 1 = 1. Thus there is an isotone mapping from 퐿 which is not a lattice morphism, a contradiction. Hence 퐿 is a chain.

2.5 Complete lattice

We have seen that in a meet semilattice the infimum of every finite subset exists. We now extend this concept to arbitrary subsets. Lattices are nicer structures than posets because they allow us to take the meet and join of any pair of elements. What if we want to take the meet and join of arbitrary subsets? Complete lattices allow us to do exactly that. All finite lattices are complete, so the notion is of interest chiefly for infinite lattices. In this section we first discuss complete lattices and show many ways in which they arise in mathematics. We then consider the question: what if a given poset is not complete, or not even a lattice? Can we embed it into a complete lattice? This brings us to the notion of lattice completion, which is useful for both finite and infinite posets.

Definition 2.5.1: A ∧-semilattice 퐿 is said to be ∧-complete if every subset 퐸 = {푥훼 ∶ 훼 ∈ 퐴} of 퐿 has an infimum, which we denote by 푖푛푓퐿 퐸 or by ⋀훼∈퐴 푥훼. Dually we define a ∨-complete ∨-semilattice, in which case we use the notation 푠푢푝퐿 퐸 or ⋁훼∈퐴 푥훼.

Definition 2.5.2: A Lattice is said to be complete if it is both ∧-complete and ∨-complete.

Theorem 2.5.3: Every complete lattice has a top and a bottom element.

Proof: Let 퐿 be a complete lattice; then by definition every subset 푀 of 퐿 has both a supremum and an infimum. Taking 푀 = 퐿, we see that 푠푢푝퐿 퐿 is the top element of 퐿 and 푖푛푓퐿 퐿 is the bottom element. The following lemma provides a way of showing that a lattice is complete by proving only that infima exist, saving us half the work.

Lemma 2.5.4: (Half-work Lemma) A poset 푃 is a complete lattice if and only if 푖푛푓(푆) exists for every 푆 ⊆ 푃.

Proof: The forward direction is trivial. For the converse we must prove that 푠푢푝(푆) exists for every 푆 ⊆ 푃. To do so, we formulate 푠푢푝(푆) as the infimum of another set. Consider the set of upper bounds of 푆, namely 푇 = {푥 ∈ 푃 ∶ 푠 ≤ 푥 for all 푠 ∈ 푆}. Let 푎 = 푖푛푓(푇); we claim that 푎 = 푠푢푝(푆). From the definition of 푇 we get 푠 ≤ 푡 for all 푠 ∈ 푆 and all 푡 ∈ 푇; thus each 푠 ∈ 푆 is a lower bound of 푇, and since 푎 = 푖푛푓(푇) is the greatest lower bound of 푇 it follows that 푠 ≤ 푎 for all 푠 ∈ 푆. Thus 푎 is an upper bound of 푆. Further, for any upper bound 푡 of 푆 we have 푡 ∈ 푇, and therefore 푎 ≤ 푡 because 푎 = 푖푛푓(푇). Thus 푎 = 푠푢푝(푆).

Note: The set 푇 in the proof may be empty. In that case 푎 = 푖푛푓(∅) is the top element of 푃.

Illustration of Half Work Lemma
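In lieu of the missing figure, the lemma can be illustrated computationally in the power-set lattice of {1, 2, 3}, where infima are intersections. The Python sketch below (a toy encoding of our own) recovers suprema from infima exactly as in the proof: the supremum of 푆 is the infimum of the set 푇 of upper bounds of 푆.

```python
from itertools import chain, combinations

U = {1, 2, 3}
P = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

def inf(sets):
    """Infimum in P(U): intersection; the empty family has infimum U (the top)."""
    result = frozenset(U)
    for s in sets:
        result &= s
    return result

def sup_via_inf(S):
    """Half-work lemma: sup(S) = inf of the set T of upper bounds of S."""
    T = [t for t in P if all(s <= t for s in S)]
    return inf(T)

print(sup_via_inf([frozenset({1}), frozenset({2})]) == frozenset({1, 2}))  # True
print(sup_via_inf([]) == frozenset())  # True: sup of the empty set is the bottom
```

The second call also illustrates the note above: the upper bounds of ∅ are all of 푃, and their infimum is the bottom element.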

Example 2.5.5: For every non-empty set 퐸 the power set lattice ℙ(퐸) is complete. The top element is 퐸 and the bottom element is ∅.

Solution: We have already seen that ℙ(퐸) is a bounded lattice with top element 퐸 and bottom element ∅. Let 푋 ⊆ ℙ(퐸) and let 퐵 be any upper bound of 푋 with respect to ⊆. This means

퐶 ⊆ 퐵 for each 퐶 ∈ 푋. But then ⋃퐶∈푋 퐶 ⊆ 퐵 as well: for if 푥 ∈ ⋃퐶∈푋 퐶 then 푥 ∈ 퐶′ for some 퐶′ ∈ 푋, and since 퐶′ ⊆ 퐵 we have 푥 ∈ 퐵. So the union is smaller than any other upper bound, and hence by definition ⋃퐶∈푋 퐶 = 푠푢푝 푋. A dual argument applies to the intersection, showing that ℙ(퐸) is a complete lattice.

Example 2.5.6: Let 퐿 be the lattice formed by adjoining to the chain (푄, ≤) of rationals a top element ∞ and a bottom element −∞. Then 퐿 is bounded but not complete.

Since 퐿 has a top element and a bottom element, it is bounded. But 퐿 is not complete: the set {푥 ∈ 푄 ∶ 푥 > 0 and 푥² ≤ 2} has upper bounds in 푄 but no least upper bound, since √2 is not rational.

Definition 2.5.7: Let 푅 be any relation on a set 푋. Then by 푅ᵉ we mean the smallest equivalence relation on 푋 containing 푅; we call it the equivalence relation generated by 푅.

Example 2.5.8: For every non-empty set 퐸 the set 퐸푞푢 퐸 of equivalence relations on 퐸 is a complete lattice.

Solution: Suppose 퐹 = (푅훼)훼∈퐴 is a family of equivalence relations on 퐸. We claim that 퐼푛푓훼∈퐴푅훼 exists in 퐸푞푢 퐸 and is the relation ⋀훼∈퐴푅훼 given by;

⋀훼∈퐴푅훼 = ∩훼∈퐴 푅훼.

Clearly ∩훼∈퐴 푅훼 ∈ 퐸푞푢 퐸; since intersection of any family of equivalence relations is again an equivalence relation. Now we show that it is also meet of (푅훼)훼∈퐴.

Now ⋂훼∈퐴 푅훼 ⊆ 푅훼 for all 훼 ∈ 퐴, so ⋂훼∈퐴 푅훼 is a lower bound of (푅훼)훼∈퐴. Let 휆 ∈ 퐸푞푢 퐸 be another lower bound of (푅훼)훼∈퐴; then by definition 휆 ⊆ 푅훼 for all 훼 ∈ 퐴, which implies

휆 ⊆ ⋂훼∈퐴 푅훼. Thus ⋀훼∈퐴 푅훼 = ⋂훼∈퐴 푅훼 is the meet of (푅훼)훼∈퐴.
Now we show that 푠푢푝{(푅훼)훼∈퐴} = (⋃훼∈퐴 푅훼)ᵉ. For any 훼 ∈ 퐴 we have 푅훼 ⊆ ⋃훼∈퐴 푅훼 ⊆ (⋃훼∈퐴 푅훼)ᵉ, so (⋃훼∈퐴 푅훼)ᵉ is an upper bound of (푅훼)훼∈퐴. Let 휆 ∈ 퐸푞푢 퐸 be any other upper bound of (푅훼)훼∈퐴; then by definition 푅훼 ⊆ 휆 for all 훼 ∈ 퐴, which implies ⋃훼∈퐴 푅훼 ⊆ 휆. Since (⋃훼∈퐴 푅훼)ᵉ is the smallest equivalence relation containing ⋃훼∈퐴 푅훼, it follows that (⋃훼∈퐴 푅훼)ᵉ ⊆ 휆. Thus 푠푢푝{(푅훼)훼∈퐴} = (⋃훼∈퐴 푅훼)ᵉ.
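The meet and join in 퐸푞푢 퐸 can be computed concretely by working with partitions rather than sets of pairs; this representation is our own convention for illustration. The meet intersects blocks pairwise, and the join merges overlapping blocks until none remain, which amounts to forming the equivalence generated by the union of the two relations:

```python
def meet_equivalences(blocks1, blocks2):
    """Meet in Equ E: intersection of relations = pairwise block intersections."""
    out = [set(a) & set(b) for a in blocks1 for b in blocks2]
    return sorted(sorted(b) for b in out if b)

def join_equivalences(blocks1, blocks2):
    """Join in Equ E: the equivalence generated by the union of the relations,
    obtained by merging overlapping blocks until no overlaps remain."""
    blocks = [set(b) for b in blocks1] + [set(b) for b in blocks2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return sorted(sorted(b) for b in blocks)

R = [{1, 2}, {3}, {4}]  # partition of the relation R on {1,2,3,4}
S = [{1}, {2, 3}, {4}]  # partition of the relation S on {1,2,3,4}
print(meet_equivalences(R, S))  # [[1], [2], [3], [4]]
print(join_equivalences(R, S))  # [[1, 2, 3], [4]]
```

Note that the join {1, 2, 3} is strictly larger than the union of the relations: 1 and 3 are related only via transitivity through 2, which is exactly why the generated equivalence (∪ 푅훼)ᵉ is needed.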

The relationship between complete semilattices and complete lattices is highlighted by the following result.

Theorem 2.5.9: A ⋀-complete ⋀-semilattice is a complete lattice if and only if it has a top element.

Proof: Let 퐿 be a ⋀-complete ⋀-semilattice. If 퐿 is complete then 푠푢푝퐿 퐿 is the top element of 퐿. Conversely, suppose that 퐿 is a ⋀-complete ⋀-semilattice with top element 1. Let 푋 = {푥훼 ∶ 훼 ∈ 퐴} be a non-empty subset of 퐿; we show that 푠푢푝퐿 푋 exists. Since 퐿 has a top element 1, the set 푋↑ of upper bounds of 푋 is non-empty; let 푋↑ = {푚훽 ∶ 훽 ∈ 퐵}. Since 퐿 is

⋀-complete, ⋀훽∈퐵 푚훽 exists. Since each 푚훽 is an upper bound of 푋, we have 푥훼 ≤ 푚훽 for all 훼 ∈ 퐴 and 훽 ∈ 퐵, and it follows that 푥훼 ≤ ⋀훽∈퐵 푚훽 for every 훼 ∈ 퐴; thus ⋀훽∈퐵 푚훽 ∈ 푋↑. Moreover ⋀훽∈퐵 푚훽 ≤ 푚훽 for every 훽 ∈ 퐵, so ⋀훽∈퐵 푚훽 is the least upper bound of 푋. Hence ⋀훽∈퐵 푚훽 = 푠푢푝퐿 푋 and 퐿 is complete.

Example 2.5.10: Let 퐸 be an infinite set and let 푃푓(퐸) be the set of all finite subsets of 퐸. We have already shown that (푃푓(퐸); ⊆) is an ordered set. Let 푋 be any non-empty subset of 푃푓(퐸). Then ⋂퐶∈푋 퐶 is finite, being contained in any member of 푋, so ⋂퐶∈푋 퐶 ∈ 푃푓(퐸), and it is a lower bound of 푋. If 퐵 is any lower bound of 푋 then 퐵 ⊆ 퐶 for all 퐶 ∈ 푋, whence 퐵 ⊆ ⋂퐶∈푋 퐶. Hence ⋂퐶∈푋 퐶 is the greatest lower bound of 푋.

Therefore in 푃푓(퐸), ordered by set inclusion, every non-empty subset has an infimum. Now 푃푓(퐸) ∪ {퐸} is a ⋀-complete ⋀-semilattice with top element 퐸, so by Theorem 2.5.9 it is complete.

Example 2.5.11: Let 퐺 be a group and let 푆푢푏 퐺 be the set of all subgroups of 퐺, ordered by set inclusion. Let 퐻, 퐾 ∈ 푆푢푏 퐺; then 퐻 ∩ 퐾 ∈ 푆푢푏 퐺, and since 퐻 ∩ 퐾 ⊆ 퐻 and 퐻 ∩ 퐾 ⊆ 퐾, 퐻 ∩ 퐾 is a lower bound of 퐻 and 퐾. If 푊 is any other lower bound of 퐻 and 퐾, so that 푊 ⊆ 퐻 and 푊 ⊆ 퐾, then 푊 ⊆ 퐻 ∩ 퐾. So 퐻 ∩ 퐾 = 푖푛푓{퐻, 퐾} and hence

푆푢푏 퐺 is a ∩-semilattice. To show that it is also ⋀-complete, let {퐻휆 ∶ 휆 ∈ 퐴} be any family of subgroups; then ⋂휆∈퐴 퐻휆 is also a subgroup and ⋂휆∈퐴 퐻휆 = ⋀휆∈퐴 퐻휆. Thus 푆푢푏 퐺 is a ⋀-complete ⋀-semilattice with top element 퐺, so by Theorem 2.5.9 푆푢푏 퐺 is a complete lattice.

Example 2.5.12: Consider the lattice (푁 ∪ {0}; | ). Since every natural number divides 0, 0 is the top element, and since 1 divides every natural number, 1 is the bottom element. If 푋 is any non-empty subset of 푁 ∪ {0} then, as already seen in Example 2.1.21, 푖푛푓 푋 exists and is the greatest common divisor of the elements of 푋. So 푁 ∪ {0} is a ⋀-complete ⋀-semilattice with top element 0. Thus by Theorem 2.5.9, (푁 ∪ {0}; | ) is a complete lattice.
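In (푁 ∪ {0}; | ) the binary meet is the greatest common divisor and the binary join the least common multiple, with 0 behaving as the top element. A small Python check:

```python
from math import gcd

def lcm(a, b):
    # join in (N ∪ {0}; |): 0 is the top element, so x ∨ 0 = 0
    if a == 0 or b == 0:
        return 0
    return a * b // gcd(a, b)

print(gcd(12, 18), lcm(12, 18))  # 6 36 -- meet and join of 12 and 18
print(gcd(7, 0), lcm(7, 0))      # 7 0  -- 0 acts as top: 7 meet 0 = 7, 7 join 0 = 0
```

Python's `math.gcd` already follows the convention gcd(푥, 0) = 푥, which matches 푥 ∧ 0 = 푥 when 0 is the top.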

Definition 2.5.13: Let 푓: 푋 → 푋 be a function then 푐 ∈ 푋 is a fixed point of 푓 if 푓(푐) = 푐.

Concerning complete lattices, we have the following remarkable result.

Theorem 2.5.14: (Knaster Fixed Point Theorem) If 퐿 is a complete lattice and if 푓: 퐿 → 퐿 is an isotone mapping, then 푓 has a fixed point.

Proof: Consider the set 퐴 = {푥 ∈ 퐿 ∶ 푥 ≤ 푓(푥)}. Since 퐿 is complete it has a bottom element 0, and clearly 0 ∈ 퐴 (since 0 is the bottom element); by the completeness of 퐿 there exists 훼 = 푠푢푝퐿 퐴. Now for every 푥 ∈ 퐴 we have 푥 ≤ 훼, and therefore, 푓 being isotone, 푥 ≤ 푓(푥) ≤ 푓(훼) (since 푥 ∈ 퐴 implies 푥 ≤ 푓(푥)). This shows that 푓(훼) is an upper bound of 퐴. Hence,

훼 = 푠푢푝퐿 퐴 ≤ 푓(훼), so 훼 ∈ 퐴. (1)
Again, since 푓 is isotone, (1) gives 푓(훼) ≤ 푓(푓(훼)), which implies 푓(훼) ∈ 퐴, so 푓(훼) ≤ 훼. (2)
Combining (1) and (2) we get 푓(훼) = 훼.
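The construction in the proof, 훼 = 푠푢푝{푥 ∶ 푥 ≤ 푓(푥)}, can be traced in the complete lattice ℙ({1, 2, 3}). The isotone map below is an illustrative assumption of our own; note that it is not inflationary, so the fixed point is not simply the top element:

```python
from itertools import combinations

U = frozenset({1, 2, 3})
P = [frozenset(c) for r in range(4) for c in combinations(sorted(U), r)]

def f(x):
    """An isotone (but not inflationary) map on P(U): intersect with {1, 2}."""
    return x & frozenset({1, 2})

# the construction from the proof: alpha = sup A where A = {x : x <= f(x)}
A = [x for x in P if x <= f(x)]
alpha = frozenset().union(*A)
assert f(alpha) == alpha  # alpha is a fixed point of f
print(sorted(alpha))      # [1, 2]
```

Here 퐴 consists of the subsets of {1, 2}, its supremum (union) is {1, 2}, and this is indeed fixed by 푓.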

Definition 2.5.14: We call two sets equipotent if and only if there exists one-to-one function from one set onto the other.

Next we give a lattice-theoretic proof of one of the well-known theorems of set theory, namely the Schroeder–Bernstein Theorem.

Theorem 2.5.15: (Schroeder–Bernstein Theorem) If 퐸 and 퐹 are sets and if there are injections 푓: 퐸 → 퐹 and 푔: 퐹 → 퐸, then 퐸 and 퐹 are equipotent.

Proof: We use the notation 푖푋: 푃(푋) → 푃(푋) to denote the antitone mapping that sends every subset of 푋 to its complement in 푋, and 푓→, 푔→ to denote the direct image mappings 퐴 ⟼ 푓(퐴) and 퐵 ⟼ 푔(퐵). Consider the mapping 휓: 푃(퐸) → 푃(퐸) given by

휓 = 푖퐸 ∘ 푔→ ∘ 푖퐹 ∘ 푓→.

Since 푓→ and 푔→ are isotone and 푖퐸, 푖퐹 are antitone, 휓 is a composite of two isotone and two antitone mappings and is therefore isotone. Since 푃(퐸) is a complete lattice and 휓: 푃(퐸) → 푃(퐸) is isotone, by the Fixed Point Theorem there exists 퐺 ⊆ 퐸 such that 휓(퐺) = 퐺, and therefore 푖퐸(퐺) = (푔→ ∘ 푖퐹 ∘ 푓→)(퐺), that is, 퐸 ∖ 퐺 = 푔→(퐹 ∖ 푓→(퐺)). This situation may be summarized pictorially.

Now since 푓 and 푔 are injective by hypothesis, this configuration shows that we can define a bijection ℎ: 퐸 → 퐹 by the prescription;

ℎ(푥) = 푓(푥) if 푥 ∈ 퐺; ℎ(푥) = the unique element of 푔←({푥}) if 푥 ∉ 퐺.
(If 푥 ∉ 퐺 then 푥 ∈ 퐸 ∖ 퐺 = 푔→(퐹 ∖ 푓→(퐺)), so by the injectivity of 푔 we have 푥 = 푔(푦) for exactly one 푦 ∈ 퐹 ∖ 푓→(퐺); thus ℎ is well defined.)
Claim: ℎ is surjective. Let 푦 ∈ 퐹 be any element.
Case 1: If 푦 ∈ 푓→(퐺) then 푦 = 푓(푥) for some 푥 ∈ 퐺, and so 푦 = 푓(푥) = ℎ(푥).
Case 2: If 푦 ∉ 푓→(퐺), put 푥 = 푔(푦). Then 푥 ∈ 푔→(퐹 ∖ 푓→(퐺)) = 퐸 ∖ 퐺, so 푥 ∉ 퐺 and ℎ(푥) is the unique element of 푔←({푥}), which is 푦. Thus ℎ is surjective.
Claim: ℎ is injective. Let 푥, 푥′ ∈ 퐸 be such that ℎ(푥) = ℎ(푥′).
Case 1: If 푥, 푥′ ∈ 퐺 then 푓(푥) = ℎ(푥) = ℎ(푥′) = 푓(푥′), and since 푓 is injective, 푥 = 푥′.
Case 2: If 푥 ∈ 퐺 but 푥′ ∉ 퐺, then ℎ(푥) = 푓(푥) ∈ 푓→(퐺), whereas ℎ(푥′) is the unique 푦′ ∈ 퐹 with 푔(푦′) = 푥′; since 푥′ ∈ 퐸 ∖ 퐺 = 푔→(퐹 ∖ 푓→(퐺)) and 푔 is injective, 푦′ ∉ 푓→(퐺). Thus ℎ(푥) ≠ ℎ(푥′), so this case cannot occur.

Case 3: If 푥 ∉ 퐺 and 푥′ ∉ 퐺, then 푔(ℎ(푥)) = 푥 and 푔(ℎ(푥′)) = 푥′, so ℎ(푥) = ℎ(푥′) gives 푥 = 푥′. Hence ℎ is injective.
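For finite sets the fixed-point construction can be carried out mechanically. In the Python sketch below (a toy instance of our own), 휓 is iterated from ∅ upward to a fixed point 퐺 (휓 is isotone, so the iterates form an increasing chain that must stabilize on a finite lattice), and the bijection ℎ is assembled exactly as in the proof:

```python
def schroeder_bernstein(E, F, f, g):
    """Build a bijection h: E -> F from injections f: E -> F and g: F -> E
    (given as dicts), following the fixed-point construction of the proof."""
    psi = lambda G: E - {g[y] for y in F - {f[x] for x in G}}
    G = set()
    while psi(G) != G:  # iterate the isotone map psi to a fixed point
        G = psi(G)
    g_inv = {v: k for k, v in g.items()}  # g is injective, so this is well defined
    return {x: f[x] if x in G else g_inv[x] for x in E}

E, F = {0, 1, 2}, {'a', 'b', 'c'}
f = {0: 'a', 1: 'b', 2: 'c'}  # an injection E -> F
g = {'a': 1, 'b': 2, 'c': 0}  # an injection F -> E
h = schroeder_bernstein(E, F, f, g)
print(sorted(h.items()))  # [(0, 'c'), (1, 'a'), (2, 'b')] -- a bijection E -> F
```

In this instance the fixed point is 퐺 = ∅, so every value of ℎ comes from 푔←; other choices of 푓 and 푔 give non-trivial 퐺.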

Theorem 2.5.16: (Ward’s Theorem) Let 퐿 be a complete lattice. If 푓 is a closure on 퐿, then 퐼푚푓 is a complete lattice. Moreover for every non-empty subset 퐴 of 퐼푚푓:

푖푛푓퐼푚푓 퐴 = 푖푛푓퐿 퐴 and 푠푢푝퐼푚푓 퐴 = 푓(푠푢푝퐿 퐴).

Proof: We first prove that 퐼푚푓 is a ⋀-complete ⋀-semilattice. Suppose that 푓: 퐿 → 퐿 is a closure and let 푥 ∈ 퐼푚푓; then 푥 = 푓(푦) for some 푦 ∈ 퐿, and since 푓 is a closure, 푓(푥) = 푓²(푦) = 푓(푦) = 푥. Therefore 퐼푚푓 = {푥 ∈ 퐿 ∶ 푓(푥) = 푥}, the set of fixed points of 푓. Now suppose that 퐶 ⊆ 퐼푚푓. Since 퐶 ⊆ 퐿 and 퐿 is complete, 푖푛푓퐿 퐶 exists; let 푎 = 푖푛푓퐿 퐶. By the definition of infimum we have 푎 ≤ 푥 for all 푥 ∈ 퐶, and since 푓 is isotone, 푓(푎) ≤ 푓(푥) = 푥 (since 푥 ∈ 퐼푚푓). Thus 푓(푎) is a lower bound of 퐶, and since 푎 is the greatest lower bound we have 푓(푎) ≤ 푎; also, since 푓 is a closure, 푓 ≥ 푖푑퐿 gives 푓(푎) ≥ 푎. Therefore 푓(푎) = 푎, which implies 푎 ∈ 퐼푚푓. Thus 퐼푚푓 is ⋀-complete. Since 퐿 is complete it has a top element 1, so 푓(1) ∈ 퐿 and 푓(1) ≤ 1 (since 1 is the top element); also, since 푓 is a closure, 푓 ≥ 푖푑퐿 gives 푓(1) ≥ 1. Thus 푓(1) = 1, so 1 ∈ 퐼푚푓. Hence 퐼푚푓 is a ⋀-complete ⋀-semilattice with top element 1. Therefore by

Theorem 2.5.9, 퐼푚푓 is a complete lattice. Suppose now that 퐴 ⊆ 퐼푚푓. If 푎 = 푖푛푓퐿 퐴 then from the above we have 푎 = 푓(푎) ∈ 퐼푚푓; and if 푦 ∈ 퐼푚푓 is such that 푦 ≤ 푥 for every 푥 ∈ 퐴, then 푦 ≤

푎. Consequently 푎 = 푖푛푓퐼푚푓 퐴, and therefore 푖푛푓퐼푚푓 퐴 = 푖푛푓퐿 퐴.
Now let 푏 = 푠푢푝퐿 퐴 and 푏∗ = 푠푢푝퐼푚푓 퐴; we show that 푏∗ = 푓(푏). Since 퐼푚푓 is complete we have 푏∗ ∈ 퐼푚푓, so 푏∗ = 푓(푏∗), and since 푏∗ ≥ 푥 for every 푥 ∈ 퐴 we have 푏∗ ≥ 푠푢푝퐿 퐴 = 푏. Thus, using the fact that 푓 is isotone, 푏∗ = 푓(푏∗) ≥ 푓(푏). But 푏 ≥ 푥 for every 푥 ∈ 퐴, so by the isotonicity of 푓 we have 푓(푏) ≥ 푓(푥) = 푥 for all 푥 ∈ 퐴 (since 푥 ∈ 퐼푚푓); and since 푓(푏) ∈ 퐼푚푓 we also have 푓(푏) ≥ 푠푢푝퐼푚푓 퐴 = 푏∗. This implies 푏∗ = 푓(푏) as asserted.

Definition 2.5.17: Given ordered sets 퐸, 퐹 and antitone mappings 푓: 퐸 → 퐹 and 푔: 퐹 → 퐸, we say that the pair (푓, 푔) establishes a Galois connection between 퐸 and 퐹 if 푓푔 ≥ 푖푑퐹 and 푔푓 ≥ 푖푑퐸.

Remark 2.5.18: We now proceed to describe an important application of the above theorem. For this purpose, given an ordered set 퐸, consider the mapping 휗: 푃(퐸) → 푃(퐸) given by 휗(퐴) = 퐴↓ and the mapping 휑: 푃(퐸) → 푃(퐸) given by 휑(퐴) = 퐴↑. If 퐴, 퐵 ∈ 푃(퐸) with 퐴 ⊆ 퐵 then clearly every lower bound of 퐵 is a lower bound of 퐴, whence 퐵↓ ⊆ 퐴↓; hence 휗 is antitone, and dually so is 휑. Now every element of 퐴 is clearly a lower bound of the set of upper bounds of 퐴, whence 퐴 ⊆ 퐴↑↓ and therefore 푖푑푃(퐸) ≤ 휗휑. Dually, every element of 퐴 is an upper bound of the set of lower bounds of 퐴, so 퐴 ⊆ 퐴↓↑ and therefore 푖푑푃(퐸) ≤ 휑휗. Consequently we see that (휗, 휑) establishes a Galois connection on 푃(퐸). We shall focus on the associated closure 퐴 ⟼ 퐴↑↓; for this purpose we shall also require the following facts.

Theorem 2.5.19: Let 퐸 be an ordered set. If (퐴훼)훼∈퐼 is a family of subsets of 퐸 then (⋃훼∈퐼 퐴훼)↑ = ⋂훼∈퐼 퐴훼↑ and (⋃훼∈퐼 퐴훼)↓ = ⋂훼∈퐼 퐴훼↓.

Proof: Since each 퐴훼 is contained in ⋃훼∈퐼 퐴훼 and 퐴 ⟼ 퐴↑ is antitone, we have (⋃훼∈퐼 퐴훼)↑ ⊆ ⋂훼∈퐼 퐴훼↑. To prove the reverse inclusion, observe that if 푥 ∈ ⋂훼∈퐼 퐴훼↑ then 푥 is an upper bound of 퐴훼 for every 훼 ∈ 퐼. Hence 푥 is an upper bound of ⋃훼∈퐼 퐴훼 and therefore belongs to (⋃훼∈퐼 퐴훼)↑.
To prove the other part: 퐴훼 ⊆ ⋃훼∈퐼 퐴훼 and 퐴 ⟼ 퐴↓ is antitone, so (⋃훼∈퐼 퐴훼)↓ ⊆ ⋂훼∈퐼 퐴훼↓. For the reverse inclusion, suppose that 푥 ∈ ⋂훼∈퐼 퐴훼↓; then 푥 is a lower bound of 퐴훼 for every 훼 ∈ 퐼, which implies that 푥 is a lower bound of ⋃훼∈퐼 퐴훼, and so 푥 ∈ (⋃훼∈퐼 퐴훼)↓. Thus (⋃훼∈퐼 퐴훼)↓ = ⋂훼∈퐼 퐴훼↓.

Definition 2.5.20: By an embedding of an ordered set 퐸 into a lattice 퐿 we mean a mapping 푓: 퐸 → 퐿 such that for all 푥, 푦 ∈ 퐸, 푥 ≤ 푦 if and only if 푓(푥) ≤ 푓(푦).

Theorem 2.5.21: (Dedekind–MacNeille) Every ordered set 퐸 can be embedded in a complete lattice 퐿 in such a way that meets and joins that exist in 퐸 are preserved in 퐿.

Proof: If 퐸 does not have a top element or a bottom element, we begin by adjoining whichever of these bounds is missing; so without loss of generality we may assume that 퐸 is a bounded ordered set. Let 푓: 푃(퐸) → 푃(퐸) be the closure mapping given by 푓(퐴) = 퐴↑↓. Then by the previous theorem 퐿 = 퐼푚푓 is a complete lattice. We have 푓({푥}) = {푥}↑↓ = 푥↓ for all 푥 ∈ 퐸. Now for 푧 ∈ 퐸 we have 푧 ∈ 푥↓ if and only if 푧 ≤ 푥; so if 푥 ≤ 푦 then 푧 ≤ 푥 implies 푧 ≤ 푦, giving 푥↓ ⊆ 푦↓, and conversely 푥↓ ⊆ 푦↓ gives 푥 ≤ 푦 (since 푥 ∈ 푥↓). So 푥 ≤ 푦 if and only if 푥↓ ⊆ 푦↓, if and only if 푓({푥}) ⊆ 푓({푦}). It follows that 푓 induces an embedding 푓′: 퐸 → 퐿 given by 푓′(푥) = 푓({푥}) = {푥}↑↓ = 푥↓. Suppose now that 퐴 = {푥훼 ∶ 훼 ∈ 퐼} ⊆ 퐸. If 푎 = ⋀훼∈퐼 푥훼 then clearly 푎↓ = ⋂훼∈퐼 푥훼↓, so that 푓′(푎) = ⋂훼∈퐼 푓′(푥훼); that is, existing infima are preserved.

Suppose now that 푏 = ⋁훼∈퐼 푥훼 exists. Since
푦 ≥ 푏 if and only if 푦 ≥ 푥훼 for all 훼 ∈ 퐼,
if and only if 푦 ∈ ⋂훼∈퐼 푥훼↑ = ⋂훼∈퐼 {푥훼}↑↓↑,

if and only if 푦 ∈ (⋃훼∈퐼 {푥훼}↑↓)↑ (by Theorem 2.5.19),

we see that 푏↑ = (⋃훼∈퐼 {푥훼}↑↓)↑. Consequently,

푓′(푏) = {푏}↑↓ = (⋃훼∈퐼 {푥훼}↑↓)↑↓
= 푓(푠푢푝ℙ(퐸) {{푥훼}↑↓ ∶ 훼 ∈ 퐼})
= 푠푢푝퐼푚푓 {{푥훼}↑↓ ∶ 훼 ∈ 퐼} (by Theorem 2.5.16)

= 푠푢푝퐼푚푓 {푓′(푥훼) ∶ 훼 ∈ 퐼}. So that existing suprema are also preserved.

Definition 2.5.22: The complete lattice 퐿 = 퐼푚푓 = {퐴↑↓ ∶ 퐴 ∈ 푃(퐸)} in Theorem 2.5.21 is called the Dedekind–MacNeille completion of 퐸.
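For a small ordered set the completion can be computed by brute force: form 퐴↑↓ for every 퐴 ⊆ 퐸 and collect the distinct results. The sketch below (the three-element poset is our own illustrative assumption) completes the poset with bottom 0 below two incomparable elements 푎 and 푏; the completion adjoins a top element, yielding a four-element lattice:

```python
from itertools import combinations

# a small ordered set: bottom '0' below the incomparable pair 'a', 'b'
E = ['0', 'a', 'b']
leq = {('0', '0'), ('0', 'a'), ('0', 'b'), ('a', 'a'), ('b', 'b')}

up   = lambda A: {y for y in E if all((x, y) in leq for x in A)}  # upper bounds
down = lambda A: {y for y in E if all((y, x) in leq for x in A)}  # lower bounds

# the Dedekind-MacNeille completion: all sets of the form down(up(A))
cuts = set()
for r in range(len(E) + 1):
    for A in combinations(E, r):
        cuts.add(frozenset(down(up(set(A)))))

print(sorted(sorted(c) for c in cuts))
# [['0'], ['0', 'a'], ['0', 'a', 'b'], ['0', 'b']]
```

The four cuts {0}, {0, a}, {0, b} and 퐸 form a lattice isomorphic to the four-element Boolean lattice; the cut 퐸 = {푎, 푏}↑↓ is the new top element supplying the missing supremum of 푎 and 푏.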

Chapter 3

MODULAR, DISTRIBUTIVE AND COMPLEMENTED LATTICES.

We describe a special class of lattices called modular lattices. Modular lattices are numerous in mathematics, and distributive lattices form a special class of modular lattices. In this chapter we first introduce both modular and distributive lattices and show the relationship between them. We then focus on modular lattices, after which distributive lattices are considered in detail. In the last section of this chapter we introduce complemented lattices and lattices with unique complements, and we show when a uniquely complemented lattice is distributive. Before formally introducing modular and distributive lattices we prove the following lemmas, which will put the definitions into perspective.

3.1 Modular lattices
Lemma 3.1.1: Let 퐿 be a lattice and let 푎, 푏, 푐 ∈ 퐿. Then:
(1) 푎 ∧ (푏 ∨ 푐) ≥ (푎 ∧ 푏) ∨ (푎 ∧ 푐) and 푎 ∨ (푏 ∧ 푐) ≤ (푎 ∨ 푏) ∧ (푎 ∨ 푐);
(2) 푎 ≥ 푐 implies 푎 ∧ (푏 ∨ 푐) ≥ (푎 ∧ 푏) ∨ 푐, and 푎 ≤ 푐 implies 푎 ∨ (푏 ∧ 푐) ≤ (푎 ∨ 푏) ∧ 푐.

Proof: (1) We have 푎 ∧ 푏 ≤ 푎 and 푎 ∧ 푏 ≤ 푏 ≤ 푏 ∨ 푐, so 푎 ∧ 푏 ≤ 푎 ∧ (푏 ∨ 푐); similarly 푎 ∧ 푐 ≤ 푎 ∧ (푏 ∨ 푐). Therefore 푎 ∧ (푏 ∨ 푐) ≥ (푎 ∧ 푏) ∨ (푎 ∧ 푐). Dually, 푎 ≤ 푎 ∨ 푏 and 푏 ∧ 푐 ≤ 푐 ≤ 푎 ∨ 푐, which implies 푎 ∨ (푏 ∧ 푐) ≤ (푎 ∨ 푏) ∧ (푎 ∨ 푐). This proves (1).
(2) If 푎 ≥ 푐, then by the connecting lemma 푎 ∧ 푐 = 푐; thus from (1) we get (푎 ∧ 푏) ∨ 푐 = (푎 ∧ 푏) ∨ (푎 ∧ 푐) ≤ 푎 ∧ (푏 ∨ 푐). Thus 푎 ≥ 푐 implies (푎 ∧ 푏) ∨ 푐 ≤ 푎 ∧ (푏 ∨ 푐). In a similar way we can show that 푎 ≤ 푐 implies 푎 ∨ (푏 ∧ 푐) ≤ (푎 ∨ 푏) ∧ 푐, which proves (2).

Lemma 3.1.2: Let 퐿 be a lattice. Then the following are equivalent:
(1) (for all 푎, 푏, 푐 ∈ 퐿) 푎 ≥ 푐 implies 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ 푐;
(2) (for all 푎, 푏, 푐 ∈ 퐿) 푎 ≥ 푐 implies 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐);
(3) (for all 푝, 푞, 푟 ∈ 퐿) 푝 ∧ (푞 ∨ (푝 ∧ 푟)) = (푝 ∧ 푞) ∨ (푝 ∧ 푟).
Proof: (1) ⇒ (2): Suppose that (1) holds. If 푎 ≥ 푐 then 푎 ∧ 푐 = 푐 (by the connecting lemma), so (1) gives 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ 푐 = (푎 ∧ 푏) ∨ (푎 ∧ 푐). Thus (1) ⇒ (2) holds.
(2) ⇒ (1): If 푎 ≥ 푐 then 푎 ∧ 푐 = 푐; therefore by (2) 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐) = (푎 ∧ 푏) ∨ 푐, so that (2) ⇒ (1) holds.
(2) ⇒ (3): Put 푎 = 푝, 푏 = 푞 and 푐 = 푝 ∧ 푟 in (2); since 푝 ≥ 푝 ∧ 푟, we get for all 푝, 푞, 푟 ∈ 퐿, 푝 ∧ (푞 ∨ (푝 ∧ 푟)) = (푝 ∧ 푞) ∨ (푝 ∧ (푝 ∧ 푟)) = (푝 ∧ 푞) ∨ (푝 ∧ 푟). Thus (2) ⇒ (3) is true.
(3) ⇒ (2): Assume 푎 ≥ 푐, so that 푎 ∧ 푐 = 푐. Applying (3) with 푝 = 푎, 푞 = 푏 and 푟 = 푐, we get 푎 ∧ (푏 ∨ (푎 ∧ 푐)) = (푎 ∧ 푏) ∨ (푎 ∧ 푐), which implies 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐). Thus (3) ⇒ (2) is true, which proves the lemma.
The next lemma shows that if meet distributes over join then join distributes over meet, and conversely.

Lemma 3.1.3: let 퐿 be a lattice then the following are equivalent: (1) (for all 푎 , 푏, 푐 ∈ 퐿) 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ c); (2) (for all 푝, 푞, 푟 ∈ 퐿 ) 푝 ∨ (푞 ∧ 푟) = (푝 ∨ 푞) ∧ (푝 ∨ 푟).

Proof: Assume that (1) holds. Then for all 푝, 푞, 푟 ∈ 퐿 we have:
(푝 ∨ 푞) ∧ (푝 ∨ 푟) = ((푝 ∨ 푞) ∧ 푝) ∨ ((푝 ∨ 푞) ∧ 푟) (using (1))
= 푝 ∨ ((푝 ∨ 푞) ∧ 푟) (since 푝 ∨ 푞 ≥ 푝 implies (푝 ∨ 푞) ∧ 푝 = 푝)
= 푝 ∨ (푟 ∧ (푝 ∨ 푞)) (because the meet operation is commutative)
= 푝 ∨ ((푝 ∧ 푟) ∨ (푞 ∧ 푟)) (using (1))
= (푝 ∨ (푝 ∧ 푟)) ∨ (푞 ∧ 푟) (since the join operation is associative)
= 푝 ∨ (푞 ∧ 푟) (since 푝 ≥ 푝 ∧ 푟).
Conversely, suppose that (2) holds. Then for all 푎, 푏, 푐 ∈ 퐿 we have:
(푎 ∧ 푏) ∨ (푎 ∧ 푐) = ((푎 ∧ 푏) ∨ 푎) ∧ ((푎 ∧ 푏) ∨ 푐) (by (2))
= 푎 ∧ ((푐 ∨ 푎) ∧ (푐 ∨ 푏)) (using (2) and the fact that (푎 ∧ 푏) ∨ 푎 = 푎)
= (푎 ∧ (푐 ∨ 푎)) ∧ (푐 ∨ 푏) (since the meet operation is associative)
= 푎 ∧ (푏 ∨ 푐) (since 푎 ≤ 푎 ∨ 푐 implies 푎 ∧ (푐 ∨ 푎) = 푎).
Thus (푎 ∧ 푏) ∨ (푎 ∧ 푐) = 푎 ∧ (푏 ∨ 푐).

Definition 3.1.4: Let 퐿 be a lattice. Then 퐿 is said to be modular if it satisfies the modular law: for all 푎, 푏, 푐 ∈ 퐿, 푎 ≥ 푐 implies 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ 푐; equivalently, for all 푎, 푏, 푐 ∈ 퐿, 푎 ≤ 푐 implies 푎 ∨ (푏 ∧ 푐) = (푎 ∨ 푏) ∧ 푐.

Remarks 3.1.5: From Lemma 3.1.1 we have 푎 ∧ (푏 ∨ 푐) ≥ (푎 ∧ 푏) ∨ (푎 ∧ 푐), and 푎 ≥ 푐 implies 푎 ∧ (푏 ∨ 푐) ≥ (푎 ∧ 푏) ∨ 푐; these are known as the distributive and modular inequalities respectively. This shows that any lattice is “half way” to being both modular and distributive: to establish modularity or distributivity, we need only check the reverse inequality.

We now first define two special lattices, the diamond lattice and the pentagon lattice. The diamond lattice 푀3 is shown below:

푀3 is modular; it does not, however, satisfy the distributive law. To see this, note that in the diagram of 푀3 we have 푎 ∧ (푏 ∨ 푐) = 푎 ∧ 1 = 푎 while (푎 ∧ 푏) ∨ (푎 ∧ 푐) = 0 ∨ 0 = 0, and 푎 ≠ 0; so 푀3 is not distributive. The smallest lattice which is not modular is the pentagon 푁5, shown below;

In the above figure 푎 ≥ 푐 holds; however 푎 ∧ (푏 ∨ 푐) = 푎 ∧ 1 = 푎 while (푎 ∧ 푏) ∨ 푐 = 0 ∨ 푐 = 푐, and 푎 ≠ 푐. So 푁5 is not modular.

The following lemma is useful in proving the next theorem.

Lemma 3.1.6: Let 퐿 be a lattice and 푎, 푏, 푐 ∈ 퐿, and let 푣 = 푎 ∧ (푏 ∨ 푐) and 푢 = (푎 ∧ 푏) ∨ 푐. Then 푣 > 푢 implies 푣 ∧ 푏 = 푢 ∧ 푏 and 푣 ∨ 푏 = 푢 ∨ 푏.

Proof: We have
푎 ∧ 푏 = (푎 ∧ 푏) ∧ 푏 (because 푎 ∧ 푏 ≤ 푏)
≤ [(푎 ∧ 푏) ∨ 푐] ∧ 푏 (because (푎 ∧ 푏) ∨ 푐 ≥ 푎 ∧ 푏)
= 푢 ∧ 푏
≤ 푣 ∧ 푏 (since 푢 < 푣)
= [푎 ∧ (푏 ∨ 푐)] ∧ 푏 (by the definition of 푣)
= 푎 ∧ [(푏 ∨ 푐) ∧ 푏] (since the meet operation is associative)
= 푎 ∧ 푏 (since (푏 ∨ 푐) ∧ 푏 = 푏).
Thus 푣 ∧ 푏 = 푢 ∧ 푏. Likewise we can prove that 푣 ∨ 푏 = 푢 ∨ 푏.

Proposition 3.1.7: Every sublattice of a modular lattice is modular.

Proof: Let 푀 be a sublattice of a modular lattice 퐿. If 푎, 푏, 푐 ∈ 푀 then 푎, 푏, 푐 ∈ 퐿, and since 퐿 is modular, 푎 ≥ 푐 implies 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ 푐. Thus 푀 is modular.

Example 3.1.8: The lattice 퐿(푉) of subspaces of a vector space 푉 is modular, since it is a sublattice of the modular lattice of (normal) subgroups of the additive group of 푉.

Theorem 3.1.9: (푁5Theorem) A lattice 퐿 is modular if and only if it does not contain a sublattice isomorphic to 푁5.

Proof: If 퐿 contains a sublattice isomorphic to 푁5 then 퐿 is not modular, since 푁5 is not modular. Conversely, assume that 퐿 is not modular. Then there exist 푎, 푏, 푐 ∈ 퐿 such that 푎 ≥ 푐 and 푎 ∧ (푏 ∨ 푐) > (푎 ∧ 푏) ∨ 푐. Clearly 푏 ∥ 푎 and 푏 ∥ 푐; for if 푏 ≤ 푎 then 푏 ∨ 푐 ≤ 푎, whence 푎 ∧ (푏 ∨ 푐) = 푏 ∨ 푐 = (푎 ∧ 푏) ∨ 푐, a contradiction, and the other cases are similar. Let 푣 = 푎 ∧ (푏 ∨ 푐) and 푢 = (푎 ∧ 푏) ∨ 푐, so that 푣 > 푢. We claim that 푣 ∥ 푏 and 푢 ∥ 푏. Indeed, 푏 ≤ 푣 = 푎 ∧ (푏 ∨ 푐) would give 푏 ≤ 푎, while 푣 ≤ 푏 would give 푐 ≤ 푢 < 푣 ≤ 푏, i.e. 푐 ≤ 푏; both contradict what we have just shown, and similarly 푢 ∥ 푏. Thus by Lemma 3.1.6, 푢 ∨ 푏 = 푣 ∨ 푏 and 푢 ∧ 푏 = 푣 ∧ 푏. Hence the five elements 푢, 푣, 푏, 푢 ∨ 푏 and 푢 ∧ 푏 form a sublattice isomorphic to 푁5, shown below:

Example 3.1.10: The set of normal subgroups of any group 퐺 forms a modular lattice.

Proof: Let 풩-Sub 퐺 denote the set of normal subgroups of 퐺. It is easy to see that 풩-Sub 퐺 is a poset under set containment. We first show that 풩-Sub 퐺 forms a lattice with:

퐺1 ∧ 퐺2 = 퐺1 ∩ 퐺2 and 퐺1 ∨ 퐺2 = 퐺1퐺2 = {푔1푔2 : 푔1 ∈ 퐺1 and 푔2 ∈ 퐺2}, where 퐺1, 퐺2 are any two members of 풩-Sub 퐺. For all 푥 ∈ 퐺1 ∩ 퐺2 and all 푔 ∈ 퐺 we have: 푥 ∈ 퐺1 implies 푔푥푔⁻¹ ∈ 퐺1, and 푥 ∈ 퐺2 implies 푔푥푔⁻¹ ∈ 퐺2. Thus 푔푥푔⁻¹ ∈ 퐺1 ∩ 퐺2, so 퐺1 ∩ 퐺2 ∈ 풩-Sub 퐺. Also 퐺1 ∩ 퐺2 ⊆ 퐺1 and 퐺1 ∩ 퐺2 ⊆ 퐺2, so 퐺1 ∩ 퐺2 is a lower bound of 퐺1 and 퐺2; and if 푊 is any other lower bound then 푊 ⊆ 퐺1 and 푊 ⊆ 퐺2, which implies 푊 ⊆ 퐺1 ∩ 퐺2. Therefore 퐺1 ∧ 퐺2 = 퐺1 ∩ 퐺2. Now let 푔 ∈ 퐺 and 푥 ∈ 퐺1퐺2. Then 푥 = 푔1푔2 for some 푔1 ∈ 퐺1 and 푔2 ∈ 퐺2, so 푔푥푔⁻¹ = 푔(푔1푔2)푔⁻¹ = (푔푔1푔⁻¹)(푔푔2푔⁻¹) ∈ 퐺1퐺2; thus 퐺1퐺2 ∈ 풩-Sub 퐺. Since 퐺1퐺2 contains both 퐺1 and 퐺2 and is contained in every normal subgroup containing both, 퐺1 ∨ 퐺2 = 퐺1퐺2. This proves that 풩-Sub 퐺 is a lattice.

To prove that 풩-Sub 퐺 is modular, we shall show that for 퐺1, 퐺2, 퐺3 in 풩-Sub 퐺, 퐺2 ⊆ 퐺1 implies 퐺1 ∩ (퐺2퐺3) = 퐺2(퐺1 ∩ 퐺3). Clearly 퐺2(퐺1 ∩ 퐺3) ⊆ 퐺1 ∩ (퐺2퐺3), since every lattice satisfies the modular inequality. To prove the reverse inclusion, let 푥 ∈ 퐺1 ∩ (퐺2퐺3). Then 푥 ∈ 퐺1 and 푥 = 푔2푔3 for some 푔2 ∈ 퐺2 and 푔3 ∈ 퐺3. Since 퐺2 ⊆ 퐺1 we have 푔2 ∈ 퐺1, so 푔3 = 푔2⁻¹푥 ∈ 퐺1. Also 푔3 ∈ 퐺3; hence 푔3 ∈ 퐺1 ∩ 퐺3, so 푥 = 푔2푔3 ∈ 퐺2(퐺1 ∩ 퐺3). Therefore 퐺1 ∩ (퐺2퐺3) = 퐺2(퐺1 ∩ 퐺3), showing that 풩-Sub 퐺 is modular.
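This example can be tested concretely. The following Python sketch (our own encoding: quaternions as 4-tuples, chosen because in the quaternion group Q8 every subgroup is normal) enumerates all subgroups of Q8 and checks the modular law for every triple, with meet the intersection and join the product set:

```python
from itertools import product

def qmul(p, q):
    """Multiply two quaternions represented as 4-tuples (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# The eight unit quaternions: +-1, +-i, +-j, +-k.
Q8 = [tuple(s * int(n == m) for n in range(4)) for m in range(4) for s in (1, -1)]

def generate(gens):
    """The subgroup generated by gens (closure under multiplication)."""
    sub = {(1, 0, 0, 0)} | set(gens)
    while True:
        new = {qmul(x, y) for x, y in product(sub, sub)} - sub
        if not new:
            return frozenset(sub)
        sub |= new

# In a group of order 8 every subgroup needs at most two generators.
subgroups = {generate(pair) for pair in product(Q8, Q8)}

def prod_set(G, H):
    """The product set GH = {gh : g in G, h in H}."""
    return frozenset(qmul(g, h) for g, h in product(G, H))

# Modular law for normal subgroups: G2 <= G1 implies
# G1 meet (G2 join G3) = G2 join (G1 meet G3).
for G1, G2, G3 in product(subgroups, repeat=3):
    if G2 <= G1:
        assert G1 & prod_set(G2, G3) == prod_set(G2, G1 & G3)
```

The subgroup lattice of Q8 is in fact the diamond 푀3 with an extra bottom element, so it is also a modular lattice that is not distributive.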

3.2. Some important results on Modular Lattices. In this section we provide some results which give necessary and sufficient conditions for a lattice 퐿 to be modular.

Theorem 3.2.1: A lattice 퐿 is modular if and only if the ideal lattice 픗(퐿) is modular.

Proof: Suppose first that 픗(퐿) is modular. The map 푎 ↦ (푎], sending each element to the principal ideal it generates, embeds 퐿 as a sublattice of 픗(퐿); since every sublattice of a modular lattice is modular (Proposition 3.1.7), 퐿 is modular. Conversely, suppose that 퐿 is modular and let 퐼, 퐽, 퐾 ∈ 픗(퐿) with 퐽 ⊆ 퐼. By the modular inequality it suffices to show that 퐼 ∧ (퐽 ∨ 퐾) ⊆ 퐽 ∨ (퐼 ∧ 퐾). Let 푥 ∈ 퐼 ∧ (퐽 ∨ 퐾); then 푥 ∈ 퐼 and 푥 ≤ 푗 ∨ 푘 for some 푗 ∈ 퐽 and 푘 ∈ 퐾. Since 퐽 ⊆ 퐼 we have 푥 ∨ 푗 ∈ 퐼, so (푥 ∨ 푗) ∧ 푘 ∈ 퐼 ∧ 퐾. By the modularity of 퐿 (with 푗 ≤ 푥 ∨ 푗) we get 푗 ∨ [(푥 ∨ 푗) ∧ 푘] = (푥 ∨ 푗) ∧ (푗 ∨ 푘) ≥ 푥, whence 푥 ∈ 퐽 ∨ (퐼 ∧ 퐾). Thus 픗(퐿) is modular.

Theorem 3.2.2: (Shearing Identity) A lattice 퐿 is modular if and only if for all 푥, 푦, 푧 ∈ 퐿: 푥 ∧ (푦 ∨ 푧) = 푥 ∧ [(푦 ∧ (푥 ∨ 푧)) ∨ 푧]. (4)

Proof: We first prove that modularity implies the shearing identity, using the fact that 푥 ∨ 푧 ≥ 푧. We have [푦 ∧ (푥 ∨ 푧)] ∨ 푧 = 푧 ∨ [푦 ∧ (푥 ∨ 푧)] (since join is commutative) = (푧 ∨ 푦) ∧ (푥 ∨ 푧) (by the modularity of 퐿, since 푧 ≤ 푥 ∨ 푧) = (푦 ∨ 푧) ∧ (푥 ∨ 푧) (since join is commutative). Now consider the right-hand side of (4): 푥 ∧ [(푦 ∧ (푥 ∨ 푧)) ∨ 푧] = 푥 ∧ [(푦 ∨ 푧) ∧ (푥 ∨ 푧)] = [푥 ∧ (푥 ∨ 푧)] ∧ (푦 ∨ 푧) (meet is commutative and associative) = 푥 ∧ (푦 ∨ 푧) (since 푥 ∧ (푥 ∨ 푧) = 푥). Hence (4) holds. Conversely, suppose that 퐿 satisfies the shearing identity; we show that 퐿 is modular. Let 푥, 푦, 푧 ∈ 퐿 with 푧 ≤ 푥. Then 푥 ∨ 푧 = 푥, so (4) gives 푥 ∧ (푦 ∨ 푧) = 푥 ∧ [(푦 ∧ 푥) ∨ 푧]. Since 푦 ∧ 푥 ≤ 푥 and 푧 ≤ 푥 we have (푦 ∧ 푥) ∨ 푧 ≤ 푥, whence 푥 ∧ [(푦 ∧ 푥) ∨ 푧] = (푥 ∧ 푦) ∨ 푧. Thus 푥 ∧ (푦 ∨ 푧) = (푥 ∧ 푦) ∨ 푧 and so 퐿 is modular.
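Since every distributive lattice is modular, the shearing identity can be checked concretely on the lattice of divisors of 24 under divisibility (meet = gcd, join = lcm). A short Python sketch, assuming that small example:

```python
from itertools import product
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# Divisors of 24 under divisibility: meet = gcd, join = lcm.
divs = [d for d in range(1, 25) if 24 % d == 0]

# Shearing identity (4):
# x meet (y join z) = x meet ((y meet (x join z)) join z).
for x, y, z in product(divs, repeat=3):
    assert gcd(x, lcm(y, z)) == gcd(x, lcm(gcd(y, lcm(x, z)), z))
```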

Proposition 3.2.3: A lattice 퐿 is modular if and only if for all 푥, 푦, 푧 ∈ 퐿; 푥 ∨ (푦 ∧ (푥 ∨ 푧)) = (푥 ∨ 푦) ∧ (푥 ∨ 푧) (5)

Proof: Suppose that 퐿 is modular. For all 푥, 푦, 푧 ∈ 퐿 we have 푥 ∨ 푧 ∈ 퐿 and 푥 ≤ 푥 ∨ 푧, so applying the modularity of 퐿 we get 푥 ∨ (푦 ∧ (푥 ∨ 푧)) = (푥 ∨ 푦) ∧ (푥 ∨ 푧). Conversely, suppose that (5) holds, and let 푥, 푦, 푡 ∈ 퐿 with 푥 ≤ 푡. Taking 푧 = 푡 in (5) and using 푥 ∨ 푡 = 푡, we obtain 푥 ∨ (푦 ∧ 푡) = (푥 ∨ 푦) ∧ 푡, which shows that 퐿 is modular.

Proposition 3.2.4: A lattice 퐿 is modular if and only if, for all 푥, 푦, 푧 ∈ 퐿: {푥 ∨ [푦 ∧ (푥 ∨ 푧)]} ∧ 푧 = (푥 ∨ 푦) ∧ 푧. (6)

Proof: Suppose that 퐿 is modular. Then 푥 ∨ [푦 ∧ (푥 ∨ 푧)] = (푥 ∨ 푦) ∧ (푥 ∨ 푧) (by Proposition 3.2.3). Consider the left-hand side of (6): {푥 ∨ [푦 ∧ (푥 ∨ 푧)]} ∧ 푧 = [(푥 ∨ 푦) ∧ (푥 ∨ 푧)] ∧ 푧 = (푥 ∨ 푦) ∧ [(푥 ∨ 푧) ∧ 푧] (since meet is associative) = (푥 ∨ 푦) ∧ 푧 (since 푧 ≤ 푥 ∨ 푧). Thus {푥 ∨ [푦 ∧ (푥 ∨ 푧)]} ∧ 푧 = (푥 ∨ 푦) ∧ 푧, which is (6). Conversely, suppose that (6) holds and let 푥, 푦, 푧 ∈ 퐿 with 푥 ≤ 푧. By Lemma 3.1.1 the modular inequality 푥 ∨ (푦 ∧ 푧) ≤ (푥 ∨ 푦) ∧ 푧 holds in every lattice. On the other hand, since 푥 ∨ 푧 = 푧, (6) gives (푥 ∨ 푦) ∧ 푧 = [푥 ∨ (푦 ∧ 푧)] ∧ 푧 = 푥 ∨ (푦 ∧ 푧), the last equality because 푥 ≤ 푧 and 푦 ∧ 푧 ≤ 푧 give 푥 ∨ (푦 ∧ 푧) ≤ 푧. Thus (푥 ∨ 푦) ∧ 푧 = 푥 ∨ (푦 ∧ 푧) and 퐿 is modular.

Theorem 3.2.5: A lattice 퐿 is modular if and only if, whenever 푎 ≥ 푏, 푎 ∧ 푐 = 푏 ∧ 푐 and 푎 ∨ 푐 = 푏 ∨ 푐 for some 푐 ∈ 퐿, then 푎 = 푏.

Proof: Let 퐿 be a modular lattice and let 푎, 푏, 푐 ∈ 퐿 be such that 푎 ≥ 푏, 푎 ∨ 푐 = 푏 ∨ 푐 and 푎 ∧ 푐 = 푏 ∧ 푐. Then: 푎 = 푎 ∧ (푎 ∨ 푐) (since 푎 ≤ 푎 ∨ 푐) = 푎 ∧ (푏 ∨ 푐) (because 푎 ∨ 푐 = 푏 ∨ 푐) = 푎 ∧ (푐 ∨ 푏) (since join is commutative) = (푎 ∧ 푐) ∨ 푏 (by the modularity of 퐿, since 푎 ≥ 푏) = (푏 ∧ 푐) ∨ 푏 (since 푎 ∧ 푐 = 푏 ∧ 푐) = 푏 (since 푏 ∧ 푐 ≤ 푏). This gives 푎 = 푏. Conversely, suppose that 퐿 satisfies the condition stated in the theorem. Let 푎, 푏, 푐 ∈ 퐿 with 푎 ≥ 푏, and put 푥 = 푎 ∧ (푏 ∨ 푐) and 푦 = 푏 ∨ (푎 ∧ 푐). By the modular inequality 푥 ≥ 푦. Moreover 푥 ∧ 푐 = 푎 ∧ [(푏 ∨ 푐) ∧ 푐] = 푎 ∧ 푐, (7) and, since 푏 ≤ 푎 and 푎 ∧ 푐 ≤ 푎 give 푦 ≤ 푎, we also have 푎 ∧ 푐 = (푎 ∧ 푐) ∧ 푐 ≤ 푦 ∧ 푐 ≤ 푎 ∧ 푐, hence 푦 ∧ 푐 = 푎 ∧ 푐. (8) Dually, 푦 ∨ 푐 = 푏 ∨ [(푎 ∧ 푐) ∨ 푐] = 푏 ∨ 푐, and since 푥 ≤ 푏 ∨ 푐 while 푥 ≥ 푦 ≥ 푏 gives 푥 ∨ 푐 ≥ 푏 ∨ 푐, we have 푥 ∨ 푐 = 푏 ∨ 푐 = 푦 ∨ 푐. Thus 푥 ≥ 푦, 푥 ∧ 푐 = 푦 ∧ 푐 and 푥 ∨ 푐 = 푦 ∨ 푐, so by the hypothesis 푥 = 푦, that is, 푎 ∧ (푏 ∨ 푐) = 푏 ∨ (푎 ∧ 푐). So 퐿 is modular.

3.3 Distributive Lattices

In this section we introduce an important subclass of modular lattices, namely that of the distributive lattices. These were the first class of lattices to be considered in the earliest investigations of lattice theory. Here we discuss various examples of distributive lattices, criteria for deciding whether a lattice is distributive, and the important results which relate modular and distributive lattices.

Definition 3.3.1: A lattice 퐿 is distributive if for all 푎, 푏, 푐 ∈ 퐿: 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐) or, equivalently (by Lemma 3.1.3), 푎 ∨ (푏 ∧ 푐) = (푎 ∨ 푏) ∧ (푎 ∨ 푐).

Proposition 3.3.2: Every distributive lattice is modular.

Proof: Suppose that the lattice 퐿 is distributive. Then for all 푎, 푏, 푐 ∈ 퐿, 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐). If 푎 ≥ 푐 then 푎 ∧ 푐 = 푐; therefore for all 푎, 푏, 푐 ∈ 퐿 with 푎 ≥ 푐 we have 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐) = (푎 ∧ 푏) ∨ 푐. Thus 퐿 is modular.

The converse of the above proposition is not true. For example, the diamond lattice shown below has already been seen to be modular; we show that it is not distributive.

Since 푥 ∧ (푦 ∨ 푧) = 푥 ∧ 1 = 푥 but (푥 ∧ 푦) ∨ (푥 ∧ 푧) = 0 ∨ 0 = 0 and 푥 ≠ 0. Thus 푥 ∧ (푦 ∨ 푧) ≠(푥 ∧ 푦) ∨ (푥 ∧ 푧).

Remark 3.3.3: Distributivity can be defined either by (1) or by (2) of Lemma 3.1.3. In other words, 퐿 is distributive if and only if its dual 퐿푑 is. An application of the duality principle shows that likewise 퐿 is modular if and only if 퐿푑 is.

Proposition 3.3.4: Every sublattice of a distributive lattice is a distributive lattice.

Proof: Let 퐿 be a distributive lattice and let 푃 ⊆ 퐿 be a sublattice of 퐿. To show that 푃 is distributive, suppose that 푎, 푏, 푐 ∈ 푃; then 푎, 푏, 푐 ∈ 퐿, and since 퐿 is distributive we have 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐) for all 푎, 푏, 푐 ∈ 푃. Thus 푃 is a distributive lattice.

Example 3.3.5: (ℙ(퐸); ∩, ∪, ⊆) is a distributive lattice.

Proof: We know that ℙ(퐸) is a bounded lattice with top element 퐸 and bottom element ∅. Let 퐴, 퐵, 퐶 ∈ ℙ(퐸). Then 푥 ∈ 퐴 ∩ (퐵 ∪ 퐶) if and only if 푥 ∈ 퐴 and 푥 ∈ 퐵 ∪ 퐶; if and only if (푥 ∈ 퐴 and 푥 ∈ 퐵) or (푥 ∈ 퐴 and 푥 ∈ 퐶); if and only if 푥 ∈ 퐴 ∩ 퐵 or 푥 ∈ 퐴 ∩ 퐶; if and only if 푥 ∈ (퐴 ∩ 퐵) ∪ (퐴 ∩ 퐶). Thus 퐴 ∩ (퐵 ∪ 퐶) = (퐴 ∩ 퐵) ∪ (퐴 ∩ 퐶). So ℙ(퐸) is a distributive lattice.
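For a finite 퐸 this can also be confirmed exhaustively. A Python sketch, taking 퐸 = {1, 2, 3} as an assumed example, with frozensets standing in for the members of ℙ(퐸):

```python
from itertools import chain, combinations, product

E = {1, 2, 3}
# All 2^3 = 8 subsets of E.
power = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(E), r) for r in range(len(E) + 1))]
assert len(power) == 8

for A, B, C in product(power, repeat=3):
    assert A & (B | C) == (A & B) | (A & C)   # meet distributes over join
    assert A | (B & C) == (A | B) & (A | C)   # the dual law
```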

Example 3.3.6: If 퐸 is an ordered set then the lattice 풪(퐸) of down sets of 퐸 is a distributive lattice.

Proof: 풪(퐸) is a sublattice of ℙ(퐸), and ℙ(퐸) is a distributive lattice. Thus by Proposition 3.3.4, 풪(퐸) is distributive.

Example 3.3.7: Any chain is a distributive lattice.

Proof: Let 𝒞 be any chain and let 푥, 푦, 푧 ∈ 𝒞. The inequality 푥 ∧ (푦 ∨ 푧) ≥ (푥 ∧ 푦) ∨ (푥 ∧ 푧) holds in every lattice, so we only have to prove the reverse inequality. Since 𝒞 is a chain, every pair of elements is comparable, and there are a few cases to consider. Case 1: if 푥 ≤ 푦 and 푥 ≤ 푧 then 푥 ∧ 푦 = 푥 and 푥 ∧ 푧 = 푥, so 푥 ∧ (푦 ∨ 푧) ≤ 푥 = 푥 ∨ 푥 = (푥 ∧ 푦) ∨ (푥 ∧ 푧). Case 2: if 푥 ≥ 푦 and 푥 ≥ 푧 then 푥 ∧ 푦 = 푦 and 푥 ∧ 푧 = 푧, so 푥 ∧ (푦 ∨ 푧) ≤ 푦 ∨ 푧 = (푥 ∧ 푦) ∨ (푥 ∧ 푧). Case 3: if 푥 ≤ 푦 but 푥 ≥ 푧 then 푥 ∧ 푦 = 푥 and 푥 ∧ 푧 = 푧, so 푥 ∧ (푦 ∨ 푧) ≤ 푥 = 푥 ∨ 푧 = (푥 ∧ 푦) ∨ (푥 ∧ 푧). The remaining case (푥 ≥ 푦, 푥 ≤ 푧) is symmetric. Thus in all cases 푥 ∧ (푦 ∨ 푧) = (푥 ∧ 푦) ∨ (푥 ∧ 푧).
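In a chain the meet and join are simply min and max, so the case analysis above can be replaced by an exhaustive check on a small example. A Python sketch over the chain 0 < 1 < ⋯ < 9 (our chosen example):

```python
from itertools import product

# The chain 0 < 1 < ... < 9, with meet = min and join = max.
for x, y, z in product(range(10), repeat=3):
    assert min(x, max(y, z)) == max(min(x, y), min(x, z))
    assert max(x, min(y, z)) == min(max(x, y), max(x, z))
```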

Example 3.3.8: If 퐷 is a distributive lattice and 푓: 퐷 → 퐿 is a lattice morphism, then Im 푓 is a distributive sublattice of 퐿.

Proof: We have already seen that Im 푓 is a sublattice of 퐿. Since 푓: 퐷 → 퐿 is a lattice morphism we have 푓(푦 ∨ 푧) = 푓(푦) ∨ 푓(푧) for all 푦, 푧 ∈ 퐷, and hence 푓(푥 ∧ (푦 ∨ 푧)) = 푓(푥) ∧ (푓(푦) ∨ 푓(푧)). (8) Also, by the distributivity of 퐷 we have 푓(푥 ∧ (푦 ∨ 푧)) = 푓((푥 ∧ 푦) ∨ (푥 ∧ 푧)) = 푓(푥 ∧ 푦) ∨ 푓(푥 ∧ 푧) = {푓(푥) ∧ 푓(푦)} ∨ {푓(푥) ∧ 푓(푧)}. (9) Combining (8) and (9) we get 푓(푥) ∧ (푓(푦) ∨ 푓(푧)) = {푓(푥) ∧ 푓(푦)} ∨ {푓(푥) ∧ 푓(푧)}. Thus the result follows.

Example 3.3.9: If 퐷 is a distributive lattice and 퐸 is a non-empty set, then the set 퐷퐸 of mappings 푓: 퐸 → 퐷, ordered by 푓 ≤ 푔 if and only if 푓(푥) ≤ 푔(푥) for all 푥 ∈ 퐸, is a distributive lattice.

Proof: We have already proved that 퐷퐸 is a lattice with respect to this order, so we only need to verify the distributivity of 퐷퐸. Since 퐷 is distributive, for all 푥 ∈ 퐸 and 푓, 푔, ℎ ∈ 퐷퐸 we have 푓(푥) ∧ (푔(푥) ∨ ℎ(푥)) = {푓(푥) ∧ 푔(푥)} ∨ {푓(푥) ∧ ℎ(푥)}, and this holds for every 푥 ∈ 퐸 if and only if 푓 ∧ (푔 ∨ ℎ) = (푓 ∧ 푔) ∨ (푓 ∧ ℎ). Hence 퐷퐸 is distributive.

Theorem 3.3.10: A lattice 퐿 is distributive if and only if for all 푥, 푦, 푧 ∈ 퐿: (푥 ∧ 푦) ∨ (푦 ∧ 푧) ∨ (푧 ∧ 푥) = (푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ (푧 ∨ 푥). (10)

Proof: Suppose that (10) holds; we show that 퐿 is distributive. First take 푥 ≥ 푧 in (10). The left-hand side reduces to (푥 ∧ 푦) ∨ (푦 ∧ 푧) ∨ (푧 ∧ 푥) = (푥 ∧ 푦) ∨ (푦 ∧ 푧) ∨ 푧 (since 푧 ∧ 푥 = 푧) = (푥 ∧ 푦) ∨ 푧 (since 푦 ∧ 푧 ≤ 푧), while the right-hand side reduces to (푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ (푧 ∨ 푥) = (푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ 푥 (since 푧 ∨ 푥 = 푥) = 푥 ∧ (푦 ∨ 푧) (since 푥 ≤ 푥 ∨ 푦). Hence from (10) we have (푥 ∧ 푦) ∨ 푧 = 푥 ∧ (푦 ∨ 푧), which is the modular law, so 퐿 is modular. Now write 푢 for the left-hand side of (10) and 푣 for the right-hand side, so that 푢 = 푣 and hence 푥 ∧ 푢 = 푥 ∧ 푣. On the one hand, 푥 ∧ 푢 = 푥 ∧ [(푦 ∧ 푧) ∨ (푥 ∧ 푦) ∨ (푥 ∧ 푧)] = (푥 ∧ 푦 ∧ 푧) ∨ (푥 ∧ 푦) ∨ (푥 ∧ 푧) (by the modularity of 퐿, since (푥 ∧ 푦) ∨ (푥 ∧ 푧) ≤ 푥) = (푥 ∧ 푦) ∨ (푥 ∧ 푧). On the other hand, 푥 ∧ 푣 = 푥 ∧ [(푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ (푧 ∨ 푥)] = [푥 ∧ (푥 ∨ 푦)] ∧ (푦 ∨ 푧) ∧ (푧 ∨ 푥) = [푥 ∧ (푧 ∨ 푥)] ∧ (푦 ∨ 푧) (since 푥 ∧ (푥 ∨ 푦) = 푥) = 푥 ∧ (푦 ∨ 푧) (since 푥 ≤ 푧 ∨ 푥). So 푥 ∧ (푦 ∨ 푧) = (푥 ∧ 푦) ∨ (푥 ∧ 푧), and 퐿 is distributive. Conversely, suppose that 퐿 is distributive. Applying distributivity to the right-hand side of (10) we get (푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ (푧 ∨ 푥) = {(푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ 푧} ∨ {(푥 ∨ 푦) ∧ (푦 ∨ 푧) ∧ 푥} = {(푥 ∨ 푦) ∧ 푧} ∨ {(푦 ∨ 푧) ∧ 푥} (by the connecting lemma) = (푧 ∧ 푥) ∨ (푧 ∧ 푦) ∨ (푥 ∧ 푦) ∨ (푥 ∧ 푧) = (푥 ∧ 푦) ∨ (푦 ∧ 푧) ∨ (푧 ∧ 푥). Thus distributivity implies (10).
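Identity (10) can be checked on a concrete distributive lattice. A Python sketch using the divisors of 30 (our chosen example, with meet = gcd and join = lcm):

```python
from itertools import product
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# Divisors of 30: a distributive lattice with meet = gcd, join = lcm.
divs = [d for d in range(1, 31) if 30 % d == 0]

for x, y, z in product(divs, repeat=3):
    lhs = lcm(lcm(gcd(x, y), gcd(y, z)), gcd(z, x))   # (x^y) v (y^z) v (z^x)
    rhs = gcd(gcd(lcm(x, y), lcm(y, z)), lcm(z, x))   # (xvy) ^ (yvz) ^ (zvx)
    assert lhs == rhs
```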

Theorem 3.3.11: If 퐿 is a distributive lattice, then the ideal lattice 픗(퐿) is distributive.

Proof: Suppose that 퐿 is distributive and let 퐼, 퐽, 퐾 ∈ 픗(퐿). The inclusion 퐼 ∨ (퐽 ∧ 퐾) ⊆ (퐼 ∨ 퐽) ∧ (퐼 ∨ 퐾) holds in every lattice. For the reverse inclusion, if 푥 ∈ (퐼 ∨ 퐽) ∧ (퐼 ∨ 퐾) then 푥 ∈ 퐼 ∨ 퐽 and 푥 ∈ 퐼 ∨ 퐾, which implies that there exist 푎1, 푎2 ∈ 퐼, 푏 ∈ 퐽 and 푐 ∈ 퐾 with 푥 ≤ 푎1 ∨ 푏 and 푥 ≤ 푎2 ∨ 푐. Since 퐼 is an ideal, 푎 = 푎1 ∨ 푎2 ∈ 퐼, and by distributivity 푥 ≤ (푎 ∨ 푏) ∧ (푎 ∨ 푐) = 푎 ∨ (푏 ∧ 푐). Thus 푥 ∈ 퐼 ∨ (퐽 ∧ 퐾), and so (퐼 ∨ 퐽) ∧ (퐼 ∨ 퐾) ⊆ 퐼 ∨ (퐽 ∧ 퐾). This gives 퐼 ∨ (퐽 ∧ 퐾) = (퐼 ∨ 퐽) ∧ (퐼 ∨ 퐾), so 픗(퐿) is distributive.

Definition 3.3.12: The direct product 푃푄 of two partially ordered sets 푃 and 푄 is the set of all couples (푥, 푦) with 푥 ∈ 푃, 푦 ∈ 푄 partially ordered by the rule that (푥1, 푦1) ≤ (푥2, 푦2) if and only if 푥1 ≤ 푥2 in 푃 and 푦1 ≤ 푦2 in 푄.

Proposition 3.3.13: The direct product 퐿푀 of any two distributive lattices is a distributive lattice.

Proof: For any two elements (푥1, 푦1) and (푥2, 푦2) in 퐿푀, the element (푥1 ∨ 푥2, 푦1 ∨ 푦2) dominates both (푥1, 푦1) and (푥2, 푦2), hence is an upper bound for the pair. Let (푢, 푣) be any upper bound for the pair (푥1, 푦1), (푥2, 푦2); then 푢 ≥ 푥1, 푥2 implies 푢 ≥ 푥1 ∨ 푥2, and likewise 푣 ≥ 푦1 ∨ 푦2. Hence (푥1 ∨ 푥2, 푦1 ∨ 푦2) = (푥1, 푦1) ∨ (푥2, 푦2). Dually we can show that:

(푥1 ∧ 푥2, 푦1 ∧ 푦2) = (푥1, 푦1) ∧ (푥2, 푦2), which proves that 퐿푀 is a lattice. To prove distributivity, let 푥, 푦, 푧 ∈ 퐿푀; then 푥 = (푥1, 푦1), 푦 = (푥2, 푦2) and 푧 = (푥3, 푦3) where 푥1, 푥2, 푥3 ∈ 퐿 and 푦1, 푦2, 푦3 ∈ 푀.

Now, 푥 ∧ (푦 ∨ 푧) = (푥1, 푦1) ∧ [(푥2, 푦2) ∨ (푥3, 푦3)] (by assumption)

= (푥1, 푦1) ∧ (푥2 ∨ 푥3, 푦2 ∨ 푦3) (by definition)

= (푥1 ∧ (푥2 ∨ 푥3), 푦1 ∧ (푦2 ∨ 푦3)) (by definition)

= ((푥1 ∧ 푥2) ∨ (푥1 ∧ 푥3), (푦1 ∧ 푦2) ∨ (푦1 ∧ 푦3)) (since 퐿, 푀 are distributive)

= [(푥1, 푦1) ∧ (푥2, 푦2)] ∨ [(푥1, 푦1) ∧ (푥3, 푦3)] (by definition) = (푥 ∧ 푦) ∨ (푥 ∧ 푧). So 퐿푀 is distributive.

Proposition 3.3.14: A lattice 퐿 is distributive if and only if 푥 ∨ (푦 ∧ 푧) ≥ (푥 ∨ 푦) ∧ 푧 for all 푥, 푦, 푧 ∈ 퐿.

Proof: Suppose that 퐿 is distributive. Then for all 푥, 푦, 푧 ∈ 퐿 we have 푥 ∨ (푦 ∧ 푧) = (푥 ∨ 푦) ∧ (푥 ∨ 푧) ≥ (푥 ∨ 푦) ∧ 푧 (since 푥 ∨ 푧 ≥ 푧). Conversely, suppose that 푥 ∨ (푦 ∧ 푧) ≥ (푥 ∨ 푦) ∧ 푧 holds for all 푥, 푦, 푧 ∈ 퐿; we show that 퐿 is distributive. Since 푥 ∨ (푦 ∧ 푧) ≤ (푥 ∨ 푦) ∧ (푥 ∨ 푧) holds in every lattice, we only have to show the reverse inequality. Applying the hypothesis with 푥 ∨ 푧 in place of 푧 gives (푥 ∨ 푦) ∧ (푥 ∨ 푧) ≤ 푥 ∨ (푦 ∧ (푥 ∨ 푧)), and applying it again (with 푧 in place of 푥, 푥 in place of 푦 and 푦 in place of 푧) gives 푦 ∧ (푥 ∨ 푧) = (푥 ∨ 푧) ∧ 푦 ≤ 푥 ∨ (푧 ∧ 푦). Hence (푥 ∨ 푦) ∧ (푥 ∨ 푧) ≤ 푥 ∨ [푥 ∨ (푦 ∧ 푧)] = 푥 ∨ (푦 ∧ 푧). This implies 푥 ∨ (푦 ∧ 푧) = (푥 ∨ 푦) ∧ (푥 ∨ 푧). Hence 퐿 is distributive.

Theorem 3.3.15: A lattice 퐿 is distributive if and only if it has no sublattice of either of the forms given below. Equivalently, 퐿 is distributive if and only if 푧 ∧ 푥 = 푧 ∧ 푦 and 푧 ∨ 푥 = 푧 ∨ 푦 implies 푥 = 푦.

Proof: Observe first that the two statements are equivalent: if 푥 ∧ 푧 = 푦 ∧ 푧 and 푥 ∨ 푧 = 푦 ∨ 푧 with 푥 ≠ 푦, then the two lattices shown above arise from the cases 푥 ∥ 푦 and 푥 < 푦. Now suppose that 퐿 is distributive and that there exist 푥, 푦, 푧 ∈ 퐿 such that 푥 ∧ 푧 = 푦 ∧ 푧 and 푥 ∨ 푧 = 푦 ∨ 푧. Then we have: 푥 = 푥 ∧ (푥 ∨ 푧) (since 푥 ≤ 푥 ∨ 푧) = 푥 ∧ (푦 ∨ 푧) (since 푧 ∨ 푥 = 푧 ∨ 푦) = (푥 ∧ 푦) ∨ (푥 ∧ 푧) (by the distributivity of 퐿) = (푥 ∧ 푦) ∨ (푦 ∧ 푧) (since 푥 ∧ 푧 = 푦 ∧ 푧) = 푦 ∧ (푥 ∨ 푧) (by the distributivity of 퐿) = 푦 ∧ (푦 ∨ 푧) (since 푧 ∨ 푥 = 푧 ∨ 푦) = 푦 (since 푦 ∨ 푧 ≥ 푦). Consequently, 퐿 has no sublattice of either of the above forms. Conversely, if 퐿 has no sublattice of either of the above forms then in particular it contains no pentagon, so by Theorem 3.1.9 퐿 must be modular. Given 푎, 푏, 푐 ∈ 퐿, define 푎⋆ = (푏 ∨ 푐) ∧ 푎, 푏⋆ = (푐 ∨ 푎) ∧ 푏 and 푐⋆ = (푎 ∨ 푏) ∧ 푐. Then 푎⋆ ∧ 푐⋆ = [(푏 ∨ 푐) ∧ 푎] ∧ [(푎 ∨ 푏) ∧ 푐] = (푏 ∨ 푐) ∧ (푎 ∧ 푐) (since 푎 ∧ 푐 ≤ 푎 ∨ 푏) = 푎 ∧ 푐 (since 푎 ∧ 푐 ≤ 푏 ∨ 푐). Similarly 푏⋆ ∧ 푐⋆ = 푏 ∧ 푐 and 푎⋆ ∧ 푏⋆ = 푎 ∧ 푏. Now let 푑 = (푎 ∨ 푏) ∧ (푏 ∨ 푐) ∧ (푐 ∨ 푎); then 푎⋆ ∨ 푐⋆ = 푎⋆ ∨ [(푎 ∨ 푏) ∧ 푐] = (푎⋆ ∨ 푐) ∧ (푎 ∨ 푏) (by the modularity of 퐿, since 푎⋆ ≤ 푎 ∨ 푏) = [((푏 ∨ 푐) ∧ 푎) ∨ 푐] ∧ (푎 ∨ 푏) (since 푎⋆ = (푏 ∨ 푐) ∧ 푎) = [(푏 ∨ 푐) ∧ (푎 ∨ 푐)] ∧ (푎 ∨ 푏) (by modularity, since 푐 ≤ 푏 ∨ 푐) = 푑. By symmetry we deduce that 푎⋆ ∨ 푐⋆ = 푎⋆ ∨ 푏⋆ = 푏⋆ ∨ 푐⋆ = 푑. We now observe that 푐⋆ ∨ [푎⋆ ∨ (푏 ∧ 푐)] = 푑 ∨ (푏 ∧ 푐) = 푑 and 푐⋆ ∧ [푎⋆ ∨ (푏 ∧ 푐)] = (푐⋆ ∧ 푎⋆) ∨ (푏 ∧ 푐) (by modularity, since 푏 ∧ 푐 ≤ 푐⋆) = (푎 ∧ 푐) ∨ (푏 ∧ 푐), and by symmetry 푐⋆ ∨ [푏⋆ ∨ (푎 ∧ 푐)] = 푑 and 푐⋆ ∧ [푏⋆ ∨ (푎 ∧ 푐)] = (푏 ∧ 푐) ∨ (푎 ∧ 푐). By the hypothesis we deduce that 푎⋆ ∨ (푏 ∧ 푐) = 푏⋆ ∨ (푎 ∧ 푐), whence 푎⋆ ∨ (푏 ∧ 푐) = 푎⋆ ∨ (푏 ∧ 푐) ∨ 푏⋆ ∨ (푎 ∧ 푐) = 푎⋆ ∨ 푏⋆ = 푑. It follows from this that (푎 ∨ 푏) ∧ 푐 = 푐⋆ = 푐⋆ ∧ 푑 = 푐⋆ ∧ [푎⋆ ∨ (푏 ∧ 푐)] = (푐⋆ ∧ 푎⋆) ∨ (푏 ∧ 푐) = (푎 ∧ 푐) ∨ (푏 ∧ 푐). Thus 퐿 is distributive.

Example 3.3.16: The set 푁 = {1,2,3, … } ordered by divisibility is a distributive lattice.

Solution: We know that (푁; |) is a lattice with sup{푚, 푛} = lcm{푚, 푛} and inf{푚, 푛} = gcd{푚, 푛}. To show that 푁 is distributive we use the above theorem. Let 푥, 푦, 푧 ∈ 푁 be such that 푥 ∨ 푦 = 푧 ∨ 푦 and 푥 ∧ 푦 = 푧 ∧ 푦, that is, lcm{푥, 푦} = lcm{푧, 푦} and gcd{푥, 푦} = gcd{푧, 푦}. Since lcm{푥, 푦} = 푥푦/gcd{푥, 푦}, this gives 푥푦/gcd{푥, 푦} = 푧푦/gcd{푧, 푦}, and as the two gcd's are equal we obtain 푥푦 = 푧푦, so 푥 = 푧. Thus by Theorem 3.3.15, 푁 is distributive.
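The cancellation step in this argument can be tested directly. A Python sketch over the divisors of 60 (our chosen example):

```python
from itertools import product
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

divs = [d for d in range(1, 61) if 60 % d == 0]

# Cancellation: equal joins and equal meets with the same y force x = z.
for x, y, z in product(divs, repeat=3):
    if lcm(x, y) == lcm(z, y) and gcd(x, y) == gcd(z, y):
        assert x == z
```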

Proposition 3.3.17: A lattice 퐿 is distributive if and only if, for all ideals 퐼, 퐽 of 퐿: 퐼 ∨ 퐽 = {푖 ∨ 푗 : 푖 ∈ 퐼, 푗 ∈ 퐽}.

Proof: Suppose 퐿 is distributive and take 푡 ∈ 퐼 ∨ 퐽; then 푡 ≤ 푖 ∨ 푗 for some 푖 ∈ 퐼 and 푗 ∈ 퐽. By the distributivity of 퐿 we have 푡 = 푡 ∧ (푖 ∨ 푗) = (푡 ∧ 푖) ∨ (푡 ∧ 푗) = 푖1 ∨ 푗1, where 푖1 = 푡 ∧ 푖 ∈ 퐼 and 푗1 = 푡 ∧ 푗 ∈ 퐽, since 퐼 and 퐽 are ideals of 퐿. Thus 푡 = 푖1 ∨ 푗1 with 푖1 ∈ 퐼 and 푗1 ∈ 퐽, which gives 퐼 ∨ 퐽 = {푖 ∨ 푗 : 푖 ∈ 퐼, 푗 ∈ 퐽}. For the converse, suppose that 퐼 ∨ 퐽 = {푖 ∨ 푗 : 푖 ∈ 퐼, 푗 ∈ 퐽} for all ideals, and suppose, if possible, that 퐿 is not distributive, so that there exist three elements 푎, 푏, 푐 forming a diamond 푀3 (with bottom 0 and top 1, say). Consider the principal ideals 퐼 = (푏] and 퐽 = (푐]. We have 푎 ≤ 푏 ∨ 푐, so 푎 ∈ 퐼 ∨ 퐽, and hence 푎 = 푖 ∨ 푗 for some 푖 ≤ 푏 and 푗 ≤ 푐. Then 푖 ≤ 푎 ∧ 푏 = 0 and 푗 ≤ 푎 ∧ 푐 = 0, so 푎 = 0, which is a contradiction. Hence 퐿 is distributive.

3.4: Complemented lattices

Before defining the complemented lattice, we first prove the following lemma;

Lemma 3.4.1: Let (퐿, ≤) be a lattice with universal upper and lower bounds 1 and 0. For any element 푎 ∈ 퐿: 푎 ∨ 1 = 1, 푎 ∧ 1 = 푎, 푎 ∨ 0 = 푎 and 푎 ∧ 0 = 0.

Proof: We know that for any 푎, 푏 ∈ 퐿, 푎 ≤ 푎 ∨ 푏 and 푎 ∧ 푏 ≤ 푎. (1) So by using (1) we get 1 ≤ 푎 ∨ 1. (2) Since 1 is the upper bound of 퐿 and 푎 ∨ 1 ∈ 퐿, we have 푎 ∨ 1 ≤ 1. (3) From (2) and (3) we get 푎 ∨ 1 = 1. Next, by (1), 푎 ∧ 1 ≤ 푎. (4) Also 푎 ≤ 푎 and 푎 ≤ 1, so 푎 is a lower bound of {푎, 1} and therefore 푎 ≤ 푎 ∧ 1. (5) Thus from (4) and (5) we get 푎 ∧ 1 = 푎. Similarly we can prove that 푎 ∨ 0 = 푎 and 푎 ∧ 0 = 0.

Complemented elements: Definition 3.4.2: If 퐿 is a bounded lattice then we say that 푦 ∈ 퐿 is a complement of 푥 ∈ 퐿 if 푥 ∧ 푦 = 0 and 푥 ∨ 푦 = 1. In this case we say 푥 is complemented element of 퐿.

Since the meet and join operations are commutative, 푥 ∧ 푦 = 0 if and only if 푦 ∧ 푥 = 0, and 푥 ∨ 푦 = 1 if and only if 푦 ∨ 푥 = 1. Thus the definition of complement is symmetric in 푥 and 푦, so that 푦 is a complement of 푥 if and only if 푥 is a complement of 푦. We conclude that every complement of a complemented element is itself complemented.

Example 3.4.3: In each of the lattices

the first of which is non-modular and the second of which is modular but not distributive. In each, the elements 푥 and 푦 are complements of 푧, so 푧 has two complements; in general a complement need not be unique. Also, from the above lemma we have 0 ∧ 1 = 0 and 0 ∨ 1 = 1, which shows that 0 and 1 are complements of each other. It is easy to show that 1 is the only complement of 0: if 푐 ∈ 퐿 with 푐 ≠ 1 is a complement of 0, then 0 ∧ 푐 = 0 and 0 ∨ 푐 = 1; but since 0 ≤ 푐 we have 0 ∨ 푐 = 푐, whence 푐 = 1, a contradiction. In a similar manner we can show that 0 is the only complement of 1.
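The non-uniqueness of complements can be seen computationally in the pentagon 푁5. A Python sketch (the element names 0 < c < a < 1, with b incomparable to a and c, are our own labels):

```python
elems = ['0', 'a', 'b', 'c', '1']
# Order of N5: 0 < c < a < 1 and 0 < b < 1 (already transitively closed).
le = ({(x, x) for x in elems} | {('0', t) for t in 'abc1'}
      | {(t, '1') for t in 'abc'} | {('c', 'a')})

def meet(x, y):
    lower = [z for z in elems if (z, x) in le and (z, y) in le]
    return next(z for z in lower if all((w, z) in le for w in lower))

def join(x, y):
    upper = [z for z in elems if (x, z) in le and (y, z) in le]
    return next(z for z in upper if all((z, w) in le for w in upper))

complements = {x: [y for y in elems if meet(x, y) == '0' and join(x, y) == '1']
               for x in elems}
assert sorted(complements['b']) == ['a', 'c']   # b has two complements
assert complements['0'] == ['1'] and complements['1'] == ['0']
```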

Definition 3.4.4: We say that a lattice 퐿 is complemented if every element of 퐿 is complemented.

Example 3.4.5: Let 푉 be a vector space and consider the lattice 퐿(푉) of subspaces of 푉. We have seen that 퐿(푉) is modular (Example 3.1.8). It is also complemented. To establish this we observe that if 푊 is a subspace of 푉, then any basis of 푊 can be extended to a basis of 푉 by means of a set 퐴 = {푥훼 : 훼 ∈ 퐼} of elements of 푉. The subspace generated by 퐴 then serves as a complement of 푊 in 퐿(푉).

Definition 3.4.6: We say that a lattice 퐿 is relatively complemented if every interval [푥, 푦] of 퐿 is complemented. A complement in [푥, 푦] of 푎 ∈ [푥, 푦] is called relative complement of 푎.

Theorem 3.4.7: Any complemented modular lattice is relatively complemented.

Proof: Let 퐿 be a complemented modular lattice. Given any [푎, 푏] ⊆ 퐿 and 푥 ∈ [푎, 푏], let 푦 be the complement of 푥 in 퐿. Consider the element 푧 = 푏 ∧ (푎 ∨ 푦) = (푏 ∧ 푦) ∨ 푎 (since 퐿 is modular) Then clearly 푧 ≥ 푎 and 푧 ≤ 푏, so that 푧 ∈ [푎, 푏] and by modularity, 푥 ∧ 푧 = 푥 ∧ (푦 ∨ 푎) = (푥 ∧ 푦) ∨ 푎 = 0 ∨ 푎 = 푎; 푥 ∨ 푧 = 푥 ∨ (푏 ∧ 푦) = (푥 ∨ 푦) ∧ 푏 = 1 ∧ 푏 = 푏. Thus 푧 is complement of 푥 in [푎, 푏].

Theorem 3.4.8: In a distributive lattice all complements and relative complements are unique.

Proof: We first show that if an element in a distributive lattice has a complement then this complement is unique. Suppose that an element 푎 has two complements 푏 and 푐. We have 푏 = 푏 ∧ 1 (since 1 ≥ 푏) = 푏 ∧ (푎 ∨ 푐) (since 푐 is complement of 푎) = (푏 ∧ 푎) ∨ (푏 ∧ 푐) (by definition of distributivity) = 0 ∨ (푏 ∧ 푐) (since 푏 ∧ 푎 = 0) = (푎 ∧ 푐) ∨ (푏 ∧ 푐) (since 푎 ∧ 푐 = 0) = (푎 ∨ 푏) ∧ 푐 (by definition of distributive lattice) = 1 ∧ 푐 (since 푎 ∨ 푏 = 1) = 푐 (since 1 ≥ 푐).

Uniqueness of relative complements:

Given any interval [푎, 푏] of a distributive lattice 퐿 and 푥 ∈ [푎, 푏], let 푝 and 푞 both be relative complements of 푥 in [푎, 푏]. We have 푝 = 푝 ∧ 푏 (since 푝 ≤ 푏) = 푝 ∧ (푥 ∨ 푞) (since 푞 is a relative complement of 푥, so 푥 ∨ 푞 = 푏) = (푝 ∧ 푥) ∨ (푝 ∧ 푞) (by the distributivity of 퐿) = 푎 ∨ (푝 ∧ 푞) (since 푝 is a relative complement of 푥, so 푝 ∧ 푥 = 푎) = (푥 ∧ 푞) ∨ (푝 ∧ 푞) (since 푞 is a relative complement of 푥, so 푥 ∧ 푞 = 푎) = (푥 ∨ 푝) ∧ 푞 (by the distributivity of 퐿) = 푏 ∧ 푞 (since 푝 is a relative complement of 푥, so 푥 ∨ 푝 = 푏) = 푞 (since 푞 ≤ 푏). Thus relative complements are unique.
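Uniqueness of complements can be illustrated on the distributive lattice of divisors of 30, where the complement of 푑 turns out to be 30/푑 (a sketch with our chosen example; 30 being squarefree is what makes every divisor complemented):

```python
from math import gcd

def lcm(x, y):
    return x * y // gcd(x, y)

# Divisors of 30 (squarefree): bottom 1, top 30, meet = gcd, join = lcm.
divs = [d for d in range(1, 31) if 30 % d == 0]

for d in divs:
    comps = [e for e in divs if gcd(d, e) == 1 and lcm(d, e) == 30]
    assert comps == [30 // d]   # exactly one complement, namely 30/d
```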

3.5 Uniquely complemented lattice

In view of the above theorem, it is natural to consider complemented lattices in which complements are unique. In such a lattice we shall use the notation 푥′ to denote the unique complement of 푥. Denoting likewise the complement of 푥′ by 푥′′, we have by the symmetry of the definition that 푥 is the complement of 푥′, hence by uniqueness 푥′′ = 푥. There is an interesting history concerning these lattices: it was long suspected that every uniquely complemented lattice is distributive. However, this is not the case. In fact, R. P. Dilworth established the remarkable result that every lattice can be embedded in a uniquely complemented lattice. The following results illustrate the difficulty of finding uniquely complemented lattices that are not distributive.

Definition 3.5.1: If 퐿 is a lattice with bottom element 0, then by an atom of 퐿 we mean an element 푎 such that 0 ≺ 푎. If for every 푥 ∈ 퐿\{0} there is an atom 푎 such that 푎 ≤ 푥, then we say that 퐿 is atomic. Dually, we have the notion of a coatom and that of 퐿 being coatomic.

Theorem 3.5.2: (Birkhoff -Ward) Every uniquely complemented atomic lattice is distributive.

Proof: We establish the proof by means of the following non-trivial observations. (1) If 푥 > 푦 then there is an atom 푝 such that 푝 ≤ 푥 and 푝 ∧ 푦 = 0.

In fact, 푥 > 푦 gives 푦′ ∨ 푥 = 1. We cannot, therefore, have 푦′ ∧ 푥 = 0: for if 푦′ ∧ 푥 = 0 then 푥 would be a complement of 푦′, so 푥 = 푦′′ = 푦, which is a contradiction. Thus 푦′ ∧ 푥 > 0, and since 퐿 is atomic there is an atom 푝 such that 푝 ≤ 푥 ∧ 푦′, whence 푝 ≤ 푥 and 푝 ≤ 푦′, the latter giving 푝 ∧ 푦 = 0.

(2) If 푥 and 푦 contain the same atoms then 푥 = 푦. In fact, if 푎 is an atom with 푎 ≤ 푥 and 푎 ≤ 푦, then 푎 ≤ 푥 ∧ 푦; thus if 푥 and 푦 contain the same atoms, so do 푥 and 푥 ∧ 푦. Suppose that 푥 ∧ 푦 < 푥. Then by (1) there would exist an atom 푝 contained in 푥 but with 푝 ∧ (푥 ∧ 푦) = 0, so 푝 is not contained in 푥 ∧ 푦. This contradiction shows that 푥 ∧ 푦 = 푥, whence 푥 ≤ 푦. Similarly we have 푦 ≤ 푥, and so 푥 = 푦.

(3) The complement of an atom is a coatom. Let 푝 be an atom; then 푝 ≠ 0. We claim that 푝′ ≠ 1: for if 푝′ = 1 then 푝 = 푝′′ = 0, which is a contradiction. Suppose that 푝′ < 푥 < 1. Then 푝 ∨ 푥 ≥ 푝 ∨ 푝′ = 1, so 푝 ∨ 푥 = 1. But 푝 being an atom, either 푝 ∧ 푥 = 푝 or 푝 ∧ 푥 = 0. The former gives 푝 ≤ 푥, which implies 푥 = 푝 ∨ 푥 = 1, a contradiction; the latter makes 푥 a complement of 푝, so 푥 = 푝′, again a contradiction. Hence 푝′ is a coatom.

(4) If 푝 and 푞 are distinct atoms then 푞 ≤ 푝’. By (3), both 푝′and 푞′ are coatoms. Suppose that 푞 ≰ 푝′. Then necessarily 푞 ∨ 푝′ = 1 and 푞 ∧ 푝′ = 0 which is a contradiction as 푞 = 푝′′ = 푝.

(5) If 푝 is an atom then 푝 ∧ 푥 = 0 if and only if 푥 ≤ 푝′. If 푝 ∧ 푥 = 0 then 푝 ≰ 푥 so every atom 푞 under 푥 is distinct from 푝. Thus, by (4) every atom under 푥 is an atom under 푥 ∧ 푝′. Since the converse is also true, it follows by (2) that 푥 = 푥 ∧ 푝′, whence 푥 ≤ 푝′.

(6) If 푝 is an atom then 푝 ≤ 푥 ∨ 푦 if and only if 푝 ≤ 푥 or 푝 ≤ 푦.

Suppose that 푝 ≤ 푥 ∨ 푦. Since 푝 is an atom we have either 푝 ∧ 푥 = 푝 or 푝 ∧ 푥 = 0, i.e. either 푝 ≤ 푥 or, by (5), 푥 ≤ 푝′. Likewise, either 푝 ≤ 푦 or 푦 ≤ 푝′. If both 푥 ≤ 푝′ and 푦 ≤ 푝′ then 푥 ∨ 푦 ≤ 푝′, which gives 푝 ≤ 푥 ∨ 푦 ≤ 푝′ and the contradiction 푝 = 푝 ∧ 푝′ = 0. Thus we must have either 푝 ≤ 푥 or 푝 ≤ 푦. The converse implication is clear.

With these technical details to hand suppose now that 𝒜 is the set of atoms of 퐿. For every 푥 ∈

퐿 let 𝒜푥 = {푎 ∈ 𝒜: 푎 ≤ 푥}, and consider the mapping 푓: 퐿 → ℙ(𝒜) given by the prescription 푓(푥) = 𝒜푥 . It is clear from (2) that 푓 is injective. Moreover, using (6) we see that 𝒜푥∨푦 = 𝒜푥 ∪ 𝒜푦 and so 푓 is a join morphism. Now clearly for 푝 ∈ 𝒜, we have 푝 ≤ 푥 ∧ 푦 if and only if 푝 ≤ 푥 and 푝 ≤ 푦. It follows that 𝒜푥∧푦 = 𝒜푥 ∩ 𝒜푦 and so 푓 is a lattice morphism. Thus 퐿 ≃ 퐼푚푓 where 퐼푚푓 is a sublattice of the distributive lattice ℙ(𝒜). Consequently, 퐿 is distributive.

Corollary 3.5.3: If 퐿 is complete, then 퐿 ≃ ℙ(𝒜).

Proof: Suppose that 퐿 is complete, let 푁 = {푝푖 : 푖 ∈ 퐼} where each 푝푖 ∈ 𝒜, and let 푞 be an atom with 푞 ≤ ⋁푖∈퐼 푝푖. Then necessarily 푞 = 푝푖 for some 푖 ∈ 퐼: for suppose that 푞 ≠ 푝푖 for all 푖 ∈ 퐼; then (4) gives 푝푖 ≤ 푞′ for every 푖, whence we have the contradiction 푞 ≤ ⋁푖∈퐼 푝푖 ≤ 푞′. We conclude that 푁 = 𝒜푥, where 푥 = ⋁푖∈퐼 푝푖. Hence 푓 is also surjective, and 퐿 ≃ ℙ(𝒜).

Definition 3.5.4: By the width of the lattice 퐿 we mean the supremum of the cardinalities of the antichains in 퐿.

Corollary 3.5.5: If a uniquely complemented lattice satisfies descending chain condition or ascending chain condition, then it is distributive.

Proof: Under the descending chain condition every non-zero element of the lattice contains an atom, so 퐿 is atomic; thus by the Birkhoff–Ward theorem 퐿 is distributive. The case of the ascending chain condition follows dually, since 퐿 is then coatomic and the dual of a uniquely complemented lattice is uniquely complemented.

Corollary 3.5.6: Every uniquely complemented lattice of finite width is distributive.

Proof: Let 퐿 be a uniquely complemented lattice of finite width. We show that 퐿 satisfies the descending chain condition. Suppose this is not so, and let 푥1 > 푥2 > 푥3 > ⋯ be an infinite descending chain. Observe first that for each 푖, 푥푖 ∧ 푥′푖+1 ≠ 0: for if 푥푖 ∧ 푥′푖+1 = 0 then, since also 푥푖 ∨ 푥′푖+1 ≥ 푥푖+1 ∨ 푥′푖+1 = 1, the element 푥′푖+1 would have 푥푖 as a complement in addition to 푥푖+1, which is not possible as 퐿 is uniquely complemented and 푥푖 ≠ 푥푖+1. Now for each 푖, define

푦푖 = 푥푖 ∧ 푥′푖+1

Then for 푖 < 푗 we have 푦푗 ≤ 푥푗 ≤ 푥푖+1, hence 푦푖 ∧ 푦푗 ≤ 푥′푖+1 ∧ 푥푖+1 = 0. Since each 푦푘 ≠ 0, the elements 푦1, 푦2, … are pairwise incomparable (if one were below another, it would equal their meet, which is 0); i.e. they form an infinite antichain, which contradicts our hypothesis.

Corollary 3.5.7: A finite uniquely complemented lattice is distributive.

Proof: Since 퐿 is finite, therefore every anti-chain in 퐿 is finite. Thus 퐿 is uniquely complemented lattice of finite width, so by above corollary 퐿 is distributive.

Theorem 3.5.8: In a uniquely complemented lattice 퐿 the following properties of complementation are equivalent: (1) for all 푥, 푦 ∈ 퐿, 푥 ≤ 푦 implies 푦′ ≤ 푥′; (2) for all 푥, 푦 ∈ 퐿, (푥 ∧ 푦)′ = 푥′ ∨ 푦′; (3) for all 푥, 푦 ∈ 퐿, (푥 ∨ 푦)′ = 푥′ ∧ 푦′. Moreover, each of them implies that 퐿 is distributive.

Proof: (1) ⇒ (2): Suppose that (1) holds, that is, 푥 ↦ 푥′ is antitone. Then from 푥 ∧ 푦 ≤ 푥, 푦 we obtain (푥 ∧ 푦)′ ≥ 푥′ ∨ 푦′, and consequently 푥 ∧ 푦 = (푥 ∧ 푦)′′ ≤ (푥′ ∨ 푦′)′. Likewise 푥′, 푦′ ≤ 푥′ ∨ 푦′ gives (푥′ ∨ 푦′)′ ≤ 푥′′ ∧ 푦′′ = 푥 ∧ 푦. Hence 푥 ∧ 푦 = (푥′ ∨ 푦′)′, and consequently (푥 ∧ 푦)′ = (푥′ ∨ 푦′)′′ = 푥′ ∨ 푦′. (2) ⇒ (1): Suppose that (푥 ∧ 푦)′ = 푥′ ∨ 푦′ for all 푥, 푦 ∈ 퐿, and let 푥 ≤ 푦. Then 푥 ∧ 푦 = 푥, so that 푥′ = (푥 ∧ 푦)′ = 푥′ ∨ 푦′ ≥ 푦′. A dual proof establishes the equivalence of (1) and (3). As for the distributivity, suppose that any one of the above conditions holds. Then we have the property that 푥 ≤ 푦 implies 푦 = 푥 ∨ (푥′ ∧ 푦) (4) and 푥 = (푥 ∨ 푦′) ∧ 푦. (5) In fact, if 푥 ≤ 푦 then 푦 ≥ 푥 ∨ (푥′ ∧ 푦), so [푥 ∨ (푥′ ∧ 푦)]′ ∨ 푦 ≥ [푥 ∨ (푥′ ∧ 푦)]′ ∨ [푥 ∨ (푥′ ∧ 푦)] = 1, and since 1 is the top element, [푥 ∨ (푥′ ∧ 푦)]′ ∨ 푦 = 1; and by (3), [푥 ∨ (푥′ ∧ 푦)]′ ∧ 푦 = 푥′ ∧ (푥′ ∧ 푦)′ ∧ 푦 = (푥′ ∧ 푦) ∧ (푥′ ∧ 푦)′ = 0. Thus 푦 is the complement of [푥 ∨ (푥′ ∧ 푦)]′, so 푦 = [푥 ∨ (푥′ ∧ 푦)]′′ = 푥 ∨ (푥′ ∧ 푦), and (4) holds. As for (5), if 푥 ≤ 푦 then 푦′ ≤ 푥′ by (1), so by (4) 푥′ = 푦′ ∨ (푦′′ ∧ 푥′), which on taking complements (using (2) and (3)) gives 푥 = 푥′′ = 푦 ∧ (푦′ ∨ 푥) = (푥 ∨ 푦′) ∧ 푦, so (5) holds. We now use (4) and (5) to show that 퐿 is distributive. For this purpose suppose that 푎, 푏, 푐 ∈ 퐿 are such that 푎 ∨ 푐 = 푏 ∨ 푐 = 훼 and 푎 ∧ 푐 = 푏 ∧ 푐 = 훽. On the one hand, 푎 ∨ 훼′ ∨ (푐 ∧ 훽′) = 푎 ∨ 훽 ∨ 훼′ ∨ (푐 ∧ 훽′) (since 훽 ≤ 푎); now since 훽 ≤ 푐, (4) gives 푐 = 훽 ∨ (훽′ ∧ 푐), and therefore 푎 ∨ 훼′ ∨ (푐 ∧ 훽′) = 푎 ∨ 푐 ∨ 훼′ = 훼 ∨ 훼′ = 1, and similarly 푏 ∨ 훼′ ∨ (푐 ∧ 훽′) = 1. On the other hand, 푎 ∧ [훼′ ∨ (푐 ∧ 훽′)] = 푎 ∧ 푐 ∧ 훽′ = 훽 ∧ 훽′ = 0, and similarly 푏 ∧ [훼′ ∨ (푐 ∧ 훽′)] = 0. Thus, by the uniqueness of complements, 푎 = [훼′ ∨ (푐 ∧ 훽′)]′ = 푏. Therefore, by Theorem 3.3.15, 퐿 is distributive. The properties (2) and (3) in the above theorem are often referred to as the de Morgan laws.

Theorem3.5.9: (Von Neumann) Every uniquely complemented modular lattice is distributive.

For the proof of the theorem we need the following lemma.

Lemma: If a uniquely complemented lattice 퐿 is modular, then in it (1) 푥 ∧ 푦 = 0 implies 푥 ≤ 푦′ (2) 푥 < 푦 implies 푥 ∨ (푥′ ∧ 푦) = 푦.

Proof of the lemma: (1) We will show that if 푥 ∧ 푦 = 0 then 푥 ∨ (푥 ∨ 푦)′ is the complement of 푦. Observe that (푥 ∨ 푦) ∧ [푥 ∨ (푥 ∨ 푦)′] = 푥 ∨ [(푥 ∨ 푦) ∧ (푥 ∨ 푦)′] (by modularity, since 푥 ≤ 푥 ∨ 푦) = 푥 ∨ 0 = 푥, whence [푥 ∨ (푥 ∨ 푦)′] ∧ 푦 = [푥 ∨ (푥 ∨ 푦)′] ∧ (푥 ∨ 푦) ∧ 푦 = 푥 ∧ 푦 = 0. But [푥 ∨ (푥 ∨ 푦)′] ∨ 푦 = (푥 ∨ 푦) ∨ (푥 ∨ 푦)′ = 1, therefore by the uniqueness of the complement 푥 ∨ (푥 ∨ 푦)′ = 푦′, and hence 푥 ≤ 푦′. (2) If 푥 < 푦 then 푥′ ∧ 푦 ≠ 0: for if 푥′ ∧ 푦 = 0 then by (1) 푦 ≤ 푥′′ = 푥, which contradicts 푥 < 푦. If we assume 푥 ∨ (푥′ ∧ 푦) = 푧 < 푦, then by the same argument 푧′ ∧ 푦 ≠ 0, while 푧′ ∧ 푥 ≤ 푧′ ∧ 푧 = 0, so it follows from (1) that 푧′ ≤ 푥′. Hence 푧′ ∧ 푦 ≤ 푥′ ∧ 푦 ≤ 푧, so 푧′ ∧ 푦 ≤ 푧′ ∧ 푧 = 0, a contradiction. Thus 푥 ∨ (푥′ ∧ 푦) = 푦.

Proof of the Theorem: Suppose a uniquely complemented lattice 퐿 is modular. We show that it contains no sublattice isomorphic to 푀3 (the diamond). The proof is by contradiction. Suppose there exists in 퐿 a sublattice of the form

(a) Suppose first that 푢 = 0. Then 푥 ∧ 푦 = 0 and 푥 ∧ 푧 = 0, so by part (1) of the lemma 푦 ≤ 푥′ and 푧 ≤ 푥′, whence 푥 ≤ 푣 = 푦 ∨ 푧 ≤ 푥′ and 푥 = 푥 ∧ 푥′ = 0, which is not possible. (b) Now suppose that 푢 ≠ 0. Let 푥⋆ = 푢′ ∧ 푥, 푦⋆ = 푢′ ∧ 푦 and 푧⋆ = 푢′ ∧ 푧. Then 푥⋆ ∧ 푦⋆ = (푢′ ∧ 푥) ∧ (푢′ ∧ 푦) = 푢′ ∧ (푥 ∧ 푦) = 푢′ ∧ 푢 = 0, and similarly 푦⋆ ∧ 푧⋆ = 푧⋆ ∧ 푥⋆ = 0. Consider 푢 ∨ 푣′. On the one hand we have 푢 ∨ 푣′ ∨ (푥⋆ ∨ 푦⋆) = 푣′ ∨ [푢 ∨ (푢′ ∧ 푥)] ∨ [푢 ∨ (푢′ ∧ 푦)] = 푣′ ∨ 푥 ∨ 푦 (by part (2) of the lemma, since 푢 < 푥 and 푢 < 푦) = 푣′ ∨ 푣 = 1, while on the other hand a similar computation gives (푢 ∨ 푣′) ∧ (푥⋆ ∨ 푦⋆) = 0. Hence 푥⋆ ∨ 푦⋆ is the complement of 푢 ∨ 푣′, and similarly so are 푦⋆ ∨ 푧⋆ and 푧⋆ ∨ 푥⋆. By the uniqueness of complements we have 푥⋆ ∨ 푦⋆ = 푦⋆ ∨ 푧⋆ = 푧⋆ ∨ 푥⋆. The elements 푥⋆, 푦⋆, 푧⋆ thus pairwise meet in 0 and share a common join, which takes us back to the situation (a), and the subsequent contradiction completes the proof.

Chapter 4 Boolean Rings and Boolean Algebras

This chapter is aimed at providing knowledge about Boolean rings and Boolean algebras. It consists of 4 sections. Section 4.1 is based on Boolean algebras and isomorphism of Boolean algebras. In section 4.2 we discuss Boolean rings with the help of some examples, and in section 4.3 we look at the comparison of Boolean rings and Boolean algebras and discuss the technique by which Boolean rings can be converted into Boolean algebras and conversely. That section ends with some important results on these conversions. In section 4.4 we discuss some important applications of Boolean algebras.

4.1 Boolean Algebras

In this section we shall discuss Boolean algebras in detail with some examples. We shall also discuss briefly isomorphism between Boolean algebras. The section ends with a few important results on Boolean algebras. Let us investigate the example of the power set ℙ(푋) of a set 푋 more closely. The power set is a lattice that is ordered by inclusion. By the definition of the power set, the largest element in ℙ(푋) is 푋 itself and the smallest element is ∅, the empty set. For any set 퐴 in ℙ(푋), we know that 퐴 ∩ 푋 = 퐴 and 퐴 ∪ ∅ = 퐴. This suggests the following definition for lattices. An element 1 in a poset 푋 is a top element if 푎 ≤ 1 for all 푎 ∈ 푋. An element 0 is a bottom element of 푋 if 0 ≤ 푎 for all 푎 ∈ 푋. Let 퐴 be in ℙ(푋). Recall that the complement of 퐴 is 퐴´ = 푋 ∖ 퐴 = {푥 ∶ 푥 ∈ 푋 and 푥 ∉ 퐴}. We know that 퐴 ∪ 퐴´ = 푋 and 퐴 ∩ 퐴´ = ∅. We can generalize this example for lattices. In a lattice 퐿 the binary operations ∨ and ∧ satisfy the commutative and associative laws; however, they need not satisfy the distributive law 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐). In ℙ(푋) the distributive law is satisfied, since 퐴 ∩ (퐵 ∪ 퐶) = (퐴 ∩ 퐵) ∪ (퐴 ∩ 퐶) for 퐴, 퐵, 퐶 ∈ ℙ(푋), and we know that a lattice 퐿 is distributive if and only if 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐) holds for all 푎, 푏, 푐 ∈ 퐿.

Definition 4.1.1: A Boolean algebra is a lattice 퐵 with a greatest element 1 and a smallest element 0 such that 퐵 is both distributive and complemented.

Example 4.1.2: The power set of 푋, ℙ(푋), is our prototype for a Boolean algebra. As it turns out, it is also one of the most important Boolean algebras. The following theorem allows us to characterize Boolean algebras in terms of the binary operations ∨ and ∧ without mention of the fact that a Boolean algebra is a poset. The next result proves that a Boolean algebra is an algebraic structure with respect to the operations of join and meet.

Theorem 4.1.3: A set 퐵 is a Boolean algebra if and only if there exist binary operations ∨ and ∧ on 퐵 satisfying the following axioms: (1) 푎 ∨ 푏 = 푏 ∨ 푎 and 푎 ∧ 푏 = 푏 ∧ 푎 for all 푎, 푏 ∈ 퐵; (2) 푎 ∨ (푏 ∨ 푐) = (푎 ∨ 푏) ∨ 푐 and 푎 ∧ (푏 ∧ 푐) = (푎 ∧ 푏) ∧ 푐 for all 푎, 푏, 푐 ∈ 퐵; (3) 푎 ∧ (푏 ∨ 푐) = (푎 ∧ 푏) ∨ (푎 ∧ 푐) and 푎 ∨ (푏 ∧ 푐) = (푎 ∨ 푏) ∧ (푎 ∨ 푐) for all 푎, 푏, 푐 ∈ 퐵; (4) there exist elements 1 and 0 such that 푎 ∨ 0 = 푎 and 푎 ∧ 1 = 푎 for all 푎 ∈ 퐵; (5) for every 푎 ∈ 퐵 there exists an 푎´ ∈ 퐵 such that 푎 ∨ 푎´ = 1 and 푎 ∧ 푎´ = 0.

Proof: Let 퐵 be a set satisfying (1)-(5) in the theorem. We show that 퐵 is a Boolean algebra. One of the idempotent laws is satisfied since 푎 = 푎 ∨ 0 (using (4)) = 푎 ∨ (푎 ∧ 푎´) (using (5)) = (푎 ∨ 푎) ∧ (푎 ∨ 푎´) (using (3)) = (푎 ∨ 푎) ∧ 1 (using (5)) = 푎 ∨ 푎 (using (4)). Observe that, using (5), (1), (2) and the idempotent law just proved, 1 ∨ 푏 = (푏 ∨ 푏´) ∨ 푏 = 푏´ ∨ (푏 ∨ 푏) = 푏´ ∨ 푏 = 1. Consequently, the first of the two absorption laws holds, since 푎 ∨ (푎 ∧ 푏) = (푎 ∧ 1) ∨ (푎 ∧ 푏) = 푎 ∧ (1 ∨ 푏) = 푎 ∧ 1 (using (4)) = 푎. The other idempotent and absorption laws are proven similarly. Since 퐵 also satisfies (1) and (2), the conditions of Theorems 2.1.6 and 2.1.7 are met; therefore 퐵 must be a lattice. Condition (3) tells us that 퐵 is a distributive lattice. For 푎 ∈ 퐵, 0 ∨ 푎 = 푎, hence 0 ≤ 푎 and 0 is the bottom element in 퐵. Similarly, 푎 ∧ 1 = 푎 gives 푎 ≤ 1 for all 푎 in 퐵, so 1 is the top element. Finally, since 퐵 is complemented by (5), 퐵 must be a Boolean algebra. Conversely, suppose that 퐵 is a Boolean algebra. Let 1 and 0 be the greatest and least elements in 퐵, respectively. If we define 푎 ∨ 푏 and 푎 ∧ 푏 as the least upper and greatest lower bounds of {푎, 푏}, then (1)-(5) follow from Theorems 2.1.6 and 2.1.7, the definition of a distributive lattice, and the fact that 퐵 is bounded and complemented.

Many other identities hold in Boolean algebras. Some of these are listed in the following theorem.

Theorem 4.1.4: Let 퐵 be a Boolean algebra. Then (a) 푎 ∨ 1 = 1 and 푎 ∧ 0 = 0 for all 푎 ∈ 퐵; (b) if 푎 ∨ 푏 = 푎 ∨ 푐 and 푎 ∧ 푏 = 푎 ∧ 푐 for 푎, 푏, 푐 ∈ 퐵, then 푏 = 푐; (c) (푎´)´ = 푎 for all 푎 ∈ 퐵; (d) 1´ = 0 and 0´ = 1.

Proof: (a) We know that 푎 ∨ 푏 = sup {푎, 푏}, so 푎 ∨ 1 = sup {푎, 1} = 1 (1 being the top element in 퐵). Also 푎 ∧ 푏 = inf {푎, 푏}, so 푎 ∧ 0 = inf {푎, 0} = 0 (0 being the least element in 퐵).
(b) If 푎 ∨ 푏 = 푎 ∨ 푐 and 푎 ∧ 푏 = 푎 ∧ 푐, we have 푏 = 푏 ∨ (푏 ∧ 푎) = 푏 ∨ (푎 ∧ 푏) (by (1) of Theorem 4.1.3) = 푏 ∨ (푎 ∧ 푐) (by the given hypothesis) = (푏 ∨ 푎) ∧ (푏 ∨ 푐) (by (3) of Theorem 4.1.3) = (푎 ∨ 푏) ∧ (푏 ∨ 푐) (since join is commutative) = (푎 ∨ 푐) ∧ (푏 ∨ 푐) (by the given hypothesis) = (푐 ∨ 푎) ∧ (푐 ∨ 푏) (by (1) of Theorem 4.1.3) = 푐 ∨ (푎 ∧ 푏) (by (3) of Theorem 4.1.3) = 푐 ∨ (푎 ∧ 푐) (by the given hypothesis) = 푐 ∨ (푐 ∧ 푎) (by (1) of Theorem 4.1.3) = 푐.
(c) Since 푎´ ∨ 푎 = 1 and 푎´ ∧ 푎 = 0, the element 푎 is a complement of 푎´; but (푎´)´ is also a complement of 푎´, since 푎´ ∨ (푎´)´ = 1 and 푎´ ∧ (푎´)´ = 0. Hence by the uniqueness of complements (which follows from (b)), (푎´)´ = 푎.
(d) 1´ = 1´ ∧ 1 (by (4) of Theorem 4.1.3) = 1 ∧ 1´ = 0 (by (5) of Theorem 4.1.3). Also 0´ = 0´ ∨ 0 (by (4) of Theorem 4.1.3) = 0 ∨ 0´ = 1 (by (5) of Theorem 4.1.3).

Theorem 4.1.5: If 퐵 is a Boolean algebra, then: (1) [de Morgan laws] for all 푥, 푦 ∈ 퐵, (푥 ∧ 푦)´ = 푥´ ∨ 푦´ and (푥 ∨ 푦)´ = 푥´ ∧ 푦´; (2) for all 푥, 푦 ∈ 퐵, 푥 ≤ 푦 if and only if 푥´ ≥ 푦´; (3) for all 푥, 푦, 푧 ∈ 퐵, 푥 ∧ 푦 ≤ 푧 if and only if 푥 ≤ 푧 ∨ 푦´; (4) for all 푥, 푦, 푧 ∈ 퐵, 푥 ∨ 푦 ≥ 푧 if and only if 푥 ≥ 푧 ∧ 푦´.

Proof: (1) Observe that, by distributivity of 퐵, (푥 ∧ 푦) ∨ 푥´ ∨ 푦´ = (푥 ∨ 푥´ ∨ 푦´) ∧ (푦 ∨ 푥´ ∨ 푦´) = (1 ∨ 푦´) ∧ (1 ∨ 푥´) = 1 ∧ 1 = 1, and (푥 ∧ 푦) ∧ (푥´ ∨ 푦´) = (푥 ∧ 푦 ∧ 푥´) ∨ (푥 ∧ 푦 ∧ 푦´) = (0 ∧ 푦) ∨ (0 ∧ 푥) = 0 ∨ 0 = 0. This shows that the (unique) complement of 푥 ∧ 푦 is 푥´ ∨ 푦´. Similarly we can establish the other law. (2) By (1) we have 푥 ≤ 푦 if and only if 푥 ∧ 푦 = 푥, if and only if 푥´ ∨ 푦´ = 푥´, if and only if 푦´ ≤ 푥´. (3) If 푥 ∧ 푦 ≤ 푧 then 푧 ∨ 푦´ ≥ (푥 ∧ 푦) ∨ 푦´ = (푥 ∨ 푦´) ∧ (푦 ∨ 푦´) = (푥 ∨ 푦´) ∧ 1 = 푥 ∨ 푦´ ≥ 푥 (by distributivity and (5) of Theorem 4.1.3). Also, if 푥 ≤ 푧 ∨ 푦´ then 푥 ∧ 푦 ≤ (푧 ∨ 푦´) ∧ 푦 = (푧 ∧ 푦) ∨ (푦´ ∧ 푦) = (푧 ∧ 푦) ∨ 0 = 푧 ∧ 푦 ≤ 푧 (by distributivity and (5) of Theorem 4.1.3). (4) This is the dual of (3) and follows by the duality principle.
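The identities of Theorem 4.1.5 can also be verified exhaustively in the prototype Boolean algebra ℙ(푋) for a small 푋. The following is an illustrative brute-force check; the names `subsets` and `comp` are ours:

```python
from itertools import combinations

# Brute-force check of Theorem 4.1.5 in the Boolean algebra P(X),
# where join = union, meet = intersection, and ' = complement in X.
X = frozenset({1, 2, 3})
subsets = [frozenset(s) for r in range(4) for s in combinations(sorted(X), r)]
comp = lambda a: X - a

for x in subsets:
    for y in subsets:
        # (1) the de Morgan laws
        assert comp(x & y) == comp(x) | comp(y)
        assert comp(x | y) == comp(x) & comp(y)
        # (2) x <= y  iff  y' <= x'
        assert (x <= y) == (comp(y) <= comp(x))
        for z in subsets:
            # (3) x ∧ y <= z  iff  x <= z ∨ y'
            assert (x & y <= z) == (x <= z | comp(y))
print("Theorem 4.1.5 verified on P({1, 2, 3})")
```

Such a finite check is of course no substitute for the proof, but it makes the laws concrete.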

Definition 4.1.6: When 0 = 1, the resulting algebra is degenerate in the sense that it has just one element. In this case, the operations of join, meet, and complementation are all constant. The simplest non-degenerate Boolean algebra is discussed in the example below.

Example 4.1.7: The class of all subsets of a one-element set, which has just two elements, 0 (the empty set) and 1 (the one-element set), forms a non-degenerate Boolean algebra under the operations of join and meet described by the following arithmetic tables:

∨  0  1        ∧  0  1
0  0  1        0  0  0
1  1  1        1  0  1

and complementation is the unary operation that maps 0 to 1, and conversely.
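As a quick sketch, the two tables and the complementation operation can be encoded directly and checked against the axioms of Theorem 4.1.3; the dictionary names here are our own:

```python
# The two-element Boolean algebra of Example 4.1.7, with its join,
# meet and complement tables encoded as dictionaries.
join = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
meet = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
comp = {0: 1, 1: 0}

B = [0, 1]
for a in B:
    # complement laws: a ∨ a' = 1 and a ∧ a' = 0
    assert join[(a, comp[a])] == 1 and meet[(a, comp[a])] == 0
    # identity laws: a ∨ 0 = a and a ∧ 1 = a
    assert join[(a, 0)] == a and meet[(a, 1)] == a
    for b in B:
        # commutativity
        assert join[(a, b)] == join[(b, a)] and meet[(a, b)] == meet[(b, a)]
        for c in B:
            # distributivity: a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c)
            assert meet[(a, join[(b, c)])] == join[(meet[(a, b)], meet[(a, c)])]
print("two-element Boolean algebra axioms hold")
```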

Notation: By 푅푋 we shall denote the set of all functions from 푋 into 푅 (as discussed in example 4.2.) throughout.

Example 4.1.8: The set 푅푋 forms a Boolean algebra, as it clearly satisfies the axioms of Theorem 4.1.3.

Isomorphism of Boolean Algebras

Definition 4.1.9: A function 휑 is called an isomorphism from a Boolean algebra 퐵 = (퐵,∧퐵,∨퐵, 0퐵, 1퐵) into a Boolean algebra 퐶 = (퐶,∧퐶,∨퐶, 0퐶, 1퐶) if and only if (a) 휑 is a one-one function from 퐵 into 퐶, (b) for any 푥, 푦 in 퐵, 휑(푥 ∧퐵 푦) = 휑(푥) ∧퐶 휑(푦)

휑(푥 ∨퐵 푦) = 휑(푥) ∨퐶 휑(푦)

휑(푥’퐵) = (휑(푥))’퐶. Such a function 휑 is called an isomorphism from 퐵 onto 퐶 if in addition 휑 is a function from 퐵 onto 퐶.

Theorem 4.1.10: Let 휑 be an isomorphism from a Boolean algebra 퐵 into (respectively, onto) a Boolean algebra 퐶(with the notation given above) then:

(a) 휑(0퐵) = 0퐶 and 휑(1퐵) = 1퐶. (b) In Definition 4.1.9 it is not necessary to assume both preservation conditions: the assumption 휑(푥 ∨퐵 푦) = 휑(푥) ∨퐶 휑(푦) follows from the remaining ones; alternatively, we could instead omit the assumption 휑(푥 ∧퐵 푦) = 휑(푥) ∧퐶 휑(푦). (c) If 휓 is an isomorphism from 퐶 into (respectively, onto) a Boolean algebra 퐷 = (퐷, ∧퐷, ∨퐷, 0퐷, 1퐷), then the composite mapping 휓 ∘ 휑 is an isomorphism from 퐵 into (respectively, onto) 퐷. (d) The inverse mapping 휑−1 is an isomorphism from the subalgebra of 퐶 determined by 휑[퐵] onto 퐵; in particular, if 휑 is onto 퐶, then 휑−1 is an isomorphism from 퐶 onto 퐵.

Proof: (a) 휑(0퐵) = 휑(푥 ∧퐵 푥’퐵) = 휑(푥) ∧퐶 휑(푥’퐵) = 휑(푥) ∧퐶 (휑(푥))’퐶 = 0퐶, and 휑(1퐵) = 휑(0’퐵) = (휑(0퐵))’퐶 = 0’퐶 = 1퐶.

(b) 휑(푥 ∨퐵 푦) = 휑((푥’퐵 ∧퐵 푦’퐵)’퐵) = (휑(푥’퐵 ∧퐵 푦’퐵))’퐶

= (휑(푥’퐵) ∧퐶 휑(푦’퐵))’퐶

= ((휑(푥))’퐶 ∧퐶 (휑(푦))’퐶)’퐶

= 휑(푥) ∨퐶 휑(푦).

(c) First, 휓 ∘ 휑 is one-one (if 푥 ≠ 푦, then 휑(푥) ≠ 휑(푦) and therefore 휓(휑(푥)) ≠ 휓(휑(푦))).

Second, (휓 ∘ 휑)(푥’퐵) = 휓(휑(푥’퐵)) = 휓((휑(푥))’퐶) = (휓(휑(푥)))’퐷.

Lastly, (휓 ∘ 휑)(푥 ∧퐵 푦) = 휓(휑(푥 ∧퐵 푦))

= 휓(휑(푥) ∧퐶 휑(푦))

= 휓(휑(푥)) ∧퐷 휓(휑(푦))

= (휓 ∘ 휑)(푥) ∧퐷 (휓 ∘ 휑)(푦).

(d) Assume 푧, 푤 ∈ 휑[퐵]. Then 푧 = 휑(푥) and 푤 = 휑(푦) for some 푥 and 푦 in 퐵, and hence 푥 = 휑−1(푧) and 푦 = 휑−1(푤). First, if 푧 ≠ 푤 then 푥 ≠ 푦 (for if 푥 = 푦, then 푧 = 휑(푥) = 휑(푦) = 푤); thus 휑−1 is one-one. Second, 휑(푥 ∨퐵 푦) = 휑(푥) ∨퐶 휑(푦) = 푧 ∨퐶 푤, hence 휑−1(푧 ∨퐶 푤) = 푥 ∨퐵 푦 = 휑−1(푧) ∨퐵 휑−1(푤). Thirdly, we have 휑(푥’퐵) = (휑(푥))’퐶 = 푧’퐶, hence 휑−1(푧’퐶) = 푥’퐵 = (휑−1(푧))’퐵.

We say that 퐵 is isomorphic with 퐶 if and only if there is an isomorphism from 퐵 onto 퐶. From Theorem 4.1.10 (c), (d) it follows that if 퐵 is isomorphic with 퐶 then 퐶 is isomorphic with 퐵, and if in addition 퐶 is isomorphic with 퐷 then 퐵 is isomorphic with 퐷. Isomorphic Boolean algebras have in a certain sense the same Boolean structure. More precisely, this means that any property (formulated in the language of Boolean algebras) holding for one Boolean algebra also holds for any isomorphic Boolean algebra.

Example 4.1.11: The Boolean algebras 푃(푋) and 푅푋 are isomorphic via the mapping that takes each subset of 푋 to its characteristic function.
Proof: Let 푓: 푃(푋) → 푅푋 be the mapping defined, for all 푃 ∈ 푃(푋), by 푓(푃) = 푝, where 푝(푥) = 1 if 푥 ∈ 푃 and 푝(푥) = 0 if 푥 ∉ 푃. We now show that 푓 is an isomorphism. Clearly 푓 is one-one. Now 푓(푃 ∨ 푄) = 푓(푃 ∪ 푄) takes the value 1 at 푥 if and only if 푥 ∈ 푃 ∪ 푄, if and only if 푥 ∈ 푃 or 푥 ∈ 푄, if and only if 푝(푥) = 1 or 푞(푥) = 1, if and only if 푝(푥) + 푞(푥) + 푝(푥)푞(푥) = 1, if and only if (푝 ∨ 푞)(푥) = 1, if and only if (푓(푃) ∨ 푓(푄))(푥) = 1. Likewise 푓(푃 ∪ 푄) takes the value 0 at 푥 if and only if 푥 ∉ 푃 and 푥 ∉ 푄, if and only if 푝(푥) = 푞(푥) = 0, if and only if (푓(푃) ∨ 푓(푄))(푥) = 0. This implies that 푓(푃 ∨ 푄) = 푓(푃) ∨ 푓(푄), so join is preserved. Similarly, we can show that meet is preserved. Also 푓(푃´) = 푝´, where 푝´(푥) = 0 if 푥 ∈ 푃 and 푝´(푥) = 1 if 푥 ∉ 푃, so 푓(푃´) = (푓(푃))´. And clearly 푓(∅) = 0 and 푓(푋) = 1. Thus 푓 is an isomorphism.
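The characteristic-function isomorphism of Example 4.1.11 is easy to check concretely for a small 푋, representing each 2-valued function as a dictionary. This is only an illustrative sketch; the helper names `f`, `fjoin` and `fmeet` are ours:

```python
from itertools import combinations

# The map of Example 4.1.11: each subset P of X goes to its
# characteristic function, represented as a dict from X to {0, 1}.
X = [1, 2, 3]
subsets = [frozenset(s) for r in range(4) for s in combinations(X, r)]

def f(P):
    return {x: (1 if x in P else 0) for x in X}

def fjoin(p, q):  # pointwise join of two 2-valued functions
    return {x: max(p[x], q[x]) for x in X}

def fmeet(p, q):  # pointwise meet of two 2-valued functions
    return {x: min(p[x], q[x]) for x in X}

for P in subsets:
    for Q in subsets:
        assert f(P | Q) == fjoin(f(P), f(Q))   # join is preserved
        assert f(P & Q) == fmeet(f(P), f(Q))   # meet is preserved
    # complement goes to the pointwise complement 1 - p(x)
    assert f(set(X) - P) == {x: 1 - v for x, v in f(P).items()}
print("f is an isomorphism from P({1, 2, 3}) onto the 2-valued functions")
```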

Finite Boolean Algebras

Definition 4.1.12: A Boolean algebra is a finite Boolean algebra if its underlying set contains a finite number of elements. Finite Boolean algebras are particularly nice, since we can classify them up to isomorphism.

Let 퐵 and 퐶 be Boolean algebras. Recall that a bijective map 휙: 퐵 → 퐶 is an isomorphism of Boolean algebras if for all 푎 and 푏 in 퐵, 휙(푎 ∨ 푏) = 휙(푎) ∨ 휙(푏) and 휙(푎 ∧ 푏) = 휙(푎) ∧ 휙(푏).

We will show that any finite Boolean algebra is isomorphic to the Boolean algebra obtained by taking the power set of some finite set 푋. We will need a few lemmas and definitions before we prove this result. Let 퐵 be a finite Boolean algebra. Recall that an element 푎 ∈ 퐵 is an atom of 퐵 if 푎 ≠ 0 and, for every 푏 ∈ 퐵, either 푎 ∧ 푏 = 푎 or 푎 ∧ 푏 = 0. Equivalently, 푎 is an atom of 퐵 if there is no non-zero 푏 ∈ 퐵 distinct from 푎 such that 푏 ≤ 푎.

Lemma 4.1.13: Let 퐵 be a finite Boolean algebra. If 푏 is a non-zero element of 퐵, then there is an atom 푎 in 퐵 such that 푎 ≤ 푏.

Proof: If 푏 is an atom, let 푎 = 푏. Otherwise, choose an element 푏1, not equal to 0 or 푏, such that 푏1 ≤ 푏; we are guaranteed that this is possible since 푏 is not an atom. If 푏1 is an atom, then we are done. If not, choose 푏2, not equal to 0 or 푏1, such that 푏2 ≤ 푏1. Again, if 푏2 is an atom, let 푎 = 푏2. Continuing this process, we can obtain a strictly decreasing chain

0 < ⋯ < 푏3 < 푏2 < 푏1 < 푏. Since 퐵 is a finite Boolean algebra, this chain must terminate; that is, for some 푘, 푏푘 is an atom, and we let 푎 = 푏푘.

Lemma 4.1.14: Let 푎 and 푏 be atoms in a finite Boolean algebra 퐵 such that 푎 ≠ 푏. Then 푎 ∧ 푏 = 0.

Proof: Since 푎 ∧ 푏 is the greatest lower bound of 푎 and 푏, we know that 푎 ∧ 푏 ≤ 푎. Hence, 푎 being an atom, either 푎 ∧ 푏 = 푎 or 푎 ∧ 푏 = 0. If 푎 ∧ 푏 = 푎 then 푎 ≤ 푏, and since 푎 ≠ 0 and 푏 is an atom this forces 푎 = 푏, a contradiction. Therefore 푎 ∧ 푏 = 0.

Lemma 4.1.15: Let 퐵 be a Boolean algebra and 푎, 푏 ∈ 퐵. Then the following statements are equivalent: 1. 푎 ≤ 푏; 2. 푎 ∧ 푏´ = 0; 3. 푎´ ∨ 푏 = 1.

Proof: (1) ⇒ (2): If 푎 ≤ 푏 then 푎 ∨ 푏 = 푏. Therefore, 푎 ∧ 푏´ = 푎 ∧ (푎 ∨ 푏)´ = 푎 ∧ (푎´ ∧ 푏´) (de Morgan law, Theorem 4.1.5 (1)) = (푎 ∧ 푎´) ∧ 푏´ = 0 ∧ 푏´ (by Theorem 4.1.3 (5)) = 0. (2) ⇒ (3): If 푎 ∧ 푏´ = 0 then 푎´ ∨ 푏 = (푎 ∧ 푏´)´ = 0´ = 1 (by the de Morgan law and Theorem 4.1.4 (d)). (3) ⇒ (1): If 푎´ ∨ 푏 = 1, then 푎 = 푎 ∧ (푎´ ∨ 푏) = (푎 ∧ 푎´) ∨ (푎 ∧ 푏) (by Theorem 4.1.3 (3)) = 0 ∨ (푎 ∧ 푏) (by Theorem 4.1.3 (5)) = 푎 ∧ 푏. Thus 푎 ≤ 푏.

Lemma 4.1.16: Let 퐵 be a Boolean algebra and 푏 and 푐 be elements in 퐵 such that 푏 ≰ 푐. Then there exists an atom 푎 ∈ 퐵 such that 푎 ≤ 푏 and 푎 ≰ 푐.

Proof: By Lemma 4.1.15, 푏 ∧ 푐´ ≠ 0. Hence, by Lemma 4.1.13, there exists an atom 푎 such that 푎 ≤ 푏 ∧ 푐´. Consequently 푎 ≤ 푏 and 푎 ≰ 푐.

Lemma 4.1.17: Let 푏 ∈ 퐵 and let 푎1, … , 푎푛 be the atoms of 퐵 such that 푎푖 ≤ 푏 for all 푖 = 1, … , 푛. Then 푏 = 푎1 ∨ … ∨ 푎푛. Furthermore, if 푎 is an atom of 퐵 with 푎 ≤ 푏, then 푎 = 푎푖 for some 푖 = 1, … , 푛.

Proof: Let 푏1 = 푎1 ∨ … ∨ 푎푛. Since 푎푖 ≤ 푏 for each 푖, we know that 푏1 ≤ 푏. If we can show that 푏 ≤ 푏1, then the lemma is true by antisymmetry. Assume to the contrary that 푏 ≰ 푏1. Then by Lemma 4.1.16 there exists an atom 푎 such that 푎 ≤ 푏 and 푎 ≰ 푏1. Since 푎 is an atom with 푎 ≤ 푏, and 푎1, … , 푎푛 are all the atoms below 푏, we have 푎 = 푎푖 for some 푖. However, this is impossible, since then 푎 ≤ 푏1. Therefore 푏 ≤ 푏1.

Now suppose that 푏 = 푎1 ∨ … ∨ 푎푛. If 푎 is an atom less than 푏 then

푎 = 푎 ∧ 푏 = 푎 ∧ (푎1 ∨ … ∨ 푎푛) = (푎 ∧ 푎1) ∨ … ∨ (푎 ∧ 푎푛).

But by Lemma 4.1.14 each term 푎 ∧ 푎푖 is 0 unless 푎 = 푎푖. Since 푎 ≠ 0, some term must equal 푎, and hence 푎 = 푎푖 for some 푖.

Theorem 4.1.18: Let 퐵 be a finite Boolean algebra. Then there exists a set 푋 such that 퐵 is isomorphic to 푃(푋).

Proof: We will show that 퐵 is isomorphic to 푃(푋), where 푋 is the set of atoms of 퐵. Let 푎 ∈ 퐵. By Lemma 4.1.17, we can write 푎 uniquely as 푎 = 푎1 ∨ … ∨ 푎푛 for 푎1, … , 푎푛 ∈ 푋. Consequently we can define a map 휙: 퐵 → 푃(푋) by

휙(푎) = 휙(푎1 ∨ … ∨ 푎푛) = {푎1, … , 푎푛}. Clearly 휙 is onto.

Now let 푎 = 푎1 ∨ … ∨ 푎푛 and 푏 = 푏1 ∨ … ∨ 푏푚 be elements in 퐵, where each 푎푖 and each 푏푖 is an atom. If 휙(푎) = 휙(푏) then {푎1, … , 푎푛} = {푏1, … , 푏푚} and 푎 = 푏. Consequently 휙 is injective. The join of 푎 and 푏 is preserved by 휙 since

휙(푎 ∨ 푏) = 휙(푎1 ∨ … ∨ 푎푛 ∨ 푏1 ∨ … ∨ 푏푚)

= {푎1, … , 푎푛, 푏1, … , 푏푚}

= {푎1, … , 푎푛} ∪ {푏1, … , 푏푚}

= 휙(푎1 ∨ … ∨ 푎푛) ∪ 휙(푏1 ∨ … ∨ 푏푚) = 휙(푎) ∪ 휙(푏). Similarly 휙(푎 ∧ 푏) = 휙(푎) ∩ 휙(푏). Thus meet and join are both preserved, hence the result follows.
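Theorem 4.1.18 can be illustrated on a concrete finite Boolean algebra: the divisors of 30 under divisibility form such an algebra, and the map 휙 onto the power set of its atoms is easy to compute. This is only an illustrative sketch; the names `leq`, `atoms` and `phi` are our own:

```python
# Illustration of Theorem 4.1.18 on the eight divisors of 30
# ordered by divisibility (bottom element 1, top element 30).
B = [1, 2, 3, 5, 6, 10, 15, 30]
leq = lambda a, b: b % a == 0        # a <= b  iff  a divides b

# an atom is a non-zero element with nothing strictly between it and 0
atoms = [a for a in B if a != 1 and
         not any(b not in (1, a) and leq(b, a) for b in B)]

def phi(b):
    """Send b to the set of atoms below it, as in the theorem."""
    return frozenset(a for a in atoms if leq(a, b))

# phi is a bijection onto P(atoms): 8 = 2**3 distinct images
images = {phi(b) for b in B}
assert len(images) == len(B) == 2 ** len(atoms)
print(sorted(atoms))  # -> [2, 3, 5]; the prime divisors are the atoms
```

The count 8 = 2³ also anticipates Corollary 4.1.19 below: a finite Boolean algebra with 푛 atoms has 2^푛 elements.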

Corollary 4.1.19: If 퐵 is a finite Boolean algebra then 퐵 has 2푛elements where 푛 is the number of atoms in 퐵.

Proof: From the theorem we have 퐵 ≃ 푃(퐸) for some finite set 퐸. Without loss of generality we may assume that 퐸 = {1, 2, … , 푛}. Let ퟐ denote the two-element chain 0 < 1 and consider the mapping 푓: 푃(퐸) ⟶ ퟐ풏 given by 푓(푋) = (푥1, 푥2, . . . , 푥푛), where 푥푖 = 1 if 푖 ∈ 푋 and 푥푖 = 0 otherwise.

Given 퐴, 퐵 ∈ 푃(퐸), let 푓(퐴) = (푎1, 푎2, . . . , 푎푛) and 푓(퐵) = (푏1, 푏2, . . . , 푏푛). Then we have 퐴 ⊆ 퐵 if and only if for every 푖, 푖 ∈ 퐴 implies 푖 ∈ 퐵, which is equivalent to 푎푖 = 1 implies 푏푖 = 1, that is to 푎푖 ≤ 푏푖, which by the definition is equivalent to 푓(퐴) ≤ 푓(퐵). Moreover, given any 푥 = (푥1, 푥2, . . . , 푥푛) ∈ ퟐ풏 we have 푓(퐶) = 푥, where 퐶 = {푖 │ 푥푖 = 1}. It therefore follows by (Theorem 1.5.2) that 푃(퐸) ≃ ퟐ풏. Hence 퐵 ≃ ퟐ풏 and so has 2푛 elements. Moreover, since 퐸 has 푛 elements, 퐵 ≃ 푃(퐸) has 푛 atoms.

4.2 Boolean Rings

In this section we will discuss Boolean rings and some of their basic types. Also we will look at some properties of Boolean rings. The section ends with some important results on Boolean rings.

A ring is an abstract version of arithmetic, the kind of thing we studied in school. The prototype is the ring of integers. It consists of a universe (the set of integers) and three operations on the universe: the binary operations of addition and multiplication, and the unary operation of negation (forming the additive inverse). There are also two distinguished integers, zero and one.

The set of integers satisfies a number of basic laws that are familiar from school mathematics;

The associative laws for addition and multiplication;

(1) 푝 + (푞 + 푟) = (푝 + 푞) + 푟 for all 푝, 푞, 푟 ∈ 푍;

(2) 푝 · (푞 · 푟) = (푝 · 푞) · 푟.

The commutative laws for addition and multiplication;

(3) 푝 + 푞 = 푞 + 푝;

(4) 푝 · 푞 = 푞 · 푝.

The identity laws for addition and multiplication;

(5) 푝 + 0 = 푝;

(6) 푝 · 1 = 푝.

The inverse law for addition;

(7) 푝 + (−푝) = 0. and the distributive laws for multiplication over addition;

(8) 푝 · (푞 + 푟) = 푝 · 푞 + 푝 · 푟;

(9) (푞 + 푟) · 푝 = 푞 · 푝 + 푟 · 푝.

Any set (universe) satisfying the above properties is called a ring. The difference between the ring of integers and an arbitrary ring is that, in the latter, the universe may be an arbitrary non-empty set of elements, not just a set of integers, and the operations take their arguments and values from this set. The commutative law for multiplication is not required to hold in an arbitrary ring; if it does, the ring is said to be commutative. Also, a ring is not always required to have a unit, an element 1 satisfying (6); if it does, it is called a ring with unit.

There are other natural examples of rings besides the integers. The most trivial is the ring with just one element in its universe: zero. It is called the degenerate ring. The simplest non-degenerate ring with unit has just two elements, zero and one. The operations of addition and multiplication are described by the following arithmetic tables:

+  0  1        ·  0  1
0  0  1        0  0  0
1  1  0        1  0  1
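As a quick sketch, the two tables above are exactly arithmetic modulo 2 (addition is exclusive-or, multiplication is the ordinary product), and the special properties discussed in the remark below can be checked directly; the names `add` and `mul` are ours:

```python
# The two-element ring: addition mod 2 and ordinary multiplication,
# i.e. arithmetic in the field of two elements.
R = [0, 1]
add = lambda p, q: (p + q) % 2
mul = lambda p, q: p * q

for p in R:
    assert add(p, p) == 0      # every element is its own negative (law (10))
    assert mul(p, p) == p      # every element is idempotent       (law (11))
    for q in R:
        assert add(p, q) == add(q, p)   # commutativity of addition (3)
        for r in R:
            # distributivity (8): p · (q + r) = p · q + p · r
            assert mul(p, add(q, r)) == add(mul(p, q), mul(p, r))
print("the two-element ring is a Boolean ring")
```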

Remark 4.2.1: An examination of the above tables shows that the two-element ring has several special properties. First of all, every element is its own additive inverse, that is: (10) 푝 + 푝 = 0.

Therefore, the operation of negation is superfluous: every element is its own negative. Rings satisfying condition (10) are said to have characteristic 2. Second, every element is its own square, that is: (11) 푝 · 푝 = 푝. Elements with this property are called idempotent. Rings in which every element is idempotent have a special name, as defined below:

Definition 4.2.2: A ring 푅 with unit is said to be a Boolean ring if every element of it is idempotent.

Example 4.2.3: The two-element ring is the simplest non-degenerate example of a Boolean ring.

The condition of idempotence in the definition of a Boolean ring has quite a strong influence on the structure of such rings. Two of its most surprising consequences are proved in the next proposition.

Definition 4.2.4: The characteristic of a ring 푅 is the least positive integer 푛 such that 푛푥 = 0 for all 푥 ∈ 푅.

Proposition 4.2.5: Let 퐵 be a Boolean ring. Then: (a) 퐵 has characteristic 2; (b) 퐵 is commutative.

Proof: (a) Let 푝, 푞 ∈ 퐵. Then by idempotence (11): 푝 + 푞 = (푝 + 푞)2 = 푝2 + 푝푞 + 푞푝 + 푞2 = 푝 + 푝푞 + 푞푝 + 푞, which implies 0 = 푝푞 + 푞푝. (12) Putting 푞 = 푝 in (12), we get 0 = 푝2 + 푝2 = 푝 + 푝 (using (11)), so 퐵 has characteristic 2, and (a) holds.

(b) Assertion (a) implies that every element is its own negative, so 푝푞 = −(푝푞). (13) By (12) we have 푞푝 = −(푝푞), and by (13) −(푝푞) = 푝푞; hence 푝푞 = 푞푝. Thus (b) holds, and the result follows.
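Proposition 4.2.5 can be illustrated on a concrete Boolean ring: the subsets of a set 푋 with symmetric difference as addition and intersection as multiplication. The following check is only a sketch, with names of our own choosing:

```python
from itertools import combinations

# A concrete Boolean ring: subsets of X with symmetric difference (^)
# as addition and intersection (&) as multiplication.
X = [1, 2, 3]
subsets = [frozenset(s) for r in range(4) for s in combinations(X, r)]

for p in subsets:
    assert p ^ p == frozenset()          # characteristic 2: p + p = 0
    assert p & p == p                    # idempotence: p · p = p
    for q in subsets:
        assert p & q == q & p            # commutativity of multiplication
        # identity (12): pq + qp = 0
        assert (p & q) ^ (q & p) == frozenset()
print("the subset ring has characteristic 2 and is commutative")
```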

Since negation in Boolean rings is the identity operation, it is not necessary to use the minus sign for the additive inverse of an element of a Boolean ring. So in the case of Boolean rings only a slight modification of the set of ring axioms is needed: the identity (7) should be replaced by (10). From now on the official axioms for a Boolean ring are (1)-(3), (5), (6), (8)-(11).

Example 4.2.6: The universe of this example consists of ordered pairs (푝, 푞) of elements of the universe 푅 of the two-element ring above. Let 푆 denote the universe; then

푆 = {(0,0), (0,1), (1,0), (1,1)}. To add or multiply two pairs in 푆, just add or multiply the corresponding coordinates in 푅, that is:

(a) (푝0, 푝1) + (푞0, 푞1) = (푝0 + 푞0, 푝1 + 푞1),

(b) (푝0, 푝1) · (푞0, 푞1) = (푝0 · 푞0, 푝1 · 푞1).

These equations make sense because their right sides refer to the elements and operations of 푅. The zero and unit of the ring are the pairs (0,0) and (1,1). It is a simple matter to check that the axioms for Boolean rings are true in 푆; in each case, the verification of the axiom reduces to its validity in 푅.

Example 4.2.7: The preceding example can easily be generalized to each positive integer 푛.

The universe in this case is the set of 푛-termed sequences (푝0, 푝1, . . . , 푝푛−1) of elements of 푅. The sum and the product of two such 푛-tuples are defined coordinate-wise, just as in the case of ordered pairs in Example 4.2.6:

(푝0, 푝1, . . . , 푝푛−1) + (푞0, 푞1, . . . , 푞푛−1) = (푝0 + 푞0, 푝1 + 푞1, . . . , 푝푛−1 + 푞푛−1),
(푝0, 푝1, . . . , 푝푛−1) · (푞0, 푞1, . . . , 푞푛−1) = (푝0 · 푞0, 푝1 · 푞1, . . . , 푝푛−1 · 푞푛−1).
The zero and unit are the 푛-tuples (0, 0, . . . , 0) and (1, 1, . . . , 1).

Example 4.2.8: Let 푋 be an arbitrary set and 푅푋 the set of all functions from 푋 into 푅. The elements of 푅푋 will be called 2-valued functions on 푋. The distinguished elements and the operations of 푅푋 are defined point-wise. This means that 0 and 1 in 푅푋 are the constant functions defined for each 푥 in 푋 by 0(푥) = 0 and 1(푥) = 1, and the functions 푝 + 푞 and 푝 · 푞 are defined by (푝 + 푞)(푥) = 푝(푥) + 푞(푥) and (푝 · 푞)(푥) = 푝(푥) · 푞(푥). These equations make sense, as their right-hand sides refer to elements and operations of 푅.

Verifying that 푅푋 is a Boolean ring is conceptually the same as verifying that 푆 (as in Example 4.2.6) is a Boolean ring, but notationally it looks a bit different. Consider as an example the verification of the distributive law (8). In the context of 푅푋, the left and right sides of (8) denote functions from 푋 into 푅. It must be shown that these two functions are equal. They obviously have the same domain 푋, so it suffices to check that the values of the two functions agree at each element 푥 in the domain, that is, (푝 · (푞 + 푟))(푥) = (푝 · 푞 + 푝 · 푟)(푥). (14) The left and right sides of (14) evaluate to 푝(푥) · (푞(푥) + 푟(푥)) and 푝(푥) · 푞(푥) + 푝(푥) · 푟(푥) (15) respectively, by the definitions of addition and multiplication in 푅푋. Each of these terms denotes an element of 푅. Since the distributive law holds in 푅, the terms in (15) are equal, and therefore equation (14) is true. The other Boolean ring axioms are verified for 푅푋 in a similar fashion.

The next result shows that in an arbitrary ring some additional properties hold beyond those of (1)-(10), as mentioned below:

Theorem 4.2.9: In an arbitrary ring, 푝 ∙ 0 = 0 ∙ 푝 = 0 and 푝 ∙ (−푞) = (−푝) ∙ 푞 = −(푝 ∙ 푞).

Proof: We can write 푝 ∙ 0 = 푝 ∙ (0 + 0) = 푝 ∙ 0 + 푝 ∙ 0, whence 0 = 푝 ∙ 0 (by cancellation in the additive group). Similarly 0 ∙ 푝 = 0. Thus 푝 ∙ 0 = 0 ∙ 푝 = 0. Now 푝 ∙ (−푞) + 푝 ∙ 푞 = 푝 ∙ (−푞 + 푞) (by (8)) = 푝 ∙ 0 = 0, which implies 푝 ∙ (−푞) = −(푝 ∙ 푞). Similarly (−푝) ∙ 푞 = −(푝 ∙ 푞).

Definition 4.2.10: A Boolean group 퐵 is a group in which every element is its own inverse (in other words, the law (10) is valid: 푝 + 푝 = 0 for all 푝 ∈ 퐵).

Theorem 4.2.11: Every Boolean group 퐵 is commutative (that is, the commutative law (3) is valid).

Proof: Since 퐵 satisfies law (10), we can write (푝 + 푞) + (푝 + 푞) = 0 for all 푝, 푞 ∈ 퐵. (i) Now 푝 + 푞 = 푝 + 0 + 푞, so by (i) we have 푝 + 푞 = 푝 + (푝 + 푞) + (푝 + 푞) + 푞 = (푝 + 푝) + (푞 + 푝) + (푞 + 푞) (by (1)) = 0 + (푞 + 푝) + 0 = 푞 + 푝. Thus 퐵 is commutative.

Definition 4.2.12: A zero divisor in a ring is a non-zero element 푝 such that 푝 ∙ 푞 = 0 for some non-zero element 푞.

Theorem 4.2.13: A Boolean ring, with or without unit, having more than two elements has zero divisors.

Proof: Let 퐵 be a Boolean ring having more than two elements. Then there exist two distinct non-zero elements 푥, 푦 ∈ 퐵, and 푥 + 푦 ≠ 0 since 푥 ≠ 푦 (by characteristic 2, 푥 + 푦 = 0 would give 푥 = 푦). Now we have the following cases to consider. Case 1: If 푥 ∙ 푦 = 0, then we are done. Case 2: If 푥 ∙ 푦 ≠ 0, then

(푥 ∙ 푦) ∙ (푥 + 푦) = 푥 ∙ 푦 ∙ 푥 + 푥 ∙ 푦 ∙ 푦 = 푥2 ∙ 푦 + 푥 ∙ 푦2 = 푥 ∙ 푦 + 푥 ∙ 푦 (using commutativity and (11)) = 0 (using (10)). Hence 푥 ∙ 푦 is a zero divisor in this case. Thus the result follows.
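Theorem 4.2.13 is easy to witness in the smallest case with more than two elements: the four-element pair ring of Example 4.2.6. The following sketch (with our own names `add`, `mul`) lists its zero divisors:

```python
from itertools import product

# The four-element Boolean ring S of Example 4.2.6: pairs over the
# two-element ring, with coordinatewise addition (mod 2) and product.
S = list(product([0, 1], repeat=2))
add = lambda p, q: tuple((a + b) % 2 for a, b in zip(p, q))
mul = lambda p, q: tuple(a * b for a, b in zip(p, q))
zero = (0, 0)

# a zero divisor is a non-zero p with p · q = 0 for some non-zero q
zero_divisors = [p for p in S if p != zero and
                 any(q != zero and mul(p, q) == zero for q in S)]
print(zero_divisors)  # -> [(0, 1), (1, 0)]; they annihilate each other
```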

4.3 Boolean Algebras Versus Rings

In this section our focus will be mainly on how Boolean rings can be converted into Boolean algebras and Boolean algebras into Boolean rings. The section ends with some important results on these conversions.

Boolean rings and Boolean algebras: The theories of Boolean algebras and Boolean rings are very closely related; in fact, they are just different ways of looking at the same subject. More precisely every Boolean algebra can be turned into a Boolean ring by defining appropriate operations of addition and multiplication and conversely every Boolean ring can be turned into a Boolean algebra by defining appropriate operations of join, meet and complement. We discuss this next as follows:

How can Boolean rings be converted into Boolean algebras and conversely? We will discuss the general technique for converting every Boolean ring into a Boolean algebra and conversely.

Motivated by the set-theoretic example of symmetric difference and intersection, we can introduce into every Boolean algebra 퐴 operations of addition and multiplication very much like symmetric difference and intersection; just define (3) 푝 + 푞 = (푝 ∧ 푞′) ∨ (푝′ ∧ 푞) and 푝 ∙ 푞 = 푝 ∧ 푞. Under these operations, together with 0 and 1 (the zero and unit of the Boolean algebra), 퐴 becomes a Boolean ring. Conversely, every Boolean ring can be turned into a Boolean algebra with the same zero and unit; just define operations of join, meet, and complement by (4) 푝 ∨ 푞 = 푝 + 푞 + 푝 ∙ 푞, 푝 ∧ 푞 = 푝 ∙ 푞 and 푝′ = 푝 + 1. Start with a Boolean algebra, turn it into a Boolean ring (with the same zero and unit) using the definitions in (3), and then convert the ring into a Boolean algebra using the definitions in (4); the result is the original Boolean algebra. Conversely, start with a Boolean ring, convert it into a Boolean algebra using the definitions in (4), and then convert the Boolean algebra into a Boolean ring using the definitions in (3); the result is the original ring.

Theorem 4.3.1: Let (퐵; ∧,∨, ´) be a Boolean algebra. Define a multiplication and an addition on 퐵 by setting: For all 푥, 푦 ∈ 퐵 푥푦 = 푥 ∧ 푦, 푥 + 푦 = (푥 ∧ 푦´) ∨ (푥´ ∧ 푦). Then (퐵; ⋅, +) is a Boolean ring.

Proof: Clearly (퐵; ⋅) is a semigroup with an identity, namely the top element 1 of 퐵. Moreover, for every 푥 ∈ 퐵 we have 푥2 = 푥 ∙ 푥 = 푥 ∧ 푥 = 푥, so every element is idempotent. Now given 푥, 푦, 푧 ∈ 퐵 it is easy to verify that (푥 + 푦) + 푧 = (푥 ∧ 푦´ ∧ 푧´) ∨ (푥´ ∧ 푦 ∧ 푧´) ∨ (푥´ ∧ 푦´ ∧ 푧) ∨ (푥 ∧ 푦 ∧ 푧), which, being symmetric in 푥, 푦, 푧, is also equal to 푥 + (푦 + 푧). Since 푥 + 0 = (푥 ∧ 0´) ∨ (푥´ ∧ 0) = (푥 ∧ 1) ∨ 0 = 푥 and 푥 + 푥 = (푥 ∧ 푥´) ∨ (푥´ ∧ 푥) = 0 ∨ 0 = 0, and 푥 + 푦 = 푦 + 푥 by the symmetry of the definition, we see that (퐵; +) is an abelian group in which −푥 = 푥 for every 푥 ∈ 퐵. Finally, for all 푥, 푦, 푧 ∈ 퐵 we have 푥푦 + 푥푧 = [푥푦 ∧ (푥푧)´] ∨ [(푥푦)´ ∧ 푥푧] = [푥 ∧ 푦 ∧ (푥 ∧ 푧)´] ∨ [(푥 ∧ 푦)´ ∧ 푥 ∧ 푧] = [푥 ∧ 푦 ∧ (푥´ ∨ 푧´)] ∨ [(푥´ ∨ 푦´) ∧ 푥 ∧ 푧] = (푥 ∧ 푦 ∧ 푧´) ∨ (푥 ∧ 푦´ ∧ 푧) = 푥 ∧ [(푦 ∧ 푧´) ∨ (푦´ ∧ 푧)] = 푥(푦 + 푧).

Thus (퐵; ⋅, +) is a Boolean ring. We can also proceed in the opposite direction: given a Boolean ring (퐵; ∙, +), we can equip it with the structure of a Boolean algebra. For this purpose, we first note that in such a ring we have 푥 + 푦 = (푥 + 푦)2 = 푥2 + 푥푦 + 푦푥 + 푦2 = 푥 + 푥푦 + 푦푥 + 푦, whence 푥푦 + 푦푥 = 0 and so −푥푦 = 푦푥. Taking 푦 = 푥, we obtain −푥2 = 푥2, that is, −푥 = 푥, so that 푥 + 푥 = 0. Thus a Boolean ring is of characteristic 2. Now since 푥 = −푥 for every 푥 we have 푥푦 = −푥푦 = 푦푥, whence we see that a Boolean ring is commutative.

Theorem 4.3.2: Let (퐵; ⋅, +) be a Boolean ring. For all 푥, 푦, 푧 ∈ 퐵 define 푥 ∧ 푦 = 푥푦, 푥 ∨ 푦 = 푥 + 푦 + 푥푦, 푥´ = 1 + 푥.

Then (퐵; ∧,∨, ´) is a Boolean algebra.

Proof: It is clear from the above that (퐵; ∧) is an abelian semigroup. Also, (푥 ∨ 푦) ∨ 푧 = (푥 ∨ 푦) + 푧 + (푥 ∨ 푦)푧 = 푥 + 푦 + 푥푦 + 푧 + 푥푧 + 푦푧 + 푥푦푧, the symmetry of which shows that (퐵; ∨) is also a semigroup, again abelian. Since now 푥 ∧ (푥 ∨ 푦) = 푥(푥 + 푦 + 푥푦) = 푥 + 푥푦 + 푥푦 = 푥 + 0 = 푥 and 푥 ∨ (푥 ∧ 푦) = 푥 + 푥푦 + 푥2푦 = 푥 + 푥푦 + 푥푦 = 푥 + 0 = 푥, it follows that (퐵; ∧, ∨) is a lattice. This lattice is distributive, since 푥 ∧ (푦 ∨ 푧) = 푥(푦 + 푧 + 푦푧) = 푥푦 + 푥푧 + 푥푦푧 = 푥푦 + 푥푧 + 푥푦푥푧 = 푥푦 ∨ 푥푧 = (푥 ∧ 푦) ∨ (푥 ∧ 푧). Now the order in this lattice is given by 푥 ≤ 푦 if and only if 푥 = 푥 ∧ 푦 = 푥푦; hence the lattice is bounded, with top element 1 and bottom element 0. Finally, 푥 ∨ 푥´ = 푥 + 푥´ + 푥푥´ = 푥 + (1 + 푥) + 푥(1 + 푥) = 푥 + 1 + 푥 + 푥 + 푥 = 1 and 푥 ∧ 푥´ = 푥푥´ = 푥(1 + 푥) = 푥 + 푥 = 0, so 푥´ is the complement of 푥. Thus (퐵; ∧, ∨, ´) is a Boolean algebra.
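The round trip described by Theorems 4.3.1 and 4.3.2 can be checked concretely on 푃(푋): starting from the algebra operations, build the ring operations (3), then rebuild join, meet and complement via (4), and compare with the originals. This is only an illustrative sketch; the names `radd`, `rmul`, `vee`, `wedge` and `prime` are ours:

```python
from itertools import combinations

# Round trip algebra -> ring -> algebra on the power set P(X).
X = frozenset({1, 2, 3})
subsets = [frozenset(s) for r in range(4) for s in combinations(sorted(X), r)]

# Theorem 4.3.1: ring operations defined from the algebra
radd = lambda x, y: (x & (X - y)) | ((X - x) & y)  # x + y = (x∧y')∨(x'∧y)
rmul = lambda x, y: x & y                          # x · y = x ∧ y

# Theorem 4.3.2: algebra operations recovered from the ring
vee = lambda x, y: radd(radd(x, y), rmul(x, y))    # x ∨ y = x + y + xy
wedge = lambda x, y: rmul(x, y)                    # x ∧ y = xy
prime = lambda x: radd(X, x)                       # x' = 1 + x

for x in subsets:
    assert prime(x) == X - x                       # complement recovered
    for y in subsets:
        assert vee(x, y) == x | y                  # join recovered
        assert wedge(x, y) == x & y                # meet recovered
print("algebra -> ring -> algebra returns the original operations")
```

Note that `radd` is exactly symmetric difference and `rmul` is intersection, matching the set-theoretic motivation of (3).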

4.4 Applications of Boolean Algebras

Propositional Calculus

We now turn our attention to the applications of Boolean algebra to two-valued logic, and in particular to the calculus of propositions. Historically, lattice theory had its beginnings in the investigations of Boole into the formalization of logic.

Definition 4.4.1: By a proposition we mean a statement which, in some clearly defined sense, is either true (푇) or false (퐹). Thus of the two propositions 'Grass is green', 'Fish grow on trees', the first is 푇 and the second is 퐹. With the help of the words 'and' (∧), 'or' (∨), 'not' (∼), compound propositions such as 'Grass is not green', 'Grass is green and fish grow on trees', can be constructed from simpler ones, and the truth values 푇 or 퐹 of compound propositions may be calculated from those of the simpler ones of which they are composed by means of the logical matrices defined below. Before defining logical matrices, we fix the notation. Notation: By the symbol '∼' we denote 'not', the negation of a statement, which we have denoted before by the symbol ′ (complementation); so we use the symbol ′ throughout instead of '∼'.

Definition 4.4.2: Logical matrices are to be regarded as statements of the axioms upon which the propositional calculus is based. They are as follows:

∧  푇  퐹        ∨  푇  퐹
푇  푇  퐹        푇  푇  푇
퐹  퐹  퐹        퐹  푇  퐹

and complementation ′ is given by 푇′ = 퐹 and 퐹′ = 푇. Thus 'grass is green and fish grow on trees' is 퐹 because 퐹 appears in row 푇 and column 퐹 of the matrix for ∧. It is usual to denote propositions by 푝, 푞, … and propositions compounded from these by 푝 ∧ 푞, 푝 ∨ 푞, 푝′, 푝 ∧ 푞′, 푝 ∧ (푞 ∨ 푟), and so on. By way of clarification it should be stated that ∨ denotes the inclusive 'or', so that 푝 ∨ 푞 means '푝 or 푞 or both'. The exclusive 'or' corresponds to the + of Boolean rings: we could write 푝 + 푞 to mean '푝 or 푞 but not both'.

Definition 4.4.3: A proposition which cannot be constructed from simpler propositions exclusively with the help of ∧, ∨, ′ may be called an elementary proposition.

In general, the truth value of compound proposition can be determined from the logical matrices and from the truth values of the elementary propositions composing it. There are however certain compound propositions whose truth value can be determined without a knowledge of the truth values of the elementary propositions. For instance Shakespeare wrote Hamlet or Shakespeare did not write Hamlet is 푇 whether or not Shakespeare wrote Hamlet. In the above compound proposition we can replace the elementary proposition ‘Shakespeare wrote Hamlet’ by any other proposition 푝 without altering its truth value. Thus

푝 ∨ 푝′ is 푇 for all 푝.

Similarly

푝 ∧ 푝′ is 퐹 for all 푝.

These results can be calculated from the logical matrices alone. Such computations are conveniently set out in the form of truth tables. In such a table all possible combinations of truth values for the elementary propositions involved are tabulated to the left of the double line. The columns to the right of the double line are then computed in succession from the logical matrices. The truth tables for 푝 ∨ 푝′ and for 푝 ∧ 푝′ may be set down together as follows.

푝 | 푝′ | 푝 ∨ 푝′ | 푝 ∧ 푝′
푇 | 퐹  | 푇      | 퐹
퐹 | 푇  | 푇      | 퐹
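Such truth-table computations are easy to mechanise. The following Python sketch (the helper names `is_tautology` and `is_contradiction` are ours, not part of the text) checks a compound proposition over every assignment of truth values, modelling 푇 and 퐹 by Python's built-in Booleans:

```python
from itertools import product

def is_tautology(f, n):
    """True if the n-variable compound proposition f is T for every assignment."""
    return all(f(*vals) for vals in product([True, False], repeat=n))

def is_contradiction(f, n):
    """True if f is F for every assignment."""
    return not any(f(*vals) for vals in product([True, False], repeat=n))

# p ∨ p′ is a tautology; p ∧ p′ is a contradiction.
print(is_tautology(lambda p: p or not p, 1))       # True
print(is_contradiction(lambda p: p and not p, 1))  # True
```

The loops simply enumerate the rows of the corresponding truth table.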

Definition 4.4.4: A proposition 푞, such as 푝 ∨ 푝′, is said to be formally true and is called a tautology if every proposition 푞∗ obtained from 푞 by replacing its elementary propositions by arbitrary propositions is 푇. Correspondingly 푞 is said to be formally false and is called an absurdity or a contradiction if every 푞∗ is 퐹. We observe that this notation permits us to write

(푝 ∧ 푞)∗ = 푝∗ ∧ 푞∗, (푝 ∨ 푞)∗ = 푝∗ ∨ 푞∗, (푝′)∗ = (푝∗)′. Notation: We introduce the symbol ⟷ by defining 푝 ⟷ 푞 to be an abbreviation for the proposition (푝 ∧ 푞) ∨ (푝′ ∧ 푞′). The following computation

푝 | 푞 | 푝 ∧ 푞 | 푝′ | 푞′ | 푝′ ∧ 푞′ | 푝 ⟷ 푞
푇 | 푇 | 푇     | 퐹  | 퐹  | 퐹       | 푇
푇 | 퐹 | 퐹     | 퐹  | 푇  | 퐹       | 퐹
퐹 | 푇 | 퐹     | 푇  | 퐹  | 퐹       | 퐹
퐹 | 퐹 | 퐹     | 푇  | 푇  | 푇       | 푇

shows that ⟷ has the following logical matrix

⟷ | 푇 퐹
푇 | 푇 퐹
퐹 | 퐹 푇

We see that 푝 ⟷ 푞 is 푇 if and only if 푝 and 푞 have equal truth values whatever the truth values of the elementary propositions composing 푝 and 푞 may be.
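This observation can be checked directly from the defining abbreviation; a short Python sketch (the name `iff` is ours):

```python
from itertools import product

def iff(p, q):
    """p ⟷ q as an abbreviation for (p ∧ q) ∨ (p′ ∧ q′)."""
    return (p and q) or ((not p) and (not q))

# The matrix for ⟷: T exactly when p and q have equal truth values.
for p, q in product([True, False], repeat=2):
    assert iff(p, q) == (p == q)
```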

Definition 4.4.5: If the proposition 푝 ⟷ 푞 is formally true, then we call 푝, 푞 equivalent propositions and write 푝 ≡ 푞.

The above definition of ≡ is used in the following result.

Proposition 4.4.6: For any propositions 푝, 푞 and 푟:
(i) 푝 ≡ 푝;
(ii) if 푝 ≡ 푞 then 푞 ≡ 푝;
(iii) if 푝 ≡ 푞 and 푞 ≡ 푟 then 푝 ≡ 푟;
(iv) if 푝 ≡ 푞 then 푝′ ≡ 푞′;
(v) if 푝 ≡ 푞 then 푝 ∧ 푟 ≡ 푞 ∧ 푟;
(vi) if 푝 ≡ 푞 then 푝 ∨ 푟 ≡ 푞 ∨ 푟;
(vii) 푝 ∧ 푞 ≡ 푞 ∧ 푝;
(viii) 푝 ∨ 푞 ≡ 푞 ∨ 푝.

Proof: In each case an indirect proof can be constructed; we exhibit the details for (iii), the rest following by similar arguments. Suppose that 푝 ≢ 푟. Then 푝∗ ⟷ 푟∗ is 퐹 for some choice of 푝∗ and 푟∗, so for this choice 푝∗ and 푟∗ have different truth values. Suppose first that 푝∗ is 푇 and 푟∗ is 퐹. Then 푝 ≡ 푞 states that 푝∗ ⟷ 푞∗ is 푇 for every 푞∗, and consequently each 푞∗, like 푝∗, is 푇. Further, 푞 ≡ 푟 states that 푞∗ ⟷ 푟∗ is 푇, from which we see that 푟∗, like 푞∗, is 푇, in contradiction to the supposition that 푟∗ is 퐹. A similar contradiction arises if we suppose that 푝∗ is 퐹 and 푟∗ is 푇. Thus the validity of (iii) is demonstrated.

If 푝1 ≡ 푝2 and 푞1 ≡ 푞2, then from (v) and (vii) we have

푝1 ∧ 푞1 ≡ 푝2 ∧ 푞1 ≡ 푞1 ∧ 푝2 ≡ 푞2 ∧ 푝2 ≡ 푝2 ∧ 푞2. Using (iii) we obtain

(ix) if 푝1 ≡ 푝2 and 푞1 ≡ 푞2 then 푝1 ∧ 푞1 ≡ 푝2 ∧ 푞2.

Similarly, from (iii), (vi) and (vii) we have (x) if 푝1 ≡ 푝2 and 푞1 ≡ 푞2 then 푝1 ∨ 푞1 ≡ 푝2 ∨ 푞2.

Remark 4.4.7: Parts (i), (ii), (iii) of the above result show that ≡ is an equivalence relation, which we might well have denoted by =. Our object, however, has been to elucidate the meaning of this kind of equality, and for this purpose we think the notation ≡ more suggestive. It is in this sense that the postulates for a distributive lattice are satisfied by interpreting ∨ and ∧ as union and intersection. For instance, the distributive law takes the form 푝 ∧ (푞 ∨ 푟) ≡ (푝 ∧ 푞) ∨ (푝 ∧ 푟), and its validity can be demonstrated by showing that each side has the same truth value whatever the truth values of 푝, 푞 and 푟 may be. This is done in the following truth table.

푝 | 푞 | 푟 | 푞 ∨ 푟 | 푝 ∧ (푞 ∨ 푟) | 푝 ∧ 푞 | 푝 ∧ 푟 | (푝 ∧ 푞) ∨ (푝 ∧ 푟)
푇 | 푇 | 푇 | 푇     | 푇           | 푇     | 푇     | 푇
푇 | 푇 | 퐹 | 푇     | 푇           | 푇     | 퐹     | 푇
푇 | 퐹 | 푇 | 푇     | 푇           | 퐹     | 푇     | 푇
푇 | 퐹 | 퐹 | 퐹     | 퐹           | 퐹     | 퐹     | 퐹
퐹 | 푇 | 푇 | 푇     | 퐹           | 퐹     | 퐹     | 퐹
퐹 | 푇 | 퐹 | 푇     | 퐹           | 퐹     | 퐹     | 퐹
퐹 | 퐹 | 푇 | 푇     | 퐹           | 퐹     | 퐹     | 퐹
퐹 | 퐹 | 퐹 | 퐹     | 퐹           | 퐹     | 퐹     | 퐹
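The eight rows of such a table can be generated and compared in a few lines of Python (the helper name `distributes` is illustrative):

```python
from itertools import product

def distributes(p, q, r):
    """Left and right sides of 푝 ∧ (푞 ∨ 푟) ≡ (푝 ∧ 푞) ∨ (푝 ∧ 푟)."""
    return (p and (q or r), (p and q) or (p and r))

# Every row of the truth table gives equal sides.
assert all(left == right
           for left, right in (distributes(*row)
                               for row in product([True, False], repeat=3)))
```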

In clarification of the meanings of 푝 ⟷ 푞 and 푝 ≡ 푞 we point out that there are different categories of propositions. We may consider elementary propositions as being in the lowest category. The statement 푝 ⟷ 푞 is in a higher category since it is a proposition about propositions, namely that 푝 and 푞 have the same truth value. 푝 ≡ 푞 is in a higher category still, since it is a proposition about a proposition about propositions, namely that 푝 ⟷ 푞 is formally true.

Remarks 4.4.8:
1. The equivalence relation mentioned earlier separates all propositions into equivalence classes, and we may denote by 푝̅ the class to which the proposition 푝 belongs. That is, the propositions 푝 and 푞 belong to the same class if and only if 푝 ≡ 푞; expressed another way, 푝̅ = 푞̅ if and only if 푝 ≡ 푞.
2. The results (iv), (ix) and (x) show that the operations ′, ∧ and ∨ are stable in relation to the equivalence relation, and that the equivalence relation is indeed a congruence relation, which enables us to define unambiguously the following operations on the set of equivalence classes: 푝̅ = 푞̅′ if and only if 푝 ≡ 푞′; 푝̅ = 푞̅ ∩ 푟̅ if and only if 푝 ≡ 푞 ∧ 푟; 푝̅ = 푞̅ ∪ 푟̅ if and only if 푝 ≡ 푞 ∨ 푟.
3. If, for example, (ix) were not true, then for some 푝1, 푞1, 푝2, 푞2 we would have 푝1 ≡ 푞1, 푝2 ≡ 푞2, 푝1 ∧ 푝2 ≢ 푞1 ∧ 푞2, with the consequence that 푝̅1 = 푞̅1 and 푝̅2 = 푞̅2 while the class of 푝1 ∧ 푝2 differs from the class of 푞1 ∧ 푞2, and we could not define 푝̅1 ∩ 푝̅2 to be the class of 푝1 ∧ 푝2 without involving ambiguity.
4. We observe that the statement 푝 ∨ 푝′ ⟷ 푞 ∨ 푞′ is always 푇, since each side has the same truth value 푇 for all 푝, 푞. Thus 푝 ∨ 푝′ ≡ 푞 ∨ 푞′ and we may denote the class to which 푝 ∨ 푝′ belongs by 퐼. Thus

퐼 is the class of 푝 ∨ 푝′, so that 퐼 = 푝̅ ∪ 푝̅′. In the same way 푝 ∧ 푝′ ≡ 푞 ∧ 푞′, since each side is always 퐹, and we can write 푂 for the class of 푝 ∧ 푝′, so that 푂 = 푝̅ ∩ 푝̅′. In fact 퐼 is the class of all tautologies and 푂 is the class of all contradictions.

Theorem 4.4.10: The classes 푝̅, 푞̅, … , 푂, 퐼 of propositions form a Boolean algebra in relation to the operations ∩, ∪ and ´ defined above.

Proof: We need not give a detailed proof for each postulate. We have already seen in (vii) that 푝 ∧ 푞 ≡ 푞 ∧ 푝 so we obtain commutative laws from

푝̅ ∩ 푞̅ = 푞̅ ∩ 푝̅, since both equal the class of 푝 ∧ 푞 ≡ 푞 ∧ 푝. The other postulates for a distributive lattice are proved in a similar manner. For instance, we show by constructing a truth table that 푝 ≡ 푝 ∧ (푝 ∨ 푞), from which we get 푝̅ = 푝̅ ∩ (푝̅ ∪ 푞̅), which is the absorption law. Further, 푝̅ ∪ 퐼 = 푝̅ ∪ (푝̅ ∪ 푝̅′) = (푝̅ ∪ 푝̅) ∪ 푝̅′ = 푝̅ ∪ 푝̅′ = 퐼, and dually, 푝̅ ∩ 푂 = 푂. These formulae imply that 퐼 is the top element and 푂 is the bottom element. The relations 푝̅ ∪ 푝̅′ = 퐼, 푝̅ ∩ 푝̅′ = 푂 now demonstrate that the lattice is complemented, since each class 푝̅ has a complement 푝̅′. Thus the lattice is a Boolean algebra, and the result follows.
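The appeal to a truth table for 푝 ≡ 푝 ∧ (푝 ∨ 푞) in this proof can likewise be mechanised; a Python sketch (the helper name is ours), which also checks the dual absorption law:

```python
from itertools import product

def absorption_holds():
    """Verify 푝 ≡ 푝 ∧ (푝 ∨ 푞) and dually 푝 ≡ 푝 ∨ (푝 ∧ 푞) for all truth values."""
    return all(p == (p and (p or q)) and p == (p or (p and q))
               for p, q in product([True, False], repeat=2))

assert absorption_holds()
```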

Switching Circuits

By a switching circuit we mean a piece of electrical apparatus between the terminals of which may be one or more switches of different sorts. These switches may be hand operated, or may be operated by the circuit itself, or by other circuits. Since we are only concerned with whether or not a current flows in a circuit when a potential difference is applied between two of the terminals, we take into account neither the magnitude of the current nor the magnitudes of the component resistances. At any instant a given switch 푎 is supposed to be either open (푎 = 0) or closed (푎 = 1). By means of electrical relays it is possible to arrange that a number of other switches are open when 푎 is open and are closed when 푎 is closed. We shall denote each of these by 푎, so that 푎 really denotes a class of switches which are either simultaneously open or simultaneously closed. Again, another set of switches 푎′ can be operated by relays so that each switch 푎′ is open when 푎 is closed and is closed when 푎 is open. In the accompanying diagrams the lines indicate conductors while the lettered gaps in the conductors denote switches. The boxes containing letters denote relays which may be used to operate other switches. Fig. 1 denotes a circuit containing a single switch 푎 and a relay. A current flows in this circuit only when 푎 = 1 and, when it does so, it operates the relay, which may be used to operate other switches 푎 and also

to operate switches 푎′. The circuit of fig. 2 has two switches 푎 and 푏 in series and will be denoted by 푎 ∩ 푏.

Since this circuit is closed if and only if 푎 and 푏 are both closed,

푎 ∩ 푏 = 1 ⟺ 푎 = 1 ∧ 푏 = 1,
푎 ∩ 푏 = 0 ⟺ 푎 = 0 ∨ 푏 = 0.    (1)

When the relay of this circuit operates a switch 푐 such that 푐 = 1 when 푎 ∩ 푏 = 1 and 푐 = 0 when 푎 ∩ 푏 = 0, it is natural to write 푐 = 푎 ∩ 푏. In effect, this means that not only can a single letter denote a class of switches but a single letter or a single formula can denote a class of equivalent circuits, all of which are open simultaneously or all closed simultaneously.

In a similar manner a circuit containing two switches 푎 and 푏 in parallel will be denoted by 푎 ∪ 푏 (fig. 3) and it is easy to see

푎 ∪ 푏 = 1 ⟺ 푎 = 1 ∨ 푏 = 1,
푎 ∪ 푏 = 0 ⟺ 푎 = 0 ∧ 푏 = 0.    (2)
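Modelling an open switch by 0 and a closed switch by 1, series and parallel connection become the bitwise operations below, and formulae (1) and (2) can be checked exhaustively (the function names are illustrative):

```python
def series(a, b):
    """푎 ∩ 푏: the series circuit is closed (1) only when both switches are closed."""
    return a & b

def parallel(a, b):
    """푎 ∪ 푏: the parallel circuit is open (0) only when both switches are open."""
    return a | b

# Check formulae (1) and (2) for every combination of open/closed switches.
for a in (0, 1):
    for b in (0, 1):
        assert (series(a, b) == 1) == (a == 1 and b == 1)    # formula (1)
        assert (parallel(a, b) == 0) == (a == 0 and b == 0)  # formula (2)
```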

As our notation suggests, the component circuits, or rather the classes of equivalent circuits, are the elements of a Boolean algebra. The verification of the postulates of a distributive lattice is accomplished in exactly the same manner as in the propositional calculus, by means of truth tables. For instance, the two circuits in fig. 4 and fig. 5 are either simultaneously open or

simultaneously closed, as is demonstrated by the accompanying truth table.

푎 | 푏 | 푐 | 푎 ∪ 푏 | (푎 ∪ 푏) ∪ 푐 | 푏 ∪ 푐 | 푎 ∪ (푏 ∪ 푐)
0 | 0 | 0 | 0     | 0           | 0     | 0
0 | 0 | 1 | 0     | 1           | 1     | 1
0 | 1 | 0 | 1     | 1           | 1     | 1
0 | 1 | 1 | 1     | 1           | 1     | 1
1 | 0 | 0 | 1     | 1           | 0     | 1
1 | 0 | 1 | 1     | 1           | 1     | 1
1 | 1 | 0 | 1     | 1           | 1     | 1
1 | 1 | 1 | 1     | 1           | 1     | 1

Thus the circuit (푎 ∪ 푏) ∪ 푐 is equivalent to the circuit 푎 ∪ (푏 ∪ 푐). We adopted the convention that 푎 = 1 denotes that the switch or circuit 푎 is closed, but we may in fact denote the short circuit itself by 1 (fig. 6). Thus we may interpret 푎 = 1 to mean that 푎 is temporarily equivalent to a short circuit. Similarly, we interpret 푎 = 0 to mean that 푎 is temporarily equivalent to an open circuit, which is labelled 0 (fig. 6).

Since it is easily verified that 푎 ∪ 1 = 1, 푎 ∩ 0 = 0, it is clear that 1 is the top element and 0 is the bottom element. An examination of the circuits of fig. 7 reveals that 푎 ∪ 푎′ = 1, 푎 ∩ 푎′ = 0; from which it is clear that each class 푎 has complement 푎′. So the lattice is a Boolean algebra.

In the circuits of figs. 2 and 3 the switches 푎 and 푏 might be operated manually, since they may be operated independently. The circuit of fig. 7 reveals a different situation, for the operation of 푎′ is determined by that of 푎, and the one cannot be manipulated independently of the other. This relationship is expressed by the formula 푎 ∪ 푎′ = 1.

We now mention some other circuits in which the relationship between the switches is expressed by an equation in the Boolean algebra.

Example 4.4.11: Consider a circuit with two switches 푎, 푏 related by the equation

푎 ∪ 푏 = 푎, or by one of the equivalent formulae 푎 ≥ 푏, 푎 ∩ 푏 = 푏. Since 1 ≥ 푎 ≥ 푏, it follows that 푏 = 1 implies 푎 = 1. Thus 푏 cannot be closed until 푎 is closed, and whenever 푎 is open 푏 must be open. We need not concern ourselves here with the mechanical construction of such a circuit, which can be achieved in various ways, but it is plain that such a device would have practical value. Indeed, the idea can be extended to a sequential system of circuits 푎, 푏, 푐, … such that 푎 ≥ 푏 ≥ 푐 ≥ ⋯, of which the last can be closed only when 푎, 푏, 푐, … have been closed in alphabetical order.

Example 4.4.12: Another circuit of special interest contains three switches 푎, 푏, 푐 satisfying

푏 = 푎 ∩ (푏 ∪ 푐)

Since this relation implies 푎 ≥ 푏, this circuit is a modification of the previous one. Assume initially that 푎 = 1; then the closing of 푐 ensures that 푐 = 1, 푏 ∪ 푐 = 1, and 푏 = 푎 ∩ (푏 ∪ 푐) = 1, so 푏 closes, and it remains closed when 푐 is released, since 푏 ∪ 푐 = 1 still holds. However, 푏 must open immediately when 푎 is opened. This is known as a lock-in circuit. We can suppose that 푎 is a break switch which is normally held closed by a spring and that 푐 is a make switch normally held open by a spring. The switch 푏 is operated by a relay. To close 푏 we need only press 푐 momentarily; once this is done, 푏 closes and stays closed until 푎 is pressed. This circuit is illustrated in fig. 8.
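The behaviour of the lock-in circuit can be simulated by iterating the relay equation 푏 = 푎 ∩ (푏 ∪ 푐); a Python sketch (the name `lock_in` is ours):

```python
def lock_in(a, b, c):
    """One update step of the relay equation b = a ∩ (b ∪ c)."""
    return a & (b | c)

b = 0
b = lock_in(a=1, b=b, c=1)   # press the make switch c: b closes
assert b == 1
b = lock_in(a=1, b=b, c=0)   # release c: the relay holds b closed via b ∪ c
assert b == 1
b = lock_in(a=0, b=b, c=0)   # press the break switch a: b opens at once
assert b == 0
```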

The two principal objects in applying Boolean algebra to switching problems are, first, to design a circuit with a prescribed function and, secondly, to simplify a circuit without altering its function.

As an illustration of the first type of problem we consider the construction of a binary adder which yields the sum of three digits 푎, 푏, 푐 in the binary scale. One of these digits, say 푐, will be a “carry in” from a previous column. Since 푎, 푏, 푐 are each 0 or 1, their sum must be one of the four integers 0, 1, 2, 3, which in the binary scale take the forms 00, 01, 10, 11. If we denote this sum in the binary scale by 푥푦, then 푥 is the “carry out” digit which is inserted in the next column. The summation to be performed and the required values of 푥, 푦 are as follows:

푎 | 푏 | 푐 | 푎 + 푏 + 푐 | 푥 | 푦
0 | 0 | 0 | 00 | 0 | 0
0 | 0 | 1 | 01 | 0 | 1
0 | 1 | 0 | 01 | 0 | 1
0 | 1 | 1 | 10 | 1 | 0
1 | 0 | 0 | 01 | 0 | 1
1 | 0 | 1 | 10 | 1 | 0
1 | 1 | 0 | 10 | 1 | 0
1 | 1 | 1 | 11 | 1 | 1

Thus, employing the formulae (1) and (2), we get:

푦 = 1 if and only if (푎 = 0 ∧ 푏 = 0 ∧ 푐 = 1) ∨ (푎 = 0 ∧ 푏 = 1 ∧ 푐 = 0) ∨ (푎 = 1 ∧ 푏 = 0 ∧ 푐 = 0) ∨ (푎 = 1 ∧ 푏 = 1 ∧ 푐 = 1);

if and only if (푎′ = 1 ∧ 푏′ = 1 ∧ 푐 = 1) ∨ (푎′ = 1 ∧ 푏 = 1 ∧ 푐′ = 1) ∨ (푎 = 1 ∧ 푏′ = 1 ∧ 푐′ = 1) ∨ (푎 = 1 ∧ 푏 = 1 ∧ 푐 = 1);

if and only if (푎′ ∩ 푏′ ∩ 푐 = 1) ∨ (푎′ ∩ 푏 ∩ 푐′ = 1) ∨ (푎 ∩ 푏′ ∩ 푐′ = 1) ∨ (푎 ∩ 푏 ∩ 푐 = 1);

if and only if (푎′ ∩ 푏′ ∩ 푐) ∪ (푎′ ∩ 푏 ∩ 푐′) ∪ (푎 ∩ 푏′ ∩ 푐′) ∪ (푎 ∩ 푏 ∩ 푐) = 1.

Consequently,

푦 = (푎′ ∩ 푏′ ∩ 푐) ∪ (푎′ ∩ 푏 ∩ 푐′) ∪ (푎 ∩ 푏′ ∩ 푐′) ∪ (푎 ∩ 푏 ∩ 푐).
Similarly, 푥 = (푎′ ∩ 푏 ∩ 푐) ∪ (푎 ∩ 푏′ ∩ 푐) ∪ (푎 ∩ 푏 ∩ 푐′) ∪ (푎 ∩ 푏 ∩ 푐).

This simplifies to 푥 = (푎 ∩ 푏) ∪ (푏 ∩ 푐) ∪ (푐 ∩ 푎).
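The formulae for 푥 and 푦 can be verified against ordinary binary addition over all eight inputs; a Python sketch (the name `adder` is ours), writing 1 − 푎 for 푎′:

```python
def adder(a, b, c):
    """Carry digit x and sum digit y of a + b + c, from the Boolean formulae."""
    y = ((1-a) & (1-b) & c) | ((1-a) & b & (1-c)) | (a & (1-b) & (1-c)) | (a & b & c)
    x = (a & b) | (b & c) | (c & a)   # the simplified carry expression
    return x, y

# xy in binary must equal a + b + c for every input.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            x, y = adder(a, b, c)
            assert 2*x + y == a + b + c
```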

The following network may therefore be used (fig. 9)

The problem of simplifying a circuit is largely one of reducing a given Boolean polynomial to an equivalent expression which is simpler in form in the sense that fewer letters are required in writing it down. Thus the first of two expressions for 푥 above requires 12 letters or switches while the second only requires 6. A further alternative employing only 5 switches would be given by the formula;

푥 = [푎 ∩ (푏 ∪ 푐)] ∪ (푏 ∩ 푐).
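That the 5-switch expression is equivalent to the 6-switch one can again be checked exhaustively (the function names are illustrative):

```python
def carry5(a, b, c):
    """The 5-switch expression 푥 = [푎 ∩ (푏 ∪ 푐)] ∪ (푏 ∩ 푐)."""
    return (a & (b | c)) | (b & c)

def carry6(a, b, c):
    """The 6-switch expression 푥 = (푎 ∩ 푏) ∪ (푏 ∩ 푐) ∪ (푐 ∩ 푎)."""
    return (a & b) | (b & c) | (c & a)

# The two expressions agree for every switch setting.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert carry5(a, b, c) == carry6(a, b, c)
```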

There are of course also certain technical considerations which must be taken into account in determining which of two circuits should be regarded as the simpler. We consider some aspects of the problem in the next section.

Bridge circuits


The circuits discussed in the previous section have all been of the series-parallel type. If bilateral elements are used, which conduct current in both directions, it may be possible to simplify a given circuit by using a bridge circuit such as that of fig. 10. In this circuit we suppose that when 푐 is closed current can flow in either direction through this switch. The bridge circuit illustrated employs only five switches, though a series-parallel circuit for 푥 would require at least eight, corresponding to 푥 = [푎 ∩ (푒 ∪ (푐 ∩ 푑))] ∪ [푏 ∩ (푑 ∪ (푐 ∩ 푒))], or ten, corresponding to the 푥 in the figure. The appropriate formula for such a bridge circuit can be obtained by enumerating the possible paths of the current and taking the union of the Boolean functions for the different paths.
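Assuming the usual bridge labelling with 푎, 푏 on the left arms, 푒, 푑 on the right arms and 푐 as the bridge switch (an assumption on our part, since it must match fig. 10), the conduction paths are 푎–푒, 푏–푑, 푎–푐–푑 and 푏–푐–푒, and the union of their Boolean functions agrees with the eight-switch series-parallel expression quoted above:

```python
from itertools import product

def bridge(a, b, c, d, e):
    """Union of the four conduction paths of the assumed bridge labelling."""
    return (a & e) | (b & d) | (a & c & d) | (b & c & e)

def series_parallel(a, b, c, d, e):
    """The eight-switch equivalent x = [a ∩ (e ∪ (c ∩ d))] ∪ [b ∩ (d ∪ (c ∩ e))]."""
    return (a & (e | (c & d))) | (b & (d | (c & e)))

# The two circuits are open or closed simultaneously for every switch setting.
for sw in product((0, 1), repeat=5):
    assert bridge(*sw) == series_parallel(*sw)
```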

Definition 4.4.13: A device for the construction of non-series-parallel circuits is the disjunctive tree which employs transfer switches in which the operation of 푎 and 푎′ is effected by a single spring.

Consider the case of three variables (three classes of switches) 푎, 푏, 푐 and suppose that 푓(푎, 푏, 푐) is a Boolean function for the class of circuits, we find that 푓(푎, 푏, 푐) can be written as

[푎 ∩ 푏 ∩ 푓(1,1, 푐)] ∪ [푎 ∩ 푏′ ∩ 푓(1,0, 푐)] ∪ [푎′ ∩ 푏 ∩ 푓(0,1, 푐)] ∪ [푎′ ∩ 푏′ ∩ 푓(0,0, 푐)].

Now 푓(1,1, 푐), 푓(1,0, 푐), 푓(0,1, 푐), 푓(0,0, 푐) all belong to the four-element lattice generated by 푐, which is composed of the elements 0, 푐, 푐′, 1. Therefore the circuit required is realised by marrying the disjunctive tree for 푎 and 푏 (fig. 11a) with the network of fig. 11b.

There is of course no need to include the open circuit 0 in fig.11b except for diagrammatic purposes. Whatever the nature of 푓(푎, 푏, 푐) at most eight switches or four transfer switches are required. By way of illustration we take,

푓(푎, 푏, 푐) = (푎′ ∩ 푏′ ∩ 푐) ∪ (푎′ ∩ 푏 ∩ 푐′) ∪ (푎 ∩ 푏′ ∩ 푐′) ∪ (푎 ∩ 푏 ∩ 푐) which is the formula for the digit 푦 of the binary adder investigated in the previous section. We can easily verify that

푓(1,1, 푐) = 푐, 푓(1,0, 푐) = 푐′, 푓(0,1, 푐) = 푐′, 푓(0,0, 푐) = 푐.
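These residual values, and the disjunctive-tree expansion itself, can be verified in Python (the function name `f` follows the text; the checking code is ours), writing 1 − 푎 for 푎′:

```python
from itertools import product

def f(a, b, c):
    """The binary-adder digit y as a Boolean function."""
    return ((1-a) & (1-b) & c) | ((1-a) & b & (1-c)) | (a & (1-b) & (1-c)) | (a & b & c)

# The residual functions quoted in the text...
assert all(f(1, 1, c) == c for c in (0, 1))
assert all(f(1, 0, c) == 1 - c for c in (0, 1))
assert all(f(0, 1, c) == 1 - c for c in (0, 1))
assert all(f(0, 0, c) == c for c in (0, 1))

# ...and the disjunctive-tree expansion over a and b.
for a, b, c in product((0, 1), repeat=3):
    expansion = (a & b & f(1, 1, c)) | (a & (1-b) & f(1, 0, c)) \
              | ((1-a) & b & f(0, 1, c)) | ((1-a) & (1-b) & f(0, 0, c))
    assert expansion == f(a, b, c)
```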

To obtain a required circuit (fig.12) it is only necessary to connect the circuits 푎 ∩ 푏 and 푎′ ∩ 푏′ with 푐 and to connect 푎 ∩ 푏′ and 푎′ ∩ 푏 with 푐′. The short circuit 1 in fig.11b is not required.

The circuit of fig. 12 is clearly more economical than the series-parallel circuit of fig. 9 for the same Boolean function. The method described may be applied to any number of variables, but as the number rises the complexity of the computation rapidly increases.