
Automated Reasoning

(Reference text: George F. Luger, Artificial Intelligence: Structures and Strategies for Complex Problem Solving, 6th edition)

Strong AI vs. Weak AI

• AI can be classified into two categories: strong AI and weak AI
• Strong AI:
  – Can pass the Turing Test
  – Has an intelligence that matches or exceeds that of human beings
• Weak AI:
  – Doesn't pass the Turing Test
  – Produces machines that act intelligently


Automated Reasoning Program

• Allows computers to reason completely, or nearly completely, automatically
• Uses weak problem-solving methods
• Prerequisites for weak-method problem solving:
  – An unambiguous and exact notation for representing information
  – Precise inference rules for drawing conclusions
  – Carefully described strategies to apply these rules
• The design of search strategies is an art – there is no guarantee of finding a useful solution

An Example – Logic Theorist (LT)

• The first AI program
• Deliberately engineered to mimic the problem-solving skills of a human being
• A weak problem solver
• Uses a uniform representation medium – propositional calculus
• Uses sound inference rules – substitution, replacement, and detachment
• Uses strategies or heuristic methods to guide the solution process

The Strategy of LT

• First, directly apply substitution to the goal to match it against all known axioms and theorems
• Second, if that fails, apply all possible detachments and replacements to the goal and test all results for success. If none succeeds, add them to a subproblem list
• Third, use chaining to find a new subproblem (if a → c is a problem and b → c is found, then a → b is a new subproblem)
• Fourth, if the above three methods fail, go to the subproblem list and select the next untried subproblem

Inference Rules of LT

• Substitution allows any expression to be substituted for every occurrence of a symbol
  – From (B ∨ B) → B, substituting ¬A for B gives (¬A ∨ ¬A) → ¬A
• Replacement allows a connective to be replaced by its definition or an equivalent form
  – Since ¬A ∨ B ≡ A → B
  – ¬A ∨ ¬A can be replaced by A → ¬A
• Detachment is modus ponens
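To make the substitution rule concrete, here is a minimal sketch (not LT itself) that applies a substitution to a propositional formula represented as nested tuples; the representation and the function name `substitute` are illustrative assumptions, not part of the original program.

```python
# Minimal sketch (not LT itself): propositional formulas as nested tuples,
# e.g. ("or", "B", "B") for B ∨ B, ("not", "A") for ¬A, ("implies", p, q) for p → q.

def substitute(formula, symbol, replacement):
    """Replace every occurrence of a propositional symbol with another formula."""
    if isinstance(formula, str):
        return replacement if formula == symbol else formula
    op, *args = formula
    return (op, *(substitute(arg, symbol, replacement) for arg in args))

# (B ∨ B) → B, the axiom used in the example that follows
axiom = ("implies", ("or", "B", "B"), "B")

# Substituting ¬A for B yields (¬A ∨ ¬A) → ¬A
print(substitute(axiom, "B", ("not", "A")))
# ('implies', ('or', ('not', 'A'), ('not', 'A')), ('not', 'A'))
```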

The Strategy of LT (continued)

• The executive routine applies the four methods repeatedly until either the solution is found or the memory and time are exhausted
• LT executes a goal-driven, breadth-first search of the problem space
• A matching process enables the substitution, replacement and detachment of expressions

Reasoning as Search

• LT explored a search tree: the root was the initial hypothesis, and each branch was a deduction based on the rules of logic. Somewhere "up" the tree was the goal: the proposition the program intended to prove. The pathway along the branches that led to the goal was a proof – a series of statements, each deduced using the rules of logic, that led from the hypothesis to the proposition to be proved.

An Example

Suppose we wish to prove:

(p → ¬p) → ¬p

1. (A ∨ A) → A       – matching identifies the best axiom of the five available ones
2. (¬A ∨ ¬A) → ¬A    – substitution of ¬A for A, in order to apply replacement
3. (A → ¬A) → ¬A     – replacement of ∨ and ¬ by →
4. (p → ¬p) → ¬p     – substitution of p for A

QED

General Problem Solver

• A study revealed there were many ways in which LT's solutions differed from those of human subjects
• Human behavior showed strong evidence of a matching and difference-reduction mechanism
• The problem-solving strategy of human subjects is called difference reduction
• The general process of using transformations to reduce problem differences is called means-ends analysis
• The algorithm applying means-ends analysis is called the General Problem Solver (GPS)
  – Differences between the initial statement and the goal are found
  – A difference table containing transformation rules is used to remove the differences
• GPS was based on LT
• GPS was intended to work as a universal problem solver, able to solve any formalized symbolic problem
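As a rough illustration of difference reduction, the toy sketch below repeatedly finds a difference between the current state and the goal and looks up an operator in a hand-built table of connections. The states, the table, and all names are invented for illustration, and the sketch omits GPS features such as operator preconditions and subgoaling.

```python
# Toy means-ends analysis sketch (not the actual GPS): states are sets of facts,
# and a hand-built "table of connections" maps each difference to an operator
# assumed to remove it. All names here are illustrative, not from Newell and Simon.

def means_ends(state, goal, table):
    """Repeatedly pick a difference between state and goal and apply a reducing operator."""
    state = set(state)
    plan = []
    while True:
        differences = set(goal) - state
        if not differences:
            return plan                      # no differences left: goal reached
        diff = sorted(differences)[0]        # pick one difference
        if diff not in table:
            raise ValueError(f"no operator reduces difference {diff!r}")
        operator, effects = table[diff]
        plan.append(operator)
        state |= set(effects)                # the operator's effects remove the difference

# Hypothetical example: getting from home into a lecture.
table = {
    "at_campus": ("take_bus", {"at_campus"}),
    "in_lecture": ("walk_to_room", {"in_lecture"}),
}
print(means_ends({"at_home"}, {"at_campus", "in_lecture"}, table))
# ['take_bus', 'walk_to_room']
```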

Fig 14.1a Transformation rules for logic problems, from Newell and Simon (1961)

Fig 14.1b A proof of a theorem in propositional calculus, generated by a human subject, from Newell and Simon (1961)


Fig 14.2 Flow chart and difference reduction table for the General Problem Solver, from Newell and Simon (1963b)

GPS Model

• The GPS model of problem solving requires two components:
  – A general procedure for comparing two state descriptions and reducing their differences
  – The table of connections, giving links between problem differences and the transformations that reduce them

Resolution Method

• A modern and more powerful method for automated reasoning
• Can prove theorems in both propositional and predicate calculus
• The primary rule of inference in Prolog
• Uses a single rule of resolution, instead of trying different rules of inference and hoping one succeeds
• Greatly reduces the search space

Resolution Refutation

• Proves a theorem by negating the goal statement to be proved and adding it to the set of given axioms known to be true
• Proving the theorem directly may take a long time or may not work at all
• Uses the resolution rule of inference to show that the negated goal leads to a contradiction
• It follows that the original (un-negated) goal must be true – this proves the theorem


Resolution Refutation Proof Steps

• Put the premises or axioms into clause form
• Add the negation of what is to be proved, in clause form, to the set of axioms
• Resolve these clauses together, producing new clauses that logically follow from them
• Produce a contradiction by generating the empty clause
• The substitutions used to produce the empty clause are those that make the original goal true

Clause Form

• A clause is a disjunction of literals
  – e.g. A ∨ B ∨ C and ¬B ∨ C ∨ D
• A literal is an atomic expression or the negation of an atomic expression
• The database of axioms can be represented in conjunctive normal form using only ∨, ∧, and ¬
  – e.g. (A ∨ B ∨ C) ∧ (¬B ∨ C ∨ D)
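One convenient machine encoding of clause form, consistent with the bullets above though not prescribed by them, is to store each clause as a set of literals and the database as a list of such sets. The helper names below are assumptions of this sketch and are reused in the later resolution sketches.

```python
# Sketch: a clause as a frozenset of literals, a literal as a string with an
# optional "~" prefix for negation, and the CNF database as a list of clauses.
# This encoding is an illustrative choice, not mandated by the slides.

def lit(name, positive=True):
    return name if positive else "~" + name

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

# (A ∨ B ∨ C) ∧ (¬B ∨ C ∨ D) from the slide:
database = [
    frozenset({"A", "B", "C"}),
    frozenset({"~B", "C", "D"}),
]
print(database, negate("~B"), lit("D", positive=False))
```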

Resolution Rule of Inference

• Resolution is an operation on two clauses (one containing a literal and the other containing its negation) that produces a new and simpler clause in which the two complementary literals are "resolved away"
  – e.g. (A ∨ B ∨ C) and (¬B ∨ C ∨ D) resolve to A ∨ C ∨ D
         (p ∨ q) and (¬q ∨ r) resolve to p ∨ r
         (f ∨ g) and (f ∨ ¬g) resolve to f ∨ f ≡ f
• Unification may be required before resolution
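A minimal sketch of the binary resolution step just described, using the set-of-literals encoding from the previous sketch; `resolve` and the `~` negation marker are illustrative choices, not a library API.

```python
# Sketch of one binary resolution step on propositional clauses represented as
# frozensets of literals ("~" marks negation).

def negate(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolve(clause1, clause2):
    """Return all resolvents obtained by resolving away one complementary pair."""
    resolvents = []
    for literal in clause1:
        if negate(literal) in clause2:
            resolvent = (clause1 - {literal}) | (clause2 - {negate(literal)})
            resolvents.append(frozenset(resolvent))
    return resolvents

# (A ∨ B ∨ C) and (¬B ∨ C ∨ D) resolve to A ∨ C ∨ D, as on the slide:
print(resolve(frozenset({"A", "B", "C"}), frozenset({"~B", "C", "D"})))
# [frozenset({'A', 'C', 'D'})]
```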


An Example

Axioms:
  Fido is a dog.
  All dogs are animals.
  All animals will die.
Prove: Fido will die.

Using predicate calculus and modus ponens:
1. All dogs are animals: ∀(X) (dog(X) → animal(X))
2. Fido is a dog: dog(fido)
3. Modus ponens with {fido/X} gives: animal(fido)
4. All animals will die: ∀(Y) (animal(Y) → die(Y))
5. Modus ponens with {fido/Y} gives: die(fido)

Using resolution refutation, the same axioms in predicate form and clause form:
1. ∀(X) (dog(X) → animal(X))    ⇒   ¬dog(X) ∨ animal(X)
2. dog(fido)                    ⇒   dog(fido)
3. ∀(Y) (animal(Y) → die(Y))    ⇒   ¬animal(Y) ∨ die(Y)
Negate the conclusion that Fido will die:
4. ¬die(fido)                   ⇒   ¬die(fido)

The entire database can be represented as a conjunction of disjuncts:
(¬dog(X) ∨ animal(X)) ∧ (¬animal(Y) ∨ die(Y)) ∧ dog(fido) ∧ ¬die(fido)
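For reference, one refutation consistent with the clause set above (the figure cited on the next slide draws the same kind of tree); the particular order of resolution steps shown here is just one possible choice.

```latex
\[
\begin{aligned}
&1.\ \lnot dog(X) \lor animal(X) && \text{axiom}\\
&2.\ \lnot animal(Y) \lor die(Y) && \text{axiom}\\
&3.\ dog(fido) && \text{axiom}\\
&4.\ \lnot die(fido) && \text{negated goal}\\
&5.\ \lnot animal(fido) && \text{resolve 4 with 2, } \{fido/Y\}\\
&6.\ \lnot dog(fido) && \text{resolve 5 with 1, } \{fido/X\}\\
&7.\ \square && \text{resolve 6 with 3: the empty clause, a contradiction}
\end{aligned}
\]
```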

Resolution proof for the "dead dog" problem (figure)

Substitution of Variables

• The sequence of substitutions used gives the values of variables that make the goal true
• e.g.
  – Find out whether "something will die" is true
  – Negated goal: ¬(∃Z) die(Z)
  – The substitution {fido/Z} shows that fido is an instance of an animal that will die


Conversion to the Clause Form

Prove:
  Some programmers hate failures
  No programmer hates any success
  ∴ No failure is a success

Let:
  p(X) = X is a programmer
  s(X) = X is a success
  f(X) = X is a failure
  h(X, Y) = X hates Y

Then the premises and negated conclusion are:
1) (∃X) [p(X) ∧ (∀Y) (f(Y) → h(X, Y))]
2) (∀X) [p(X) → (∀Y) (s(Y) → ¬h(X, Y))]
3) ¬(∀Y) (f(Y) → ¬s(Y))

Nine steps to convert statements to the clause form:

1. Eliminate conditionals (→) using a → b ≡ ¬a ∨ b
   1) ⇒ (∃X) [p(X) ∧ (∀Y) (¬f(Y) ∨ h(X, Y))]
   2) ⇒ (∀X) [¬p(X) ∨ (∀Y) (¬s(Y) ∨ ¬h(X, Y))]
   3) ⇒ ¬(∀Y) (¬f(Y) ∨ ¬s(Y))
2. When possible, eliminate negations or reduce their scope using
   ¬(¬a) ≡ a
   ¬(a ∨ b) ≡ ¬a ∧ ¬b        ¬(∃X) a(X) ≡ (∀X) ¬a(X)
   ¬(a ∧ b) ≡ ¬a ∨ ¬b        ¬(∀X) a(X) ≡ (∃X) ¬a(X)
   3) ⇒ (∃Y) (f(Y) ∧ s(Y))

3. Standardize variables: each quantifier has a unique variable name
   e.g. (∃X) ¬p(X) ∨ (∀X) p(X)  ⇒  (∃X) ¬p(X) ∨ (∀Y) p(Y)
4. Move all quantifiers to the front without changing their order
   1) ⇒ (∃X) (∀Y) [p(X) ∧ (¬f(Y) ∨ h(X, Y))]
   2) ⇒ (∀X) (∀Y) [¬p(X) ∨ ¬s(Y) ∨ ¬h(X, Y)]
5. Eliminate existential quantifiers using Skolem functions (here the Skolem constants a and b)
   1) ⇒ (∀Y) [p(a) ∧ (¬f(Y) ∨ h(a, Y))]
   2) ⇒ (∀X) (∀Y) [¬p(X) ∨ ¬s(Y) ∨ ¬h(X, Y)]
   3) ⇒ f(b) ∧ s(b)
6. Drop all universal quantifiers
   1) ⇒ p(a) ∧ (¬f(Y) ∨ h(a, Y))
   2) ⇒ ¬p(X) ∨ ¬s(Y) ∨ ¬h(X, Y)
   3) ⇒ f(b) ∧ s(b)
7. Convert the expression to conjunctive normal form using p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r); see the small sketch after step 9
   Our example is already in conjunctive normal form


8. Eliminate ∧ signs by writing the expression as a set of clauses
   1) ⇒ { p(a),  ¬f(Y) ∨ h(a, Y) }
   2) ⇒ { ¬p(X) ∨ ¬s(Y) ∨ ¬h(X, Y) }
   3) ⇒ { f(b),  s(b) }
9. Rename variables so that no two clauses share a variable name. Needless sharing of variable names may cause loss of generality in the solution. The final clause set is:
   1a) p(a)
   1b) ¬f(Y) ∨ h(a, Y)
   2a) ¬p(X) ∨ ¬s(Z) ∨ ¬h(X, Z)     (Y renamed to Z so that 1b and 2a do not share variables)
   3a) f(b)
   3b) s(b)
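Step 7 is the only purely mechanical propositional step in the example above; the sketch below distributes ∨ over ∧ on quantifier-free formulas, assuming steps 1-6 have already been applied. The tuple representation and the function name `distribute` are assumptions of this sketch, not part of the nine-step procedure as stated.

```python
# Sketch of step 7 only: distribute ∨ over ∧ on quantifier-free formulas built
# from ("and", l, r), ("or", l, r) and literals (strings). Steps 1-6 are assumed
# done already; this is not a full clause-form converter.

def distribute(f):
    if isinstance(f, str):
        return f
    op, left, right = f[0], distribute(f[1]), distribute(f[2])
    if op == "and":
        return ("and", left, right)
    # op == "or": push the disjunction inside any conjunction
    if isinstance(left, tuple) and left[0] == "and":
        return ("and", distribute(("or", left[1], right)),
                       distribute(("or", left[2], right)))
    if isinstance(right, tuple) and right[0] == "and":
        return ("and", distribute(("or", left, right[1])),
                       distribute(("or", left, right[2])))
    return ("or", left, right)

# p ∨ (q ∧ r)  ⇒  (p ∨ q) ∧ (p ∨ r), the rule quoted in step 7
print(distribute(("or", "p", ("and", "q", "r"))))
# ('and', ('or', 'p', 'q'), ('or', 'p', 'r'))
```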

Unification

• The process of finding substitutions for variables to make arguments match is called unification
• Without unification it is impossible to resolve clauses such as:
  ¬f(Y) ∨ h(a, Y)
  f(b)

Proof by Resolution Refutation

• For clauses (1a) – (3b), resolution with the substitutions {a/X, b/Y, b/Z} gives:
  p(a) ∧ (¬f(b) ∨ h(a, b)) ∧ (¬p(a) ∨ ¬s(b) ∨ ¬h(a, b)) ∧ f(b) ∧ s(b)  =  nil
• Since the root of the refutation tree is nil (the empty clause), the conclusion is valid
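A compact sketch of the unification process described above, for terms written as tuples (functor, arg1, ...) with uppercase strings as variables, following the slides' naming convention. It returns a substitution or None on failure and omits the occurs check for brevity; this is illustrative code, not Luger's.

```python
# Sketch of unification on first-order terms. A term is either a variable
# (a string starting with an uppercase letter), a constant (a lowercase string),
# or a compound term ("functor", arg1, arg2, ...). Returns a substitution dict
# such as {"Y": "b"}, or None if unification fails. No occurs check.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    subst = {} if subst is None else dict(subst)
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2) and t1[0] == t2[0]:
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None   # constant or functor clash

# f(Y) unifies with f(b) under {Y: b}, which is what lets ¬f(Y) ∨ h(a, Y) resolve with f(b)
print(unify(("f", "Y"), ("f", "b")))   # {'Y': 'b'}
```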

The Binary Resolution Proof Procedure

One resolution proof for an example from the propositional calculus:

Prove a from the following axioms:

  Proposition      Clause form
  b ∧ c → a        a ∨ ¬b ∨ ¬c
  b                b
  d ∧ e → c        c ∨ ¬d ∨ ¬e
  e ∨ f            e ∨ f
  d ∧ ¬f           d
                   ¬f

We first convert the propositions to clause form using
  p → q ≡ ¬p ∨ q   and   ¬(p ∧ q) ≡ ¬p ∨ ¬q
Then the negated goal, ¬a, is added to the clause set.
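The sketch below runs a resolution refutation over exactly this clause set, saturating level by level in the spirit of the breadth-first strategy discussed later; the clause encoding and function names are assumptions carried over from the earlier sketches.

```python
# Sketch: resolution refutation for the propositional example above.
# Clauses are frozensets of literals; "~" marks negation. Illustrative, not optimized.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refute(clauses):
    """Return True if the empty clause can be derived from the clause set."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:
                        return True            # empty clause: contradiction found
                    new.add(r)
        if new <= clauses:
            return False                       # nothing new: no refutation exists
        clauses |= new

# Axioms in clause form plus the negated goal ¬a (the example above):
clauses = [
    frozenset({"a", "~b", "~c"}),   # b ∧ c → a
    frozenset({"b"}),
    frozenset({"c", "~d", "~e"}),   # d ∧ e → c
    frozenset({"e", "f"}),
    frozenset({"d"}),
    frozenset({"~f"}),
    frozenset({"~a"}),              # negated goal
]
print(refute(clauses))              # True
```

Deriving the empty clause (refute returns True) shows that the negated goal ¬a is inconsistent with the axioms, so a follows.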

Happy Student Problem Example

• Anyone passing his history exams and winning the lottery is happy. But anyone who studies or is lucky can pass all his exams. John did not study but he is lucky. Anyone who is lucky wins the lottery. Is John happy?
• To solve the problem, we first change the sentences to predicate form


Fig 14.5 One refutation for the "happy student" problem

Exciting Life Problem Example

• All people who are not poor and are smart are happy. Those people who read are smart. John can read and is not poor. Happy people have exciting lives. Can anyone be found with an exciting life?
• To apply the resolution refutation method, we first change the sentences to predicate form and then to clause form

Fig 14.6 Resolution proof for the "exciting life" problem


Fig 14.7 Another resolution refutation for the example of Fig 14.6

Strategies and Simplification Techniques for Resolution

• There is a different resolution refutation proof tree for the exciting life problem (Fig 14.7)
• For N clauses, there are up to N² ways of combining them at just the first level. In a large problem the exponential growth of the combinatorics quickly gets out of control
• Therefore search heuristics or strategies are used to control and simplify the proof process

Fig 14.8 Complete state space for the "exciting life" problem generated by breadth-first search (to two levels)

The Breadth-First Strategy

• Each clause is compared for resolution with every clause in the clause space on the first round
• The clauses at level n are generated by resolving all clauses at level n − 1 against the original clauses and all clauses previously produced
• It is guaranteed to find the shortest solution
• It is guaranteed to find a refutation if one exists
• A good strategy for small problems


Fig 14.9 Using the unit preference strategy on the "exciting life" problem

The Unit Preference Strategy

• A clause consisting of a single literal is called a unit clause
• The unit preference strategy uses unit clauses for resolving whenever they are available
• Resolving with a unit clause guarantees that the resolvent is smaller than the largest parent clause, which brings the search closer to producing the clause of no literals (the empty clause)

Unification substitutions of Fig 14.6 applied to the original query

Answer Extraction from Resolution Refutation

• Retaining information on the unification substitutions made during the resolution refutation gives the information needed for the correct answer
• To record the answer, retain the original conclusion to be proved and, into that conclusion, introduce each unification made in the resolution process


Fig 14.11 Answer extraction process on the "finding fido" problem

Fido Dog Problem Example

Fido the dog goes wherever John, his master, goes. John is at the library. Where is Fido?

First represent the sentences as predicate calculus expressions and then reduce them to clause form.

Predicates:
  at(john, X) → at(fido, X)
  at(john, library)
Clauses:
  ¬at(john, X) ∨ at(fido, X)
  at(john, library)
Negated goal:
  ¬at(fido, Z)
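One refutation consistent with these clauses, with the substitutions recorded for answer extraction (the step order is one possible choice):

```latex
\[
\begin{aligned}
&1.\ \lnot at(john, X) \lor at(fido, X) && \text{axiom}\\
&2.\ at(john, library) && \text{axiom}\\
&3.\ \lnot at(fido, Z) && \text{negated goal}\\
&4.\ \lnot at(john, Z) && \text{resolve 3 with 1, } \{Z/X\}\\
&5.\ \square && \text{resolve 4 with 2, } \{library/Z\}
\end{aligned}
\]
```

Applying the recorded substitutions to the retained goal at(fido, Z) gives at(fido, library): Fido is at the library.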

Fig 14.12 Skolemization as part of the answer extraction process

Grandparent Problem Example

Everyone has a parent. The parent of a parent is a grandparent. Given the person John, prove that John has a grandparent.

The sentences and negated goal as predicate calculus expressions:
  (∀X)(∃Y) p(X, Y)
  (∀X)(∀Y)(∀Z) [p(X, Y) ∧ p(Y, Z) → gp(X, Z)]
  ¬gp(john, W)

The corresponding clause form (pa(X), the parent of X, is the Skolem function introduced for ∃Y):
  p(X, pa(X))
  ¬p(W, Y) ∨ ¬p(Y, Z) ∨ gp(W, Z)
  ¬gp(john, V)
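One possible answer-extraction refutation using the Skolem function pa(X); each use of the first clause should be read as a fresh, renamed copy, and the step ordering is mine:

```latex
\[
\begin{aligned}
&1.\ p(X, pa(X)) && \text{everyone has a parent (Skolemized)}\\
&2.\ \lnot p(W, Y) \lor \lnot p(Y, Z) \lor gp(W, Z) && \text{parent of a parent is a grandparent}\\
&3.\ \lnot gp(john, V) && \text{negated goal}\\
&4.\ \lnot p(john, Y) \lor \lnot p(Y, V) && \text{resolve 3 with 2, } \{john/W,\ V/Z\}\\
&5.\ \lnot p(pa(john), V) && \text{resolve 4 with 1, } \{john/X,\ pa(john)/Y\}\\
&6.\ \square && \text{resolve 5 with 1, } \{pa(john)/X,\ pa(pa(john))/V\}
\end{aligned}
\]
```

Applying the recorded substitutions to the retained goal gp(john, V) gives gp(john, pa(pa(john))): John's grandparent is the parent of John's parent.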

Prolog and Automated Reasoning

• One problem with the resolution proof procedure is that when predicate calculus descriptions are transformed to clause form, important heuristic problem-solving information is left out. For example:
  a ∨ ¬b ∨ c → ¬d            (provides heuristic information)
  (¬a ∧ b ∧ ¬c) ∨ ¬d         (no heuristic information)
• Prolog uses Horn clauses, whose procedural interpretation provides an explicit strategy that preserves the heuristic information

Horn Clause

• A Horn clause contains at most one positive literal:
  a ∨ ¬b1 ∨ ¬b2 ∨ … ∨ ¬bn
• To emphasize the positive literal a, the Horn clause is generally written as an implication:
  a ← b1 ∧ b2 ∧ … ∧ bn
• The Horn clause allows only a restricted representation of clauses, but provides a very efficient search strategy for refutation

Three Forms of the Horn Clause

1. The original clause has no positive literals:
   ← b1 ∧ b2 ∧ … ∧ bn
   called a headless clause or goals
2. It has no negative literals:
   a1 ←
   a2 ←
   a3 ←
   called the facts
3. It has one positive and one or more negative literals:
   a ← b1 ∧ b2 ∧ … ∧ bn
   called a rule

Reducing Clauses into Horn Form

• It takes three steps:
  – Select and move the positive literal to the very left:
    a ∨ ¬b1 ∨ ¬b2 ∨ … ∨ ¬bn
  – Change the clause to an implication using the rule:
    a ∨ ¬b1 ∨ ¬b2 ∨ … ∨ ¬bn ≡ a ← ¬(¬b1 ∨ ¬b2 ∨ … ∨ ¬bn)
  – Use de Morgan's law to change it to:
    a ← b1 ∧ b2 ∧ … ∧ bn
• It may not always be possible to transform a clause to Horn form (e.g. p ∨ q)

Search Strategy for Refutation

• Prolog implements a left-to-right, depth-first search of clauses for refutation
• Given a goal:
  ← a1 ∧ a2 ∧ … ∧ an
  and a program P, the Prolog interpreter searches for the first clause in P whose head unifies with a1:
  a1 ← b1 ∧ b2 ∧ … ∧ bm
• After reducing a1, the goal clause becomes:
  ← b1 ∧ b2 ∧ … ∧ bm ∧ a2 ∧ … ∧ an
• The Prolog interpreter continues to reduce the leftmost goal, b1, using the first clause in P that unifies with b1:
  b1 ← c1 ∧ c2 ∧ … ∧ cp
• The goal then becomes:
  ← c1 ∧ c2 ∧ … ∧ cp ∧ b2 ∧ … ∧ bm ∧ a2 ∧ … ∧ an
• The search continues from left to right and from top to bottom until the goal reduces to the null clause
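As a toy illustration of this left-to-right, depth-first goal reduction, the sketch below works on propositional Horn clauses only (atoms are plain strings), so unification is omitted; the program encoding and names are assumptions of this sketch, not Prolog's actual data structures.

```python
# Sketch of Prolog-style goal reduction for propositional Horn clauses.
# A program maps each head atom to a list of bodies (each body a list of atoms).
# Goals are tried left to right; clauses for a goal are tried top to bottom,
# backtracking on failure. Real Prolog also unifies terms; that is omitted here.

def solve(goals, program):
    if not goals:
        return True                       # null clause: all goals reduced
    first, rest = goals[0], goals[1:]
    for body in program.get(first, []):   # clauses are tried in program order
        if solve(body + rest, program):   # replace the goal by the clause body
            return True
    return False                          # no clause worked: backtrack / fail

# Hypothetical program (a ← b ∧ c,  b ← d,  c ←,  d ←):
program = {
    "a": [["b", "c"]],
    "b": [["d"]],
    "c": [[]],
    "d": [[]],
}
print(solve(["a"], program))   # True: the goal ← a reduces to the null clause
```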

Depth-First Search Strategy

(Figure: the goal-reduction tree. The goal at the root is reduced to subgoals a1, a2, a3, …, an; a1 is reduced to b1, b2, b3, …, bm; b1 is reduced to c1, c2, c3, …, cp.)

Homework Assignment

1. Put the following predicate calculus expression in clause form:
   ∀(X) (p(X) → { ∀(Y) [p(Y) → p(f(X, Y))] ∧ ¬ ∀(Y) [q(X, Y) → p(Y)] })

2. Prove the following using the resolution refutation method. Draw an inverted binary tree to show the resolution process.
   Fact:   ¬d(f) ∨ [b(f) ∧ c(f)]
   Rules:  ¬d(X) → ¬a(X)
           b(Y) → e(Y)
           g(W) → c(W)
   Prove:  ¬a(Z) ∨ e(Z)
