The Frame Problem --- A General Statement of the Frame Problem
--- A GENERAL STATEMENT OF THE FRAME PROBLEM ---
The frame problem is the problem of representing change in a complex world where the rules of the world (the dynamics of the environment, the "laws of nature") make it so that actions have many unintended as well as intended consequences.

--- EXAMPLE: THE YALE SHOOTING PROBLEM ---

--- BACKGROUND: AI problem solving and planning ---
A problem solver has an internal model. The model can represent states of the world; operations on the model correspond to actions, and actions take the world to new states.

--- BACKGROUND: AI problem solving and planning ---
Actions can be tried out "in thought experiments" before trying them out in the real world (search in a problem space).

--- BACKGROUND: AI problem solving and planning ---
Intelligent systems are characterized by two components:
Knowledge base (KB) - a rich internal representation
Heuristics - smart heuristics, not brute-force exhaustive search

--- BACKGROUND: AI problem solving and planning ---
Consequently, there are two ways of being smart. Janlert (TRD p.3): "It is tempting to speculate on a general trade-off between modeling and heuristics. Some problem solvers may rely on having a very rich and accurate model of the world, and then they don't have to be very clever, because all they have to do is consult the model for answers. But some problem solvers may rely on their cleverness - what they don't know they can always figure out - and so manage without much explicit knowledge of the world."

--- BACKGROUND: AI problem solving and planning ---
You can have a very big model with explicit (relevant) consequences, in which case the heuristics need not be very smart - but combinatorial explosion and the need to address novelty mean you could need too much explicit knowledge - expert systems do not generalize across domains?
Or you can have a very smart heuristic component and a compact world model - you need to be able to assess only the relevant consequences, without needing to assess the relevance of every consequence - but the need to address novelty, and the holistic nature of relevance, could mean you can't "tag" the relevance of a consequence in a context-insensitive manner.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
(I) The original frame problem
(II) The persistence problem
(III) The ramifications and qualifications problem
(IV) The metaphysical problem

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
(I) The original frame problem: doing away with frame axioms.
Representing the effects of actions in situation calculus, you represent the world at a time as a situation, a set of sentences (axioms) in predicate logic. An action's effects are then represented by updating the situation, by deriving the logical consequences.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
An example of a frame axiom:
(x) (s) (Block(x)[s] --> Block(x)[GRASP[s]])
In words: if x is a block in situation s, then x will still be a block if you grasp it, in the situation that results from the Grasp action having been applied to x in situation s.
You don't want to specify "IF F is true in s and E occurs in s, THEN F is true in s+1" for each F, s and E separately. How do you represent non-change as the rule (and change as the exception) compactly, without the need for frame axioms?

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
Logically, anything could happen as I turn the light switch (think of living in a Douglas Adams world). I can represent it happening. That it doesn't happen in this world has to be known. That means it has to be in the axioms of the KB. In original situation calculus, all effects AND NON-EFFECTS of actions need to be represented individually, as separate axioms concerning the world.
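The situation-calculus update just described can be sketched in code. This is a minimal toy, not anything from the text's KB: the fluents and the `result_grasp` rule are hypothetical illustrations. The point is that every non-effect of Grasp needs its own hand-written frame axiom, and any fluent the axioms fail to mention is simply lost in the successor situation:

```python
# A toy situation: a set of ground atoms (tuples). All fluents here
# are hypothetical illustrations of the slides' Block/Grasp example.
def result_grasp(situation):
    """Effect axiom plus hand-written frame axioms for Grasp."""
    new = set()
    for fact in situation:
        pred = fact[0]
        if pred == "Block":        # frame axiom: grasping keeps blockhood
            new.add(fact)
        elif pred == "Color":      # frame axiom: grasping keeps color
            new.add(fact)
        elif pred == "OnTable":    # effect axiom: the grasped object leaves the table
            new.add(("Held", fact[1]))
        # Any fluent with no axiom here is silently dropped: that is
        # the representational defect the frame axioms paper over.
    return new

s0 = {("Block", "a"), ("Color", "a", "red"), ("OnTable", "a"), ("Dusty", "a")}
s1 = result_grasp(s0)
print(sorted(s1))   # note: ("Dusty", "a") has vanished, no axiom mentioned it
```

The bug is deliberate: to keep "a" dusty across a Grasp, you would have to write yet another frame axiom, and so on for every (action, fluent) pair.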
--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
(x)(y)(c)(s) (True_in(s, Color(x,c)) --> True_in(result(s, go(x,y)), Color(x,c)))
where x is an object, y is a location, c is a color and s is a situation.
You will need a lot of frame axioms. Too many. Most axioms describing the (meta)physics of the world (how actions affect situations) will be frame axioms like this, stating non-changes - since most actions don't change most things.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
Turning on the light will cause the bulb to illuminate. Turning on the light does not cause measles. Turning on the light does not cure measles... and it should not even occur to you that it might (i.e. you should not spend cognitive resources assessing the possibility). To human intuition, such axioms seem unnecessary or wasteful to list; if you could instead state generally that, by and large, things don't change, you could save a lot of code. The facts about measles belong to the background of facts that need not be explicitly considered (by a human), since common sense assumes them to remain unaffected.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
The original problem, then, is how to represent default non-change economically: "[w]hether you can infer that something is still true given what you know has happened since it became true." - McDermott, TRD p.116
This would appear to be a "technical problem", the result of an unfortunate feature of representing change in situation calculus.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
Technical problems admit technical solutions. (If you define your frame problem broadly, as the "philosophers' frame problem" or "the epistemological frame problem", then this is a more restricted problem - but perhaps still a symptom of the larger one!)
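A back-of-the-envelope count makes the "too many frame axioms" point above concrete. The numbers are purely illustrative assumptions: in pure situation calculus, every (action, unaffected fluent) pair needs its own non-change axiom, so axioms stating non-change swamp axioms stating change:

```python
# Illustrative sizes for a modest domain (hypothetical numbers):
n_actions = 50
n_fluents = 1000
effects_per_action = 3            # most actions change very little

# Each action needs one frame axiom per fluent it does NOT affect.
frame_axioms = n_actions * (n_fluents - effects_per_action)
effect_axioms = n_actions * effects_per_action

print(frame_axioms)    # 49850 axioms stating non-change
print(effect_axioms)   # 150 axioms stating change
```

The ratio, roughly the number of fluents per action, is exactly the waste the slides describe: the axiom set is dominated by statements that nothing happened.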
--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
(II) The persistence problem (representing the common sense law of inertia)
A somewhat more general frame problem is representing the common sense law of inertia - most actions don't change most things - succinctly, not just in situation calculus but in logicist AI generally.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
The classical example: painting does not change location and moving does not affect color. The original solution was to define these frames of reference for each action (hence the name); motion and color are outside each other's frames. Grasping, pushing, pulling etc. only affect properties in the motion frame, not the color frame. The background forms a static frame of reference for each action against which its likely consequences stand out. If you could define the frames, then everything else - all predicates outside the frame - would not need to be considered as candidates for updating (and, incidentally, you could learn new ones without the need to update all actions).

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
The sleeping dogs strategy is to treat stasis as the default case, assumed to be violated only if there is sufficient cause. What you want is: IF F is true in s and E occurs in s AND E is irrelevant with respect to the truth of F, generally or in s, THEN F is true in s+1 and thereafter, UNLESS you can prove from the axioms that it isn't.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
"For unclear reasons, [philosophers] have seen a special obstacle to the sleeping dogs idea." - Drew McDermott, TRD p.116
"Letting sleeping dogs lie", you assume that things do not change unless you can show them to change (explicitly mentioned in the action, or derivable as a consequence from the KB). The problem then becomes defining this common sense law of inertia precisely.
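The sleeping dogs strategy can be sketched as a STRIPS-style update rule. The action entries below are hypothetical; the point is the shape of the rule: one generic inertia clause replaces all the frame axioms, so a fluent persists unless the action's delete-list explicitly removes it:

```python
# Each action lists only what it changes (add-list, delete-list).
# The action names and fluents are hypothetical illustrations.
EFFECTS = {
    "grasp":      ({("Held", "a")},            {("OnTable", "a")}),
    "paint_blue": ({("Color", "a", "blue")},   {("Color", "a", "red")}),
}

def result(situation, action):
    add, delete = EFFECTS[action]
    # The common sense law of inertia, stated once for all actions:
    # F true in s stays true in result(s, E) unless E explicitly deletes it.
    return (situation - delete) | add

s0 = {("Block", "a"), ("Color", "a", "red"), ("OnTable", "a"), ("Dusty", "a")}
s1 = result(s0, "grasp")
print(("Dusty", "a") in s1)   # True: it persisted with no axiom mentioning it
```

Contrast with the frame-axiom sketch: "a" stays dusty (and a block, and red) for free, because non-change is now the rule and change the listed exception. Making this default precise in a logic, rather than in a procedural update like this one, is where the difficulty resurfaces.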
--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
(III) The ramifications and qualifications problem (handling side effects and preconditions in causal reasoning)
"The frame problem arises in keeping temporary knowledge up to date, when there are side effects; more specifically, it concerns how a system can ignore most conceivable updating questions and confront only 'realistic' possibilities." - John Haugeland, TRD pp.82-83

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
When an action is performed, many beliefs need to be updated, because actions can have long-run consequences (ramifications). What intended and unintended effects a performed action has depends on situational context (qualifications).

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
An action will cause a change ceteris paribus (if background conditions hold), and will also change other things. This has to do with the holism of the world represented: can we frame the relevant background conditions a priori, or do we need to consult all our knowledge to predict the consequences of any action?

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
The qualifications and ramifications problem:
- cheap test solution: find a representation and heuristic that allow you to check many potential updates quickly
- sleeping dogs solution: assume things need not be updated unless there is an explicit reason to do so

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
The original problem was that for each action you needed to separately specify its effects and non-effects. Now, this would not be a problem if the effects actually outweighed, or were even comparable to, the non-effects - if the metaphysics of the world were such that when you took an action it had all sorts of repercussions all over. Then this would be the only sensible thing to do.
But in fact the world is such that when you take an action, most things aren't affected at all, and the problem is to be able to take advantage of this - to be able to say "let sleeping dogs lie" precisely rather than metaphorically.

--- THE ORIGINAL FRAME PROBLEM AND SOME GENERALIZATIONS ---
(IV) The metaphysical problem
Janlert, TRD pp.7-8: "[T]he problem of finding a representational form permitting a changing, complex world to be efficiently and adequately represented."
Dennett, TRD p.42: "I think [...] that it is a new, deep epistemological problem."
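As a closing illustration, the qualifications and ramifications distinction from section (III) can be sketched in code. The light-switch rules below are hypothetical: a qualification gates whether the action has its direct effect at all, and a ramification is an indirect consequence derived from a domain constraint rather than listed in the action itself:

```python
# Hypothetical light-switch world. State is a dict of fluents.
def flip_switch(state):
    state = dict(state)
    # Qualifications: the action only has its effect ceteris paribus,
    # i.e. if the background conditions hold.
    if state.get("bulb_ok") and state.get("power_on"):
        state["switch_on"] = not state.get("switch_on", False)
    return ramify(state)

def ramify(state):
    # Ramification via a domain constraint: the light is lit iff the
    # switch is on. This is derived, not a direct effect of the action.
    state["lit"] = bool(state.get("switch_on"))
    return state

working = {"bulb_ok": True, "power_on": True, "switch_on": False}
print(flip_switch(working)["lit"])   # True

broken = {"bulb_ok": False, "power_on": True, "switch_on": False}
print(flip_switch(broken)["lit"])    # False: a qualification failed
```

The holism worry from the slides is visible even here: nothing in this sketch tells you, in advance, which background fluents belong in the qualification test - that selection was made by hand.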