Metareasoning as a Formal Computational Problem


Vincent Conitzer
Department of Computer Science, Duke University, Durham, NC 27708, USA

Abstract

Metareasoning research often lays out high-level principles, which are then applied in the context of larger systems. While this approach has proven quite successful, it sometimes obscures how metareasoning can be seen as a crisp computational problem in its own right. This alternative view allows us to apply tools from the theory of algorithms and computational complexity to metareasoning. In this paper, we consider some known results on how variants of the metareasoning problem can be precisely formalized as computational problems, and shown to be computationally hard to solve to optimality. We discuss a variety of techniques for addressing these hardness results.

Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

An agent acting in the world generally needs to spend some time and other resources on deliberation, to assess the quality of the various plans of action available to it. To find the absolutely optimal plan, an agent generally needs to perform a very large amount of deliberation: it has to consider all the relevant implications of all the facts that it knows about the world, and, if the agent is able to gather additional information, it also has to take all relevant information gathering actions (and consider the implications of the resulting information). This is not always feasible: for example, if the agent has a deadline for choosing an action, there may not be enough time for all of this deliberation. Still, the agent may be able to find a plan of action that is close to optimal. To do so, the agent needs to focus on the parts of the deliberation (deliberation actions) that have the greatest impact on the quality of its plan. Determining which deliberation actions to perform is the metareasoning problem, in which the agent needs to reason about the reasoning it will perform. Figure 1, taken from Cox and Raja (Cox and Raja 2007), illustrates the three different levels of doing (plans of action), reasoning (deliberation actions), and metareasoning.

Figure 1: The three levels of doing, reasoning, and metareasoning (Cox and Raja 2007).

While this sounds natural, doing it well is far from an easy problem. For one, the usefulness of one deliberation action may not be separable from another. For example, if there is a particularly risky plan that the agent is considering, the agent may need to rule out two ways in which this plan could potentially fail. If the agent only manages to rule out one of the two failure possibilities, and does not deliberate on the other, then the plan is still too risky and will not be chosen. Hence, the deliberation on the first failure possibility was a waste of time: the agent does not obtain any benefit from deliberation unless it considers both failure possibilities.

To make things more complicated, the agent generally has to consider the outcomes of earlier deliberation actions in choosing the next deliberation action. For instance, in the above example, if the agent considers the first failure possibility and realizes that the plan would in fact fail in this way, then there is no point in considering the other failure possibility, so the agent should spend its valuable time considering other options rather than pointlessly figuring out in exactly how many ways the risky plan would have failed. Hence, in general, the agent does not merely need to choose a subset of the deliberation actions; rather, it needs to create a complete contingency plan for deliberating.

From the above, it should be clear that the metareasoning problem is nontrivial, and may in fact be computationally hard. This is an issue of concern, since we want to avoid the ironic situation in which the agent spends so much time solving the metareasoning problem that there is no time left to take any actual deliberation actions! However, even if the problem does turn out to be computationally hard, this does not mean that we should abandon the metareasoning approach altogether: we could still find fast heuristics or approximation algorithms that find close-to-optimal solutions to the metareasoning problem, or algorithms that find the optimal solution fast under certain conditions.

In the remainder, we first discuss some known results (Conitzer and Sandholm 2003) that imply that certain variants of the metareasoning problem are in fact computationally hard. While these variants by no means capture all the interesting parts of all metareasoning problems, they are useful for illustrating some computational difficulties that metareasoning systems must face. As such, these results set the stage for the remainder of the paper, in which we discuss what their implications for real metareasoning systems are.

Variants of the metareasoning problem and their complexity

Before we can determine whether the metareasoning problem is computationally hard, we first need to define it as a computational problem. However, there are many different settings in which metareasoning is essential, and each of these settings leads to a different variant of the metareasoning problem. We could try to create a computational definition of the metareasoning problem that is so general that it captures every variant that we might reasonably encounter. It would not be very surprising if such a general problem turned out to be computationally hard; moreover, it is not clear that such a hardness result would tell us anything very interesting, because it could still be the case that most reasonable variants are in fact quite easy to solve.

Instead, we will consider definitions of some very restricted variants of the metareasoning problem that still turn out to be computationally hard. Such results are much more meaningful, because it seems likely that most real-world metareasoning systems need to solve a problem that is at least as hard as at least one of these problems. We discuss mostly the results of Conitzer and Sandholm (Conitzer and Sandholm 2003). Later in the paper, we discuss the implications of these results for the design of metareasoning systems.

Variant 1: Deliberation that leads to predictable improvements

As we mentioned above, one of the main difficulties in metareasoning is that the outcomes of the deliberation actions are uncertain, and what deliberation action should be taken next in general depends on the outcomes of the current and past deliberation actions. Hence, in general, a solution to the metareasoning problem consists of a full contingency plan (at least if we aim to solve the problem to optimality).

In this subsection, however, we consider a simplified variant of the metareasoning problem in which the outcomes of deliberation actions are completely predictable. Specifically, suppose that the agent has m tasks that it needs to complete. For each of the tasks, it has a default plan that has some cost; however, by deliberating on the plan more, the agent can reduce this cost. (For example, Conitzer and Sandholm consider a setting where the agent needs to solve m unrelated vehicle routing problem instances, and it can improve the quality of its solution for each routing problem instance by spending more computation on it—that is, it has an anytime algorithm for the vehicle routing problem.) The agent also has a deadline T by which it needs to finalize all of its plans. Finally, we assume that for each task i, there is a function f_i, where f_i(t_i) is the reduction in the cost of the plan for the ith task that results from spending t_i units of deliberation time on that task. Of course, in reality, this improvement is not so perfectly predictable, but these functions are often used as a modeling simplification. They are called (deterministic) performance profiles (Horvitz 1987; Boddy and Dean 1994; Zilberstein and Russell 1996).

The goal is to obtain the maximum total savings given the time limit. That is, we want to choose the times t_1, ..., t_m to spend on deliberating on the tasks, with the goal of maximizing \sum_{i=1}^{m} f_i(t_i), under the constraint that \sum_{i=1}^{m} t_i \leq T. Conitzer and Sandholm show that (the decision variant of) this problem is NP-complete, even if the f_i are piecewise linear. In contrast, if we require that the f_i are concave, then the problem can be solved in polynomial time (Boddy and Dean 1994). (Similar results based on concavity are common in metareasoning: see, for example, Horvitz (Horvitz 2001).) However, Conitzer and Sandholm argue that the f_i are generally not concave: for example, anytime algorithms generally go through distinct phases, and often the end of one phase does not produce as much improvement as the beginning of the next phase.

It is quite a negative result that even this simple deterministic variant of the metareasoning problem is hard. Still, the implications of this hardness result for metareasoning are limited, because there are variants of the metareasoning problem that do not include the above problem as a subproblem. In a sense, in the above problem, the deliberation actions reveal new plans of action (e.g., vehicle routes). However, there are many metareasoning settings in which the set of available plans is known from the beginning, and the only purpose of deliberation is to discover which plan is best. In the next subsection, we consider a variant of the metareasoning problem without such properties. If the re-
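Concretely, once deliberation time is discretized into unit steps, the allocation problem above has a knapsack-like structure and admits a standard pseudo-polynomial dynamic program. (This does not contradict the NP-completeness result, which is stated with respect to a compact, piecewise-linear encoding of the profiles.) The sketch below is illustrative rather than taken from the paper; it assumes each performance profile is given as a table profiles[i][t] = f_i(t) for integer t from 0 to T, and the function name is our own.

```python
def allocate_deliberation(profiles, T):
    """Choose integer times t_1..t_m with sum <= T maximizing total cost
    reduction, given profiles[i][t] = f_i(t) for t = 0..T.
    Pseudo-polynomial DP, O(m * T^2) time."""
    m = len(profiles)
    # best[b] = maximum total savings achievable with a budget of b steps
    best = [0.0] * (T + 1)
    choice = []  # choice[i][b] = time given to task i under budget b
    for i in range(m):
        new_best = [0.0] * (T + 1)
        row = [0] * (T + 1)
        for b in range(T + 1):
            for t in range(b + 1):
                v = best[b - t] + profiles[i][t]
                if v > new_best[b]:
                    new_best[b], row[b] = v, t
        best = new_best
        choice.append(row)
    # Recover the allocation by walking back through the choices.
    alloc, b = [0] * m, T
    for i in range(m - 1, -1, -1):
        alloc[i] = choice[i][b]
        b -= alloc[i]
    return best[T], alloc
```

For instance, with profiles [0, 1, 6, 6] and [0, 4, 5, 6] and T = 3, the first task's jump from 1 to 6 (a non-concave "phase transition" of the kind discussed above) makes the optimal split t = (2, 1) with total savings 10.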
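The concave case is easier because marginal gains are nonincreasing, so a greedy rule that always gives the next unit of time to the task with the largest remaining marginal gain is optimal. The following is only a discrete-time sketch of the idea behind the polynomial-time result cited above (Boddy and Dean 1994), again assuming tabulated profiles, here with diminishing increments:

```python
import heapq

def greedy_concave(profiles, T):
    """Allocate T unit time steps across tasks by repeatedly taking the
    largest remaining marginal gain. Optimal only when each profiles[i]
    has nonincreasing increments (i.e., f_i is concave)."""
    m = len(profiles)
    alloc = [0] * m
    # Min-heap over negated next-step marginal gains (= a max-heap).
    heap = [(-(profiles[i][1] - profiles[i][0]), i)
            for i in range(m) if len(profiles[i]) > 1]
    heapq.heapify(heap)
    total = 0.0
    for _ in range(T):
        if not heap:
            break
        neg_gain, i = heapq.heappop(heap)
        if -neg_gain <= 0:
            break  # no task can still improve
        total += -neg_gain
        alloc[i] += 1
        t = alloc[i]
        if t + 1 < len(profiles[i]):
            heapq.heappush(heap, (-(profiles[i][t + 1] - profiles[i][t]), i))
    return total, alloc
```

On non-concave profiles such as the phase-transition example above, this greedy rule can be arbitrarily suboptimal, which is exactly why hardness kicks in once concavity is dropped.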
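Returning to the introduction's risky-plan example, a toy expected-value computation shows why the order of deliberation actions matters once later actions can be skipped based on earlier outcomes: checking the failure possibility that is more likely to fail first saves deliberation time in expectation. All numbers below are invented purely for illustration.

```python
def expected_check_time(p_fail_first, c_first, c_second):
    """Expected deliberation time under the adaptive policy that skips
    the second check whenever the first check already reveals failure."""
    return c_first + (1 - p_fail_first) * c_second

p_a, p_b = 0.5, 0.1   # probability each check reveals the plan fails
c_a, c_b = 1.0, 1.0   # time cost of each check
t_a_first = expected_check_time(p_a, c_a, c_b)  # check A first: 1.5
t_b_first = expected_check_time(p_b, c_b, c_a)  # check B first: 1.9
```

Either adaptive ordering beats the fixed policy of always performing both checks (expected time 2.0), illustrating why an optimal solution is a contingency plan rather than a subset of deliberation actions.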
