Nathan Huntley and Matthias C. M. Troffaes. Normal Form Backward Induction for Decision Trees with Coherent Lower Previsions. Annals of Operations Research, 195(1):111-134, 2012. http://dx.doi.org/10.1007/s10479-011-0968-2 (arXiv:1104.0191v2 [math.ST], 23 Mar 2012)

NORMAL FORM BACKWARD INDUCTION FOR DECISION TREES WITH COHERENT LOWER PREVISIONS

NATHAN HUNTLEY AND MATTHIAS C. M. TROFFAES

Abstract. We examine normal form solutions of decision trees under typical choice functions induced by lower previsions. For large trees, finding such solutions is hard as very many strategies must be considered. In an earlier paper, we extended backward induction to arbitrary choice functions, yielding far more efficient solutions, and we identified simple necessary and sufficient conditions for this to work. In this paper, we show that backward induction works for maximality and E-admissibility, but not for interval dominance and Γ-maximin. We also show that, in some situations, a computationally cheap approximation of a choice function can be used, even if the approximation violates the conditions for backward induction; for instance, interval dominance with backward induction will yield at least all maximal normal form solutions.

Key words and phrases. backward induction; decision tree; lower prevision; sequential decision making; choice function; maximality; E-admissibility; interval dominance; maximin; imprecise probability.

1. Introduction

In classical decision theory, one aims to maximize expected utility. Such an approach requires probabilities for all relevant events. However, when information and knowledge are limited, the decision maker may not be able to specify or elicit probabilities exactly. To handle this, various theories have been suggested, including lower previsions [35], which essentially amount to sets of probabilities.

In non-sequential problems, given a lower prevision, various generalizations of maximizing expected utility exist [34]. Sequential extensions of some of these alternatives have been suggested [29, 9, 13, 1, 30, 5, 14, 31], yet not systematically studied. In this paper, we systematically study, using lower previsions, which decision criteria admit efficient solutions to sequential decision problems by backward induction, even if probabilities are not exactly known. Our main contribution is that we prove for which criteria backward induction coincides with the usual normal form.

We study very general sequential decision problems: a subject can choose from a set of options, where each option has uncertain consequences, leading to either rewards or more options. Based on her beliefs and preferences, the subject seeks an optimal strategy. Such problems are represented by a decision tree [25, 18, 3].

When maximizing expected utility, one can solve a decision tree by the usual normal form method, or by backward induction. First, note that the subject can specify, in advance, her actions in all eventualities. In the normal form, she simply chooses a specification which maximizes her expected utility. However, in larger problems, the number of specifications is gargantuan, and the normal form is not feasible.

Fortunately, backward induction is far more efficient. We find the expected utility at the final decision nodes, and then replace these nodes with the maximum expected utility. The previously penultimate decision nodes are now ultimate, and the process repeats until the root is reached. Backward induction is guaranteed to coincide with the normal form [25] if probabilities are non-zero [8, p. 44].
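To make the contrast concrete in the precise case, the following minimal Python sketch (not from the paper; the tuple-based tree encoding and all function names are illustrative assumptions) computes the same maximum expected utility twice: once by backward induction, and once by enumerating every strategy as the normal form does.

    from itertools import product

    # Illustrative encoding: a tree is ('leaf', utility), ('decision', [subtrees]),
    # or ('chance', [(probability, subtree), ...]) with the probabilities summing to one.

    def backward_induction(tree):
        """Maximum expected utility, computed from right to left."""
        kind, data = tree
        if kind == 'leaf':
            return data
        if kind == 'decision':
            return max(backward_induction(t) for t in data)
        return sum(p * backward_induction(t) for p, t in data)   # chance node

    def strategies(tree):
        """Yield every strategy: a full specification of choices, i.e. a tree without decision nodes."""
        kind, data = tree
        if kind == 'leaf':
            yield tree
        elif kind == 'decision':
            for t in data:                                       # pick exactly one branch here
                yield from strategies(t)
        else:                                                    # one substrategy per event branch
            for combo in product(*(list(strategies(t)) for _, t in data)):
                yield ('chance', [(p, s) for (p, _), s in zip(data, combo)])

    def expected_utility(strategy):
        kind, data = strategy
        if kind == 'leaf':
            return data
        return sum(p * expected_utility(s) for p, s in data)

    def normal_form(tree):
        """Maximum expected utility over all strategies; there are exponentially many in general."""
        return max(expected_utility(s) for s in strategies(tree))

On any finite tree of this kind both functions return the same value, but normal_form inspects every strategy whereas backward_induction visits each node only once.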
The usual normal form method works easily with decision criteria for lower previsions: apply it to the set of all strategies. Generalizing backward induction is harder, as no single expectation summarizes all relevant information about substrategies, unlike with expected utility. We follow Kikuti et al. [14], and instead replace nodes with sets of optimal substrategies, moving from right to left in the tree, eliminating strategies as we go. De Cooman and Troffaes [5] presented a similar idea for dynamic programming. In this general setting, normal form and backward induction can differ, as noted by many [29, 20, 30, 14, 5, 13, 1]. However, for some decision criteria the methods always coincide. In [11], we found conditions for coincidence. In this paper, we expand the work begun in [12], and investigate what works for lower previsions, finding that maximality and E-admissibility work, but the others do not.

This coincidence is of interest for at least two reasons. First, as mentioned, the normal form is not feasible for larger trees, whereas backward induction can eliminate many strategies early on, hence being far more efficient. Secondly, one might argue that a solution where the two methods differ is philosophically flawed [11, 8, 29, 21].

The paper is organized as follows. Section 2 explains decision trees and introduces notation. Section 3 presents lower previsions and their decision criteria, and demonstrates normal form backward induction on a simple example. Section 4 formally defines the two methods and characterizes their equivalence, which is applied in Section 5 to lower previsions. Section 6 discusses a larger example. Section 7 concludes. Readers familiar with decision trees and lower previsions can start with Sections 3.3 and 6.

2. Decision Trees

2.1. Definition and Example. Informally, a decision tree [18, 3] is a graphical causal representation of decisions, events, and rewards. Decision trees consist of a rooted tree [7, p. 92, Sec. 3.2] of decision nodes, chance nodes, and reward leaves, growing from left to right. The left hand side corresponds to what happens first, and the right side to what happens last.

Consider the following example. Tomorrow, a subject is going for a walk in the lake district. It may rain (E1), or not (E2). The subject can either take a waterproof (d1), or not (d2). But the subject may also choose to buy today’s newspaper, at cost c, to learn about tomorrow’s weather forecast (dS), or not (dS̄), before leaving for the lake district. The forecast has two possible outcomes: predicting rain (S1), or not (S2). The corresponding decision tree is depicted in Figure 1.

Decision nodes are depicted by squares, and chance nodes by circles. From each node, a number of branches emerge, representing decisions at decision nodes and events at chance nodes. The events from a node form a partition of the possibility space: exactly one of the events will take place. Each path in a decision tree corresponds to a sequence of decisions and events. The reward from each such sequence appears at the right hand end of the branch.
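The example can also be written down in the illustrative encoding used in the earlier sketch, except that chance branches now carry event labels instead of probabilities, since the trees of this paper attach events, not numbers, to chance arcs. The helper names and the encoding itself are assumptions for illustration only; c is the newspaper cost, and the variable names anticipate the worked expression in the next subsection.

    # Illustrative encoding of the Figure 1 tree; event labels, not probabilities, on chance arcs.

    def leaf(u):
        return ('leaf', u)

    def decision(*subtrees):
        return ('decision', list(subtrees))

    def chance(*branches):                # branches are (event_label, subtree) pairs
        return ('chance', list(branches))

    c = 1                                 # newspaper cost

    T1 = chance(('E1', leaf(10 - c)), ('E2', leaf(15 - c)))   # bought the paper, takes waterproof
    T2 = chance(('E1', leaf(5 - c)),  ('E2', leaf(20 - c)))   # bought the paper, no waterproof
    U1 = chance(('E1', leaf(10)),     ('E2', leaf(15)))       # no paper, takes waterproof
    U2 = chance(('E1', leaf(5)),      ('E2', leaf(20)))       # no paper, no waterproof

    buy_paper = chance(('S1', decision(T1, T2)),              # observe the forecast first,
                       ('S2', decision(T1, T2)))              # then decide on the waterproof
    lake_district = decision(buy_paper, decision(U1, U2))     # root: buy the newspaper or not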
2.2. Notation. A particular decision tree can be seen as a combination of smaller decision trees: for example, one could draw the subtree corresponding to buying the newspaper, and also draw the subtree corresponding to making an immediate decision. The decision tree for the full problem is then formed by joining these two subtrees at a decision node. So, we can represent a decision tree by its subtrees and the type of its root node.

Let T1, ..., Tn be decision trees. If T combines the trees at a decision node, we write

T = T1 ⊔ T2 ⊔ ··· ⊔ Tn.

If T combines the trees at a chance node, with subtree Ti being connected by event Ei (E1, ..., En is a partition of the possibility space), we write

T = E1T1 ⊙ E2T2 ⊙ ··· ⊙ EnTn.

[Figure 1. A decision tree for walking in the lake district. The root decision is between buying the newspaper (dS) and not buying it (dS̄); chance nodes carry the forecast events S1, S2 and the weather events E1, E2; after each observation the subject decides between d1 (waterproof) and d2 (no waterproof); the terminal utilities are 10 − c, 15 − c, 5 − c, 20 − c on the newspaper branches and 10, 15, 5, 20 otherwise.]

For instance, for the tree of Fig. 1 with c = 1, we write

(S1(T1 ⊔ T2) ⊙ S2(T1 ⊔ T2)) ⊔ (U1 ⊔ U2)

where, denoting the reward nodes by their utility,

T1 = E1 9 ⊙ E2 14,   U1 = E1 10 ⊙ E2 15,
T2 = E1 4 ⊙ E2 19,   U2 = E1 5 ⊙ E2 20.

The above notation shall prove very useful when considering recursive definitions.

In this paper we often consider subtrees of larger trees. For subtrees, it is important to know the events that were observed in the past. Two subtrees with the same configuration of nodes and arcs may have different preceding events, and should be treated differently. Therefore we associate with every decision tree T an event ev(T) representing the intersection of all the events on chance arcs that have preceded T.

Definition 1. A subtree of a tree T obtained by removal of all non-descendants of a particular node N is called the subtree of T at N and is denoted by stN(T). These subtrees are called ‘continuation trees’ by Hammond [8].

Consider all possible ways that sets of decision trees 𝒯1, ..., 𝒯n can be combined. Our notation easily extends. For any partition E1, ..., En,

E1𝒯1 ⊙ ··· ⊙ En𝒯n = {E1T1 ⊙ ··· ⊙ EnTn : Ti ∈ 𝒯i}.

For any sets of consistent decision trees 𝒯1, ..., 𝒯n,

𝒯1 ⊔ ··· ⊔ 𝒯n = {T1 ⊔ ··· ⊔ Tn : Ti ∈ 𝒯i}.

For convenience we only work with decision trees for which there is no event arc that is impossible given preceding events.

Definition 2. A decision tree T is called consistent if for every node N of T, ev(stN(T)) ≠ ∅.

Clearly, if a decision tree T is consistent, then for any node N in T, stN(T) is also consistent. Considering only consistent trees is not really a restriction, since inconsistent trees would only be drawn due to an oversight and could easily be made consistent.
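The set-valued combinations above are easy to compute mechanically. A small sketch, again building on the illustrative encoding and helper names introduced earlier (all of which are assumptions, not the paper's notation):

    from itertools import product

    def combine_at_decision(*tree_sets):
        """All trees T1 ⊔ ··· ⊔ Tn obtained by picking one tree from each set 𝒯i."""
        return [decision(*combo) for combo in product(*tree_sets)]

    def combine_at_chance(events, tree_sets):
        """All trees E1T1 ⊙ ··· ⊙ EnTn for a fixed partition E1, ..., En."""
        return [chance(*zip(events, combo)) for combo in product(*tree_sets)]

    # For example, all ways of choosing between T1 and T2 after each forecast outcome:
    after_forecast = combine_at_chance(['S1', 'S2'], [[T1, T2], [T1, T2]])
    # yields the four trees S1 Ti ⊙ S2 Tj with i, j in {1, 2}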
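Definition 2 can likewise be checked directly on the toy encoding. In the following sketch the event labels are interpreted as subsets of a four-element possibility space (weather × forecast); this modelling of ev is an assumption made for illustration.

    # Possibility space and events for the lake district example (illustrative).
    OMEGA = frozenset((w, f) for w in ('rain', 'dry') for f in ('S1', 'S2'))
    EVENT = {
        'E1': frozenset(x for x in OMEGA if x[0] == 'rain'),
        'E2': frozenset(x for x in OMEGA if x[0] == 'dry'),
        'S1': frozenset(x for x in OMEGA if x[1] == 'S1'),
        'S2': frozenset(x for x in OMEGA if x[1] == 'S2'),
    }

    def consistent(tree, ev=OMEGA):
        """Definition 2: ev(stN(T)) must be non-empty for every node N of T.
        Here ev is the intersection of the events on the chance arcs preceding the node."""
        if not ev:
            return False                  # the events observed so far are contradictory
        kind, data = tree
        if kind == 'leaf':
            return True
        if kind == 'decision':
            return all(consistent(t, ev) for t in data)
        return all(consistent(t, ev & EVENT[e]) for e, t in data)   # chance node

    assert consistent(lake_district)      # the Figure 1 tree is consistent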
