
Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)

An Ambiguity Aversion Model for Decision Making under Ambiguity

Wenjun Ma,1 Xudong Luo,2* Yuncheng Jiang1†
1 School of Computer Science, South China Normal University, Guangzhou, China.
2 Institute of Logic and Cognition, Department of Philosophy, Sun Yat-sen University, Guangzhou, China.

Abstract

In real life, decisions are often made under ambiguity, where it is difficult to estimate accurately the probability of each single possible consequence of a choice. However, this problem has not been solved well in existing work for the following two reasons. (i) Some existing models cannot cover the Ellsberg paradox and the Machina paradox, so the choices that they predict could be inconsistent with empirical observations. (ii) Some of them rely on parameter tuning without offering explanations for the reasonability of setting such bounds of parameters, so the predictions of such models in new decision making problems are doubtful. To this end, this paper proposes a new decision making model based on D-S theory and the emotion of ambiguity aversion. Some insightful properties of our model and its validation on two famous paradoxes show that our model is indeed a better alternative for decision making under ambiguity.

Introduction

In real life, due to factors such as time pressure, lack of data, noise disturbance, and the random outcomes of some attributes, decisions often have to be made when no precise probability distribution over the single possible outcomes is available. To distinguish these cases from those in which the possible outcomes of a choice are already formulated in terms of a unique probability distribution, we call such cases decision making under ambiguity. Since decision making under ambiguity is inevitable in real-world applications, this topic is a central concern in decision science (Tversky and Kahneman 1992) as well as artificial intelligence (Dubois, Fargier, and Perny 2003; Liu 2006; Yager and Alajlan 2015).

In the literature, various theoretic models have been proposed with the aim of capturing how ambiguity can affect decision making, and some of them are based on Dempster-Shafer theory (Shafer 1976). However, the existing models still have drawbacks. (i) Some of them still violate a series of experimental observations on human choices in the real world, for example, the Ellsberg paradox (Ellsberg 1961) and the Machina paradox (Machina 2009). (ii) Some of them can solve the paradoxes, but only conditionally, by applying some additional parameter settings. In other words, it is unclear why a decision maker would select different bounds of parameter variation in different situations. Thus, the predictions of such models in new decision problems are doubtful. A more detailed discussion can be found in the related work section.

To tackle the above problems, this paper identifies a set of basic principles that should be followed by a preference ordering for decision making under ambiguity. Then we construct a decision model that sets a determinate preference ordering based on expected utility intervals and the emotion of ambiguity aversion, by using the evidence theory of Dempster and Shafer (D-S theory) (Shafer 1976) and the generalised Hartley measure (Dubois and Prade 1985). Accordingly, we reveal some insightful properties of our model. Finally, we validate our model by solving two famous paradoxes.

This paper advances the state of the art in the field of decision making under ambiguity in the following aspects: (i) we identify a set of basic principles for comparing interval-valued expected utilities; (ii) we give a normative decision model for decision making under ambiguity; (iii) we disclose a set of properties that are consistent with human intuitions; and (iv) we use our model to solve both the well-known Ellsberg paradox and the Machina paradox without any extra parameter setting, which most of the existing models cannot.

The remainder of this paper is organised as follows. First, we recap some basic concepts of D-S theory. Second, we give the formal definition of decision problems under ambiguity. Third, we discuss some principles for setting a proper preference ordering in decision making under ambiguity. Fourth, we present our decision making model, which employs ambiguity aversion to find an optimal choice based on a set of basic principles. Fifth, we reveal some properties of our model. Sixth, we validate our model by showing that it can solve well both the Ellsberg paradox and the Machina paradox. Seventh, we discuss the related work. Finally, we conclude the paper with further work.

*The corresponding author: [email protected]
†The corresponding author: [email protected]
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Preliminaries

This section recaps some basic concepts of D-S theory.

Definition 1 (Shafer 1976) Let $\Theta$ be a set of exhaustive and mutually exclusive elements, called a frame of discernment (or simply a frame). A function $m : 2^{\Theta} \to [0,1]$ is a mass function if $m(\emptyset) = 0$ and $\sum_{A \subseteq \Theta} m(A) = 1$.

For Definition 1, if $|A| = 1$ (i.e., $A$ is a singleton) for every $A \subseteq \Theta$ with $m(A) > 0$, then $m$ is a probability function. Thus, a probability function is actually a special case of a mass function.

Now, based on Definition 1, we can give the formal definition of the ambiguity degree of a given choice.

Definition 2 Let $m_c$ be a mass function over a frame $\Theta_c$ and $|A|$ be the cardinality of set $A$. Then the ambiguity degree of $m_c$, denoted as $\delta_c$, is given by:
$$\delta_c = \frac{\sum_{A \subseteq \Theta_c} m_c(A)\,\log_2 |A|}{\log_2 |\Theta_c|}. \qquad (1)$$

Definition 2 is a normalised version of the generalised Hartley measure of non-specificity (Dubois and Prade 1985). Such a normalised version guarantees that the ambiguity degree is in the range $[0, 1]$.

Also, by the concept of mass function in Definition 1, the point-valued expected utility function (Neumann and Morgenstern 1944) can be extended to an expected utility interval (Strat 1990):

Definition 3 For a choice $c$ specified by a mass function $m_c$ over $\Theta_c = \{u_1, \ldots, u_n\}$, where $u_i$ is a real number indicating a possible utility of choice $c$, its expected utility interval is $EUI(c) = [\underline{E}(c), \overline{E}(c)]$, where
$$\underline{E}(c) = \sum_{A \subseteq \Theta_c} m_c(A)\,\min\{u_i \mid u_i \in A\}, \qquad (2)$$
$$\overline{E}(c) = \sum_{A \subseteq \Theta_c} m_c(A)\,\max\{u_i \mid u_i \in A\}. \qquad (3)$$

In Definition 3, when $|A| = 1$ for every $A \subseteq \Theta_c$ with $m_c(A) > 0$, we have $\underline{E}(c) = \overline{E}(c)$. Thus, the interval-valued expected utility degenerates to the point-valued one.
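To make Definitions 2 and 3 concrete, the following Python sketch (our own illustration, not part of the paper's formal development; the function names and the dictionary encoding of a mass function are assumptions) computes the ambiguity degree of Eq. (1) and the expected utility interval of Eqs. (2)-(3) from a mass function given as a mapping from frozensets of possible utilities to masses.

```python
import math

def ambiguity_degree(mass):
    """Normalised generalised Hartley measure, Eq. (1).

    `mass` maps frozensets of possible utilities (focal elements) to masses
    summing to 1; the frame is taken as the union of all focal elements.
    """
    frame = frozenset().union(*mass.keys())
    if len(frame) <= 1:            # only one possible utility: no ambiguity
        return 0.0
    numerator = sum(m * math.log2(len(A)) for A, m in mass.items())
    return numerator / math.log2(len(frame))

def expected_utility_interval(mass):
    """Expected utility interval [lower, upper] of Eqs. (2)-(3)."""
    lower = sum(m * min(A) for A, m in mass.items())
    upper = sum(m * max(A) for A, m in mass.items())
    return lower, upper

# A hypothetical choice: utility 100 with mass 1/3, and only known to lie
# in {0, 100} with mass 2/3.
m = {frozenset({100}): 1/3, frozenset({0, 100}): 2/3}
print(ambiguity_degree(m))             # 0.666...
print(expected_utility_interval(m))    # (33.33..., 100.0)
```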
In other words, the interval-valued expected utility covers the point-valued expected utility as a special case.

Problem Definition

Now we give a formal definition of the problem of decision making under ambiguity as follows:

Definition 4 A decision problem under ambiguity is a 7-tuple $(S, X, C, U, \Theta, M, \succeq)$, where
(i) $S$ is the state space that contains all possible states of nature (exactly one state is true, but the decision maker is uncertain which state that is);
(ii) $X$ is the set of possible outcomes;
(iii) $C = \{c : S \to X\}$ is the choice set of all options;
(iv) $U = \{u(c(s)) \mid \forall c \in C, \forall s \in S, u(c(s)) \in \mathbb{R}\}$, where $u(c(s)) \in \mathbb{R}$ (a real number) is the utility of the outcome $x \in X$ that is caused by choice $c \in C$ in state $s \in S$;
(v) $\Theta = \{\Theta_c \mid c \in C\}$, where $\Theta_c$ is the utility set for all possible outcomes of choice $c$;
(vi) $M = \{m_c \mid c \in C\}$, where $m_c : 2^{\Theta_c} \to [0,1]$ is a mass function representing the decision maker's uncertainty about the utility that choice $c$ could cause; and
(vii) $\succeq\,\subseteq C \times C$ is a binary relation representing the decision maker's preference ordering over choices (as usual, $\succ$ and $\sim$ respectively denote the asymmetric and symmetric parts of $\succeq$).

Here we must notice that the mass distribution, which represents the decision maker's uncertainty about the outcomes that choice $c$ could cause, is not based on the state space as in expected utility theory, but on the possible utilities of choice $c$. The reason for this change is that the decision maker cares more about the ambiguity of the possible utilities than about the ambiguity of the states of the world, and in some cases the ambiguity of the states of the world does not imply ambiguity of the possible utilities. For instance, suppose a decision maker obtains $10 when the result of rolling a die is odd, but obtains $5 when the result is even. Now, the die is unfair: the total chance of points 1, 3, or 5 is 1/2, and the chance of the other points is 1/2. Clearly, in this example the states of the world are ambiguous, and multiple probability values can be assigned to points 1, 3, or 5. However, the possible utilities are unambiguous, since one unique probability distribution can be assigned: $u(\$5)$ with probability 1/2 and $u(\$10)$ with probability 1/2.

Hence, although we define the mass function over the set of possible utilities for convenience, essentially such a mass function is still obtained from the uncertainty about the states of the world. Thus, we can also transform such a mass function onto the state space.

Basic Principles

Now, based on Definition 4, we consider how to set the preference ordering over the choice set in a decision problem under ambiguity. More specifically, we present some basic principles that the decision maker should obey when setting a preference ordering in order to compare two choices in decision making under ambiguity according to their interval-valued expected utilities. Formally, we have:

Definition 5 Let $C$ be a finite choice set, the interval-valued expected utility of choice $c \in C$ be $EUI(c) = [\underline{E}(c), \overline{E}(c)]$, the ambiguity degree of choice $c \in C$ be $\delta(c)$, and $S$ be a state space. Then a binary relation over $C$, denoted as $\succeq$, is a preference ordering over $C$ if it satisfies, for all $c_1, c_2, c_3 \in C$:

1. Weak order. (i) Either $c_1 \succeq c_2$ or $c_2 \succeq c_1$; and (ii) if $c_1 \succeq c_2$ and $c_2 \succeq c_3$, then $c_1 \succeq c_3$.

2. Archimedean axiom. If $c_1 \succ c_2$ and $c_2 \succ c_3$, then there exist $\lambda, \mu \in (0,1)$ such that
$$\lambda c_1 + (1-\lambda) c_3 \succ c_2 \succ \mu c_1 + (1-\mu) c_3.$$

3. Monotonicity. If $c_1(s) \succeq c_2(s)$ for all $s \in S$, then $c_1 \succeq c_2$.

4. Risk independence. If $\underline{E}(c_3) = \overline{E}(c_3)$, then $\forall \lambda \in (0,1]$,
$$c_1 \succeq c_2 \Leftrightarrow \lambda c_1 + (1-\lambda) c_3 \succeq \lambda c_2 + (1-\lambda) c_3.$$

5. Representation axiom. There exists a function $Q : EUI \to \mathbb{R}$ such that
$$c_1 \succeq c_2 \Leftrightarrow Q(EUI(c_1)) \ge Q(EUI(c_2)).$$

6. Range. If $\underline{E}(c_1) > \overline{E}(c_2)$, then $c_1 \succ c_2$.

7. Ambiguity aversion. If $EUI(c_1) = EUI(c_2)$ and $\delta(c_1) < \delta(c_2)$, then $c_1 \succ c_2$.

Principles 1-3 are the standard axioms in subjective expected utility theory (Savage 1954). The first means that a preference ordering can compare any pair of choices and satisfies the law of transitivity. The second works like a continuity axiom on preferences. It asserts that no choice is either infinitely better or infinitely worse than any other choice; in other words, this principle means there are no lexicographic preferences for certainty.¹ The third is a monotonicity requirement, asserting that if the decision maker considers a choice not worse than another choice in each state of the world, then the former choice is weakly preferred to the latter.

The rest of the principles are our own. The fourth means that, when comparing two choices, it is unnecessary to consider states of nature in which the probability value is determined and these choices yield the same outcome. Moreover, since in decision making under risk the probability value of each state is determined, under this condition the fourth principle in fact means that, when comparing two decisions, it is unnecessary to consider states of nature in which these choices yield the same outcome. That is, this principle requires the preference ordering to obey the sure thing principle (Savage 1954) in the case of decision making under risk. The fifth means that the preference ordering can be represented by the quantitative relation of real-number evaluations based on expected utility intervals.

By principles 1-5, we can find that $Q$ is a monotonic, linear function that satisfies
$$Q(EUI(c_1) + EU(c_2)) = Q(EUI(c_1)) + EU(c_2),$$
where $EU(c_2) = \underline{E}(c_2) = \overline{E}(c_2)$ is the unique expected utility of choice $c_2$. Clearly, such a function also satisfies:
(i) $Q(EUI(c_1) + a) = Q(EUI(c_1)) + a$; and
(ii) $Q(k \times EUI(c_1)) = k \times Q(EUI(c_1))$,
where $a \in \mathbb{R}$ and $k \ge 0$.

The sixth principle means that if the worst situation of a choice is better than the best situation of another, we should choose the first one. In fact, together with the fifth principle, it restricts
$$\underline{E}(c) \le Q(EUI(c)) \le \overline{E}(c).$$

The seventh principle reveals the relation between the ambiguity degree and the preference ordering. That is, a decision maker will select a choice with less ambiguity, ceteris paribus.

¹Lexicographic preferences describe comparative preferences where an economic agent prefers any amount of one good (X) to any amount of another (Y).

The Ambiguity-Aversion Model

Now we turn to constructing an apparatus that considers the impact of the ambiguity aversion effect in decision making under ambiguity. Accordingly, we will be able to set a proper preference ordering over a set of choices with different expected utility intervals.

Definition 6 Let $EUI(c) = [\underline{E}(c), \overline{E}(c)]$ be the expected utility interval of choice $c$, and $\delta(c)$ be the ambiguity degree of choice $c$. Then the ambiguity-aversion expected utility of choice $c$, denoted as $\varepsilon(c)$, is defined as
$$\varepsilon(c) = \overline{E}(c) - \delta(c)(\overline{E}(c) - \underline{E}(c)). \qquad (4)$$
For any two choices $c_1$ and $c_2$, the preference ordering $\succeq$ is defined as follows:
$$c_1 \succeq c_2 \Leftrightarrow \varepsilon(c_1) \ge \varepsilon(c_2). \qquad (5)$$
When binary relation $\succeq$ expresses a preference ordering, we can use it to define two other binary relations:
(i) $c_1 \sim c_2$ if $c_1 \succeq c_2$ and $c_2 \succeq c_1$; and
(ii) $c_1 \succ c_2$ if $c_1 \succeq c_2$ and $c_1 \not\sim c_2$.

From formula (4), we can see that $\underline{E}(c) \le \varepsilon(c) \le \overline{E}(c)$, and the higher the ambiguity degree $\delta_c$ is, the more $\overline{E}(c)$ (the upper bound of the expected utility interval of a choice) is discounted. In particular, when $\delta_c = 1$ (meaning absolute ambiguity, i.e., $m(\Theta) = 1$), $\overline{E}(c)$ is discounted all the way down to $\underline{E}(c)$ (the lower bound of the expected utility interval of the choice). And when $\delta_c = 0$ (meaning no ambiguity at all, i.e., all focal elements are singletons and thus the mass function degenerates to a probability function), $\varepsilon(c) = \underline{E}(c) = \overline{E}(c)$, i.e., no discount is possible.

Hence, by Definition 6, we can easily obtain that
$$\varepsilon(c) = \delta_c\,\underline{E}(c) + (1 - \delta_c)\,\overline{E}(c).$$
In other words, if we interpret the parameter $\delta_c$ as an ambiguity attitude, then our ambiguity-aversion expected utility is exactly a special case of the α-maxmin model (Ghirardato, Maccheroni, and Marinacci 2004), which extends the well-known Hurwicz criterion (Jaffray and Jeleva 2007) to ambiguity. In other words, we show a method to obtain the exact value of a decision maker's ambiguity attitude based on the ambiguity degree of the potentially obtained utilities. Nonetheless, we must notice that the α-maxmin model does not really give a method to calculate the ambiguity degree, so it is arbitrary to assume different choices have different ambiguity degrees in that model. As a result, the α-maxmin model somehow requires a decision maker to have the same ambiguity attitude towards all the choices in a decision making problem, while our model, since it gives a method to calculate the ambiguity degree, allows the decision maker to have different ambiguity attitudes towards different choices.

In the following, we check whether or not the preference ordering $\succeq$ in Definition 6 enables us to compare any two choices properly based on the basic principles presented in Definition 5.

Theorem 1 The preference ordering that is set in Definition 6 satisfies the principles listed in Definition 5.
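As a concrete illustration of Definition 6 (again a sketch of ours, reusing the hypothetical `ambiguity_degree` and `expected_utility_interval` helpers from the earlier block), the ambiguity-aversion expected utility of Eq. (4) and the ordering of Eq. (5) can be computed directly from a choice's mass function:

```python
def ambiguity_aversion_eu(mass):
    """Ambiguity-aversion expected utility, Eq. (4):
    eps(c) = E_upper - delta_c * (E_upper - E_lower)."""
    lower, upper = expected_utility_interval(mass)
    return upper - ambiguity_degree(mass) * (upper - lower)

def prefers(mass_c1, mass_c2):
    """Weak preference c1 >= c2 of Eq. (5)."""
    return ambiguity_aversion_eu(mass_c1) >= ambiguity_aversion_eu(mass_c2)

# A risky choice (utility 100 with probability 1/3, 0 otherwise) versus a
# fully ambiguous choice whose utility is only known to lie in {0, 100}.
risky     = {frozenset({100}): 1/3, frozenset({0}): 2/3}
ambiguous = {frozenset({0, 100}): 1.0}
print(ambiguity_aversion_eu(risky))      # 33.33...  (delta = 0: eps = EU)
print(ambiguity_aversion_eu(ambiguous))  # 0.0       (delta = 1: eps = E_lower)
print(prefers(risky, ambiguous))         # True
```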

Proof: We check the theorem according to the principles listed in Definition 5 one by one.

(i) Since $\varepsilon(c_1) \ge \varepsilon(c_2)$ or $\varepsilon(c_1) \le \varepsilon(c_2)$ holds for any $c_1, c_2 \in C$, by Definition 6 we have $c_1 \succeq c_2$ or $c_2 \succeq c_1$. Moreover, suppose $c_1 \succeq c_2$ and $c_2 \succeq c_3$. Then by Definition 6, we have $\varepsilon(c_1) \ge \varepsilon(c_2)$ and $\varepsilon(c_2) \ge \varepsilon(c_3)$. As a result, $\varepsilon(c_1) \ge \varepsilon(c_3)$. Thus, by Definition 6, we have $c_1 \succeq c_3$. So principle 1 in Definition 5 holds for the preference ordering.

(ii) Suppose $c_1 \succ c_2$ and $c_2 \succ c_3$. Then by Definition 6, we have $\varepsilon(c_1) > \varepsilon(c_2)$ and $\varepsilon(c_2) > \varepsilon(c_3)$. Since $\varepsilon(c_i) \in \mathbb{R}$ for $i = 1, 2, 3$, by the continuity of the real numbers we can always find some $\lambda, \mu \in (0,1)$ such that
$$\lambda c_1 + (1-\lambda) c_3 \succ c_2, \qquad c_2 \succ \mu c_1 + (1-\mu) c_3.$$
Thus, principle 2 in Definition 5 holds for the preference ordering.

(iii) Suppose $c_1(s) \succeq c_2(s)$ for all $s \in S$. Then by Definitions 4 and 6, we have $u(c_1(s)) \ge u(c_2(s))$ for all $s \in S$. Hence, by Definitions 2, 3 and 6, and the fact that $\underline{E}(c) \le \varepsilon(c) \le \overline{E}(c)$, we can obtain $c_1 \succeq c_2$. Thus, principle 3 in Definition 5 holds for the preference ordering.

(iv) By $EU(c_3) = \underline{E}(c_3) = \overline{E}(c_3)$ and $\lambda \in (0,1]$, we have $(1-\lambda)EU(c_3) \in \mathbb{R}$. Thus, by Definition 6 and $\lambda \in (0,1]$, we have
$$c_1 \succeq c_2 \Leftrightarrow \varepsilon(c_1) \ge \varepsilon(c_2) \Leftrightarrow \lambda\varepsilon(c_1) + (1-\lambda)EU(c_3) \ge \lambda\varepsilon(c_2) + (1-\lambda)EU(c_3) \Leftrightarrow \lambda c_1 + (1-\lambda) c_3 \succeq \lambda c_2 + (1-\lambda) c_3.$$
Thus, principle 4 in Definition 5 holds for the preference ordering.

(v) Principle 5 follows directly from Definition 6, taking $Q$ to be $\varepsilon$.

(vi) By the fact that $\underline{E}(c) \le \varepsilon(c) \le \overline{E}(c)$, principle 6 always holds for the preference ordering.

(vii) By Definitions 2 and 6, it is straightforward that principle 7 holds for the preference ordering. □

Properties

Now we reveal some insightful properties of our model.

In the theory of rational choice (Hindmoor and Taylor 2015; Suzumura 2016), there are some axioms that we can impose on choice rules. One of them is the well-known Houthakker Axiom of Revealed Preference (Houthakker 1950):

"Suppose choices x and y are both in choice sets A and B. If x ∈ c(A) and y ∈ c(B), then x ∈ c(B) (and, by symmetry, y ∈ c(A) as well), where c(A) and c(B) are the optimal choice sets for sets A and B."

The above axiom means that if x is chosen when y is available, then x is also chosen whenever y is chosen and x is available. For example, suppose the quality of food in all branches of McDonald's in China is the same, and so is that of KFC, and both McDonald's and KFC have restaurants in Beijing and Shanghai. If a girl likes to have meals at McDonald's in Beijing and at KFC in Shanghai, the axiom states that she should also like to have meals at KFC in Beijing and at McDonald's in Shanghai.

Actually, many researchers in the field of decision making (e.g., Suzumura (2016)) advocate that the Houthakker Axiom of Revealed Preference provides a necessary and sufficient condition for checking whether a set of choice functions can be generated by a set of preferences that are well-behaved (meaning that these preferences satisfy the reflexivity, transitivity, completeness, monotonicity, convexity and continuity axioms). Fortunately, we have:

Theorem 2 The preference ordering that is set in Definition 6 satisfies the Houthakker axiom.

Proof: Let choices x and y be in both choice sets A and B, and suppose that x ∈ c(A) and y ∈ c(B), where c(A) and c(B) are the optimal choice sets of A and B, respectively. Then, by Definition 6, we have $\varepsilon(x) \ge \varepsilon(y)$ and $\varepsilon(y) \ge \varepsilon(z)$ for every $z \in B$ with $z \ne y$. Hence, we have $\varepsilon(x) \ge \varepsilon(z)$ for every $z \in B$. Thus, $x \in c(B)$. □

Now, we reveal the relation between our model and expected utility theory with the following theorem:

Theorem 3 For a decision problem under ambiguity $(S, X, C, U, \Theta, M, \succeq)$, if for two choices $c_1, c_2 \in C$ we have $|A| = 1$ for any $m_{c_i}(A) > 0$ ($A \subseteq \Theta_{c_i}$) and $i = 1, 2$, then the ambiguity-aversion expected utility of choice $c_i$ is an expected utility $EU(c_i)$ and the preference ordering satisfies:
$$c_1 \succeq c_2 \Leftrightarrow EU(c_1) \ge EU(c_2). \qquad (6)$$

Proof: By $|A| = 1$ for any $m_{c_i}(A) > 0$ ($A \subseteq \Theta_{c_i}$) and $i = 1, 2$, we have that $m_{c_i}$ is a probability function. Suppose $A = \{u_j\}$; then by Definition 3, we have
$$EU(c_i) = \underline{E}(c_i) = \overline{E}(c_i) = \sum_{\{u_j\} \subseteq \Theta_{c_i}} m_{c_i}(\{u_j\})\,u_j.$$
Thus, by Definition 6, we have $c_1 \succeq c_2 \Leftrightarrow EU(c_1) \ge EU(c_2)$. □

Theorem 3 shows that the preference ordering defined in expected utility theory is a special case of our model. In other words, for a choice c under risk (i.e., $|A| = 1$ for all $m_c(A) > 0$, where $A \subseteq \Theta_c$), the ambiguity-aversion expected utility $\varepsilon(c)$ is the same as the expected utility $EU(c)$. That is, $\varepsilon(c) = EU(c)$.

Also, some properties related to the comparison of two choices based on the relation of their expected utility intervals are as follows:

Theorem 4 Let $C$ be a finite choice set, the interval-valued expected utility of choice $c_i \in C$ be $EUI(c_i) = [\underline{E}(c_i), \overline{E}(c_i)]$, and its ambiguity degree be $\delta(c_i)$. The binary relation $\succeq$ over $C$ defined by Definition 6 satisfies:
(i) if $\underline{E}(c_1) > \underline{E}(c_2)$, $\overline{E}(c_1) > \overline{E}(c_2)$, and $\delta(c_1) \le \delta(c_2)$, then $c_1 \succ c_2$; and
(ii) if $\underline{E}(c_1) \ge \underline{E}(c_2)$ and $\delta(c_1) = \delta(c_2) = 1$, then $c_1 \succeq c_2$.

Proof: (i) When $\underline{E}(c_1) > \underline{E}(c_2)$, $\overline{E}(c_1) > \overline{E}(c_2)$, and $0 \le \delta(c_1) \le \delta(c_2) \le 1$, by Definition 6 we have
$$\varepsilon(c_1) - \varepsilon(c_2) \ge (1-\delta(c_2))(\overline{E}(c_1) - \overline{E}(c_2)) + \delta(c_2)(\underline{E}(c_1) - \underline{E}(c_2)) > 0.$$
Thus, we have $c_1 \succ c_2$.

(ii) By $\delta(c_1) = \delta(c_2) = 1$ and Definition 6, we have
$$\varepsilon(c_i) = \underline{E}(c_i) + (1-\delta_{c_i})(\overline{E}(c_i) - \underline{E}(c_i)) = \underline{E}(c_i) \quad \text{for } i = 1, 2.$$
Thus, by $\underline{E}(c_1) \ge \underline{E}(c_2)$, we have $c_1 \succeq c_2$. □

Item (i) of Theorem 4 means that if the ambiguity degree of a choice is not greater than that of another choice, and both the worst and the best situations of this choice are better than those of the other choice, then we should make this choice. Item (ii) of Theorem 4 means that in the case of absolute ambiguity, the decision maker should take the maximin attitude (i.e., compare the worst cases of the choices and choose the best one).

Paradoxes Analyses

Now we validate our ambiguity-aversion model by solving two paradoxes (Ellsberg 1961; Machina 2009).

The Ellsberg paradox is a well-known, long-standing example about ambiguity (Ellsberg 1961). Suppose an urn contains 90 balls, among which 30 are red and the rest are either blue or green. Table 1 shows two pairs of decision problems, each involving a decision between two choices: c1 and c2, or c3 and c4. A ball is randomly selected from the urn, and the return of each choice for the selected ball is $100 or $0. Ellsberg (1961) found that a very common pattern of responses to these problems is $c_1 \succ c_2$ and $c_4 \succ c_3$, which actually violates expected utility theory.

Table 1: Three Color Ellsberg Paradox

          30 balls    60 balls
          R: red      B: blue     G: green
    c1    $100        $0          $0
    c2    $0          $100        $0
    c3    $100        $0          $100
    c4    $0          $100        $100

Now we use our model to analyse this paradox. Firstly, since the possible utilities of all the choices could be 100 or 0 (here we regard the monetary prizes as the utility values to simplify the problem), we have
$$\Theta_{c_1} = \Theta_{c_2} = \Theta_{c_3} = \Theta_{c_4} = \{0, 100\}.$$
Further, since the number of red balls is 30 out of 90, the probability that the selected ball is red is $\tfrac{30}{90} = \tfrac{1}{3}$; and although the number of blue or green balls is unknown, it is known that their total number is 60, so the probability that the selected ball is blue or green is $\tfrac{60}{90} = \tfrac{2}{3}$. Then, by Definition 4, we have
$$m_{c_1}(\{100\}) = \tfrac{1}{3}, \quad m_{c_1}(\{0\}) = \tfrac{2}{3}; \qquad m_{c_2}(\{0\}) = \tfrac{1}{3}, \quad m_{c_2}(\{0, 100\}) = \tfrac{2}{3};$$
$$m_{c_3}(\{100\}) = \tfrac{1}{3}, \quad m_{c_3}(\{0, 100\}) = \tfrac{2}{3}; \qquad m_{c_4}(\{0\}) = \tfrac{1}{3}, \quad m_{c_4}(\{100\}) = \tfrac{2}{3}.$$
Hence, by Definition 2, the ambiguity degree of each choice is
$$\delta_{c_1} = 0, \quad \delta_{c_2} = \tfrac{2}{3}, \quad \delta_{c_3} = \tfrac{2}{3}, \quad \delta_{c_4} = 0.$$
Thus, by Definition 3, we have
$$EUI(c_1) = \tfrac{100}{3}, \quad EUI(c_2) = \left[0, \tfrac{200}{3}\right], \quad EUI(c_3) = \left[\tfrac{100}{3}, 100\right], \quad EUI(c_4) = \tfrac{200}{3}.$$
Finally, by Definition 6, we have
$$\varepsilon(c_1) = \tfrac{100}{3}, \quad \varepsilon(c_2) = \tfrac{200}{3} - \tfrac{2}{3}\times\left(\tfrac{200}{3} - 0\right) = \tfrac{200}{9}, \quad \varepsilon(c_3) = 100 - \tfrac{2}{3}\times\left(100 - \tfrac{100}{3}\right) = \tfrac{500}{9}, \quad \varepsilon(c_4) = \tfrac{200}{3}.$$
Thus, we have $c_1 \succ c_2$ and $c_4 \succ c_3$. The Ellsberg paradox shows that decision makers exhibit a systematic preference for choices with a deterministic probability distribution over the potentially obtained utilities over choices with an undetermined probability distribution over the potentially obtained utilities, a phenomenon known as ambiguity aversion, and our method can solve this type of paradox. Thus, our ambiguity-aversion expected utility indeed reflects the decision maker's ambiguity aversion. Actually, from the structure of the Ellsberg paradox, we can see that it is constructed from the uncertainty of the probability values assigned to some possible utilities and the violation of the sure thing principle in subjective expected utility theory. That is, by changing the utilities in the states of the world where two choices yield the same outcome, the preference of the decision maker might be changed (e.g., the utility of a green ball changes from 0 in c1 and c2 to 100 in c3 and c4). However, this does not mean such a phenomenon will happen for any state. In fact, by the fourth principle about risk independence and our model, we can reveal the essence of the Ellsberg paradox with the following claim:

Claim 1 A paradox of Ellsberg type will not exist if we only change the utility of all the choices in a state satisfying two conditions:
(i) the probability value assigned to the state is determined; and
(ii) all the choices yield the same outcome in that state.
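The Ellsberg numbers above can be reproduced with the hypothetical helper functions sketched earlier (this block is our illustration only; the variable names are assumptions):

```python
# Mass functions over possible utilities for the four Ellsberg choices.
ellsberg = {
    "c1": {frozenset({100}): 1/3, frozenset({0}): 2/3},
    "c2": {frozenset({0}): 1/3, frozenset({0, 100}): 2/3},
    "c3": {frozenset({100}): 1/3, frozenset({0, 100}): 2/3},
    "c4": {frozenset({0}): 1/3, frozenset({100}): 2/3},
}
for name, m in ellsberg.items():
    print(name, round(ambiguity_degree(m), 3), round(ambiguity_aversion_eu(m), 2))
# c1: delta 0.0,   eps 33.33    c2: delta 0.667, eps 22.22
# c3: delta 0.667, eps 55.56    c4: delta 0.0,   eps 66.67
# Hence c1 > c2 and c4 > c3, matching the observed Ellsberg pattern.
```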

Table 2: 50:51 Example

         50 balls            51 balls
         E1        E2        E3        E4
    c1   $80       $80       $40       $40
    c2   $80       $40       $80       $40
    c3   $120      $80       $40       $0
    c4   $120      $40       $80       $0

Table 3: Reflection Example

         50 balls            51 balls
         E1        E2        E3        E4
    c1   $40       $80       $40       $0
    c2   $40       $40       $80       $0
    c3   $0        $80       $40       $40
    c4   $0        $40       $80       $40

Recently, Machina (2009) posed questions regarding the ability of Choquet expected utility theory (Coletti, Petturiti, and Vantaggi 2014) and some well-known decision making models under ambiguity to cover variations of the Ellsberg paradox that appear plausible and even natural. We refer to these variations as examples of the Machina paradox.

More specifically, consider the first example, with two pairs of choice selections, as shown in Table 2. In this 50:51 example, the comparison of c1 and c2 vs. that of c3 and c4 differs only in whether the higher prize $80 is offered on the event E2 or E3. As argued by Machina, an ambiguity averse decision maker will prefer c1 to c2, but the decision maker may feel that the tiny ambiguity difference between c3 and c4 does not offset c4's objective advantage (a slight advantage due to the fact that the 51st ball may yield $80), and therefore prefers c4 to c3.

In our method, since the possible utilities of c1 or c2 could be 40 or 80, and those of c3 or c4 could be 120, 80, 40, or 0, we have $\Theta_{c_1} = \Theta_{c_2} = \{40, 80\}$ and $\Theta_{c_3} = \Theta_{c_4} = \{0, 40, 80, 120\}$. Then, by Definitions 4, 2, 3 and 6, we have
$$\varepsilon(c_1) = 80 \times \tfrac{50}{101} + 40 \times \tfrac{51}{101} = 59.8,$$
$$\varepsilon(c_2) = 80 - \left(\tfrac{50}{101} + \tfrac{51}{101}\right) \times \tfrac{\log_2 2}{\log_2 2} \times (80 - 40) = 40,$$
$$\varepsilon(c_3) = 79.6 - \tfrac{\log_2 2}{\log_2 4} \times \left(\tfrac{50}{101} + \tfrac{51}{101}\right) \times (79.6 - 39.6) = 59.6,$$
$$\varepsilon(c_4) = 99.8 - \tfrac{\log_2 2}{\log_2 4} \times \left(\tfrac{50}{101} + \tfrac{51}{101}\right) \times (99.8 - 19.8) = 59.8.$$
Thus, we have $c_1 \succ c_2$ and $c_4 \succ c_3$. Moreover, we find that for c1 and c2, although c2 has a small advantage due to the fact that the 51st ball may yield $80, the decision maker still takes choice c1 for the reason of ambiguity aversion; whilst for c3 and c4, although the ambiguity degrees of both choices are the same, the slight advantage of c4 can still influence the decision maker's selection. In other words, our method can account for the ambiguity aversion of the decision maker as well as the advantage of higher utility in decision making under ambiguity. So our method covers the 50:51 example well. Thus, we can make the following claim:

Claim 2 A 50:51 type Machina paradox can be solved by a decision model that considers both the expected utility interval and the ambiguity aversion attitude of a given decision maker.

Now we turn to the second type of Machina paradox, called the reflection example, shown in Table 3. In this example, since c4 is an informationally symmetric left-right reflection of c1 and c3 is a left-right reflection of c2, any decision maker who prefers c1 to c2 should have the "reflected" ranking, i.e., prefer c4 to c3; and if $c_2 \succ c_1$, then $c_3 \succ c_4$. In recent experimental analyses, some authors found that over 90 percent of subjects expressed strict preference in the reflection problems, and that roughly 70 percent had the structure $c_1 \succ c_2$ and $c_4 \succ c_3$, or $c_2 \succ c_1$ and $c_3 \succ c_4$. Among those subjects, roughly 2/3 preferred "packaging" the two extreme outcomes together, that is, $c_2 \succ c_1$ and $c_3 \succ c_4$ (L'Haridon and Placido 2008). Such a result cannot be explained well by five famous ambiguity models (Baillon and Placido 2011).

Similarly, in our method, since the possible utilities of all choices could be 80, 40, or 0, we have $\Theta_{c_1} = \Theta_{c_2} = \Theta_{c_3} = \Theta_{c_4} = \{0, 40, 80\}$. Then, by Definitions 4, 2, 3 and 6, we have:
$$\varepsilon(c_1) = 60 - \left(\tfrac{1}{2} + \tfrac{1}{2}\right) \times \tfrac{\log_2 2}{\log_2 3} \times (60 - 20) = 34.8,$$
$$\varepsilon(c_2) = 60 - \tfrac{1}{2} \times \tfrac{\log_2 2}{\log_2 3} \times (60 - 20) = 47.4,$$
$$\varepsilon(c_3) = 60 - \tfrac{1}{2} \times \tfrac{\log_2 2}{\log_2 3} \times (60 - 20) = 47.4,$$
$$\varepsilon(c_4) = 60 - \left(\tfrac{1}{2} + \tfrac{1}{2}\right) \times \tfrac{\log_2 2}{\log_2 3} \times (60 - 20) = 34.8.$$
Thus, we have $c_2 \succ c_1$ and $c_3 \succ c_4$. So our method also covers the reflection example in the Machina paradoxes well. In fact, based on the structure of the reflection example, we find that it relies on two factors to construct the paradox. (i) The ambiguity attitudes towards different choices will be different; thus, even though both c1 and c2 have the same expected utility interval, most decision makers still expressed strict preference in the reflection problems. (ii) The decision maker will be indifferent between choices that have the same mass function over the same set of possible obtained utilities. In this vein, we can make the following claim to reveal the essence of such a paradox:

Claim 3 A reflection-type Machina paradox can be solved by a decision model satisfying two conditions:
(i) the decision model can distinguish the ambiguity degrees of different choices; and
(ii) the evaluation of two choices should be the same if they have the same mass function over the same set of utilities.
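For concreteness, the ε values of both Machina examples can be checked with the hypothetical helpers sketched earlier (our own illustration; the focal elements below encode what each ball group can yield, and the reflection example uses the same 1/2 : 1/2 weights as the calculation above):

```python
# 50:51 example (Table 2): each ball group contributes one focal element,
# namely the set of utilities that group can yield, weighted by its probability.
p50, p51 = 50/101, 51/101
table2 = {
    "c1": {frozenset({80}): p50, frozenset({40}): p51},
    "c2": {frozenset({40, 80}): p50 + p51},
    "c3": {frozenset({80, 120}): p50, frozenset({0, 40}): p51},
    "c4": {frozenset({40, 120}): p50, frozenset({0, 80}): p51},
}
# Reflection example (Table 3), with equal weights as in the text above.
table3 = {
    "c1": {frozenset({40, 80}): 0.5, frozenset({0, 40}): 0.5},
    "c2": {frozenset({40}): 0.5, frozenset({0, 80}): 0.5},
    "c3": {frozenset({0, 80}): 0.5, frozenset({40}): 0.5},
    "c4": {frozenset({0, 40}): 0.5, frozenset({40, 80}): 0.5},
}
for label, table in (("50:51", table2), ("reflection", table3)):
    print(label, {k: round(ambiguity_aversion_eu(m), 1) for k, m in table.items()})
# 50:51      {'c1': 59.8, 'c2': 40.0, 'c3': 59.6, 'c4': 59.8}
# reflection {'c1': 34.8, 'c2': 47.4, 'c3': 47.4, 'c4': 34.8}
```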

Related Work

There are three strands of literature related to our research.

First, there have been various decision models that consider the impact of ambiguity in decision making under uncertainty. For example, the Choquet expected utility model (Chateauneuf and Tallon 2002; Coletti, Petturiti, and Vantaggi 2014; Schmeidler 1989) successfully captures the type of ambiguity aversion displayed in the Ellsberg paradox by applying the Choquet integral to model the expected utility. Maxmin expected utility (Gilboa and Schmeidler 1989) considers the worst outcome of a decision under ambiguity to find a robust choice. And the α-MEU model (Ghirardato, Maccheroni, and Marinacci 2004) distinguishes different ambiguity attitudes of decision makers. Also, a general model called variational preferences (VP) has been proposed (Maccheroni, Marinacci, and Rustichini 2006) to capture both MEU and multiplier preferences. Moreover, the smooth ambiguity model (Klibanoff, Marinacci, and Mukerji 2005) enables decision makers to separate the effect of ambiguity aversion from that of ambiguity. However, these models cannot provide an explanation for both of the two paradoxes that we analyse in this paper (Baillon and Placido 2011).

Second, some decision models based on D-S theory have been proposed to solve decision problems under ambiguity. Generally speaking, various decision models based on D-S theory can be regarded as addressing the Unique Ranking Order Satisfaction Problem (UROSP), where the point-valued probability of each single outcome is unknown or unavailable. The ordered weighted averaging model (Yager 2008), the Jaffray–Wakker approach (Jaffray and Wakker 1993), and the partially consonant belief functions model (Giang 2012) have developed different formulas based on a decision maker's attitude to search for solutions to the UROSP. However, these models are based on the assumption that a decision maker's attitude can be elicited accurately. Unfortunately, it is difficult to obtain a determinate point value of a decision maker's attitude in an uncertain environment. Therefore, some authors apply basic notions and terminology of D-S theory without any additional parameter; TBM (Smets and Kennes 2008) and MRA (Liu, Liu, and Liu 2013) are of this kind. Nevertheless, neither can solve the two famous paradoxes as well as the model proposed in this paper.

Third, in recent years, some decision models (Gul and Pesendorfer 2014; Siniscalchi and Marciano 2008) have been proposed with the claim of solving both of these two paradoxes with some parameter variations. For example, the expected uncertain utility (EUU) theory (Gul and Pesendorfer 2014) solves the paradoxes based on the interval-valued expected utility and utility indexes (a parameter variation) applied to the upper and lower bounds of the interval. However, such a model has the following limitations: (i) it is unclear why a decision maker needs to choose different utility indexes in different decision problems; (ii) since such utility indexes are obtained after we know the human selection in a given decision problem or paradox, it is questionable whether such utility indexes have the same predictive power for new problems; and (iii) such a model is somehow descriptive rather than normative, and as a result it has few benefits for decision support. Nevertheless, our ambiguity-aversion model can work as a normative decision making model that solves the two paradoxes without setting any additional parameters.

Finally, consider the belief expected utility model in (Jaffray 1989; Jaffray and Philippe 1997), which can be recovered in the mathematical form considered in our model as
$$V(c) = \alpha V^+(c) + (1-\alpha) V^-(c),$$
where $\alpha \in [0,1]$ is a constant, and
$$V^+(c) = \sum_{B} \varphi(B)\,u(M_B), \qquad V^-(c) = \sum_{B} \varphi(B)\,u(m_B),$$
where $\varphi$ is the Möbius transform of a belief function that acts as the mass function $m$, $B$ is a set of outcomes, and $m_B$ and $M_B$ are respectively the worst and best outcomes in set $B$. However, since the value of $\alpha$ is the value of a pessimism index function, determined by the risk attitude or ambiguity attitude of the decision maker, it should not change for the same paradox. Thus, for the 50:51 example of the Machina paradox, we find that it is hard to determine a value of $\alpha$ that satisfies the preference ordering. And if we change the utilities of some outcomes in the Ellsberg paradox, the range of $\alpha$ will change. Therefore, as mentioned in (Jaffray and Philippe 1997), they just give a descriptive model to accommodate the paradoxes. And since such an $\alpha$ is a parameter, Jaffray's model also has the same limitations as decision models with additional parameters. In contrast, since our method gives a way to calculate the ambiguity degree, which can change accordingly, we can avoid the issues of Jaffray's model.

Conclusion

This paper proposed an ambiguity aversion decision model for decision making under ambiguity, based on D-S theory and a normalised version of the generalised Hartley measure of non-specificity. First, we introduced the basic principles that should be followed when setting the preference ordering in decision making under ambiguity. Then, we proposed an ambiguity-aversion model to instantiate these principles and revealed some insightful properties of the model. Finally, we validated our model by solving two famous paradoxes (i.e., the Ellsberg paradox and the Machina paradox).

There are many possible extensions to our work. Perhaps the most interesting one is the axiomatisation of our method and related psychological experimental studies. To achieve this goal, we need to identify further axioms to derive the ambiguity degree, as well as construct a representation theorem based on the proposed axioms. Another tempting avenue is to apply our decision model to game theory, as (Zhang et al. 2014) did. Finally, although our method can model general human behaviour in decision making under ambiguity, as shown by solving some well-known paradoxes in the field of decision making, it is still worth discussing the construction of a parameterised model based on our ambiguity aversion model to describe an individual's behaviour in real-world applications.

Acknowledgements

This work was partially supported by the Bairen plan of Sun Yat-sen University, the National Natural Science Foundation of China under Grant No. 61272066, the Program for New Century Excellent Talents in University in China under Grant No. NCET-12-0644, the Natural Science Foundation of Guangdong Province of China under Grant No. S2012030006242, and the Project of Science and Technology in Guangzhou in China under Grant Nos. 2014J4100031 and 201604010098.

References

Baillon, A., and Placido, L. 2011. Ambiguity models and the Machina paradoxes. American Economic Review 101(4):1547–1560.
Chateauneuf, A., and Tallon, J. M. 2002. Diversification, convex preferences and non-empty core in the Choquet expected utility model. Economic Theory 19(3):509–523.
Coletti, G.; Petturiti, D.; and Vantaggi, B. 2014. Choquet Expected Utility Representation of Preferences on Generalized Lotteries. Springer International Publishing.
Dubois, D., and Prade, H. 1985. A note on measures of specificity for fuzzy sets. International Journal of General Systems 10(4):279–283.
Dubois, D.; Fargier, H.; and Perny, P. 2003. Qualitative decision theory with preference relations and comparative uncertainty: An axiomatic approach. Artificial Intelligence 148(1):219–260.
Ellsberg, D. 1961. Risk, ambiguity and the Savage axioms. Quarterly Journal of Economics 75(4):643–669.
Ghirardato, P.; Maccheroni, F.; and Marinacci, M. 2004. Differentiating ambiguity and ambiguity attitude. Journal of Economic Theory 118(2):133–173.
Giang, P. H. 2012. Decision with Dempster-Shafer belief functions: Decision under ignorance and sequential consistency. International Journal of Approximate Reasoning 53(1):38–53.
Gilboa, I., and Schmeidler, D. 1989. Maxmin expected utility with non-unique prior. Journal of Mathematical Economics 18(2):141–153.
Gul, F., and Pesendorfer, W. 2014. Expected uncertain utility theory. Econometrica 82(1):1–39.
Hindmoor, A., and Taylor, B. 2015. Rational Choice. Palgrave Macmillan Ltd.
Houthakker, H. S. 1950. Revealed preference and the utility function. Economica 17(17):159–174.
Jaffray, J. Y., and Jeleva, M. 2007. Information processing under imprecise risk with the Hurwicz criterion. In Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications, 117–129.
Jaffray, J. Y., and Philippe, F. 1997. On the existence of subjective upper and lower probabilities. Mathematics of Operations Research 22(1):165–185.
Jaffray, J. Y., and Wakker, P. 1993. Decision making with belief functions: Compatibility and incompatibility with the sure-thing principle. Journal of Risk & Uncertainty 7(3):255–271.
Jaffray, J. Y. 1989. Linear utility theory for belief functions. Operations Research Letters 8(2):107–112.
Klibanoff, P.; Marinacci, M.; and Mukerji, S. 2005. A smooth model of decision making under ambiguity. Econometrica 73(6):1849–1892.
L'Haridon, O., and Placido, L. 2008. Betting on Machina's reflection example: An experiment on ambiguity. Theory & Decision 69(3):375–393.
Liu, H. C.; Liu, L.; and Liu, N. 2013. Risk evaluation approaches in failure mode and effects analysis: A literature review. Expert Systems with Applications 40(2):828–838.
Liu, W. 2006. Analyzing the degree of conflict among belief functions. Artificial Intelligence 170(11):909–924.
Maccheroni, F.; Marinacci, M.; and Rustichini, A. 2006. Ambiguity aversion, robustness, and the variational representation of preferences. Econometrica 74(6):1447–1498.
Machina, M. J. 2009. Risk, ambiguity, and the rank-dependence axioms. The American Economic Review 99(1):385–392.
Neumann, J. v., and Morgenstern, O. 1944. Theory of Games and Economic Behavior. Princeton University Press.
Savage, L. J. 1954. The Foundations of Statistics, volume 1. John Wiley & Sons Inc.
Schmeidler, D. 1989. Subjective probability and expected utility without additivity. Econometrica 57(3):571–587.
Shafer, G. 1976. A Mathematical Theory of Evidence. Princeton University Press.
Siniscalchi, and Marciano. 2008. Vector expected utility and attitudes toward variation. Econometrica 77(3):801–855.
Smets, P., and Kennes, R. 2008. The transferable belief model. Artificial Intelligence 66:191–234.
Strat, T. M. 1990. Decision analysis using belief functions. International Journal of Approximate Reasoning 4(5-6):391–417.
Suzumura, K. 2016. Choice, Preferences, and Procedures: A Rational Choice Theoretic Approach. Harvard University Press.
Tversky, A., and Kahneman, D. 1992. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk & Uncertainty 5(4):297–323.
Yager, R. R., and Alajlan, N. 2015. Dempster-Shafer belief structures for decision making under uncertainty. Knowledge-Based Systems 80(C):58–66.
Yager, R. R. 2008. Decision Making Under Dempster-Shafer Uncertainties. Springer Berlin Heidelberg.
Zhang, Y.; Luo, X.; Ma, W.; and Leung, H.-F. 2014. Ambiguous Bayesian games. International Journal of Intelligent Systems 29(12):1138–1172.
