Experiencing Default Nudges: Autonomy, Manipulation, and Choice-Satisfaction as Judged by People Themselves

Patrik Michaelsen*, Lars-Olof Johansson & Martin Hedesström. Department of Psychology, University of Gothenburg.

Critiques of nudging suggest that nudges infringe on decision makers' autonomy. Yet, little empirical research has explored whether people who are subjected to nudges agree. In three online between-group experiments (N = 2083), we subject participants to different choice architectures and measure their experiences of autonomy, choice-satisfaction, perceived threat to freedom of choice, and objection to the choice format. Participants who received an opt-out nudge made more prosocial choices but did not report more negative choice experiences compared to participants in opt-in default or active choice conditions. This was predominantly the case even when the presence of the nudge was made transparent to participants, and when choice stakes were increased. Our results suggest that defaults are less manipulative and autonomy-infringing than sometimes feared. Policy-makers should include measures of choice experiences when testing out new nudges.

Keywords: choice architecture; nudge; default; autonomy; manipulation; transparency

*Corresponding author. [email protected]. Twitter: @p_michaelsen

Unethical behavior change interventions should not be used for policy. However, the class of policy interventions known as nudges, interventions influencing behavior by changing cues in choice environments, has been accused of just this: unethicality (for book-length discussions, see Conly, 2012; Rebonato, 2012; Sunstein, 2014, 2015, 2016; White, 2013). The stakes of the accusation are high, as nudges or similar behaviorally informed interventions are reported to already be in use by over 200 government units all over the world (OECD, 2019).

In the ethical debate surrounding nudging, most critiques cluster around two main lines. One deems nudging manipulative, infringing on rational decision-making capabilities and threatening freedom of choice. A second deems nudging paternalistic, overriding people's means or ends for those preferred by the choice architect behind the intervention. Central to both charges is that nudging is claimed to pay insufficient respect to individuals' autonomy.1

But who should principally be the judge of whether autonomy is respected? On a subjectivist view, the individual is their own best judge, and assessment should be left up to them (cf. "…better off as judged by themselves", Thaler & Sunstein, 2008, p.6, italics added). An important implication of this position is that respect for autonomy is an empirical issue, calling not

1 Common retorts from proponents of nudging include advocating intervention transparency and simple exit clauses, or arguing that some environmental influence is inescapable and that (benevolent) intervening is preferable to random design. All of these arguments are raised by Thaler & Sunstein in Nudge (2008), but see also e.g. Bovens, 2009; Hansen & Jespersen, 2013; Hausman & Welch, 2010; Nys & Engelen, 2017; Wilkinson, 2013.

only for the perspectives of theorists but also for those of people who are subjected to the nudge.2 In this regard, it is surprising that empirical research has not to a larger extent been used as the basis of argument by ethicists (Sunstein, 2016, 2017 constitute exceptions), or been conducted at all. To date, empirical work on people's perceptions of nudging mainly consists of survey studies, wherein participants rate descriptions of nudges without actually encountering them. We argue that this approach only captures part of the picture, and that choice experiences, such as subjective autonomy, are better understood by subjecting participants to the nudge at hand. In essence, it is the experience of autonomy that is ethically relevant, not a projection thereof, and research suggests the two can very well differ (Francis, Howard, Howard, Gummerum, Ganis, Anderson, & Terbeck, 2016; Patil, Cogoni, Zangrando, Chittaro, & Silani, 2014; Tassy, Oullier, Mancini, & Wicker, 2013).

In the following we review the empirical literature on nudge perceptions, with a focus on autonomy and freedom of choice. We conclude that discrepancies are already hinted at between findings for rated descriptions and actual experiences of nudges, but that the latter category is under-researched. We then present three studies where we use opt-out defaults to nudge people and afterwards measure their experiences of autonomy, choice-satisfaction, perceptions of threat to freedom of choice, and objections to the choice format. Our results suggest that accusations of nudging being manipulative and unethical may be overstated, as judged by people themselves.

Public opinions on nudging

What could be labelled policy acceptance studies targeting nudge interventions have seen a proliferation in the last few years. This research consists of survey studies where participants read descriptions of nudge interventions and rate them on acceptability and other dimensions. The general conclusion is that most researched nudges receive majority support in most countries researched. Exceptions concern nudges that deprive people of their money in non-transparent ways (e.g., a defaulted carbon emission offset), default enrollment in cadaveric organ donation, and nudges perceived to be working against one's social group, or that are proposed by a political opponent (Hagman, Andersson, Västfjäll, & Tinghög, 2015; Jung & Mellers, 2016; Pe'er, Feldman, Gamliel, Sahar, Tikotsky, Hod, & Schupak, 2019; Reisch & Sunstein, 2016; Sunstein, Reisch, & Rauber, 2017; Tannenbaum, Fox, & Rogers, 2017).

Some studies have investigated people's views on how nudges may influence autonomy and freedom of choice. For instance, Jung and Mellers (2016) found descriptions of nudges targeting automatic cognitive processes to be rated as both more autonomy-threatening and paternalistic than nudges targeting reflective cognitive processes. Hagman et al. (2015) showed majority support for eight different types of nudges, but that a majority of participants simultaneously considered six of them intrusive to freedom of choice. In three studies on default nudges, Yan and Yates (2019) found opt-out defaults to be viewed more negatively than opt-in defaults in several regards. In applications of organ donation, carbon emission offsets and retirement savings, a consistent pattern showed opt-out defaults to be rated as less ethical, more

2 Of course, not everyone is a subjectivist in this matter. However, an objectivist, considering autonomy present or absent regardless of what the agent thinks, may nevertheless value the subjective experience for possible downstream consequences, e.g. carryover effects to other values such as well-being, or for policy implementation success.

deceptive and manipulative, more coercive, and rendering less expected autonomy, when compared to opt-in defaults. This occurred even though the nudge was complemented by efforts to increase transparency and by highlighting that opting out was possible.

In sum, survey research suggests people find nudges an acceptable means of behavior change, but that there are concerns with how freedom of choice and autonomy may be affected. This provides a somewhat ambiguous empirically based justification for nudging, with people on the one hand adhering to criticisms, but on the other simultaneously stating agreement. However, as outlined above, we believe this research perspective needs to be complemented with judgments from people actually experiencing nudges, especially when assessing choice experiences such as autonomy.

People's experiences of being nudged

Subjective experiences of being nudged have not been a primary topic in nudging research up to this point. Instead, to the extent they have been researched at all, these issues have been presented as outcomes secondary to nudge effectiveness on the target behavior, or as mediators to this end. A few findings relating to autonomy and freedom of choice exist, however. One early experiment examining choice experiences is the study by Abhyankar, Summers, Velikova and Bekker (2014). In this study, experiences of autonomy ("perceived behavioral control") were measured for participants subjected to an opt-out default nudge in a hypothetical cancer treatment choice. Results showed participants receiving the opt-out default to report high levels of behavioral control, and not significantly lower control than participants subjected to an opt-in default or an active choice format. Additionally, opt-out participants' choice satisfaction mirrored these findings, with high absolute ratings and no significant differences compared to the other choice formats. Unfortunately, however, while breaking new ground, this study may not have had the methods necessary to detect potential negative effects of the nudge. There are at least three methodological limitations that bear on this issue. First, the sample size was limited to 124 participants (41-42 per condition). Second, the choice stakes could be considered low from a lack of palpable consequences. Finally, it is unclear to what extent participants were aware of the nudge, leaving it open that generalizability may be limited to cases where the nudge intervention is not noticed.

As far as we are aware, no other study has measured the experienced autonomy of participants subjected to a nudge. Two studies, however, have measured a related construct: perceived threat to freedom of choice. In two experiments on carbon offset contributions, Bruns and colleagues found an opt-out nudge to be perceived as more freedom-threatening than a recommendation (but less so than a mandate; Bruns & Perino, 2019), and that increasing transparency of the nudge did not affect how it was perceived (Bruns, Kantorowicz-Reznichenko, Klement, Luistro Jonsson, & Rahali, 2018). While freedom of choice is a prerequisite for autonomy, perceived threat to freedom of choice is a judgment of whether an influence attempt is happening or not. It should be noted that this is different from influence actually having been exerted on a choice, which is what would constitute an infringement on autonomy.
Relatedly, at least two empirical studies have inferred, but not measured, that opt-out defaults can lead to reactant behavior (Arad & Rubinstein, 2018; Hedlin & Sunstein, 2016). Furthermore, it has been demonstrated empirically that people subjected to nudges perceived the nudge as fair (Steffel, Williams & Pogacar, 2016; Michaelsen, Nyström, Luke, Johansson & Hedesström, 2020), perceived the choice architect as trustworthy (Paunov, Wänke & Vogel, 2018), and that they were willing to work with the choice architect again in the future (Steffel et al., 2016).

In sum, evidence suggests that people judge nudges more favorably when experiencing them than when rating them from a distance. For instance, experiences of behavioral control, threat to freedom of choice, and fairness seem at odds with findings in the policy acceptance literature. However, the research on experiences of nudges is thus far scarce, and methodological issues leave important questions unanswered. To the extent that we are correct in identifying autonomy as of central relevance to the ethics of nudging, further research on this particular issue seems warranted.

Overview of present studies

Across three experiments (total N = 2083), we subject participants to default nudges and measure experiences of autonomy, choice-satisfaction, perceived threat to freedom of choice, and objection to the choice format. All studies use opt-out defaults to nudge people. Defaults were chosen for being a prototypical nudge, often having a large effect on choice, and for being subtle enough to allow for a manipulation of transparency of the intervention. Due to the scarcity and limitations of prior studies on choice experiences in nudge interventions, we approach this research without directional hypotheses. Motivated by the charge that nudge interventions interfere with important aspects of choice, our central aim is to investigate whether an opt-out default nudge results in different experiences of autonomy, choice-satisfaction, perceived manipulation, and objection compared to an opt-in default or active choice format. For all experiments, however, we predict a default effect on the target behavior (Jachimowicz, Duncan, Weber, & Johnson, 2019).

Study 1: Experiencing a pro-environmental default nudge

In experiment 1, we provide a first test of how different choice architecture settings affect people's experiences and perceptions when choosing. The choice task is formatted as one of three common choice architectures in a between-groups design: opt-out default, opt-in default, or active choice. This allows us to compare the opt-out default condition, i.e. the "nudge", both to the default which commonly is considered "business-as-usual", i.e. the opt-in default, as well as to a format without a default requiring active choosing.

Method

Participants

Participants were recruited through Amazon Mechanical Turk (MTurk), and paid $0.40 for their time. We aimed to recruit 100 participants per condition. With an expected 20% dropout from unfinished surveys and failure in comprehension checks, we requested 360 participants (362 complete responses received). We used three attention and comprehension checks, and excluded participants that failed one or more. Specifically, we excluded participants who failed to answer how the choice task was formatted3 (56 participants failed), failed an attention check requiring responding with a specific scale step (12 participants failed), or stated that their choice in the choice task (number of pro-environmental appliances chosen) was more than +/- 2 off from what was

3 After the data had been collected, we realized that our question was somewhat ambiguously phrased for participants in the active choice condition, resulting in a large number of incorrect answers. For this reason, we decided not to use this question for exclusions in that condition. All reported results are substantially the same regardless of exclusion or not. Descriptives for both are provided in Appendix A.

recorded (52 participants failed). Our final sample consisted of 290 participants (M = 36.47 years old, SD = 11.28; 44% female). The dropouts were, however, systematic, in that participants in the opt-out condition were more likely to incorrectly answer how the choice was formatted4. This rendered experimental conditions unbalanced, with sample sizes of 60 (opt-out), 115 (active choice), and 115 (opt-in). The lowest powered two-group comparison thus had 80% power to detect an effect of Cohen's d = .45.

Procedure and Materials

In the experimental scenario, participants imagined moving to a new apartment, and that before moving in they could choose appliances for their new residence (the scenario was adapted from Steffel et al., 2016). Participants were told that 10 specified appliances were included but that they had to decide between having the appliances in an environmentally friendly ("green") or standard ("non-green") version. The choice task was experimentally manipulated in a between-groups design with participants randomized into one of three conditions. An opt-out condition had all appliances pre-selected as green, an opt-in condition had all pre-selected as non-green, and a third condition required an active choice for each appliance. In the two default conditions participants could change their choice by ticking a box next to each appliance. In the active choice condition, participants were required to explicitly state Yes or No to having each appliance in the environmentally friendly version. In all conditions, each green appliance came with a cost ($1 - $7), to be added to or subtracted from the apartment's monthly rent.

Subsequent to choosing appliances, participants answered questions regarding experienced autonomy, perceived threat to freedom of choice, and choice-satisfaction. Based on a scale used by Cornwell and Krantz (2014), adjusted in line with recommendations from Felsen, Castelo and Reiner (2013), experienced autonomy was measured with six items referring to the actual experience of choosing, i.e. whether the participant experienced control in making the decision. Perceived threat to freedom of choice was measured by Dillard and Shen's (2005) 4-item scale, where the items refer to how the choice environment was perceived by the participant, i.e. whether the choice environment tried to influence them in any direction, regardless of whether this was judged to affect the choice made. Choice-satisfaction was measured with a single item and referred to contentment with the decision made. All items used 9-point scales with labels at end points. For further detail on scales used, see Appendix B. Presentation order for experienced autonomy and perceived threat to freedom of choice was randomized. Choice-satisfaction was measured prior to the above two. For exploratory purposes, we also included some individual-difference measures pertaining to control in decision-making. Specifically, we included select items from the desirability of control scale (Burger & Cooper, 1979), the internal control subscale from Sapp and Harrod's locus of control short scale (1993), as well as select items concerning trait reactance (Hong & Page, 1989). Stimulus materials and data are available in an online supplementary material and at osf.io/69be8.

4 The modal error was to reverse the choice scale (0-10), for instance reporting having chosen 0 green appliances when 10 were actually recorded; in other words, understanding the choice format as if it had been opt-in.
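The sensitivity figure reported in the Participants subsection above (80% power to detect Cohen's d = .45 for the smallest two-group comparison, n = 60 vs. 115) can be reproduced along the following lines. This is a minimal sketch, assuming a two-sided test at α = .05 and using statsmodels; it is not drawn from the authors' analysis scripts.

```python
# Sketch: sensitivity of the smallest two-group comparison in experiment 1
# (opt-out n = 60 vs. a comparison group of n = 115), assuming alpha = .05, two-sided.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
detectable_d = analysis.solve_power(
    effect_size=None,   # solve for the smallest detectable effect size
    nobs1=60,           # smaller group (opt-out)
    ratio=115 / 60,     # ratio of larger to smaller group
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(round(detectable_d, 2))  # approximately 0.45, in line with the reported figure
```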


Figure 1. Frequency distributions for all dependent variables in experiment 1, separated by condition.

Results

All main analyses were conducted as analysis of variance (ANOVA) comparisons between choice format conditions (opt-out vs. opt-in vs. active choice) unless otherwise stated. All post hoc analyses used the Tukey HSD test. Summary descriptive results can be found in Table 1, frequency distributions in Figure 1, and additional analyses and visualizations in the online supplementary material or at osf.io/69be8.

Default effect on choice. There was a significant influence from the choice architecture intervention on how many green appliances people chose,5 F(2, 287) = 32.39, p < .001, ηp2 = 0.184. Post-hoc comparisons showed significant differences between the opt-out condition (M = 6.75, SD = 2.96) and both the opt-in condition (M = 3.38, SD = 2.28; p < .001) and the active choice condition (M = 4.23, SD = 2.82; p < .001). The difference between opt-in and active choice was also significant (p = .043).

Experienced autonomy. Participants did not significantly differ in experienced autonomy between the opt-out default (M = 7.88, SD = 1.42), opt-in default (M = 7.55, SD = 1.40), and active choice (M = 7.78, SD = 1.17) conditions, F(2, 287) = 1.51, p = .222, ηp2 = 0.010.

Choice-satisfaction. No significant differences in choice-satisfaction were found between the opt-out default (M = 8.13, SD = 1.13), opt-in default (M = 7.72, SD = 1.45), and active choice (M = 7.78, SD = 1.36) conditions, F(2, 287) = 1.95, p = .145, ηp2 = 0.013.

Perceived threat to freedom of choice. Similarly for perceived threat to freedom of choice, no significant differences were found between the opt-out default (M = 2.33, SD = 2.03), opt-in default (M = 2.16, SD = 1.82), and active choice (M = 2.50, SD = 2.00) conditions, F(2, 287) = 0.86, p = .425, ηp2 = 0.006.

Additional analyses. A possible concern could be that the above summary statistics conceal subsets of participants reacting aversively to the opt-out nudge. Another could be that associations between the dependent variables differed between choice formats, for instance if a high number of green appliances chosen was linked to high autonomy and satisfaction in one condition but not in another. We explored this in visualizations and correlation analyses. As can be seen in Figure 1, there were no signs of bi-modal distributions for any of the experience and perception variables, for any of the choice formats. Instead, all distributions were skewed towards favorable evaluations. There was hence no evidence of some group of participants being negatively affected by the opt-out nudge. We also found that choice correlated similarly with the other dependent variables across all experimental conditions (see supplementary materials for details). For the full sample, correlations with choice read: autonomy: r = .18, p = .002, choice-satisfaction: r = .16, p = .007, and perceived threat to freedom of choice: r = -.11, p = .057. Correlations between other DVs were also similar across choice formats.

Discussion

Experiment 1 found that while setting the choice architecture in an opt-out format had a sizeable influence on choices, participants' experiences of autonomy, choice-satisfaction and perceived threat to freedom of choice did not differ from those subjected to an opt-in or active choice format. It should also be noted that ratings were overall favorable, with reports of autonomy and satisfaction high and perceptions of threat to freedom of choice low. As evidenced by the

5 Levene’s test indicated the assumption of equal error variances was violated for this analysis. A Kruskal-Wallis test however produced results similar to the ANOVA (p < .001).

visualizations in Figure 1, this seemed to largely be the case for the whole sample, with no real signs of subgroups reacting aversively to being subjected to the opt-out default. We suggest these findings most plausibly could be due to three explanations: 1) non-existent or minuscule true effects (or correspondingly: a lack of statistical power in the present experiment), 2) a lack of recognition of the nudge leading to weak experimental manipulation strength, or 3) a lack of engagement with the hypothetical choice task. If subjective autonomy-infringement remains low in more severe testing, i.e. testing with increased statistical power and addressing points 2 and 3, this should be of relevance when assessing the ethicality of using nudges as policy instruments. In the next two experiments we seek to address these issues.
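For readers wishing to reproduce this analysis strategy on the open data (osf.io/69be8), the sketch below illustrates a one-way ANOVA followed by Tukey HSD comparisons of the kind reported above. It is a minimal illustration, assuming a long-format data file and hypothetical column names (choice_format, green_chosen) rather than the variable names used in the published materials.

```python
# Sketch: one-way ANOVA with Tukey HSD post hoc tests, mirroring the
# choice-format comparisons in experiment 1 (hypothetical column names).
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("experiment1.csv")  # assumed long-format export of the open data

groups = [g["green_chosen"].values for _, g in df.groupby("choice_format")]
f_stat, p_value = stats.f_oneway(*groups)            # omnibus test across the three formats
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

tukey = pairwise_tukeyhsd(endog=df["green_chosen"],   # dependent variable
                          groups=df["choice_format"], # opt-out / opt-in / active choice
                          alpha=0.05)
print(tukey.summary())
```

The same template applies to the autonomy, choice-satisfaction, and threat-to-freedom measures by swapping the dependent variable column.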

Study 2: Do choice experiences deteriorate when intervention transparency is increased?

In experiment 2, we seek to replicate the findings from experiment 1 and examine whether the lack of significant differences in experiences and perceptions between choice formats could stem from the nudge not being noticed. We thus add a choice architecture disclosure in the form of a short information text presented to approximately half the sample before choosing. The text box discloses that the way in which a choice situation is presented to a decision maker may influence their choice, as well as in what direction this effect could be expected to exert influence in the present context. Apart from this addition, experiment 2 uses the same design and basic materials as experiment 1.

Method

Participants

Participants were recruited through Amazon Mechanical Turk (MTurk), and paid $0.45 for their time. We aimed to collect 200 participants per choice format (100 receiving the nudge disclosure, 100 not) after exclusions. This corresponds to 80% power to detect main effect differences of ηp2 = 0.0159. Expecting a 20% dropout, we requested 720 participants (722 completed responses received). We used two comprehension checks, and excluded participants that failed at least one of these. Specifically, participants were excluded for failing to answer how the choice format was set up (87 participants failed), or for stating that their choice of environmental appliances was more than +/- 2 off from what was recorded (66 participants failed). We also included a question assessing comprehension of the disclosure contents (134 of 355 participants failed, 37.7%). Our analysis strategy below is to first report analyses for the full sample (not excluding for the disclosure comprehension check), and then follow up with an additional analysis of only participants receiving the disclosure and passing the disclosure comprehension check (answering "what is the experience of someone who is disclosed to and understands that they are subjected to a nudge?"). Descriptive results separated for participants undisclosed, disclosed, and disclosed and passing the comprehension check can be found in Appendix A, and data is available in an online supplementary material and at osf.io/69be8.

The final sample, not excluding for the disclosure check, consisted of 606 participants (M = 35.8 years old, SD = 11.23; 48.3% female). As there had been a larger dropout from the opt-out condition in the previous experiment, we attempted to clarify the choice task instructions. Balance was successfully improved, with group sizes of 179 (opt-out), 203 (active choice), and 224 (opt-in).
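The sample size target corresponds to the following sensitivity calculation for the choice-format main effect. This is a sketch under our own simplifying assumptions (a one-way approximation with three groups, total N = 600, α = .05), converting partial eta squared to Cohen's f; it is not taken from the authors' analysis scripts.

```python
# Sketch: approximate power for a three-group main effect with total N = 600,
# converting the reported partial eta squared (0.0159) to Cohen's f.
from math import sqrt
from statsmodels.stats.power import FTestAnovaPower

eta_p2 = 0.0159
cohens_f = sqrt(eta_p2 / (1 - eta_p2))  # f ≈ 0.127

power = FTestAnovaPower().power(
    effect_size=cohens_f,  # Cohen's f
    nobs=600,              # total sample size across the three choice formats
    alpha=0.05,
    k_groups=3,
)
print(round(power, 2))  # approximately 0.80, matching the stated target
```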

Procedure and Materials

Experiment 2 used the same apartment acquisition scenario as experiment 1. The design was expanded by manipulating the transparency of the intervention. Before choosing apartment appliances, approximately half of the participants in each choice format condition were shown a text disclosing the presence of the choice architecture intervention and its presumed influence on choice. For instance, in the opt-out default condition disclosed participants received information stating that when choice options are presented with a pre-selection this may have an influence on choice, and that in this particular setup it may lead to people choosing a relatively high number of environmentally friendly appliances. Specific wordings for each condition can be found in the online supplementary material and at osf.io/69be8. After making choices of appliances, participants answered the same measures as in experiment 1.6

Results

All main analyses were conducted as 3 (choice format: opt-out vs. opt-in vs. active choice) by 2 (transparency: disclosure present vs. absent) ANOVAs unless otherwise stated. The Tukey HSD test was used for all post hoc analyses. Summary descriptive results can be found in Table 1, frequency distributions in Figure 2.

Default effect on choice. As expected, there was a significant main effect of choice format on the number of green household appliances chosen, F(2, 600) = 83.35, p < .001, ηp2 = 0.217. Post-hoc comparisons revealed a significant difference between the opt-out default (M = 7.46, SD = 2.57) and opt-in default (M = 4.14, SD = 2.65; p < .001) conditions, as well as a significant difference between the opt-out and active choice conditions (M = 4.72, SD = 2.83; p < .001). The difference between opt-in and active choice was not significant (p = .067). There was no significant main effect of the transparency manipulation, F(1, 600) = 0.84, p = .361, ηp2 = 0.001, or interaction effect between choice format and transparency, F(2, 600) = 0.93, p = .397, ηp2 = 0.003. Results for participants who were disclosed of the choice architecture and passed the disclosure comprehension check were highly similar. The one-way ANOVA was significant, F(2, 191) = 27.30, p < .001, ηp2 = 0.222, and post hoc analysis showed participants in the opt-out condition (M = 7.31, SD = 2.78) making significantly more pro-environmental choices than participants in either the opt-in (M = 4.09, SD = 2.48, p < .001) or active choice condition (M = 4.70, SD = 2.71, p < .001). The difference between opt-in and active choice was not significant (p = .418).

Experienced autonomy. There was a significant main effect of choice format on experienced autonomy, F(2, 600) = 3.43, p = .033, ηp2 = 0.011. Specifically, participants in the opt-out default condition (M = 7.84, SD = 1.30) experienced higher autonomy than participants in the opt-in condition (M = 7.49, SD = 1.46, p = .029). There was no significant difference between active choice (M = 7.72, SD = 1.35) and opt-in (p = .204), or opt-out (p = .646). There was also no significant main effect of the transparency manipulation, F(1, 600) = 0.31, p = .578, ηp2 = 0.001, or interaction effect between choice format and transparency, F(2, 600) = 0.22, p = .802, ηp2 = 0.001.

6 An additional item was added to the perceived threat to freedom of choice scale in this experiment only. For consistency with the other experiments and previous literature, we chose to drop this item from all analyses below. Results are substantially the same whether the item is included or not (see Tables S11 and S12 in the supplementary material).

For disclosed participants passing the comprehension check, no significant difference between conditions was found, F(2, 191) = 1.79, p = .171, ηp2 = 0.018. An inspection of the descriptives, however, shows the means and standard deviations to be highly similar to those of the full sample reported above: opt-out (M = 7.84, SD = 1.37), opt-in (M = 7.42, SD = 1.46), active choice (M = 7.79, SD = 1.18). This suggests that the change in statistical significance may have been a result of decreased power. More importantly, however, the result shows that participants who understood the nudge disclosure did not consider themselves to be less autonomous, even though their choices had in fact been heavily influenced by the choice format.

Choice-satisfaction. For choice-satisfaction, a significant main effect of choice format was found, F(2, 600) = 4.37, p = .013, ηp2 = 0.014. Post-hoc comparisons showed a significant difference between the opt-out default (M = 7.97, SD = 1.27) and opt-in default (M = 7.57, SD = 1.39, p = .012), with active choice wedged in between (M = 7.72, SD = 1.50), not significantly different from either opt-in (p = .588) or opt-out (p = .148). There was no significant main effect of the transparency manipulation, F(1, 600) = 1.89, p = .169, ηp2 = 0.003, or interaction effect, F(2, 600) = 1.98, p = .139, ηp2 = 0.007. The significant difference between choice formats persisted when analyzing only participants who received and understood the disclosure, F(2, 191) = 5.40, p = .005, ηp2 = 0.054. Again, post hoc analysis showed participants in the opt-out condition (M = 7.94, SD = 1.31) to be significantly more satisfied than participants in the opt-in condition (M = 7.19, SD = 1.68, p = .009). The same was true for active choice (M = 7.91, SD = 1.27, p = .016) vs. opt-in. Participants in the opt-out and active choice conditions did not differ significantly from each other (p = .989).

Perceived threat to freedom of choice. There were no significant main effects of either choice format, F(2, 600) = 0.56, p = .573, ηp2 = 0.002, or the transparency manipulation, F(1, 600) = 0.428, p = .513, ηp2 = 0.001, nor a significant interaction, F(2, 600) = 0.08, p = .926, ηp2 < 0.001. Notably, none of the cell means reached above 2.5 on the 9-point scale, indicating low perceived threat to freedom of choice overall: opt-out (M = 2.36, SD = 1.77), opt-in (M = 2.35, SD = 1.75), active choice (M = 2.27, SD = 1.83). Participants receiving a disclosure and passing the disclosure comprehension check had perceptions similar to the larger sample, with no significant differences between choice formats, F(2, 191) = 0.04, p = .965, ηp2 < 0.001, and no group means above 2.5: opt-out (M = 2.35, SD = 1.86), opt-in (M = 2.44, SD = 1.70), active choice (M = 2.41, SD = 2.00).

Additional analyses. As in experiment 1, frequency distributions for the experience and perception variables showed little sign of participants in the opt-out condition reacting aversively (see Figure 2). For all conditions, autonomy, choice-satisfaction and perceived threat to freedom of choice were heavily skewed in the direction of participants not considering the choice environment invasive. The number of green appliances chosen correlated significantly with all three other dependent variables, for autonomy: r = .21, p < .001, choice-satisfaction: r = .22, p < .001, and perceived threat to freedom of choice: r = -.12, p = .003.
All correlations were in the same direction and of roughly equal strength between the three choice formats (see supplementary materials for details).
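A sketch of the 3 × 2 factorial analysis used throughout experiment 2 is given below, again assuming hypothetical column names (choice_format, disclosed, autonomy) and an export of the open data rather than the authors' own scripts.

```python
# Sketch: 3 (choice format) x 2 (transparency) ANOVA on experienced autonomy,
# fitted as an OLS model with categorical predictors (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment2.csv")  # assumed long-format export of the open data

model = smf.ols("autonomy ~ C(choice_format) * C(disclosed)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and interaction; with near-balanced
                               # cells, Type II and SPSS-style Type III agree closely
```

Swapping the outcome column reproduces the corresponding tests for choice-satisfaction and perceived threat to freedom of choice.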


Figure 2. Frequency distributions for all dependent variables in experiment 2, separated by condition.

Discussion

Our second experiment mirrors the findings from the first, in that the choice architecture intervention had a strong influence on choice (Cohen's d for opt-out default vs. each of the other conditions > 1) without resulting in negative effects on other important outcomes. This conclusion holds when increasing the transparency of the intervention, yielding highly similar results for participants receiving a disclosure and acknowledging the choice architecture intervention's presence and potential effect.

Curiously, results even indicate that participants receiving the opt-out default nudge experienced themselves as slightly better off with regard to both autonomy and choice-satisfaction, as compared to participants receiving the opt-in default. It should however be noted that both differences are small, with d's of .26 for autonomy and .30 for choice-satisfaction. To the extent that confidence can be placed in these differences, we speculate that facilitated preference-alignment for choices may be an explanation. Answers to an item on environmental concern show that 75% of the sample placed themselves above the scale midpoint. If accepting a rent increase for the sake of being environmentally friendly was in line with participants' desires, the opt-out default may have promoted feelings of autonomy and satisfaction by facilitating this green behavior occurring. Another possibility is that autonomy was heightened in the opt-out condition by this being the less common choice format, compared to the opt-in and active choice conditions, which may have made participants more aware of their opportunity to exercise choice. That we did not see any differences in perceived threat to freedom of choice may have been a consequence of people not engaging enough with the hypothetical choice to be provoked. We explore this possibility in the following study by introducing a monetary pay-off to the choice task.
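The effect sizes above can be recovered from the descriptives in Table 1. The following is a sketch of the standard pooled-SD formula for Cohen's d, plugged in for the autonomy comparison (opt-out, n = 179, vs. opt-in, n = 224), under the assumption that d was computed in this conventional way.

```latex
% Cohen's d with pooled standard deviation (experienced autonomy, experiment 2)
d = \frac{M_{\text{opt-out}} - M_{\text{opt-in}}}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
  = \sqrt{\frac{178 \cdot 1.30^2 + 223 \cdot 1.46^2}{401}} \approx 1.39,
\qquad d \approx \frac{7.84 - 7.49}{1.39} \approx 0.25.
```

The small discrepancy with the reported .26 presumably reflects rounding of the tabulated means.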

Study 3: Do choice experiences deteriorate when stakes are increased?

The results of experiment 2 suggested that a lack of intervention transparency was not the reason for participants' apparent acceptance of the opt-out nudge (compared to opt-in, they experienced themselves somewhat better off). Experiment 3 proceeds by substituting the hypothetical choice task for a consequential task with monetary stakes. Specifically, participants choose whether to donate a bonus payment to charity or take the money themselves. In prior studies on the acceptability of nudges, participants have rated opt-out default payments as among the most disapproved and freedom-threatening nudges (Hagman et al., 2015; Reisch & Sunstein, 2016), suggesting this to be a challenging test.

Method

Participants

We aimed to recruit 400 participants per choice format after exclusions (200 receiving a nudge disclosure, 200 not). This corresponds to 80% power to detect main effect differences of ηp2 = 0.008. We requested 1250 participants from MTurk (1258 completed responses received). Participants were paid 50¢, with a possible bonus of 20¢ depending on their decision in a donation choice (not known to participants when signing up). We excluded participants that failed to restate whether they had chosen to donate or not (19 participants failed). We also included a disclosure comprehension question (273 of 586 participants, 47%, failed). For disclosure comprehension, we

employ the same reporting strategy as in experiment 2, where we first report results for the full sample and then follow up with sub-group analyses for participants receiving the disclosure and passing the check. The final sample consisted of 1187 participants (age M = 37.9, SD = 12.5; 52.7% female). Group sizes for the choice format conditions were approximately equal, with 401 (opt-out), 392 (active choice), and 394 (opt-in).

Procedure and Materials

The ostensible purpose of the study was for participants to compare the similarity of pairs of geometrical shapes. Six pairs of shapes were rated before the onset of the actual experiment. After having completed this task, participants were given a bonus payment of 20¢ and presented with the opportunity to donate this money to a charitable cause (US hurricane relief, high on the agenda at the time of data collection, fall 2017). The donation choice was set up similarly to experiment 2, with a 3 (choice format: opt-out vs. opt-in vs. active choice) × 2 (transparency: disclosure present vs. absent) design. After choosing whether to keep or donate the bonus, participants answered the same measures for choice experiences and perceptions as in the previous two experiments, as well as an additional item on whether, if at all, they objected to the way in which the choice was presented to them. This item was included to complement the questions on perceived threat to freedom of choice, as whether one perceives that an attempt to influence choice is taking place is separate from whether one deems this objectionable. We did not include the individual-difference measures used in previous experiments.

Results

All main analyses were conducted as 3 (choice format: opt-out vs. opt-in vs. active choice) by 2 (transparency: disclosure present vs. absent) ANOVAs unless otherwise stated. We used Tukey HSD for all post hoc analyses. Summary descriptive results can be found in Table 1 and frequency distributions in Figure 3. Details for the additional analyses reported, as well as materials and raw data, can be found in the online supplementary material or at osf.io/69be8.

Donation choice7. A three-stage hierarchical logistic regression was conducted to assess the influence of choice format and transparency on donation choice. Coding choice format, we used the opt-in default as the reference group and created dummy variables for opt-out and active choice, respectively. Transparency was coded 1 when a disclosure was present and 0 when absent. The two choice format dummy variables were multiplied by the transparency dummy to create two interaction terms. In the first stage we entered the dummy variables for choice format. Both predictors were significant, showing that participants in the opt-out condition (44.4% donated; Wald (1) = 29.27, OR = 2.29, p < .001), and participants in the active choice condition (35.5% donated; Wald (1) = 8.41, OR = 1.57, p = .004) were each significantly more likely to donate their bonus to charity than participants in the opt-in condition (25.9% donated), Nagelkerke R2 = .034. In the second stage we added the dummy variable for transparency. This did not significantly improve the fit with the data, χ2(1) = 1.84, p = .175, Nagelkerke R2 = .037. In the third stage we added the two interaction terms. This did not significantly improve the fit with the data compared to the previous stage, χ2(2) = .46, p = .794, Nagelkerke R2 = .037.
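A minimal sketch of this staged logistic regression is shown below, assuming a binary donated column and the hypothetical column names used in the earlier sketches; likelihood-ratio tests stand in for the block-wise model comparisons (the Nagelkerke R2 values would need to be computed separately).

```python
# Sketch: hierarchical logistic regression for donation choice (experiment 3).
# Opt-in is the reference category; blocks are compared with likelihood-ratio tests.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("experiment3.csv")  # assumed export of the open data
df["opt_out"] = (df["choice_format"] == "opt-out").astype(int)
df["active"] = (df["choice_format"] == "active").astype(int)

m1 = smf.logit("donated ~ opt_out + active", data=df).fit(disp=0)
m2 = smf.logit("donated ~ opt_out + active + disclosed", data=df).fit(disp=0)
m3 = smf.logit("donated ~ opt_out + active + disclosed"
               " + opt_out:disclosed + active:disclosed", data=df).fit(disp=0)

def lr_test(restricted, full):
    """Likelihood-ratio test between nested logit models."""
    lr = 2 * (full.llf - restricted.llf)
    df_diff = full.df_model - restricted.df_model
    return lr, stats.chi2.sf(lr, df_diff)

print(m1.summary())      # stage 1: choice format dummies (odds ratios = exp(coef))
print(lr_test(m1, m2))   # stage 2: does adding transparency improve fit?
print(lr_test(m2, m3))   # stage 3: do the interaction terms improve fit?
```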

7 The analyses for donation choice can be found tabulated in more detail in the supplementary material, Tables S12 and S13.

For disclosed participants passing the disclosure comprehension check, the pattern of results was somewhat different compared to the full sample. Participants in the opt-out default condition donated their bonus to a similar extent (42.6%), but compared to the full sample, a higher percentage chose to donate in the active choice (41.9%) and opt-in conditions (35.2%). In a logistic regression, choice format differences were not significant between participants in the opt-out and opt-in conditions, Wald (1) = 1.25, OR = 1.36, p = .264, or between participants in the active choice and opt-in conditions, Wald (1) = .93, OR = 1.33, p = .334.

Experienced autonomy. There was no significant main effect of choice format on experienced autonomy, F(2, 1181) = 0.03, p = .973, ηp2 < 0.001; opt-out (M = 7.92, SD = 1.26), opt-in (M = 7.90, SD = 1.28), active choice (M = 7.92, SD = 1.23). The main effect of the transparency manipulation was significant, however, F(1, 1181) = 4.36, p = .037, ηp2 = 0.004. Participants who received a disclosure experienced themselves as less autonomous (M = 7.84, SD = 1.24) than participants who did not receive a disclosure (M = 7.99, SD = 1.27). As can be seen in Table A2 (Appendix A), this effect seems primarily driven by the active choice conditions. The interaction effect was not significant, F(2, 1181) = 1.12, p = .327, ηp2 = .002. Among participants who received the disclosure and passed the disclosure check, no difference between choice formats was found, F(2, 310) = 2.28, p = .104, ηp2 = 0.014. Experiences of autonomy were high regardless of choice format received: opt-out (M = 7.95, SD = 1.05), opt-in (M = 7.95, SD = 1.11), active choice (M = 7.64, SD = 1.38). The difference reported above between participants receiving and not receiving a disclosure was not significant when participants failing the disclosure check were excluded, t(912) = 1.50, p = .134, d = .11. The mean difference however remained almost identical: disclosed and passing the check (M = 7.86, SD = 1.18), undisclosed (M = 7.99, SD = 1.27).

Choice-satisfaction. There was no significant main effect of choice format on choice-satisfaction, F(2, 1181) = 0.77, p = .464, ηp2 = 0.001, no significant main effect of the transparency manipulation, F(1, 1184) = 0.64, p = .424, ηp2 = 0.001, nor a significant interaction between the two, F(2, 1181) = 1.37, p = .255, ηp2 = 0.002. Regardless of which choice format they were assigned to, participants reported high satisfaction with the choice made: opt-out (M = 8.02, SD = 1.50), opt-in (M = 7.89, SD = 1.68), active choice (M = 7.93, SD = 1.57). For disclosed participants passing the comprehension check, choice formats did not differ significantly either, F(2, 310) = 1.51, p = .222, ηp2 = 0.010; opt-out (M = 8.03, SD = 1.48), opt-in (M = 7.91, SD = 1.59), active choice (M = 7.63, SD = 1.97).

Perceived threat to freedom of choice. For perceptions of threat to freedom of choice, there was a significant main effect of both choice format, F(2, 1181) = 5.06, p = .006, ηp2 = .008, and the transparency manipulation, F(1, 1181) = 8.42, p = .004, ηp2 = 0.007. Post hoc testing showed that participants in the opt-out condition (M = 2.95, SD = 2.18) perceived the choice format to be more threatening to freedom of choice than participants both in the opt-in condition (M = 2.54, SD = 1.90, p = .013) and in the active choice condition (M = 2.55, SD = 2.01, p = .018).
Participants who received the disclosure reported higher freedom threat (M = 2.85, SD = 2.08) than participants who did not receive the disclosure (M = 2.51, SD = 1.99). The interaction was not significant, F(2, 1183) = 2.16, p = .134, ηp2 = 0.003. For disclosed participants passing the comprehension check, there was no significant difference between choice formats, F(2, 310) = 2.76, p = .065, ηp2 = 0.018. However, a glance at the means suggests a tendency among participants in the opt-out condition to perceive a higher threat to freedom of choice: opt-out (M = 3.18, SD = 2.22), opt-in (M = 2.52, SD = 1.83), active choice (M = 2.94, SD = 2.20).

The difference reported above between participants receiving and not receiving a disclosure persisted when participants failing the disclosure check were excluded, t(912) = -2.67, p = .008, d = .18. Participants disclosed and passing the check perceived a higher threat to freedom of choice (M = 2.89, SD = 2.10) than undisclosed participants (M = 2.51, SD = 1.99).

Objection to choice format. Complementing the perceptions of threat to freedom of choice, we further tested whether participants objected to the way in which the choice was presented. However, results showed no significant main effects of either choice format, F(2, 1181) = 0.53, p = .591, ηp2 = 0.001, or the transparency manipulation, F(1, 1181) = 2.70, p = .100, ηp2 = 0.002, nor an interaction effect, F(2, 1181) = 1.27, p = .281, ηp2 = 0.002. Objection ratings were low for all conditions: opt-out (M = 2.64, SD = 2.26), opt-in (M = 2.59, SD = 2.18), active choice (M = 2.47, SD = 2.26). For disclosed participants passing the comprehension check, no differences between choice formats existed either, F(2, 310) = 0.18, p = .837, ηp2 = 0.001; opt-out (M = 2.51, SD = 2.19), opt-in (M = 2.35, SD = 1.82), active choice (M = 2.46, SD = 2.04). These results indicate that while differences existed in perceptions of threat to freedom of choice, participants subjected to the opt-out default did not oppose this influence more than other participants did. Further, the level of objection was consistently low, not exceeding 3 on the 9-point scale for any experimental condition.

Additional analyses. Frequency distributions for all experience and perception variables can be found in Figure 3. Even with a consequential choice, we see no real evidence of some subgroup in the opt-out default condition displaying aversion towards the nudge. Compared to the previous experiments, a slight increase in perceived threat to freedom of choice is visible. Even so, the vast majority of participants still rated this threat as low. Further, participants' donation choice was significantly associated with all experience and perception variables. Specifically, t-tests showed donating (vs. not donating) to be significantly associated with reports of higher autonomy (d = .32) and choice-satisfaction (d = .56), as well as with lower perceptions of threat to freedom of choice (d = .35) and lower objection (d = .39). This was the case, in the same direction and of roughly equal strength, for all choice formats. There were no significant two-way interactions between donation choice and choice format (all ps > .22).

Discussion

In experiment 3, we added monetary consequences to the choice task in order to provide a more severe test for experiential consequences of differing choice architectures. As in previous experiments, the choice architecture had a sizeable effect on participants' choices, with the opt-out default leading to not far from twice the number of donations compared to the opt-in default. Though the amount of money in the choice was small, it did equal 40% of the study's sign-up payment, and its symbolic value may still have been enough to trigger stronger reactions than the hypothetical choices of the previous experiments. Results for perceived threat to freedom of choice suggest the monetary amount was enough to trigger stronger reactions from participants.
While ratings remained on the low side of the scale, participants subjected to the opt-out default nudge perceived the intervention as trying to influence them to a higher degree than participants in the other two choice formats did. Participants to whom the presence of an intervention was disclosed likewise provided higher ratings than undisclosed participants; however, this is not surprising, as this is what the disclosure sought to convey. Further, these results should be interpreted against the background of participants' responses to the more value-laden question of whether they objected to the choice format. For this

measure, participants subjected to the opt-out default objected a mere .05 scale-point more than opt-in participants, and all means were in the lowest third of the scale. Taken together, this suggests participants were able to detect the attempt at influence, but, on average, did not mind this happening. However, in turn, when interpreting the objection results, one should consider that these may have been influenced by a likely high sympathy for the hurricane relief cause, which may have suppressed negative effects from the choice formatting. Future research may explore this by varying the beneficiaries of the nudge.
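The donor vs. non-donor comparisons in the additional analyses above can be reproduced along the lines sketched below, again under our assumption of hypothetical column names. Welch's t-test is used here as a robust default, and Cohen's d is computed from the pooled standard deviation; the authors' exact specification may differ.

```python
# Sketch: comparing donors and non-donors on an experience measure (experiment 3),
# with Welch's t-test and a pooled-SD Cohen's d (hypothetical column names).
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("experiment3.csv")  # assumed export of the open data
donors = df.loc[df["donated"] == 1, "autonomy"]
non_donors = df.loc[df["donated"] == 0, "autonomy"]

t_stat, p_value = stats.ttest_ind(donors, non_donors, equal_var=False)

n1, n2 = len(donors), len(non_donors)
pooled_sd = np.sqrt(((n1 - 1) * donors.var(ddof=1) + (n2 - 1) * non_donors.var(ddof=1))
                    / (n1 + n2 - 2))
cohens_d = (donors.mean() - non_donors.mean()) / pooled_sd
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```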

Table 1. Means and standard deviations for dependent variables across experiments.

Experiment (Scale)                        Opt-out default   Opt-in default   Active choice
                                          M (SD)            M (SD)           M (SD)
Choice
  Experiment 1 (0-10)                     6.75a (2.96)      3.38c (2.28)     4.23b (2.82)
  Experiment 2 (0-10)                     7.46a (2.57)      4.14b (2.65)     4.72b (2.83)
  Experiment 3 (% donated)                44.4%a (0.50)     25.9%c (0.44)    35.5%b (0.48)
Experienced autonomy
  Experiment 1 (1-9)                      7.88a (1.42)      7.55a (1.40)     7.78a (1.17)
  Experiment 2 (1-9)                      7.84a (1.30)      7.49b (1.46)     7.72ab (1.35)
  Experiment 3 (1-9)                      7.92a (1.26)      7.90a (1.28)     7.92a (1.23)
Choice satisfaction
  Experiment 1 (1-9)                      8.13a (1.13)      7.70a (1.50)     7.78a (1.36)
  Experiment 2 (1-9)                      7.97a (1.27)      7.57b (1.39)     7.70b (1.50)
  Experiment 3 (1-9)                      8.02a (1.50)      7.89a (1.68)     7.93a (1.57)
Perceived threat to freedom of choice
  Experiment 1 (1-9)                      2.33a (2.03)      2.16a (1.82)     2.50a (2.00)
  Experiment 2 (1-9)                      2.44a (1.77)      2.31a (1.88)     2.24a (1.82)
  Experiment 3 (1-9)                      2.95a (2.18)      2.55b (1.90)     2.55b (2.01)
Objection to choice format
  Experiment 3 (1-9)                      2.64a (2.26)      2.59a (2.18)     2.47a (2.26)

Note. Values in the same row not sharing a subscript are significantly different at p < .05.


Figure 3. Frequency distributions for experience and perception measures in experiment 3, separated by condition.

General Discussion

Motivated by the concern that nudging constitutes a threat to autonomy, we explored experiences and perceptions of people subjected to nudges. The results of experiment 1 showed that participants who received an opt-out default nudge reported high levels of experienced autonomy and choice-satisfaction, and low levels of perceived threat to freedom of choice. This pattern persisted in experiments 2 and 3 when the transparency of the nudge was increased, and when the stakes of the choice task were raised. The results were substantiated by comparisons of the opt-out format to opt-in and active choice formats. While the opt-out default had a consistently large influence on participants' choices, differences in experiences and perceptions between choice formats were few and minor. A descriptive overview across experiments and dependent variables is presented in Table 1.

These findings paint a somewhat different picture of how people perceive nudges than the hitherto dominant policy acceptance literature. Specifically, participants seem more embracing of the nudge intervention when experiencing the nudge first-hand than when rating a description of the nudge. It is noteworthy that this was found even though the nudges used in our experiments, default nudges, have been among the nudges rated most intrusive to freedom of choice in the acceptance literature (Hagman et al., 2015). Perhaps when actually confronted with the choice, people experience their ability to choose as intact, and therefore do not feel as intruded upon as they would have expected.

What implications do empirical findings like these have for people interested in how well nudging respects, or disrespects, autonomy? For someone holding autonomy to be a purely objective feature of certain circumstances, nothing will change: objective levels of autonomy have not been measured, and people's subjective experiences are irrelevant to the ethicality of nudge interventions. A more moderate objectivist might consider subjective experience of autonomy a useful indicator of whether autonomy is respected or not, just not the decisive one. As argued at the outset, however, if one takes a subjectivist position and maintains that a person should be allowed to be their own judge, empirical data on subjective experiences should be of central importance. For the present experiments, the data showed participants strongly skewed towards considering themselves in control (see Table 1, Figures 1-3). At least on the surface level then, to the extent that the present choice environments generalize, concerns of manipulation and autonomy seem alleviated.

However, a possible interpretation of the high autonomy ratings, and the lack of differences between choice formats, is that people display what may be labeled an autonomy bias. In the absence of strong evidence to the contrary, people tend to see themselves as originators of their actions (for similar conclusions, see e.g. Davison, 1983; Nolan, Schultz, Cialdini, Goldstein, & Griskevicius, 2008; Jaeger & Schultz, 2017). If this interpretation is sound, wouldn't this constitute a problem for the subjectivist position – perhaps people really were manipulated, but due to the limits of introspection ("autonomy bias"), they failed to realize that this was the case? We see two main ways of responding to this objection from a subjectivist position.
One response is to simply deny that something like an autonomy bias is a problem: matters should be considered as judged by people themselves, and if people judge themselves fine, then they are fine. Another response is to emphasize that our experimental participants were aware and understanding enough of the situation that their experiences should be considered valid regardless of a general tendency to overestimate one’s autonomy. Specifically, the third experiment showed that participants nudged in a choice concerning real money, while subjected to a disclosure of the nudge’s presence and possible effect, and passing a comprehension check for this information, still considered themselves to be highly autonomous. If these participants are deemed to be autonomous

only in an inauthentic way, it can be asked whether we are really using the same yardstick for assessing the autonomy of people subjected to nudges that we are using for people in other everyday situations.

Regardless of one's position on who is the better autonomy judge, however, one potential issue to note is the costliness of our consequential choice. Presumably people's judgments will be scaled according to the stakes at hand, and a lack of negative experiences and perceptions could be thought to stem from insufficient costliness. However, had the stakes been dramatically higher (e.g. being nudged to donate $20), people most likely would have simply opted out. In that case, manipulation in an objective sense would not have occurred, and presumably experienced autonomy would not be lowered, as participants explicitly exerted control when opting out. For researching experiences and perceptions of people subjected to nudges, we suggest that a costliness level signalling value to the decision maker, while still compatible with the nudge exerting influence, is an appropriate level.

Further, there is the question of generalizability. The ways in which a nudge can be designed, and the contexts in which it can be applied, are only limited by the inventiveness of the choice architect. We acknowledge that the present findings certainly cannot account fully for this heterogeneity. Rather, we believe a piecemeal approach is what this research field is confined to (as suggested by Wilkinson, 2013). Further studies should explore whether being subjected to other types of nudges, such as norm interventions, produces similar results. Other topics for future research are, for example, how perceptions and experiences of being nudged may be moderated by ease of opting out (resistibility), by people's preferences for the behavior they are being nudged towards, and by their attitudes towards the choice architect.

In conclusion, we have argued that the study of people's choice experiences, especially autonomy, is important for assessing the ethicality of nudge interventions. We believe the present results to be the most robust empirical evidence to date of relevance to the nudging and autonomy debate. Our findings suggest that people subjected to an opt-out default nudge can be content in an absolute sense, and also not less content than people subjected to an opt-in default or active choice format. Thus, default nudges may be less manipulative than what is sometimes feared and suggested in other research. Policy makers employing defaults to nudge people should therefore feel more confident that their interventions are acceptable to nudgees. Additionally, answering calls for increased intervention transparency need not diminish the nudge's effectiveness nor impair how it is experienced. Nonetheless, nudges make up a highly heterogeneous class of interventions, and more research is needed before general conclusions can be attempted. In the meantime, we recommend that choice architects routinely use measures of choice experiences as a guide when testing out new interventions.

Acknowledgements. We thank Lina Nyström and Juliane Bücker for excellent research assistance. We thank Timothy J. Luke for comments on a previous draft, as well as suggestions for data analysis and help with data visualizations. For helpful comments we also thank Katarina Nordblom, Niklas Harring and other participants at a 2018 workshop in Gothenburg, organized by the Centre for Collective Action Research, University of Gothenburg. Parts of this work were presented at the 38th annual conference of the Society for Judgment and Decision Making in Vancouver, 10-13 November, 2017. The research was funded by grant MAW2015.0106 from the Marcus and Amalia Wallenberg Foundation, awarded to the third author.

References

Abhyankar, P., Summers, B. A., Velikova, G., & Bekker, H. L. (2014). Framing options as choice or opportunity: Does the frame influence decisions? Medical Decision Making, 34(5), 567-582.
Arad, A., & Rubinstein, A. (2018). The people's perspective on libertarian-paternalistic policies. The Journal of Law and Economics, 61(2), 311-333.
Bovens, L. (2009). The ethics of nudge. In T. Grüne-Yanoff & S. O. Hansson (Eds.), Preference change (pp. 207-220). Dordrecht: Springer.
Bruns, H., Kantorowicz-Reznichenko, E., Klement, K., Luistro Jonsson, M., & Rahali, B. (2018). Can nudges be transparent and yet effective? Journal of Economic Psychology, 65, 41-59.
Bruns, H., & Perino, G. (2019). The role of autonomy and reactance for nudging: Experimentally comparing defaults to recommendations and mandates. Pre-print available at: https://ssrn.com/abstract=3442465
Burger, J. M., & Cooper, H. M. (1979). The desirability of control. Motivation and Emotion, 3(4), 381-393.
Conly, S. (2014). Against autonomy: Justifying coercive paternalism. Cambridge University Press.
Cornwell, J. F. M., & Krantz, D. H. (2014). Public policy for thee, but not for me: Varying the grammatical person of public policy justifications influences their support. Judgment and Decision Making, 9(5), 433-444.
Damgaard, M. T., & Gravert, C. (2018). The hidden costs of nudging: Experimental evidence from reminders in fundraising. Journal of Public Economics, 157, 15-26.
Davison, W. P. (1983). The third-person effect in communication. The Public Opinion Quarterly, 47(1), 1-15.
Dillard, J. P., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72(2), 144-168.
Felsen, G., Castelo, N., & Reiner, P. B. (2013). Decisional enhancement and autonomy: Public attitudes towards overt and covert nudges. Judgment and Decision Making, 8(3), 202-213.
Hagman, W., Andersson, D., Västfjäll, D., & Tinghög, G. (2015). Public views on policies involving nudges. Review of Philosophy and Psychology, 6(3), 439-453.
Hansen, P. G., & Jespersen, A. M. (2013). Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. European Journal of Risk Regulation, 4(1), 3-28.
Hausman, D. M., & Welch, B. (2010). Debate: To nudge or not to nudge. Journal of Political Philosophy, 18(1), 123-136.

Hedlin, S., & Sunstein, C. R. (2016). Does active choosing promote green energy use: Experimental evidence. Ecology Law Quarterly, 43, 107-141.
Hong, S. M., & Page, S. (1989). A psychological reactance scale: Development, factor structure and reliability. Psychological Reports, 64(3), 1323-1326.
Jachimowicz, J. M., Duncan, S., Weber, E. U., & Johnson, E. J. (2019). When and why defaults influence decisions: A meta-analysis of default effects. Behavioural Public Policy, 3(2), 159-186.
Jaeger, C. M., & Schultz, P. W. (2017). Coupling social norms and commitments: Testing the underdetected nature of social influence. Journal of Environmental Psychology, 51, 199-208.
Jung, J. Y., & Mellers, B. A. (2016). American attitudes toward nudges. Judgment and Decision Making, 11(1), 62-74.
Michaelsen, P., Nyström, L., Luke, T. J., Johansson, L.-O., & Hedesström, M. (2020). Fairness in default nudges: Is increasing transparency enough? Manuscript in preparation.
Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008). Normative social influence is underdetected. Personality and Social Psychology Bulletin, 34(7), 913-923.
Nys, T. R., & Engelen, B. (2017). Judging nudging: Answering the manipulation objection. Political Studies, 65(1), 199-214.
OECD (2019). Delivering better policies through behavioural insights: New approaches. OECD Publishing.
Paunov, Y., Wänke, M., & Vogel, T. (2018). Transparency effects on policy compliance: Disclosing how defaults work can enhance their effectiveness. Behavioural Public Policy, 3(2), 187-208.
Pe'er, E., Feldman, Y., Gamliel, E., Sahar, L., Tikotsky, A., Hod, N., & Schupak, H. (2019). Do minorities like nudges? The role of group norms in attitudes towards behavioral policy. Judgment and Decision Making, 14(1), 40-50.
Rebonato, R. (2012). Taking liberties: A critical assessment of libertarian paternalism. Palgrave Macmillan.
Reisch, L., & Sunstein, C. (2016). Do Europeans like nudges? Judgment and Decision Making, 11(4), 310-325.
Sapp, S. G., & Harrod, W. J. (1993). Reliability and validity of a brief version of Levenson's locus of control scale. Psychological Reports, 72(2), 539-550.
Steffel, M., Williams, E., & Pogacar, R. (2016). Ethically deployed defaults: Transparency and consumer protection through disclosure and preference articulation. Journal of Marketing Research, 53(5), 865-880.
Sunstein, C. R. (2014). Why nudge? The politics of libertarian paternalism. Yale University Press.
Sunstein, C. R. (2015). Choosing not to choose: Understanding the value of choice. Oxford University Press.
Sunstein, C. R. (2016). The ethics of influence: Government in the age of behavioral science. Cambridge University Press.
Sunstein, C. R. (2017). Human agency and behavioral economics: Nudging fast and slow. Springer International Publishing.
Sunstein, C. R., Reisch, L., & Rauber, J. (2017). A worldwide consensus on nudging? Not quite, but almost. Regulation & Governance, 12(1), 3-22.

Tannenbaum, D., Fox, C. R., & Rogers, T. (2017). On the misplaced politics of behavioural policy interventions. Nature Human Behaviour, 1(7), 1-7.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New York, NY: Penguin Books.
White, M. (2013). The manipulation of choice: Ethics and libertarian paternalism. Springer.
Wilkinson, T. (2013). Nudging and manipulation. Political Studies, 61(2), 341-355.
Yan, H., & Yates, J. F. (2019). Improving acceptability of nudges: Learning from attitudes towards opt-in and opt-out policies. Judgment and Decision Making, 14(1), 26-39.
