BEHAVIORAL AND BRAIN SCIENCES (2001) 24, 383–451
Printed in the United States of America

Experimental practices in economics: A methodological challenge for psychologists?

Ralph Hertwig
Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, 14195 Berlin, Germany. [email protected]

Andreas Ortmann
Center for Economic Research and Graduate Education, Charles University, and Economics Institute, Academy of Sciences of the Czech Republic, 111 21 Prague 1, Czech Republic. [email protected]

Abstract: This target article is concerned with the implications of the surprisingly different experimental practices in economics and in areas of psychology relevant to both economists and psychologists, such as behavioral decision making. We consider four features of experimentation in economics, namely, script enactment, repeated trials, performance-based monetary payments, and the proscription against deception, and compare them to experimental practices in psychology, primarily in the area of behavioral decision making. Whereas economists bring a precisely defined "script" to experiments for participants to enact, psychologists often do not provide such a script, leaving participants to infer what choices the situation affords. By often using repeated experimental trials, economists allow participants to learn about the task and the environment; psychologists typically do not. Economists generally pay participants on the basis of clearly defined performance criteria; psychologists usually pay a flat fee or grant a fixed amount of course credit. Economists virtually never deceive participants; psychologists, especially in some areas of inquiry, often do. We argue that experimental standards in economics are regulatory in that they allow for little variation between the experimental practices of individual researchers. The experimental standards in psychology, by contrast, are comparatively laissez-faire. We believe that the wider range of experimental practices in psychology reflects a lack of procedural regularity that may contribute to the variability of empirical findings in the research fields under consideration. We conclude with a call for more research on the consequences of methodological preferences, such as the use of monetary payments, and propose a "do-it-both-ways" rule regarding the enactment of scripts, repetition of trials, and performance-based monetary payments. We also argue, on pragmatic grounds, that the default practice should be not to deceive participants.

Keywords: behavioral decision making; cognitive illusions; deception; experimental design; experimental economics; experimental practices; financial incentives; learning; role playing

Ralph Hertwig is a research scientist at the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin. Currently he is a visiting scholar at Columbia University, New York. His research focuses on how people reason and make decisions when faced with uncertainty, the role of simple heuristics in human judgment and decision making, and how heuristics are adapted to the ecological structure of particular decision environments. In 1996, the German Psychological Association awarded him the Young Scientist Prize for his doctoral thesis.

Andreas Ortmann is an assistant professor at the Center for Economic Research and Graduate Education at Charles University and researcher at the Economics Institute of the Academy of Sciences of the Czech Republic, both in Prague, and also a visiting research scientist at the Max Planck Institute for Human Development in Berlin. An economist by training, his game-theoretic and experimental work addresses the origins and evolution of languages, moral sentiments, conventions, and organizations.

1. Introduction

Empirical tests of theories depend crucially on the methodological decisions researchers make in designing and implementing the test (Duhem 1953; Quine 1953). Analyzing and changing specific methodological practices, however, can be a challenge. In psychology, for instance, "it is remarkable that despite two decades of counterrevolutionary attacks, the mystifying doctrine of null hypothesis testing is still today the Bible from which our future research generation is taught" (Gigerenzer & Murray 1987, p. 27). Why is it so difficult to change scientists' practices? One answer is that our methodological habits, rituals, and perhaps even quasi-religious attitudes about good experimentation are deeply entrenched in our daily routines as scientists, and hence often not reflected upon.

To put our practices into perspective and reflect on the costs and benefits associated with them, it is useful to look at methodological practices across time or across disciplines. Adopting mostly the latter perspective, in this article we point out that two related disciplines, experimental economics and corresponding areas in psychology (in particular, behavioral decision making), have very different conceptions of good experimentation.

We discuss the different conceptions of good experimentation in terms of four key variables of experimental design and show how these variables tend to be realized differently in the two disciplines. In addition, we show that experimental standards in economics, such as performance-based monetary payments (henceforth, financial incentives) and the proscription against deception, are rigorously enforced through conventions or third parties. As a result, these standards allow for little variation in the experimental practices of individual researchers. The experimental standards in psychology, by contrast, are comparatively laissez-faire, allowing for a wider range of practices. The lack of procedural regularity and the imprecisely specified social situation "experiment" that results may help to explain why in the "muddy vineyards" (Rosenthal 1990, p. 775) of soft psychology, empirical results "seem ephemeral and unreplicable" (p. 775).

1.1. The uncertain meaning of the social situation "experiment"

In his book on the historical origins of psychological experimentation, Danziger (1990) concluded that "until relatively recently the total blindness of psychological investigators to the social features of their investigative situations constituted one of the most characteristic features of their research practice" (p. 8). This is deplorable because the experimenter and the human data source are necessarily engaged in a social relationship; therefore, experimental results in psychology will always be codetermined by the social relationship between experimenter and participant. Schultz (1969) observed that this relationship "has some of the characteristics of a superior-subordinate one.... Perhaps the only other such one-sided relationships are those of parent and child, physician and patient, or drill sergeant and trainee" (p. 221). The

ticipant in a number of ways. In what follows, we consider four key features of experimental practices in economics, namely, script enactment, repeated trials, financial incentives, and the proscription against deception. The differences between psychology and economics on these four features can be summed up – albeit in a simplified way – as follows. Whereas economists bring a precisely defined "script" to experiments and have participants enact it, psychologists often do not provide such a script. Economists often repeat experimental trials; psychologists typically do not. Economists almost always pay participants according to clearly defined performance criteria; psychologists usually pay a flat fee or grant a fixed amount of course credit. Economists do not deceive participants; psychologists, particularly in social psychology, often do.

We argue that economists' realizations of these variables of experimental design reduce participants' uncertainty by explicitly stating action choices (script), allowing participants to gain experience with the situation (repeated trials), making clear that the goal is to perform as well as they can (financial incentives), and limiting second-guessing about the purpose of the experiment (no deception). In contrast, psychologists' realizations of these variables tend to allow more room for uncertainty by leaving it unclear what the action choices are (no script), affording little opportunity for learning (no repeated trials), leaving it unclear what the experimenters want (no financial incentives), and prompting participants to second-guess (deception).

Before we explore these differences in detail, four caveats are in order. First, the four variables of experimental design we discuss are, in our view, particularly important design variables. This does not mean that we consider others to be irrelevant. For example, we question economists' usual assumption that the abstract laboratory environment in their experiments is neutral and, drawing on results from cognitive psychology, have argued this point elsewhere (Ortmann & Gigerenzer 1997). Second, we stress that whenever we speak of standard experimental practices in "psychology," we mean those used in research on behavioral