Heterogeneous Treatment Effects in Impact Evaluation†



American Economic Review: Papers & Proceedings 2015, 105(5): 467–470
http://dx.doi.org/10.1257/aer.p20151015

By Eva Vivalt*

* New York University, 14A Washington Mews 303, New York, NY 10003 (e-mail: [email protected]). I thank Edward Miguel, Hunt Allcott, David Card, Ernesto Dal Bó, Bill Easterly, Vinci Chow, Willa Friedman, Xing Huang, Steven Pennings, Edson Severnini, and seminar participants at the University of California, Berkeley, Columbia University, New York University, and the World Bank for helpful comments.

The past few years have seen an exponential growth in impact evaluations. These evaluations are supposed to be useful to policymakers, development practitioners, and researchers designing future studies. However, it is not yet clear to what extent we can extrapolate from past impact evaluation results or under which conditions (Deaton 2010; Pritchett and Sandefur 2013). Further, it has been shown that even a similar program, in a similar environment, can yield different results (Duflo et al. 2012; Bold et al. 2013). The different findings that have been obtained in such similar conditions point to substantial context-dependence of impact evaluation results. It is critical to understand this context-dependence in order to know what we can learn from any impact evaluation.

While the most important reason to examine generalizability is to aid interpretation and improve predictions, it could also help direct research attention to where it is most needed. If we could identify which results would be more apt to generalize, we could reallocate effort to those topics which are less well understood.

Though impact evaluations are still rapidly increasing both in number and in the amount of resources devoted to them, a few thousand are already complete. We are thus at the point where we can begin to answer the question more generally.

I do this using a large, unique dataset of impact evaluation results. These data were gathered by AidGrade, a nonprofit research organization that I founded, which seeks to determine which programs work best where. To date, it has conducted 20 meta-analyses and systematic reviews of different development programs.[1]

[1] Throughout, I will refer to all 20 as meta-analyses, but some did not have enough comparable outcomes to be included in a meta-analysis and became systematic reviews.

Data gathered through meta-analyses are the ideal data with which to answer the question of how much we can extrapolate from past results, as what one would want is a large database of impact evaluation results. Since data on these 20 topics were collected in the same way, we can also look across different types of programs to see if there are any more general trends.

The rest of this paper proceeds as follows. First, I briefly discuss the framework of heterogeneous treatment effects. I then describe the data in more detail and provide some illustrative results. A fuller treatment of the topic is found in Vivalt (2015).

I. Heterogeneous Treatment Effects

I define generalizability as the ability to predict results outside of the sample.[2]

I model treatment effects as depending on the context of the intervention. Suppose we have an intervention and are investigating a particular outcome, Y. In the simplest model, context can be represented as a "contextual variable," C, which interacts with the treatment such that:

(1) Y_j = α + β T_j + δ C_j + γ T_j C_j + ε_j,

where j indexes individuals in the study and T indicates treatment status. A particular impact evaluation might estimate an equation without the interaction term:

(2) Y_j = α + β′ T_j + ε_j.
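The contrast between equations (1) and (2) can be checked with a short simulation. The sketch below is illustrative and not from the paper; all parameter values are made up. A study that fits the no-interaction specification (2) recovers a treatment effect of roughly β + γC̄, where C̄ is that site's average context, so studies run in different contexts report different effects even though the underlying model never changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "true" parameters for equation (1): Y = α + βT + δC + γTC + ε.
# These values are made up for the example.
alpha, beta, delta, gamma = 1.0, 2.0, 0.5, 3.0

def run_study(c_bar, n=100_000):
    """Simulate one impact evaluation in a context whose contextual variable
    averages c_bar, then estimate equation (2), which omits the T*C interaction."""
    T = rng.integers(0, 2, size=n)            # random assignment to treatment
    C = c_bar + rng.normal(0.0, 0.1, size=n)  # site-specific context
    eps = rng.normal(0.0, 1.0, size=n)
    Y = alpha + beta * T + delta * C + gamma * T * C + eps
    # OLS of Y on a constant and T; the slope on T is beta-prime.
    X = np.column_stack([np.ones(n), T])
    beta_prime = np.linalg.lstsq(X, Y, rcond=None)[0][1]
    return beta_prime

for c_bar in (0.0, 0.5, 1.0):
    print(f"mean C = {c_bar:.1f}: estimated beta' = {run_study(c_bar):.2f}, "
          f"beta + gamma * mean C = {beta + gamma * c_bar:.2f}")
```

Each simulated site reports a different "treatment effect" purely because the no-interaction regression folds γC̄ into β′, which is exactly the low-generalizability pattern described in the text.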
β′ would then capture β + γC̄, where C̄ is the mean of the contextual variable in the study sample. In this situation, when the contextual variable changes in a new context, the observed effect attributed to the treatment also changes. This appears as low generalizability.

The example described is the simplest case. One can imagine that the true state of the world has "interaction effects all the way down."

II. Data

This paper uses a database of impact evaluation results collected by AidGrade, a US nonprofit that I founded in 2012. AidGrade focuses on gathering the results of impact evaluations and analyzing the data, including through meta-analysis. Its data on impact evaluation results were collected in the course of its meta-analyses from 2012–2014.

AidGrade's meta-analyses follow the standard stages: (i) topic selection; (ii) a search for relevant papers; (iii) screening of papers; (iv) data extraction; and (v) data analysis. In addition, it pays attention to (vi) dissemination and (vii) updating of results (AidGrade 2013). These stages are described below, but the reader is referred to Vivalt (2015) for a more detailed summary and Higgins and Green (2008) for further information about meta-analyses and systematic reviews.

The interventions that were selected for meta-analysis were selected largely on the basis of there being a sufficient number of studies on that topic.

A comprehensive literature search was carried out using a mix of the search aggregators SciVerse, Google Scholar, and EBSCO/PubMed. The online databases of J-PAL, IPA, CEGA, and 3ie were also searched for completeness. Finally, the references of any existing systematic reviews or meta-analyses were collected.

Any impact evaluation which appeared to be on the intervention in question was included, barring those in developed countries. Any paper that tried to consider the counterfactual was considered an impact evaluation. Both published papers and working papers were included. Twenty topics were covered to date: conditional cash transfers; contract teachers; deworming; financial literacy training; HIV education; improved stoves; insecticide-treated bed nets; irrigation; micro health insurance; microfinance; micronutrient supplementation; mobile phone-based reminders; performance pay; rural electrification; safe water storage; scholarships; school meals; unconditional cash transfers; water treatment; and women's empowerment programs.

The subset of the data on which I am focusing for this paper is based on those papers that passed all screening stages in the meta-analyses and are publicly available online (AidGrade 2015). The search and screening criteria were very broad and, after passing the full-text screening, the vast majority of papers that were later excluded were excluded merely because they had no outcome variables in common. The small overlap of outcome variables is a surprising and notable feature of the data.

When considering the variation of effect sizes within a set of papers, the definition of the set is clearly critical. If the outcome variable is defined very narrowly (e.g., "height in centimeters"), it is clear what is being measured, and one potential source of dispersion in results is removed. On the other hand, if outcomes are defined too narrowly, there may be little overlap in outcomes between papers. Therefore, multiple coding rules were used, with this paper considering narrowly defined outcomes. The reader is referred to Vivalt (2015) for more details.

III. Results

I will restrict attention here to discussing how the treatment effects vary within intervention-outcome, using the data's original units (e.g., percentage points).

The first thing we might care about is: if we were considering the results of a particular impact evaluation, how likely is it that the point estimate for an outcome will fall within the confidence interval of another impact evaluation's estimate for the same intervention and outcome? What is the probability that the confidence intervals of the two studies will overlap? The mean will be contained in the confidence interval about 53 percent of the time; the studies' confidence intervals will overlap approximately 83 percent of the time.

Second, how far away are the results from one another? If we were to take the mean result within a particular intervention-outcome combination, what is the average difference between that and a given study's result? Putting the absolute differences in terms of percents, the average difference is 114 percent; the median across intervention-outcomes is 48 percent. Excluding a given study from the calculation of the mean result, to focus on prediction out of sample, the absolute differences increase to an average of 311 percent and a median of 52 percent.

It should be emphasized that these numbers include outcomes one may not wish to consider together, such as percentage points and rate ratios. Rate ratios, as the name suggests, comprise a ratio of two rates: the rate (e.g., of the incidence of a disease) in the treatment group in the numerator and the related rate in the control group in the denominator. A change of 0.1 in a ratio should be interpreted differently than a change of 0.1 in an outcome measured on another scale, such as in percentage points.

…more opportunities to explicitly integrate these analyses. Efforts to coordinate across different studies, asking the same questions or looking at some of the same outcome variables, would also help. Any subgroup analyses should be prespecified so as to avoid specification searching (Casey et al. 2012). The framing of heterogeneous treatment effects could also provide positive motivation for replication projects in different contexts: different findings would not necessarily negate the earlier ones but add another level of information.

Ultimately, knowing how much results extrapolate and when is critical if we are to know how to interpret an impact evaluation's results or apply…

† Go to http://dx.doi.org/10.1257/aer.p20151015 to visit the article page for additional materials and author disclosure statement.

[2] Technically, all that can be measured is "local generalizability"—the ability to predict results in a particular out-of-sample group. Vivalt (2015) expands on this.
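The comparisons reported in Section III can be computed mechanically once study-level estimates are in hand. The sketch below uses made-up numbers, not AidGrade data, to show the two calculations: how often one study's point estimate falls inside another study's 95 percent confidence interval, and the leave-one-out absolute percent difference from the mean of the other studies in the same intervention-outcome.

```python
import itertools
import numpy as np

# Toy estimates for one intervention-outcome: (point estimate, standard error).
# Illustrative values only -- not AidGrade data.
studies = [(0.10, 0.04), (0.25, 0.08), (0.05, 0.03), (0.40, 0.15)]

# (a) Share of ordered study pairs in which one study's point estimate lies
# within the other study's 95 percent confidence interval.
hits = [
    abs(est_a - est_b) <= 1.96 * se_b
    for (est_a, _), (est_b, se_b) in itertools.permutations(studies, 2)
]
print(f"point estimate in other study's CI: {np.mean(hits):.0%} of pairs")

# (b) Leave-one-out absolute percent difference: compare each estimate with
# the mean of the OTHER studies, mirroring the out-of-sample comparison.
ests = np.array([est for est, _ in studies])
loo_means = (ests.sum() - ests) / (len(ests) - 1)
pct_diff = np.abs(ests - loo_means) / np.abs(loo_means) * 100
print(f"mean abs. percent difference (leave-one-out): {pct_diff.mean():.0f}%")
```

With real data these calculations would be repeated within each intervention-outcome cell and then averaged, which is how summary figures like the 53 percent and 311 percent above are built up.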
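The caution about pooling scales can be made concrete. The helper below is an illustration, not from the paper: it converts the effect implied by a rate ratio into percentage points, showing that the same rate ratio corresponds to very different absolute effects depending on the control group's rate.

```python
# A rate ratio compares incidence in the treatment and control groups; the
# absolute change it implies depends on the control-group rate.
# Illustrative numbers, not from the paper.
def pct_point_change(rate_ratio, control_rate):
    """Percentage-point change implied by a rate ratio at a given control rate."""
    return (rate_ratio - 1.0) * control_rate * 100

# The same 0.1 drop in the rate ratio (1.0 -> 0.9) implies very different
# absolute effects depending on how common the outcome is:
for control_rate in (0.50, 0.05):
    print(f"control rate {control_rate:.0%}: "
          f"{pct_point_change(0.9, control_rate):+.1f} percentage points")
```

This is why the dispersion statistics above should not naively average changes measured on a ratio scale with changes measured in percentage points.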