Sixth Interim Report

Total Pages: 16

File Type: PDF, Size: 1020 KB

WESTERN REGIONAL RESEARCH PUBLICATION W-133
Benefits & Costs Transfer in Natural Resource Planning
Sixth Interim Report, September 1993

Compiled by John C. Bergstrom
Department of Agricultural and Applied Economics
University of Georgia
208 Conner Hall
Athens, GA 30602-7509

Benefits and Costs Transfer in Natural Resource Planning

CONTENTS

Natural Resource Valuation Case Studies

John R. Boyce and Daniel W. McCollum, "A Market Test of the Contingent Valuation Method: The Case of Bison Hunting Permits in Alaska"
Gregory L. Poe and Richard C. Bishop, "Information, Risk Perceptions, and Contingent Values for Groundwater Protection"
Edgar L. Michalson, "An Estimate of the Economic Value of Selected Columbia and Snake River Anadromous Fisheries, 1938-1990"
Priya Shyamsundar and Randall A. Kramer, "Does Contingent Valuation Work in Non-Market Economies"
Donald J. Epp and Sharon I. Gripp, "Test-Retest Reliability of Contingent Valuation Estimates for an Unfamiliar Policy Choice: Valuation of Tropical Rain Forest Preservation"
Se-Hyun Choi, Dean F. Schreiner, David M. Leslie, Jr., and Jack Harper, "Benefit-Cost Analysis of a Trout Fishery in Southeastern Oklahoma"
Douglas M. Larson and John B. Loomis, "Toward Measuring Use and Nonuse Values from Related Market Behavior: Some Preliminary Results"
Thomas C. Brown, "Measuring Nonuse Value: A Comparison of Recent Contingent Valuation Studies"
Charles E. Meier and Alan Randall, "The Hedonic Demand Model: Comparative Statics and Prospects for Estimation"
Catherine L. Kling, "An Assessment of the Empirical Magnitude of Option Values for Environmental Goods"
LeRoy Hansen and Dannette Woo, "Harry Goes to be with Tom and Dick: A Source of Bias in Estimates of Resource Demand"
Jeffrey Englin and Trudy Ann Cameron, "Comparing Observed and Multiple-Scenario Contingent Behavior: A Panel Analysis Utilizing Poisson Regression Techniques"

The Contingent Valuation Method and Discrete Choice Questions

Kevin J. Boyle, F. Reed Johnson, Daniel W. McCollum, William H. Desvousges, Richard W. Dunford, and Sara P. Hudson, "Valuing Public Goods: Discrete Versus Continuous Contingent-Valuation Responses"
Thomas P. Holmes and Randall A. Kramer, "A Contingent Valuation Test of the Prominence Hypothesis"
Barbara J. Kanninen, "Bias in Discrete Response Contingent Valuation"
Michael P. Welsh and Richard C. Bishop, "Multiple Bounded Discrete Choice Models"
Gregory L. Poe, Eric K. Lossin, and Michael P. Welsh, "A Convolutions Approach to Measuring the Differences in Simulated Distributions: Application to Dichotomous Choice Contingent Valuation"
Richard G. Walsh and John R. McKean, "Intellectual Capital and the Transfer Process"
Robert P. Berrens, "Some Problems with Deriving Demand Curves from Measure of Use Variables in Referendum Contingent Valuation Models"
John Loomis, Brian Roach, Frank Ward, and Richard Ready, "Reservoir Recreation Demand and Benefits Transfers: Preliminary Results"
George R. Parsons and Mary Jo Kealy, "Benefits Transfer in a Random Utility Model of Recreation"
John C. Bergstrom, "RPA Assessment of Outdoor Recreation: Current Applications and Future Directions"

FOREWORD

The W-133 Regional Research Project entitled "Benefits and Costs Transfer in Natural Resource Planning" was recently rechartered by the U.S. Department of Agriculture, Cooperative State Research Service. The focus of the W-133 Project is the economic valuation of environmental resources and the application of economic values to facilitate environmental and natural resource decision-making.
Researchers in over 25 states across the United States are involved in the W-133 Project through Land-Grant University Experiment Station appointments. One of the strengths of the W-133 Project is that it has also attracted much interest and participation from researchers at a variety of public and private colleges and universities, federal resource management agencies, and state resource management agencies. The active and widespread participation in the W-133 Project is reflected in the topics and authorship of the research papers presented in this report. The specific objectives of the W-133 Project are to: 1) provide site-specific use and nonuse values of natural resources for public policy analysis, and 2) develop protocols for transferring value estimates to unstudied sites. Research under these two objectives is targeted by W-133 participants at four resource areas: water-based recreation, groundwater quality, wetlands, and recreational fisheries. The papers presented in this report represent major progress towards addressing these four problem areas and meeting the specific W-133 regional research objectives.

The first set of papers presents environmental resource valuation case studies. These case studies demonstrate and assess techniques for valuing non-marketed resources in different policy and management situations. The second set of papers addresses several general topics related to resource valuation theory and modeling. These topics include use values, nonuse values, and demand model specification and estimation. The third set of papers focuses specifically on the contingent valuation method; the primary concern discussed in these papers is the proper application of discrete choice contingent valuation questions. The fourth set of papers presents and discusses issues related to the transfer of benefit estimates in the policy and management arena. The papers in this section provide insight into conceptual and empirical problems which must be faced when attempting to use benefits transfer to assess environmental and natural resource policy and management.

John C. Bergstrom
The University of Georgia
September, 1993

A Market Test of the Contingent Valuation Method: The Case of Bison Hunting Permits in Alaska

John R. Boyce
Department of Economics
University of Alaska Fairbanks
Fairbanks, Alaska

Daniel W. McCollum
U.S. Forest Service
Rocky Mountain Forest and Range Experiment Station
Fort Collins, Colorado

Introduction

The most important advances in economics have resulted from testable hypotheses regarding how people behave. These hypotheses have been meticulously tested against observations of actual behavior. In contrast, non-market valuation techniques such as the contingent valuation method (CVM) are based on survey responses describing how people say they would behave if a hypothetical market existed. These techniques rely on stated intentions of behavior, not on actual observations of behavior. Furthermore, the information is typically gathered from individuals who have incentives to affect the results. Thus, in spite of two decades of research in this area, many economists remain highly skeptical of non-market valuation techniques for estimating economic value.

The two main issues that arise with the use of CVM are its validity and its reliability. Validity is broadly defined as "the degree to which it measures the theoretical construct under investigation" (Mitchell & Carson 1989, p. 190).
Reliability is concerned with how accurate the measure is. Our primary concern in this paper is with validity, in particular with what Mitchell and Carson term "criterion validity." Essentially, the test is whether CVM estimates of value are comparable to estimates obtained in situations where responses are based on actual behavior. Mitchell and Carson refer to such tests as "hypothetical-simulated markets."

Several such tests have appeared in the literature. Bohm's (1972) study of the value of closed-circuit television programming was the first such test; Bohm concluded that hypothetical questions produced "irresponsible" responses. Bishop and Heberlein (1979) tested the validity of CVM surveys by sending negotiable checks to hunters who had drawn a goose permit in the Horicon zone of central Wisconsin. The willingness to accept (WTA) estimates obtained from this simulated market were compared with CVM responses to both WTA and willingness to pay (WTP) surveys. They concluded that "contingent values could easily be in error by 50% or more" (Bishop, Heberlein, and Kealy, 1983, p. 620). Bishop and Heberlein followed this with two more exhaustive studies, in 1983 and 1984, involving special early-season deer hunting permits in the Sandhill Wildlife Demonstration Area (Bishop, Heberlein, Welsh, and Baumgartner 1984; Bishop and Heberlein 1985, 1986; Bishop, Heberlein, McCollum, and Welsh 1988). These studies used a simulated market for both WTA and WTP. In addition, they examined the effect of various market forms, including an incentive-compatible fifth-price auction (to sell four permits) in the 1983 study. They concluded that "contingent valuation is accurate enough, at least for items like hunting permits, to be useful for policy analysis" (Bishop, Heberlein, McCollum, and Welsh 1988, p. 8-3). Dickie, Fisher, and Gerking (1987) conducted an experiment in which CVM was compared with actual behavior in valuing strawberries sold door to door. Several studies have also extended such tests to settings where existence value is being measured. Seip and Strand (1991) compared CVM results with actual expenditure data on people's contributions to environmental groups, and Duffield and Patterson (1991) examined contribution rates for in-stream improvements in Montana; both found that non-zero bids were much lower for actual transactions than for CVM. There have also been several laboratory experiments, such as the 'SOA' experiment
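The fifth-price auction mentioned in connection with the 1983 Sandhill study is a uniform-price, sealed-bid mechanism: the four highest bidders receive permits and each pays the fifth-highest bid, so no winner's own bid determines the price he or she pays. A minimal sketch of that mechanism follows; the function, bidder names, and bid amounts are illustrative assumptions, not data or code from any of the studies cited.

```python
# Illustrative sketch (not from the report): a uniform "fifth-price"
# sealed-bid auction of the kind used to sell four hunting permits.
# Bidder names and bid amounts are hypothetical.

def fifth_price_auction(bids, permits=4):
    """Award `permits` permits to the highest bidders; every winner pays
    the (permits + 1)-th highest bid, i.e. the fifth price when four
    permits are sold. Because no winner's own bid sets the price paid,
    truthful bidding is a dominant strategy for a bidder who wants a
    single permit, which is what makes the format incentive compatible."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)
    winners = [name for name, _ in ranked[:permits]]
    price = ranked[permits][1]  # the fifth-highest bid sets the uniform price
    return winners, price

if __name__ == "__main__":
    hypothetical_bids = [("A", 310), ("B", 275), ("C", 240),
                         ("D", 205), ("E", 180), ("F", 150)]
    winners, price = fifth_price_auction(hypothetical_bids)
    print(f"Permits awarded to {winners}; each winner pays ${price}")
    # -> Permits awarded to ['A', 'B', 'C', 'D']; each winner pays $180
```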