Appendix 1: Computer Software

In this appendix we provide details on the computer software that is used in this book. Five computer programs are used:

1. SHAZAM - a general purpose econometrics package
2. LIMDEP - a general purpose econometrics package
3. DEAP - a data envelopment analysis (computer) program (Coelli, 1996b)
4. FRONTIER - a computer program for the estimation of stochastic frontier models (Coelli, 1996a)
5. TFPIP - a total factor productivity index (computer) program written by Tim Coelli.

The SHAZAM and LIMDEP computer programs are widely used econometrics software packages. They can be used to estimate a large number of econometric models. For further information on these computer programs, including information on how to purchase them, refer to the web sites: http://shazam.econ.ubc.ca/ and http://www.limdep.com/

The remaining three computer programs (listed above) were written by Tim Coelli, specifically for the measurement of efficiency and/or productivity. Information on these three computer programs can be obtained from the Centre for Efficiency and Productivity Analysis (CEPA) web site: http://www.uq.edu.au/economics/cepa where copies of these programs (including manuals) may be downloaded free of charge. We now discuss the use of these latter three computer programs.

DEAP Version 2.1: A Data Envelopment Analysis (Computer) Program

This computer program has been written to conduct data envelopment analyses (DEA). The program can consider a variety of models. The three principal options are:

1. Standard CRS and VRS DEA models that involve the calculation of technical and scale efficiencies (where applicable). These methods are outlined in Chapter 6.
2. The extension of the above models to account for cost and allocative efficiencies. These methods are outlined in Section 7.2.
3. The application of Malmquist DEA methods to panel data to calculate indices of total factor productivity (TFP) change; technological change; technical efficiency change; and scale efficiency change. These methods are discussed in Chapter 10.

All methods are available in either an input or an output orientation (with the exception of the cost efficiencies option). The output from the program includes, where applicable, technical, scale, allocative and cost efficiency estimates; slacks; peers; targets; and TFP and technological change indices.

The DEAP computer program is written in Fortran (Lahey F77LEM/32) for IBM-compatible PCs. It is a DOS program but can easily be run from WINDOWS using WINDOWS EXPLORER. The program uses a simple batch-file system in which the user creates a data file and a small file containing instructions. The user then starts the program by typing "DEAP" at the DOS prompt and is then prompted for the name of the instruction file. The program executes these instructions and produces an output file, which can be read using a text editor, such as NOTEPAD, or any program that can accept text files, such as WORD or EXCEL.

The execution of DEAP Version 2.1 on a PC generally involves five files:

1. The executable file, DEAP.EXE
2. The start-up file, DEAP.000
3. A data file (for example, called TEST-DTA.TXT)
4. An instruction file (for example, called TEST-INS.TXT)
5. An output file (for example, called TEST-OUT.TXT).

The program can also be run by double-clicking on the DEAP.EXE file in WINDOWS EXPLORER. The use of WINDOWS EXPLORER is discussed at the end of this appendix.
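For users who prefer to script the run described above rather than type at the DOS prompt, the following is a minimal sketch, not taken from the DEAP manual. It assumes that DEAP.EXE reads the instruction-file name from standard input when it issues its prompt, and it uses the example file names given above.

```python
# Minimal sketch of automating a DEAP run (assumption: the program reads the
# instruction-file name from standard input when it prompts for it).
import subprocess

subprocess.run(
    ["DEAP.EXE"],            # the executable file, run from its own directory
    input="TEST-INS.TXT\n",  # instruction-file name typed at the prompt
    text=True,
    check=True,
)

# After execution, the output file (e.g., TEST-OUT.TXT) can be opened in
# NOTEPAD, WORD or EXCEL, as described above.
```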
The executable file and the start-up file are supplied on the disk. The start-up file, DEAP.000, is a file that stores key parameter values that the user may or may not need to alter.[1] The data and instruction files must be created by the user prior to execution. The output file is created by DEAP during execution. Examples of data, instruction and output files are listed in Chapters 6 and 7.

Data file

The program requires that the data be listed in a text file[2] and expects the data to appear in a particular order. The data must be listed by observation (i.e., one row for each firm). There must be a column for each output and each input, with all outputs listed first and then all inputs listed (from left to right across the file). For example, for 40 observations on two outputs and two inputs there would be four columns of data (each of length 40) listed in the order: y1, y2, x1, x2.

The cost efficiencies option requires that price information be supplied for the inputs. These price columns must be listed to the right of the input data columns and appear in the same order. That is, for three outputs and two inputs, the order for the columns must be: y1, y2, y3, x1, x2, w1, w2, where w1 and w2 are input prices corresponding to the input quantities x1 and x2.

The Malmquist option is used with panel data. For example, for 30 firms observed in each of 4 years, all data for year 1 must be listed first, followed by the year 2 data listed underneath in the same order (of firms), and so on. Note that the panel must be "balanced", i.e., all firms must be observed in all time periods. (A short scripted sketch of this layout is given at the end of the DEAP discussion below.)

A data file can be produced using any number of computer packages. For example:

• using a text editor (such as NOTEPAD),
• using a word processor (such as WORD) and saving the file in text format,
• using a spreadsheet (such as EXCEL) and printing to a file, or
• using a statistics package (such as SHAZAM or LIMDEP) and writing data to a file.

Note that the data file should only contain numbers separated by spaces or tabs. It should not contain any column headings.

[1] At present this file only contains two parameters. One is the value of a variable (EPS) used to test inequalities with zero and the other is a flag that can be used to suppress the printing of the firm-by-firm reports in the output file. This text file may be edited if the user wishes to alter these values.
[2] All data, instruction and output files are (ASCII) text files.

Instruction file

The instruction file is a text file that is usually constructed using a text editor or a word processor. The easiest way to create a new instruction file is to edit one of the example instruction files that are supplied with the program and then save the edited file under a different file name. The best way to describe the structure of the instruction file is via examples. Refer to the examples in Chapters 6 and 7.

Output file

As noted earlier, the output file is a text file that is produced by DEAP when an instruction file is executed. The output file can be read using a text editor, such as NOTEPAD, or using a word processor, such as WORD. The output may also be imported into a spreadsheet program, such as EXCEL, to allow further manipulation into tables and graphs for subsequent inclusion in report documents.
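To make the data layout described above concrete, the following is a minimal sketch (not taken from the DEAP manual) that writes a data file in the required format using Python and numpy. The file name is the example name used above and the numerical values are made-up illustrative figures.

```python
# Minimal sketch: write a DEAP data file with one row per firm, all output
# columns first and then all input columns, numbers separated by spaces and
# no column headings. Values are illustrative only.
import numpy as np

# Two outputs (y1, y2) and two inputs (x1, x2) for three firms.
y = np.array([[12.0, 4.0],
              [ 9.5, 6.1],
              [14.2, 3.3]])
x = np.array([[ 5.0, 8.0],
              [ 4.2, 7.5],
              [ 6.1, 9.0]])

data = np.hstack([y, x])                        # column order: y1, y2, x1, x2
np.savetxt("TEST-DTA.TXT", data, fmt="%12.4f")  # plain numbers, no headers

# For the cost efficiencies option, input price columns (w1, w2) would be
# appended to the right of the inputs: np.hstack([y, x, w]).
# For the Malmquist option, the year-1 block of firms is stacked on top of
# the year-2 block, and so on: np.vstack([data_year1, data_year2, ...]).
```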
FRONTIER Version 4.1: A Computer Program for Stochastic Frontier Estimation

The FRONTIER computer program is very similar in construction to the DEAP computer program. It has been written to provide maximum-likelihood estimates of the parameters of a number of stochastic frontier production and cost functions. The stochastic frontier models considered can accommodate (unbalanced) panel data and assume firm effects that are distributed as truncated normal random variables. The two primary model specifications considered in the program are:

1. The Battese and Coelli (1992) time-varying inefficiencies specification, which is discussed in Section 10.5.
2. The Battese and Coelli (1995) model specification, in which the inefficiency effects are directly influenced by a number of variables. This model is discussed in Section 10.6.

The computer program also permits the estimation of other models that have appeared in the literature through the imposition of simple restrictions. Estimates of standard errors are also calculated, along with individual and mean efficiency estimates. The program can accommodate cross-sectional and panel data; time-varying and time-invariant inefficiency effects; cost and production functions; half-normal and truncated normal distributions; and functional forms which have a dependent variable in logged or original units.

The execution of FRONTIER Version 4.1 on an IBM PC generally involves five files:

1. The executable file, FRONT41.EXE
2. The start-up file, FRONT41.000
3. A data file (for example, called TEST-DTA.TXT)
4. An instruction file (for example, called TEST-INS.TXT)
5. An output file (for example, called TEST-OUT.TXT).

The start-up file, FRONT41.000, contains values for a number of key variables, such as the convergence criterion, printing flags and so on. This text file may be edited if the user wishes to alter any values. The data and instruction files must be created by the user prior to execution. The output file is created by FRONTIER during execution. Examples of data, instruction and output files are presented in Chapter 9.

The program requires that the data be stored in a text file and is quite particular about the order in which the data are listed. Each row of data should represent an observation. The columns must be presented in the following order:

1. firm number (an integer in the range 1 to N);
2. period number (an integer in the range 1 to T);
3. dependent variable;
4. regressor variables; and
5. variables influencing the inefficiency effects (if applicable).

The observations can be listed in any order but the columns must be in the stated order (see the sketch below).
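The following is a minimal sketch (not taken from the FRONTIER manual) of writing a data file with the column order just listed, again using Python and numpy. The file name is the example name used above, the final column stands in for a variable influencing the inefficiency effects, and all numbers are made-up illustrative figures.

```python
# Minimal sketch: write a FRONTIER data file with columns in the order
# firm number, period number, dependent variable, regressors, and (if the
# inefficiency-effects model is used) the variables influencing the
# inefficiency effects. Values are illustrative only.
import numpy as np

rows = np.array([
    # firm  period  log(y)  log(x1)  log(x2)  z1
    [1,     1,      2.31,   1.10,    0.85,    0.0],
    [1,     2,      2.40,   1.15,    0.88,    1.0],
    [2,     1,      1.95,   0.98,    0.80,    0.0],
    [2,     2,      2.05,   1.02,    0.83,    1.0],
])

# Firm and period numbers are integers in the ranges 1 to N and 1 to T;
# the remaining columns are written as plain real numbers, with no headings.
fmts = ["%6d", "%6d"] + ["%12.4f"] * 4
np.savetxt("TEST-DTA.TXT", rows, fmt=fmts)
```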