"Introduction to GAUSS Programming Language" by Eduardo Rossi
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Python – an Introduction
Hans Fangohr, [email protected], CED Seminar, 05/02/2004.
Overview: Why Python; how to get started; interactive Python (IPython); installing extra modules; lists; for-loops; if-then; modules and name spaces; while; string handling; file input and output; functions; numerical computation; some other features (long numbers, exceptions, dictionaries, default parameters, self-documenting code); example for extensions: Gnuplot; the wc program in Python; summary; outlook.
Literature: M. Lutz & D. Ascher, Learning Python, ISBN 1565924649 (1999; new edition 2004, ISBN 0596002815), pointed to where appropriate as "LP"; Alex Martelli, Python in a Nutshell, ISBN 0596001886; Deitel & Deitel et al., Python – How to Program, ISBN 0130923613. Other resources: www.python.org provides extensive documentation, tools and downloads.
Why Python? (Chapter 1, p. 3 in LP.) All sorts of reasons: an object-oriented scripting language with the power of a high-level language; portable, powerful, free; mixable (glues together with C/C++, Fortran, ...); easy to use (saves time developing code); easy to learn; in-built complex numbers. Today: easy to learn, some interesting features of the language, and use as a tool for small sysadmin/data processing/collecting tasks.
How to get started: the interpreter and how to run code (Chapter 1, p. 12 in LP). Two options. Interactive session: start the Python interpreter (python.exe, python, double click on icon, ...); the prompt appears (>>>); commands can be entered as at the MATLAB prompt. Execute a program: either start the interpreter and pass the program name as an argument (python.exe myfirstprogram.py), or make the Python program executable (Unix/Linux): ./myfirstprogram.py. Note: Python programs tend to end with .py, but this is not necessary.
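The seminar's worked example is a wc-style word-count program; the original listing is not reproduced in this excerpt, but a minimal sketch in the same spirit (file-handling details are assumptions, not the seminar's own code) could look like:

```python
import sys

def wc(filename):
    """Count lines, words and characters in a file, like the Unix wc tool."""
    lines = words = chars = 0
    with open(filename) as f:
        for line in f:
            lines += 1
            words += len(line.split())
            chars += len(line)
    return lines, words, chars

if __name__ == "__main__":
    # Report counts for every file named on the command line.
    for name in sys.argv[1:]:
        n_lines, n_words, n_chars = wc(name)
        print(n_lines, n_words, n_chars, name)
```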
- Department of Geography
Department of Geography UNIVERSITY OF FLORIDA, SPRING 2019 GEO 4167c section #09A6 / GEO 6161 section # 09A9 (3.0 credit hours) Course# 15235/15271 Intermediate Quantitative Methods Instructor: Timothy J. Fik, Ph.D. (Associate Professor) Prerequisite: GEO 3162 / GEO 6160 or equivalent Lecture Time/Location: Tuesdays, Periods 3-5: 9:35AM-12:35PM / Turlington 3012 Instructor’s Office: 3137 Turlington Hall Instructor’s e-mail address: [email protected] Formal Office Hours Tuesdays -- 1:00PM – 4:30PM Thursdays -- 1:30PM – 3:00PM; and 4:00PM – 4:30PM Course Materials (Power-point presentations in pdf format) will be uploaded to the on-line course Lecture folder on Canvas. Course Overview GEO 4167x/GEO 6161 surveys various statistical modeling techniques that are widely used in the social, behavioral, and environmental sciences. Lectures will focus on several important topics… including common indices of spatial association and dependence, linear and non-linear model development, model diagnostics, and remedial measures. The lectures will largely be devoted to the topic of Regression Analysis/Econometrics (and the General Linear Model). Applications will involve regression models using cross-sectional, quantitative, qualitative, categorical, time-series, and/or spatial data. Selected topics include, yet are not limited to, the following: Classic Least Squares Regression plus Extensions of the General Linear Model (GLM) Matrix Algebra approach to Regression and the GLM Join-Count Statistics (Dacey’s Contiguity Tests) Spatial Autocorrelation / Regression -
- Introduction to GNU Octave
Hubert Selhofer, revised by Marcel Oliver; updated to the current Octave version by Thomas L. Scofield. 2008/08/16.
[Cover figure: a 3D surface plot.]
Contents: 1 Basics (1.1 What is Octave?; 1.2 Help!; 1.3 Input conventions; 1.4 Variables and standard operations). 2 Vector and matrix operations (2.1 Vectors; 2.2 Matrices; 2.3 Basic matrix arithmetic; 2.4 Element-wise operations; 2.5 Indexing and slicing; 2.6 Solving linear systems of equations; 2.7 Inverses, decompositions, eigenvalues; 2.8 Testing for zero elements). 3 Control structures (3.1 Functions; 3.2 Global variables; 3.3 Loops; 3.4 Branching; 3.5 Functions of functions; 3.6 Efficiency considerations; 3.7 Input and output). 4 Graphics (4.1 2D graphics; 4.2 3D graphics; 4.3 Commands for 2D and 3D graphics). 5 Exercises (5.1 Linear algebra; 5.2 Timing; 5.3 Stability functions of BDF-integrators; 5.4 3D plot; 5.5 Hilbert matrix; 5.6 Least square fit of a straight line; 5.7 Trapezoidal rule).
1 Basics. 1.1 What is Octave? Octave is an interactive programming language specifically suited for vectorizable numerical calculations.
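The tutorial's core topics are vector/matrix arithmetic, element-wise operations, and solving linear systems. As a hedged illustration only, written in Python/NumPy rather than Octave (the Octave listings themselves are not part of this excerpt), the same operations look like:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # a small 2x2 matrix
b = np.array([3.0, 5.0])            # right-hand side vector

x = np.linalg.solve(A, b)           # solve A x = b   (Octave: A \ b)
prod = A @ A                        # matrix product  (Octave: A * A)
elem = A * A                        # element-wise    (Octave: A .* A)
eigenvalues = np.linalg.eigvals(A)  # eigenvalues     (Octave: eig(A))

print(x, prod, elem, eigenvalues)
```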
- The Evolution of Econometric Software Design: A Developer's View
Journal of Economic and Social Measurement 29 (2004) 205–259, IOS Press.
Houston H. Stokes, Department of Economics, College of Business Administration, University of Illinois at Chicago, 601 South Morgan Street, Room 2103, Chicago, IL 60607-7121, USA. E-mail: [email protected]
In the last 30 years, changes in operating systems, computer hardware, compiler technology and the needs of research in applied econometrics have all influenced econometric software development and the environment of statistical computing. The evolution of various representative software systems, including B34S developed by the author, is used to illustrate differences in software design and the interrelation of a number of factors that influenced these choices. A list of desired econometric software features, software design goals and econometric programming language characteristics is suggested. It is stressed that there is no one "ideal" software system that will work effectively in all situations. System integration of statistical software provides a means by which capability can be leveraged.
1. Introduction. 1.1. Overview. The development of modern econometric software has been influenced by the changing needs of applied econometric research, the expanding capability of computer hardware (CPU speed, disk storage and memory), changes in the design and capability of compilers, and the availability of high-quality subroutine libraries. Software design in turn has itself impacted applied econometric research, which has seen its horizons expand rapidly in the last 30 years as new techniques of analysis became computationally possible. How some of these interrelationships have evolved over time is illustrated by a discussion of the evolution of the design and capability of the B34S Software system [55], which is contrasted to a selection of other software systems.
- International Journal of Forecasting: Guidelines for IJF Software Reviewers
It is desirable that there be some small degree of uniformity amongst the software reviews in this journal, so that regular readers of the journal can have some idea of what to expect when they read a software review. In particular, I wish to standardize the second section (after the introduction) of the review, and the penultimate section (before the conclusions). As stand-alone sections, they will not materially affect the reviewer's ability to craft the review as he/she sees fit, while still providing consistency between reviews. This applies mostly to single-product reviews, but some of the ideas presented herein can be successfully adapted to a multi-product review.
The second section, Overview, is an overview of the package, and should include several things:
· Contact information for the developer, including website address.
· Platforms on which the package runs, and corresponding prices, if available.
· Ancillary programs included with the package, if any.
· The final part of this section should address Berk's (1987) list of criteria for evaluating statistical software. Relevant items from this list should be mentioned, as in my review of RATS (McCullough, 1997, pp. 182-183).
· My use of Berk was extremely terse, and should be considered a lower bound. Feel free to amplify considerably, if the review warrants it. In fact, Berk's criteria, if considered in sufficient detail, could be the outline for a review itself.
The penultimate section, Numerical Details, directly addresses numerical accuracy and reliability, if these topics are not addressed elsewhere in the review.
- Estimating Regression Models for Categorical Dependent Variables Using SAS, Stata, LIMDEP, and SPSS
© 2003-2008, The Trustees of Indiana University. Hun Myoung Park (kucc625). This document summarizes regression models for categorical dependent variables and illustrates how to estimate individual models using SAS 9.1, Stata 10.0, LIMDEP 9.0, and SPSS 16.0.
Contents: 1. Introduction; 2. The Binary Logit Model; 3. The Binary Probit Model; 4. Bivariate Logit/Probit Models; 5. Ordered Logit/Probit Models; 6. The Multinomial Logit Model; 7. The Conditional Logit Model; 8. The Nested Logit Model; 9. Conclusion.
1. Introduction. A categorical variable here refers to a variable that is binary, ordinal, or nominal. Event count data are discrete (categorical) but often treated as continuous variables. When a dependent variable is categorical, the ordinary least squares (OLS) method can no longer produce the best linear unbiased estimator (BLUE); that is, OLS is biased and inefficient. Consequently, researchers have developed various regression models for categorical dependent variables. The nonlinearity of categorical dependent variable models (CDVMs) makes it difficult to fit the models and interpret their results.
1.1 Regression Models for Categorical Dependent Variables. In CDVMs, the left-hand side (LHS) variable or dependent variable is neither interval nor ratio, but rather categorical. The level of measurement and data generation process (DGP) of a dependent variable determines the proper type of CDVM. Binary responses (0 or 1) are modeled with binary logit and probit regressions, ordinal responses (1st, 2nd, 3rd, ...) are formulated into (generalized) ordered logit/probit regressions, and nominal responses are analyzed by multinomial logit, conditional logit, or nested logit models depending on specific circumstances.
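As a quick reference (standard textbook definitions, not text quoted from the document), the binary logit and probit models mentioned above specify

\[
P(y_i = 1 \mid x_i) = F(x_i'\beta), \qquad
F(z) = \frac{e^{z}}{1+e^{z}} \ \text{(logit)}, \qquad
F(z) = \Phi(z) \ \text{(probit)},
\]

where \(\Phi\) is the standard normal CDF and \(\beta\) is estimated by maximum likelihood rather than OLS.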
- Investigating Data Management Practices in Australian Universities
Margaret Henty, The Australian National University; Belinda Weaver, The University of Queensland; Stephanie Bradbury, Queensland University of Technology; Simon Porter, The University of Melbourne. http://www.apsr.edu.au/investigating_data_management July, 2008.
Table of Contents: Introduction; About the survey; About this report; The respondents; The survey results; Tables and Comments (Digital data; Non-digital data forms; Types of digital data; Size of data collection; Software used for analysis or manipulation; Software storage and retention; Research Data Management Plans)
- Gretl User's Guide
Gnu Regression, Econometrics and Time-series. Allin Cottrell, Department of Economics, Wake Forest University; Riccardo "Jack" Lucchetti, Dipartimento di Economia, Università Politecnica delle Marche. December, 2008.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation (see http://www.gnu.org/licenses/fdl.html).
Contents: 1 Introduction (1.1 Features at a glance; 1.2 Acknowledgements; 1.3 Installing the programs). I Running the program. 2 Getting started (2.1 Let's run a regression; 2.2 Estimation output; 2.3 The main window menus; 2.4 Keyboard shortcuts; 2.5 The gretl toolbar). 3 Modes of working (3.1 Command scripts; 3.2 Saving script objects; 3.3 The gretl console; 3.4 The Session concept). 4 Data files (4.1 Native format; 4.2 Other data file formats; 4.3 Binary databases)
- Package 'Ecdat'
November 3, 2020. Version 0.3-9, Date 2020-11-02. Title: Data Sets for Econometrics. Author: Yves Croissant <[email protected]> and Spencer Graves. Maintainer: Spencer Graves <[email protected]>. Depends: R (>= 3.5.0), Ecfun. Description: Data sets for econometrics, including political science. LazyData: true. License: GPL (>= 2). Language: en-us. URL: https://www.r-project.org. NeedsCompilation: no. Repository: CRAN. Date/Publication: 2020-11-03 06:40:34 UTC.
R topics documented (excerpt): Accident, AccountantsAuditorsPct, Airline, Airq, bankingCrises, Benefits, Bids, breaches, BudgetFood, BudgetItaly, BudgetUK, Bwages, Capm, Car, Caschool, Catsup, Cigar, Cigarette, Clothing, Computers, Consumption, coolingFromNuclearWar, CPSch3, Cracker, CRANpackages, Crime, CRSPday, CRSPmon, Diamond, DM, Doctor, DoctorAUS, DoctorContacts, Earnings, Electricity, Fair, Fatality, FinancialCrisisFiles, Fishing, Forward, FriendFoe, Garch, Gasoline, Griliches, Grunfeld, HC, Heating, ...
- Automated Likelihood Based Inference for Stochastic Volatility Models
Singapore Management University, Research Collection School of Economics, 11-2009. H. Skaug and Jun Yu (Singapore Management University, [email protected]).
Citation: Skaug, H. and Yu, Jun. Automated Likelihood Based Inference for Stochastic Volatility Models. (2009). 1-28. Research Collection School Of Economics. Available at: https://ink.library.smu.edu.sg/soe_research/1151
Working Paper No. 15-2009, November 2009. (Any opinions expressed are those of the author(s) and not necessarily those of the School of Economics, SMU.)
Abstract: In this paper the Laplace approximation is used to perform classical and Bayesian analyses of univariate and multivariate stochastic volatility (SV) models. We show that implementation of the Laplace approximation is greatly simplified by the use of a numerical technique known as automatic differentiation (AD). (Hans J. Skaug, Jun Yu, October 7, 2008.)
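For orientation (a standard specification, not reproduced from the paper), a canonical univariate SV model takes the form

\[
y_t = \exp(h_t/2)\,\varepsilon_t, \qquad
h_{t+1} = \mu + \phi\,(h_t-\mu) + \sigma_\eta\,\eta_t, \qquad
\varepsilon_t,\ \eta_t \ \text{i.i.d. } N(0,1),
\]

where the log-volatility \(h_t\) is latent; evaluating the likelihood requires integrating out \(h_1,\dots,h_T\), which is where the Laplace approximation, and automatic differentiation of the approximated objective, come in.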
- Julia: A Fresh Approach to Numerical Computing
SIAM Review, Vol. 59, No. 1, pp. 65–98, © 2017 Society for Industrial and Applied Mathematics. Jeff Bezanson, Alan Edelman, Stefan Karpinski, Viral B. Shah. (Received by the editors December 18, 2014; accepted for publication (in revised form) December 16, 2015; published electronically February 7, 2017.)
Abstract. Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be "laws of nature" by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design: a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.
Key words: Julia, numerical, scientific computing, parallel. AMS subject classifications: 68N15, 65Y05, 97P40. DOI: 10.1137/141000671.
Contents: 1 Scientific Computing Languages: The Julia Innovation (1.1 Julia Architecture and Language Design Philosophy)
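The abstract's central mechanism, multiple dispatch, selects a method implementation from the runtime types of the arguments. No Julia code appears in this excerpt; as a loose, hedged analogue only, Python's standard library offers single dispatch (on the first argument alone), which conveys the flavour without matching Julia's full multi-argument dispatch:

```python
from functools import singledispatch

@singledispatch
def area(shape):
    # Fallback when no more specific implementation is registered.
    raise TypeError(f"no area rule for {type(shape).__name__}")

@area.register
def _(shape: float):
    # Treat a bare float as the radius of a circle.
    return 3.141592653589793 * shape ** 2

@area.register
def _(shape: tuple):
    # Treat a 2-tuple as the sides of a rectangle.
    width, height = shape
    return width * height

print(area(1.0))         # circle of radius 1.0
print(area((3.0, 4.0)))  # 3 x 4 rectangle
```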
- Limdep Handout
Full manuals are available for consultation in the Business Library, in the Economics Department General Office, and in my office. Help from the course T.A. related to accessing and running LIMDEP using NLOGIT via the VCL is available in the Econ Dept 9th Floor Computer Lab. See the course web page for times. The TA will show you how to run LIMDEP programs and try to help troubleshoot basic problems, but will not write your programs for you!
▪ General comments for programming: 1) use "$" at the end of each command to separate commands; 2) use a semi-colon ";" to separate options within a command; 3) use "," to separate variables within an option (e.g. dstat; rhs = income, gdp, interest).
▪ How to run the Limdep program? Left click and select the lines you wish to run, then click "Go" (or Ctrl-R, or the Run button at the top). If you need to re-run your program, you will often get an error message. To solve this problem, you can do as follows: (1) select the output, right click, and choose "clear all"; or (2) click the "Project" menu at the top and select "Reset".
To get you started, here are some basic Limdep commands.
1. Data Input. (1) Text File Input:
Read; file="location of file.txt" (e.g. file="a:\data.txt")
; nobs=# (e.g. nobs=34)
; nvars=# (e.g. nvars=9)
; names= list of names of variables $ (e.g. names=year, interest, ...)
Notes: You MUST tell Limdep the number of observations and the number of variables that you have in the text file. You also have to provide a list of variable names using the 'names' option.
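Putting the handout's pieces together, a minimal end-to-end sketch might look like the following; it only assembles the Read and Dstat forms shown above, and the file path and variable names are hypothetical illustrative values, not part of the handout:

```
Read ; file = "a:\data.txt"
     ; nobs = 34
     ; nvars = 3
     ; names = year, gdp, interest $
Dstat ; rhs = gdp, interest $
```

Each command ends with "$", options are separated by ";", and variable lists within an option are separated by ",", exactly as the handout's general comments prescribe.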