ILNumerics Optimization Toolbox


1 INTRODUCTION

Optimization deals with the minimization or maximization of functions. The ILNumerics Optimization Toolbox consists of functions that perform minimization (or maximization) of general nonlinear functions and problems. An optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete. Here, we are focusing on continuous optimization problems. The standard form of a continuous optimization problem is

    min f(x)
    subject to  gᵢ(x) ≤ 0,  i = 1, …, m,
                hⱼ(x) = 0,  j = 1, …, p,
                x ∈ X,

where f : ℝⁿ → ℝ is the objective function to be minimized over the variable x, the gᵢ : ℝⁿ → ℝ are called the inequality constraints, the hⱼ : ℝⁿ → ℝ are called the equality constraints, and X is a convex set in ℝⁿ, called the bound constraints.

By convention, the standard form defines a minimization problem. A maximization problem can be handled by negating the objective function. Based on the description of the function f and the feasible set, the problem can be classified as a linear, quadratic, nonlinear, semi-infinite, semi-definite, multiple-objective, or discrete optimization problem. However, in its current status, the ILNumerics Optimization Toolbox only provides nonlinear unconstrained and constrained optimization functions.

2 UNCONSTRAINED OPTIMIZATION

The function available for unconstrained optimization problems in ILNumerics is called optimUnconst. The optimUnconst function solves optimization problems with nonlinear objectives, without bound constraints on the unknown variables. A quasi-Newton method using a Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula to update the approximate Hessian matrix is implemented. The quasi-Newton method has an O(n²) memory requirement. For the moment, only the BFGS and the classical adaptive Newton method algorithms are available for unconstrained optimization problems. optimUnconst gives the option to provide user-defined functions for the computation of the Hessian or gradient. By default, the gradient is computed using finite differences based on an optimal step size (an illustrative sketch of a finite-difference gradient is given after section 2.2).

The optimUnconst function is essentially an unconstrained nonlinear optimization solver:

• xopt = optimUnconst(objfunc, x0);
• xopt = optimUnconst(objfunc, x0, gradFunc: gradient);
• xopt = optimUnconst(objfunc, x0, hessianFunc: hessian);
• xopt = optimUnconst(objfunc, x0, gradFunc: gradient, hessianFunc: hessian);

where

• objfunc is the objective function,
• x0 is the initial guess,
• gradient is the gradient of the objective function,
• xopt is the optimal point, or the minimizer,
• hessianFunc is the Hessian function giving the explicit expression of the Hessian matrix.

2.1 THE COST FUNCTION

The cost function is directly passed to optimUnconst as a function parameter. In most cases, this will be an ILNumerics function. But it is also fine to give the cost function as an anonymous function (a short sketch follows section 2.2 below).

Requirements on the cost function: the cost function has to be "smooth enough", i.e. the 2nd derivative of the cost function (the Hessian matrix) is expected to exist and to be non-zero on the whole definition set.

2.2 GETTING STARTED WITH UNCONSTRAINED OPTIMIZATION

The simplest use of the optimUnconst algorithm is as follows:

xopt = optimUnconst(objfunc, x0);

where

• objfunc is the objective function,
• x0 is the initial guess,
• xopt is the optimal point, or the minimizer.
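As a first illustration of this simplest call, and of passing the cost function as an anonymous function as mentioned in section 2.1, here is a minimal sketch. The quadratic objective and the starting point are our own choices, and we assume optimUnconst accepts any delegate matching the ILInArray<double> → ILRetArray<double> signature used by the examples below:

// Minimal sketch (our own toy objective, not from the toolbox docs):
// minimize f(x) = (x - 3)^2, supplied inline as an anonymous function.
ILArray<double> xopt = Optimization.optimUnconst(
    x => (x - 3.0) * (x - 3.0),   // anonymous cost function
    0.0);                          // initial guess x0
// xopt should come out close to 3.0. A maximization problem would be
// handled the same way, by negating the objective: x => -f(x).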
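The finite-difference scheme mentioned in section 2 can be sketched as follows. This is not the toolbox's internal implementation: the fixed step size h below stands in for the optimal step size optimUnconst computes, and all names are ours. It illustrates the idea, and can also be used to sanity-check a hand-written gradient before passing it via gradFunc:

// Illustrative central-difference gradient (sketch only; the toolbox's
// internal scheme and its optimal step size may differ).
public static ILRetArray<double> FiniteDiffGradient(
    Func<ILInArray<double>, ILRetArray<double>> f, ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        int n = x.Size.NumberOfElements;
        ILArray<double> g = zeros(n, 1);   // one gradient entry per variable
        double h = 1e-6;                   // assumed fixed step size
        for (int i = 0; i < n; i++)
        {
            ILArray<double> xp = x.C;      // copy with x[i] shifted up
            ILArray<double> xm = x.C;      // copy with x[i] shifted down
            xp.SetValue(x.GetValue(i) + h, i);
            xm.SetValue(x.GetValue(i) - h, i);
            // central difference: (f(x + h·eᵢ) - f(x - h·eᵢ)) / (2h)
            g.SetValue((f(xp).GetValue(0) - f(xm).GetValue(0)) / (2 * h), i);
        }
        return g;
    }
}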
2.2.1 Example

In the following example, we compute the unconstrained minimum of the Rosenbrock function (see http://en.wikipedia.org/wiki/Rosenbrock_function). The function is given by

    f(x) = 100 (x₂ − x₁²)² + (x₁ − 1)²

In C#, the algorithm looks as follows:

public static ILRetArray<double> Rosenbrock(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        // f(x) = 100 (x[1] - x[0]^2)^2 + (x[0] - 1)^2
        return 100 * (x[1] - x[0] * x[0]) * (x[1] - x[0] * x[0])
               + (x[0] - 1) * (x[0] - 1);
    }
}

The minimum of the Rosenbrock function is known to lie at [1, 1]. In order to find this minimum programmatically, we give it to the optimUnconst function, together with a point to start looking for the minimum, let's say x0 = (−5, −5):

ILArray<double> xopt = Optimization.optimUnconst(Rosenbrock, -5 * ones(2, 1));

>xopt
><Double> [2,1]
>   [0]: 1.00000
>   [1]: 1.00000

The value returned from optimUnconst is called the minimizer of the objective function. It corresponds to the point x where the objective function reaches a minimum. For non-convex problems, that minimum will be a local minimum with respect to the given starting point or initial guess. For convex functions f, the minimizer corresponds to the global minimum. The minimum, i.e. the value of the objective function at the minimizer, can be found by simply evaluating the objective function:

>Rosenbrock(xopt)
><Double> (:,:) 1e-025 *
>   1.51685

2.3 OPTIMUNCONST WITH GRADIENT AND/OR HESSIAN FOR UNCONSTRAINED OPTIMIZATION

By default, the gradient and the Hessian approximations (i.e. first and 2nd derivatives) of the objective function are computed automatically by optimUnconst. In order to speed up this computation, user-defined functions for the Hessian and/or the gradient can be provided:

xopt = optimUnconst(objfunc, x0, gradFunc: userGradientFunction);
xopt = optimUnconst(objfunc, x0, hessianFunc: userDefinedHessian);
xopt = optimUnconst(objfunc, x0, gradFunc: userGradientFunction, hessianFunc: userDefinedHessian);

where userGradientFunction and userDefinedHessian are the user-provided gradient and Hessian functions. When the Hessian is provided, the algorithm becomes a classical Newton method algorithm (for an introduction, see http://en.wikipedia.org/wiki/Newton_method_in_optimization), with the exact Hessian instead of the BFGS approximation of the Hessian.

2.3.1 Example

We would like to minimize the function

    f(x) = ‖x − a‖₂,  with a = (1, 2, −4)ᵀ.

The derivative of the function is

    ∇f(x) = (x − a) / ‖x − a‖₂

The C# algorithm of the function is as follows:

public ILRetArray<double> NewExampleFunction(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        ILArray<double> a = array(1.0, 2.0, -4.0);
        return norm(x - a, 2);
    }
}

The gradient is given by

public ILRetArray<double> GradientNewExampleFunction(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        ILArray<double> a = array(1.0, 2.0, -4.0);
        return (x - a) / norm(x - a, 2);
    }
}

Given the starting point (10.0, 100.0, 1000.0), the minimum of the function is found by:

ILArray<double> xopt = Optimization.optimUnconst(
    NewExampleFunction,
    array(10.0, 100.0, 1000.0),
    gradFunc: GradientNewExampleFunction);

>xopt
><Double> [3, 1]
>   [0]: 1
>   [1]: 2
>   [2]: -4

The value of the function at the minimizer is found by

>NewExampleFunction(xopt)
><Double> 0

In a similar way, the Hessian function can be provided for the computation. In this case, the BFGS approximation of the Hessian is replaced by the user-defined function, resulting in a faster convergence of the optimization algorithm.
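The speed-up has a simple explanation (this is standard optimization theory, not ILNumerics-specific): with the exact Hessian H available, the solver can take the classical Newton step

    xₖ₊₁ = xₖ − H(xₖ)⁻¹ ∇f(xₖ),

whereas BFGS must first build up an approximation of the (inverse) Hessian from successive gradient evaluations before its steps reach comparable quality.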
Note that the Hessian function must return symmetric matrices of size n × n, where n is the dimensionality of the given starting point.

2.3.2 Example

We would like to minimize the function

    f(x) = 2x²

The derivative of the function is

    f′(x) = 4x

and the Hessian is

    f″(x) = 4

The C# algorithm for the computation is as follows:

public ILRetArray<double> objfunc(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        return 2 * x * x;
    }
}

The gradient function is defined by:

public ILRetArray<double> gradfunc(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        return 4 * x;
    }
}

The Hessian function is defined by:

public ILRetArray<double> hessianfunc(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        return 4.0;
    }
}

Given the starting point 1000.0, the method can be called as follows:

ILArray<double> xopt = Optimization.optimUnconst(
    objfunc, 1000.0,
    gradFunc: gradfunc, hessianFunc: hessianfunc);

Inspecting the result in the immediate window shows that the minimizer x = 0 is found up to floating point precision:

><Double> (:,:) 1e-012 *
>   1.42109

2.4 FEATURES

optimUnconst provides an efficient optimization solver based on the robust BFGS algorithm. BFGS is a quasi-Newton algorithm whose updates are computed using a Gauss C-G algorithm, and the line search of optimUnconst is based on the Golden Section Search algorithm. Even with "big" numbers, BFGS will manage to adapt. However, it is recommended to normalize/scale functions with high values into a better floating point precision range (i.e. values 'near 1.0').

3 CONSTRAINED NONLINEAR OPTIMIZATION ALGORITHMS

ILNumerics provides the method optim to obtain the value of x that minimizes a nonlinear objective function f(x), allowing different kinds of constraints. The simplest call of optim is:

xopt = optim(objfunc, x0);

where xopt is the cost function minimizer, x0 the initial guess, and objfunc the cost function. In fact, the previous call is equivalent to the optimUnconst function call. However, the Hessian and the gradient cannot be provided using this call; the Newton method can still be used, through an approximation of the Hessian matrix instead of the BFGS approximation.

3.1 NONLINEAR OPTIMIZATION WITH BOUNDARY CONSTRAINTS

optim solves bound-constrained optimization problems defined by

    min f(x)
    subject to  a ≤ x ≤ b

The calling sequence for such a problem is:

xopt = optim(objfunc, x0, lowerBound: a, upperBound: b);

where

• objfunc is the objective function,
• x0 is the initial guess,
• lowerBound is the lower bound constraint,
• upperBound is the upper bound constraint.

3.1.1 Example

Let us consider the minimization problem

    min f(x)
    subject to  −5 ≤ x ≤ 0.

The C# function to solve the problem can be written as follows:

public ILRetArray<double> objfunc(ILInArray
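A minimal sketch of a complete bounded call, in the style of the earlier examples: the objective, bounds, and starting point below are our own choices (only the parameter names lowerBound and upperBound are taken from the calling sequence above), and we assume scalars are implicitly converted to arrays as in the unconstrained examples:

// Illustrative sketch only; the objective is our own toy function,
// not the example from the text above.
public static ILRetArray<double> BoundedObjective(ILInArray<double> x)
{
    using (ILScope.Enter(x))
    {
        // unconstrained minimum at x = -2
        return (x + 2.0) * (x + 2.0);
    }
}

ILArray<double> xopt = Optimization.optim(
    BoundedObjective, -4.0,
    lowerBound: -5.0, upperBound: 0.0);
// -2 lies inside [-5, 0], so xopt should be close to -2; with bounds
// of, say, [-5, -3], the solver would instead return a point at the
// active bound x = -3.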