Stan Modeling Language

Total Pages: 16

File Type: PDF, Size: 1020 KB

Stan Modeling Language: User’s Guide and Reference Manual
Stan Development Team
Stan Version 2.12.0
Tuesday 6th September, 2016
mc-stan.org

Stan Development Team (2016) Stan Modeling Language: User’s Guide and Reference Manual. Version 2.12.0.

Copyright © 2011–2016, Stan Development Team. This document is distributed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). For full details, see https://creativecommons.org/licenses/by/4.0/legalcode. The Stan logo is distributed under the Creative Commons Attribution-NoDerivatives 4.0 International License (CC BY-ND 4.0). For full details, see https://creativecommons.org/licenses/by-nd/4.0/legalcode.

Stan Development Team

Currently Active Developers

This is the list of current developers in order of joining the development team (see the next section for former development team members).

• Andrew Gelman (Columbia University): chief of staff, chief of marketing, chief of fundraising, chief of modeling, max marginal likelihood, expectation propagation, posterior analysis, RStan, RStanARM, Stan
• Bob Carpenter (Columbia University): language design, parsing, code generation, autodiff, templating, ODEs, probability functions, constraint transforms, manual, web design / maintenance, fundraising, support, training, Stan, Stan Math, CmdStan
• Matt Hoffman (Adobe Creative Technologies Lab): NUTS, adaptation, autodiff, memory management, (re)parameterization, optimization, C++
• Daniel Lee (Columbia University): chief of engineering, CmdStan, builds, continuous integration, testing, templates, ODEs, autodiff, posterior analysis, probability functions, error handling, refactoring, training, Stan, Stan Math, CmdStan, RStan, PyStan
• Ben Goodrich (Columbia University): RStan, multivariate probability functions, matrix algebra, (re)parameterization, constraint transforms, modeling, R, C++, training, Stan, Stan Math, RStan, RStanARM
• Michael Betancourt (University of Warwick): chief of smooth manifolds, MCMC, Riemannian HMC, geometry, analysis and measure theory, ODEs, CmdStan, CDFs, autodiff, transformations, refactoring, modeling, adaptation, variational inference, logos, web design, training, Stan, Stan Math, CmdStan
• Marcus Brubaker (University of Toronto, Scarborough): chief of optimization, code efficiency, matrix algebra, multivariate distributions, C++, Stan, Stan Math, CmdStan
• Jiqiang Guo (NPD Group): modeling, Stan, RStan
• Allen Riddell (Dartmouth College): PyStan
• Marco Inacio (University of São Paulo/UFSCar): functions and distributions, Stan, Stan Math
• Jeffrey Arnold (University of Washington): emacs mode, pretty printing, manual
• Rob J. Goedman (Consultant, La Jolla, California): parsing, Stan, Stan.jl
• Brian Lau (CNRS, Paris): MatlabStan
• Mitzi Morris (Lucidworks): parsing, testing, JSON, Stan, Stan Math
• Rob Trangucci (Columbia University): max marginal likelihood, multilevel modeling and GLMs, poststratification, autodiff, template metaprogramming, training, Stan, Stan Math, RStan
• Jonah Sol Gabry (Columbia University): RStan, RStanARM, ShinyStan, multilevel modeling, model comparison (LOO), model visualization, training
• Alp Kucukelbir (Columbia University): variational inference, Stan, CmdStan
• Robert L. Grant (St. George’s, University of London and Kingston University): StataStan
• Dustin Tran (Harvard University): variational inference, Stan, CmdStan
• Krzysztof Sakrejda (University of Massachusetts, Amherst): sparse matrix operations, Stan, Stan Math
• Aki Vehtari (Aalto University): GPs, model comparison (WAIC, LOO), modeling, algorithms, MatlabStan
• Rayleigh Lei (University of Michigan): vectorization, ADVI, RStan, Stan, Stan Math
• Sebastian Weber (Novartis Pharma): ODE solvers, Stan, Stan Math
• Charles Margossian (Metrum LLC): ODE solvers, Stan, Stan Math

Development Team Alumni

These are developers who have made important contributions in the past, but are no longer contributing actively.

• Michael Malecki (Crunch.io, YouGov plc): original design, modeling, logos, R
• Peter Li (Columbia University): RNGs, higher-order autodiff, ensemble sampling, Metropolis, example models, C++
• Yuanjun Guo (Columbia University): dense mass matrix estimation, C++

Contents

Preface
Acknowledgements

Part I: Introduction
1. Overview

Part II: Programming Techniques
2. Model Building as Software Development
3. Data Types
4. Containers: Arrays, Vectors, and Matrices
5. Multiple Indexing and Range Indexing
6. Regression Models
7. Time-Series Models
8. Missing Data & Partially Known Parameters
9. Truncated or Censored Data
10. Finite Mixtures
11. Measurement Error and Meta-Analysis
12. Latent Discrete Parameters
13. Sparse and Ragged Data Structures
14. Clustering Models
15. Gaussian Processes
16. Directions, Rotations, and Hyperspheres
17. Reparameterization & Change of Variables
18. Custom Probability Functions
19. User-Defined Functions
20. Solving Differential Equations
21. Problematic Posteriors
22. Optimizing Stan Code for Efficiency
23. Reproducibility

Part III: Modeling Language Reference
24. Execution of a Stan Program
25. Data Types and Variable Declarations
26. Expressions
27. Statements
28. User-Defined Functions
29. Program Blocks
30. Modeling Language Syntax

Part IV: Built-In Functions
31. Void Functions
32. Integer-Valued Basic Functions
33. Real-Valued Basic Functions
34. Array Operations
35. Matrix Operations
36. Sparse Matrix Operations
37. Mixed Operations
38. Ordinary Differential Equation Solvers

Part V: Discrete Distributions
39. Conventions for Probability Functions
40. Binary Distributions
41. Bounded Discrete Distributions
42. Unbounded Discrete Distributions
43. Multivariate Discrete Distributions

Part VI: Continuous Distributions
44. Unbounded Continuous Distributions
45. Positive Continuous Distributions
46. Non-negative Continuous Distributions
47. Positive Lower-Bounded Probabilities
48. Continuous Distributions on [0, 1]
49. Circular Distributions
50. Bounded Continuous Probabilities
51. Distributions over Unbounded Vectors
52. Simplex Distributions
53. Correlation Matrix Distributions
54. Covariance Matrix Distributions

Part VII: Additional Topics
55. Point Estimation
56. Bayesian Data Analysis
57. Markov Chain Monte Carlo Sampling
58. Transformations of Constrained Variables
59. Variational Inference

Part VIII: Algorithms & Implementations
60. Hamiltonian Monte Carlo Sampling
61. Optimization Algorithms
62. Variational Inference
63. Diagnostic Mode

Part IX: Software Process
64. Software Development Lifecycle

Part X: Contributed Modules
65. Contributed Modules

Appendices
A. Licensing
B. Stan for Users of BUGS
C. Stan Program Style Guide
D. Warning and Error Messages
E. Deprecated Features
F. Mathematical Functions

Bibliography
Index

Preface

Why Stan?

We did not set out to build Stan as it currently exists. We set out to apply full Bayesian inference to the sort of multilevel generalized linear models discussed in Part II of (Gelman and Hill, 2007). These models are structured with grouped and interacted predictors at multiple levels, hierarchical covariance priors, nonconjugate coefficient priors, latent effects as in item-response models, and varying output link functions and distributions.

The models we wanted to fit turned out to be a challenge for current general-purpose software. A direct encoding in BUGS or JAGS can grind these tools to a halt. Matt Schofield found his multilevel time-series regression of climate on tree-ring measurements wasn’t converging after hundreds of thousands of iterations.

Initially, Aleks Jakulin spent some time working on extending the Gibbs sampler in the Hierarchical Bayesian Compiler (Daumé, 2007), which, as its name suggests, is compiled rather than interpreted. But even an efficient and scalable implementation does not solve the underlying problem that Gibbs sampling does not fare well with highly correlated posteriors. We finally realized we needed a better sampler, not a more efficient implementation.

We briefly considered trying to tune proposals for a random-walk Metropolis-Hastings sampler, but that seemed too problem specific and not even necessarily possible without some kind of adaptation rather than tuning of the proposals.

The Path to Stan

We were at the same time starting to hear more and more about Hamiltonian Monte Carlo (HMC) and its ability to overcome some of the problems inherent in Gibbs sampling. Matt Schofield managed to fit the tree-ring data using a hand-coded implementation of HMC, finding it converged in a few hundred iterations.

HMC appeared promising but was also problematic in that the Hamiltonian dynamics simulation requires the gradient of the log posterior. Although it’s possible to do this by hand, it is very tedious and error prone. That’s when we discovered reverse-mode algorithmic differentiation, which lets you write down a templated C++ function for the log posterior and automatically compute a proper analytic gradient up to machine precision accuracy in only a few multiples of the cost to evaluate the log probability function itself. We explored existing algorithmic differentiation packages with open licenses such as rad (Gay, 2005) and its repackaging in the Sacado module of the Trilinos toolkit and the CppAD package in the coin-or toolkit. But neither package supported very many special functions
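
The preface excerpt above breaks off while describing reverse-mode algorithmic differentiation: write the log posterior once as ordinary code and obtain its gradient mechanically by the chain rule. The following is only a toy Python sketch of that idea (it is not Stan's templated C++ implementation, and the model and data below are made up for illustration):

    class Var:
        """A scalar value plus the local derivative information needed for reverse-mode autodiff."""
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents   # sequence of (parent_node, d(this)/d(parent))
            self.grad = 0.0

        def _wrap(self, other):
            return other if isinstance(other, Var) else Var(other)

        def __add__(self, other):
            other = self._wrap(other)
            return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

        def __sub__(self, other):
            other = self._wrap(other)
            return Var(self.value - other.value, [(self, 1.0), (other, -1.0)])

        def __mul__(self, other):
            other = self._wrap(other)
            return Var(self.value * other.value, [(self, other.value), (other, self.value)])

        __rmul__ = __mul__

    def backward(output):
        """Run the chain rule backwards from `output`, accumulating .grad on every node."""
        order, seen = [], set()
        def topo(node):
            if id(node) in seen:
                return
            seen.add(id(node))
            for parent, _ in node.parents:
                topo(parent)
            order.append(node)
        topo(output)
        output.grad = 1.0
        for node in reversed(order):            # visit dependents before their dependencies
            for parent, local in node.parents:
                parent.grad += node.grad * local

    def log_posterior(mu, data):
        """Unnormalized log posterior of a normal location model with unit scale (toy example)."""
        lp = Var(0.0)
        for y in data:
            r = mu - y
            lp = lp + (-0.5) * r * r
        return lp

    mu = Var(1.0)                        # point at which we want the gradient
    lp = log_posterior(mu, [0.2, -0.5, 1.3])
    backward(lp)
    print(lp.value)                      # -1.49
    print(mu.grad)                       # -2.0 = -(sum of residuals), computed automatically

Stan's math library carries out the same idea with templated C++ and a much larger set of differentiable special functions, which is the gap the preface is describing in the existing packages.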
Recommended publications
  • Stan: A Probabilistic Programming Language
    JSS Journal of Statistical Software, January 2017, Volume 76, Issue 1. doi: 10.18637/jss.v076.i01
    Stan: A Probabilistic Programming Language
    Bob Carpenter (Columbia University), Andrew Gelman (Columbia University), Matthew D. Hoffman (Adobe Creative Technologies Lab), Daniel Lee (Columbia University), Ben Goodrich (Columbia University), Michael Betancourt (Columbia University), Marcus A. Brubaker (York University), Jiqiang Guo (NPD Group), Peter Li (Columbia University), Allen Riddell (Indiana University)

    Abstract: Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
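
    The abstract describes calling Stan from Python through the pystan package. As a minimal, hedged sketch of that workflow under the PyStan 2 interface (the model and toy data below are illustrative, not taken from the paper):

        import pystan

        model_code = """
        data {
          int<lower=0> N;
          vector[N] y;
        }
        parameters {
          real mu;
          real<lower=0> sigma;
        }
        model {
          y ~ normal(mu, sigma);
        }
        """

        stan_data = {"N": 5, "y": [1.2, 0.4, -0.3, 2.1, 0.9]}  # toy data, illustrative only

        # Compile the Stan program, then draw posterior samples with NUTS (the default).
        model = pystan.StanModel(model_code=model_code)
        fit = model.sampling(data=stan_data, iter=2000, chains=4, seed=1)
        print(fit)  # posterior summaries and diagnostics

        # Penalized maximum likelihood / point estimate via L-BFGS.
        point_estimate = model.optimizing(data=stan_data)
        print(point_estimate["mu"], point_estimate["sigma"])

    The rstan and cmdstan interfaces mentioned in the abstract wrap the same compiled model from R and from the command line, respectively.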
  • Stan: A Probabilistic Programming Language
    JSS Journal of Statistical Software, MMMMMM YYYY, Volume VV, Issue II. http://www.jstatsoft.org/
    Stan: A Probabilistic Programming Language
    Bob Carpenter (Columbia University), Andrew Gelman (Columbia University), Matt Hoffman (Adobe Research), Daniel Lee (Columbia University), Ben Goodrich (Columbia University), Michael Betancourt (University of Warwick), Marcus A. Brubaker (University of Toronto, Scarborough), Jiqiang Guo (NPD Group), Peter Li (Columbia University), Allen Riddell (Dartmouth College)

    Abstract: Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.2.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line, through R using the RStan package, or through Python using the PyStan package. All three interfaces support sampling and optimization-based inference. RStan and PyStan also provide access to log probabilities, gradients, Hessians, and data I/O.

    Keywords: probabilistic program, Bayesian inference, algorithmic differentiation, Stan.
  • AdvancedHMC.jl: A Robust, Modular and Efficient Implementation of Advanced HMC Algorithms
    2nd Symposium on Advances in Approximate Bayesian Inference, 2019, 1–10
    AdvancedHMC.jl: A robust, modular and efficient implementation of advanced HMC algorithms
    Kai Xu [email protected] (University of Edinburgh), Hong Ge [email protected] (University of Cambridge), Will Tebbutt [email protected] (University of Cambridge), Mohamed Tarek [email protected] (UNSW Canberra), Martin Trapp [email protected] (Graz University of Technology), Zoubin Ghahramani [email protected] (University of Cambridge & Uber AI Labs)

    Abstract: Stan's Hamiltonian Monte Carlo (HMC) has demonstrated remarkable sampling robustness and efficiency in a wide range of Bayesian inference problems through carefully crafted adaption schemes to the celebrated No-U-Turn sampler (NUTS) algorithm. It is challenging to implement these adaption schemes robustly in practice, hindering wider adoption amongst practitioners who are not directly working with the Stan modelling language. AdvancedHMC.jl (AHMC) contributes a modular, well-tested, standalone implementation of NUTS that recovers and extends Stan's NUTS algorithm. AHMC is written in Julia, a modern high-level language for scientific computing, benefiting from optional hardware acceleration and interoperability with a wealth of existing software written in both Julia and other languages, such as Python. Efficacy is demonstrated empirically by comparison with Stan through a third-party Markov chain Monte Carlo benchmarking suite.

    1. Introduction. Hamiltonian Monte Carlo (HMC) is an efficient Markov chain Monte Carlo (MCMC) algorithm which avoids random walks by simulating Hamiltonian dynamics to make proposals (Duane et al., 1987; Neal et al., 2011). Due to the statistical efficiency of HMC, it has been widely applied to fields including physics (Duane et al., 1987), differential equations (Kramer et al., 2014), social science (Jackman, 2009) and Bayesian inference (e.g.
  • Information Theory, Pattern Recognition and Neural Networks: Part III Physics Exams 2006
    Part III Physics exams 2004–2006
    Information Theory, Pattern Recognition and Neural Networks
    Part III Physics exams 2006

    1. A channel has a 3-bit input, x ∈ {000, 001, 010, 011, 100, 101, 110, 111}, and a 2-bit output y ∈ {00, 01, 10, 11}. Given an input x, the output y is generated by deleting exactly one of the three input bits, selected at random. For example, if the input is x = 010 then P(y | x) is 1/3 for each of the outputs 00, 10, and 01; if the input is x = 001 then P(y=01 | x) = 2/3 and P(y=00 | x) = 1/3.

    Write down the conditional entropies H(Y | x=000), H(Y | x=010), and H(Y | x=001). [3]

    Assuming an input distribution of the form

        x:      000       001  010  011  100  101  110  111
        P(x):  (1−p)/2    p/4  p/4   0    0   p/4  p/4  (1−p)/2

    work out the conditional entropy H(Y | X) and show that H(Y) = 1 + H_2(2p/3), where H_2(x) = x log_2(1/x) + (1−x) log_2(1/(1−x)). [3]

    Sketch H(Y) and H(Y | X) as a function of p ∈ (0, 1) on a single diagram. [5]

    Sketch the mutual information I(X; Y) as a function of p. [2]

    H_2(1/3) ≃ 0.92.

    Another channel with a 3-bit input x ∈ {000, 001, 010, 011, 100, 101, 110, 111} erases exactly one of its three input bits, marking the erased symbol by a ?.
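
    For orientation only (this worked sketch is not part of the excerpt), the first requested entropies follow directly from the deletion rule stated in the question:

        H(Y | x=000) = 0 bits                 (every deletion of 000 yields y = 00, so the output is certain)
        H(Y | x=010) = log_2 3 ≈ 1.58 bits    (outputs 10, 00, 01 each occur with probability 1/3)
        H(Y | x=001) = H_2(1/3) ≈ 0.92 bits   (y = 01 with probability 2/3, y = 00 with probability 1/3)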
  • Maple Advanced Programming Guide
    Maple 9 Advanced Programming Guide
    M. B. Monagan, K. O. Geddes, K. M. Heal, G. Labahn, S. M. Vorkoetter, J. McCarron, P. DeMarco
    © Maplesoft, a division of Waterloo Maple Inc. 2003.

    Maple, Maplesoft, Maplet, and OpenMaple are trademarks of Waterloo Maple Inc. © Maplesoft, a division of Waterloo Maple Inc. 2003. All rights reserved. The electronic version (PDF) of this book may be downloaded and printed for personal use or stored as a copy on a personal machine. The electronic version (PDF) of this book may not be distributed. Information in this document is subject to change without notice and does not represent a commitment on the part of the vendor. The software described in this document is furnished under a license agreement and may be used or copied only in accordance with the agreement. It is against the law to copy the software on any medium except as specifically allowed in the agreement. Windows is a registered trademark of Microsoft Corporation. Java and all Java based marks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. Maplesoft is independent of Sun Microsystems, Inc. All other trademarks are the property of their respective owners. This document was produced using a special version of Maple that reads and updates LaTeX files. Printed in Canada. ISBN 1-894511-44-1

    Contents: Preface (Audience; Worksheet Graphical Interface; Manual Set; Conventions; Customer Feedback); 1. Procedures, Variables, and Extending Maple (Prerequisite Knowledge; In This Chapter)
  • On Measures of Entropy and Information
    On Measures of Entropy and Information. Tech. Note 009 v0.7, http://threeplusone.com/info. Gavin E. Crooks, 2018-09-22

    Contents:
    0. Notes on notation and nomenclature
    1. Entropy: Entropy; Joint entropy; Marginal entropy; Conditional entropy
    2. Mutual information: Mutual information; Multivariate mutual information; Interaction information; Conditional mutual information; Binding information; Residual entropy; Total correlation; Lautum information; Uncertainty coefficient
    3. Relative entropy: Relative entropy; Cross entropy
    5. Csiszár f-divergences: Csiszár f-divergence; Dual f-divergence; Symmetric f-divergences; K-divergence; Fidelity; Hellinger discrimination; Pearson divergence; Neyman divergence; LeCam discrimination; Skewed K-divergence; Alpha-Jensen-Shannon-entropy
    6. Chernoff divergence: Chernoff divergence; Chernoff coefficient; Rényi divergence; Alpha-divergence; Cressie-Read divergence; Tsallis divergence; Sharma-Mittal divergence
  • End-User Probabilistic Programming
    End-User Probabilistic Programming
    Judith Borghouts, Andrew D. Gordon, Advait Sarkar, and Neil Toronto (Microsoft Research)

    Abstract: Probabilistic programming aims to help users make decisions under uncertainty. The user writes code representing a probabilistic model, and receives outcomes as distributions or summary statistics. We consider probabilistic programming for end-users, in particular spreadsheet users, estimated to number in tens to hundreds of millions. We examine the sources of uncertainty actually encountered by spreadsheet users, and their coping mechanisms, via an interview study. We examine spreadsheet-based interfaces and technology to help reason under uncertainty, via probabilistic and other means. We show how uncertain values can propagate uncertainty through spreadsheets, and how sheet-defined functions can be applied to handle uncertainty. Hence, we draw conclusions about the promise and limitations of probabilistic programming for end-users.

    1 Introduction. In this paper, we discuss the potential of bringing together two rather distinct approaches to decision making under uncertainty: spreadsheets and probabilistic programming. We start by introducing these two approaches. 1.1 Background: Spreadsheets and End-User Programming. The spreadsheet is the first "killer app" of the personal computer era, starting in 1979 with Dan Bricklin and Bob Frankston's VisiCalc for the Apple II [15]. The primary interface of a spreadsheet (then and now, four decades later) is the grid, a two-dimensional array of cells. Each cell may hold a literal data value, or a formula that computes a data value (and may depend on the data in other cells, which may themselves be computed by formulas).
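
    As a rough illustration of the idea of propagating uncertainty through spreadsheet-style formulas, here is a plain Monte Carlo sketch in Python (this is not the sheet-defined-function mechanism the paper studies; the cell names and numbers are made up):

        import numpy as np

        # Hypothetical "cells": each uncertain cell holds an array of Monte Carlo samples
        # instead of a single number, and formulas operate elementwise.
        rng = np.random.default_rng(1)
        n = 10_000

        unit_price = rng.normal(9.5, 0.5, n)   # uncertain input cell
        units_sold = rng.poisson(120, n)       # uncertain input cell
        fixed_cost = 400.0                     # ordinary (certain) cell

        profit = unit_price * units_sold - fixed_cost  # formula cell carries the uncertainty forward

        print(f"mean profit: {profit.mean():.1f}")
        print(f"90% interval: ({np.quantile(profit, 0.05):.1f}, {np.quantile(profit, 0.95):.1f})")
        print(f"P(profit < 0): {(profit < 0).mean():.3f}")

    Because each "cell" holds a vector of samples rather than a single number, every downstream formula automatically yields a distribution rather than a point value.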
  • An Introduction to Data Analysis Using the PyMC3 Probabilistic Programming Framework: A Case Study with Gaussian Mixture Modeling
    An introduction to data analysis using the PyMC3 probabilistic programming framework: A case study with Gaussian Mixture Modeling
    Shi Xian Liew, Mohsen Afrasiabi, and Joseph L. Austerweil (Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA)
    August 28, 2019

    Author Note: Correspondence concerning this article should be addressed to: Shi Xian Liew, 1202 West Johnson Street, Madison, WI 53706. E-mail: [email protected]. This work was funded by a Vilas Life Cycle Professorship and the VCRGE at University of Wisconsin-Madison with funding from the WARF.

    Abstract: Recent developments in modern probabilistic programming have offered users many practical tools of Bayesian data analysis. However, the adoption of such techniques by the general psychology community is still fairly limited. This tutorial aims to provide non-technicians with an accessible guide to PyMC3, a robust probabilistic programming language that allows for straightforward Bayesian data analysis. We focus on a series of increasingly complex Gaussian mixture models – building up from fitting basic univariate models to more complex multivariate models fit to real-world data. We also explore how PyMC3 can be configured to obtain significant increases in computational speed by taking advantage of a machine’s GPU, in addition to the conditions under which such acceleration can be expected. All example analyses are detailed with step-by-step instructions and corresponding Python code.

    Keywords: probabilistic programming; Bayesian data analysis; Markov chain Monte Carlo; computational modeling; Gaussian mixture modeling

    1 Introduction. Over the last decade, there has been a shift in the norms of what counts as rigorous research in psychological science.
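
    As a hedged sketch of the kind of univariate Gaussian mixture the tutorial builds up to (assuming PyMC3 3.x; the synthetic data and priors below are illustrative, not taken from the paper):

        import numpy as np
        import pymc3 as pm

        # Synthetic data: two well-separated univariate Gaussian clusters (illustrative only).
        rng = np.random.default_rng(0)
        data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])

        K = 2  # number of mixture components

        with pm.Model() as model:
            w = pm.Dirichlet("w", a=np.ones(K))                  # mixture weights
            mu = pm.Normal("mu", mu=0.0, sigma=5.0, shape=K)     # component means
            sigma = pm.HalfNormal("sigma", sigma=2.0, shape=K)   # component scales
            obs = pm.NormalMixture("obs", w=w, mu=mu, sigma=sigma, observed=data)
            # NUTS sampling; mixture labels are not identified, so label switching
            # between chains is possible without an ordering constraint on the means.
            trace = pm.sample(1000, tune=1000, chains=2, random_seed=0)

        print(pm.summary(trace, var_names=["w", "mu", "sigma"]))

    Imposing an ordering constraint on the component means is a common remedy for label switching; the multivariate and GPU-accelerated variants discussed in the abstract extend this same model structure.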
  • Information Theory and Entropy
    3 Information Theory and Entropy. Solomon Kullback (1907–1994) was born in Brooklyn, New York, USA, and graduated from the City College of New York in 1927, received an M.A. degree in mathematics in 1929, and completed a Ph.D. in mathematics from the George Washington University in 1934. Kully, as he was known to all who knew him, had two major careers: one in the Defense Department (1930–1962) and the other in the Department of Statistics at George Washington University (1962–1972). He was chairman of the Statistics Department from 1964–1972. Much of his professional life was spent in the National Security Agency and most of his work during this time is still classified. Most of his studies on information theory were done during this time. Many of his results up to 1958 were published in his 1959 book, “Information Theory and Statistics.” Additional details on Kullback may be found in Greenhouse (1994) and Anonymous (1997).

    When we receive something that decreases our uncertainty about the state of the world, it is called information. Information is like “news,” it informs. Information is not directly related to physical quantities. Information is not material and is not a form of energy, but it can be stored and communicated using material or energy means. It cannot be measured with instruments but can be defined in terms of a probability distribution. Information is a decrease in uncertainty.

    This textbook is about a relatively new approach to empirical science called “information-theoretic.” The name comes from the fact that the foundation originates in “information theory”; a set of fundamental discoveries made largely during World War II with many important extensions since that time.
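
    As a small numerical illustration of information as a decrease in uncertainty, here is Kullback's relative entropy (the Kullback-Leibler divergence) computed in Python for a made-up belief update:

        import numpy as np

        def kl_divergence_bits(p, q):
            """Relative entropy D(p || q) in bits for discrete distributions p and q."""
            p = np.asarray(p, dtype=float)
            q = np.asarray(q, dtype=float)
            mask = p > 0
            return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

        # Made-up example: four equally likely states of the world, then some "news"
        # arrives that concentrates belief on the first state.
        prior = [0.25, 0.25, 0.25, 0.25]
        posterior = [0.70, 0.10, 0.10, 0.10]

        print(kl_divergence_bits(posterior, prior))  # about 0.64 bits of information gained

    Because the prior here is uniform, those 0.64 bits also equal the drop in Shannon entropy from 2 bits to roughly 1.36 bits.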
  • Lecture Notes (Chapter
    Chapter 12: Information Theory. So far in this course, we have discussed notions such as “complexity” and “information”, but we have not grounded these ideas in formal detail. In this part of the course, we will introduce key concepts from the area of information theory, such as formal definitions of information and entropy, as well as how these definitions relate to concepts previously discussed in the course. As we will see, many of the ideas introduced in previous chapters—such as the notion of maximum likelihood—can be re-framed through the lens of information theory. Moreover, we will discuss how machine learning methods can be derived from an information-theoretic perspective, based on the idea of maximizing the amount of information gain from training data. In this chapter, we begin with key definitions from information theory, as well as how they relate to previous concepts in this course. In the next chapter, we will introduce a new supervised learning technique—termed decision trees—which can be derived from an information-theoretic perspective.

    12.1 Entropy and Information. Suppose we have a discrete random variable x. The key intuition of information theory is that we want to quantify how much information this random variable conveys. In particular, we want to know how much information we obtain when we observe a particular value for this random variable. This idea is often motivated via the notion of surprise: the more surprising an observation is, the more information it contains. Put in another way, if we observe a random event that is completely predictable, then we do not gain any information, since we could already predict the result.
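
    A tiny Python illustration of the surprise intuition described above (the probabilities are arbitrary):

        import math

        def surprise_bits(p):
            """Self-information -log2(p): how surprising an event of probability p is."""
            return -math.log2(p)

        def entropy_bits(dist):
            """Entropy = expected surprise of a discrete distribution, in bits."""
            return sum(p * surprise_bits(p) for p in dist if p > 0)

        print(surprise_bits(1.0))   # 0.0  -- a perfectly predictable event carries no information
        print(surprise_bits(0.5))   # 1.0  -- a fair coin flip carries one bit
        print(surprise_bits(0.01))  # ~6.6 -- rare events are highly informative

        print(entropy_bits([0.5, 0.5]))  # 1.0 bit for a fair coin
        print(entropy_bits([0.9, 0.1]))  # ~0.47 bits for a biased, more predictable coin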
  • GNU Octave: A High-Level Interactive Language for Numerical Computations, Edition 3 for Octave Version 2.0.13, February 1997
    GNU Octave: A high-level interactive language for numerical computations
    Edition 3, for Octave version 2.0.13, February 1997
    John W. Eaton

    Published by Network Theory Limited, 15 Royal Park, Clifton, Bristol BS8 3AL, United Kingdom. Email: [email protected]. ISBN 0-9541617-2-6. Cover design by David Nicholls. Errata for this book will be available from http://www.network-theory.co.uk/octave/manual/

    Copyright © 1996, 1997 John W. Eaton. This is the third edition of the Octave documentation, and is consistent with version 2.0.13 of Octave. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the same conditions as for modified versions. Portions of this document have been adapted from the gawk, readline, gcc, and C library manuals, published by the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.

    Table of Contents: Publisher’s Preface; Author’s Preface (Acknowledgements; How You Can Contribute to Octave; Distribution); 1. A Brief Introduction to Octave (1.1 Running Octave; 1.2 Simple Examples: Creating a Matrix; Matrix Arithmetic; Solving Linear Equations)
  • Maple 7 Programming Guide
    Maple 7 Programming Guide
    M. B. Monagan, K. O. Geddes, K. M. Heal, G. Labahn, S. M. Vorkoetter, J. McCarron, P. DeMarco
    © 2001 by Waterloo Maple Inc.

    Waterloo Maple Inc., 57 Erb Street West, Waterloo, ON N2L 6C2, Canada. Maple and Maple V are registered trademarks of Waterloo Maple Inc. © 2001, 2000, 1998, 1996 by Waterloo Maple Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the copyright holder, except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.

    Contents: 1. Introduction: 1.1 Getting Started (Locals and Globals; Inputs, Parameters, Arguments); 1.2 Basic Programming Constructs (The Assignment Statement; The for Loop; The Conditional Statement; The while Loop; Modularization; Recursive Procedures; Exercise); 1.3 Basic Data Structures (Exercises; A MEMBER Procedure; Binary Search; Plotting the Roots of a Polynomial); 1.4 Computing with Formulæ (The Height of a Polynomial; Exercise)