How-To in Julia

Appendix A: How-To in Julia

Source: Y. Nazarathy and H. Klok, Statistics with Julia, Springer Series in the Data Sciences, Springer Nature Switzerland AG, 2021, https://doi.org/10.1007/978-3-030-70901-3.

The code examples in this book are primarily designed to illustrate statistical concepts. However, they also have a secondary purpose: they serve as a way of learning how to use Julia by example. Towards this end, the appendix links language features with specific code listings in the book. This appendix can be used on an ad-hoc basis to find code examples where you can see "how to" do specific things in Julia. Once you find the specific "how to" that you are looking for, you can refer to its associated code example, referenced via "⇒". This appendix is also available at https://statisticswithjulia.org/howto.html.

The appendix is broken up into several subsections as follows. Basics (Section A.1) deals with basic language features. Text and I/O (Section A.2) deals with textual operations as well as input and output. Data Structures (Section A.3) deals with data structures and their use, including basic arrays as well as other structures. Data Frames, Time-Series, and Dates (Section A.4) deals with data frames and related objects for organizing heterogeneous data. Mathematics (Section A.5) covers various mathematical aspects of the language. Randomness, Statistics, and Machine Learning (Section A.6) deals with random number generation, elementary statistics, distributions, statistical inference, and machine learning. Graphics (Section A.7) deals with plotting, manipulation of figures, and animation.

A.1 Basics

Types

Check the type of an object ⇒ Listing 1.2.
Specify the type of an argument to a function ⇒ Listing 1.11.
Specify the type of an array when initialized using zeros() ⇒ Listing 1.8.
Convert the type of a variable with convert() ⇒ Listing 3.22.
Convert the type of a variable with a constructor like Int() ⇒ Listing 1.11.
Use a 32-bit float instead of the default 64-bit float with the f0 float literal ⇒ Listing 9.20.
Use big representation of numbers using big() ⇒ Listing 2.3.
Check if a variable is immutable with isimmutable() ⇒ Listing 4.1.

Variables

Modify a global variable inside a different scope by declaring global ⇒ Listing 1.5.
Assign two values in a single statement (using an implicit tuple) ⇒ Listing 1.6.
Copy a variable, array, or struct with copy() ⇒ Listing 4.2.
Copy a variable, array, or struct with deepcopy() ⇒ Listing 4.2.

Conditionals and Logical Operations

Use the conditional if statement ⇒ Listing 1.6.
Use the conditional else statement ⇒ Listing 1.11.
Use the conditional elseif statement ⇒ Listing 1.17.
Use the shorthand ternary conditional operator ?: ⇒ Listing 2.5.
Carry out element-wise and using .& ⇒ Listing 4.10.
Carry out element-wise negation using .! ⇒ Listing 4.10.
Use logical or || ⇒ Listing 1.13.
Use short-circuit evaluation with logical and && ⇒ Listing 8.16.

Loops

Create a while loop ⇒ Listing 1.11.
Loop over values in an array ⇒ Listing 1.1.
Create nested for loops ⇒ Listing 1.6.
Break out of a loop with break ⇒ Listing 2.5.
Execute the next loop iteration from the top with continue ⇒ Listing 2.5.
Loop over an enumeration of (index, value) pairs created by enumerate() ⇒ Listing 4.30.

Functions

Create a function ⇒ Listing 1.6.
Create a one-line function ⇒ Listing 1.10.
Create a function using begin and end ⇒ Listing 3.32.
Create a function that returns a function ⇒ Listing 1.7.
Pass functions as arguments to functions ⇒ Listing 10.10.
Create a function with multiple arguments ⇒ Listing 1.7.
Use an anonymous function ⇒ Listing 1.15.
Define a function inside another function ⇒ Listing 2.4.
Create a function that returns a tuple ⇒ Listing 7.10.
Set up default values for function arguments ⇒ Listing 10.10.
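As a quick illustration of several of these basics together, here is a minimal stand-alone sketch; it is not one of the book's referenced listings, and the names used (square, describe, xs, and the threshold keyword) are arbitrary. It combines a one-line function, a typed argument with a default keyword value, the ternary conditional, a loop over enumerate(), and an anonymous function:

    # A minimal stand-alone sketch of several basics (not from the book's listings).
    square(x) = x^2                              # one-line function
    function describe(n::Int; threshold = 10)    # typed argument, default keyword value
        return n > threshold ? "large" : "small" # shorthand ternary conditional
    end

    xs = [3, 7, 12]
    println(typeof(xs))                          # check the type of an object
    for (i, v) in enumerate(xs)                  # loop over (index, value) pairs
        println("Element $i is $(describe(v)) and its square is $(square(v))")
    end
    println(map(x -> x + 0.5, xs))               # anonymous function passed to map()
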
Other Basic Operations

Check the running time of a block of code ⇒ Listing 1.3.
Increment values using += ⇒ Listing 1.8.
Carry out element-wise comparisons, for example using .> ⇒ Listing 2.9.
Apply an element-wise computation to a tuple ⇒ Listing 2.10.
Use the logical xor() function ⇒ Listing 2.12.
Set a numerical value to be infinity with Inf ⇒ Listing 3.6.
Include another block of Julia code using include() ⇒ Listing 3.34.
Find the maximal value amongst several arguments using max() ⇒ Listing 7.1.
Find the minimal value amongst several arguments using min() ⇒ Listing 5.20.

Metaprogramming

Define a macro ⇒ Listing 9.10.

Interacting with Other Languages

Copy data to the R environment with @rput from package RCall ⇒ Listing 1.18.
Get data from the R environment with @rget from package RCall ⇒ Listing 1.18.
Execute an R-language block with the command R from package RCall ⇒ Listing 1.18.
Set up a Python object in Julia using @pyimport from package PyCall ⇒ Listing 1.19.

A.2 Text and I/O

Strings

Split a string based on whitespace with split() ⇒ Listing 1.9.
Use LaTeX formatting for strings ⇒ Listing 2.4.
See if a string is a substring of another string with occursin() ⇒ Listing 4.30.
Concatenate two strings using * ⇒ Listing 1.9.

Text Output

Print text output including newlines and tabs ⇒ Listing 1.1.
Format variables within strings when printing ⇒ Listing 2.1.
Display an expression to output using display() ⇒ Listing 1.8.
Display an expression to output using show(stdout, ...) ⇒ Listing 9.5.
Present the value of an expression with @show ⇒ Listing 4.1.
Display an information line with @info ⇒ Listing 9.7.
Redirect the standard output to a file ⇒ Listing 9.7.

Reading and Writing From Files

Open a file for writing with open() ⇒ Listing 4.12.
Open a file for reading with open() ⇒ Listing 4.30.
Write a string to a file with write() ⇒ Listing 4.12.
Close a file after it was opened ⇒ Listing 4.12.
Read from a file with read() ⇒ Listing 4.12.
Find out the current working directory with pwd() ⇒ Listing 4.31.
See the list of files in a directory with readdir() ⇒ Listing 4.31.
See the directory of the current file with @__DIR__ ⇒ Listing 9.9.
Change the current directory with cd() ⇒ Listing 9.9.

CSV Files

Read a CSV file to create a dataframe with a header ⇒ Listing 4.3.
Read a CSV file to create a dataframe without a header ⇒ Listing 6.1.
Write to a CSV file with CSV.write() ⇒ Listing 4.32.

JSON

Parse a JSON file with JSON.parse() ⇒ Listing 1.9.

BSON

Write to a BSON file ⇒ Listing 9.20.
Read from a BSON file ⇒ Listing 9.19.

HTTP Input

Create an HTTP request ⇒ Listing 1.9.
Convert binary data to a string ⇒ Listing 1.9.
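To tie some of these text and I/O operations together, here is a minimal stand-alone sketch; it is not one of the book's referenced listings, and the file name example.txt is arbitrary. It shows string concatenation, splitting, @show, writing a file, and reading it back:

    # A minimal stand-alone sketch of strings and file I/O (not from the book's listings).
    s = "Hello" * " " * "Julia"           # concatenate strings with *
    words = split("the quick brown fox")  # split a string on whitespace
    @show s words                         # present the values of expressions

    open("example.txt", "w") do f         # open a file for writing
        write(f, join(words, "\n"))       # write a string; the do-block closes the file
    end

    content = read("example.txt", String) # read the whole file back as a string
    println("File mentions \"quick\": ", occursin("quick", content))
    println(pwd(), "  ", readdir())       # current working directory and its files
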
A.3 Data Structures

Creating Arrays

Create a range of numbers ⇒ Listing 1.2.
Create an array of zero values with zeros() ⇒ Listing 1.8.
Create an array of ones with ones() ⇒ Listing 2.4.
Create an array with a repeated value using fill() ⇒ Listing 7.10.
Create an array of strings ⇒ Listing 1.1.
Create an array of numerical values based on a formula ⇒ Listing 1.1.
Create an empty array of a given type ⇒ Listing 1.3.
Create an array of character ranges ⇒ Listing 2.2.
Create an array of tuples ⇒ Listing 6.6.
Create an array of arrays ⇒ Listing 1.15.

Basic Array Operations

Obtain the length of an array with length() ⇒ Listing 1.6.
Access elements of an array ⇒ Listing 1.6.
Obtain the first and last elements of an array using first() and last() ⇒ Listing 3.32.
Apply a function like sqrt() to an array of numbers ⇒ Listing 1.1.
Map a function onto an array with map() ⇒ Listing 8.10.
Append to an array with push!() ⇒ Listing 1.3.
Convert an object into an array with the collect() function ⇒ Listing 1.9.
Pre-allocate an array of a given size ⇒ Listing 1.16.
Delete an element from an array or collection with deleteat!() ⇒ Listing 2.4.
Find the first element of an array matching a pattern with findfirst() ⇒ Listing 2.4.
Append an array to an existing array with append!() ⇒ Listing 2.5.
Sum up two equally sized arrays element by element ⇒ Listing 3.7.
Concatenate several arrays into one array using vcat() and the splat operator ... ⇒ Listing 7.9.

Further Array Accessories

Sum up the values of an array with sum() ⇒ Listing 1.7.
Find the maximal value and its index in an array using findmax() ⇒ Listing 1.8.
Count the number of occurrences matching a condition with the count() function ⇒ Listing 1.9.
Sort an array using the sort() function ⇒ Listing 1.9.
Filter an array based on a criterion using the filter() function ⇒ Listing 1.15.
Find the maximal value in an array using maximum() ⇒ Listing 2.3.
Count the number of occurrences of each value with the counts() function from StatsBase ⇒ Listing 2.3.
Reduce a collection to its unique elements with unique() ⇒ Listing 2.5.
Check if an array is empty with isempty() ⇒ Listing 3.6.
Find the minimal value in an array using minimum() ⇒ Listing 3.6.
Accumulate values of an array with accumulate() ⇒ Listing 3.30.
Sort an array in place using the sort!() function ⇒ Listing 6.6.

Sets

Check if an element is a member of a set with in() ⇒ Listing 2.6.
Check if a set is a subset of another set with issubset() ⇒ Listing 2.6.
Obtain the set difference of two sets with setdiff() ⇒ Listing 2.5.
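To see several of these data structure operations side by side, here is a minimal stand-alone sketch; it is not one of the book's referenced listings, and the variable names (squares, evens, A, B) are arbitrary. It shows an array comprehension, push!(), filter(), findmax(), sort(), and basic set operations:

    # A minimal stand-alone sketch of array and set operations (not from the book's listings).
    squares = [i^2 for i in 1:10]         # create an array of values based on a formula
    push!(squares, 121)                   # append an element with push!()
    evens = filter(iseven, squares)       # filter based on a criterion
    println(sum(evens), " ", maximum(evens), " ", findmax(squares))

    sorted = sort(squares, rev = true)    # sort a copy (sort!() would sort in place)
    println(first(sorted), " ", last(sorted), " ", length(sorted))

    A, B = Set([1, 2, 3, 4]), Set([3, 4, 5])
    println(in(3, A), " ", issubset(B, A), " ", setdiff(A, B))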