Book of Abstracts
Recommended publications
-
Revolution R Enterprise 6.1 README
Revolution R Enterprise 6.1 for 32-bit and 64-bit Windows and 64-bit Red Hat Enterprise Linux (RHEL 5.x and RHEL 6.x) features an updated release of the RevoScaleR package that provides fast, scalable data management and data analysis: the same code scales from data frames to local, high-performance .xdf files to data distributed across a Windows HPC Server cluster, a Windows HPC Server Azure Burst cluster, or an IBM Platform Computing LSF cluster. RevoScaleR also allows distribution of the execution of essentially any R function across cores and nodes, delivering the results back to the user. Installation instructions and instructions for getting started are provided in your confirmation e-mail.

What's New in Revolution R Enterprise 6.1

Big Data Decision Tree Models: The new RevoScaleR function rxDTree can be used to create decision tree models. It is based on a binning algorithm, so it can scale to huge data. Both classification and regression trees are supported. The model objects returned can be made to inherit from the rpart class of the rpart package, so that plot.rpart, text.rpart, and printcp can be used for subsequent analysis. Prediction for models fitted by rxDTree can be done using rxPredict. See Chapter 10 of the RevoScaleR User's Guide for examples of creating decision tree models with rxDTree. Additional information is available in the rxDTree help file, seen by entering ?rxDTree at the R command line.

Support for Compression in .xdf Files: RevoScaleR's .xdf files can now be created using zlib compression.
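As a hedged sketch of that workflow (assuming Revolution R Enterprise with RevoScaleR is available; the small kyphosis data set from the rpart package stands in for big data, and rxAddInheritance is the inheritance helper named in RevoScaleR's documentation):

    library(rpart)   # provides the kyphosis data plus plot.rpart, text.rpart, printcp
    # Fit a classification tree with RevoScaleR's binning-based algorithm
    kyphTree <- rxDTree(Kyphosis ~ Age + Number + Start, data = kyphosis)
    # Add rpart-class inheritance so rpart's plotting and printing methods apply
    kyphRpart <- rxAddInheritance(kyphTree)
    plot(kyphRpart)
    text(kyphRpart)
    printcp(kyphRpart)
    # Score observations with rxPredict
    preds <- rxPredict(kyphTree, data = kyphosis)
-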
Sweave User Manual
Friedrich Leisch and R-core
June 30, 2017

1 Introduction

Sweave provides a flexible framework for mixing text and R code for automatic document generation. A single source file contains both documentation text and R code, which are then woven into a final document containing:

• the documentation text, together with
• the R code, and/or
• the output of the code (text, graphs).

This allows the re-generation of a report if the input data change, and documents the code to reproduce the analysis in the same file that contains the report. The R code of the complete analysis is embedded into a LaTeX document using the noweb syntax (Ramsey, 1998), which is usually used for literate programming (Knuth, 1984). Hence, the full power of LaTeX (for high-quality typesetting) and R (for data analysis) can be used simultaneously. See Leisch (2002) and references therein for more general thoughts on dynamic report generation and pointers to other systems.

Sweave uses a modular concept with different drivers for the actual translations. Obviously, different drivers are needed for different text markup languages (LaTeX, HTML, ...). Several packages on CRAN provide support for other word processing systems (see Appendix A).

2 Noweb files

noweb (Ramsey, 1998) is a simple literate-programming tool which allows combining program source code and the corresponding documentation into a single file. A noweb file is a simple text file consisting of a sequence of code and documentation segments, called chunks. Documentation chunks start with a line that has an at sign (@) as its first character, followed by a space or newline character.
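As a hedged illustration of this chunk syntax, a minimal Sweave source file (a hypothetical example.Rnw) could look like the following; running Sweave("example.Rnw") in R weaves it into a LaTeX file:

    \documentclass{article}
    \begin{document}
    We compute the mean of a small sample below.
    <<compute-mean, echo=TRUE>>=
    x <- c(1, 2, 3, 4)
    mean(x)
    @
    The result can also be used inline: \Sexpr{mean(x)}.
    \end{document}

Here <<compute-mean>>= opens a code chunk, and the @ line returns to documentation mode, exactly as described above.
-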
Frequently Asked Questions About Rcpp
Dirk Eddelbuettel and Romain François
Rcpp version 0.12.7 as of September 4, 2016

Abstract: This document attempts to answer the most Frequently Asked Questions (FAQ) regarding the Rcpp package (Eddelbuettel, François, Allaire, Ushey, Kou, Chambers, and Bates, 2016a; Eddelbuettel and François, 2011; Eddelbuettel, 2013).

Contents

1 Getting started
  1.1 How do I get started?
  1.2 What do I need?
  1.3 What compiler can I use?
  1.4 What other packages are useful?
  1.5 What licenses can I choose for my code?
2 Compiling and Linking
  2.1 How do I use Rcpp in my package?
  2.2 How do I quickly prototype my code?
    2.2.1 Using inline
    2.2.2 Using Rcpp Attributes
  2.3 How do I convert my prototyped code to a package?
  2.4 How do I quickly prototype my code in a package?
  2.5 But I want to compile my code with R CMD SHLIB!
  2.6 But R CMD SHLIB still does not work!
  2.7 What about LinkingTo?
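As a hedged sketch of the "quick prototyping" route via Rcpp attributes (section 2.2.2 above), assuming Rcpp and a working C++ toolchain are installed:

    library(Rcpp)
    # cppFunction() compiles an inline C++ function and exposes it to R
    cppFunction('
    double sumSquares(NumericVector x) {
        double total = 0;
        for (int i = 0; i < x.size(); ++i) total += x[i] * x[i];
        return total;
    }')
    sumSquares(c(1, 2, 3))   # returns 14
-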
Revolution R Enterprise™ 7.1 Getting Started Guide
The correct bibliographic citation for this manual is as follows: Revolution Analytics, Inc. 2014. Revolution R Enterprise 7.1 Getting Started Guide. Revolution Analytics, Inc., Mountain View, CA.

Copyright © 2014 Revolution Analytics, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of Revolution Analytics.

U.S. Government Restricted Rights Notice: Use, duplication, or disclosure of this software and related documentation by the Government is subject to restrictions as set forth in subdivision (c)(1)(ii) of The Rights in Technical Data and Computer Software clause at 52.227-7013.

Revolution R, Revolution R Enterprise, RPE, RevoScaleR, RevoDeployR, RevoTreeView, and Revolution Analytics are trademarks of Revolution Analytics. Other product names mentioned herein are used for identification purposes only and may be trademarks of their respective owners.

Revolution Analytics, 2570 W. El Camino Real, Suite 222, Mountain View, CA 94040, USA. Revised on March 3, 2014.

We want our documentation to be useful, and we want it to address your needs. If you have comments on this or any Revolution document, send e-mail to [email protected]. We'd love to hear from you.

Contents: Chapter 1. What Is Revolution R Enterprise? ...
-
Iotools: High-Performance I/O Tools for R by Taylor Arnold, Michael J
Abstract: The iotools package provides a set of tools for input- and output-intensive data processing in R. The functions chunk.apply and read.chunk are supplied to allow for iteratively loading contiguous blocks of data into memory as raw vectors. These raw vectors can then be efficiently converted into matrices and data frames with the iotools functions mstrsplit and dstrsplit. These functions minimize copying of data and avoid the use of intermediate strings in order to drastically improve performance. Finally, we also provide read.csv.raw to allow users to read an entire dataset into memory with the same efficient parsing code. In this paper, we present these functions through a set of examples with an emphasis on the flexibility provided by chunk-wise operations. We provide benchmarks comparing the speed of read.csv.raw to data loading functions provided in base R and other contributed packages.

Introduction

When processing large datasets, specifically those too large to fit into memory, the performance bottleneck is often getting data from the hard drive into the format required by the programming environment. The associated latency comes from a combination of two sources. First, there is hardware latency from moving data from the hard drive to RAM. This is especially the case with "spinning" disk drives, which can have throughput speeds several orders of magnitude less than those of RAM. Hardware approaches for addressing latency have been an active area of research and development for as long as hard drives have existed.
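A hedged sketch of the chunk-wise pattern the abstract describes, assuming a hypothetical headerless comma-separated file data.csv whose columns are all numeric:

    library(iotools)
    # Stream the file in contiguous raw chunks, reducing each chunk to column sums
    partial <- chunk.apply("data.csv",
                           function(chunk) {
                             m <- mstrsplit(chunk, sep = ",", type = "numeric")
                             colSums(m)   # partial result for this chunk
                           },
                           CH.MERGE = rbind)
    totals <- colSums(partial)   # combine the per-chunk results
    # Alternatively, read the entire file at once with the same fast parser
    d <- read.csv.raw("data.csv", header = FALSE)
-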
ESS — Emacs Speaks Statistics
ESS version 5.14

The ESS Developers (A.J. Rossini, R.M. Heiberger, K. Hornik, M. Maechler, R.A. Sparapani, S.J. Eglen, S.P. Luque and H. Redestig)

Current documentation by The ESS Developers. Copyright © 2002–2010 The ESS Developers. Copyright © 1996–2001 A.J. Rossini. Original documentation by David M. Smith. Copyright © 1992–1995 David M. Smith.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

1 Introduction to ESS

The S family (S, Splus and R) and SAS statistical analysis packages provide sophisticated statistical and graphical routines for manipulating data. Emacs Speaks Statistics (ESS) is based on the merger of two precursors, S-mode and SAS-mode, which provided support for the S family and SAS respectively. Later on, Stata-mode was also incorporated. ESS provides a common, generic, and useful interface, through emacs, to many statistical packages. It currently supports the S family, SAS, BUGS/JAGS, Stata and XLisp-Stat, with the level of support roughly in that order.

A bit of notation before we begin: emacs refers to both GNU Emacs by the Free Software Foundation and XEmacs by the XEmacs Project. The emacs major mode is ESS[language], where language can take values such as S, SAS, or XLS.
-
Integrating R with Azure for High-throughput Analysis, Hugh Shanahan
Hugh Shanahan, Department of Computer Science, Royal Holloway, University of London
[email protected]
@HughShanahan

Applicability to other domains: This project started out doing something very specific for the domain I work in (Computational Biology). I promise that there will be no Biology in this talk!! I realised it can be extended to running high-throughput jobs in R. Contrast this with MapReduce / R formalisms (HadoopStreaming, Rhipe, Revolution Analytics, ...) - there, the parallelisation happens outside of the individual R script.
-
BIG DATA 50 the Hottest Big Data Startups of 2014
Jeff Vance

Table of Contents

Big Data Startup Landscape – Overview
About the Author
Introduction – the Big Data Boom
Notes on Methodology & the origin of the Big Data 50
The Big Data 50
  Poised for Explosive Growth
    Entrigna
    Nuevora
    Roambi
  Machine Learning Mavens
    Oxdata
    Ayasdi
-
Species' Traits and Phylogenetic Influences
Northern Michigan University, NMU Commons
All NMU Master's Theses, Student Works
8-2018

CLIMATE DRIVEN RANGE SHIFTS OF NORTH AMERICAN SMALL MAMMALS: SPECIES' TRAITS AND PHYLOGENETIC INFLUENCES

Katie Nehiba, [email protected]

Recommended Citation: Nehiba, Katie, "CLIMATE DRIVEN RANGE SHIFTS OF NORTH AMERICAN SMALL MAMMALS: SPECIES' TRAITS AND PHYLOGENETIC INFLUENCES" (2018). All NMU Master's Theses. 557. https://commons.nmu.edu/theses/557

THESIS submitted to Northern Michigan University in partial fulfillment of the requirements for the degree of Master of Science, Office of Graduate Education and Research, July 2018.

ABSTRACT

Current anthropogenically-driven climate change is accelerating at an unprecedented rate. In response, species' ranges may shift as they track optimal climatic conditions. Species-specific differences may produce predictable differences in the extent of range shifts. I evaluated whether patterns of predicted responses to climate change were strongly related to species' taxonomic identities and/or to ecological characteristics of species' niches, elevation, and precipitation. I evaluated differences in predicted range shifts in well-sampled small mammals that are restricted to North America: kangaroo rats, voles, chipmunks, and ground squirrels.
-
Reproducible Research Using Sweave, Knitr and Pandoc
Aedín Culhane, [email protected]
Nov 20th 2012

My R course website: http://bcb.dfci.harvard.edu/~aedin/
My HSPH homepage: http://www.hsph.harvard.edu/research/aedin-culhane/

When issues of reproducibility arise:

• "Remember that microarray analysis you did six months ago? We ran a few more arrays. Can you add them to the project and repeat the same analysis?"
• "The statistical analyst who looked at the data I generated previously is no longer available. Can you get someone else to analyze my new data set using the same methods (and thus producing a report I can expect to understand)?"
• "Please write/edit the methods sections for the abstract/paper/grant proposal I am submitting based on the analysis you did several months ago."

From Keith Baggerly:

• Selected articles published in Nature Genetics between January 2005 and December 2006 that had used profiling with microarrays.
• Of the 56 items retrieved electronically, 20 articles were considered potentially eligible for the project.
• The four teams were from the University of Alabama at Birmingham (UAB), Stanford/Dana-Farber (SD), London (L), and Ioannina/Trento (IT).
• Each team was comprised of 3-6 scientists who worked together to evaluate each article.

Results:

• Result could be reproduced: n=2
• Reproduced with discrepancy: n=6
• Could not be reproduced: n=10
  – No data: n=4 (no data n=2, subset n=1, no reporter data n=1)
  – Confusion over matching of data to analysis: n=2
  – Specialized software required and not available: n=1
  – Raw data available but could not be processed: n=2

Reproducibility of Analysis: Ioannidis JP, Allison DB, Ball CA, Coulibaly I, Cui X, Culhane AC, et al. (2009) Repeatability of published microarray gene expression analyses.
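A minimal sketch of the knitr-plus-pandoc pipeline the talk covers, using a hypothetical source file report.Rmd (the chunk syntax is knitr's own):

    ---
    title: "Reproducible report"
    ---

    The analysis below reruns whenever the input data change.

    ```{r model}
    fit <- lm(Petal.Length ~ Petal.Width, data = iris)
    coef(fit)
    ```

Knitting the source and converting the result:

    # In R: run the chunks and write Markdown
    knitr::knit("report.Rmd")        # produces report.md
    # In a shell: convert the Markdown with pandoc, e.g.
    #   pandoc report.md -o report.html
-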
REVOLUTION ANALYTICS = ACTUARIAL EYE (Part 1)
"Perhaps the most important cultural trend today: The explosion of data about every aspect of our world and the rise of applied math gurus who know how to use it." – Chris Anderson, editor-in-chief of Wired.

"There is a real appetite in the business to understand more and to think about how we assemble and use data, also making sure we have the right people to ask the right questions of the data – because one without the other is not helpful." – Wendy Thorpe, AMP

Why Analytics?

Organizations are built on great decisions, and great decisions are built on great predictions. So what are great predictions built on? The answer is ANALYTICS!!! So in case you are wondering what this could possibly mean for you as a prospective actuary, the answer depends on what kind of actuary you want to be. Let me quote Duncan West here:

If you are in a leadership role in your organization, get data onto the strategic agenda. Many companies talk about the importance of data, but talk is not cheap: do they manage themselves in ways that demonstrate the importance? Actuaries in any role need to move away from a regulatory and compliance mindset in which accuracy is the most important measure of success. They should measure success by helping the business to make good decisions. And actuaries at all levels need to help develop the skills necessary to show insights to the business. Insights are useless if the business can't understand them. So communicating insight is a key part of an actuarial role.
-
Dynamic Documents with R and Knitr, by Yihui Xie 116 Tugboat, Volume 35 (2014), No
Book review: Dynamic Documents with R and knitr, by Yihui Xie
Boris Veytsman

Yihui Xie, Dynamic Documents with R and knitr. Chapman & Hall/CRC Press, 2013, 190+xxvi pp. Paperback, US$59.95. ISBN 978-1482203530.

There are several reasons why this book might be of interest to a TeX user. First, LaTeX has a prominent place in the book. Second, the book describes a very interesting offshoot of literate programming, a topic traditionally popular in the TeX community. Third, since a number of TeX users work with data analysis and statistics, R could be a useful tool for them. Since some TUGboat readers are likely not familiar with R, I would like to start this review with a short description of the software. R [1] is a free implementation of the S language (sometimes R is called GNU S).

[Figure: scatter plot of iris Petal Length (cm) against Petal Width (cm)]

This plot shows an almost linear dependence between the parameters. We can try a linear fit for these data:

model <- lm(Petal.Length ~ Petal.Width, data = iris)
model$coefficients
## (Intercept) Petal.Width
##       1.084       2.230
summary(model)$r.squared
## [1] 0.9271

The large value of R^2 = 0.9271 indicates the good quality of the fit. Of course we can replot the data together with the prediction of the linear model:

plot(Petal.Length ~ Petal.Width,
     data = iris, xlab = "Petal Width, cm",
     ylab = "Petal Length, cm")
abline(model)

[Figure: the same scatter plot with the fitted regression line added]