High Dimensional Data Analysis Via the SIR/PHD Approach


April 6, 2000

Preface

Dimensionality is an issue that can arise in every scientific field. Generally speaking, the difficulty lies in how to visualize a high dimensional function or data set. This area has become increasingly important with the advent of computer and graphics technology. People often ask: "How do they look?", "What structures are there?", "What model should be used?" Aside from the differences that underlie the various scientific contexts, such questions have a common root in Statistics. This should be the driving force for the study of high dimensional data analysis.

Sliced inverse regression (SIR) and principal Hessian directions (PHD) are two basic dimension reduction methods. They are useful for extracting the geometric information underlying noisy data of several dimensions - a crucial step in empirical model building which has been overlooked in the literature. In these Lecture Notes, I will review the theory of SIR/PHD and describe some ongoing research in various application areas. There are two parts. The first part is based on materials that have already appeared in the literature. The second part is a collection of manuscripts which are not yet published; they are included here for completeness.

Needless to say, there are many other high dimensional data analysis techniques (and we may encounter some of them later on) that deserve more detailed treatment. In this complex field, it would not be wise to anticipate the existence of any single tool that can outperform all others in every practical situation. Real world problems generally require a number of passes through the same data. Different approaches often lead to different structural findings at various stages. Serious readers should try out as many methods as possible on their own.

I started writing a preliminary version of these Lecture Notes in 1991-1992, when I had the chance to teach a seminar course at UCLA on high dimensional data analysis. At that time, SIR/PHD had just begun to appear in official journals, which made the writing of a book very difficult because most works were yet to be published. Even though those materials have since been used in similar courses and workshops, I hardly had the motivation to rewrite them. The real opportunity finally came last year, when colleagues at the Institute of Statistical Science, Academia Sinica, initiated the idea of this Lecture Series. I figured that I would have more to write now, because there have been many new and exciting developments along this line. Most noteworthy are the books of Cook (1998) and Cook and Weisberg (1994). I admire the way they presented the ideas, which are not far from what I really want to say. It is a remarkable achievement for them to have found a lucid language for dealing with the difficult subject of how to think of graphics in rigorous statistical terms. As expected, with the new language they have generated many new ideas and useful techniques that go beyond SIR/PHD.

For this book, I am still using the words as I originally thought about the subject of dimension reduction. The basic material is narrowly focused on the developments in which I am directly involved. Thus there is no serious attempt to be comprehensive in surveying the whole literature on SIR/PHD. For many researchers, SIR/PHD is still a novel technique, and new results are still waiting to be published.
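One display may help fix ideas at the outset. It restates compactly, in the notation used throughout these notes (y the response, x the p-dimensional regressor, K the number of effective components), the dimension reduction model that both SIR and PHD address, along with the population facts behind the two methods. This is only a summary sketch; the linear design condition and the normality assumption invoked below are discussed in Chapters 8 and 7, respectively.

```latex
% The e.d.r. model: y depends on x only through K linear projections
% (and noise), for an unknown link f and unknown directions beta_k:
y = f(\beta_1' x, \ldots, \beta_K' x, \epsilon), \qquad K \ll p .
% Under the linear design condition on x, the centered inverse regression
% curve lies in the K-dimensional space spanned by the Sigma_xx beta_k,
% which is what SIR exploits:
E(x \mid y) - E(x) \in \mathrm{span}\{\Sigma_{xx}\beta_1, \ldots, \Sigma_{xx}\beta_K\} .
% PHD instead works with a second moment: for normally distributed x,
% Stein's lemma identifies the average Hessian of the regression
% function with
\Sigma_{xx}^{-1}\, E[(y - Ey)(x - Ex)(x - Ex)']\, \Sigma_{xx}^{-1} ,
% whose eigenvectors give the principal Hessian directions.
```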
I would like to thank a lot of people who in one way or another have helped me in the development of SIR/PHD, including all my co-authors, colleagues, students, friends, and many anonymous referees. I would also like to thank Dr. Chen-Hsing Chen, who is in charge of the Lecture Series. My writing of these Lecture Notes would have been further delayed without his persistent requests; I really appreciate his patience. Finally, many of the computer and graphical outputs were put together by Dr. Chun-Houh Chen, who has worked with me over the years. Without his devotion to critical programming work and his many good ideas in implementing SIR/PHD, progress would have been much slower.

Acknowledgment: Over the years, the research of Li has been supported in part by NSF grants.

Contents

I  SIR/PHD - THEORY AND PRACTICE  ix

1  A Model for Dimension Reduction in Regression  1
  1.1  Static and dynamic graphics  1
    1.1.1  Graphical tools  1
    1.1.2  Boston housing data  3
  1.2  A regression paradigm  5
  1.3  Principal component analysis  7
  1.4  Effective dimension reduction in regression  7
    1.4.1  The model  8
    1.4.2  Special cases  9
    1.4.3  The e.d.r. directions  10
    1.4.4  The rationale  10
    1.4.5  An equivalent version  11
    1.4.6  Discrepancy measure  12

2  Sliced Inverse Regression: Basics  13
  2.1  Forward and inverse regression  13
  2.2  An algorithm of SIR  14
  2.3  SIR and principal component analysis  16
  2.4  Some simulation examples  17
  2.5  Contour plotting and SIR  20
  2.6  Fisher consistency for SIR  21
  2.7  Proof of Theorem 2.1  24

3  Sampling Properties of SIR  25
  3.1  Consistency of SIR  25
    3.1.1  The root n rate  25
    3.1.2  The discrepancy measure  26
    3.1.3  Simulation  26
  3.2  Eigenvalues  28
    3.2.1  Chi-squared test  29
    3.2.2  Eigenvalues and the assessment of K  29

4  Applying Sliced Inverse Regression  31
  4.1  Worsted yarn  31
  4.2  Variable selection  32
  4.3  Boston housing data  33
    4.3.1  Crime rate  34
    4.3.2  The low crime rate group  34
    4.3.3  Interpretation  34
  4.4  Structure removal  36
  4.5  OTL push-pull circuit  37

5  Generalization of SIR: Second Moment Based Methods  41
  5.1  A simple symmetric response curve  41
  5.2  Slice covariances  42
  5.3  Basic properties of slice covariances  43
  5.4  An iterative procedure  44
  5.5  SIR II algorithm  45

6  Transformation and SIR  47
  6.1  Dependent variable transformation  47
  6.2  Some remarks  51
  6.3  Examples  52
    6.3.1  Curves and clusters  52
    6.3.2  Heteroscedasticity  54
    6.3.3  Horseshoe and helix  56
  6.4  Simple estimates for the standard deviations of the SIR directions  58

7  Principal Hessian Directions  61
  7.1  Principal Hessian directions  61
  7.2  Dimension reduction  62
  7.3  Stein's lemma and estimates of the PHD's  62
    7.3.1  Stein's lemma  62
    7.3.2  Estimates for principal Hessian directions  64
  7.4  Sampling properties for normal carriers  65
  7.5  Linear conditional expectation for x  65
  7.6  Extension  67
  7.7  Examples  68

8  Linear Design Condition  73
  8.1  Forcing elliptic symmetry  73
    8.1.1  A simple case: a 2-D square  73
    8.1.2  Brillinger's normal resampling  74
    8.1.3  Minimum volume ellipsoid and Voronoi tessellation  74
  8.2  Higher dimension  75
    8.2.1  Effectiveness of MVE  75
    8.2.2  Difference between conditional linearity and the elliptic contoured distribution  77
    8.2.3  A simulation study  78
    8.2.4  Most low dimension projections are nearly linear  79
  8.3  Implication and some guidance  79
    8.3.1  Blind application  79
    8.3.2  Diagnostic checking  80
    8.3.3  The most dangerous directions  80
    8.3.4  Bias bound  80

9  Incorporating Discrete Input Variables  81
  9.1  Stratification  81
  9.2  Pooling estimates from different strata  82
  9.3  Estimation of treatment effects  83

10  Quasi-Helices in High Dimensional Regression  89
  10.1  Quasi-helical confounding  89
  10.2  The κ measure of nonlinearity  92
  10.3  Searching for quasi-helices  94
  10.4  Sensitivity of geometric shape change  97
  10.5  Over-linearization in linear approximation  98
  10.6  Over-fit in nonlinear approximation  102
  10.7  Model uncertainty and information loss  102
    10.7.1  Least favorable submodel  102
    10.7.2  Information loss for nearly linear regression  104
  10.8  Hypothesis testing for nearly linear regression  ...
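As a concrete preview of the SIR algorithm outlined in Section 2.2, here is a minimal sketch in Python. It is illustrative only - the function name, the equal-count slicing, and the default of ten slices are choices made for this sketch, not prescriptions from the notes. The steps are: standardize x, slice the cases by the sorted response, and take the leading eigenvectors of the weighted covariance matrix of the slice means.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_components=2):
    """Sketch of sliced inverse regression (cf. Li 1991).

    X is the n-by-p design matrix, y the response vector; the covariance
    of x is assumed nonsingular. Returns estimated e.d.r. directions
    (as columns, on the original x scale) and the leading eigenvalues.
    """
    n, p = X.shape
    # Step 1: standardize x to z = Sigma^{-1/2} (x - mean).
    mu = X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    Sigma_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Step 2: partition the cases into slices by the sorted response.
    slices = np.array_split(np.argsort(y), n_slices)
    # Step 3: weighted covariance of the slice means of z.
    M = np.zeros((p, p))
    for idx in slices:
        zbar = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(zbar, zbar)
    # Step 4: the top eigenvectors of M estimate the standardized e.d.r.
    # directions; map them back to the original x scale.
    evals, evecs = np.linalg.eigh(M)          # ascending eigenvalues
    top = evecs[:, ::-1][:, :n_components]
    return Sigma_inv_sqrt @ top, evals[::-1][:n_components]

# Toy check: y depends on x only through the single index x1 + x2, so the
# first estimated direction should be roughly proportional to (1, 1, 0, 0, 0).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = (X[:, 0] + X[:, 1]) ** 3 + 0.5 * rng.standard_normal(500)
dirs, evals = sir_directions(X, y, n_components=1)
```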