CSISS Software Tools Program Luc Anselin


CSISS Software Tools Program
Luc Anselin
University of Illinois, Urbana-Champaign
[email protected]
http://geog55.gis.uiuc.edu/~luc

Outline
◆ Background and Goals
◆ Software Tools Programs
◆ Implementation Issues

Background
◆ Spatial Analysis Tools
  – facilitate dissemination of spatial analysis techniques
  – easily accessible “lowest common denominator”
    » entry into the field, ESDA, visualization
  – state of the art/frontier
    » facilitate implementation of advanced/new techniques
    » spatial econometrics
◆ Two Dimensions
  – both functionality and training/education

CSISS Software Tools
◆ Development of next-generation SpaceStat
◆ Multiple fronts
  – traditional commercial software environments
  – open source
    » the Open SpaceStat Project
  – web-based analysis
    » the SpaceStat Web Project

Delivery Mechanisms
◆ Commercial Platforms
  – start from GIS
  – start from statistical/econometric software
  – functionality
    » implemented as macros, scripts, libraries, modules
    » new technologies: COM/CORBA components
    » need to handle data models for spatial information
◆ Self-contained
  – no other software needed
  – multi-platform
  – extensible, open architecture

Five Main Efforts
◆ Software Tools Clearing House
◆ Spatial Analysis with Existing Statistical/Econometric Software
◆ Dynamic ESDA with GIS
◆ The Open SpaceStat Project
◆ The SpaceStat Web Project

Software Tools Clearing House
◆ Web “portal” to software tools
  – links to sites
    » vendors, individuals
◆ Search capability
  – by technique, by platform, etc.
◆ Benchmarks and background
  – benchmarks on common data sets
  – links to learning materials

Software Tools Clearing House: Current Status
◆ Inventory of Links
  – portal sites (ai-geostats, etc.)
    » collection of links to spatial stats software
  – individual toolboxes (Matlab, R)
  – specialized software (SpaceStat, CrimeStat)
  – tools: collection of downloadable programs
◆ Links to CSISS Efforts

Spatial Tools for Stats/Econ
◆ Goal
  – add basic spatial econometric methods to existing stats/econometric software
    » diagnostics for spatial effects
    » estimation methods
    » simulation methods
◆ Ongoing Activities
  – inventory of existing tools
  – spatial econometrics in (fill in the blanks)
    » toolboxes: S-Plus/R, Matlab/Octave, Gauss/Ox …
    » XLispStat
    » access through CSISS Web Site

Spatial Data Analysis in XLispStat
◆ ACE492SE Class Projects
  – estimation of models with spatial heterogeneity
  – diagnostics for spatial autocorrelation
  – maximum likelihood estimation
  – IV/GMM estimation
◆ The “Template”
  – bare-bones spatial econometric functionality
    » model specifications: lag, error, regimes
    » diagnostics: Moran’s I, LM tests
    » estimation: ML, GM, GMM

Dynamic ESDA for GIS
◆ Goal
  – add ESDA functionality to a GIS through dynamically linked windows
    » ESRI industrial partner
    » MapObjects, ArcView/ArcInfo 8 extension
◆ Ongoing Activities
  – port DynESDA ArcView extension
    » self-contained using MapObjects, COM
    » cross-platform, building on STARS (Python/Tkinter)
  – add functionality to DynESDA
    » linking and brushing multiple maps
    » LISA maps, hot spot detection, inequality measures

DynESDA2 Design
◆ Map as One of the Views
  – no longer ArcView driven
  – MapObjects for mapping functionality
  – multiple maps linked
  – transparent selection identifier
◆ Modular Design
  – modules for statistical graphics
  – modules for mapping functions
  – linked through common bitmap

New Functionality
◆ More Linking and Brushing
  – linking and brushing of both statistical graphs and maps (brushing maps)
  – linking multiple maps
◆ ESDA Functions
  – basic features same as DynESDA ArcView extension
  – built-in box maps, percentile maps
  – LISA maps

STARS
◆ Space-Time Analysis of Regional Systems (Sergio Rey)
  – open source (Python and Tkinter)
  – dynamically linked windows
  – focus on evolution of regional disparities
  – easily extensible to encompass other ESDA techniques (Moran scatterplot, etc.)

Open SpaceStat Project
◆ Goal
  – develop a collection of open spatial data analysis modules that incorporate state-of-the-art methods
    » moving target requires an open environment
◆ Design
  – cross-platform using Python + Numpy
    » Linux, Windows, Mac
  – common kernel, common user interface
  – develop a collection of components
    » all the basic techniques
    » open design allows for high-end users/programmers

Open SpaceStat Project (2)
◆ Functionality
  – ESDA
    » spatial autocorrelation statistics
    » visualization: dynamically linked windows
  – spatial econometrics
    » regression diagnostics, spatial regression
    » space-time regression
    » spatial probit

Open SpaceStat Project (3)
◆ COM-Centered Efforts
  – develop a modular collection of software components for spatial data analysis
    » DynESDA based on MapObjects/ArcObjects
    » specialized spatial econometric functionality as COM components
◆ Open Source Efforts
  – develop an open-source successor to SpaceStat
    » Python and Numpy
    » Tkinter for interface and graphics
    » open-source toolboxes as prototypes (XLispStat, R, Ox, Octave, …)

SpaceStat Web Project
◆ Goal
  – delivery of spatial data analysis via the web
  – ESDA and visualization
◆ Functionality
  – building on ArcIMS/MapObjects IMS
  – spatial analysis of rates
    » visualization of hot spots, clusters
    » smoothing of rates
    » LISA
  – point pattern analysis

Implementation Issues
◆ Interoperability
  – moving data back and forth
  – spatial data models
    » points, distances: can use the standard model
    » weights, topology: requires GIS-like functionality
  – sparseness of spatial weights
    » efficient routines for handling very sparse matrices (Cholesky decomposition, eigenvalues, etc.)
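The sparse-weights point above is concrete enough to sketch. A contiguity weights matrix for n regions is n x n but has only a handful of nonzeros per row, so storing it sparsely is what makes large-n ESDA feasible. The sketch below, in the project's stated Python + Numpy environment, builds rook-contiguity weights for a regular grid, row-standardizes them, and computes a spatial lag without ever forming a dense matrix. The use of scipy.sparse and the grid example are assumptions of this illustration; the slides name only Python + Numpy.

```python
# Sketch: sparse storage of spatial weights.  scipy.sparse is an
# assumption here (not named in the slides); the grid is illustrative.
import numpy as np
from scipy import sparse

def grid_rook_weights(k):
    """Binary rook-contiguity weights for a k x k regular grid (CSR)."""
    n = k * k
    rows, cols = [], []
    for i in range(n):
        r, c = divmod(i, k)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < k and 0 <= cc < k:
                rows.append(i)
                cols.append(rr * k + cc)
    data = np.ones(len(rows))
    return sparse.csr_matrix((data, (rows, cols)), shape=(n, n))

def row_standardize(W):
    """Scale each row of W to sum to one (the usual W in ESDA)."""
    row_sums = np.asarray(W.sum(axis=1)).ravel()
    return sparse.diags(1.0 / row_sums) @ W

k = 100                      # 10,000 observations, ~40,000 nonzeros
W = row_standardize(grid_rook_weights(k))
y = np.random.default_rng(0).standard_normal(k * k)
wy = W @ y                   # spatial lag Wy, no dense n x n matrix needed
```

A dense representation of the same W would hold 10^8 floats; the sparse one holds about 4 x 10^4, which is what makes the "efficient routines" for Cholesky factors and eigenvalues of (I - rho*W) practical at realistic sample sizes.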
◆ Hooks
  – for COM-compliant stat/econ software, the addition of specialized modules is relatively straightforward
  – ArcGIS 8.0 is COM-compliant, avoiding the need for GIS functionality in the stats package

DynESDA2
Spatial Analysis with Dynamically Linked Windows
Luc Anselin, Ibnu Syabri, Yanqui Ren
University of Illinois at Urbana-Champaign
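The diagnostic that recurs throughout the deck, from the XLispStat "template" to the DynESDA LISA maps, is Moran's I. A minimal self-contained sketch in the Open SpaceStat environment of choice (Python + Numpy) is given below, with the permutation-based pseudo p-value commonly used for ESDA inference. The five-region toy weights matrix and all variable names are invented for illustration; this is not SpaceStat's actual code.

```python
# Sketch: global Moran's I with permutation inference (Python + Numpy).
# The toy weights matrix below is illustrative only.
import numpy as np

def morans_i(y, W):
    """Global Moran's I: (n / S0) * (z'Wz / z'z), z = deviations from mean.

    S0 is the sum of all weights; S0 = n for row-standardized W.
    """
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    return (len(y) / W.sum()) * (z @ W @ z) / (z @ z)

def permutation_pvalue(y, W, reps=999, seed=12345):
    """Pseudo p-value from random spatial permutations of y."""
    rng = np.random.default_rng(seed)
    observed = morans_i(y, W)
    draws = np.array([morans_i(rng.permutation(y), W) for _ in range(reps)])
    extreme = np.sum(draws >= observed) if observed > draws.mean() \
        else np.sum(draws <= observed)
    return observed, (extreme + 1) / (reps + 1)

# Toy example: 5 areas on a line, rook neighbors, row-standardized.
B = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
W = B / B.sum(axis=1, keepdims=True)
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # monotone, so positively autocorrelated
I, p = permutation_pvalue(y, W)
```

The LISA maps mentioned for DynESDA decompose the same cross-product into one local statistic per region (z_i times the spatial lag of z at i), so the code above is the natural building block for them.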
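The "estimation: ml" item for the spatial lag specification y = rho*W*y + X*beta + e can also be sketched. The standard route concentrates beta and sigma^2 out of the likelihood and optimizes over rho alone, with the log-Jacobian ln|I - rho*W| computed from the eigenvalues of W, one of the "efficient routines" the implementation-issues slide alludes to. The sketch below is a generic textbook version under that assumption, not SpaceStat's implementation; scipy.optimize and the simulated data are likewise assumptions of this illustration.

```python
# Sketch: concentrated ML for the spatial lag model y = rho*W*y + X*b + e.
# Generic illustration; scipy.optimize is an assumption, data are simulated.
import numpy as np
from scipy.optimize import minimize_scalar

def lag_model_ml(y, X, W):
    """Concentrated ML for the spatial lag model; returns (rho, beta)."""
    n = len(y)
    eigs = np.linalg.eigvals(W)          # ln|I - rho*W| = sum ln(1 - rho*lam)
    XtXinv = np.linalg.inv(X.T @ X)
    Wy = W @ y

    def neg_concentrated_loglik(rho):
        e = y - rho * Wy
        beta = XtXinv @ (X.T @ e)        # OLS of (y - rho*Wy) on X
        u = e - X @ beta
        sigma2 = (u @ u) / n
        logdet = np.sum(np.log(1.0 - rho * eigs)).real
        return -(logdet - 0.5 * n * np.log(sigma2))

    lam_min, lam_max = eigs.real.min(), eigs.real.max()
    res = minimize_scalar(neg_concentrated_loglik,
                          bounds=(1.0 / lam_min + 1e-6, 1.0 / lam_max - 1e-6),
                          method="bounded")
    rho = res.x
    beta = XtXinv @ (X.T @ (y - rho * Wy))
    return rho, beta

# Simulated check: 100 regions on a line, true rho = 0.5, beta = (1, 2).
n = 100
B = np.eye(n, k=1) + np.eye(n, k=-1)
W = B / B.sum(axis=1, keepdims=True)
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = np.linalg.solve(np.eye(n) - 0.5 * W,
                    X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(n))
rho_hat, beta_hat = lag_model_ml(y, X, W)
```

For the very sparse weights discussed under implementation issues, the dense eigenvalue call would be replaced by sparse log-determinant updates (e.g. via Cholesky factors of I - rho*W), which is exactly the interoperability burden the slide describes.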