Some General Results


© Springer Nature Switzerland AG 2019. G. A. F. Seber and M. R. Schofield, Capture-Recapture: Parameter Estimation for Open Animal Populations, Statistics for Biology and Health. https://doi.org/10.1007/978-3-030-18187-1

A.1 Multinomial Distribution

Consider the following multinomial distribution coming from $n$ independent multinomial trials, with $x_i$ the number of counts occurring in the cell with probability $p_i$ ($i = 1, 2, \ldots, k$), namely

$$
\Pr[\{x_i\}] = \frac{n!}{\prod_{i=1}^{k} x_i!} \prod_{i=1}^{k} p_i^{x_i}
= \binom{n}{x_1, x_2, \ldots, x_k} \prod_{i=1}^{k} p_i^{x_i},
\qquad \sum_{i=1}^{k} x_i = n, \quad \sum_{i=1}^{k} p_i = 1. \tag{A.1}
$$

Here $\Pr[\{x_i\}]$ stands for $\Pr[\{x_i\}] = \Pr[X_1 = x_1, X_2 = x_2, \ldots, X_k = x_k]$, the "singular" (and symmetrical) version of the distribution, as we need the identifiability constraint $\sum_{i=1}^{k} p_i = 1$. We can express this distribution briefly as the singular distribution $(x_1, x_2, \ldots, x_k) \sim \text{Multinomial}(n, \mathbf{p})$.

We obtain the nonsingular distribution by writing $p_k = 1 - \sum_{i=1}^{k-1} p_i$ and $x_k = n - \sum_{i=1}^{k-1} x_i$. In this respect, the notations for the singular and nonsingular distributions are sometimes confused in the literature, especially the definition

$$
\binom{n}{\{x_i\}} = \frac{n!}{\prod_{i=1}^{k} x_i!}.
$$

The binomial distribution, written Binomial$(n, p)$, has $k = 2$, and the Bernoulli distribution, written Bernoulli$(x, p)$, is Binomial$(1, p)$ for discrete random $x$ taking the values 1 and 0.

A.1.1 Some Properties

By adding appropriate cells together we see that the marginal distribution of any subset of a multinomial distribution is also multinomial, with the appropriate $p_i$'s added together. If we finally end up with just two pooled cells we have the binomial distribution. We now show how a nonsingular multinomial distribution can be represented by the product of conditional binomial distributions. To do this we note first that if $x_1$ and $x_2$ have a multinomial distribution and $x_1$ has a binomial distribution, we find that

$$
\Pr[x_2 \mid x_1] = \frac{\Pr[x_2, x_1]}{\Pr[x_1]}
= \binom{n - x_1}{x_2} \left(\frac{p_2}{1 - p_1}\right)^{x_2} \left(1 - \frac{p_2}{1 - p_1}\right)^{n - x_1 - x_2}.
$$

Now

$$
\Pr[x_1, x_2, \ldots, x_k] = \Pr[x_1] \Pr[x_2 \mid x_1] \Pr[x_3 \mid x_1, x_2] \cdots \Pr[x_k \mid x_1, x_2, \ldots, x_{k-1}],
$$

and we have shown that for $k = 2$ we have

$$
\Pr[x_1, x_2] = \Pr[x_1] \Pr[x_2 \mid x_1],
$$

where both distributions are binomial. We can then show by induction that the factorization of $\Pr[x_1, x_2, \ldots, x_k]$ gives a product of conditional binomial distributions.

We now consider two useful techniques applied by Robson and Youngs (1971) and referred to as "peeling" and "pooling" by Burnham (1991) in using multinomial distributions.

A.1.2 Peeling Process

The peeling process with multinomial distributions can be described as follows. Suppose we wish to peel off the probability distribution of $X_1$. If $\sum_{i=2}^{k} x_i = x$ and $\sum_{i=2}^{k} p_i = 1 - p_1 = q_1$, then

$$
\begin{aligned}
\Pr\Big[X_1 = x_1, X_2 = x_2, \ldots, X_k = x_k \text{ with } \sum_{i=1}^{k} x_i = n\Big]
&= \Pr[X_1 = x_1] \Pr[X_2 = x_2, \ldots, X_k = x_k \mid X_1 = x_1] \\
&= \Pr[X_1 = x_1] \Pr\Big[X_2 = x_2, \ldots, X_k = x_k \text{ with } \sum_{i=2}^{k} x_i = n - x_1\Big] \\
&= \binom{n}{x_1} p_1^{x_1} q_1^{\,n - x_1} \binom{n - x_1}{x_2, x_3, \ldots, x_k} \prod_{i=2}^{k} \left(\frac{p_i}{q_1}\right)^{x_i}.
\end{aligned}
$$
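The peeling factorization can be verified numerically for any particular set of counts. The following is a minimal sketch, assuming SciPy is available; the values of $n$, the cell probabilities, and the particular counts are arbitrary illustrative choices rather than anything prescribed by the text.

```python
# Numerical check of the peeling identity in Sect. A.1.2 (illustrative values).
import numpy as np
from scipy.stats import binom, multinomial

n = 10
p = np.array([0.2, 0.3, 0.4, 0.1])   # k = 4 cell probabilities (sum to 1)
x = np.array([2, 3, 4, 1])           # one realisation with sum(x) = n

# Left-hand side: the full singular multinomial probability Pr[{x_i}].
lhs = multinomial.pmf(x, n, p)

# Right-hand side: peel off X1 as Binomial(n, p1), then a multinomial on the
# remaining n - x1 trials with renormalised probabilities p_i / q1.
q1 = 1.0 - p[0]
rhs = binom.pmf(x[0], n, p[0]) * multinomial.pmf(x[1:], n - x[0], p[1:] / q1)

print(np.isclose(lhs, rhs))          # True: the two factorisations agree
```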
A.1.3 Pooling Process

The pooling process begins with two independent singular multinomial distributions with the same number of cells and the same cell probabilities, namely

$$
\Pr[\{x_i\}] = \binom{n_1}{x_1, x_2, \ldots, x_k} \prod_{i=1}^{k} p_i^{x_i}, \qquad \sum_{i=1}^{k} x_i = n_1,
$$

and

$$
\Pr[\{y_i\}] = \binom{n_2}{y_1, y_2, \ldots, y_k} \prod_{i=1}^{k} p_i^{y_i}, \qquad \sum_{i=1}^{k} y_i = n_2.
$$

If we "add" the two distributions together (the convolution) we get

$$
\Pr[\{x_i + y_i\}] = \binom{n_1 + n_2}{x_1 + y_1, \ldots, x_k + y_k} \prod_{i=1}^{k} p_i^{x_i + y_i},
\qquad \sum_{i=1}^{k} (x_i + y_i) = n_1 + n_2.
$$

This can be proved using moment generating functions, as the moment generating function of the sum is the product of the two moment generating functions, or simply by observing that we have $n_1 + n_2$ multinomial trials with the same set of probabilities $\{p_i\}$. Now, because of the independence, we have

$$
\begin{aligned}
\Pr[\{x_i\}] \Pr[\{y_i\}]
&= \binom{n_1}{x_1, x_2, \ldots, x_k} \binom{n_2}{y_1, y_2, \ldots, y_k} \prod_{i=1}^{k} p_i^{x_i + y_i} \\
&= \binom{n_1 + n_2}{x_1 + y_1, x_2 + y_2, \ldots, x_k + y_k} \prod_{i=1}^{k} p_i^{x_i + y_i}
\times \binom{n_1}{x_1, x_2, \ldots, x_k} \binom{n_2}{y_1, y_2, \ldots, y_k} \bigg/ \binom{n_1 + n_2}{x_1 + y_1, x_2 + y_2, \ldots, x_k + y_k},
\end{aligned}
$$

that is, the product of the two probability distributions is the product of the distribution of their sum times a hypergeometric distribution.

A.1.4 Conditional Distribution

Suppose we have a nonsingular distribution

$$
\Pr[x_1, x_2] = \frac{n!}{x_1! \, x_2! \, (n - x_1 - x_2)!} \, p_1^{x_1} p_2^{x_2} (1 - p_1 - p_2)^{n - x_1 - x_2}.
$$

If $y = x_1 + x_2$, then $y$ has probability function

$$
\Pr[y] = \binom{n}{y} (p_1 + p_2)^{y} (1 - p_1 - p_2)^{n - y}
$$

and

$$
\Pr[x_1 \mid y] = \frac{\Pr[x_1, y]}{\Pr[y]} = \frac{\Pr[x_1, x_2]}{\Pr[y]}
= \binom{y}{x_1} \left(\frac{p_1}{p_1 + p_2}\right)^{x_1} \left(\frac{p_2}{p_1 + p_2}\right)^{x_2},
$$

which is a binomial distribution.

A.2 Delta Method

We consider general ideas only, without getting too involved with technical details about limits (see also Agresti (2013: Sect. 16.1)). Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$, and let $Y = g(X)$ be a "well-behaved" function of $X$ that has a Taylor expansion

$$
g(X) - g(\mu) = (X - \mu)\, g'(\mu) + \tfrac{1}{2} (X - \mu)^2 g''(X_0),
$$

where $X_0$ lies between $X$ and $\mu$, $g'(\mu)$ is the derivative of $g$ evaluated at $X = \mu$, and $g''(X_0)$ is the second derivative of $g$ evaluated at $X = X_0$. Then, taking expected values,

$$
\mathrm{E}[g(X)] \approx g(\mu) + \tfrac{1}{2} \sigma^2 g''(\mu). \tag{A.2}
$$

Assuming second-order terms can be neglected, we have $\mathrm{E}[Y] \approx g(\mu)$ and

$$
\operatorname{var}(Y) \approx \mathrm{E}[(g(X) - g(\mu))^2]
\approx \mathrm{E}[(X - \mu)^2]\,[g'(\mu)]^2 = \sigma^2 [g'(\mu)]^2. \tag{A.3}
$$

For example, if $g(X) = \log X$ then, for large $\mu$,

$$
\operatorname{var}(\log X) \approx \frac{\sigma^2}{\mu^2}. \tag{A.4}
$$

If $\mathbf{X} = (X_1, X_2, \ldots, X_k)$ is a vector with mean $\boldsymbol{\mu}$, then for suitable $g$ we have the first-order Taylor expansion

$$
Y = g(\mathbf{X}) - g(\boldsymbol{\mu}) \approx \sum_{i=1}^{k} (X_i - \mu_i)\, g_i(\boldsymbol{\mu}), \tag{A.5}
$$

where $g_i(\boldsymbol{\mu})$ is $\partial g / \partial X_i$ evaluated at $\mathbf{X} = \boldsymbol{\mu}$. Then

$$
\begin{aligned}
\operatorname{var}[Y] &\approx \mathrm{E}[(g(\mathbf{X}) - g(\boldsymbol{\mu}))^2] \\
&\approx \mathrm{E}\Bigg[\sum_{i=1}^{k} \sum_{j=1}^{k} (X_i - \mu_i)(X_j - \mu_j)\, g_i(\boldsymbol{\mu})\, g_j(\boldsymbol{\mu})\Bigg] \\
&= \sum_{i=1}^{k} \sum_{j=1}^{k} \operatorname{cov}[X_i, X_j]\, g_i(\boldsymbol{\mu})\, g_j(\boldsymbol{\mu}). \tag{A.6}
\end{aligned}
$$

A "quick and dirty" method for a product or ratio of two variables is as follows. If $Y = X_1 / X_2$, then taking logs and differentials we get

$$
\frac{\delta Y}{Y} = \frac{\delta X_1}{X_1} - \frac{\delta X_2}{X_2}.
$$

Squaring and taking expected values gives us

$$
\operatorname{var}\left[\frac{X_1}{X_2}\right] \approx \mu_y^2 \left( \frac{\operatorname{var}[X_1]}{\mu_1^2} + \frac{\operatorname{var}[X_2]}{\mu_2^2} - \frac{2 \operatorname{cov}[X_1, X_2]}{\mu_1 \mu_2} \right), \tag{A.7}
$$

where $\mu_y \approx \mu_1 / \mu_2$. For a product $X_1 X_2$ we simply replace the minus sign by a plus sign and $\mu_y$ by $\mu_1 \mu_2$.
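Approximation (A.7) is easy to check by simulation. The minimal sketch below assumes NumPy and uses a bivariate normal pair with arbitrary illustrative means and covariance matrix, chosen so that the denominator stays well away from zero.

```python
# Monte Carlo check of the "quick and dirty" ratio formula (A.7); the means,
# covariance matrix, seed, and replicate count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
mean = np.array([50.0, 100.0])
cov = np.array([[4.0, 1.5],
                [1.5, 9.0]])          # var[X1], var[X2] and cov[X1, X2]
x1, x2 = rng.multivariate_normal(mean, cov, size=200_000).T

empirical = (x1 / x2).var()           # simulated var[X1/X2]

mu_y = mean[0] / mean[1]
formula = mu_y**2 * (cov[0, 0] / mean[0]**2
                     + cov[1, 1] / mean[1]**2
                     - 2 * cov[0, 1] / (mean[0] * mean[1]))   # (A.7)

print(empirical, formula)             # both approximately 4.8e-4 here
```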
Sometimes we wish to derive asymptotic variances and covariances of parameters using Taylor expansions and the delta method. This can be very onerous, but there are a few shortcuts or "rules" that have been brought together by Seber (1967) and Jolly (1965) that we now mention.

A.2.1 Application to the Multinomial Distribution

Suppose $\mathbf{X}$ has the multinomial distribution given by (A.1) and

$$
g(\mathbf{X}) = \frac{X_1 X_2 \cdots X_r}{X_{r+1} X_{r+2} \cdots X_s} \qquad (s \le k).
$$

Then, using (A.5),

$$
\frac{g(\mathbf{X}) - g(\boldsymbol{\mu})}{g(\boldsymbol{\mu})} \approx \sum_{i=1}^{r} \frac{X_i - \mu_i}{\mu_i} - \sum_{i=r+1}^{s} \frac{X_i - \mu_i}{\mu_i}.
$$

Now squaring the above equation, taking expected values, and using $\mu_i = n p_i$ and $\sigma_i^2 = n p_i (1 - p_i)$, we have

$$
\frac{\operatorname{var}[X_i]}{\mu_i^2} = \frac{n p_i (1 - p_i)}{n^2 p_i^2} = \frac{1}{n p_i} - \frac{1}{n}
$$

and

$$
\frac{\operatorname{cov}[X_i, X_j]}{\mu_i \mu_j} = \frac{-n p_i p_j}{n^2 p_i p_j} = -\frac{1}{n}.
$$

We now have an expression like (A.6) involving three sets of covariances (except when $s = r + 1$), so that

$$
\operatorname{var}[g(\mathbf{X})] \approx [g(\boldsymbol{\mu})]^2 \left\{ \sum_{i=1}^{s} \mu_i^{-1} - \frac{s}{n} - \frac{2}{n} \left[ \binom{r}{2} + \delta \binom{s-r}{2} - r(s - r) \right] \right\},
$$

where $\delta$ is 1 when $s - r \ge 2$ and zero when $s = r + 1$. Then (Seber 1982: 8–9),

$$
\operatorname{var}[g(\mathbf{X})] \approx \frac{[g(\boldsymbol{\mu})]^2}{n} \left[ \sum_{i=1}^{s} p_i^{-1} - (s - 2r)^2 \right], \tag{A.8}
$$

for all $s > r$.

Another way of describing the above method is, as in (A.7) above, to take logarithms and then differentials, so that

$$
\log g(\mathbf{X}) = \sum_{i=1}^{r} \log X_i - \sum_{i=r+1}^{s} \log X_i
$$

and

$$
\frac{\delta g(\mathbf{X})}{g(\mathbf{X})} = \sum_{i=1}^{r} \frac{\delta X_i}{X_i} - \sum_{i=r+1}^{s} \frac{\delta X_i}{X_i}.
$$

We then square both sides and take expected values. This is the approach used by Cormack (1993a, b), for example, for log-linear models. Two multinomial cases of interest in this monograph are $s = 2$, $r = 1$ and $s = 4$, $r = 2$.
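For the first of these cases, $s = 2$ and $r = 1$ (so that $g(\mathbf{X}) = X_1/X_2$), the approximation (A.8) can be checked by simulation. The sketch below assumes NumPy; the sample size, cell probabilities, seed, and number of replicates are arbitrary illustrative choices.

```python
# Monte Carlo check of (A.8) for g(X) = X1/X2 (s = 2, r = 1); illustrative values.
import numpy as np

rng = np.random.default_rng(2)
n = 500
p = np.array([0.3, 0.5, 0.2])                  # k = 3 cells; g uses cells 1 and 2
counts = rng.multinomial(n, p, size=100_000)   # replicate multinomial samples

g = counts[:, 0] / counts[:, 1]                # g(X) = X1 / X2
empirical = g.var()                            # simulated var[g(X)]

g_mu = p[0] / p[1]                             # g(mu) = (n p1) / (n p2)
formula = g_mu**2 / n * (1.0 / p[0] + 1.0 / p[1] - (2 - 2 * 1)**2)   # (A.8)

print(empirical, formula)                      # the two should agree closely
```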