The FACTOR Procedure. This document is an individual chapter from the SAS/STAT® 14.1 User's Guide.
Total pages: 16
File type: PDF, size: 1,020 KB
Recommended publications
Administering UniData on UNIX Platforms
UniData: Administering UniData on UNIX Platforms, UDT-720-ADMU-1.
Notices: edition publication date July 2008; book number UDT-720-ADMU-1; product version UniData 7.2. Copyright © Rocket Software, Inc. 1988-2010. All Rights Reserved.
Trademarks appearing in this publication: Rocket Software™, Dynamic Connect®, RedBack®, SystemBuilder™, UniData®, UniVerse™, U2™, U2.NET™, U2 Web Development Environment™, and wIntegrate® (Rocket Software, Inc.); Microsoft® .NET, Microsoft® Office Excel®, Outlook®, Word, Windows®, Windows® 7, and Windows Vista® (Microsoft Corporation); Java™ and all Java-based trademarks and logos (Sun Microsystems, Inc.); and UNIX® (X/Open Company Limited). The above trademarks are property of the specified companies in the United States, other countries, or both. All other products or services mentioned in this document may be covered by the trademarks, service marks, or product names as designated by the companies who own or market them.
License agreement: this software and the associated documentation are proprietary and confidential to Rocket Software, Inc., are furnished under license, and may be used and copied only in accordance with the terms of such license and with the inclusion of the copyright notice.
LaTeX for Beginners
LaTeX for Beginners: Workbook, Edition 5, March 2014. Document reference: 3722-2014.
Preface: This is an absolute beginner's guide to writing documents in LaTeX using TeXworks. It assumes no prior knowledge of LaTeX, or any other computing language. This workbook is designed to be used at the 'LaTeX for Beginners' student iSkills seminar, and also for self-paced study. Its aim is to introduce an absolute beginner to LaTeX and teach the basic commands, so that they can create a simple document and find out whether LaTeX will be useful to them. If you require this document in an alternative format, such as large print, please email [email protected]. Copyright © IS 2014. Permission is granted to any individual or institution to use, copy or redistribute this document whole or in part, so long as it is not sold for profit and provided that the above copyright notice and this permission notice appear in all copies. Where any part of this document is included in another document, due acknowledgement is required.
Contents: 1 Introduction (1.1 What is LaTeX?; 1.2 Before You Start); 2 Document Structure (2.1 Essentials; 2.2 Troubleshooting; 2.3 Creating a Title; 2.4 Sections; 2.5 Labelling; 2.6 Table of Contents); 3 Typesetting Text (3.1 Font Effects; 3.2 Coloured Text; 3.3 Font Sizes; 3.4 Lists; 3.5 Comments & Spacing; 3.6 Special Characters); 4 Tables (4.1 Practical ...)
Factor — Factor Analysis
factor — Factor analysis (stata.com). Entry sections: Description; Quick start; Menu; Syntax; Options for factor and factormat; Options unique to factormat; Remarks and examples; Stored results; Methods and formulas; References; Also see.
Description: factor and factormat perform a factor analysis of a correlation matrix. The commands produce principal factor, iterated principal factor, principal-component factor, and maximum-likelihood factor analyses. factor and factormat display the eigenvalues of the correlation matrix, the factor loadings, and the uniqueness of the variables. factor expects data in the form of variables, allows weights, and can be run for subgroups. factormat is for use with a correlation or covariance matrix.
Quick start:
• Principal-factor analysis using variables v1 to v5: factor v1 v2 v3 v4 v5
• As above, but retain at most 3 factors: factor v1-v5, factors(3)
• Principal-component factor analysis using variables v1 to v5: factor v1-v5, pcf
• Maximum-likelihood factor analysis: factor v1-v5, ml
• As above, but perform 50 maximizations with different starting values: factor v1-v5, ml protect(50)
• As above, but set the seed for reproducibility: factor v1-v5, ml protect(50) seed(349285)
• Principal-factor analysis based on a correlation matrix cmat with a sample size of 800: factormat cmat, n(800)
• As above, retain only factors with eigenvalues greater than or equal to 1: factormat cmat, n(800) mineigen(1)
Menu: factor: Statistics > Multivariate analysis > Factor and principal component analysis > Factor analysis. factormat: Statistics > Multivariate analysis > Factor and principal component analysis > Factor analysis of a correlation matrix.
Syntax: factor analysis of data: factor varlist [if] [in] [weight] [, method options]. Factor analysis of a correlation matrix: factormat matname, n(#) [method options factormat_options]. matname is a square Stata matrix or a vector containing the rowwise upper or lower triangle of the correlation or covariance matrix.
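For readers working outside Stata, here is a minimal NumPy sketch of the principal-component factor method applied to a correlation matrix, producing the quantities factor reports: eigenvalues, loadings, and uniquenesses. The 3-variable correlation matrix and the choice to retain two factors are illustrative assumptions, not taken from the manual.

    import numpy as np

    # Illustrative 3-variable correlation matrix (assumed data, not from the manual)
    R = np.array([[1.00, 0.60, 0.30],
                  [0.60, 1.00, 0.40],
                  [0.30, 0.40, 1.00]])

    # Principal-component factor method: eigendecomposition of R
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]                  # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    k = 2                                              # number of factors to retain (assumption)
    loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])   # factor loadings
    communality = (loadings ** 2).sum(axis=1)
    uniqueness = 1.0 - communality                     # uniqueness of each variable

    print("eigenvalues:", np.round(eigvals, 3))
    print("loadings:\n", np.round(loadings, 3))
    print("uniqueness:", np.round(uniqueness, 3))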
A New European Slope Length and Steepness Factor (LS-Factor) for Modeling Soil Erosion by Water
Geosciences 2015, 5, 117-126; doi:10.3390/geosciences5020117 (ISSN 2076-3263, open access, www.mdpi.com/journal/geosciences). Short Communication: A New European Slope Length and Steepness Factor (LS-Factor) for Modeling Soil Erosion by Water. Panos Panagos 1,*, Pasquale Borrelli 1,† and Katrin Meusburger 2,†. 1 European Commission, Joint Research Centre, Institute for Environment and Sustainability, Via Enrico Fermi 2749, I-21027 Ispra (VA), Italy; E-Mail: [email protected]. 2 Environmental Geosciences, University of Basel, Bernoullistrasse 30, 4056 Basel, Switzerland; E-Mail: [email protected]. † These authors contributed equally to this work. * Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +39-0332-785574; Fax: +39-0332-786394. Academic Editor: Jesus Martinez-Frias. Received: 24 December 2014 / Accepted: 23 March 2015 / Published: 3 April 2015.
Abstract: The Universal Soil Loss Equation (USLE) model is the most frequently used model for soil erosion risk estimation. Among the six input layers, the combined slope length and slope angle (LS-factor) has the greatest influence on soil loss at the European scale. The S-factor measures the effect of slope steepness, and the L-factor defines the impact of slope length. The combined LS-factor describes the effect of topography on soil erosion. The European Soil Data Centre (ESDAC) developed a new pan-European high-resolution soil erosion assessment to achieve a better understanding of the spatial and temporal patterns of soil erosion in Europe. The LS-calculation was performed using the original equation proposed by Desmet and Govers (1996) and implemented using the System for Automated Geoscientific Analyses (SAGA), which incorporates a multiple flow algorithm and contributes to a precise estimation of flow accumulation.
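As a rough illustration of the kind of per-cell calculation the abstract refers to, here is a hedged NumPy sketch of an LS computation in the spirit of Desmet and Govers (1996). The exponent formula, the piecewise S-factor, the 25 m cell size, and the example inputs follow common RUSLE practice and are assumptions here, not details taken from the paper or from the SAGA implementation.

    import numpy as np

    def ls_factor(flow_acc_cells, slope_rad, aspect_rad, cell_size=25.0):
        """Per-cell LS sketch in the spirit of Desmet and Govers (1996).

        flow_acc_cells : upslope contributing area expressed in number of cells
        slope_rad, aspect_rad : slope and aspect in radians
        cell_size : grid resolution in metres (assumed value)
        """
        # Contributing area (m^2) draining into the cell
        A_in = flow_acc_cells * cell_size ** 2

        # Rill/interrill exponent m, as commonly used with RUSLE (assumption)
        beta = (np.sin(slope_rad) / 0.0896) / (3.0 * np.sin(slope_rad) ** 0.8 + 0.56)
        m = beta / (1.0 + beta)

        # Aspect correction x = |sin(aspect)| + |cos(aspect)|
        x = np.abs(np.sin(aspect_rad)) + np.abs(np.cos(aspect_rad))

        # Slope-length factor L (Desmet and Govers form)
        L = ((A_in + cell_size ** 2) ** (m + 1) - A_in ** (m + 1)) / (
            cell_size ** (m + 2) * x ** m * 22.13 ** m)

        # Slope-steepness factor S (piecewise RUSLE form, assumption)
        S = np.where(np.tan(slope_rad) < 0.09,
                     10.8 * np.sin(slope_rad) + 0.03,
                     16.8 * np.sin(slope_rad) - 0.50)
        return L * S

    # Example: one cell with 40 upslope cells, a 6-degree slope, north-east aspect
    print(ls_factor(40.0, np.radians(6.0), np.radians(45.0)))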
ADAPS: Chapter 5. Command Line Functions, by Barbara C. Hoopes and James F. Cornwall
ADAPS: Chapter 5. Command Line Functions, by Barbara C. Hoopes and James F. Cornwall. This chapter describes ADAPS command line functions. These are functions that are executed from the UNIX command line instead of from ADAPS menus, and that may be run manually or by automated means such as "cron" jobs. Most of these functions are NOT accessible from the ADAPS menus. These command line functions are described in detail below.
5.1 Hydra. Although Hydra is available from ADAPS at the PR sub-menu, Edit Time Series Data using Hydra (TS_EDIT), it can also be started from the command line. However, to start Hydra outside of ADAPS, a DV or UV RDB file needs to be available to edit. The command is "hydra rdb_file_name". For a complete description of using Hydra, refer to Section 4.5.2, Edit Time-Series Data using Hydra (TS_EDIT).
5.2 nwrt2rdb. This command is used to output rating information in RDB format. It writes RDB files with a table containing the rating equation parameters or the rating point pairs, with all other information contained in the RDB comments. The following arguments can be used with this command: nwrt2rdb -ooutfile -zdbnum -aagency -nstation -dddid -trating_type -irating_id -e (output ratings in expanded form; ignored for equation ratings) -l loctzcd (time zone code or local time code "LOC") -m (multiple output files) -r (rounding suppression).
Rules:
• If -o is omitted, nwrt2rdb writes to stdout, AND arguments -n, -d, -t, and -i must be present.
• If -o is present, no other arguments are required, and the program will use ADAPS routines to prompt for them.
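Because these functions are intended to be run from scripts or cron jobs, here is a hedged sketch of a batch call to nwrt2rdb using only the arguments listed above; the station number, DD ID, rating type, and rating ID values are placeholders, not real identifiers.

    import subprocess

    # Hedged sketch: export a rating in RDB format to a file.
    # All argument values below are placeholders for illustration.
    cmd = [
        "nwrt2rdb",
        "-orating_output.rdb",   # -o: write to a file instead of stdout
        "-n01234567",            # -n: station (placeholder)
        "-d1",                   # -d: DD ID (placeholder)
        "-tSTGQ",                # -t: rating type (placeholder)
        "-i1.0",                 # -i: rating ID (placeholder)
    ]
    subprocess.run(cmd, check=True)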
Computing in the ACI Cluster
Computing in the ACI Cluster. Ben Seiyon Lee, Pennsylvania State University, Department of Statistics, October 5, 2017.
Outline: 1 High Performance Computing; 2 Accessing ACI; 3 Basic Unix Commands; 4 Navigation and File Creation; 5 SCP clients; 6 PBS scripts; 7 Running PBS scripts; 8 Parallelization; 9 Best Practices.
High Performance Computing: large number of processors, large memory requirements, large storage requirements, long runtimes. ACI-B (Batch): log in to a head node and submit jobs to compute nodes; groups can purchase allocations or use the open queue. Intel Xeon E5-2680 v2, 2.8 GHz, 256 GB RAM, 20 cores per node; the Statistics Department has 5 nodes (20 processors per node).
Accessing ACI: sign up for an account (ICS-ACI account sign-up) and set up 2-factor authentication. Mac: open Terminal, ssh into ACI with "ssh <username>@aci-b.aci.ics.psu.edu", and complete 2-factor authentication. Windows: open PuTTY, enter aci-b.aci.ics.psu.edu in the Host Name field, select SSH, then X11, and enable X11 forwarding; select Connection, then Data, and enter your username in the Auto-login username field.
Unix commands: change directories: cd; home directory: cd; current directory: cd .; up one directory: cd ..; all files in the directory: ls *; wildcards: Test*, *.png; send output to another command: |; write command output to a file: ls > log.txt. Create a directory: mkdir (cd ~/work; mkdir Workshop; mkdir WorkshopB; ls). Remove a directory: rmdir (rmdir WorkshopB; ls; mkdir WorkshopB).
Move files: mv (mv file1.txt ./WorkshopB/; mv ../WorkshopB/file1.txt ./WorkshopB/file2.txt). Copy files: cp (cp ../WorkshopB/file1.txt ../WorkshopB/file2.txt). Remove files: rm (rm file1.txt; rm -r WorkshopB). Access the manual for a command: man (man rm). List files: ls (ls; ls ~/work/Workshop). Print the current directory: pwd. Past commands: history. Manage permissions for a file: chmod (chmod u=rwx,g=rwx,o=rwx file1.txt).
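The outline closes with parallelization; as a minimal illustration of the single-node parallelism a 20-core ACI node supports, here is a hedged Python multiprocessing sketch. The toy replicate function and the core-count cap are assumptions, not material from the workshop slides.

    from multiprocessing import Pool
    import os
    import random

    def one_replicate(seed):
        """Toy task standing in for one simulation replicate (assumption)."""
        rng = random.Random(seed)
        return sum(rng.random() for _ in range(100_000))

    if __name__ == "__main__":
        n_cores = min(20, os.cpu_count() or 1)   # one ACI node has 20 cores
        with Pool(processes=n_cores) as pool:
            results = pool.map(one_replicate, range(100))
        print(len(results), "replicates finished")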
Advanced Topics in Sorting
Advanced Topics in Sorting: complexity, system sorts, duplicate keys, comparators.
Complexity of sorting. Computational complexity: a framework to study the efficiency of algorithms for solving a particular problem X. Machine model: focus on fundamental operations. Upper bound: cost guarantee provided by some algorithm for X. Lower bound: proven limit on the cost guarantee of any algorithm for X. Optimal algorithm: algorithm with the best cost guarantee for X (lower bound ~ upper bound). Example: sorting.
• Machine model = # comparisons (access information only through compares).
• Upper bound = N lg N from mergesort.
• Lower bound?
Decision tree. [Figure: a decision tree for sorting three items a, b, c; each internal node is a comparison such as a < b with yes/no branches, with code between comparisons (e.g., a sequence of exchanges); each leaf is one of the six orderings.]
Comparison-based lower bound for sorting. Theorem: any comparison-based sorting algorithm must use more than N lg N - 1.44 N comparisons in the worst case. Proof sketch: assume the input consists of N distinct values a_1 through a_N.
• The worst case is dictated by the tree height h.
• There are N! different orderings, and (at least) one leaf corresponds to each ordering.
• A binary tree with N! leaves cannot have height less than lg(N!), so
  h >= lg(N!) >= lg((N/e)^N)   [Stirling's formula]
     = N lg N - N lg e
     ≈ N lg N - 1.44 N.
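A quick numerical check of the bound (a small sketch, not part of the slides): lg(N!), computed via the log-gamma function, stays close to N lg N - 1.44 N as N grows, and both are of order N lg N.

    import math

    def lg_factorial(n):
        """lg(N!) computed via the log-gamma function (exact up to floating point)."""
        return math.lgamma(n + 1) / math.log(2)

    for n in (10, 100, 1_000, 10_000):
        nlgn = n * math.log2(n)
        approx = nlgn - 1.44 * n
        print(f"N={n:>6}  lg(N!)={lg_factorial(n):12.1f}  "
              f"N lg N - 1.44N={approx:12.1f}  N lg N={nlgn:12.1f}")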
Multivariate Statistical Functions in R
Multivariate statistical functions in R. Michail T. Tsagris, [email protected], College of Engineering and Technology, American University of the Middle East, Egaila, Kuwait. Version 6.1. Athens, Nottingham and Abu Halifa (Kuwait), 31 October 2014.
Contents: 1 Mean vectors (1.1 Hotelling's one-sample T² test; 1.2 Hotelling's two-sample T² test; 1.3 Two two-sample tests without assuming equality of the covariance matrices; 1.4 MANOVA without assuming equality of the covariance matrices). 2 Covariance matrices (2.1 One-sample covariance test; 2.2 Multi-sample covariance matrices: 2.2.1 Log-likelihood ratio test, 2.2.2 Box's M test). 3 Regression, correlation and discriminant analysis (3.1 Correlation: 3.1.1 Correlation coefficient confidence intervals and hypothesis testing using Fisher's transformation, 3.1.2 Non-parametric bootstrap hypothesis testing for a zero correlation coefficient, 3.1.3 Hypothesis testing for two correlation coefficients; 3.2 Regression: 3.2.1 Classical multivariate regression, 3.2.2 k-NN regression, 3.2.3 Kernel regression, 3.2.4 Choosing the bandwidth in kernel regression in a very simple way, 3.2.5 Principal components regression, 3.2.6 Choosing the number of components in principal component regression, 3.2.7 The spatial median and spatial median regression, 3.2.8 Multivariate ridge regression; 3.3 Discriminant analysis: 3.3.1 Fisher's linear discriminant function ...)
Performance Analysis and Improvement in UNIX File System Tree Traversal
Performance Analysis and Improvement in UNIX File System Tree Traversal. Jonathan M. Smith, Computer Science Department, Columbia University, New York, New York 10027, and Bell Communications Research, 3 Corporate Place, Piscataway, New Jersey 08854. Technical Report CUCS-323-88.
Abstract: A utility program has been developed to aid UNIX system administrators in obtaining information about mounted file systems. The program gathers the information by a traversal of the accessible nodes in the file hierarchy; without kernel-recorded path names, this is the only way to dynamically determine mount-point names. The program has had three significant versions, the second and third of which were driven by performance requirements rather than functional requirements. The original version showed a factor of 7 improvement over the performance of a naive tree traversal. The second iteration showed a factor of 10 improvement over the previous version by using extra information about the structure of the file system tree to prune unnecessary branches from the traversal. The third iteration showed another factor of 3 improvement by changing the search strategy. The performance improvements depend on an analysis described in this report. Since the program's main task is traversal of a UNIX file system tree, our experience can be generalized to other such searches.
1. Introduction. The UNIX® kernel [8, 9] does not store the path name passed to system calls.
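The abstract does not spell out which structural information was used for pruning, but one classic trick of this kind relies on directory link counts: on traditional UNIX file systems, a directory's st_nlink equals 2 plus its number of subdirectories, so a traversal can stop examining a directory once that many subdirectories have been found. A hedged Python sketch of the idea (an illustration, not the paper's implementation; the starting path is an example):

    import os

    def count_dirs_pruned(root):
        """Count directories under root, pruning once the link count says
        no more subdirectories remain (valid on traditional UNIX file
        systems, which is an assumption here)."""
        total = 1
        remaining = os.lstat(root).st_nlink - 2   # expected number of subdirectories
        with os.scandir(root) as entries:
            for entry in entries:
                if remaining <= 0:
                    break                         # no subdirectories left: prune the rest
                if entry.is_dir(follow_symlinks=False):
                    total += count_dirs_pruned(entry.path)
                    remaining -= 1
        return total

    print(count_dirs_pruned("/usr/share"))        # example path (assumption)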
GNU Coreutils: Core GNU utilities for version 5.93, 2 November 2005
GNU Coreutils: Core GNU utilities for version 5.93, 2 November 2005. David MacKenzie et al. This manual documents version 5.93 of the GNU core utilities, including the standard programs for text and file manipulation. Copyright © 1994, 1995, 1996, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
Chapter 1: Introduction. This manual is a work in progress: many sections make no attempt to explain basic concepts in a way suitable for novices. Thus, if you are interested, please get involved in improving this manual. The entire GNU community will benefit. The GNU utilities documented here are mostly compatible with the POSIX standard. Please report bugs to [email protected]. Remember to include the version number, machine architecture, input files, and any other information needed to reproduce the bug: your input, what you expected, what you got, and why it is wrong. Diffs are welcome, but please include a description of the problem as well, since this is sometimes difficult to infer. See section "Bugs" in Using and Porting GNU CC. This manual was originally derived from the Unix man pages in the distributions, which were written by David MacKenzie and updated by Jim Meyering.
Probabilistic Inferences for the Sample Pearson Product Moment Correlation, by Jeffrey R. Harring and John A. Wasko
Journal of Modern Applied Statistical Methods, Volume 10, Issue 2, Article 8, 11-1-2011. Probabilistic Inferences for the Sample Pearson Product Moment Correlation. Jeffrey R. Harring, University of Maryland, [email protected]; John A. Wasko, University of Maryland, [email protected]. Follow this and additional works at http://digitalcommons.wayne.edu/jmasm (part of the Applied Statistics Commons, Social and Behavioral Sciences Commons, and Statistical Theory Commons).
Recommended citation: Harring, Jeffrey R. and Wasko, John A. (2011) "Probabilistic Inferences for the Sample Pearson Product Moment Correlation," Journal of Modern Applied Statistical Methods: Vol. 10: Iss. 2, Article 8. DOI: 10.22237/jmasm/1320120420. Available at http://digitalcommons.wayne.edu/jmasm/vol10/iss2/8. This regular article is brought to you for free and open access by the Open Access Journals at DigitalCommons@WayneState; it has been accepted for inclusion in the Journal of Modern Applied Statistical Methods by an authorized editor of DigitalCommons@WayneState. Copyright © 2011 JMASM, Inc. (November 2011, Vol. 10, No. 2, 476-493).
Abstract: Fisher's correlation transformation is commonly used to draw inferences regarding the reliability of tests comprised of dichotomous or polytomous items. It is illustrated theoretically and empirically that omitting test length and difficulty results in inflated Type I error. An empirically unbiased correction is introduced within the transformation that is applicable under any test conditions. Key words: correlation coefficients, measurement, test characteristics, reliability, parallel forms, test equivalency.
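For context on the method the abstract modifies, here is a minimal sketch of the standard Fisher z-transformation confidence interval for a sample correlation; this is the textbook formula, not the authors' corrected version, and the example values are made up.

    import math

    def fisher_ci(r, n, z_crit=1.96):
        """Approximate 95% confidence interval for a correlation via Fisher's z."""
        z = math.atanh(r)                      # Fisher transformation z = arctanh(r)
        se = 1.0 / math.sqrt(n - 3)            # approximate standard error of z
        lo, hi = z - z_crit * se, z + z_crit * se
        return math.tanh(lo), math.tanh(hi)    # back-transform to the r scale

    print(fisher_ci(r=0.55, n=120))            # illustrative values (assumption)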
Point-Biserial Correlation: Interval Estimation, Hypothesis Testing, Meta-Analysis, and Sample Size Determination
UC Santa Cruz Previously Published Works. Title: Point-biserial correlation: Interval estimation, hypothesis testing, meta-analysis, and sample size determination. Permalink: https://escholarship.org/uc/item/3h82b18b. Journal: The British Journal of Mathematical and Statistical Psychology, 73 Suppl 1 (S1). ISSN: 0007-1102. Author: Bonett, Douglas G. Publication date: 2020-11-01. DOI: 10.1111/bmsp.12189. Peer reviewed.
British Journal of Mathematical and Statistical Psychology (2019), © 2019 The British Psychological Society, www.wileyonlinelibrary.com. Point-biserial correlation: Interval estimation, hypothesis testing, meta-analysis, and sample size determination. Douglas G. Bonett, Department of Psychology, University of California, Santa Cruz, California, USA.
Abstract: The point-biserial correlation is a commonly used measure of effect size in two-group designs. New estimators of point-biserial correlation are derived from different forms of a standardized mean difference. Point-biserial correlations are defined for designs with either fixed or random group sample sizes and can accommodate unequal variances. Confidence intervals and standard errors for the point-biserial correlation estimators are derived from the sampling distributions for pooled-variance and separate-variance versions of a standardized mean difference. The proposed point-biserial confidence intervals can be used to conduct directional two-sided tests, equivalence tests, directional non-equivalence tests, and non-inferiority tests. A confidence interval for an average point-biserial correlation in meta-analysis applications performs substantially better than the currently used methods. Sample size formulas for estimating a point-biserial correlation with desired precision and testing a point-biserial correlation with desired power are proposed.
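As background for the quantity the abstract studies, here is a minimal sketch of the ordinary point-biserial correlation for a two-group design using SciPy; this is the textbook pooled form, not one of the paper's new estimators, and the data are simulated.

    import numpy as np
    from scipy import stats

    # Simulated two-group data: a 0/1 group indicator and a continuous score
    group = np.array([0] * 12 + [1] * 15)
    score = np.concatenate([np.random.default_rng(1).normal(50, 10, 12),
                            np.random.default_rng(2).normal(58, 10, 15)])

    # The point-biserial correlation is the Pearson correlation between the
    # dichotomous group indicator and the continuous score
    r_pb, p_value = stats.pointbiserialr(group, score)
    print(round(r_pb, 3), round(p_value, 4))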