Purposes of Data Analysis, Parameters and Statistics, Variables and Samples
ESS210B, Prof. Jin-Yi Yu


Purposes of Data Analysis

We analyze data to learn about the true distributions or relationships in the Earth system, by:
- Organizing data
- Sampling
- Finding relationships among data
- Testing the significance of the results
- Weather forecasts, physical understanding, ...

Part 1: Probability Distributions
- Probability distribution
- Normal distribution
- Student-t distribution
- Chi-square distribution
- F distribution
- Significance tests

Parameters and Statistics

Parameters: numbers that describe a population, for example the population mean (µ) and standard deviation (σ). A given population has only one value of a particular parameter.

Statistics: numbers that are calculated from a sample. A particular statistic calculated from different samples of the population generally takes different values, which differ both from each other and from the parameter that the statistic is designed to estimate.

The science of statistics is concerned with how to draw reliable, quantifiable inferences from statistics about parameters.

Variables and Samples

Random variable: a variable whose values occur at random, following a probability distribution.

Observation: when the random variable actually attains a value, that value is called an observation (of the variable).

Sample: a collection of several observations. If the observations are generated in a random fashion with no bias, the sample is known as a random sample.

By observing the distribution of values in a random sample, we can draw conclusions about the underlying probability distribution. (from Middleton 2000)

Probability Distribution

The pattern of probabilities for a set of events is called a probability distribution; it may be continuous (described by a probability density function) or discrete. Two conditions must hold:
(1) The probability of each event or combination of events must range from 0 to 1.
(2) The sum of the probabilities of all possible events must equal 1.

Example

If you throw a die, there are six possible outcomes: the numbers 1, 2, 3, 4, 5, or 6. The die value is an example of a random variable, a variable whose possible values occur at random. When the random variable actually attains a value (such as when the die is actually thrown), that value is called an observation. If you throw the die 10 times, you have a random sample that consists of 10 observations.

Probability Density Function

Let P be the probability that a randomly selected value of a variable X falls between a and b, and f(x) the probability density function. The density function has to be integrated over distinct limits to obtain a probability:

P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx

The probability for X to have any one particular value is ZERO. Two important properties of the probability density function:
(1) f(x) ≥ 0 for all x within the domain of f.
(2) The integral of f(x) over the entire domain of f equals 1.

Cumulative Distribution Function

The cumulative distribution function F(x) is defined as the probability that a variable assumes a value less than x. It is often used to assist in calculating probabilities (shown later). The following relation between F and P is essential for probability calculation:

P(a ≤ X ≤ b) = F(b) − F(a)

Normal Distribution

The normal distribution has the probability density function

f(x) = 1 / (σ √(2π)) · exp( −(x − µ)² / (2σ²) )

where f is the probability density function, µ is the mean of the population, and σ is the standard deviation of the population.

The normal distribution is one of the most important distributions in geophysics. Most geophysical variables (such as wind, temperature, pressure, etc.) are distributed normally about their means.
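The relation P(a ≤ X ≤ b) = F(b) − F(a) introduced above is easy to check numerically for a normal distribution. Below is a minimal Python sketch, assuming SciPy is available; the parameters µ = 50 and σ = 10 anticipate a worked example later in this section, and all variable names are illustrative:

```python
from scipy.integrate import quad
from scipy.stats import norm

# Normal distribution with illustrative parameters mu = 50, sigma = 10.
mu, sigma = 50.0, 10.0
a, b = 45.0, 62.0

# P(a <= X <= b) from the cumulative distribution function: F(b) - F(a).
p_cdf = norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma)

# The same probability by integrating the density f(x) over [a, b].
p_int, _ = quad(lambda x: norm.pdf(x, loc=mu, scale=sigma), a, b)

print(f"F(b) - F(a)           = {p_cdf:.4f}")  # ~0.5764
print(f"integral of f on [a,b] = {p_int:.4f}")  # same value, as the relation requires
```

Both computations return the same probability, illustrating that the CDF difference replaces an explicit integration of f(x).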
Standard Normal Distribution

The standard normal distribution has a mean of 0 and a standard deviation of 1. This distribution is particularly useful because it can represent any normal distribution, whatever its mean and standard deviation. Using the following transformation, a normal distribution of a variable X can be converted to the standard normal distribution of a variable Z:

Z = (X − µ) / σ

Transformations

It can be shown that any frequency function can be transformed into a frequency function of a given form by a suitable transformation or functional relationship. For example, if the original data follow some complicated skewed distribution, we may want to transform this distribution into a known distribution (such as the normal distribution) whose theory and properties are well known. Most geoscience variables are distributed normally about their mean or can be transformed in such a way that they become normally distributed. The normal distribution is, therefore, one of the most important distributions in geoscience data analysis.

How to Use the Standard Normal Distribution

Example 1: What is the probability that a value of Z is greater than 0.75?
Answer: P(Z ≥ 0.75) = 1 − P(Z ≤ 0.75) = 1 − 0.7734 = 0.2266

Example 2: What is the probability that Z lies between the limits Z1 = −0.60 and Z2 = 0.75?
Answer: Tables list probabilities for positive Z only, so use the symmetry of the distribution for the negative value:
P(Z ≤ −0.60) = 1 − P(Z < 0.60) = 1 − 0.7257 = 0.2743
P(−0.60 ≤ Z ≤ 0.75) = P(Z ≤ 0.75) − P(Z ≤ −0.60) = 0.7734 − 0.2743 = 0.4991

Another Example

Given a normal distribution with µ = 50 and σ = 10, find the probability that X assumes a value between 45 and 62. The Z values corresponding to x1 = 45 and x2 = 62 are

z1 = (45 − 50) / 10 = −0.5
z2 = (62 − 50) / 10 = 1.2

Therefore,
P(45 < X < 62) = P(−0.5 < Z < 1.2) = P(Z < 1.2) − P(Z < −0.5) = 0.8849 − 0.3085 = 0.5764

A Further Example

An electrical firm manufactures light bulbs whose length of life is normally distributed with a mean of 800 hours and a standard deviation of 40 hours. Find the probability that a bulb burns between 778 and 834 hours. The Z values corresponding to x1 = 778 and x2 = 834 are

z1 = (778 − 800) / 40 = −0.55
z2 = (834 − 800) / 40 = 0.85

Therefore,
P(778 < X < 834) = P(−0.55 < Z < 0.85) = P(Z < 0.85) − P(Z < −0.55) = 0.8023 − 0.2912 = 0.5111

Probability of Normal Distribution

There is only a 4.55% probability that a normally distributed variable will fall more than 2 standard deviations away from its mean. This is the two-tailed probability. The probability that a normal variable will exceed its mean by more than 2σ is only half of that, 2.275%.

Application in Operational Climatology

A figure (not reproduced here) shows average temperatures for January 1989 over the US, expressed as quantiles of the local normal distributions, with quantile boundaries at Z = ±0.53 and ±1.28 (e.g., "warmer than 9 out of 10 years" corresponds to Z ≥ 1.28). Different values of µ and σ have been estimated for each location.

How to Estimate µ and σ

In order to use the normal distribution, we need to know the mean and standard deviation of the population. But these are impossible to know in most geoscience applications, because most geoscience populations are infinite. We therefore have to estimate µ and σ from samples:

x̄ = (1/N) Σ xi   and   s = √[ (1/(N−1)) Σ (xi − x̄)² ]
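A minimal sketch of these estimators in Python (the sample values are invented for illustration; np.std with ddof=1 applies the 1/(N − 1) form written above):

```python
import numpy as np

# A hypothetical sample of eight observations (values invented for illustration).
x = np.array([48.2, 51.7, 49.5, 53.1, 47.8, 50.4, 52.0, 49.1])

N = x.size
x_bar = x.mean()       # sample mean: estimate of the population mean mu
s = x.std(ddof=1)      # sample standard deviation: estimate of sigma, 1/(N-1) form

print(f"N = {N}, x_bar = {x_bar:.2f}, s = {s:.2f}")
```

With a finite sample these estimates differ from the true µ and σ; the next sections quantify how much the sample mean itself varies from sample to sample.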
Sampling Distribution

Is the sample mean close to the population mean? To answer this question, we need to know the probability distribution of the sample mean. We can obtain this distribution by repeatedly drawing samples from a population and finding the frequency distribution of the resulting sample means.

Distribution of the Sample Average

If we repeatedly draw N observations from a standard normal distribution (µ = 0 and σ = 1) and calculate the sample mean, what is the distribution of the sample mean? What did we learn from this example? If a sample is composed of N random observations from a normal distribution with mean µ and standard deviation σ, then the distribution of the sample average will be a normal distribution with mean µ but standard deviation σ/√N. (from Berk & Carey's book)

Standard Error

The standard deviation of the probability distribution of the sample mean (X̄) is also referred to as the "standard error" of X̄.

Distribution of Sample Means

One example of how the normal distribution can be used for data that are "non-normally" distributed is to determine the distribution of sample means.

Central Limit Theorem

In the limit, as the sample size becomes large, the sum (or the mean) of a set of independent measurements will have a normal distribution, irrespective of the distribution of the raw data. If we repeatedly draw a sample of N values from a population and calculate the mean of that sample, the distribution of sample means tends toward a normal distribution. If the population has a mean of µ and a standard deviation of σ, then the distribution of its sample means will have:
(1) a mean µx̄ = µ, and
(2) a standard deviation σx̄ = σ/√N = the standard error of the mean.
(Figure from Panofsky and Brier 1968: histograms of the means of 1000 random numbers drawn from a non-normal distribution, for increasing sample sizes N.)

Sample Size

The bigger the sample size N, the closer the estimated mean and standard deviation are to the true values. (Figure: the percentage error of the estimated standard deviation, (S − σ)/σ × 100, plotted against sample size, with N = 30 marked.)

Normal Distribution of Sample Means

The distribution of sample means can be transformed into the standard normal distribution with the following transformation:

Z = (X̄ − µ) / (σ/√N)
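These results are easy to verify by simulation. Below is a minimal sketch of the central limit theorem at work; the exponential population and the sample sizes are illustrative choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# Population: exponential with mean 1 and standard deviation 1 (clearly non-normal).
pop_std = 1.0

# For each sample size N, draw 10,000 samples and compute their means.
for N in (1, 2, 10, 30):
    means = rng.exponential(scale=1.0, size=(10_000, N)).mean(axis=1)
    # The CLT predicts: mean of sample means ~ mu, spread ~ sigma / sqrt(N).
    print(f"N={N:2d}: mean = {means.mean():.3f}, "
          f"std = {means.std(ddof=1):.3f}, sigma/sqrt(N) = {pop_std / np.sqrt(N):.3f}")
```

As N grows, the standard deviation of the sample means shrinks like σ/√N, and a histogram of the means looks increasingly normal even though the raw data are strongly skewed.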