Official Statistics for the Next Decade-- Methodological Issues and Challenges


Danny Pfeffermann
Conference on New Techniques and Technologies for Statistics (NTTS), March 2015

List of tough challenges
A. Collection and management of big data for the production of official statistics (POS) √
B. Integration of computer science for POS from big data
C. Data accessibility, privacy and confidentiality
D. Possible use of Internet panels √
E. How to deal with mode effects √
F. Integration of statistics and geospatial information
G. Future censuses and small area estimation √
Question: Are universities preparing students for NSOs (national statistical offices)? √

Collection and management of big data for POS
Example 1. Counting the number of vehicles crossing road sections is presently done in a very primitive way. Why not get this information, and much more, from cell-phone companies? In principle it is available for each time point.
Example 2. Use the BPP (Billion Prices Project), based on about 5 million commodities sold online, to predict the CPI, which currently requires two costly surveys.

Big data, big problems, big headache
Coverage/selection bias (we are talking of POS); data accessibility; new legislation; privacy (data protection); disclosure control; computer storage; computation and analysis; linkage of different files; risk of data manipulation.

Two types of big data
Type 1. Data obtained from sensors, cameras, cell phones, etc.: generally structured and accurate.
Type 2. Data obtained from social networks, e-commerce, etc.: diverse, unstructured, and appearing irregularly.
Type 1 measurements are available continuously. Should POS publications be mostly in the form of graphs and pictures? If aggregate data are needed, how should big data be transformed into monthly aggregates? By sampling? Will random sampling continue to play an important role when processing big data?

Other important issues
Coverage bias is a major concern in the use of big data for POS. Big data on credit-card transactions contain no information on transactions made with other means of payment. Opinions expressed in social networks differ from opinions held by the general public.
No bias should occur when using big data to predict other variables that are estimated from standard surveys: e.g., use the BPP to predict the CPI, use job advertisements to predict employment, use satellite images to predict crops. This requires proper statistical analysis to identify and test the prediction models.

Big data are supposedly free of sampling errors. Are measures of error still an issue? A measure of bias? How? What about measurement errors?
Big data for sub-populations: NSOs publish estimates for sub-populations (age, gender, ethnicity, geography, ...), but big data may not contain this information. Massive record linkage is required if the missing information is available in other big files. Will traditional sample surveys always be needed?
We are familiar with design-based, model-dependent, and model-assisted estimators. New: algorithmic estimators, the result of computational algorithms applied to raw big data (example: a measure of religiosity).
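The coverage-bias concern above can be made concrete with a small simulation. The sketch below is not from the slides; the spending variable and the credit-card coverage mechanism are invented for illustration. It compares the naive mean of a large "big data" file of credit-card users, whose inclusion depends on the study variable itself, with the estimate from a small simple random sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical target population: monthly household spending (the study variable).
N = 1_000_000
spending = rng.lognormal(mean=7.0, sigma=0.5, size=N)

# "Big data" coverage mechanism: the probability that a household appears in the
# credit-card file increases with its spending, so inclusion depends on y itself.
p_covered = 1 / (1 + np.exp(-(spending - np.median(spending)) / 500))
covered = rng.random(N) < p_covered

true_mean = spending.mean()                  # target parameter
bigdata_mean = spending[covered].mean()      # naive big-data estimate (biased)

# Small probability sample (SRS of n = 1,000): unbiased despite its size.
srs = rng.choice(N, size=1_000, replace=False)
srs_mean = spending[srs].mean()

print(f"True population mean:      {true_mean:10.1f}")
print(f"Naive big-data mean:       {bigdata_mean:10.1f}  (covers {covered.mean():.0%} of units)")
print(f"Simple random sample mean: {srs_mean:10.1f}")
```

Because inclusion in the big-data file depends on spending itself, the huge file yields a biased mean while the tiny probability sample does not; "bigger" does not substitute for representativeness.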
Computer engineering for POS from big data
We are no longer talking about gigabytes (~10^9 bytes): terabytes (~10^12 bytes) and petabytes (~10^15 bytes) are, at the least, the new standards. The computing facilities at most (all?) NSOs cannot store and handle such high volumes of data.
Possible solution: use cloud storage, management and processing facilities. Big problem: data protection, with multiple users and data distributed over a large number of devices.
Possible solution: a private cloud installation incorporating all local computers, with combined management of the storage space and processing power of the separate computers.

Summary of in-house computing challenges
1. Study the logic of storage and processing of big data.
2. Prepare storage spaces that can be regularly extended to higher volumes of data.
3. Establish communication networks that permit receiving data from multiple sources in different formats, and prepare the data for processing and analysis.
4. Protect the data from possible hackers and develop new methods of statistical disclosure control (SDC).
5. Develop analytic tools for processing, editing and analysing big data, including visualization techniques.
Everything is different if a cloud service can be used.

Data accessibility, privacy and confidentiality
Two aspects:
A. Protect the data from intruders; requires very expensive devices.
B. SDC: guarantee that released data cannot be used to reveal private, confidential information.
Current SDC procedures need extensive modifications.
Example 1. Release "synthetic data" generated from models. Can we generate new big data?
Example 2. Research (safe) rooms. Available procedures for the release of data and the control of outputs need major revision. New trend: release synthetic data to researchers before they get the real data in research rooms.

Big data: summary remarks
New expensive computing facilities, new data-processing techniques, new linkage methods, new visualization methods, new analytic methods, new measures of error: these are only some of the big challenges facing computer scientists and statisticians in the use of big data for POS.
Big potential advantages: timeliness, much broader coverage (with possible coverage bias), no sampling frames, no questionnaires, no interviewers, ... Considering the constant decline in response rates in traditional surveys, the use of big data seems inevitable. Big data will just grow bigger and bigger.

Possible use of Internet panels for POS
Web surveys have huge advantages over traditional surveys. Major problem: they rely on volunteers with access to the internet, so at best they represent the population of internet users (IU).
Ipanel: a big group of volunteers who agree to participate regularly in surveys, often in return for certain incentives. The Ipanel is possibly recruited by probability sampling, and the samples selected from the Ipanel are often selected by probability sampling.
Big challenge: estimate general-population parameters from an Ipanel sample.

Common solutions
Propensity scores (PS): select a traditional large reference sample; treat the sample $S_I$ from the Ipanel as the treatment sample and the reference sample $S_R$ as the control sample. Estimate propensity scores based on all $j \in S = S_I \cup S_R$. Divide $S_I$ into $C$ classes based on the estimated propensity scores and compute an adjusted weight $d_j^{w,psa} = d_j^{w} f_c$ for $j \in S_{cI}$, where the $d_j^{w}$ are the initial weights assigned to $j \in S_I$. The adjusted estimator is
$$\hat{Y}^{w,PSA} = \sum_{c} \sum_{j \in S_{cI}} d_j^{w,psa} y_j .$$

Problems with the use of propensity scores
Requires drawing a large reference sample, which can be very costly.
Strong ignorability: let $T_i = 1$ for $i \in S_I$ and $T_i = 0$ for $i \in S_R$. The conditions are
PS(a): $T \perp Y$ given the covariates $x$ used for the PS;
PS(b): $0 < \Pr(T = 1 \mid x) < 1$ for every $x \in S$.
These conditions may hold for some, but not for all, study variables. It is also not obvious how to estimate the variance of the resulting estimators.

Another common solution: calibration
Change the base weights $d_j^{w}$ to weights $d_j^{cal}$ such that, for observed survey variables $z$ with known population totals $t_z^{U}$, $\sum_{j \in S_I} d_j^{cal} z_j = t_z^{U}$. The totals might be cell totals or marginal cell totals, and reliable sample estimates $\hat{t}_z^{U}$ may also be used. Calibration does not require a reference sample. Combining propensity-score and calibration adjustments is possibly more effective in reducing the bias.
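A minimal Python sketch of the propensity-score class adjustment described above. The data are simulated, and the class factor $f_c$ is taken as the ratio of the weighted reference-sample total to the weighted panel total within the class, which is one common choice rather than a form specified in the slides; names such as `d_w` and the logistic propensity model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# --- Illustrative data --------------------------------------------------------
# Reference sample S_R: probability sample with design weights d_R.
n_R = 2_000
x_R = rng.normal(0.0, 1.0, size=n_R)
d_R = np.full(n_R, 500.0)                      # design weights (sum to population size)

# Internet-panel sample S_I: volunteers, over-representing large x.
n_I = 3_000
x_I = rng.normal(0.7, 1.0, size=n_I)
y_I = 10 + 3 * x_I + rng.normal(0, 1, size=n_I)   # study variable, observed in the panel only
d_w = np.full(n_I, 1.0)                        # initial panel weights d_j^w

# --- Step 1: estimate propensity scores on S = S_I U S_R ----------------------
x_all = np.concatenate([x_I, x_R]).reshape(-1, 1)
t_all = np.concatenate([np.ones(n_I), np.zeros(n_R)])   # T=1 panel, T=0 reference
ps_model = LogisticRegression().fit(x_all, t_all)
ps_all = ps_model.predict_proba(x_all)[:, 1]
ps_I, ps_R = ps_all[:n_I], ps_all[n_I:]

# --- Step 2: divide S_I into C classes by the estimated scores ----------------
C = 5
cuts = np.quantile(ps_all, np.linspace(0, 1, C + 1))
cls_I = np.clip(np.searchsorted(cuts, ps_I, side="right") - 1, 0, C - 1)
cls_R = np.clip(np.searchsorted(cuts, ps_R, side="right") - 1, 0, C - 1)

# --- Step 3: adjusted weights d_j^{w,psa} = d_j^w * f_c ------------------------
d_psa = d_w.copy()
for c in range(C):
    in_I, in_R = cls_I == c, cls_R == c
    f_c = d_R[in_R].sum() / d_w[in_I].sum()    # assumed form of the adjustment factor f_c
    d_psa[in_I] *= f_c

# --- Step 4: PSA estimator of the population total -----------------------------
Y_hat_psa = np.sum(d_psa * y_I)
print(f"PSA-adjusted estimate of the total: {Y_hat_psa:,.0f}")
```

With $f_c$ defined this way, the adjusted panel weights in each propensity class sum to the weighted reference-sample size of that class, which is intended to remove the volunteer selection bias provided conditions PS(a) and PS(b) hold.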
A new alternative approach?
Let $A_i = 1$ if $i \in U$ is an internet user (IU) and $A_i = 0$ otherwise. Assumption: $\Pr(A_i = 1 \mid x_i, y_i) > 0$ for all $i \in U$, where $y$ is the study variable and $x$ the covariates. By Bayes' rule,
$$f_{IU}(y_i \mid x_i) = f(y_i \mid x_i, A_i = 1) = \frac{\Pr(A_i = 1 \mid x_i, y_i)\, f_p(y_i \mid x_i)}{\Pr(A_i = 1 \mid x_i)},$$
where $f_p(y_i \mid x_i)$ is the distribution in the target population $U$ and $f_{IU}(y_i \mid x_i)$ is the distribution for internet users.

A new alternative approach (cont.)
In practice, not every IU asked to participate in the Ipanel agrees, and a person may agree but not respond in a particular survey taken for its members. Let $R_i = 1$ if IU $i$ is in the Ipanel and responds, $R_i = 0$ otherwise. The marginal distribution for a responding unit $i$ is then
$$f_R(y_i \mid x_i) = f(y_i \mid x_i, A_i = 1, R_i = 1) = \frac{\Pr(R_i = 1 \mid y_i, x_i, A_i = 1)\,\Pr(A_i = 1 \mid x_i, y_i)\, f_p(y_i \mid x_i)}{\Pr(R_i = 1 \mid x_i, A_i = 1)\,\Pr(A_i = 1 \mid x_i)}.$$

A new alternative approach: inference
Denote by $\gamma$, $\lambda$ and $\theta$ the unknown parameters of the response model, the internet-use model and the population model, respectively. The respondents' likelihood (assuming independence) is
$$L_{Resp}(\gamma, \lambda, \theta) = \prod_{i=1}^{r} \frac{\Pr(R_i = 1 \mid y_i, x_i, A_i = 1; \gamma)\,\Pr(A_i = 1 \mid x_i, y_i; \lambda)\, f_p(y_i \mid x_i; \theta)}{\Pr(R_i = 1 \mid x_i, A_i = 1; \gamma, \lambda, \theta)\,\Pr(A_i = 1 \mid x_i; \lambda, \theta)}.$$
Inference: maximize the likelihood with respect to the unknown parameters and use $f_p(y_i \mid x_i; \hat\theta)$ for inference about the target population $U$. Example: estimate the total as
$$\hat{Y}_{IP}^{U} = \sum_{i \in U} E_p(y_i \mid x_i; \hat\theta).$$

Inference (cont.)
When the Ipanel is selected with probabilities $\pi_i$ and the covariates are unknown for units outside the Ipanel sample,
$$\hat{Y}_{IP}^{Uw} = \sum_{i \in S_{IR}} E_p(y_i \mid x_i; \hat\theta) \,/\, (\pi_i\, \hat{p}_{Ri}\, \hat{p}_{Ai}),$$
where $\hat{p}_{Ri} = \widehat{\Pr}(R_i = 1 \mid y_i, x_i, A_i = 1)$ and $\hat{p}_{Ai} = \widehat{\Pr}(A_i = 1 \mid x_i, y_i)$.
Remarks:
1. A full parametric inference requires specifying models for $p_{Ri}$, $p_{Ai}$ and $f_p(y_i \mid x_i; \theta)$. The likelihood is complicated and may face non-identifiability problems. The use of empirical likelihood is simpler and safer.

Remarks (cont.)
2. Although none of the stochastic processes in the likelihood is observable, the respondents' model is testable using classical test statistics, since it relates to the observed data.
3. A further simplification of the likelihood is obtained by combining the models for internet use and for Ipanel response into a single model. Define $D_i = A_i R_i$. The model is
$$f_D(y_i \mid x_i) = f(y_i \mid x_i, D_i = 1) = \frac{\Pr(D_i = 1 \mid y_i, x_i)\, f_p(y_i \mid x_i)}{\Pr(D_i = 1 \mid x_i)}.$$
A much simpler model, but it might be too restrictive.

Comparison with other approaches
Use of the proposed approach does not require the availability of a reference sample, as the use of propensity scores does, and it does not rely on ignorability conditions.
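To make the simplified combined model of Remark 3 concrete, here is a minimal sketch with made-up data and assumed parametric forms that are not taken from the slides: a logistic model for $\Pr(D_i = 1 \mid y_i, x_i)$, a normal linear model for $f_p(y_i \mid x_i)$, the respondents' log-likelihood maximized numerically, and the denominator $\Pr(D_i = 1 \mid x_i)$ evaluated by Gauss-Hermite quadrature.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(7)

# --- Simulated target population (assumed forms, for illustration only) --------
N = 50_000
x = rng.normal(0, 1, N)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, N)          # f_p(y|x): normal linear model
p_D = expit(-2.0 + 0.8 * y - 0.5 * x)              # Pr(D=1|y,x): selection depends on y
D = rng.random(N) < p_D
x_obs, y_obs = x[D], y[D]                          # observed Ipanel respondents only

# --- Respondents' log-likelihood for the combined model ------------------------
# f_D(y|x) = Pr(D=1|y,x; g) f_p(y|x; b, s) / Pr(D=1|x; g, b, s),
# with Pr(D=1|x) integrated over y by Gauss-Hermite quadrature.
gh_t, gh_w = np.polynomial.hermite.hermgauss(30)

def neg_loglik(par):
    g0, g1, g2, b0, b1, log_s = par
    s = np.exp(log_s)
    mu = b0 + b1 * x_obs
    # numerator: log Pr(D=1|y,x) + log f_p(y|x)
    num = np.log(expit(g0 + g1 * y_obs + g2 * x_obs)) + norm.logpdf(y_obs, mu, s)
    # denominator: Pr(D=1|x) = E_{f_p(y|x)}[Pr(D=1|y,x)]
    y_nodes = mu[:, None] + np.sqrt(2.0) * s * gh_t[None, :]
    den = (gh_w[None, :] * expit(g0 + g1 * y_nodes + g2 * x_obs[:, None])).sum(1) / np.sqrt(np.pi)
    return -(num - np.log(den)).sum()

start = np.array([0.0, 0.0, 0.0, y_obs.mean(), 0.0, 0.0])
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 20_000, "xatol": 1e-6, "fatol": 1e-6})
g0, g1, g2, b0, b1, log_s = fit.x

# --- Inference about the target population U -----------------------------------
# Y_hat = sum over U of E_p(y_i | x_i; theta_hat); here E_p(y|x) = b0 + b1 * x.
Y_hat = np.sum(b0 + b1 * x)
print(f"Naive respondents' mean of y : {y_obs.mean():.3f}")
print(f"True population mean of y    : {y.mean():.3f}")
print(f"Model-based estimated mean   : {Y_hat / N:.3f}")
```

The naive respondents' mean overstates the population mean because selection into the panel depends on $y$ itself; maximizing the respondents' likelihood recovers the population model $f_p(y \mid x; \hat\theta)$ and hence an approximately unbiased estimate, which is the logic of the approach, shown here under the restrictive single-model simplification.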