National Accounts – a Brief Guide
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Estimating the Effects of Fiscal Policy in OECD Countries
Estimating the effects of fiscal policy in OECD countries. Roberto Perotti* This version: November 2004. Abstract: This paper studies the effects of fiscal policy on GDP, inflation and interest rates in 5 OECD countries, using a structural Vector Autoregression approach. Its main results can be summarized as follows: 1) The effects of fiscal policy on GDP tend to be small: government spending multipliers larger than 1 can be estimated only in the US in the pre-1980 period. 2) There is no evidence that tax cuts work faster or more effectively than spending increases. 3) The effects of government spending shocks and tax cuts on GDP and its components have become substantially weaker over time; in the post-1980 period these effects are mostly negative, particularly on private investment. 4) Only in the post-1980 period is there evidence of positive effects of government spending on long interest rates. In fact, when the real interest rate is held constant in the impulse responses, much of the decline in the response of GDP in the post-1980 period in the US and UK disappears. 5) Under plausible values of its price elasticity, government spending typically has small effects on inflation. 6) Both the decline in the variance of the fiscal shocks and the change in their transmission mechanism contribute to the decline in the variance of GDP after 1980. *IGIER - Università Bocconi and Centre for Economic Policy Research. I thank Alberto Alesina, Olivier Blanchard, Fabio Canova, Zvi Eckstein, Jon Faust, Carlo Favero, Jordi Galí, Daniel Gros, Bruce Hansen, Fumio Hayashi, Ilian Mihov, Chris Sims, Jim Stock and Mark Watson for helpful comments and suggestions.
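As a rough, self-contained illustration of the paper's workhorse method, the sketch below fits a five-variable VAR to simulated data and reads off an impulse response. The variable names, lag order, and recursive (Cholesky) ordering are illustrative assumptions; Perotti's structural identification imposes restrictions not reproduced here.

```python
# Minimal VAR-plus-impulse-response sketch (simulated data; illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 200
# Hypothetical quarterly series standing in for the paper's variables.
data = pd.DataFrame(
    rng.normal(size=(T, 5)).cumsum(axis=0),
    columns=["gov_spending", "net_taxes", "gdp", "inflation", "interest_rate"],
)

results = VAR(data).fit(4)   # 4 lags, a common choice for quarterly data
irf = results.irf(20)        # responses over 20 quarters

# Orthogonalized (Cholesky) response of gdp to a gov_spending innovation;
# a full structural identification in the paper's spirit would impose
# additional, institution-based restrictions.
print(irf.orth_irfs[:, 2, 0])
```
-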
Design and Implementation of N-Of-1 Trials: a User's Guide
Design and Implementation of N-of-1 Trials: A User's Guide. The Agency for Healthcare Research and Quality's (AHRQ) Effective Health Care Program conducts and supports research focused on the outcomes, effectiveness, comparative clinical effectiveness, and appropriateness of pharmaceuticals, devices, and health care services. More information on the Effective Health Care Program and electronic copies of this report can be found at www.effectivehealthcare.ahrq.gov. This report was produced under contract to AHRQ by the Brigham and Women's Hospital DEcIDE (Developing Evidence to Inform Decisions about Effectiveness) Methods Center under Contract No. 290-2005-0016-I. The AHRQ Task Order Officer for this project was Parivash Nourjah, Ph.D. The findings and conclusions in this document are those of the authors, who are responsible for its contents; the findings and conclusions do not necessarily represent the views of AHRQ or the U.S. Department of Health and Human Services. Therefore, no statement in this report should be construed as an official position of AHRQ or the U.S. Department of Health and Human Services. Persons using assistive technology may not be able to fully access information in this report. For assistance contact [email protected]. The following investigators acknowledge financial support: Dr. Naihua Duan was supported in part by NIH grants 1 R01 NR013938 and 7 P30 MH090322; Dr. Heather Kaplan is part of the investigator team that designed and developed the MyIBD platform, and this work was supported in part by Cincinnati Children's Hospital Medical Center; Dr. Richard Kravitz was supported in part by National Institute of Nursing Research Grant R01 NR01393801. -
The National Accounts, GDP and the 'Growthmen'
The National Accounts, GDP and the ‘Growthmen’. Geoff Tily, January 2015. A review essay of Diane Coyle, GDP: A Brief but Affectionate History (2013). Reading GDP: A Brief But Affectionate History by Diane Coyle (2013) led to the question: when and how did GDP growth become the central focus of policymaking? Younger readers may be more surprised by the answers than older ones, with the details not commonplace in conventional histories of post-war policy. Abstract: It is apt to start with Keynes, who played a far greater role in the creation and construction of National Accounts than is usually recognised, doing so in part to aid his own theoretical and practical initiatives. These were not concerned with growth, but with raising the level of activity and employment. The accounts were one of several means to this end. Coyle rightly bemoans real GDP growth as the end of policy, but that was not the original intention. Moreover, Coyle adheres to a theoretical view where outcomes can only improve through gains in productivity, i.e. growth in output per unit of whatever input, which seems inseparable from GDP growth. The analysis also touches on the implications for theory and policy doctrine in practice. Most obviously, Keynes’s approach was rejected on the ground of practical application. The emphasis on growth and an associated supply-orientation for policy seemingly became embedded through the OECD formally from 1961 and then in the UK via the National Economic Development Council of the 1960s (the relationships between these initiatives are of great interest but far from clear). -
Generalized Linear Models for Aggregated Data
Generalized Linear Models for Aggregated Data. Avradeep Bhowmik (University of Texas at Austin), Joydeep Ghosh (University of Texas at Austin), Oluwasanmi Koyejo (Stanford University). Abstract: Databases in domains such as healthcare are routinely released to the public in aggregated form. Unfortunately, naïve modeling with aggregated data may significantly diminish the accuracy of inferences at the individual level. This paper addresses the scenario where features are provided at the individual level, but the target variables are only available as histogram aggregates or order statistics. We consider a limiting case of generalized linear modeling when the target variables are only known up to permutation, and explore how this relates to permutation testing, a standard technique for assessing statistical dependency. Based on this relationship, we propose a simple algorithm to estimate … 1 Introduction: Modern life is highly data driven. Datasets with records at the individual level are generated every day in large volumes. This creates an opportunity for researchers and policy-makers to analyze the data and examine individual level inferences. However, in many domains, individual records are difficult to obtain. This is particularly true in the healthcare industry, where protecting the privacy of patients restricts public access to much of the sensitive data. Therefore, in many cases, multiple Statistical Disclosure Limitation (SDL) techniques are applied [9]. Of these, data aggregation is the most widely used technique [5]. It is common for agencies to report both individual level information for non-sensitive attributes together with the aggregated information in the form of sample statistics.
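The permutation-limited setting described above can be made concrete with a small sketch: targets are observed only as an unordered bag of values, and an alternating scheme imputes the correspondence and refits. This is a generic illustration of the problem setting, not the authors' estimator.

```python
# Regression when targets are known only up to permutation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

y_bag = np.sort(y)           # the multiset of targets: correspondence is lost
beta = np.zeros(p)           # initial coefficient guess

for _ in range(50):
    # Impute targets by matching the sorted bag of observed values to the
    # ranks of the current predictions (optimal assignment under squared loss).
    order = np.argsort(X @ beta)
    y_imputed = np.empty(n)
    y_imputed[order] = y_bag
    # Refit by ordinary least squares on the imputed targets.
    beta, *_ = np.linalg.lstsq(X, y_imputed, rcond=None)

print(beta)  # close to beta_true when the signal is strong enough
```
-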
Meta-Analysis Using Individual Participant Data: One-Stage and Two-Stage Approaches, and Why They May Differ
Tutorial in Biostatistics. Danielle L. Burke, Joie Ensor and Richard D. Riley. Received: 11 March 2016, Accepted: 13 September 2016. Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/sim.7141. Meta-analysis using individual participant data (IPD) obtains and synthesises the raw, participant-level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta-analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual-level interactions, such as treatment-effect modifiers. There are two statistical approaches for conducting an IPD meta-analysis: one-stage and two-stage. The one-stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two-stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta-analysis model. There have been numerous comparisons of the one-stage and two-stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one-stage and two-stage IPD meta-analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one-stage or two-stage itself.
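The two-stage pipeline the abstract describes is easy to sketch end to end: estimate an effect in each study from (here simulated) IPD, then pool the estimates with a random-effects model. DerSimonian-Laird is used below as one conventional pooling choice; a one-stage analysis would instead fit a single hierarchical model to the stacked participant-level data.

```python
# Two-stage IPD meta-analysis sketch on simulated data (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

def study_effect(n, true_effect):
    """Stage 1: estimate a mean difference and its variance from one study's IPD."""
    treat = rng.normal(loc=true_effect, scale=1.0, size=n)
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    est = treat.mean() - control.mean()
    var = treat.var(ddof=1) / n + control.var(ddof=1) / n
    return est, var

effects, variances = zip(*(study_effect(n, 0.4) for n in [40, 90, 150, 60]))
y, v = np.array(effects), np.array(variances)

# Stage 2: DerSimonian-Laird between-study variance, then inverse-variance pooling.
w = 1.0 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_star = 1.0 / (v + tau2)
pooled = np.sum(w_star * y) / w_star.sum()
print(pooled, tau2)
```
-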
Learning to Personalize Medicine from Aggregate Data
medRxiv preprint doi: https://doi.org/10.1101/2020.07.07.20148205 (posted July 8, 2020). Learning to Personalize Medicine from Aggregate Data. Rich Colbaugh, Kristin Glass. Volv Global, Lausanne, Switzerland. Abstract: There is great interest in personalized medicine, in which treatment is tailored to the individual characteristics of patients. Achieving the objectives of precision healthcare will require clinically-grounded, evidence-based approaches, which in turn demand rigorous, scalable predictive analytics. Standard strategies for deriving prediction models for medicine involve acquiring ‘training’ data for large numbers of patients, labeling each patient according to the outcome of interest, and then using the labeled examples to learn to predict the outcome for new patients. Unfortunately, labeling individuals is time-consuming and expertise-intensive in medical applications and thus represents a major impediment to practical personalized medicine. We overcome this obstacle with a novel machine learning algorithm that enables individual-level prediction models to be induced from aggregate-level labeled data, which is readily available in many health domains. The utility of the proposed learning methodology is demonstrated by: i) leveraging US county-level mental health statistics to create a screening tool which detects individuals suffering from depression based upon their Twitter activity; ii) designing a decision-support system that exploits aggregate clinical trials data on multiple sclerosis (MS) treatment to predict which therapy would work best for the presenting patient; iii) employing group-level clinical trials data to induce a model able to find those MS patients likely to be helped by an experimental therapy.
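The abstract does not spell out the algorithm, so as a generic illustration of the aggregate-supervision setting it describes, here is a learning-from-label-proportions sketch: only group-level positive rates are observed, and an individual-level classifier is fit by matching each group's mean predicted probability to its reported rate. All data and names are invented; this is not the authors' method.

```python
# Learning-from-label-proportions sketch (generic, illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_groups, per_group, p = 30, 50, 4
X = rng.normal(size=(n_groups, per_group, p))   # individual-level features
w_true = rng.normal(size=p)
probs = 1 / (1 + np.exp(-(X @ w_true)))
group_rates = probs.mean(axis=1)                # the only labels we observe

def loss(w):
    # Match each group's mean predicted probability to its reported rate,
    # with a small ridge penalty for stability.
    pred = 1 / (1 + np.exp(-(X @ w)))
    return np.sum((pred.mean(axis=1) - group_rates) ** 2) + 1e-3 * w @ w

w_hat = minimize(loss, np.zeros(p), method="L-BFGS-B").x

# Individual-level predictions obtained from aggregate supervision alone:
individual_scores = 1 / (1 + np.exp(-(X[0] @ w_hat)))
print(individual_scores[:5])
```
-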
Good Statistical Practices for Contemporary Meta-Analysis: Examples Based on a Systematic Review on COVID-19 in Pregnancy
Review. Good Statistical Practices for Contemporary Meta-Analysis: Examples Based on a Systematic Review on COVID-19 in Pregnancy. Yuxi Zhao and Lifeng Lin, Department of Statistics, Florida State University, Tallahassee, FL 32306, USA. Abstract: Systematic reviews and meta-analyses have been increasingly used to pool research findings from multiple studies in medical sciences. The reliability of the synthesized evidence depends highly on the methodological quality of a systematic review and meta-analysis. In recent years, several tools have been developed to guide the reporting and evidence appraisal of systematic reviews and meta-analyses, and much statistical effort has been paid to improve their methodological quality. Nevertheless, many contemporary meta-analyses continue to employ conventional statistical methods, which may be suboptimal compared with several alternative methods available in the evidence synthesis literature. Based on a recent systematic review on COVID-19 in pregnancy, this article provides an overview of select good practices for performing meta-analyses from statistical perspectives. Specifically, we suggest meta-analysts (1) providing sufficient information of included studies, (2) providing information for reproducibility of meta-analyses, (3) using appropriate terminologies, (4) double-checking presented results, (5) considering alternative estimators of between-study variance, (6) considering alternative confidence intervals, (7) reporting prediction intervals, (8) assessing small-study effects whenever possible, and (9) considering one-stage methods. We use worked examples to illustrate these good practices. Relevant statistical code is also provided.
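Practices (5) and (7) above are straightforward to demonstrate. The sketch below estimates the between-study variance (DerSimonian-Laird for brevity; the review discusses alternatives such as REML) and reports a 95% prediction interval for the effect in a new study, using invented effect estimates.

```python
# Random-effects pooling with a prediction interval (illustrative inputs).
import numpy as np
from scipy import stats

y = np.array([0.32, 0.15, 0.48, 0.05, 0.27])    # study effect estimates
v = np.array([0.02, 0.05, 0.03, 0.04, 0.01])    # their within-study variances

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1.0 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

# Pooled random-effects estimate and its standard error.
w_star = 1.0 / (v + tau2)
mu = np.sum(w_star * y) / w_star.sum()
se_mu = np.sqrt(1.0 / w_star.sum())

# 95% prediction interval for a new study (Higgins et al. form,
# using a t distribution with k - 2 degrees of freedom).
k = len(y)
t = stats.t.ppf(0.975, k - 2)
half = t * np.sqrt(tau2 + se_mu**2)
print(mu - half, mu + half)
```
-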
Editors' Summary of the Brookings Papers on Economic Activity
Editors’ Summary. The Brookings Panel on Economic Activity held its seventy-fourth conference in Washington, D.C., on September 5 and 6, 2002. This issue of Brookings Papers on Economic Activity includes the papers and discussions presented at the conference. The first paper reviews the process and methods of inflation and output forecasting at four central banks and proposes strategies for improving the usefulness of their formal economic models for policymaking. The second paper analyzes the implications for monetary policymaking of uncertainty about the levels of the natural rates of unemployment and interest. The third paper examines reasons for the recent rise in current account deficits in the lower-income countries of Europe and the role of economic integration in breaking the link between domestic saving and domestic investment. The fourth paper applies a new decomposition of productivity growth to a new database of income-side output to examine the recent speedup in U.S. productivity growth and the contribution made by new economy industries. Despite a growing transparency in the conduct of monetary policy at many central banks, little is still known publicly about the actual process of central bank decisionmaking. In the first paper of this issue, Christopher Sims examines this process, drawing on interviews with staff and policy committee members from the Swedish Riksbank, the European Central Bank, the Bank of England, and the U.S. Federal Reserve. Central bank policy actions are inevitably based on forecasts of inflation and output. Sims’ interviews focused on how those forecasts are made, how uncertainty is dealt with, and what role formal economic models play in the process. -
Reconciling Household Surveys and National Accounts Data Using a Cross Entropy Estimation Method
TMD DISCUSSION PAPER NO. 50. RECONCILING HOUSEHOLD SURVEYS AND NATIONAL ACCOUNTS DATA USING A CROSS ENTROPY ESTIMATION METHOD. Anne-Sophie Robilliard and Sherman Robinson, International Food Policy Research Institute, Trade and Macroeconomics Division, 2033 K Street, N.W., Washington, D.C. 20006, U.S.A. November 1999 (Revised January 2001). TMD Discussion Papers contain preliminary material and research results, and are circulated prior to a full peer review in order to stimulate discussion and critical comment. It is expected that most Discussion Papers will eventually be published in some other form, and that their content may also be revised. This paper is available at http://www.cgiar.org/ifpri/divs/tmd/dp/papers/dp50.pdf Abstract: This paper presents an approach to reconciling household surveys and national accounts data that starts from the assumption that the macro data represent control totals to which the household data must be reconciled, but the macro aggregates may be measured with error. The economic data gathered in the household survey are assumed to be accurate, or have been adjusted to be accurate. Given these assumptions, the problem is how to use the additional information provided by the national accounts data to re-estimate the household weights used in the survey so that the survey results are consistent with the aggregate data, while simultaneously estimating the errors in the aggregates. The estimation approach represents an efficient “information processing rule” using an estimation criterion based on an entropy measure of information. The survey household weights are treated as a prior. New weights are estimated that are close to the prior using a cross-entropy metric and that are also consistent with the additional information.
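A minimal sketch of the cross-entropy reweighting idea: tilt the prior survey weights as little as possible, in Kullback-Leibler divergence, so that weighted totals hit a control total. The data, the single control variable, and the 10% discrepancy are invented, and the paper's treatment of measurement error in the aggregates is omitted here.

```python
# Cross-entropy reweighting of survey weights to a control total (sketch).
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(4)
n = 1_000
income = rng.lognormal(mean=10.0, sigma=0.8, size=n)  # per-household income
w0 = np.full(n, 50.0)                                 # prior survey weights

N_target = w0.sum()                  # population size to preserve
income_target = 1.10 * w0 @ income   # pretend national accounts say +10%

def tilted_weights(lam):
    # Closed form of min KL(w || w0) s.t. the two constraints:
    # w_i proportional to w0_i * exp(lam * income_i), rescaled to N_target.
    # Income is standardized inside the exponent for numerical stability.
    u = w0 * np.exp(lam * (income - income.mean()) / income.std())
    return N_target * u / u.sum()

def gap(lam):
    return np.atleast_1d(tilted_weights(lam[0]) @ income - income_target)

lam_hat = root(gap, x0=[0.0]).x
w_new = tilted_weights(lam_hat[0])
print(w_new @ income / income_target, w_new.sum() / N_target)  # both ~1.0
```
-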
Point and Density Forecasts for the Euro Area Using Bayesian VARs
Working Paper. Berg, Tim Oliver; Henzel, Steffen (2014): Point and Density Forecasts for the Euro Area Using Bayesian VARs, CESifo Working Paper No. 4711, Center for Economic Studies and ifo Institute (CESifo), Munich. Provided in Cooperation with: Ifo Institute – Leibniz Institute for Economic Research at the University of Munich. This version is available at: http://hdl.handle.net/10419/96866 -
Macroeconomic Accounts
VWL II – Makroökonomie WS2001/02, Dr. Ana B. Ania. Macroeconomic Accounts. The system of national accounts: Macroeconomic accounting deals with aggregates and accounting identities, that is, with macroeconomic magnitudes and how they relate to each other by definition. Macroeconomic accounting does not explain how each magnitude changes as a result of a change in other magnitudes; that will be the task of macroeconomic analysis, which we will look at in the next units of the course. In this first unit, we just deal with definitions. All that we say will therefore always hold by definition. The purpose of the system of national accounts is to keep record of the transactions that take place in any economy for the purpose of measuring productive activity and income generated in the economy. Since 1998 all countries should conform to the new system of national accounts of the UN—the System of National Accounts 1993 (SNA93). Additionally, Eurostat (the Central Statistical Agency of the EU) has produced a standard for members of the EU in accordance with SNA93, called the European System of Accounts 1995 (ESA95). The exposition here will be in accordance with these new systems. Institutional units: In the view of the system of national accounts, an economy is a system of decision-making units that interact with each other and perform economic transactions. The basic decision-making unit is called an institutional unit, which is any entity capable of owning assets, incurring liabilities, engaging in economic activities and transactions. Institutional units are either households or legal persons like corporations, governmental agencies, associations.
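The identities the notes refer to are definitional equalities between aggregates. The best known is the expenditure decomposition of GDP, stated here as standard national-accounts background (it is not quoted from the notes themselves):

```latex
% Expenditure identity for GDP (standard national-accounts background).
\[
  Y \;\equiv\; C + I + G + (X - M)
\]
% Y: GDP; C: private consumption; I: gross investment; G: government
% consumption; X - M: net exports. The identity holds by definition:
% any output not bought by C, G, or net foreign demand is counted in I
% as inventory investment.
```
-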
A Guide to Deflating the Input-Output Accounts: Sources and Methods
Catalogue no. 15F0077GIE. System of National Accounts. A Guide to Deflating the Input-Output Accounts: Sources and Methods, 2001. Statistics Canada, Industry Measures and Analysis Division.
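The operation in the guide's title, in its simplest form, is dividing a nominal series by a price index to obtain constant-dollar values. The figures and base year below are invented for illustration; the guide itself concerns the choice of sources and indexes for the input-output accounts.

```python
# Deflating a nominal series with a price index (illustrative numbers).
import numpy as np

nominal = np.array([100.0, 104.0, 109.2])       # current-dollar values
price_index = np.array([100.0, 102.0, 105.0])   # price index, base year = 100

real = nominal / (price_index / 100.0)          # constant-dollar values
print(real)  # [100.0, ~101.96, 104.0]
```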