The Econometric Approach to Efficiency Analysis
16 pages, PDF, 1,020 KB
Recommended publications
Aggregate Productivity Growth: Lessons from Microeconomic Evidence
Lucia Foster, John Haltiwanger and C.J. Krizan*
June 2000
* Respectively, Bureau of the Census; University of Maryland, Bureau of the Census, and NBER; and Fannie Mae and Bureau of the Census. We thank Tomas Dvorak for helpful research assistance. The analyses and results presented in this paper are attributable to the authors and do not necessarily reflect concurrence by the Bureau of the Census.

I. Overview
Recent research using establishment- and firm-level data has raised a variety of conceptual and measurement questions regarding our understanding of aggregate productivity growth. Several key, related findings are of interest. First, there is large-scale, ongoing reallocation of outputs and inputs across individual producers. Second, the pace of this reallocation varies over time (both secularly and cyclically) and across sectors. Third, much of this reallocation reflects within- rather than between-sector reallocation. Fourth, there are large differentials in the levels and the rates of growth of productivity across establishments within the same sector. The rapid pace of output and input reallocation, along with differences in productivity levels and growth rates, are the necessary ingredients for reallocation to play an important role in aggregate (i.e., industry) productivity growth. However, our review of the existing studies indicates that the measured contribution of such reallocation effects varies over time and across sectors and is sensitive to measurement methodology. An important objective of this paper is to sort out the role of these different factors so that we can understand the nature and the magnitude of the contribution of reallocation to aggregate productivity growth.
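The within/between/reallocation decomposition the overview describes can be sketched numerically. The three-plant sector, input shares, and productivity levels below are invented for illustration, and this is only one of several decomposition conventions discussed in this literature (continuing plants only, no entry or exit):

```python
import numpy as np

# Hypothetical three-plant sector observed in two periods
s0 = np.array([0.5, 0.3, 0.2])   # input shares, period t-1 (sum to 1)
s1 = np.array([0.4, 0.4, 0.2])   # input shares, period t
p0 = np.array([1.0, 0.8, 0.6])   # plant-level productivity, period t-1
p1 = np.array([1.1, 1.0, 0.6])   # plant-level productivity, period t

P0 = (s0 * p0).sum()             # aggregate productivity, period t-1

within = (s0 * (p1 - p0)).sum()          # plants improving at fixed shares
between = ((s1 - s0) * (p0 - P0)).sum()  # shares shifting toward high-productivity plants
cross = ((s1 - s0) * (p1 - p0)).sum()    # covariance of share and productivity changes

total = within + between + cross
# With shares summing to 1 in both periods, the three terms add up
# exactly to the change in aggregate productivity:
assert abs(total - ((s1 * p1).sum() - P0)) < 1e-12
```

In this toy example the within term dominates, while the between term is negative because shares shift toward a plant whose initial productivity is below the aggregate, mirroring the sensitivity to decomposition choices that the paper emphasizes.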
Lectures on Nonparametric Bayesian Statistics
Notes for the course by Bas Kleijn, Aad van der Vaart, Harry van Zanten (text partly extracted from a forthcoming book by S. Ghosal and A. van der Vaart), version 4-12-2012, UNDER CONSTRUCTION

1 Introduction
Why adopt the nonparametric Bayesian approach for inference? The answer lies in the simultaneous preference for nonparametric modeling and the desire to follow a Bayesian procedure. Nonparametric (and semiparametric) models can avoid the arbitrary and possibly unverifiable assumptions inherent in parametric models. Bayesian procedures may be desired for the conceptual simplicity of the Bayesian paradigm, the easy interpretability of Bayesian quantities, or philosophical reasons.

1.1 Motivation
Bayesian nonparametrics is the study of Bayesian inference methods for nonparametric and semiparametric models. In the Bayesian nonparametric paradigm a prior distribution is assigned to all unknown quantities (parameters) involved in the modeling, whether finite or infinite dimensional. Inference is made from the "posterior distribution", the conditional distribution of all parameters given the data. A model completely specifies the conditional distribution of all observables given all unobserved quantities, or parameters, while a prior distribution specifies the distribution of all unobservables. From this point of view, random effects and latent variables also qualify as parameters, and distributions of these quantities, often considered part of the model itself from the classical point of view, are considered part of the prior. The posterior distribution involves an inversion of the order of conditioning. Existence of a regular version of the posterior is guaranteed under mild conditions on the relevant spaces (see Section 2).
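A standard entry point to priors on infinite-dimensional parameters is the Dirichlet process. The truncated stick-breaking sketch below (concentration parameter and truncation level chosen arbitrarily for illustration) shows how a random discrete probability distribution can be generated:

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, n_atoms, rng):
    # Draw stick-breaking fractions V_k ~ Beta(1, alpha) and form the
    # weights w_k = V_k * prod_{j<k} (1 - V_j), truncated at n_atoms.
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

# Weights of a (truncated) draw from a Dirichlet process prior DP(alpha, G0);
# pairing each weight with an atom drawn from G0 gives a random distribution.
w = stick_breaking(alpha=2.0, n_atoms=1000, rng=rng)
```

For a moderate concentration parameter the truncation error is negligible: the weights are nonnegative and sum to 1 up to a factor that shrinks geometrically in the number of atoms.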
Technology: M. Muniagurria, Econ 464 Microeconomics Handout
M. Muniagurria, Econ 464 Microeconomics Handout (Part 1)

I. TECHNOLOGY: Production Function, Marginal Productivity of Inputs, Isoquants

(1) Case of one input, L (labor): q = f(L)
• Let q equal output, so the production function relates L to q. (How much output can be produced with a given amount of labor?)
• Marginal productivity of labor: MPL = Δq/ΔL for small changes, i.e. the slope of the production function: the change in output if we change the amount of labor used by a very small amount.
• How to find total output q if we only have information about the MPL: in general, q is equal to the area under the MPL curve when there is only one input.

Examples:
(a) Linear production functions. Possible forms: q = 10L ⇒ MPL = 10; q = ½L ⇒ MPL = ½; q = 4L ⇒ MPL = 4. [The handout graphs q = 4L and its MPL.] Notice that given only the MPL diagram, we can calculate output for different amounts of labor as the area under MPL: if L = 2, then q = area below MPL for L ≤ 2 = 8. Remark: in all the examples in (a), MPL is constant.
(b) Production functions with decreasing MPL. Remark: this is often thought of as the case of one variable input (labor, L) and a fixed factor (land or entrepreneurial ability).

(2) Case of two variable inputs: q = f(L, K), L (labor), K (capital)
• The production function relates L and K to q (total output).
• Isoquant: combinations of L and K that can achieve the same q.
• Marginal productivities: MPL = Δq/ΔL holding K constant; MPK = Δq/ΔK holding L constant.
• MRTS = −(slope of isoquant) = |ΔK/ΔL| along an isoquant.

Examples:
(a) Linear (L and K are perfect substitutes). Possible forms: q = 10L + 5K ⇒ MPL = 10, MPK = 5; q = L + K ⇒ MPL = 1, MPK = 1; q = 2L + K ⇒ MPL = 2, MPK = 1. [The handout graphs q = 2L + K.]
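The handout's claim that total output equals the area under the MPL curve can be checked numerically. The sketch below uses the handout's q = 4L example; the finite-difference step and grid resolution are arbitrary choices:

```python
import numpy as np

def q(L):
    """Hypothetical linear production function from the handout: q = 4L."""
    return 4.0 * L

def mpl(L, h=1e-6):
    """Marginal product of labor, dq/dL, via a centered finite difference."""
    return (q(L + h) - q(L - h)) / (2.0 * h)

# Recover total output at L = 2 as the area under the MPL curve
# (trapezoidal rule over a grid from 0 to 2):
grid = np.linspace(0.0, 2.0, 2001)
vals = np.array([mpl(x) for x in grid])
area = float(np.sum((vals[:-1] + vals[1:]) / 2.0 * np.diff(grid)))
# For q = 4L the MPL is the constant 4, so the area up to L = 2 is q(2) = 8,
# matching the handout's example.
```

The same two functions work unchanged for a production function with decreasing MPL (e.g. q = √L); only the analytic shortcut of a constant MPL is lost.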
University of Warwick institutional repository: http://go.warwick.ac.uk/wrap
A Thesis Submitted for the Degree of PhD at the University of Warwick by Gui Pedro Araújo de Mendonça: http://go.warwick.ac.uk/wrap/58404
This thesis is made available online and is protected by original copyright. Please refer to the repository record for this item for information to help you to cite it. Our policy information is available from the repository home page.
OECD Compendium of Productivity Indicators 2005
ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT
Pursuant to Article 1 of the Convention signed in Paris on 14th December 1960, which came into force on 30th September 1961, the Organisation for Economic Co-operation and Development (OECD) shall promote policies designed:
• To achieve the highest sustainable economic growth and employment and a rising standard of living in member countries, while maintaining financial stability, and thus to contribute to the development of the world economy;
• To contribute to sound economic expansion in member as well as non-member countries in the process of economic development; and
• To contribute to the expansion of world trade on a multilateral, non-discriminatory basis in accordance with international obligations.
The original member countries of the OECD are Austria, Belgium, Canada, Denmark, France, Germany, Greece, Iceland, Ireland, Italy, Luxembourg, the Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, Turkey, the United Kingdom and the United States. The following countries became members subsequently through accession at the dates indicated hereafter: Japan (28th April 1964), Finland (28th January 1969), Australia (7th June 1971), New Zealand (29th May 1973), Mexico (18th May 1994), the Czech Republic (21st December 1995), Hungary (7th May 1996), Poland (22nd November 1996), Korea (12th December 1996) and the Slovak Republic (14th December 2000). The Commission of the European Communities takes part in the work of the OECD (Article 13 of the OECD Convention). © OECD 2005

FOREWORD
Over the past few years, productivity and economic growth have been an important focus of OECD work.
Rolando Gonzales Martinez, B.Ec., M.Sc., Ph.D. candidate
Rolando Gonzales Martinez
Kristiansand, Norway | www.bayesgroup.org | +47 41224721 | [email protected]

EDUCATION
- B.Ec. Economics, Del Valle University, Bolivia, 2000-2004
- M.Sc. Applied Statistics, University of Alcalá, Spain, 2009-2010
- Ph.D. candidate, Universitetet i Agder, Norway, 2018-

JOBS & AWARDS
- 1/2018 Ph.D. scholarship
- 8/2017 International Consultant, OPHI
- 12/2016 First prize, International Young Statistician Prize, IAOS
- 5/2016 Deputy manager, BCP bank
- 1/2016 Consultant, UNFPA
- 1/2015 Researcher, data analyst & project coordinator, PEP
- 3/2014 Director, BayesGroup.org
- 1/2013 Research & project coordinator, UNFPA
- 4/2012 First award, BCG competition
- 9/2011 Financial Economist, UDAPE
- 11/2010 Macro-economic analyst, MEFP
- 3/2010 First award, BCB competition
- 8/2009 M.Sc. scholarship

OTHER EDUCATION
- J-PAL Course on Evaluating Social Programs, Massachusetts Institute of Technology (MIT)
- Macro-prudential Policies, International Monetary Fund (IMF)
- Systematic Literature Reviews and Meta-analysis, Campbell Foundation / Vrije Universiteit
- Theorizing and Theory Building, Halmstad University (HH, Sweden)
- Bayesian Modeling, Inference and Prediction, University of Reading
- Multidimensional Poverty Analysis, OPHI, Oxford University

SKILLS AND TECHNOLOGIES
- Academic research & publishing, financial & risk analysis, consulting, research & project coordination, mathematical modeling, statistical analysis: 12 years of experience making economic/financial analyses and building mathematical/statistical/econometric models for the government, private organizations and non-profit NGOs.
- Software: mainly MatLab and R (RStudio); Stata, SAS or Eviews when needed; also SPSS, SQL, LaTeX, WinBugs; started working with Python (Spyder) recently.

LANGUAGES
- Spanish (native), English (TOEFL: 106)

RECENT PUBLICATIONS
- Disjunction between universality and targeting of social policies: Demographic opportunities for depolarized development in Paraguay.
Robust Numerical Bayesian Unit Root Test for Model
ROBUST NUMERICAL BAYESIAN UNIT ROOT TEST FOR MODEL UNCERTAINTY
by XUEDONG WU (Under the Direction of Jeffrey H. Dorfman)
ABSTRACT
Unit root testing is an important procedure when performing time series analysis, since all subsequent inferences should be based on a stationary series. Thus, it is crucial to test the stationarity of the time series in hand accurately and efficiently. One issue with the existing popular unit root testing methods is that they all require certain assumptions about the model specification, whether on the functional form or the distribution of the stochastic term, and all subsequent analysis is then performed on the predetermined model. However, circumstances such as data incompleteness, variable selection, and distribution misspecification may lead to an inappropriate model specification and hence to an erroneous conclusion, since the test result depends on the particular model considered. This dissertation confronts this issue by proposing a new numerical Bayesian unit root test incorporating model averaging, which can take model uncertainty as well as variable transformation into account. The first chapter gives a broad literature review of all the building blocks needed for the development of the new methods, including traditional frequentist unit root tests, Bayesian unit root tests, and Bayesian model averaging. Chapter II elaborates the mathematical derivation of the proposed methods, presents a Monte Carlo simulation study and its results, and reports test conclusions for the benchmark Nelson and Plosser (1982) macroeconomic time series. Chapter III applies the method to investigate the effect of data frequency on unit root test results, particularly for financial data.
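For contrast with the Bayesian, model-averaged approach the dissertation proposes, a minimal frequentist Dickey-Fuller test can be sketched as follows. The simulated series, seed, and approximate critical value are illustrative assumptions, not part of the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

def df_tstat(y):
    """t-statistic of b in the Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)           # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

n = 1000
walk = np.cumsum(rng.normal(size=n))            # random walk: unit root, b ≈ 0
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()    # stationary AR(1): b ≈ -0.5

CRIT_5PCT = -2.86   # approximate 5% Dickey-Fuller critical value (constant, large n)
# The unit-root null is rejected when the t-statistic falls below CRIT_5PCT.
```

Note how the conclusion is conditioned on one fixed regression specification (here, a constant and no lagged differences), which is exactly the dependence on a predetermined model that the proposed Bayesian model-averaging test is designed to relax.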
Market Size, Division of Labor, and Firm Productivity
NBER WORKING PAPER SERIES MARKET SIZE, DIVISION OF LABOR, AND FIRM PRODUCTIVITY Thomas Chaney Ralph Ossa Working Paper 18243 http://www.nber.org/papers/w18243 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts Avenue Cambridge, MA 02138 July 2012 We are grateful to Pol Antras, Holger Breinlich, Alejandro Cunat, Elhanan Helpman, Gianmarco Ottaviano, Henry Overman, Stephen Redding, and Tony Venables. We also thank the editor, Robert W. Staiger, and two anonymous referees, for their thoughtful comments. All remaining errors are ours. This work extends the second chapter of Ossa’s Ph.D. dissertation originally entitled "Trade Liberalization, Outsourcing, and Firm Productivity". The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research. NBER working papers are circulated for discussion and comment purposes. They have not been peer- reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications. © 2012 by Thomas Chaney and Ralph Ossa. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source. Market Size, Division of Labor, and Firm Productivity Thomas Chaney and Ralph Ossa NBER Working Paper No. 18243 July 2012 JEL No. F10,F12,L22,L25 ABSTRACT We generalize Krugman's (1979) 'new trade' model by allowing for an explicit production chain in which a range of tasks is performed sequentially by a number of specialized teams. We demonstrate that an increase in market size induces a deeper division of labor among these teams which leads to an increase in firm productivity. -
Bayesian Treatments to Panel Data Models with an Application To
Bayesian Treatments to Panel Data Models with an Application to Models of Productivity Junrong Liu Department of Economics/Rice University Robin Sickles Department of Economics/Rice University E. G. Tsionas University of Athens This Draft November 25, 2013 Abstract This paper considers two models for uncovering information about technical change in large heterogeneous panels. The first is a panel data model with nonparametric time effects. Second, we consider a panel data model with common factors whose number is unknown and their effects are firm-specific. This paper proposes a Bayesian approach to estimate the two models. Bayesian inference techniques organized around MCMC are applied to implement the models. Monte Carlo experiments are performed to examine the finite-sample performance of this approach, which dominates a variety of estimators that rely on parametric assumptions. In order to illustrate the new method, the Bayesian approach has been applied to the analysis of efficiency trends in the U.S. largest banks using a dataset based on the Call Report data from FDIC over the period from 1990 to 2009. Keywords: Panel data, time-varying heterogeneity, Bayesian econometrics, banking studies, productivity JEL Classification: C23; C11; G21; D24 1 1. Introduction In this paper, we consider two panel data models with unobserved heterogeneous time-varying effects, one with individual effects treated as random functions of time, the other with common factors whose number is unknown and their effects are firm-specific. This paper has two distinctive features and can be considered as a generalization of traditional panel data models. Firstly, the individual effects that are assumed to be heterogeneous across units as well as to be time varying are treated nonparametrically, following the spirit of the model from Bai (2009) and Kneip et al. -
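The MCMC machinery the abstract refers to can be illustrated on a deliberately tiny problem. The sketch below is a generic random-walk Metropolis sampler for a normal mean, not the authors' panel data model; the data, prior, and tuning constants are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y_i ~ N(mu, 1). With a flat prior on mu, the posterior is N(ybar, 1/n).
y = rng.normal(2.0, 1.0, size=200)

def log_post(mu):
    # log posterior density of mu, up to an additive constant
    return -0.5 * np.sum((y - mu) ** 2)

n_draws = 5000
draws = np.empty(n_draws)
mu, lp = 0.0, log_post(0.0)
for i in range(n_draws):
    prop = mu + rng.normal(scale=0.3)           # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        mu, lp = prop, lp_prop
    draws[i] = mu

posterior_mean = draws[2000:].mean()            # discard burn-in
```

In the paper's setting the same accept/reject logic runs over a much higher-dimensional parameter block (time-varying effects or factor loadings), typically inside a Gibbs scheme, but the structure of each update is the same.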
End-User Probabilistic Programming
Judith Borghouts, Andrew D. Gordon, Advait Sarkar, and Neil Toronto
Microsoft Research
Abstract. Probabilistic programming aims to help users make decisions under uncertainty. The user writes code representing a probabilistic model, and receives outcomes as distributions or summary statistics. We consider probabilistic programming for end-users, in particular spreadsheet users, estimated to number in tens to hundreds of millions. We examine the sources of uncertainty actually encountered by spreadsheet users, and their coping mechanisms, via an interview study. We examine spreadsheet-based interfaces and technology to help reason under uncertainty, via probabilistic and other means. We show how uncertain values can propagate uncertainty through spreadsheets, and how sheet-defined functions can be applied to handle uncertainty. Hence, we draw conclusions about the promise and limitations of probabilistic programming for end-users.

1 Introduction
In this paper, we discuss the potential of bringing together two rather distinct approaches to decision making under uncertainty: spreadsheets and probabilistic programming. We start by introducing these two approaches.

1.1 Background: Spreadsheets and End-User Programming
The spreadsheet is the first "killer app" of the personal computer era, starting in 1979 with Dan Bricklin and Bob Frankston's VisiCalc for the Apple II [15]. The primary interface of a spreadsheet (then and now, four decades later) is the grid, a two-dimensional array of cells. Each cell may hold a literal data value, or a formula that computes a data value (and may depend on the data in other cells, which may themselves be computed by formulas).
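The idea of uncertain values propagating through spreadsheet formulas can be sketched with plain Monte Carlo arrays. The cell names, distributions, and sample size below are invented for illustration and are not the paper's proposed interface:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000   # Monte Carlo samples per uncertain "cell"

# Uncertain spreadsheet cells represented as arrays of samples
price = rng.normal(10.0, 1.0, N)            # unit price: best guess 10, sd 1
units = rng.triangular(80, 100, 130, N)     # sales: pessimistic/likely/optimistic

# An ordinary spreadsheet formula like =price*units propagates the
# uncertainty automatically when applied elementwise to the samples:
revenue = price * units

mean_revenue = revenue.mean()
p_shortfall = (revenue < 900).mean()        # probability of missing a 900 target
```

Instead of a single point estimate, the "cell" `revenue` now carries a full distribution, so downstream formulas can answer questions like the shortfall probability directly, which is the kind of reasoning support the paper investigates for end-users.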
Labour Productivity
Introduction
Labour productivity is an important economic indicator that is closely linked to economic growth, competitiveness, and living standards within an economy. Labour productivity represents the total volume of output (measured in terms of Gross Domestic Product, GDP) produced per unit of labour (measured in terms of the number of employed persons) during a given time reference period. The indicator allows data users to assess GDP-to-labour input levels and growth rates over time, thus providing general information about the efficiency and quality of human capital in the production process for a given economic and social context, including other complementary inputs and innovations used in production.
Given its usefulness in conveying valuable information on a country's labour market situation, it was one of the indicators used to measure progress towards the achievement of the Millennium Development Goals (MDGs), under Goal 1 (Eradicate poverty and hunger), and it was included as one of the indicators proposed to measure progress towards the achievement of the Sustainable Development Goals (SDGs), under Goal 8 (Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all). ILOSTAT presents ILO estimates and projections on labour productivity, both in constant 2005 US$ and in constant 2011 international $ in Purchasing Power Parity (PPP).

Concepts and definitions
Productivity represents the amount of output per unit of input. In ILOSTAT's indicator, output is measured as gross domestic product (GDP) for the aggregate economy, expressed at purchasing power parities (PPP) to account for price differences across countries. The GDP represents the monetary value of all goods and services produced within a country over a specified period of time.
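The definition above reduces to a simple ratio of output to labour input. The figures below are invented for illustration and do not correspond to any particular country in ILOSTAT:

```python
# Labour productivity = output (GDP) per employed person.
# All figures are hypothetical.
gdp_ppp = 1.25e12        # GDP, constant international $ at PPP
employed = 18_400_000    # number of employed persons

productivity = gdp_ppp / employed            # output per worker

# Year-on-year productivity growth compares the ratio across two periods
gdp_prev, emp_prev = 1.19e12, 18_100_000     # previous period (hypothetical)
growth = productivity / (gdp_prev / emp_prev) - 1.0
```

Note that in this example GDP grows by about 5% but employment also grows, so measured productivity growth (just over 3%) is smaller than output growth, which is exactly the distinction the indicator is designed to capture.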
Bayesian Data Analysis and Probabilistic Programming
Probabilistic programming and Stan
mc-stan.org

Outline:
- What is probabilistic programming
- Stan now
- Stan in the future
- A bit about other software

Probabilistic programming framework BUGS started a revolution in statistical analysis in the early 1990s: it allowed easier use of elaborate models by domain experts and was widely adopted in many fields of science.

The next wave of probabilistic programming (Stan): even wider adoption, including wider use in companies; simulators (likelihood-free inference); autonomous agents (reinforcement learning); deep learning.

Stan: tens of thousands of users, 100+ contributors, 50+ R packages building on Stan. Commercial and scientific users in e.g. ecology, pharmacometrics, physics, political science, finance and econometrics, professional sports, real estate. Used in scientific breakthroughs: the LIGO gravitational wave discovery (2017 Nobel Prize in Physics) and counting black holes. Used in hundreds of companies: Facebook, Amazon, Google, Novartis, Astra Zeneca, Zalando, Smartly.io, Reaktor, ...

StanCon 2018 organized in Helsinki, 29-31 August: http://mc-stan.org/events/stancon2018Helsinki/

Probabilistic program