The Econometric Approach to Efficiency Analysis

2 The Econometric Approach to Efficiency Analysis
William H. Greene

2.1 Introduction

Chapter 1 describes two broad paradigms for measuring economic efficiency, one based on an essentially nonparametric, programming approach to analysis of observed outcomes, and one based on an econometric approach to estimation of theory-based models of production, cost, or profit. This chapter presents an overview of techniques for econometric analysis of technical (production) and economic (cost) efficiency. The stochastic frontier model of Aigner, Lovell, and Schmidt (1977) is now the standard econometric platform for this type of analysis. I survey the underlying models and econometric techniques that have been used in studying technical inefficiency in the stochastic frontier framework and present some of the recent developments in econometric methodology. Applications that illustrate some of the computations are presented in the final section.

2.1.1 Modeling production

The empirical estimation of production and cost functions is a standard exercise in econometrics. The frontier production function or production frontier is an extension of the familiar regression model, based on the theoretical premise that a production function, or its dual, the cost function, or the convex conjugate of the two, the profit function, represents an ideal: the maximum output attainable given a set of inputs, the minimum cost of producing that output given the prices of the inputs, or the maximum profit attainable given the inputs, outputs, and prices of the inputs. The estimation of frontier functions is the econometric exercise of making the empirical implementation consistent with the underlying theoretical proposition that no observed agent can exceed the ideal. In practice, the frontier function model is (essentially) a regression model that is fit with the recognition of the theoretical constraint that all observations lie within the theoretical extreme. Measurement of (in)efficiency is, then, the empirical estimation of the extent to which observed agents (fail to) achieve the theoretical ideal. My interest in this chapter is in this latter function. The estimated model of production, cost, or profit is the means to the objective of measuring inefficiency. As intuition might suggest at this point, the exercise here is a formal analysis of the "residuals" from the production or cost model. The theory of optimization, production, and/or cost provides a description of the ultimate source of deviations from this theoretical ideal.

2.1.2 History of thought

The literature on frontier production and cost functions and the calculation of efficiency measures begins with Debreu (1951) and Farrell (1957) [though there are intellectual antecedents, e.g., Hicks's (1935) suggestion that monopolists would enjoy their position through the attainment of a quiet life rather than through the pursuit of economic profits, a conjecture formalized somewhat more by Leibenstein (1966, 1975)]. Farrell suggested that one could usefully analyze technical efficiency in terms of realized deviations from an idealized frontier isoquant. This approach falls naturally into an econometric framework in which the inefficiency is identified with disturbances in a regression model.
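The idea of identifying inefficiency with a one-sided disturbance can be previewed compactly. The display below is a sketch in the notation that has become standard in this literature; it anticipates, rather than quotes, the formal development in sections 2.3 and 2.4. Here y_i is output, x_i the input vector, C_i cost, and w_i the input price vector.

```latex
% Deterministic frontier: every observation lies on or below the frontier,
% and the one-sided disturbance u_i >= 0 is the inefficiency.
\ln y_i = \beta' x_i - u_i, \qquad u_i \ge 0, \qquad \mathrm{TE}_i = \exp(-u_i) \in (0, 1]

% Stochastic frontier (Aigner, Lovell, and Schmidt, 1977): a symmetric noise
% term v_i is added, so the frontier itself varies randomly across firms.
\ln y_i = \beta' x_i + v_i - u_i, \qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0

% Dual cost frontier: inefficiency raises, rather than lowers, the outcome.
\ln C_i = c(y_i, w_i) + v_i + u_i
```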
The empirical estimation of production functions had begun long before Farrell's work, arguably with Cobb and Douglas (1928). However, until the 1950s, production functions were largely used as devices for studying the functional distribution of income between capital and labor at the macroeconomic level. The celebrated contribution of Arrow et al. (1961) marks a milestone in this literature. The origins of empirical analysis of microeconomic production structures can be more reasonably identified with the work of Dean (1951, a leather belt shop), Johnston (1959, electricity generation), and, in his seminal work on electric power generation, Nerlove (1963). It is noteworthy that all three of these focus on costs rather than production, though Nerlove, following Samuelson (1938) and Shephard (1953), highlighted the dual relationship between cost and production. Empirical attention to production functions at a disaggregated level is a literature that began to emerge in earnest in the 1960s (see, e.g., Hildebrand and Liu, 1965; Zellner and Revankar, 1969).

2.1.3 Empirical antecedents

The empirical literature on production and cost developed largely independently of the discourse on frontier modeling. Least squares or some variant was generally used to pass a function through the middle of a cloud of points, and residuals of both signs were, as in other areas of study, not singled out for special treatment. The focal points of the studies in this literature were the estimated parameters of the production structure, not the individual deviations from the estimated function. An argument was made that these "averaging" estimators were estimating the average, rather than the "best-practice" technology. Farrell's arguments provided an intellectual basis for redirecting attention from the production function specifically to the deviations from that function, and respecifying the model and the techniques accordingly. A series of papers including Aigner and Chu (1968) and Timmer (1971) proposed specific econometric models that were consistent with the frontier notions of Debreu (1951) and Farrell (1957). The contemporary line of research on econometric models begins with the nearly simultaneous appearance of the canonical papers of Aigner, Lovell, and Schmidt (1977) and Meeusen and van den Broeck (1977), who proposed the stochastic frontier models that applied researchers now use to combine the underlying theoretical propositions with a practical econometric framework. The current literature on production frontiers and efficiency estimation combines these two lines of research.

2.1.4 Organization of the survey

This survey presents an overview of this literature and proceeds as follows: Section 2.2 presents the microeconomic theoretical underpinnings of the empirical models. As in the other parts of our presentation, this section gives only a cursory survey because of the very large literature on which it is based. The interested reader can find considerable additional detail in chapter 1 of this book and in a gateway to the larger literature, chapter 2 of Kumbhakar and Lovell (2000).

Section 2.3 constructs the basic econometric framework for the econometric analysis of efficiency. This section also presents some intermediate results on "deterministic" (orthodox) frontier models that adhere strictly to the microeconomic theory. This part is brief.
It is of some historical interest and contains some useful perspective for the more recent work. However, with little exception, current research on the deterministic approach to efficiency analysis takes place in the environment of "data envelopment analysis" (DEA), which is the subject of chapter 3 of this book. This section provides a bridge between the formulation of orthodox frontier models and the modern stochastic frontier models.

Section 2.4 introduces the stochastic production frontier model and presents results on formulation and estimation of this model. Section 2.5 extends the stochastic frontier model to the analysis of cost and profits and describes the important extension of the frontier concept to multiple-output technologies.

Section 2.6 turns to a major econometric issue, that of accommodating heterogeneity in the production model. The assumptions made in sections 2.4 and 2.5 regarding the stochastic nature of technical inefficiency are narrow and arguably unrealistic. Inefficiency is viewed as simply a random shock distributed homogeneously across firms. These assumptions are relaxed at the end of section 2.5 and in section 2.6. Here, I examine proposed models that allow the mean and variance of inefficiency to vary across firms, thus producing a richer, albeit considerably more complex, formulation. This part of the econometric model extends the theory to the practical consideration of observed and unobserved influences that are absent from the pure theory but are a crucial aspect of the real-world application.

The econometric analysis continues in section 2.7 with the development of models for panel data. Once again, this is a modeling issue that provides a means to stretch the theory to producer behavior as it evolves through time. The analysis pursued here goes beyond the econometric issue of how to exploit the useful features of longitudinal data. The literature on panel data estimation of frontier models also addresses the fundamental question of how and whether inefficiency varies over time, and how econometric models can be made to accommodate the theoretical propositions.

The formal measurement of inefficiency is considered in sections 2.8 and 2.9. The use of the frontier function model for estimating firm-level inefficiency that was suggested in sections 2.3 and 2.4 is formalized in the stochastic frontier model in section 2.8. Section 2.9 considers the separate issue of allocative inefficiency. In this area of study, the distinction between errors in optimization and the consequences of those errors for the goals or objectives of
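As a numerical preview of the stochastic frontier computations formalized in sections 2.4 and 2.8, the sketch below simulates a Cobb-Douglas production frontier with normal noise and half-normal inefficiency, recovers the parameters by maximum likelihood, and computes firm-level inefficiency with the Jondrow, Lovell, Materov, and Schmidt (1982) conditional-mean estimator. It is an illustrative sketch only: the data are simulated, the parameter values are hypothetical, and the code is not drawn from the chapter itself.

```python
# A minimal sketch (not the chapter's own code): normal/half-normal stochastic
# frontier in the spirit of Aigner, Lovell, and Schmidt (1977), with simulated data.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 500
ln_labor = rng.normal(3.0, 0.5, n)
ln_capital = rng.normal(4.0, 0.5, n)
X = np.column_stack([np.ones(n), ln_labor, ln_capital])

beta_true = np.array([1.0, 0.6, 0.3])          # hypothetical Cobb-Douglas frontier coefficients
sigma_v, sigma_u = 0.10, 0.25                  # assumed noise and inefficiency scales
v = rng.normal(0.0, sigma_v, n)                # symmetric noise
u = np.abs(rng.normal(0.0, sigma_u, n))        # half-normal inefficiency, u >= 0
ln_y = X @ beta_true + v - u                   # observed log output lies below the (stochastic) frontier

def neg_loglik(theta):
    """Negative log-likelihood of the normal/half-normal stochastic frontier."""
    beta, ln_sv, ln_su = theta[:3], theta[3], theta[4]
    sv, su = np.exp(ln_sv), np.exp(ln_su)      # positive scales via log parameterization
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = ln_y - X @ beta                      # composed residual, eps = v - u
    ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

# OLS starting values: least squares passes through the middle of the data,
# so the slopes are sensible but the intercept sits below the frontier.
beta_ols, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
start = np.concatenate([beta_ols, [np.log(0.1), np.log(0.1)]])
res = minimize(neg_loglik, start, method="BFGS")
beta_hat = res.x[:3]
sv_hat, su_hat = np.exp(res.x[3]), np.exp(res.x[4])

# Jondrow-Lovell-Materov-Schmidt (1982) point estimator of u_i given eps_i.
eps = ln_y - X @ beta_hat
sig2 = sv_hat**2 + su_hat**2
mu_star = -eps * su_hat**2 / sig2
sig_star = np.sqrt(su_hat**2 * sv_hat**2 / sig2)
z = mu_star / sig_star
u_hat = mu_star + sig_star * norm.pdf(z) / norm.cdf(z)
te_hat = np.exp(-u_hat)                        # firm-level technical efficiency estimates

print("frontier coefficients:", np.round(beta_hat, 3))
print("sigma_v, sigma_u:", round(sv_hat, 3), round(su_hat, 3))
print("mean technical efficiency:", round(te_hat.mean(), 3))
```

The contrast between the OLS starting values and the maximum likelihood frontier estimates echoes the "averaging" versus "best-practice" distinction of section 2.1.3: under this specification the one-sided term shifts the fitted function's level, so an average-practice regression understates the frontier even when the slopes are estimated reasonably well.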