A Panel Data Analysis of the Connection Between Employee Remuneration, Productivity and Minimum Wage in Romania


RECENT ADVANCES in MATHEMATICS and COMPUTERS in BUSINESS, ECONOMICS, BIOLOGY & CHEMISTRY (ISSN: 1790-2769, ISBN: 978-960-474-194-6)

MARIA DENISA ANTONIE, AMALIA CRISTESCU, NICOLAE CATANICIU
National Scientific Research Institute for Labor and Social Protection
6-8 Povernei Str., District 1, Bucharest, ROMANIA
[email protected], [email protected], [email protected]

Abstract: In this paper we present the results of a panel data analysis of the impact of productivity and the minimum wage on employee remuneration. We used a fixed-effects regression model with Driscoll and Kraay standard errors to account for a possibly heteroskedastic and autocorrelated error structure. The main conclusion is that, over the studied period, the Romanian economy was characterized by a strong correlation between productivity and remuneration, which indicates economic efficiency.

Key-Words: productivity, remuneration, econometric model, panel data, fixed effects, robust standard errors

1 Introduction

In an economy, and especially in an emerging one like Romania, the relationship between productivity and growth potential is important, particularly if the productivity level is seen as the third factor of economic growth after the labor force and capital. Potential GDP growth creates the prerequisites for reducing the existing excess demand in the economy and for ensuring real economic convergence. Secondly, productivity influences inflation and the exchange rate. A weak correlation between wages and productivity creates inflationary pressures by increasing costs. In order to be competitive on the single European market, Romania must focus on economic growth [5]. Competitiveness, moreover, is most often assessed through the correlation between wages (labor costs) and labor productivity.

Since productivity and wages are both essential economic factors, the way they interrelate is a constant concern for economists, as well as for employers and policy-makers. The literature focuses on the connection between these two factors. Several papers concerning labor productivity and wages in Romania use different models and tools. A revised form of the coefficient of structural changes was used to determine the regional/sectoral dissimilarities between productivity and wages [8]. Also, a short-term forecast of the correlation between the labor productivity index and the average gross earnings index in Romanian industry was produced using lag econometric models, ARIMA processes, and feed-forward neural networks [1].

In the analysis undertaken in this study, we tried to capture the dependence of employee remuneration (which is actually the most significant component of labor cost) on labor productivity and the minimum wage. The analysis covers a period of five years (2003-2007), using data for the 29 activities of the Romanian economy. As for the method, we chose a panel data model. Panels are attractive since they contain more information than a single cross-section and allow for increased precision in estimation.

2 Data

The variables used are: employee remuneration (lrem), productivity (lprod) and the minimum wage (lhminwage). The initial data are annual, for the 29 activities of the Romanian economy. We worked with log data, after bringing the series to hourly values. Employee remuneration and the minimum wage were deflated using the CPI with base year 2003. For productivity, in order to ensure comparability, the values were divided by the production index with base year 2003.

Remuneration per hour was calculated by dividing employee remuneration by the number of hours worked. Employee remuneration is part of value added and includes the total remuneration, in cash (gross wages) or in kind, that an employer pays an employee in exchange for the work performed over a period of time, plus the employer's contribution to social insurance. The employer's contribution to social insurance covers the state social insurance contribution (including contributions to the retirement fund), the contribution to the fund for unemployment benefits payment, and the contribution to the social health insurance fund.

Productivity per hour was calculated as the ratio between production and the working hours performed by the occupied population in the 29 economic activities (the occupied population includes employees and freelance workers).

The minimum wage per hour was calculated as the ratio between the national minimum wage and the average number of working hours per month (170 hours).

The sources of our data are the publications of the Romanian National Institute of Statistics (National Accounts, Romanian Statistical Yearbook).

3 Economic description

Economic theory favors linking wages to labor productivity. From this perspective, labor demand is often described as a decreasing curve of labor's marginal productivity. In practice, however, certain elements complicate the detection of the correlation between the wage level and the productivity level. First, employers are interested in labor costs, not just wages. Second, the employment relationship is bilateral, between an employer (interested in labor costs) and employees (interested in wages and other associated benefits). From a microeconomic perspective, such a relationship is characterized by asymmetric information and by incomplete and incorrect disclosure of both parties' intentions. This means there are problems in estimating the level of individual labor productivity.

From the macroeconomic perspective, the labor supply and demand curves are built at the expense of omitting the heterogeneous nature of the aggregated elements. Thus, labor demand reflects the marginal productivity curve of labor, aggregated at the level of the national economy or of the economic sectors (activities). Technically speaking, the relationship between labor supply and demand should take into account work force remuneration. The problem is that this remuneration includes a variable component with many degrees of freedom, the wage (across economic activities, administrative units and time), and a semi-variable (or semi-constant) component, namely the social contributions paid by the employer (to keep things simple, we ignore other minor elements of wage costs), which are changed from time to time as a result of government decisions but do not vary with the type of economic activity.

On the other hand, the wage structure is not the same for all workers, depending on the various fields of activity. However, these wages also have mutual factors of influence, the most important of which is the minimum wage.

Under these circumstances, it is necessary to identify a tradeoff between the motivational base of the behavior function that describes the correlation between labor costs and the productivity level, and the impact of the minimum wage. This led the authors to choose the following regression equation:

lrem_it = C + a*lprod_it + b*lhminwage_it + u_it

where lrem is the log of hourly employee remuneration, lprod the log of productivity per hour, and lhminwage the log of the national minimum wage per hour.

4 Methodology

A panel data regression differs from a regular time-series or cross-section regression in that it has a double subscript on its variables:

y_it = a + x_it' b + u_it,  i = 1, …, N; t = 1, …, T   (1)

The i subscript denotes the cross-section dimension and t denotes the time-series dimension. Most panel data applications use a one-way error component model for the disturbances, with u_it = α_i + ε_it [2].

There are several different linear models for panel data. The fundamental distinction is between fixed-effects and random-effects models. In the fixed-effects (FE) model, the α_i are permitted to be correlated with the regressors x_it, while continuing to assume that x_it is uncorrelated with the idiosyncratic error ε_it. In the random-effects (RE) model, it is assumed that α_i is purely random, a stronger assumption implying that α_i is uncorrelated with the regressors [4].

4.1 Test for poolability of the data

One of the main motivations behind pooling a time series of cross-sections is to widen the database in order to get better and more reliable estimates of the model's parameters. The question is whether or not to pool the data.

The simplest poolability test has as its null hypothesis the OLS model y_it = a + b'x_it + ε_it and as its alternative the FE model y_it = a + b'x_it + α_i + ε_it [9]. In other words, we test for the presence of individual effects. Formally, we write H0: α_i = 0, i = 1, …, N. We consider the F statistic constructed as

F_1-way = [(ESS_R - ESS_U)/(N - 1)] / [ESS_U/(N(T - 1) - K)]

where ESS_R denotes the residual sum of squares under the null hypothesis and ESS_U the residual sum of squares under the alternative. Under H0, the statistic F_1-way is distributed as F with (N - 1, N(T - 1) - K) degrees of freedom. The two sums of squares arise as intermediate results from the OLS and FE estimations.

In Stata, if we run the xtreg command with the fe option, we obtain at the bottom of the output the F test that all α_i = 0. If we reject the null hypothesis, it also means that the OLS estimates suffer from an omitted-variables problem and are biased and inconsistent. Subtracting the individual means from the FE model yields

(y_it - ȳ_i) = (x_it - x̄_i)' b + (ε_it - ε̄_i)   (2)

The within estimator is the OLS estimator of this model. Because α_i has been eliminated, OLS leads to consistent estimates of b even if α_i is correlated with x_it, as is the case in the FE model.

4.2 The Hausman test

The Hausman principle can be applied to all hypothesis
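As a sketch of the within estimator in equation (2), the following Python snippet works on simulated data only (not the paper's dataset; the panel shape and coefficients are illustrative assumptions). It demeans a panel by individual, estimates the slopes by OLS on the demeaned data, and checks that this coincides with the least-squares dummy variable (LSDV) estimator even when the α_i are correlated with the regressors:

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, K = 29, 5, 2                       # echoes the paper: 29 activities, 5 years, 2 regressors

# Fixed effects deliberately correlated with the regressors (the FE setting)
alpha = rng.normal(0.0, 1.0, N)
X = alpha[:, None, None] + rng.normal(0.0, 1.0, (N, T, K))
beta = np.array([0.8, 0.3])              # illustrative true slopes
y = X @ beta + alpha[:, None] + rng.normal(0.0, 0.1, (N, T))   # y_it = x_it' b + a_i + e_it

# Within transformation, eq. (2): subtract individual means, then run OLS
Xw = (X - X.mean(axis=1, keepdims=True)).reshape(N * T, K)
yw = (y - y.mean(axis=1, keepdims=True)).reshape(N * T)
b_within, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

# LSDV benchmark: OLS with one dummy variable per individual
D = np.kron(np.eye(N), np.ones((T, 1)))            # NT x N block of individual dummies
Z = np.hstack([X.reshape(N * T, K), D])
coef, *_ = np.linalg.lstsq(Z, y.reshape(N * T), rcond=None)
b_lsdv = coef[:K]

print(b_within)                          # close to [0.8, 0.3] despite correlated alpha_i
print(np.allclose(b_within, b_lsdv))     # True: within and LSDV slopes coincide
```

Pooled OLS on the raw (undemeaned) data would be biased here precisely because α_i is correlated with x_it; the demeaning removes α_i and restores consistency, which is the point made in the text above.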
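The poolability F statistic of section 4.1 can be computed directly from the two residual sums of squares. A minimal sketch on simulated data (illustrative only; variable names and parameter values are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, K = 29, 5, 2                       # 29 activities, 5 years, 2 regressors

alpha = rng.normal(0.0, 1.0, N)          # individual effects present under the alternative
X = rng.normal(0.0, 1.0, (N, T, K))
beta = np.array([0.8, 0.3])
y = X @ beta + alpha[:, None] + rng.normal(0.0, 0.1, (N, T))

# Restricted model (pooled OLS with a common intercept) -> ESS_R
Xp = np.column_stack([np.ones(N * T), X.reshape(N * T, K)])
yp = y.reshape(N * T)
b_r, *_ = np.linalg.lstsq(Xp, yp, rcond=None)
ess_r = np.sum((yp - Xp @ b_r) ** 2)

# Unrestricted FE model via the within transformation -> ESS_U
Xw = (X - X.mean(axis=1, keepdims=True)).reshape(N * T, K)
yw = (y - y.mean(axis=1, keepdims=True)).reshape(N * T)
b_u, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
ess_u = np.sum((yw - Xw @ b_u) ** 2)

# F_1-way = [(ESS_R - ESS_U)/(N-1)] / [ESS_U/(N(T-1) - K)], F(N-1, N(T-1)-K) under H0
F = ((ess_r - ess_u) / (N - 1)) / (ess_u / (N * (T - 1) - K))
print(round(F, 1))   # far above conventional F(28, 143) critical values -> reject pooling
```

Because the simulated individual effects are sizable relative to the idiosyncratic noise, the statistic rejects H0: α_i = 0 decisively; this mirrors the F test that Stata's xtreg, fe reports at the bottom of its output.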