Advances in the Meta-Analysis of Heterogeneous Clinical Trials I: the Inverse Variance Heterogeneity Model


Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model

Contemporary Clinical Trials xxx (2015) xxx–xxx. http://dx.doi.org/10.1016/j.cct.2015.05.009

Suhail A.R. Doi a,⁎, Jan J. Barendregt b,c, Shahjahan Khan d, Lukman Thalib e, Gail M. Williams c

a Research School of Population Health, Australian National University, Canberra, Australia
b Epigear International, Sunrise Beach, Australia
c School of Population Health, University of Queensland, Brisbane, Australia
d School of Agricultural, Computational and Environmental Sciences, University of Southern Queensland, Toowoomba, Australia
e Department of Community Medicine, Kuwait University, Kuwait

⁎ Corresponding author at: Chronic Disease Epidemiology Division, Research School of Population Health, Australian National University, Mills Road, Acton ACT 2601, Australia. E-mail address: [email protected] (S.A.R. Doi).

Article history: Received 5 February 2015; Received in revised form 10 May 2015; Accepted 15 May 2015.

Keywords: Fixed effect; Heterogeneity; Meta-analysis; Quasi-likelihood; Random effects

© 2015 Elsevier Inc. All rights reserved.

Abstract

This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure — the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71–1.46), which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57–0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL, which can be downloaded from www.epigear.com.

1. Introduction

In the era of evidence based medicine, meta-analyses of well-designed and executed randomized controlled trials have the potential to provide high levels of evidence to support therapeutic interventions in all areas of clinical practice. Despite the potential of the outcome of such trials to guide decision making, they may sometimes fail to produce credible conclusive results or may disagree if there were multiple independent trials investigating the same clinical question. In this situation, a meta-analysis of the trial results has the potential to combine conceptually similar and independent studies with the purpose of deriving more reliable statistical conclusions (based on a much larger sample of data) than any of the individual studies [1,2]. Today, clinical decision making relies heavily on this methodology, as is evident from the work of the Cochrane collaboration and the high volume of publications for meta-analyses outside the collaboration [3].

Meta-analyses, customarily, are performed using either the fixed effect (FE) or the random effects (RE) model [4,5]. The FE model of meta-analysis is underpinned by the assumption that one identical true treatment effect is common to every study included in the meta-analysis, from which the study estimates depart under the influence of random error only [4]. As such, only within-study variation is considered to be present. In practice, this model ensures that the larger studies (with the lowest probability of random error) have the greatest influence on the pooled estimate. The drawback, however, is that this model also demonstrates increasing overdispersion as heterogeneity increases. Overdispersion here refers to an estimator that has a greater observed variance (the true variance, often assessed through simulation) than the variance theoretically expected under the statistical model (and used in the confidence interval computation).
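To make the notion of overdispersion concrete, here is a minimal simulation sketch (ours, not from the paper; Python/NumPy, with arbitrary illustrative study variances and an arbitrary heterogeneity variance tau2) showing that when between-study heterogeneity is present, the FE confidence interval, whose width reflects only within-study variance, covers the true effect far less often than its nominal 95%.

```python
# Illustrative sketch (assumed setup, not the paper's simulation design):
# under heterogeneity (tau2 > 0), the FE estimator's observed variance
# exceeds its model-based variance, so the nominal 95% CI under-covers.
import numpy as np

rng = np.random.default_rng(0)
k = 10                                 # studies per meta-analysis
v = rng.uniform(0.02, 0.2, size=k)     # within-study sampling variances
tau2 = 0.1                             # between-study (heterogeneity) variance
theta = 0.0                            # common underlying effect
reps, covered = 10_000, 0

for _ in range(reps):
    delta = rng.normal(theta, np.sqrt(tau2), size=k)   # true study effects
    d_hat = rng.normal(delta, np.sqrt(v))              # estimated study effects
    w = (1 / v) / np.sum(1 / v)                        # FE inverse-variance weights
    est = np.sum(w * d_hat)                            # pooled FE estimate
    se_theory = np.sqrt(1 / np.sum(1 / v))             # FE model-based standard error
    if abs(est - theta) <= 1.96 * se_theory:
        covered += 1

print(f"FE 95% CI coverage under heterogeneity: {covered / reps:.3f}")
```

Setting tau2 = 0 in this sketch restores coverage of approximately 0.95, which is the FE model assumption of sampling error only.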
In an attempt to tackle the issue of overdispersion, the RE approach was suggested, which attempts to create a more fully specified model [6]. This model makes the additional assumption that the true treatment effects in the individual studies differ from each other and that these differences follow a normal distribution with a common variance. The assumption of normally distributed random effects is not justified [7] because the underlying effects included in the meta-analysis do not constitute a random sample from the population. This model nevertheless ignores the need for randomization in statistical inference [8], and the variance of these underlying effects is usually approximated by a moment-based estimator [9]. The application of this common variance to the model has the unintended effect of redistributing the study weights in only one direction: from larger to smaller studies [10]. Thus the studies with the lowest probability of random error are penalized and do not influence the combined estimates as strongly. The inclusion of this estimated common between-studies variance also seems to be the mechanism that attempts to address overdispersion with increasing heterogeneity, yielding wider confidence intervals and less statistical significance than would be attained through the conventional fixed effect model. Yet, given the faulty assumptions, it does not work as expected and, as heterogeneity increases, the coverage of the confidence interval drops well below the nominal level [7,11]. Though corrections have been suggested [12,13], they have not been easy to implement, and this estimator substantially underestimates the statistical error and remains potentially overconfident in its conclusions [13,14].

A careful look at the inputs to these models demonstrates that both use inverse variance weighting to decrease the observed variance of the estimator. However, the approach taken with the RE estimator disadvantages it because, as heterogeneity increases, the inverse variance weights are moved towards equality, thus increasing estimator variance (see the sketch at the end of this section). This also leads to a failure to specify the theoretical variance correctly, so that it now falls short of the observed variance and nominal coverage is not achieved.

While alternative frequentist RE models that attempt to improve on the conventional theoretical model in various ways have been described in the literature [11], they all continue to be based on the assumption of normally distributed random effects which, as mentioned above, leads to several problems. There is therefore the need for a better method, and this paper argues that the random effects model should be replaced by a distributional assumption free model. Such a model has been proposed by us as a variant of the quality effects model that sets quality to equal (called the IVhet model) [15]. This paper reviews the model's theoretical construct and presents an evaluation of its performance using standard performance measures [16].
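Before turning to the formal development, a short sketch (ours, with hypothetical sampling variances) of the weight redistribution just described: as the between-study variance tau2 used by the RE model grows, the inverse variance weights 1/(v_j + tau2) move towards equality, shifting influence from the largest study to the smallest.

```python
# Illustrative sketch of RE weight flattening; v holds hypothetical
# sampling variances ordered from the largest study to the smallest.
import numpy as np

v = np.array([0.01, 0.05, 0.25])
for tau2 in (0.0, 0.05, 0.5, 5.0):
    w_star = 1 / (v + tau2)       # RE-style inverse variance weights
    w_star /= w_star.sum()        # normalize to sum to 1
    print(f"tau2={tau2:>4}: weights={np.round(w_star, 3)}")
# tau2 = 0 reproduces the FE weights; large tau2 approaches 1/k for every study.
```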
2. Difference between empirically weighted means and the arithmetic mean

Consider a collection of $k$ independent studies, the $j$th of which has estimated effect size $\hat{\delta}_j$, which varies from its true effect size $\delta_j$ through random error. Also consider that the true effects, the $\delta_j$'s, vary from an underlying common effect, $\theta$, through bias.
There is the possibility of some diversity of true effects (which remain similar) across studies, in which case $\theta$ would simply be the mean of the true (unbiased) effects. The correlation between the weights and the estimated study effects, $\rho_{w\hat{\delta}}$, reflects the extent of bias in an estimator: $\rho_{w\hat{\delta}}$ will only be greater than zero if both the weights and the study effects are correlated and, of course, there will be such a correlation if the weights are derived from the data.

For the inverse variance weighted FE model, the weighted estimator,

$$\hat{\theta}_{FE} = \sum_{j=1}^{k} w_j \hat{\delta}_j,$$

has weights that sum to 1 given by:

$$w_j = \frac{1/\upsilon_j}{\sum_{j=1}^{k} 1/\upsilon_j}, \qquad (2)$$

where the sampling error variance of the $j$th study is $\upsilon_j$. Given the previous discussion, this inverse variance estimator is likely to be biased (unless there are equal variances across studies). However, despite the expected bias, the FE estimator does improve over the arithmetic mean estimator because the weights don't just increase the bias: by doing so, they also make the variance of the estimator much smaller [20] and trade off this bias through reduction in the mean squared error (MSE). The point of having such increased precision despite, on average, the estimator being biased is that while the estimator is biased "on average", it will still be "more correct" most of the time if the MSE of the biased estimator is smaller than the MSE of the unbiased estimator. Since, as researchers, we usually have only one set of sample data for meta-analysis, the bias in the estimator is overshadowed by the decreased MSE.

This works well under the model assumption that studies suffer only from sampling errors. When heterogeneity due to systematic errors is also present, this model suffers from overdispersion. To resolve this, the RE weighted estimator was put forward by DerSimonian and Laird in 1986 [9], where the pooled effect is computed in the same way:

$$\hat{\theta}_{RE} = \sum_{j=1}^{k} w_j^{*} \hat{\delta}_j,$$

but where the weights that sum to 1 are now given by:

$$w_j^{*} = \frac{1/(\upsilon_j + \hat{\tau}^2)}{\sum_{j=1}^{k} 1/(\upsilon_j + \hat{\tau}^2)},$$

where $\hat{\tau}^2$ is the moment-based estimate of the between-study variance [9].
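The following self-contained sketch implements the two pooled estimators defined above on hypothetical data, with the DerSimonian–Laird moment estimator [9] assumed for $\hat{\tau}^2$; it is an illustration, not the paper's code.

```python
# FE and RE (DerSimonian-Laird) pooled estimates for hypothetical data.
import numpy as np

def pool_fe(d_hat, v):
    """Inverse-variance fixed effect pooled estimate, with Eq. (2) weights."""
    w = (1 / v) / np.sum(1 / v)
    return np.sum(w * d_hat)

def tau2_dl(d_hat, v):
    """DerSimonian-Laird moment-based estimate of the between-study variance."""
    w = 1 / v
    theta_fe = np.sum(w * d_hat) / np.sum(w)
    q = np.sum(w * (d_hat - theta_fe) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(v) - 1)) / c)          # truncated at zero

def pool_re(d_hat, v):
    """DerSimonian-Laird random effects pooled estimate."""
    w_star = 1 / (v + tau2_dl(d_hat, v))
    w_star /= np.sum(w_star)
    return np.sum(w_star * d_hat)

d_hat = np.array([0.2, -0.1, 0.5, 0.35])   # hypothetical study effect sizes
v = np.array([0.04, 0.02, 0.09, 0.06])     # hypothetical sampling variances
print("FE:", pool_fe(d_hat, v), "RE:", pool_re(d_hat, v))
```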