M-Estimation (Estimating Equations)


Chapter 7: M-Estimation (Estimating Equations)

7.1 Introduction

In Chapter 1 we made the distinction between the parts of a fully specified statistical model. The primary part is the part that is most important for answering the underlying scientific questions. The secondary part consists of all the remaining details of the model. Usually the primary part is the mean or systematic part of the model, and the secondary part is mainly concerned with the distributional assumptions about the random part of the model. The full specification of the model is important for constructing the likelihood and for using the associated classical methods of inference as spelled out in Chapters 2 and 3 and supported by the asymptotic results of Chapter 6.

However, we are now ready to consider robustifying the inference so that misspecification of some secondary assumptions does not invalidate the resulting inferential methods. Basically this robustified inference relies on replacing the information matrix inverse $I(\theta)^{-1}$ in the asymptotic normality result for $\hat\theta_{\mathrm{MLE}}$ by a generalization $I(\theta)^{-1} B(\theta) I(\theta)^{-1}$ called the sandwich matrix. In correctly specified models, $I(\theta) = B(\theta)$, and the sandwich matrix just reduces to the usual $I(\theta)^{-1}$. When the model is not correctly specified, $I(\theta) \ne B(\theta)$, and the sandwich matrix is important for obtaining approximately valid inference. Thus, use of this more general result accommodates misspecification but is still appropriate in correctly specified models, although its use there in small samples can entail some loss of efficiency relative to standard likelihood inference.

Development of this robustified inference for likelihood-based models leads to a more general context. As discussed in Chapter 6, the asymptotic normal properties of $\hat\theta_{\mathrm{MLE}}$ follow from Taylor expansion of the likelihood equation

$$S(\theta) = \sum_{i=1}^{n} \partial \log f(Y_i;\theta)/\partial\theta = 0.$$
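The effect of the sandwich matrix can be seen numerically. The following sketch is not from the text: it is a minimal NumPy illustration, assuming a deliberately misspecified $N(\theta, 1)$ working model fit to data whose true variance is 4. The model-based variance $I(\theta)^{-1} = 1$ is then wrong, while the empirical sandwich estimate recovers the correct asymptotic variance of $\hat\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.normal(loc=2.0, scale=2.0, size=n)  # true variance is 4, not 1

# Working model N(theta, 1): score for one observation is psi(y, theta) = y - theta,
# so the MLE under the working model is the sample mean.
theta_hat = y.mean()

# Model-based variance: I(theta) = 1 under the N(theta, 1) working model.
model_var = 1.0

# Sandwich: I^{-1} B I^{-1}, with B = E[psi^2] estimated empirically.
B_hat = np.mean((y - theta_hat) ** 2)
sandwich_var = (1.0 / 1.0) * B_hat * (1.0 / 1.0)

print(model_var, sandwich_var)  # sandwich estimate is close to the true variance, 4
```

Under correct specification the two estimates would agree; here only the sandwich form gives approximately valid standard errors.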
The more general approach is then to define an estimator of interest as the solution of an estimating equation but without the equation necessarily coming from the derivative of a log likelihood. For historical reasons and for motivation from maximum likelihood, this more general approach is called M-estimation. In recent years the approach is often referred to loosely as estimating equations. This chapter borrows heavily from the systematic description of M-estimation in Stefanski and Boos (2002). [Source: D.D. Boos and L.A. Stefanski, Essential Statistical Inference: Theory and Methods, Springer Texts in Statistics, DOI 10.1007/978-1-4614-4818-1, © Springer Science+Business Media New York 2013.]

M-estimators are solutions of the vector equation $\sum_{i=1}^{n} \psi(Y_i, \theta) = 0$. That is, the M-estimator $\hat\theta$ satisfies

$$\sum_{i=1}^{n} \psi(Y_i, \hat\theta) = 0. \qquad (7.1)$$

Here we are assuming that $Y_1, \ldots, Y_n$ are independent but not necessarily identically distributed, $\theta$ is a $b$-dimensional parameter, and $\psi$ is a known $(b \times 1)$-function that does not depend on $i$ or $n$. In this description $Y_i$ represents the $i$th datum. In some applications it is advantageous to emphasize the dependence of $\psi$ on particular components of $Y_i$. For example, in a regression problem $Y_i = (x_i, Y_i)$ and (7.1) would typically be written

$$\sum_{i=1}^{n} \psi(Y_i, x_i, \hat\theta) = 0, \qquad (7.2)$$

where $x_i$ is the $i$th regressor.

Huber (1964, 1967) introduced M-estimators and their asymptotic properties, and they were an important part of the development of modern robust statistics. Liang and Zeger (1986) helped popularize M-estimators in the biostatistics literature under the name generalized estimating equations (GEE). Obviously, many others have made important contributions. For example, Godambe (1960) introduced the concept of an optimum estimating function in an M-estimator context, and that paper could be called a forerunner of the M-estimator approach. There is a large literature on M-estimation and estimating equations.
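As a concrete instance of (7.1), the sketch below — an illustration added here, not an example from the text — solves the estimating equation for Huber's classical $\psi$ (a bounded, robust score) by one-dimensional root finding on simulated data containing gross outliers. SciPy's `brentq` and the particular data are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
# 95 clean observations near 0 plus 5 gross outliers at 50
y = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 50.0)])

def psi_huber(y, theta, k=1.345):
    # Huber's psi: linear near zero, bounded beyond +-k
    return np.clip(y - theta, -k, k)

# M-estimate of location: root of the estimating equation sum_i psi(Y_i, theta) = 0
theta_hat = brentq(lambda t: psi_huber(y, t).sum(), -10.0, 60.0)

print(theta_hat, y.mean())  # the Huber estimate stays near 0; the mean is dragged up
```

Because $\psi$ is bounded, a few wild observations shift the estimate only slightly, while the sample mean (the M-estimator with unbounded $\psi(y,\theta) = y - \theta$) is pulled far toward the outliers.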
We will not attempt to survey this literature or document its development. Rather we want to show that the M-estimator approach is simple, powerful, and widely applicable. We especially want students to feel comfortable finding and using the asymptotic approximations that flow from the method.

One key advantage of the approach is that a very large class of asymptotically normal statistics, including delta method transformations, can be put in the general M-estimator framework. This unifies large sample approximation methods, simplifies analysis, and makes computations routine although sometimes tedious. Fortunately, the tedious derivative and matrix calculations often can be performed symbolically with programs such as Maple and Mathematica.

Many estimators not typically thought of as M-estimators can be written in the form of M-estimators. Consider as a simple example the mean deviation from the sample mean,

$$\hat\theta_1 = \frac{1}{n}\sum_{i=1}^{n} |Y_i - \bar Y|.$$

Is this an M-estimator? There is certainly no single equation of the form

$$\sum_{i=1}^{n} \psi(Y_i, \theta_1) = 0$$

that yields $\hat\theta_1$. Moreover, there is no family of densities $f(y;\theta)$ such that $\hat\theta_1$ is a component of the maximum likelihood estimator of $\theta$. But if we let $\psi_1(y, \theta_1, \theta_2) = |y - \theta_2| - \theta_1$ and $\psi_2(y, \theta_1, \theta_2) = y - \theta_2$, then

$$\sum_{i=1}^{n} \psi(Y_i, \hat\theta_1, \hat\theta_2) = \begin{pmatrix} \sum_{i=1}^{n} \left( |Y_i - \hat\theta_2| - \hat\theta_1 \right) \\[4pt] \sum_{i=1}^{n} \left( Y_i - \hat\theta_2 \right) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

yields $\hat\theta_2 = \bar Y$ and $\hat\theta_1 = (1/n)\sum_{i=1}^{n} |Y_i - \bar Y|$.

We like to use the term "partial M-estimator" for an estimator that is not naturally an M-estimator until additional $\psi$ functions are added. The key idea is simple: any estimator that would be an M-estimator if certain parameters were known is a partial M-estimator, because we can "stack" $\psi$ functions for each of the unknown parameters. This aspect of M-estimators is related to the general approach of Randles (1982) for replacing unknown parameters by estimators.
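The stacked system can be checked numerically. This sketch is an added illustration (SciPy's general root finder and the simulated exponential data are assumptions, not part of the text): it solves the two stacked $\psi$ equations jointly and confirms they reproduce the closed-form sample mean and mean deviation.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(2)
y = rng.exponential(scale=3.0, size=500)

def stacked_psi(theta, y):
    t1, t2 = theta
    # psi_1 = |y - theta_2| - theta_1,  psi_2 = y - theta_2, each summed over the sample
    return [np.sum(np.abs(y - t2) - t1), np.sum(y - t2)]

t1_hat, t2_hat = fsolve(stacked_psi, x0=[1.0, 1.0], args=(y,))

# The stacked estimating equations reproduce the closed forms:
assert np.isclose(t2_hat, y.mean(), rtol=1e-4)
assert np.isclose(t1_hat, np.mean(np.abs(y - y.mean())), rtol=1e-4)
```

Nothing about the solver knows the closed forms; the system of $\psi$ equations alone determines both components, which is exactly what makes the "stacking" device useful for estimators without a likelihood.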
From the above example it should be obvious that we can replace $\hat\theta_2 = \bar Y$ by any other estimator defined by an estimating equation, for example, the sample median. Moreover, we can also add $\psi$ functions to give delta method asymptotic results for transformations of parameters, for example, $\hat\theta_3 = \log(\hat\theta_1)$; see Examples 7.2.3 (p. 304) and 7.2.4 (p. 305) and also Benichou and Gail (1989).

The combination of "approximation by averages" and "delta theorem" methodology from Chapter 5 can handle a larger class of problems than the enhanced M-estimation approach described in this chapter. However, enhanced M-estimator methods, implemented with the aid of symbolic mathematics software (for deriving analytic expressions) and standard numerical routines for derivatives and matrix algebra (for obtaining numerical estimates), provide a unified approach that is simple in implementation, easily taught, and applicable to a broad class of complex problems.

A description of the basic approach is given in Section 7.2 along with a few examples. Connections to the influence curve are given in Section 7.3, and then extensions for nonsmooth $\psi$ functions are given in Section 7.4. Extensions for regression are given in Section 7.5. A discussion of a testing problem is given in Section 7.6, and Section 7.7 summarizes the key features of the M-estimator method. The Appendix gives theorems for the consistency and asymptotic normality of $\hat\theta$ as well as Weak Laws of Large Numbers for averages of summands with estimated parameters.

7.2 The Basic Approach

M-estimators solve (7.1), where the vector function $\psi$ must be a known function that does not depend on $i$ or $n$. For regression situations, the argument of $\psi$ is expanded to depend on regressors $x_i$, but the basic $\psi$ still does not depend on $i$. For the moment we confine ourselves to the iid case where $Y_1, \ldots, Y_n$ are iid (possibly vector-valued) with distribution function $F$.
The true parameter value $\theta_0$ is defined by

$$E_F\,\psi(Y_1, \theta_0) = \int \psi(y, \theta_0)\,dF(y) = 0. \qquad (7.3)$$

For example, if $\psi(Y_i, \theta) = Y_i - \theta$, then clearly the population mean $\theta_0 = \int y\,dF(y)$ is the unique solution of $\int (y - \theta)\,dF(y) = 0$.

If there is one unique $\theta_0$ satisfying (7.3), then in general there exists a sequence of M-estimators $\hat\theta$ such that the Weak Law of Large Numbers leads to $\hat\theta \xrightarrow{p} \theta_0$ as $n \to \infty$. These types of results are similar to the consistency results discussed in Chapter 6. Theorem 7.1 (p. 327) in this chapter gives one such result for compact parameter spaces.

Furthermore, if $\psi$ is suitably smooth, then Taylor expansion of $G_n(\theta) = n^{-1}\sum_{i=1}^{n} \psi(Y_i, \theta)$ gives

$$0 = G_n(\hat\theta) = G_n(\theta_0) + G_n'(\theta_0)(\hat\theta - \theta_0) + R_n,$$

where $G_n'(\theta) = \partial G_n(\theta)/\partial\theta^{T}$. For $n$ sufficiently large, we expect $G_n'(\theta_0)$ to be nonsingular so that upon rearrangement

$$\sqrt{n}\,(\hat\theta - \theta_0) = \{-G_n'(\theta_0)\}^{-1}\left\{\sqrt{n}\,G_n(\theta_0) + \sqrt{n}\,R_n\right\}. \qquad (7.4)$$

Define $\psi'(y, \theta) = \partial\psi(y, \theta)/\partial\theta^{T}$ and

$$A(\theta_0) = E_F\{-\psi'(Y_1, \theta_0)\}, \qquad (7.5)$$

$$B(\theta_0) = E_F\{\psi(Y_1, \theta_0)\,\psi(Y_1, \theta_0)^{T}\}. \qquad (7.6)$$

Under suitable regularity conditions as $n \to \infty$,

$$-G_n'(\theta_0) = \frac{1}{n}\sum_{i=1}^{n}\{-\psi'(Y_i, \theta_0)\} \xrightarrow{p} A(\theta_0), \qquad (7.7)$$

$$\sqrt{n}\,G_n(\theta_0) \xrightarrow{d} N\{0, B(\theta_0)\}, \qquad (7.8)$$

$$\sqrt{n}\,R_n \xrightarrow{p} 0. \qquad (7.9)$$

Putting (7.1) and (7.4)–(7.9) together with Slutsky's Theorem, we have that

$$\hat\theta \ \text{is AN}\left(\theta_0,\ \frac{V(\theta_0)}{n}\right) \ \text{as } n \to \infty, \qquad (7.10)$$

where $V(\theta_0) = A(\theta_0)^{-1}\,B(\theta_0)\,\{A(\theta_0)^{-1}\}^{T}$.
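As a worked illustration of (7.5)–(7.10) — again a sketch on simulated data, not an example from the text — the empirical versions of $A(\theta_0)$ and $B(\theta_0)$ for the mean-deviation stack $\psi = (|y - \theta_2| - \theta_1,\ y - \theta_2)^T$ can be assembled directly. Here $\psi'$ is available analytically, with $\partial\psi_1/\partial\theta_2 = -\mathrm{sign}(y - \theta_2)$, so $-\psi'$ has rows $(1, \mathrm{sign}(y - \theta_2))$ and $(0, 1)$.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(scale=3.0, size=2000)
n = len(y)

t2 = y.mean()                    # theta_2 hat (sample mean)
t1 = np.mean(np.abs(y - t2))     # theta_1 hat (mean deviation)

# Empirical A_n = (1/n) sum of -psi'(Y_i, theta_hat), using the analytic derivative:
s = np.sign(y - t2)
A = np.array([[1.0, s.mean()],
              [0.0, 1.0]])

# Empirical B_n = (1/n) sum of psi(Y_i, theta_hat) psi(Y_i, theta_hat)^T
psi = np.column_stack([np.abs(y - t2) - t1, y - t2])
B = psi.T @ psi / n

Ainv = np.linalg.inv(A)
V = Ainv @ B @ Ainv.T            # sandwich estimate of V(theta_0)
se = np.sqrt(np.diag(V) / n)     # standard errors for (theta_1 hat, theta_2 hat)
print(se)
```

A quick consistency check on the algebra: because the second row of $A^{-1}$ is $(0, 1)$, the $(2,2)$ entry of $V$ equals $B_{22}$, so the standard error for $\hat\theta_2$ reduces to the familiar $\hat\sigma/\sqrt{n}$ for the sample mean.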