On the Asymptotics of Minimum Disparity Estimation

Total Pages: 16 · File Type: PDF · Size: 1020 KB

Arun Kumar Kuchibhotla · Ayanendranath Basu
University of Pennsylvania and Indian Statistical Institute
E-mail: [email protected] · E-mail: [email protected]

Abstract Inference procedures based on the minimization of divergences are popular statistical tools. Beran (1977) proved consistency and asymptotic normality of the minimum Hellinger distance (MHD) estimator. This method was later extended to the large class of disparities in discrete models by Lindsay (1994), who proved the existence of a sequence of roots of the estimating equation which is consistent and asymptotically normal. However, the current literature does not provide a general asymptotic result about the minimizer of a generic disparity. In this paper we prove, under very general conditions, an asymptotic representation of the minimum disparity estimator itself (and not just of a root of the estimating equation), thus generalizing the results of Beran (1977) and Lindsay (1994). This leads to a general framework for minimum disparity estimation encompassing both discrete and continuous models.

Keywords Disparity · Quadratic Approximation · Non-parametric Density Estimation

1 Introduction

Different types of divergence measures have been used in the literature to measure the dissimilarity between two distributions. A prominent subclass of density-based divergences is the family of disparities, which will be described in detail in Section 2. Given a density g and a family of parametric densities, a natural way of getting a "best fitting" parameter is to minimize a disparity measure between g and a density from the (parametric) family over the parameter space. When dealing with point estimation in parametric models, maximum likelihood is the most popular method of estimation, but other alternatives like the method of moments and M-estimators are also available.

Considering the efficiency of the estimator to be the criterion for comparison, the maximum likelihood estimator is one of the best under some regularity conditions. Rao (1961), Robertson (1972) and Fryer and Robertson (1972) have noted that there is a class of estimators containing the maximum likelihood estimator such that each estimator in the class is asymptotically efficient, or asymptotically equivalent to the maximum likelihood estimator (up to order $n^{-1/2}$). Many authors have followed this up by considering various other criteria, like higher-order efficiency, in order to single out the maximum likelihood estimator as the best. But in the current era of big data, some errors in the generation, recording and transmission of data are not unexpected. Thus it appears justifiable that one should consider the asymptotic robustness of estimators together with their asymptotic efficiency when comparing estimators. Note, however, that while there is a well established concept of asymptotic efficiency of an estimator, there is no universal way of proving asymptotic robustness of an estimator or of claiming that some estimator is the best robust estimator.

Beran (1977) considered the minimum Hellinger distance estimator in continuous models. He appears to be the first to prove that there are estimators which are asymptotically fully efficient while enjoying strong robustness properties.
Beran's (1977) approach required a non-parametric estimator of the data density. The Hellinger distance was then replaced by a general disparity by Lindsay (1994), who considered discrete models and used sample proportions as estimates of the actual density. A focal point of his work was the study of the properties of zeros of an estimating function obtained as the derivative of a disparity. The main result of Lindsay (1994) states that there exists a sequence of roots which is consistent and asymptotically normal, with asymptotic variance coinciding with the inverse of the Fisher information when the true density is an element of the parametric family. Later the results of Lindsay (1994) were extended by Basu and Lindsay (1994), Park and Basu (2004) and Kuchibhotla and Basu (2015) to continuous models under different conditions on the model, the kernel density estimate and the disparity generating function. However, these authors also consider the roots of an estimating equation rather than the minimum disparity estimator itself.

As noted by Ferguson (1982), proving the asymptotic results for some sequence of roots of the disparity-based estimating equation may not prove the same for the minimum disparity estimator. Also, the results of the previous authors only assert that there exists a "good" sequence of roots and do not prescribe how to obtain such a sequence when the estimating equation has multiple roots. In light of this discussion, we feel that one should derive the asymptotic results for the minimum disparity estimator. Also, an approach which parallels the framework of Lindsay (1994) in the case of continuous models, in terms of the conditions on the disparity, does not exist in the literature. Although Kuchibhotla and Basu (2015) considered a set-up where the disparity conditions are milder than those of Lindsay (1994), they have stronger conditions on the density estimator.

In this paper, we first prove a grand consistency theorem for the minimum disparity estimator under minimal conditions. We then develop an asymptotic representation of the minimum disparity estimator in a general framework. Our results are applicable whenever the densities exist with respect to a σ-finite base measure, rather than being specific to the case of the Lebesgue measure. Also, the conditions on the disparity are exactly the same as those in Lindsay (1994). The specific achievements of this paper may be listed as follows.

1. Consistency is proved with minimal conditions for a suitable subclass of disparities; even the differentiability of the probability density function with respect to the parameter or the smoothness of the disparity generating function is not required.
2. All the results proved in this paper relate to the minimizer of the disparity itself, and not just a suitable sequence of roots of the estimating equation. This is unlike most of the previous work done in this area; Beran (1977) is an exception.
3. The grand consistency theorem and the asymptotic representation of the disparity do not require the observations to be independent; neither is it necessary for the density estimator to be a kernel density estimator.
4. Theorem 4.1, together with Remark 10, establishes a general framework for minimum disparity estimation encompassing both discrete and continuous models. The results of Lindsay (1994) emerge as a special case.
5. The development described in the previous items establishes the legitimacy of the disparity-based analogue of the likelihood ratio test considered in Theorems 5.1–5.3, which depends explicitly on the minimizer of the disparity. This also avoids the possibility of obtaining a negative statistic due to the use of a root which is not a global minimizer.

We now outline the remaining sections of the paper. In Section 2, we present the grand consistency theorem for the minimum disparity estimator. In Section 3, we prove the quadratic approximation of the disparity which leads to an asymptotic representation of the minimum disparity estimator. In Section 4, we prove asymptotic normality of the estimating function which, combined with the asymptotic representation of the estimator, leads to the asymptotic normality of the minimum disparity estimator. In Section 5, we consider testing of hypotheses using disparities. Finally, we conclude with some remarks in Section 6. We try to present our results step by step so that the assumptions required for each step become transparent and the generalization of the results currently available only for kernel density estimators becomes easier.

In this paper we deal with the asymptotic efficiency results of the minimum disparity estimator, and do not re-emphasize the well known robustness properties of these estimators. However, see Remark 5 and Theorem 5.3. Although we primarily follow the approach of Lindsay (1994) in defining the disparities, the class of disparities also coincides with the class of φ-divergences of Csiszár (1963) and Ali and Silvey (1966). Other authors have worked with the φ-divergence formulation and independently determined the properties of the corresponding minimum distance procedures, primarily in discrete models. See, for example, Morales et al. (1995) and Pardo (2006). However, the literature is deficient in general results based on φ-divergences in continuous models, where the results are usually scattered, corresponding to specific divergences, as in Beran (1977) or Basu et al. (1997).

2 Consistency

Let $\mathcal{G}$ represent the class of all probability distributions having densities with respect to some σ-finite base measure $\mu$ on some measurable space $(\Omega, \Lambda, \mu)$, with $\Lambda$ representing a σ-field on $\Omega$. We assume that the true distribution $G$ and the model $\mathcal{F}_\Theta = \{F_\theta : \theta \in \Theta\}$ belong to $\mathcal{G}$. Let $g$ and $f_\theta$ be the corresponding densities (with respect to $\mu$). Let $X_1, X_2, \ldots, X_n$ be a random sample from $G$ which is modelled by $\mathcal{F}_\Theta$. We do not necessarily assume that the observations are independent, although we require them to be identically distributed. Our aim is to estimate the parameter $\theta$ by choosing the model density which gives the closest fit to the data.

Let $C$ be a real-valued strictly convex function with $C(0) = 0$. Consider the divergence given by the form
$$\rho_C(g, f_\theta) = \int C\left(\frac{g(x)}{f_\theta(x)} - 1\right) f_\theta(x)\, d\mu(x).$$
This form describes the class of all disparities (Lindsay, 1994) between the densities $g$ and $f_\theta$. For $g(x) = 0$ or $f_\theta(x) = 0$, we use the conventions
$$0\, C\!\left(\frac{0}{0} - 1\right) = 0, \qquad 0\, C\!\left(\frac{a}{0} - 1\right) = a \lim_{d \to \infty} \frac{C(d)}{d}.$$
The function $C$ in the disparity $\rho_C(g, f_\theta)$ is called the disparity generating function.
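To make the definition concrete, here is a minimal numerical sketch (ours, not from the paper) of minimum disparity estimation for a normal location model: the disparity generating function C(d) = 2(√(d+1) − 1)² yields twice the squared Hellinger distance, the data density g is replaced by a Gaussian kernel density estimate, and the integral is approximated on a grid. The sample size, grid, kernel estimator and optimizer are all illustrative choices.

```python
# Minimal sketch of minimum disparity (Hellinger) estimation for N(theta, 1).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=500)          # observed sample
grid = np.linspace(x.min() - 3, x.max() + 3, 2000)    # integration grid
g = stats.gaussian_kde(x)(grid)                       # nonparametric density estimate g

def hellinger_disparity(theta):
    # rho_C(g, f_theta) with C(d) = 2*(sqrt(d + 1) - 1)^2, i.e. twice the
    # squared Hellinger distance between g and the N(theta, 1) density.
    f = stats.norm.pdf(grid, loc=theta, scale=1.0)
    return np.trapz(2.0 * (np.sqrt(g) - np.sqrt(f)) ** 2, grid)

res = optimize.minimize_scalar(hellinger_disparity, bounds=(-10, 10), method="bounded")
print("minimum Hellinger distance estimate of theta:", res.x)
```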
Recommended publications
  • Hellinger Distance Based Drift Detection for Nonstationary Environments
Hellinger Distance Based Drift Detection for Nonstationary Environments. Gregory Ditzler and Robi Polikar, Dept. of Electrical & Computer Engineering, Rowan University, Glassboro, NJ, USA. [email protected], [email protected].

Abstract: Most machine learning algorithms, including many online learners, assume that the data distribution to be learned is fixed. There are many real-world problems where the distribution of the data changes as a function of time. Changes in nonstationary data distributions can significantly reduce the generalization ability of the learning algorithm on new or field data, if the algorithm is not equipped to track such changes. When the stationary data distribution assumption does not hold, the learner must take appropriate actions to ensure that the new/relevant information is learned. On the other hand, data distributions do not necessarily change continuously, necessitating the ability to monitor the distribution and detect when a significant change in distribution has occurred.

(From the introduction:) … decision boundaries as a concept change, whereas gradual changes in the data distribution as a concept drift. However, when the context does not require us to distinguish between the two, we use the term concept drift to encompass both scenarios, as it is usually the more difficult one to detect. Learning from drifting environments is usually associated with a stream of incoming data, either one instance or one batch at a time. There are two types of approaches for drift detection in such streaming data: in passive drift detection, the learner assumes – every time new data become available – that some drift may have occurred, and updates the classifier accordingly.
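As a hedged illustration of the kind of computation such Hellinger-distance-based detectors rely on (not code from the paper above), the sketch below bins a reference batch and an incoming batch on a common grid and flags drift when the Hellinger distance between the two empirical distributions exceeds a threshold; the bin count and threshold are arbitrary choices for the example.

```python
# Illustrative sketch: Hellinger distance between binned feature distributions
# of a reference batch and a new batch, used as a simple drift signal.
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete probability vectors.
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=1000)      # batch from the old distribution
incoming = rng.normal(0.8, 1.0, size=1000)       # batch after a (simulated) drift

edges = np.histogram_bin_edges(np.concatenate([reference, incoming]), bins=30)
p = np.histogram(reference, bins=edges)[0] / reference.size
q = np.histogram(incoming, bins=edges)[0] / incoming.size

distance = hellinger(p, q)
print("Hellinger distance:", distance, "drift flagged:", distance > 0.1)  # threshold is illustrative
```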
  • Hellinger Distance-Based Similarity Measures for Recommender Systems
Hellinger Distance-based Similarity Measures for Recommender Systems. Roma Goussakov. One-year master thesis, Umeå University.

Abstract: Recommender systems are used in online sales and e-commerce for recommending potential items/products for customers to buy based on their previous buying preferences and related behaviours. Collaborative filtering is a popular computational technique that has been used worldwide for such personalized recommendations. Among the two forms of collaborative filtering, neighbourhood- and model-based, the neighbourhood-based collaborative filtering is more popular yet relatively simple. It relies on the concept that a certain item might be of interest to a given customer (active user) if either he appreciated similar items in the buying space, or the item is appreciated by similar users (neighbours). To implement this concept, different kinds of similarity measures are used. This thesis is set to compare different user-based similarity measures along with defining meaningful measures based on the Hellinger distance, which is a metric on the space of probability distributions. Data from a popular database, MovieLens, will be used to show the effectiveness of different Hellinger distance-based measures compared to other popular measures such as Pearson correlation (PC), cosine similarity, constrained PC and JMSD. The performance of different similarity measures will then be evaluated with the help of mean absolute error, root mean squared error and F-score. From the results, no evidence was found to claim that Hellinger distance-based measures performed better than more popular similarity measures for the given dataset.

Abstract (Swedish, translated): Title: Hellinger distance-based similarity measures for recommender systems. Recommender systems are most often used within e-commerce for recommendations of potential goods/products that a customer will be interested in buying, based on their previous purchase preferences and related behaviour.
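A minimal sketch of one way such a Hellinger-distance-based similarity could be defined (an illustration under our own assumptions, not the thesis's actual measure): compare two users' normalized rating histograms on a 1-5 scale and take similarity = 1 − Hellinger distance.

```python
# Illustrative sketch: Hellinger-distance-based similarity between two users,
# computed from their normalized histograms of ratings on a 1-5 scale.
import numpy as np

def rating_distribution(ratings, scale=5):
    counts = np.bincount(ratings, minlength=scale + 1)[1:]   # drop the unused 0 bin
    return counts / counts.sum()

def hellinger_similarity(ratings_u, ratings_v):
    p, q = rating_distribution(ratings_u), rating_distribution(ratings_v)
    distance = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    return 1.0 - distance     # Hellinger distance lies in [0, 1], so this is a similarity

user_a = np.array([5, 4, 4, 5, 3, 5])
user_b = np.array([4, 4, 5, 3, 4])
print("similarity:", hellinger_similarity(user_a, user_b))
```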
  • Three Statistical Testing Procedures in Logistic Regression: Their Performance in Differential Item Functioning (DIF) Investigation
Research Report ETS RR-09-35, December 2009: Three Statistical Testing Procedures in Logistic Regression: Their Performance in Differential Item Functioning (DIF) Investigation. Insu Paek, ETS, Princeton, New Jersey. Copyright © 2009 by Educational Testing Service. All rights reserved.

Abstract: Three statistical testing procedures well known in the maximum likelihood approach are the Wald, likelihood ratio (LR), and score tests. Although well known, the application of these three testing procedures in the logistic regression method to investigate differential item functioning (DIF) has not yet been made rigorously. Employing a variety of simulation conditions, this research (a) assessed the three tests' performance for DIF detection and (b) compared DIF detection in different DIF testing modes (targeted vs. general DIF testing). Simulation results showed small differences between the three tests and different testing modes. However, targeted DIF testing consistently performed better than general DIF testing; the three tests differed more in performance in general DIF testing and nonuniform DIF conditions than in targeted DIF testing and uniform DIF conditions; and the LR and score tests consistently performed better than the Wald test.
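For readers unfamiliar with the mechanics, the following sketch shows a generic likelihood ratio test for DIF in logistic regression on simulated data; it illustrates the general approach only, not the report's simulation code, and the model terms and data are invented for the example.

```python
# Illustrative sketch: likelihood ratio test for DIF in logistic regression,
# comparing a model with group and group-by-score terms against a score-only model.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 2000
score = rng.normal(size=n)                       # matching variable (e.g., total score)
group = rng.integers(0, 2, size=n)               # focal vs. reference group
logit_p = -0.5 + 1.2 * score + 0.4 * group       # simulated uniform DIF on this item
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # item response

X_reduced = sm.add_constant(np.column_stack([score]))
X_full = sm.add_constant(np.column_stack([score, group, score * group]))

fit_reduced = sm.Logit(y, X_reduced).fit(disp=0)
fit_full = sm.Logit(y, X_full).fit(disp=0)

lr = 2 * (fit_full.llf - fit_reduced.llf)        # likelihood ratio statistic
p_value = stats.chi2.sf(lr, df=2)                # two extra parameters in the full model
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```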
  • Wald (And Score) Tests
Wald (and Score) Tests (lecture slides, 18 slides). The vector of MLEs is asymptotically normal, that is, multivariate normal. This yields: confidence intervals; Z-tests of $H_0: \theta_j = \theta_0$; Wald tests; score tests; and, indirectly, the likelihood ratio tests.

Under regularity conditions (thank you, Mr. Wald): $\hat{\theta}_n \to \theta$ a.s., and $\sqrt{n}(\hat{\theta}_n - \theta) \stackrel{d}{\to} T \sim N_k(0, I(\theta)^{-1})$, so we say that $\hat{\theta}_n$ is asymptotically $N_k(\theta, \tfrac{1}{n} I(\theta)^{-1})$. Here $I(\theta)$ is the Fisher information in one observation, a $k \times k$ matrix with entries $I(\theta)_{ij} = E\left[-\frac{\partial^2}{\partial\theta_i \partial\theta_j} \log f(Y; \theta)\right]$; the Fisher information in the whole sample is $nI(\theta)$.

$H_0: C\theta = h$. Suppose $\theta = (\theta_1, \ldots, \theta_7)$ and the null hypothesis is $\theta_1 = \theta_2$, $\theta_6 = \theta_7$, and $\tfrac{1}{3}(\theta_1 + \theta_2 + \theta_3) = \tfrac{1}{3}(\theta_4 + \theta_5 + \theta_6)$. We can write the null hypothesis in matrix form as
$$\begin{pmatrix} 1 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 \\ 1 & 1 & 1 & -1 & -1 & -1 & 0 \end{pmatrix} \begin{pmatrix} \theta_1 \\ \vdots \\ \theta_7 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$

Suppose $H_0: C\theta = h$ is true and $\widehat{I}(\theta)_n \stackrel{p}{\to} I(\theta)$. By Slutsky 6a (continuous mapping), $\sqrt{n}(C\hat{\theta}_n - C\theta) = \sqrt{n}(C\hat{\theta}_n - h) \stackrel{d}{\to} CT \sim N_k(0, C I(\theta)^{-1} C')$, and $\widehat{I}(\theta)_n^{-1} \stackrel{p}{\to} I(\theta)^{-1}$. Then by Slutsky's (6c) Stack Theorem, $\left(\sqrt{n}(C\hat{\theta}_n - h),\ \widehat{I}(\theta)_n^{-1}\right) \stackrel{d}{\to} \left(CT,\ I(\theta)^{-1}\right)$. Finally, by Slutsky 6a again,
$$W_n = n(C\hat{\theta}_n - h)' \left(C \widehat{I}(\theta)_n^{-1} C'\right)^{-1} (C\hat{\theta}_n - h) \stackrel{d}{\to} W = (CT - 0)'\left(C I(\theta)^{-1} C'\right)^{-1}(CT - 0) \sim \chi^2(r).$$

The Wald test statistic is $W_n = n(C\hat{\theta}_n - h)' (C \widehat{I}(\theta)_n^{-1} C')^{-1} (C\hat{\theta}_n - h)$; again, the null hypothesis is $H_0: C\theta = h$, and the matrix $C$ is $r \times k$, $r \leq k$, of rank $r$. All we need is a consistent estimator of $I(\theta)$. $I(\hat{\theta})$ would do, but it is inconvenient: one needs to compute partial derivatives and expected values in $I(\theta) = E\left[-\frac{\partial^2}{\partial\theta_i \partial\theta_j} \log f(Y; \theta)\right]$. Observed Fisher information: to find $\hat{\theta}_n$, minimize the minus log likelihood.
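A small numerical sketch of the Wald statistic defined above, using the constraint matrix from the slides but made-up estimates and a made-up Fisher information matrix:

```python
# Illustrative sketch of the Wald statistic
# W_n = n (C theta_hat - h)' (C I_hat^{-1} C')^{-1} (C theta_hat - h) ~ chi^2(r) under H0.
# The estimates and Fisher information below are invented numbers for demonstration.
import numpy as np
from scipy import stats

n = 500
theta_hat = np.array([1.02, 0.98, 1.50, 1.10, 1.20, 0.95, 0.97])   # hypothetical MLEs
I_hat = np.eye(7)                                                   # hypothetical per-observation Fisher information

# H0: theta1 = theta2, theta6 = theta7, (theta1+theta2+theta3)/3 = (theta4+theta5+theta6)/3
C = np.array([
    [1, -1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, -1],
    [1, 1, 1, -1, -1, -1, 0],
], dtype=float)
h = np.zeros(3)

diff = C @ theta_hat - h
middle = np.linalg.inv(C @ np.linalg.inv(I_hat) @ C.T)
W = n * diff @ middle @ diff
p_value = stats.chi2.sf(W, df=C.shape[0])        # r = number of restrictions
print(f"W = {W:.2f}, p = {p_value:.4g}")
```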
  • Comparison of Wald, Score, and Likelihood Ratio Tests for Response Adaptive Designs
Journal of Statistical Theory and Applications, Volume 10, Number 4, 2011, pp. 553-569. ISSN 1538-7887. Comparison of Wald, Score, and Likelihood Ratio Tests for Response Adaptive Designs. Yanqing Yi (1) and Xikui Wang (2). (1) Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland, Canada A1B 3V6. (2) Department of Statistics, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2.

Abstract: Data collected from response adaptive designs are dependent. Traditional statistical methods need to be justified for use in response adaptive designs. This paper generalizes Rao's score test to response adaptive designs and introduces a generalized score statistic. Simulation is conducted to compare the statistical powers of the Wald, the score, the generalized score and the likelihood ratio statistics. The overall statistical power of the Wald statistic is better than that of the score, the generalized score and the likelihood ratio statistics for small to medium sample sizes. The score statistic does not show good sample properties for adaptive designs, and the generalized score statistic is better than the score statistic under the adaptive designs considered. When the sample size becomes large, the statistical power is similar for the Wald, the score, the generalized score and the likelihood ratio test statistics.

MSC: 62L05, 62F03. Keywords and phrases: response adaptive design, likelihood ratio test, maximum likelihood estimation, Rao's score test, statistical power, the Wald test. Corresponding author: fax 1-709-777-7382; e-mail addresses: [email protected] (Yanqing Yi), xikui [email protected] (Xikui Wang).
  • On Measures of Entropy and Information
On Measures of Entropy and Information. Gavin E. Crooks, Tech. Note 009 v0.7, 2018-09-22. http://threeplusone.com/info

Contents: 0 Notes on notation and nomenclature. 1 Entropy: entropy; joint entropy; marginal entropy; conditional entropy. 2 Mutual information: mutual information; multivariate mutual information; interaction information; conditional mutual information; binding information; residual entropy; total correlation; Lautum information; uncertainty coefficient. 3 Relative entropy: relative entropy; cross entropy; ... 5 Csiszár f-divergences: Csiszár f-divergence; dual f-divergence; symmetric f-divergences; K-divergence; fidelity; Hellinger discrimination; Pearson divergence; Neyman divergence; LeCam discrimination; skewed K-divergence; alpha-Jensen-Shannon entropy. 6 Chernoff divergence: Chernoff divergence; Chernoff coefficient; Rényi divergence; alpha-divergence; Cressie-Read divergence; Tsallis divergence; Sharma-Mittal divergence.
  • Statistical Asymptotics Part II: First-Order Theory
Statistical Asymptotics Part II: First-Order Theory. Andrew Wood, School of Mathematical Sciences, University of Nottingham. APTS, April 15-19 2013.

Structure of the chapter: this chapter covers asymptotic normality and related results. Topics: MLEs, log-likelihood ratio statistics and their asymptotic distributions; M-estimators and their first-order asymptotic theory. Initially we focus on the case of the MLE of a scalar parameter θ. Then we study the case of the MLE of a vector θ, first without and then with nuisance parameters. Finally, we consider the more general setting of M-estimators.

Motivation: statistical inference typically requires approximations because exact answers are usually not available. Asymptotic theory provides useful approximations to densities or distribution functions. These approximations are based on results from probability theory. The theory underlying these approximation techniques is valid as some quantity, typically the sample size n (or, more generally, some measure of information), goes to infinity, but the approximations obtained are often accurate even for small sample sizes.

Test statistics: consider testing the null hypothesis H0 : θ = θ0, where θ0 is an arbitrary specified point in Ωθ. If desired, we may …
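The claim that the normal approximation is often accurate even for modest sample sizes can be checked by simulation; the sketch below (our illustration, not part of the notes) does this for the MLE of an exponential rate, where the per-observation Fisher information is 1/λ².

```python
# Illustrative simulation: for the MLE of an exponential rate,
# sqrt(n) * (lambda_hat - lambda) should be close to N(0, lambda^2)
# even at moderate n, since I(lambda) = 1 / lambda^2.
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 50, 20000

samples = rng.exponential(scale=1.0 / lam, size=(reps, n))
lam_hat = 1.0 / samples.mean(axis=1)                 # MLE in each replication
z = np.sqrt(n) * (lam_hat - lam)

print("simulated sd:", z.std(), "asymptotic sd:", lam)         # both should be close to 2
print("P(|Z| > 1.96 * lam):", np.mean(np.abs(z) > 1.96 * lam))  # should be near 0.05
```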
  • Tailoring Differentially Private Bayesian Inference to Distance Between Distributions
Tailoring Differentially Private Bayesian Inference to Distance Between Distributions. Jiawen Liu*, Mark Bun**, Gian Pietro Farina*, and Marco Gaboardi*. *University at Buffalo, SUNY: {jliu223, gianpiet, gaboardi}@buffalo.edu. **Princeton University: [email protected].

Contents: 1 Introduction. 2 Preliminaries. 3 Technical Problem Statement and Motivations. 4 Mechanism Proposition: 4.1 Laplace Mechanism Family (using the ℓ1 norm metric; using an improved ℓ1 norm metric); 4.2 Exponential Mechanism Family (standard exponential mechanism; exponential mechanism with Hellinger metric and local sensitivity; exponential mechanism with Hellinger metric and smoothed sensitivity). 5 Privacy Analysis: privacy of the Laplace mechanism family; privacy of the exponential mechanism family (differential privacy of expMech; non-differential privacy of expMech_local; differential privacy proof for expMech_smoo). 6 Accuracy Analysis: accuracy bounds for the baseline mechanisms (Laplace and improved Laplace); accuracy bound for expMech_smoo; accuracy comparison between expMech_smoo, lapMech and ilapMech. 7 Experimental Evaluations: efficiency evaluation; accuracy evaluation (theoretical and experimental results); privacy evaluation. 8 Conclusion and Future Work.

Abstract: Bayesian inference is a statistical method which allows one to derive a posterior distribution, starting from a prior distribution and observed data.
  • An Information-Geometric Approach to Feature Extraction and Moment
An information-geometric approach to feature extraction and moment reconstruction in dynamical systems. Suddhasattwa Das (a), Dimitrios Giannakis (a), Enikő Székely (b). (a) Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA. (b) Swiss Data Science Center, ETH Zürich and EPFL, 1015 Lausanne, Switzerland.

Abstract: We propose a dimension reduction framework for feature extraction and moment reconstruction in dynamical systems that operates on spaces of probability measures induced by observables of the system, rather than directly in the original data space of the observables themselves as in more conventional methods. Our approach is based on the fact that orbits of a dynamical system induce probability measures over the measurable space defined by (partial) observations of the system. We equip the space of these probability measures with a divergence, i.e., a distance between probability distributions, and use this divergence to define a kernel integral operator. The eigenfunctions of this operator create an orthonormal basis of functions that capture different timescales of the dynamical system. One of our main results shows that the evolution of the moments of the dynamics-dependent probability measures can be related to a time-averaging operator on the original dynamical system. Using this result, we show that the moments can be expanded in the eigenfunction basis, thus opening up the avenue for nonparametric forecasting of the moments. If the collection of probability measures is itself a manifold, we can in addition equip the statistical manifold with the Riemannian metric and use techniques from information geometry. We present applications to ergodic dynamical systems on the 2-torus and the Lorenz 63 system, and show on a real-world example that a small number of eigenvectors is sufficient to reconstruct the moments (here the first four moments) of an atmospheric time series, i.e., the real-time multivariate Madden-Julian oscillation index.
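The following toy sketch (a heavily simplified illustration under our own assumptions, not the paper's construction) conveys the basic pipeline: turn sliding windows of a time series into probability vectors, compute pairwise divergences between them (Jensen-Shannon here, purely for convenience), build a Gaussian kernel from those divergences, and take the leading eigenvectors of the kernel matrix as data-driven basis functions on the windows.

```python
# Toy sketch: kernel eigenvectors built from pairwise divergences between
# probability distributions induced by sliding windows of a time series.
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 40, 4000)) + 0.3 * rng.normal(size=4000)

window, step, bins = 200, 50, 25
edges = np.histogram_bin_edges(series, bins=bins)
dists = []
for start in range(0, len(series) - window, step):
    counts = np.histogram(series[start:start + window], bins=edges)[0]
    dists.append(counts / counts.sum())          # one probability vector per window
dists = np.array(dists)

m = len(dists)
D = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        D[i, j] = D[j, i] = jensenshannon(dists[i], dists[j])

K = np.exp(-(D / np.median(D[D > 0])) ** 2)      # Gaussian kernel on divergences
eigvals, eigvecs = np.linalg.eigh(K)
leading = eigvecs[:, ::-1][:, :4]                # four leading "eigenfunctions" on windows
print("kernel size:", K.shape, "leading eigenvalues:", eigvals[::-1][:4])
```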
  • Econometrics-I-11.Pdf
Econometrics I, Professor William Greene, Stern School of Business, Department of Economics. Part 11: Hypothesis Testing - 2.

Classical hypothesis testing: we are interested in using the linear regression to support or cast doubt on the validity of a theory about the real-world counterpart to our statistical model. The model is used to test hypotheses about the underlying data generating process.

Types of tests. Nested models: a restriction on the parameters of a particular model, e.g. y = β1 + β2x + β3T + ε with β3 = 0 (the "treatment" works; β3 ≠ 0). Nonnested models: e.g., different RHS variables, yt = β1 + β2xt + β3xt−1 + εt versus yt = β1 + β2xt + β3yt−1 + wt (lagged effects occur immediately or are spread over time). Specification tests: ε ~ N[0, σ²] vs. some other distribution (the "null" specification is true or some other specification is true).

Hypothesis testing, nested vs. nonnested specifications: y = b1x + e vs. y = b1x + b2z + e: nested. y = bx + e vs. y = cz + u: not nested. y = bx + e vs. log y = c log x: not nested. y = bx + e with e ~ Normal vs. e ~ t[·]: not nested. Fixed vs. random effects: not nested. Logit vs. probit: not nested. x is (not) endogenous: maybe nested; we'll see.

Parametric restrictions. Linear: Rβ − q = 0, where R is J×K, J < K, of full row rank. General: r(β, q) = 0, where r is a vector of J functions and R(β, q) = ∂r(β, q)/∂β′. Use r(β, q) = 0 for both the linear and nonlinear cases.

Broad approaches. Bayesian: does not reach a firm conclusion.
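As a hedged, self-contained illustration of testing a linear parametric restriction of the kind described above (not code from the course), the sketch below tests β3 = 0 in y = β1 + β2x + β3T + ε with the classical F statistic computed from restricted and unrestricted sums of squared residuals; the data are simulated.

```python
# Illustrative sketch: F test of the linear restriction beta_3 = 0,
# F = ((SSR_restricted - SSR_unrestricted) / J) / (SSR_unrestricted / (n - K)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
T = rng.integers(0, 2, size=n).astype(float)
y = 1.0 + 2.0 * x + 0.0 * T + rng.normal(size=n)   # treatment effect truly zero

def ssr(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS fit
    resid = y - X @ beta
    return resid @ resid

X_u = np.column_stack([np.ones(n), x, T])           # unrestricted model
X_r = np.column_stack([np.ones(n), x])              # restricted model (beta_3 = 0 imposed)

J, K = 1, X_u.shape[1]
F = ((ssr(X_r, y) - ssr(X_u, y)) / J) / (ssr(X_u, y) / (n - K))
print("F =", F, "p-value =", stats.f.sf(F, J, n - K))
```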
  • Issue PDF (13986
European Mathematical Society NEWSLETTER No. 22, December 1996.

Contents: Second European Congress of Mathematics (3); Report on the Second Junior Mathematical Congress (9); Obituary - Paul Erdős (11); Report on the Council and Executive Committee Meetings (15); Fifth Framework Programme for Research and Development (17); Diderot Mathematics Forum (19); Report on the Prague Mathematical Conference (21); Preliminary report on EMS Summer School (22); EMS Lectures (27); European Women in Mathematics (28); Euronews (29); Problem Corner (41); Book Reviews (48). Produced at the Department of Mathematics, Glasgow Caledonian University. Printed by Armstrong Press, Southampton, UK.

EDITORS. Prof Roy Bradley, Department of Mathematics, Glasgow Caledonian University, GLASGOW G4 0BA, SCOTLAND. Editorial Team Glasgow: R. Bradley, V. Jha, J. Gomatam, G. Kennedy, M. A. Speller, J. Wilson. Editor - Mathematics Education: Prof. Vinicio Villani, Dipartimento di Matematica, Via Bounarroti 2, 56127 Pisa, Italy; e-mail [email protected]. Editors - Brief Reviews: I Netuka and V Soucek, Mathematical Institute, Charles University, Sokolovska 83, 18600 Prague, Czech Republic; e-mail: [email protected], [email protected]. Newsletter editor: R. Bradley, Glasgow Caledonian University (address above); e-mail [email protected]. Newsletter advertising officer: M.

Secretary: Peter W. Michor, Institut für Mathematik, Universität Wien, Strudlhofgasse 4, A-1090 Wien, Austria; e-mail: [email protected]. Treasurer: A. Lahtinen, Department of Mathematics, P.O. Box 4, FIN-00014 University of Helsinki, Finland; e-mail: [email protected]. EMS Secretariat: Ms. T. Makelainen, University of Helsinki (address above); e-mail [email protected]; tel: +358-9-1912 2883; telex: 124690; fax: +358-9-1912 3213.

USEFUL ADDRESSES. President: Jean-Pierre Bourguignon, IHES, Route de Chartres, F-94400 Bures-sur-Yvette
  • Statistics As Both a Purely Mathematical Activity and an Applied Science NAW 5/18 Nr
Piet Groeneboom, Jan van Mill, Aad van der Vaart: Statistics as both a purely mathematical activity and an applied science. NAW 5/18 nr. 1, March 2017, p. 55. Piet Groeneboom, Delft Institute of Applied Mathematics, Delft University of Technology, [email protected]; Jan van Mill, KdV Institute for Mathematics, University of Amsterdam, [email protected]; Aad van der Vaart, Mathematical Institute, Leiden University, [email protected].

In Memoriam Kobus Oosterhoff (1933–2015): Statistics as both a purely mathematical activity and an applied science. On 27 May 2015 Kobus Oosterhoff passed away at the age of 82. Kobus was employed at the Mathematisch Centrum in Amsterdam from 1961 to 1969, at the Roman Catholic University of Nijmegen from 1970 to 1974, and then as professor in Mathematical Statistics at the Vrije Universiteit Amsterdam from 1975 until his retirement in 1996. In this obituary Piet Groeneboom, Jan van Mill and Aad van der Vaart look back on his life and work.

Kobus (officially: Jacobus) Oosterhoff was born on 7 May 1933 in Leeuwarden, the capital of the province of Friesland in the … diploma (comparable to a masters) in 1963. His favorite lecturer was the topologist J. de Groot, who seems to have deeply … contact with Hemelrijk and the encouragement received from him, but he did his thesis under the direction of Willem van Zwet who, one year younger than Kobus, had been a professor at Leiden University since 1965. Kobus became Willem's first PhD student, defending his dissertation Combination of One-sided Test Statistics on 26 June 1969 at Leiden University.