2 Yule: The Time–Correlation Problem, Nonsense Correlations, Periodicity and Autoregressions

Notes

2 Yule: The Time–Correlation Problem, Nonsense Correlations, Periodicity and Autoregressions

1. William S. Gosset (1876–1937) was one of the most influential statisticians of the twentieth century. As an employee of Guinness, the famous Irish brewer of stout, he was precluded from publishing under his own name and thus took the pseudonym 'Student'. Gosset worked primarily on experimental design and small-sample problems and was the inventor of the eponymous Student's t-distribution. For further biographical details see Pearson (1950, 1990).

2. Yule admitted that he could not produce a proof of this result. It is, in fact, a special case of a more general result proved by Egon Pearson for a non-random series: see Pearson (1922, pages 37–40, and in particular his equation (xviii)).

3. The method used by Yule to produce his Figs 5–9 is discussed in Yule (1926, page 55). We approximate it here to recreate these distributions in our composite Figure 2.6.

4. The calculations required to construct Figure 2.7 are outlined in Yule (1926, page 56).

5. For a derivation of this result in the more general context of calculating 'intraclass' correlation, see Yule and Kendall (1950, §§11.40–11.41).

6. A small Gauss program was written to simulate Yule's sampling procedure and to compute the results shown in Table 2.1 and later in Tables 2.4 and 2.7. Necessarily, the results differ numerically from those of Yule because of the sampling process. (A sketch of this kind of simulation follows these chapter notes.)

7. Yule states that the maximum negative correlation is that between 'terms 2 and 8 or 3 and 9, and is −0.988', which is clearly a misreading of his Table VI, which gives correlations to three decimal places, unlike our Table 2.6, which retains just two to maintain consistency with earlier tables.

8. The index numbers themselves are used, rather than the smoothed Index of Fluctuations, which is often analyzed (see Mills, 2011a, §§3.8–3.9). The serial correlations were obtained using the correlogram view in EViews 6. Compared to Yule's own heroic calculations, which he described in detail (Yule, 1926, page 42) and reported as Table XIII and Fig. 19, they are uniformly smaller but display an identical pattern.

9. Yule (1927, pages 284–6) discussed further features of the sunspot numbers and their related disturbances estimated from equation (2.19). These features do not seem to appear in the extended series and are therefore not discussed here.

10. Indeed, the unusual behavior of sunspots over the last decades of the twentieth century has been the subject of great interest: see, for example, Solanki et al. (2004).
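Note 6 above refers to a small Gauss program that simulated Yule's sampling procedure. As a rough modern stand-in, the following Python sketch draws pairs of independent random series, cumulates them into random walks, and records the sample correlation of each pair; the series length, the number of replications and the random seed are illustrative assumptions, not Yule's actual settings.

```python
import numpy as np

rng = np.random.default_rng(1926)

def sample_correlations(n_obs=10, n_draws=10_000, cumulate=True):
    """Draw pairs of independent random series, optionally cumulate
    them into random walks, and record each pair's sample correlation."""
    corrs = np.empty(n_draws)
    for i in range(n_draws):
        x, y = rng.standard_normal((2, n_obs))
        if cumulate:
            x, y = x.cumsum(), y.cumsum()
        corrs[i] = np.corrcoef(x, y)[0, 1]
    return corrs

# Independent random series: correlations cluster around zero.
# Cumulated (conjunct) series: the distribution spreads out towards
# +/-1 -- the 'nonsense correlations' of the chapter title.
for cumulate in (False, True):
    r = sample_correlations(cumulate=cumulate)
    print(f"cumulate={cumulate}: mean |r| = {np.abs(r).mean():.2f}")
```

As the note says, any such simulation will differ numerically from Yule's own results because of the sampling process.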
3 Kendall: Generalizations and Extensions of Stationary Autoregressive Models

1. Spencer-Smith (1947, page 105) returned to this point about moving averages inducing spurious oscillations, but defined trend in a manner rather different to that considered here, being more akin to a long cycle: 'such series do not contain very prolonged steady increases or decreases in the general values of these terms, as may happen in economic series, and where such movements occur the use of the moving average method may be valid'.

2. The use of the term autocorrelation follows Wold (1938), with 'serial correlation' being reserved for the sample counterpart, r_k. Note also the simplification of the notation used by Kendall from that employed in Chapter 2.

3. Hall (1925) had earlier suggested the same approach but restricted attention to local linear trends and failed to make the link with moving averages as set out below. Interestingly, however, he introduced moving sums to remove seasonal and cyclical components, referring to these as moving integrals and the process of computing them as moving integration, thus predating the use of the term integrated process to refer to a cumulated variable, introduced by Box and Jenkins (see §6.9), by some four decades.

4. A recent application of the approach is to be found in Mills (2007), where it is used to obtain recent trends in temperatures. An extension was suggested by Quenouille (1949), who dispensed with the requirement that the local polynomial trends should smoothly 'fit together', thus allowing discontinuities in their first derivatives. As with many of Quenouille's contributions, this approach was both technically and computationally demanding and does not seem to have captured the attention of practitioners of trend fitting!

5. The data are provided in Table 1 of Coen et al. (1969). We have used EViews 6 to compute this regression, which leads to some very minor differences from the estimates reported in Coen et al.'s Table 2, equation (7).

6. The estimates differ somewhat from Coen et al. (1969, Table 3, equation (10)) as some of the early observations on the lagged regressors are not provided there, so that the regression is here estimated over a slightly truncated sample period.

4 Durbin: Inference, Estimation, Seasonal Adjustment and Structural Modelling

1. Walker (1961, Appendix) provided a more complicated adjustment for the bias in this estimator. For this simulation it leads to a bias adjustment of the order 0.02, which would almost eradicate the bias. He also showed that Durbin's adjustment would be approximately 0.004, which is what we find.

2. Durbin recounts that the government of the time was becoming increasingly worried about the rising unemployment figures. The Prime Minister, Harold Wilson, 'had worked as an economic statistician in the government service during the war, and he was really rather good at interpretation of numerical data … and … was very interested in looking at the figures himself. He got the idea … that maybe the reason why the unemployment series appeared to be behaving in a somewhat strange way was due to the seasonal adjustment procedure that was being used … and asked the CSO to look into this. … It turned out that Wilson was right and there was something wrong with the seasonal adjustment. I think it is remarkable that a point like this should be spotted by a prime minister' (Phillips, 1988, pages 140–1). To be fair, Wilson had a long-standing interest in economics and statistics, having been a lecturer in economic history at New College, Oxford (from the age of 21) and a Research Fellow at University College, as well as holding a variety of government posts that dealt with economic data.

3. The paper represented the work of a team of statisticians at the Research Branch of the CSO, with the authors being responsible for the supervision of the work and the writing of the paper. The Bureau of the Census X-11 seasonal adjustment package has been used extensively by statistical agencies across the world for adjusting economic time series. Mills (2011a, §§14.7–14.10) discusses its development in some detail. The Henderson moving average is based on the requirement that it should follow a cubic polynomial trend without distortion. The filter weights are conveniently derived in Kenny and Durbin (1982, Appendix) and also in Mills (2011a, §10.6).

4. Extensions to autoregressive models were later provided by Kulperger (1985) and Horváth (1993) and then to ARMA processes by Bai (1994).

5. Gauss' original derivation of the recursive least squares (RLS) algorithm is given in Young (1984, Appendix 2), which provides the authorized French translation of 1855 by Bertrand along with comments by the author designed to 'translate' the derivation into modern statistical notation and terminology. Young (2011, Appendix A) provides a simple vector-matrix derivation of the RLS algorithm (a sketch of the recursion follows these chapter notes).

6. Hald (1981) and Lauritzen (1981, 2002) claim that T.N. Thiele, a Danish astronomer and actuary, proposed in an 1880 paper a recursive procedure for estimating the parameters of a model that contained, as we know them today, a regression component, a Brownian motion and a white noise, and that was a direct antecedent of the Kalman filter. Durbin was aware of this historical link, for he remarked that it was 'of great interest to note that the Kalman approach to time series modelling was anticipated in a remarkable way for a particular problem by … Thiele in 1880, as has been pointed out by Lauritzen (1981)' (Durbin, 1984, page 170).

7. The wearing of seatbelts by front seat occupants of cars and light goods vehicles had been made compulsory in the UK on 31 January 1983 for an experimental period of three years, with the intention that Parliament would consider extending the legislation before the expiry of this period. The Department of Transport undertook to monitor the effect of the law on road casualties and, as part of this monitoring exercise, invited Durbin and Harvey to conduct an independent technical assessment of the statistical evidence.

8. This 'opportunity for public discussion' was facilitated by reading the paper at a meeting of the RSS. The resulting discussion, along with the author's rejoinder, was then published along with the paper. While we do not refer to the discussion here, it does make for highly entertaining and, in a couple of places, rather astonishing reading, with non-statisticians being somewhat perplexed by the findings, to say the least! More generally, a number of Durbin's papers published in RSS journals are accompanied by discussants' remarks and author rejoinders, and these typically make fascinating, if often tangential, reading.

9. Only Durbin and Koopman's classical solution to the estimation problem is discussed here, although they also provide formulae for undertaking Bayesian inference (see Durbin and Koopman, 2000).
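Note 5 above points to vector-matrix derivations of the recursive least squares algorithm. The following minimal Python sketch implements the standard textbook recursion rather than any particular derivation cited there; the diffuse initialization and the simulated data are illustrative assumptions.

```python
import numpy as np

def recursive_least_squares(X, y, delta=1e6):
    """Update the least squares estimate one observation at a time:
    gain    k_t    = P_{t-1} x_t / (1 + x_t' P_{t-1} x_t)
    update  beta_t = beta_{t-1} + k_t (y_t - x_t' beta_{t-1})
    spread  P_t    = P_{t-1} - k_t x_t' P_{t-1}
    `delta` sets the diffuse initial covariance P_0 = delta * I."""
    n, p = X.shape
    beta = np.zeros(p)
    P = delta * np.eye(p)
    for t in range(n):
        x = X[t]
        err = y[t] - x @ beta            # one-step-ahead prediction error
        k = P @ x / (1.0 + x @ P @ x)    # gain vector
        beta = beta + k * err
        P = P - np.outer(k, x @ P)
    return beta

# With a diffuse prior the recursion reproduces ordinary least squares.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = X @ np.array([1.0, 0.5]) + 0.1 * rng.standard_normal(200)
print(recursive_least_squares(X, y))         # close to [1.0, 0.5]
print(np.linalg.lstsq(X, y, rcond=None)[0])  # batch OLS for comparison
```

The same gain-and-update structure is what links the recursion to the Kalman filter discussed in note 6.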
5 Jenkins: Inference in Autoregressive Models and the Development of Spectral Analysis

1. The name white noise was coined by physicists and engineers because of its resemblance to the optical spectrum of white light, which consists of very narrow lines close together.
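As a hypothetical illustration of the analogy in note 1, the sketch below generates a white noise series and averages its periodogram over frequency bands: the estimated spectrum is flat, with every band close to the innovation variance, just as white light carries power across all visible frequencies.

```python
import numpy as np

rng = np.random.default_rng(42)
e = rng.standard_normal(4096)  # a white noise series, variance 1

# Periodogram: squared modulus of the discrete Fourier transform,
# scaled by the series length. For white noise its expected value
# is flat across all frequencies, hence the analogy with white light.
pgram = np.abs(np.fft.rfft(e)) ** 2 / e.size

# Average over eight frequency bands (dropping the zero frequency):
# each band estimate should be close to the innovation variance of 1.
bands = np.array_split(pgram[1:], 8)
print([round(b.mean(), 2) for b in bands])
```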