Estimation and Detection with Applications to Navigation


Linköping studies in science and technology. Dissertations. No. 1216

Estimation and Detection with Applications to Navigation

David Törnqvist
[email protected]
www.control.isy.liu.se

Division of Automatic Control
Department of Electrical Engineering
Linköping University, SE-581 83 Linköping, Sweden
Linköping 2008

ISBN 978-91-7393-785-6
ISSN 0345-7524
Copyright © 2008 David Törnqvist

Parts of this thesis have been published previously. Paper A is published with permission from Springer Science and Business Media © 2008. Papers B and E are published with permission from IFAC © 2008. Paper C is published with permission from IEEE © 2008. Paper D is published with permission from EURASIP © 2007. Printed by LiU-Tryck, Linköping, Sweden 2008.

To my family!

Abstract

The ability to navigate in an unknown environment is an enabler for truly autonomous systems. Such a system must be aware of its position relative to the surroundings, using sensor measurements. It is instrumental that these measurements are monitored for disturbances and faults. Given correct measurements, the challenging problem for a robot is to estimate its own position and simultaneously build a map of the environment. This problem is referred to as the Simultaneous Localization and Mapping (SLAM) problem. This thesis studies several topics related to SLAM: on-board sensor processing, exploration, and disturbance detection.

The particle filter (PF) solution to the SLAM problem is commonly referred to as FastSLAM and has been used extensively for ground robot applications.
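The FastSLAM idea mentioned above, a particle filter over the vehicle pose where each particle maintains its own analytic (Kalman filter) map estimate, can be sketched in a few lines. The one-dimensional model, the single landmark, and all noise levels below are illustrative assumptions, not the models used in the thesis.

```python
# Toy FastSLAM-style filter: particles sample the pose, and each particle
# runs a scalar Kalman filter for the landmark ("the map"), which is linear
# and Gaussian once the pose is fixed. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_PART = 200        # number of particles
Q = 0.05            # pose process-noise variance (random walk)
R = 0.1             # measurement-noise variance
P0 = 10.0           # initial landmark variance (vague prior)

true_pose, true_lm = 0.0, 5.0

pose = np.zeros(N_PART)             # pose sample per particle
lm_mean = np.zeros(N_PART)          # landmark mean per particle
lm_var = np.full(N_PART, P0)        # landmark variance per particle

for step in range(30):
    # Motion update: pose random walk (truth and particles alike).
    u = 0.1
    true_pose += u + rng.normal(0.0, np.sqrt(Q))
    pose += u + rng.normal(0.0, np.sqrt(Q), N_PART)

    # Measurement: landmark position relative to the vehicle.
    z = true_lm - true_pose + rng.normal(0.0, np.sqrt(R))

    # Per-particle Kalman update of the landmark estimate.
    z_pred = lm_mean - pose             # predicted measurement
    S = lm_var + R                      # innovation variance
    K = lm_var / S                      # Kalman gain
    lm_mean = lm_mean + K * (z - z_pred)
    lm_var = (1.0 - K) * lm_var

    # Weight particles by the innovation likelihood, then resample.
    w = np.exp(-0.5 * (z - z_pred) ** 2 / S) / np.sqrt(2.0 * np.pi * S)
    w /= w.sum()
    idx = rng.choice(N_PART, N_PART, p=w)
    pose, lm_mean, lm_var = pose[idx], lm_mean[idx], lm_var[idx]

est_pose = pose.mean()
est_lm = lm_mean.mean()
```

Because only the relative position is measured, the filter pins down the landmark relative to the vehicle while the absolute pose stays uncertain; this is exactly the correlation structure the per-particle map factorization is meant to capture.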
More complex vehicle models, needed for example for flying robots, extend the state dimension of the vehicle model and make the existing solution computationally infeasible. The factorization of the problem made in this thesis allows for a computationally tractable solution.

Disturbance detection for magnetometers and detection of spurious features in image sensors must be done before these sensor measurements can be used for estimation. Disturbance detection based on comparing a batch of data with a model of the system, using the generalized likelihood ratio (GLR) test, is considered. There are two approaches to this problem. One is based on the traditional parity space method, where the influence of the initial state is removed by projection; the other combines prior information with the data in the batch. An efficient parameterization of incipient faults is given, which is shown to improve the results considerably.

Another common situation in robotics is that the sensors have different sampling rates. More complex sensors such as cameras often have a slower update rate than accelerometers and gyroscopes. An algorithm for this situation is derived for a class of models with a linear Gaussian dynamic model and sensors with different sampling rates: one slow sensor with a nonlinear and/or non-Gaussian measurement relation, and one fast sensor with a linear Gaussian measurement relation. For this case, the Kalman filter is used to process the information from the fast sensor, while the information from the slow sensor is processed using the PF. The problem formulation covers the important special case of fast dynamics and one slow sensor, which appears in many navigation and tracking problems.

Vision-based target tracking is another important estimation problem in robotics. Distributed exploration with multi-aircraft flight experiments has demonstrated localization of a stationary target with an estimate covariance on the order of meters.
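The batch GLR test described above can be illustrated on the simplest possible fault model: deciding whether a batch of Gaussian measurements contains a constant disturbance of unknown size. The function name, threshold, and noise levels are illustrative assumptions; the thesis's parity-space projection and incipient-fault parameterization are not reproduced here.

```python
# Minimal GLR illustration: test H0 (zero-mean Gaussian noise) against
# H1 (unknown constant offset theta). Maximizing the H1 likelihood over
# theta gives theta_hat = mean(batch), and twice the log likelihood ratio
# reduces to N * mean^2 / sigma^2, chi-square(1)-distributed under H0.
import numpy as np

def glr_mean_test(batch, sigma, threshold=6.63):
    """Return (detected, statistic); 6.63 is roughly the chi2(1) 1% level."""
    batch = np.asarray(batch, dtype=float)
    n = batch.size
    stat = n * batch.mean() ** 2 / sigma ** 2
    return stat > threshold, stat

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 50)     # fault-free batch
faulty = rng.normal(0.8, 1.0, 50)    # batch with a constant disturbance

det0, s0 = glr_mean_test(clean, sigma=1.0)
det1, s1 = glr_mean_test(faulty, sigma=1.0)
```

The batch methods in the thesis follow the same pattern, but with the residual generated from a dynamic model, so that the influence of the unknown initial state must first be removed (by projection in the parity space approach, or by a prior in the Bayesian approach).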
Grid-based estimation as well as the PF have been examined.

Populärvetenskaplig sammanfattning (Popular Science Summary)

Imagine that you are a firefighter on your way into a burning house. You do not know what it looks like inside or which routes are passable. Once inside, you explore the house in search of the cause of the fire. At the same time, it is very important to remember which way you came and how far it is to the nearest exit, in case you need to leave the house quickly. While exploring, you build up a mental map of the house using your vision and sense of balance. Occasionally you return to a familiar place and can then update your map a little extra, since you now understand how different parts of the house fit together.

Since we want to prevent people from being injured while putting out fires or carrying out other dangerous missions, robots that help us with such missions will become common in the future. The scenario above is therefore something future robots will face. These robots must of course have sensors to perceive their surroundings and their own motion. Sensors used for this include cameras, laser, radar, microphones, compasses, accelerometers, and gyroscopes. From these sensors flows a wealth of information that must be exploited in a smart way for the robot to be as successful as possible. It is this handling of information that this thesis is about.

The introductory scenario was largely about trying to build a map of the house. In robotics, this problem is called simultaneous localization and mapping, abbreviated SLAM. The SLAM problem has been studied intensively over the past twenty years, but it requires a great deal of computation, which means that computers cannot build very large maps. Much research is therefore devoted to simplifying the problem so that fewer computations are required. One contribution of this thesis deals with a way to reduce the computational burden for robots that require advanced models. In this case a helicopter is used, equipped with a camera, accelerometer, gyroscope, and barometer. The helicopter flies over an area and looks down at the ground with the camera. Interesting objects in the images are stored and build up the map that the helicopter can use to position itself. The process of building the map and positioning within it is based on so-called estimation methods.

To select particularly interesting objects in the images, suitable for building a map, various image processing methods are used. They try to select objects that should be easy to recognize from one image to the next. Sometimes this goes wrong, when the methods are fooled by a similar-looking object in the image. Statistical detection methods are used to discover this type of error. Another area where these methods can be used is to detect faults in sensor measurements. Sometimes a compass is used to know in which direction the robot is traveling. It is very sensitive to disturbing magnetic fields, and these disturbances must therefore be detected.

Another area treated in the thesis is the search for, say, a missing person using several unmanned aircraft. The aircraft are equipped with cameras that scan the ground. Estimation methods are used to represent information about what the aircraft have and have not seen, so that it can be distributed among the aircraft. The idea behind the technique is to give the aircraft the basis for deciding on their own where it is appropriate to search.

In summary, this thesis aims to enable robots to use their "eyes", "ears", and other sensors through the use of estimation and detection methods.

Acknowledgments

When crossing a river, you must make sure to find good rocks to stand on. My rocks when writing this thesis have been Prof. Fredrik Gustafsson and, lately, Dr. Thomas Schön.
Fredrik has a constant flow of ideas and can find the seed of something positive in everything. Thomas, discussing with you is great, and you have provided invaluable feedback during this work. Thank you both! I would also like to thank Prof. Lennart Ljung for drafting me to the control group and for creating a good atmosphere in the group. Our secretary Ulla Salaneck also deserves my gratitude for always arranging practical matters in a smooth way.

Various parts of this thesis have been proofread by Fredrik, Dr. Gustaf Hendeby, Dr. Umut Orguner, Thomas, Per Skoglar, Martin Skoglund, and Lic. Henrik Tidefelt. Your comments have been invaluable, thank you! Gustaf also deserves appreciation for answering all my LaTeX questions! Henrik helped me with some of the figures, thanks!

During 2007, I had the opportunity to visit two different research centers for a couple of months each. First, I visited the Australian Centre for Field Robotics. This was really inspiring, seeing all the different airplanes, helicopters, and ground robots. The people at the department were great: they taught me how to appreciate good pub food (especially those burgers with beetroot), they made great espresso, and we had nice discussions about SLAM. Thank you! After this visit, I was invited to C3UV at the University of California, Berkeley. That was a great time, flying helicopters and airplanes, experiencing the desert heat, and discussing particle filters. I really enjoyed working with you there, thank you!

The four of us who started our PhD studies at the same time did this journey together. Thank you Dr. Daniel Axehill, Gustaf, and Dr. Johan Sjöberg for your company during many late nights at the office, for spare-time activities, and for simply being really good friends! I would also like to thank all the other people at the Automatic Control group for the good atmosphere and for all the laughs created by fika discussions and junk mail.
My co-authors on different papers over the years are Dr. Gunnar Bark, Lic. Gianpaolo Conte, Dr. Eva Englund, Dr. Erik Geijer Lundin, Dr. Fredrik Gunnarsson, Prof. Fredrik Gustafsson, Prof. Karl Hedrick, Dr. Rickard Karlsson, Dr. Zu Kim, Assoc. Prof. Inger Klein, Allison Ryan, Dr. Thomas Schön, John Tisdale, and Dr. Niclas Wiberg. Thank you very much for inspiring collaborations!

This work was partially sponsored by the SSF strategic research center Modeling, Visualization and Information Integration (MOVIII), the Center for Industrial Information Technology (CENIIT), and the Excellence Center in Computer Science and Systems Engineering in Linköping (ECSEL), which are hereby gratefully acknowledged.