Probability Distributions in Library and Information Science: A Historical and Practitioner Viewpoint


Stephen J. Bensman
LSU Libraries, Louisiana State University, Baton Rouge, LA 70803. E-mail: [email protected]

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE, 51(9):816–833, 2000. © 2000 John Wiley & Sons, Inc.

Abstract

This paper has a dual character dictated by its twofold purpose. First, it is a speculative historiographic essay containing an attempt to fix the present position of library and information science within the context of the probabilistic revolution that has been encompassing all of science. Second, it comprises a guide to practitioners engaged in statistical research in library and information science. The problems of utilizing statistical methods in library and information science are pointed out, arising from the highly and positively skewed distributions that dominate this discipline. Biostatistics are indicated as the source of solutions for these problems, and the solutions are then traced back to the British biometric revolution of 1865–1950, during the course of which modern inferential statistics were created. The thesis is presented that science has been undergoing a probabilistic revolution for over 200 years, and it is stated that this revolution is now coming to library and information science, as general stochastic models replace specific, empirical informetric laws. An account is given of the historical development of the counting distributions and laws of error applicable in statistical research in library and information science, and it is stressed that these distributions and laws are not specific to library and information science but are inherent in all biological and social phenomena. Urquhart's Law is used to give a practical demonstration of the distributions. The difficulties of precisely fitting data to theoretical probability models in library and information science because of the inherent fuzziness of the sets are discussed, and the paper concludes with the description of a simple technique for identifying and dealing with the skewed distributions in library and information science. Throughout the paper, emphasis is placed on the relevance of research in library and information science to social problems, both past and present.

Introduction

This paper has a dual character dictated by its twofold purpose. First, it is a speculative historiographic essay, in which I attempt to describe the present state of library and information science in terms of the overall development of science. To do this, I connect the history of library and information science with the history of probability and statistics. Second, the paper is intended to serve as a practical guide to persons doing statistical research in library and information science. Thus, the components comprising the duality of this paper are closely interconnected.

I came to this topic as a result of recently completed research (Bensman, 1996; Bensman & Wilder, 1998) on the market for scientific information. In this research, I wanted to solve what seemed a simple problem: What role does scientific value play in the price libraries pay for scientific journals? To solve this problem, I had to use parametric statistical techniques such as correlation and regression, and these techniques immediately confronted me with what seemed an extremely difficult problem. These techniques are based on the assumption of the normal distribution, whereas library and information science data do not conform to the normal distribution but are dominated by horrendously skewed distributions. I realized that there is a need to connect the information science laws with the probability distributions, on which statistics are based, in some easily understandable manner, as an aid to persons conducting statistical investigations of the problems afflicting libraries. This need is becoming especially pressing, as computers are not only making much more data available but also making highly sophisticated statistical analyses simpler through spreadsheets and software such as SAS.

To obtain help in this matter, I contacted the LSU Department of Experimental Statistics, which assigned me an ecologist named Jay Geaghan as an adviser. Jay suggested that I read a manual entitled Some Methods for the Statistical Analysis of Samples of Benthic Invertebrates (Elliott, 1977). It was an eye-opener in two respects. First, the manual introduced me to the system of probability distributions with which biologists model patterns in nature, showing how to test for them and transform them for standard parametric statistical operations. Second, it pointed out that the key model for the skewed distributions dominating biological phenomena is the negative binomial distribution. This jarred a memory of Price (1976) describing the negative binomial distribution as the model for the double-edged Matthew Effect, which Robert K. Merton and his students, Jonathan and Stephen Cole and Harriet Zuckerman, had placed at the basis of the social stratification of science. In my previous writings (Bensman, 1982, 1985), I had posited the double-edged Matthew Effect as underlying the skewed patterns of library use.

The research on the scientific information market necessitated solving many complex statistical problems. In seeking the solutions to these problems, I noticed a dual pattern. First, most of these problems had already been solved in biostatistics. Second, most of the works presenting these solutions were British. The Elliott manual, for example, was published by the British Freshwater Biological Association and based on samples of benthic invertebrates in the English Lake District. It also dawned on me that bibliometrics as a discipline had risen to a great extent in Britain. Being a historian by training, my interest was naturally piqued, and I decided to write a book that would not only present a history of this development but would also be an aid to persons doing statistical research in library and information science. Such an approach seemed particularly beneficial because, like myself, many persons in the library field understand things better in terms of their historical development than mathematically.

Library and Information Science within the Context of the Historical Relationship of Probability and Statistics to Science as a Whole

The history of probability and statistics is too complex to be adequately summarized in a paper such as this. Therefore, I will restrict myself to reviewing the theses of two key books on this subject. Together, these books validate the view presented in a two-volume collection of essays published by MIT Press and entitled The Probabilistic Revolution (1987): that since 1800, the world has been experiencing a scientific revolution, in which the mathematical theory of probability has been adopted in discipline after discipline. This probabilistic revolution is coming to library and information science, as specific, empirical, bibliometric laws are being replaced by general stochastic models. Of primary importance in this transition has been the seminal work on bibliometric distributions by Bertram C. Brookes and Abraham Bookstein.

For his part, Brookes (1977, 1979, 1984) concentrated on Bradford's Law of Scattering, which he explored theoretically as a very mixed Poisson model. Coming to regard Bradford's Law as a new calculus for the social sciences, he found it almost identical mathematically to other empirical bibliometric laws, suspecting of these laws that "beneath their confusions there lurks a simple distribution which embraces them all but which remains to be identified" (Brookes, 1984, p. 39). He reduced these various laws to a single law, which he modeled two ways, as "the Inverse Square Law of frequencies" and "the Log Law of ranks." The main features of Brookes' hypothesis of a single distribution arising from a mixed Poisson process were endorsed by Bookstein. In his work, Bookstein (1990, 1995, 1997) posited through mathematical analysis that the various bibliometric laws, together with Pareto's law on income, are variants of a single distribution, in spite of marked differences in their appearance. Seeking a way to deal with these distributions, Bookstein (1997) came to the following conclusion:

    I have argued ... that one important mechanism for surviving in an ambiguous world is to create functional forms that are not too seriously affected by imperfect conceptualization. In this article I pushed this notion further, and looked at suitable random components for the underlying stable expectations.

The Probability Distributions Affecting Library and Information Science

In the analysis of the production, dissemination, use, and evaluation of human knowledge, we are basically dealing with three discrete or counting distributions and two continuous laws of error. The counting distributions are the following: (1) the binomial, which models uniformity and whose characteristic is that the variance is less than the mean; (2) the Poisson, which models randomness and whose characteristic is that the variance equals the mean; and (3) the negative binomial, which models concentration and whose characteristic is that the variance is greater than the mean. I hasten to add that the negative binomial is only the most useful of a series of contagious distributions, and, depending on the circumstances, it can change into the beta binomial, Poisson, or logarithmic series.

To help explain the idea of a law of error, I will present my concept of a statistical model. A statistical model is a mental construct of reality that is logically designed to test a hypothesis. It is centered on a hypothetical point, from which deviations are measured according to a law of error. Depending on the size of the deviation from the hypothetical point on which the model is centered, one accepts or rejects the hypothesis being tested. In statistical textbooks, the law of error is the normal distribution, and the hypothetical point is the arithmetic mean; in the lognormal law of error, the hypothetical point is the geometric mean, from which deviations are measured in logarithmic units. The negative binomial can be transformed into an approximation of the lognormal distribution.
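The variance-to-mean diagnostic for the three counting distributions, and the log transform that pulls negative binomial counts toward a lognormal shape, can be sketched in a few lines. This is an illustrative simulation (NumPy assumed; the sample size and distribution parameters are my own choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs = 100_000
mean = 5.0

# Three counting distributions with the same mean but different
# variance-to-mean ratios, matching the three cases described above.
samples = {
    # Binomial: variance n*p*(1-p) is less than mean n*p -> ratio < 1 (uniformity)
    "binomial": rng.binomial(n=10, p=0.5, size=n_obs),
    # Poisson: variance equals mean -> ratio near 1 (randomness)
    "poisson": rng.poisson(lam=mean, size=n_obs),
    # Negative binomial: variance exceeds mean -> ratio > 1 (concentration)
    "negative binomial": rng.negative_binomial(n=2, p=2 / (2 + mean), size=n_obs),
}

for name, x in samples.items():
    ratio = x.var(ddof=1) / x.mean()
    print(f"{name:18s} mean={x.mean():5.2f}  var/mean={ratio:5.2f}")

# A log transform pulls the skewed negative binomial counts toward
# symmetry, i.e., toward an approximate lognormal on the raw scale.
nb = samples["negative binomial"].astype(float)
raw_skew = ((nb - nb.mean()) ** 3).mean() / nb.std() ** 3
log_nb = np.log(nb + 1)  # +1 guards against zero counts
log_skew = ((log_nb - log_nb.mean()) ** 3).mean() / log_nb.std() ** 3
print(f"skewness raw={raw_skew:.2f}  log={log_skew:.2f}")
```

Printing the variance-to-mean ratio of an observed count distribution before reaching for correlation or regression is essentially the test-then-transform workflow the Elliott manual recommends: a ratio well above 1 signals concentration, for which a raw-scale normal-theory analysis is inappropriate.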