Probability Theory and Statistics

Total Pages: 16

File Type: PDF, Size: 1020 KB

Probability Theory and Statistics – With a View towards Bioinformatics. Lecture notes by Niels Richard Hansen, Department of Applied Mathematics and Statistics, University of Copenhagen, November 2005.

Prologue

Flipping coins and rolling dice are two commonly occurring examples in an introductory course on probability theory and statistics. They represent archetypical experiments where the outcome is uncertain – no matter how many times we roll the dice we are unable to predict the outcome of the next roll. We use probabilities to describe the uncertainty; a fair, classical die has probability 1/6 for each side to turn up. Elementary probability computations can to some extent be handled based on intuition and common sense. The probability of getting yatzy in a single throw is for instance

6/6^5 = 1/6^4 = 0.0007716.

The argument for this and many similar computations is based on the pseudo theorem that the probability of any event equals

(number of favourable outcomes) / (number of possible outcomes).

Getting yatzy consists of the six favourable outcomes with all five dice facing the same side upwards. We call the formula above a pseudo theorem because, as we will show in Section 1.4, it is only the correct way of assigning probabilities to events under a very special assumption about the probabilities describing our experiment. The special assumption is that all outcomes are equally probable – something we tend to believe if we don't know any better. It must, however, be emphasised that without proper training most people will either get it wrong or have to give up if they try computing the probability of anything except the most elementary events, even when the pseudo theorem applies. There exist numerous tricky probability questions where intuition somehow breaks down and wrong conclusions can easily be drawn if one is not extremely careful. A good challenge could be to compute the probability of getting yatzy in three throws with the usual rules, provided that we always hold as many equal dice as possible.

[Figure 1: The average number of times that the dice sequence 1,4,3 comes out before the sequence 2,1,4, as a function of the number of times the dice game has been played.]

The yatzy problem can in principle be solved by counting – simply write down all combinations and count the number of favourable and possible combinations. Then the pseudo theorem applies. It is a futile task but in principle a possibility. In many cases, however, it is impossible to rely on counting – even in principle. As an example, let's consider a simple dice game with two participants: first you choose a sequence of three dice throws, say 1, 4, 3, and then I choose 2, 1, 4. We throw the dice until one of the two sequences comes out, and you win if 1, 4, 3 comes out first; otherwise I win. If the outcome is 4, 6, 2, 3, 5, 1, 3, 2, 4, 5, 1, 4, 3, then you win. It is natural to ask with what probability you will win this game. In addition, it is clearly a quite boring game, since we have to throw the dice a lot of times and simply wait for one of the two sequences to occur. Another question could therefore be: how boring is the game? Can we, for instance, compute the probability of having to throw the dice more than 20, or perhaps 50, times before either of the two sequences shows up?
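The questions about the dice game can also be explored by simulation, which is what the figures report. The following R sketch is our own illustration (the function name and details are ours, not from the notes); it estimates both the probability that you win and how long a game typically lasts:

    # Play one game: throw a die until 1,4,3 or 2,1,4 appears as the last three throws
    play_game <- function() {
      throws <- sample(1:6, 3, replace = TRUE)
      repeat {
        last3 <- tail(throws, 3)
        if (all(last3 == c(1, 4, 3))) return(c(win = 1, n = length(throws)))
        if (all(last3 == c(2, 1, 4))) return(c(win = 0, n = length(throws)))
        throws <- c(throws, sample(1:6, 1))
      }
    }
    set.seed(1)
    games <- replicate(5000, play_game())
    mean(games["win", ])     # estimated probability that 1,4,3 comes out first
    mean(games["n", ] > 20)  # estimated probability of needing more than 20 throws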
The problem that we encounter here is first of all that the pseudo theorem does not apply, simply because there is an infinite number of favourable as well as possible outcomes. The event that you win consists of the outcomes being all finite sequences of throws ending with 1, 4, 3 without 2, 1, 4 occurring somewhere as three subsequent throws. Moreover, these outcomes are certainly not equally probable. By developing the theory of probabilities we obtain a framework for solving problems like this and doing many other, even more subtle, computations. And if we cannot compute the solution we might be able to obtain an answer to our questions using computer simulations. Admittedly, problems do not solve themselves just by developing some probability theory, but with a sufficiently well developed theory it becomes easier to solve problems of interest, and, what should not be underestimated, it becomes possible to clearly state which problems we are solving and on which assumptions the solution rests.

[Figure 2: Playing the dice game 5000 times, this graph shows how the games are distributed according to the number of times we had to throw the dice before one of the sequences 1,4,3 or 2,1,4 occurred.]

Enough about dice! After all, this is going to be about probability theory and statistics in bioinformatics. But some of the questions that we encounter in bioinformatics, especially in biological sequence analysis, are nevertheless very similar to those we asked above. If we don't know any better, and we typically don't, the majority of sequenced DNA found in the publicly available databases is considered as being random. This typically means that we regard the DNA sequences as the outcome of throwing a four-sided die, with sides A, C, G, and T, a tremendously large number of times.

One purpose of regarding DNA sequences as being random is to have a background model, which can be used for evaluating the performance of methods for locating interesting and biologically important sequence patterns in DNA sequences. If we have developed a method for detecting novel protein coding genes, say, in DNA sequences, we need to have such a background model for the DNA that is not protein coding. Otherwise we cannot tell how likely it is that our method finds false protein coding genes, i.e. that the method claims that a segment of DNA is protein coding even though it is not. Thus we need to compute the probability that our method claims that a random DNA sequence – with random having the meaning above – is a protein coding gene. If this is unlikely, we believe that our method indeed finds truly protein coding genes.

A simple gene finder can be constructed as follows: after the start codon, ATG, a number of nucleotides occur before one of the stop codons, TAA, TAG, TGA, is reached for the first time. Our protein coding gene finder then claims that if more than 100 nucleotides occur before any of the stop codons is reached, then we have a gene. So what is the chance of getting more than 100 nucleotides before reaching a stop codon for the first time? The similarity to determining how boring our little dice game is should be clear. The sequence of nucleotides occurring between a start and a stop codon is called an open reading frame, and what we are interested in is thus how the lengths of open reading frames are distributed in random DNA sequences.
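A quick simulation again gives a feel for the answer before any theory is developed. The sketch below is our own illustration, written under the simple background model described above (i.i.d. uniform nucleotides read in-frame, codon by codon, after the start codon); the helper names are ours:

    stop_codons <- c("TAA", "TAG", "TGA")
    # Number of nucleotides before the first in-frame stop codon
    orf_length <- function() {
      len <- 0
      repeat {
        codon <- paste(sample(c("A", "C", "G", "T"), 3, replace = TRUE), collapse = "")
        if (codon %in% stop_codons) return(len)
        len <- len + 3
      }
    }
    set.seed(1)
    lens <- replicate(10000, orf_length())
    mean(lens > 100)  # under this model the exact value is (61/64)^34, roughly 0.20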
Probability theory is not restricted to the analysis of the performance of methods on random sequences, but also provides the key ingredient in the construction of such methods – for instance more advanced gene finders. As mentioned above, if we don't know any better we often regard DNA as being random, but we actually know that a protein coding gene is certainly not random – at least not in the sense above. There is some sort of regularity, for instance the codon structure, but still a substantial variation. A sophisticated probabilistic model of a protein coding DNA sequence may be constructed to capture both the variation and the regularities in protein coding DNA genes, and if we can then compute the probability of a given DNA sequence being a protein coding gene, we can compare it with the probability that it is "just random".

The construction of probabilistic models brings us to the topic of statistics. Where probability theory is concerned with computing probabilities of events under a given probabilistic model, statistics deals to a large extent with the opposite: given that we have observed certain events as outcomes of an experiment, how do we find a suitable probabilistic model of the mechanism that generated those outcomes? How we can use probability theory to make this transition from data to model, and in particular how we can analyse the methods developed and the results obtained using probability theory, is the major topic of these notes.

Contents

1 Probability Theory
1.1 Introduction
1.2 Sample spaces
1.3 Probability measures
1.4 Probability measures on discrete sets
1.5 Probability measures on the real line
1.6 Random variables
1.6.1 Transformations of random variables
1.6.2 Several random variables – multivariate distributions
1.7 Conditional distributions and independence
1.7.1 Conditional probabilities and independence
1.7.2 Random variables, conditional distributions and independence
1.8 Simulations
1.8.1 The binomial, multinomial, and geometric distributions
1.9 Entropy
1.10 Sums, Maxima and Minima
1.11 Local alignment - a case study

A R
A.1 Obtaining and running R
A.2 Manuals, FAQs and online help
A.3 The R language, functions and scripts
A.3.1 Functions, expression evaluation, and objects
Recommended publications
  • Foundations of Probability Theory and Statistical Mechanics
    Chapter 6. Foundations of Probability Theory and Statistical Mechanics. EDWIN T. JAYNES, Department of Physics, Washington University, St. Louis, Missouri. 1. What Makes Theories Grow? Scientific theories are invented and cared for by people; and so have the properties of any other human institution - vigorous growth when all the factors are right; stagnation, decadence, and even retrograde progress when they are not. And the factors that determine which it will be are seldom the ones (such as the state of experimental or mathematical techniques) that one might at first expect. Among factors that have seemed, historically, to be more important are practical considerations, accidents of birth or personality of individual people; and above all, the general philosophical climate in which the scientist lives, which determines whether efforts in a certain direction will be approved or deprecated by the scientific community as a whole. However much the "pure" scientist may deplore it, the fact remains that military or engineering applications of science have, over and over again, provided the impetus without which a field would have remained stagnant. We know, for example, that ARCHIMEDES' work in mechanics was at the forefront of efforts to defend Syracuse against the Romans; and that RUMFORD'S experiments which led eventually to the first law of thermodynamics were performed in the course of boring cannon. The development of microwave theory and techniques during World War II, and the present high level of activity in plasma physics are more recent examples of this kind of interaction; and it is clear that the past decade of unprecedented advances in solid-state physics is not entirely unrelated to commercial applications, particularly in electronics.
  • Randomized Algorithms Week 1: Probability Concepts
    600.664: Randomized Algorithms. Professor: Rao Kosaraju, Johns Hopkins University. Scribe: Your Name. Randomized Algorithms, Week 1: Probability Concepts. 1.1 Motivation for Randomized Algorithms. In a randomized algorithm you can toss a fair coin as a step of computation. Alternatively a bit taking values of 0 and 1 with equal probabilities can be chosen in a single step. More generally, in a single step an element out of n elements can be chosen with equal probabilities (uniformly at random). In such a setting, our goal can be to design an algorithm that minimizes run time. Randomized algorithms are often advantageous for several reasons. They may be faster than deterministic algorithms. They may also be simpler. In addition, there are problems that can be solved with randomization for which we cannot design efficient deterministic algorithms directly. In some of those situations, we can design attractive deterministic algorithms by first designing randomized algorithms and then converting them into deterministic algorithms by applying standard derandomization techniques. Deterministic Algorithm: worst case running time, i.e. the number of steps the algorithm takes on the worst input of length n. Randomized Algorithm: 1) expected number of steps the algorithm takes on the worst input of length n; 2) w.h.p. bounds. As an example of a randomized algorithm, we now present the classic quicksort algorithm and derive its performance later on in the chapter. We are given a set S of n distinct elements and we want to sort them. Below is the randomized quicksort algorithm.

    Algorithm RandQuickSort(S = {a_1, a_2, ..., a_n}):
      If |S| ≤ 1 then output S;
      else:
        Choose a pivot element a_i uniformly at random (u.a.r.) from S.
        Split the set S into two subsets S1 = {a_j | a_j < a_i} and S2 = {a_j | a_j > a_i} by comparing each a_j with the chosen a_i.
        Recurse on sets S1 and S2.
        Output the sorted set S1, then a_i, and then the sorted S2.
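    The pseudocode above translates almost line for line into a runnable function. A minimal R sketch (our own translation, not from the excerpt, assuming the elements of s are distinct as in the text):

        rand_quicksort <- function(s) {
          if (length(s) <= 1) return(s)
          pivot <- s[sample(length(s), 1)]  # choose a pivot uniformly at random
          c(rand_quicksort(s[s < pivot]), pivot, rand_quicksort(s[s > pivot]))
        }
        rand_quicksort(c(5, 3, 9, 1, 7))  # 1 3 5 7 9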
  • Mathematical Analysis of the Accordion Grating Illusion: a Differential Geometry Approach to Introduce the 3D Aperture Problem
    Neural Networks 24 (2011) 1093–1101. Contents lists available at SciVerse ScienceDirect. Neural Networks, journal homepage: www.elsevier.com/locate/neunet. Mathematical analysis of the Accordion Grating illusion: A differential geometry approach to introduce the 3D aperture problem. Arash Yazdanbakhsh (Cognitive and Neural Systems Department, Boston University, MA, 02215, United States; Neurobiology Department, Harvard Medical School, Boston, MA, 02115, United States) and Simone Gori (Università degli studi di Padova, Dipartimento di Psicologia Generale, Via Venezia 8, 35131 Padova, Italy; Developmental Neuropsychology Unit, Scientific Institute "E. Medea", Bosisio Parini, Lecco, Italy).

    Keywords: Visual illusion, Motion, Projection line, Line of sight, Accordion Grating, Aperture problem, Differential geometry.

    Abstract: When an observer moves towards a square-wave grating display, a non-rigid distortion of the pattern occurs in which the stripes bulge and expand perpendicularly to their orientation; these effects reverse when the observer moves away. Such distortions present a new problem beyond the classical aperture problem faced by visual motion detectors, one we describe as a 3D aperture problem as it incorporates depth signals. We applied differential geometry to obtain a closed form solution to characterize the fluid distortion of the stripes. Our solution replicates the perceptual distortions and enabled us to design a nulling experiment to distinguish our 3D aperture solution from other candidate mechanisms (see Gori et al. (in this issue)). We suggest that our approach may generalize to other motion illusions visible in 2D displays. © 2011 Elsevier Ltd. All rights reserved.

    1. Introduction need for line-ends or intersections along the lines to explain the illusion.
  • Probability Theory Review for Machine Learning
    Probability Theory Review for Machine Learning. Samuel Ieong, November 6, 2006.

    1 Basic Concepts. Broadly speaking, probability theory is the mathematical study of uncertainty. It plays a central role in machine learning, as the design of learning algorithms often relies on probabilistic assumptions about the data. This set of notes attempts to cover some basic probability theory that serves as a background for the class.

    1.1 Probability Space. When we speak about probability, we often refer to the probability of an event of uncertain nature taking place. For example, we speak about the probability of rain next Tuesday. Therefore, in order to discuss probability theory formally, we must first clarify what the possible events are to which we would like to attach probability. Formally, a probability space is defined by the triple (Ω, F, P), where • Ω is the space of possible outcomes (or outcome space), • F ⊆ 2^Ω (the power set of Ω) is the space of (measurable) events (or event space), • P is the probability measure (or probability distribution) that maps an event E ∈ F to a real value between 0 and 1 (think of P as a function). Given the outcome space Ω, there are some restrictions as to what subset of 2^Ω can be considered an event space F: • The trivial event Ω and the empty event ∅ are in F. • The event space F is closed under (countable) union, i.e., if α, β ∈ F, then α ∪ β ∈ F. • The event space F is closed under complement, i.e., if α ∈ F, then (Ω \ α) ∈ F.
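    As a concrete, deliberately tiny illustration of the triple (Ω, F, P), here is a fair-die example written in R; the object names are ours and not from the notes:

        # Finite probability space for a fair die: Omega = 1..6, F = all subsets,
        # P(E) = |E| / |Omega| for equally likely outcomes
        omega <- 1:6
        p <- function(event) length(intersect(event, omega)) / length(omega)
        p(c(2, 4, 6))  # probability of an even roll: 0.5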
  • IAS: Institute for Advanced Study
    Institute for Advanced Study, Faculty and Members 2012–2013. Contents: Mission and History; School of Historical Studies; School of Mathematics; School of Natural Sciences; School of Social Science; Program in Interdisciplinary Studies; Director's Visitors; Artist-in-Residence Program; Trustees and Officers of the Board and of the Corporation; Administration; Past Directors and Faculty; Index. Information contained herein is current as of September 24, 2012.

    Mission and History. The Institute for Advanced Study is one of the world's leading centers for theoretical research and intellectual inquiry. The Institute exists to encourage and support fundamental research in the sciences and humanities—the original, often speculative thinking that produces advances in knowledge that change the way we understand the world. It provides for the mentoring of scholars by Faculty, and it offers all who work there the freedom to undertake research that will make significant contributions in any of the broad range of fields in the sciences and humanities studied at the Institute. Founded in 1930 by Louis Bamberger and his sister Caroline Bamberger Fuld, the Institute was established through the vision of founding Director Abraham Flexner. Past Faculty have included Albert Einstein, who arrived in 1933 and remained at the Institute until his death in 1955, and other distinguished scientists and scholars such as Kurt Gödel, George F. Kennan, Erwin Panofsky, Homer A. Thompson, John von Neumann, and Hermann Weyl. Abraham Flexner was succeeded as Director in 1939 by Frank Aydelotte, followed by J.
  • A Mathematical Analysis of Student-Generated Sorting Algorithms Audrey Nasar
    The Mathematics Enthusiast, Volume 16, Number 1 (Numbers 1, 2, & 3), Article 15, 2-2019. A Mathematical Analysis of Student-Generated Sorting Algorithms. Audrey Nasar. Let us know how access to this document benefits you. Follow this and additional works at: https://scholarworks.umt.edu/tme. Recommended Citation: Nasar, Audrey (2019) "A Mathematical Analysis of Student-Generated Sorting Algorithms," The Mathematics Enthusiast: Vol. 16, No. 1, Article 15. Available at: https://scholarworks.umt.edu/tme/vol16/iss1/15. This Article is brought to you for free and open access by ScholarWorks at University of Montana. It has been accepted for inclusion in The Mathematics Enthusiast by an authorized editor of ScholarWorks at University of Montana. For more information, please contact [email protected].

    TME, vol. 16, nos. 1, 2&3, p. 315. A Mathematical Analysis of student-generated sorting algorithms. Audrey A. Nasar, Borough of Manhattan Community College at the City University of New York. Abstract: Sorting is a process we encounter very often in everyday life. Additionally it is a fundamental operation in computer science. Having been one of the first intensely studied problems in computer science, many different sorting algorithms have been developed and analyzed. Although algorithms are often taught as part of the computer science curriculum in the context of a programming language, the study of algorithms and algorithmic thinking, including the design, construction and analysis of algorithms, has pedagogical value in mathematics education. This paper will provide an introduction to computational complexity and efficiency, without the use of a programming language. It will also describe how these concepts can be incorporated into the existing high school or undergraduate mathematics curriculum through a mathematical analysis of student-generated sorting algorithms.
  • Effective Theory of Levy and Feller Processes
    The Pennsylvania State University, The Graduate School, Eberly College of Science. EFFECTIVE THEORY OF LEVY AND FELLER PROCESSES. A Dissertation in Mathematics by Adrian Maler. © 2015 Adrian Maler. Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, December 2015. The dissertation of Adrian Maler was reviewed and approved* by the following: Stephen G. Simpson, Professor of Mathematics, Dissertation Adviser, Chair of Committee; Jan Reimann, Assistant Professor of Mathematics; Manfred Denker, Visiting Professor of Mathematics; Bharath Sriperumbudur, Assistant Professor of Statistics; Yuxi Zheng, Francis R. Pentz and Helen M. Pentz Professor of Science, Head of the Department of Mathematics. *Signatures are on file in the Graduate School.

    ABSTRACT. We develop a computational framework for the study of continuous-time stochastic processes with càdlàg sample paths, then effectivize important results from the classical theory of Lévy and Feller processes. Probability theory (including stochastic processes) is based on measure. In Chapter 2, we review computable measure theory, and, as an application to probability, effectivize the Skorokhod representation theorem. Càdlàg (right-continuous, left-limited) functions, representing possible sample paths of a stochastic process, form a metric space called Skorokhod space. In Chapter 3, we show that Skorokhod space is a computable metric space, and establish fundamental computable properties of this space. In Chapter 4, we develop an effective theory of Lévy processes. Lévy processes are known to have càdlàg modifications, and we show that such a modification is computable from a suitable representation of the process. We also show that the Lévy-Itô decomposition is computable.
  • arXiv:1912.01885v1 [math.DG]
    DIRAC-HARMONIC MAPS WITH POTENTIAL. VOLKER BRANDING. Abstract. We study the influence of an additional scalar potential on various geometric and analytic properties of Dirac-harmonic maps. We will create a mathematical wishlist of the possible benefits from inducing the potential term and point out that the latter cannot be achieved in general. Finally, we focus on several potentials that are motivated from supersymmetric quantum field theory. 1. Introduction and Results. The supersymmetric nonlinear sigma model has received a lot of interest in modern quantum field theory, in particular in string theory, over the past decades. At the heart of this model is an action functional whose precise structure is fixed by the invariance under various symmetry operations. In the physics literature the model is most often formulated in the language of supergeometry which is necessary to obtain the invariance of the action functional under supersymmetry transformations. For the physics background of the model we refer to [1] and [18, Chapter 3.4]. However, if one drops the invariance under supersymmetry transformations the resulting action functional can be investigated within the framework of geometric variational problems and many substantial results in this area of research could be achieved in the last years. To formulate this mathematical version of the supersymmetric nonlinear sigma model one fixes a Riemannian spin manifold (M, g), a second Riemannian manifold (N, h) and considers a map φ: M → N. The central ingredients in the resulting action functional are the Dirichlet energy for the map φ and the Dirac action for a spinor defined along the map φ.
  • The Orthosymplectic Lie Supergroup in Harmonic Analysis Kevin Coulembier
    The orthosymplectic Lie supergroup in harmonic analysis. Kevin Coulembier, Department of Mathematical Analysis, Faculty of Engineering, Ghent University, Belgium.

    Abstract. The study of harmonic analysis in superspace led to the Howe dual pair (O(m) × Sp(2n), sl_2). This Howe dual pair is incomplete, which has several implications. These problems can be solved by introducing the orthosymplectic Lie supergroup OSp(m|2n). We introduce Lie supergroups in the supermanifold setting and show that everything is captured in the Harish-Chandra pair. We show the uniqueness of the supersphere integration as an orthosymplectically invariant linear functional. Finally we study the osp(m|2n)-representations of supersymmetric tensors for the natural module for osp(m|2n). Keywords: Howe dual pair, Lie supergroup, Lie superalgebra, harmonic analysis, supersphere. PACS: 02.10.Ud, 20.30.Cj, 02.30.Px.

    PRELIMINARIES. Superspaces are spaces where one considers not only commuting but also anti-commuting variables. The 2n anti-commuting variables x̀_i generate the complex Grassmann algebra Λ_2n. Definition 1. A supermanifold of dimension m|2n is a ringed space M = (M_0, O_M), with M_0 the underlying manifold of dimension m and O_M the structure sheaf, which is a sheaf of super-algebras with unity on M_0. The sheaf O_M satisfies the local triviality condition: there exists an open cover {U_i}_{i∈I} of M_0 and isomorphisms of sheaves of super-algebras T_i : O_M|_{U_i} → C^∞_{U_i} ⊗ Λ_2n. The sections of the structure sheaf are referred to as superfunctions on M. A morphism of supermanifolds F : M → N is a morphism of ringed spaces (f, f^♯), f : M_0 → N_0, f^♯ : O_N → f_*O_M.
  • Curriculum Vitae
    Curriculum Vitae. General. First name: Khudoyberdiyev; Surname: Abror; Sex: Male; Date of birth: 1985, September 19; Place of birth: Tashkent, Uzbekistan; Nationality: Uzbek; Citizenship: Uzbekistan; Marital status: married, two children. Official address: Department of Algebra and Functional Analysis, National University of Uzbekistan, 4, Talabalar Street, Tashkent, 100174, Uzbekistan. Phone: 998-71-227-12-24, Fax: 998-71-246-02-24. Private address: 5-14, Lutfiy Street, Tashkent city, Uzbekistan. Phone: 998-97-422-00-89 (mobile), E-mail: [email protected]

    Education: 2008-2010 Institute of Mathematics and Information Technologies of Uzbek Academy of Sciences, Uzbekistan, PhD student; 2005-2007 National University of Uzbekistan, master student; 2001-2005 National University of Uzbekistan, bachelor student. Languages: English (good), Russian (good), Uzbek (native).

    Academic Degrees: Doctor of Sciences in physics and mathematics, 28.04.2016, Tashkent, Uzbekistan. Dissertation title: Structural theory of finite-dimensional complex Leibniz algebras and classification of nilpotent Leibniz superalgebras. Scientific adviser: Prof. Sh.A. Ayupov. Doctor of Philosophy in physics and mathematics (PhD), 28.10.2010, Tashkent, Uzbekistan. Dissertation title: The classification of some nilpotent finite dimensional graded complex Leibniz algebras. Supervisor: Prof. Sh.A. Ayupov. Master of Science, National University of Uzbekistan, 05.06.2007, Tashkent, Uzbekistan. Dissertation title: The classification of filiform Leibniz superalgebras with nilindex n+m. Supervisor: Prof. Sh.A. Ayupov. Bachelor in Mathematics, National University of Uzbekistan, 20.06.2005, Tashkent, Uzbekistan. Dissertation title: On the classification of low dimensional Zinbiel algebras. Supervisor: Prof. I.S. Rakhimov.

    Professional occupation: June 2017 – present: Professor, Department of Algebra and Functional Analysis, National University of Uzbekistan.
  • On p-Adic Mathematical Physics
    ISSN 2070-0466, p-Adic Numbers, Ultrametric Analysis and Applications, 2009, Vol. 1, No. 1, pp. 1–17. © Pleiades Publishing, Ltd., 2009. REVIEW ARTICLES. On p-Adic Mathematical Physics. B. Dragovich 1, A. Yu. Khrennikov 2, S. V. Kozyrev 3, and I. V. Volovich 3. 1 Institute of Physics, Pregrevica 118, 11080 Belgrade, Serbia; 2 Växjö University, Växjö, Sweden; 3 Steklov Mathematical Institute, ul. Gubkina 8, Moscow, 119991 Russia. Received December 15, 2008. Abstract—A brief review of some selected topics in p-adic mathematical physics is presented. DOI: 10.1134/S2070046609010014. Key words: p-adic numbers, p-adic mathematical physics, complex systems, hierarchical structures, adeles, ultrametricity, string theory, quantum mechanics, quantum gravity, probability, biological systems, cognitive science, genetic code, wavelets.

    1. NUMBERS: RATIONAL, REAL, p-ADIC. We present a brief review of some selected topics in p-adic mathematical physics. More details can be found in the references below and the other references are mainly contained therein. We hope that this brief introduction to some aspects of p-adic mathematical physics could be helpful for the readers of the first issue of the journal p-Adic Numbers, Ultrametric Analysis and Applications. The notion of numbers is basic not only in mathematics but also in physics and science in general. Most of modern science is based on mathematical analysis over real and complex numbers. However, it turns out that for exploring complex hierarchical systems it is sometimes more fruitful to use analysis over p-adic numbers and ultrametric spaces. p-Adic numbers (see, e.g. [1]), introduced by Hensel, are widely used in mathematics: in number theory, algebraic geometry, representation theory, algebraic and arithmetical dynamics, and cryptography.
  • Discrete Mathematics Discrete Probability
    Discrete Mathematics: Discrete Probability. Prof. Steven Evans. 7.1: An Introduction to Discrete Probability.

    Finite probability. Definition: An experiment is a procedure that yields one of a given set of possible outcomes. The sample space of the experiment is the set of possible outcomes. An event is a subset of the sample space. Laplace's definition of the probability p(E) of an event E in a sample space S with finitely many equally possible outcomes is p(E) = |E| / |S|.

    Example: What is the probability that a poker hand contains a full house, that is, three of one kind and two of another kind? By the product rule, the number of hands containing a full house is the product of the number of ways to pick two kinds in order, the number of ways to pick three out of four for the first kind, and the number of ways to pick two out of the four for the second kind. We see that the number of hands containing a full house is P(13, 2) · C(4, 3) · C(4, 2) = 13 · 12 · 4 · 6 = 3744. Because there are C(52, 5) = 2,598,960 poker hands, the probability of a full house is 3744 / 2,598,960 ≈ 0.0014.
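    The arithmetic can be verified directly; for instance, in R (a small check of our own, not part of the excerpt):

        # Full-house count and probability, as computed above
        full_house <- 13 * 12 * choose(4, 3) * choose(4, 2)  # P(13,2) * C(4,3) * C(4,2)
        full_house                  # 3744
        full_house / choose(52, 5)  # choose(52, 5) = 2598960, ratio about 0.0014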