Complexity of Algorithms; 3.4 The Integers and Division

ICS 141: Discrete Mathematics for Computer Science I
Dept. of Information & Computer Sciences, University of Hawaii
Jan Stelovsky, based on slides by Dr. Baek and Dr. Still; originals by Dr. M. P. Frank and Dr. J. L. Gross, provided by McGraw-Hill

Lecture 16, Chapter 3: The Fundamentals
- 3.3 Complexity of Algorithms
- 3.4 The Integers and Division

3.3 Complexity of Algorithms
- An algorithm must always produce the correct answer, and it should be efficient. How can the efficiency of an algorithm be analyzed?
- The algorithmic complexity of a computation is, most generally, a measure of how difficult it is to perform the computation; that is, it measures some aspect of the cost of the computation (in a general sense of "cost"): the amount of resources required to carry it out.
- Some of the most common complexity measures:
  - "Time" complexity: the number of operations or steps required.
  - "Space" complexity: the number of memory bits required.

Complexity Depends on Input
- Most algorithms have different complexities for inputs of different sizes. E.g., searching a long list typically takes more time than searching a short one.
- Therefore, complexity is usually expressed as a function of the input size.
- This function usually gives the complexity for the worst-case input of any given length.

Worst-, Average-, and Best-Case Complexity
- A worst-case complexity measure estimates the time required for the most time-consuming input of each size.
- An average-case complexity measure estimates the average time required over inputs of each size.
- A best-case complexity measure estimates the time required for the least time-consuming input of each size.

Example 1: Max Algorithm
- Problem: Find the simplest form of the exact order of growth (Θ) of the worst-case time complexity of the max algorithm, assuming that each line of code takes some constant time every time it is executed (with possibly different times for different lines of code).

Complexity Analysis of max

    procedure max(a1, a2, ..., an: integers)
      v := a1                          {t1}
      for i := 2 to n                  {t2}
        if ai > v then v := ai         {t3}
      return v                         {t4}

- First, what is an expression for the exact total worst-case time (not its order of growth)?
  - t1: once
  - t2: n − 1 + 1 times (the loop test is evaluated n times)
  - t3 (comparison): n − 1 times
  - t4: once

Complexity Analysis (cont.)
- Worst-case time complexity:
    t(n) = t1 + t2 + t3 + t4 = 1 + (n − 1 + 1) + (n − 1) + 1 = 2n + 1
- In terms of the number of comparisons made:
    t(n) = t2 + t3 = (n − 1 + 1) + (n − 1) = 2n − 1
- Now, what is the simplest form of the exact (Θ) order of growth of t(n)? In either case, t(n) = Θ(n).
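To make the count concrete, here is a minimal Python sketch (an illustration added here, not part of the original slides) of the max procedure instrumented to count the element comparisons ai > v; any input of length n uses exactly n − 1 such comparisons, so t(n) grows as Θ(n).

    def max_with_comparisons(a):
        """Rosen-style max: return (maximum, number of element comparisons)."""
        v = a[0]                 # v := a1
        comparisons = 0
        for x in a[1:]:          # for i := 2 to n
            comparisons += 1     # the comparison ai > v
            if x > v:
                v = x
        return v, comparisons

    # For n = 8 elements this reports 7 comparisons, matching the n - 1 term
    # (the 2n - 1 total on the slide also counts the n loop tests).
    print(max_with_comparisons([3, 1, 4, 1, 5, 9, 2, 6]))   # (9, 7)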
Example 2: Linear Search
- Analyzed in terms of the number of comparisons.

    procedure linear_search(x: integer, a1, a2, ..., an: distinct integers)
      i := 1
      while (i ≤ n ∧ x ≠ ai)                             {t11 & t12}
        i := i + 1
      if i ≤ n then location := i else location := 0     {t2}
      return location

Linear Search Analysis
- Worst-case time complexity (number of comparisons):
    t(n) = t11 + t12 + t2 = (n + 1) + n + 1 = 2n + 2 = Θ(n)
- Best case:
    t(n) = t11 + t12 + t2 = 1 + 1 + 1 = 3 = Θ(1)
- Average case, if the item is present:
    t(n) = (3 + 5 + 7 + ... + (2n + 1)) / n = (2(1 + 2 + ... + n) + n) / n = 2[n(n + 1)/2]/n + 1 = n + 2 = Θ(n)

Example 3: Binary Search
- Key question: how many loop iterations?

    procedure binary_search(x: integer, a1, a2, ..., an: distinct integers, sorted smallest to largest)
      i := 1
      j := n
      while i < j begin                                  {t1}
        m := ⌊(i + j)/2⌋
        if x > am then i := m + 1 else j := m            {t2}
      end
      if x = ai then location := i else location := 0    {t3}
      return location

Binary Search Analysis
- Suppose that n is a power of 2, i.e., ∃k: n = 2^k.
- The original range from i = 1 to j = n contains n items.
- Each iteration cuts the size j − i + 1 of the range in half, so the size decreases as 2^k, 2^(k−1), 2^(k−2), ...
- The loop terminates when the size of the range is 1 = 2^0 (i.e., i = j).
- Therefore, the number of iterations is k = log2 n, and
    t(n) = t1 + t2 + t3 = (k + 1) + k + 1 = 2k + 2 = 2 log2 n + 2 = Θ(log2 n)
- Even for n ≠ 2^k (n not an integral power of 2), the time complexity is still the same.

Analysis of Sorting Algorithms
- See Rosen, Section 3.3, Examples 5 and 6 for the worst-case time complexity of the bubble sort and insertion sort algorithms in terms of the number of comparisons made.

Bubble Sort Analysis

    procedure bubble_sort(a1, a2, ..., an: real numbers with n ≥ 2)
      for i := 1 to n − 1
        for j := 1 to n − i
          if aj > aj+1 then interchange aj and aj+1
      {a1, a2, ..., an is in increasing order}

- Worst-case complexity in terms of the number of comparisons: Θ(n^2)

Insertion Sort

    procedure insertion_sort(a1, a2, ..., an: real numbers; n ≥ 2)
      for j := 2 to n begin
        i := 1
        while aj > ai
          i := i + 1
        m := aj
        for k := 0 to j − i − 1
          aj−k := aj−k−1
        ai := m
      end
      {a1, a2, ..., an are sorted in increasing order}

- Worst-case complexity in terms of the number of comparisons: Θ(n^2)

Common Terminology for the Complexity of Algorithms
- (Summary table of standard names for orders of growth; the table itself did not survive in this text.)

Computer Time Examples
- Assume time = 1 ns (10^−9 second) per operation, problem size = n bits, and #ops is a function of n.

    #ops(n)      n = 10 (1.25 bytes)    n = 10^6 (125 kB)
    log2 n       3.3 ns                 19.9 ns
    n            10 ns                  1 ms
    n log2 n     33 ns                  19.9 ms
    n^2          100 ns                 16 m 40 s
    2^n          1.024 µs               10^301,004.5
    n!           3.63 ms                Ouch!
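The table can be reproduced (up to rounding, and excluding the astronomically large lower-right entries) with a short Python sketch; this is an added illustration, assuming only the slide's cost model of 1 ns per operation.

    import math

    NS_PER_OP = 1e-9   # the slide's assumed cost: one nanosecond per operation

    def report(name, ops):
        """Print the estimated running time for a given operation count."""
        if math.isfinite(ops):
            print(f"  {name:10s} {ops * NS_PER_OP:.3e} s")
        else:
            print(f"  {name:10s} astronomically long")

    for n in (10, 10**6):
        print(f"n = {n}:")
        report("log2 n", math.log2(n))
        report("n", n)
        report("n log2 n", n * math.log2(n))
        report("n^2", n**2)
        report("2^n", 2.0**n if n < 1000 else math.inf)            # overflows a float for n = 10^6
        report("n!", math.factorial(n) if n <= 20 else math.inf)   # likewise far too large for n = 10^6

Running it confirms, for example, that the n^2 algorithm at n = 10^6 already needs about 10^3 seconds, i.e., 16 minutes 40 seconds.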
Review: Complexity
- Algorithmic complexity = the cost of computation.
- We focus on time complexity in this course, although space and energy are also important.
- Characterize complexity as a function of input size: worst-case, best-case, or average-case.
- Use orders-of-growth notation to concisely summarize the growth properties of complexity functions.
- You need to know the names of specific orders of growth of complexity, and how to analyze the order of growth of the time complexity of simple algorithms.

Tractable vs. Intractable
- A problem that is solvable using an algorithm with at most polynomial time complexity is called tractable (or feasible). P is the set of all tractable problems.
- A problem that cannot be solved using an algorithm with worst-case polynomial time complexity is called intractable (or infeasible).
- Note that n^1,000,000 is technically tractable but really very hard, while n^(log log log n) is technically intractable but easy. Such cases are rare, though.

P vs. NP
- NP is the set of problems for which there exists a tractable algorithm for checking a proposed solution to tell whether it is correct.
- We know that P ⊆ NP, but the most famous unproven conjecture in computer science is that this inclusion is proper, i.e., that P ⊂ NP rather than P = NP.
- Whoever first proves (or disproves!) it will be famous!

3.4 The Integers and Division
- Of course, you already know what the integers are and what division is...
- But there are some specific notations, terminology, and theorems associated with these concepts which you may not know.
- These form the basics of number theory, which is vital in many important algorithms today (hash functions, cryptography, digital signatures, ...).

Divides, Factor, Multiple
- Let a, b ∈ Z with a ≠ 0.
- Definition: a|b ⇔ "a divides b" ⇔ (∃c ∈ Z: b = ac), i.e., there is an integer c such that c times a equals b.
- Example: 3|−12 is true, but 3|7 is false.
- If a divides b, then we say a is a factor or a divisor of b, and b is a multiple of a.
- E.g., "b is even" ⇔ 2|b.
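As a small illustration (added here, not from the slides), the divisibility predicate translates directly into code: a|b holds exactly when b leaves remainder 0 on division by a.

    def divides(a: int, b: int) -> bool:
        """Return True iff a | b, i.e., b = a*c for some integer c (requires a != 0)."""
        if a == 0:
            raise ValueError("a must be nonzero")
        return b % a == 0

    print(divides(3, -12))   # True:  -12 = 3 * (-4), so 3 is a factor of -12
    print(divides(3, 7))     # False: 7 is not a multiple of 3
    print(divides(2, 10))    # True:  10 is even, i.e., 2|10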