Lecture Notes for Math182: Algorithms Draft: Last Revised July 28, 2020

Total pages: 16

File type: PDF, size: 1020 KB

Allen Gehret

Author address: Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095
E-mail address: [email protected]

Contents

List of Figures
Introduction
  Prerequisites
  Conventions and notation
  Acknowledgements
Chapter 1. Discrete mathematics and basic algorithms
  1.1. Induction
  1.2. Summations
  1.3. Triangular number algorithms
  1.4. Common functions
  1.5. Fibonacci numbers
  1.6. Exercises
Chapter 2. Asymptotics
  2.1. Asymptotic notation
  2.2. Properties of asymptotic notation
  2.3. The Euclidean Algorithm
  2.4. Exercises
Chapter 3. Sorting algorithms
  3.1. Insertion sort
  3.2. Merge sort
  3.3. Lower bound on comparison-based sorting
  3.4. Exercises
Chapter 4. Divide-and-Conquer
  4.1. The maximum-subarray problem
  4.2. The substitution method
  4.3. The recursion-tree method
  4.4. The master method
  4.5. Strassen's algorithm for matrix multiplication
Chapter 5. Data structures
  5.1. Heaps
  5.2. Heapsort
  5.3. Priority queues
  5.4. Stacks and queues
Chapter 6. Dynamic programming
  6.1. Rod cutting
  6.2. Matrix-chain multiplication
Chapter 7. Greedy algorithms
  7.1. An activity-selection problem
Chapter 8. Elementary graph algorithms
  8.1. Representations of graphs
  8.2. Breadth-first search
  8.3. Depth-first search
  8.4. Minimum spanning trees
Chapter 9. Single-source shortest paths
  9.1. Single-source shortest paths
  9.2. The Bellman-Ford algorithm
Appendix A. Pseudocode conventions and Python
  A.1. Pseudocode conventions
  A.2. Python
Bibliography
Index

Abstract

The objectives of this class are as follows: (1) Survey various well-known algorithms which solve a variety of common problems that arise in computer science.
This includes reviewing the mathematical background involved in the algorithms and, when possible, characterizing the algorithms into one of several algorithm paradigms (greedy, divide-and-conquer, dynamic programming, ...). (2) Use mathematics to analyze and determine the efficiency of an algorithm (i.e., does the running time scale linearly, quadratically, exponentially, etc. with the size of the input?). (3) Use mathematics to prove the correctness of an algorithm (i.e., prove that it correctly does what it is supposed to do). Portions of these notes are based on [6], [1], and [5]. Any and all questions, comments, typos, and suggestions concerning these notes are enthusiastically welcomed and greatly appreciated. Last revised July 28, 2020. 2010 Mathematics Subject Classification. Primary . The first author is supported by the National Science Foundation under Award No. 1703709.

List of Figures

3.1 Here we trace through Insertion-Sort with input A = ⟨3, 2, 1⟩. The first part of the code processes the 2 in the second spot, inserting it before the 3 in the first spot and moving the 3 over one spot to the right. The next part of the code processes the 1 in the third spot, inserting it before the 2 and the 3, thus moving the 2 and the 3 each over one spot to the right.

3.2 Here we trace through Merge(A, 5, 6, 8), where A[5..8] = ⟨2, 3, 1, 4⟩. This means that we will merge ⟨2, 3⟩ with ⟨1, 4⟩, so we should end up with A[5..8] = ⟨1, 2, 3, 4⟩, which we do. At the top we show the result of lines 1-11, after we have copied A[5..6] into L[1..2] and A[7..8] into R[1..2].

3.3 Here we consider the recursion tree of Merge-Sort in three levels of detail: (1) with just the top level of recursion shown, (2) with the top two levels shown, and (3) with all three levels shown.

3.4 Here we show the decision tree for the Insertion-Sort algorithm run on an input of size 3.
The leaves of the tree represent the permutation of the original input needed in order to sort it. We also show an example of a particular path of the decision tree based on the input ⟨7, 9, 5⟩, which requires the permutation ⟨3, 1, 2⟩ in order to be put into the sorted form ⟨5, 7, 9⟩.

4.1 This shows the three locations where we may find our maximum subarray: contained in the left half A[low..mid], contained in the right half A[mid+1..high], or split between both halves and crossing the midpoint (of the form A[i..j] where i ≤ mid and j ≥ mid+1).

4.2 Finding the two subarrays which, when joined together, constitute the maximum subarray of A[low..high] which crosses the midpoint.

4.3 The recursion tree for T(n) = 3T(n/4) + cn^2.

4.4 The recursion tree for T(n) = T(n/3) + T(2n/3) + cn.

5.1 Here we illustrate a heap in two ways: first as an abstract nearly-complete binary tree, second as an array. Note that this heap is neither a max-heap nor a min-heap. Furthermore, only the subarray A[1..A.heap-size] represents the heap; the rest of the array, A[A.heap-size+1..A.length], is not part of the heap.

5.2 Here we illustrate a max-heap in two ways: first as a binary tree, second as an array. The tree has height 3, and the node 8 (corresponding to A[4]) has height 1.

5.3 Here we call Max-Heapify(A, 2). First it exchanges A[2] with A[4] (its left child). Then it recursively calls Max-Heapify(A, 4), which exchanges A[4] with A[9] (its right child). Then it is finished, and the heap with root 2 is now a max-heap.

5.4 Here we call Build-Max-Heap on the array A = ⟨4, 1, 3, 2, 16, 9, 10, 14, 8, 7⟩. It iteratively calls Max-Heapify(A, 5), Max-Heapify(A, 4), ..., Max-Heapify(A, 1).

5.5 Here we call Heapsort on a certain array A. First we apply Build-Max-Heap(A) to get a max-heap.
Then we iteratively take the max and put it at the end of the heap, make the heap one element smaller, and then call Max-Heapify on the smaller heap to find the next max.

5.6 Here we call Heap-Increase-Key(A, i, 15). First this changes the key A[i] = 4 to A[i] = 15. Next it bubbles up the key 15 until the max-heap property is satisfied.

5.7 We start with a nonempty stack S. Then we call in succession Push(S, 17), Push(S, 3), and Pop(S), which returns 3. At the end, 3 is still part of the array S, but no longer part of the stack S.

5.8 We start with a nonempty queue and call in succession Enqueue(Q, 17), Enqueue(Q, 3), and Enqueue(Q, 5), which adds 17, 3, and 5 to the back of the queue, in that order. Then we call Dequeue(Q), which returns 15 and advances the head of the queue to the next element after 15.

6.1 First we show the full recursion tree for Cut-Rod(p, 4). Next we show the subproblem graph for n = 4, which is like a collapsed version of the recursion tree.

6.2 Here we find the optimal parenthesization for the chain ⟨A1, A2, ..., An⟩, where the array of dimensions is p = ⟨30, 35, 15, 5, 10, 20, 25⟩. We iteratively compute the optimal solutions to the subproblems Ai..j and store the answers in the two-dimensional array m[i, j]. We also store in s[i, j] the index k which achieves the optimal solution for m[i, j]. This allows us to reconstruct the optimal parenthesization.

8.1 Here we show how both an undirected graph and a directed graph can be represented with either an adjacency list or an adjacency matrix.

8.2 An example of BFS. The black vertices are the ones with double circles, and the white vertices have distance 1. All other vertices are gray.

8.3 Here we show the result of DFS on a directed graph. Below that is an illustration of the intervals of the discovery times for each node. Finally, we show the directed graph with all tree and forward edges going down and all back edges going up.
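Figures 5.3 and 5.4 above trace Max-Heapify and Build-Max-Heap on the array A = ⟨4, 1, 3, 2, 16, 9, 10, 14, 8, 7⟩. A minimal Python sketch of those two operations, using 0-based indexing (the notes' pseudocode uses 1-based arrays; the function names here are illustrative, not taken from the notes):

```python
def max_heapify(a, i, heap_size):
    """Sift a[i] down until the subtree rooted at i satisfies the max-heap property."""
    left, right = 2 * i + 1, 2 * i + 2  # children of i in 0-based indexing
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]  # exchange with the larger child
        max_heapify(a, largest, heap_size)   # continue sifting down

def build_max_heap(a):
    """Call max_heapify on each non-leaf node, bottom-up."""
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))

a = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
build_max_heap(a)
print(a[0])  # → 16 (the maximum now sits at the root)
```

Heapsort (Figure 5.5) then repeatedly swaps the root with the last heap element, shrinks the heap by one, and calls max_heapify on the root.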
8.4 An example of a minimum spanning tree.

8.5 Here we show a cut (S, V \ S) which respects A (the bold edges). We illustrate this cut in two different ways. We also indicate a light edge for the cut.

8.6 The proof of Theorem 8.4.1.

Introduction

What is an algorithm? This is a tough question to provide a definitive answer to, but since it is the subject of this course, we will attempt to say a few words about it. Generally speaking, an algorithm is an unambiguous sequence of instructions which accomplishes something. As human beings, we are constantly following algorithms in our everyday life. For instance, here is an algorithm for eating a banana: (Step 1) Peel banana.
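A more formal example is the Insertion-Sort trace of Figure 3.1, which sorts the input ⟨3, 2, 1⟩. A minimal Python sketch of that algorithm, using 0-based indexing (the notes' pseudocode uses 1-based arrays; the function name is illustrative, not taken from the notes):

```python
def insertion_sort(a):
    """Sort the list a in place, as in the trace of Figure 3.1."""
    for j in range(1, len(a)):
        key = a[j]  # element to insert into the sorted prefix a[0..j-1]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]  # shift larger elements one spot to the right
            i -= 1
        a[i + 1] = key

a = [3, 2, 1]
insertion_sort(a)
print(a)  # → [1, 2, 3]
```

On ⟨3, 2, 1⟩ this performs exactly the two insertions the figure describes: first the 2 moves in front of the 3, then the 1 moves in front of both.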