CSCI S-111 Section Notes
Unit 7, Section 4

1. Quicksort

The quicksort algorithm uses a recursive "divide-and-conquer" approach to achieve a much better average-case time complexity than the O(n^2) sorting algorithms we've seen thus far. The elements to be sorted are partitioned into two subarrays such that all the elements in the "left" subarray are less than or equal to all the elements in the "right" subarray. The subarrays themselves are then recursively partitioned, until we get down to subarrays containing just a single element (which can't be further partitioned). Once all the recursive invocations reach the base case of a single-element subarray, the entire array is sorted.

Recall that partitioning is accomplished by choosing a pivot value and repeatedly swapping elements such that the left subarray contains only values that are <= the pivot, and the right subarray contains only values that are >= the pivot.

Let's trace through quicksort on the following array, assuming that we're using the middle element as the pivot value:

    -----------------------------
    | 7 | 39 | 20 | 11 | 16 | 5 |
    -----------------------------

[Worksheet diagram: for each level of recursion, record the pivot and the swaps performed, then recursively partition the left and right subarrays until every subarray reaches the single-element base case ("base case reached!").]

What is the time complexity of quicksort in the best case? In the worst case? In the average case?

How would you characterize the performance of quicksort in the example we just stepped through? Was it an example of best-case, worst-case, or average-case performance? Why?

Optional: How many calls to the partition() method does quicksort perform as a function of the input size n? Note that this will not be the same as the overall time complexity for the algorithm, which is based on the number of comparisons and moves rather than partition() calls.
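For reference, here is a minimal Java sketch of quicksort using the middle element as the pivot value, in the spirit of the approach described above. The names (quickSort, partition, first, last) are illustrative and are not necessarily the ones used in the course's Sort class.

    // Sort arr[first..last] by recursively partitioning it.
    public static void quickSort(int[] arr, int first, int last) {
        if (first >= last) {
            return;                        // base case: 0 or 1 element
        }
        int split = partition(arr, first, last);
        quickSort(arr, first, split);      // left subarray: values <= pivot
        quickSort(arr, split + 1, last);   // right subarray: values >= pivot
    }

    // Partition arr[first..last] around the middle element's value and
    // return the index of the last element in the left subarray.
    private static int partition(int[] arr, int first, int last) {
        int pivot = arr[(first + last) / 2];
        int i = first - 1;
        int j = last + 1;
        while (true) {
            do { i++; } while (arr[i] < pivot);   // find a value >= pivot
            do { j--; } while (arr[j] > pivot);   // find a value <= pivot
            if (i < j) {
                int temp = arr[i];                // swap the out-of-place pair
                arr[i] = arr[j];
                arr[j] = temp;
            } else {
                return j;                         // the subarrays meet here
            }
        }
    }

Tracing partition() by hand on the worksheet array { 7, 39, 20, 11, 16, 5 } is a good way to check the pivots and swaps you recorded above.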
2. Mergesort

Mergesort uses the same overall recursive "divide-and-conquer" strategy as quicksort, but whereas quicksort does all the work of sorting the array in the process of dividing (i.e., partitioning) it, mergesort performs no sorting during the division phase of the algorithm, and instead does all the work of sorting the array in the process of re-combining (i.e., merging) the subarrays formed during the division phase.

Let's trace through mergesort on the following array, paying particular attention to the order in which the recursive calls happen (they do not happen in parallel, despite what the diagram implies):

    -----------------------------------------
    | 7 | 39 | 20 | 11 | 16 | 5 | 9 | 28 |
    -----------------------------------------

[Worksheet diagram: the array is repeatedly split in half until eight one-element subarrays remain; pairs of subarrays are then merged back together, level by level, until the full sorted array is re-formed.]

What major advantage does mergesort have over quicksort with respect to time complexity?

What major disadvantage does mergesort have compared to quicksort with respect to space complexity?
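Here is a minimal Java sketch of the merge step and its recursive wrapper, assuming an auxiliary array tmp of the same length as arr. The names (merge, mSort, tmp, start, mid, end) are illustrative rather than taken from the course's Sort class; the auxiliary array is what the space-complexity question above is getting at.

    // Merge the sorted runs arr[start..mid] and arr[mid+1..end] using tmp.
    private static void merge(int[] arr, int[] tmp, int start, int mid, int end) {
        int left = start;       // index into the left run
        int right = mid + 1;    // index into the right run
        int k = start;          // index into tmp

        while (left <= mid && right <= end) {
            if (arr[left] <= arr[right]) {
                tmp[k++] = arr[left++];
            } else {
                tmp[k++] = arr[right++];
            }
        }
        while (left <= mid) {                 // copy any leftover elements
            tmp[k++] = arr[left++];
        }
        while (right <= end) {
            tmp[k++] = arr[right++];
        }

        for (int i = start; i <= end; i++) {  // copy the merged run back into arr
            arr[i] = tmp[i];
        }
    }

    // Recursively split arr[start..end] in half, then merge the sorted halves.
    private static void mSort(int[] arr, int[] tmp, int start, int end) {
        if (start >= end) {
            return;                           // base case: 0 or 1 element
        }
        int mid = (start + end) / 2;
        mSort(arr, tmp, start, mid);
        mSort(arr, tmp, mid + 1, end);
        merge(arr, tmp, start, mid, end);
    }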
3. Deriving an expression for runtime from experimental data

In Problem Set 2, we will ask you to take some experimental data and infer the big-O time complexity from it. Here is an example of this process.

The main method of the SortCount class we've given you runs the various sorting algorithms on either a random or an almost-sorted array and reports the number of comparisons and moves performed by each algorithm. Here is sample output from SortCount for one of the sorting algorithms on random arrays:

    n     comparisons from three runs     moves from three runs
    ---   ---------------------------     ---------------------
    100   4950, 4950, 4950                297, 297, 297
    200   19900, 19900, 19900             597, 597, 597
    800   319600, 319600, 319600          2397, 2397, 2397

What can we say about the number of comparisons and moves for different inputs? What can we infer about the comparisons and moves needed by this algorithm?

Given these observations, what sorting algorithm do we think this is? Does it make sense that the number of comparisons and moves is the same regardless of the sortedness of the input?

However, this is not the case for all of our sorting algorithms. What are some examples of sorting algorithms whose comparisons vary with the input?

Here is some sample output from InsertionSort:

    n     comparisons
    ---   -----------
    100   2926, 2483, 2753 (avg = 2721)
    200   10815, 10098, 10239 (avg = 10384)

What do we notice about the number of comparisons?

4. Radix sort

Radix sort is a stable, distributive sorting algorithm that works by processing the individual digits of each element according to their position of significance. The version we worked with in lecture begins by evaluating the least-significant digit of each element in the array (i.e., the rightmost digit of each element), and moves from left to right over the array of elements at each stage, depositing each value into a 'bucket' according to the value of its least-significant digit. It then repeats this process for each successive digit, stopping once it has evaluated each element according to the most-significant position of the largest element in the array.

We can break down radix sort into the following procedure:

− Start at the beginning of the array and iterate over each element.
− For each element, place it into a "bucket" according to the value of its least-significant digit, but otherwise maintain the order of the elements (this is achieved by moving from left to right over the array).
− When you reach the end of the array, repeat the process for the next most-significant digit.
− Stop when you've evaluated all the elements according to the most-significant position of the largest element in the array.

(One way a single distribution pass might look in Java is sketched at the end of these notes.)

An example:

Original unsorted array, n = 12:

    41  326  18  1  117  56  86  7  14  221  19  30

1st pass: 'buckets' for the 1's digit:

    Sig. Digit | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
    Elements   |   |   |   |   |   |   |   |   |   |

2nd pass: 'buckets' for the 10's digit:

    Sig. Digit | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
    Elements   |   |   |   |   |   |   |   |   |   |

3rd pass: 'buckets' for the 100's digit:

    Sig. Digit | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
    Elements   |   |   |   |   |   |   |   |   |   |

Questions on radix sort:

Thinking about the example above, and keeping in mind that radix sort processes its data as a sequence of m quantities with k possible values, what do m and k represent in the example above?

How many operations did our example above require? How many operations would the example above have required if the elements were already in sorted order? If they were in reverse order?

Which sorting method would have been more efficient for sorting the above array: radix sort, or merge sort?

5. The removeDups() problem from Problem Set 2

This problem asks you to remove duplicates from an already sorted array using an algorithm that requires O(n) steps. To do this, each element can move at most once. For example, the problem set says that if we are given the array:

    2  5  5  5  10  12  12

we need to change it to look like this:

    2  5  10  12  0  0  0

To get an O(n) algorithm, the 10 and 12 should be moved only once.

This problem is somewhat like insertion sort, in that you want to consider the elements from left to right and potentially "insert" each element arr[i] somewhere in the subarray that goes from arr[0] to arr[i]. However, the problem is different from insertion sort in that you don't need to perform a backwards pass in which you figure out where an element should go while shifting other elements to the right. Instead, you should be able to use an index to keep track of where the next "insertion" (if any) should occur. Also, the "insertions" are really just moves, in which an element arr[i] is copied into a position originally occupied by another element, without sliding other elements over.

Let's consider how we would process the example from the problem set:

    2  5  5  5  10  12  12

We consider the elements of the array from left to right, beginning with element 1 (the first 5).

- element 1 (the first 5): does it need to move? [no, because there are no duplicates to its left]
- element 2 (the second 5): does it need to move? [no, because it's a duplicate]
- element 3 (the third 5): does it need to move? [no, because it's a dup]
- element 4 (the 10): does it need to move? [yes, because it's not a dup, and there are dups to its left]
- element 5 (the first 12): does it need to move? [yes, because it's not a dup, and there are dups to its left]
- element 6 (the second 12): does it need to move? [no, because it's a dup]
- we've reached the end of the array
- we conclude by filling the elements that are now unused with 0s
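One possible way to express this idea in Java is sketched below; it is not the official Problem Set 2 solution, and the names (removeDups, next) are illustrative. The variable next plays the role of the index described above that tracks where the next "insertion" should occur.

    // Sketch: remove duplicates from a sorted array in O(n) steps,
    // moving each retained element at most once and zero-filling the rest.
    public static void removeDups(int[] arr) {
        if (arr.length == 0) {
            return;
        }
        int next = 1;                        // where the next kept element goes
        for (int i = 1; i < arr.length; i++) {
            if (arr[i] != arr[i - 1]) {      // not a duplicate of its left neighbor
                arr[next] = arr[i];          // a single copy, not a shift
                next++;
            }
        }
        for (int i = next; i < arr.length; i++) {
            arr[i] = 0;                      // fill the now-unused positions with 0s
        }
    }

On the worksheet array { 2, 5, 5, 5, 10, 12, 12 }, next and i coincide until the first duplicate is seen, so the 10 and the first 12 are the only elements copied to a new position, matching the trace above.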
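Finally, returning to radix sort from section 4: here is a minimal sketch of a single bucket-distribution pass, assuming non-negative integers and java.util.ArrayList buckets. The method name onePass and the parameter place are illustrative, not from the lecture code.

    import java.util.ArrayList;

    // Sketch: one distribution pass of radix sort for the digit with place
    // value 'place' (1 for the 1's digit, 10 for the 10's digit, and so on).
    // Elements are appended to buckets in left-to-right order, which is what
    // keeps the sort stable, and are then copied back into the array.
    public static void onePass(int[] arr, int place) {
        ArrayList<ArrayList<Integer>> buckets = new ArrayList<>();
        for (int d = 0; d <= 9; d++) {
            buckets.add(new ArrayList<Integer>());
        }
        for (int value : arr) {
            int digit = (value / place) % 10;        // the digit at this position
            buckets.get(digit).add(value);
        }
        int i = 0;
        for (ArrayList<Integer> bucket : buckets) {  // read buckets 0..9 in order
            for (int value : bucket) {
                arr[i++] = value;
            }
        }
    }

Calling onePass(arr, 1), onePass(arr, 10), and onePass(arr, 100) on the n = 12 worksheet array corresponds to the three passes shown in section 4.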