Data Structures in C

Total Pages: 16

File Type: PDF, Size: 1020 KB

Data Structures in C
Lecture Notes for CIS*2520, University of Guelph
Judi McCuaig
August 26, 2017 (DRAFT)

Foreword

This document was originally written as a supplement to the slides and lectures for CIS*2520. Recently (early 2017) it was revised to serve as the main reference document for a distance education version of CIS*2520. Students using this material are assumed to have a background in programming in C.

Special thanks to David McCaughan, Pascal Matsakis and Xining Li for sharing slides and presentations from their offerings of CIS*2520. D. McCaughan's materials influenced the overall design of the course materials significantly. Dr. Matsakis' and Dr. Li's work helped greatly to improve the material on computational complexity, heaps, and priority queues.

Contributors

The LaTeX source for this book was placed on GitHub on March 24, 2017. The work has been improved by contributions from the following persons:

  • Lauren Good

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Contents

1 Introduction: 1.1 Why Software Design; 1.2 Software Development - CIS 2520 Style; 1.3 Test driven development; 1.4 Extending Activities
2 Abstraction: 2.1 Introduction; 2.2 Real World Abstraction (2.2.1 Implementation); 2.3 Abstraction in Software Design (2.3.1 Abstract Data Types; 2.3.2 Worked Example: The Fraction ADT); 2.4 Information Hiding (Encapsulation); 2.5 Extending Activities
3 List ADT: 3.1 Introduction; 3.2 Linked Lists (3.2.1 Implementation Details; 3.2.2 Nodes; 3.2.3 Operations; 3.2.4 Adding Elements; 3.2.5 Deleting Nodes; 3.2.6 Other Core List Operations); 3.3 Array implementation for a list; 3.4 Array Lists vs Linked Lists; 3.5 List Iterator; 3.6 Extending Activities
4 Stack ADT: 4.1 Why use stacks?; 4.2 Operations on a Stack ADT; 4.3 Array Implementation of a Stack; 4.4 Applications; 4.5 Resources; 4.6 Extending Activities
5 Queues: 5.1 Introduction; 5.2 Implementation: Encapsulate List ADT; 5.3 Array Implementation of a Queue; 5.4 Additional Resources; 5.5 Extending Activities
6 Associative Arrays: 6.1 Introduction; 6.2 Implementation (6.2.1 Array Implementation; 6.2.2 Linked List Implementation); 6.3 Operations on Associative Arrays; 6.4 Applications (6.4.1 Lending Library); 6.5 Extending Activities
7 Hash Tables: 7.1 Introduction; 7.2 Hash Table Operations; 7.3 Hash Table Characteristics; 7.4 Hash Functions (7.4.1 Before Hashing: Preconditioning; 7.4.2 Hashing by Truncation; 7.4.3 Hashing by Folding; 7.4.4 Hashing by Division); 7.5 Collision Resolution for Hashing (7.5.1 Open Addressing; 7.5.2 Separate Chaining); 7.6 Additional Resources; 7.7 Extending Activities
8 Introduction to Trees: 8.1 Simple Rooted Trees; 8.2 Operations on Trees (8.2.1 Tree Insertions; 8.2.2 Removing nodes from a tree; 8.2.3 Tree Traversal); 8.3 Binary Trees (8.3.1 C Structure for Binary Tree ADT; 8.3.2 Recursive Helper Functions; 8.3.3 Wrapper functions); 8.4 Types of Binary Trees; 8.5 Extending Activities
9 Heaps: 9.1 Introduction; 9.2 Operations (9.2.1 Heap Insert; 9.2.2 Heap Delete); 9.3 Array Implementation of a Heap (9.3.1 Adding Elements; 9.3.2 Removing Elements); 9.4 Extending Activities
10 Priority Queue: 10.1 Uses; 10.2 Implementation; 10.3 Operations; 10.4 Varying Priority and Starvation Prevention; 10.5 Extending Activities
11 Trees: 11.1 Balanced Trees; 11.2 Binary Search Trees (11.2.1 AVL trees; 11.2.2 Case B: Imbalance to the left -> right rotation; 11.2.3 Case C: Imbalance in left subtree of right child -> double rotation); 11.3 B-Trees; 11.4 Summary; 11.5 Additional Resources; 11.6 Extending Activities
12 Sorting: 12.1 Bubble Sort (12.1.1 Bubble Sort Pseudocode); 12.2 Selection Sort (12.2.1 Selection Sort Pseudocode); 12.3 Insertion Sort (12.3.1 Insertion Sort Pseudocode); 12.4 Shell Sort (12.4.1 Shell Sort Pseudocode); 12.5 Quick Sort (12.5.1 Quick Sort Pseudocode); 12.6 Merge Sort (12.6.1 Merge Sort Pseudocode); 12.7 Additional Resources; 12.8 Self-Test Questions
13 Algorithmic Complexity and Complexity Analysis: 13.0.1 Determining Complexity; 13.0.2 Big O notation; 13.0.3 Estimating Complexity; 13.0.4 Complexity in the real world; 13.1 Analysis of Algorithms using Big O (13.1.1 Analyzing recursive functions; 13.1.2 O(1) and O(N); 13.1.3 O(log N); 13.1.4 O(N log N); 13.1.5 O(N^2); 13.1.6 O(2^N); 13.1.7 Reasonable vs Unreasonable algorithms); 13.2 Additional Resources for Computational Complexity; 13.3 Extending Activities

Chapter 1 Introduction

Welcome to CIS 2520! A data structures course is often the first course in a series of courses on software design. During this class we will examine a variety of abstract
Recommended publications
  • Optimizing Sorting with Genetic Algorithms ∗
    Optimizing Sorting with Genetic Algorithms. Xiaoming Li, María Jesús Garzarán, and David Padua. Department of Computer Science, University of Illinois at Urbana-Champaign. {xli15, garzaran, padua}@cs.uiuc.edu, http://polaris.cs.uiuc.edu
    Abstract: Although compiler technology has been extraordinarily successful at automating the process of program optimization, much human intervention is still needed to obtain high-quality code. One reason is the unevenness of compiler implementations. There are excellent optimizing compilers for some platforms, but the compilers available for some other platforms leave much to be desired. A second, and perhaps more important, reason is that conventional compilers lack semantic information and, therefore, have limited transformation power. An emerging approach that has proven quite effective in overcoming both of these limitations is to use library generators. These systems make use of semantic information to apply transformations at all levels of abstraction.
    1 Introduction: The growing complexity of modern processors has made the generation of highly efficient code increasingly difficult. Manual code generation is very time consuming, but it is often the only choice since the code generated by today's compiler technology often has much lower performance than the best hand-tuned codes. A promising code generation strategy, implemented by systems like ATLAS, FFTW, and SPIRAL, uses empirical search to find the parameter values of the implementation, such as the tile size and instruction schedules, that deliver near-optimal performance for a particular machine. However, this approach has only proven successful on scientific codes whose performance does not depend on the input data.
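    The empirical-search idea described above can be sketched in plain C. The kernel, the candidate values, and the "stride" parameter below are hypothetical stand-ins, not anything from the paper; the point is only the search loop that times each candidate on the target machine and keeps the fastest one.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical tunable kernel: strided accumulation over a buffer.
     * The "stride" parameter stands in for a tuning knob such as a tile size. */
    #define BUF_SIZE (1 << 20)
    static int buf[BUF_SIZE];

    static long run_kernel(int stride)
    {
        long sum = 0;
        for (int start = 0; start < stride; start++)
            for (int i = start; i < BUF_SIZE; i += stride)
                sum += buf[i];
        return sum;
    }

    /* Empirical search: time each candidate parameter value on this
     * machine and keep the fastest one. */
    int main(void)
    {
        const int candidates[] = { 1, 2, 4, 8, 16, 32 };
        const int ncand = sizeof(candidates) / sizeof(candidates[0]);
        int best = candidates[0];
        double best_time = 1e30;

        for (int c = 0; c < ncand; c++) {
            clock_t t0 = clock();
            volatile long sink = run_kernel(candidates[c]);  /* volatile: keep the work */
            double elapsed = (double)(clock() - t0) / CLOCKS_PER_SEC;
            (void)sink;
            if (elapsed < best_time) {
                best_time = elapsed;
                best = candidates[c];
            }
        }
        printf("best stride on this machine: %d (%.4f s)\n", best, best_time);
        return 0;
    }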
  • Space-Efficient Data Structures, Streams, and Algorithms
    Andrej Brodnik, Alejandro López-Ortiz, Venkatesh Raman, Alfredo Viola (Eds.): Space-Efficient Data Structures, Streams, and Algorithms. Papers in Honor of J. Ian Munro on the Occasion of His 66th Birthday. Festschrift, Lecture Notes in Computer Science 8066.
    Lecture Notes in Computer Science 8066. Commenced publication in 1973. Founding and former series editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen. Editorial board: David Hutchison, Lancaster University, UK; Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA; Josef Kittler, University of Surrey, Guildford, UK; Jon M. Kleinberg, Cornell University, Ithaca, NY, USA; Alfred Kobsa, University of California, Irvine, CA, USA; Friedemann Mattern, ETH Zurich, Switzerland; John C. Mitchell, Stanford University, CA, USA; Moni Naor, Weizmann Institute of Science, Rehovot, Israel; Oscar Nierstrasz, University of Bern, Switzerland; C. Pandu Rangan, Indian Institute of Technology, Madras, India; Bernhard Steffen, TU Dortmund University, Germany; Madhu Sudan, Microsoft Research, Cambridge, MA, USA; Demetri Terzopoulos, University of California, Los Angeles, CA, USA; Doug Tygar, University of California, Berkeley, CA, USA; Gerhard Weikum, Max Planck Institute for Informatics, Saarbruecken, Germany.
    Volume editors: Andrej Brodnik, University of Ljubljana, Faculty of Computer and Information Science, Ljubljana, Slovenia, and University of Primorska,
  • Sorting Algorithm
    In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions: 1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order); 2. The output is a permutation, or reordering, of the input. Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a solved problem, useful new sorting algorithms are still being invented (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide and conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and lower bounds. Classification: Sorting algorithms used in computer science are often classified by: • Computational complexity (worst, average and best behaviour) of element comparisons in terms of the size of the list (n). For typical sorting algorithms good behavior is O(n log n) and bad behavior is O(n²).
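    As a quick illustration of the first output condition above, here is a minimal C check; the function name is ours, not from the article. Verifying the second condition (permutation) would additionally require comparing the multiset of elements before and after sorting.

    #include <stdbool.h>
    #include <stddef.h>

    /* Returns true if arr[0..n-1] is in nondecreasing order:
     * each element is no smaller than the one before it. */
    static bool is_nondecreasing(const int *arr, size_t n)
    {
        for (size_t i = 1; i < n; i++) {
            if (arr[i] < arr[i - 1]) {
                return false;
            }
        }
        return true;
    }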
  • Development of Sorting and Searching Algorithms (Ankara Yildirim Beyazit University, Graduate School of Natural and Applied Sciences)
    Ankara Yıldırım Beyazıt University, Graduate School of Natural and Applied Sciences. Development of Sorting and Searching Algorithms. Ph.D. Thesis by Adnan Saher Mohammed, Department of Computer Engineering, December 2017, Ankara. A thesis submitted to the Graduate School of Natural and Applied Sciences of Ankara Yıldırım Beyazıt University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Engineering, Department of Computer Engineering.
    Ph.D. Thesis Examination Result Form: We have read the thesis entitled "Development of Sorting and Searching Algorithms" completed by Adnan Saher Mohammed under the supervision of Prof. Dr. Fatih V. ÇELEBİ and we certify that in our opinion it is fully adequate, in scope and in quality, as a thesis for the degree of Doctor of Philosophy. Prof. Dr. Fatih V. ÇELEBİ (Supervisor); Prof. Dr. Şahin EMRAH (Jury member); Yrd. Doç. Dr. Hacer YALIM KELEŞ (Jury member); Yrd. Doç. Dr. Özkan KILIÇ (Jury member); Yrd. Doç. Dr. Shafqat Ur REHMAN (Jury member); Prof. Dr. Fatih V. ÇELEBİ, Director, Graduate School of Natural and Applied Sciences.
    Ethical Declaration: I hereby declare that, in this thesis which has been prepared in accordance with the Thesis Writing Manual of the Graduate School of Natural and Applied Sciences: all data, information and documents are obtained in the framework of academic and ethical rules; all information, documents and assessments are presented in accordance with scientific ethics and morals; all the materials that have been utilized are fully cited and referenced; no change has been made on the utilized materials; all the works presented are original; and in any contrary case of the above statements, I accept to renounce all my legal rights.
  • Comparison Sorts
    Comparison sorts, classified by best, average, and worst behaviour, memory use, stability, method, and other notes:
    • Quicksort: typically not stable (the usual in-place sort is not stable, although stable versions exist); method: partitioning. Quicksort is usually done in place with O(log n) stack space; most implementations are unstable, as stable in-place partitioning is more complex. Naïve variants use an O(n) space array to store the partition, and the Sedgewick variation bounds the worst-case stack usage. A quicksort variant using three-way (fat) partitioning takes O(n) comparisons when sorting an array of equal keys.
    • Merge sort: stable; method: merging. Highly parallelizable (up to O(log n) using the Three Hungarians' Algorithm or, more practically, Cole's parallel merge sort) for processing large amounts of data.
    • In-place merge sort: stable; method: merging. Can be implemented as a stable sort based on stable in-place merging.
    • Heapsort: not stable; method: selection.
    • Insertion sort: stable; method: insertion. Runs in O(n + d) time, where d is the number of inversions.
    • Introsort: not stable; method: partitioning and selection. Used in several STL implementations.
    • Selection sort: not stable; method: selection. Stable with O(n) extra space, for example using lists.
    • Timsort: stable; method: insertion and merging. Makes n comparisons when the data is already sorted or reverse sorted.
    • Cubesort: stable; method: insertion. Makes n comparisons when the data is already sorted or reverse sorted.
    • Shell sort: not stable; method: insertion. Worst case depends on the gap sequence. Small code size, no use of the call stack, reasonably fast; useful where memory is at a premium, such as embedded and older mainframe applications.
    • Bubble sort: stable; method: exchanging. Tiny code size.
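    To make the quicksort entry concrete, here is one common textbook formulation in C: an in-place, unstable quicksort with a Lomuto-style partition. This is a sketch only, not the specific variants discussed above.

    #include <stddef.h>

    /* Swap two ints in place. */
    static void swap(int *a, int *b)
    {
        int t = *a;
        *a = *b;
        *b = t;
    }

    /* Lomuto partition: places arr[hi] in its final sorted position
     * and returns that position. */
    static size_t partition(int *arr, size_t lo, size_t hi)
    {
        int pivot = arr[hi];
        size_t i = lo;
        for (size_t j = lo; j < hi; j++) {
            if (arr[j] < pivot) {
                swap(&arr[i], &arr[j]);
                i++;
            }
        }
        swap(&arr[i], &arr[hi]);
        return i;
    }

    /* In-place quicksort of arr[lo..hi] (inclusive); not stable. */
    static void quicksort(int *arr, size_t lo, size_t hi)
    {
        if (lo >= hi) {
            return;
        }
        size_t p = partition(arr, lo, hi);
        if (p > lo) {
            quicksort(arr, lo, p - 1);  /* guard avoids size_t wraparound when p == 0 */
        }
        quicksort(arr, p + 1, hi);
    }

    Called as quicksort(a, 0, n - 1) for a non-empty array; the recursion depth is O(log n) on average, which matches the stack-space note above.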
  • Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme Md
    Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme. Md. Enamul Karim and Abdun Naser Mahmood, AI and Algorithm Research Lab, Department of Computer Science, University of Dhaka. [email protected]
    Abstract: In this paper we present a random shuffling scheme to apply with adaptive sorting algorithms. Adaptive sorting algorithms utilize the presortedness present in a given sequence. We have probabilistically increased the amount of presortedness present in a sequence by using a random shuffling technique that requires little computation. Theoretical analysis suggests that the proposed scheme can improve the performance of adaptive sorting. Experimental results show that it significantly reduces the amount of disorder present in a given sequence and improves the execution time of adaptive sorting algorithms as well. Keywords: Adaptive Sorting, Random Shuffle.
    1. Introduction: Sorting is the process of arranging numbers in either increasing (non-decreasing) or decreasing (non-increasing) order. Due to its immense importance, sorting has never ceased to appear as a research topic in computer science. The problem of sorting has been extensively studied and many important discoveries have been made in this respect. It has been observed [Estivill-Castro and Wood 1991] that the performance of sorting algorithms can be increased if the presortedness of the data is taken into account. This led to a new type of sorting approach, which is now known as adaptive sorting. Adaptive sorting algorithms take advantage of existing order in the input. Intuitively, the run time of an adaptive sorting algorithm varies smoothly from O(n) to O(n log n) as the disorder or "unsortedness" of the input varies [Mehlhorn 1984].
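    The excerpt does not show the paper's shuffling scheme itself. As a point of reference, the standard Fisher-Yates shuffle below is one low-cost way to randomly permute a sequence in a single O(n) pass; it is a sketch, not the authors' method.

    #include <stdlib.h>
    #include <stddef.h>

    /* Fisher-Yates shuffle: permutes arr[0..n-1] uniformly in O(n) time.
     * Call srand((unsigned)time(NULL)) once beforehand (requires <time.h>). */
    static void shuffle(int *arr, size_t n)
    {
        for (size_t i = n; i > 1; i--) {
            size_t j = (size_t)rand() % i;   /* index in [0, i-1]; rand() bias ignored in a sketch */
            int t = arr[i - 1];
            arr[i - 1] = arr[j];
            arr[j] = t;
        }
    }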
  • An Adaptive Sorting Algorithm for Almost Sorted List
    International Journal of Computer Sciences and Engineering, Open Access Research Paper, Volume 5, Issue 12, E-ISSN: 2347-2693. An Adaptive Sorting Algorithm for Almost Sorted Lists. Rina Damdoo1* and Kanak Kalyani2, 1,2 Department of CSE, RCOEM, Nagpur, India. *Corresponding Author: [email protected], Tel.: +91-9324425240. Available online at: www.ijcseonline.org. Received: 20/Nov/2017, Revised: 29/Nov/2017, Accepted: 17/Dec/2017, Published: 31/Dec/2017.
    Abstract: Sorting algorithms have a great impact on computing and attract a great deal of research. Even though many sorting algorithms have evolved, there is always scope for a new one. As an example, Bubble sort was first analyzed in 1956, but due to its complexity issues it did not become widespread. Although many consider Bubble sort a solved problem, new sorting algorithms are still being evolved to suit particular problem scenarios (for example, library sort was first published in 2006 and its detailed experimental analysis was done in 2015) [11]. In this paper a new adaptive sorting method, Paper sort, is introduced. The method is simple for sorting real-life objects, where the input list is almost but not completely sorted, but one may find it complex to implement when the time-and-space trade-off is considered. Keywords: Adaptive sort, Time complexity, Paper sort, Sorted list, Un-sorted list.
    I. Introduction: A sorting algorithm is known as adaptive if it takes benefit of already sorted elements in the list, i.e. ... Now the third-position element is compared with the last (say L) number in the sorted list and is added at the end to
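    Paper sort itself is not shown in this excerpt. For comparison, insertion sort is the classic adaptive algorithm for nearly sorted input: its running time is O(n + d), where d is the number of inversions, so it approaches O(n) when the list is almost sorted.

    #include <stddef.h>

    /* Insertion sort: O(n + d) time, where d is the number of inversions,
     * so it degrades gracefully as the input departs from sorted order. */
    static void insertion_sort(int *arr, size_t n)
    {
        for (size_t i = 1; i < n; i++) {
            int key = arr[i];
            size_t j = i;
            while (j > 0 && arr[j - 1] > key) {
                arr[j] = arr[j - 1];   /* shift larger elements right */
                j--;
            }
            arr[j] = key;
        }
    }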
  • MSc Thesis: FPGA-Based High Throughput Merge Sorter
    Computer Engineering, 2018. Mekelweg 4, 2628 CD Delft, The Netherlands. http://ce.et.tudelft.nl/ MSc Thesis CE-MS-2018-02: FPGA-Based High Throughput Merge Sorter. Xianwei Zeng, born in Guangdong, China. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology.
    Abstract: As database systems have shifted from disk-based to in-memory, and the scale of the database in big data analysis increases significantly, the workloads analyzing huge datasets are growing. Adopting FPGAs as hardware accelerators improves the flexibility, parallelism and power consumption versus CPU-only systems. The accelerators are also required to keep up with the high memory bandwidth provided by advanced memory technologies and new interconnect interfaces. Sorting is the most fundamental database operation. In multiple-pass merge sorting, the final pass of the merge operation requires significant throughput performance to keep up with the high memory bandwidth. We study the state-of-the-art hardware-based sorters and present an analysis of our own design. In this thesis, we present an FPGA-based odd-even merge sorter which features a throughput of 27.18 GB/s when merging 4 streams. Our design also presents stable throughput performance when the number of input streams is increased, due to its high degree of parallelism. Thanks to such a generic design, the odd-even merge sorter does not suffer a throughput drop for skewed data distributions and presents constant performance over various kinds of input distributions.
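    The software analogue of the merge step that such a hardware sorter parallelizes is the plain two-way merge below; this is a sketch only, and the thesis's odd-even merge network is not shown in this excerpt.

    #include <stddef.h>

    /* Merge two sorted arrays a[0..na-1] and b[0..nb-1] into out,
     * which must have room for na + nb elements. Runs in O(na + nb). */
    static void merge(const int *a, size_t na, const int *b, size_t nb, int *out)
    {
        size_t i = 0, j = 0, k = 0;
        while (i < na && j < nb) {
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];  /* <= keeps the merge stable */
        }
        while (i < na) out[k++] = a[i++];
        while (j < nb) out[k++] = b[j++];
    }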
  • Sorting N-Elements Using Natural Order: a New Adaptive Sorting Approach
    Journal of Computer Science 6 (2): 163-167, 2010. ISSN 1549-3636. © 2010 Science Publications. Sorting N-Elements Using Natural Order: A New Adaptive Sorting Approach. 1Shamim Akhter and 2M. Tanveer Hasan. 1Aida Lab, National Institute of Informatics, Tokyo, Japan. 2CEO, DSI Software Company Ltd., Dhaka, Bangladesh.
    Abstract: Problem statement: Researchers have focused their attention on optimally adaptive sorting algorithms and illustrated a need to develop tools for constructing adaptive algorithms for large classes of measures. In an adaptive sorting algorithm the run time for n input data varies smoothly from O(n) to O(n log n) with respect to several measures of disorder. Questions were raised whether any approach or technique would reduce the run time of adaptive sorting algorithms and provide an easier way of implementation for practical applications. Approach: The objective of this study is to present a new method for a natural sorting algorithm with a run time for n input data of O(n) to O(n log m), where m is a positive value bounded by 50% of n. In our method, a single pass over the input data creates blocks of data, or buffers, according to their natural sequential order, which can be ascending or descending. Afterward, a bottom-up approach is applied to merge the naturally sorted subsequences or buffers. Additionally, a parallel merging technique is successfully integrated into our proposed algorithm. Results: Experiments are provided to establish the best, worst and average case runtime behavior of the proposed method. The simulation statistics agree with the theoretical calculation and prove the method's efficiency.
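    A sketch of the first phase described above: detecting naturally ordered (here only ascending) runs in a single pass. The function and parameter names are ours; handling of descending runs and the bottom-up merge phase are omitted.

    #include <stddef.h>

    /* Single pass that records the start index of each natural ascending run
     * in arr[0..n-1]. Returns the number of runs found (at most max_runs).
     * The recorded runs would then be merged pairwise, bottom-up, until one remains. */
    static size_t find_runs(const int *arr, size_t n, size_t *run_start, size_t max_runs)
    {
        if (n == 0 || max_runs == 0) return 0;
        size_t count = 0;
        run_start[count++] = 0;
        for (size_t i = 1; i < n && count < max_runs; i++) {
            if (arr[i] < arr[i - 1]) {      /* order breaks: a new run begins here */
                run_start[count++] = i;
            }
        }
        return count;
    }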
  • CSE 373: Master Method, Finishing Sorts, Intro to Graphs
    CSE 373: Master method, finishing sorts, intro to graphs. Michael Lee. Monday, Feb 12, 2018.
    The tree method: precise analysis. Problem: we need a rigorous way of getting a closed form. We want to answer a few core questions. How much work does each recursive level do? 1. How many nodes are there on level i? (i = 0 is the "root" level.) 2. At some level i, how much work does a single node do? (Ignoring subtrees.) 3. How many recursive levels are there? How much work does the leaf level (base cases) do? 1. How much work does a single leaf node do? 2. How many leaf nodes are there?
    [Recursion-tree figure: level 0 has 1 node doing n work; level 1 has 2 nodes doing n/2 work each; level 2 has 4 nodes doing n/4 work each; level i has 2^i nodes doing n/2^i work each; the leaf level has 2^h nodes doing 1 work each.]
    Answers so far: 1. numNodes(i) = 2^i. 2. workPerNode(n, i) = n/2^i. 3. numLevels(n) = ? 4. workPerLeafNode(n) = 1.
    How many levels are there, exactly? Is it log2(n)? Let's try an example: suppose we have T(4). What happens? [Figure: the recursion tree for T(4), with 4 at the root, two nodes of 2 below it, and four leaves of 1.] The height is log2(4) = 2. For this recursive function, the number of recursive levels is the same as the height. Important: the total number of levels, counting the base case, is height + 1. Important: for other recursive functions, where the base case doesn't
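    Assuming the recurrence behind these per-level figures is T(n) = 2T(n/2) + n with T(1) = 1 and n a power of two (an assumption; the excerpt does not state the recurrence explicitly), the level-by-level totals sum to:

    T(n) = \sum_{i=0}^{\log_2 n - 1} 2^i \cdot \frac{n}{2^i} \;+\; 2^{\log_2 n} \cdot 1 = n \log_2 n + n = \Theta(n \log n)

    Each recursive level contributes n work, there are log2(n) recursive levels, and the leaf level (the base cases, the extra "height + 1" level) contributes another n.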
  • A Time-Based Adaptive Hybrid Sorting Algorithm on CPU and GPU with Application to Collision Detection
    Fachbereich 3: Mathematik und Informatik (Faculty of Mathematics and Computer Science), Universität Bremen. Diploma Thesis: A Time-Based Adaptive Hybrid Sorting Algorithm on CPU and GPU with Application to Collision Detection. Robin Tenhagen, Matriculation No. 2288410, 5 January 2015. Examiner: Prof. Dr. Gabriel Zachmann. Supervisor: Prof. Dr. Frieder Nake. Advisor: David Mainzer.
    Declaration of independent work: I hereby declare that I produced the present work independently, did not submit it elsewhere for examination purposes, and used no aids other than those indicated. All knowingly used text excerpts, quotations or content by other authors have been explicitly marked as such. Bremen, 5 January 2015. Robin Tenhagen.
    Abstract: Real-world data is often sorted in a repetitive way. In many applications, only small parts change within a small time frame. This thesis presents the novel approach of time-based adaptive sorting. The proposed algorithm is hybrid; it uses advantages of existing adaptive sorting algorithms. This means that the algorithm cannot only be used on the CPU but, by introducing new fast adaptive GPU algorithms, it delivers a usable adaptive sorting algorithm for GPGPU. As a practical example, for collision detection in well-known animation scenes with deformable cloths, the presented algorithm turned out to be faster than the adaptive hybrid algorithm Timsort, especially on the CPU.