Fibonacci Heaps


Outline for Today
● Recap from Last Time: a quick refresher on binomial heaps and lazy binomial heaps.
● The Need for decrease-key: an important operation in many graph algorithms.
● Fibonacci Heaps: a data structure efficiently supporting decrease-key.
● Representational Issues: some of the challenges in Fibonacci heaps.

Recap from Last Time

(Lazy) Binomial Heaps
● Last time, we covered the binomial heap and a variant called the lazy binomial heap.
● These are priority queue structures designed to support efficient melding.
● Elements are stored in a collection of binomial trees.

[Figures: a series of in-class exercises tracing an eager and a lazy binomial heap side by side — after enqueuing 1 through 9, after an extract-min, after enqueuing 10, 11, and 12, and after a second extract-min. The eager heap coalesces trees of equal order on every enqueue; the lazy heap simply appends singleton trees and defers all coalescing to extract-min.]

Operation Costs
● Eager binomial heap: enqueue O(log n), meld O(log n), find-min O(log n), extract-min O(log n).
● Lazy binomial heap: enqueue O(1), meld O(1), find-min O(1), extract-min O(log n) amortized.
● Intuition: each extract-min has to do a bunch of cleanup for the earlier enqueue operations, but then leaves us with few trees.

New Stuff!

The Need for decrease-key

The decrease-key Operation
● Some priority queues support the operation decrease-key(v, k), which works as follows: given a pointer to an element v, lower its key (priority) to k.
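To make the operation concrete, here is a minimal sketch of decrease-key on an array-based binary min-heap. The function name and array layout are illustrative, not from the slides: lower the key, then sift the element up until the heap property is restored.

```python
# Illustrative sketch: decrease-key on an array-based binary min-heap.
# The parent of index i lives at index (i - 1) // 2.

def decrease_key(heap, i, k):
    """Lower heap[i] to k, then sift up to restore the heap property."""
    assert k <= heap[i], "new key must not exceed the current key"
    heap[i] = k
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        parent = (i - 1) // 2
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent

h = [1, 3, 2, 8, 5]    # a valid min-heap, stored level by level
decrease_key(h, 3, 0)  # lower the 8 to 0; it sifts up to the root
print(h)               # [0, 1, 2, 3, 5]
```

Each loop iteration moves the element up one level, so the cost is O(log n) in the worst case — exactly the bound Fibonacci heaps will improve on.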
It is assumed that k is less than the current priority of v.
● This operation is crucial in efficient implementations of Dijkstra's algorithm and Prim's MST algorithm.

Dijkstra and decrease-key
● Dijkstra's algorithm can be implemented with a priority queue using
  ● O(n) total enqueues,
  ● O(n) total extract-mins, and
  ● O(m) total decrease-keys.
● [Figures: a worked example graph in which each relaxed edge lowers a tentative distance via decrease-key as vertices are extracted.]
● Dijkstra's algorithm runtime is O(n Tenq + n Text + m Tdec).

Prim and decrease-key
● Prim's algorithm can be implemented with a priority queue using
  ● O(n) total enqueues,
  ● O(n) total extract-mins, and
  ● O(m) total decrease-keys.
● [Figures: a worked example graph in which candidate edge weights are lowered via decrease-key as the spanning tree grows.]
● Prim's algorithm runtime is O(n Tenq + n Text + m Tdec).

Standard Approaches
● In a binary heap, enqueue, extract-min, and decrease-key can all be made to work in O(log n) time each.
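Before plugging in per-operation costs, it helps to see where the operation counts come from. Below is a sketch of Dijkstra's algorithm in the decrease-key style: one enqueue per vertex, one extract-min per vertex, and at most one decrease-key per edge. The names and example graph are ours; Python's standard library has no decrease-key heap, so a dict with an O(n)-time extract-min stands in for the priority queue — only the operation counts matter here.

```python
# Dijkstra's algorithm, written to expose the n enqueues, n extract-mins,
# and at most m decrease-keys that the slides count.

def dijkstra(graph, start):
    """graph maps u -> list of (v, weight); returns distances from start."""
    dist = {u: float('inf') for u in graph}      # n enqueues, all at +infinity
    dist[start] = 0                              # decrease-key on the start node
    pending = set(graph)                         # nodes still in the "queue"
    while pending:                               # n extract-mins in total
        u = min(pending, key=lambda x: dist[x])  # extract-min (O(n) stand-in)
        pending.remove(u)
        for v, w in graph[u]:                    # each edge relaxed at most once,
            if dist[u] + w < dist[v]:            # so at most m decrease-keys
                dist[v] = dist[u] + w            # "decrease-key(v, dist[u] + w)"
    return dist

g = {'s': [('a', 2), ('b', 10)], 'a': [('b', 1)], 'b': []}
print(dijkstra(g, 's'))  # {'s': 0, 'a': 2, 'b': 3}
```

Swapping in a real priority queue makes each commented line cost Tenq, Text, or Tdec, which is exactly how the O(n Tenq + n Text + m Tdec) bound arises.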
● Cost of Dijkstra's / Prim's algorithm:
  O(n Tenq + n Text + m Tdec)
  = O(n log n + n log n + m log n)
  = O(m log n)

Standard Approaches
● In a lazy binomial heap, enqueue takes amortized time O(1), and extract-min and decrease-key take amortized time O(log n).
● Cost of Dijkstra's / Prim's algorithm:
  O(n Tenq + n Text + m Tdec)
  = O(n + n log n + m log n)
  = O(m log n)

Where We're Going
● The Fibonacci heap has these amortized runtimes:
  ● enqueue: O(1)
  ● extract-min: O(log n)
  ● decrease-key: O(1)
● Cost of Prim's or Dijkstra's algorithm:
  O(n Tenq + n Text + m Tdec)
  = O(n + n log n + m)
  = O(m + n log n)
● This is theoretically optimal for a comparison-based priority queue in Dijkstra's or Prim's algorithms.

The Challenge of decrease-key
● How might we implement decrease-key in a lazy binomial heap?
● If our lazy binomial heap has n nodes, how tall can the tallest tree be? Suppose the biggest tree has 2^k nodes in it. Then 2^k ≤ n, so k = O(log n). Bubbling a decreased key up its tree therefore costs O(log n) in the worst case.
● Challenge: support decrease-key in (amortized) time O(1).
● We cannot have all three of these nice properties at once:
  1. decrease-key takes time O(1).
  2. Our trees are heap-ordered.
  3. Our trees are binomial trees.

[Figures: a lazy binomial heap in which a key deep in the largest tree is decreased to 3. Cutting the violating node out of its tree restores heap order in O(1), but the tree left behind is no longer a binomial tree.]

What We Used to Do
● Problem: what do we do in an extract-min?
● Extract-min coalesces trees by their "order." This system assumes we can assign an order to each tree. That's easy with binomial trees; it's harder with the new, non-binomial trees that cutting produces. What should we do here?
● Idea 1: a tree has order k if it has 2^k nodes.
● Idea 2: a tree has order k if its root has k children.
● The plan: (1) to do a decrease-key, cut the node from its parent; (2) do extract-min as usual, using child count as order.
● Question: how efficient is this? Find a series of operations that gives rise to a heap containing a single tree with this shape. [Figure: the target tree shape.]
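The cut-based decrease-key can be sketched as follows. All names here are illustrative, not from the slides, and this is only step (1) of the plan above: if the decreased key violates heap order, cut the whole subtree from its parent and add it to the root list in O(1). Real Fibonacci heaps layer marking and cascading cuts on top of this so that trees cannot become arbitrarily sparse.

```python
# Illustrative sketch of cut-based decrease-key in a lazy heap of
# heap-ordered trees. No marking or cascading cuts are shown.

class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.children = []

class LazyHeap:
    def __init__(self):
        self.roots = []                      # root list of heap-ordered trees
        self.min = None                      # pointer to the minimum root

    def insert(self, key):
        node = Node(key)
        self._add_root(node)
        return node

    def _add_root(self, node):
        node.parent = None
        self.roots.append(node)
        if self.min is None or node.key < self.min.key:
            self.min = node

    def decrease_key(self, node, k):
        assert k <= node.key, "new key must not exceed the current key"
        node.key = k
        parent = node.parent
        if parent is not None and node.key < parent.key:
            parent.children.remove(node)     # cut: O(1), restores heap order
            self._add_root(node)             # the whole subtree joins the roots
        elif node.key < self.min.key:
            self.min = node                  # node was already a root

# Demo: build a two-node tree by hand, then cut via decrease-key.
h = LazyHeap()
r = h.insert(1)
c = Node(5)                                  # make c a child of r directly,
c.parent = r                                 # standing in for a real link step
r.children.append(c)
h.decrease_key(c, 0)                         # 0 < 1, so c is cut to the roots
print(h.min.key)                             # 0
```

The catch, as the exercise above suggests: repeated cuts can leave a root with many children but few descendants, so "order = child count" stops implying exponential size — which is precisely the problem marking and cascading cuts exist to fix.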