5.4. Deque and List


Object-oriented programming B, Lecture 13e

The term “deque” is an abbreviation for “double-ended queue”. A deque is a dynamic array implemented so that it can grow in both directions.

Fig. 5.7. Logical structure of a deque.

Thus, inserting and removing elements at both the end and the beginning is fast. However, inserting and removing elements in the middle takes time, because elements must be moved. To provide this ability, the deque is implemented as an array of arrays – individual blocks, with the first block growing in one direction and the last block growing in the opposite direction.

Fig. 5.8. Internal structure of a deque.

Deque operations differ from vector operations only as follows:
1. Deques do not provide the capacity functions (capacity() and reserve()).
2. Deques do provide direct functions to insert and to delete the first element (push_front() and pop_front()).

A list is implemented as a doubly linked list of elements. This means that each element in a list has its own segment of memory and refers to its predecessor and its successor.

Fig. 5.9. Structure of a list.

Because the internal structure of a list is totally different from that of a vector or a deque, a list differs from them in several ways:
1. Lists provide neither a subscript operator nor at(), because a list does not provide random access. For example, to access the fifth element, one has to navigate past the first four elements step by step, following the chain of links. Thus, accessing an arbitrary element of a list is slow.
2. Inserting and removing elements is fast at every position, not only at one or both ends. An element can always be inserted or deleted in constant time, because no elements have to be moved; only some pointer values are manipulated internally.
3. Lists don’t provide operations for capacity or reallocation, because neither is needed. Each element has its own memory that stays valid until the element is deleted.
So, inserting and deleting elements does not invalidate pointers, references, and iterators to other elements.
4. Lists provide many special member functions (such as the splice family and sort) for moving elements. These member functions are faster versions of the STL generic algorithms that have the same name; they are faster because they only redirect pointers rather than copy and move values. Moreover, some STL generic algorithms that require random access, such as sort, cannot be applied to a list at all, so the member function sort is provided specifically to keep this important operation available.

The next example illustrates the use of some of the deque’s and list’s member functions:

Example 5.5.

#pragma warning(disable: 4786)
#include <iostream>
#include <deque>
#include <string>
#include <list>
#include <algorithm>
using namespace std;

int main()
{
    deque<string> d;
    d.push_front("one");
    d.push_front("two");
    cout << d[0] << " " << d.at(1) << endl;        // two one
    d.pop_front();
    d.push_front("zero");
    d.push_back("two");
    cout << d.front() << " " << d.back() << endl;  // zero two
    sort(d.begin(), d.end());         // the STL generic algorithm sort
    for(int i = 0; i < d.size(); i++)
        cout << d.at(i) << " ";
    cout << endl;                     // one two zero

    list<string> l(d.size()), l2;
    copy(d.begin(), d.end(), l.begin());
    string s[] = { "four", "five", "six" };
    l2.insert(l2.begin(), s, s + 3);  // c.insert(p, first, last) inserts at
                                      // iterator position p a copy of all
                                      // elements of the range [first,last)
    l2.splice(l2.end(), l);           // c1.splice(p, c2) moves all elements
                                      // of c2 to c1 in front of position p
    list<string>::iterator it;
    for(it = l2.begin(); it != l2.end(); it++)
        cout << *it << " ";
    cout << endl;                     // four five six one two zero

    l2.sort();                        // the list member function sort; the
                                      // generic sort algorithm cannot be
                                      // used for lists
    for(it = l2.begin(); it != l2.end(); it++)
        cout << *it << " ";           // five four one six two zero
    cout << endl;

    return 0;
}

5.5. Set and multiset.

Set and multiset containers sort their elements automatically according to a certain sorting criterion (in ascending order, by default). The difference between the two is that multisets allow duplicates, whereas sets do not. The elements of a set or multiset may have any type T that is assignable, copyable, and comparable according to the sorting criterion. Sets and multisets are usually implemented as balanced binary trees.

Fig. 5.10. Internal structure of a set and iterating over its elements.

The major advantage of automatic sorting is that a binary tree performs well when elements with a certain value are searched for. However, automatic sorting also imposes an important constraint on sets and multisets: we may not change the value of an element directly, because this might compromise the correct order. Therefore, to modify the value of an element, we must remove the element with the old value and insert a new element that has the new value.

Set and multiset provide special search functions:
• count(elem) – returns the number of elements with value elem;
• find(elem) – returns the position of the first element with value elem, or end();
• lower_bound(elem) and upper_bound(elem) – return the first and the last position, respectively, at which an element with the passed value elem would be inserted;
• equal_range(elem) – returns both return values of lower_bound() and upper_bound() as a pair.

The following example illustrates the use of some of these functions:

Example 5.6.
#pragma warning(disable: 4786)
#include <iostream>
#include <set>
#include <string>
using namespace std;

int main()
{
    // STL set container
    set<int> s;        // creates an empty set without any elements
    s.insert(7);
    s.insert(2);
    s.insert(-6);
    s.insert(-6);      // duplicate element: ignored
    set<int>::iterator it;
    for(it = s.begin(); it != s.end(); it++)
        cout << *it << " ";
    cout << endl;      // -6 2 7   NB: the order has changed

    int key;
    cin >> key;
    it = s.find(key);
    if(it != s.end())
        cout << *it << endl;
    else
        cout << "no such element" << endl;

    // STL multiset container
    string a[] = { "one", "two", "three" };
    multiset<string> ms(a, a + sizeof(a)/sizeof(string));
    ms.insert("three");
    ms.insert("three");
    cout << "there are " << ms.count("three") << " 'three'" << endl;
    multiset<string>::iterator jt;
    for(jt = ms.begin(); jt != ms.end(); jt++)
        cout << *jt << " ";
    cout << endl;      // one three three three two

    cout << "lower bound of 'three': " << *ms.lower_bound("three") << " "
         << "upper bound of 'three': " << *ms.upper_bound("three") << endl;
    // lower bound of 'three': three upper bound of 'three': two
    return 0;
}

That is, in the sorted sequence one three three three two, lower_bound("three") returns an iterator to the first "three", while upper_bound("three") returns an iterator to "two", the position just behind the last "three".

5.6. The STL function objects.

The types set and multiset are declared and defined as class templates inside namespace std:

template<class T, class Compare = less<T>,
         class Allocator = allocator<T> >
class set;

The optional second template argument defines the sorting criterion. If a special sorting criterion is not passed, the default criterion less is used, less being an STL function object.
As we know (see section 2.4.2), a function object is an object of a function-like class in which the function call operator() is overloaded. So, a function object encapsulates a function in an object for use by other components. The STL provides many useful function objects. To aid the writing of function objects, the library provides two base classes:

template<class Arg, class Result>
struct unary_function {
    typedef Arg argument_type;
    typedef Result result_type;
};

template<class Arg1, class Arg2, class Result>
struct binary_function {
    typedef Arg1 first_argument_type;
    typedef Arg2 second_argument_type;
    typedef Result result_type;
};

Each of these class templates serves as a base for classes that define a function call operator() of the form:

result_type operator()(first_argument_type, second_argument_type)

The purpose of these classes is to provide standard names for the argument and return types for use by users of classes derived from unary_function and binary_function.

The STL function objects are divided into two groups: predicates and arithmetic function objects. A predicate is a function object (or function) that returns a bool. For example, the header <functional> defines

template<class T>
struct less : public binary_function<T, T, bool> {
    bool operator()(const T& x, const T& y) const { return x < y; }
};

The predicates provided by the STL <functional> are as follows:

Table 5.1.
equal_to        Binary   arg1 == arg2
not_equal_to    Binary   arg1 != arg2
greater         Binary   arg1 > arg2
less            Binary   arg1 < arg2
greater_equal   Binary   arg1 >= arg2
less_equal      Binary   arg1 <= arg2
logical_and     Binary   arg1 && arg2
logical_or      Binary   arg1 || arg2
logical_not     Unary    !arg

The STL <functional> also provides some useful standard arithmetic functions as function objects:

Table 5.2.
plus         Binary   arg1 + arg2
minus        Binary   arg1 - arg2
multiplies   Binary   arg1 * arg2
divides      Binary   arg1 / arg2
modulus      Binary   arg1 % arg2
negate       Unary    -arg

The next example illustrates the use of both user-defined