Lecture P8: Pointers and Linked Lists


Basic Computer Memory Abstraction

Basic computer memory abstraction.
■ Indexed sequence of bits.
■ Address = index.

(Diagram: memory as an indexed table of addresses 0, 1, 2, ..., each holding a bit, up to 256GB.)

Pointer = VARIABLE that stores a memory address.

Uses.
■ Allow a function to change its inputs.
■ Better understanding of arrays.
■ Create "linked lists." Ex: J♠ → Q♦ → 5♥ → NULL.

Pointers in TOY

Variable that stores the value of a single MEMORY ADDRESS.
■ In TOY, memory addresses are 00 – FF.
   – Indexed addressing: store a memory address in a register.
■ Very powerful and useful programming mechanism.
■ Confusing and easy to abuse!

Pointers in C

C pointers.
■ If x is an integer: &x is a pointer to x (the memory address of x).
■ If px is a pointer to an integer: *px is the integer.

pointer.c

   #include <stdio.h>

   int main(void) {
      int x;                        /* allocate storage for an int           */
      int *px;                      /* allocate storage for a pointer to int */

      x = 7;
      px = &x;                      /* px stores the memory address of x     */
      printf("  x = %d\n", x);
      printf(" px = %p\n", px);
      printf("*px = %d\n", *px);
      return 0;
   }

   % gcc pointer.c
   % a.out
     x = 7
    px = ffbefb24
   *px = 7

(Diagram: memory location D008 stores a "pointer" — the address D200 — to another memory location of interest.)

Pointers as Arguments to Functions

Goal: a function that swaps the values of two integers.

A first attempt: badswap.c. It swaps only copies of x and y.

   #include <stdio.h>

   void swap(int a, int b) {        /* a and b are copies of x and y */
      int t;
      t = a; a = b; b = t;
   }

   int main(void) {
      int x = 7, y = 10;
      swap(x, y);
      printf("%d %d\n", x, y);      /* prints 7 10 */
      return 0;
   }

Now, one that works: swap.c. It changes the values stored at the memory addresses of x and y.

   #include <stdio.h>

   void swap(int *pa, int *pb) {
      int t;
      t = *pa; *pa = *pb; *pb = t;
   }

   int main(void) {
      int x = 7, y = 10;
      swap(&x, &y);
      printf("%d %d\n", x, y);      /* prints 10 7 */
      return 0;
   }
Linked List Overview

Goal: deal with large amounts of data.
■ Organize data so that it is easy to manipulate.
■ Time and space efficient.

Basic computer memory abstraction.
■ Indexed sequence of bits.
■ Address = index.

Need higher level abstractions to bridge the gap.
■ Array.
■ Struct.
■ LINKED LIST.
■ Binary tree.
■ Database.
■ ...

Linked List

Fundamental data structure.
■ HOMOGENEOUS collection of values (all the same type).
■ Store values ANYWHERE in memory.
■ Associate a LINK with each value.
■ Use the link for immediate access to the NEXT value.

Possible memory representation of x^9 + 3x^5 + 7.
■ Assume the linked list starts in location D000.

   Address  D000 D004 D008 ..  D0C8 D0CC D0D0 ..  D200 D204 D208
   Value     1    9   D200 ..   7    0   0000 ..   3    5   D0C8

   head → (1, 9) → (3, 5) → (7, 0) → NULL

Linked List vs. Array

The polynomial example illustrates the basic tradeoffs.
■ Sparse polynomial = few terms, large exponents. Ex: x^1000000 + 5x^50000 + 7.
■ Dense polynomial = mostly nonzero coefficients. Ex: x^7 + x^6 + 3x^4 + 2x^3 + 1.

Space used, and time to determine the coefficient of x^k:

            Huge Sparse Polynomial     Huge Dense Polynomial
            array       linked         array       linked
   space    huge        tiny           huge        3 * huge
   time     instant     tiny           instant     huge

! Advantage of linked list: space proportional to the amount of information.
! Disadvantage: can only get to the NEXT item quickly.

Lesson: know your space and time costs.
■ Axiom 1: there is never enough space.
■ Axiom 2: there is never enough time.

Overview of Linked Lists in C

Linked lists are not directly built in to the C language. Need to know:

How to associate pieces of information.
■ User-defined type using struct.
■ Include struct fields for coefficient and exponent.

How to specify links.
■ Include a struct field for a POINTER to the next linked list element.

How to reserve memory to be used.
■ Allocate memory DYNAMICALLY (as you need it) with malloc().

How to use links to access information.
■ The -> and . operators.

Linked List for Polynomial

C code to represent x^9 + 3x^5 + 7.
■ Statically, using nodes.

poly1.c

   #include <stddef.h>    /* for NULL */

   typedef struct node *link;

   struct node {           /* a node stores two ints and the  */
      int coef;            /* memory address of the next node */
      int exp;
      link next;
   };

   int main(void) {
      struct node p, q, r;

      /* initialize data */
      p.coef = 1; p.exp = 9;
      q.coef = 3; q.exp = 5;
      r.coef = 7; r.exp = 0;

      /* link up the nodes */
      p.next = &q;
      q.next = &r;
      r.next = NULL;

      return 0;
   }
Linked List for Polynomial

C code to represent x^9 + 3x^5 + 7.
■ Dynamically, using links.

poly2.c

   #include <stdlib.h>

   typedef struct node *link;

   struct node {
      int coef;
      int exp;
      link next;
   };

   int main(void) {
      link x, y, z;

      /* allocate enough memory to store each node, then initialize it */
      x = malloc(sizeof *x);
      x->coef = 1; x->exp = 9;
      y = malloc(sizeof *y);
      y->coef = 3; y->exp = 5;
      z = malloc(sizeof *z);
      z->coef = 7; z->exp = 0;

      /* link up the nodes of the list */
      x->next = y;
      y->next = z;
      z->next = NULL;

      return 0;
   }

Better Programming Style

Write a separate function to handle memory allocation and initialization.

poly3.c

   #include <stdio.h>
   #include <stdlib.h>

   typedef struct node *link;

   struct node { int coef; int exp; link next; };

   link NEWnode(int c, int e, link n) {
      link x = malloc(sizeof *x);
      if (x == NULL) {             /* check if malloc fails */
         printf("Out of memory.\n");
         exit(EXIT_FAILURE);
      }
      x->coef = c; x->exp = e; x->next = n;
      return x;
   }

   int main(void) {
      link x = NULL;               /* initialize pointer to NULL */
      x = NEWnode(7, 0, x);
      x = NEWnode(3, 5, x);
      x = NEWnode(1, 9, x);
      return 0;
   }

Study this code: it is the tip of the iceberg!

Review of Stack Interface

In Lecture P5, we created an ADT for a stack.
■ We implemented the stack using arrays.
■ Now, we give an alternate implementation using linked lists.

STACK.h

   void STACKinit(void);
   int  STACKisempty(void);
   void STACKpush(int);
   int  STACKpop(void);
   void STACKshow(void);

client.c — the client uses the data type without regard to how it is represented or implemented.

   #include "STACK.h"

   int main(void) {
      int a, b;
      ...
      STACKinit();
      STACKpush(a);
      ...
      b = STACKpop();
      return 0;
   }
Stack Implementation With Linked Lists

stacklist.c

   #include <stdio.h>
   #include <stdlib.h>
   #include "STACK.h"

   typedef struct STACKnode *link;

   struct STACKnode {           /* standard linked list data structure */
      int item;
      link next;
   };

   /* static to make it a true ADT; head points to the top node on the stack */
   static link head = NULL;

   void STACKinit(void) {
      head = NULL;
   }

   int STACKisempty(void) {
      return head == NULL;
   }

   /* allocate memory for, and initialize, a new node */
   link NEWnode(int item, link next) {
      link x = malloc(sizeof *x);
      if (x == NULL) {
         printf("Out of memory.\n");
         exit(EXIT_FAILURE);
      }
      x->item = item; x->next = next;
      return x;
   }

   /* insert at the beginning of the list */
   void STACKpush(int item) {
      head = NEWnode(item, head);
   }

   int STACKpop(void) {
      int item;
      link x;
      if (head == NULL) {
         printf("Stack underflow.\n");
         exit(EXIT_FAILURE);
      }
      item = head->item;
      x = head->next;
      free(head);     /* free is the opposite of malloc: gives memory back to the system */
      head = x;
      return item;
   }

   /* traverse the linked list */
   void STACKshow(void) {
      link x;
      for (x = head; x != NULL; x = x->next)
         printf("%d\n", x->item);
   }

Implementing Stacks: Arrays vs. Linked Lists

We can implement a stack with either an array or a linked list, and switch implementations without changing the interface or the client.

   % gcc client.c stacklist.c
       OR
   % gcc client.c stackarray.c

Which is better?
■ Array
   ! Requires an upper bound MAX on the stack size.
   ! Uses space proportional to MAX.
■ Linked List
   ! No need to know the stack size ahead of time.
   ! Requires extra space to store the pointers.
   ! Dynamic memory allocation is slower.

Conclusions

Whew, lots of material in this lecture!

Pointers are useful, but confusing. Study these slides and carefully read the relevant material.
Lecture P8: Extra Slides

Pointers and Arrays

"Pointer arithmetic": on arizona, an int is 32 bits (4 bytes) ⇒ 4 byte offset.

   &a[0] = a+0 = D000        a[0] = *a     = 84
   &a[1] = a+1 = D004        a[1] = *(a+1) = 67
   &a[2] = a+2 = D008        a[2] = *(a+2) = 24

avg.c

   #include <stdio.h>
   #define N 64

   int main(void) {
      int a[N] = {84, 67, 24, ..., 89, 90};
      int i, sum = 0;
      for (i = 0; i < N; i++)
         sum += a[i];
      printf("%d\n", sum / N);
      return 0;
   }

   Memory address  D000 D004 D008 ..  D0F8 D0FC ..
   Value            84   67   24  ..   89   90  ..

Just to stress that a[i] really means *(a+i):

   2[a] = *(2+a) = 24

This is legal C, but don't ever do this at home!!!

Passing Arrays to Functions

Pass an array to a function.
■ A pointer to array element 0 is passed instead.
■ Here, average() receives the value D000 from main.

avg.c

   #include <stdio.h>
   #define N 64

   int average(int b[], int n) {
      int i, sum = 0;
      for (i = 0; i < n; i++)
         sum += b[i];
      return sum / n;
   }

   int main(void) {
      int a[N] = {84, 67, 24, ..., 89, 90};
      printf("%d\n", average(a, N));
      return 0;
   }