KP-Trie Algorithm for Update and Search Operations


The International Arab Journal of Information Technology, Vol. 13, No. 6, November 2016

Feras Hanandeh 1, Izzat Alsmadi 2, Mohammed Akour 3, and Essam Al Daoud 4
1 Department of Computer Information Systems, Hashemite University, Jordan
2,3 Department of Computer Information Systems, Yarmouk University, Jordan
4 Computer Science Department, Zarqa University, Jordan

Abstract: The Radix-Tree is a space-optimized data structure that performs data compression by clustering nodes that share the same branch: each node with only one child is merged with its child, which makes the structure space optimized. Nevertheless, it cannot be considered speed optimized, because the root is associated with the empty string. Moreover, values are not normally associated with every node; they are associated only with leaves and some inner nodes that correspond to keys of interest. Reaching the desired word therefore takes time, since the search moves bit by bit. In this paper we propose the KP-Trie, a speed- and space-optimized data structure that results from both horizontal and vertical compression.

Keywords: Trie, radix tree, data structure, branch factor, indexing, tree structure, information retrieval.

Received January 14, 2015; accepted March 23, 2015; published online December 23, 2015.

1. Introduction

Data structures are a specialized format for efficiently organizing, retrieving, saving and storing data, and they matter most when the amount of data is large, as in large databases. The retrieval process fetches data from either main memory or secondary memory. The choice of data structure varies depending on the nature and purpose of the application; the main characteristics used to assess the quality of a data structure are the performance, or speed, of storing and accessing data. Any living natural language such as English, French or Arabic can have up to one million words, although dictionaries cannot contain all of them, especially as languages continuously grow by adding new words or borrowing words from other languages. Current versions of the Oxford English Dictionary may contain up to half a million words. As such, a software product or web application that needs or uses a dictionary should have efficient data structures for effective storage, access and expansion of data or words.

The concept of data structures such as trees has developed since the 19th century. Tries evolved from trees. They appear under different names such as radix tree, prefix tree, compact trie, bucket trie, crit-bit tree and Practical Algorithm To Retrieve Information Coded In Alphanumeric (PATRICIA) [9].

In its existing general form, the trie is believed to have first been proposed by Morrison [14]. The different names (radix tree, prefix tree, compact trie, bucket trie, crit-bit tree and PATRICIA) may reflect some differences in the detailed structure. For example, unlike PATRICIA tree nodes, which store keys and words, trie nodes (with the exception of leaf nodes) work merely as pointers to words.

A trie, also called a digital tree, is an ordered multi-way tree data structure that is useful for storing an associative array whose keys are usually strings, such as words or dictionary entries. The word "trie" comes from the middle letters of the word "retrieval" and was coined by Fredkin [7]. The nature of the structure makes searching for or retrieving queried words quick. A common use case for tries is finding all dictionary words that start with a particular prefix (e.g., finding all words that start with "data"). In tries, keys are stored in the leaves, and the search method involves left-to-right comparison of prefixes of the keys.

In each trie, nodes form children that can in turn be parents of lower nodes. Nodes contain letters that represent keys, or pointers to words (or to the rest of the words) at the lowest leaf levels. In principle, each node can contain the searched-for word (if it is a leaf). This can change dynamically as more words are added.
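To make the layout just described concrete, the following is a minimal, illustrative Python sketch of a plain (uncompressed) trie. It is not the paper's KP-Trie, and the class and method names are invented for this example: each node keeps a map from letters to child nodes plus a flag marking the end of a word, and a prefix query simply follows one child pointer per letter.

```python
class TrieNode:
    """One trie node: children keyed by letter, plus an end-of-word flag."""
    def __init__(self):
        self.children = {}      # letter -> TrieNode
        self.is_word = False    # True if a stored word ends at this node


class Trie:
    def __init__(self):
        self.root = TrieNode()  # the root corresponds to the empty string

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def contains(self, word):
        node = self._walk(word)
        return node is not None and node.is_word

    def starts_with(self, prefix):
        """Return all stored words that begin with the given prefix."""
        node = self._walk(prefix)
        results = []
        if node is not None:
            self._collect(node, prefix, results)
        return results

    def _walk(self, key):
        node = self.root
        for ch in key:                     # one child pointer per letter
            node = node.children.get(ch)
            if node is None:
                return None
        return node

    def _collect(self, node, path, results):
        if node.is_word:
            results.append(path)
        for ch, child in node.children.items():
            self._collect(child, path + ch, results)


# Example: the prefix query from the text ("all words that start with 'data'").
t = Trie()
for w in ["data", "database", "datum", "date"]:
    t.insert(w)
print(t.starts_with("data"))   # ['data', 'database']
```

Because the search follows one pointer per letter, the lookup cost grows with the length of the query rather than with the number of stored words.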
Finding a word in a trie depends on the size of the tree, or the number of stored words, together with its structure. The depth to which a query descends depends on the length of the searched-for word. If the word does not exist in the tree, the longest matching node sequence is still traversed.

Certain trie structures can, by their nature, facilitate not only quicker access and retrieval of information; they can also facilitate smooth expansion of the structure. In many cases a dictionary must accept the addition of new words, and hence the structure should support this expansion smoothly and dynamically, whether for fragmentation or for allocation/reallocation [9]. In other cases it may be necessary to compress, encode or encrypt the data held in these tree structures.

In computer science, a radix tree, also known as a PATRICIA trie or compact prefix tree, is a compressed version of a trie and a space-optimized data structure in which any node that is an only child is merged with its parent. Unlike in regular trees, the key at each node is compared chunk of bits by chunk of bits, where the number of bits in the chunk at that node is the radix r of the radix trie, instead of comparing whole keys from their beginning up to the point of inequality. Insertion, deletion and searching are supported by radix trees in O(k), where k is the maximum length of the strings in the set. Unlike in regular tries, edges can be labelled with sequences of elements as well as single elements.

The branching factor is essential in computing the time complexity of searching the tree. It is the number of children of each node; usually every node has the same branching factor, but a difficulty arises when different nodes at the same level of the tree have different numbers of children. In this situation, the branching factor at a given depth can be estimated from the ratio between the number of nodes at that depth and the number of nodes at the next depth.

The time complexity also depends on the solution depth, which is the length of the shortest solution path. Computing the branching factor and the solution depth depends on the given problem instance.
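To illustrate the path-compression idea behind radix trees, here is a small, hypothetical sketch in the same style as above (again, not the paper's KP-Trie): edges carry whole substrings, so any chain of single-child nodes has been merged into one edge, and a lookup compares one labelled chunk per step. Insertion with edge splitting is omitted for brevity.

```python
class RadixNode:
    """Radix-tree node: edges are labelled with substrings, not single letters."""
    def __init__(self):
        self.edges = {}        # edge label (str) -> RadixNode
        self.is_word = False


def radix_lookup(root, word):
    """Return True if `word` is stored; one edge label is compared per step."""
    node = root
    while word:
        for label, child in node.edges.items():
            if word.startswith(label):      # whole chunk compared at once
                word = word[len(label):]
                node = child
                break
        else:
            return False                    # no edge matches: word is absent
    return node.is_word


# Tiny hand-built example: the keys "test" and "team" share the merged edge "te".
root = RadixNode()
te = RadixNode()
st = RadixNode(); st.is_word = True
am = RadixNode(); am.is_word = True
root.edges["te"] = te
te.edges["st"] = st
te.edges["am"] = am
print(radix_lookup(root, "test"), radix_lookup(root, "tea"))   # True False
```

Under these assumptions a lookup inspects at most k characters of the query, which matches the O(k) bound quoted above for radix-tree operations.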
The rest of this paper is organized as follows: Section 2 presents studies related to the paper's subject, Section 3 describes the proposed KP-Trie, Section 4 presents the evaluation and experimental results, and the last section presents the conclusions.

2. Related Works

Some recent research has suggested that newer data structures, such as hash tables or linked structures, can be suitable alternatives to tries in terms of flexibility and performance.

Askitis and Sinha [1] suggested the HAT-Trie as a better alternative for trie representation; it builds on the earlier bucket-trie approach, in which buckets are partitioned using B-tree splitting. This work aims to improve burst-tries through caching and the use of hash tables. The proposed FlashTrie data structure applied compression techniques to minimize the total size of the trie.

Behdadfar and Saidi [4] applied trie search to IP routing-table search optimization, where they tried to adapt tries to large data structures such as those holding IP addresses. They arranged nodes by encoding their addresses with numbers, so that nodes can be reached faster based on the encoded numbers. They concluded that some methods can be optimized for search or query, whereas similar methods can be optimized for addition or update operations on the trie table.

Bodon and Ronyai [5] prepared a modified version of the trie for frequent itemset mining, which is considered one of the most important data-mining problems. Experiments showed that the performance of tries is close to that of hash-trees with optimal parameters at high support thresholds; however, at lower thresholds the trie-based algorithm does better than the hash-tree. Moreover, tries are particularly suitable for an efficient implementation of candidate generation, because pairs of items that produce candidates have the same parents; thus, candidates can easily be obtained by a simple scan of a part of the trie. This yields simpler algorithms that are faster for a range of applications and avoids the need for fine tuning by employing a self-adjusting method.

Ferragina and Grossi [6] introduced the string B-tree, which links traditional external-memory and string-matching data structures. It is a combination of B-trees and PATRICIA tries for internal-node indices, made more useful by adding extra pointers to accelerate search and update operations.

Fredkin [7] described the trie structure. Fredkin said: "As defined by me, nearly 50 years ago, it is properly pronounced 'tree' as in the word 'retrieval'; at least that was my intent when I gave it the name 'trie'. The idea behind the name was to combine reference to both the structure (a tree structure) and a major target (data storage and retrieval)."

Fu et al. [8] introduced a modified LC-trie lookup algorithm in order to enhance its performance. The idea depends on expanding and collapsing the routing prefixes, which turns them into a disjoint, complete and minimal set; subsequently, the base and prefix vectors are removed from the data structure. This results in a smaller data structure and fewer executed instructions, which in turn improves performance, a technique called prefix transformation. Thereafter, the LC-trie's ...
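As a concrete illustration of what trie-based IP-lookup structures such as the LC-trie compute, here is a hedged sketch of plain longest-prefix matching in a binary trie over address bits. It is not Fu et al.'s modified LC-trie algorithm; the names and the bit-string encoding are invented for this example.

```python
class BitNode:
    """Binary-trie node over address bits; next_hop is set where a prefix ends."""
    def __init__(self):
        self.child = [None, None]   # child[0] for bit 0, child[1] for bit 1
        self.next_hop = None


def insert_prefix(root, bits, next_hop):
    """Insert a routing prefix given as a bit string, e.g. '1100'."""
    node = root
    for b in bits:
        i = int(b)
        if node.child[i] is None:
            node.child[i] = BitNode()
        node = node.child[i]
    node.next_hop = next_hop


def longest_prefix_match(root, addr_bits):
    """Walk the address bit by bit, remembering the last prefix seen."""
    node, best = root, root.next_hop
    for b in addr_bits:
        node = node.child[int(b)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best


# Example routing table: 1* -> A, 1100* -> B; address 11001010 matches B.
root = BitNode()
insert_prefix(root, "1", "A")
insert_prefix(root, "1100", "B")
print(longest_prefix_match(root, "11001010"))   # B
print(longest_prefix_match(root, "10101010"))   # A
```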