Non-Blocking Interpolation Search Trees with Doubly-Logarithmic Running Time

Trevor Brown (University of Waterloo, Canada), Aleksandar Prokopec (Oracle Labs, Switzerland), Dan Alistarh (Institute of Science and Technology Austria)

Abstract

Balanced search trees typically use key comparisons to guide their operations, and achieve logarithmic running time. By relying on numerical properties of the keys, interpolation search achieves lower search complexity and better performance. Although interpolation-based data structures were investigated in the past, their non-blocking concurrent variants have received very little attention so far.

In this paper, we propose the first non-blocking implementation of the classic interpolation search tree (IST) data structure. For arbitrary key distributions, the data structure ensures worst-case O(log n + p) amortized time for search, insertion and deletion traversals. When the input key distributions are smooth, lookups run in expected O(log log n + p) time, and insertion and deletion run in expected amortized O(log log n + p) time, where p is a bound on the number of threads. To improve the scalability of concurrent insertion and deletion, we propose a novel parallel rebuilding technique, which should be of independent interest.

We evaluate whether the theoretical improvements translate to practice by implementing the concurrent interpolation search tree, and benchmarking it on uniform and non-uniform key distributions, for dataset sizes in the millions to billions of keys. Relative to the state-of-the-art concurrent data structures, the concurrent interpolation search tree achieves performance improvements of up to 15% under high update rates, and of up to 50% under moderate update rates. Further, ISTs exhibit up to 2× fewer cache misses, and consume 1.2−2.6× less memory compared to the next best alternative on typical dataset sizes. We find that the results are surprisingly robust to distributional skew, which suggests that our data structure can be a promising alternative to classic concurrent search structures.

CCS Concepts: • Theory of computation → Concurrent algorithms; Shared memory algorithms; • Computing methodologies → Concurrent algorithms.

Keywords: concurrent data structures, search trees, interpolation, non-blocking algorithms

PPoPP '20, February 22–26, 2020, San Diego, CA, USA. © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-6818-6/20/02. https://doi.org/10.1145/3332466.3374542

1 Introduction

Efficient search data structures are critical in practical settings such as databases, where the large amounts of underlying data are usually paired with high search volumes, and with high amounts of concurrency on the hardware side, via tens or even hundreds of parallel threads. Consequently, there has been a significant amount of research on efficient concurrent implementations of search data structures.

For search data structures supporting predecessor queries, which are the focus of this work, such as binary search trees (BSTs) or balanced search trees, efficient implementations have been well researched and are relatively well understood, e.g. [9, 13, 22, 36]. However, these classic search data structures are subject to the fundamental logarithmic complexity thresholds (in the number of keys n), even in the average case, which limits their performance for large key sets, on the order of millions or even billions of keys. In the sequential case, elegant and non-trivial techniques have been proposed to reduce average-case complexity, by leveraging properties of the key space, or of the key distribution. With one notable exception [37], these techniques are significantly less well understood for concurrent implementations.

This paper revisits this area, and provides the first efficient, non-blocking concurrent implementation of an interpolation search tree data structure [34], called the C-IST. The C-IST is dynamic, in that it supports concurrent searches, insertions and deletions. Interpolation search trees, presented in the next section, have amortized worst-case O(log n) time for standard operations, but achieve O(log log n) expected amortized time complexity for insert and delete, and O(log log n) expected time for search, by leveraging smoothness properties of the key distribution [34]. Our concurrent implementation preserves these properties with high probability.
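To make the interpolation idea concrete, here is a minimal sketch (ours, not from the paper) of classic interpolation search on a sorted array, in C++, the language of the paper's implementation. The function name and key type are our own; the paper applies this probing idea inside tree nodes rather than on a flat array.

```cpp
#include <cstdint>
#include <vector>

// Locate `key` in a sorted vector by interpolating its likely position
// from the values at the current range boundaries, instead of always
// probing the middle as binary search does. On smooth (e.g. near-uniform)
// key distributions the expected number of probes is O(log log n); on
// adversarial inputs it can degrade to O(n).
long interpolationSearch(const std::vector<int64_t>& a, int64_t key) {
    long lo = 0, hi = static_cast<long>(a.size()) - 1;
    while (lo <= hi && key >= a[lo] && key <= a[hi]) {
        if (a[hi] == a[lo]) break;  // equal bounds: avoid division by zero
        // Estimate the position of `key` by linear interpolation.
        long pos = lo + static_cast<long>(
            (static_cast<double>(key - a[lo]) / (a[hi] - a[lo])) * (hi - lo));
        if (a[pos] == key) return pos;
        if (a[pos] < key) lo = pos + 1; else hi = pos - 1;
    }
    return (lo <= hi && a[lo] == key) ? lo : -1;
}
```

The standard intuition for the doubly-logarithmic bound is that, on uniformly distributed keys, each probe shrinks the search range from n to roughly √n in expectation.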
To ensure correctness, non-blocking progress, and scalability in the concurrent setting, we introduce several new techniques relative to sequential ISTs. Specifically, our contributions are as follows:

• We describe the first non-blocking concurrent interpolation search tree (C-IST) based on atomic compare-and-swap (CAS) instructions (Section 2), with expected lookup time O(log log n + p), and expected amortized O(log log n + p) time for insert and delete.
• We design a parallel, non-blocking rebuilding algorithm to provide fast and scalable periodic rebuilding for C-ISTs (Section 3). We believe that this technique is applicable to other concurrent data structures that require rebuilding.
• We prove the correctness, non-blocking and complexity properties of the C-IST (Section 4).
• We provide a C-IST implementation in C++, and compare its performance against concurrent (a,b)-trees [13], Natarajan and Mittal's concurrent BSTs [36], and Bronson's concurrent AVL trees [10] (Section 5). We report performance improvements of 15%−50% compared to (a,b)-trees (the prior best-performing concurrent search tree) on large datasets, and improvements of up to 3.5× compared to the other concurrent trees, depending on the proportion of updates. We also analyze the average depth and cache-miss behavior, present a breakdown of the execution time, show the impact of the parallel rebuilding algorithm, and compare memory footprints.

2 Concurrent Interpolation Search Tree

2.1 Examples and Overview

We illustrate how concurrent interpolation search trees work using several examples. Examine the first tree in the following figure. Each inner node consists of a set of d pointers to child nodes, and d − 1 keys that are used to drive the search. We say that the node's degree is d. The top node usually has the highest degree, and the degree of a node decreases as it gets deeper in the tree (explained precisely below). The tree is external, meaning that the keys are stored in the leaf nodes. The illustration shows a subset of nodes – the missing nodes are represented with ··· symbols.

[Figure: a C-IST before and after inserting the key km; the leaf holding kj is replaced, via a CAS on the parent's pointer, by a new inner node holding kj and km.]

Consider the task of inserting a key km, such that kj < km. The insertion first locates the leaf that holds the key kj, such that km is the successor of kj, and then allocates a new inner node that holds both kj and km. Finally, the old pointer in the parent is atomically changed with a CAS instruction to point to the new node.

Without rebalancing, the tree can become arbitrarily deep. Therefore, insertion must periodically rebalance parts of the tree. The following figure shows the tree after inserting an additional key kn, such that ki < kj < km < kn. The subtree at the bottom, which contains the keys ki, kj, km and kn, is sufficiently imbalanced, and it should be replaced with a more balanced tree. Rebalancing creates a new subtree that contains the same set of keys. After rebalancing, the subtree consists of a single inner node of degree 4, as shown on the right. Note that deletions also periodically rebalance the subtrees.

[Figure: the imbalanced subtree holding ki, kj, km and kn on the left, and the single balanced inner node of degree 4 that replaces it on the right.]
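The CAS step described above can be sketched as follows. This is our illustration for a simplified node of degree 2, not the paper's code: the paper's nodes have degree d and additional bookkeeping fields (its actual types appear in Figure 1), and all names here are our own.

```cpp
#include <atomic>

// Simplified node types for a degree-2 sketch; not the paper's definitions.
struct Node { virtual ~Node() = default; };
struct Single : Node {               // leaf holding one key/value pair
    long key, value;
    Single(long k, long v) : key(k), value(v) {}
};
struct Inner : Node {                // degree-2 inner node
    long splitKey;                   // keys < splitKey go left
    std::atomic<Node*> left, right;
    Inner(long s, Node* l, Node* r) : splitKey(s), left(l), right(r) {}
};

// Replace the leaf holding kj with a new inner node holding kj and km.
// `slot` is the parent's child pointer that currently points to `leaf`.
bool insertAfterLeaf(std::atomic<Node*>& slot, Single* leaf, long km, long vm) {
    Single* newLeaf = new Single(km, vm);
    Inner* repl = new Inner(km, leaf, newLeaf);
    Node* expected = leaf;
    if (slot.compare_exchange_strong(expected, repl)) return true;
    // CAS failed: a concurrent update changed the pointer first. The new
    // nodes were never published, so deleting them here is safe; the
    // caller would re-traverse and retry.
    delete repl;
    delete newLeaf;
    return false;
}
```

Note that on failure only unpublished nodes are freed; reclaiming nodes that other threads may still be reading requires a safe memory reclamation scheme (e.g. epochs or hazard pointers) in any real non-blocking implementation.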
There are several challenges with making this approach concurrent. First, concurrent modifications and rebalancing must correctly synchronize so that all operations remain non-blocking, while searches remain wait-free. Second, the rebalancing of any subtree must not compromise the scalability of the other operations. Finally, concurrent rebalancing must, when the probability distribution of the input keys is smooth [34], ensure that the operations run in amortized O(log log n) time.

2.2 Data Types

The concurrent interpolation search tree consists of the data types shown in Figure 1. The IST data type represents the interpolation search tree with the single member root, which points to the root node. Initially, root points to an empty leaf node, whose type is Empty. The Single data type represents a leaf node with a single key and an associated value, and the Inner data type represents inner nodes, as illustrated on the right of Figure 1.

In addition to holding the search keys, and the pointers to the child nodes, the Inner data type contains the node's degree, and a field called initSize, which contains the number of keys that were in the corresponding subtree when this node was created. Apart from the child pointers, these fields are set on creation, and not subsequently modified. Inner also contains two volatile fields, count and status, which are used to coordinate rebuilding. The count field holds the number of updates that were performed in the subtree rooted at this node since it was created.
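Figure 1 itself is not reproduced in this excerpt, but the prose above pins down most of the fields. A C++ rendering consistent with that description might look like the sketch below; the field names root, degree, initSize, count and status follow the prose, while everything else (key/value types, the status encoding, the rebuild threshold) is our assumption. The paper's "volatile" fields map naturally to std::atomic in C++.

```cpp
#include <atomic>
#include <cstdint>
#include <memory>

using K = int64_t;  // key and value types are assumptions
using V = int64_t;

struct Node { virtual ~Node() = default; };

struct Empty  : Node {};                    // empty leaf
struct Single : Node { K key; V value; };   // leaf with one key/value pair

struct Inner : Node {
    // Set on creation and not subsequently modified:
    int degree;                         // d child pointers, d-1 routing keys
    int initSize;                       // keys in this subtree when it was built
    std::unique_ptr<K[]> keys;          // the d-1 routing keys
    // Child pointers are the exception: they are updated with CAS.
    std::unique_ptr<std::atomic<Node*>[]> children;
    // Volatile (atomic) fields coordinating rebuilding:
    std::atomic<int> count;             // updates in this subtree since creation
    std::atomic<uintptr_t> status;      // rebuild coordination word (assumed encoding)
};

struct IST {
    std::atomic<Node*> root;            // initially points to an Empty leaf
};

// One plausible rebuild trigger (the threshold constant is our assumption):
// rebuild a subtree once its accumulated update count reaches a constant
// fraction of the key count it had when it was last built.
inline bool shouldRebuild(const Inner& n) {
    return n.count.load(std::memory_order_relaxed) > n.initSize / 4;
}
```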