Algorithms and Data Structures for New Models of Computation

Department: Software Technology
Editor: Name, xxxx@email

Paul E. Black, National Institute of Standards and Technology (NIST)
David Flater, NIST
Irena Bojanova, NIST

Abstract—In the early days of computer science, the community settled on a simple standard model of computing and a basic canon of general purpose algorithms and data structures suited to that model. With isochronous computing, heterogeneous multiprocessors, flash memory, energy-aware computing, cache and other anisotropic memory, distributed computing, streaming environments, functional languages, graphics coprocessors, and so forth, the basic canon of algorithms and data structures is not enough. Software developers know of real-world constraints and new models of computation and use them to design effective algorithms and data structures. These constraints motivate the development of elegant algorithms with broad utility. As examples, we present four algorithms that were motivated by specific hardware nuances, but are generally useful: reservoir sampling, majority of a stream, B-heap, and compacting an array in Θ(log n) time.

STANDARD MODEL AND CANON

In the early days of computer science, the community settled on a simple standard model of computing [5] and a basic canon [14] of general purpose algorithms and data structures. The standard model of computing has (a) one processor that performs a single instruction at a time, (b) copious memory that is uniformly accessible with negligible latency, (c) readily available persistent storage with significant latency, (d) input and output that is insensitive to content or context and orders of magnitude slower, (e) resources and connections that do not change, and (f) no significant concerns about energy use or component failure, permanent or intermittent.

The canon of data structures and algorithms includes lists, quicksort, Boyer-Moore string search, trees, and hash tables.
Knuth even began what was intended to be a complete encyclopedia of computing [10]. Although "new and better" sort routines are regularly sent to the editors of the Dictionary of Algorithms and Data Structures [1], none has been both new and widely applicable. The most recent general-purpose data structure is the skip list, which was invented in 1989 [15]. Work continues in specialized areas, such as graphics, distributed systems, user interaction, cyber-physical systems, and artificial intelligence. Except perhaps for quantum computing algorithms, general textbooks have the same model and cover similar topics.

New Models

Programmers will not be prepared for modern systems with only this background. Few actual computers match the standard model in all aspects. Although hardware, caching, and system support well approximate the standard model, developers must be aware of constraints and idiosyncrasies of the software's execution environment. New constraints of time, space, energy, and hardware motivate new ways of thinking and lead programmers to need new approaches to old problems.

So many advances have been made in hardware and systems that slavishly staying within that standard model forgoes much performance. Enticing advantages are available to exploit in isochronous computing, heterogeneous multiprocessors, flash memory, energy-aware computing, cache and other anisotropic memory, distributed computing, graphics coprocessors, networks of many simple devices, and streaming environments. In some cases, there is enormous waste in maintaining or staying within the standard computing model. In other cases, additional resources are squandered if used without regard to their particular strengths and weaknesses.

For example, flash memories, used in solid state drives and memory cards for cameras and smart phones, wear out. Individual bits in flash memory can be set, but bits can only be cleared by erasing an entire block of memory. Blocks of memory can only be erased up to one hundred thousand times before excessive errors occur [8]. The entire block of memory then becomes unusable. To extend their usable life, many flash memory subsystems use sophisticated algorithms to spread writes throughout memory and still maintain the abstraction of persistent memory that is insensitive to write locations. This is often accomplished through journaling file systems or log-structured file systems, whose patterns of writes match flash memories well [7]. Additional efficiencies may be available to the software developer by using data structures specialized to flash memories, such as [19] or [4]. Taking device characteristics into consideration can improve solid state disk responsiveness by 44% [8].

Similar constraints and opportunities arise because of multicore machines, in which there is parallelism on every desktop; distributed computing, in which one can use hundreds of machines for only milliseconds; and cloud computing, in which extra processing is always available for a price, with some delay to request and provision additional resources. In addition, micropower and ubiquitous sensors are best used by applications that treat input as a stream of values to be processed instead of a static set of data. Power- and thermal-aware computing is of growing importance in mobile devices and huge server installations. Today we expect computer systems distributed across the entire Earth to function as a single, coherent whole, in spite of speed-of-light delays, which often requires careful caching [6].

EXAMPLE ALGORITHMS AND DATA STRUCTURES

In this section, we give four instances of elegant, generally useful algorithms and data structures that were developed for particular circumstances, but are not widely known. They illustrate the opportunity to create new algorithms and data structures motivated by new computing constraints.

Reservoir Sampling

For the first algorithm, consider a stream environment, that is, items arrive one at a time. The program does not know how many items will arrive, or equivalently, an answer may be required after an unspecified number of items have arrived; see Figure 1. Since we don't know how many items there will be, we limit resources to a constant amount of memory, plus a counter that requires log n bits.

Figure 1. The problem is to randomly select an item from a stream of items of unknown length using a near-constant amount of memory. All items must have an equal chance of being selected.

As a real world example, imagine a network of tiny sensors detecting pollution. Randomly, a sample is requested from every sensor in the network. The tiny sensors cannot store many samples, and for statistical purposes all samples must be equally likely.

The challenge is to choose one of the items randomly, with all items having an equal chance of being chosen. If the number of items is known beforehand, one could generate a random number in the range of the number of items. Alternately, one can store all the items, possibly in some data structure that can grow as needed, and choose one when needed. Neither of these standard solutions satisfies the constraint of not knowing the number of items beforehand.

A simple solution is attributed to Alan G. Waterman [11]. It requires storing the item chosen so far and a counter, n, that holds the number of items from the stream so far. To begin, store the first item and set n to 1. For each new item, replace the chosen item by the new item with probability 1/(n+1) and increment n.

The proof of algorithm correctness is simple, too. Clearly we end up with one of the items. But does every item have an equal chance of being chosen? We sketch an induction proof. The first item is kept as the chosen item after the first step with a chance of 1/1, which is correct. The second item is kept with a probability of 1/2, so either item has a 1/2 chance of being chosen after the second step. For the induction step, assume that every one of n items has an equal chance, 1/n, of being the chosen item. When the (n+1)st item arrives, it replaces whatever item had been chosen with probability 1/(n+1). The previously chosen item is therefore retained with probability n/(n+1), so each earlier item remains chosen with probability (1/n) × n/(n+1) = 1/(n+1), matching the new item's chance and completing the induction.

Why should this subtle algorithm be used? Typically everything is in memory and a program can do two passes: the first pass counts the number of items, then the second pass chooses one at random. Suppose that selection is computationally intensive, for example, only numbers that are prime are candidates to be chosen randomly. It would be inefficient to check for primality in the first pass to count, choose a random number k, then repeat the primality checking when counting through to find the kth item. A program might cache the primes identified in the first pass, but then the program is more complicated. Adapting the reservoir sampling algorithm allows the following elegant program structure:

    for every item
        if this item is prime then
            decide whether to keep it

As several authors have pointed out, reservoir sampling can be easily extended to pick m items at random from a stream. The insight is to keep a new item n+1 with probability m/(n+1). If we decide to keep a new item, randomly choose one of the m items to replace.

This can be used to efficiently choose a random sample from a database. Typically one selects all candidates from a database, then chooses a random sample. It may be far more efficient to perform a single pass, randomly choosing items in the process.

(Following paragraph not in published article.) Parallel hardware readily speeds up reservoir sampling. Each parallel thread chooses an item from its subprocesses, weighted by the number of items in each subprocess's subtree. The thread returns the item chosen and the total number of items.

IT Professional. Published by the IEEE Computer Society. © 2021 IEEE.
