36_Algorithms.txt 4/4/2010

Zhan and Noon tested algorithms for solving this problem, finding approximate or double bucket modifications to one algorithm and Pallottino's algorithm with two queues were fastest on actual data. Johnson's algorithm solves this problem in the sparse case faster than the cubic Floyd's algorithm, both of which solve this for all pairs. Another approach uses dynamic programming and checks all edges n times; that algorithm also detects negative cost cycles, and is named for Bellman and Ford. A Fibonacci heap is used to speed up a greedy approach, which works only for positive-weight edges, while if an admissible heuristic is available, the A* algorithm can be used. For 10 points, identify this problem in graph theory, most famously solved by Dijkstra's algorithm, which asks for a fast route between two nodes.
Answer: shortest path algorithms [accept with any of the following modifying the answer: all-pairs, single-source, or single-source single-destination]

Along with a binary search tree, one of these structures is used to store future beach-line-changing events in Fortune's algorithm for generating Voronoi diagrams. The ROAM algorithm uses a pair of these data structures to store triangles that can be either split or merged. A meldable heap is a version of this structure implemented using a binomial heap, which gives it a "merge" operation, and the "fringe" structure in the A* algorithm is one of these. They're used in network routing protocols to ensure real-time traffic is forwarded first, and they optionally implement the PeekAtNext operation. In Java these structures implement the "comparable" interface, unlike another structure whose interface they implement. For 10 points, name this data structure that, unlike a related structure, isn't first-in first-out but inserts each element with an associated value of importance.
Answer: Priority queue (09Lederberg)

One of these algorithms should exhibit avalanching on all inputs and outputs, and examples include MD4, FNV, Linear Feedback Shift Register, and Cyclic Redundancy Check. These algorithms can be divided into additive or multiplicative and rotative varieties based on whether they shift the accumulator as they traverse the input. The requirement of uniform distribution is often made more difficult by very non-uniform inputs, and a common problem is for a small set of input bits to cancel each other out. These algorithms are useful in the provision of digital signatures, and to be used in cryptography they typically must be one-way and collision-free. For 10 points, name these unary functions that return a key used in a namesake data type with big O of one lookup time.
Answer: Hash Function [accept simple Hash functions] (09Lederberg)
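
The hash tossup above names FNV among its examples. Below is a minimal Python sketch of a 32-bit FNV-1a-style hash, offered only to illustrate the XOR-then-multiply accumulator pattern the clues describe; the constants are the published 32-bit FNV offset basis and prime, but treat the function name and the sample inputs as hypothetical.

    def fnv1a_32(data: bytes) -> int:
        """Illustrative 32-bit FNV-1a-style hash: XOR each byte in, then multiply."""
        h = 0x811C9DC5                          # 32-bit FNV offset basis
        for byte in data:
            h ^= byte                           # mix the input byte into the accumulator
            h = (h * 0x01000193) & 0xFFFFFFFF   # multiply by the 32-bit FNV prime, keep 32 bits
        return h

    # Nearby inputs should hash to very different outputs (the "avalanching" clue):
    print(hex(fnv1a_32(b"hello")), hex(fnv1a_32(b"hellp")))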
Fibonacci heaps are used in Johnson's Algorithm to determine this, and an efficient way to determine it via a best-first method is A* [A-STAR]. One variety of this class of algorithms runs in N cubed time in the size of the graph and is the all-pairs approach, which is slower than the Bellman-Ford algorithm of this type. The simplest algorithm for determining this involves a single source and a single destination with nonnegative edge weights and is named for Dijkstra. For 10 points, name this class of algorithms which determines the minimum sum of the weights between two nodes of a graph, exemplified by the quickest way to get between two cities given the time spent on roads between them.
Answer: shortest-path algorithms [accept clear equivalents; prompt on "graph search", "tree search", or anything else involving searching]

This type of algorithm is useful when the problem exhibits optimal substructure and when locally optimal choices lead to globally optimal solutions. Huffman coding, Dijkstra's (Dike-strahs) algorithm, and the fractional knapsack problem use it, although dynamic programming is a better solution to the "zero-one" knapsack problem. For ten points, what algorithm always makes the choice that looks best at the moment?
Answer: greedy algorithm

It contains such public member functions as preorder, inorder, and postorder traversals, which call its own recursive utility functions to perform the appropriate operations on the internal representation. This nonlinear, two-dimensional data structure picks a root value and orders subsequent values either to a right branch or left branch depending on that value's relation to the root. FTP, identify this ordering mechanism from computer science with an arboreal name.
Answer: binary tree

In 1999, McIlroy created an adversary for this algorithm that guarantees that it will run in worst-case time. Its worst-case runtime can be avoided by switching to heapsort after a certain recursion depth, a construction known as introsort. Like mergesort, it is easily parallelizable, and its runtime can be decreased by first selecting the median of the unsorted input list. It was invented by C.A.R. Hoare, and its second phase is the partition function, which splits the original list into lists of elements that are greater or less than the chosen pivot value. For 10 points, identify this divide-and-conquer sorting algorithm which runs in big O of n log n time, named for its speed.
Answer: Quicksort

The Deutsch-Bobrow algorithm implements a "deferred" variety of one technique used to perform this action at run-time, and one problem with that algorithm is the issue of zero count table overflow. Christopher's algorithm was created to perform this function for FORTRAN, while Lins' algorithm is a lazy algorithm which performs this function using a control set. The Deutsch-Schorr-Waite algorithm is an example of a pointer-reversal algorithm for doing it. A two-phase algorithm to perform this action was developed by McCarthy and is known as mark-and-sweep, while Unix-based systems employ the reference-counting method. For ten points, identify this action in which memory that is no longer in use is returned to the heap.
Answer: garbage collection

Applications of this type of data structure include parsing mathematical expressions and calling subroutines from parent programs, but NOT reading from an input stream or running processes in the order in which they were called. It can be implemented in order-one time in real life as a to-do box which is open at only one end, or in a program by a singly-linked list, since pushing and popping can all be done at the head. FTP, name this data structure which uses a last-in, first-out system.
Answer: stack (prompt on "LIFO" or "last-in, first-out" on early buzz)
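
The quicksort tossup above describes the algorithm's phases: choose a pivot, then partition into lesser and greater elements and recurse. Here is a short, hedged Python sketch of that idea; it builds new lists rather than partitioning in place, so it illustrates the shape of the recursion rather than Hoare's original in-place scheme.

    def quicksort(items):
        """Divide and conquer: pick a pivot, partition, recurse on each side."""
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]      # any pivot rule works; median-of-three helps in practice
        less    = [x for x in items if x < pivot]
        equal   = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([5, 2, 9, 1, 5, 6]))    # [1, 2, 5, 5, 6, 9]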
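
Several tossups in this packet circle the same mechanism: Dijkstra's greedy single-source shortest-path algorithm, which uses a priority queue as its fringe and works only for nonnegative edge weights. The sketch below is an illustrative Python version under those assumptions; the graph format and node names are invented for the example, not taken from any question.

    import heapq

    def dijkstra(graph, source):
        """Single-source shortest paths with nonnegative weights.
        graph: dict mapping node -> list of (neighbor, weight) pairs."""
        dist = {source: 0}
        fringe = [(0, source)]                  # priority queue keyed on tentative distance
        while fringe:
            d, u = heapq.heappop(fringe)
            if d > dist.get(u, float("inf")):   # stale queue entry; skip it
                continue
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w             # relax the edge u -> v
                    heapq.heappush(fringe, (dist[v], v))
        return dist

    roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
    print(dijkstra(roads, "A"))                 # {'A': 0, 'C': 1, 'B': 3}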
Performing one type of this process requires the use of a "burn-in," (*) and several adaptive or recursive methods named after it involve importance sampling. The error term in the "integration" named after it decreases as one over the square root of N, and bootstrapping yields estimates of parameters in another type of this process. Peter Lepage invented a version of this technique that applies to particle physics which involves constructing a separable multi-dimensional weight function. Including the VEGAS algorithm and simulated annealing approaches such as the Metropolis-Hastings algorithm, this type of technique is stochastic, contrasting it with deterministic algorithms such as molecular dynamics. For 10 points, give the general term for these methods of Markov chain simulation, integration, or sampling, which generally involve randomly selecting a whole bunch of points and are named for a European gambling mecca.
Answer: Monte Carlo methods (accept "annealing" before *)

This man formulated a mechanism that checks whether a new state is safe or not when granting requests in the Banker's Algorithm. That algorithm was subsequently used in an early operating system he developed, known as the "THE multiprogramming system". He also formulated a method that uses a stack to convert standard syntax into Reverse Polish Notation, the Shunting Yard Algorithm. With C. A. R. Hoare this man lends his name to a 1972 textbook called Structured Programming that advocated his position against the GOTO statement. His extensive correspondences are collectively known as the "EWD" series, and one of his eponymous creations falls apart when graphs have negative edge weights and has a heuristic modification known as A Star. FTP, identify this computer scientist best known for lending his name to a greedy algorithm that finds the single-source shortest path from a node in a graph.
Answer: Edsger Wybe Dijkstra

A variant of this algorithm used for selection can be made to run in worst-case time by a median-of-three killer sequence, though this algorithm can be used to find the smallest or largest few elements of an array effectively. In 1999, McIlroy created an adversary for this algorithm that guarantees that it will run in worst-case time, though a selection algorithm that finds an array's median will ensure a running time close to n log n. It works in three phases: choosing a pivot element, recursing on lesser elements, and recursing on elements greater than or equal to the pivot element. For 10 points, name this divide-and-conquer algorithm, noted for being faster than other n log n sorting algorithms in practice.
Answer: Quicksort

Complexity class BPP problems are solved by these methods in polynomial time. Antithetic variates and control variates are common variance reduction techniques for their estimates, while low-discrepancy sequences are used in their "quasi" form. Lazzarini's choice of stopping time and needle lengths when attempting one named for Buffon gave an excessively accurate estimate of pi in just 3,408 tosses. The Gibbs sampler and the more general Metropolis-Hastings algorithm for calculating high-dimensional integrals, and random walk simulation of binomial options pricing models, are examples that make use of Markov chains.
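
Both Monte Carlo tossups above lean on the same mechanics: sample random points and let the error shrink roughly as one over the square root of N. As a hedged illustration, here is a short Python sketch that estimates pi from random points in the unit square, a simpler stand-in for the Buffon's-needle experiment the last clue mentions, not a reproduction of it.

    import random

    def estimate_pi(n: int) -> float:
        """Monte Carlo estimate of pi: the fraction of uniform random points in the
        unit square that fall inside the quarter circle of radius 1, times 4."""
        hits = 0
        for _ in range(n):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return 4.0 * hits / n

    # The error decreases roughly as 1/sqrt(N): more samples, slowly better estimates.
    for n in (1_000, 100_000, 1_000_000):
        print(n, estimate_pi(n))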