Constructing Trees in Parallel


Purdue University, Purdue e-Pubs
Department of Computer Science Technical Reports, 1989
Report Number: 89-883

Atallah, Mikhail J.; Kosaraju, S. R.; Larmore, L. L.; and Miller, G. L., "Constructing Trees in Parallel" (1989). Department of Computer Science Technical Reports. Paper 751. https://docs.lib.purdue.edu/cstech/751

This document has been made available through Purdue e-Pubs, a service of the Purdue University Libraries. Please contact [email protected] for additional information.

CONSTRUCTING TREES IN PARALLEL

M. J. Atallah*   S. R. Kosaraju†   L. L. Larmore‡   G. L. Miller§   S-H. Teng§

CSD-TR-883
May 1989

* Department of Computer Science, Purdue University. Supported by the Office of Naval Research under Grants N00014-84-K-0502 and N00014-86-K-0689, and the National Science Foundation under Grant DCR-8451393, with matching funds from AT&T.
† Department of Computer Science, Johns Hopkins University. Supported by the National Science Foundation through grant CCR-88-04284.
‡ ICS, UC Irvine.
§ School of Computer Science, CMU, and Department of Computer Science, USC. Supported by the National Science Foundation through grant CCR-87-13489.

Abstract

O(log^2 n) time, n^2/log n processor, as well as O(log n) time, n^3/log n processor, CREW deterministic parallel algorithms are presented for constructing Huffman codes from a given list of frequencies. The time can be reduced to O(log n (log log n)^2) on a CRCW model, using only n^2/(log log n)^2 processors. Also presented is an optimal O(log n) time, O(n/log n) processor EREW parallel algorithm for constructing a tree given a list of leaf depths when the depths are monotonic. An O(log^2 n) time, n processor parallel algorithm is given for the general tree construction problem. We also give an O(log^2 n) time, n^2/log^2 n processor algorithm which finds a nearly optimal binary search tree. An O(log^2 n) time, n^{2.36} processor algorithm for recognizing linear context free languages is given. A crucial ingredient in achieving those bounds is a formulation of these problems as multiplications of special matrices which we call concave matrices. The structure of these matrices makes their parallel multiplication dramatically more efficient than that of arbitrary matrices.

1 Introduction

In this paper we present several new parallel algorithms. Each algorithm uses substantially fewer processors than used in previously known algorithms. The four problems considered are: The Tree Construction Problem, The Huffman Code Problem, The Linear Context Free Language Recognition Problem, and The Optimal Binary Search Tree Problem. In each of these problems the computationally expensive part of the problem is finding the associated tree. We shall show that these trees are not arbitrary trees but are special. We take advantage of the special form of these trees to decrease the number of processors used.

All of the problems we consider in this paper, as well as many other problems, can be solved in sequential polynomial time using dynamic programming. NC algorithms for each of these problems can be obtained by parallelization of the dynamic programming. Unfortunately, this approach produces parallel algorithms which use O(n^6) or more processors. An algorithm which increases the work performed from O(n) or O(n^2) to O(n^6) is not of much practical value. In this paper we present several new paradigms for improving the processor efficiency of dynamic programming problems. For all the problems considered, a tree or class of trees is given implicitly and the algorithm must find one such tree.

The construction of optimal codes is a classical problem in communication. Let Σ = {0, 1, ..., σ-1} be an alphabet. A code C = {c_1, ..., c_n} over Σ is a finite nonempty set of distinct finite sequences over Σ. Each sequence c_i is called a code word. A code C is a prefix code if no code word in C is a prefix of another code word. A message over C is a word resulting from the concatenation of code words from C.

We assume the words over a source alphabet a_1, ..., a_n are to be transmitted over a communication channel which can transfer one symbol of Σ per unit of time, and the probability of appearance of a_i is p_i ∈ R. The Huffman Coding Problem is to construct a prefix code C = {c_1, ..., c_n} ⊆ Σ* such that the average word length Σ_{i=1}^n p_i·|c_i| is minimum, where |c_i| is the length of c_i.
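For concreteness, here is a minimal sequential sketch of the greedy Huffman construction whose parallelization is studied in this paper. It only illustrates the objective Σ_{i=1}^n p_i·|c_i|; the heap-based encoding and the function name are our own illustration, not the paper's notation, and the paper's contribution is the parallel algorithm rather than this sequential routine.

```python
import heapq

def huffman_code_lengths(probs):
    """Greedy Huffman construction: repeatedly merge the two least probable
    subtrees.  Returns the code-word length |c_i| chosen for each probability
    p_i, from which the average word length sum(p_i * |c_i|) can be read off."""
    n = len(probs)
    if n == 1:
        return [1]
    # heap entries: (subtree probability, tie-breaker, leaf indices in the subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * n
    counter = n
    while len(heap) > 1:
        p1, _, leaves1 = heapq.heappop(heap)
        p2, _, leaves2 = heapq.heappop(heap)
        for leaf in leaves1 + leaves2:
            lengths[leaf] += 1      # merging pushes every leaf one level deeper
        heapq.heappush(heap, (p1 + p2, counter, leaves1 + leaves2))
        counter += 1
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]
lengths = huffman_code_lengths(probs)
print(lengths, sum(p * l for p, l in zip(probs, lengths)))  # e.g. [2, 2, 2, 3, 3] 2.2
```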
It is easy to see that prefix codes have the nice property that a message can be decomposed into code words in only one way: they are uniquely decipherable. It is interesting to point out that Kraft and McMillan proved that for any code which is uniquely decipherable there is always a prefix code with the same average word length [13]. In 1952, Huffman [9] gave an elegant sequential algorithm which generates an optimal prefix code in O(n log n) time. If the probabilities are presorted, his algorithm actually runs in linear time [11]. Using parallel dynamic programming, Kosaraju and Teng [18] independently gave the first NC algorithms for the Huffman Coding Problem. However, both constructions use n^6 processors. In this paper, we first show how to reduce the processor count to n^3, while using O(log n) time, by showing that we may assume that the tree associated with the prefix code is left-justified (to be defined in Section 2).

The n^3 processor count arises from the fact that we are multiplying n x n matrices over a closed semiring. We reduce the processor count still further, to n^2/log n, by showing that, after suitable modification, the matrices which are multiplied are concave (to be defined later). The structure of these matrices makes their parallel multiplication dramatically more efficient than that of arbitrary matrices. An O(log n log log n) time, n^2/log n processor CREW algorithm is presented for multiplying them. Also given is an O((log log n)^2) time, n^2/log log n processor CRCW algorithm for multiplying two concave matrices.¹

¹ Independently, [1] and [2] improved the CREW algorithm results by showing that two concave matrices can be multiplied in O(log n) time, using n^2/log n CREW PRAM processors.
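The concave matrices themselves are defined later in the report, beyond this excerpt. The sketch below shows only the ordinary (min,+) product over the closed semiring that the n^3 bound refers to, to make explicit the cubic work that the concave structure lets the parallel algorithm avoid; the function name and the small example matrices are illustrative assumptions, not the paper's notation.

```python
def min_plus_product(A, B):
    """Naive (min, +) product of two n x n matrices:
    C[i][j] = min over k of (A[i][k] + B[k][j]).
    Computed directly this costs Theta(n^3) operations, which is where the
    n^3 processor count comes from; concave matrices admit a much cheaper
    parallel product."""
    n = len(A)
    INF = float("inf")
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[i][j] = min(A[i][k] + B[k][j] for k in range(n))
    return C

A = [[0, 3, 9], [9, 0, 4], [9, 9, 0]]
B = [[0, 1, 7], [9, 0, 2], [9, 9, 0]]
print(min_plus_product(A, B))  # [[0, 1, 5], [9, 0, 2], [9, 9, 0]]
```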
The algorithm for construction of a Huffman code still uses n^2 processors, which is probably too large for practical consideration, since Huffman's algorithm only takes O(n log n) sequential time. Shannon and Fano gave a code, the Shannon-Fano Code, which is only one bit off from optimal. That is, the expected length of a Shannon-Fano code word is at most one bit longer than that of a Huffman code word.

The construction of the Shannon-Fano Code reduces to the following Tree Construction Problem:

Definition 1.1 (Tree Construction Problem) Given n integer values l_1, ..., l_n, construct an ordered binary tree with n leaves whose levels, when read from left to right, are l_1, ..., l_n.
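As a point of reference, the following is a small sequential sketch of Definition 1.1 for the monotone (non-decreasing) case singled out below, assuming the given depths are positive. It reports each leaf as its root-to-leaf path (0 = left, 1 = right) rather than building an explicit pointer structure; the construction and the name are ours for illustration and are not the paper's parallel algorithm.

```python
def tree_from_monotone_depths(depths):
    """Build an ordered binary tree whose leaf levels, read left to right,
    are the given non-decreasing positive integers.  Each leaf is returned
    as its root-to-leaf path (0 = left child, 1 = right child); the paths
    are prefix-free, so they determine the tree.  Raises ValueError when
    no such tree exists."""
    code, prev, paths = 0, depths[0], []
    for d in depths:
        code <<= (d - prev)          # descend to the new, deeper level
        if code >> d:                # ran past 2^d - 1: these levels are infeasible
            raise ValueError("no ordered binary tree with these leaf levels")
        paths.append(format(code, "b").zfill(d))
        code += 1                    # next leaf sits one position to the right
        prev = d
    return paths

print(tree_from_monotone_depths([2, 2, 2, 3, 3]))
# ['00', '01', '10', '110', '111']
```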
We give an O(log^2 n) time, n processor EREW PRAM parallel algorithm for the tree construction problem. In the case when l_1, ..., l_n are monotonic, we give an O(log n) time, n/log n processor EREW PRAM parallel algorithm. In fact, trees where the levels of the leaves are monotone will be used for constructing both Huffman Codes and Shannon-Fano Codes.

Using our solution of the tree construction problem, we get an O(log n) time, n/log n processor EREW PRAM algorithm for constructing Shannon-Fano Codes.

We also consider the problem of constructing, in parallel, optimal binary search trees as defined by Knuth [10]. The best known NC algorithm for this problem is the parallelization of dynamic programming, which uses n^6 processors. In this paper, using the new concave matrix multiplication algorithm, we show how to compute a nearly optimal binary search tree in O(log^2 n) time using n^2/log n processors. Our search trees are off from optimal only by an additive amount of 1/n^k for any fixed k.

Finally, we consider recognition of linear context free languages. A CFL is said to be linear if all productions are of the form A → bB, A → Bb, or A → a, where A and B are nonterminal variables and a and b are terminal variables. It is well known from Ruzzo [17] that the general CFL recognition problem can be performed on a CRCW PRAM in O(log n) time using n^6 processors, again by parallelization of dynamic programming. By observing that the parse tree of a linear context free language is of a very restricted form, we construct an O(n^3) processor, O(log^2 n) time CREW PRAM algorithm for it. Using the fact that we are doing Boolean matrix multiplication, we can reduce the processor count to n^{2.36}.
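To make the restricted shape of linear derivations concrete, here is a simple sequential membership test: every production consumes one terminal from the left or right end of the remaining substring, so the table is indexed only by substring endpoints. The grammar encoding and the example language {0^k 1^k} are our own illustration; the paper's contribution is the O(n^3), and then n^{2.36}, processor parallel algorithm, not this O(n^2 · |G|) sequential routine.

```python
def recognizes(grammar, start, w):
    """Membership test for a linear context free grammar.
    `grammar` maps a nonterminal A to productions written as
        ("left",  a, B)   for  A -> aB
        ("right", B, a)   for  A -> Ba
        ("term",  a)      for  A -> a
    Every production keeps at most one nonterminal, so a derivation of
    w[i:j] only refers to w[i+1:j] or w[i:j-1]; the table is filled in
    order of increasing substring length."""
    n = len(w)
    derives = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for A, prods in grammar.items():
                for p in prods:
                    if ((p[0] == "term" and length == 1 and p[1] == w[i]) or
                        (p[0] == "left" and p[1] == w[i] and p[2] in derives[i + 1][j]) or
                        (p[0] == "right" and p[2] == w[j - 1] and p[1] in derives[i][j - 1])):
                        derives[i][j].add(A)
    return start in derives[0][n]

# L = { 0^k 1^k : k >= 1 } is linear:  S -> 0A,  A -> S1 | 1
g = {"S": [("left", "0", "A")], "A": [("right", "S", "1"), ("term", "1")]}
print(recognizes(g, "S", "000111"), recognizes(g, "S", "0101"))  # True False
```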
2 Preliminaries

Throughout this paper a tree will be a rooted tree. It is ordered if the children of each node are ordered from left to right. The level of a node in a tree is its distance from the root. A binary tree T is complete at level l if there are 2^l nodes in T at level l. A binary tree is empty at level l if there is no vertex at level l.

A binary tree T is a left-justified tree if it satisfies the following property:

1. if a vertex has only one child, then it is a left child;
