
Know Your Limits

Reviewed in this issue: Limits to Parallel Computation: P-Completeness Theory, by Raymond Greenlaw, H. James Hoover, and Walter L. Ruzzo. (Oxford University Press, 1995, ISBN-10: 0195085914, ISBN-13: 978-0195085914.)

Igor L. Markov, University of Michigan

IT IS UNUSUAL to review a book published 18 years ago. However, some books are ahead of their time, and some prospective readers may have gotten behind the curve. To this end, the development of commercial parallel software is clearly lagging behind initial hopes and promises, perhaps because known limits to parallel computation have been overlooked.

The history of humankind includes several striking technological scenarios that seemed feasible and admitted promising demonstrations, but could not be applied in practice. One example was perpetual motion, defined as "motion that continues indefinitely without any external source of energy". The hope was to build a machine doing useful work without being resupplied with fuel. Records of perpetual motion trials date back to the seventeenth century. It took two centuries to formulate the laws of thermodynamics to show why perpetual motion in an isolated system is not possible. A second example is the mythical philosopher's stone that transmuted base metals into gold through chemical processes (in fact, published accounts with experimental validation were as respected as modern-day research publications). However, by the late nineteenth century we understood that chemical reactions do not alter chemical elements listed in periodic tables. Both stories show that fundamental limits were discovered, prohibiting the initial scenarios. However, this is not how these stories end. Perpetual motion can be successfully emulated by tapping an abundant energy source while the system remains isolated for practical purposes; e.g., GPS navigation satellites use solar energy to power their continual transmissions. Another example is nuclear propulsion in ballistic missile submarines that remain submerged and isolated for years. Even the transmutation of cheap metals into gold has been demonstrated in particle accelerators, and platinum-group metals can be commercially extracted from spent nuclear fuel. Once scientists develop an understanding of fundamental limits, engineers circumvent these limits by reformulating the challenge or by other clever workarounds.

Today, the business value in many industries is fueled by computation, just like it was driven by steam engines during the industrial revolution and backed up by precious metals during the tumultuous Middle Ages. The need for faster computation leads to significant investments in computing hardware and software. Just like Chemistry and Physics were developed to study chemical reactions and energy conversion, Computer Science was developed in the last 60 years to study algorithms and computation. In particular, Complexity theory studies the limits of computation, as illustrated by the notion of NP-complete problems (a standard textbook is Michael Sipser's "Introduction to the Theory of Computation"). Current consensus is that these problems cannot be solved in worst-case polynomial time without major theoretical breakthroughs, and the knowledge accumulated in the field allows one to quickly evaluate and diagnose purported breakthroughs. Even the least-informed funding agencies would now recognize naïve attempts at solving NP-complete problems in polynomial time. On the other hand, the understanding of such limits guided applied algorithm development to identify and exploit useful features of problem instances. An example end-to-end discussion can be found in the DAC 1999 paper "Why is ATPG Easy?" by Prasad, Chong, and Keutzer. Moreover, for optimization problems, the notion of NP-hardness can sometimes be circumvented by approximating optimal solutions (typical for geometric tasks, such as the Travelling Salesman Problem). As a result, the software and hardware industries have been quite successful in circumventing computational complexity limits in applications ranging from formal verification to large-scale interconnect routing. And chess-playing computers go far beyond NP.
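To make the approximation route concrete (an illustration added here, not drawn from the review), the standard notion is an approximation ratio: a polynomial-time algorithm A is an alpha-approximation for a minimization problem if, on every instance x,

    \[ \mathrm{cost}(A(x)) \;\le\; \alpha \cdot \mathrm{OPT}(x). \]

For the metric Travelling Salesman Problem, where distances obey the triangle inequality, Christofides' polynomial-time algorithm guarantees alpha = 3/2, so NP-hardness of exact optimization does not rule out provably good solutions.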
History does not repeat itself, but it often rhymes, as Mark Twain noted. The latest craze in software, parallel computing, has given us hope to turn silicon (predesigned processor cores) into computation without increasing clock speed and power dissipation per core. As top-of-the-line integrated circuits cost more than their weight in gold, the philosopher's stone pales in comparison to the value proposition of turning not base metals, but sand into something more expensive than gold. And we now see academics, instigated by U.S. funding agencies left unnamed (to protect the guilty!), claim fantastic parallel speed-ups that do not survive scrutiny.

Those who attended the panel on parallel Electronic Design Automation at ICCAD 2011 may recall that I questioned claims of algorithmic "superlinear" speed-up (more than k times when using k processors, for large k). If using k parallel threads of execution consistently improves single-thread runtime by more than a factor of k, then we could just simulate the k threads by time-slicing a single thread, with a factor-of-k slowdown. This yields a better sequential algorithm. Thus, the original comparison was to suboptimal sequential algorithms (using k CPU caches can boost memory performance, but only by a constant factor, and not entirely due to parallel algorithms). Other signs that a claimed speed-up is bogus can be subtle and ad hoc. For fundamental problems, like Boolean SAT and circuit simulation, that have consistently defied parallelization efforts by sophisticated researchers, a spectacular speed-up (e.g., 220 times claimed at ICCAD 2011 for SAT) better have a convincing and unexpected explanation. Patrick Madden's ASPDAC 2011 paper illustrates how academics often oversimplify the challenge they are studying and ignore best known techniques in their empirical comparisons. David Bailey's SC 1992 paper "Misleading Performance Claims in the Supercomputing Field" and its DAC 2009 reprise suggest that this phenomenon is not new.
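The time-slicing argument behind this objection fits in one inequality (notation mine, not the review's): let T_1 denote the runtime of the best known sequential implementation and T_k the wall-clock time with k threads. A consistent claim of

    \[ \frac{T_1}{T_k} \;>\; k \qquad\Longleftrightarrow\qquad k \cdot T_k \;<\; T_1 \]

would mean that executing the k threads round-robin on a single processor, at roughly a factor-of-k slowdown, finishes in about k T_k time and thus beats the supposedly best sequential baseline T_1. Hence that baseline was suboptimal, and a speed-up over a well-engineered sequential code cannot consistently exceed k, apart from constant-factor effects such as the extra cache capacity mentioned above.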
The article "Parallel Logic Simulation: Myth or Reality?" in the April 2012 issue of IEEE Computer offers a great exposition of the promise and the failure of parallel functional logic simulation (e.g., evaluating new circuit designs before silicon production). Many people find it obvious that Boolean circuit simulation should be easy to parallelize, and academic papers claim such results. But implementing this idea in successful commercial software has been a losing proposition for many years (leaving the market open to expensive hardware emulators developed by IBM, EVE, Cadence, Synplicity/Synopsys, and others). The authors of the IEEE Computer article dissect many failed attempts and the obstacles encountered. This is where careful observers may suspect fundamental limits.

Enter the book Limits to Parallel Computation: P-Completeness Theory by Greenlaw, Hoover, and Ruzzo. Just like NP-complete problems defy worst-case polynomial-time algorithms, P-complete problems defy significant speed-ups through parallel computation. The Preface says:

    This book is an introduction to the rapidly growing theory of P-completeness, the branch of complexity theory that focuses on identifying the "hardest" problems in the class P of problems solvable in polynomial time. P-complete problems are of interest because they all appear to lack highly parallel solutions. That is, algorithm designers have failed to find NC algorithms, feasible highly parallel solutions that take time polynomial in the logarithm of the problem size while using only a polynomial number of processors, for them. Consequently, the promise of parallel computation, namely that applying more processors to a problem can greatly speed its solution, appears to be broken by the entire class of P-complete problems.

Just like the well-known book "Computers and Intractability: A Guide to the Theory of NP-Completeness" by Garey and Johnson, this book consists of two parts: an introduction to the P-completeness theory, and a catalog of P-complete problems. It starts with an anecdote about a company that was forced by its competitors to look into parallel platforms and thus developed parallel sorting of n elements using n^2 processors in O(log n) time. This example is used to motivate key concepts, including the modeling of a parallel processor by unrolled combinational circuits. Further analysis is based on formal notions of a computational problem, reducibility, and completeness. These notions lead to complexity classes, such as P (problems solvable in polynomial time) and NC (problems solvable by poly-sized circuits of polylogarithmic depth/delay, named "Nick's class" after Nicholas Pippenger). Clearly, NC is contained in P, but is believed to be smaller than P (just like P is believed to be smaller than NP). Because any problem in P can be efficiently reduced (NC-reduced) to any P-complete problem, finding a P-complete problem inside NC would contradict P ≠ NC (Theorem 3.5.4). So, if you are comfortable interpreting NP-complete as "likely not solvable in polynomial time," you should be comfortable interpreting P-complete as "likely not solvable in a highly parallel fashion."
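For readers who want that containment argument spelled out, here is a compact rendering in standard notation (a sketch in my wording, not the book's exact statement of Theorem 3.5.4):

    \[ \mathrm{NC} \;=\; \bigcup_{k \ge 1} \mathrm{NC}^k \;\subseteq\; \mathrm{P}, \qquad \mathrm{NC}^k:\ \text{uniform circuits of size } n^{O(1)} \text{ and depth } O(\log^k n). \]

If some P-complete problem A were in NC, then every problem B in P could be solved by composing its NC reduction to A with the NC algorithm for A; such compositions stay within NC, so all of P would collapse into NC and P = NC would follow. Contrapositively, as long as P ≠ NC, no P-complete problem admits a polylogarithmic-time algorithm using only polynomially many processors.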