The Complexity of Boolean Functions
Ingo Wegener
Johann Wolfgang Goethe-Universität
WARNING: This version of the book is for your personal use only. The material is copyrighted and may not be redistributed. Copyright © 1987 by John Wiley & Sons Ltd, and B. G. Teubner, Stuttgart.
All rights reserved.
No part of this book may be reproduced by any means, or transmitted, or translated into a machine language without the written permission of the publisher.
Library of Congress Cataloguing in Publication Data:
Wegener, Ingo
The complexity of Boolean functions.
(Wiley-Teubner series in computer science)
Bibliography: p. Includes index.
1. Algebra, Boolean. 2. Computational complexity. I. Title. II. Series.
QA10.3.W44 1987 511.3'24 87-10388
ISBN 0 471 91555 6 (Wiley)
British Library Cataloguing in Publication Data:
Wegener, Ingo
The complexity of Boolean functions.—(Wiley-Teubner series in computer science).
1. Electronic data processing—Mathematics 2. Algebra, Boolean
I. Title. II. Teubner, B. G.
004.01'511324 QA76.9.M3
ISBN 0 471 91555 6
CIP-Kurztitelaufnahme der Deutschen Bibliothek
Wegener, Ingo
The complexity of Boolean functions/Ingo Wegener.—Stuttgart: Teubner; Chichester; New York; Brisbane; Toronto; Singapore: Wiley, 1987
(Wiley-Teubner series in computer science)
ISBN 3 519 02107 2 (Teubner)
ISBN 0 471 91555 6 (Wiley)
Printed and bound in Great Britain

On this version of the "Blue Book"
This version of "The Complexity of Boolean Functions," for some people simply the "Blue Book" due to the color of the cover of the original from 1987, is not a print-out of the original sources. It is rather a "facsimile" of the original monograph typeset in LaTeX.

The source files of the Blue Book which still exist (in 1999) have been written for an old version of troff and virtually cannot be printed out anymore. This is because the (strange) standard font used for the text as well as the special fonts for math symbols seem to be nowhere to find today. Even if one could find a solution for the special symbols, the available text fonts yield a considerably different page layout, which seems to be undesirable. Things are further complicated by the fact that the source files for the figures have been lost and would have to be redone with pic. Hence, it has been decided to translate the whole sources to LaTeX in order to be able to fix the above problems more easily.

Of course, the result can still only be an approximation to the original. The fonts are those of the CM series of LaTeX and have different parameters than the original ones. For the spacing of equations, the standard mechanisms of LaTeX have been used, which are quite different from those of troff. Hence, it is nearly unavoidable that page breaks occur at different places than in the original book. Nevertheless, it has been made sure that all numbered items (theorems, equations) can be found on the same pages as in the original.

You are encouraged to report typos and other errors to Ingo Wegener by e-mail: [email protected]
Preface
When Goethe had fundamentally rewritten his IPHIGENIE AUF TAURIS eight years after its first publication, he stated (with resignation, or perhaps as an excuse or just an explanation) that, "Such a work is never actually finished: one has to declare it finished when one has done all that time and circumstances will allow." This is also my feeling after working on a book in a field of science which is so much in flux as the complexity of Boolean functions. On the one hand it is time to set down in a monograph the multiplicity of important new results; on the other hand new results are constantly being added.

I have tried to describe the latest state of research concerning results and methods. Apart from the classical circuit model and the parameters of complexity, circuit size and depth, providing the basis for sequential and for parallel computations, numerous other models have been analysed, among them monotone circuits, Boolean formulas, synchronous circuits, probabilistic circuits, programmable (universal) circuits, bounded depth circuits, parallel random access machines and branching programs. Relationships between various parameters of complexity and various models are studied, and also the relationships to the theory of complexity and uniform computation models.

The book may be used as the basis for lectures and, due to the inclusion of a multitude of new findings, also for seminar purposes. Numerous exercises provide the opportunity of practising the acquired methods. The book is essentially complete in itself, requiring only basic knowledge of computer science and mathematics. This book, I feel, should not just be read with interest but should encourage the reader to do further research. I do hope, therefore, to have written a book in accordance with Voltaire's statement, "The most useful books are those that make the reader want to add to them."

I should like to express my thanks to Annemarie Fellmann, who set up the manuscript, to Linda Stapleton for the careful reading of the text, and to Christa, whose complexity (in its extended definition, as the sum of all features and qualities) far exceeds the complexity of all Boolean functions.
Frankfurt a.M./Bielefeld, November 1986

Ingo Wegener

Contents
1. Introduction to the theory of Boolean functions and circuits 1
1.1 Introduction 1
1.2 Boolean functions, laws of computation, normal forms 3
1.3 Circuits and complexity measures 6
1.4 Circuits with bounded fan-out 10
1.5 Discussion 15
Exercises 19

2. The minimization of Boolean functions 22
2.1 Basic definitions 22
2.2 The computation of all prime implicants and reductions of the table of prime implicants 25
2.3 The minimization method of Karnaugh 29
2.4 The minimization of monotone functions 31
2.5 The complexity of minimizing 33
2.6 Discussion 35
Exercises 36

3. The design of efficient circuits for some fundamental functions 39
3.1 Addition and subtraction 39
3.2 Multiplication 51
3.3 Division 67
3.4 Symmetric functions 74
3.5 Storage access 76
3.6 Matrix product 78
3.7 Determinant 81
Exercises 83

4. Asymptotic results and universal circuits 87
4.1 The Shannon effect 87
4.2 Circuits over complete bases 88
4.3 Formulas over complete bases 93
4.4 The depth over complete bases 96
4.5 Monotone functions 98
4.6 The weak Shannon effect 106
4.7 Boolean sums and quadratic functions 107
4.8 Universal circuits 110
Exercises 117

5. Lower bounds on circuit complexity 119
5.1 Discussion on methods 119
5.2 2n-bounds by the elimination method 122
5.3 Lower bounds for some particular bases 125
5.4 2.5n-bounds for symmetric functions 127
5.5 A 3n-bound 133
5.6 Complexity theory and lower bounds on circuit complexity 138
Exercises 142

6. Monotone circuits 145
6.1 Introduction 145
6.2 Design of circuits for sorting and threshold functions 148
6.3 Lower bounds for threshold functions 154
6.4 Lower bounds for sorting and merging 158
6.5 Replacement rules 160
6.6 Boolean sums 163
6.7 Boolean convolution 168
6.8 Boolean matrix product 170
6.9 A generalized Boolean matrix product 173
6.10 Razborov's method 180
6.11 An exponential lower bound for clique functions 184
6.12 Other applications of Razborov's method 192
6.13 Negation is powerless for slice functions 195
6.14 Hard slices of NP-complete functions 203
6.15 Set circuits - a new model for proving lower bounds 207
Exercises 214

7. Relations between circuit size, formula size and depth 218
7.1 Formula size vs. depth 218
7.2 Circuit size vs. formula size and depth 221
7.3 Joint minimization of depth and circuit size, trade-offs 225
7.4 A trade-off result 229
Exercises 233

8. Formula size 235
8.1 Threshold-2 235
8.2 Design of efficient formulas for threshold-k 239
8.3 Efficient formulas for all threshold functions 243
8.4 The depth of symmetric functions 247
8.5 The Hodes and Specker method 249
8.6 The Fischer, Meyer and Paterson method 251
8.7 The Nechiporuk method 253
8.8 The Krapchenko method 258
Exercises 263

9. Circuits and other non uniform computation methods vs. Turing machines and other uniform computation models 267
9.1 Introduction 267
9.2 The simulation of Turing machines by circuits: time and size 271
9.3 The simulation of Turing machines by circuits: space and depth 277
9.4 The simulation of circuits by Turing machines with oracles 279
9.5 A characterization of languages with polynomial circuits 282
9.6 Circuits and probabilistic Turing machines 285
9.7 May NP-complete problems have polynomial circuits? 288
9.8 Uniform circuits 292
Exercises 294

10. Hierarchies, mass production and reductions 296
10.1 Hierarchies 296
10.2 Mass production 301
10.3 Reductions 306
Exercises 318

11. Bounded-depth circuits 320
11.1 Introduction 320
11.2 The design of bounded-depth circuits 321
11.3 An exponential lower bound for the parity function 325
11.4 The complexity of symmetric functions 332
11.5 Hierarchy results 337
Exercises 338

12. Synchronous, planar and probabilistic circuits 340
12.1 Synchronous circuits 340
12.2 Planar and VLSI-circuits 344
12.3 Probabilistic circuits 352
Exercises 359

13. PRAMs and WRAMs: Parallel random access machines 361
13.1 Introduction 361
13.2 Upper bounds by simulations 363
13.3 Lower bounds by simulations 368
13.4 The complexity of PRAMs 373
13.5 The complexity of PRAMs and WRAMs with small communication width 380
13.6 The complexity of WRAMs with polynomial resources 387
13.7 Properties of complexity measures for PRAMs and WRAMs 396
Exercises 411

14. Branching programs 414
14.1 The comparison of branching programs with other models of computation 414
14.2 The depth of branching programs 418
14.3 The size of branching programs 421
14.4 Read-once-only branching programs 423
14.5 Bounded-width branching programs 431
14.6 Hierarchies 436
Exercises 439

References 442

Index 455
1. INTRODUCTION TO THE THEORY OF BOOLEAN FUNCTIONS AND CIRCUITS
1.1 Introduction
Which of the following problems is easier to solve - the addition or the multiplication of two n-bit numbers? In general, people feel that additions are easier to perform, and indeed, people as well as our computers perform additions faster than multiplications. But this is not a satisfying answer to our question. Perhaps our multiplication method is not optimal. For a satisfying answer we have to present an algorithm for addition which is more efficient than any possible algorithm for multiplication.

We are interested in efficient algorithms (leading to upper bounds on the complexity of the problem) and also in arguments that certain problems cannot be solved efficiently (leading to lower bounds). If upper and lower bounds for a problem coincide, then we know the complexity of the problem.

Of course we have to agree on the measures of efficiency. Comparing two algorithms by examining the time someone spends on the two procedures is obviously not the right way. We only learn which algorithm is more adequate for the person in question at this time. Even different computers may lead to different results. We need fair criteria for the comparison of algorithms and problems. One criterion is usually not enough to take into account all the relevant aspects. For example, we have to understand that we are able to work only sequentially, i.e. one step at a time, while the hardware of computers has an arbitrary degree of parallelism. Nowadays one even constructs parallel computers consisting of many processors. So we distinguish between sequential and parallel algorithms.

The problems we consider are Boolean functions f : {0,1}^n → {0,1}^m. There is no loss of generality if we encode all information by the binary alphabet {0,1}. But we point out that we investigate finite functions; the number of possible inputs as well as the number of possible outputs is finite. Obviously, all these functions are computable.
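To make the circuit view of addition concrete, the following Python sketch (ours, not taken from the book; the function names are invented for illustration) adds two n-bit numbers using only the Boolean operations AND, OR and XOR, exactly as a ripple-carry circuit of basic gates would:

```python
# Ripple-carry addition built only from Boolean operations, mirroring
# a circuit of AND/OR/XOR gates. Bit lists are little-endian.

def full_adder(a, b, c):
    # One full-adder cell: sum bit and carry-out of three input bits.
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def ripple_add(x, y):
    # x, y: little-endian bit lists of equal length n; returns n+1 bits.
    carry = 0
    out = []
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

def to_bits(v, n):
    # Integer v as a little-endian list of n bits.
    return [(v >> i) & 1 for i in range(n)]

def to_int(bits):
    # Little-endian bit list back to an integer.
    return sum(b << i for i, b in enumerate(bits))
```

Each full-adder cell uses a constant number of gates, so this scheme costs O(n) gates; whether multiplication admits circuits of comparable size is exactly the kind of question studied in this book.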
In § 2 we introduce a rather general computation model, namely circuits. Circuits build a model for sequential computations as well as for parallel computations. Furthermore, this model is rather robust. For several other models we show that the complexity of Boolean functions in these models does not differ significantly from the circuit complexity. Considering circuits we do not take into account the specific technical and organizational details of a computer. Instead, we concentrate on the essential subjects.

The time we require for the computation of a particular function can be reduced in two entirely different ways, either using better computers or better algorithms. We would like to determine the complexity of a function independently of the stage of the development of technology. We only mention a universal time bound for electronic computers: for any basic step at least 5.6 · 10^{-33} seconds are needed (Simon (77)).

Boolean functions and their complexity have been investigated for a long time, at least since Shannon's (49) pioneering paper. The earlier papers of Shannon (38) and Riordan and Shannon (42) should also be cited. I have tried to mention the most relevant papers on the complexity of Boolean functions. In particular, I have also attempted to present results of papers written in Russian. Because of a lack of exchange, several results have been discovered independently in both "parts of the world".

There is a large number of textbooks on "logical design" and "switching circuits", like Caldwell (64), Edwards (73), Gumm and Poguntke (81), Hill and Peterson (81), Lee (78), Mendelson (82), Miller (79), Muroga (79), and Weyh (72). These books are essentially concerned with the minimization of Boolean functions in circuits with only two logical levels. We deal with this problem only briefly, in Ch. 2. The algebraical starting-point of Hotz (72) will not be continued here.
We develop the theory of the complexity of Boolean functions in the sense of the book by Savage (76) and the survey papers by Fischer (74), Harper and Savage (73), Paterson (76), and Wegener (84 a). As almost 60% of our more than 300 cited papers were published later than Savage's book, many results are presented for the first time in a textbook. The fact that more than 40% of the relevant papers on the complexity of Boolean functions were published in the eighties is a statistical argument for the claim that the importance of this subject has increased during the last years.

Most of the book is self-contained. Fundamental concepts of linear algebra, analysis, combinatorics, the theory of efficient algorithms (see Aho, Hopcroft and Ullman (74) or Knuth (81)) and complexity theory (see Garey and Johnson (79) or Paul (78)) will be applied.
1.2 Boolean functions, laws of computation, normal forms
By B_{n,m} we denote the set of Boolean functions f : {0,1}^n → {0,1}^m. B_n also stands for B_{n,1}. Furthermore we define the most important subclass of B_{n,m}, the class of monotone functions M_{n,m}. Again M_n = M_{n,1}.
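These classes can be made concrete with a small sketch (ours, not taken from the book): a function f ∈ B_n is just a truth table with 2^n entries, and f is monotone in the usual sense that x ≤ y componentwise implies f(x) ≤ f(y). It suffices to check single-bit 0 → 1 flips, since they generate the componentwise order:

```python
from itertools import product

def is_monotone(f, n):
    # f: a function taking an n-tuple of bits and returning 0 or 1.
    # Checks monotonicity by comparing each input with all inputs
    # obtained by flipping one bit from 0 to 1; any chain x <= y
    # decomposes into such single-bit steps.
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if f(x) > f(y):
                    return False
    return True
```

For example, AND and OR are in M_2, while parity (XOR) is not, since flipping a bit can turn the output from 1 to 0. The check runs over all n · 2^n flip pairs, which is fine for the small n used in examples.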