(Lec 3) Binary Decision Diagrams: Representation

^ What you know
  X Lots of useful, advanced techniques from Boolean algebra
  X Lots of cofactor-related manipulations
  X A little bit of computational strategy
  X Cubelists, positional cube notation
  X Unate recursive paradigm
^ What you don’t know
  X The “right” data structure for dealing with Boolean functions: BDDs
  X Properties of BDDs: graph representation of a Boolean function, canonical representation
  X Efficient algorithms for creating, manipulating BDDs
  X Again based on recursive divide & conquer strategy
(Thanks to Randy Bryant for nice BDD pics + slides)
© R. Rutenbar 2001, CMU 18-760, Fall 2001 1
Copyright Notice
© Rob A. Rutenbar, 2001 All rights reserved. You may not make copies of this material in any form without my express permission.
Handouts
^ Physical
  X Lecture 03 -- BDDs: Representation
  X Paper: Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams, ACM Computing Surveys, Sept 1992
^ Electronic
  X Nothing today
^ Reminder
  X HW1 is due Thu in class
Where Are We?
^ Still doing Boolean background, now focused on data structures
Week 1 (Aug 27-31): Introduction
Week 2 (Sep 3-7): Advanced Boolean algebra
Week 3 (Sep 10-14): JAVA Review
Week 4 (Sep 17-21): Formal verification
Week 5 (Sep 24-28): 2-Level logic synthesis
Week 6 (Oct 1-5): Multi-level logic synthesis
Week 7 (Oct 8-12): Technology mapping
Week 8 (Oct 15-19): Placement
Week 9 (Oct 22-26): Routing
Week 10 (Oct 29-Nov 2): Static timing analysis
Week 11 (Nov 5-9):
Week 12 (Nov 12-16): Electrical timing analysis
Week 13 (Nov 19-23): Geometric data structs & apps (Thanksgiving week)
Weeks 14-16 (Nov 26-Dec 14):

Readings
^ In De Micheli book
  X pp 75-85 does BDDs, but not in as much depth as the notes
^ Randy Bryant paper
  X Symbolic Boolean Manipulation with Ordered Binary Decision Diagrams, ACM Computing Surveys, Sept 1992
  X Lots more detail (some of it you don’t need just yet) but very complete, if a bit terse
BDD History
^ A little history...
  X Original idea for Binary Decision Diagrams due to Lee (1959) and Akers (1978)
  X Critical refinement -- Ordered BDDs -- due to Bryant (1986)
  X Refinement imposes some restrictions on structure
  X Restrictions needed to make the result a canonical representation
^ A little terminology
  X A BDD is a directed acyclic graph
  X Graph: vertices connected by edges
  X Directed: edges have direction (draw them with an arrow)
  X Acyclic: no cycles possible by following arrows in graph
  X Often see this shortened to “DAG”
Graphs
^ DAGs -- a reminder of some technicalities...

[Figure: three examples -- a graph (vertices + edges), a directed graph (directed edges, but not acyclic), and a directed acyclic graph]
  X Note that a “loop” is not a directed cycle; you are only allowed to follow edges along the direction that the arrow points
Binary Decision Diagrams
^ Big Idea #1: Binary Decision Diagram
  X Turn a truth table for the Boolean function into a decision diagram
  X Vertices = decisions on variables
  X Edges = variable values (0 and 1)
  X Leaf nodes = function values
  X In simplest case, resulting graph is just a tree
^ Aside
  X Convention is that we don’t actually draw arrows on the edges in the DAG representing a decision diagram
  X Everybody knows which way they point, implicitly
  X Point from parent to child in the decision tree
^ Look at a simple example...

Binary Decision Diagrams
Truth Table / Decision Tree

x1 x2 x3 | f
 0  0  0 | 0
 0  0  1 | 0
 0  1  0 | 0
 0  1  1 | 1
 1  0  0 | 0
 1  0  1 | 1
 1  1  0 | 0
 1  1  1 | 1

[Figure: the corresponding decision tree -- x1 at the root, x2 nodes below it, x3 nodes below those, and 0/1 leaves]

• Vertex represents a decision
• Follow green (dashed) line for value 0
• Follow red (solid) line for value 1
• Function value determined by leaf value
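The truth-table-to-decision-tree construction above can be sketched in a few lines of Python (all names here are mine, not the course’s):

```python
# Build a full decision tree from a truth table: each internal node tests
# one variable; leaves hold the 0/1 function value.

def build_tree(rows, level=0, nvars=3):
    """rows: the output column of the truth table, in row order."""
    if level == nvars:
        return rows[0]                                  # leaf: constant 0 or 1
    half = len(rows) // 2
    lo = build_tree(rows[:half], level + 1, nvars)      # this var = 0 branch
    hi = build_tree(rows[half:], level + 1, nvars)      # this var = 1 branch
    return ('x%d' % (level + 1), lo, hi)

def evaluate(node, assignment):
    """Walk down: follow lo for value 0, hi for value 1; return leaf value."""
    while isinstance(node, tuple):
        var, lo, hi = node
        node = hi if assignment[var] else lo
    return node

# f from the slide: f = 1 on rows 011, 101, 111
tree = build_tree([0, 0, 0, 1, 0, 1, 0, 1])
print(evaluate(tree, {'x1': 0, 'x2': 1, 'x3': 1}))   # row 011 -> 1
```

Each path from the root to a leaf corresponds to exactly one row of the truth table, which is why the tree (before any reduction) is exactly as large as the table.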
Binary Decision Diagrams
^ Some terminology
  X A ‘variable’ vertex: an internal node labeled with a variable (here x1, x2, x3)
  X The ‘lo’ pointer: points to the ‘lo’ son or child of the vertex
  X The ‘hi’ pointer: points to the ‘hi’ son or child of the vertex
  X A ‘constant’ vertex: at the bottom leaves of the tree (0 or 1)
  X The ‘variable ordering’: the order in which decisions about vars are made. Here, it’s x1, x2, x3
Ordering
^ Note: Different variable orders are possible
[Figure: two decision trees for the same function. In the first, the order below x1 is x2 then x3. In the second, the subtrees below x1 test x3 then x2, i.e. the order is x1, x3, x2]
Binary Decision Diagrams
^ Observations
  X Each path from root to leaf traverses variables in some order
  X Each such path constitutes a row of the truth table, i.e., a decision about what the output is when the vars take particular values
  X But we have not yet specified anything about the order of decisions
  X This decision diagram is not canonical for this function
^ Reminder: canonical forms
  X Representation that does not depend on the logic gate implementation of a Boolean function
  X Same function (i.e., truth table) of same vars always produces this exact same representation
  X Example: a truth table is canonical; a minterm list, for our function f = Σ m(3,5,7), is canonical
Binary Decision Diagrams
^ What’s wrong with this representation?
  X It’s not canonical
  X Way too big to be useful
  X ...in fact it’s every bit as big as a truth table: 1 leaf per row
^ Big idea #2: Ordering
  X Restrict global ordering of variables
  X Means: variables must appear in the same order along every path
^ Note
  X It’s OK to omit a variable if you don’t need to check it to decide which leaf node to reach for the final value of the function
Total Ordering
^ Assign arbitrary total ordering to variables
  X x1 < x2 < x3
  X Variables must appear in this specific order along all paths

[Figure: an OK diagram with x1 above x2 above x3 on every path, versus Not OK diagrams where x1 appears below x3 or below x2]
^ Properties
  X No conflicting variable assignments along a path (see each var at most once walking down the path)
  X Simplifies manipulation
Binary Decision Diagrams
^ OK, now what’s wrong with it?
  X Variable ordering simplifies things...
  X ...but representation still too big
  X ...and still not necessarily canonical

[Figure: the original decision diagram for f next to an equivalent but different ordered diagram for the same function -- ordering alone does not give canonicity]
Binary Decision Diagrams
^ Big Idea #3: Reduction
  X Identify redundancies in the DAG that can remove unnecessary nodes and edges
  X Removal of the x2 node and its children, and replacement with an x3 node, is an example of this sort of reduction
^ Why are we doing this?
  X To combat the size problem: want DAGs as small as possible
  X To achieve canonical form: for the same function, given a total variable order, want there to be exactly one graph that represents this function
Reduction Rules
^ Reduction Rule 1: Merge equivalent leaves

[Figure: several leaves all labeled ‘a’ collapse into a single ‘a’ leaf]

  X ‘a’ is either a constant 1 or constant 0 here
  X Just keep one copy of the leaf node
  X Redirect all edges that went into the redundant leaves into this one kept node
Reduction Rules
^ Apply Rule 1 to our example...
[Figure: the full decision tree with eight 0/1 leaves becomes a DAG with a single 0 leaf and a single 1 leaf]
Reduction Rules
^ Reduction Rule 2: Merge isomorphic nodes

[Figure: two ‘x’ nodes with the same children y and z merge into one]

  X Isomorphic: means 2 nodes with the same var and identical children
  X You cannot tell these nodes apart from how they contribute to decisions as you descend thru the DAG
  X Note: means exact same physical child nodes, not just children with same labels
  X Remove the redundant node (extra ‘x’ node here)
  X Redirect all edges that went into the redundant node into the one copy that you kept (edges into the right ‘x’ node now go into the left as well)
Reduction Rules
^ Apply Rule 2 to our example
[Figure: applying Rule 2 merges the isomorphic x3 nodes with lo = 0, hi = 1 into a single shared x3 node]
Reduction Rules
^ Reduction Rule #3: Eliminate Redundant Tests

[Figure: an ‘x’ node whose lo and hi edges both go to the same ‘y’ node is removed; edges into x now go directly to y]

  X Test: means a variable node here...
  X It’s redundant since both of its children go to the same node...
  X ...so we don’t care what value the x node takes in this diagram
  X Remove the redundant node
  X Redirect all edges into the redundant node (x) into the one child node (y) of the removed node
Reduction Rules
^ Apply Rule #3 to our example
[Figure: the x3 node with both children 0, and the x2 node with both edges to the shared x3 node, are removed; the final ROBDD has one x1 node, one x2 node, one x3 node, and leaves 0, 1]
Binary Decision Diagrams
^ How to apply the rules?
  X For now, just iteratively: keep trying to find places where the rules “match” and do the reduction
  X When you can’t find any more matches, the graph is reduced
^ Is this how programs really do it?
  X Nope, there’s some magic one can do with a clever hash table, but more about that later, when we start doing algorithms to manipulate BDDs
  X Roughly speaking, in real programs you build the BDDs correctly on the fly -- you never build a bad, noncanonical one and then try to fix it
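All three rules can be applied in one bottom-up pass with exactly the kind of hash table hinted at above. A sketch (my names, not the course’s code): a dict keyed on (var, lo, hi) merges equal leaves and isomorphic nodes (Rules 1 and 2), and a lo == hi check eliminates redundant tests (Rule 3).

```python
# Reduce a decision tree (nested tuples, leaves 0/1) into an ROBDD.

unique_table = {}   # (var, lo, hi) -> the one shared node with that shape

def make_node(var, lo, hi):
    if lo == hi:                                  # Rule 3: redundant test
        return lo
    key = (var, lo, hi)
    return unique_table.setdefault(key, key)      # Rules 1/2: merge duplicates

def reduce_tree(node):
    if node in (0, 1):                            # constant leaf
        return node
    var, lo, hi = node
    return make_node(var, reduce_tree(lo), reduce_tree(hi))

# Full decision tree for f = Σ m(3,5,7) = (x1 + x2)·x3 from the example
tree = ('x1',
        ('x2', ('x3', 0, 0), ('x3', 0, 1)),
        ('x2', ('x3', 0, 1), ('x3', 0, 1)))
robdd = reduce_tree(tree)
print(len(unique_table))   # 3: one x1 node, one x2 node, one x3 node survive
```

The eight-leaf tree collapses to three internal nodes plus the 0 and 1 leaves, matching the reduced diagram built by hand on the previous slides.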
BDDs: Big Results
^ Recap: what did we do?
  X Start with any old BDD
  X ...ordered the variables => Ordered BDD (OBDD)
  X ...reduced the DAG => Reduced Ordered BDD (ROBDD)
^ Big result: ROBDD is a canonical form for a Boolean function
  X Same function always generates exactly the same DAG...
  X ...for a given variable ordering
^ Two functions are identical if and only if their ROBDD DAGs are isomorphic
  X i.e., they are identically the same graph
  X Nice property to have: simplest form of DAG is canonical
BDDs: Representing Simple Things
^ Note: can represent any function as a ROBDD
  X The ROBDD for the function f(x1, x2, ..., xn) = 0 is just the single leaf 0: the unique unsatisfiable function
  X The ROBDD for the function f(x1, x2, ..., xn) = 1 is just the single leaf 1: the unique tautology function
  X The ROBDD for the function f(x1, ..., x, ..., xn) = x is a single x node with lo -> 0 and hi -> 1: treat the variable as a function
Binary Decision Diagrams
^ Assume variable order is x1, x2, x3, x4

[Figure: two example ROBDDs. Typical function (x1 + x2)·x4: no vertex labeled x3 -- the representation is independent of x3 -- and many subgraphs are shared. Odd parity: linear-size BDD]
Sharing in BDDs
^ Technical aside
  X Every node in a BDD (in addition to the root) represents some Boolean function in a canonical way

[Figure: in the (x1 + x2)·x4 BDD, internal nodes represent the subfunctions x2·x4 and x4; in the parity BDD x1 ⊕ x2 ⊕ x3 ⊕ x4, nodes represent x3 ⊕ x4 and (x3 ⊕ x4)’]
X BDDs are incredibly good at extracting and representing this kind of sharing of subfunctions in subgraphs
BDD Applications
^ Aside: some nice, immediate applications
  X Tautology checking
  X Was complex with the cubelist representation: required a divide & conquer algorithm, lots of manipulation
  X With BDDs, it’s trivial: just see if the BDD for the function == 1
  X Satisfiability == can you find an assignment of 0s & 1s to the vars to make the function == 1?
  X No idea how to do it with cubelists
  X With BDDs, any path from the root to the 1 node is a solution
[Figure: example BDD with nodes x1, x2, x4 -- tracing any path from the root to the 1 leaf reads off a satisfying assignment for x1 x2 x3 x4 (worked in lecture)]
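The “any path to 1” idea is a few lines of code. A minimal sketch over tuple-encoded BDD nodes (the encoding and names are mine, not the course’s): variables skipped on the path are don’t-cares.

```python
# Satisfiability on a BDD: depth-first search for any path to the 1 leaf.

def find_sat(node, assignment=None):
    assignment = dict(assignment or {})
    if node == 1:
        return assignment                 # reached the 1 leaf: success
    if node == 0:
        return None                       # dead end
    var, lo, hi = node
    return (find_sat(lo, {**assignment, var: 0}) or
            find_sat(hi, {**assignment, var: 1}))

# f = (x1 + x2)·x3, reduced: the x3 node is shared by both branches of x1
n3 = ('x3', 0, 1)
bdd = ('x1', ('x2', 0, n3), n3)
print(find_sat(bdd))   # {'x1': 0, 'x2': 1, 'x3': 1}
```

Tautology checking is even simpler in this encoding: the function is a tautology iff the (canonical) BDD is the bare leaf 1.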
BDD Variable Ordering

^ Question: Does variable ordering matter? YES!
a1·b1 + a2·b2 + a3·b3

[Figure: two ROBDDs for this function. Good ordering a1 < b1 < a2 < b2 < a3 < b3: one node per variable, linear growth. Bad ordering a1 < a2 < a3 < b1 < b2 < b3: the node count doubles at each a level, exponential growth]
Variable Ordering: Consequences
^ Interesting problem
  X Some problems that are known to be exponentially hard to solve work out to be very easy on BDDs
  X Trouble is, they are only easy when the size of the BDD that represents the problem is “reasonable”
  X Some input problems make nice (small) BDDs, others make pathological (large) BDDs
  X No universal solution (or else we’d always be able to solve exponentially hard problems easily)
^ How to handle?
  X Variable ordering heuristics: make nice BDDs for reasonable probs
  X Basic characterization of which problems never make nice BDDs
Variable Ordering
^ Analogy to “bit-serial” computing useful here...

[Figure: a bit-serial processor reads inputs xn ... x2 x1 one bit at a time, keeps a K-bit memory, and outputs f(x1, x2, x3, ..., xn)]

^ Operation
  X Suppose this machine reads your function inputs 1 bit at a time...
  X ...i.e., in a certain variable order
  X Stores information about previous inputs to correctly deduce the function value from the remaining inputs
^ Relation to OBDD size
  X If this ‘machine’ requires K bits of memory at step i...
  X ...then the OBDD has ~2^K branches crossing level i
Variable Ordering: Example
a1·b1 + a2·b2 + a3·b3

[Figure: the good- and bad-ordering BDDs again; at level 3, 3 edges cross in the good ordering versus 8 edges in the bad ordering]
Variable Ordering: Intuition
^ Idea: Local Computability
  X Inputs that are closely related should be kept near each other in the variable order
  X Groups of inputs that can determine the function value by themselves should be close together

a1·b1 + a2·b2 + a3·b3

[Figure: the good vs bad ordering BDDs for this function -- keeping each ai next to its bi gives the small BDD]
Variable Ordering: Intuition

^ Idea: Power to control the output
  X The inputs that “greatly affect” the output should be early in the variable order
  X “Greatly affect” means almost always changes the output when this input changes
  X Example: multiplexer

[Figure: 2-to-1 multiplexer with data inputs D0, D1 and select S, out = S’·D0 + S·D1; compare order S < D0 < D1 against D1 < D0 < S]
Variable Ordering

^ What use is any of this? Suggests an ordering heuristic...
  X Suppose I have a logic network like this...

[Figure: a logic network with primary inputs x1...x5 feeding gates that produce one output]

  X Now, redraw to represent the circuit as a linear arrangement of its gates
  X Constraint: all the output-to-input wires go left-to-right in this order
  X Called a topological ordering

[Figure: the gates laid out in a line from source blocks x5, x4, x3, x2, x1 to the function output; the widest cut crosses w = 4 wires. Primary inputs represented by “source” blocks]
Variable Ordering

[Figure: the linear arrangement again -- source blocks x5, x4, x3, x2, x1, bandwidth w = 4]

^ Parameters
  X Number of primary inputs = n
  X “Bandwidth” = w = number of wires cut at the widest point
^ Useful result: size upper bound [Berman, IBM]
  X Can represent with an OBDD with <= n·2^w nodes
  X Order variables in reverse of the source block ordering
  X Means list vars right to left in the above picture...
Variable Ordering

^ Reasoning here goes like this...
  X All info about vars > i is encoded in w bits...
  X ...so at most 2^w distinct decisions, which bounds the number of branch destinations from levels < i to levels <= i

[Figure: the BDD drawn sideways from root (xn side) to leaves (x1 side); at the cut at level i there are <= 2^w branch destinations]
Variable Ordering

^ Linear circuit example: 4-bit adder sum, MSB
  X How to order vars for a simple 4-bit carry ripple adder, sum MSB?

[Figure: 4-bit ripple adder with inputs a3 b3 a2 b2 a1 b1 a0 b0 and output S3]

^ Answer: Use a nice property of our adder circuit
  X It has constant bandwidth => linear OBDD size

[Figure: linear arrangement of the adder with source block order a0, b0, a1, b1, a2, b2, a3, b3; every cut crosses 2 or 3 wires, so w = 3, feeding S3]
Aside: Variable Ordering

^ Generalization
  X Many carry chain circuits have constant bandwidth
  X Examples: comparators, priority encoders, ALUs
Variable Ordering Heuristics

^ Heuristic ordering methods
  X Take advantage of this “linear ordering” idea
  X Input: gate-level logic network we want to build a BDD for
  X Output: global variable ordering to use
  X Method: topological analysis, aka “walking” the network graph...

[Figure: an input netlist with primary inputs a, b, c, d, e; candidate orderings: b < a < d < c < e?  a < b < c < d < e?  e < d < c < b < a?]
Example: Dynamic Weight Assignment Heuristic

^ Concrete example: Minato’s heuristic
  X Pick a primary output; put a weight “1” there
  X For each gate with a weight on its output but not its inputs, “push” the weight thru to the inputs, dividing by the number of inputs. Each input gets equal weight
  X If there is fanout (one wire goes to >= 2 inputs) then ADD the weights to get the new weight for this wire
  X If there is more than 1 output, start with the one that has the deepest logic depth from the inputs
  X Continue till all primary inputs are labeled

[Figure: the example netlist with inputs a, b, c, d, e, before and after weight propagation]
Dynamic Weight Assignment

^ Minato’s heuristic (continued)
  X Pick the primary input with the biggest weight. Put it first in the var order
  X Erase the subcircuit (wires, input pins, entire gates if they have only one “active” pin left) that is reachable only from this primary input we selected
  X Go back and reassign the weights in the new, smaller circuit

[Figure: the example netlist annotated with fractional weights (1/4, 5/12, 1/6, 4/6, 1/2, ...) and the reduced circuit after the heaviest input is removed]
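The weight-propagation step is easy to sketch. The netlist below is made up (gate and net names are mine, not the slide’s exact circuit): each gate splits its output weight equally over its inputs, and weights meeting at a fanout net add up.

```python
# One round of Minato-style weight propagation.
# gates maps each gate's output net to the list of its input nets.

def propagate(gates, net, w, weight):
    weight[net] = weight.get(net, 0.0) + w   # fanout: weights on a net add
    if net in gates:                          # an internal gate output
        share = w / len(gates[net])           # split equally over inputs
        for inp in gates[net]:
            propagate(gates, inp, share, weight)

# Made-up example: g3 = OP(g1, g2), g1 = OP(a, b), g2 = OP(b, c)
gates = {'g3': ['g1', 'g2'], 'g1': ['a', 'b'], 'g2': ['b', 'c']}
weights = {}
propagate(gates, 'g3', 1.0, weights)          # weight 1 at the primary output

first = max(['a', 'b', 'c'], key=lambda n: weights[n])
print(first, weights['b'])   # 'b' wins: 0.25 from g1 + 0.25 from g2 = 0.5
```

Here b feeds both gates, so it accumulates the most weight and is picked first; a full implementation would then erase b’s fanout cone and repeat on the smaller circuit.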
Dynamic Weight Assignment
^ Just continue

[Figure: two more rounds of the assign-weights / pick-heaviest / erase loop on the shrinking example circuit]
Dynamic Weight Assignment
^ Minato’s method
  X Iteratively picks the next variable in the order using the simple weight propagation idea
  X Tries to order all vars starting from the “deepest” output
  X Deletes the ordered var, erases wires/gates, repeats till all ordered
^ How well does it work?
  X Fairly well. Very simple to do. Lots better than random order
  X OK complexity == O(#gates • #primary inputs)
^ Notes
  X There are other, better, more complex heuristics
  X Also, the ordering does NOT have to be static; it can change dynamically as the BDD is used
Variable Ordering Heuristics
^ Alternative: Suppose your network is a tree
  X Start at the output
  X Do a postorder traversal of the tree
  X Write down variables in the order visited by the tree walk
^ Remember postorder walk?
  X Visits the nodes, i.e., gates, in a deterministic order
  X Ignore primary inputs (for now)

    postorder (TreeNode) {
        if (TreeNode.TopChild != null)
            postorder( TreeNode.TopChild )
        if (TreeNode.BotChild != null)
            postorder( TreeNode.BotChild )
        write out TreeNode name
    }

[Figure: example tree of gates B1...B8 with primary inputs A through I; the root is B8. Nodes finished as:]
Variable Ordering Heuristics
^ In our case
  X Tree might not be binary -- not a big deal
  X Just use some consistent order for exploring the children nodes
  X Visits variables in reverse order
^ Why is this a good heuristic?
  X It makes a linear ordering of the ckt
  X Bandwidth is O(log N) for N blocks
  X OBDD size is O(N^2)

[Figure: the tree again; gates finish in the order B1 B2 B5 B3 B6 B4 B7 B8, and the inputs are ordered A, B, C, E, F, D, H, I, G from highest to lowest accordingly]
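The tree walk itself is a few lines of code. A runnable sketch, with the children lists read off the picture (and children visited top-to-bottom):

```python
# Postorder walk of the gate tree: children first, then the node itself.
# The finish order matches the slide: B1 B2 B5 B3 B6 B4 B7 B8.

def postorder(tree, node, out):
    for child in tree.get(node, []):   # leaves have no entry in the dict
        postorder(tree, child, out)
    out.append(node)                   # node finishes after its children

tree = {'B8': ['B5', 'B6', 'B7'],
        'B5': ['B1', 'B2'],
        'B6': ['B3'],
        'B7': ['B4']}
order = []
postorder(tree, 'B8', order)
print(order)   # ['B1', 'B2', 'B5', 'B3', 'B6', 'B4', 'B7', 'B8']
```

The variable order then comes from reading the primary inputs off each gate as it finishes, reversed, which is exactly the reverse-of-source-blocks rule from the Berman bound.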
Variable Ordering Heuristics
^ What if the network is not a tree?
  X More general, more common case
  X Some terminology: reconvergent fanout
  X When one input or intermediate output has multiple paths to the final network output, the fanout is called reconvergent
  X If you don’t have a tree, you have this

[Figure: the gate network B1...B8 with a net that fans out along two paths which reconverge at the output]
Variable Ordering Heuristics

^ For general logic networks
  X Still try to do a depth-first walk of the graph, output to inputs
  X Try to walk the graph like it was a tree, giving priority to nets that have multiple fanouts

[Figure: the non-tree network again; one resulting ordering is B < A < C < D < E < F < G < H < I]
Ordering: Results

Function Class    Best          Worst
Addition          linear        exponential
Symmetric         linear        quadratic
Multiplication    exponential   exponential
^ General Experience
  X Many tasks have reasonable OBDD representations
  X Algorithms remain practical for up to millions of OBDD nodes
  X Heuristic ordering methods are generally OK, though it may take effort to find a heuristic that works well for your problem
  X So-called dynamic variable ordering -- reordering your BDD vars as your BDD gets used, to improve the size -- is essential today
Binary Decision Diagrams
^ Variants and optimizations
  X Refinements to the OBDD representation
  X Do not change fundamental properties
^ Primary objective: reduce memory requirement
  X Critical resource
  X Constant factors matter
^ Secondary objective: improve algorithmic efficiency
  X Make commonly performed operations faster
^ Common optimizations
  X Share nodes among multiple functions
  X Negated arcs
Binary Decision Diagrams: Sharing
^ Sharing, revisited
  X We mentioned BDDs are good at representing shared subfunctions
  X Consider this example from a 4-bit adder: sum MSB and carry out

[Figure: BDDs for S3 (an XOR over a negative-logic carry chain) and Cout (a positive-logic carry chain), over a3 b3 a2 b2 a1 b1 a0 b0; near the bottom the two contain basically the same shared subfunction]
Sharing: Multi-rooted DAG

^ Don’t need to represent it twice
  X A BDD can have multiple ‘entry points’, or roots
  X Called a multi-rooted DAG
^ Recall
  X Every node in a BDD represents some Boolean function
  X This multi-rooting idea just explicitly exploits this to better share stuff

[Figure: S3 and Cout as 2 roots over one shared DAG, sharing the positive-logic carry chain]
Sharing: Multi-rooted DAG

^ Why stop at 2 roots?
  X For many collections of functions, there is considerable sharing
  X Idea is to minimize size, wrt several separate BDDs, by maximizing sharing
^ Example: Adders
  X Separately: 51 nodes for a 4-bit adder, 12,481 for a 64-bit adder -- quadratic growth
  X Shared: 31 nodes for a 4-bit adder, 571 nodes for a 64-bit adder -- linear growth

[Figure: all the adder outputs S0, S1, S2, S3, Cout as roots of a single shared DAG]
BDD Sharing: Issues

^ Storage model
  X Single, multi-rooted DAG
  X Function represented by a pointer to a node in the DAG
  X Be careful to apply reduction ops globally to keep everything canonical
  X Every time you create a new function, gotta go look in your big multi-rooted DAG to see if it already exists, inside, somewhere
^ Storage management
  X User cannot know when storage for a node can be freed
  X Must implement automatic garbage collection...
  X ...or not try to free any storage
  X Significantly more complex programming task
^ Algorithmic efficiency
  X Functions equivalent if and only if pointers equal: if (p1 == p2) ...
  X Can test in constant time
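A toy sketch of this storage model (names are mine): intern every (var, lo, hi) shape in one global table, so independently built BDDs for the same function come back as the very same object, and equivalence is a pointer compare.

```python
# Unique-table sketch: mk() returns the one shared node for each shape, so
# "f1 is f2" (pointer equality) is a constant-time equivalence test.

unique = {}

def mk(var, lo, hi):
    if lo == hi:                      # drop redundant tests on the fly
        return lo
    key = (var, id(lo), id(hi))       # children compared by identity
    if key not in unique:
        unique[key] = (var, lo, hi)
    return unique[key]

f1 = mk('a', mk('x', 0, 1), 1)        # a + x, built one way
f2 = mk('a', mk('x', 0, 1), 1)        # same function, built independently
print(f1 is f2)   # True
```

This is why real packages never "build then reduce": constructing every node through the unique table keeps the whole multi-rooted DAG canonical at all times.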
Optimization: Negation Arcs
^ Concept
  X Dot on an arc represents the complement operator
  X Inverts the function value of the BDD reachable “below the dot”
  X Can appear on an internal or external arc

[Figure: BDDs for a+b, ~(a+b), and a+~b -- the same a, b node structure, with negation dots on different arcs]
Canonical Form
^ Must have conventions for use of negative arcs
  X Express as a series of transformation rules
  X These are really nothing more than DeMorgan laws

Rule #1: No Double Negations
[Figure: two negation dots along one arc cancel]

Rule #2: No Negated Hi Pointers
[Figure: a negation dot on a node’s hi arc is removed by toggling the dots on its lo arc and on the arcs into the node]
Aside: Why Does This Work...?
^ Just like Shannon expansion, applied again
  X ...with prudent use of the basic DeMorgan laws

[Figure: derivation (worked in lecture) showing that the “no negated hi pointers” transformation preserves the function]
Aside: Why Does This Work...? (cont.)

^ Just like Shannon expansion, applied again

[Figure: continuation of the derivation for the “no negated hi pointers” rule]
Transformation Rules (Cont.)

Rule #3: No Negated Constants
[Figure: a negation dot into a constant leaf is replaced by an arc to the other constant, so only one of the 0/1 leaves needs a dot-free form]

Rule #4: No Hi Pointers to 0
[Figure: a node whose hi arc points to 0 is rewritten using negation dots so that hi arcs point toward 1]
Transformation Example

^ Example of applying the rules
  X Tends to get “nand-like” DAGs

[Figure: the BDD for ~a + ~b == ~(a·b) transformed step by step by the rules, ending as the a·b DAG with a negation dot on the root arc]
Negation Arc Examples

[Figure: with negation arcs -- odd parity over x1...x4 (linear, one node per variable), the MSB of sum S3, and all adder functions S0...S3, Cout as one shared multi-rooted DAG]
Effect of Negation Arcs
^ Storage savings
  X At most a 2X reduction in number of nodes
^ Aside: can people really do this “negation” thing in their heads by looking at a normal BDD?
  X Nope
  X Takes lots of practice even to be able to read these things
  X Just useful because of the 2X space efficiency
^ Algorithmic improvement
  X Can complement a function in constant time
Summary
^ OBDD
  X Reduced graph representation of a Boolean function
  X Canonical for a given variable ordering
^ Selecting a good variable ordering is critical
  X Minimize OBDD size
  X Circuit embeddings provide effective guidance
^ Variants and optimizations
  X Reduce storage requirements
  X Improve algorithmic efficiency
  X Complicate programming and debugging