
Recursive Decomposition of Sparse Incompletely-Specified Functions

Robert Brayton and Alan Mishchenko
EECS Dept., UC Berkeley

Abstract

A sparse function is one with only a few onset and offset minterms, compared to the entire input space. This paper examines effective use of the don't cares to synthesize a small logic network. An algorithm is proposed, implemented, and tested on known functions where only a small sample of care minterms is given. The algorithm is top-down recursive, looking for decompositions based on two-input Boolean functions and multiplexors.

I. Introduction

We examine the problem of implementing a multi-level multi-output logic function in the form of a completely specified Boolean network, when only a small sample of its onset and offset minterms is given. The goal is to find a relatively small network that distinguishes between the given onset and offset minterms. Minterms of the Boolean input space outside of the sample set are treated as don't cares. The challenge of sparse functions is that there is an enormous number of don't cares. Examples of applications are in the domain of character recognition, image reconstruction, and, in general, learning the smallest logic function consistent with a given training set.

Occam's Razor states that "the simpler explanations for available observed data have the greater predictive power", so it serves as a motivation for finding a simple network. In this work, a simpler explanation is a smaller Boolean network that distinguishes the provided data. In another sense, we want to generalize from a given set of examples. Since this problem is known to be NP-hard [Garey and Johnson '79], our approach is heuristic. In addition, we restrict our selection of nodes to two-input functions and multiplexors, which can be further optimized by standard logic synthesis tools.

We develop an algorithm which produces such a network and measure its efficacy for the case when one known good network is available. In our experiments, completely-specified Boolean networks are sampled for a given percentage of their onset and offset minterms for each output, i.e. we are given a number of incompletely-specified Boolean functions (ISFs) $\{(f_i, g_i)\}$, one for each output. In this work, $(f_i, g_i)$ is given as a pair of truth tables. Our algorithm produces a generalization network which, for each output $i$, computes $h_i(x)$ as a function of the inputs $x$, satisfying $f_i(x) \subseteq h_i(x) \subseteq \overline{g_i}(x)$. This is compared to the original completely-specified network after both have been synthesized by ABC. The comparison is in terms of the number of AIG nodes.

A viable algorithm should produce an AIG count at least as good as that of the "golden model", because the golden model establishes the existence of at least one circuit with that count, while the don't cares provide additional flexibility for circuit generalization. An example is the alu4 benchmark from mcnc91, which has 14 inputs and 8 outputs. If the initial completely-specified circuit is optimized by iterating the ABC command dc2, it can be reduced to 631 AIG nodes. On the other hand, our algorithm produces a circuit, based on a 10% sampling of the onset and offset minterms of the 8 outputs, which has only 429 nodes after iterative synthesis with command dc2 in ABC.

The problem of synthesizing incompletely-specified functions with many don't cares has been the topic of previous research. In particular, [Chang et al '10] proposes an elaborate portfolio solution and an in-depth discussion of industrial applications.
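To make the problem setup concrete, the following sketch shows one way to represent an ISF and check the generalization condition $f_i \subseteq h_i \subseteq \overline{g_i}$. This is our illustration, not the authors' code; truth tables are packed into Python integer bitmasks, and all helper names are hypothetical.

    # Sketch: truth tables as integer bitmasks over the 2^n minterms
    # (hypothetical helpers; not the implementation described in the paper).

    def minterms_to_mask(minterms, n):
        """Pack a set of minterm indices (0 .. 2^n - 1) into one integer."""
        mask = 0
        for m in minterms:
            mask |= 1 << m
        return mask

    def is_well_formed_isf(f, g):
        """The onset and offset of an ISF must be disjoint."""
        return f & g == 0

    def generalizes(f, g, h):
        """Check f subset-of h, and h disjoint from the offset g."""
        return (f & ~h) == 0 and (h & g) == 0

    # Usage: n = 2, onset = {3}, offset = {0}; h = x0 AND x1 is one answer.
    f = minterms_to_mask({3}, 2)
    g = minterms_to_mask({0}, 2)
    h = minterms_to_mask({3}, 2)
    assert is_well_formed_isf(f, g) and generalizes(f, g, h)

Each don't-care minterm is a free choice, so an ISF with $d$ don't cares admits $2^d$ valid implementations; the algorithm below searches this space heuristically.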
II. The Algorithm

Our algorithm decomposes a sparse ISF into a network of two-input ANDs, ORs, XORs, and MUXes. At each step it tries to use the don't cares in a helpful way, while at the same time preserving as many don't cares as possible for later use in the recursion. It can also choose to implement the complement of an ISF if that is estimated to give better results. The input is an ISF for each output, given as a pair (ON, OFF) of truth tables or sets of minterms. Since the input is specified using a set of minterms, we restrict the number of inputs to 16 in our current implementation, which was done in Python. The pseudo-code of the algorithm is given in Fig. 1.

    decomp_network({(f_i, g_i)}):
        Net = []
        for each i:
            R = decomp(f_i, g_i)
            Verify(R, (f_i, g_i))
            Net = Net + R
        return dc2(Net)

    decomp(f, g):
        a. fm = min(f, g)
        b. gm = min(g, f)
        c. sign = (|fm| <= |gm|)
        d. if not sign: (f, g) = (g, f)
        e. v_m = choose_var_cof(f, g)
        f. v_x = choose_lit_xor(f, g)
        g. mx = estimate_size('mux', v_m, (f, g))
        h. ex = estimate_size('xor', v_x, (f, g))
        i. fx = estimate_size('factor', f)
        j. decomp_type = select(mx, ex, fx, ('mux', 'xor', 'factor'))
        k. if decomp_type == 'mux':
               x = v_m
               N = x decomp(f_x, g_x) + xbar decomp(f_xbar, g_xbar)
        l. elif decomp_type == 'xor':
               x = v_x
               N = x xor decomp(x xor (f, g))
        m. else:
               if sign: N = fm
               else:    N = gm
        n. if sign: return N
           else:    return ~N

    Figure 1: Recursive function to decompose an ISF.

The algorithm decomp_network works top-down, recursively calling decomp$(f_i, g_i)$ to obtain a decomposition tree for each output given the ISF $(f_i, g_i)$. These trees are accumulated into a single multi-output network Net.

At each call to decomp, a pair $(f, g)$ is given, either as a pair of minterm sets or as a pair of truth tables. The first step is to estimate whether it would be better to implement the onset or the offset (lines a-d), so at each step we can implement either the function (sign = '+', see Fig. 2 below) or its complement (sign = '-'). This is done by quickly estimating the decomposition of $(f, g)$ and $(g, f)$ separately and choosing the "simplest". Then it is tested whether a MUX or XOR at the top could lead to a better decomposition, or a simple algebraic factoring (lines e-j). For the first two, we need to select a best variable (pivot). For a MUX the ISF is

$(f, g) = x(f_x, g_x) + \bar{x}(f_{\bar{x}}, g_{\bar{x}})$,

so we recur by calling decomp on the two ISFs $(f_x, g_x)$ and $(f_{\bar{x}}, g_{\bar{x}})$. For XOR, we use the fact that

$(f, g) = x \oplus (x \oplus (f, g)) = x \oplus (x(g, f) + \bar{x}(f, g))$,

so we recur by calling decomp on the single ISF $(xg + \bar{x}f,\ xf + \bar{x}g)$, using the fact that $\overline{(f, g)} = (g, f)$ for an ISF.

For making these choices, we need to estimate which of the inputs would be the best variable for cofactoring or for XORing. As a measure of which to choose, each variable $x$ is tested by measuring how well the SOPs created by the associated decomposition can be factored if $x$ is chosen. The SOPs are produced using the Espresso minimizer or a recursive algorithm for Irredundant Sum-Of-Products (ISOP) computation [Minato '00]. Each of these algorithms is applied to $(f_x, g_x)$ and $(f_{\bar{x}}, g_{\bar{x}})$ in the case of a MUX, or to $(xg + \bar{x}f, xf + \bar{x}g)$ in the case of an XOR; the best result is chosen (lines e-f) and compared against a simple algebraic factoring of the SOP for the given ISF (line j). If the multiplexor wins, we recur by applying decomp to $(f_x, g_x)$ as well as $(f_{\bar{x}}, g_{\bar{x}})$ (line k). If the XOR wins, we recur by applying decomp to $x \oplus (f, g)$ (line l). Otherwise, the recursion terminates by returning the factoring of the minimized $(f, g)$ (line m). Finally, the variable sign determines whether the resulting network is complemented (line n).

Minimization of an ISF is done with both Espresso and ISOP, both of which can use don't cares. Which is applied depends on whether the minimization is being used just for estimation (ISOP only, for speed), e.g. in choosing the pivot variable $x$ for the multiplexor or XOR, or for choosing whether to implement the complement of the ISF (both ISOP and Espresso are used, to get a better estimate).
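To make the two recursion steps concrete, here is a small sketch of the MUX and XOR transformations operating on explicit minterm sets. This is our re-implementation of the two equations above, not the paper's code; a minterm is an integer whose bit $i$ is the value of input $x_i$, and the helper names are ours.

    # Our sketch of the two ISF transformations used by decomp.

    def bit(m, i):
        return (m >> i) & 1

    def cofactor_isf(f, g, i):
        """MUX step: (f, g) = x_i (f_x, g_x) + xbar_i (f_xbar, g_xbar)."""
        f_x    = {m for m in f if bit(m, i) == 1}
        g_x    = {m for m in g if bit(m, i) == 1}
        f_xbar = {m for m in f if bit(m, i) == 0}
        g_xbar = {m for m in g if bit(m, i) == 0}
        return (f_x, g_x), (f_xbar, g_xbar)

    def xor_isf(f, g, i):
        """XOR step: x_i xor (f, g) = (x g + xbar f, x f + xbar g)."""
        on  = {m for m in g if bit(m, i) == 1} | {m for m in f if bit(m, i) == 0}
        off = {m for m in f if bit(m, i) == 1} | {m for m in g if bit(m, i) == 0}
        return on, off

Note that both transformations only split or relabel the care minterms: a minterm lying in neither f nor g stays outside both result sets, which is how the recursion preserves don't cares for later use.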
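The control flow of decomp can also be sketched in miniature. The skeleton below keeps only the Shannon (MUX) branch of Fig. 1, with constants as terminals and a trivial pivot choice in place of the factoring-based estimates the paper uses; it relies on cofactor_isf from the accompanying sketch, and all of it is our illustration rather than the authors' implementation.

    # Stripped-down skeleton of decomp: Shannon branch only (no XOR step,
    # no sign estimation, no SOP factoring).

    def decomp_sketch(f, g, variables):
        if not f:                  # no onset minterms left: constant 0 works
            return ('const', 0)
        if not g:                  # no offset minterms left: constant 1 works
            return ('const', 1)
        assert variables, "a well-formed ISF (f and g disjoint) ends earlier"
        pivot = variables[0]       # the paper instead estimates factored SOPs
        rest = variables[1:]
        (f1, g1), (f0, g0) = cofactor_isf(f, g, pivot)
        return ('mux', pivot,
                decomp_sketch(f1, g1, rest),
                decomp_sketch(f0, g0, rest))

    # Example: onset {3}, offset {0} over inputs x0, x1; the don't cares at
    # minterms 1 and 2 let the recursion collapse to the single literal x0:
    # ('mux', 0, ('const', 1), ('const', 0)).
    print(decomp_sketch({3}, {0}, [0, 1]))

The full algorithm additionally weighs a MUX against an XOR and a direct factoring at every node, and may first complement the ISF, as described above.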
As an example of the kind of decomposition possible, we consider the benchmark alu4 in the mcnc91 benchmark set (which has 14 inputs and 8 outputs) and consider part of the decomposition tree of the second output (PO = 1). The decomposition of that part is printed by the algorithm as follows:

    [[-13, 'xor', '+', '+'],
     [[9, 'shannon', '+'],
      ['+', ['-------------0', '-----------0--', '----------0-1-']],
      ['-', ['----------1-1-', '0-------1-----', '----1-----0---']]]]

    Figure 2: A sample of part of a decomposition produced by the algorithm.

This represents the function

$x_{13} \oplus [x_9(x_{13} + x_{11} + x_{10}x_{12}) + \bar{x}_9(x_{10}x_{12} + x_0x_8 + x_4x_{12})]$.

The example illustrates choosing an XOR decomposition at the top level using the pivot literal of variable $x_{13}$ (printed as -13). The initial ISF $(f, g)$ is not complemented, and the sub-call to decompose $(xg + \bar{x}f, xf + \bar{x}g)$ is also not complemented ('+', '+').

III. Related Work

In general, if we decompose with a two-input function at the top, e.g. an AND, the two inputs provide don't cares for each other. For an AND, when one input is 0, the other is a don't care, and vice versa. In general, this gives rise to a Boolean relation. For example, for an AND, if the output is to implement an ISF $(f, g)$ = (onset, offset), and the two inputs of the AND are $(u, v)$, then the Boolean relation is $fuv + g(\bar{u} + \bar{v}) + \bar{f}\bar{g}$. In tabular form, the Boolean relations for AND, XOR,
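As a small sanity check of the AND relation above (our code, not from the paper), one can enumerate every combination of the output's care status and the two fanin values, and confirm that $fuv + g(\bar{u} + \bar{v}) + \bar{f}\bar{g}$ is 1 exactly for the acceptable fanin pairs:

    # Pointwise check of the AND Boolean relation
    # R = f u v + g (ubar + vbar) + fbar gbar over 0/1 values.

    def relation(f, g, u, v):
        return (f & u & v) | (g & ((1 - u) | (1 - v))) | ((1 - f) & (1 - g))

    for f, g in [(1, 0), (0, 1), (0, 0)]:  # onset point, offset point, don't care
        for u in (0, 1):
            for v in (0, 1):
                # The AND of (u, v) must be 1 on the onset, 0 on the offset,
                # and is unconstrained on a don't care.
                ok = ((u & v) == 1) if f else ((u & v) == 0) if g else True
                assert relation(f, g, u, v) == int(ok)
    print("relation matches the acceptable fanin pairs")

On the don't-care points (f = g = 0) every fanin pair is allowed, which is exactly the freedom that such a decomposition passes down to the two fanin functions.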