Learning Combinational Logic

ECE 539 Final Project
Marcel Wermeckes
December 19th, 2001

Introduction

Combinational logic circuits are basic building blocks in every integrated circuit. Unlike their counterpart, sequential logic circuits, combinational logic circuits produce a defined output for every given input pattern. The full adder shown in Figure 1 is an example of a simple combinational circuit.

Figure 1

The circuit shown has three inputs (A, B and CIN) and two outputs (S and COUT). Given the two bits A and B and the carry-in CIN, the full adder adds these three values to form a sum (S) and a carry-out (COUT). An example output for the input A=0, B=0 and CIN=0 is also given in the figure. The behavior of a full adder can be described by two simple Boolean equations:

S = (A xor B) xor CIN
COUT = (A and B) or (CIN and (A xor B))

The reader should notice the importance of the parentheses in these equations. By reusing the output of the first XOR gate in the circuit (the shared A xor B term), the number of gates can be minimized.
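To make the shared-term structure concrete, here is a minimal C++ sketch (my own illustration, not part of the project code) that evaluates both equations for all eight input combinations and prints the resulting truth table:

#include <iostream>

int main() {
    // Enumerate all 2^3 combinations of A, B and CIN.
    for (int i = 0; i < 8; ++i) {
        bool A   = (i >> 2) & 1;
        bool B   = (i >> 1) & 1;
        bool CIN = i & 1;

        bool axb  = A ^ B;                    // shared XOR term, computed once,
                                              // mirroring the reused gate
        bool S    = axb ^ CIN;                // S = (A xor B) xor CIN
        bool COUT = (A && B) || (CIN && axb); // COUT = (A and B) or (CIN and (A xor B))

        std::cout << A << " " << B << " " << CIN
                  << "  ->  S=" << S << "  COUT=" << COUT << "\n";
    }
    return 0;
}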

Neural networks are similar to combinational logic in that they also produce specific outputs for specific inputs. The big difference is that a neural network can be trained and is therefore dynamic: its output values for given inputs can change every time it is retrained. Combinational logic output, on the other hand, is fixed. Once a combinational logic circuit has been built, its I/O mapping cannot be changed; a given input vector will always generate the same output once the circuit has stabilized. However, whereas a neural network might not always give the intended output, a combinational logic circuit does not fail.

Motivation

The motivation for this project came from my interest in digital logic. The textbook gives an example of an XOR gate in Chapter 4.5, where it is shown that an elementary (single-layer) perceptron model cannot classify input patterns that are not linearly separable, such as those of an XOR gate. My idea was to build a neural network based on an arbitrary set of Boolean equations. What kind of model would be needed to learn the behavior of a combinational logic circuit? How would the neural network have to be trained? Would the neural network reach the same accuracy as the combinational logic? I was curious to find a good model and training parameters to successfully implement a network that could be retrained and dynamically change its logic behavior through user input.

It is uncontested that this project serves more of an academic than a practical purpose: instead of building a complex neural network, a simple programmable look-up table could serve the same purpose.

Therefore, for combinational logic circuits of only moderate size it would be much easier to hard-code the I/O mapping in the form of a look-up table. For larger circuits, however, a compact neural network might in fact use fewer computer resources to find the right output pattern for a given input. Since exhaustive testing can easily be done for these kinds of neural networks, a network with a 100% success rate must be the goal. Depending on the logic of the circuit, one could start with a more complex network structure and then remove complexity/redundancy; the network can be simplified as long as the classification rate over all possible inputs of the logic circuit remains 100%.
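As a point of comparison, a look-up-table version of the full adder might look like the following sketch (the packing scheme and names are my own illustration):

#include <iostream>
#include <vector>

int main() {
    const int numInputs   = 3;               // e.g. A, B and CIN of the full adder
    const int numPatterns = 1 << numInputs;  // 2^3 = 8 table rows

    // One entry per input pattern; each entry packs both outputs
    // into an int (bit 1 = COUT, bit 0 = S).
    std::vector<int> table(numPatterns);
    for (int i = 0; i < numPatterns; ++i) {
        int A = (i >> 2) & 1, B = (i >> 1) & 1, CIN = i & 1;
        int S    = A ^ B ^ CIN;
        int COUT = (A & B) | (CIN & (A ^ B));
        table[i] = (COUT << 1) | S;
    }

    // A query is then a single array access -- no training required.
    int pattern = 5; // binary 101: A=1, B=0, CIN=1
    std::cout << "S="     << (table[pattern] & 1)
              << " COUT=" << ((table[pattern] >> 1) & 1) << "\n";
    return 0;
}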

Classically, a neural network should not be tested with its training data. For the neural networks built as part of this project, however, the training set and the testing set are identical. In this particular case the approach is justified because the input patterns are discrete and limited: the network is meant to reproduce a complete truth table rather than generalize to unseen data.

Program details

In order to achieve these goals, I designed a computer application that takes Boolean equations as input and builds a neural network matching these equations. The user of the program can enter any number of Boolean equations with an arbitrary number of input variables.

By default, the network built follows the multi-layer perceptron model and has one hidden layer. The training algorithm used is back-propagation; it trains the network and finds the appropriate weights. Parameters such as the momentum, the learning rate and the number of nodes in the hidden layer can be specified before the network is trained, similar to the MATLAB file bp.m used in class. Usually, optimal values for these parameters exist, and through repeated training the best network parameters can be found. The user can also set the number of training cycles. The number of input variables appearing in the equations automatically sets the number of input neurons, and the number of equations entered determines the number of output-layer neurons.
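The following sketch shows how the network dimensions and training parameters might be collected; the struct and function are illustrative assumptions, not the program's actual data structures:

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Illustrative container for the user-tunable parameters described above.
struct NetConfig {
    std::size_t numInputs;    // = number of input variables in the equations
    std::size_t numHidden;    // hidden-layer size, chosen before training
    std::size_t numOutputs;   // = number of Boolean equations entered
    double      learningRate; // "Beta" in the result tables below
    double      momentum;     // "M" in the result tables below
    int         epochs;       // number of training cycles
};

// One input neuron per variable, one output neuron per equation.
NetConfig makeConfig(const std::vector<std::string>& equations,
                     std::size_t numVariables, std::size_t hidden,
                     double beta, double m, int epochs) {
    return NetConfig{numVariables, hidden, equations.size(), beta, m, epochs};
}

int main() {
    // The three sample equations from the Results section.
    NetConfig cfg = makeConfig({"A&B^C+(A&B)^A", "A+B+C&A", "C+B^A"},
                               3, 4, 0.7, 0.9, 300);
    std::cout << cfg.numInputs << "-" << cfg.numHidden << "-" << cfg.numOutputs
              << " network, beta=" << cfg.learningRate
              << ", M=" << cfg.momentum << "\n";
    return 0;
}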

After the network has been trained, input patterns can be entered and the output of the neural network is displayed. It is also possible to test all input patterns of the neural network (exhaustive testing). When tested, the output of each network neuron is compared to the ideal value of '0' or '1'. The error formed is the difference between the ideal value and the output of the network and can be investigated further in MATLAB. During exhaustive testing, the program creates a file with all input patterns, the expected outputs, the outputs generated by the neural network and a sum of square error.
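The exhaustive test can be pictured as the following sketch. evalNetwork and evalEquations are hypothetical stand-ins for the trained network and the parsed equations (stub bodies keep the sketch compilable), and the 0.5 classification threshold matches the rule described in the Results section:

#include <cstdio>
#include <vector>

// Hypothetical stand-ins; the real program evaluates the trained
// multi-layer perceptron and the parsed Boolean equations here.
std::vector<double> evalNetwork(const std::vector<int>&)   { return std::vector<double>(3, 0.0); }
std::vector<int>    evalEquations(const std::vector<int>&) { return std::vector<int>(3, 0); }

void exhaustiveTest(int numInputs, int numOutputs) {
    std::FILE* f = std::fopen("outfile1", "w");
    if (!f) return;
    double sse = 0.0;
    int declassifications = 0;

    // Every one of the 2^numInputs patterns is tested -- the input
    // space is discrete and small, so exhaustive testing is cheap.
    for (int p = 0; p < (1 << numInputs); ++p) {
        std::vector<int> in(numInputs);
        for (int b = 0; b < numInputs; ++b)
            in[b] = (p >> b) & 1;

        std::vector<double> got   = evalNetwork(in);
        std::vector<int>    ideal = evalEquations(in);

        for (int o = 0; o < numOutputs; ++o) {
            double err = ideal[o] - got[o];        // difference from the ideal 0/1 value
            sse += err * err;
            if ((got[o] > 0.5) != (ideal[o] == 1)) // 0.5 classification threshold
                ++declassifications;
            std::fprintf(f, "%d %f %f\n", ideal[o], got[o], err);
        }
    }
    std::fprintf(f, "SSE %f, declassifications %d\n", sse, declassifications);
    std::fclose(f);
}

int main() { exhaustiveTest(3, 3); return 0; }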

Chart 1 shows the flow and some of the details of the program.

As can be seen, the program must generate a truth table for the combinational circuit in order to train its network. The user also has a multitude of options, one of them being to iterate with the same set of Boolean equations at hand and try to find the optimal network parameters. The flow of Chart 1 can be summarized as follows:

1. The user enters Boolean equations; the equations are parsed and evaluated.
2. A truth table is built according to the Boolean equations.
3. The user may enter the number of hidden-layer neurons, the momentum, the learning rate and the number of learning cycles (epochs).
4. The neural network is built and its weights are randomized.
5. The neural network is trained.
6. The user enters one input pattern for testing, or chooses exhaustive testing.
7. Testing results are displayed and output to a file.
8. Program exit.

Chart 1: Program flow

Programming

The entire program is written in C++ and was compiled with the GNU g++ compiler in a UNIX environment. I encourage the reader to compile the files program.c, bpfunctions.c and utilities.c with the command "g++ -o program program.c" and run the program.
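To connect the files to the flow in Chart 1, here is a schematic main() driver; the function names are hypothetical placeholders with empty stub bodies, not the actual routines in program.c:

#include <iostream>
#include <string>
#include <vector>

// Hypothetical placeholders for the program's major phases; the real
// routines live in program.c, bpfunctions.c and utilities.c.
std::vector<std::string> readEquations()                { return {}; }
void buildTruthTable(const std::vector<std::string>&)   {}
void buildAndTrainNetwork(double, double, int, int)     {}
void testSinglePattern()                                {}
void exhaustiveTest()                                   {}

int main() {
    // 1.-2. user enters Boolean equations; equations are parsed,
    //       evaluated and turned into a truth table
    std::vector<std::string> eqs = readEquations();
    buildTruthTable(eqs);

    // 3.-5. user picks learning rate, momentum, hidden-layer size and
    //       epochs; the network is built (weights randomized) and trained
    double beta = 0.7, m = 0.9; int hidden = 4, epochs = 300;
    buildAndTrainNetwork(beta, m, hidden, epochs);

    // 6.-8. single-pattern or exhaustive testing until the user quits
    char choice = ' ';
    while (choice != 'q') {
        std::cout << "(s)ingle test, (e)xhaustive test, (q)uit? ";
        if (!(std::cin >> choice)) break;
        if (choice == 's') testSinglePattern();
        else if (choice == 'e') exhaustiveTest();
    }
    return 0;
}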

If the user chooses to exhaustively test the network, three files are created in the program directory. Outfile1 contains the outputs of the network when tested with all possible input patterns. Outfile2 shows the ideal output values obtained by evaluating the Boolean functions. Lastly, outfile3 shows the testing errors of each output neuron; this file can be loaded into MATLAB for further analysis. The neural network reacts dynamically to user input, so the size of the network is different each time the program is run. As mentioned earlier, the number of input variables determines the number of input-layer neurons and the number of Boolean equations determines the number of output-layer neurons of the multi-layer perceptron.

I wrote and tested all aspects of the program except for parts of the back-propagation algorithm. The main functional parts of the program include a Boolean equation parser that obeys the precedence rules of Boolean algebra and an equation evaluator used to obtain the training and testing patterns. The Boolean equation parser turned out to be more complicated than anticipated; the current implementation converts the input string into postfix notation and is then able to evaluate the equation for different input values. In addition, several MATLAB routines had to be written to interpret the error output of the program. The back-propagation algorithm is only partly my work; I can only claim about 30% of the code in file bpfunctions.c. The general structure of the algorithm was adopted from elsewhere (see below); however, some changes to the code had to be made, and considerable time was spent testing the modified algorithm to make sure it was working properly. The BP algorithm used can be found at http://richardbowle.tripod.com/neural/netcpp1.htm.
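A minimal sketch of this infix-to-postfix approach follows, covering the three operators used in the example equations below (& for AND, ^ for XOR, + for OR). The precedence ordering AND > XOR > OR is my assumption, mirroring C's & > ^ > |; the report does not spell out the project's exact precedence rules:

#include <cctype>
#include <iostream>
#include <map>
#include <stack>
#include <string>

// Assumed precedence: AND binds tighter than XOR, XOR tighter than OR.
int prec(char op) {
    switch (op) {
        case '&': return 3;
        case '^': return 2;
        case '+': return 1;
        default:  return 0;
    }
}

// Shunting-yard conversion of an infix Boolean equation to postfix.
std::string toPostfix(const std::string& infix) {
    std::string out;
    std::stack<char> ops;
    for (char c : infix) {
        if (std::isalpha(static_cast<unsigned char>(c))) {
            out += c;                       // variable goes straight to output
        } else if (c == '(') {
            ops.push(c);
        } else if (c == ')') {
            while (!ops.empty() && ops.top() != '(') { out += ops.top(); ops.pop(); }
            if (!ops.empty()) ops.pop();    // discard '('
        } else if (c == '&' || c == '^' || c == '+') {
            while (!ops.empty() && prec(ops.top()) >= prec(c)) {
                out += ops.top(); ops.pop();
            }
            ops.push(c);
        }                                   // other characters ignored
    }
    while (!ops.empty()) { out += ops.top(); ops.pop(); }
    return out;
}

// Evaluate a postfix expression for one assignment of variable values.
bool evalPostfix(const std::string& postfix, const std::map<char, bool>& vars) {
    std::stack<bool> st;
    for (char c : postfix) {
        if (std::isalpha(static_cast<unsigned char>(c))) {
            st.push(vars.at(c));
        } else {
            bool b = st.top(); st.pop();
            bool a = st.top(); st.pop();
            if (c == '&') st.push(a && b);
            if (c == '^') st.push(a != b); // XOR
            if (c == '+') st.push(a || b);
        }
    }
    return st.top();
}

int main() {
    std::map<char, bool> vars{{'A', true}, {'B', false}, {'C', true}};
    std::string pf = toPostfix("A&B^C+(A&B)^A");
    std::cout << pf << " = " << evalPostfix(pf, vars) << "\n";
    return 0;
}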

Results

Table 1 shows the output of a sample run of the program. The equations entered by the user in this case were:

A&B^C+(A&B)^A
A+B+C&A
C+B^A

M \ Beta   0.2         0.3         0.4         0.5         0.6         0.7         0.8         0.9
0.2        4.4888 (3)  3.2564 (2)  3.0933 (3)  3.0638 (3)  3.2359 (2)  2.3534 (3)  1.7565 (2)  1.9963 (2)
0.3        4.1432 (4)  3.1614 (3)  3.1492 (3)  2.826  (2)  3.8468 (2)  2.2716 (2)  2.5425 (3)  0.6284 (0)
0.4        3.3972 (3)  2.8633 (2)  1.8838 (1)  2.5472 (2)  1.824  (2)  2.0634 (2)  1.7129 (0)  3.0407 (2)
0.5        3.3033 (3)  2.2766 (2)  1.9401 (1)  1.7036 (2)  1.6773 (2)  3.1617 (2)  2.2219 (3)  1.3605 (1)
0.6        3.4161 (3)  2.5792 (2)  1.9043 (1)  1.795  (2)  0.7371 (0)  1.0433 (0)  1.6727 (2)  1.0461 (1)
0.7        1.8456 (1)  2.9094 (3)  1.1415 (1)  1.6568 (2)  1.6911 (1)  1.1713 (0)  1.0004 (0)  0.8763 (0)
0.8        1.7897 (2)  1.1365 (1)  1.0116 (1)  0.3691 (0)  0.9576 (0)  1.5285 (0)  0.0237 (0)  1.0084 (1)
0.9        1.7409 (2)  1.5165 (1)  1.7551 (2)  0.0106 (0)  0.1139 (0)  0.5284 (0)  0.753  (0)  0.7659 (1)

Each cell shows the sum of square error with the number of declassifications in parentheses.

Table 1: Sum of squared errors and number of declassifications after 300 epochs

The table shows the testing results for different network parameters. In this particular case, the network had 4 neurons in its hidden layer. Within each cell, the first value is the sum of square error and the value in parentheses is the number of declassifications. A logic '1' is classified correctly for any output value between 0.50001 and 1; if the correct value of an output neuron is '0', then all values between 0 and 0.49999 count as correct and the classification is successful.

In the table, the momentum M specified by the user increases from 0.2 to 0.9 vertically; Beta, the learning rate, is shown over the same range horizontally. For instance, for M=0.5 and Beta=0.8, the sum of square error is 2.2219 and the number of declassifications is 3. For this particular example there are 3 input neurons and 3 output neurons; all 8 possible input patterns are tested against the 3 equations, giving a total of 24 outputs that need to be compared to their correct values. For nearly every combination of momentum and learning rate there are at most 3 declassifications, meaning that the success rate of the neural network is at least 87.5%. Choosing the right values for M and Beta improved the performance of the neural network further and led to a 100% classification rate. Figure 2 shows the errors of all output neurons for M=0.7 and Beta=0.4.

As can be seen, there is one declassification: the error at output neuron 5 is 1.0 and is therefore at its maximum.

Figure 2

Figure 3 shows the absolute errors for M=0.8 and Beta=0.9. The classification rate is 100%; the maximum error in this case is as small as 0.05.

Figure 3

Discussion

After much testing with different numbers and sizes of Boolean equations, it was found that the performance of the neural network can vary heavily. The following trends were found to hold:

 The larger the number of Boolean equations entered, the worse the performance of the neural network. This is easily explained by the fact that the number of connections to the hidden-layer neurons increases significantly; the larger number of output neurons makes training more demanding.

 Even when the number of Boolean equations and input variables was the same, different logic functions did not perform equally well with the same network. In other words, not only the size of the logic circuit matters when determining the structure and parameters of the neural network; the logic itself has to be considered.

 The performance of most neural networks was maximized with M=0.9 and Beta=0.7.

 For a very large number of Boolean equations, the performance of the neural network becomes significantly better if the number of hidden neurons is increased.

 Training with fewer than 50 epochs generally gives very poor testing results.

 If the declassifications are listed according to their originating Boolean equation, it can be seen that declassifications tend to occur more heavily for certain equations than for others.

Figure 4 shows the results of a more complex neural network. All output-error values larger than 0.5 represent declassifications. The classification rate in this particular example was 88%.
