
Digital Logic and Computer Organization

Neal Nelson

© May 2013

Contents

1 Numbers and Gates
  1.1 Numbers and Primitive Data Types
  1.2 Representing Numbers
    1.2.1 Decimal and Binary Systems
    1.2.2 Binary Counting
    1.2.3 Binary Conversions
    1.2.4 Hexadecimal
  1.3 Representing Negative Numbers
    1.3.1 Ten's Complement
    1.3.2 Two's Complement Binary Representation
    1.3.3 Negation in Two's Complement
    1.3.4 Two's Complement to Decimal Conversion
    1.3.5 Decimal to Two's Complement Conversion
  1.4 Gates and Circuits
    1.4.1 Logic Expressions
    1.4.2 Circuit Expressions
  1.5 Exercises

2 Logic Functions
  2.1 Functions
  2.2 Logic Functions
    2.2.1 Primitive Gate Functions
    2.2.2 Evaluating Logic Expressions and Circuits
    2.2.3 Logic Tables for Expressions and Circuits
    2.2.4 Expressions as Functions
  2.3 Equivalence of Boolean Expressions
  2.4 Logic Functional Completeness
  2.5 Boolean Algebra
  2.6 Tables, Expressions, and Circuits
    2.6.1 Disjunctive Normal Form
    2.6.2 Logic Expressions from Tables
  2.7 Exercises

3 Dataflow Logic
  3.1 Data Types
    3.1.1 Unsigned Int
    3.1.2 Signed Int
    3.1.3 Char and String
  3.2 Data Buses
  3.3 Bitwise Functions
  3.4 Functions
  3.5 Decoders
  3.6 Multiplexers
  3.7 Control Logic
  3.8 Data Path Circuits
  3.9 Exercises

4 Integer Arithmetic and Adders
  4.1 Unsigned Addition
  4.2 Signed Addition and Subtraction
  4.3 Adder-Subtracter Circuit
  4.4 Multiplication and Division
  4.5 Floating Point Numbers
  4.6 Exercises

5 Registers and Sequential Logic
  5.1 Clocked Logic
  5.2 Memory Bits
  5.3 Registers
  5.4 Counter and Shifter Registers
  5.5 Register Transfers
  5.6 Register Transfer Sequences
  5.7 Adder Datapaths
  5.8 Sample RT Sequences for Adder Datapaths
  5.9 Exercises

6 Memory and Stored Programs
  6.1 Stored Programs
  6.2 Memory Components
  6.3 Instruction Memory
  6.4 Instruction Fetch Datapath
  6.5 Instruction Execution Datapath
  6.6 Combined Fetch-Execute Datapath
  6.7 Exercises

7 Instruction Architecture
  7.1 SAM - Simple Accumulator Machine
  7.2 SAM Word Size, Data Types, and Registers
  7.3 SAM Address Space and Physical Memory
  7.4 SAM Assembly Language and Machine Code
  7.5 SAM Instruction Set
  7.6 SAM Instruction Execution Traces
  7.7 Exercises

8 Register Transfer Architecture
  8.1 Stored Program Architecture
  8.2 Fetch-Execute Cycle
  8.3 RT Machine Organization
  8.4 The SAM Datapath
  8.5 Instruction Decoder
  8.6 Control Sequencer
  8.7 Instruction Fetch Control Sequence
  8.8 Instruction Execution Control Sequences
  8.9 Control Sequence Logic
  8.10 The SAM
  8.11 Exercises

9 Assembly Language Programming
  9.1 Assembly Code
  9.2 Machine Code
  9.3 Code Sequences
  9.4 Decisions
  9.5 Loops
  9.6 Overflow and Carry
  9.7 Exercises

Preface

This book is a reference for Logic and Computer Organization at The Evergreen State College. We presume a stored-program processor architecture built using silicon technology in a non-quantum setting.

The heart of a computer is the processor. We will learn the inner workings of a processor by studying the construction of a processor from primitive digital logic gates realizable in hardware. The presentation is intellectually honest, but greatly simplified relative to current technology. Nevertheless, the essential ideas are the same, and the resulting processor that we explore can do everything any other processor can do – given enough time and space. The next significant revolution in computer organization will come only with a quantum processor.

I want readers to hear my thanks to the Computer Science Foundations students of 2013-14 and 2014-15 for beta testing this text in their study of Digital Logic and Computer Organization. I especially thank Mary Kallam for her careful editing of the original version.

Chapter 1

Numbers and Gates

This chapter introduces the two most fundamental ideas upon which we will build our processor. First, processors operate on symbolic and numerical data, so we will need to represent symbols and numbers in our machines. Second, we will need primitive logic components that can be constructed in hardware and used both in combinations and sequences to compute. The magic of how a single primitive logic gate can be combined with feedback and a hardware clock to run complex programs will unfold throughout the remaining chapters.

1.1 Numbers and Primitive Data Types

In the first part of this chapter we examine the binary representation of integers, which are one of several kinds of numbers in mathematics. Numbers in general (and integers in particular) are abstract mathematical entities whose existence is only accessible through our minds and thoughts. Numbers have proved very useful in understanding our material world quantitatively, but we must have a standard way to denote specific numbers - a way to write them down and communicate them to each other in text and formulas. A system of representing numbers in written form is called a numeral system. Our commonly accepted practice for denoting numbers in written form is called the Hindu-Arabic decimal numeral system. We also need a standard way of representing numbers in computing systems. This chapter is concerned with the denotation and representation of numbers for computing systems.

In programming languages numbers show up as one of a small set of primitive data types that are closely related to the set of primitive values

built into all processors. In this chapter we will specifically focus on the representation of integers, leaving real numbers and other primitive types for later study. By the end of this chapter we will know how integers in mathematics are denoted by decimal numerals in writing, commonly coded as the primitive data type Int in programming languages, and subsequently stored in computing systems in binary using a two's complement representation in a fixed-size computer word.

1.2 Representing Numbers

The earliest counting systems began with the Natural Numbers, which we now think of as the collection or set of positive integers: {1, 2, 3, ...}. The number zero was historically much later in achieving the status of a number, but zero is such a handy and natural number when you get used to it that it is now commonly included as a natural number. So although there is not complete agreement on the matter of zero, we will assume that the natural numbers are the set of non-negative integers:

N = {0, 1, 2, 3,...}

1.2.1 Decimal and Binary Systems

It is a convenient fact that our Hindu-Arabic decimal system of numerals has a base of ten and we have ten digits on our hands. It is an equally convenient fact that both logic and numbers can be represented in a base two binary system, and hardware can efficiently store and manipulate binary information. So our understanding of logic and numbers translates directly into hardware, but the somewhat inconvenient fact is that we must translate between base two and base ten representations of numbers.

A decimal system of numerals is a base 10 system with 10 distinct digits. A binary system of numerals is a base 2 system with only two distinct digits, which we denote with the usual symbols 0 and 1. It is a common convention in mathematical logic to use the symbol F for false and T for true. Because both systems are two-valued, we can choose either convention, and both conventions are commonly used. We will initially use both conventions. When we are speaking about logic in the beginning of the text we'll use F and T, as is common in logic. When we are speaking about circuits and numbers, we will migrate toward using 0 and 1, as is common in digital logic and Boolean algebra, where 0 is interpreted as equivalent to F, that is, false

and 1 is interpreted as equivalent to T, true. Eventually we'll converge on the usual digital logic convention of using 0 and 1 for both binary digits and the logic constants. The actual circuit hardware, of course, does not care a hoot about the symbols. The hardware will simply insist that two distinct voltage levels represent the two binary constants.

1.2.2 Binary Counting

The Hindu-Arabic decimal numeral system is a positional system, with each digit position tracking some power of 10 that contributes to the value of the number. A fairly natural understanding of this system comes from following a counting sequence in three decimal digits:

000  001  002  ...  009
010  011  012  ...  019
...
090  091  092  ...  099
100  101  102  ...  109
...
990  991  992  ...  999

Each time we reach 9 in a given position we start over at 0 and also add one to the digit immediately to the left. Of course, when we get to a sequence of 9s this so-called rollover has a ripple effect, as we can see from the rollover from 99 to 100 above. We say the decimal system is a base ten positional representation system. The same idea of counting holds in binary, but now we have only two digits and rollover happens as soon as we've used a 1 digit. The following example illustrates counting in three binary digits:

Decimal Number          0    1    2    3    4    5    6    7
Binary Representation   000  001  010  011  100  101  110  111

Notice that with binary counting we use up the available counting positions more quickly. For example, starting with 000 and then 001, we are already out of digits for the least significant (right-most) position and must roll over to 010 to represent the number 2. With three available positions we can only count eight items, starting at binary zero and ending at binary seven. We say the binary system is a base two positional representation system.
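The counting table above is easy to regenerate mechanically. The following Python sketch (the loop and names are illustrative, not part of the text) prints the same three-bit table using the standard format() built-in:

```python
# Print the decimal numbers 0-7 alongside their 3-bit binary
# representations, zero-padded to three positions.
for n in range(8):
    print(n, format(n, "03b"))
```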

Of course, the idea of a positional representation system generalizes to any base. Any system of an arbitrary base b must have b distinct symbols for digits. By convention we index digit positions in a number from right to left starting at 0, so as to coincide with the exponent for the position. Sometimes the base is referred to as the radix. If there is any confusion about the base of a number, then it is common to denote the base as a subscript; for example, examining the counting table above we see that

101₂ = 5₁₀

The binary counting example illustrates some important notations and conventions that we will assume when working with binary numerals in processors. First, a single binary counting position is referred to as a bit. We can think of a bit as the simplest coding of data in a processor. A fixed-size collection of eight bits is called a byte. More generally, we use the term word when referring to some fixed-size number of bits. We call the number of bits in a word the word size of a processor. The very earliest processors had a word size of 8 (a single byte). Word sizes in processors have been doubling in each new generation of processors; for example, processors up to the time of this publication have had word sizes of 8, 16, 32, 64, and 128. The word size of a processor is a fundamental design parameter that deeply affects the architecture of the processor, and transitions to new word sizes in a processor family have been problematic.

When writing down binary numbers of a specific word size the leading zeros are always filled out to the word size. The rightmost bit position is called the least significant bit and the leftmost bit position is the most significant bit. Following the general convention for positional systems mentioned above, we identify individual bit positions by indexing them from right (least significant) to left (most significant) starting at zero.

The following table illustrates the coding of a number in the base 10 decimal system, a pattern we'll shortly reproduce for the base 2 binary system. Recall that 10⁰ = 1; in fact, x⁰ = 1 for any nonzero number x.

Decimal Number 704
position                          2    1    0
positional value as power of 10   10²  10¹  10⁰
positional value                  100  10   1
positional digit                  7    0    4
positional numerical value        700  0    4
total numerical value             704 = 700 + 0 + 4

We can more succinctly write the positional representation for the example above in the following powers-of-ten summation notation.

704 = 7 ∗ 100 + 0 ∗ 10 + 4 ∗ 1
    = 7 ∗ 10² + 0 ∗ 10¹ + 4 ∗ 10⁰
    = 7 ∗ 10² + 4 ∗ 10⁰

Just as each position in a decimal representation corresponds to a power of ten, in the binary representation each position corresponds to a distinct power of 2 that contributes to the magnitude of the number.

Binary Number 10110101
position                         7    6   5   4   3   2   1   0
positional value as power of 2   2⁷   2⁶  2⁵  2⁴  2³  2²  2¹  2⁰
positional value                 128  64  32  16  8   4   2   1
positional digit                 1    0   1   1   0   1   0   1
positional numerical value       128  0   32  16  0   4   0   1
total numerical value            181 = 128 + 32 + 16 + 4 + 1

Again, we can more succinctly write the positional representation in the powers-of-two summation notation as follows.

10110101 = 1 ∗ 128 + 0 ∗ 64 + 1 ∗ 32 + 1 ∗ 16 + 0 ∗ 8 + 1 ∗ 4 + 0 ∗ 2 + 1 ∗ 1
         = 1 ∗ 2⁷ + 0 ∗ 2⁶ + 1 ∗ 2⁵ + 1 ∗ 2⁴ + 0 ∗ 2³ + 1 ∗ 2² + 0 ∗ 2¹ + 1 ∗ 2⁰
         = 2⁷ + 2⁵ + 2⁴ + 2² + 2⁰
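The powers-of-two summation translates directly into code. Here is a small Python sketch (the function name is our own, not from the text) that evaluates a bit string by summing digit ∗ 2 raised to the position, indexing positions right to left from zero:

```python
def binary_to_decimal(bits):
    # Each digit contributes digit * 2**position, with positions
    # indexed right to left starting at 0.
    return sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))

print(binary_to_decimal("10110101"))  # 181 = 128 + 32 + 16 + 4 + 1
```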

All the representations of numbers we have discussed so far are unsigned, that is, representations of natural numbers.

1.2.3 Binary Conversions

Expressing a base two number as a sum of powers of two directly gives a way to convert unsigned binary numbers to decimal, as we saw in the last section. Now we present two methods for the reverse process of converting an unsigned decimal number into a binary number.

The first method is to build an equivalent sum of powers of two from the largest power to the smallest. Given a decimal number k, find the largest power of two less than or equal to k, and that will be the first term in

the equivalent sum of powers. Subtract that first power of two from the number k to get the residual, and then repeat the process on the remaining residuals to find all the powers of two in the equivalent sum of powers for the original number k. The following table illustrates the method for the number k = 181.

Decimal   Power of 2   Residual
181       2⁷ = 128     53 = 181 − 128
53        2⁵ = 32      21 = 53 − 32
21        2⁴ = 16      5 = 21 − 16
5         2² = 4       1 = 5 − 4
1         2⁰ = 1       0 = 1 − 1

181 = 2⁷ + 2⁵ + 2⁴ + 2² + 2⁰
    = 10110101₂
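The largest-power-first method can be sketched in Python as follows (the helper name is ours; int.bit_length() is a standard Python method that gives one more than the exponent of the highest set bit):

```python
def powers_of_two(k):
    # Repeatedly find the largest power of two <= k and subtract it,
    # collecting the exponents, until the residual reaches zero.
    powers = []
    while k > 0:
        p = k.bit_length() - 1   # exponent of largest power of two <= k
        powers.append(p)
        k -= 2**p                # residual for the next round
    return powers

print(powers_of_two(181))  # [7, 5, 4, 2, 0]
```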

The second method for converting unsigned decimal numbers to binary is more mechanical and essentially generates binary digits from right to left (least significant to most significant). Starting with an unsigned decimal number k, divide the number by two and use the remainder as the rightmost binary digit. Repeat the process using the quotient to generate the next binary digit. The following table illustrates the process for the number k = 181 again.

Decimal   Divide   Quotient   Remainder
181       ÷ 2      90         1
90        ÷ 2      45         0
45        ÷ 2      22         1
22        ÷ 2      11         0
11        ÷ 2      5          1
5         ÷ 2      2          1
2         ÷ 2      1          0
1         ÷ 2      0          1

Reading the Remainder column from bottom to top gives the binary result.

181 = 10110101₂
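The repeated-division method is just as short in code. This Python sketch (function name ours) collects remainders and then reads them bottom to top:

```python
def to_binary(k):
    # Divide by two repeatedly; each remainder is the next binary
    # digit, generated least significant first.
    if k == 0:
        return "0"
    digits = []
    while k > 0:
        k, r = divmod(k, 2)
        digits.append(str(r))
    return "".join(reversed(digits))  # read remainders bottom to top

print(to_binary(181))  # 10110101
```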

In case you're curious, the second method is based on the following factored equivalent of the usual power-of-two representation. (Read the 0 and 1 remainders from left to right.)

10110101 = 1 ∗ 128 + 0 ∗ 64 + 1 ∗ 32 + 1 ∗ 16 + 0 ∗ 8 + 1 ∗ 4 + 0 ∗ 2 + 1 ∗ 1
         = 1 ∗ 2⁷ + 0 ∗ 2⁶ + 1 ∗ 2⁵ + 1 ∗ 2⁴ + 0 ∗ 2³ + 1 ∗ 2² + 0 ∗ 2¹ + 1 ∗ 2⁰
         = 1 ∗ 2⁷ + 0 ∗ 2⁶ + 1 ∗ 2⁵ + 1 ∗ 2⁴ + 0 ∗ 2³ + 1 ∗ 2² + 0 ∗ 2¹ + 1
         = 2 ∗ (1 ∗ 2⁶ + 0 ∗ 2⁵ + 1 ∗ 2⁴ + 1 ∗ 2³ + 0 ∗ 2² + 1 ∗ 2¹ + 0 ∗ 2⁰) + 1
         = 2 ∗ (1 ∗ 2⁶ + 0 ∗ 2⁵ + 1 ∗ 2⁴ + 1 ∗ 2³ + 0 ∗ 2² + 1 ∗ 2¹ + 0) + 1
         = 2 ∗ (2 ∗ (1 ∗ 2⁵ + 0 ∗ 2⁴ + 1 ∗ 2³ + 1 ∗ 2² + 0 ∗ 2¹ + 1 ∗ 2⁰) + 0) + 1
           ...
         = 2 ∗ (2 ∗ (2 ∗ (2 ∗ (2 ∗ (2 ∗ (2 ∗ 1 + 0) + 1) + 1) + 0) + 1) + 0) + 1

1.2.4 Hexadecimal

Large integers represented in binary can be cumbersome because the string of bits becomes too long to conveniently read. Consequently it has become conventional to write binary numbers using base 16 or hexadecimal notation, hex for short. Of course, with base 16 numbers we need 16 distinct digits. By convention the 16 digits of hexadecimal numbers are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. We can now express the number 181 from above using powers of 16 and then with hex digits. The last equation shows a common alternative form for indicating that a number is to be interpreted as hexadecimal.

181₁₀ = 11 ∗ 16 + 5 = 11 ∗ 16¹ + 5 ∗ 16⁰

= B5₁₆ = 0xB5

Notice that the first hex digit B corresponds to the 11 in the powers of 16 expression.

The hexadecimal and binary forms of a number are very closely related because the base 16 is itself a power of 2. The actual digits of a hex number can themselves be written in binary form using exactly four bits. The following table lists all the hex digits and their decimal and binary equivalents.

Hex Digit   Decimal Equivalent   Binary Equivalent
0           0                    0000
1           1                    0001
2           2                    0010
3           3                    0011
4           4                    0100
5           5                    0101
6           6                    0110
7           7                    0111
8           8                    1000
9           9                    1001
A           10                   1010
B           11                   1011
C           12                   1100
D           13                   1101
E           14                   1110
F           15                   1111

Now we can take any hex number and rewrite it using bundles of 4 bits in place of the hex digits according to the above table. Conversely, we can group bits in a binary number into 4-bit bundles and translate the number directly into hexadecimal form using the above table. For example, starting with the powers-of-sixteen summation notation we get the hex and binary in the last two equations.

181₁₀ = 11 ∗ 16¹ + 5 ∗ 16⁰
      = B5₁₆ = 0xB5
      = 1011 0101₂

It is a common convention to list binary numbers in bundles of 4 with spaces between the bundles in honor of hexadecimal and for ease of reading. A group of 4 bits is called a nibble. When we break up a binary word with spaces at the 4-bit nibble boundaries, the binary word becomes easier to read and easy to see as a hexadecimal number. When grouping bits of a binary number into 4-bit bundles we may need to pad to the left with 0s to make sure every bundle has exactly 4 bits. For

example,

1011010111₂ = 10 1101 0111 = 0010 1101 0111

= 2D7₁₆ = 0x2D7 = 2 ∗ 16² + 13 ∗ 16¹ + 7 ∗ 16⁰ = 512 + 208 + 7

= 727₁₀
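Grouping into nibbles and converting through hexadecimal is easy to check in Python. In this sketch the helper name is ours, while int(s, 2) and hex() are standard built-ins:

```python
def to_nibbles(bits):
    # Pad on the left with 0s so every bundle has exactly 4 bits,
    # then join the 4-bit groups with spaces.
    padded = "0" * ((-len(bits)) % 4) + bits
    return " ".join(padded[i:i + 4] for i in range(0, len(padded), 4))

print(to_nibbles("1011010111"))   # 0010 1101 0111
print(hex(int("1011010111", 2)))  # 0x2d7
print(int("1011010111", 2))       # 727
```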

1.3 Representing Negative Numbers

We have explored in some depth the binary representation of unsigned integers, but now we must complete the representation of integers by investigating the representation of both negative and positive integers. For a given word size, say k bits, the number of integers that can be represented is limited to 2^k. The unsigned representations discussed above held the integers between 0 and 2^k − 1. When we include negative integers, then the representation must split the bit patterns into roughly half negative integers, half positive integers, and also zero. Moreover, the representation must have some way to signify the sign of the number (negative or positive).

The most obvious scheme, called the sign-magnitude representation, devotes one bit to indicating the sign and uses the remaining bits for the magnitude of the number. So, for example, in a sign-magnitude representation of signed integers the leftmost bit designates the sign of the number (a 0 bit indicates a positive number and a 1 bit indicates a negative number) and the remaining k − 1 bits are interpreted as the magnitude. In this scheme the integers between −(2^(k−1) − 1) and +(2^(k−1) − 1) are represented in a k-bit word. Unfortunately, in this scheme there are now two zeros - a positive zero and a negative zero. The following table illustrates the sign-magnitude representation of

integers using 4 bits.

Binary sign-magnitude   Decimal
0000                    +0
0001                    +1
0010                    +2
0011                    +3
0100                    +4
0101                    +5
0110                    +6
0111                    +7
1000                    −0
1001                    −1
1010                    −2
1011                    −3
1100                    −4
1101                    −5
1110                    −6
1111                    −7

The sign-magnitude representation has an unwanted redundancy in the representation of 0 that leads to unnecessary complications in building hardware that obeys the correct mathematical properties. No processor uses sign-magnitude representation. The most common representation of signed integers is called the two's complement representation, which is easily implemented in hardware and maintains convenient mathematical properties.

1.3.1 Ten's Complement

In order to gain a better insight into two's complement, we'll define the idea by illustrating a ten's complement representation in the more familiar decimal system. First assume that we have exactly two decimal digits in which to represent signed integers. Were we to represent unsigned decimal integers we would be able to encode the numbers between 0 and 99. Now we will instead represent the 100 signed integers (including zero) between −50 and 49. We have two decimal digits, but we don't have the luxury of another special symbol (the − sign) to signify negative numbers, so we'll need to reinterpret some of the unsigned numbers as negative numbers. Understanding this idea is key to understanding the representation of negative numbers in computers.

Define the ten's complement of a two digit decimal number n between 0

and 50 as the result of the ten's complement operation

compl₁₀(n) = 10² − n = 100 − n

as illustrated in the following table.

Unsigned Decimal   Ten's complement
00                 100
01                 99
02                 98
 .                  .
10                 90
11                 89
 .                  .
48                 52
49                 51
50                 50

Clearly, adding a number between 0 and 50 to its ten's complement results in 100, because n + (100 − n) = 100 for all n between 0 and 50. Given the assumption that we actually only have two decimal digits, then adding a number to its ten's complement results in zero, because we have to drop the extra leftmost digit. We encountered this earlier when representing unsigned decimal numbers in two digits - adding 1 more to 99 gives 00. In mathematical terms we are doing modulo arithmetic, in this case decimal arithmetic modulo 100. Modulo arithmetic makes use of the mod operator; for example, in addition modulo 100 the following equation holds.

(99 + 1) mod 100 = 0

Now extend the definition of ten's complement and apply the ten's complement operation compl₁₀(n) to the numbers between 51 and 100. The ten's complement of 51 is 49, and so forth until the ten's complement of 99 is 1, and the ten's complement of 100 is 0, giving the following table.

Unsigned Decimal   Ten's complement
00                 100
01                 99
02                 98
 .                  .
10                 90
11                 89
 .                  .
48                 52
49                 51
50                 50
51                 49
52                 48
 .                  .
89                 11
90                 10
 .                  .
98                 2
99                 1

We can begin to see how the ten's complement of a number can be the representation of its negation, because two characteristic laws for negative numbers hold (provided we assume modulo arithmetic, i.e., wrap-around arithmetic). The first of the two laws requires that adding a number and its negation gives zero. Adding a number and its ten's complement results in 0 (modulo 100), just like adding a number and its negation. In the second law, negating a number twice should equal the original number. Taking the ten's complement of a number twice brings us back to the original number (modulo 100), just like negating a number twice.

Now we make the key assumption to reinterpret the unsigned numbers between 50 and 99 as the negative numbers in ten's complement representation. This gives us the following table of two-digit signed integers (second column) with their corresponding representation (left column).

Unsigned Decimal   Signed Decimal in ten's complement
00                 00
01                 01
02                 02
 .                  .
10                 10
11                 11
 .                  .
48                 48
49                 49
50                 −50
51                 −49
52                 −48
 .                  .
98                 −2
99                 −1

Mathematically the two relationships discussed above are simple and elegant.

(n + (10² − n)) mod 10² = 0    adding a number and its negation
10² − (10² − n) = n            negating a number twice

Our discussion of ten's complement has so far been restricted to two digit decimal numbers, but the idea of ten's complement works on any fixed size k-digit decimal representation. The range of signed decimal numbers that can be represented in k digits is given by

−(10^k)/2  to  +(10^k)/2 − 1

The positive integers with zero fall in the range 0 to (10^k)/2 − 1 and are represented directly. The negative numbers fall in the range −(10^k)/2 to −1, and a negative number −n in that range is represented in ten's complement by 10^k − n. The two characteristic relationships generalize to the following for a k-digit decimal number n.

(n + (10^k − n)) mod 10^k = 0    adding a number and its negation
10^k − (10^k − n) = n            negating a number twice

The ten's complement representation always requires that we know exactly the number of digits used to represent numbers, because modulo addition is assumed.

The real benefits of the ten's complement operation appear when we observe that subtraction can now easily be done by negation of the subtrahend (the second number) followed by addition. Negation is accomplished by the ten's complement operation (provided we're not trying to negate −50, because there is no +50). In the end this makes the hardware for adding and subtracting much simpler. But, of course, we'll be working with two's complement rather than ten's complement.
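The two-digit ten's complement and its two characteristic laws can be checked with a few lines of Python (the function name and sample values are ours):

```python
def compl10(n):
    # Ten's complement in two decimal digits: 10**2 - n
    return 100 - n

# Adding a number and its complement gives zero modulo 100 ...
print((37 + compl10(37)) % 100)  # 0
# ... and complementing twice returns the original number.
print(compl10(compl10(37)))      # 37
# Subtraction as negate-then-add: 25 - 7 computed as 25 + compl10(7)
print((25 + compl10(7)) % 100)   # 18
```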

1.3.2 Two's Complement Binary Representation

The mathematics behind the ten's complement operation extends uniformly to any base, so now we can introduce the two's complement representation of signed integers by following the ideas of the previous section with a shift to base two.

For a k-bit word, define the two's complement of a binary number n between 0 and 2^k − 1 as compl₂(n) = 2^k − n. For example, in k = 4 bits the unsigned patterns 0 to 15 get split up into 0 to 7 for the positive numbers and 8 to 15 for the negative numbers −8 to −1 (respectively). That is, unsigned 8 is −8, unsigned 9 is −7, and so forth until unsigned 15 is −1.

Binary Two's Complement Representation in 4 bits

Unsigned Decimal   Binary   Signed Decimal
0                  0000     +0
1                  0001     +1
2                  0010     +2
3                  0011     +3
4                  0100     +4
5                  0101     +5
6                  0110     +6
7                  0111     +7
8                  1000     −8
9                  1001     −7
10                 1010     −6
11                 1011     −5
12                 1100     −4
13                 1101     −3
14                 1110     −2
15                 1111     −1
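The reinterpretation in the right-hand column can be expressed as a one-line rule: patterns with the high bit set stand for pattern − 2⁴. A Python sketch (names ours):

```python
BITS = 4

def interpret(pattern):
    # Patterns 8..15 (high bit set) represent pattern - 16;
    # patterns 0..7 represent themselves.
    return pattern - 2**BITS if pattern >= 2**(BITS - 1) else pattern

print([interpret(n) for n in range(16)])
# [0, 1, 2, 3, 4, 5, 6, 7, -8, -7, -6, -5, -4, -3, -2, -1]
```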

In general the range of signed integers in a k-bit two's complement representation is −2^(k−1) to +(2^(k−1) − 1). The table above illustrates that 2⁴ = 16 bit patterns can represent the unsigned numbers from 0 to 2⁴ − 1 = 15 or the signed numbers from −2³ = −8 to +2³ − 1 = 7. The choice is entirely determined by the hardware or person that is interpreting the bit patterns.

Like the ten's complement, the two's complement operation compl₂(n) can be interpreted as a negation operation (provided we do not attempt to negate the largest negative number). First we can check that the addition of a positive number and its two's complement adds to 0 (modulo 16). For example, the unsigned value 9 represents −7, so 9 + 7 must equal 16, which it does. Similarly, −4 + 4 is 0 (modulo 16) because −4 is the pattern 1100 (unsigned 12), 4 is the pattern 0100 (unsigned 4), and the sum is 0000 (unsigned 12 + 4 = 16, which wraps around to 0). The negation of the largest negative number (−8) is not defined because the value 8 cannot be represented in 4-bit two's complement. Second, we can check that negating a number twice yields the original number. For example, compl₂(7) = 16 − 7 = 9 (which represents −7) and compl₂(9) = 16 − 9 = 7, which brings us back to +7.

From an implementation point of view the two's complement representation has a very convenient feature: the leftmost bit always signifies a negative number!

Two's complement representation always requires that we know the number of bits used to represent the number, just like ten's complement required knowing the number of digits. Positive numbers are always padded to the left with 0s to fill the proper number of bit positions. Less obviously, negative numbers are always padded to the left with 1s to fill the proper number of bit positions. The padding of bits to the left for a signed integer in two's complement representation is referred to as sign extension.
Although it may not be immediately apparent from the previous discussion, an unfortunate confusion exists between the phrase two's complement representation, which refers to the way numbers are coded, and the two's complement operation, which shows how to calculate the negation of a number. Yuk. It is important to distinguish between the two phrases. The two's complement representation refers to the interpretation of the bit pattern - a representation standard. The negation operation is an action that we'll need to perform, either in calculations or in actual circuits. The table of 4-bit binary patterns above can be used to remind us of the general formula mentioned earlier for the range of integers that can be represented in k bits using two's complement representation.

−2^(k−1)  to  +2^(k−1) − 1

1.3.3 Negation in Two's Complement

This section is entirely devoted to the two's complement operation, which we will refer to as the negation operation in the two's complement representation.

One of the convenient results of using the two's complement representation is that subtraction can be implemented by simply negating the subtrahend (the second number) and adding. For example, 8 − 3 is implemented as 8 + (−3). Consequently, if we have hardware to do a negation operation, then we only need to build an adder and not a separate subtracter. So we will need a negation circuit when we build an adder-subtracter.

There are two ways to do the negation operation in the two's complement representation. From previous sections we already know how to mathematically negate a number in two's complement representation - we simply calculate the two's complement of the number. Now we will examine a method for negation that can be implemented elegantly in logic hardware.

The algorithm for two's complement negation that is convenient to implement in hardware is performed in two steps. Assume a k-bit word holds a number in two's complement representation. We want to negate the number by logically fiddling with the bits in the following way.

1. Invert all the bits in the word, and

2. add one to the word.

We leave it for the challenge exercises to prove that this method always gives a result equivalent to the definition given earlier of the two's complement of a number. Here are a couple of example negations using this method. Although we have not yet discussed the addition of binary numbers, it should be fairly clear how to add one to a binary number in the following examples.

Bitwise Negation in 4-bit Two’s Complement Representation

Signed Decimal   Binary   Inverted   Add One   Negated Signed Decimal
−1               1111     0000       0001      1
−7               1001     0110       0111      7
+7               0111     1000       1001      −7
−8               1000     0111       1000      not defined!
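The invert-and-add-one algorithm is directly expressible with bitwise operators. In this Python sketch (names ours) the XOR with 1111 inverts all four bits and the final mask keeps the result inside a 4-bit word:

```python
BITS = 4
MASK = 2**BITS - 1  # 1111, used to invert and to stay in 4 bits

def negate(n):
    # Step 1: invert all the bits; step 2: add one; keep 4 bits.
    return ((n ^ MASK) + 1) & MASK

print(format(negate(0b1001), "04b"))  # 0111: negating -7 gives +7
print(format(negate(0b0111), "04b"))  # 1001: negating +7 gives -7
print(format(negate(0b1000), "04b"))  # 1000: -8 maps to itself (not defined!)
```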

1.3.4 Two's Complement to Decimal Conversion

Given a signed number in two's complement representation, how do we determine the decimal value? The method is as follows.

1. Determine whether the number is negative or positive based on the leftmost sign bit.

2. If the number is positive because the sign bit is zero, then do an unsigned decimal conversion to determine the magnitude.

3. If the number is negative because the sign bit is one, then do a negation operation and use the resulting unsigned value as the magnitude of the negative number.

Here are two examples. Both examples assume an 8-bit two's complement encoding. Recall that the range of signed integers that can be encoded in 8 bits is −2⁷ to +(2⁷ − 1), equivalently, −128 to 127.

Convert 0x6D to signed decimal.

1. The hex pattern 0x6D is 0110 1101 in binary. The leftmost bit is 0 so the number is positive.

2. 0x6D = 6 ∗ 16 + 13 = 109, so the hex encoding 0x6D is 109₁₀.

Convert 0xED to signed decimal.

1. The hex pattern 0xED is 1110 1101 in binary. The leftmost bit is 1 so the number is negative, so first do a negation.

2. To negate, first calculate the unsigned value and then perform the two's complement operation and use that resulting value as the magnitude. Unsigned 0xED is 14 ∗ 16 + 13 = 237, so the negation is 256 − 237 = 19. The hex encoding 0xED is therefore −19₁₀.

In the second example we could use the alternative method of negating a binary value: invert all bits and add one. The hex pattern 0xED is 1110 1101 in binary. Inverting all bits gives 0001 0010 and adding one gives 0001 0011 = 0x13 = 1 ∗ 16 + 3 = 19. So again the interpretation of the hex encoding 0xED is −19.
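The sign-check-then-negate method just described can be written out as a short sketch. This is our own illustration, not the author’s; the function name `to_signed_decimal` is a hypothetical choice.

```python
# A sketch (not from the text) of two's-complement-to-decimal
# conversion: check the sign bit, then either read the value as
# unsigned or negate first and use the result as the magnitude.

def to_signed_decimal(bits: int, n: int = 8) -> int:
    """Interpret an n-bit pattern as a two's complement signed integer."""
    sign_bit = (bits >> (n - 1)) & 1
    if sign_bit == 0:
        return bits               # positive: unsigned value is the answer
    return -((1 << n) - bits)     # negative: magnitude is 2^n - bits

print(to_signed_decimal(0x6D))    # 109
print(to_signed_decimal(0xED))    # -19
```

The two calls reproduce the worked examples above: 0x6D is 109 and 0xED is −19 in 8-bit two’s complement.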

1.3.5 Decimal to Two’s Complement Conversion

Given a signed integer in decimal representation, how do we determine the binary code in two’s complement representation? Before we begin any conversion we must know the number of bits available to code the result, which will determine the range of signed integers that can be represented and the number of bits required in our result. The method is as follows.

1. If the number is zero, then the bit pattern is all zeros.

2. If the number is positive, then convert the decimal number to a binary bit pattern as if it were an unsigned number.

3. If the number is negative, then perform the negation operation (the two’s complement operation) on the magnitude and convert the resulting unsigned number to a binary bit pattern.

Here is the negative number example from above now converting in the opposite way. Assume two’s complement representation in 8 bits, so that the negation operation will be to subtract from 2^8 = 256.

Convert −19₁₀ to hex.

1. The number is negative. The 8-bit negation (two’s complement) of −19 is 256 − 19 = 237.

2. The decimal number 237 = 16 ∗ 14 + 13, so the hex code is 0xED.

The decimal number −19 is therefore 0xED in 8-bit two’s complement representation.
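The reverse direction can be sketched the same way. Again this is an illustration of ours, not from the text; the range check and the name `to_twos_complement` are our additions.

```python
# A sketch (not from the text) of decimal-to-two's-complement
# conversion for a fixed word size n; raises if the value is
# outside the representable range.

def to_twos_complement(value: int, n: int = 8) -> int:
    """Encode a signed decimal integer as an n-bit two's complement pattern."""
    lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
    if not lo <= value <= hi:
        raise OverflowError(f"{value} does not fit in {n} bits")
    if value >= 0:
        return value              # non-negative: ordinary unsigned coding
    return (1 << n) - (-value)    # negative: subtract magnitude from 2^n

print(hex(to_twos_complement(-19)))   # 0xed
print(hex(to_twos_complement(109)))   # 0x6d
```

Running it on −19 reproduces the example: 256 − 19 = 237 = 0xED.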

1.4 Gates and Circuits

Digital logic is what we use to build computers, and the most primitive digital logic components are gates. Logic gates are hardware realizations of simple logic operations in a language of propositional logic. As a formal mathematical system, propositional logic is a zeroth-order logic, meaning that any logic expression in the language can have only one of two values: true or false. Propositional logic includes not only a language of logic expressions, but also axioms and rules of reasoning for logical deduction. For our purposes we will not look at the system of deduction, but rather focus on the logic language. The language of propositional logic will allow us to begin specifying circuits that can be built in hardware.

1.4.1 Logic Expressions

The language of zero-order logic expressions consists of two logical constants for true and false (T or 1 for true, F or 0 for false), variables (A, B, C, . . .), together with the binary logic operator And, using the symbol (∧), the binary logic operator Or, using the symbol (∨), and the unary logic operator Not, using the symbol (¬).

constants   0, 1
variables   A, B, C, . . .
operators   ∧, ∨, ¬

Using the logic constants, variables, and operators we can form logic expressions just like we commonly form arithmetic expressions. In propositional logic the expressions are called well formed formulas with a precisely specified syntax. We will introduce the syntax of logic expressions more informally. In the syntax of logic expressions the ∧ operator acts like multiply, the ∨ operator acts like addition, and the ¬ operator acts like a unary minus. Parentheses are used to prioritize subexpressions in the familiar algebraic way. Here are some example expressions.

0      A      0 ∧ 1      A ∨ 1      ¬A      ¬1      A ∧ B      (A ∨ B) ∧ ¬C

When parentheses are left out of logic expressions, the logic And operator takes precedence over the logic Or operator, just as multiply takes precedence over addition in arithmetic expressions. Similarly, the unary logic Not operator has the highest precedence, like the unary minus in algebra. A chain of operations of the same precedence implicitly associates to the left in operational order. The Or and the And operators, like addition and multiplication, are both associative, so the order of performing a chain of And or a chain of Or operations doesn’t matter. Logic expressions can involve the logic constants 0 or 1 as well as the variables and operators.

Expression without Parentheses | Equivalent Expression
0 ∧ A ∧ B                      | (0 ∧ A) ∧ B
0 ∧ 1 ∨ 1                      | (0 ∧ 1) ∨ 1
A ∧ ¬B                         | A ∧ (¬B)
A ∧ B ∨ B ∧ C                  | (A ∧ B) ∨ (B ∧ C)
A ∨ B ∧ B ∨ C                  | A ∨ (B ∧ B) ∨ C

You may be curious what the expressions mean, and that’s the central concern of the next chapter. If we think in terms of truth values F and T, then the logic operators work on truth values and yield truth values. Every logic expression takes on a truth value of T or F given an assignment of truth values to the variables in the expression. We can equivalently see logic expressions as forming a Boolean algebra, named after the 19th century mathematician George Boole who developed the algebra of Boolean values 0 and 1. Variables only take on Boolean values and all of the operations in Boolean algebra work on Boolean values and yield Boolean values. The meaning of every logic expression will simply be 0 or 1 once the variables in the expression are assigned values. The distinction between propositional logic on the one hand and Boolean algebra on the other is largely a matter of viewpoint. Digital logic traditionally leans toward the algebraic viewpoint, as mentioned earlier, but we will, in later chapters, introduce a third dataflow viewpoint associated with the flow of binary data through processors.
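As an aside for readers who know a programming language, the precedence conventions just described are mirrored by Python’s Boolean operators, where `not` binds tighter than `and`, which binds tighter than `or`. This is our own illustration, not from the text.

```python
# A sketch (assumption: Python's not/and/or as stand-ins for ¬/∧/∨).
# Python gives 'and' higher precedence than 'or', and 'not' highest,
# matching the conventions for ∧, ∨, and ¬ described above.

A, B, C = True, False, True

# A ∧ B ∨ B ∧ C  ==  (A ∧ B) ∨ (B ∧ C): 'and' binds tighter than 'or'
assert (A and B or B and C) == ((A and B) or (B and C))

# ¬ binds tightest, like unary minus: ¬A ∧ B means (¬A) ∧ B
assert (not A and B) == ((not A) and B)

print("precedence matches")
```

Transcribing a logic expression this way is a quick mechanical check that two parenthesizations really are the same.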

1.4.2 Circuit Expressions

It is conventional in digital logic to draw logic expressions as circuits. The logic operations are represented by special symbols called gates. The inputs to the gate are represented by wires and can be labeled with the incoming expressions. The output wire can also be labeled with the output expression. In the following example the inputs are labeled with logic variables and the outputs for each gate are labeled with the simple logic expression.

[Figure: And, Or, and Not gate symbols. The And gate has inputs A and B and output A ∧ B; the Or gate has inputs A and B and output A ∨ B; the Not gate has input A and output ¬A.]

The inputs to a gate can be labeled with a more complicated expression, as the following example illustrates. We can think of this as a gate schema, meaning that the same gate symbol can be used in many different places in a circuit, just like an operator symbol can be used in many different places in a logic expression.

[Figure: And gate schema with inputs A ∨ B and C ∨ D and output (A ∨ B) ∧ (C ∨ D).]

A circuit representation can be directly translated to a logic expression. In a multi-gate circuit representation the intermediate wires are not usually labeled with expressions, but the expressions associated with the intermediate wires will be subexpressions of the logic expression for the entire circuit. For example

[Figure: a multi-gate circuit with inputs A, B, and C and output (A ∧ B) ∨ (B ∧ C).]

Translating in the other direction, an expression with many logic operations can be translated directly to a circuit expression. Hopefully it is apparent that circuits are just another way to symbolize logic expressions in a more graphical notation. The visual representation of circuit expressions corresponds more closely to the physical realization of gates as devices in hardware. Circuit expressions often include special symbols that correspond to specific expressions. The following symbols are common because they have historically corresponded to specific hardware components.

[Figure: Nand gate symbol, with inputs A and B and output ¬(A ∧ B).]

[Figure: Nor gate symbol, with inputs A and B and output ¬(A ∨ B).]

[Figure: Xor gate symbol, with inputs A and B and output (A ∧ ¬B) ∨ (¬A ∧ B).]
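The three derived gates are defined by the expressions on their outputs, so they can be sketched directly in terms of the primitive operators. This is an illustration of ours, operating on Boolean values coded as the integers 0 and 1.

```python
# A sketch (not from the text) defining the Nand, Nor, and Xor gates
# in terms of the primitive operators, over Boolean values 0 and 1.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)                     # ¬(A ∧ B)

def nor(a: int, b: int) -> int:
    return 1 - (a | b)                     # ¬(A ∨ B)

def xor(a: int, b: int) -> int:
    return (a & (1 - b)) | ((1 - a) & b)   # (A ∧ ¬B) ∨ (¬A ∧ B)

# Print the function table for all three derived gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, nand(a, b), nor(a, b), xor(a, b))
```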

1.5 Exercises

1. Give the powers-of-10 summation representation of 827₁₀.

2. Give the powers-of-2 summation representation of 223₁₀.

3. How many bits are needed to represent 223₁₀ in unsigned binary?

4. Give the binary value of 223₁₀ in unsigned binary. Left fill your answer with zero bits to the nearest nibble boundary. Calculate the binary value using both methods described in the text. Show your work in a systematic way.

5. Give the powers-of-ten summation representation for 1022₁₀.

6. Convert 1022₁₀ to unsigned binary and left pad the number with zeros to the nearest nibble.

7. Convert 1022₁₀ to unsigned hexadecimal.

8. What is the decimal value of the following bit pattern assuming the number is coded as unsigned binary in 8 bits?

1110 1101

9. What is the decimal value of the 8-bit pattern in the previous problem assuming the number is a signed integer coded in 8-bit two’s complement representation?

10. What is the hexadecimal value of the number 1019₁₀ assuming unsigned 12-bit binary representation?

11. What is the hexadecimal value of the number 1019₁₀ assuming two’s complement representation in 12 bits?

12. What is the hex value of the number −2222₁₀ when coded using two’s complement representation in 12 bits?

13. What is the decimal value of 0xFAB assuming unsigned binary repre- sentation?

14. What is the decimal value of 0xFAB assuming two’s complement rep- resentation in 12 bits?

15. What is the binary value of the hex number 0xCAFE assuming 16-bit unsigned representation?

16. What is the hex value of the following binary number assuming 16-bit two’s complement representation for signed integers?

1011 1010 1011 1110

17. Convert 0x871 to signed decimal. Assume two’s complement representation in 12 bits.

18. What is the range of unsigned integers that can be represented in 12 bits?

19. What is the range of signed integers that can be represented in 12 bits?

20. Convert the following binary pattern to decimal. Assume the pattern is coded in 12 bits using two’s complement representation.

1111 0100 1110

21. How many bits are needed to represent the number −2271₁₀ in two’s complement representation?

22. Convert the number −2271₁₀ to binary in two’s complement representation. Fill out the number to the nearest nibble boundary.

23. Draw the equivalent logic for the Nand gate using only And, Or, Not gates.

24. Draw the equivalent logic for the Nor gate using only And, Or, Not gates.

25. Draw the equivalent logic for the Xor gate using only And, Or, Not gates.

26. Give the logic expression for the following circuit.

[Figure: a logic circuit with inputs A, B, and C.]

27. Draw the logic circuit for the following logic expression.

(A ∨ ¬B) ∧ ¬(B ∨ C)

Chapter 2

Logic Functions

2.1 Functions

At perhaps the most fundamental level, computing can be seen as the mechanical calculation of mathematical functions. A mathematical function is specified by a set called the domain, a set called the codomain (sometimes called the range), and a rule that assigns to every element of the domain a unique element of the codomain. We commonly say a function maps every domain element to a codomain element. The following list gives the most common mathematical domains – numbers, of course.

N – the natural numbers,
Z – the integers,
Q – the rational numbers,
R – the real numbers,
C – the complex numbers.

A function need not be restricted to the above common mathematical domains. Any well-defined set can be a domain. For example, the two element set {0, 1} is the common Boolean domain we use in logic. When a function has a Boolean codomain we call the function a predicate. The equality relation is probably the most common Boolean function – it takes two arguments in some domain and produces a true result if they are equal and a false result if they are not equal. The definition of a function extends to multi-argument functions for rules that map n-tuples of elements from n domains to a codomain element. For example, the algebraic operations of addition and multiplication map pairs (2-tuples) of numbers to a single result number.

In programming languages a data type is a set of computational values. A primitive data type is typically built into a processor. In an earnest attempt to approximate the mathematical domains in computing, the following primitive data types are common.

Bool – Boolean values, representing the values 0 (or F) and 1 (or T),
Int – bounded integers approximating the integers,
Float – floating point numbers approximating the real numbers,
Char – character values representing single text characters.

The primitive data types in a processor can only approximate the mathematical domains using some finite internal representation, whereas obviously the mathematical domains are not finite. As a cautionary note, be aware that computational errors can creep into software when numbers are rounded or truncated to fit into finite representations. These roundoff errors can sometimes cascade and produce serious miscalculations without careful error analysis in numerical algorithms.
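The roundoff warning is easy to demonstrate. This brief example is ours, not from the text: the decimal fraction 0.1 has no exact binary floating point representation, so adding it repeatedly accumulates a tiny error.

```python
# A brief demonstration (not from the text) of roundoff error:
# 0.1 is not exactly representable in binary floating point, so
# summing it ten times does not give exactly 1.0.

total = sum(0.1 for _ in range(10))
print(total == 1.0)        # False: the ten roundoff errors accumulate
print(abs(total - 1.0))    # a tiny but nonzero residue
```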

2.2 Logic Functions

Logic functions are a special case of Boolean functions in which the arguments’ domains must all be Boolean as well as the result. A logic function takes any number of type Boolean arguments and produces a type Boolean result. In other words, all inputs must evaluate to false or true (0 or 1) and the output evaluates to false or true (0 or 1). The restriction to only Boolean domains for logic functions is what makes possible the nesting of logic expressions introduced in the language of logic. Logic functions are finite, that is, it is possible to finitely list all of the possible inputs and outputs of the function in a finite table. Of course, this is possible because a function can only have a finite number of arguments and there are only two possible values for each argument. Every logic expression determines a logic function because logic expressions are composed of nested logic expressions that are ultimately based on the logic operators. The logic gates themselves operate on Boolean values that are calculated by other logic expressions. Each of the gates that correspond to a logic operator is a logic function. The ¬ gate is a one-argument (unary) function and all the other logic operators that we’ve encountered so far are two-place (binary) functions. The input signals of a logic circuit can be labeled with any logic expression, so we’ve used Greek letters (α, β) to stand for an arbitrary logic expression rather than just a single logic variable as we did earlier.

2.2.1 Primitive Gate Functions

[Figure: And gate symbol with inputs α, β and output α ∧ β.]

And Gate Function Table

α β | α ∧ β
0 0 |   0
0 1 |   0
1 0 |   0
1 1 |   1

[Figure: Or gate symbol with inputs α, β and output α ∨ β.]

Or Gate Function Table

α β | α ∨ β
0 0 |   0
0 1 |   1
1 0 |   1
1 1 |   1

[Figure: Not gate symbol with input α and output ¬α.]

Not Gate Function Table

α | ¬α
0 |  1
1 |  0

2.2.2 Evaluating Logic Expressions and Circuits

A logic expression with constants can be evaluated using the function tables for the logic operators. Here are some examples.

0 ∨ 1 ∧ 1 = 0 ∨ (1 ∧ 1)    Adding parentheses to show precedence
          = 0 ∨ 1           Using the And gate table
          = 1               Using the Or gate table

0 ∨ ¬1 ∧ ¬0 ∨ ¬1 = (0 ∨ ((¬1) ∧ (¬0))) ∨ (¬1)    Adding parentheses
                 = (0 ∨ (0 ∧ 1)) ∨ 0              Using the Not gate table three times
                 = (0 ∨ 0) ∨ 0                    Using the And gate table
                 = 0                              Using the Or gate table twice

A logic expression with variables can be evaluated given a specific assignment of Boolean values to the variables. For example,

Variable Assignment: A = 0, B = 1, C = 1

A ∨ (B ∧ C) = 0 ∨ (1 ∧ 1)    Substituting variable values
            = 0 ∨ 1           Using the And gate table
            = 1               Using the Or gate table

For a logic expression with k distinct variables there will be 2^k possible combinations of variable assignments. For example, the above expression with three distinct variables has 8 possible variable assignments. For a given logic expression we can accumulate all of the possible variable assignments and resulting values into a single table that we call a function table for the logic expression. By convention we list the variable assignments in binary counting order from top to bottom. The table itself fully specifies the logic expression as a logic function. Here is a simple logic function table for a logic expression with two variables. The table requires 4 calculations, one for each of the possible variable assignments.

Function Table for a Logic Expression with Two Variables

A B | (A ∧ ¬B) ∨ (¬A ∧ B)
0 0 |          0
0 1 |          1
1 0 |          1
1 1 |          0
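The table above can be generated mechanically by enumerating all 2^k assignments in binary counting order. This sketch is our own illustration, not from the text.

```python
# A sketch (not from the text) that builds the function table for the
# expression (A ∧ ¬B) ∨ (¬A ∧ B) by enumerating all assignments in
# binary counting order.

from itertools import product

def f(a: int, b: int) -> int:
    return (a & (1 - b)) | ((1 - a) & b)   # (A ∧ ¬B) ∨ (¬A ∧ B)

print("A B | f")
for a, b in product((0, 1), repeat=2):      # rows 00, 01, 10, 11
    print(a, b, "|", f(a, b))
```

The four printed rows match the table: 0, 1, 1, 0 reading down the output column.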

2.2.3 Logic Tables for Expressions and Circuits

Building a complete function table for an expression requires that we evaluate the expression for every possible assignment of Boolean values to the variables in the expression. But we can do this in a more systematic way than separately calculating the value of the expression for each possible Boolean value assignment to the variables. Instead we can build the complete function table for each of the subexpressions from the innermost to the outermost final expression. For example, consider the following logic expression and corresponding logic circuit. We saw the function table for this expression in the previous section. Now we will systematically build the function table.

Logic Expression: (A ∧ ¬B) ∨ (¬A ∧ B)

[Figure: logic circuit for (A ∧ ¬B) ∨ (¬A ∧ B) with inputs A and B.]

In this construction of the function table for the expression we’ll lay out a table with the input variables on the left and then on the right of the vertical bar we will list all of the subexpressions from the simplest innermost expressions on the left to the final outermost expression on the right.

Function Table with Intermediate Expression Headings

A B | ¬A | ¬B | A ∧ ¬B | ¬A ∧ B | (A ∧ ¬B) ∨ (¬A ∧ B)
0 0 |    |    |        |        |
0 1 |    |    |        |        |
1 0 |    |    |        |        |
1 1 |    |    |        |        |

When we build the function table, first calculate a complete function table for each subexpression in each column from left to right. So, for example, the following partially filled out table shows the function tables calculated for ¬A and ¬B and then shows the function table for the first And subexpression. The table result for each column is calculated using the function table for the logic operator using the columns of the subexpressions. So the column headed by the subexpression A ∧ ¬B is calculated using the And gate function table and the columns headed by A and ¬B.

33 Function Table with Intermediate Expressions

A B | ¬A | ¬B | A ∧ ¬B | ¬A ∧ B | (A ∧ ¬B) ∨ (¬A ∧ B)
0 0 |  1 |  1 |   0    |        |
0 1 |  1 |  0 |   0    |        |
1 0 |  0 |  1 |   1    |        |
1 1 |  0 |  0 |   0    |        |

We can fill out the table for the second And expression and finally the table for the final expression on the output of the Or gate.

Function Table with Intermediate Expressions and Final Expression

A B | ¬A | ¬B | A ∧ ¬B | ¬A ∧ B | (A ∧ ¬B) ∨ (¬A ∧ B)
0 0 |  1 |  1 |   0    |   0    |          0
0 1 |  1 |  0 |   0    |   1    |          1
1 0 |  0 |  1 |   1    |   0    |          1
1 1 |  0 |  0 |   0    |   0    |          0

Figure 2.1 shows the logic expression, the corresponding logic circuit, and the full logic function table including intermediate expressions and the final logic function.

2.2.4 Expressions as Functions

Now we can just write the summary table for the function defined by a logic expression. Notice that the function table for this expression is the same as the function table for the Xor gate. Figure 2.2 shows the logic expression, the corresponding logic circuit, and the meaning of the expression as a logic function table. Take a moment to look closely at the Xor function table. The Xor gate can be used to detect whether the two inputs are different. If we put a Not gate on the output of an Xor gate we can detect whether the two input bits are the same, that is, the equivalence logic function denoted by the ↔ symbol. In the next chapter we will see how an Xor gate can be viewed from a different data flow viewpoint. In the data flow viewpoint one of the Xor gate inputs determines whether the other bit is inverted or not.

Logic Expression: (A ∧ ¬B) ∨ (¬A ∧ B)

[Figure: logic circuit for (A ∧ ¬B) ∨ (¬A ∧ B) with inputs A and B.]

Function Table with Intermediate Expressions

A B | ¬A | ¬B | A ∧ ¬B | ¬A ∧ B | (A ∧ ¬B) ∨ (¬A ∧ B)
0 0 |  1 |  1 |   0    |   0    |          0
0 1 |  1 |  0 |   0    |   1    |          1
1 0 |  0 |  1 |   1    |   0    |          1
1 1 |  0 |  0 |   0    |   0    |          0

Figure 2.1: Circuits and Function Tables for Logic Expressions

Logic Expression: (A ∧ ¬B) ∨ (¬A ∧ B)

[Figure: simplified Xor gate circuit with inputs A and B.]

Function Table

A B | (A ∧ ¬B) ∨ (¬A ∧ B)
0 0 |          0
0 1 |          1
1 0 |          1
1 1 |          0

Figure 2.2: Xor Circuit and Function Table

2.3 Equivalence of Boolean Expressions

We define the equivalence of Boolean expressions as the equivalence of the logic functions determined by the expressions. Because the logic expressions define finite functions representable by finite tables, we can prove the equivalence of two Boolean expressions by showing that the function tables of the two logic expressions are equivalent. We will illustrate a proof of equivalence of Boolean expressions by showing that the following two expressions are equivalent.

¬A ∧ ¬B = ¬(A ∨ B)

We can use the same systematic construction by tables of subexpressions. In this case we’ll construct the tables for the expressions on both sides of the equality. The values in the column headed by the expression on the left side are identical to the values in the column headed by the expression on the right side of the equality. Thus, because the function tables of the two expressions are the same, the expressions are equivalent.

Equivalence Proof of Logic Expressions using Function Tables

A B | ¬A | ¬B | ¬A ∧ ¬B | A ∨ B | ¬(A ∨ B)
0 0 |  1 |  1 |    1    |   0   |    1
0 1 |  1 |  0 |    0    |   1   |    0
1 0 |  0 |  1 |    0    |   1   |    0
1 1 |  0 |  0 |    0    |   1   |    0
              ↑                      ↑
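The table-based proof amounts to checking that the two expressions agree on every row, which a few lines of code can do exhaustively. This sketch is ours, not from the text.

```python
# A sketch (not from the text) of the table-based equivalence proof:
# two expressions are equivalent exactly when they agree on every
# possible assignment of Boolean values to their variables.

from itertools import product

equivalent = all(
    ((not a) and (not b)) == (not (a or b))   # ¬A ∧ ¬B  vs  ¬(A ∨ B)
    for a, b in product((False, True), repeat=2)
)
print(equivalent)   # True: the two table columns are identical
```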

2.4 Logic Functional Completeness

Binary logic functions have finite function tables, so you might reasonably ask whether it is possible to list all possible binary function tables, and indeed this can be done. The following table lists all possible output patterns for the four distinct combinations of input values. Each column is a different function and with four bits of output for each function (reading down the column) there are 16 possible binary functions of the two inputs. The output patterns are listed in counting order. Notice that the 0 function and the 1

function ignore their inputs and are therefore constant functions. The → is implication and ↛ is the negation of implication. Similarly, ← is right-to-left implication and ↚ is its negation. The ↔ is bi-implication, that is, equivalence.

Table of all Binary Boolean Functions

A B | 0  ∧  ↛  A  ↚  B  Xor  ∨  Nor  ↔  ¬B  ←  ¬A  →  Nand  1
0 0 | 0  0  0  0  0  0   0   0   1   1   1  1   1  1    1   1
0 1 | 0  0  0  0  1  1   1   1   0   0   0  0   1  1    1   1
1 0 | 0  0  1  1  0  0   1   1   0   0   1  1   0  0    1   1
1 1 | 0  1  0  1  0  1   0   1   0   1   0  1   0  1    0   1

We initially introduced logic expressions with just three common operators, Not (¬), And (∧), and Or (∨). We then used those basic operators to define the additional operators Nand, Nor, and Xor. Can all the logic operators given in the above table be defined using just And, Or, and Not? The answer is yes, and we say that the set of those three operators is functionally complete. It also turns out that the Nand operator by itself is functionally complete. All logic circuits can be built with just Nand gates and feedback! But it’s much easier to read circuits expressed with And, Or, Not. We leave the demonstration of functional completeness for the exercises.
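The functional completeness of Nand can be sketched directly: build Not, And, and Or from Nand alone and check them exhaustively. This illustration is ours and overlaps with the exercises, so treat it as a spoiler for one construction, not the only one.

```python
# A sketch (not from the text) showing the Nand operator is functionally
# complete: Not, And, and Or built from Nand alone, checked exhaustively.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def not_(a: int) -> int:
    return nand(a, a)                 # ¬A = A nand A

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))           # A ∧ B = ¬(A nand B)

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))     # A ∨ B = ¬A nand ¬B (De Morgan)

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("Nand alone rebuilds Not, And, and Or")
```

Since And, Or, and Not are functionally complete, and each is built here from Nand, Nand by itself is functionally complete.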

2.5 Boolean Algebra

Note: For this topic please refer to supplementary handout materials.

2.6 Tables, Expressions, and Circuits

We have examined three distinct representations of logic:

• Logic expressions,

• Logic circuits,

• Logic tables.

We have seen how to convert logic expressions to logic circuits and, conversely, how to convert logic circuits to logic expressions. Logic expressions and logic circuits are equivalent representations. We have also shown how

to build a logic table for a logic expression (or circuit) that completely expresses the logic function determined by the expression. The logic table lists all possible evaluations of an expression row by row for each possible way of assigning logic values to the logic variables in the expression. If we can now show how to build a logic expression from a given logic table, then we will have demonstrated the equivalence of all three representations of logic expressions. We will first define a normal form for logical expressions and then show how to build normal form expressions directly from logic function tables.

2.6.1 Disjunctive Normal Form

Recall that the And operator is called a conjunction and the Or operator is called a disjunction. A Not operator is sometimes called a logic negation. A series of logic expressions connected with the Or operator is called a disjunction or sum. A series of logic expressions connected with the And operator is called a conjunction or product term. A logic expression that is only a logic variable or a negation of a logic variable is called a literal. The disjunctive normal form or sum of products form of a logic expression is a disjunction of conjunctions in which the product terms consist only of literals. In other words, the disjunctive normal form of a logic expression α has the general structure

α = β₁ ∨ β₂ ∨ . . . ∨ βₙ

where each of the βᵢ logic expressions is a conjunction of only logic variables or negations of logic variables. When writing expressions in sum of products form it is common to write the conjuncts of literals by simple juxtaposition, following an analogy with arithmetic products. Removing the explicit representation of the ∧ operator in the disjunctive normal form makes the expressions easier to read. It is also fairly common to use the plus symbol (+) for the logical Or operator in the disjunctions and to use an overbar to designate the negation of a logic variable. The resulting logic expressions subsequently look very much like the algebraic expressions that readers are familiar with. Note that juxtaposition for product won’t work if expressions have the logical constants 0 or 1. The following list of logic expressions illustrates the variety of common notations. All of the expressions are in disjunctive normal form and they are all the same expression written using different conventions. Note that if the ∧ symbol is used for And, then the Or symbol is always ∨; never mix ∧

with + in logical expressions. We will commonly use the conventions in the last three forms because they are the easiest to read.

(A ∧ (¬B) ∧ C) ∨ (¬A ∧ B ∧ C) ∨ (A ∧ B ∧ (¬C))
A ∧ (¬B) ∧ C ∨ (¬A) ∧ B ∧ C ∨ A ∧ B ∧ (¬C)
A ∧ ¬B ∧ C ∨ ¬A ∧ B ∧ C ∨ A ∧ B ∧ ¬C
A(¬B)C ∨ (¬A)BC ∨ AB(¬C)
A(¬B)C + (¬A)BC + AB(¬C)
AB̄C + ĀBC + ABC̄

An expression is in full disjunctive normal form if every product term mentions every variable in the expression, such as the examples just given. Here are some more expressions in disjunctive normal form, although not necessarily full disjunctive normal form. The last line shows how a center dot (·) can be used for product when juxtaposition won’t work.

¬A ∧ B
Ā
ABC
ĀB̄ + C
1 + 0 + B̄
1 · 0 + 1

The following expressions are not in disjunctive normal form:

¬(A ∧ B) A ∧ B A ∧ (B ∨ C)

Logic expressions in normal form may not be the most efficient in terms of the number of gates, but there are minimization algorithms that can automatically reduce logic expressions to equivalent expressions with minimal gate counts. To see an exceedingly inefficient expression for a logic function, construct the full disjunctive normal form for the three-variable case of the constant logic function 1. The constant logic function 1 is the function that ignores all its input values and produces a 1 (true) output in all cases.

2.6.2 Logic Expressions from Tables

We can associate a product term with each row in a logic table. Each row gives a particular truth assignment to all of the logic variables. The product term we associate with a row is the one that evaluates to true precisely

for the particular variable assignment of the row. For example, in a table with three variables A, B, C, a variable assignment of 0, 0, 0 would be true exactly when the product term ĀB̄C̄ is true. No other product term would be true under that assignment. Similarly, every other variable assignment has associated with it a unique product term for truth. The following table lists the product terms associated with each row of a logic table with three variables.

Product Terms for Variable Assignments

A B C | Product Term
0 0 0 | ĀB̄C̄
0 0 1 | ĀB̄C
0 1 0 | ĀBC̄
0 1 1 | ĀBC
1 0 0 | AB̄C̄
1 0 1 | AB̄C
1 1 0 | ABC̄
1 1 1 | ABC

Now that we have a logic expression associated with the truth of each variable assignment, we can easily construct a logic expression for any logic table by forming the disjunction of precisely those product terms for which the logic expression is expected to be true. For example, the following logic table specifies a three variable expression that is true when exactly two of the variables are true. The table shows the associated product terms for the true rows of the expression and the final expression is listed underneath the table.

Logic Table for Exactly Two

A B C | Exactly Two | Product Term
0 0 0 |      0      |
0 0 1 |      0      |
0 1 0 |      0      |
0 1 1 |      1      | ĀBC
1 0 0 |      0      |
1 0 1 |      1      | AB̄C
1 1 0 |      1      | ABC̄
1 1 1 |      0      |

Logic Expression for Exactly Two

ĀBC + AB̄C + ABC̄

The above method for converting logic tables to logic expressions always produces a logic expression in full disjunctive normal form. How might we convert an arbitrary logic expression into disjunctive normal form? First build a logic table for the original expression and then use the table to form the full disjunctive normal form expression!
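The table-to-expression method can be automated: form the product term (minterm) for each true row and join the terms with +. This sketch is our own illustration of the method, using the Exactly Two function as its example.

```python
# A sketch (not from the text) of the table-to-expression method: form
# the product term (minterm) for each true row and join them with +.

from itertools import product

def exactly_two(a: int, b: int, c: int) -> int:
    return 1 if a + b + c == 2 else 0

def minterm(assignment):
    # A variable appears plain if assigned 1, overbarred if assigned 0.
    return "".join(v if bit else v + "\u0304"
                   for v, bit in zip("ABC", assignment))

terms = [minterm(row) for row in product((0, 1), repeat=3)
         if exactly_two(*row)]
print(" + ".join(terms))   # the full disjunctive normal form
```

The output is the same three-term sum of products shown above, one minterm per true row of the table.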

2.7 Exercises

1. Evaluate the following logic expression for the variable assignment: A = 0, B = 1, C = 1, D = 0.

¬(A ∨ B) ∧ (¬B ∨ C) ∨ D

2. Build a truth table for the logic function of three variables that is true when either all of the variables are true or when exactly one of the variables is true.

3. Give a logic circuit diagram for the following logic expression.

¬(A ∨ B) ∧ (¬B ∨ C)

4. Build the truth table for the logic expression in the previous exercise.

5. Give equivalent forms for the logic expression

A ∧ ((¬B) ∨ (¬C)) ∨ (¬C ∧ B)

using each of the three following conventions

(a) Use juxtaposition for And, ∨ for Or, and ¬ for Not.
(b) Use juxtaposition for And, + for Or, and ¬ for Not.
(c) Use juxtaposition for And, + for Or, and the overbar for Not.

6. Give a logic table for the following logic expression (from the previous exercise). A ∧ ((¬B) ∨ (¬C)) ∨ (¬C ∧ B)

7. Draw a logic circuit diagram for the expression in the previous exercise.

8. Convert the expression in the previous exercise to disjunctive normal form.

9. Give a logic expression in full disjunctive normal form for the constant logic function 1. Assume two input variables.

10. Show how a Nand gate can be used to implement a Not gate.

11. Show how an And gate can be implemented with only Nand gates. Hint: use the previous exercise.

12. Show how an Or gate can be implemented with only Nand gates.

13. Draw the logic circuit for the following logic expression.

¬(A ∨ B) ∧ (¬B ∨ C) ∨ D

14. Give the logic expression in full sum of products form for the logic expression in the previous problem. Hint: make a truth table first.

15. *Give a simpler logic expression in disjunctive normal form for the previous problem. (A starred problem is a challenge problem).

Chapter 3

Dataflow Logic

The correspondence between logic and computing hardware works at the level of single wires coded with either 0 or 1. A single binary 0 or 1 is a unit of information called a bit, as we learned in Chapter 1. Bits are collected into bytes or words that represent some type of data, such as numbers or characters. At the most primitive level processors are just gates connected by wires carrying bits. The gates transform the bits on input wires into bits on the output wires that feed the inputs of other gates. It is not immediately apparent that computation of any complexity could possibly take place with the simple gates and wiring we have discussed. But computational complexity quickly arises when we allow outputs of gates to feed back into the input of gates. Such circuit feedback is controlled by a clock and careful timing constraints on gates so that complex computations can reuse gates through feedback and consequently perform complex computations over time. We call such logic sequential logic or clocked logic. The second half of the text is devoted to clocked logic. We say combinational logic or non-clocked logic when referring to logic that does not use a clock and consequently does not have feedback.

The combinational logic we have discussed so far has processed information only at the level of single bits. But from a conceptual viewpoint we know that processors work on numbers and other data abstractions that are represented by collections of bits. Data is carried around in the processor by bundles of wires that we think of as a single conceptual entity. Of course, the processor doesn’t care about the bundling, but in the interest of conceptual clarity we clearly identify wire bundles when we draw circuits. A bundle of wires in a circuit is called

43 a bus. Bundles of wires in a processor that carry data are usually a multiple of bytes (8 bits) and often closely connected to the word size of the processor. Recall that the word size of a processor is a fundamental architectural pa- rameter. The word size of a processor determines the range and precision of the built-in primitive data types that it can efficiently process.

3.1 Data Types

Mathematical domains like integers and reals are represented in computing by primitive data types. Although the mathematical domains of numbers are infinite, primitive data type implementations are only finite approximations in finite storage. So, for example, integers are represented in a finite word size and called the int primitive data type in Java or C++. The practical outcome of a finite word size representing integers is that programmers must be aware of the possibility of calculating an integer value that exceeds the range of values that can be represented in the finite word size, a condition referred to as overflow. The representation of real numbers in a bounded word size (e.g. the primitive data type float in Java or C++) results in the practical problem of a bounded precision as well as a bounded range. Very large numbers or very small fractions can only have a bounded number of digits.

Computing deals with text data as well as numbers. Text is a sequence of characters and characters are stored in fixed size words. A processor must be able to move and copy characters, but other operations on characters typically take place in specialized input-output hardware peripheral to the processor.

In addition to the primitive data types commonly found in programming languages we will see that processors are able to do some special operations on the raw bit patterns of fixed size words. For the purposes of building our simple processor we'll confine our study to raw bit patterns, integers, and characters coded in fixed size words.

Every representation of a specific primitive data type is determined by two factors:

• the word size, and

• the coding format.

For the time being, please do an online search for an Ascii table.

Figure 3.1: Ascii Table

3.1.1 Unsigned Int

Bounded unsigned integers are a data type found in some high level languages (C, C++) but not others (Java, Python). Unsigned integers are the non-negative integers (that is, the natural numbers). There seems to be some controversy about their importance for a programming language. Mathematically the unsigned integers are simply a subset of the integers, so they would seem to be unnecessary as a primitive type. But there are fairly common circumstances in which unsigned integers are needed, such as lengths, distances, array indices or memory addresses. On the other hand, having two kinds of arithmetic (signed and unsigned) can cause confusion or complications that lead to program errors. In any case, all architectures provide some instructions for dealing with unsigned integers, or more generally, raw bit patterns. For example, we will see bitwise logical operations, shifting, and unsigned counting later in this chapter.

3.1.2 Signed Int

The signed and bounded Int primitive data type exists in all programming languages and is universally implemented with two's complement representation. Architectures must provide facilities for signaling and handling overflow conditions in arithmetic operations because the Int type is bounded.

3.1.3 Char and String

The Char primitive type is a set of characters now commonly implemented using the Unicode standard, a 16 bit character encoding that can represent a huge variety of characters in many languages. The Unicode standard is an upgrade to the original Ascii standard that coded characters in only 7 or 8 bits and included only English typewriter characters, symbols, and non-printing keystrokes. Figure 3.1 lists the Ascii standard. The much larger Unicode standard can be easily found online.

A string is a sequence of characters and consequently a String data type is not actually a primitive data type. Nevertheless, strings are so common in programming that most languages have special facilities for handling strings as a kind of pseudo-primitive data type. Moreover, most architectures include facilities that support common string operations, like moving or copying strings in memory.

3.2 Data Buses

So far we have only covered the logic operations that work on single bits. This section introduces circuits that can operate on multi-bit data values of some primitive data type. When a bundle of wires carries a data value we call the bundle a data bus or a data path. Processors use the multi-bit data values carried by data buses to compute primitive functions on the primitive data types. Complex processors have all kinds of buses, not just primitive data type buses.

We'll use a bracketed expression following a bus name to indicate a specific number of bits. For example, to designate bits 7 to 0 of bus A we write the following, where bit 7 is the most significant bit and bit 0 is the least significant bit: A[7:0]

Our first example shows a simple 8-bit logical operation. The value carried on the 8-bit bus is considered a raw bit pattern and the circuit simultaneously performs a logical Not on each of the eight individual bits. We call this a Bitwise Not function. A raw bit pattern is the most primitive type of data value that processors work with and does not generally appear as a primitive data type in higher level programming languages. Typically manipulation of raw bit patterns occurs at a very low level of processing, such as found in a device controller.

Bitwise Not

[Circuit: the 8-bit bus A is split into single bits, each bit passes through a Not gate, and the eight results are bundled onto the 8-bit bus Out.]

Function Table for Bitwise Not

  A[7:0]          Out[7:0]
  a7 a6 ... a0    ¬a7 ¬a6 ... ¬a0

Our second example shows a slightly more sophisticated circuit that includes a single control signal directing the circuit to perform the Bitwise Not or to pass the data through unchanged. We call this a Controlled Bitwise Not circuit.

Controlled Bitwise Not

[Circuit: the 8-bit bus A is split into single bits, each bit enters an Xor gate together with the Inv control signal, and the eight results are bundled onto the 8-bit bus Out.]

Function Table for Controlled Bitwise Not

  A[7:0]          Inv   Out[7:0]
  a7 a6 ... a0    0     a7 a6 ... a0
  a7 a6 ... a0    1     ¬a7 ¬a6 ... ¬a0

The key to understanding the Controlled Bitwise Not circuit is an interpretation of the behavior of the Xor gate as a one-bit data flow component rather than a one-bit logic function. In the data flow view, one bit entering an Xor gate is considered a data bit. The other bit is considered a control signal whose value determines whether the data bit is inverted or not. The data flow interpretation makes a good deal more sense when we are looking at a data path of more than one bit, say, the 8 bits of the Controlled Bitwise Not circuit. The single Inv control bit that enters all eight of the Xor gates determines the inversion of the entire 8-bit bus.

Both of the example functions are unary (single operand) functions working on raw 8-bit patterns.
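The Xor-based control trick is easy to check in software. The following Python sketch (Python is only our illustration language here, not something the circuits use) mirrors the Controlled Bitwise Not function table by fanning the Inv bit out to all eight bit positions:

```python
def controlled_bitwise_not(a: int, inv: int) -> int:
    # Fan the single Inv control bit out to all eight Xor gates:
    # Inv = 1 gives a mask of all 1s (invert), Inv = 0 all 0s (pass through).
    mask = 0xFF if inv else 0x00
    return (a ^ mask) & 0xFF

assert controlled_bitwise_not(0b10110111, 0) == 0b10110111  # pass through
assert controlled_bitwise_not(0b10110111, 1) == 0b01001000  # inverted
```

A single `^` on a Python int acts on every bit at once, so one expression plays the role of the eight parallel Xor gates.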

3.3 Bitwise Functions

In this section we'll look at bitwise operators as function circuit components. The idea is to bundle a collection of gates working together on a bus into a block component acting as a function on the bundled bits of the bus. For example, in the previous section we saw a single 8-bit data bus connected to a unary (one-operand) bitwise logic circuit. We'll turn that into a single abstract one-operand (unary) functional component and then examine functional components for binary (2-operand) bitwise operations on 8-bit data buses.

We create an abstract component for the bitwise not circuit by surrounding the detailed internal logic with a box and then eliminating the internal circuit drawing, replacing it with the abstract component name. We refer to the resulting box as a block component symbol. Our abstract bitwise not component looks like the following and has the function shown in the accompanying function table.

Block Component Symbol for Controlled Bitwise Not

[Block symbol: the 8-bit bus A and the Inv control signal enter a box labeled Not, which outputs the 8-bit bus Out.]

Function Table for Controlled Bitwise Not

  A[7:0]          Inv   Out[7:0]
  a7 a6 ... a0    0     a7 a6 ... a0
  a7 a6 ... a0    1     ¬a7 ¬a6 ... ¬a0

The following example is a bitwise And operator on 8-bit data values.

Bitwise And

[Circuit: the 8-bit buses A and B are each split into single bits, corresponding bits of A and B enter an And gate, and the eight results are bundled onto the 8-bit bus Out.]

Function Table for Bitwise And

  A[7:0]          B[7:0]          Out[7:0]
  a7 a6 ... a0    b7 b6 ... b0    (a7 ∧ b7) (a6 ∧ b6) ... (a0 ∧ b0)

Once we understand how a bitwise logical function works we can introduce a circuit abstraction to simplify our circuit diagrams. We'll draw the bitwise And circuit above as a single dataflow component symbol as follows. The reader of the circuit must be sure to understand the above internal wiring that is assumed for the dataflow component symbol representing the abstraction.

Bitwise And Dataflow Component

[Block symbol: the 8-bit buses A and B enter a bitwise And component that outputs the 8-bit bus Out.]

The bitwise Or operator is similar.

Bitwise Or

[Circuit: the 8-bit buses A and B are each split into single bits, corresponding bits of A and B enter an Or gate, and the eight results are bundled onto the 8-bit bus Out.]

Function Table for Bitwise Or

  A[7:0]          B[7:0]          Out[7:0]
  a7 a6 ... a0    b7 b6 ... b0    (a7 ∨ b7) (a6 ∨ b6) ... (a0 ∨ b0)

Bitwise Or Dataflow Component

[Block symbol: the 8-bit buses A and B enter a bitwise Or component that outputs the 8-bit bus Out.]
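Both two-operand bitwise components can be mirrored in a short Python sketch (an illustration only; the function tables above are the actual specification):

```python
def bitwise_and(a: int, b: int) -> int:
    # Eight And gates in parallel, one per bit position of the two buses
    return (a & b) & 0xFF

def bitwise_or(a: int, b: int) -> int:
    # Eight Or gates in parallel
    return (a | b) & 0xFF

assert bitwise_and(0b11001100, 0b10101010) == 0b10001000
assert bitwise_or(0b11001100, 0b10101010) == 0b11101110
```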

3.4 Integer Functions

In this section we look at some examples of circuits for computing simple integer functions on 8-bit bundled data carrying values of the primitive type Bounded Unsigned Integer. The bitwise operators of the previous sections worked on 8-bit raw bit patterns. Now we examine circuits that perform common arithmetic operations on 8-bit data values that code bounded non-negative integer values.

One of the simplest integer operations is to increment an integer by one. Consider a function table that specifies the desired behavior of an incrementer circuit.

Function Table for Integer Increment

  A    Out
  0    1
  1    2
  2    3
  ...  ...

Algebraic Function Definition for Integer Increment

Out = A + 1

Notice that the table is actually infinite. Of course we can much more simply express the desired function of the incrementer circuit as an algebraic formula. We must remember, however, that the values carried by the bus are not actually unsigned integers, but bounded unsigned integers, commonly known as the int data type. Our algebraic specification must indicate what the output of the circuit will be when the input is the largest representable integer! The largest unsigned integer is a bit pattern of all 1s and our circuit will increment the largest unsigned integer to all 0s. We can specify that using the modulus operator of mathematics (denoted mod).

Function Table for 8-bit Bounded Unsigned Integer Increment

  A     Out
  0     1
  1     2
  2     3
  ...   ...
  254   255
  255   0
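The wrap-around row of the table follows directly from the mod specification. A quick Python check (illustration only) of Out = (A + 1) mod 2^8:

```python
def inc8(a: int) -> int:
    # 8-bit bounded unsigned increment: Out = (A + 1) mod 2^8
    return (a + 1) % 256

assert inc8(0) == 1
assert inc8(254) == 255
assert inc8(255) == 0  # all 1s increments to all 0s
```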

Binary Function Table for 8-bit Bounded Unsigned Integer Increment

  A[7:0]     Out[7:0]
  00000000   00000001
  00000001   00000010
  00000010   00000011
  ...        ...
  11111110   11111111
  11111111   00000000

The following circuit implements the increment function on the incoming data path. We've shown the circuit and the function table.

8-bit Bounded Unsigned Integer Increment

[Circuit: the 8-bit bus A is split into single bits, the increment logic operates on the individual bits, and the results are bundled onto the 8-bit bus Out.]

Function Table for 8-bit Bounded Unsigned Integer Increment

  A    Out
  k    (k + 1) mod 2^8

Circuit diagrams with the full details of logic components can become very complex and difficult to read. We can utilize our earlier idea of an abstract block component symbol for a dataflow circuit component to introduce a general subcircuit abstraction. A subcircuit abstraction represents any circuit with well-defined input and output as a block component symbol. A block component symbol hides the internal circuitry and shows only the input-output relationship of the circuit function. Generally speaking, if our circuit diagram is composed mostly of block components then we call the circuit a block diagram. Here is a block diagram using the increment block component symbol.

Block Component Symbol for 8-bit Bounded Unsigned Integer Increment

[Block symbol: the 8-bit bus A enters a box labeled Inc, which outputs the 8-bit bus Out.]

Function Table for 8-bit Bounded Unsigned Integer Increment

  A    Out
  k    (k + 1) mod 2^8

Subcircuits are analogous to subroutines in programming. Like logic gates, subcircuits are logic schemas so multiple instances of the subcircuit can be placed in a larger circuit. Each subcircuit or gate instance is a replication of the same logic as a distinct physical entity.

Block components can have control inputs as well as data inputs. The following circuit has an input control signal that determines whether the circuit performs the increment or not. We do not need the detailed circuitry to understand what the subcircuit does, so we've shown the circuit details last.

Block Component Symbol for Controlled 8-bit Bounded Integer Increment

[Block symbol: the 8-bit bus A and the Inc control signal enter a box labeled Inc, which outputs the 8-bit bus Out.]

Function Table for Controlled 8-bit Bounded Unsigned Integer Increment

  A    Inc   Out
  k    0     k
  k    1     (k + 1) mod 2^8

Controlled 8-bit Bounded Unsigned Integer Increment

[Circuit: the 8-bit bus A is split into single bits, the increment logic is gated by the Inc control signal, and the results are bundled onto the 8-bit bus Out.]

Function Table for Controlled 8-bit Bounded Unsigned Integer Increment

  A    Inc   Out
  k    0     k
  k    1     (k + 1) mod 2^8

We can combine circuit functions to do more complex logic. Recall that one way to negate an integer in two's complement representation is to invert all bits and then add one. We can create a new controlled negation circuit called Neg that is composed of a controlled bitwise not followed by an increment! The first block diagram below depicts the Neg as a single component. The second block diagram shows how two subcircuits can be combined to build a negation circuit.

The negation component makes the most sense with a Signed Int interpretation, so we give two function tables. The first function table represents a Signed Int interpretation and the second function table gives the Unsigned Int interpretation. Both tables specify the range of input for which the interpretation is well defined. The negation of the largest negative integer is not defined because there is no corresponding positive integer of that magnitude. The circuit when applied to the largest negative integer returns the largest negative integer. Hardware implementations may choose to flag this as an exceptional condition or leave it for the user to beware.

Block Component Symbol for Controlled 8-bit Int Negate

[Block symbol: the 8-bit bus A and the Neg control signal enter a box labeled Neg, which outputs the 8-bit bus Out.]

Block Implementation of a Controlled 8-bit Int Negate

[Block diagram: the 8-bit bus A enters a Not block, its 8-bit output enters an Inc block, and the Inc block outputs the 8-bit bus Out; the Neg control signal drives both blocks.]

Function Table with Signed Int Interpretation for Controlled 8-bit Int Negate Component

  A    Neg   Out
  k    0     k
  k    1     −k

  where −(2^7 − 1) ≤ k ≤ 2^7 − 1

Function Table with Unsigned Int Interpretation for Controlled 8-bit Int Negate Component

  A    Neg   Out
  k    0     k mod 2^8
  k    1     (2^8 − k) mod 2^8

  where 0 ≤ k ≤ 2^8 − 1
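The Not-then-Inc composition can be checked against both tables with a small Python sketch (again an illustration of the block diagram, not the hardware):

```python
def neg8(a: int, neg: int) -> int:
    # Controlled bitwise Not followed by controlled increment,
    # both driven by the same Neg control bit.
    x = (a ^ 0xFF) if neg else a
    return (x + neg) % 256

assert neg8(0x42, 0) == 0x42   # Neg = 0 passes the value through
assert neg8(0x01, 1) == 0xFF   # signed view: 1 negates to -1
assert neg8(0xFF, 1) == 0x01   # signed view: -1 negates to 1
assert neg8(0x80, 1) == 0x80   # largest negative negates to itself
```

The last assertion exhibits exactly the boundary case discussed above: the largest negative integer comes back unchanged.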

3.5 Decoders

Information on several single binary signals can be combined into a single encoded signal with a numerical value for each control value. For example, four independent single-bit control signals can be combined into an encoded two-bit control line with four possible control values (0 to 3). When bundled signals are used for control we call the bundle a control bus, to distinguish its purpose from, say, a data bus. Bundling control signals has the clear advantage of carrying control information with fewer physical signal lines.

The purpose of a decoder is to extract single control signals from encoded control values. For example, the following decoder transforms a two-bit encoded control signal into exactly one of four single-bit output control signals. The decoder function table shows how a numerical data value on the input bus translates to a specific bit pattern on the four output control signals. The logic for a decoder converts an encoded number on k input bits into exactly one of 2^k independently active single control output lines. Decoders are common circuit components that help make the transition from bundled control bus values into individual logic control signals needed in a circuit. The block component diagram and function table for a 2-to-4 decoder are as follows.

Block Component Symbol for 2-to-4 Decoder

[Block symbol: the 2-bit bus A enters a box labeled 2-to-4, which outputs the 4-bit control bundle Out.]

Function Table for 2-to-4 Decoder

  A    Out[3:0]
  0    1000
  1    0100
  2    0010
  3    0001

The internal details of the decoder block illustrate a wiring convention in which the circuit connections follow a simple binary counting pattern. Here we show the wiring of a 3-to-8 decoder, or octal decoder. We leave the block component and function table for the octal decoder for the exercises.

The inverse of a decoder circuit is an encoder circuit. Encoders are less commonly used and we won’t need one, so the circuit details are left for the exercises.

3.6 Multiplexers

The next circuit component serves the purpose of routing data from multiple sources onto a single bus. It is clearly infeasible to wire a bus for every source to destination path, just as we don't expect to have an independent rail line between every pair of cities. Distinct train routes share tracks by using switching components. Analogously, multiplexers are logic switching components that allow sharing of bus paths, usually data bus paths. The following component is a 2-to-1 multiplexer that routes either data path A or data path B onto the output bus, but never both at the same time. The Sel control signal selects which data path gets routed through to the output.

Block Component Symbol for 2-to-1 Multiplexer

[Block symbol: the 8-bit buses A and B and the Sel control signal enter a box labeled 2-to-1, which outputs the 8-bit bus Out.]

Function Table for 2-to-1 Multiplexer

  A   B   Sel   Out
  x   y   0     x
  x   y   1     y

The following circuit shows the internal logic for the 2-to-1 multiplexer on 4-bit wide data paths. Notice that all the data path widths must be the same. Also examine carefully how the select line is used to ensure exactly one input bus is passed to the output bus and that it is passed unchanged. The internal logic for the 2-to-1 multiplexer on 8-bit data paths given above as a block component follows a similar pattern and is left for the exercises.
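The select-line discipline can be sketched in Python, with each output bit computed as (a AND NOT Sel) OR (b AND Sel), a standard multiplexer gate pattern (a sketch of the behavior, not the exact wiring in the figure):

```python
def mux_2to1(a: int, b: int, sel: int) -> int:
    # Sel fans out to every bit position: Sel = 0 gates A through,
    # Sel = 1 gates B through, never both.
    mask = 0xFF if sel else 0x00
    return (a & ~mask & 0xFF) | (b & mask)

assert mux_2to1(0xAA, 0x55, 0) == 0xAA  # Sel = 0 routes A
assert mux_2to1(0xAA, 0x55, 1) == 0x55  # Sel = 1 routes B
```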

3.7 Control Logic

Sometimes it is desirable to bundle control lines into a control bus. A control bus is closely connected to the control needs of a particular circuit or subcircuit and need not correspond in any way to the width of a data bus. Often control buses route control codes to specific subcircuit components. As an example of using a control bus carrying control codes, consider a multiplexer on 8-bit wide data paths that is intended to select one of four input buses to route onto an output bus. Here is the block diagram and function table.

Block Component Symbol for 4-to-1 Multiplexer

[Block symbol: the 8-bit buses A, B, C, and D and the 2-bit Sel control bus enter a box labeled 4-to-1, which outputs the 8-bit bus Out.]

Function Table for 4-to-1 Multiplexer

  A   B   C   D   Sel   Out
  w   x   y   z   00    w
  w   x   y   z   01    x
  w   x   y   z   10    y
  w   x   y   z   11    z

Because we need to select one of four we'll need four distinct control codes. A 2-bit wide control bus can carry the four distinct values, but now what will the internal circuit logic be for the 4-to-1 multiplexer? We can use a 2-to-4 decoder on the Sel control bits and let each of the four disjoint output control signals choose a unique input bus through a bank of Or gates to the output as shown in the following circuit.

Notice that the method is simply an extension of the circuit technique used in the 2-to-1 multiplexer. In fact, if you look closely you will see that the earlier 2-to-1 multiplexer actually uses a trivial 1-to-2 decoder!
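The decoder-plus-gates construction can be mirrored in Python (a sketch of the structure only): the decoder produces a one-hot enable line from the Sel code, each enable gates one input bus, and the gated buses are merged onto the output.

```python
def mux_4to1(a: int, b: int, c: int, d: int, sel: int) -> int:
    # 2-to-4 decoder: one-hot enable lines from the 2-bit Sel code
    enable = [1 if sel == i else 0 for i in range(4)]
    # Gate each bus with its enable line, then merge with Or
    out = 0
    for line, bus in zip(enable, (a, b, c, d)):
        if line:
            out |= bus
    return out & 0xFF

assert mux_4to1(0x11, 0x22, 0x33, 0x44, 0b00) == 0x11
assert mux_4to1(0x11, 0x22, 0x33, 0x44, 0b10) == 0x33
```

Because the decoder output is one-hot, at most one bus ever reaches the Or merge, which is what makes the merge safe.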

3.8 Data Path Circuits

We have developed the data flow viewpoint of logic in order to lift the view of circuits from operations on single-bit, pure logic signals to functional operations on multi-bit data buses carrying numerical values and other data values. Logic circuits can now be viewed more abstractly as computing on programming-level data types, such as those introduced in the beginning of this chapter. In support of the more abstract view of circuits as computation over data types we also developed a block-component method of hiding details of the circuit logic within functional units in circuit diagrams. The resulting block circuit diagrams more closely align with our concept of programming-level operations and are consequently much easier to read.

In this section we expand the idea of dataflow logic into a general idea of a data path circuit consisting of data paths, functional units, and control logic.

The following example combines an increment, a negation, a logical complement, and an identity operation into one unary arithmetic unit with a 2-bit control. We can make use of the controlled bitwise not and the controlled increment circuit components developed earlier. The leftmost bit of the control word determines whether or not to perform the bitwise not and the rightmost bit of the control word determines whether or not to increment.

The function of the circuit depends on the interpretation of the data values on the input and output buses. The first table gives the interpretation of the circuit assuming the bit patterns are interpreted as unsigned bounded integers. The second function table gives the interpretation of the circuit assuming signed bounded integers in two's complement representation. The circuit, of course, doesn't care how we interpret the values.

Block Component Symbol for a Tiny ALU

[Block symbol: the 8-bit bus A and the 2-bit Op control bus enter a box labeled Alu, which outputs the 8-bit bus Out.]

Function Table for Tiny ALU with Bounded Unsigned Integers

  A   Op   Out
  k   00   k
  k   01   (k + 1) mod 2^8
  k   10   bwNot(k)
  k   11   (2^8 − k) mod 2^8

  where 0 ≤ k ≤ 2^8 − 1

Function Table for Tiny ALU with Bounded Signed Integers in Two's Complement Representation

  A   Op   Out
  k   00   k
  k   01   if k = 2^7 − 1 then −2^7 else k + 1
  k   10   bwNot(k)
  k   11   if k = −2^7 then k else −k

  where −2^7 ≤ k ≤ 2^7 − 1

Tiny Alu Circuit with Block Components

[Block diagram: the 8-bit bus A enters a Not block, its 8-bit output enters an Inc block, and the Inc block outputs the 8-bit bus Out; the two bits of the Op control bus drive the Not and Inc blocks respectively.]

A single functional unit performing any one of several arithmetic or logical operations is traditionally called an Arithmetic Logic Unit or ALU. Simple processors have a single ALU that is able to perform all the logical and integer arithmetic operations. More complex processors have other functional units; for example, a Floating Point Unit, or FPU, performs functional operations on values of type float that represent real numbers.
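The Tiny ALU's two chained controlled stages can be sketched in Python (illustration only; the function tables above remain the specification):

```python
def tiny_alu(a: int, op: int) -> int:
    # Leftmost Op bit drives the controlled bitwise Not stage,
    # rightmost Op bit drives the controlled increment stage.
    do_not, do_inc = (op >> 1) & 1, op & 1
    x = (a ^ 0xFF) if do_not else a
    return (x + do_inc) % 256

assert tiny_alu(0b00000101, 0b00) == 0b00000101  # identity
assert tiny_alu(0b00000101, 0b01) == 0b00000110  # increment
assert tiny_alu(0b00000101, 0b10) == 0b11111010  # bitwise not
assert tiny_alu(0b00000101, 0b11) == 0b11111011  # two's complement negate
```

Op = 11 applies Not then Inc in sequence, which is precisely the invert-and-add-one recipe for two's complement negation.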

3.9 Exercises

1. Draw the block component for the 8-bit Bitwise Or dataflow compo- nent.

2. Draw a 3-bit Bounded Unsigned Integer Increment circuit. Do not use a bundler and splitter — label each of the three inputs and three outputs separately.

3. Write down the logic equations for each of the three output signals in terms of the input signals for the 3-bit Bounded Unsigned Integer Increment circuit.

4. Build a logic table for the 3-bit Bounded Unsigned Integer Increment function based on the logic equations of the previous problem. Verify that the logic table exhibits the proper increment behavior.

5. Draw the block component for the octal decoder.

6. Give the function table for the octal decoder.

7. Give the internal logic for the 2-to-1 multiplexer on 4-bit data paths by giving the four outputs each as a logic expression of the nine inputs.

8. This problem is about the Tiny ALU component. Give the output bit pattern of the component for the following values of the input signals:

(a) k = 10110111, Op = 00 (b) k = 10110111, Op = 01

(c) k = 10110111, Op = 10 (d) k = 10110111, Op = 11

9. Draw the detailed logic diagram for a 4-bit version of the Tiny ALU component. Label the two bit Op signal inputs as Op1 and Op0, where zero is the least significant bit. Label the individual input signals of the A bus as a3, a2, a1, a0 from most to least significant.

10. Give the logic expression for the 4-bit Tiny ALU component that you constructed in the previous problem. You will need expressions for the four output signals Out3, Out2, Out1, Out0 in terms of the six input signals.

11. Suppose you wanted to build a 3-to-1 multiplexer. How many select bits would you need? Give the function table for a 3-to-1 multiplexer. Hint: you are entitled to signify some outputs as Undefined.

12. How many select bits are needed for an 8-to-1 multiplexer? What decoder component is used to implement an 8-to-1 multiplexer?
