History & Binary Representation

C. R. da Cunha∗1

1Instituto de Física, Universidade Federal do Rio Grande do Sul, RS 91501-970, Brazil.
∗[email protected]

August 30, 2017

Abstract: In this lesson we will review the history of electronics and the development of the first microprocessors.
Keywords: Microcontrollers; Microprocessors; Electronics; Digital.

1 History

Let us begin with the history of semiconductors and contemporary electronics. Although the field started in Germany, it flourished at Bell Labs in the United States.

1874 Diode effect discovered by Ferdinand Braun at the University of Berlin.
1906 Diode patented.
1925 Bell Labs is founded.
1925 MOS transistor is patented by Julius Lilienfeld.
1929 Walter Brattain joins Bell Labs.
1934 Another MOS patent, by Oskar Heil.
1936 Mervin Kelly becomes director of Bell Labs.
1936 William Shockley joins Bell Labs.
1945 John Bardeen joins Bell Labs.
1947 Bardeen and Brattain conceive the point contact transistor.
1948 Shockley invents the bipolar junction transistor.
1953 First transistor computer built at the University of Manchester by Dick Grimsdale.
1954 Transistors are fabricated in Si by Morris Tanenbaum at Bell Labs.
1956 Nobel Prize for the invention of the transistor.
1956 Shockley founds Shockley Semiconductor Laboratory in Mountain View, CA.
1957 Robert Noyce, Gordon Moore, Jean Hoerni and the rest of the "traitorous eight" found Fairchild Semiconductor.
1958 Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor independently develop the first integrated circuit.
1959 Planar transistor developed by Jean Hoerni at Fairchild Semiconductor.
1968 Robert Noyce and Gordon Moore found Intel Corporation.

Soon after Intel was founded, a new revolution started: making electronic circuits mimic the behavior of machines such as Charles Babbage's difference engine from 1822, Alan Turing's a-machine from 1936, and John Atanasoff and Clifford Berry's automatic electronic digital computer from 1942. Soon after, in 1946, John Brainerd at the University of Pennsylvania developed the Electronic Numerical Integrator and Computer (ENIAC). The first transistorized computer appeared in 1955 at the University of Manchester under the leadership of Tom Kilburn. It took another 15 years for the development of integrated processors.

Figure 1: DIP, PLCC and PGA packages.

Model            Year  Word/Mem. [bits]  Clock [MHz]  MIPS       Applications
Intel 4004       1971  4 BCD/12          0.74         0.056      Calculators
Intel 8008       1972  8/14              0.80         0.12       Calc./Robots
Intel 4040       1974  4 BCD/13          0.74         0.060      Calculators
TI TMS1000       1974  4/8               0.4          0.050      Calculators
Intel 8080       1974  8/16              3.125        0.29       Cash Registers
G.In. CP1600*    1974  8/16              5            0.2        Video Games
Zilog Z80        1976  8/16              8            0.40       Video Games
MOS Tech 6502    1976  8/16              8            3.4        Video Games
Intel MCS-48     1976  8/8               11           0.5        Controllers
                 1976  8/16              6.5          1          Controllers
Intel 8086       1978  16/16             10           0.75       PC-XT
Intel 8087       1979  16                10           50 kFLOPS  FPU
Motorola 68000   1979  32/24             7.67         1          Mac/Video Games
Intel 8051       1980  8/16              12           1          Controllers
Intel 80286      1982  16/20             25           1          PC-AT
Acorn ARM1/2     1985  32/26             6            8
Atmel AVR        1996  8/                8            8          Controllers

(*) Microchip's PIC was born from there.

2 Microprocessors vs. Microcontrollers

Microprocessors are units responsible for processing the flow of information in an electronic system, whereas microcontrollers are units that incorporate a processor, a memory and other subunits to perform intelligent operations. Simple microcontrollers can come in packages as simple as a DIP16, such as the Intel 4004. Most common microcontrollers are found in a DIP40 package, whereas modern microprocessors come in packages such as an 82-pin PLCC. Some of these packages are shown in Fig. 1.

3 Binary Representation

Let us begin our study of microcontrollers by reviewing binary representation and operations. A binary number has only two symbols: we will use 1 to represent the high state and 0 to represent the low state. Thus, we can have a binary number such as 011001. This can be converted to decimal by:

Σ_n b_n 2^n = 1 × 2^0 + 0 × 2^1 + 0 × 2^2 + 1 × 2^3 + 1 × 2^4 + 0 × 2^5
            = 1 + 8 + 16 = 25.                                            (1)

The reverse operation can be constructed by successively dividing a decimal number by 2 and taking the remainder. For example:

25/2 = 12 + r[1]
12/2 =  6 + r[0]
 6/2 =  3 + r[0]                                                          (2)
 3/2 =  1 + r[1]
 1/2 =  0 + r[1],

where r[x] is the remainder of the operation. Therefore, 25 can be represented as 11001.
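The remainder algorithm of Eq. (2) maps directly onto code. Here is a minimal C sketch (print_binary is a helper written for this lesson, not a library routine):

#include <stdio.h>

/* Print the binary digits of a non-negative integer by repeatedly
 * dividing by 2 and collecting the remainders, as in Eq. (2). The
 * remainders appear least significant bit first, so we store them
 * and print in reverse. */
void print_binary(unsigned n)
{
    char bits[sizeof(n) * 8];
    int i = 0;
    do {
        bits[i++] = '0' + (n % 2);   /* remainder = next bit */
        n /= 2;
    } while (n > 0);
    while (i > 0)
        putchar(bits[--i]);          /* most significant bit first */
    putchar('\n');
}

int main(void)
{
    print_binary(25);   /* prints 11001 */
    print_binary(9);    /* prints 1001  */
    return 0;
}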

Now, let's take the number 9 in binary:

9/2 = 4 + r[1]
4/2 = 2 + r[0]                                                            (3)
2/2 = 1 + r[0]
1/2 = 0 + r[1]

It needs 4 bits to be represented. We could have reached the same result by taking log2(9) ≈ 3.17 and rounding up to 4. Thus, we can represent decimal numbers in groups of four bits. This is called binary coded decimal, or BCD. For instance, 25 would be represented as 0010 0101. It requires more bits than plain binary, but it has the advantage of simplicity.
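In C, packing a two-digit decimal number into BCD takes one shift and one mask per digit; the helper name to_bcd below is our own:

#include <stdio.h>

/* Pack a two-digit decimal number into BCD: one 4-bit group (nibble)
 * per decimal digit. For example, 25 becomes 0010 0101 = 0x25. */
unsigned char to_bcd(unsigned n)   /* assumes 0 <= n <= 99 */
{
    return (unsigned char)(((n / 10) << 4) | (n % 10));
}

int main(void)
{
    printf("25 in BCD: 0x%02X\n", (unsigned)to_bcd(25));   /* prints 0x25 */
    return 0;
}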

3.1 Fixed Point Representation

How do we represent real numbers? There are two possibilities; the first is to use a fixed point representation. The trick here is to place a binary point. For example, let us again take the number 25 in binary (11001) and place a point so that we have 110.01. In this case we have:

110.01 = 1 × 2^2 + 1 × 2^1 + 0 × 2^0 + 0 × 2^-1 + 1 × 2^-2
       = 4 + 2 + 0.25                                                     (4)
       = 6.25.

In our case both 25 and 6.25 have exactly the same representation in binary. The only difference is the point.
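A short C sketch of this idea (fixed_to_double is our own helper): the stored bits never change, only the scale implied by the position of the point.

#include <stdio.h>

/* A fixed-point number is an ordinary integer with an implied binary
 * point: with f fractional bits, the stored integer raw represents
 * the value raw / 2^f. */
double fixed_to_double(int raw, int f)
{
    return (double)raw / (double)(1 << f);
}

int main(void)
{
    /* The bit pattern 11001 read as an integer is 25; read with two
       fractional bits (110.01) it is 6.25. Same bits, different point. */
    printf("%g\n", fixed_to_double(25, 0));   /* prints 25   */
    printf("%g\n", fixed_to_double(25, 2));   /* prints 6.25 */
    return 0;
}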

3.1.1 1's Complement

And negative numbers? One strategy is to use the 1's complement, obtained by simply negating every bit of the expression. For example, for our 25 we would have 011001, and -25 would be 100110 in a 6-bit notation. For a 3-bit representation we would have:

-3 = 100        0 = 000
-2 = 101        1 = 001
-1 = 110        2 = 010                                                   (5)
-0 = 111        3 = 011

This has some problems. For example, the 1's complement of 000000 (0) is 111111 (-0), so zero has two representations. Furthermore, arithmetic operations become problematic. Take for example 3 - 1 in a 3-bit representation. This would be 011 + 110, which produces 001 and a carry-out bit of 1. The carry has to be added back to the result (an end-around carry), and we obtain 010, which is the expected result.
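A minimal C sketch of 1's complement addition with the end-around carry, for 3-bit words (ones_add3 is a name made up for this lesson):

#include <stdio.h>

/* Add two 3-bit 1's complement numbers. If the plain binary sum
 * overflows the 3 bits, the carry-out is added back into the result
 * (the end-around carry described above). */
unsigned ones_add3(unsigned a, unsigned b)
{
    unsigned s = a + b;           /* plain binary sum       */
    s = (s & 7u) + (s >> 3);      /* add the carry-out back */
    return s & 7u;                /* keep only 3 bits       */
}

int main(void)
{
    /* 3 - 1 as 011 + 110: sum 1001, adding the carry back gives 010 */
    printf("%u\n", ones_add3(3u, 6u));   /* prints 2 */
    return 0;
}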

3.1.2 2's Complement

We can improve computations by using the 2's complement: take the 1's complement and then add one. For example, 3 in a 3-bit representation is 011, thus -3 in 1's complement is 100. In 2's complement we only have to add 1 and obtain 101. Thus, again in a 3-bit representation we would have:

-4 = 100        0 = 000
-3 = 101        1 = 001
-2 = 110        2 = 010                                                   (6)
-1 = 111        3 = 011

This way, we not only avoid the problem of -0 but also simplify the arithmetic. Let's see that example of 3 - 1 again, now in 2's complement. This would be 011 + 111 = 010 = 2, with a carry bit that can be completely discarded. In 2's complement, overflow can be detected when summing two numbers of the same sign produces a number with the opposite sign.
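That overflow rule is cheap to implement. A C sketch for 3-bit words, where bit 2 is the sign bit (add3_overflows is our own helper):

#include <stdio.h>

/* 2's complement overflow test for 3-bit words: adding two numbers
 * of the same sign overflows exactly when the 3-bit result has the
 * opposite sign. */
int add3_overflows(unsigned a, unsigned b)
{
    unsigned s  = (a + b) & 7u;                     /* 3-bit sum, carry discarded */
    unsigned sa = a & 4u, sb = b & 4u, ss = s & 4u; /* sign bits */
    return sa == sb && ss != sa;
}

int main(void)
{
    printf("%d\n", add3_overflows(3u, 7u));  /* 3 + (-1) = 2: prints 0    */
    printf("%d\n", add3_overflows(3u, 1u));  /* 3 + 1 = 100 = -4: prints 1 */
    return 0;
}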

Let us now see how this works for signed fixed point numbers:

-2.0 = 100      0.0 = 000
-1.5 = 101      0.5 = 001
-1.0 = 110      1.0 = 010                                                 (7)
-0.5 = 111      1.5 = 011

3.1.3 Arithmetic

For fixed point representation, addition and subtraction are exactly the same operations as we would perform for integer numbers. For example:

0.5 + 1.0 = 00.1 + 01.0 = 01.1 = 1.5                                      (8)
1.5 - 0.5 = 01.1 + 11.1 = 01.0 = 1.0
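The claim that fixed point addition is plain integer addition can be checked with a tiny C sketch (the variable names are ours); each raw integer carries one fractional bit, so a raw value n stands for n/2:

#include <stdio.h>

/* One fractional bit: the raw integer n represents the value n/2,
 * and fixed-point add/subtract are plain integer operations. */
int main(void)
{
    int half = 1, one = 2, one_half = 3;       /* 0.5, 1.0 and 1.5 */
    printf("%g\n", (half + one) / 2.0);        /* 0.5 + 1.0 = 1.5  */
    printf("%g\n", (one_half - half) / 2.0);   /* 1.5 - 0.5 = 1.0  */
    return 0;
}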

Now let's see how it works for multiplication. For simplicity, let us drop the radix point and take care of it later:

                  010
                × 001
                -----
                  010
                 0000
              + 00000
                -----
1.0 × 0.5 =     00010                                                     (9)

We must now account for the radix point. Since each multiplicand has one bit after the point, the result must have two bits after the point; therefore, the result is 000.10. However, our representation includes only one bit for the fractional part and two for the integer part. We must therefore either truncate or round the result to the appropriate number of bits. It is simple in this case to just truncate it to 00.1, which in decimal is 0.5, the result that we expected. Let us now use two bits each for the integer and the fractional part. Our correspondence table becomes:

 0.00 = 0000
 0.25 = 0001    -0.25 = 1111
 0.50 = 0010    -0.50 = 1110
 0.75 = 0011    -0.75 = 1101
 1.00 = 0100    -1.00 = 1100                                              (10)
 1.25 = 0101    -1.25 = 1011
 1.50 = 0110    -1.50 = 1010
 1.75 = 0111    -1.75 = 1001
                -2.00 = 1000

Let's multiply 0.25 × -2.00:

                 0001
               × 1000
              -------
                 0000
                00000
               000000
            + 0001000
              -------
              0001000                                                     (11)

Our multiplicands have two bits for the fractional part. Therefore, the result should have four bits for its fractional part, and we get 000.1000. Truncating it we get 00.10, which is +0.5: completely wrong, since the expected result is -0.5. This happened because we did not account for the sign. One way to compensate for it is by extending the sign bit:

                 1000
               × 0001
              -------
              1111000
                00000
               000000
            + 0000000
              -------
              1111000                                                     (12)

We must place the point four bits from the right-hand end, and the truncated result becomes 11.10, which corresponds to -0.5, as expected. Why did we do this 1-filling operation? Because we are multiplying 1 × a negative number: the partial product must be written with the full width of the result, and for a negative number in 2's complement this means filling the extra bits with 1's. This is known as sign extension. Let's calculate it now the other way around:

                 0001
               × 1000
              -------
                 0000
                00000
               000000
            + 1111000
              -------
              1111000                                                     (13)

Taking four bits for the radix point and truncating, we get 11.10, which is the expected result. Note, however, that we took the 2's complement of the multiplicand in the last partial product. This happened because we are multiplying one argument by the sign bit, which is like multiplying the argument by -1. Let's now take a look at another example, -0.75 × 0.75:

                 1101
               × 0011
             --------
             11111101
             11111010
               000000
            + 0000000
             --------
             11110111                                                     (14)

Placing the point we get 1111.0111. If we simply truncate the result we get 11.01, which corresponds to -0.75. The right result would be -0.5625, and the difference is a quantization error which cannot be avoided in this representation with a restricted number of bits. For the sake of practice, let's make the same calculation the other way around:

                 0011
               × 1101
             --------
                 0011
                00000
               001100
           + 11101000
             --------
             11110111                                                     (15)

which is exactly the same value.
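In code, the whole procedure (multiply the raw integers, then shift away the extra fractional bits) is short. A sketch in C for the 4-bit, 2-fractional-bit format of Eq. (10), with q22_mul as our own name; we assume the compiler's right shift of a negative number is arithmetic, as on virtually all modern targets:

#include <stdio.h>

/* Signed multiplication in the format above: the raw product carries
 * 4 fractional bits, so we shift right by 2 to return to the working
 * format, truncating toward minus infinity. */
int q22_mul(int a, int b)     /* a, b are raw values in -8..7 */
{
    return (a * b) >> 2;      /* drop 2 fractional bits: truncation */
}

int main(void)
{
    int a = -3, b = 3;        /* -0.75 and 0.75 */
    /* The true product is -0.5625, but truncation yields raw -3,
       i.e. -0.75: the same quantization error as in Eq. (14). */
    printf("%g\n", q22_mul(a, b) / 4.0);   /* prints -0.75 */
    return 0;
}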

3.2 Floating Point

With floating point we have a completely different story. In floating point, a number is represented as:

S E3E2E1E0 M5M4M3M2M1M0,                                                  (16)

where S is a bit for the sign, E is the exponent, and M is the mantissa. This can be interpreted as (-1)^S M × 2^E. Typically, in floating point representation, the mantissa is normalized. For example:

110.100 = 1.10100 × 2^2                                                   (17)
0.00101 = 1.01000 × 2^-3

Also, the exponent is stored with a bias of 127, according to the IEEE 754 standard. This means that 127 is added to the exponent before storage. For example, an exponent of 12 (01100) is stored as 139 (10001011), and -5 is stored as 122 (01111010).
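The biasing itself is a single addition; a trivial C sketch (biased_exponent is a made-up name):

#include <stdio.h>

/* IEEE 754 single precision stores exponents with a bias of 127:
 * stored = actual + 127. */
unsigned biased_exponent(int e)
{
    return (unsigned)(e + 127);
}

int main(void)
{
    printf("%u\n", biased_exponent(12));   /* prints 139 (10001011) */
    printf("%u\n", biased_exponent(-5));   /* prints 122 (01111010) */
    return 0;
}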

Let us now put it all together. A number in floating point notation is stored as SIGN EXPONENT MANTISSA. Also, for the mantissa, the leading 1 at the left-hand side of the radix point is dropped, since a normalized mantissa always starts with it. Let's look at some examples:

-10.0110 = -1.0011 × 2^1 = 1 10000000 00110000                            (18)
 0.00101 = +1.01 × 2^-3  = 0 01111100 01000000
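The same decomposition can be checked against a real machine float. Below is a C sketch using the first number of Eq. (18) (-10.011 in binary is -2.375 in decimal); note that true IEEE 754 single precision keeps 23 mantissa bits rather than the 8 shown above:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Split an IEEE 754 single-precision float into its fields: 1 sign
 * bit, 8 exponent bits (bias 127) and 23 stored mantissa bits, the
 * leading 1 being implicit. */
int main(void)
{
    float x = -2.375f;           /* -10.011 in binary = -1.0011 x 2^1 */
    uint32_t u;
    memcpy(&u, &x, sizeof u);    /* reinterpret the bits safely */

    unsigned sign = u >> 31;
    unsigned exp  = (u >> 23) & 0xFFu;   /* biased exponent */
    unsigned man  = u & 0x7FFFFFu;       /* stored mantissa */

    /* prints: sign=1 exp=128 (unbiased 1) mantissa=0x180000 */
    printf("sign=%u exp=%u (unbiased %d) mantissa=0x%06X\n",
           sign, exp, (int)exp - 127, man);
    return 0;
}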

3.2.1 Arithmetic

Addition and subtraction are quite simple. We first must bring both operands to the same exponent and then perform a standard sum of the mantissas, maintaining the exponent. Multiplication and division are also simple. For the mantissas we perform a standard multiplication; we sum the exponents and subtract the bias; and the sign bits are just added with the carry dropped, which amounts to an exclusive OR. Thus, for a floating point representation with 2 bits for the exponent, a bias of 2, and 3 bits for the mantissa, we have:

2.5 + 0.5 = 10.1 + 0.1
          = 1.01 × 2^1 + 1.00 × 2^-1
          = 0 11 010 + 0 01 000
          = 101.00 × 2^-1 + 1.00 × 2^-1                                   (19)
          = 110.00 × 2^-1
          = 1.10 × 2^1
          = 0 11 100
          = 3.0.

-2.0 × 0.25 = -10.0 × 0.01
            = -1.00 × 2^1 × 1.00 × 2^-2
            = 1 11 000 × 0 00 000                                         (20)
            = 1 01 000
            = -1.00 × 2^-1
            = -0.100
            = -0.5
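To close, here is the multiplication rule coded in C for this toy 6-bit format (toy_mul is our own construction; it ignores rounding and exponent overflow). Addition would follow Eq. (19) in the same spirit: align the exponents, add the mantissas, and renormalize.

#include <stdio.h>

/* Toy float of the example: 1 sign bit, 2 exponent bits (bias 2) and
 * 3 mantissa bits with an implicit leading 1, packed as S EE MMM in
 * the low 6 bits of an unsigned. Multiplication: XOR the signs, add
 * the stored exponents and subtract the bias, multiply the mantissas
 * and renormalize once if the product reaches 2.0. */
unsigned toy_mul(unsigned a, unsigned b)
{
    unsigned sign = ((a >> 5) ^ (b >> 5)) & 1u;              /* add signs, drop carry */
    int e = (int)((a >> 3) & 3u) + (int)((b >> 3) & 3u) - 2; /* subtract the bias */
    unsigned ma = 8u | (a & 7u);      /* restore implicit 1: 1.MMM */
    unsigned mb = 8u | (b & 7u);
    unsigned m  = ma * mb;            /* product has 6 fractional bits */
    if (m & 0x80u) {                  /* >= 2.0: renormalize */
        m >>= 1;
        e  += 1;
    }
    return (sign << 5) | ((unsigned)e << 3) | ((m >> 3) & 7u);
}

int main(void)
{
    unsigned neg_two = 070;   /* 1 11 000 = -2.0  */
    unsigned quarter = 000;   /* 0 00 000 =  0.25 */
    /* prints 050, i.e. 1 01 000 = -0.5, matching Eq. (20) */
    printf("0%02o\n", toy_mul(neg_two, quarter));
    return 0;
}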
