
STUDY MATERIALS ON COMPUTER ORGANIZATION
(As per the curriculum of Third Semester B.Sc. Electronics of Mahatma Gandhi University)
Compiled by Sam Kollannore U., Lecturer in Electronics, M.E.S. College, Marampally

1. INTRODUCTION

1.1 GENERATIONS OF COMPUTERS

The first electronic computer was designed and built at the University of Pennsylvania based on vacuum tube technology. Vacuum tubes were used to perform logic operations and to store data. Generations of computers have been divided into five according to the development of the technologies used to fabricate the processors, memories and I/O units.

I Generation   : 1945 – 55
II Generation  : 1955 – 65
III Generation : 1965 – 75
IV Generation  : 1975 – 89
V Generation   : 1989 to present

First Generation (ENIAC – Electronic Numerical Integrator And Computer, EDSAC – Electronic Delay Storage Automatic Calculator, EDVAC – Electronic Discrete Variable Automatic Computer, UNIVAC – Universal Automatic Computer, IBM 701)
- Vacuum tubes were used – basic arithmetic operations took a few milliseconds
- Bulky
- Consumed more power with limited performance
- High cost
- Used assembly language to prepare programs; these were translated into machine language for execution
- Mercury delay line memories and electrostatic memories were used
- Fixed point arithmetic was used
- 100- to 1000-fold increase in speed relative to the earlier mechanical and relay-based electromechanical technology
- Punched cards and paper tape were invented to feed programs and data and to get results
- Magnetic tape / magnetic drum were used as secondary memory
- Mainly used for scientific computations

Second Generation (IBM 7030, Digital Equipment Corporation’s PDP-1/5/8, Honeywell 400)
- Transistors were used in place of vacuum tubes (invented at AT&T Bell Labs in 1947)
- Small in size
- Lesser power consumption and better performance

- Lower cost
- Magnetic ferrite core memories were used as main memory, a random-access nonvolatile memory
- Magnetic tapes and magnetic disks were used as secondary memory
- Hardware for floating point arithmetic operations was developed
- Index registers were introduced, which increased the flexibility of programming
- High level languages such as FORTRAN, COBOL etc. were used – compilers were developed to translate the high-level program into a corresponding assembly language program, which was then translated into machine language
- Separate input-output processors were developed that could operate in parallel with the CPU
- Punched cards continued to be used during this period
- 1000-fold increase in speed
- Increasingly used in business, industry and commercial organizations for preparation of payroll, inventory control, marketing, production planning, research, scientific & engineering analysis and design etc.

Third Generation (System 360 Mainframe from IBM, PDP-8 Minicomputer from Digital Equipment Corporation)
- ICs were used
- Small Scale Integration and Medium Scale Integration technology were implemented in the CPU, I/O processors etc.
- Smaller size & better performance
- Comparatively lower cost
- Faster processors
- In the beginning magnetic core memories were used; later they were replaced by semiconductor memories (RAM & ROM)
- Microprogramming, parallel processing (pipelining, multiprocessor systems etc.), multiprogramming and multi-user systems (time shared systems) were introduced
- Operating system software was introduced (efficient sharing of a computer system by several user programs)
- Cache and virtual memories were introduced (cache memory makes the main memory appear faster than it really is; virtual memory makes it appear larger)
- High level languages were standardized by ANSI, e.g. ANSI FORTRAN, ANSI COBOL etc.
- Database management, multi-user applications and online systems like closed loop process control, airline reservation, interactive query systems, automatic industrial control etc. emerged during this period

Fourth Generation (Intel’s 8088, 80286, 80386, 80486…, Motorola’s 68000, 68030, 68040, Apple II, CRAY I/2/X/MP etc.)
- Microprocessors were introduced as CPU – complete processors and a large section of main memory could be implemented in a single chip
- Tens of thousands of transistors can be placed in a single chip (VLSI design implemented)
- CRT screens, laser & ink jet printers, scanners etc. were developed
- Semiconductor memory chips were used as the main memory
- Secondary memory was composed of hard disks – floppy disks & magnetic tapes were used for backup memory
- Parallelism, pipelining, cache memory and virtual memory were applied in a better way
- LANs and WANs were developed (where desktop workstations are interconnected)
- Introduced the C language and Unix OS
- Introduced Graphical User Interfaces

- Less power consumption
- High performance, lower cost and very compact
- Much increase in the speed of operation

Fifth Generation (IBM notebooks, Pentium PCs – Pentium 1/2/3/4/Dual core/Quad core…, SUN workstations, Origin 2000, PARAM 10000, IBM SP/2)
- Generation numbers beyond IV have been used occasionally to describe some current computer systems that have a dominant organizational or application-driven feature
- Computers based on artificial intelligence are available
- Computers use extensive parallel processing, multiple pipelines, multiple processors etc.
- Massively parallel machines and extensively distributed systems connected by communication networks fall in this category
- Introduced ULSI (Ultra Large Scale Integration) technology – Intel’s Pentium 4 contains 55 million transistors; millions of components on a single IC chip
- Superscalar processors, vector processors, SIMD processors, 32 bit microcontrollers and embedded processors, Digital Signal Processors (DSP) etc. have been developed
- Memory chips up to 1 GB, hard disk drives up to 180 GB and optical disks up to 27 GB are available (and the capacity is still increasing)
- Object oriented languages like JAVA, suitable for internet programming, have been developed
- Portable notebook computers introduced
- Storage technology advanced – large main memory and disk storage available
- Introduced the World Wide Web (and other applications like e-mail, e-Commerce, virtual libraries/classrooms, multimedia applications etc.)
- New operating systems developed – Windows 95/98/XP/… etc.
- Got hot pluggable features – which enable a failed component to be replaced with a new one without the need to shut down the system, allowing the uptime of the system to be very high
- The recent development in the application of the internet is Grid technology, which is still in its early stage
- Quantum mechanics and nanotechnology will radically change the face of computers

1.2 TYPES OF COMPUTERS

1. Super Computers
2. Main Frame Computers
3. Mini Computers
4. Micro Computers

1. Super Computers
E.g.:- CRAY Research (CRAY-1 & CRAY-2), Fujitsu (VP2000), Hitachi (S820), NEC (SX20), PARAM 10000 by C-DAC, Anupam by BARC, PACE Series by DRDO
- Most powerful computer system – needs a large room
- Minimum word length is 64 bits
- CPU speed: 100 MIPS
- Equivalent to 4000 computers
- High cost: $4 – 5 million
- Able to handle large amounts of data
- High power consumption
- High precision

- Large and fast memory (primary and secondary)
- Uses multiprocessing and parallel processing
- Supports multiprogramming

Applications
- In the petroleum industry – to analyze volumes of seismic data gathered during oil-seeking explorations, to identify areas where there is a possibility of getting petroleum products inside the earth
- In the aerospace industry – to simulate airflow around an aircraft at different speeds and altitudes; this helps in producing an effective aerodynamic design for superior performance
- In the automobile industry – to do crash simulation of the design of an automobile before it is released for manufacturing – for better automobile design
- In structural mechanics – to solve complex structural engineering problems to ensure safety, reliability and cost effectiveness; e.g. the designer of a large bridge has to ensure that the bridge remains stable under various atmospheric conditions, wind pressures and load conditions
- Meteorological centers use super computers for weather forecasting
- In biomedical research – atomic, nuclear and plasma analysis – to study the structure of viruses such as that causing AIDS
- For weapons research and development, sending rockets to space etc.

2. Main Frame Computers
E.g.:- IBM 3000 series, Burroughs B7900, Univac 1180, DEC
- Able to process large amounts of data at very high speed
- Supports multi-user facility
- Number of processors varies from one to six
- Cost: $3500 to many million dollars
- Kept in air conditioned rooms to keep them cool
- Supports many I/O and auxiliary storage devices
- Supports a network of terminals

Fig: Typical mainframe installation – user terminals in the Users' Room (entry restricted to authorized persons) connect through a front-end processor to the host processor in the System Room (entry restricted to system administrators & maintenance staff), which also houses the console, a back-end processor, magnetic tape drives, the magnetic tape library, a printer and a plotter.


Applications
- Used to process large amounts of data at very high speed, as in the case of banks / insurance companies / hospitals / railways…, which need online processing of a large number of transactions and require massive data storage and processing capabilities
- Used as controlling nodes in WANs (Wide Area Networks)
- Used to manage large centralized databases

3. Mini Computers
E.g.:- Digital Equipment's PDP 11/45 and VAX 11
- Perform better than micros
- Larger in size and costlier than micros
- Designed to support more than one user at a time
- Possess large storage capacities and operate at higher speed
- Support faster peripheral devices like high speed printers
- Can also communicate with main frames

Applications
- Used when the volume of processing is large, e.g. data processing for a medium sized organization
- Used to control and monitor production processes
- To analyze results of experiments in laboratories
- Used as servers in LANs (Local Area Networks)

4. Micro Computers
E.g.:- IBM PC, PS/2 and Apple Macintosh
- A microcomputer uses a microprocessor as its Central Processing Unit. Microcomputers are tiny computers that can vary in size from a single chip to the size of a desktop model
- They are designed to be used by only one person at a time
- Small to medium data storage capacities: 500 MB – 2 GB
- Common examples of microcomputers are the chips used in washing machines, TVs and cars, and notebook/personal computers

Applications
Used in the fields of desktop publishing, accounting, statistical analysis, graphic designing, investment analysis, project management, teaching, entertainment etc.

The different models of microcomputers are given below:
a) Personal Computers:- The name PC was given by IBM to its microcomputers. PCs are used for word processing, spreadsheet calculations, database management etc.
b) Note book or Lap Top:- Very small in terms of size – can be folded and carried around. The monitor is made up of an LCD, and the keyboard and system unit are contained in a single box. It has all the facilities of a PC (HDD, CDD, sound card, network card, modem etc.) and a special connection to connect to a desktop PC, which can be used to transfer data.
c) Palm Top:- A smaller model of the microcomputer – its size is similar to that of a calculator – pocket size. It has a processor and memory and a special connection to connect to a desktop PC, which can be used to transfer data.
d) Wrist PC:- The smallest type of microcomputer – can be worn on the wrist like a watch. It has a processor and memory and a wireless modem.


Type                   Speed       Applications
Super Computers        superfast   Weapon design, weather forecasting, aircraft design, biomedical applications
Main Frame Computers   fast        Scientific calculations, data processing for large businesses, teaching systems in universities, large multi-user systems
Mini Computers         medium      Manufacturing processes, hospital administration, teaching systems in colleges
Micro Computers        slow        Office automation, small business systems, control applications, teaching systems in schools

(Cost and speed both decrease from super computers down to micro computers.)

FUNCTIONAL UNITS OF A COMPUTER
A computer is a device that operates upon information or data. It is an electronic device which accepts input data, stores the data, does arithmetic and logic operations and outputs the information in the desired format. Even though the size, shape, performance, reliability and cost of computers have been changing over the years, the basic logical structure proposed by Von Neumann has not changed. The internal architecture of computers differs from one system model to another. A block diagram of the basic computer organization specifying the different functional units is shown below. Here the solid lines indicate the flow of instructions and data, and the dotted lines represent the control exercised by the control unit.

Fig: Functional units of a computer – the Input Unit passes program and data to Primary Storage (supported by Secondary Storage); the Central Processing Unit, consisting of the ALU and the Control Unit, operates on the stored data; and the Output Unit delivers the information (result).


INPUT UNIT
The input unit accepts coded information from human operators through electromechanical devices such as the keyboard, or from other computers over digital communication lines. The information received is either stored in the memory for later reference or immediately used by the arithmetic and logic circuitry to perform the desired operation. Finally the result is sent back to the outside world through the output unit. The keyboard is wired so that whenever a key is pressed, the corresponding letter or digit is automatically translated into its corresponding code and sent directly to either the memory or the processor. Other kinds of input devices: joystick, trackball, mouse (pointing devices), scanner etc.

MEMORY UNIT
The memory unit stores programs and data. There are two classes of memory devices: primary memory and secondary memory.

Primary memory (Main memory)
- Contains a large number of semiconductor cells, each capable of storing one bit of information
- These cells are processed in groups of fixed size called words, each containing ‘n’ bits. The main memory is organized such that the contents of one word can be stored or retrieved in one basic operation.
- For accessing data, a distinct address is associated with each word location.
- Data and programs must be in the primary memory for execution.
- The number of bits in each word is called the word length, and it may vary from 16 to 64 bits.
- Fast memory
- Expensive
- The time required to access one word is called the Memory Access Time – 10 ns to 100 ns. This time is fixed and independent of the location. E.g. Random Access Memory (RAM)

Secondary storage
- Used when large amounts of data have to be stored (also when frequent access is not necessary). E.g. hard disk, compact disk, floppy disk, magnetic tapes etc.
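The word-organized primary memory described above can be pictured in a short sketch (a minimal illustration only – the word length, memory size and helper names below are assumptions chosen for the example, not any particular machine):

# Minimal sketch of a word-organized main memory.
# Assumed parameters: 1024 words of 16 bits each.
WORD_LENGTH = 16            # bits per word (real machines: 16 to 64)
NUM_WORDS = 1024            # number of distinct word addresses

memory = [0] * NUM_WORDS    # one integer models one n-bit word

def read_word(address):
    """One basic operation: retrieve the contents of one word."""
    return memory[address]

def write_word(address, value):
    """One basic operation: store one word, masked to the word length."""
    memory[address] = value & ((1 << WORD_LENGTH) - 1)

write_word(100, 0x1234)     # store a word at address 100
print(read_word(100))       # -> 4660 (0x1234)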

PROCESSOR UNIT
- The heart of the computer system is the processor unit.
- It consists of the Arithmetic and Logic Unit and the Control Unit.

Arithmetic and Logic Unit (ALU)
- Most computer operations (arithmetic and logical) are executed in the ALU of the processor.
- For example: suppose two numbers (operands) located in the main memory are to be added. These operands are brought into the arithmetic unit, where the actual addition is carried out. The result is then stored in the memory or retained in the processor itself for immediate use.
- Note that all operands may not reside in the main memory. The processor contains a number of high speed storage elements called registers, which may be used for temporary storage of frequently used operands. Each register can store one word of data.
- Access to registers is 5 to 10 times faster than access to memory.

Control Unit

- The operations of all the units are coordinated by the control unit (it acts as the nerve centre that sends control signals to the other units).
- Timing signals that govern the I/O transfers are generated by the Control Unit.
- Synchronization signals are also generated by the Control Unit.
- By selecting, interpreting and executing the program instructions, the control unit is able to maintain order and direct the operation of the entire system.

The control unit and ALU are usually many times faster than the other devices connected to a computer system. This enables a single processor to control a number of external devices such as video terminals, magnetic tapes, disk memories, sensors, displays and mechanical controllers, which are much slower than the processor.

OUTPUT UNIT
- Counterpart of the input unit
- Output devices accept binary data from the computer, decode it into its original form and supply the result to the outside world. E.g. printers, video terminals (provide both input & output functions), graphic displays etc.

Basic Operational Concepts:-
- Activity in a computer is governed by instructions.
- To perform a given task, a set of instructions called a program must be in the main memory.
- Individual instructions are brought from the memory into the processor, which executes the specified operations.
- Data to be used as operands are also stored in the memory.

E.g. Add LOCA, R0

This instruction adds the operand at memory location LOCA to the operand in the processor register R0 and places the sum into register R0. Here the original contents of LOCA are preserved, whereas those of R0 are overwritten.
Steps:-
1. The instruction is fetched from the main memory into the processor (memory access operation)
2. The operand at LOCA is fetched (memory access operation)
3. The contents of LOCA are added to the contents of R0 (ALU operation)
4. Finally the result is stored in R0

Note: Data transfers between the main memory and the processor are started by sending the address of the memory location to be accessed to the memory unit and by issuing the appropriate control signals from the control unit.
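The four steps above can be traced in a short sketch (an illustration only – the memory contents, addresses and register values below are invented for the example):

# Sketch of executing "Add LOCA, R0" with illustrative values.
memory = {0: ("Add", "LOCA"),   # instruction word at address 0
          "LOCA": 25}           # operand at location LOCA
R0 = 10

instruction = memory[0]         # Step 1: instruction fetched into the processor
operand = memory["LOCA"]        # Step 2: operand at LOCA fetched (memory access)
R0 = operand + R0               # Steps 3-4: ALU adds, result stored in R0

print(R0)                       # -> 35
print(memory["LOCA"])           # -> 25 (contents of LOCA preserved)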

INTERNAL ORGANIZATION OF PROCESSOR

Besides the ALU and the control circuitry, the processor contains a number of registers used for temporary storage of data.
Instruction Register (IR) – holds the instruction that is currently being executed. Its output is available to the control circuits, which generate the timing signals that control the various processing elements involved in executing the instruction.

Program Counter (PC) – contains the address of the instruction currently being executed. During the execution of an instruction, the contents of the program counter are updated to hold the address of the next instruction to be executed, i.e. the PC points to the next instruction that is to be fetched from the memory.
n General Purpose Registers (R0 to Rn-1) – facilitate communication with the main memory. Access to data in these registers is much faster than to data stored in memory locations, because the registers are inside the processor. Most modern computers have 8 to 32 general purpose registers.

Memory Address Register (MAR) – holds the address of the location to or from which data are to be transferred.
Memory Data Register (MDR) – contains the data to be written into or read out of the addressed location.

Fig: Processor – the MAR and MDR connect the processor to the main memory; internally, the control unit, PC, IR, ALU and the n general purpose registers R0 to Rn-1 are interconnected.

Steps involved during operation:-

1. The program is stored in the main memory.
2. The PC is set to point to the first instruction of the program.
3. The contents of the PC are transferred to the MAR and a Read control signal is sent to the memory.
4. After the access time, the addressed word (in this case the first instruction) is read out of the memory and loaded into the MDR.
5. The contents of the MDR are transferred to the IR. Now the instruction is ready to be decoded and executed.
6. If the instruction involves an operation to be performed by the ALU, the required operands are fetched from the memory (or CPU registers). This is done by sending the operand address to the MAR and initiating a Read cycle.
7. The operands are read from the memory into the MDR and are transferred from the MDR to the ALU.
8. The ALU performs the desired operation.
9. If the result is to be stored in the memory, it is sent to the MDR.
10. The address of the location where the result is to be stored is sent to the MAR and a Write cycle is initiated.
11. At some point during the execution of the current instruction, the contents of the PC are incremented so that the PC points to the next instruction to be executed.
12. As soon as the execution of the current instruction is completed, a new instruction fetch may be started.
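These twelve steps form the classic fetch-decode-execute loop, which the sketch below walks through for a toy machine (the one-operation instruction format, register count and memory image are assumptions made purely for illustration, not any real instruction set):

# Toy fetch-decode-execute loop mirroring steps 1-12 above.
# Assumed instruction format: ("ADD", operand_address, register_number).
memory = {
    0: ("ADD", 100, 0),    # program: add the word at address 100 into R0
    1: ("ADD", 101, 0),    # then add the word at address 101 into R0
    100: 7,                # data
    101: 5,
}

R = [0] * 4                # general purpose registers R0..R3
PC = 0                     # steps 1-2: program in memory, PC at first instruction

for _ in range(2):         # execute the two-instruction program
    MAR = PC               # step 3: PC -> MAR, Read signal to memory
    MDR = memory[MAR]      # step 4: addressed word read out into MDR
    IR = MDR               # step 5: MDR -> IR, ready to decode
    op, addr, reg = IR     # decode the instruction fields
    if op == "ADD":
        MAR = addr         # step 6: operand address -> MAR, Read cycle
        MDR = memory[MAR]  # step 7: operand read into MDR, passed to ALU
        R[reg] += MDR      # step 8: ALU performs the addition
        # steps 9-10 (Write cycle via MDR/MAR) are skipped here because
        # the result is retained in a register rather than stored to memory
    PC += 1                # step 11: PC now points to the next instruction
                           # step 12: the next fetch may start

print(R[0])                # -> 12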

NOTE:- In addition to transferring data between the memory and the processor, the computer accepts data from input devices and sends data to output devices. For example, a sensing device in a computer-controlled industrial process may detect a dangerous condition. In that case the device raises an interrupt signal. An interrupt is a request from an I/O device for service by the processor. The processor provides the requested service by executing an appropriate interrupt-service routine. The internal state of the processor at such moments (the contents of the PC, the general registers, and some control information) is saved in memory locations. When the interrupt-service routine is completed, the state of the processor is restored so that the normal program may be continued.
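The save-and-restore behaviour described in this note can be sketched as follows (a conceptual illustration only – real machines do this in hardware and in the operating system, and the state and routine names here are invented for the example):

# Conceptual sketch of servicing an interrupt: save the processor
# state, run the interrupt-service routine, then restore the state.
processor_state = {"PC": 42, "R0": 7, "R1": 3}   # illustrative state

def service_interrupt(routine):
    global processor_state
    saved = dict(processor_state)   # save PC, registers, control info
    routine()                       # processor executes the ISR
    processor_state = saved         # restore state; normal program resumes

def dangerous_condition_isr():
    processor_state["R0"] = 0       # the ISR may freely use the registers
    print("dangerous condition handled")

service_interrupt(dangerous_condition_isr)
print(processor_state)              # -> {'PC': 42, 'R0': 7, 'R1': 3}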


HISTORY OF THE COMPUTER

Article written by:
Adebowale Onifade
Electrical/Electronic Engineering Department
University of Ibadan
Nigeria
Region 8

HISTORY OF THE COMPUTER

ABSTRACT

This paper takes a keen look at the history of computer technology with a view to encouraging computer or electrical electronic engineering students to embrace and learn the history of their profession and its technologies. Reedy (1984) quoted Aldous Huxley thus:

“that men do not learn very much from the lessons of history is the most important of all the lessons that history has to teach.” This paper therefore emphasizes the need to study the history of the computer, because a proper study and understanding of the evolution of computers will undoubtedly help to greatly improve computer technologies.

INTRODUCTION

The word ‘computer’ is an old word that has changed its meaning several times in the last few centuries. Originating from the Latin, by the mid-17th century it meant ‘someone who computes’. The American Heritage Dictionary (1980) gives its first computer definition as “a person who computes.” The computer remained associated with human activity until about the middle of the 20th century, when it became applied to “a programmable electronic device that can store, retrieve, and process data” as Webster’s Dictionary (1980) defines it. Today, the word computer refers to computing devices, whether or not they are electronic, programmable, or capable of ‘storing and retrieving’ data.

The Techencyclopedia (2003) defines computer as “a general purpose machine that processes data according to a set of instructions that are stored internally either temporarily or permanently.” The computer and all equipment attached to it are called hardware. The instructions that tell it what to do are called “software” or “program”. A program is a detailed set of humanly prepared instructions that directs the computer to function in specific ways.

Furthermore, the Encyclopedia Britannica (2003) defines computers as “the contribution of major individuals, machines, and ideas to the development of computing.” This implies that the computer is a system. A system is a group of computer components that work together as a unit to perform a common objective.

The term ‘history’ means past events. The Encyclopedia Britannica (2003) defines it as “the discipline that studies the chronological record of events (as affecting a nation or people), based on a critical examination of source materials and usually presenting an explanation of their causes.” The Oxford Advanced Learner’s Dictionary (1995) simply defines history as “the study of past events.…” In discussing the history of computers, the chronological record of events – particularly in the area of technological development – will be explained. The history of the computer in the area of technological development is considered because it is usually the technological advancement in computers that brings about economic and social advancement. A faster computer brings about faster operation, and that in turn causes economic development. This paper will discuss classes of computers, computer evolution and highlight some roles played by individuals in these developments.

CLASSIFICATION OF COMPUTERS

Computing machines can be classified in many ways and these classifications depend on their functions and definitions. They can be classified by the technology from which they were constructed, the uses to which they are put, their capacity or size, the era in which they were used, their basic operating principle and by the kinds of data they process. Some of these classification techniques are discussed as follows:

Classification by Technology

This classification is a historical one and it is based on what performs the computer operation, or the technology behind the computing skill.

I FLESH: Before the advent of any kind of computing device at all, human beings performed computation by themselves. This involved the use of fingers, toes and any other part of the body.

II WOOD: Wood became a computing device when it was first used to design the abacus. Shickard in 1621 and Polini in 1709 were both instrumental to this development.

III METALS: Metals were used in the early machines of Pascal, Thomas, and the production versions from firms such as Brundsviga, Monroe, etc.

IV ELECTROMECHANICAL DEVICES: As differential analyzers, these were present in the early machines of Zuse, Aiken, Stibitz and many others.

V ELECTRONIC ELEMENTS: These were used in the Colossus, ABC, ENIAC, and the stored program computers.

This classification really does not apply to developments in the last sixty years because several kinds of new electro technological devices have been used thereafter.

Classification by Capacity

Computers can be classified according to their capacity. The term ‘capacity’ refers to the volume of work or the data processing capability a computer can handle. Their performance is determined by the amount of data that can be stored in memory, the speed of internal operation of the computer, the number and type of peripheral devices, and the amount and type of software available for use with the computer.

The capacity of early generation computers was determined by their physical size - the larger the size, the greater the volume. Recent computer technology however is tending to create smaller machines, making it possible to package equivalent speed and capacity in a smaller format. Computer capacity is currently measured by the number of applications that it can run rather than by the volume of data it can process. This classification is therefore done as follows:

I MICROCOMPUTERS

The Microcomputer has the lowest level capacity. The machine has memories that are generally made of semiconductors fabricated on silicon chips. Large-scale production of silicon chips began in 1971 and this has been of great use in the production of microcomputers. The microcomputer is a digital computer system that is controlled by a stored program that uses a microprocessor, a programmable read-only memory (ROM) and a random-access memory (RAM). The ROM defines the instructions to be executed by the computer while RAM is the functional equivalent of computer memory.

The Apple IIe, the Radio Shack TRS-80, and the Genie III are examples of microcomputers and are essentially fourth generation devices. Microcomputers have from 4k to 64k storage locations and are capable of handling small, single-business applications such as sales analysis, inventory, billing and payroll.

II MINICOMPUTERS

In the 1960s, the growing demand for a smaller stand-alone machine brought about the manufacture of the minicomputer, to handle tasks that large computers could not perform economically. Minicomputer systems provide faster operating speeds and larger storage capacities than microcomputer systems. Operating systems developed for minicomputer systems generally support both multiprogramming and virtual storage. This means that many programs can be run concurrently. This type of computer system is very flexible and can be expanded to meet the needs of users.

Minicomputers usually have from 8k to 256k memory storage locations and a relatively established … The PDP-8, the IBM System 3 and the Honeywell 200 and 1200 computers are typical examples of minicomputers.

III MEDIUM-SIZE COMPUTERS

Medium-size computer systems provide faster operating speeds and larger storage capacities than minicomputer systems. They can support a large number of high-speed input/output devices, and several disk drives can be used to provide online access to large data files as required for direct access processing; their operating systems also support both multiprogramming and virtual storage. This allows a variety of programs to be run concurrently. A medium-size computer can support a management information system and can therefore serve the needs of a large bank, insurance company or university. They usually have memory sizes ranging from 32k to 512k. The IBM System 370, the Burroughs 3500 System and the NCR Century 200 system are examples of medium-size computers.

IV LARGE COMPUTERS

Large computers are next to supercomputers and have bigger capacity than medium-size computers. They usually contain full control systems with minimal operator intervention. Large computer systems range from single-processing configurations to nationwide computer-based networks involving general large computers. Large computers have storage capacities from 512k to 8192k, and their internal operating speeds are measured in terms of nanoseconds, as compared to small computers where speed is measured in terms of microseconds. Expandability to 8 or even 16 million characters is possible with some of these systems. Such characteristics permit many data processing jobs to be accomplished concurrently.

Large computers are usually used in government agencies, large corporations and computer services organizations. They are used in complex modeling or simulation, business operations, product testing, design and engineering work, and in the development of space technology. Large computers can serve as systems to which many smaller computers can be connected to form a communication network.

V SUPERCOMPUTERS

The supercomputers are the biggest and fastest machines today, and they are used when billions or even trillions of calculations are required. These machines are applied in nuclear weapon development, accurate weather forecasting and as host processors for local computers and time-sharing networks. Supercomputers have capabilities far beyond even the traditional large-scale systems. Their speed ranges from 100 million instructions per second to well over three billion. Because of their size, supercomputers sacrifice a certain amount of flexibility. They are therefore not ideal for providing a variety of user services. For this reason, supercomputers may need the assistance of a medium-size general purpose machine (usually called a front-end processor) to handle minor programs or perform slower speed or smaller volume operations.

Classification by their basic operating principle

Using this classification technique, computers can be divided into Analog, Digital and Hybrid systems. They are explained as follows:

I ANALOG COMPUTERS

Analog computers were well known in the 1940s although they are now uncommon. In such machines, numbers to be used in some calculation were represented by physical quantities - such as electrical voltages. According to the Penguin Dictionary of Computers (1970), “an analog computer must be able to accept inputs which vary with respect to time and directly apply these inputs to various devices within the computer which performs the computing operations of additions, subtraction, multiplication, division, integration and function generation….” The computing units of analog computers respond immediately to the changes which they detect in the input variables. Analog computers excel in solving differential equations and are faster than digital computers.

II DIGITAL COMPUTERS

Most computers today are digital. They represent information discretely and use a binary (two-step) system that represents each piece of information as a series of zeroes and ones. The Pocket Webster School & Office Dictionary (1990) simply defines digital computers as “a computer using numbers in calculating.” Digital computers manipulate most data more easily than analog computers. They are designed to process data in numerical form and their circuits perform directly the mathematical operations of addition, subtraction, multiplication, and division. Because digital information is discrete, it can be copied exactly, but it is difficult to make exact copies of analog information.

III HYBRID COMPUTERS

These are machines that can work as both analog and digital computers.

THE COMPUTER EVOLUTION

The computer evolution is indeed an interesting topic that has been explained in some different ways over the years by many authors. According to The Computational Science Education Project, US, the computer has evolved through the following stages:

The Mechanical Era (1623-1945)

Trying to use machines to solve mathematical problems can be traced to the early 17th century. Wilhelm Schickhard, Blaise Pascal, and Gottfried Leibnitz were among the mathematicians who designed and implemented calculators that were capable of addition, subtraction, multiplication, and division. The first multi-purpose or programmable computing device was probably Charles Babbage's Difference Engine, which was begun in 1823 but never completed. In 1842, Babbage designed a more ambitious machine, called the Analytical Engine, but unfortunately it also was only partially completed. Babbage, together with Ada Lovelace, recognized several important programming techniques, including conditional branches, iterative loops and index variables. Babbage designed the machine which is arguably the first to be used in computational science. In 1833, George Scheutz and his son, Edvard, began work on a smaller version of the difference engine, and by 1853 they had constructed a machine that could process 15-digit numbers and calculate fourth-order differences. The US Census Bureau was one of the first organizations to use mechanical computers, which used punch-card equipment designed by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith's company merged with a competitor to found the corporation which in 1924 became International Business Machines (IBM).

First Generation Electronic Computers (1937-1953)

These devices used electronic switches, in the form of vacuum tubes, instead of electromechanical relays. The earliest attempt to build an electronic computer was by J. V. Atanasoff, a professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to build a machine that would help his graduate students solve systems of partial differential equations. By 1941 he and graduate student Clifford Berry had succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. However, the machine was not programmable, and was more of an electronic calculator.

A second early electronic machine was Colossus, designed by Alan Turing for the British military in 1943. The first general purpose programmable electronic computer was the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. Research work began in 1943, funded by the Army Ordnance Department, which needed a way to compute ballistics during World War II. The machine was completed in 1945 and it was used extensively for calculations during the design of the hydrogen bomb. Eckert, Mauchly, and John von Neumann, a consultant to the ENIAC project, began work on a new machine before ENIAC was finished. The main contribution of EDVAC, their new project, was the notion of a stored program. ENIAC was controlled by a set of external switches and dials; to change the program required physically altering the settings on these controls. EDVAC was able to run orders of magnitude faster than ENIAC, and by storing instructions in the same medium as data, designers could concentrate on improving the internal structure of the machine without worrying about matching it to the speed of an external control. Eckert and Mauchly later designed what was arguably the first commercially successful computer, the UNIVAC, in 1952. Software technology during this period was very primitive.

Second Generation (1954-1962)

The second generation witnessed several important developments at all levels of computer system design, ranging from the technology used to build the basic circuits to the programming languages used to write scientific applications. Electronic switches in this era were based on discrete diode and transistor technology with a switching time of approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Index registers were designed for controlling loops, and floating point units for calculations based on real numbers.

A number of high level programming languages were introduced, and these include FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704 and its successors, the 709 and 7094. In the 1950s the first two supercomputers were designed specifically for numeric processing in scientific applications.

Third Generation (1963-1972)

Technology changes in this generation include the use of integrated circuits, or ICs (semiconductor devices with several transistors built into one physical component), semiconductor memories, microprogramming as a technique for efficiently designing complex processors, and the introduction of operating systems and time-sharing. The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit (or ‘chip’), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices per chip. Multilayered printed circuits were developed and core memory was replaced by faster, solid state memories.

In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use functional parallelism. By using 10 separate functional units that could operate simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a computation rate of one million floating point operations per second (Mflops). Five years later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its pipelined functional units, is considered to be the first vector processor and was capable of executing at ten Mflops. The IBM 360/91, released during the same period, was roughly twice as fast as the CDC 6600.

Early in this third generation, Cambridge University and the University of London cooperated in the development of CPL (Combined Programming Language, 1963). CPL was, according to its authors, an attempt to capture only the important features of the complicated and sophisticated ALGOL. However, like ALGOL, CPL was large, with many features that were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge developed a subset of CPL called BCPL (Basic Computer Programming Language, 1967). In 1970 Ken Thompson of Bell Labs developed yet another simplification of CPL called simply B, in connection with an early implementation of the UNIX operating system.

Fourth Generation (1972-1984)

Large scale integration (LSI – 1000 devices per chip) and very large scale integration (VLSI – 100,000 devices per chip) were used in the construction of the fourth generation computers. Whole processors could now fit onto a single chip, and for simple systems the entire computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays dropped to about 1 ns per gate. Core memories were replaced by semiconductor memories. Machines with large main memories, like the CRAY 2, began to replace the older high speed vector processors such as the CRAY 1, CRAY X-MP and CYBER.

In 1972, Dennis Ritchie developed the C language, from the design of CPL and Thompson's B. Thompson and Ritchie then used C to write a version of UNIX for the DEC PDP-11. Other developments in software include very high level languages such as FP (functional programming) and Prolog (programming in logic).

IBM worked with Microsoft during the 1980s to start what we can really call PC (Personal Computer) life today. The IBM PC was introduced in October 1981, and it worked with the operating system (software) called Microsoft Disk Operating System (MS-DOS) 1.0. Development of MS-DOS began in October 1980 when IBM began searching the market for an operating system for the then proposed IBM PC; major contributors were Bill Gates, Paul Allen and Tim Paterson. In 1983, Microsoft Windows was announced, and this has witnessed several improvements and revisions over the last twenty years.


Fifth Generation (1984-1990)

This generation brought about the introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at a great pace, and by 1990 it was possible to build chips with a million components; semiconductor memories became standard on all computers. Computer networks and single-user workstations also became popular.

Parallel processing started in this generation. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module, though each processor had its own local cache. The machine was designed to compete with the DEC VAX-780 as a general purpose Unix system, with each processor working on a different user's job. However, Sequent provided a library of subroutines that would allow programmers to write programs that would use more than one processor, and the machine was widely used to explore parallel algorithms and programming techniques. The Intel iPSC-1, also known as ‘the hypercube’, connected each processor to its own memory and used a network interface to connect the processors. This distributed memory architecture meant memory was no longer a problem, and large systems with more processors (as many as 128) could be built. Also introduced was a machine, known as a data-parallel or SIMD machine, in which there were several thousand very simple processors working under the direction of a single control unit. Both wide area network (WAN) and local area network (LAN) technology developed rapidly.

Sixth Generation (1990 - )

Most of the developments in computer systems since 1990 have not been fundamental changes but have been gradual improvements over established systems. This generation brought about gains in parallel computing in both the hardware and in improved understanding of how to develop algorithms to exploit parallel architectures.


Workstation technology continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel processing. Wide area networks, network bandwidth, speed of operation and networking capabilities have kept developing tremendously. Personal computers (PCs) now operate with gigahertz processors, multi-Gigabyte disks, hundreds of Mbytes of RAM, colour printers, high-resolution graphic monitors, stereo sound cards and graphical user interfaces. Thousands of software packages (operating systems and application software) exist today, and Microsoft Inc. has been a major contributor. Microsoft is said to be one of the biggest companies ever, and its chairman, Bill Gates, has been rated as the richest man for several years.

Finally, this generation has brought about microcontroller technology. Microcontrollers are ‘embedded’ inside some other devices (often consumer products) so that they can control the features or actions of the product. They work as small computers inside devices and now serve as essential components in most machines.

THE ACTIVE PLAYERS

Hundreds of people from different parts of the world played prominent roles in the history of the computer. This section highlights some of those roles as played in several parts of the world.

The American Participation

America indeed played big roles in the history of the computer. John Atanasoff invented the Atanasoff-Berry Computer (ABC), which introduced electronic binary logic in the late 1930s. Atanasoff and Berry completed the computer by 1942, but it was later dismantled.

Howard Aiken is regarded as one of the pioneers who introduced the computer age, and he completed the design of four calculators (or computers). Aiken started what is known as computer science today and was one of the first explorers of the application of the new machines to business purposes and to machine translation of foreign languages. His first machine was known as the Mark I (or the Harvard Mark I), originally named the IBM ASCC, and this was the first machine that could solve complicated mathematical problems by being programmed to execute a series of controlled operations in a specific sequence.

The ENIAC (Electronic Numerical Integrator and Computer) was displayed to the public on February 14, 1946, at the Moore School of Electrical Engineering at the University of Pennsylvania, and about fifty years after, a team of students and faculty started the reconstruction of the ENIAC; this was done using state-of-the-art solid-state CMOS technology.

The German Participation

The DEHOMAG D11 tabulator was invented in Germany. It had a decisive influence on the diffusion of data processing in Germany. The invention took place between 1926 and 1931.

Konrad Zuse is popularly recognized in Germany as the father of the computer, and his Z1, a programmable automaton built from 1936 to 1938, is said to be the world’s ‘first programmable calculating machine’. He built the Z4, a relay computer with a mechanical memory of unique design, during the war years in Berlin. Eduard Stiefel, a professor at the Swiss Federal Institute of Technology (ETH), who was looking for a computer suitable for numerical analysis, discovered the machine in Bavaria in 1949. Around 1938, Konrad Zuse began work on the creation of the Plankalkül, while working on the Z3. He wanted to build a Planfertigungsgerät, and made some progress in this direction in 1943; in 1944, he prepared a draft of the Plankalkül, which was meant to become a doctoral dissertation some day. The Plankalkül is the first fully-fledged algorithmic programming language. Years later, a small group under the direction of Dr. Heinz Billing constructed four different computers, the G1 (1952), the G2 (1955), the G1a (1958) and the G3 (1961), at the Max Planck Institute in Göttingen.

Lastly, during World War II, a young German engineer, Helmut Hoelzer, studied the application of electronic analog circuits for the guidance and control system of liquid-propellant rockets, developed a special purpose analog computer, the ‘Mischgerät’, and integrated it into the rocket. The development of the fully electronic, general purpose analog computer was a spin-off of this work. It was used to simulate ballistic paths by solving the equations of motion.

The British Participation

The Colossus was designed and constructed at the Post Office Research Laboratories at Dollis Hill in North London in 1943 to help Bletchley Park in decoding intercepted German telegraphic messages. Colossus was the world’s first large electronic valve programmable logic calculator; ten of them were built and were operational at Bletchley Park, home of Allied World War II code-breaking.

Between 1948 and 1951, four related computers were designed and constructed in Manchester, and each machine has its innovative peculiarity. The SSEM (June 1948) was the first such machine to work. The Manchester Mark 1 (Intermediate Version, April 1949) was the first full-sized computer available for use. The completed Manchester Mark 1 (October 1949), with a fast random access magnetic drum, was the first computer with a classic two-level store. The Ferranti Mark 1 (February 1951) was the first production computer delivered by a manufacturer. The University of Manchester Small-Scale Experimental Machine, the ‘Baby’, first ran a stored program on June 21, 1948, thus claiming to be the first operational general purpose computer. The Atlas computer was constructed in the Department of Computer Science at the University of Manchester. After its completion in December 1962, it was regarded as the most powerful computer in the world, and it had many innovative design features, of which the most important were the implementation of virtual addressing and the one-level store.

The Japanese Participation

In the second half of the 1950s, many experimental computers were designed and produced by Japanese national laboratories, universities and private companies. In those days, many experiments were carried out using various electronic and mechanical techniques and materials such as relays, vacuum tubes, parametrons, transistors, mercury delay lines, cathode ray tubes, magnetic cores and magnetic drums. These provided a great foundation for the development of electronics in Japan. Between 1955 and 1959, computers like the ETL-Mark-2, JUJIC, MUSASINO I, ETL-Mark-4, PC-1, ETL-Mark-4a, TAC, Handai-Computer and K-1 were built.

The African Participation

Africa evidently did not play any major roles in the recorded history of the computer, but indeed it has played big roles in the last few decades. Particularly worthy of mention is the contribution of a Nigerian who made a mark just before the end of the twentieth century. Former American President Bill Clinton (2000) said: “One of the great minds of the Information Age is a Nigerian American named Philip Emeagwali. He had to leave school because his parents couldn't pay the fees. He lived in a refugee camp during your civil war. He won a scholarship to university and went on to invent a formula that lets computers make 3.1 billion calculations per second….”

Philip Emeagwali, an Internet pioneer, was born in 1954 in Nigeria, Africa. In 1989, he invented the formula that used 65,000 separate computer processors to perform 3.1 billion calculations per second. Emeagwali is regarded as one of the fathers of the internet because he invented an international network which is similar to, but predates, the Internet. He also discovered mathematical equations that enable the petroleum industry to recover more oil. Emeagwali won the 1989 Gordon Bell Prize, computation's Nobel prize, for inventing a formula that lets computers perform the fastest computations, a work that led to the reinvention of supercomputers.

SUMMARY, CONCLUSION AND RECOMMENDATION

Researching, studying and writing on the ‘History of the Computer’ has indeed been a fulfilling but challenging task, and has brought about a greater appreciation of the work done by scientists of old, of the great developmental research carried out by more recent scientists, and of course of the impact all such innovations have made on the development of the human race. It has generated greater awareness of the need to study the history of the computer as a means of knowing how to develop or improve on existing computer technology.

It is therefore strongly recommended that science and engineering students should develop greater interest in the history of their profession. The saying that ‘there is nothing absolutely new under the sun’ is indeed real, because the same world resources but fresh ideas have been used over the years to improve on existing technologies.

Finally, it is hoped that this paper is found suitable as a good summary of ‘the technological history and development of the computer’, and challenging to upcoming scientists and engineers to study the history of their profession.

LIST OF REFERENCES

Alacritude, LLC (2003). http://www.infoplease.com
Allison, Joanne (1997). http://www.computer50.org/mark1/ana-dig.html
Brain, Marshall (2003). How Microcontrollers Work. http://electronics.howstuffworks.com/microcontroller1.htm
Brown, Donita (2003). Reinventing Supercomputers. http://emeagwali.com/education/inventions-discoveries/
Computational Science Education Project (1996). http://csep1.phy.ornl.gov/csep.html
Ceruzzi, Paul E. (2000). A History of Modern Computing. London: The MIT Press.
Crowther, Jonathan, ed. (1995). Oxford Advanced Learner’s Dictionary of Current English. Oxford: Oxford University Press.
Dick, Maybach (2001). BCUG – Brookdale Computer Users Group. http://www.bcug.com/
Dick, Pountain (2003). Penguin Dictionary of Computing. Australia: Penguin.
Encyclopedia Britannica (2003). http://www.britannica.com
Joelmreyes (2002). http://comp100.joelmreyes.cjb.net
Leven Antov (1996). History of the MS-DOS. California: Maxframe Corporation.
Moreau, R. (1984). The Computer Comes of Age – The People, the Hardware, and the Software. London: The MIT Press.
Morris, William, ed. (1980). The American Heritage Dictionary. Boston: Houghton Mifflin Company.
Reedy, Jerry, ed. (1984). Notable Quotables. Chicago: World Book Encyclopedia, Inc.
Rojas, Raul and Ulf Hashagen, eds. (2000). The First Computers – History and Architecture. London: The MIT Press.
Steve Ditlea, ed. (1984). Digital Deli. New York: Workman Publishing Company, Inc.
Techencyclopedia (2003). http://www.techweb.com/encyclopedia. The Computer Language Company.
The World Book Encyclopedia (1982), C–Ch Volume 3. World Book-Childcraft International, Inc.
Weiner, Mike and others, eds. (1990). The Pocket Word Finder Thesaurus. New York: Pocket Books.
Layman, Thomas, ed. (1990). The Pocket Webster School & Office Dictionary. New York: Pocket Books.

History of Computers

From the earliest times, the need to carry out calculations has been growing. The first steps involved the development of counting and calculation aids such as the counting board and the abacus.

Pascal (1623-62) was the son of a tax collector and a mathematical genius. He designed the first mechanical calculator (Pascaline) based on gears. It performed addition and subtraction.

Leibnitz (1646-1716) was a German mathematician and built the first calculator to do multiplication and division. It was not reliable due to the limited accuracy of contemporary parts.

Babbage (1792-1872) was a British inventor who designed an ‘analytical engine’ incorporating the ideas of a memory and card input/output for data and instructions. Again, the current technology did not permit the complete construction of the machine.

Babbage is largely remembered because of the work of Augusta Ada (Countess of Lovelace) who was probably the first computer programmer.

Burroughs (1855-98) introduced the first commercially successful mechanical adding machine, of which a million had been sold by 1926.

Hollerith developed an electromechanical punched-card tabulator to tabulate the data for the 1890 U.S. census. Data was entered on punched cards and could be sorted according to the census requirements. The machine was powered by electricity. He formed the Tabulating Machine Company, which became International Business Machines (IBM). IBM is still one of the largest computer companies in the world.

Aiken (1900-73) a Harvard professor with the backing of IBM built the Harvard Mark I computer (51ft long) in 1944. It was based on relays (operate in milliseconds) as opposed to the use of gears. It required 3 seconds for a multiplication.

Eckert and Mauchly designed and built the ENIAC in 1946 for military computations. It used vacuum tubes (valves), which were completely electronic (operating in microseconds), as opposed to the relay, which was electromechanical. It weighed 30 tons, used 18000 valves, and required 140 kilowatts of power. It was 1000 times faster than the Mark I, multiplying in 3 milliseconds. ENIAC was a decimal machine and could not be programmed without altering its setup manually.

Atanasoff had built a specialised computer in 1941 and was visited by Mauchly before the construction of the ENIAC. A later patent case, decided in 1973, ruled that key ideas in the ENIAC derived from Atanasoff's work.

Von Neumann was a scientific genius and was a consultant on the ENIAC project. He formulated plans with Mauchly and Eckert for a new computer (EDVAC), which was to store programs as well as data.

This is called the stored program concept, and von Neumann is credited with it. Almost all modern computers are based on this idea and are referred to as von Neumann machines.

He also concluded that the binary system was more suitable for computers, since switches have only two values. He went on to design his own computer at Princeton, which was a general-purpose machine.

Alan Turing was a British mathematician who also made significant contributions to the early development of computing, especially to the theory of computation. He developed an abstract theoretical model of a computer, called a Turing machine, which is used to capture the notion of computability, i.e. what problems can and what problems cannot be computed. Not all problems can be solved on a computer.

Note: A Turing machine is an abstract model and not a physical computer.

From the 1950s, the computer age took off in full force. The years since then have been divided into periods, or generations, based on the technology used.

First Generation Computers (1951-58): Vacuum Tubes

These machines were used in business for accounting and payroll applications. Valves were unreliable components, generating a lot of heat (still a problem in computers). They had very limited memory capacity. Magnetic drums were developed to store information, and magnetic tapes were developed for secondary storage.

They were initially programmed in machine language (binary). A major breakthrough was the development of assemblers and assembly language.
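
To give a flavour of that breakthrough, here is a toy assembler in Python (purely illustrative: the three-instruction machine and its opcodes are invented for this sketch, not taken from any real first-generation computer). It shows the essential job of an assembler: translating human-readable mnemonics into binary machine code.

    # Toy assembler: translate mnemonics into binary "machine code".
    # The instruction set is invented purely for illustration.
    OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

    def assemble(program):
        """Turn 'MNEMONIC operand' lines into 8-bit binary words."""
        words = []
        for line in program:
            mnemonic, operand = line.split()
            words.append(OPCODES[mnemonic] + format(int(operand), "04b"))
        return words

    source = ["LOAD 5", "ADD 3", "STORE 9"]
    for asm, binary in zip(source, assemble(source)):
        print(f"{asm:10s} -> {binary}")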

Second Generation (1959-64): Transistors

The development of the transistor revolutionised the construction of computers. Invented at Bell Labs in 1947, transistors were much smaller, more rugged, cheaper to make and far more reliable than valves. Core memory was introduced and disk storage was also used. The hardware became smaller and more reliable, a trend that still continues.

Another major feature of the second generation was the use of high-level programming languages such as Fortran and Cobol. These revolutionised the development of software for computers. The computer industry experienced explosive growth.

Third Generation (1965-71): Integrated Circuits (ICs)

ICs were again smaller, cheaper, faster and more reliable than transistors. Speeds went from the microsecond to the nanosecond (billionth of a second) to the picosecond (trillionth of a second) range. ICs were used for main memory despite the disadvantage of being volatile. Minicomputers were developed at this time.

Terminals replaced punched cards for data entry and disk packs became popular for secondary storage.

IBM introduced the idea of a compatible family of computers, the 360 family, easing the problem of upgrading to a more powerful machine.

Substantial operating systems were developed to manage and share the computing resources, and time-sharing operating systems appeared. These greatly improved the efficiency of computers. Computers had by now pervaded most areas of business and administration.

The number of transistors that can be fabricated on a chip is referred to as the scale of integration (SI). Early chips had SSI (small SI) of tens to a few hundred transistors. Later chips were MSI (medium SI), with hundreds to a few thousand. Then came LSI chips (large SI), in the thousands range.

Fourth Generation (1971 - ): VLSI (Very Large SI)

VLSI allowed the equivalent of tens of thousands of transistors to be incorporated on a single chip. This led to the development of the microprocessor: a processor on a chip.

Intel produced the 4004, which was followed by the 8008, 8080, 8088, 8086 etc. Other companies developing microprocessors included Motorola (6800, 68000), Texas Instruments and Zilog.

Personal computers were developed, and IBM launched the IBM PC based on the 8088 and 8086 microprocessors. Mainframe computers have grown in power. Memory chips are in the megabit range. VLSI chips had enough transistors to build 20 ENIACs. Secondary storage has also evolved at fantastic rates, with storage devices holding gigabytes (1000 MB = 1 GB) of data.

On the software side, more powerful operating systems, such as Unix, are available. Applications software has become cheaper and easier to use. Software development techniques have vastly improved.

Fourth generation languages (4GLs) make the development process much easier and faster.

[Languages are also classified according to generations, from machine language (1GL) and assembly language (2GL) through high-level languages (3GL) to 4GLs.]

Software is often developed as application packages. VisiCalc, a spreadsheet program, was the pioneering application package and the original killer application.

Killer application: A piece of software that is so useful that people will buy a computer to use that application.

Fourth Generation Continued (1990s): ULSI (Ultra Large SI)

ULSI chips have millions of transistors per chip, e.g. the original Pentium had over 3 million, and this has more than doubled in more recent versions. This has allowed the development of far more powerful processors.

The Future

Developments are still continuing. Computers are becoming faster, smaller and cheaper. Storage units are increasing in capacity.

Distributed computing is becoming popular and parallel computers with large numbers of CPUs have been built.

The networking of computers and the convergence of computing and communications are also of major significance.

From Silicon to CPUs!

One of the most fundamental components in the manufacture of electronic devices, such as a CPU or memory, is a switch. Computers are constructed from thousands to millions of switches connected together. In modern computers, components called transistors act as electronic switches.

A brief look at the history of computing reveals a movement from mechanical to electromechanical to electronic to solid-state electronic components being used as switches to construct more and more powerful computers, as illustrated below:

Electromechanical: Relays

Electronic: Vacuum tubes (valves)

Solid State: Transistors

Integrated Circuits (ICs): n transistors (n ranges from less than 100 for SSI ICs to millions for ULSI ICs)

Figure 1: Evolution of switching technology

Transistors act as electronic switches, i.e. they allow information to pass or not to pass under certain conditions. The development of integrated circuits (ICs) allowed the construction of a number of transistors on a single piece of silicon (the material out of which ICs are made).

ICs are also called silicon chips, or simply chips. The number of transistors on a chip is determined by its level of integration.
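
To make the switch idea concrete, the following toy Python model (illustrative only; real transistors are analogue devices, and the function names are invented for this sketch) treats a transistor as a gate-controlled switch and wires two of them in series to behave like an AND gate:

    # A transistor modelled as a voltage-controlled switch:
    # the source signal passes through only when the gate is "on".
    def transistor(gate, source):
        return source if gate else False

    # Two switches in series: a signal gets through only if both gates are on.
    def and_gate(a, b):
        return transistor(b, transistor(a, True))

    for a in (False, True):
        for b in (False, True):
            print(f"{a!s:5} AND {b!s:5} = {and_gate(a, b)}")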


No. of transistors          Integration level               Abbreviation   Example
2 - 50                      small-scale integration         SSI
50 - 5,000                  medium-scale integration        MSI
5,000 - 100,000             large-scale integration         LSI            Intel 8086 (29,000)
100,000 - 10 million        very large-scale integration    VLSI           Pentium (3 million)
10 million - 1,000 million  ultra large-scale integration   ULSI           Pentium III (30 million)
1,000 million and above     super large-scale integration   SLSI

Moore’s Law

The number of transistors on an IC will double every 18 months.

(Predicted by Gordon Moore in 1965; Moore later co-founded Intel and served as its chairman.)

This prediction has proved very reliable to date and it seems likely that it will remain so over the next ??? years.
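
As a worked example of how quickly that compounds (a minimal sketch: the starting point of roughly 2,300 transistors for the Intel 4004 in 1971 is a commonly quoted figure, and the model simply doubles it every 18 months):

    # Project transistor counts under Moore's Law: double every 18 months.
    count = 2300      # roughly the Intel 4004 (1971)
    year = 1971.0
    while year < 2001.0:
        year += 1.5   # one doubling period (18 months)
        count *= 2
    print(f"Projected transistors by {year:.0f}: {count:,}")
    # Twenty doublings over 30 years gives about 2.4 billion -- an
    # overestimate of real chips of that era, but the right shape of growth.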

Chip Fabrication

Silicon chips have a surface area of similar dimensions to a thumbnail (or smaller) and are three-dimensional structures composed of microscopically thin layers (perhaps as many as 20) of insulating and conducting material on top of the silicon. The manufacturing process is extremely complex and expensive.

Silicon is a semiconductor, which means that it can be altered to act either as a conductor, allowing electricity to flow, or as an insulator, preventing the flow of electricity. Silicon is first processed into circular wafers, and these are then used in the fabrication of chips. The silicon wafer goes through a long and complex process which results in the circuitry for a semiconductor device, such as a microprocessor or RAM, being developed on the wafer. It should be noted that each wafer contains from several to hundreds of copies of the particular device being produced. Figure 3 illustrates an 8-inch silicon wafer containing microprocessor chips.


Figure 3: A single silicon wafer (8 inches in diameter) can contain a large number of microprocessor chips

The percentage of functioning chips is referred to as the yield of the wafer. Yields vary substantially depending on the complexity of the device being produced, the feature size used and other factors. While manufacturers are slow to release actual figures, yields as low as 50% have been reported, and 80-90% yields are considered very good.

In a chip with 30 million plus transistors, a single short circuit caused by two wires touching is enough to cause chip failure!

Feature Size

The feature size refers to the size of a transistor or to the width of the wires connecting transistors on the chip. One micron (one thousandth of a millimetre) was a common feature size.

State-of-the-art chips use sub-micron feature sizes, from 0.25 micron (1997) down to 0.13 micron (2001), i.e. 250 down to 130 nanometres.

The smaller the feature size, the more transistors there are available on a given chip area.

This allows more microprocessors, for example, to be obtained from a single silicon wafer. It also means that a given microprocessor will be smaller, run faster and use less power than its predecessor built with a larger feature size. Since more of these smaller chips can be obtained from a single wafer, each chip will cost less, which is one of the reasons for cheaper processor chips.

In addition, reduced feature size makes it possible to build more complex microprocessors, such as the Pentium III, which uses around 30 million transistors.
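
A rough back-of-the-envelope check (a sketch based only on the figures quoted above; it ignores wiring, layout and process details): transistor density scales roughly with the inverse square of the feature size, so shrinking from 0.25 to 0.13 micron should fit about (0.25/0.13)^2, or roughly 3.7 times, as many transistors into the same area.

    # Rough density scaling: area per transistor ~ (feature size) squared.
    old_size, new_size = 0.25, 0.13           # feature sizes in microns
    density_gain = (old_size / new_size) ** 2
    print(f"Approximate density gain: {density_gain:.1f}x")   # ~3.7x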

Die Size

An obvious way to increase the number of transistors on a chip is to increase the area of silicon used for each chip - the die size.

However, this can lead to problems. Assume that a fixed number of faults occur randomly on the silicon wafer illustrated in Figure 3. A single fault will render an individual chip useless.

The larger the die size of an individual chip, the greater the waste, in terms of silicon area, when a fault arises on a chip.

For example, if a wafer were to contain 40 chips and ten faults occurred randomly, then up to 10 of the 40 chips could be useless, giving up to 25% wastage.

On the other hand, if there were 200 chips on the wafer, ten faults would give at most 5% wastage. Hence there is a trade-off between die size and yield, i.e. a larger die size leads to a lower yield.
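
That trade-off can be illustrated with a small simulation (a minimal sketch under simplified assumptions: the wafer's faults land on chips uniformly at random, and a single fault ruins a chip):

    # Simulate wastage for different numbers of chips per wafer.
    # Assumptions: 10 faults land uniformly at random; one fault ruins a chip
    # (several faults may land on the same chip, so average wastage comes out
    # below the worst case quoted in the text).
    import random

    FAULTS = 10
    for chips_per_wafer in (40, 200):
        trials, ruined_total = 10_000, 0
        for _ in range(trials):
            hit = {random.randrange(chips_per_wafer) for _ in range(FAULTS)}
            ruined_total += len(hit)
        avg = ruined_total / trials
        print(f"{chips_per_wafer} chips/wafer: ~{avg:.1f} ruined "
              f"({100 * avg / chips_per_wafer:.1f}% wastage)")
    # 40 chips/wafer gives roughly 22% wastage (near the 25% worst case);
    # 200 chips/wafer stays close to 5%.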
