
DEFINING A COMPUTER

“A device that computes, especially a programmable electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”

“A computer is a device that accepts information (in the form of digitized data) and manipulates it for some result based on a program or sequence of instructions on how the data is to be processed.”

“A computer is a machine that manipulates data according to a set of instructions.”

“A device used for computing; specifically, an electronic machine which, by means of stored instructions and information, performs rapid, often complex calculations or compiles, correlates, and selects data.”

“A device that can perform substantial computation, including numerous arithmetic and logic operations, without intervention by a human operator during the run.”

“A computer is an electronic device that accepts data and instructions, processes the data according to the set of instructions, and produces the desired information.”

“A computer is a device capable of solving problems by accepting data, performing prescribed operations on the data, and supplying the results of these operations.”

Also refer - ANUBHA JAIN, DEEPSHIKHA BHARGAVA & DIVYA ARORA- RBD Publications

Chapter No. 1

Page No. 1.1 & 1.2

A SIMPLE MODEL OF A COMPUTER (FUNDAMENTALS)

In this topic you have to explain the various components of a computer system. Some are as under:

1) Monitor

2) Speakers

3) Keyboard

4) Mouse

5) Scanner

6) Cabinet (consists of various components like the motherboard, RAM, hard disk, etc.)

[Figure: components of a computer system]

Also refer - ANUBHA JAIN, DEEPSHIKHA BHARGAVA & DIVYA ARORA - RBD Publications

Chapter No. 2 & 3

Page No. 2.1 – 2.10, 3.18 – 3.20

CLASSIFICATION OF COMPUTERS

TECHNICAL

1) Hybrid

2) Analog

3) Digital

COMMERCIAL

1) Supercomputer

2) Mainframe computer

3) Minicomputer

4) Microcomputer

SUPERCOMPUTER

A supercomputer is a computer that is at the frontline of current processing capacity, particularly speed of calculation. The supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash".

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion). A particular class of problems, known as Grand Challenge problems, consists of problems whose full solution requires semi-infinite computing resources.

Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.

Supercomputer Companies / Manufacturers

Supercomputer companies in operation

These companies make supercomputer hardware and/or software, either as their sole activity, or as one of several activities.

AVA Direct • C-DAC • Cray Inc. • Dawning • Fujitsu • Groupe Bull • Hitachi • HP • IBM • Lenovo • nCUBE • NEC Corporation • SGI • Supercomputing Systems • Galactic Computing • Supermicro Computer (SMCI) • T-Platforms

Defunct supercomputer companies

These companies have either folded, or no longer operate in the supercomputer market.

• Control Data Corporation (CDC) • MasPar Computer Corporation • Meiko Scientific • Quadrics • Sequent Computer Systems • SiCortex • Supercomputer Systems, Inc. (Eau Claire, Wisconsin; S. Chen) • Supercomputer Systems, Inc. (San Diego, California) • Thinking Machines

Rank / Site / Computer

1. DOE/NNSA/LANL, United States – Roadrunner: BladeCenter QS22/LS21 Cluster, PowerXCell 8i 3.2 GHz / Opteron DC 1.8 GHz, Voltaire Infiniband (IBM)

2. Oak Ridge National Laboratory, United States – Jaguar: Cray XT5 QC 2.3 GHz (Cray Inc.)

3. Forschungszentrum Juelich (FZJ), Germany – JUGENE: Blue Gene/P Solution (IBM)

4. NASA/Ames Research Center/NAS, United States – Pleiades: SGI ICE 8200EX, Xeon QC 3.0/2.66 GHz (SGI)

5. DOE/NNSA/LLNL, United States – BlueGene/L: eServer Blue Gene Solution (IBM)

6. National Institute for Computational Sciences/University of Tennessee, United States – Kraken XT5: Cray XT5 QC 2.3 GHz (Cray Inc.)

7. Argonne National Laboratory, United States – Blue Gene/P Solution (IBM)

8. Texas Advanced Computing Center/Univ. of Texas, United States – Ranger: SunBlade x6420, Opteron QC 2.3 GHz, Infiniband (Sun Microsystems)

9. DOE/NNSA/LLNL, United States – Dawn: Blue Gene/P Solution (IBM)

10. Forschungszentrum Juelich (FZJ), Germany – JUROPA: Sun Constellation, NovaScale R422-E2, Intel Xeon X5570 2.93 GHz, Sun M9/Mellanox QDR Infiniband/Partec Parastation (Bull SA)

MAINFRAME COMPUTER

[Figure: An IBM 704 mainframe]

Mainframes (often colloquially referred to as Big Iron) are computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and financial transaction processing.

The term probably originated with the early mainframes, as they were housed in enormous, room-sized metal boxes or frames. Later, the term was used to distinguish high-end commercial machines from less powerful units.

Today in practice, the term usually refers to computers compatible with the IBM System/360 line, first introduced in 1965. (The IBM System z10 is the latest incarnation.) Otherwise, large systems that are not based on the System/360 but are used for similar tasks are usually referred to as servers or even supercomputers. However, "server", "supercomputer" and "mainframe" are not synonymous.

Many defining characteristics of the "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day. Modern mainframe computers have abilities not so much defined by their single-task computational speed (usually measured in MIPS — Millions of Instructions Per Second) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility with older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.

Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers. While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not to the same degree or level of sophistication.

IBM mainframes dominate the mainframe market at well over 90% market share. Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. In 2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but subsequently the two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers and which some analysts classify as mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain nominal mainframe hardware businesses in their home Japanese market, although they have been slow to introduce new hardware models in recent years.

The amount of vendor investment in mainframe development varies with market share. Unisys, HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather than custom processors in order to reduce their development expenses, and they have also cut back their mainframe software development. (However, Unisys still maintains its own unique CMOS processor design development for certain high-end ClearPath models but contracts chip manufacturing to IBM.) In stark contrast, IBM continues to pursue a different business strategy of mainframe investment and growth. IBM has its own large research and development organization designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-core z10 mainframe microprocessor. IBM is rapidly expanding its software business, including its mainframe software portfolio, to seek additional revenue and profits. IDC and Gartner server market share measurements show IBM System z mainframes continuing their long-running market share gains among high-end servers of all types, and IBM continues to report increasing mainframe revenues even while steadily reducing prices.

The distinction between supercomputers and mainframes is not a hard and fast one, but supercomputers generally are used for problems which are limited by calculation speed, while mainframes are used for problems which are limited by input/output and reliability and for solving multiple business problems concurrently (mixed workload). The differences and similarities are as follows:

• Both types of systems offer parallel processing, although this has not always been the case. Parallel processing (i.e., multiple CPUs executing instructions simultaneously) was used in supercomputers (e.g., the Cray-1) for decades before this feature appeared in mainframes, primarily due to cost at that time. Supercomputers typically expose parallel processing to the programmer in complex manners, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.

• Supercomputers are optimized for complex computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers and insurance business or payroll processing applications are more suited to mainframes.

• Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.

• Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power. This distinction is perhaps blurring over time as Moore's Law constraints encourage more specialization in server components.

• Mainframes are exceptionally adept at batch processing, such as billing, owing to their heritage, decades of increasing customer expectations for batch improvements, and throughput-centric design. Supercomputers generally perform quite poorly in batch processing.

There has been some blurring of the term "mainframe," with some PC and server vendors referring to their systems as "mainframes" or "mainframe-like." This is not widely accepted, and the market generally recognizes that mainframes are genuinely and demonstrably different.

[Figure: An IBM zSeries 800 (foreground, left)]

• 90% of IBM's mainframes have CICS transaction processing software installed. Other software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere Application Server middleware.

• As of 2004, IBM claimed over 200 new (21st century) mainframe customers — customers that had never previously owned a mainframe.

• Most mainframes run continuously at over 70% busy. A 90% figure is typical, and modern mainframes tolerate sustained periods of 100% CPU utilization, queuing work according to business priorities without disrupting ongoing execution.

• Mainframes have a historical reputation for being "expensive," but the modern reality is much different. As of late 2006, it is possible to buy and configure a complete IBM mainframe system (with software, storage, and support), under standard commercial use terms, for about $50,000 (U.S.). The price of z/OS starts at about $1,500 (U.S.) per year, including 24x7 telephone and Web support.

• In the unlikely event a mainframe needs repair, it is typically repaired without interruption to running applications. Memory, storage, and processor modules can be added or hot-swapped without interrupting applications. It is not unusual for a mainframe to be continuously switched on for months or years at a stretch.

MINICOMPUTER

A minicomputer (colloquially, mini) is a class of multi-user computers that lies in the middle range of the computing spectrum, in between the largest multi-user systems (mainframe computers) and the smallest single-user systems (microcomputers or personal computers). The class at one time formed a distinct group with its own hardware and operating systems, but the contemporary term for this class of system is midrange computer, such as the higher-end SPARC, POWER and Itanium-based systems from Sun Microsystems, IBM and Hewlett-Packard.

The term "minicomputer" evolved in the 1960s to describe the "small" third-generation computers that became possible with the use of transistor and core memory technologies. They usually took up one or a few cabinets the size of a large refrigerator or two, compared with mainframes that would usually fill a room. The first successful minicomputer was Digital Equipment Corporation's 12-bit PDP-8, which cost from US$16,000 upwards when launched in 1964. The important precursors of the PDP-8 include the PDP-5, LINC, the TX-0, the TX-2, and the PDP-1. Digital Equipment gave rise to a number of minicomputer companies along Massachusetts Route 128, including Data General, Wang Laboratories, Apollo Computer, and Prime Computer.

Minicomputers were also known as midrange computers. They had relatively high processing power and capacity that mostly fit the needs of midrange organizations. They were used in manufacturing processes or for handling the email a company sent and received.

The decline of the minis happened due to the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area network systems, the emergence of the 80286 and 80386 microprocessors, and the desire of end-users to be less reliant on inflexible minicomputer manufacturers and IT departments/"data centers" — with the result that minicomputers and dumb terminals were replaced by networked workstations, servers and PCs in the latter half of the 1980s.

During the 1990s the change from minicomputers to inexpensive PC networks was cemented by the development of several versions of Unix to run on the Intel microprocessor architecture, including Solaris, FreeBSD, NetBSD and OpenBSD. Also, the Microsoft Windows series of operating systems, beginning with Windows NT, now included server versions that supported pre-emptive multitasking and other features required for servers.

As microprocessors have become more powerful, CPUs built up from multiple components — once the distinguishing feature differentiating mainframes and midrange systems from microcomputers — have become increasingly obsolete, even in the largest mainframe computers.

Digital Equipment Corporation was the leading minicomputer manufacturer, at one time the second-largest computer company after IBM. But as the minicomputer declined in the face of generic UNIX servers and Intel-based PCs, not only DEC but almost every other minicomputer company, including Data General, Prime, Honeywell and Wang Laboratories, many of them based in New England, also collapsed. DEC was sold to Compaq in 1998.

In the software context, the relatively simple OSes for early microcomputers were usually inspired by minicomputer OSes (such as CP/M's similarity to Digital's RSTS), and multi-user OSes of today are often either inspired by or directly descended from minicomputer OSes (UNIX was originally a minicomputer OS, while Windows NT — the foundation for all current versions of Microsoft Windows — borrowed design ideas liberally from VMS and UNIX). Many of the first generation of PC programmers were educated on minicomputer systems.

• Control Data’s CDC 160A and CDC 1700 • DEC PDP and VAX series • Hewlett-Packard HP 3000 series, HP 2100 series, HP 1000 series • Honeywell-Bull Level 6/DPS 6/DPS 6000 series • IBM midrange computers • Nord-1, Nord-10, and Nord-100 • Prime Computer Prime 50 series • SDS SDS-92 • SEL, one of the first 32-bit realtime computer system manufacturers • TI-990 • Wang Laboratories 2200 and VS series • K-202, the first Polish minicomputer

MICROCOMPUTER

[Figure: The Commodore 64 was one of the most popular microcomputers of its era, and is the best-selling model of home computer of all time.]

A microcomputer is a computer with a microprocessor as its central processing unit (CPU). Another general characteristic of these computers is that they occupy physically small amounts of space when compared to mainframes and minicomputers. Many microcomputers (when equipped with a keyboard and screen for input and output) are also personal computers (in the generic sense).

The abbreviation "micro" was common during the 1970s and 1980s, but has now fallen out of common usage.

The earliest models were often sold as kits to be assembled by the user, and came with as little as 256 bytes of RAM and no input/output devices other than indicator lights and switches. However, as microprocessors and memory became less expensive from the early-to-mid-1970s onwards, microcomputers in turn grew faster and cheaper. This resulted in an explosion in their popularity during the late 1970s and early 1980s.

A large number of computer manufacturers packaged microcomputers for use in small business applications. By 1979, many companies such as Cromemco, Processor Technology, IMSAI, Northstar, Southwest Technical Products Corporation, Ohio Scientific, Altos, Morrow Designs and others produced systems designed either for a resourceful end user or for a consulting firm to deliver business systems such as accounting, database management, and word processing to small businesses. This allowed businesses unable to afford the leasing of a minicomputer or time-sharing service the opportunity to automate business functions, without (usually) hiring a full-time staff to operate the computers. A representative system of this era would have used an S100 bus, an 8-bit processor such as an Intel 8080 or Zilog Z80, and either the CP/M or MP/M operating system.

The increasing availability and power of desktop computers for personal use attracted the attention of more software developers. As time went on and the industry matured, the market for personal (micro) computers standardized around IBM PC compatibles running MS-DOS (and later Windows). Modern desktop computers, video game consoles, laptops, tablet PCs, and many types of handheld devices, including mobile phones and pocket calculators, as well as industrial embedded systems, may all be considered examples of microcomputers according to the definition given above.

Everyday use of the expression "microcomputer" (and in particular the "micro" abbreviation) has declined significantly from the mid-1980s onwards, and is no longer commonplace. The term is most commonly associated with the first wave of all-in-one 8-bit home computers and small business microcomputers (such as the Apple II, Commodore 64, BBC Micro, and TRS-80). Although—or perhaps because—an increasingly diverse range of modern microprocessor-based devices fit the definition of "microcomputer," they are no longer referred to as such in everyday speech.

In common usage, "microcomputer" has been largely supplanted by the description "personal computer" or "PC," which describes a machine designed to be used by one person at a time. IBM first promoted the term "personal computer" to differentiate its machine from other microcomputers, often called "home computers", as well as from IBM's own mainframes and minicomputers. Unfortunately for IBM, the microcomputer itself was widely imitated, as was the term. The component parts were commonly available to manufacturers and the BIOS was reverse-engineered through cleanroom design techniques. IBM PC compatible "clones" became commonplace, and the terms "personal computer," and especially "PC," stuck with the general public.

Since the advent of microcontrollers (monolithic integrated circuits containing RAM, ROM and CPU all onboard), the term "micro" is more commonly used to refer to that meaning.

Monitors, keyboards and other devices for input and output may be integrated or separate. Computer memory in the form of RAM, and at least one other less volatile memory storage device, are usually combined with the CPU on a system bus in a single unit. Other devices that make up a complete microcomputer system include batteries, a power supply unit, a keyboard and various input/output devices used to convey information to and from a human operator (printers, monitors, human interface devices). Microcomputers are designed to serve only a single user at a time, although they can often be modified with software or hardware to concurrently serve more than one user. Microcomputers fit well on or under desks or tables, so that they are within easy access of the user. Bigger computers like minicomputers, mainframes, and supercomputers take up large cabinets or even a dedicated room.

A microcomputer comes equipped with at least one type of data storage, usually RAM. Although some microcomputers (particularly early 8-bit home micros) perform tasks using RAM alone, some form of secondary storage is normally desirable. In the early days of home micros, this was often a data cassette deck (in many cases as an external unit). Later, secondary storage (particularly in the form of floppy disk and hard disk drives) was built into the microcomputer case itself.

[Figure: A collection of early microcomputers, including a Processor Technology SOL-20 (top shelf, right), an MITS Altair 8800 (second shelf, left), a TV Typewriter (third shelf, center), and an Apple I in the case at far right.]

Although they contained no microprocessors but were built around TTL logic, Hewlett-Packard calculators as far back as 1968 had various levels of programmability such that they could be called microcomputers. The HP 9100B (1968) had rudimentary conditional (IF) statements, statement line numbers, jump statements (GO TO), registers that could be used as variables, and primitive subroutines. The programming language resembled assembly language in many ways. Later models incrementally added more features, including the BASIC programming language (HP 9830A in 1971). Some models had tape storage and small printers. However, displays were limited to a single line at a time.[1] The HP 9100A was referred to as a personal computer in an advertisement in a 1968 Science magazine,[5] but that advertisement was quickly dropped.[6] It is suspected that HP was reluctant to call them "computers" because doing so would complicate government procurement and export procedures.

The Datapoint 2200, made by CTC in 1970, is perhaps the best candidate for the title of "first microcomputer". While it contains no microprocessor, the instruction set of its custom TTL-logic processor was the basis for the Intel 8008, and for practical purposes the system behaves approximately as if it contains an 8008. This is because Intel was the contractor in charge of developing the Datapoint's CPU, but ultimately CTC rejected the 8008 design because it needed 20 support chips.[7] Another early system, the Kenbak-1, was released in 1971. Like the Datapoint 2200, it used discrete TTL logic instead of a microprocessor, but functioned like a microcomputer in most ways. It was marketed as an educational and hobbyist tool, but was not a commercial success; production ceased shortly after introduction.[2] Another system of note is the Micral N, introduced in 1973 by a French company and powered by the Intel 8008; it was the first microcomputer sold fully assembled and not as a construction kit.

Virtually all early microcomputers were essentially boxes with lights and switches; one had to read and understand binary numbers and machine language to program and use them (the Datapoint 2200 was a striking exception, bearing a modern design based around a monitor, keyboard, and tape and disk drives). Of the early "box of switches" microcomputers, the MITS Altair 8800 (1975) was arguably the most famous. Most of these simple, early microcomputers were sold as electronic kits: bags full of loose components which the buyer had to solder together before the system could be used.

The period from about 1971 to 1976 is sometimes called the first generation of microcomputers. These machines were for engineering development and hobbyist personal use. In 1975, the Processor Technology SOL-20 was designed, which consisted of a single board that included all the parts of the computer system. The SOL-20 had built-in EPROM software which eliminated the need for rows of switches and lights. The MITS Altair just mentioned played an instrumental role in sparking significant hobbyist interest, which itself eventually led to the founding and success of many well-known personal computer and software companies, such as Microsoft and Apple Computer. Although the Altair itself was only a mild commercial success, it helped spark a huge industry.

1977 saw the introduction of the second generation, known as home computers. These were considerably easier to use than their predecessors, whose operation often demanded thorough familiarity with practical electronics. The ability to connect to a monitor (screen) or TV set allowed for visual manipulation of text and numbers. The BASIC programming language, which was easier to learn and use than raw machine language, became a standard feature. These features were already common in minicomputers, with which many hobbyists and early manufacturers were familiar.

1979 saw the launch of the VisiCalc spreadsheet (initially for the Apple II) that first turned the microcomputer from a hobby for computer enthusiasts into a business tool. After the 1981 release by IBM of its IBM PC, the term personal computer became generally used for microcomputers compatible with the IBM PC architecture (PC compatibles).

ANALOG COMPUTERS

An analog computer (spelled analogue in British English) is a form of computer that uses the continuously-changeable aspects of physical phenomena such as electrical,[1] mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital computers represent varying quantities incrementally, as their numerical values change.

Mechanical analog computers were very important in gun fire control in World War II and the Korean War; they were made in significant numbers. In particular, development of transistors made electronic analog computers practical, and before digital computers had developed sufficiently, they were commonly used in science and industry.

Analog computers can have a very wide range of complexity. Slide rules and nomographs are the simplest, while naval gun fire control computers and large hybrid digital/analogue computers were among the most complicated. Digital computers have a certain minimum (and relatively great) degree of complexity that is far greater than that of the simpler analog computers. This complexity is required to execute their stored programs, and in many instances for creating output that is directly suited to human use.

Setting up an analog computer required scale factors to be chosen, along with initial conditions – that is, starting values. Another essential was creating the required network of interconnections between computing elements. Sometimes it was necessary to re-think the structure of the problem so that the computer would function satisfactorily. No variables could be allowed to exceed the computer's limits, and differentiation was to be avoided, typically by rearranging the "network" of interconnects, using integrators in a different sense.

Running an electronic analog computer, assuming a satisfactory setup, started with the computer held with some variables fixed at their initial values. Moving a switch released the holds and permitted the problem to run. In some instances, the computer could, after a certain running time interval, repeatedly return to the initial-conditions state to reset the problem, and run it again.

The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors, is striking in terms of mathematics. They can be modeled using equations that are of essentially the same form.
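
To make the correspondence concrete, here is a standard textbook identification, added as an illustration rather than taken from the text: a damped mass-spring system and a series RLC circuit obey equations of identical form,

    m\ddot{x} + c\dot{x} + k x = F(t)    % mechanical: mass m, damper c, spring k, applied force F
    L\ddot{q} + R\dot{q} + q/C = V(t)    % electrical: inductance L, resistance R, capacitance C, applied voltage V

so mass corresponds to inductance, damping to resistance, and spring stiffness to the reciprocal of capacitance.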

However, the difference between these systems is what makes analog computing useful. If one considers a simple mass-spring system, constructing the physical system would require making or modifying the springs and masses. This would be followed by attaching them to each other and an appropriate anchor, collecting test equipment with the appropriate input range, and finally, taking measurements. In more complicated cases, such as suspensions for racing cars, experimental construction, modification, and testing is neither simple nor inexpensive.

The electrical equivalent can be constructed with a few operational amplifiers (op-amps) and some passive linear components; all measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) 'stiffness of the spring', for instance, can be changed by adjusting a potentiometer. The electrical system is an analogy to the physical system, hence the name, but it is less expensive to construct, sometimes safer, and typically much easier to modify. As well, an electronic circuit can typically operate at higher frequencies than the system being simulated. This allows the simulation to run faster than real time (which could, in some instances, be hours, weeks, or longer). Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations.

The drawback of the mechanical-electrical analogy is that electronics are limited by the range over which the variables may vary. This is called dynamic range. They are also limited by noise levels. Floating-point digital calculations have a comparatively huge dynamic range (good modern handheld scientific/engineering calculators handle exponents of ±500).

These electric circuits can also easily perform a wide variety of simulations. For example, voltage can simulate water pressure and electric current can simulate rate of flow in terms of cubic metres per second. (In fact, given the proper scale factors, all that would be required in that case is a stable resistor.) Given the flow rate, a simple integrator provides the accumulated volume of liquid; both variables are voltages. In practice, current was rarely used in electronic analog computers, because voltage is much easier to work with.

Analog computers are especially well-suited to representing situations described by differential equations. Occasionally, they were used when a differential equation proved very difficult to solve by traditional means.
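
As a rough sketch of what such a machine computes (a minimal numerical illustration in Python; the component values, time step, and run length are invented for the example, not taken from the text), two chained integrators solve the damped mass-spring equation by integrating acceleration into velocity and velocity into displacement:

    # Sketch: numerically mimicking an analog-computer patch for the
    # damped mass-spring system m*x'' + c*x' + k*x = 0 (assumed values).
    m, c, k = 1.0, 0.5, 4.0   # mass, damping, stiffness
    x, v = 1.0, 0.0           # initial conditions: unit displacement, at rest
    dt = 0.001                # time step; a real analog machine integrates continuously

    for _ in range(10_000):       # simulate 10 seconds
        a = (-c * v - k * x) / m  # summing stage: acceleration from the feedback terms
        v += a * dt               # first integrator: acceleration -> velocity
        x += v * dt               # second integrator: velocity -> displacement

    print(f"x(t = 10 s) = {x:.4f}")  # a decayed oscillation, close to zero

On an actual analog computer, the same structure would be patched from two integrators, a summer, and potentiometers setting m, c, and k.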

A digital system in nearly every instance uses two voltage levels to represent binary numbers, although their numerical magnitude only sometimes has practical significance, such as in accounting or engineering calculations. In many cases, the binary numbers are simply codes that correspond, for instance, to brightness of primary colors, or letters of the alphabet (or other printable symbols). The manipulation of these binary numbers is how digital computers work. The electronic analog computer, however, manipulates electrical voltages that represent the magnitudes of quantities in the problem being solved.

Accuracy of an analog computer is limited by its computing elements as well as quality of the internal power and electrical interconnections. The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures.

Precision of a digital computer must necessarily be finite, although 64-bit CPUs are becoming commonplace, and arbitrary-precision arithmetic, while relatively slow, provides any practical degree of precision that might be needed.
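
For instance (a short Python sketch added for illustration; the 50-digit setting is an arbitrary choice), an arbitrary-precision library trades speed for as many digits as the problem requires:

    from decimal import Decimal, getcontext

    getcontext().prec = 50          # 50 significant digits (arbitrary example value)
    print(Decimal(2).sqrt())        # square root of 2, far beyond ordinary float precision
    print(Decimal(1) / Decimal(7))  # 0.142857142857... carried to 50 digits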

Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations. In general, analog computers are extraordinarily fast, since they can solve most complex equations at the rate at which a signal traverses the circuit, which is generally an appreciable fraction of the speed of light. On the other hand, the precision of analog computers is not good; they are limited to three, or at most, four digits of precision.

Digital computers can be built to take the solution of equations to almost unlimited precision, but quite slowly compared to analog computers. Generally, complex equations are approximated using iterative numerical methods which take huge numbers of iterations, depending on how good the initial "guess" at the final value is and how much precision is desired. (This initial guess is known as the numerical seed for the iterative process.) For many real-time operations, the speed of such digital calculations is too slow to be of much use (e.g., for very high frequency phased array radars or for weather calculations), but the precision of an analog computer is insufficient.

Hybrid computers can be used to obtain a very good but relatively imprecise 'seed' value, using an analog computer front-end, which is then fed into a digital computer iterative process to achieve the final desired degree of precision. With a three or four digit, highly accurate numerical seed, the total digital computation time necessary to reach the desired precision is dramatically reduced, since many fewer iterations are required.
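
The effect of a good seed can be sketched with Newton's method for computing the square root of 2 (an illustrative Python example; the tolerance and seed values are invented, and the analog front-end is stood in for by simply supplying a four-digit starting value):

    # Newton's iteration x <- (x + a/x) / 2 converges to sqrt(a).
    def newton_sqrt(a, seed, tol=1e-12):
        x, iterations = seed, 0
        while abs(x * x - a) > tol:
            x = (x + a / x) / 2
            iterations += 1
        return x, iterations

    print(newton_sqrt(2.0, 100.0))  # poor seed: roughly a dozen iterations
    print(newton_sqrt(2.0, 1.414))  # four-digit seed: converges in about two steps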

Consider that the nervous system in animals is a form of hybrid computer. Signals pass across the synapses from one nerve cell to the next as discrete (digital) packets of chemicals, which are then summed within the nerve cell in an analog fashion by building an electro-chemical potential until its threshold is reached, whereupon it discharges and sends out a series of digital packets to the next nerve cell. The advantages are at least threefold: noise within the system is minimized (and tends not to be additive), no common grounding system is required, and there is minimal degradation of the signal even if there are substantial differences in activity of the cells along a path (only the signal delays tend to vary). The individual nerve cells are analogous to analog computers; the synapses are analogous to digital computers.

Note that hybrid computers should be distinguished from hybrid systems. The latter may be no more than a digital computer equipped with an analog-to-digital converter at the input and/or a digital-to-analog converter at the output, to convert analog signals for ordinary digital signal processing, and conversely, e.g., for driving physical control systems, such as servomechanisms.

Sometimes seen is another usage of the term "hybrid computer" meaning a mix of different digital technologies to achieve overall accelerated processing, often application specific using different processor technologies.[1]

A hybrid computer is a digital computer that accepts analog signals, converts them to digital form and processes them digitally. This integration is obtained with digital-to-analog and analog-to-digital converters. A hybrid computer may use or produce analog data or digital data. It accepts a continuously varying input, which is then converted into a set of discrete values for digital processing. A hybrid computer system setup offers a cost-effective method of performing complex simulations. A hybrid computer capable of real-time solution has been less expensive than any equivalent digital computer. Hybrid computers have been necessary for successful system development. An example of a hybrid computer is the computer used in hospitals to measure the heartbeat of the patient. Hybrid machines are generally used in scientific applications or in controlling industrial processes.

Also refer - ANUBHA JAIN, DEEPSHIKHA BHARGAVA & DIVYA ARORA- RBD Publications

Chapter No. 1

Page No. 1.15 – 1.69

Characteristics of a Computer

1. Speed: The performance of a computer is judged by its processing speed, which is based on its cycle time. For slow computers, the cycle time is about 300 – 400 nanoseconds. A computer's processing speed is expressed in MHz or GHz.
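
As a quick worked example (illustrative figures, not from the text), cycle time is the reciprocal of clock frequency:

    clock_hz = 2.0e9                      # a 2 GHz processor (assumed example value)
    cycle_time_ns = 1.0 / clock_hz * 1e9  # reciprocal of frequency, in nanoseconds
    print(cycle_time_ns)                  # 0.5 ns per clock cycle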

2. Accuracy: A computer is consistently highly accurate in its calculations and decisions, provided data is entered correctly. It performs each and every calculation with the same accuracy. Some terms related to this are: GIGO – Garbage In, Garbage Out.

WYSIWYG – What You See Is What You Get.

3. Reliability: It is the extent to which we can rely on the computer. As far as accuracy, speed and serviceable life are concerned, a computer is an extremely reliable machine.

4. Storage: Depending upon its secondary storage capacity, a computer can store and recall any amount of data. Data on the computer can be saved and retained as long as desired by the user, and can be retrieved as and when needed.

5. Diligence: Being a machine, a computer is free from monotony, tiredness, lack of attention, etc., and can therefore work for hours continuously at the same speed. This quality is very rare in human beings, especially where the work is routine and requires great accuracy.

6. Versatility: A computer can perform several and varied functions at the same time; at a particular point of time, it can multitask.

7. Dumb Machine: A computer can perform only those functions for which it has been programmed. It cannot think and has no intelligence; it must be instructed in detail what to do and in what sequence.

8. Emotionless: Unlike human beings, computers do not have emotions, feelings or instincts. A computer's working is based on the instructions given by the user. That is why computers can work silently and without any boredom. There is no IQ and there are no feelings; it is just an electronic machine.

INPUT & OUTPUT DEVICES

HARDWARE

Hardware is a general term used to refer to the physical computer machinery and its tangible components, i.e., those components which can be seen and touched.

HUMANWARE OR PEOPLEWARE

Humanware (or peopleware) refers to the people who design, operate, and use the computer system.

Input Devices

Input devices transform data or information from the outside world and feed it into the computer system.

Data: the raw facts given to the computer.

Programs: These are the sets of instructions that direct the computer.

Commands: These are special codes or key words that the user inputs to perform a task.

1 Keyboard – a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.

2 Pointing devices

• Mouse – a pointing device that detects two-dimensional motion relative to its supporting surface.

o Mechanical
o Optomechanical
o Optical mouse – a newer technology that uses lasers, or more commonly LEDs, to track the surface under the mouse and determine its motion, which is translated into pointer movements on the screen.
o Trackball – a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.

• Light Pen
• Touch Screen
• Graphics Tablet

3 Gaming devices

Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.

Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.

Game controller - a specific type of controller specialized for certain gaming purposes.

4 Audio input devices

Microphone - an acoustic sensor that provides input by converting sound into electrical signals.

5 Scanners

Optical Scanners

OCR – Optical Character Recognition

Flatbed Scanners

Overhead Scanners

6 Barcode Reader

7 MICR Magnetic Ink Character Reader

8 Magnetic Stripe Reader

9 OMR Optical Mark Reader

10 Grabber

OUTPUT DEVICES –

1 Printer – Dot Matrix, Daisy Wheel, Chain & Band Printer, Ink Jet, Laser

2 Plotter – Pen, Drum, Flatbed, Electrostatic

3 Monitor – CRT, LCD

4 Speakers

Also refer - ANUBHA JAIN, DEEPSHIKHA BHARGAVA & DIVYA ARORA- RBD Publications

Chapter No. 3

Page No. 3.20 – 3.47

Computer Memory (RAM) & CPU

Refer - ANUBHA JAIN, DEEPSHIKHA BHARGAVA & DIVYA ARORA-RBD Publications

Chapter No. 3

Page No. 3.47 – 3.55, 3.63 – 3.66

Computer Software

As given in ANUBHA JAIN, DEEPSHIKHA BHARGAVA & DIVYA ARORA - RBD Publications

Chapter 4

Study the relevant lines.