
Brief History of Computer Systems, Software, and Programming

The first modern computer came into existence in the 1940s. No single person invented the computer; the credit goes to the many inventors who worked on different pieces of the computer over the years. Humanity's quest to simplify mathematical computation has led to extensive research, development, and other innovations. Laptops, tablets, smartphones, and many other devices are products of these innovations. This article details the history of these innovations in mathematics, programming, and computer system and software design.

What Is a Computer?

A computer is an electronic device that stores and processes data. It comprises both hardware and software. The term hardware refers to the physical aspects of the computer, which include the following main components:

1. the central processing unit (CPU);
2. memory;
3. storage devices (disks, CDs, and tapes);
4. input and output devices (monitors, keyboards, mice, and printers).

All these components are connected to each other through the system bus. The figure below provides a visual overview of the main parts of the computer.

Figure 1: The main components of a computer.

Computer programs are written by programmers, and they guide the computer through an orderly set of actions to perform some operation. The term software refers to these programs that instruct the hardware to perform specific tasks. The instructions to the computer can be given using different programming languages. These languages have evolved over time.

History of Computing

The earliest device to keep track of calculations was an abacus. It was used around 50 BC and was very popular in Asia. A popular form of abacus is shown below.

Figure 2: An abacus.1

John Napier, a Scottish mathematician, physicist, and astronomer, defined natural logarithms in 1614 to simplify calculations. The use of logarithms greatly simplified the complex astronomical, navigational, mathematical, and scientific calculations that were commonplace at that time. He also invented Napier's bones, a mathematical tool that used a set of numbered rods to simplify multiplication.

Figure 3: Napier’s bones.2

1 This image is in the public domain. The original can be found here.

2 This image is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license. It is attributed to Wikipedia user La Enciclopedia Libre Universal en Español. The original can be found here.


Charles Babbage, a British mathematician and inventor, first proposed the idea of a programmable computer. While studying complex astronomical calculations that others had done by hand, he found numerous mistakes, which motivated him to design a "mechanical computer" that could do these calculations without errors. Though he designed such a machine, it was never built during his lifetime.

The need for programming came with the idea of building general-purpose hardware that could carry out a variety of tasks. Ada Lovelace, widely regarded as the world's first programmer, published a paper in which she demonstrated how Babbage's Analytical Engine could be programmed to perform various computations.

Another device, the punch card, was used in the late 1800s to keep track of data that could be read by machines. Punch cards stored information in digital form, represented at the time by specific patterns of holes in paper cardstock. Herman Hollerith applied the idea of representing information as holes in paper cards to speed up the tabulation process in the 1890 US Census. Hollerith's work contributed to early programming methods, and punch cards were used to communicate with computers well into the 1970s. The technique of punch cards survives today in some voting processes, and punch cards drew media attention with the "hanging chads" issue during the 2000 US presidential election, when some ballots were not punched properly, making votes difficult to count.

Figure 4: A punch card programmed with FORTRAN.3

In the late 1940s, John von Neumann introduced the idea of a computer architecture based on stored programs. The key idea was to store both the data and the program in memory, which was made possible by constructing programs from a set of generic operations. This design became known as the von Neumann architecture. It was a major advance in computer design, because until that point computers were programmed by setting switches and physically rewiring the components; storing programs in memory changed that completely. This was also the start of machine language, a sequence of 0s and 1s, as a means of programming the computer: sequences of 0s and 1s indicated the operations and the operands on which those operations would be performed. An example of a machine language program looks something like this:

3 This image is in the public domain. The original can be found here.


Figure 5: An example of a machine language program.

Each line in the program contains a 16-bit code that represents either a machine instruction or a single data value. For example, the first few bits may indicate that the operation to be performed is addition, and the following bits may provide the numbers that need to be added. Another sequence may have initial bits to indicate that data needs to be fetched from the memory, and the following bits will provide the address in the memory from which the data will be fetched.
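To make this concrete, here is a minimal sketch in Java (the language used in this course) of how a program might pull apart such a 16-bit instruction word. The encoding (a 4-bit operation code followed by either two 6-bit operand fields or a 12-bit address) is invented for illustration and does not match any real machine.

// A minimal sketch of decoding a hypothetical 16-bit instruction word.
// The layout here is invented for illustration; real instruction sets
// define their own encodings.
public class InstructionDecoder {
    static final int ADD = 0x1;   // hypothetical opcode for addition
    static final int LOAD = 0x2;  // hypothetical opcode for a memory fetch

    public static void decode(int word) {
        int opcode = (word >> 12) & 0xF;     // top 4 bits select the operation
        if (opcode == ADD) {
            int a = (word >> 6) & 0x3F;      // first 6-bit operand field
            int b = word & 0x3F;             // second 6-bit operand field
            System.out.println("ADD " + a + ", " + b);
        } else if (opcode == LOAD) {
            int address = word & 0xFFF;      // remaining 12 bits as a memory address
            System.out.println("LOAD from address " + address);
        }
    }

    public static void main(String[] args) {
        decode(0b0001_000011_000101);  // prints: ADD 3, 5
        decode(0b0010_000000_001010);  // prints: LOAD from address 10
    }
}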

It was difficult to program using 0s and 1s, as different pieces of code looked similar. Giving sections of these 0s and 1s symbolic names made the task of programming easier, because the programmer could focus on the data and operations when creating programs. This led to the creation of assembly languages in the 1950s, and programmers used these languages to write software. An assembly language is a low-level programming language that is close to machine language but provides insight into the operations of the machine through the use of symbols. An example of an assembly language program looks something like this:

LD R1, NUMBER1   ; load the value stored at NUMBER1 into register R1
LD R2, NUMBER2   ; load the value stored at NUMBER2 into register R2
ADD R3, R1, R2   ; add the contents of R1 and R2, placing the sum in R3

Figure 6: An assembly language program for adding two numbers.
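For comparison, the same addition written in a high-level language such as Java (the focus of this course) collapses into a single statement; the compiler and runtime take care of moving values into and out of registers. The variable names and sample values below are arbitrary.

// One high-level statement replaces the LD/LD/ADD sequence above.
public class AddTwoNumbers {
    public static void main(String[] args) {
        int number1 = 4;              // sample values, chosen arbitrarily
        int number2 = 7;
        int sum = number1 + number2;  // the compiler handles the registers
        System.out.println(sum);      // prints 11
    }
}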

ENIAC (Electronic Numerical Integrator and Computer), the first general-purpose electronic computer, was built by the United States Army's Ballistic Research Laboratory in 1946. It grew out of research aimed at computing artillery firing tables for the U.S. Army during World War II.



Figure 7: ENIAC, the first general-purpose electronic computer.4

Dr. J. Presper Eckert and Dr. John Mauchly, two members of the team that built ENIAC, started their own company to build the first commercial computer, the UNIVAC (Universal Automatic Computer). Their first client was the United States Census Bureau, which needed a computer to keep track of the growing U.S. population. The computer was successfully built in 1951 at a cost of about one million dollars (about $9 million in today's money).

Machine Language and Programming Languages

Machines understand only 0s and 1s. The task of software development is to express a computation in a higher-level language and then translate it into a sequence of 0s and 1s that machines can understand. Expressing the computation in a higher-level language is referred to as raising the level of abstraction. A typical software application, such as a word processor or an Internet browser, may include millions of lines of code, but the hardware can only execute low-level instructions presented to it in the form of machine language consisting of 0s and 1s. Several layers of software are needed to convert the high-level application code into machine language. As shown in the figure below, a system software layer such as the operating system controls the hardware; the user works with application programs, which run on top of the operating system layer.

4 This image is in the public domain. The original can be found here.


Figure 8: A layered structure showing where the operating system software and application software are situated while running on a typical desktop computer.5
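Java itself offers a concrete example of these layers: the javac compiler translates source code into bytecode, and the Java Virtual Machine, a piece of system software running on top of the operating system, turns that bytecode into the machine instructions the hardware executes. A minimal program that travels through every layer:

// HelloLayers.java: javac compiles this source to bytecode
// (HelloLayers.class), and the JVM, running on top of the operating
// system, turns that bytecode into machine instructions for the hardware.
public class HelloLayers {
    public static void main(String[] args) {
        System.out.println("Hello from the application layer!");
    }
}

Compiling the file with javac HelloLayers.java and running it with java HelloLayers exercises each layer in the figure above, from application code down to the hardware.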

Programming Languages

During the 1950s and 1960s, several high-level programming languages were introduced, such as FORTRAN, COBOL, Lisp, ALGOL, and PL/I. The 1970s saw the introduction of languages such as Pascal, C, and Prolog. Most of these languages, such as Pascal, C, and FORTRAN, are procedural programming languages. Procedural programming is also referred to as imperative programming: the idea of this programming style is to specify the steps that the program must take to reach the desired state, as the sketch below illustrates.
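Here is a small illustration of the procedural style, written in Java (the course's language) rather than Pascal or C; the task, summing the integers 1 through 10, is chosen arbitrarily.

// A procedural-style computation: the program spells out each step that
// moves it toward the desired state (the sum of the first ten integers).
public class ProceduralSum {
    public static void main(String[] args) {
        int sum = 0;                  // step 1: establish the initial state
        for (int i = 1; i <= 10; i++) {
            sum = sum + i;            // step 2: update the state one step at a time
        }
        System.out.println(sum);      // step 3: report the final state (55)
    }
}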

An exception to this style of programming comes from Lisp and Prolog. Lisp is a functional programming language, and Prolog is a logic programming language. In the functional style, programs evaluate mathematical-style functions, and state variables are not used. In logic programming, a program is expressed as a sequence of logical assertions, and these assertions are automatically evaluated to produce the result; such languages include a logic evaluation engine as part of the language implementation.
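Lisp and Prolog are not used in this course, but the flavor of the functional style can be sketched in Java using the stream library available since Java 8: the same sum as above is written as a single expression to evaluate, with no state variable updated along the way.

import java.util.stream.IntStream;

// The sum of 1..10 expressed functionally: one expression is evaluated,
// and no variable is mutated along the way.
public class FunctionalSum {
    public static void main(String[] args) {
        int sum = IntStream.rangeClosed(1, 10).sum();
        System.out.println(sum);  // prints 55
    }
}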

In the late 1970s, a new design approach called object-oriented programming (OOP) was developed. This programming technique has several advantages, and almost all major languages today support this approach. The 1980s saw the introduction of the object-oriented programming language C++, which was followed by another object-oriented language, Java, in the 1990s. Java has become a popular programming language for the development of mobile and web-based applications, and it will be the focus of this course. The 1980s and 1990s also saw the introduction of scripting languages such as Perl, Python, and Ruby. A minimal example of the object-oriented style appears below.
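As a minimal sketch of the object-oriented style, the hypothetical Java class below bundles a piece of data (an account balance) together with the operations allowed on it; the class and method names are invented for illustration.

// Data (the balance) and the operations on it (deposit, getBalance)
// are bundled together in one class, and the state is hidden inside.
public class BankAccount {
    private double balance;               // state is private to the object

    public void deposit(double amount) {  // behavior operates on that state
        balance = balance + amount;
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100.0);
        System.out.println(account.getBalance());  // prints 100.0
    }
}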

5 This image is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. The original version of this image can be found here.

Computer Networks

Prior to the introduction of personal computers, businesses used minicomputers for local computing, with users connected to the minicomputer remotely via terminals. The minicomputers were often connected via computer networks to enable distributed systems: software made up of components, located or distributed on the networked computers, that communicate via messages to achieve a common purpose. As the speed and storage capacity of personal computers increased, as their cost decreased, and as the Apple and Windows operating systems matured, the personal computer replaced the minicomputer for local and network computing. Networks that began as local hard-wired networks grew rapidly by utilizing telephone networks and eventually evolved to include wireless connections. Thus, local computing grew into wide area computing, and today the network has expanded to include the entire Internet. The sketch below shows two components exchanging a message over such a network.
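As a rough sketch of such message passing, the Java program below runs two components in one process: a receiver listening on a network socket and a sender that connects and delivers a line of text. The port number and message are arbitrary choices for illustration; in a real distributed system, the components would run on separate machines.

import java.io.*;
import java.net.*;

// Two components communicating via a message: a receiver thread listens
// on a socket, and the main thread connects and sends one line of text.
public class MessageDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5555);  // arbitrary port
        Thread receiver = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                System.out.println("Received: " + in.readLine());
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        receiver.start();

        // The "sender" component connects and delivers its message.
        try (Socket client = new Socket("localhost", 5555);
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("Hello over the network");
        }
        receiver.join();
        server.close();
    }
}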

Microsoft and PC Operating Systems

Computers entered American households in the 1980s. In 1975, Bill Gates and Paul Allen co-founded Microsoft to create software for both businesses and personal computers. As one of the company's first major projects, IBM asked Microsoft to create the operating system software for the personal computers that IBM planned to build. Microsoft named this operating system MS-DOS and released it as a product. Although announced in 1983, the Windows operating system was not released until 1985. It was not until the 1995 release of Windows 95 that Microsoft's software became a very popular operating system and one of the fastest-selling pieces of software in the world. In 2001, Microsoft introduced Windows XP, with features like multilingual support and a new, user-friendly interface. Released in 2006, Windows Vista addressed the security issues that many critics had voiced about Windows XP and also improved access to settings and programs. It was followed by Windows 7, which supported mobile computing as wireless Internet access became common. Now, with Windows 8, Microsoft has focused on touch-enabled computing, which has become popular with devices such as smartphones and tablets.

Web Browsers

As the name suggests, a web browser is a software program that allows users to easily browse the contents of the World Wide Web. The first browser was created by Tim Berners-Lee, a British computer scientist, who proposed the World Wide Web in 1989 while working as a fellow at CERN. Marc Andreessen, a student at the University of Illinois at Urbana-Champaign, later created a browser in 1993, called Mosaic, which was much easier to install and use. This browser later formed the basis for the company Netscape, which popularized the use of web browsers. Today, there are many browser options. The most popular include Google Chrome, Mozilla Firefox, Microsoft Internet Explorer, and Apple Safari.


Google and Cloud Computing

Founded in 1998, Google has become one of the most recognized technology companies in the world due to its search engine, which is used to find information on the Internet. Google's search engine, as well as the company's other products, such as Gmail, Google Maps, Google Books, and YouTube, are web browser-based software applications. There is a general trend toward using applications that do not run on your own computer but on servers located elsewhere, also known as the "cloud." The computing resources being used are remote, and your machine mainly handles display and communication with the remote entities. As devices become more capable and more mobile, software development is catering more and more to the cloud computing paradigm.

Virtual Machines

One of the enablers of the cloud computing paradigm is a concept called virtual machines. Software working with the hardware allows the hardware to be virtualized: a virtual machine is a program that creates a self-contained operating environment on top of the underlying hardware and presents the appearance of a separate machine to the user. This allows the same piece of hardware to be used as several different machines at the same time. You can create a virtual machine on top of a Windows machine through the use of virtual machine software such as Oracle's VirtualBox or EMC's VMware. The servers in data centers can be utilized more efficiently through the use of virtual machines.

Smart Phones and Mobile Applications

Today, mobile applications drive a great deal of software development. Released in 2007, Apple's iOS, previously known as iPhone OS, is a mobile operating system for the iPhone and iPod Touch. The operating system now also runs on the iPad and the Apple TV, providing a familiar interface across devices that improves the user's experience. Another leading OS for smartphones and tablets is Google's Android, which is Linux-based. Microsoft's Windows 8 has also been designed to work with phones and tablets. These operating systems have made the development of mobile applications popular and spurred the creation of a large app-making industry.


