Future Computing Environments: The Commodity Mainframe Era

Decision Resources, Inc.
Bay Colony Corporate Center, 1100 Winter Street, Waltham, Massachusetts 02154
Telephone 617.487.3700, Telefax 617.487.5750
Information Systems Industry
Press Date: December 20, 1993

Future Computing Environments: The Commodity Mainframe Era
Gordon Bell, Consultant to Decision Resources, Inc.

Business Implications

The structure of mainframes renders them uncompetitive against micro-based multiprocessor computers. These micro-based systems also have cost advantages in manufacturing, software volumes, interactivity, and a base of trained users and programmers.

The industry is transitioning from server-centric and client-centric environments to client-server, client-multiple-server, and network computing with multiple clients and multiple servers. This transition will take another decade because mission-critical legacy applications cannot readily be migrated to other environments.

Resizing will continue to draw applications off mainframes onto networked PCs, workstations, and small servers; simple multiprocessors; large scalable multiprocessors; and massively parallel processing multicomputers. This trend will accelerate as network cost, performance, and maintainability improve.

Simple multiprocessors will be the mainline of computing throughout this decade, to be replaced eventually by various forms of highly parallel processing computers. This environment will be based on standards, making components more commodity-like and the market more competitive. If this future environment is based on a few communications and computing standards, the market for computers will become vastly larger than it is today.

The success of large-scale, multimillion-dollar mainframes built from ECL technology and running proprietary operating systems from IBM and other hardware vendors (e.g., Fujitsu, Hitachi, NEC, and Unisys) has clearly peaked. While some analysts forecast the overall mainframe market to increase at 24% (less than half that of personal computers and workstations), others forecast a shrinking market for mainframes. We believe that the mainframe hardware market will be eaten away by alternative technologies, but large-scale, transaction-oriented applications will grow moderately.

Mainframes operate as a central facility for batch processing, large transaction-processing workloads, and general-purpose computing. Mainframes require a large, specialized staff to maintain both hardware and software. Created in the 1950s as the central computer for large enterprises, the mainframe is now following the same fateful path as proprietary minicomputers, which, ironically, were created in the 1970s for departmental applications and thrived on replacing mainframe functions. Minicomputer growth is even more anemic than that of mainframes because the minicomputer failed to attract enough of the mission-critical applications that slow the migration to PCs and workstations.

With the advent of open systems(1) based on Unix, DOS, and industry standards, proprietary systems (particularly mainframes and traditional minicomputers) are out of favor.

(1) The IEEE defines an open system as "a comprehensive and consistent set of international information technology standards and functional standards profiles that specify interfaces, services, and supporting formats to accomplish interoperability and portability of applications, data, and people."
Unix became widely used in U.S. universities beginning in the early 1970s, resulting in a significant base of trained Unix programmers and users. Similarly, the wide-scale adoption of PCs has established the largest base of computer users. The mainframe has no such loyal following, except, perhaps, in MIS departments. Even mainframe programmers are dwindling as older programmers retire and new trainees gravitate toward Unix and DOS environments.

Beginning a little over 10 years ago, workstations became a natural choice for users hampered by overloaded minicomputers and mainframes. Enabled by fast, inexpensive CMOS microprocessors, a bit-mapped graphic workstation provided high interactivity. Soon workstations and workstation servers connected to a local area network (LAN) replaced minicomputers for departmental and interactive computing. In the late 1980s, a LAN-based PC environment built on Novell NetWare servers came into prominence, connecting Microsoft DOS and Windows-based PCs.

Today there are a multitude of computing alternatives to the original IBM 360 architecture, the basis for the majority of mainframes installed. (Mainframes from NEC and Unisys have architectures similar to the IBM 360.) Most of these mainframe alternatives fall into one of the following categories (a brief sketch comparing them follows the list):

Multiprocessor. A computer architecture in which two or more processors share common memory. (Most mainframes and supercomputers are also multiprocessors.)

Multi. A microprocessor-based multiprocessor that typically supports fewer than 20 processors, where the microprocessors are organized around a single bus.

Computer Cluster. A collection of independent computers (i.e., each computer runs its own copy of the operating system) that are interconnected to support a single application. The maximum number of computers supported is typically fewer than 16. (Most mainframes can be clustered.)

Scalable Multi. A multi that has no theoretical limit on the number of microprocessors supported and to which additional microprocessors can be added incrementally.

Scalable Cluster. A computer cluster that has no theoretical limit on the number of computers supported and to which additional computers can be added incrementally.
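To make the differences among these categories easier to compare at a glance, the following is a minimal illustrative sketch, not part of the original report. It simply encodes each class with the rough limits quoted in the text; the class names, field names, and Python representation are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MachineClass:
    """One of the mainframe-alternative categories described in the text."""
    name: str
    shared_memory: bool            # True: processors share one memory and one OS image
    own_os_per_node: bool          # True: each node runs its own copy of the operating system
    typical_limit: Optional[int]   # rough 1993-era bound quoted in the text; None = no theoretical limit

# Limits are the approximations quoted in the text, not measured values.
CATEGORIES = [
    # 8 processors reflects the ES 9000 discussion that follows, not the definition itself.
    MachineClass("multiprocessor (mainframe/supercomputer)", True,  False, 8),
    MachineClass("multi (single-bus, micro-based)",          True,  False, 20),
    MachineClass("computer cluster",                         False, True,  16),
    MachineClass("scalable multi",                           True,  False, None),
    MachineClass("scalable cluster",                         False, True,  None),
]

if __name__ == "__main__":
    for c in CATEGORIES:
        scale = "no theoretical limit" if c.typical_limit is None else f"~{c.typical_limit}"
        print(f"{c.name:42s} shared memory: {str(c.shared_memory):5s} "
              f"own OS per node: {str(c.own_os_per_node):5s} scale: {scale}")
```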
Why Today's IBM Mainframes Are Obsolete

To make the assertion that, like the dinosaur, the water-cooled ECL-technology mainframe is obsolete, we must first dissect it to examine the arcane structure required to support its software. Figure 1 shows the structure of the largest mainframe based on the IBM 360 architecture, the IBM ES 9000, which dictates the structure of IBM-compatible mainframes such as the Amdahl 5000 Series.

The mainframe's performance relies on two independent (but tightly connected) parts sharing a single operating system, plus clusters that have shared input/output (I/O). The ES 9000 has two quasi-independent subsystems, each having shared memory, up to four central processors (which may be added incrementally), and a switch to interconnect the parts. While this structure is essential for fault tolerance, it may not meet the performance requirements for very large applications. Clustering technology is required to scale beyond a single computer with eight processors. The complexity of processor/memory interconnections limits mainframes to eight processors per computer; clustering is limited as well. For all practical purposes, about six mainframes can be clustered when they also have multiple processors.

The combination of multiprocessing and clustering creates additional complexity that increases the software burden of every large application. The IBM 360's complex operating system and arcane architecture are maintained by an ever-dwindling base of programmers who tend to resist new computing architectures. In this predictable scenario, the fewer the mainframe programmers, the less likely companies will commit to new mainframe purchases; and the fewer the mainframes purchased, the less likely new programmers will be trained on its architecture.

Figure 1. Duplexed IBM Mainframe Structure. (Console computer connected to all processors and controllers. Source: Gordon Bell.)

The duplexed mainframe comprises pairs of computers and software programs. The following describes some of their specialized control functions.

The I/O control portion of the main computer operating system (including MVS, VM, and AIX) executes channel commands. This structure is not suited to Unix, which is inherently small-file- and character-I/O-based.

The Storage Management Subsystem (SMS) controls disks and archival storage. In addition to the main control computer, the controllers and disks include computers not visible to users.

The access interface (VTAM) allows the operating system to access the 3274 or 3745 communications computers and their operating systems.

The console computer monitors the main computer and corrects errors.

Computers within I/O devices and controllers are not visible to users.

These computers and operating systems increase software size and complexity, automatically increase total memory, and cause data transmission delays. For example, disk caching and buffering occur within the disk, disk controller, storage computer, I/O channels, and main memory. (Some data transmission delays can be offset by increasing throughput.)

Multis, however, have a simple I/O structure whereby a disk (with a track buffer) reads directly into main memory under the control of an ordinary processor. Disk caches are located in main memory and can be traded off against other memory demands. Because all resources are available and interchangeable, any processor or memory can be used for application work, control of communication, disk, and caching at any level. Compared with duplexed mainframe disks, redundant arrays of independent (or inexpensive) disks (RAID) reduce overhead and can increase performance by up to a factor of 4.

Whereas the relatively high cost of mainframes limits the number that are built, multis sell in order-of-magnitude-higher ...
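To tie the numbers above together, here is a back-of-the-envelope sketch, ours rather than the report's. The processor and cluster limits and the RAID factor are the figures quoted in the text; the single-path disk bandwidth is an assumed placeholder for illustration only.

```python
# Limits quoted in the text (1993-era approximations).
PROCESSORS_PER_MAINFRAME = 8      # interconnect complexity caps a single mainframe at ~8 CPUs
MAINFRAMES_PER_CLUSTER = 6        # "about six mainframes can be clustered"
RAID_SPEEDUP = 4                  # RAID "can increase performance by up to a factor of 4"

# Practical ceiling on processors reachable by the mainframe route.
mainframe_ceiling = PROCESSORS_PER_MAINFRAME * MAINFRAMES_PER_CLUSTER
print(f"Mainframe multiprocessing plus clustering tops out near {mainframe_ceiling} processors.")

# A scalable multi or scalable cluster has no such theoretical ceiling;
# capacity grows roughly with the number of commodity microprocessors added.
for micros in (16, 64, 256):
    relation = "exceeds" if micros > mainframe_ceiling else "is still below"
    print(f"A scalable micro-based system with {micros} processors {relation} that ceiling.")

# Disk side: a duplexed mainframe disk path vs. RAID at the quoted factor.
single_path_mb_s = 5.0            # assumed bandwidth of one duplexed disk path, illustration only
print(f"RAID at the quoted factor of {RAID_SPEEDUP}: "
      f"{single_path_mb_s:.0f} MB/s -> ~{single_path_mb_s * RAID_SPEEDUP:.0f} MB/s per array.")
```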