
1 Introduction and Overview

It was the best of times, it was the worst of times. Dickens, A Tale of Two Cities, 1859

Today, many businesses, corporations, and institutions are striving to optimize their usage of resources. Rightsizing is the way to match available computer resources to individual or corporate needs. This chapter introduces the concepts, basic terminology, and major implementation strategies related to rightsizing.

WHAT IS RIGHTSIZING?

Rightsizing is a new term for describing an old but elusive goal: balancing user needs against available technology and organizational resources. Many people in the computer world are understandably reluctant to accept rightsizing as a needed addition to an already crowded vocabulary. Such terms are often short-lived or merely the products of colorful advertising campaigns. The term rightsizing, however, is very useful in describing the emerging architectures for computer systems, as we will soon see.

Rightsizing of a computer system matches user needs with available technology and resources, usually by moving software applications to the appropriate hardware platform. This balancing usually results in a redistribution of data processing, computing, and presentation tasks among the various computers within an organization. Successful redistribution leads to the most appropriate allocation and sharing of computer resources for a given time.

Rightsizing is a natural extension of the evolution of shared computer systems. In the early years of computers, the only resources capable of being shared were printers and floppy disks. As network technology developed, it became possible to share data and files. Today, distributed software applications allow users to share programs, processors, and displays.

Rightsizing is at once a strategy and a process. As a strategy, it consists of the objective of optimizing the overall use of computer resources and the goals of developing an appropriate schedule and budget. As a process, it consists of the specific steps, the activities and events, that will lead to the desired outcome. In theory, rightsizing could lead to the migration, or moving, to a less powerful computer, depending upon the requirements and economics of the needed system. In practice, however, rightsizing almost always results in the movement to faster, more powerful networked systems. References [1] to [10] in the bibliography at the end of the book provide in-depth discussions of various aspects of rightsizing.

MOTIVATION AND MISCONCEPTIONS

Such a potentially involved and time-consuming task as rightsizing would not be undertaken unless there was significant potential for cost savings and increased productivity. However, these benefits are not always realized immediately, but rather over the long term, and the benefits of one project may not apply to other areas. Thus, the professionals involved with computer rightsizing need to be aware of some common misconceptions associated with such endeavors.

Why Rightsize?

Since rightsizing may result in the complete redesign of a company's computer system and information structure, the potential benefits must be significant. The most commonly cited benefits are as follows:

• Increased access to information. A key requirement in the information age is the ability to quickly access the most up-to-date information. Designing corporate computer systems to meet this need is one of the primary goals of rightsizing.
• Increased productivity. When computer resources are used to their best advantage, productivity increases are experienced by almost everyone, from software developers to end users. For example, not all applications need to be developed in a mainframe environment. Many can be created and even run on PCs. Similarly, end users typically enjoy faster response times on PC network systems than on their mainframe counterparts.
• Support of organizational changes. Since the 1980s, many corporations have eliminated their middle layers of management. This flattening of the organizational hierarchy was an attempt to reduce the isolation of upper management while empowering lower-level management with greater decision-making authority (Fig. 1-1). Rightsizing from overburdened mainframes to networked PCs connected to corporate computers provides the necessary quick access to the very best information pertinent to the required decisions.

Figure 1-1 Organizational flattening (1980s versus 1990s hierarchies).

Misconceptions

Many misconceptions about rightsizing are based more upon the fears and wishes of those involved than on reality. Older system administrators, fearful that their many years of mainframe experience will no longer be needed, may equate rightsizing with the replacement of mainframes and minicomputers by workstations and PCs. In contrast, those who need computer-based information to do their jobs may view rightsizing as a way to speed the development of needed software application programs and reduce existing backlogs. Information managers, focused on the bottom line, may see rightsizing as the latest cost savings technique, anticipating quick returns.

Rightsizing is all and none of the above. While mainframe computers may be retired as a result of rightsizing, they may also be replaced by newer mainframes, or the existing mainframe may be moved to a secondary role, such as a database machine. Which architecture is chosen depends on a careful assessment of system needs, available technology, and the cost of all the alternatives. Similarly, rightsizing will not result in immediate reduction of backlogs in development of application and data processing programs. It takes time for system administrators, programmers, and end users to learn and fully utilize newly rightsized systems. Finally, it can also take time, often years, to realize cost savings from the rightsizing process. But delaying this process can result in lost productivity and decreased market presence, as other, more aggressive companies incorporate rightsized systems into their business.

MAJOR RIGHTSIZING COMPONENTS

The major components affected by rightsizing fall into three basic categories: hardware, network systems, and software.

Hardware

Hardware refers to any physical component or device that makes up a computer system, from the internal chips to the computer housing or chassis. Peripherals are hardware devices that are attached to a computer's housing, for example, keyboard, monitor, and printer.

Computers are themselves a major class of hardware systems. Traditionally, all computers were classified as one of three types, listed here in terms of decreasing size and computing power: mainframes, minicomputers, and microcomputers (e.g., workstations and PCs) (see Fig. 1-2). However, with continuing technical advances in the size and power of CPUs (central processing units) and the storage capacity of memory chips, as well as less expensive manufacturing techniques, the distinction among these three groups has blurred. Smaller, low-end mainframes are now almost indistinguishable from high-end minicomputers, and the distinction between minicomputers and microcomputers is similarly blurred.

Mainframe computers, so named because they were originally built on a large chassis or "main frame," are the fastest, largest, and most expensive of the three computer system categories. Users input data and receive processed results from the computer through a terminal, a hardware device consisting of a display screen and keyboard. Thousands of terminals are typically connected to one central mainframe.

Terminals should not be confused with microcomputers. Although both have similar peripheral devices, such as a monitor and a keyboard, terminals generally have far less computing power. There are three main types of terminals: dumb, smart, and intelligent. Dumb terminals can only send and receive data; they have no data processing capability. Smart terminals are the next step up, enabling the user to perform some basic data editing functions. Finally, intelligent terminals can send and receive data, and also run simple applications, associated with information display, independent of the mainframe computer. (Microcomputers connected to a host machine can serve as intelligent terminals.)

The term host has many meanings. For example, when many dumb terminals are connected to one mainframe, the mainframe "hosts" the terminals by providing requested services, data storage, and input/output (I/O) resources. In a broader sense, a host computer is any physical system that interprets and runs software programs. These programs may have been written on other computers, called logical or virtual machines, that are attached to the host via a network. Thus, a host can be a mainframe, minicomputer, or microcomputer attached to a network, depending upon the system.

• Mainframe: supports thousands of terminals; corporate business machine; $1 to $25 million
• Minicomputer: supports hundreds of terminals; lab and university machine; $30K to $1 million
• Microcomputer (includes PCs and workstations): supports 1 to 20 users; individual machine; $1K to $30K

Figure 1-2 The three major classes of computers.

Minicomputers, or minis, are smaller than mainframes but still too large to be portable. Their computing power, memory capacity, cost, and number of users supported are midrange between mainframes and microcomputers. Terminals are also needed to input and receive data from minicomputers.

Microcomputers, or micros, are relatively small in comparison to mainframes or minicomputers. The microcomputer category includes desktop machines, such as PCs or workstations; laptops, the briefcase-sized portables that can be held in your lap (see Fig. 1-3); and palmtop computers. Whatever the size, microcomputers are quickly becoming as fast and powerful as existing minicomputers and even some older mainframes.

Microcomputers are standalone systems, requiring no connection to a mainframe or minicomputer host. Whereas most hosts require slave terminals to input data and display the processed results, microcomputers perform all computer-related tasks independently. PCs and workstations can be connected together via networks to share data and increase their overall power, but this is not essential. In contrast, terminals are slaves to a host computer and cannot function separately, independent of a mainframe or minicomputer.

Although, as noted, the capabilities, performance, and cost of high-end PCs are converging with those of low-end workstations (see [1]), enough differences remain to justify classifying them as distinct types. Their differences can best be noted by comparing their configurations, as shown in Table 1-1. Basically, a PC is a single-user computer running under a DOS-Windows or Macintosh operating system. It may utilize an Intel (DOS) or Motorola (Mac) CISC-based processor and a Novell (DOS) or AppleTalk (Mac) network. Furthermore, the networking capability of a DOS-Windows machine is an add-on, not built into the original system architecture. In contrast, a typical workstation is a multiuser computer running a multitasking operating system such as Unix, utilizing a RISC-based processing chip with built-in TCP/IP networking capabilities.

Workstations are well suited to run processor-intensive scientific and engineering applications because they are designed with reduced instruction set computer (RISC) processors. Unlike the complex instruction set computer (CISC) processors found in most PCs, RISC chips have fewer and simpler instructions programmed into them. (A comparison of the two processor types is given in Chap. 5, Rightsizing Computer Hardware and Operating Systems.)

It should be noted that the differences cited above are generalizations, which do not always hold. For example, the PowerPC-based Macintosh has a RISC processor, but runs most of its CISC-based operating system applications using special software which emulates a CISC machine on top of a RISC processor. Also, Windows 95 from Microsoft has built-in networking capability. Further, these primary differences all but disappear when low-end workstations are compared with high-end PCs. In this case, the PCs perform more and more like workstations.


Table 1-1 PC Versus Workstation Configurations

Aspect | PC | Workstation
Number of users | Single user | Multiuser
Typical usage | Low-end processing tasks, e.g., word processing, mail, smaller databases, spreadsheets | Processing-intensive tasks, e.g., CAD/CAM (computer-aided design/computer-aided manufacturing), scientific and engineering computations
Operating system | DOS, Windows 3.X and 95, Mac | Unix, Windows NT, OS/2, SunOS
Number of tasks handled at a time | Single-tasking | Multitasking
Network type | Peer-based, e.g., Novell-type (add-on) and AppleTalk | Server-based, e.g., TCP/IP (built-in)
Cost | $300 to $10,000 | $4,000 to $75,000
Specific hardware:
Processor | Typically 16 or 32 bits (CISC-based) | Typically 32 or 64 bits (RISC-based)
Coprocessors | Optional | Built-in
Graphics | Mid-resolution, smaller screen | High-resolution, large screen

For example, many PCs will run a multitasking operating system like Unix or OS/2 on a TCP/IP network with a high-resolution screen. Likewise, many types of Unix systems can now emulate the DOS and Mac operating systems.

Network Systems

Almost all rightsizing strategies rely heavily on networks, which form the communication links between computers. Network-unique hardware and software are sufficiently complicated, and separate from the rest of the computer system, to form their own hardware category.

Historically, PCs have lacked networking capability. For example, DOS did not have operating system features like multitasking and network card drivers to support networking, so networking solutions consisted of software patches to DOS. This led to compatibility problems between networking software and other PC applications, and made PCs less reliable for networking than workstations built around Unix.

In the mainframe world, networking is handled by the front-end processor (FEP), typically a microcomputer or minicomputer that handles most of the communication processing tasks for the host computer, usually a mainframe. By off-loading the data communication input/output functions from the host, the FEP computer frees up the host to concentrate exclusively on data processing activities. Typical FEP tasks include transmitting and receiving messages, error checking, serial to parallel conversions, and coordinating message switching. The back-end processor, relieved of the basic "housekeeping" data communication tasks by the front-end processor, handles mostly processor-intensive functions, for example, data storage and manipulation. Back-end processors are usually host machines and may run on a mainframe, minicomputer, or workstation connected to a network.

In a contemporary computing environment, software applications and processing tasks are split, or distributed, across many different machines, all of which are connected by some kind of network. Front-end and back-end processors reside on client and server computers, respectively. A server processes and services requests made by clients. For example, a spreadsheet application, running on a client PC, may need certain data that is stored on a server minicomputer in another building. The client makes a request for this data, which is transmitted over a network connecting client and server. Once the server receives the client's request, it locates the data and transmits it back to the client.
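
The request and reply exchange just described can be sketched in a few lines of code. The sketch below is illustrative only and is not drawn from any system discussed in the text: it uses Python's standard socket library, and the host name, port number, "sales_q3" key, and in-memory DATA_STORE dictionary are assumptions standing in for a real server and its database.

```python
# Minimal client/server sketch (illustrative only): the client asks the server
# for a named piece of data, and the server looks it up and transmits it back.
import socket
import threading

DATA_STORE = {"sales_q3": "42,17,93"}   # stand-in for the server's database


def handle_one_request(listener):
    """Server side: accept one connection, look up the requested key, reply."""
    conn, _addr = listener.accept()
    with conn:
        key = conn.recv(1024).decode()            # the client's request
        reply = DATA_STORE.get(key, "NOT FOUND")  # locate the data
        conn.sendall(reply.encode())              # transmit it back


def request(key, host="localhost", port=5050):
    """Client side: send a request over the network and return the reply."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(key.encode())
        return conn.recv(1024).decode()


if __name__ == "__main__":
    with socket.create_server(("localhost", 5050)) as listener:
        threading.Thread(target=handle_one_request, args=(listener,),
                         daemon=True).start()
        print(request("sales_q3"))   # the client receives "42,17,93"
```

Running the script starts a throwaway server thread, issues one client request, and prints the returned string; in the architectures discussed here, the two halves would of course run on separate machines connected by the network.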

Software

Software is the sequence of instructions, called programs, that tell a computer's hardware what to do, specifically how to process data. Software programs exist on storage media such as floppy and hard disks, memory chips, magnetic tapes, and CD-ROMs. Today, the hardware and software of a particular computer type, say an IBM PC, are designed to function and interact together. The software is designed to work with a specific hardware architecture, and the hardware is designed to receive a specific type of instructions, in a specific format, from the software. Therefore, a program written for an IBM PC will not run directly on a Unix-based computer, e.g., a Sun workstation.

All software can be divided into two categories, based upon the primary function of the program: operating system software and application software. The operating system is the link between the hardware and application software such as word processing, spreadsheet, and database programs. The operating system is the main software program that manages the resources and basic operation of a computer. The term resources refers to the processor and any coprocessors; the data storage devices, both RAM and hard disk; and all peripheral devices. Basic operations, the common housekeeping activities required by all application programs, include:

• Process management, i.e., the order in which tasks will be performed and how they will be handled.

• Memory management, i.e., how to allocate the available memory.
• Basic I/O control, i.e., input received from a keyboard, mouse, or virtual reality glove/headset and data output to a screen, printer, or file.

The operating system allows many different application programs, from word processing programs to games, to be run on the same machine.

Table 1-2 lists the operating systems for the different computer system categories from three manufacturers: IBM, Sun, and Microsoft. Note that Microsoft (MS) Windows 3.X is not an operating system itself since it requires DOS to be present, but MS Windows NT and MS Windows 95 are full-fledged operating systems.

Table 1-2 Common Operating Systems

Computer Type | IBM | Sun | Microsoft
Mainframe | MVS | (none) | (none)
Minicomputer | OS-400 | Solaris/SunOS (Unix) | (none)
Workstation | OS/2, AIX (Unix) | Solaris (Unix) | Windows NT
Microcomputer, PC | PC DOS/OS/2 | Solaris-86 (Unix) | MS-DOS/Windows 3.1-95

The core of the operating system described above is also known as the kernel. The application software interacts with the operating system kernel, and the kernel passes on commands to the basic input/output system (BIOS), which lies between the operating system and the computer's hardware (see Fig. 1-4). This fundamental link translates software commands into the electronic signals recognizable by the hardware chips and circuits, which actually perform all the computer's processing.

Application program | Operating system (kernel) | BIOS | Hardware | Peripherals

Figure 1-4 Operating system for a standalone PC.

The BIOS has limited capabilities. Additional device driver programs are used to access hardware not recognized by the BIOS, such as mice and high-resolution display adapters, as well as newer types of hardware, such as CD-ROMs. Device drivers are like extensions of the BIOS in that they write directly to a particular peripheral via a hardware address. Since rightsizing relies heavily on distributed applications, these device drivers must be compatible across many platforms. This aspect of rightsizing will be explored further in Chap. 5, "Rightsizing Operating Systems."

RIGHTSIZING STRATEGIES

The process of rightsizing moves you from a present computer system architecture (where you are now) to a desired, future system (where you want to be). Perhaps your present system is centered around a mainframe and you want to move to a network of PCs and workstations. You may want to switch to another mainframe, one that is faster and costs less to maintain. Or you may have standalone PCs and peripherals and want to move to a system of shared resources, that is, network the standalone machines together. Whatever your objectives, they will be achieved by following a carefully conceived rightsizing strategy, tailored to your needs.

Three rightsizing scenarios, or migration strategies, occur with such frequency that they have been given special names, indicative of the original hardware aspects of each path:

• Downsizing (rehosting)
• Upsizing
• Same-sizing

Figure 1-5 shows how the first two strategies are related. These three strategies may occur separately or together, depending upon the particular requirements, constraints, and functions that satisfy each rightsizing situation. In terms of computer architecture, you can move down or across (downsizing/rehosting), up (upsizing), or stay where you are (same-sizing), given a specific current computer system configuration. Note that these rightsizing schemes are named according to the way in which the hardware platforms are affected by rightsizing. The various possibilities are shown, in matrix form, in Fig. 1-6. (This matrix will be revisited, in greater detail, in Chaps. 3 and 4.)

All rightsizing strategies also affect software systems, including data management programs, application-specific software, and user interface programs (see Fig. 1-7).

Figure 1-5 Rightsizing hardware model (mainframe, minicomputer, and microcomputer platforms; downsize, rehost, and upsize paths between host-based and client/server configurations).

At one extreme, shown in the large circle in the upper left-hand corner of the figure, is a traditional mainframe-dumb terminal (master-slave) system. In this architecture, sometimes known as the time-sharing model or monolithic system, all applications and data reside on the mainframe. The dumb terminals are merely monitors, displaying the results of the mainframe's computations. At the other extreme, shown in the large circle at the lower right-hand corner, is the standalone PC, where all applications and data reside and all software processing occurs. Of course, the type of applications supported and the raw processing power available are quite different for these two machines. Mainframe computers were designed to handle all of the critical processing needs for an entire organization, and PCs were designed for individual users.

In between these two extremes is the domain of the network, towards which the monolithic mainframe systems and the standalone PC systems are merging. Mainframe systems are being downsized, and standalone PC systems are being upsized, to networks of client and server machines. In these client/server network-based computer systems, client machines, usually PCs, make requests of and are serviced by server machines, usually workstations, minicomputers, or mainframes.

The client side of the network is represented as the area beneath the network cylinder in Fig. 1-7. A common characteristic of client machines is that they always present information to the user; that is, they act as the end user interface. The common characteristic of server systems, those represented above the network cylinder, is that they always manage the database. All applications operating on a client/server network belong in one of three software categories:

1. User interface or presentations
2. Applications
3. Data management

These applications are partitioned to either the client or server side of the network; that is, they reside on either the client PC or the server host computer, creating five different types of client/server architectures:

1. Distributed user interface
2. Networked user interface
3. Distributed applications
4. Networked data management
5. Distributed data management

For example, in the distributed user interface partitioning, also known as distributed presentation, some user interface software is on the client PC and some is on the server. In addition, the server has all of the remaining applications plus the data management program. The center partitioning, distributed applications, is the rightsized solution for many applications where both the client and the server machine share in the task of processing.
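
The distributed applications partitioning can be sketched in the same illustrative Python style as the earlier client/server example. This is a schematic under stated assumptions, not an implementation from the text: the order records, field names, and function split are invented, and the network hop is represented by an ordinary function call.

```python
# Schematic of the "distributed applications" partitioning: data management
# stays on the server, part of the application logic (row selection) runs on
# the server, and part of it (aggregation) plus presentation runs on the client.
# The records and field names are illustrative; a real system would replace the
# direct call to server_fetch() with a request sent over the network.

ORDERS = [  # server side: the database of record
    {"item": "poster", "qty": 3, "price": 12.0},
    {"item": "frame",  "qty": 1, "price": 45.0},
    {"item": "poster", "qty": 5, "price": 12.0},
]


def server_fetch(item):
    """Server-side application logic: select only the rows the client needs."""
    return [row for row in ORDERS if row["item"] == item]


def client_report(item):
    """Client-side application logic and presentation: aggregate and format."""
    rows = server_fetch(item)  # in practice, a request serviced over the network
    total = sum(row["qty"] * row["price"] for row in rows)
    return f"{item}: {len(rows)} orders, ${total:.2f} in sales"


print(client_report("poster"))   # -> poster: 2 orders, $96.00 in sales
```

Moving the selection logic onto the client (and shipping whole tables across the network) would push this split toward the data management partitionings, while moving the aggregation onto the server would push it back toward distributed presentation.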

Downsizing

Downsizing is the process of replacing a large computer system with a smaller one. Physically, downsizing means moving from a bulky mainframe or minicomputer to a collection of smaller computers, e.g., LAN-based PCs and workstations. When downsizing to a system of smaller, interconnected machines, the most vital elements of the future system become the network and the way in which all of the software applications, programs, and databases are distributed. The mainframe functions usually shift to a nodal network role, often a primary server. Sophisticated networks will allow workstations and PCs to communicate with one another to transfer files, help perform computations, and share applications, tasks previously performed by the mainframe. Of course, the software must be designed to allow for these complex distributed applications.

In terms of Fig. 1-5, the rightsizing hardware model, downsizing represents a move away from the standalone mainframe system (upper left) to any of the network-based client/server architectures. This migration can range from the relatively simple (e.g., off-loading, or the movement of some mainframe software development to PCs [2]) to the more complex (e.g., rehosting, or the movement of some major mainframe applications to another host [4]) to the highly complex (e.g., the complete replacement of the mainframe with a network of PCs and workstations, or total downsizing). See Fig. 1-8. Both off-loading and rehosting may eventually lead to total downsizing, i.e., the complete replacement of the existing mainframe system.

Figure 1-8 Two forms of downsizing: rehosting (moving applications or the operating system) and off-loading (moving applications development). Downsizing itself replaces some or all mainframe applications, moving from mainframe to minicomputer to PCs/workstations.

Off-loading

Off-loading moves specific operations from one computer to another, usually from a larger system to a smaller one. Off-loading mainframes means transferring the development of software applications from the mainframe environment to a network of PCs and workstations [5]. PC-based development offers two important advantages over developing the same applications on mainframes: (1) it reduces mainframe costs and (2) it improves productivity.

Software development activities, such as compiling and testing code, can require significant mainframe CPU time and many input/output operations. Both CPU and input/output activities are expensive on mainframes, thus developing these same applications on PCs results in significant cost savings. Also, software developers are more productive in a PC-based environment because of the faster response time and more powerful programming and debugging tools afforded by PCs. This also helps to reduce the cost of off-loaded development compared to mainframe development.

Applications chosen for off-loaded development must first be downloaded to a smaller system. Downloading moves data or programs from a host mainframe to a smaller computer, e.g., a PC or workstation. This is just the opposite of uploading, which moves programs from a smaller system to the host.

A Word of Caution: Don't confuse downloading with conversion. Conversion, the process of reformatting data to reside on a new system or rewriting code to execute on a different machine, is a popular technique in rightsizing mainframe applications to a client/server network environment.

Off-loaded development process: software applications are downloaded from the mainframe system to a PC/workstation, and PC-developed applications are uploaded back to the mainframe.

Figure 1-9 Uploading applications.

Rehosting

Rehosting, for example, from a mainframe to a workstation running Solaris would require a complete change in the operating system and thus in all software application interfaces to that system. Note that the rehosting process can be further complicated if the new host has, or is planned to have, a greater networking and application distribution capability than the original host.

Considerations in Downsizing

The most commonly cited goals in downsizing are:

• Reduced maintenance and operation costs
• Reduced backlogs
• Increased access to information

High costs, extensive backlogs, and insufficient access to information are all related to the difficulties of using older mainframe technology in today's ever-changing world. The costs associated with maintaining the hardware and the software applications on older mainframes are significant. Also, the batch mode of mainframe operation contributes to a backlog in user requests, especially for development of new software applications. Finally, most older mainframe technology was designed around the master-slave concept of data access. The resulting difficulty in accessing and processing mainframe data, which contributes to both the higher cost and increased backlogs of older mainframe systems, is the most important reason to rightsize.

Downsizing is not without its shortcomings. Costs associated with downsizing may include the following:

Hardware: new PCs and workstations; upgraded PCs and workstations; additional memory capacity; additional hard disk capacity; additional backup capacity; faster processors and coprocessors; new or additional file servers; additional peripheral resources such as printers, scanners, etc.

Software: new operating systems; new software applications; networked application licenses; new in-house application development; migration of existing applications; emulation of older, legacy applications.

Network system: bridges; routers; LANs; network operating systems; network cards; cables; gateways; multiplexers/concentrators.

Given the new hardware, software, and systems requirements, downsizing costs are typically high at the beginning of a transition. Only in the long term are real savings from reduced maintenance costs realized from downsizing. Short-term benefits come from a significant increase in productivity, mainly due to the increased access to information and the increased data processing capabilities for users.

Upsizing

Upsizing involves migration from a PC-based system to a more powerful network or host-based system. As in downsizing, computing power is increased. But unlike downsizing, the new upsized system may physically be the same size. For example, an office might upgrade from a computer with an 80486 or older microprocessor chip to a computer of the same size but with a more powerful Pentium/Pro chip. Of course, an upsized system might increase in physical size as well as in power, from, say, a PC to a mainframe.

Most commonly, upsizing means moving from a standalone PC, or loosely networked collection of PCs, to a fully integrated network of PCs and workstations. The system components emphasized in an upsizing strategy are similar to those noted in downsizing, namely the network and the software needed to distribute applications. This similarity between the desired future architectures should not be surprising, as both strategies move from existing standalone systems to interconnected hardware and software systems, i.e., network-based configurations. In upsizing, the standalone PCs will be connected, via the network, to one another. This allows all resources to be shared and utilized to the fullest extent, from peripherals like printers to internal elements like hard disk and CD-ROM drives, memory, and processors. As with downsizing, the typical goal is to migrate toward a fully realized network system, where usage of both the server and the client computer systems is maximized. In terms of the rightsizing hardware model in Fig. 1-5, you would start from the standalone PC (lower right) and move, or upsize, toward the center.

The most important goal of upsizing is effective sharing of information and system capabilities. For example, rather than copying spreadsheet data onto a floppy disk and sending the disk to someone who needs the information, you can transfer the data in a matter of seconds via the network. You need not even have a spreadsheet program on your PC, but can have the remote machine perform needed calculations, presenting only the results on your screen. In such ways the process of exchanging information becomes more efficient, resulting, hopefully, in increased productivity and decreased costs.

The drawbacks of upsizing are similar to those of downsizing, i.e., costs associated with the purchase of new hardware and software or upgrading existing hardware and software, network-related costs, and the costs of training users on the new system. The most notable contrast with downsizing is that downsizing moves from a highly structured, mainframe-dependent system, whereas upsizing starts from a relatively unstructured system, most often a loose collection of standalone PCs.

Same-sizing

Same-sizing is a rightsizing strategy that recommends staying with your current computer system configuration. After carefully examining the needs of the users and customers of a given system, you may determine that the time, energy, and money spent downsizing (rehosting) or upsizing would not be offset by the benefits of the change. A same-sizing strategy may also result from a determination that software, not hardware, is the problem.

It is important to have the "no change" scenario as a possible outcome of your rightsizing analysis. After all, the same level of effort in determining overall system needs must be expended regardless of which rightsizing strategy is ultimately selected, and the most prudent action may be no action at all. Giving same-sizing the same decision status as downsizing or upsizing also allows management to justify all the energy spent on determining the system requirements.

CASE STUDY

The following section contains the beginning of a case study that is continued throughout the book, with the final section presented at the end of the last chapter. This case study serves two purposes: first, it provides realistic scenarios which illustrate the basic rightsizing concepts and applications, and second, it highlights the similarities and differences in rightsizing between the technical and business communities. As each chapter adds more information to the basic outline presented here, the case study becomes more detailed, and can serve as the vehicle for discussing specific applications of concepts covered in the chapter.

The two most common rightsizing strategies are downsizing and upsizing. In general, businesses that have based their computing architectures around the mainframe are prime candidates for downsizing. Technical firms, on the other hand, tend to invest more heavily in workstations and are accordingly more interested in upsizing. This case study uses those stereotypes as a starting point.

Opening Scene

Cosmo Jake, a systems engineer, consults for the XYZ Corporation, a large technical firm specializing in environmental restoration of former Department of Defense (DOD) facilities. Marisa Rosales, a computer science major, is retained by the ABC Corporation, a Fortune 500 mail-order company. Both have extensive backgrounds in computer systems and an overall understanding of the application of systems engineering to their respective fields.

ABC Corporation

ABC Corporation is a Fortune 500 mail-order firm that offers more than 2500 different items featured in bimonthly catalogs. The firm specializes in posters and picture frames, for use in the home or office. With a mailing list of some 2.5 million customers, the company receives more than 2 million orders per year.

ABC runs a centralized operation from a town in California, just north of Los Angeles. Business has been good, and the firm intends to branch out to two locations, one in the Pacific Northwest and the other on the east coast near Boston (see Fig. 1-10). The firm hopes to reduce shipping charges by servicing customers in those new locations from regional warehouses.

A traditional IBM or Big Blue shop, ABC relies solely on a pair of IBM 3090 mainframe computers to control most of the company's operation. One machine runs administrative applications such as payroll, accounting, and employee benefits. Data management of corporate applications is performed with an IMS/DB database, running under the CICS environment on the MVS operating system. Most of the applications were written in the COBOL programming language. The other 3090 manages the applications critical to the company's mission, including inventory, shipping, and order entry operations. The mainframe database for this second machine is an IBM DB2 system. Finally, an IBM 4300 minicomputer is used to control corporate advertising and catalog-publishing activities, centered around a SQL/DS database system (see Fig. 1-11).

All of these host machines can be queried from hundreds of IBM 3270 dumb terminals, networked IBM desktop PCs, and Apple Macintosh desktop computers, all of which are located around the warehouse and office areas. The desktops must run in 3270 emulation mode to interface with the mainframes. Most of the desktops are used for data entry, usually customer orders.


Figure 1-10 Branch locations of ABC Corporation.

As the company begins to develop plans for the expansion, the information systems (IS) department voices concern over the inadequacies of the present system to accommodate the anticipated growth. Indeed, certain key applications, such as decision support, are already suffering from the existing centralized computer architectures. Management uses these decision support programs to analyze buying patterns and predict future trends. These reports are run overnight on the mission-critical machine, i.e., the second 3090 mainframe, and analysts review them the following morning. During busy seasons, such as Christmas, this 24-hour turnaround time is too slow to register the rapid changes in customer buying patterns, let alone react to these trends. Since the main objective is to reduce inventory, the analysts need hourly snapshots of customer buying patterns so that production on the faster-moving items can be increased and production of slow movers decreased. Unfortunately, the mainframe used for such analysis is completely devoted to order-entry activities during the day, and all the other host machines are equally busy. This problem will only get worse as the company expands.

Sensing a reluctance on the part of current IS management to embrace solutions based upon newer technologies, the management of ABC Corporation hires Marisa Rosales, an outside expert, to help rightsize their overall computer system. After studying the company's existing system and defining their specific rightsizing needs and functions, she tentatively concludes that a downsizing activity is in order, one which emphasizes decentralization and client/server technology.


XYZ Corporation

At the seemingly opposite end of the business spectrum from ABC is XYZ Corp., a large technical firm whose chief customer is the government. This firm specializes in environmental restoration of former Department of Defense (DOD) facilities. XYZ has a five-year contract to clean up hazardous wastes on three different DOD facilities, each in different states located across the country. The end goal is to make these facilities and the land they occupy suitable for reuse by industry and the community. To accomplish this task will require compliance with over 2 million requirements from the Environmental Protection Agency (EPA), Nuclear Regulatory Commission (NRC), and other state and federal regulatory bodies.

XYZ's corporate office is located in California, but the company maintains large regional offices at the three cleanup sites. These regional sites are separate cost centers. XYZ hopes to reduce duplication of effort, especially in identifying which regulatory requirements must be met and the types of solutions used in the actual remediation activities. The firm intends to avoid these duplication problems by sharing data through a distributed information system, specifically one large requirements database. At present, each site maintains its own separate database systems.

Technical professionals, such as engineers and scientists, make up the majority of the work force, with administration and support personnel comprising the rest. Most employees are connected via a LAN linking IBM PCs and clones, as well as some Macs. There are a few sections, however, that are not yet connected to the network. Individual groups and departments perform system analysis and modeling activities on workstations, mostly Suns and HPs. These models and their respective databases are not integrated together. A few older IBM 370 mainframes, running the MVS operating system with a DB2 database management system, are used for payroll and cost accounting.

XYZ Corp. has hired Cosmo Jake to help systems engineer the development and implementation of the computer-based tools needed to support the company's current contract, as well as the overall information management scheme. After analysis of the situation, he concludes that an upsizing strategy is the way to go, one which focuses on integration and network-centric technology.

The case studies in the chapters that follow will show how these rightsizing strategies were selected, in terms of the requirements and functions of each unique situation.

CHAPTER REVIEW

• Rightsizing is the way to match available computer resources to the individual or corporate need.
• Rightsizing is both a strategy and a process. As a strategy, it provides a plan for achieving the desired goal of optimizing the utilization of computer resources; as a process, it consists of the series of activities that lead to the desired result.
• The major components affected by rightsizing fall into three basic categories: hardware, networks, and software.
• Almost all rightsizing strategies rely heavily on network systems.
• In contemporary computing environments, software applications and processing tasks are distributed across many different machines, leading to client/server systems.
• There are three basic rightsizing strategies: downsizing (rehosting), upsizing, and same-sizing.
• Typical pressures driving downsizing are high maintenance and operation costs, large backlogs, and the increasing need to quickly and easily access more information.
• Typical pressures driving upsizing are the need to share information and system capabilities in a timely manner.
• Rehosting results in the migration from one host-based system to another.

In Chap. 2, you will learn about systems engineering, a proven approach for managing the development of complex rightsizing systems. By using a systems engineering approach, you will greatly increase the likelihood that the system you design will be the one that meets your customer's needs and expectations. As your rightsizing needs change with time, your previous system engineering work will lay the foundation upon which to incorporate these changes and design yet another, future system.

Review Questions

1. Rightsizing can lead to increased productivity by which of the following?
a. Increasing timely access to accurate information.
b. Decreasing organizational flattening of management levels.
c. Leaving applications on the mainframe computer system.

2. What is a host computer? Give three examples of possible hosts.

3. What capabilities and characteristics best differentiate a workstation from a PC?

4. Why have PCs traditionally lacked network capabilities?

5. Which of the software programs listed below controls a computer's data processing, memory allocation, and basic input/output functions?
a. Operating system.
b. User interface program.
c. Various application programs.
d. Database management system.

6. What steps can you take to ensure that your rightsizing effort will not lead you in the wrong direction, that is, that your future system will meet the needs of your customers?

7. Compare the common characteristics of client and server computer software systems.

8. Which partitioning of the three basic software applications is the most representative of a client/server network system? Explain.

9. List and compare the three basic rightsizing scenarios.

10. How does off-loading software applications from a mainframe computer differ from rehosting? From downloading? From conversion?

11. What are the most commonly cited goals in downsizing? How do these compare to the main goals in upsizing?

12. Why is it important to consider same-sizing as a viable rightsizing scenario?