
Classification of Computers

Computer systems used for business purposes can be divided into three classes: Microcomputers, Minicomputers and Mainframe computers. Though these divisions are loosely based on the size of the computer systems, there are no hard and fast rules for deciding exactly where one category ends and the next begins. Indeed, the largest minicomputer systems are often larger than the smallest mainframe computers.

The “size” of a computer system depends on the size of its hardware configuration, the nature of its applications, and the complexity of its system software. These factors help us classify a system as a microcomputer, minicomputer or mainframe.

Irrespective of size, all computers consist of two basic types of components: processors and input/output (I/O) devices.

The processor consists of three parts: the Central Processing Unit (CPU), main storage, and device controllers. The CPU executes instructions, main storage holds the instructions and data processed by the CPU, and device controllers let the CPU and main storage connect to I/O devices.

Note: Though all computer systems consist of the same basic components, the way those components are combined for a particular computer system varies depending on the system’s requirements.

Microcomputer or Personal Computer

A microcomputer is primarily intended for stand-alone use by an individual. Microcomputers are single-user systems which provide a simple processor and just a few input/output devices. A typical system consists of a processor with 2 or 4 GB of main storage, a keyboard, a display monitor, a printer, a diskette drive with a capacity of 4 GB, and a 500 GB hard disk. Different models are available, such as Desktop, Notebook, Laptop, Hand-held, Palmtop and PDA.

Minicomputer

Unlike microcomputers, most minicomputers provide more than one terminal so that several users can use the system at a time. Such systems are referred to as “multi-user systems”.

Minicomputers, like mainframe computers, are used by business organizations. The difference is that a minicomputer can support the simultaneous work of up to 100 users, and it is usually maintained in business organizations for the maintenance of accounts and finances.

Super Computer

Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculations (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamic calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting). Perhaps the best known supercomputer manufacturer is Cray Research.

Mainframe Computer

A mainframe consists of the same basic types of components but has more I/O devices and larger storage capacities.

Mainframe was a term originally referring to the cabinet containing the central processor unit or "main frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big iron machines were described as "mainframe computers" and eventually just as mainframes.

Nowadays a Mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe.

Note

 In the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a system capable of supporting up to 200 users simultaneously.

 As the size of computers has diminished while their power has increased, the term mainframe has fallen out of use in favor of enterprise server. You'll still hear the term used, particularly in large companies, to describe the huge machines processing millions of transactions every day.

Today Mainframe almost always refers to IBM’s zSeries computers. And that’s what this book is all about. From now on when we talk about Mainframes, we’re talking about the zSeries machines.

What is Server Technology?

A “server” is simply a computer, any computer, that provides “services” to other computers. If your computer can share its files, it’s a file server. If you turn iTunes music sharing on, your computer becomes a music server. If you have a shared printer, your computer has become a print server.

A server is a program running to serve the requests of other programs, the "clients". Thus, the "server" performs some computational task on behalf of "clients". The clients either run on the same computer or connect through the network. Depending on the computing service that it offers, it could be a database server, file server, mail server, print server, web server, or other.

For example, when you enter a query in Google, the query is sent from your computer over the internet to the Google servers that store all the relevant web pages. The results are sent back by the server to your computer.

We can find the following types of servers around the world:

 Database server - Provides database services to other computer programs or computers
 Web server - Server that HTTP clients connect to in order to send commands and receive responses along with data contents
 Standalone server - Emulator for client–server (web-based) programs
 Proxy server - Acts as an intermediary for requests from clients seeking resources from other servers

And so on…
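The request/response pattern behind all of these server types can be sketched in a few lines of Python sockets. The names below (run_echo_server, ask_server) are our own illustrations, not from any real product; the "service" here is simply upper-casing the text a client sends.

```python
import socket
import threading

def run_echo_server(host="127.0.0.1", port=0):
    """Start a tiny TCP server that upper-cases whatever one client sends.

    Returns the (host, port) the OS assigned, so a client can connect.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve_one():
        conn, _addr = srv.accept()          # wait for a single client
        with conn:
            request = conn.recv(1024)       # read the client's request
            conn.sendall(request.upper())   # "serve" a response
        srv.close()

    threading.Thread(target=serve_one, daemon=True).start()
    return srv.getsockname()

def ask_server(address, text):
    """Act as the client: send a request and return the server's reply."""
    with socket.create_connection(address) as cli:
        cli.sendall(text.encode())
        cli.shutdown(socket.SHUT_WR)        # tell the server we are done sending
        return cli.recv(1024).decode()
```

Whether the machine behind `address` is a PC, a minicomputer or a mainframe makes no difference to the client; that is the essence of the client-server idea.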

Do we need a server? Why?

“Do we need a server?” is a common question. The real hidden question is “What does a server do for my organization, and do we need something that does that?”

Servers can actually do a lot of different things, which is why it’s sometimes difficult to decide. However, for the small to medium-sized organization, a server typically “serves” in any or all of the following roles:

 Central file storage
 Backups
 Email
 Network printer sharing
 Firewall
 Remote connection
 Security for user account and file access
 Flexibility and expandability for new functions in the future

So, in this era of globalization, most businesses run on data from different states, countries and continents. Undoubtedly, computers handshake across regions, and thereby servers have become a standard for any business or organization.

Servers often run for long periods without interruption, and availability must often be very high, making hardware reliability and durability extremely important. Servers are ideally very fault tolerant and use specialized hardware with low failure rates in order to maximize uptime, for even a short-term failure can cost more than purchasing and installing the system.

Mainframe: Is It a Server?

Mainframes are servers in the sense of client-server computing today, but they are not usually called servers because the "clients" of a mainframe system are usually terminals. These are dumb terminals, which means all the applications run on the big system. Nothing is being served to the client: all the work is done on the big machine. A mainframe is usually dedicated to one purpose, such as a bank or corporation.

Generally, mainframe machines act as servers (though they are not called servers). You throw tasks to the mainframe machine, the mainframe performs the processing, and it stores the result. Many users throw requests to the mainframe machine concurrently. Thus, as a mainframe operator, you would generally not be physically present near the machine; you operate it remotely, with a keyboard and monitor connected to your mainframe server. There could be thousands of terminals connected to a mainframe server, and many users can perform their tasks on the mainframe server concurrently. You might sit in the work-office of your company at a terminal and perform data-processing on the mainframe server located somewhere else.

So different types of computers or technologies can act as a server (though not called a server): a personal computer can be a server, a minicomputer can be a server, a mainframe computer can be a server, and so on. But these differ on some factors, especially on RAS.

RAS (Reliability, Availability and Serviceability) determines the strength of any server.

Reliability, Availability and Serviceability (RAS)

Reliability, Availability and Serviceability (RAS) is a set of related attributes that must be considered when designing, manufacturing, purchasing or using a computer product or component.

The term was first used by IBM to define specifications for its mainframes and originally applied only to hardware. Today RAS is relevant to software as well and can be applied to networks, application programs, operating systems (OS), personal computers (PCs) and supercomputers, but especially servers.

Reliability - Ability of a computer-related hardware or software component to consistently perform according to its specifications. In a simple form, the system not only arrives free from technical errors but also avoids such errors once in operation. A reliable system:

 Helps itself to avoid and detect faults.
 Does not silently continue and deliver results with corrupted data; instead it corrects the corruption, or else stops and reports it.

Availability - The ratio of time a system or component is functional to the total time it is required or expected to function. It may be reported as minutes or hours of downtime per year, expressed as a direct proportion (for example, 9/10 or 0.9) or as a percentage (for example, 90%). A system with high availability:

 Stays operational even when faults do occur.
 Disables the malfunctioning portion and continues operating at a reduced capacity.
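As a rough illustration of how an availability figure translates into downtime, here is a small Python sketch (the function name is our own):

```python
def downtime_minutes_per_year(availability):
    """Convert an availability ratio (e.g. 0.9 for 90%) into minutes of downtime per year."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year
    return (1.0 - availability) * minutes_per_year

# 90% availability allows roughly 52,560 minutes (about 36.5 days) of downtime a year,
# while "five nines" (99.999%) allows only about 5.3 minutes a year.
```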

Serviceability - Various methods of easily diagnosing the system when problems arise. Some systems have the ability to correct problems automatically before serious trouble occurs. A system that supports serviceability:

 Detects potential problems at an early stage to avoid system downtime.
 Allows hot swapping of components - replacing computer system components without shutting down the system.

Why is RAS important?

Reliability is crucial in modern systems, even if it comes at the expense of system performance. Slowness can be an acceptable trait of a system, but failure and data loss are almost never acceptable. Downtime is equally unacceptable, hence the obvious importance of availability. Finally, serviceability contributes to both of the other traits, and should help to reduce the ongoing cost of running the system.

The mainframe strictly supports RAS and maintains it without fail. The degree of RAS provided by mainframes is higher than that of any other technology, and it is the great strength of mainframes.

Mainframe – A brief Intro…

So, what is Mainframe?

A large, highly-functional business computer, descended from IBM's System/360, which is capable of running thousands (or more) of concurrent applications, serving millions (or more) of concurrent users at greater than 99% busy with 99.999% or better uptime and no degradation in service. It continues to be the computer that handles much of the world's most critical business data processing, with exceptional reliability, availability and security.

Mainframe computers are powerful computers used primarily by corporate and governmental organizations for critical applications, bulk data processing such as census, industry and consumer statistics, enterprise resource planning, and transaction processing.

A very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. In the hierarchy that starts with a simple microprocessor (in watches, for example) at the bottom and moves to supercomputers at the top, mainframes are just below supercomputers.

Big companies such as banks, insurance companies, the travel and retail sectors, and telecom companies employ mainframes for processing their business data. Today, thousands of people around the globe book flights, make electronic money transfers, and swipe their credit cards for purchases. These transactions are processed on-the-fly in a snap, more often than not by a mainframe computer.

Although the term mainframe first described the physical characteristics of early systems, today it can best be used to describe a style of operation, applications, and facilities. So we have to keep in mind that the mainframe is not only a computer; more than that, it is a “technology”.

The mainframe is sometimes referred to as a "dinosaur", not only because of its size but because of reports, going back many years, that it is becoming extinct. In 1991 Stewart Alsop, the editor of InfoWorld, predicted that the last mainframe would be retired by 1996. However, in February 2008 IBM released a new mainframe, the z10, and Steve Lohr wrote about the mainframe as "the classic survivor technology" in The New York Times ("Why old technologies are still kicking").

Some more definitions are:

 A powerful computer/server/technology that supports thousands of applications and input/output devices, and thousands of users simultaneously. Today, a Mainframe refers to IBM's zSeries computers.

 It is a Style of secured computing designed to continuously run large, mixed workloads at high levels of utilization while meeting user-defined service level objectives.

 A mainframe is a large computer system that is used to host the databases, transaction servers, and applications that require a great degree of security and availability.

 Mainframe is a large, multiuser computer system designed to handle massive amounts of input, output and storage. A mainframe is usually composed of one or more powerful CPUs connected to many input/output devices, called terminals, or to personal computers. Mainframe systems are typically used in businesses requiring the maintenance of huge databases or the simultaneous processing of multiple complex tasks.

What does the word Mainframe stand for?

The major components of the electronic computers produced in the 1950s and early 1960s were mounted on racks or frames. In order to keep the lengths of cables interconnecting a computer's components to a minimum (thereby maximizing processing speed), a computer's central processing unit (CPU) and main memory were most often housed together in a single frame, which came to be called the computer's main frame.

So the term Mainframe originated from the early mainframes, as they were housed in enormous, room-sized metal boxes or frames. Later the term was used to distinguish high-end commercial machines from less powerful units, which were often contained in smaller packages.

Today in practice, the term usually refers to computers compatible with the IBM System/360 line, first introduced in 1965. IBM System z and zEnterprise are IBM's latest incarnation. Otherwise, systems with similar functionality but not based on the IBM System/360 are referred to as "servers." However, "server" and "mainframe" are not synonymous.

Some non-System/360-compatible systems derived from or compatible with older (pre-Web) server technology may also be considered mainframes. These include the Burroughs large systems, the UNIVAC 1100/2200 series systems, and the pre-System/360 IBM 700/7000 series. Most large-scale computer system architectures were firmly established in the 1960s and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (Interestingly, the first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1990. See History of the World Wide Web for details.)

Today’s Mainframe

Mainframe development occurred in a series of generations starting in the 1950s. In those days, mainframe computers were not just the largest computers; they were the only computers and few businesses could afford them.

In the 1970s and 1980s when almost all computers were big, the term Mainframe was used to refer to a number of different computer systems. Today most of these are gone, and Mainframe almost always refers to IBM’s zSeries computers.

As computer technology evolved, mainframe computers became an integral part of distributed processing systems. In such systems many smaller computers, such as minicomputers, are linked together in a system controlled by a host computer, often a mainframe. This is the reason the mainframe is often called a “centralized system”.

The zSeries computers are amongst the largest computers sold today, and they’re used for commercial data processing. By commercial data processing we’re really talking about database-based applications: putting a piece of data into a database, looking at it, and taking it out.

IBM zEnterprise System is the latest line of IBM mainframes, introduced on July 22, 2010. It comes in two models – Enterprise Class and Business Class. The zEnterprise offers greatly improved RAS, is more efficient and powerful, and makes cloud computing easier.

Mainframe Market

There are a number of vendors who manufacture mainframes:

 IBM – zSeries machines
 HP – NonStop
 Unisys – ClearPath
 Fujitsu – BS2000

And so on…

Of the above, IBM mainframes dominate the mainframe market, at well over 90% market share. Whatever we have discussed, and are going to discuss, in this material focuses only on IBM mainframes.

Why do we rely on Mainframes?

Whenever data processing is key for a business, we can go with distributed technology, midrange technology, or mainframe technology. Each has its own strengths. But if you are not worried about maintenance cost, staffing, and so on, the mainframe is the best technology to drive data processing. We can understand the reason in this section.

The main purpose of mainframes is to run the commercial applications of Fortune 1000 businesses and other large-scale computing workloads, such as banking and insurance businesses, where enormous amounts of data are processed: typically (at least) millions of records each day.

How do Mainframes differ from other technologies?

Though there are a number of reasons or facts, we can combine them all into one well-defined word – TRUST. Yes, it is a trustworthy technology, but in terms of what? The answer is RAS with SCALABILITY. Scalability here stands for “numbers”: the number of processors that can run, the number of peripheral devices that can be connected, the number of users that can be supported, the amount of information that can be processed, and the number of applications that can run. All the above figures are considered “at a time” or “at the given moment”; that is, the number of applications that can run at a time.

Some key points are:

 Mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe.

 While client/server systems are suited for rapid application deployment and distributed processing, mainframes are efficient at online transaction processing, mass storage, centralized software distribution, and data warehousing.

 Mainframes also have tools for monitoring the performance of the entire system, including networks and applications, that are not available today on servers.

 Mainframes now fit into three-tier client/server architectures. The combination of mainframe horsepower as a server in a client/server distributed architecture results in a very effective and efficient system. Mainframe vendors are now providing standard communications and programming interfaces that make it easy to integrate mainframes as servers in a client/server architecture. Using mainframes as servers in a client/server distributed architecture provides a more modular system design, and provides the benefits of client/server technology.

 Using mainframes as servers in a client/server architecture also enables the distribution of workload between major data centers and provides disaster protection and recovery by backing up large volumes of data at disparate locations.

 Nearly all mainframes have the ability to run (or host) multiple operating systems, and thereby operate as a host of a collective of machines. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.

 Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions.

 Mainframes can handle large volumes of data, provide centralized administration, offer superior data management capabilities, handle different types of workload, have high data bandwidth, and monitor data integrity and security.

Why stay on Mainframes? Why not other technologies?

 Some argue that the industries that typically use mainframes (financial/insurance) will eventually see the cost of maintaining their legacy systems become too high, and switch to a more modern, distributed platform. There is, however, a large cost to converting all these systems over to cheaper servers running Java. Not just the cost of doing it, but the risk involved in doing it wrong. If, hypothetically, you were to convert a system from mainframe COBOL to midrange Java, and you messed up a bunch of account values or lost data in any manner, you'd probably be on the bad end of a class action lawsuit, not to mention potential punitive regulatory action. This is why nobody does that. A common system has about 4 million lines of COBOL, and they all (mostly) work. Like they say, "If it ain't broke, don't fix it."

 The cost of running additional workload on distributed servers goes up linearly, which increases cost per unit of work as the workload grows.
 The cost of running incremental workload on the mainframe goes down as the total workload grows, and customers have learned that mainframes running high-throughput workloads are the most cost-efficient platform.
 The Total Cost of Storage is typically three times more in distributed environments.
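The cost behavior described above can be illustrated with a toy model. Every number here is invented for illustration; only the shape of the curves matters: per-unit cost that rises with workload for a server farm (each added server brings its own fixed cost plus coordination overhead between servers) versus per-unit cost that falls as the mainframe's large fixed cost is amortized over more work.

```python
import math

def cost_per_unit_distributed(workload, cost_per_server=5000,
                              capacity_per_server=100, coordination_cost=1.0):
    """Hypothetical model: every server adds a fixed cost, and coordination
    overhead grows with the number of server pairs, so cost per unit rises."""
    servers = math.ceil(workload / capacity_per_server)
    total = servers * cost_per_server + coordination_cost * servers * (servers - 1)
    return total / workload

def cost_per_unit_mainframe(workload, base_cost=1_000_000, incremental_cost=10):
    """Hypothetical model: one large fixed cost amortized over the workload,
    plus a small incremental cost per unit, so cost per unit falls."""
    return (base_cost + incremental_cost * workload) / workload
```

Under these made-up parameters the mainframe is expensive at small workloads but becomes the cheaper platform per unit of work once the workload is large, which is the pattern the bullets describe.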

Mainframe strengths

RAS for Mainframes:

Reliability:

 The mainframe system's hardware components have extensive self-checking and self-recovery capabilities.
 The system's software reliability is a result of extensive testing and the ability to make quick updates for detected problems.
 IBM boasts that you can bet all your money on a mainframe when it comes to reliability. Very often, you must have seen the horrific Blue Screen of Death (BSOD) on desktop computers, and they crash! A mainframe computer reliably processes huge volumes of business data, without crashing.

Availability:

 The mainframe system can recover from a failed component without impacting the rest of the running system. This applies to hardware recovery (the automatic replacing of failed elements with spares).
 Software recovery (the layers of error recovery that are provided by the operating system).
 Mainframe computers are always available; they are up and running all the time. They just don't fail. You'll be surprised to know that once a mainframe computer is started and powered on (IPL'ed), it can run for 5 to 10 years at a stretch without failing. In other words, mainframe computers have very good up-times. The Mean Time Between Failures (MTBF) ranges from several months to even years.

Serviceability:

 The system can determine why a failure occurred. This capability allows for the replacement of hardware and software elements while impacting as little of the operational system as possible.
 This term also implies well-defined units of replacement, either hardware or software.
 Faults can be detected early on a mainframe computer. When some components fail, some of IBM's systems can automatically call the IBM service center. Repairs can be done without disrupting day-to-day operations.

Security:  The IBM System z and IBM zEnterprise joins as the world's only servers with the highest level of hardware security certification, that is, Common Criteria Evaluation Assurance Level 5 (EAL5).  The EAL5 ranking gives companies confidence that they can run many different applications running on different operating systems, such as z/OS, z/VM®, z/VSE™, z/TPF and ®-based applications containing confidential data, such as payroll, human resources, e-commerce, ERP and CRM systems, on one System z divided into partitions that keep each application's data secure and distinct from the others.  DB2 uses cryptographic functions in the hardware. Both security and cryptographic functions enable delivery of leading-edge security at low levels of granularity, for example, individual rows and columns instead of tables.

Environment friendly:

 A single z mainframe (termed a Green Machine as well) is capable of consolidating the work of up to thousands of servers, sweeping the floor of the data center and reducing energy consumption, required floor space and the slew of costs associated with distributed server environments.
 When you hit a certain workload, the larger servers or mainframe environments are more energy efficient on an [energy used per] unit of work basis than multiple servers working together. We took a look at the energy used for the same workload when run on distributed four-way Unix servers and on a z9 mainframe machine. The energy consumption of the z servers was ten times less.

 And this is one of the main reasons why IBM is ranked the No. 1 green company in the world.

Continuing compatibility:

 Many mainframe applications were written decades ago and are still running, while a few may have been written “yesterday”. The ability of an application to work in the system, or its ability to work with other devices or programs, is called compatibility.
 Mainframes effectively provide downward compatibility, with flawless execution of software applications on new hardware configurations.
 No other platform can claim as much continuous, evolutionary improvement while maintaining compatibility with previous releases.

Why Business should prefer Mainframes? - Values Of Mainframes:

Some facts about mainframe (2007-2008 Statistics):

 95% of the Fortune 1000 enterprises use IMS - originally written in 1968 to support NASA’s Apollo program

 25 of the world’s top 25 Banks, 23 of the top 25 US Retailers and nine out of 10 of the world’s largest insurance companies run DB2 on System z

 490 of IBM’s top 500 customers run CICS; IBM’s CICS handles more than 30 billion transactions a day; IBM has 50,000 CICS customer licenses and 16,000 customers

 IMS and CICS systems execute over 80 billion transactions a day (note that the current world population is 7 billion)

 95% of US Fortune 500 companies are System z clients and 71% of global Fortune 500 companies are System z clients

 80% of world's corporate data resides or originates on mainframes.

 More than 359 ISVs sell over 972 Linux applications on System z

 System z increased market share 14 points while Sun, HP, and others fell over past 5 years

 60% of System z revenue is driven by new workloads (Java, Linux, Database, SOA)

Why Business should prefer Mainframes? - What makes a mainframe special?

Legendary dependability:

 Can I count on a mainframe? - Less than 5 minutes downtime a year

 How do mainframes do this? - Hardware, software, storage, and network designed for maximum application availability
 How can mainframes help you cope with planned and unplanned outages? - Unique mainframe clustering technology for maximum up-time - Resilient recovery in multiple locations - Replicate data in real time at remote locations and switch to replicated data without application outage

Why Business should prefer Mainframes? – Extreme Scalability

The industry’s fastest, most scalable and flexible enterprise server, the z196, can scale to:

 Up to 60 system images on a single server
 Up to 24 processors/96 cores for client use and 20 processors/80 cores for a box on z/OS 1.12
 Up to 32 z/OS logical partitions configured in a single-image Parallel Sysplex® cluster, with shared data (up to 2,560 core engines total)
 63.75K subchannels in subchannel set 0, and support for multiple subchannel sets for a total of 127.75K subchannels
 Support for up to 3 TB of real memory on a single z/OS image (z/OS 1.8). This will allow the use of up to 512 GB of real memory on a single z/OS image

Why Business should prefer Mainframes? – Near Continuous availability

 There is more to “availability” than just the server being up - the application and the data must be available as well. For the System z platform this means hardware, I/O connectivity, operating system, subsystem, database, and application availability too.

 Beyond the single system is z/OS Parallel Sysplex clustering. Parallel Sysplex is designed to provide your data sharing applications and data with not only continuous availability for both planned and unplanned outages, but also near-linear scalability and read/write access to shared data across all systems in the sysplex for data sharing applications.
 MTBF (Mean Time Between Failure) for z servers lies between 20 and 40 years.
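The MTBF figure above maps to availability through the standard steady-state formula, availability = MTBF / (MTBF + MTTR). A quick sketch (the 4-hour repair time is a made-up assumption for illustration):

```python
def availability_from_mtbf(mtbf_hours, mttr_hours):
    """Steady-state availability: fraction of time the system is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# With a 30-year MTBF (about 262,800 hours) and an assumed 4-hour repair time,
# availability works out to roughly 0.999985.
```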

Who uses Mainframes daily?

Just about everyone uses, or has used, a mainframe computer at one point or another. If you have ever used an automated teller machine (ATM) to interact with your bank account, you have used a mainframe. A banking institution could use a mainframe to host the database of its customer accounts, for which transactions can be submitted from any of thousands of ATM locations worldwide.

Today, mainframe computers play a central role in the daily operations of most of the world’s largest corporations. In banking, finance, health care, insurance, utilities, government, and a multitude of other public and private enterprises, the mainframe computer continues to be the foundation of modern business.

So whenever a business application is accessed through a web browser, there is often a mainframe computer performing crucial functions “behind the scene.”

Many of today’s busiest websites store their production databases on a mainframe host. New mainframe hardware and software products are ideal for web transactions because they are designed to allow huge numbers of users and applications to rapidly and simultaneously access the same data without interfering with each other. This security, scalability, and reliability are critical to the efficient and secure operation of contemporary information processing.

Businesses today rely on the mainframe to:

 Perform large-scale transaction processing (thousands of transactions per second)
 Support thousands of users and application programs concurrently accessing numerous resources
 Manage terabytes of information in databases
 Handle large-bandwidth communication

How come such an old technology still stands?

Unknown to many, mainframe machines are still here to stay. While we work on Windows and UNIX systems, the mainframe is still king.

Long before there were desktops and servers, the only computers used for business were mainframes, with IBM leading the pack. Mainframes used to occupy a large area in a building, but with modern technology they are now becoming smaller in their footprint. There have been so many changes to mainframe technology that mainframes can also be used as servers in a web-enabled world, and hence the mainframe computer continues to be the foundation of modern business.

In the 1990s, the word was "the mainframe is dead". Two decades later, this has proved to be wrong. How does the mainframe fit into the modern world of computing? The fact is that organizations have built their businesses around mainframe code, and it would be too risky for them to convert their core applications to something new. IBM has taken advantage of this and is extending the capabilities of its older compilers. A check of its programming manuals shows that PL/1 can now work with XML, and COBOL can be used as an object-oriented language.

Technologies from Progress and IBM can convert the dumb green screens to web services that can be accessed by web applications. They can also convert these green screens into HTML pages that look as if they were served from modern-day web applications. These technologies also allow a mainframe program to access an external web service. By using technologies like this, the mainframe can be leveraged for its strengths. With cloud computing on the horizon, a study conducted for CA Technologies shows that 79% of European organizations believe the mainframe will be integral to their implementation of this technology.

So young developers who want to work in large organizations will benefit from picking up some knowledge of how mainframes work. The mainframe may be old, and it may be a totally different world compared to Windows and Linux machines, but everything seems to indicate that it is here to stay.

How does a mainframe work?

On the hardware side, a mainframe is made up of many CPUs, all working in parallel. On the software side, a mainframe works by running dozens of virtual machines, emulating hundreds of smaller servers running different operating systems, which reduces management and administrative cost while improving scalability and reliability.

To put it simply, a mainframe is a central computer, or server. All terminals, which are very basic PCs, connect to the mainframe; it does all the work, and the PC acts more or less like a remote monitor/keyboard/mouse. These terminals could be any type of computer, whether server, micro, or minicomputer.

Who is needed to run a mainframe?

Mainframe systems are designed to be used by huge numbers of people. Because of the large number of users, the applications running on the system, and the sophistication and complexity of the system software that supports them, a variety of roles besides end users are needed to operate and support the mainframe system. Unlike a personal computer, a mainframe needs a group of people to maintain and support it while it runs, which is usually all day, every day.

Who’s who in the Mainframe world?

The following group of people are needed to run Mainframe:

System programmer: The system programmer installs, customizes, and maintains the operating system and subsystems, and also installs or upgrades products that run on the system. The system programmer must be skilled at debugging problems with system software. System programmers are also needed to maintain middleware such as database management systems, online transaction processing systems, and web servers. Middleware is a software "layer" between the operating system and the user or user application. It supplies major functions that are not provided by the operating system. Middleware products such as DB2, CICS, and IMS™ can be as complex as the operating system itself.

System administrator: The person who maintains the critical business data that resides on the mainframe. System administrators perform more of the day-to-day tasks related to maintaining the critical business data that resides on the mainframe, while the system programmer focuses on maintaining the system itself. Although system programmer expertise lies mainly in the mainframe hardware and software areas, system administrators are more likely to have experience with the applications. Common system administrator tasks can include:

 Installing software
 Adding and deleting users and maintaining user profiles
 Maintaining security resource access lists
 Managing storage devices and printers
 Managing networks and connectivity
 Monitoring system performance

Application developer: Application developers design, build, test, and deliver mainframe applications for the company's users and customers. The application developer is responsible not only for creating new applications but also for maintaining and enhancing the company's existing mainframe applications. As mainframe installations still create new programs in COBOL and PL/I as well as in languages such as Java and C/C++, developers are needed to maintain the code at all levels.

System operator: The person who monitors and controls (24/7) the operation of the mainframe hardware and software. The operator:

 Starts and stops system tasks, subsystems such as transaction processing systems and database systems, and the operating system itself
 Performs an orderly shutdown and startup of the system and its workloads when required
 Monitors the system consoles for unusual conditions
 Works with the system programming and production control staff to ensure the health and normal operation of the systems

As applications are added to the mainframe, the system operator is given a run book (SOP, or Standard Operating Procedure) of instructions to ensure that they run smoothly. The run book/SOP includes:

 Application-specific console messages that require operator intervention
 Recommended operator responses to specific system events
 Directions for modifying job flows to accommodate changes in business requirements

In case of a failure or an unusual situation, the operator communicates with system programmers, who assist the operator in determining the proper course of action, and with the production control analyst, who works with the operator to make sure that production workloads are completing properly.

Production control analyst: The person who ensures that batch workloads run to completion without error or delay. Most mainframe installations run interactive online workloads followed by batch jobs that run (sometimes after the prime shift) when the online systems are not running. The production control staff understand that well-structured rules and procedures for controlling changes are a strength of the mainframe environment and prevent outages.

Mainframe vendors: A number of vendor roles are commonplace in the mainframe world. Because most mainframe computers are sold by IBM, and the operating systems and primary online systems are also provided by IBM, most vendor contacts are IBM employees.

However, independent software vendor (ISV) products are also used in the IBM Mainframe environment, and customers use original equipment manufacturer (OEM) hardware, such as disk and tape storage devices, as well.

Typical vendor roles are:

 Customer engineer: The IBM hardware maintenance person is often referred to as the customer engineer (CE), who provides onsite support for hardware devices. The CE usually works directly with the operations teams if hardware fails or if new hardware is being installed.

 Software support: IBM has a centralized Support Center that provides entitled and extra-charge support for software defects or usage assistance. The support staff validate reported product errors and provide a solution.

Mainframe Workloads – Batch & Online

Whatever you discuss about mainframes, the workload lies in either batch or online processing. So mainframe workloads always fall into one of two categories: batch processing, or online transaction processing, which includes web-based applications.

Batch Processing:

To keep it simple, batch processing is the running of jobs (a basic mainframe workload; whatever work is done on a mainframe is called a "job") on the mainframe without user interaction. We will look at batch processing in detail.

A key advantage of mainframe systems is the ability to process terabytes of data from high-speed storage devices and produce valuable output. For example, mainframe systems make it possible for banks and other financial institutions to complete end-of-quarter processing in an acceptable time frame (runtime) when such reporting is necessary to clients (such as quarterly stock statements or pension statements) or to the government (financial results). With mainframe systems, even retail stores can generate and consolidate nightly sales reports for review by regional sales managers. The applications (programs) that produce these statements are batch applications, and they are processed on the mainframe without user interaction. A batch job is submitted on the computer, which reads and processes data in bulk (perhaps terabytes of data). A batch job may be part of a group of batch jobs that need to process in sequence (one after another) to create a desired outcome. This outcome may be output such as client billing statements.
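The idea of a group of batch jobs running in sequence, each feeding the next, can be sketched in miniature. This is only an illustration; the job names, record layout, and amounts below are invented, not part of any real mainframe job stream:

```python
# Minimal sketch of a batch "job stream": each step runs to completion
# without user interaction, and the output of one step feeds the next.
# The record format (account,amount) is hypothetical.

def read_transactions(lines):
    """Job 1: read raw input records of the form 'account,amount'."""
    return [(acct, float(amt)) for acct, amt in
            (line.split(",") for line in lines)]

def summarize(transactions):
    """Job 2: consolidate transactions per account."""
    totals = {}
    for acct, amt in transactions:
        totals[acct] = totals.get(acct, 0.0) + amt
    return totals

def billing_statements(totals):
    """Job 3: produce the client billing statements (the desired outcome)."""
    return [f"Account {a}: balance {t:.2f}" for a, t in sorted(totals.items())]

raw = ["A1,100.0", "A2,50.0", "A1,-25.0"]
for statement in billing_statements(summarize(read_transactions(raw))):
    print(statement)
```

A real batch job would read millions of records from disk or tape rather than an in-memory list, but the shape of the pipeline is the same.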

Characteristics of Batch processes:

 A scheduled batch process can consist of the execution of hundreds or thousands of individual jobs (programs) in a pre-established sequence. Hence, during batch processing, multiple types of work can be generated. Consolidating information such as the profitability of investment funds, running scheduled database backups, processing daily orders, and updating inventories are common examples.
 Large amounts of input data are processed and stored (perhaps terabytes or more); that is, large numbers of records are accessed, and a large volume of output is produced.
 Immediate response time is usually NOT a requirement. However, batch jobs often must complete within a "batch window" (for example, within 10 hours, or by 5 p.m.) as prescribed by a service level agreement (SLA). During a batch run, online activity is typically minimal.
 On modern mainframes, the batch window is shrinking, and batch jobs are now often designed to run concurrently with online transactions with minimal resource contention.

While batch processing is possible on distributed systems, it is not as commonplace as it is on mainframes, because distributed systems often lack:

 Sufficient data storage
 Available processor capacity, or cycles
 Sysplex-wide management (logically merging more than one system) of system resources and job scheduling

When and where is batch processing needed?

It is used when there are many transactions/hits against a master file and the response needed is not immediate; it can usually wait until the end of the day, week, or even month. A good example is payroll processing, where nearly every master file record is affected. The data is collected over a period of time, then input and verified by clerks (verified means the data is input a second time by someone else, and both inputs are compared by the computer) and processed centrally. The output is usually printed media such as pay slips or invoices, although this is changing with the advent of the web.
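The key-verification step mentioned above, where the same batch of records is keyed twice and the computer flags any mismatch, amounts to a field-by-field comparison. A toy version (the record contents are invented for illustration):

```python
# Sketch of key verification: the same records are keyed by two clerks,
# and the computer reports any record where the two entries differ so
# it can be re-entered. The record contents below are hypothetical.

def verify(first_entry, second_entry):
    """Return the 1-based record numbers where the two keyings disagree."""
    return [i for i, (a, b) in enumerate(zip(first_entry, second_entry), 1)
            if a != b]

clerk1 = ["E001,40.0", "E002,38.5", "E003,41.0"]
clerk2 = ["E001,40.0", "E002,35.8", "E003,41.0"]  # record 2 was mis-keyed
print(verify(clerk1, clerk2))  # record 2 needs correction
```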

Online Processing:

It is often called Online Transaction Processing (OLTP). As we know, batch processing involves large numbers of very similar records. It was widely used in the days when IBM punched cards were the primary means of entering new information into computer systems. As computers became more advanced, it became possible for people to use them in real time, with individual terminals feeding into a central mainframe computer.

Online Transaction Processing allows a user to enter a transaction to a program that can return the result in real time. If you go to an ATM and query your balance, your request is sent to an online program that accepts your query and returns the result in real time.
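In code terms, the contrast with batch is that each request is handled individually and answered at once. A toy balance-inquiry handler in that spirit (the account data, function name, and message formats are invented for illustration):

```python
# Toy OLTP-style handler: a small amount of input, a few records read,
# and an immediate response. The account data here is hypothetical.

ACCOUNTS = {"12345": 250.75, "67890": 1200.00}

def balance_inquiry(account_id):
    """Handle a single online request and return the result immediately."""
    if account_id not in ACCOUNTS:
        return "UNKNOWN ACCOUNT"
    return f"BALANCE: {ACCOUNTS[account_id]:.2f}"

print(balance_inquiry("12345"))  # answered in real time, unlike a batch job
```

A real transaction monitor such as CICS would route thousands of such requests per second to application programs, but each request/response pair has this same small, immediate shape.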

An early example of online processing involved a central computer loaded with a BASIC interpreter connected to many stand-alone terminals. In the earliest days, before personal computers and intelligent workstations became popular, the most common way to communicate with online mainframe applications was through 3270 terminals. These devices were sometimes known as "dumb" terminals, but unlike the teletype machines that preceded them, they had enough intelligence to collect and display a full screen of data. The characters were green on a black screen, so the mainframe applications were nicknamed "green screen" applications. The central computer they were connected to was fast enough to give the individual terminal users the illusion that it was working on their particular programs alone.

Many installations are now reworking their existing mainframe applications to include web browser-based interfaces for users. Although there are technologies that allow for more user-friendly interfaces, most of these interactive systems still display their output in the format of the 3270 terminal. The 3270 terminals were physical visual display units that could only display text, in 80 columns and 24 rows; other models could display 80 columns and 43 rows. These are what some call "green screen terminals" or "dumb terminals." There are probably no physical 3270 terminals in use today; most 3270 sessions are emulated on desktops using emulation software. The way data is to be displayed on the terminal has to be mapped, and the mapping process depends on the interactive system used.
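Because the 3270 screen is a fixed grid of characters, the mapping step mentioned above essentially places named fields at fixed (row, column) positions on a 24x80 buffer. A simplified model of that idea (the field names and positions are invented; real screen maps are defined with facilities such as BMS, not like this):

```python
# Simplified model of a 3270-style screen: a fixed 24x80 character grid
# onto which fields are written at fixed positions, as a screen map
# would define them. Field names and positions here are hypothetical.

ROWS, COLS = 24, 80

def blank_screen():
    """A fresh 24-row by 80-column grid of spaces."""
    return [[" "] * COLS for _ in range(ROWS)]

def put_field(screen, row, col, text):
    """Place a field on the grid, clipped to the 80-column line width."""
    for i, ch in enumerate(text[:COLS - col]):
        screen[row][col + i] = ch

screen = blank_screen()
put_field(screen, 0, 0, "CUSTOMER INQUIRY")
put_field(screen, 2, 0, "NAME:")
put_field(screen, 2, 6, "SMITH, J")

print("".join(screen[0]).rstrip())
print("".join(screen[2]).rstrip())
```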

Typically, mainframes serve a vast number of transaction systems, as they can support an unpredictable number of concurrent users and transaction types. Most transactions are executed in short time periods (fractions of a second in some cases).

Online transactions usually have the following characteristics:

 A small amount of input data, a few stored records accessed and processed, and a small amount of data as output
 Immediate response time, usually less than one second
 Large numbers of users involved in large numbers of transactions
 Round-the-clock availability of the transactional interface to the user
 Assurance of security for transactions and user data

Online transactions are familiar to most people. Examples include:

 ATM transactions such as deposits, withdrawals, inquiries, and transfers
 Supermarket payments with debit or credit cards
 Purchase of any product over the Internet

Some industry uses of mainframe-based online systems include:

 Banks – ATMs, teller systems for customer service
 Insurance – Agent systems for policy management and claims processing
 Travel and transport – Airline reservation systems
 Manufacturing – Inventory control, production scheduling
 Government – Tax processing, license issuance and management

Today, large computer systems with networks of interconnected computers still use both batch processing (for example, when large databases are merged) and online processing (e-mail is a good example) together to fulfill their business needs.

Criticality of a Mainframe's Runtime

So without Mainframes this may happen…..

 Your airplane might not land safely (air traffic control)
 Your ATM would not give you money (banks all over the world)
 You could not buy something online (transaction processing)
 Trains could not run everywhere (virtual Linux servers)
 Hospitals could not access patient records (patient management)
 Your FedEx/UPS package would not ship (shipping and tracking of shipments)
 The Internet would not work

Introduction to IBM Mainframes

IBM Mainframes:

IBM mainframes are large computer systems produced by IBM from 1952 to the present. During the 1960s and 1970s, the term mainframe computer was almost synonymous with IBM products due to their market share. Current mainframes in IBM's line of business computers are developments of the basic design of the IBM System/360.

Introduction to IBM Mainframes – Hardware - Server Models

IBM System/360: All that changed with the announcement of the System/360 (S/360) in April, 1964. The System/360 was a single series of compatible models for both commercial and scientific use. The number "360" suggested a "360 degree," or "all-around" computer system. System/360 incorporated features which had previously been present on only either the commercial line (such as decimal arithmetic and byte addressing) or the technical line (such as floating point arithmetic). Some of the arithmetic units and addressing features were optional on some models of the System/360. However, models were upward compatible and most were also downward compatible. The System/360 was also the first computer in wide use to include dedicated hardware provisions for the use of operating systems. Among these were supervisor and application mode programs and instructions, as well as built-in memory protection facilities. Hardware memory protection was provided to protect the operating system from the user programs (tasks) and the user tasks from each other. The new machine also had a larger address space than the older mainframes, 24 bits vs. a typical 18 bits.

The smaller models in the System/360 line (e.g. the 360/30) were intended to replace the 1400 series while providing an easier upgrade path to the larger 360s. To smooth the transition from second generation to the new line, IBM used the 360's microprogramming capability to emulate the more popular older models. Thus 360/30s with this added cost feature could run 1401 programs and the larger 360/65s could run 7094 programs. To run old programs, the 360 had to be halted and restarted in emulation mode. Many customers kept using their old software and one of the features of the later System/370 was the ability to switch to emulation mode and back under operating system control. Operating systems for the System/360 family included OS/360 (with PCP, MFT, and MVT), BOS/360, TOS/360, and DOS/360.

The System/360 later evolved into the System/370, the System/390, and the 64-bit zSeries and System z machines. System/370 introduced capabilities in all models other than the very first System/370 models; the OS/VS1 variant of OS/360 MFT, the OS/VS2 (SVS) variant of OS/360 MVT, and the DOS/VS variant of DOS/360 were introduced to use the virtual memory capabilities, followed by MVS, which, unlike the earlier virtual-memory operating systems, ran separate programs in separate address spaces, rather than running all programs in a single virtual address space. The virtual memory capabilities also allowed the system to support virtual machines; the VM/370 hypervisor would run one or more virtual machines running either standard System/360 or System/370 operating systems or the single-user Conversational Monitor System (CMS). A time-sharing VM system could run multiple virtual machines, one per user, with each virtual machine running an instance of CMS.

OS/390

OS/390 is the IBM operating system most commonly installed on its S/390 line of mainframe servers. It is an evolved and renamed version of MVS (Multiple Virtual Storage), IBM's long-time, robust mainframe operating system. By whatever name, MVS has been said to be the operating system that keeps the world going. The payroll, accounts receivable, transaction processing, database management, and other programs critical to the world's largest businesses usually run on an MVS system. Although MVS tends to be associated with a monolithic, centrally controlled information system, IBM has in recent years repositioned it as a "large server" in a network-oriented distributed environment that would tend to use a 3-tier application model.

Since MVS represents a certain era and culture in the history of computing, and since many older MVS systems still operate, the term "MVS" will probably continue to be used for some time. Since OS/390 also comes with UNIX user and programming interfaces built in, it can be used as both an MVS system and a UNIX system at the same time. OS/390 (and earlier MVS) systems run older applications developed using COBOL (Common Business Oriented Language) and, for transaction programs, CICS (Customer Information Control System). Older application programs written in PL/I and FORTRAN (Formula Translation) are still running. Older applications use the Virtual Storage Access Method (VSAM) for file management and the Virtual Telecommunications Access Method (VTAM) for telecommunication with users. The most common programming environment today uses the C and C++ languages. DB2 is IBM's primary relational database management system. Java applications can be developed and run under OS/390's UNIX environment. For additional information about major components of OS/390, see MVS. Other IBM operating systems for their larger computers include or have included the Transaction Processing Facility (TPF), used in some major airline reservation systems, and VM, an operating system designed to serve many interactive users at the same time.

IBM System z

IBM System z, or earlier IBM eServer zSeries, is the brand name designated by IBM for all its mainframe computers. In 2000, IBM rebranded the existing System/390 as IBM eServer zSeries, with the "z" depicted in IBM's red trademarked symbol, but because no specific machine names were changed for System/390, in common use zSeries refers only to one generation of mainframes, starting with the z900. Since April 2006, with another generation of products, the official designation has changed to IBM System z, which now includes the older IBM eServer zSeries, the IBM System z9 models, the IBM System z10 models, and the newer IBM zEnterprise. Both the zSeries and System z brands are named for their availability: z stands for zero downtime. The systems are built with spare components capable of hot failover to ensure continuous operations. The zSeries line succeeded the System/390 line (S/390 for short), maintaining full backward compatibility. In effect, zSeries machines are the direct, lineal descendants of the System/360, announced in 1964, and the System/370 from the 1970s. Applications written for these systems can still run, unmodified, with only a few exceptions, on the newest System z over four decades later.

The z900 was a powerful machine (compared to its predecessors) that introduced IBM's newly designed z/Architecture into the 64-bit mainframe world. The new servers provided more than twice the performance of previous models. In its 64-bit mode the new CPU became free from the 31-bit addressing limits of its predecessors. In July 2005, IBM announced a new brand name, System z9, using it to announce the System z9-109 servers.

The System z9-109 Model S54, with up to 54 processing units (PUs), is reportedly capable of performing approximately 18,660,000,000 instructions per second. A single S54 can typically process one billion or more business transactions per day, double the throughput of its predecessor. The 54 PUs can be configured, or "characterized," for a variety of purposes, including general purpose processing (CPs), zAAPs, zIIPs, IFLs, and ICFs.

The IBM System z10 servers have many similarities to z9 servers but support more memory and can have up to 64 central processors (CPs) per frame. The full speed z10 processor's uniprocessor performance is up to 62% faster than that of the z9 server, according to IBM's z10 announcement.

The IBM zEnterprise System, or z196, introduced in July 2010, supports up to 80 central processors running at up to 5.2 GHz, and 3 TB of memory. The zEnterprise also supports System x or POWER7 blades attached as a zEnterprise BladeCenter Extension (zBX).

A direct comparison of System z servers with other computing platforms is difficult. For example, System z servers offload such functions as I/O processing, cryptography, memory control, and various service functions (such as hardware configuration management and error logging) to dedicated processors. These "extra" processors are in addition to the (up to) 80 main CPs per frame.

System z cores include extensive self-checking of results, and if an error is detected the server retries the instruction. If the instruction still fails, the server shuts down the failing processor and shifts its workload, "in flight," to a surviving spare processor. The IBM mainframe then "calls home" (automatically places a service call to IBM). An IBM service technician replaces the failed component with a replacement part (possibly even a new processor book, consisting of a group of processors). With System z9 servers, the technician installs the new book and removes the old one without interruption to running applications. (Note that IBM mainframe processors have a reported 40-year MTBF.)

Similar design redundancies exist in memory, I/O, power, cooling, and other subsystems. All these features exist at the hardware and microcode level, without special application programming. The same concepts can extend to coupled frames separated by up to 100 kilometers in a Geographically Dispersed Parallel Sysplex when z/OS is used. System z servers are used by IBM customers for business-critical installations in medium and large organizations that need very high availability, where scheduled and unscheduled downtime costs are high, and at traditional mainframe shops such as banks and insurance companies which already have mainframe applications at the center of their business processes.
For such organizations, for which system failures and service outages carry a very high price, System z machines may provide a lower total cost of ownership than other platforms, especially when running a variety of business-critical applications concurrently (a so-called mixed workload). Overall, mainframes like System z are mostly used in the government, financial services, retail, and manufacturing industries.

Today's (Latest) systems – zEnterprise System

IBM zEnterprise System is the latest line of IBM mainframes, introduced on July 22, 2010. It is a "system of systems" design that embraces the integration and management of multiple technology platforms (mainframe, UNIX, and x86) to dramatically improve the productivity of today's multi-architecture data centers.

It supports the z/OS®, Linux on System z®, z/VSE®, z/VM®, z/TPF, AIX®, Linux on IBM System x®, and now Microsoft Windows operating environments.

The IBM® zEnterprise™ System (zEnterprise) offers a revolutionary system design that addresses the complexity and inefficiency of today's multi-architecture data centers. The zEnterprise extends the strengths and capabilities of the mainframe (such as security, fault tolerance, efficiency, virtualization, and dynamic resource allocation) to other systems and workloads running on AIX® on POWER7®, Linux on System x, and now Microsoft Windows, fundamentally changing the way data centers can be managed.

For the first time it is possible to deploy an integrated hardware platform that brings mainframe and distributed technologies together: a system that can start to replace individual islands of computing and that can work to reduce complexity, improve security, and bring applications closer to the data that they need.

With the zEnterprise System a new concept in IT infrastructures is being introduced: zEnterprise ensembles. A zEnterprise ensemble is a collection of highly virtualized diverse systems that can be managed as a single logical entity where diverse workloads can be deployed. Ensembles, together with the virtualization, flexibility, security, and management capabilities provided by the zEnterprise System, are key to solving the problems posed by today's IT infrastructure.

A zEnterprise system consists of three components:

 CPC – can be either the zEnterprise 196 (z196) or the zEnterprise 114 (z114)
 zEnterprise BladeCenter Extension (zBX)
 zEnterprise Unified Resource Manager

The zEnterprise System includes a central processing complex (CPC), either the zEnterprise 196 (z196) for larger businesses or the zEnterprise 114 (z114) for smaller businesses; the IBM zEnterprise BladeCenter® Extension (zBX) with its integrated optimizers and/or select IBM blades; and the zEnterprise Unified Resource Manager.

zEnterprise 196 (z196)

At the core of the zEnterprise™ System is the next generation mainframe, the zEnterprise 196 (z196), the industry's fastest and most scalable enterprise system. To support a workload optimized system, the z196 can scale up (over 52,000 MIPS in a single footprint), scale out (80 configurable cores), and scale within (specialty engines, cryptographic processors, hypervisors), all in an environmentally friendly footprint. The z196 is also designed to work together with system software, middleware, and storage to be the most robust and cost effective data server. The z196 offers an industry standard PCIe I/O drawer for FICON and OSA-Express multimode and single mode fiber optic environments for increased capacity, infrastructure bandwidth, and reliability.

For the first time since 1995, the z196 is an IBM mainframe that allows for liquid cooling. Customers have the option of purchasing the mainframe with a water-cooled heat exchanger.

The z196 offers a total of 96 cores running at an astonishing 5.2 GHz, delivering up to 40 percent improvement in performance per core and up to 60 percent increase in total capacity for z/OS®, z/VM®, and Linux on System z® workloads compared to its predecessor, the z10™ EC. The z196 has up to 80 configurable cores for client use. The cores can be configured as general purpose processors (CPs), Integrated Facilities for Linux (IFLs), System z Application Assist Processors (zAAPs), System z Integrated Information Processors (zIIPs), additional System Assist Processors (SAPs), Internal Coupling Facilities (ICFs), or used as additional spares.

This design, with increased capacity and number of available processor cores per server, and reduced energy usage and floor space, makes the z196 a perfect fit for large-scale consolidation. Its virtualization capabilities can support up to 47 distributed servers on a single core, and up to thousands on a single system.

zEnterprise 114 (z114)

At the core of the zEnterprise System for both mid-sized and small enterprises is the next generation mainframe, the zEnterprise 114 (z114), offering new levels of freedom and a whole new world of capabilities for a broader set of businesses.

Its unique hybrid architecture enables integration and centralized management of mainframe, POWER7, and IBM System x technologies and workloads in a single unified system. It simplifies infrastructures and can consolidate an average of 30 distributed servers or more on a single core, or 300 in a single footprint, delivering a virtual Linux server for under $1.45 per day.

So it is a great choice for mid-sized and growing businesses for running mission-critical workloads, hosting private enterprise clouds, or serving as a backup or standalone development machine.

BladeCenter Extension (zBX)

The zEnterprise 196 adds new functionality over previous iterations of the IBM mainframe in the form of the zEnterprise BladeCenter Extension (zBX). This feature allows POWER7 and x86 IBM System x blade servers to be integrated with and managed from the z196 mainframe.

The zBX Model 002 is connected to a zEnterprise Central Processing Complex (CPC), either the zEnterprise 196 (z196) or the zEnterprise 114 (z114), through a secure high-performance private network.

In November 2011, IBM introduced new Microsoft Windows support in its zEnterprise System mainframes, which were already offered running z/OS, Linux on System z, IBM AIX, and x86 Linux environments. The new Windows support comes via x86 processor-based blades that plug into IBM's zEnterprise BladeCenter Extension (zBX). The move is in line with IBM's strategy of pushing towards a more heterogeneous ecosystem, where customers can effectively mix and match in order to best suit their needs with a more centralized system, rather than the distributed systems that their competitors in the server space offer.

Unified Resource Manager:

Unified Resource Manager runs in the Hardware Management Console (HMC). It provides integrated management across all elements of the zEnterprise System, bringing mainframe and distributed technologies together into one platform that can reduce complexity, improve security, and bring applications closer to the data they need.

Mainframe Server Models (chronological order)

The older S/390 IBM mainframe servers are considered history since support for the last S/390 compatible version of z/OS (1.5) was dropped on March 31, 2007.

Since the introduction of IBM System z, IBM has offered two entries for each server model:

 Business Class (BC)
 Enterprise Class (EC)

The Business Class (BC) could be said to be intended for small to midrange enterprise computing, and delivers an entry point with granular scalability and a wide range of capacity settings to grow with the workload.

The BC shares many of the characteristics and processing traits of its larger sibling, the Enterprise Class (EC). The EC provides granular scalability and capacity settings on a much larger scale and is intended to satisfy high-end processing requirements. As a result, the EC has a larger frame to house the extensive capacity that supports greater processing requirements.

zSeries mainframes:

- z900 (2064 series), for larger customers (2000)
- z800 (2066 series), entry-level, less powerful variant of the z900 (2002)
- z990 (2084 series), successor to larger z900 models (2003)
- z890 (2086 series), successor to the z800 and smaller z900 models (2004)

System z9 mainframes:

- z9 Enterprise Class (2094 series), introduced in 2005 initially as the z9-109, beginning the new System z9 line
- z9 Business Class (2096 series), successor to the z890 and smallest z990 models (2006)

System z10 mainframe:

- z10 Enterprise Class (2097 series), introduced on February 26, 2008
- z10 Business Class (2098 series), introduced on October 21, 2008

zEnterprise System mainframe:

- zEnterprise 196 (2817 series), introduced on July 22, 2010
- zEnterprise 114 (2818 series), introduced on July 6, 2011

Each mainframe has a number of submodels with different specifications. For example, the z196, billed as the industry's fastest, most scalable and flexible enterprise server, is available as the M15, M32, M49, M66 and M80, each differing in its specifications. The z196 is the Enterprise Class machine for larger businesses, while the z114 is the Business Class machine for smaller organizations or businesses.

The z196 offers a total of 96 cores running at an astonishing 5.2 GHz, delivering up to 40 percent improvement in performance per core and up to 60 percent increase in total capacity for z/OS®, z/VM®, and Linux on System z® workloads compared to its predecessor, the z10™ EC. The z196 has up to 80 configurable cores for client use. The cores can be configured as general purpose processors (CPs), Integrated Facilities for Linux (IFLs), System z Application Assist Processors (zAAPs), System z Integrated Information Processors (zIIPs), additional System Assist Processors (SAPs), Internal Coupling Facilities (ICFs), or used as additional spares.

This design, with increased capacity and number of available processor cores per server, and reduced energy usage and floor space, makes the z196 a perfect fit for large-scale consolidation. The virtualization capabilities can support up to 47 distributed servers on a single core, and up to thousands on a single system.

Enterprise Class

The zEnterprise 196 is available in five hardware models: M15, M32, M49, M66 and M80. The model number is based on the number of cores available for the customer's workload (additional cores are usually installed and used for redundancy and other purposes).

Business Class

The zEnterprise 114 is available in two models, the M05 and the M10. Introduced in July 2011, this system is designed to extend the benefits of the zEnterprise 196 to the mid-range business segment. Like the z196, the z114 is fully compatible with the zBX and the Unified Resource Manager (URM).

Intro to IBM Mainframes Hardware

CPC (Central Processing Complex)

A CPC is an IBM mainframe configuration in which two or more central processors (CPs) share memory. It is the collection of processors, memory and I/O subsystems manufactured with a single serial number, typically all contained in one cabinet. The zSeries CPC contains from 5 to 20 processors.

A CPC that can be physically partitioned to form two operating processor complexes is known as a multiprocessor.

Channel Subsystem

The heart of moving data into and out of a mainframe host is the channel subsystem, or CSS. The CSS is, from a central processor standpoint, independent of the processors of the mainframe host itself. This means that input/output (I/O) within a mainframe host can be done asynchronously.

When an I/O operation is required, the CSS is passed the request from the main processor. While awaiting completion of an I/O request, the main processor is able to continue processing other work. This is a critical requirement in a system designed to handle massive numbers of concurrent transactions.

All LPARs within the central processor complex can make use of the channel subsystem. The asynchronous I/O is handled within the channel subsystem by a channel program. Each LPAR ultimately communicates using a subchannel. In addition, the channel subsystem can be used to communicate between LPARs. Each CPC has a channel subsystem. Its role is to control communication of internal and external channels to control units and devices.
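As a rough illustration of this asynchrony (not real z/Architecture code; the names, data, and timings below are invented), the CPU hands an I/O request to an independent engine and keeps computing while completion is signaled later, much like an I/O interrupt:

```python
# Minimal sketch: the "channel program" runs independently of the main
# processor, which continues other work until completion is signaled.
import threading
import time

def channel_program(request, done):
    time.sleep(0.05)              # stands in for a slow device operation
    request["data"] = "record-42" # data arrives in main storage
    done.set()                    # "I/O interrupt": the transfer finished

request, done = {}, threading.Event()
threading.Thread(target=channel_program, args=(request, done)).start()

other_work = 0
while not done.is_set():          # the processor keeps doing other work
    other_work += 1

print(request["data"])
```

The point of the sketch is only the overlap: the main loop keeps running while the simulated channel completes the transfer.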

Storage - I/O Peripherals and Control Devices:

Mainframe hardware consists of processors and a multitude of peripheral devices such as disk drives (called direct access storage devices or DASD), magnetic tape drives, and various types of user consoles. Tape and DASD are used for system functions and by user programs executed by z/OS.

A single System z mainframe can have up to 1024 individual channels for input and output (I/O) connectivity. This capacity is one factor that contributes to the mainframe's legendary scalability.

Although less complex than a real system with more channels and I/O devices, the following points illustrate key concepts related to I/O configurations and capacity:

- ESCON and FICON channels connect to only one device or one port on a switch.
- Most modern mainframes use switches between the channels and the control units. The switches may be connected to several systems, sharing the control units and some or all of their I/O devices across all the systems.
- CHPID addresses are two hexadecimal digits.
- Multiple partitions can sometimes share CHPIDs. Whether this is possible depends on the nature of the control units used through the CHPIDs. In general, CHPIDs used for disks can be shared.
- An I/O subsystem layer exists between the operating systems in partitions (or in the basic machine if partitions are not used) and the CHPIDs.

Each device is identified by a device number. Many users still refer to these device numbers as "addresses", although they are arbitrary numbers between x'0000' and x'FFFF'. Today's mainframes have two layers of I/O address translation between the real I/O elements and the operating system software. The second layer was added to make migration to newer systems easier.
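The ranges just described can be checked with a couple of toy helpers (illustrative only, not an IBM API): CHPIDs are two hexadecimal digits (00 to FF), and device numbers run from x'0000' to x'FFFF'.

```python
# Toy validators for the identifier ranges described in the text.
def is_valid_chpid(s: str) -> bool:
    # CHPID addresses are exactly two hexadecimal digits.
    try:
        return len(s) == 2 and 0x00 <= int(s, 16) <= 0xFF
    except ValueError:
        return False

def is_valid_devnum(s: str) -> bool:
    # Device numbers are four hex digits, x'0000' through x'FFFF'.
    try:
        return len(s) == 4 and 0x0000 <= int(s, 16) <= 0xFFFF
    except ValueError:
        return False

print(is_valid_chpid("1A"), is_valid_devnum("0A80"), is_valid_chpid("1A3"))
```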

Modern control units, especially for disks, often have multiple channel (or switch) connections and multiple connections to their devices. They can handle multiple data transfers at the same time on the multiple channels. Each device will have a unit control block (UCB) in each z/OS image.

DASD:

A DASD, or Direct Access Storage Device, is a storage device on which each record has a distinct location that can be accessed directly, rather than sequentially. Because many types of storage device offer direct access, the term "DASD" may apply to a variety of different devices; however, a DASD is typically a storage device with a significant amount of capacity and a relatively low access time. Hard disk drives are the classic example, although flash drives and SD cards can be considered DASDs as well.

Magnetic Tape

Magnetic tape has long been a dominant storage medium in the data processing environment and historically held most of the data stored there. Magnetic tape is made by bonding a layer of magnetic material onto a plastic tape. A spot on the tape magnetized one way represents a one; magnetized in the other direction, it represents a zero. Groups of these spots are combined to represent a byte, and groups of bytes form a physical block on the tape.
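The spot-to-byte grouping described above can be sketched as follows (the bit patterns here are invented purely for illustration):

```python
# Magnetized "spots" (bits) grouped into bytes; bytes form a block.
spots = [0,1,0,0,0,0,0,1,   # 8 spots = one byte
         0,1,0,0,0,0,1,0]   # 8 more spots = a second byte

def spots_to_bytes(bits):
    # Pack each run of 8 bits into one byte, most significant bit first.
    assert len(bits) % 8 == 0
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))

block = spots_to_bytes(spots)
print(block)
```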

The performance of the tape subsystem is measured by the throughput of the subsystem in megabytes or gigabytes per second. The throughput depends on certain architectural factors as well as certain factors decided by the application programmer.

The number of active tape drives and channel paths: The number of channel paths limits the number of drives transferring data.

Buffering in the control unit (tape cache): If the control unit does not have to wait for the tape to complete an action, the throughput of the subsystem will be better.

The speed of the tape over the read/write heads: The faster the tape moves, the faster data can be transferred.

Channel speed: The speed of the channel imposes an upper limit on the data transfer rate. If the channel operates at three megabytes per second, then the tape drive can transfer data at any speed up to and including three megabytes per second.

Channel utilization: Selector-mode channels can run at up to 100 percent busy without degrading the channel itself, but the total transfer time for all the blocks from the tape may grow because each I/O must wait for the channel to become free.
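Putting the channel-speed factor into numbers (all figures below are invented for illustration, not the specifications of any real tape subsystem):

```python
# The channel imposes an upper limit on the transfer rate, so the
# effective rate is the minimum of the drive speed and the channel speed.
def effective_rate(tape_mb_s: float, channel_mb_s: float) -> float:
    return min(tape_mb_s, channel_mb_s)

def transfer_seconds(megabytes: float, tape_mb_s: float,
                     channel_mb_s: float) -> float:
    # Idealized: ignores interblock gaps, rewinds, and channel contention.
    return megabytes / effective_rate(tape_mb_s, channel_mb_s)

# A 5 MB/s drive behind a 3 MB/s channel moves data at only 3 MB/s:
print(effective_rate(5.0, 3.0))
print(transfer_seconds(300, 5.0, 3.0))
```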

LPAR

A logical partition (LPAR) is defined as an image or server. It represents an operating system instance, such as z/OS, z/VM, or Linux. You can run several different operating systems within a single mainframe by partitioning its resources into isolated servers.

This allows a server to be divided into multiple logical partitions. This capability was designed to help isolate workloads in different z/OS images, so you can run production work separately from test work, or even consolidate multiple servers into a single server. An LPAR has the following properties:

- Each LP is a set of physical resources (CPU, storage, and channels) controlled by just one independent image of an operating system, such as z/OS, Linux, CFCC, z/VM, or VSE. You can have up to 60 LPs in a server.
- Each LP is defined through IOCP/HCD. For example, the IOCP RESOURCE PARTITION = ((LP1,1),(LP2,2)) statement defines two LPs. A power-on reset (POR) operation is required to add or remove LPs.
- LP options, such as the number of logical CPs, the LP weight, whether LPAR capping is to be used for this LP, the LP storage size (and its division between central storage and expanded storage), security, and other LP characteristics, are defined in the Activation Profiles on the HMC.
- Individual physical CPs can be shared between multiple LPs, or they can be dedicated for use by a single LP.
- Channels can be dedicated, reconfigurable (dedicated to one LP, but able to be switched manually between LPs), or shared (if ESCON or FICON).
- The server storage used by an LP is dedicated, but can be reconfigured from one LP to another with prior planning.
- Although it is not strictly accurate, most people use the terms LPAR and PR/SM interchangeably. Similarly, many people use the term LPAR when referring to an individual LP. However, the term LP is technically more accurate.

Introduction to IBM Mainframes – Operating System

What is an operating system?

An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into the computer by a boot program, manages all the other programs in a computer. The other programs are called applications or application programs. The application programs make use of the operating system by making requests for services through a defined application program interface (API). In addition, users can interact directly with the operating system through a user interface such as a command language or a graphical user interface (GUI).

An operating system performs these services for applications:

- In a multitasking operating system, where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.
- It manages the sharing of internal memory among multiple applications.
- It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.
- It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.
- It can offload the management of batch jobs (for example, printing) so that the initiating application is freed from this work.
- On computers that can provide parallel processing, an operating system can manage how to divide a program so that it runs on more than one processor at a time.

All major computer platforms (hardware and software) require and sometimes include an operating system. Linux, Windows 2000, VMS, OS/400, AIX, and z/OS are all examples of operating systems.

MVS (Multiple Virtual Storage)

MVS is an evolutionary system that grew out of the batch processing model: it takes a job from the job queue and executes it. MVS is one of the most complex pieces of software ever written. Performance and ease of use are the two factors that have most influenced its evolution.

MVS is an operating system from IBM that continues to run on many of IBM’s mainframe and large server computers. MVS has been said to be the operating system that keeps the world going and the same could be said of its successor systems, OS/390 and z/OS. The payroll, accounts receivable, transaction processing, database management, and other programs critical to the world's largest businesses are usually run on an MVS or successor system. Although MVS has often been seen as a monolithic, centrally-controlled information system, IBM has in recent years repositioned it (and successor systems) as a "large server" in a network-oriented distributed environment, using a 3-tier application model.

The Virtual Storage in MVS refers to the use of virtual memory in the operating system. Virtual storage, or virtual memory, allows a program to address the maximum amount of memory in the system even though that memory is actually shared among more than one application program. The operating system translates the program's virtual address into the real physical memory address where the data is actually located. The Multiple in MVS indicates that a separate virtual memory is maintained for each of multiple task partitions.

z/OS

z/OS is a 64-bit operating system for mainframe computers, produced by IBM. It derives from, and is the successor to, OS/390, which in turn followed a string of MVS versions. Like OS/390, z/OS combines a number of formerly separate, related products, some of which are still optional. z/OS offers the attributes of modern operating systems but also retains much of the functionality that originated in the 1960s and each subsequent decade and is still in daily use (backward compatibility is one of z/OS's central design philosophies). z/OS was first introduced in October 2000 as the operating system for IBM's zSeries 900 (z900) line of large (mainframe) servers.

z/OS is a renamed and upgraded version of OS/390, which in turn evolved from the MVS operating system. IBM's renamed servers and operating systems reflect a strategy to realign its products more closely with the Internet and its own e-business initiatives. z/OS is described as an extremely scalable and secure high-performance operating system based on the 64-bit z/Architecture. Like its predecessor, OS/390, z/OS lays claim to being highly reliable for running mission-critical applications, and it supports Web- and Java-based applications.
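The virtual-to-real address translation described in the virtual storage discussion above can be sketched as a simple page-table lookup (the page size and table contents here are invented; real z/Architecture translation uses multiple table levels):

```python
# Toy model: the OS maps a virtual page number to a real frame number
# and keeps the offset within the page unchanged.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}    # virtual page -> real frame (invented)

def translate(vaddr: int) -> int:
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]       # a missing entry would be a "page fault"
    return frame * PAGE_SIZE + offset

# Virtual page 1, offset 0x10 lands in real frame 3:
print(hex(translate(0x1010)))
```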

Characteristics of Mainframe Operating Systems:

Multiprogramming and Multiprocessing:

z/OS is capable of multiprogramming, or executing many programs concurrently, and of multiprocessing, which is the simultaneous operation of two or more processors that share the various hardware resources.

The earliest operating systems were used to control single-user computer systems. In those days, the operating system would read in one job, find the data and devices the job needed, let the job run to completion, and then read in another job. In contrast, the computer systems that z/OS manages are capable of multiprogramming, or executing many programs concurrently. With multiprogramming, when a job cannot use the processor, the system can suspend, or interrupt, the job, freeing the processor to work on another job.

z/OS makes multiprogramming possible by capturing and saving all the relevant information about the interrupted program before allowing another program to execute. When the interrupted program is ready to begin executing again, it can resume execution just where it left off. Multiprogramming allows z/OS to run thousands of programs simultaneously for users who might be working on different projects at different physical locations around the world.

z/OS can also perform multiprocessing, which is the simultaneous operation of two or more processors that share the various hardware resources, such as memory and external disk storage devices.
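A toy sketch of this suspend-and-resume behavior follows (Python generators stand in for the saved program state; job names and step counts are invented):

```python
# Each "job" yields whenever it would wait for I/O; the dispatcher saves
# its state (implicitly, in the generator) and gives the CPU to another
# job, resuming each one exactly where it left off.
def job(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"      # suspension point ("I/O wait")

def dispatcher(jobs):
    trace = []
    while jobs:
        j = jobs.pop(0)
        try:
            trace.append(next(j))     # resume right where it left off
            jobs.append(j)            # back of the ready queue
        except StopIteration:
            pass                      # job finished
    return trace

trace = dispatcher([job("A", 2), job("B", 2)])
print(trace)
```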

The techniques of multiprogramming and multiprocessing make z/OS ideally suited for processing workloads that require many input/output (I/O) operations. Typical mainframe workloads include long-running applications that write updates to millions of records in a database, and online applications for thousands of interactive users at any given time. By way of contrast, consider the operating system that might be used for a single-user computer system. Such an operating system would need to execute programs on behalf of one user only. In the case of a personal computer (PC), for example, the entire resources of the machine are often at the disposal of one user.

Many users running many separate programs means that, along with large amounts of complex hardware, z/OS needs large amounts of memory to ensure suitable system performance. Large companies run sophisticated business applications that access large databases and industry-strength middleware products. Such applications require the operating system to protect privacy among users, as well as enable the sharing of databases and software services.

Thus, multiprogramming, multiprocessing, and the need for a large amount of memory mean that z/OS must provide function beyond simple, single-user applications. The related concepts listed below explain the attributes that enable z/OS to manage complex computer configurations.

Time Sharing:

In a batch system, the user cannot interact with a job while it is executing. This means that all possible problems must be anticipated beforehand, as the user cannot make corrections during execution. This becomes very difficult when a program has to go through many phases, such as compilation and linking; it may be hard to define what to do if a particular phase fails. Another problem is debugging: all debugging is static, and the only information available for finding out why the program gives incorrect output comes from the various stages of execution.

Time sharing was introduced to make computer systems more interactive. The CPU is the most important resource that is shared: each job gets the CPU for a small amount of time, and when a job's allotted time period is used up, the next job in line is allocated the CPU.

Switching between jobs occurs very frequently, which allows the user to interact with a job as it runs. The operating system should enable the user to interact with executing jobs. Communication usually occurs via the keyboard: the user gets a prompt to enter commands and must know the status of the job in order to enter relevant ones. The output of a job is usually presented on a monitor.

Generally, the commands given by the user take very little time to execute. Control returns to the command line after a command finishes, and a prompt indicates that the system is ready to execute another command. DOS and Unix are examples of such systems.
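A minimal sketch of the CPU time slicing described above (the quantum and job lengths are invented):

```python
# Round-robin time sharing: each job runs for one quantum, then goes to
# the back of the queue until its work is done.
from collections import deque

def time_share(jobs, quantum):
    # jobs: {name: remaining work units}; returns the completion order.
    queue, finished = deque(jobs.items()), []
    while queue:
        name, left = queue.popleft()
        left -= min(quantum, left)     # the job runs for one time slice
        if left == 0:
            finished.append(name)      # done; the next job gets the CPU
        else:
            queue.append((name, left)) # not done; rejoin the queue
    return finished

print(time_share({"edit": 2, "compile": 5, "print": 1}, quantum=2))
```

Short interactive commands (like "edit" and "print" here) finish within a slice or two, which is what makes the system feel responsive to each user.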

Batch processing

Work is processed in units called jobs. A job may cause one or more programs to execute in sequence. One of the problems that arise when batch processing is used is managing how work flows through the system. To manage this in the multi-user system, the Job Entry Subsystem (JES) processes each user's job in an orderly fashion.

Spooling (Simultaneous Peripheral Operations Online):

Spooling is the overlapping of low-speed operations with normal processing. It originated on mainframes to optimize slow operations such as reading cards and printing: card input was read onto disk and printer output was stored on disk, so that business data processing could run at high speed, receiving input from disk and sending output to disk. Spooling was subsequently used to buffer data for printers as well as remote batch terminals. The spooling of documents for printing and of batch job requests still goes on in mainframe computers, where many users share a pool of resources. On personal computers, your print jobs (for example, a Web page you want to print) are spooled to an output file on hard disk if your printer is already printing another file.

HISTORY AND EVOLUTION OF THE MAINFRAME OPERATING SYSTEM

What is an Operating System? In simplest terms, an operating system is a collection of programs that manage a computer system's internal workings: its memory, processors, devices, and file system. Mainframe operating systems are sophisticated products with substantially different characteristics and purposes. Operating systems are designed to make the best use of the computer's various resources and to ensure that the maximum amount of work is processed as efficiently as possible. Although an operating system cannot increase the speed of a computer, it can maximize the use of its resources, thereby making the computer seem faster by allowing it to do more work in a given period of time.

z/OS®, a widely used mainframe operating system, is designed to offer a stable, secure, and continuously available environment for applications running on the mainframe. z/OS today is the result of decades of technological advancement. It evolved from an operating system that could process a single program at a time into one that can handle many thousands of programs and interactive users concurrently. To understand how and why z/OS functions as it does, it is important to understand some basic concepts about z/OS and the environment in which it functions.

In most early operating systems, requests for work entered the system one at a time. The operating system processed each request or job as a unit, and did not start the next job until the one being processed had completed. This arrangement worked well when a job could execute continuously from start to completion. But often a job had to wait for information to be read in from, or written out to, a device such as a tape drive or printer. Input and output (I/O) take a long time compared to the electronic speed of the processor. When a job waited for I/O, the processor was idle.

Finding a way to keep the processor working while a job waited would increase the total amount of work the processor could do without requiring additional hardware. z/OS gets work done by dividing it into pieces and giving portions of the job to various system components and subsystems that function interdependently. At any point in time, one component or another gets control of the processor, makes its contribution, and then passes control along to a user program or another component.

Let's walk through the evolution of the mainframe operating system.

Evolution of the Mainframe Operating System

IBM was slow to introduce operating systems: General Motors produced General Motors OS in 1955 and GM-NAA I/O in 1956 for use on its own IBM computers; in the early 1960s Burroughs released MCP and General Electric introduced GECOS, in both cases for use by their customers. In fact, the first operating systems for IBM computers were written by IBM customers who did not wish to have their very expensive machines ($2M in the mid-1950s) sitting idle while operators set up jobs manually, and so wanted a mechanism for maintaining a queue of jobs. The operating systems of that era were not compatible across lower-end and higher-end systems, which increased production costs for both hardware and software and hurt sales, as customers had no easy option for upgrading their systems. So in 1964 the company announced System/360, a new range of computers which all used the same peripherals and most of which could run the same programs.

System/360 – OS/360

IBM originally intended that System/360 should have only one batch-oriented operating system, OS/360. OS/360 had a couple of drawbacks, chief among them that it did not fit into the limited memory available on the smaller System/360 models, so IBM decided to produce a simpler batch-oriented operating system, DOS/360, which prevented hardware sales from dropping. Other operating systems of this era were BOS/360 (Basic Operating System, for the smallest machines) and TOS/360 (Tape Operating System, for machines with only tape drives).

Features of OS/360

MFT – Multiprogramming with a Fixed number of Tasks

MFT provides the overall capability of multiprogramming with a fixed number of tasks. This means that as many as four jobs may be run concurrently within a single computing system having only one central processor.

- MFT allows the user to specify at system generation the number and size of partitions, and the scheduler to be used.
- At nucleus initialization, the number of partitions may be reduced and their sizes changed without repeating the system generation process.
- Other features include the ability to specify the inclusion of main storage protection, and the ability to specify separately the desired number of write and reply buffers for console communication.
- In addition, serially reusable system resources may be enqueued upon to assure non-conflicting, sequential availability to all jobs operating at the same time, and to assure the integrity of direct access device space management.

MVT – Multiprogramming with a Variable number of Tasks

- MVT treated all memory not used by the operating system as a single pool from which contiguous "regions" could be allocated as required by an indefinite number of simultaneous application programs.
- This scheme was more flexible than MFT's and in principle used memory more efficiently, but was liable to fragmentation: after a while one could find that, although there was enough spare memory in total to run a program, it was divided into separate chunks, none of which was large enough.
- To eradicate the problem of memory shortage, IBM came up with the concept of virtual storage, which allowed programs to request address spaces larger than physical memory.
- The original implementations had a single virtual address space shared by all jobs, so OS/VS1 and SVS in principle had the same disadvantages as MFT and MVT, but the impacts were less severe because jobs could request much larger address spaces and the requests came out of a 16 MB pool even if physical storage was smaller.
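The fragmentation problem described above can be sketched with a toy first-fit allocator (the region sizes are invented): the total free memory is sufficient, but no single contiguous hole is large enough.

```python
# Toy first-fit allocation of contiguous "regions" from free holes.
def first_fit(holes, request):
    # holes: list of free region sizes; returns the index of the hole
    # used, or None if no single hole is large enough.
    for i, size in enumerate(holes):
        if size >= request:
            holes[i] -= request
            return i
    return None

holes = [300, 200]            # 500 units free in total, split in two
print(first_fit(holes, 250))  # fits in the 300-unit hole
print(first_fit(holes, 250))  # fails: no single hole holds 250 any more
```

After the first allocation the free space is 50 + 200 = 250 units, enough in total for the second request, yet the allocation fails because the space is not contiguous. This is exactly the situation virtual storage was introduced to relieve.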

OS/370 – MVS

- In the mid-1970s IBM introduced MVS, which not only supported virtual storage larger than the available real storage, as did SVS, but also allowed an indefinite number of applications to run in different address spaces.
- Two concurrent programs might try to access the same virtual memory address, but the virtual memory system redirected these requests to different areas of physical memory. Each of these address spaces consisted of three areas: an operating system (one instance shared by all jobs), an application area unique to each application, and a shared virtual area used for various purposes, including inter-job communication.
- IBM promised that application areas would always be at least 8 MB. This made MVS a good solution for business problems that resulted from the need to run more applications.
- MVS maximized processing potential by providing multiprogramming and multiprocessing capabilities. Like its MVT and OS/VS2 SVS predecessors, MVS supported multiprogramming: program instructions and associated data are scheduled by a control program and given processing cycles.
- Unlike a single-programming operating system, these systems maximize use of the processing potential by dividing processing cycles among the instructions associated with several different concurrently running programs. This way, the control program does not have to wait for an I/O operation to complete before proceeding, and by executing the instructions for multiple programs, the computer is able to switch back and forth between active and inactive programs.

MVS/370 is a generic term for all versions of the MVS operating system prior to MVS/XA. The System/370 architecture, at the time MVS was released, supported only 24-bit virtual addresses, so the MVS/370 operating system architecture is based on a 24-bit address. Because of this 24-bit address length, programs running under MVS/370 are each given 16 megabytes of contiguous virtual storage.
MVS/XA, or Multiple Virtual Storage/Extended Architecture, was a version of MVS that supported the 370-XA architecture, which expanded addresses from 24 bits to 31 bits, providing a 2 gigabyte addressable memory area. It also supported a 24-bit legacy addressing mode for older 24-bit applications (i.e. those that stored a 24-bit address in the lower 24 bits of a 32-bit word and utilized the upper 8 bits of that word for other purposes).
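The legacy 24-bit convention just described can be illustrated at the bit level (the flag and address values below are invented): a 24-bit address packed into the low 24 bits of a 32-bit word, with the high byte reused for other purposes.

```python
# Pack and unpack a 24-bit address plus an 8-bit "flags" byte in one
# 32-bit word, as older 24-bit applications did.
ADDR_MASK = 0x00FFFFFF   # low 24 bits hold the address

def pack(flags: int, addr: int) -> int:
    assert 0 <= addr <= ADDR_MASK and 0 <= flags <= 0xFF
    return (flags << 24) | addr

def unpack(word: int):
    return word >> 24, word & ADDR_MASK   # (flags, address)

word = pack(0x80, 0x123456)
print(hex(word))     # flags in the top byte, address in the bottom 24 bits
print(unpack(word))
```

This reuse of the top byte is precisely why expanding addresses to 31 bits required a separate compatibility mode: the "spare" bits were not spare in existing programs.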

OS/390

OS/390 was introduced in late 1995 in an effort, led by the late Randy Stelman, to simplify the packaging and ordering for the key, entitled elements needed to complete a fully functional MVS operating system package. These elements included, but were not limited to:

- Data Facility Product (DFP) (provides access methods to enable I/O to DASD subsystems and tape)
- Job Entry Subsystem (JES) (provides the ability to submit batch work and manage print)
- Communications Server (provides VTAM and TCP/IP communications protocols)

An additional benefit of the OS/390 packaging concept was improved Reliability, Availability and Serviceability (RAS) for the operating system, as the number of different combinations of elements that a customer could order and run was drastically reduced. This reduced the overall time required for customers to test and deploy the operating system in their environments.

z/OS

In December 2001 IBM extended OS/390 to include support for 64-bit zSeries processors and added various other improvements; the result is now named z/OS. z/OS derives from and is the successor to OS/390, which in turn followed a string of MVS versions; it was first introduced in October 2000.

Features of z/OS

z/OS supports stable mainframe systems and standards such as:

- CICS, IMS, DB2, RACF, SNA, WebSphere MQ, record-oriented data access methods, CLIST, SMP/E, JCL, TSO/E, and ISPF, among others.
- z/OS also supports 64-bit Java, C/C++, and UNIX (Single UNIX Specification) APIs and applications through UNIX System Services.
- z/OS can communicate directly via TCP/IP, including IPv6, and includes standard HTTP servers.
- Another central design philosophy is support for extremely high quality of service (QoS), even within a single operating system instance, in addition to z/OS's built-in support for Parallel Sysplex clustering.
- z/OS has a unique Workload Manager (WLM) and dispatcher which automatically manages numerous concurrently hosted units of work running in separate key-protected address spaces according to dynamically adjustable business goals. This capability inherently supports multi-tenancy within a single operating system image.
- IBM mainframes also offer two additional levels of virtualization: LPARs and (optionally) z/VM.

In addition to z/OS, four other operating systems dominate mainframe usage: z/VM, z/VSE™, Linux for System z®, and z/TPF.

Mainframe Subsystems and Facilities:

TSO and ISPF:

TSO (Time Sharing Option) is an MVS component that lets terminal users access MVS facilities. TSO does this by treating each terminal user as a job: when you log on to TSO, TSO creates a JCL stream and submits it to JES2/JES3 for processing. Each TSO user is given a unique address space and can allocate data sets and invoke programs just as a batch job can. The TSO commands reside in SYS1.CMDLIB and are loaded into the private area of the TSO user's address space as required. The Terminal Monitor Program (TMP) accepts and interprets commands and causes the appropriate command processor to be scheduled and executed.

The TSO environment has both system and TSO user address spaces, and both system and TSO user data sets. The system address spaces are the Master Scheduler, JES, BATCH, VTAM and TSO; TSO user address spaces are created as users log on.

TSO/E (Time Sharing Option/Extended) is a subsystem that lets terminal users invoke system facilities interactively. Each TSO/E user is given a unique address space and can allocate data sets and invoke programs just as a batch job can. TSO/E is an OS/390 component that lets terminal users interact with the OS/390 system. ISPF, which runs under the control of TSO/E, provides a powerful and comprehensive program development environment that includes a full-screen editor and facilities to manage background job processing.

ISPF (Interactive System Productivity Facility) runs as part of TSO/E and takes advantage of the full-screen capabilities of display terminals. The figure below shows the ISPF menu structure.

Such tools include

 Browse - for viewing data sets, Partitioned Data Set (PDS) members, and Unix System Services files.
 Edit - for editing data sets, PDS members, and Unix System Services files.
 Utilities - for performing data manipulation operations, such as:
   Data Set List - which allows the user to list and manipulate (copy, move, rename, print, catalog, delete, etc.) files (termed "data sets" in the z/OS environment).
   Member List - for similar manipulations of members of PDSs.
 Search facilities for finding modules or text within members or data sets.
 Compare facilities for comparing members or data sets.
 Library Management, including promoting and demoting program modules.

Storage Management Subsystem (SMS)

The Storage Management Subsystem, commonly referred to as SMS, is an IBM product that became available with MVS/ESA. The purpose of SMS is to provide the user with system-managed data: to simplify the user interface for allocating and maintaining data, to provide the user with a logical view of data separated from the actual physical device characteristics, and to provide centralized control of disk storage.

Placing a data set under SMS control has certain benefits. Disk allocation for new data sets is directed to the most suitable group of disk volumes, under centralized disk-storage administration control; the user does not determine which volumes will be used, SMS does. The management of a data set after it is created (archiving, retention and so on) is also under centralized administration control, and the JCL is simplified.
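As a sketch of that JCL simplification (all data set, class, and volume names here are hypothetical, and the exact parameters an installation requires vary), an SMS-managed allocation can omit volume information, while a non-managed allocation must supply it:

```jcl
//* SMS-managed: the storage class, not the user, determines placement
//NEWDS1   DD DSN=PROD.SALES.DATA,DISP=(NEW,CATLG),
//            STORCLAS=STANDARD,SPACE=(TRK,(10,5))
//* Non-managed: the user must name the unit and volume explicitly
//NEWDS2   DD DSN=PROD.SALES.OLD,DISP=(NEW,CATLG),
//            UNIT=SYSDA,VOL=SER=VOL001,SPACE=(TRK,(10,5))
```

With SMS, the storage administrator's automatic class selection routines can even assign the storage class, so the STORCLAS keyword itself is often unnecessary.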

Virtual Telecommunication Access Method

For telecommunications, MVS uses a powerful access method, the Virtual Telecommunications Access Method (VTAM), part of the comprehensive telecommunications product called SNA, which stands for Systems Network Architecture.

VTAM is considered to be a subsystem because it runs in its own address space. VTAM provides centralized control over all of the terminal devices attached to an MVS system. Each VTAM terminal device is allocated to the VTAM address space, and application programs communicate with those terminal devices indirectly: they issue requests to VTAM, which in turn services each request for the appropriate terminal.

There are two required data sets that control how VTAM operates. SYS1.VTAMLIB contains the VTAM load modules, user-defined tables, and exit routines. SYS1.VTAMLST contains the definition statements and start options for VTAM. VTAM applications must be defined to VTAM with an APPL statement; this allows network resources to communicate with that application through VTAM. An APPL definition must be coded for each application program.
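As a minimal, hypothetical illustration (the application names are invented, and the operands shown are only a small subset of what a real definition would carry), APPL definitions in a SYS1.VTAMLST member might look like:

```text
* Hypothetical SYS1.VTAMLST member defining two VTAM applications
A01CICS  APPL  AUTH=(ACQ,PASS)        application ID for a CICS region
A01TSO   APPL                         application ID for TSO
```

Each label becomes the network name by which other resources address the application.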

CICS

CICS (Customer Information Control System) is a transaction subsystem that handles interactions between terminal users and online application programs. CICS runs as a VTAM application, and so any CICS system must be defined to VTAM. A connection between the user terminal and CICS is automatically established when either CICS or VTAM is started. End users identify themselves to CICS by signing on to begin an online session.

CICS is an online transaction processing (OLTP) program from IBM that, together with the COBOL programming language, has formed over the past several decades the most common set of tools for building customer transaction applications in the world of large enterprise mainframe computing. A great number of the legacy applications still in use are COBOL/CICS applications. Using the application programming interface (API) provided by CICS, a programmer can write programs that communicate with online users and read from or write to customer and other records (orders, inventory figures, customer data, and so forth) in a database (usually referred to as "data sets") using CICS facilities rather than IBM's access methods directly. Like other transaction managers, CICS can ensure that transactions are completed and, if not, undo partly completed transactions so that the integrity of data records is maintained.
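The transaction-integrity guarantee described above (complete fully, or undo partly completed work) can be sketched outside CICS. This is plain Python, purely for illustration; real CICS programs are typically written in COBOL using EXEC CICS commands, and CICS itself does the logging and backout:

```python
# Illustrative sketch (Python, not CICS) of all-or-nothing transaction
# behavior: either every update in a transaction is applied, or none is.

def run_transaction(records, updates):
    """Apply all updates, or roll back to the original state on any error."""
    snapshot = dict(records)          # remember state before the transaction
    try:
        for key, value in updates.items():
            if value < 0:             # simulate a business-rule failure
                raise ValueError("invalid update for " + key)
            records[key] = value
        return True                   # transaction committed
    except ValueError:
        records.clear()
        records.update(snapshot)      # undo the partial changes
        return False                  # transaction rolled back

inventory = {"widgets": 10, "gears": 5}
run_transaction(inventory, {"widgets": 8, "gears": -1})   # fails, rolled back
print(inventory)                      # state unchanged: {'widgets': 10, 'gears': 5}
```

The key point is the snapshot-and-restore step: partially applied updates never become visible as the final state.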

IBM markets or supports a CICS product for OS/390, UNIX, and Intel PC operating systems. Some of IBM's customers use IBM's Transaction Server to handle e-business transactions from Internet users and forward these to a mainframe server that accesses an existing CICS order and inventory database.

IMS

IMS was IBM's main database product before DB2, and it is still highly valued by its customers. The key characteristic of IMS is that record schemas are arranged in a "tree" structure. For example, if a particular structure has a "teacher" record "root" with several "student" record "branches," the same structure cannot have a student record root with several teacher record branches; that must be done with a separate structure. As a result, data stores with this type of many-to-many relationship are much larger with IMS, and performance in typical database tasks is slower. However, for run-the-business applications that do not typically involve many-to-many relationships, IMS is superior, and therefore many business-critical mainframe applications from the 1960s and 1970s continue to use IMS. IBM has periodically updated IMS to handle new environments such as the Web.
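The teacher/student example can be sketched as a tree of segments. This is plain Python for illustration only (the class, segment, and field names are invented here), not the DL/I API:

```python
# Illustrative sketch (Python, not DL/I) of IMS's hierarchical structure:
# a root segment with child segments, forming a tree.

class Segment:
    def __init__(self, seg_type, key, fields=None):
        self.seg_type = seg_type      # e.g. "TEACHER" or "STUDENT"
        self.key = key
        self.fields = fields or {}
        self.children = []            # child segments, one level down

    def insert(self, child):
        self.children.append(child)
        return child

# A "teacher" root with several "student" branches; the reverse
# (student root, teacher branches) would need a separate structure.
root = Segment("TEACHER", "T01", {"name": "Smith"})
root.insert(Segment("STUDENT", "S01", {"name": "Jones"}))
root.insert(Segment("STUDENT", "S02", {"name": "Brown"}))

# Retrieval walks downward from the root, as DL/I calls do.
students = [c.key for c in root.children if c.seg_type == "STUDENT"]
print(students)                       # ['S01', 'S02']
```

Because every access path starts at a root, queries that would need to enter the data "sideways" (student first, then teachers) require a second, separately maintained structure.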

IMS (Information Management System) has two components: DL/I and Data Communications. The DL/I component of IMS allows users to set up and maintain complex hierarchical databases that can be processed by application programs run as batch jobs. If the optional Data Communications component of IMS (IMS DC) is used, interactive application programs can be written that use IMS databases and communicate with terminals.

Like CICS, IMS DC implements its own multiprogramming that is transparent to MVS. IMS DC multiprogramming is more like MVS multiprogramming than CICS multiprogramming, however. The IMS control region, in its own address space, schedules application programs for execution in dependent regions (also in separate address spaces). The control region also manages communication between the application programs, databases, and terminals.

DB2

DB2 is a family of relational database management system (RDBMS) products from IBM that serve a number of different operating system platforms. According to IBM, DB2 leads in terms of database market share and performance. Although DB2 products are offered for UNIX-based systems and personal computer operating systems, DB2 trails Oracle's database products in UNIX-based systems and Microsoft's Access in Windows systems.

In addition to its offerings for the mainframe OS/390 and VM operating systems and its mid-range AS/400 systems, IBM offers DB2 products for a cross-platform spectrum that includes UNIX-based Linux, HP-UX, Sun Solaris, and SCO UnixWare; and for its personal computer OS/2 operating system as well as for Microsoft's Windows 2000 and earlier systems. DB2 databases can be accessed from any application program by using Microsoft's Open Database Connectivity (ODBC) interface, the Java Database Connectivity (JDBC) interface, or a CORBA interface broker.

The program development process includes the steps:

• Pre-compilation
• Compilation or assembly
• Link editing
• Binding
• Execution
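Purely as an illustration of the order of these steps (in Python, with invented stage names; the real steps use the DB2 pre-compiler, a compiler or assembler, the linkage editor, and the BIND process), the flow can be sketched as:

```python
# Illustrative sketch (Python) of the DB2 program preparation pipeline.
# Each stage is reduced to a labelled transformation so the sequence
# precompile -> compile/link-edit -> bind -> execute is visible.

def precompile(source):
    # split embedded SQL from host-language statements
    sql = [s for s in source if s.startswith("EXEC SQL")]
    host = [s for s in source if not s.startswith("EXEC SQL")]
    return host, {"dbrm": sql}        # modified source + database request module

def compile_link(host_statements):
    # stands in for compilation/assembly plus link editing
    return {"load_module": host_statements}

def bind(dbrm):
    # verify SQL, check authority, select access paths, build a plan
    return {"plan": dbrm["dbrm"]}

source = ["MOVE A TO B", "EXEC SQL SELECT * FROM T", "DISPLAY B"]
host, dbrm = precompile(source)
load_module = compile_link(host)
plan = bind(dbrm)
# execution uses the load module together with the application plan
print(len(load_module["load_module"]), len(plan["plan"]))   # 2 1
```

Note that binding consumes only the DBRM, which is why (as described below) it can run any time after pre-compilation, independently of compilation and link editing.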

Pre-compiler

Before a program can be compiled or assembled, it must be processed by the DB2 pre-compiler.

The pre-compiler performs the following functions:

 Converts SQL statements into host-language statements.
 Validates SQL statements.
 Detects SQL statement syntax errors.
 Produces a database request module (DBRM).

Compilation and Link Editing

A source program is compiled or assembled to produce an object module. After successful compilation, the object module is link-edited to form the load module. A DB2 program can have multiple source modules that are each compiled or assembled separately; the individual object modules are then combined in a single link-edit step. Individual DB2 program modules communicate with each other using conventional linkage techniques.

Binding

Before a program can be executed, a process called binding must be performed. Binding establishes a linkage between the application program and the DB2 data it accesses. The binding process involves only the DBRM produced by the pre-compiler. The steps performed by binding are as follows:

 Verify SQL statements
 Verify authority
 Select access paths
 Prepare an application plan

Binding can be run immediately after pre-compilation, since it works only on the DBRM.

Program Execution

After successful binding, the program is ready for execution, and it can be executed many times using the same application plan. When a program is executed, DB2 checks the application plan and, if anything it depends on has changed, rebinds it.

Authorization

No special authorizations are required to code, pre-compile, compile or assemble, or link-edit a DB2 program. However, the user who binds or executes the program must have been granted appropriate authorizations to perform these functions. The user must have appropriate table privileges, based on the type of operations performed by the program. These can include authorization to select data; to insert, update, or delete data; or to create new tables. A user performing a bind must have specific authorization to perform a bind for the program.

RACF

A very important consideration in any MVS installation is maintaining security, so that unauthorized users cannot access data. To provide security, most installations use a comprehensive security package called Resource Access Control Facility (RACF). RACF identifies both users and resources, such as data sets. When a user attempts to access a resource, RACF ensures that the user has the correct authority. RACF is not a subsystem, but rather a set of routines stored in the PLPA that are invoked by a user's address space whenever needed.

CLIST

CLIST (Command List) (pronounced "C-List") is a language for TSO in MVS systems. It originated in OS/360 Release 20 and has assumed a secondary role since the availability of Rexx in TSO/E Version 2. In its basic form, a CLIST program (or "CLIST" for short) can take the form of a simple list of commands to be executed in strict sequence (like a DOS batch (*.bat) file). However, CLIST also features If-Then-Else logic as well as loop constructs.

CLIST is an interpreted language: the computer must translate a CLIST every time the program is executed. CLISTs therefore tend to be slower than programs written in compiled languages such as COBOL or PL/1. (A program written in a compiled language is translated once to create a "load module" or executable.)

CLIST can read/write MVS files and read/write from/to a TSO terminal. It can read parameters from the caller and also features a function to hold global variables and pass them between CLISTs. A CLIST can also call an MVS application program (written in COBOL or PL/I, for example). CLISTs can be run in background (by running JCL which executes the TSO control program (IKJEFT01)). TSO I/O screens and menus using ISPF dialog services can be displayed by CLISTs.

REXX

REXX (REstructured eXtended eXecutor) is an interpreted programming language developed at IBM by Mike Cowlishaw. It is a structured, high-level programming language designed for ease of learning and reading. Proprietary and open source REXX interpreters exist for a wide range of computing platforms; compilers exist for IBM mainframe computers.

Rexx is widely used as a glue language and is often used for processing data and text and generating reports; these similarities with Perl mean that Rexx works well in Common Gateway Interface (CGI) programming, and it is indeed used for this purpose. Rexx is the primary scripting language in some operating systems, e.g. OS/2, MVS, VM, AmigaOS, and is also used as an internal macro language in some other software, e.g. KEDIT, THE, and the ZOC terminal emulator.

Additionally, the Rexx language can be used for scripting and macros in any program that uses Windows Scripting Host ActiveX scripting engine languages (e.g. VBScript and JScript) if one of the Rexx engines is installed.

Rexx is supplied with VM/SP on up, TSO/E Version 2 on up, OS/2 (1.3 on up), AmigaOS Version 2 on up, PC DOS (7.0 or 2000), and Windows NT 4.0 (Resource Kit: Regina). REXX scripts for OS/2 and NT-based Windows share the .cmd filename extension with other scripting languages, and the first line of the script specifies the interpreter to be used. A Rexx script or command is sometimes referred to as an EXEC, in a nod to Rexx's role as a replacement for the older EXEC command language on CP/CMS and VM/370 and the EXEC 2 command language on VM/SP.

Mainframe Basics

What is a job in mainframe?

A job is the unit of work that a computer operator (or a program called a job scheduler) gives to the operating system. For example, a job could be the running of an application program such as a weekly payroll program. A job is usually said to be run in batch (rather than interactive) mode. The operator or job scheduler gives the operating system a "batch" of jobs to do (payroll, cost analysis, employee file updating, and so forth), and these are performed in the background when time-sensitive interactive work is not being done. In IBM mainframe operating systems, a job is described with job control language (JCL). Jobs are broken down into job steps.

Jobs are background (sometimes called batch) units of work that run without requiring user interaction (for example, print jobs).

What is JCL?

JCL (job control language) is a language for describing jobs (units of work) to the MVS operating system, which runs on IBM's mainframe computers. These operating systems allocate their time and space resources among the total number of jobs that have been started in the computer. Jobs in turn break down into job steps. All the statements required to run a particular program constitute a job step.

A set of JCL statements can be compared to a menu order in a restaurant. The whole order is comparable to the job. Back in the kitchen, the chefs divide the order up and work on individual dishes (job steps). As the job steps complete, the meal is served (but it has to be served in the order prescribed just as some job steps depend on other job steps being performed first).

JCL statements mainly specify the input data sets (files) that must be accessed, the output data set to be created or updated, what resources must be allocated for the job, and the programs that are to run, using these input and output data sets. A set of JCL statements for a job is itself stored as a data set and can be started interactively. MVS provides an interactive menu-like interface, ISPF, for initiating and managing jobs.
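As a small, hedged example (job name, program name, and data set names are all hypothetical), a one-step job that runs a program against an input data set and writes its report to system output might look like:

```jcl
//PAYROLL  JOB  (ACCT123),'WEEKLY PAYROLL',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=PAYCALC
//INPUT    DD   DSN=PAYROLL.HOURS.WEEKLY,DISP=SHR
//REPORT   DD   SYSOUT=*
```

The JOB statement names the job, the EXEC statement begins a job step and names the program to run, and each DD statement connects one of the program's files to a real data set or output class.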

In MVS, the part of the operating system that handles JCL is called the Job Entry Subsystem (JES). There are two versions: JES2 and a later version with additional capabilities, JES3.

What is a dataset?

The term data set refers to a file that contains one or more records; the record is the basic unit of information used by a program running on z/OS. In simple words, a data set is the equivalent of a file in other operating systems, such as Mac OS or Windows.

Any named group of records is called a data set. Data sets can hold information such as medical records or insurance records, to be used by a program running on the system. Data sets are also used to store information needed by applications or the operating system itself, such as source programs, macro libraries, or system variables or parameters. For data sets that contain readable text, you can print them or display them on a console (many data sets contain load modules or other binary data that is not really printable). Data sets can be cataloged, which permits the data set to be referred to by name without specifying where it is stored.

VSAM - Virtual Storage Access Method

VSAM stands for Virtual Storage Access Method. VSAM is a data management system, introduced by IBM, used for managing files on its mainframe operating systems such as MVS.

VSAM Structure:

In VSAM, data is managed as records, which may be of whatever length the user requires; VSAM supports both fixed-length and variable-length records. Records are placed in blocks termed Control Intervals, which are measured in bytes. Control Intervals are in turn grouped into larger units called Control Areas.
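As a rough illustration of this structure (the sizes and the simplified overhead figure below are invented for the example, not VSAM defaults), packing fixed-length records into control intervals is simple arithmetic:

```python
# Illustrative arithmetic (Python): how fixed-length records pack into
# VSAM control intervals, and control intervals into a control area.
# All figures here are made-up examples, not VSAM defaults.

def records_per_ci(ci_size, record_length, free_bytes=10):
    """Records that fit in one control interval, reserving a few bytes
    for VSAM's internal control information (greatly simplified)."""
    return (ci_size - free_bytes) // record_length

ci_size = 4096            # a control interval size, in bytes (example)
record_length = 200
per_ci = records_per_ci(ci_size, record_length)
cis_per_ca = 180          # hypothetical: control intervals per control area
print(per_ci, per_ci * cis_per_ca)   # records per CI, records per CA: 20 3600
```

Real VSAM reserves control fields (RDFs and a CIDF) inside each control interval, which is what the simplified `free_bytes` term stands in for.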

Types of VSAM datasets:

The datasets of VSAM are generally referred to as clusters. There are four types of VSAM datasets, based on the way records are stored and accessed:

 Entry Sequenced Data Set (ESDS)
 Key Sequenced Data Set (KSDS)
 Linear Data Set (LDS)
 Relative Record Data Set (RRDS)

Entry Sequenced Data Set (ESDS)

In an Entry Sequenced Data Set (ESDS), each record is identified and accessed by specifying its physical location: the byte address of the first data byte of the record, relative to the beginning of the dataset. This type of VSAM dataset maintains records in sequential order. The ESDS cluster has one component, the data component.

Because records in an ESDS are accessed by physical location, the records in an ESDS cluster are stored in the order in which they are entered into the dataset, preserving their physical entry sequence. A record is referenced by its relative byte address (RBA). Records cannot be deleted, but records can be added; a new record is appended after the last record in the dataset. Records in an ESDS cluster may be fixed or variable length, and they can be accessed sequentially or randomly by supplying the RBA value of the desired record.
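The append-and-address-by-RBA behavior just described can be sketched in miniature. This is Python, purely illustrative (class and method names are invented); real VSAM stores records inside control intervals on disk:

```python
# Illustrative sketch (Python, not VSAM) of ESDS behavior: records are
# appended in entry order and addressed by relative byte address (RBA),
# the offset of the record's first byte from the start of the dataset.

class ESDS:
    def __init__(self):
        self.data = []                # (rba, record) pairs in entry sequence
        self.next_rba = 0

    def add(self, record):
        """Append a record; return its RBA. Deletion is not supported."""
        rba = self.next_rba
        self.data.append((rba, record))
        self.next_rba += len(record)  # variable-length records are allowed
        return rba

    def read(self, rba):
        """Random (direct) read by RBA; sequential read walks self.data."""
        for r, record in self.data:
            if r == rba:
                return record
        return None

ds = ESDS()
r1 = ds.add(b"FIRST RECORD")
r2 = ds.add(b"SECOND")
print(r1, r2, ds.read(r2))            # 0 12 b'SECOND'
```

Because each RBA is just a byte offset, records keep their addresses forever, which is why an ESDS never deletes or reorders records.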

Key Sequenced Data Set (KSDS)

The Key Sequenced Data Set (KSDS) is the most commonly used type of VSAM dataset. In this method, each record is located and accessed by specifying its key value. A key is a unique sequence of characters within each record, and the record is accessed using this value. The KSDS cluster has two components:

• Index component • Data component

As explained before, the key field is unique and occupies the same number of characters in each record of the data component of a KSDS cluster. In the data component, the actual records are stored in logical sequence by their key field values. The index component maintains the list of key values for the records in the cluster, along with pointers to the corresponding records in the data component. This pointer maintenance makes it easy to reorganize records when new records are inserted or existing records are deleted. Records in a KSDS cluster may be fixed or variable length, and they can be accessed sequentially in key order or randomly by supplying the key value of the desired record.
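A toy model of the index/data split described above, in Python (class and method names are invented for the sketch; a real KSDS index is considerably more elaborate):

```python
# Illustrative sketch (Python, not VSAM) of KSDS behavior: an index
# component maps each unique key to its record in the data component,
# so records can be read in key sequence or directly by key.

import bisect

class KSDS:
    def __init__(self):
        self.index = []               # sorted keys (index component)
        self.data = {}                # key -> record (data component)

    def insert(self, key, record):
        if key not in self.data:      # keys must be unique
            bisect.insort(self.index, key)
        self.data[key] = record

    def delete(self, key):
        self.index.remove(key)        # only pointers need adjusting
        del self.data[key]

    def read(self, key):              # random access by key
        return self.data.get(key)

    def sequential(self):             # sequential access in key order
        return [self.data[k] for k in self.index]

ds = KSDS()
ds.insert("C003", "carol")
ds.insert("A001", "alice")
ds.insert("B002", "bob")
ds.delete("B002")
print(ds.read("A001"), ds.sequential())   # alice ['alice', 'carol']
```

The sketch shows why inserts and deletes are cheap to accommodate: the data component need not be physically re-sequenced, only the index pointers change.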

Linear Data Set (LDS)

The Linear Data Set (LDS) is a byte-stream dataset, and it is the VSAM type most rarely used directly by users.

Relative Record Data Set (RRDS)

In a Relative Record Data Set (RRDS), each record is identified and accessed by specifying its record number, that is, its sequence number relative to the first record in the dataset. Unlike an ESDS, an RRDS is designed for accessing records in random order. The RRDS cluster has one component, the data component. In contrast to KSDS and ESDS, an RRDS supports only fixed-length records. A record is accessed by its record number, which may be any number from one up to the maximum number of records the dataset can contain; the record number corresponds to the record's slot position.

Records in an RRDS cluster can be accessed both sequentially and randomly by using the relative record number of the desired record. New records can be added to an RRDS cluster; a new record is written into an available empty slot. A record can also be deleted from an RRDS cluster, which leaves an empty slot that can be reused.
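The slot-based behavior above can be sketched as follows (Python, purely illustrative; the slot count and names are invented):

```python
# Illustrative sketch (Python, not VSAM) of RRDS behavior: a fixed number
# of fixed-length slots, addressed by relative record number (1-based).

class RRDS:
    def __init__(self, slots):
        self.slots = [None] * slots   # preformatted empty slots

    def write(self, rrn, record):
        """Store a record in slot rrn (1 .. number of slots)."""
        self.slots[rrn - 1] = record

    def read(self, rrn):
        """Direct read by relative record number."""
        return self.slots[rrn - 1]

    def delete(self, rrn):
        self.slots[rrn - 1] = None    # leaves an empty, reusable slot

ds = RRDS(slots=5)
ds.write(2, "BETA")
ds.write(5, "EPSILON")
ds.delete(2)                          # slot 2 becomes available again
print(ds.read(5), ds.read(2))         # EPSILON None
```

Because every slot has the same fixed length, a relative record number translates directly to a position in the dataset, which is what makes random access by number so cheap.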