WHITEPAPER

A DATA CENTER WITHOUT ENFORCEABLE STANDARDS RISKS MUCH MORE THAN MERE STORAGE MISMANAGEMENT. Efficiency and reliability are the twin pillars of mainframe computing. Here’s why you need storage allocation control standards, and four ways to keep tabs on them.

On December 14, 2020, users of Gmail, YouTube, Google Drive, and more — in other words, most of the online world — couldn’t access these and other services for around 45 minutes. The problem, as the search-engine-turned-tech-titan later explained, was an identity management system that ran out of storage space and lost the ability to authenticate user logins. Compounding the issue was the fact that it affected Google’s internal employees as well, who unsurprisingly rely on tools such as Gmail and Google Meet to manage works in progress and service requests. To remediate the issue, or at least temporarily work around it, teams disabled an automated storage quota management system that had failed to properly allocate the necessary space for identity management demands. Google’s distributed environment means the company has never utilized mainframes, but the outage underscores the importance of proper storage monitoring and management regardless of platform.

Storage is critical for every single function of an application. Without storage, applications don’t exist. If storage isn’t available or an application runs into issues trying to access it, that app will, at best, experience throttled performance, or at worst ABEND. Complications with storage can also lead to data unavailability, which can even become permanent data loss in severe circumstances. Clearly, storage is one of the primary pillars of application integrity.

While your own storage management mishaps might not be felt around the world, they’ll no doubt be a major inconvenience for the consumers of your business services, and possibly an even bigger pain for you. To avoid service disruptions and production mishaps, it’s important to maintain strict standards governing how the different datasets in your installation are treated throughout their lifecycle, from creation and processing all the way through to deletion. That’s where DTS Software’s Allocation Control Center, or ACC Monarch, comes in.
This could easily have been a mainframe use case. ACC Monarch provides centralized, rule-based management across a wide variety of mainframe resources, enabling you to not just establish standards, but to enforce them throughout the dataset lifecycle in both test and production environments. With ACC Monarch, you can define the attributes of datasets at their creation, manage access and buffering as they’re used, and even dictate how they’re disposed of according to your production and security requirements.

Storage allocation standards enforcement might not sound all that exciting, but storage management integrity has a direct impact on application reliability and performance. On the mainframe, where the world’s most mission-critical business services are delivered, standards enforcement is vital to these services that we consume every day. ACC Monarch can be used in conjunction with your allocation policies as a central repository for the standards you require throughout the storage management process.

GAIN GREATER CONTROL OF YOUR DATA CENTER WITH BETTER STANDARDS ENFORCEMENT

It’s not uncommon for a busy data center manager to settle for suboptimal performance in an area because it’s considered too time-consuming and expensive to make changes to JCL, control statements, or the DFSMS environment in the process of improving a job that already runs in production. That is a fair assessment when any change to the production environment requires a lengthy rollout through development and testing phases. But what if there were an easier way?

It’s possible your JCL was written 30-40 years ago under a completely different set of standards (and enforcement methods!), but that doesn’t make it any less of an organizational asset. Standardizing JCL, eliminating defects, and improving quality will help application performance and ensure you get through thousands of batch jobs error-free, day in and day out. Instead of having to directly alter JCL so that it adheres to certain standards — a process that sometimes feels like it requires a court order — ACC Monarch works by executing a set of policy rules. These policy rules control how datasets are used, enabling you to adjust the following features of your data center:

1. LEVERAGE I/O PERFORMANCE AS A BENCHMARK

Besides CPU consumption, I/O performance is the most important benchmark in data center management, because job streams can stack up and fail without fast and efficient I/O. In a typical IBM® z/OS® installation, sequential batch jobs and reading and writing VSAM datasets consume significant I/O resources, and it’s up to data center managers to ensure that these resources are allocated effectively. The reality, however, is that many jobs, steps, and programs in a typical data center suffer from poor performance because they rely on requirements that might have been appropriate in 1975 but are obsolete today.

It’s understandable that data center managers in the risk-averse mainframe world might not want to make what they perceive to be invasive changes to JCL or control statements when production jobs run fine as-is. Fortunately, ACC Monarch’s Performance Buffering feature can intervene to automatically increase buffering values for both non-VSAM and VSAM datasets without necessitating any JCL changes. In the case of non-VSAM datasets, a simple increase in values like BUFNO can offer a substantial performance improvement, while system-managed buffering values including ACCBIAS and SMBVSP can control buffering across all types of VSAM I/O. By improving buffering performance, data center managers can reduce elapsed time and CPU time, increasing operational efficiency and ultimately saving their organizations time and money.
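For a non-VSAM dataset, the change Performance Buffering applies behind the scenes is equivalent to coding larger buffer values in the JCL yourself. As a rough illustration (job, step, and dataset names here are invented for the example), the following sketch shows the JCL-level dials involved:

```jcl
//* Hypothetical batch copy step. Without tuning, a sequential read may
//* default to very few buffers; raising BUFNO lets the access method
//* overlap more I/O. ACC Monarch can apply this change at OPEN time
//* without the JCL ever being edited.
//COPYJOB  JOB (ACCT),'BUFFER DEMO',CLASS=A
//COPY     EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=PROD.DAILY.INPUT,DISP=SHR,
//            DCB=BUFNO=30
//SYSUT2   DD DSN=PROD.DAILY.OUTPUT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE),
//            DCB=BUFNO=30
//SYSIN    DD DUMMY
//* For a VSAM dataset, the equivalent dials are AMP subparameters,
//* e.g. system-managed buffering biased for sequential access:
//VSAMIN   DD DSN=PROD.CUSTOMER.KSDS,DISP=SHR,
//            AMP=('ACCBIAS=SO')
```

The point of the product feature is precisely that none of these overrides need to appear in the JCL: a policy rule can supply them centrally for every qualifying allocation.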

2. DATASET CREATION – NAMING CONVENTIONS STANDARDS ENFORCEMENT

Naming conventions are the first means that data center managers have to ensure the right data shows up at the right time to fulfill a specific function. Logical and consistent naming conventions are vital to efficient and reliable data center operation, which is why ACC Monarch Unit Name Manager enforces naming standards from the moment datasets are created.

ACC Monarch Unit Name Manager can validate or override the UNIT name, but it can also examine and validate allocation variables such as volume serial number, DD name, dataset name, Data Control Block (DCB) and space characteristics, including RLSE. If necessary, the values associated with these variables can be dynamically changed to meet system requirements, ensuring the validity of not just UNIT names, but all aspects of dataset allocation and job execution.

3. DATASET DELETION IS NEVER THE END

Any time a dataset is deleted, there are options as to how the dataset is to be treated. For example, coding DELETE simply removes the pointer to the dataset, but coding the ERASE parameter writes zeros over everything contained in the dataset, which can cause major CPU and I/O performance issues for large VSAM datasets. When a developer at a financial firm accidentally coded ERASE for a large dataset, it created a backlog behind this twenty-minute job that caused critical and expensive delays in other production jobs. ACC Monarch helps prevent this problem by enforcing standards that either limit or require operands such as ERASE, DELETE, and PURGE for the disposal of various types of datasets.

In the case of VSAM datasets, the IDCAMS_DELETE environment gets control when VSAM clusters and non-VSAM datasets are about to be deleted. Standards enforcement rules might look at the job or userid that initiated the delete to ensure it has the right administrative privileges, or they might look at the characteristics of the dataset in question to determine whether or not an ERASE would cause CPU and I/O performance issues or disruption to critical production workloads. These measures ensure that a single mistake can’t have consequences that are felt throughout an entire organization.
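To make the anecdote concrete, the kind of request a deletion policy intercepts looks like the sketch below (the dataset name is a placeholder). ERASE tells IDCAMS to overwrite the data component with binary zeros before the entry is removed, which is exactly the expensive behavior a rule might block for large production clusters:

```jcl
//* Hypothetical cleanup step. A policy rule gaining control when this
//* delete is processed could reject the ERASE operand based on the
//* cluster's size, or based on the submitting userid's privileges.
//CLEANUP  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE PROD.PAYROLL.HISTORY.KSDS -
         CLUSTER -
         ERASE -
         PURGE
/*
```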
4. GDG PROCESSING

To ACC Monarch, generation data groups (GDG bases) and generation datasets (GDSs) are just another dataset, meaning ACC rules can provide values for primary and secondary space, dataset organization, DCB characteristics, and pool, storage group, or volume serial number. ACC policy rules can set characteristics of GDG bases in the DEFINE_GDG environment, which gets control whenever a GDG base is defined or altered. These rules can enforce standards dictating the number of generations, roll-off options, first-in first-out (FIFO) or last-in first-out (LIFO) processing, and more.
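A policy enforced in the DEFINE_GDG environment can standardize definitions like the one sketched below (the base name and attribute choices are invented for illustration). A rule might, for instance, force EXTENDED so every new base supports up to 999 generations, or normalize roll-off behavior across the installation:

```jcl
//* Hypothetical GDG base definition. EXTENDED (z/OS 2.2 and later)
//* raises the generation limit from 255 to 999; SCRATCH/NOEMPTY and
//* FIFO control roll-off behavior -- all attributes an ACC policy rule
//* could supply or override when the base is defined.
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG (NAME(PROD.DAILY.REPORT) -
              LIMIT(999) -
              EXTENDED -
              SCRATCH -
              NOEMPTY -
              FIFO)
/*
```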

Since Extended GDGs (GDGEs) were introduced in z/OS 2.2, released in September 2015, one of the most advantageous uses of ACC Monarch’s DEFINE_GDG environment has been to enforce the creation of all new GDGs as GDGEs, ensuring that they can be defined with up to 999 generations instead of the 255 you can reference with the original GDGs. This means that all of your existing jobs will use the new standard automatically, instead of requiring someone to manually reach in, find the part of an application that creates GDGs, and make the necessary alterations. GDGE is far from mandatory — after all, you’ve lived without it since the 1960s — but the ability to easily make it the new standard offers additional features and use cases that you’ve been missing out on. Installations with a mix of old and extended GDGs with a variety of attributes are also more prone to error and likely to suffer from poor performance, which means standardization is advisable.

THE POWER OF POLICY ENFORCEMENT RULES

ACC Monarch relies on the DTS Software Policy Rules Engine, which allows it to enforce systemwide standards in such a way that they can’t be bypassed under any circumstances. These rules can run before and after DFSMS ACS routines, as well as at many other points in the dataset life cycle, such as volume selection, obtaining new extents and volumes, deletion, and cataloging. ACC Monarch can record all actions in logs and SMF records, and you can use it to issue WTOs and console commands, send emails, and more. Startup, refresh, and shutdown are all managed via simple commands and dynamically installed code, removing the need for SMP/E work.

ACC Monarch maintains control at every point in a dataset’s lifecycle, from OPEN and allocation to extend, rename, and DELETE. Do you want to prevent the use of production resources like RLS log streams in test environments? No problem. Would you like to increase and standardize I/O buffer usage at OPEN? Consider it done.

These use cases are just the tip of the iceberg for ACC Monarch, and we’re confident that it can help you manage your data center more effectively, no matter your goals.

FOR MORE INFORMATION about how ACC Monarch can improve the efficiency and reliability of your production workloads through effective standards enforcement, reach out for a demo or to request a free trial at DTSSoftware.com.

4350 Lassiter Ave. at North Hills, Suite 230, Raleigh, NC 27609
www.DTSsoftware.com • [email protected] • 918-833-8426 • 919-833-2848 (fax)