TS7700 Introduction

Jim Fisher, Advanced Technical Skills – North America
[email protected]

© 2010 IBM Corporation

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

Over 58 Years of Tape Innovation

• Starting in 1952 – IBM 726 Tape Unit
  – 7,500 characters per second
  – 100 bits per inch
• and continuing in 2008 – IBM TS1130
  – up to 160 MB/s¹
  – up to 1 TB¹
  – 321,000 bits per inch

[Timeline graphic: tape milestones, Open and Enterprise tracks]
1848 – George Boole invents binary algebra
1928 – IBM invents the 80-column punch card
1952 – IBM 726, 1st tape drive
1959 – IBM 729, 1st read/write drive
1964 – IBM 2104, 1st read/back drive
1984 – IBM 3480, 1st cartridge drive
1995 – IBM 3590
1999 – IBM 3590E
2000/2002/2004/2007 – LTO Gen1/Gen2/Gen3/Gen4 (Open)
2003/2005/2008 – 3592 Gen1/Gen2/Gen3 (Enterprise)

¹ Represents maximum native performance or cartridge capacity

IBM Tape Technology History

Year   Product        Capacity (MB)    Transfer Rate (KB/s)
1952   IBM 726        1.4              7.5
1953   IBM 727        5.8              15
1957   IBM 729        23               90
1965   IBM 2401       46               180
1968   IBM 2420       46               320
1973   IBM 3420       180              1,250
1984   IBM 3480       200              3,000
1989   IBM 3490       200              4,500
1991   IBM 3490E      400              9,000
1992   IBM 3490E      800              9,000
1995   IBM 3590-B1A   10,000/20,000    9,000
1999   IBM 3590-E1A   20,000/40,000    14,000
2002   IBM 3590-H1A   30,000/60,000    14,000
2003   IBM 3592-J1A   300,000          40,000
2005   IBM 3592-E05   700,000          100,000
2008   IBM 3592-E06   1,000,000        160,000

VTS/TS7700 Evolution

First       Device    Backend     Host        Host IO     Virtual   Logical     Logical Volume   Cache
Introduced            Drive       Interface   Rate        Drives    Volumes     Size
1996        3495-B16  3590        ESCON       9 MB/sec    32        50,000      400, 800 MB      144 GB
1997        3494-B16  3590        ESCON       9 MB/sec    32        50,000      400, 800 MB      144 GB
1998        3494-B18  3590        ESCON       50 MB/sec   128       250,000     400, 800 MB      1.7 TB
2001        3494-B10  3590 or     ESCON,      50 MB/sec   64        250,000     400 to 2000 MB   432 GB
                      3592        FICON
2001        3494-B20  3590 and/or ESCON,      150 MB/sec  256       500,000     400 to 4000 MB   1.7 TB
                      3592        FICON
2006        TS7740    3592        FICON       600 MB/sec  256       1,000,000   400 to 4000 MB   6 TB
2008        TS7740    3592        FICON       600 MB/sec  256       1,000,000   400 to 4000 MB   13.7 TB
2008        TS7720    ----        FICON       600 MB/sec  256       1,000,000   400 to 4000 MB   70 TB

Why Do I Need a TS7700?

• Tape technology has dramatically increased tape density
  – 400 MB or 800 MB in the 1980s (3490E)
  – 10 GB to 60 GB in the 1990s (3590)
  – 60 GB to 1000 GB in the 2000s (3592)
• Average mainframe dataset size is approximately 750 MB
• Typical tape usage is one dataset per tape
• As tape density increases, how do I take advantage of it?
• VTS and now TS7700
  – Presents an image of 3490E tape drives and cartridges to the host
  – "Stacks" logical volumes onto physical volumes
  – TS7700 manages the mapping of logical to physical volumes

Main Benefits from Tape Virtualization

• Brings efficiency to the tape operation environment
• Reduces the batch window
• Provides high availability and disaster recovery configurations
• Provides fast access to data through caching on disk
• Provides utilization of current tape drive, tape media, and tape automation technology
• Provides the capability of filling high-capacity media 100%
• Provides a large number of tape drives for concurrent use
• Provides data consolidation, protection, and sharing
• Requires no additional software
• Reduces Total Cost of Ownership (TCO)

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

Virtual Tape Concepts

• Virtual tape drives
  – Appear as multiple 3490E tape drives (device addresses 100–1FF)
  – Shared / partitioned like real tape drives
  – Designed to provide enhanced job parallelism
  – Requires fewer real tape drives
  – TS7700 offers 256 virtual drives per cluster
• Tape volume caching
  – All data access is to disk cache
  – Removes common tape physical delays: fast mount, positioning, load, demount
  – Up to 13.7 TB / 70 TB of cache (uncompressed)
• Volume stacking (TS7740)
  – Designed to fully utilize cartridge capacity, up to 1000 GB native capacity on 3592
  – Helps reduce cartridge requirements
  – Helps reduce footprint requirements
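The "stacking" idea above can be sketched in a few lines of Python. This is a toy first-fit packer, not the TS7740's actual placement logic; all names, sizes, and the packing strategy are invented for illustration only:

```python
# Minimal sketch of volume stacking: many small logical volumes are
# packed onto one high-capacity physical volume, and the controller
# keeps the logical-to-physical mapping. Illustrative only.

PHYSICAL_CAPACITY_MB = 1_000_000  # e.g. a 1000 GB 3592 JB cartridge

def stack(logical_volumes, capacity_mb=PHYSICAL_CAPACITY_MB):
    """Pack (volser, size_mb) pairs onto physical volumes, first-fit."""
    physicals = []          # list of dicts: {"used": int, "volumes": [...]}
    mapping = {}            # logical volser -> physical volume index
    for volser, size_mb in logical_volumes:
        for i, pv in enumerate(physicals):
            if pv["used"] + size_mb <= capacity_mb:
                pv["used"] += size_mb
                pv["volumes"].append(volser)
                mapping[volser] = i
                break
        else:
            # No existing cartridge has room: start a new one.
            physicals.append({"used": size_mb, "volumes": [volser]})
            mapping[volser] = len(physicals) - 1
    return physicals, mapping

# 2000 logical volumes of ~750 MB (the average mainframe dataset size
# quoted earlier) fit on two 1000 GB cartridges instead of 2000 tapes.
lvols = [(f"LV{n:04d}", 750) for n in range(2000)]
physicals, mapping = stack(lvols)
print(len(physicals))  # 2
```

The point is the ratio: the host still sees one 3490E volume per dataset, while the TS7700 keeps the volser-to-cartridge mapping internally.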

Logical Volumes in TS7700

Host and TS7700 Basic Diagram

TS7700 Architecture – Old versus New

TS7700 – Scalability

• Common DASD, multiple elements – redundancy, capacity, attachments

[Diagram: up to 16 VNodes and up to 2 HNodes share the DASD cache and metadata; the HNodes connect to the Grid and to the physical library and drives; a Management Interface spans the VNodes]

• Current delivery is a single GNode (VNode + HNode in the same controller)

TS7740 Virtualization Engine Components

• TS7740 Virtualization Engine (3957 Model V06)
  – POWER5 architecture server based
  – Two dual-core, 64-bit, 1.9-GHz processors
  – Runs the V and H nodes
• TS7740 Cache Drawer (3956 Model CX6)
  – RAID array expansion
  – 16 15K-RPM 146 GB FC HDDs
  – 1.5 TB usable capacity (after RAID and spares)
• TS7740 Cache Controller (3956 Model CC6)
  – Disk RAID array controller
  – 16 15K-RPM 146 GB FC HDDs
  – 1.5 TB usable capacity (after RAID and spares)
• 3952 Model F05 Frame
  – Houses major components & support components
  – Dual power

The combination of the Virtualization Engine components is called a TS7700 Cluster.

TS7720 Virtualization Engine Components

• TS7720 Disk-Only Virtualization Engine (3957-VEA)
  – POWER5++ architecture server based
  – Two dual-core, 64-bit, 2.1-GHz processors
  – Integrated Enterprise Library Controller
• TS7720 Cache Drawer (3956-SX7)
  – RAID array expansion (RAID 10)
  – 16 7.5K-RPM 1 TB SATA HDDs
  – 10 TB usable (after RAID and spares)
• TS7720 Cache Controller (3956-CS7)
  – Disk RAID array controller
  – 16 7.5K-RPM 1 TB SATA HDDs
  – 10 TB usable (after RAID and spares)
• 3952 Model F05 Frame(s)
  – Ethernet routers for service and management interface functions
  – Houses major components & support components
  – Dual power

TS7700 Internal Components

TS7700 – Capabilities

• Tape Volume Cache
  – TS7740
    • RAID 5 – FC drives
    • 1 to 13.7 TB capacity (3 – 41 TB @ 3:1 compression)
  – TS7720
    • RAID 6 – SATA drives
    • 40 TB or 70 TB capacity (120 – 210 TB @ 3:1 compression)
• 256 virtual tape devices (3490E)
• 1 million logical volumes
• Advanced Policy Management standard
  – Volume pooling
  – Cache management
  – Reclamation
• 4 to 16 3592 physical drives – TS7740 only
• 3584 physical library support – TS7740 only
  – New or added to an existing library

TS7700 – Interfaces

• zSeries attachment
  – Two or four adapters per VNode
  – Single-port 4 Gb FICON adapters
• Grid interconnect
  – Two 1 Gb Ethernet per node
• Management interface
  – Two Ethernet ports per node
• Drive/TS7700 interface – TS7740 only
  – Two 4 Gb fibre per node for 3592

System Software Support for TS7700

• z/OS – Support is provided at z/OS V1R9 and above. z/OS V1R4 through V1R8 support the TS7700, but these levels are out of service; an extended service contract is recommended for support.
• z/VSE – Support is provided at z/VSE 3.1.2 and above.
• z/VM – Support is provided at z/VM V5R1 and above for both guest and native VM. z/VM V4R4 supports the TS7700, but this level is out of service; an extended service contract is recommended for support.
• TPF – Support is provided at TPF 4.1 and above.
• z/TPF – Support is provided at z/TPF V1.1 and above.

TS7700 Grid Configuration

• Joins two or more TS7700 Clusters together to form a Grid configuration
  – Hosts attach directly to the TS7700 Clusters
• IP-based replication
• Two 1 Gbps Ethernet links
  – VNodes use the links for remote file access
  – HNodes use the links for cluster-to-cluster library messages and replication of logical volumes
• RJ45 copper or SW fibre
• Standard TCP/IP networking infrastructure
• Policy-based replication management
  – APM is standard
  – Can be configured for disaster recovery and/or high availability environments
• TS3500 library is supported

TS7700 – Grid Interconnect

TS7700 – Balanced Mode

[Diagram: host FICON-attached to TS7700 Cluster 0 and TS7700 Cluster 1, connected over a WAN]

• Balanced mode
• Virtual devices online in both clusters
• Allocation can pick a drive from either cluster

TS7700 – Preferred Mode

[Diagram: local host FICON-attached to the local TS7700; a remote host and remote TS7700 connected over a WAN]

• Preferred mode
• Virtual devices online on one cluster only

TS7700 Three-Site Grid – High Availability and Disaster Recovery

[Diagram: System z FICON attachment at the local production site / campus / region, optionally extended via DWDM or channel extension; TS7700 Cluster-0 and Cluster-1 local, TS7700 Cluster-2 at the remote disaster recovery site, all connected over a WAN]

TS7700 Three-Site Grid – Two Production Sites and Disaster Recovery

[Diagram: two System z hosts with FICON attachment at production sites (local / campus / region / world), optionally extended via DWDM or channel extension; TS7700 Cluster-0 and Cluster-1 at the production sites, TS7700 Cluster-2 at the remote disaster recovery site, all connected over a WAN]

Four-Cluster Grid – High Availability Everywhere

[Diagram: HA production site and HA disaster recovery site (or second production site), connected over a WAN]

Grid Link Dynamic Load Balancing

TS7740 Back-End Encryption

[Diagram: hosts (z/OS, AIX, Linux, i5/OS, HP-UX, Windows, Sun) with key store, EKM, and crypto services, attached via FICON and the network to the TS7740]

• A proxy in the TS7700 provides the bridge between the drive Fibre Channel and the network for EKM exchanges
• Encryption policy is based on the Storage Pool, which is controlled through Advanced Policy Management (APM): Storage Group and Management Class

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

What is Reclamation?

• Recovers unused space on stacked volumes
• The TS7740 stacks logical volumes onto physical volumes and fills the physical volume
• Initially the physical volume is 100% full
• Over time, logical volumes become invalid on the physical volume
  – After being returned to scratch AND one of the following:
    • Allocated to satisfy a scratch mount
    • Delete Expired time passes
• The reclaim process transfers the active logical volumes from partially full physical volumes to an empty physical volume
• A new full physical volume is created
• Reclaimed volumes return to the scratch pool

Life Cycle of a Logical Volume

[Diagram: state cycle — a scratch mount sets the volume to private; return-to-scratch keeps it managed; the TS7700 stops managing the volume when it is selected for another scratch mount or the delete-expire time elapses]

• A scratch mount occurs and a scratch logical volume is selected
• The logical volume is set to private
• Time goes by
• Return-to-scratch processing occurs
• The TS7700 still manages the volume
• The TS7700 stops managing the volume's data when:
  – The logical volume is selected to satisfy a scratch mount
  – Delete Expire time passes (if set)
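The life cycle above can be sketched as a tiny state machine. The states and transitions follow the slide; the class and names are invented for illustration and are not TS7700 internals:

```python
# Sketch of the logical-volume life cycle: scratch -> private on a
# scratch mount, back to scratch on return-to-scratch, and no longer
# managed ("expired" here) once delete-expire time passes.

SCRATCH, PRIVATE, EXPIRED = "scratch", "private", "expired"

class LogicalVolume:
    def __init__(self, volser):
        self.volser = volser
        self.state = SCRATCH       # inserted volumes start as scratch

    def scratch_mount(self):
        # A scratch mount selects the volume; any old data is no
        # longer managed and the volume becomes private.
        self.state = PRIVATE

    def return_to_scratch(self):
        # The data is still managed until reuse or delete-expire.
        self.state = SCRATCH

    def delete_expire(self):
        # Delete Expire time passed: the TS7700 stops managing the data.
        if self.state == SCRATCH:
            self.state = EXPIRED

v = LogicalVolume("ABC123")
v.scratch_mount()        # selected for a scratch mount -> private
v.return_to_scratch()    # return-to-scratch: data still managed
v.delete_expire()        # delete-expire elapses -> data no longer managed
print(v.state)           # expired
```

Note that `delete_expire` is a no-op on a private volume, mirroring the slide: the data of a private volume is always managed.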

Reclamation Process

[Diagram: PV0001 stacked with LV0001–LV0010, PV0002 with LV0011–LV0020]

• Multiple logical volumes are stacked onto physical volumes; the physical volumes are filled
• Over time, logical volumes become invalid, leaving unused gaps on tape
• A stacked volume becomes eligible for reclaim when its active data drops below a threshold
• Reclaim reads the active logical volumes from multiple stacked volumes and writes them to an empty stacked volume (in the diagram, LV0002, LV0003, LV0005, LV0007, LV0009, LV0011, LV0012, LV0015, LV0017, and LV0020 onto PV0003)
• Reclaimed tapes are now scratch tapes
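The reclaim pass can be sketched as follows. This is a minimal illustration of the idea, assuming invented volsers and a simplified "percent active" measure, not the TS7740's actual algorithm:

```python
# Sketch of reclaim: active logical volumes from under-threshold
# stacked volumes are rewritten to an empty cartridge, and the source
# cartridges return to scratch. Illustrative only.

def reclaim(stacked, threshold_pct):
    """stacked: {pv_volser: {lv_volser: is_active}}.
    Returns (contents of the new stacked volume, freed cartridges)."""
    new_pv, freed = [], []
    for pv, lvs in stacked.items():
        active = [lv for lv, ok in lvs.items() if ok]
        pct_active = 100 * len(active) // len(lvs)
        if pct_active < threshold_pct:       # eligible for reclaim
            new_pv.extend(active)            # copy active data off
            freed.append(pv)                 # cartridge back to scratch
    return new_pv, freed

stacked = {
    "PV0001": {"LV0001": False, "LV0002": True, "LV0003": True,
               "LV0004": False, "LV0005": True, "LV0006": False,
               "LV0007": True, "LV0008": False, "LV0009": True,
               "LV0010": False},                          # 50% active
    "PV0002": {f"LV{n:04d}": True for n in range(11, 21)},  # 100% active
}
new_pv, freed = reclaim(stacked, threshold_pct=60)
print(new_pv)   # the five still-active volumes from PV0001
print(freed)    # ['PV0001']
```

PV0002 stays untouched because its active data is above the threshold; only the half-empty PV0001 is emptied and returned to scratch.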

How is Reclaim Controlled?

• Reclaim Threshold
  – Determines at what level of active data a physical volume becomes eligible for reclaim
  – Set on a per-pool basis
  – Reclaim uses two back-end drives per reclaim task
    • Make sure there is at least one drive available for recall
    • Uses CPU resources
• Inhibit Reclaim Schedule
  – Inhibits reclaim operations during the periods specified
  – Overridden for panic reclaim (fewer than 2 scratch physical volumes available)
  – Reclaim should be inhibited during peak production periods
• Host Console Command
  – Can be used to limit the number of reclaim tasks (R1.6)
  – LI REQ,lib_name,SETTING,RECLAIM,RCLMMAX,xx
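A few illustrative invocations of the console command form above, assuming a hypothetical library name PRODLIB (the full keyword set is documented in the TS7700 z/OS Host Command Line Request User's Guide on Techdocs):

```
LI REQ,PRODLIB,SETTING,RECLAIM,RCLMMAX,04
LI REQ,PRODLIB,POOLCNT,1
LI REQ,PRODLIB,PDRIVE
```

The first caps concurrent reclaim tasks at four, the second reports media types and counts from pool 1 onward (useful for watching scratch levels), and the third shows back-end drive usage so reclaim activity can be monitored.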

Effect of Reclaim Threshold

• A higher threshold:
  – Moves more data at reclaim time
  – Makes reclaims occur more often
  – Consumes back-end drive resources
  – Uses more CPU resources
• Average amount of data on a cartridge:
  – With a Reclaim Threshold of 10%, the average cartridge is 55% full
  – With a Reclaim Threshold of 20%, the average cartridge is 60% full
  – With a Reclaim Threshold of 30%, the average cartridge is 65% full

Amount of active data at which a cartridge becomes eligible for reclaim:

Cartridge    Reclaim Threshold
Capacity     10%       20%       30%       40%
300 GB       30 GB     60 GB     90 GB     120 GB
500 GB       50 GB     100 GB    150 GB    200 GB
640 GB       64 GB     128 GB    192 GB    256 GB
700 GB       70 GB     140 GB    210 GB    280 GB
1000 GB      100 GB    200 GB    300 GB    400 GB
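The average-fullness figures above follow from a simple model: a cartridge starts 100% full and is reclaimed once its active data falls below the threshold, so on average it sits midway between the two. A quick check of the slide's numbers (the model is an assumption, but it reproduces the stated values):

```python
# Average cartridge fullness under a given reclaim threshold, assuming
# active data drifts linearly from 100% down to the threshold.
def avg_fullness_pct(threshold_pct):
    return (100 + threshold_pct) / 2

for t in (10, 20, 30):
    print(t, avg_fullness_pct(t))   # 55.0, 60.0, 65.0

# The table's per-cartridge reclaim points are simply capacity * threshold.
def reclaim_point_gb(capacity_gb, threshold_pct):
    return capacity_gb * threshold_pct / 100

print(reclaim_point_gb(1000, 40))   # 400.0, as in the table
```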

Reclaim Recommendations

• Threshold of 20% – 30% for regular pools
• Threshold of 10% for Copy Export pools
  – Logical volumes are recalled into cache if needed
  – Both primary and secondary copies are pre-migrated to tape
• Inhibit reclaim during heavy production periods
• Adjust the maximum number of reclaim tasks as needed

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

Host Console Request

[Diagram: host console request flowing to the TS7700; grid link status returned]

z/OS Host Console Request Keywords

Keyword 1   Keyword 2   Keyword 3        Description (Comp = composite library, Dist = distributed library)
CACHE       -           -                Current state of the cache and the data managed within it for the specified distributed library. (Dist)
COPYEXP     zzzzzz      RECLAIM          Make a physical volume previously exported in a copy export operation eligible for priority reclaim. (Dist)
COPYEXP     zzzzzz      DELETE           Delete a physical volume previously exported in a copy export operation from the TS7700 database; the volume must be empty. (Dist)
GRIDCNTL    COPY        ENABLE/DISABLE   Enable or disable copy operations for the specified distributed library. (Dist)
LVOL        zzzzzz      -                Information about a specific logical volume. (Comp)
PDRIVE      -           -                Physical drives and their current usage for the specified distributed library. (Dist)
POOLCNT     [0-32]      -                Media types and counts for volume pools beginning with the value in keyword 2. (Dist)
PVOL        zzzzzz      -                Information about a specific physical volume. (Dist)
RECALLQ     [zzzzzz]    -                Content of the recall queue starting with the specified logical volume. (Dist)
RECALLQ     zzzzzz      PROMOTE          Promote the specified logical volume to the front of the recall queue, then return the queue content. (Dist)
RECLAIM     RCLMMAX     xx               Limit the number of reclaim tasks in the TS7740. (Dist)
SETTING     (various)   (various)        View and set current cache and throttling values. (Dist)
STATUS      GRID        -                Copy, reconcile, and ownership takeover status of the libraries in a Grid configuration. (Comp)
STATUS      GRIDLINK    -                Grid link performance from the perspective of a specific TS7700; performance is analyzed every 5 minutes. (Dist)

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

Two-Cluster Grid

• In a multi-cluster grid, each cluster is given a numeric designation
  – Cluster 0 through Cluster 7
• The cluster copy policy provides the ability to define where and when copies are made
  – A Management Class is used to define the policy
  – Data consistency points can be different for each cluster
    • Rewind/Unload (RUN)
    • Deferred
    • No Copy
  – A single cluster always uses Rewind/Unload

[Diagram: hosts Alice and Bob attached to Cluster 0 and Cluster 1; MC "RUNDEF" defined at Cluster 0 as Cluster 0 = RUN, Cluster 1 = Deferred, and at Cluster 1 as Cluster 0 = Deferred, Cluster 1 = RUN; the asterisk marks the mount-point cluster]

Three-Cluster Grid

[Diagram: hosts Alice, Bob, and Ted attached to Clusters 0, 1, and 2. MC "RUNDEF" at Cluster 0: Cluster 0 = RUN, Cluster 1 = Deferred, Cluster 2 = Deferred. At Cluster 1: Cluster 0 = Deferred, Cluster 1 = RUN, Cluster 2 = Deferred. At Cluster 2: Cluster 0 = RUN, Cluster 1 = No Copy, Cluster 2 = No Copy. The asterisk marks the mount-point cluster.]

Logical Volume Copy Consistency

• A three-way configuration uses the same consistency point rules as a two-way
  – (R) – Rewind/Unload (RUN), or immediate-mode copy consistency
  – (D) – Deferred consistency
  – (N) – No Copy, or inhibit consistency
• Consistency policies are configured through the Management Class within the Library Manager
  – Each distributed library or cluster is configured independently
    • The recommendation is to keep consistency rules the same within each cluster
    • Unique Management Classes can be used to generate different outcomes
  – The cluster or distributed library at the mount point is where the consistency points are read for that mount operation
• Various overrides are available
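The R/D/N rules above can be sketched as a small policy resolver. The structure and names are invented for illustration; only the semantics (RUN copies complete before rewind/unload, deferred copies are queued, No Copy inhibits the copy) follow the slide:

```python
# Sketch of consistency-point resolution at the mount-point cluster:
# R = RUN (copy before rewind/unload), D = deferred, N = no copy.

RUN, DEFERRED, NOCOPY = "R", "D", "N"

def plan_copies(mc_policy, mount_cluster):
    """mc_policy: {cluster_id: 'R'|'D'|'N'} as read at the mount point.
    Returns (clusters copied before rewind/unload, deferred copies)."""
    immediate, deferred = [], []
    for cluster, point in mc_policy.items():
        if cluster == mount_cluster:
            continue                     # data is written here already
        if point == RUN:
            immediate.append(cluster)    # copy completes before RUN
        elif point == DEFERRED:
            deferred.append(cluster)     # copy queued after close
        # NOCOPY: this cluster never receives a copy
    return immediate, deferred

# The slide's MC "RUNDEF" as read at Cluster 0: RUN at 0, Deferred at 1.
immediate, deferred = plan_copies({0: RUN, 1: DEFERRED}, mount_cluster=0)
print(immediate, deferred)   # [] [1]
```

Because the policy is read at the mount point, mounting the same Management Class at a different cluster can produce a different copy plan, which is exactly the behavior the two- and three-cluster diagrams show.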

Management Class Panel – Copy Consistency Points

Management Class Panel – Copy Consistency Points (continued)

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

What is Ownership?

• In a multi-cluster grid, all clusters are aware of all logical volumes
• Only one cluster can own a logical volume at a time
• Initial ownership is assigned at logical volume insert time
• If a cluster needs to use a logical volume, it asks the owning cluster to transfer ownership, for example to:
  – Mount a logical volume
  – Change a logical volume's category
• Ownership is granted over the grid links via a "token"
• Ownership serializes the use of a logical volume
• How does ownership get transferred when the owning cluster is not available on the grid?
  – Ownership is not granted; the mount fails
  – Manual ownership takeover is enabled on a cluster
    • Read or Read/Write ownership takeover
  – Autonomic Ownership Takeover Manager
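The ownership rules above amount to a serialization protocol, which can be sketched as follows. The class and names are invented for illustration; the real token exchange over the grid links is more involved:

```python
# Sketch of volume-ownership serialization: one owner per logical
# volume, transferred on request; if the owner is unreachable and no
# takeover mode is enabled, the mount fails.

class OwnershipError(Exception):
    pass

class Grid:
    def __init__(self, clusters):
        self.available = {c: True for c in clusters}
        self.owner = {}                  # volser -> owning cluster

    def insert(self, volser, cluster):
        self.owner[volser] = cluster     # initial ownership at insert

    def acquire(self, volser, cluster):
        """Transfer ownership before a mount or category change."""
        current = self.owner[volser]
        if current == cluster:
            return                       # already the owner
        if not self.available[current]:
            # Owner unreachable: without takeover enabled, mount fails.
            raise OwnershipError(f"{current} unavailable; mount fails")
        self.owner[volser] = cluster     # "token" granted over grid links

g = Grid(["cluster0", "cluster1"])
g.insert("ABC123", "cluster0")
g.acquire("ABC123", "cluster1")          # normal ownership transfer
g.available["cluster1"] = False
try:
    g.acquire("ABC123", "cluster0")      # owner down, no takeover mode
except OwnershipError as e:
    print(e)
```

The failure branch is exactly the gap the Autonomic Ownership Takeover Manager on the following slides is designed to close.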

Autonomic Ownership Takeover Manager

• An optional function for improving the availability of the TS7700
• Enables automatic ownership takeover when one of the clusters in a Grid configuration fails
• Uses the TS3000 System Console (TSSC) on each TS7700 to provide an independent method of checking the status of a cluster
  – This requires additional IP addresses and interconnects between sites
• If a TS7700 cannot obtain ownership because it does not get a response from the owning TS7700, it asks its local TSSC to check the status of the remote TS7700
• If the failure of the remote TS7700 is confirmed, one of the ownership takeover modes is enabled automatically
  – A default of Neither, Read, or Write ownership takeover is set by IBM service

Autonomic Ownership Takeover Manager – Cluster Failure

[Diagram: two TS7700s with their TSSCs across a WAN; the remote TS7700 has become unavailable]

• The TS7700 cannot communicate with the remote TS7700
• The TS7700 requests the status of the remote TS7700 via the TSSCs
• The remote TSSC responds that its TS7700 is inoperable
• One of the takeover modes is entered automatically

Autonomic Ownership Takeover Manager – Partial Network Failure

[Diagram: the network between the two TS7700s is partially inoperable]

• The TS7700 cannot communicate with the remote TS7700
• The TS7700 requests the status of the remote TS7700 via the TSSCs
• The remote TS7700 responds to its TSSC, which reports that the TS7700 is OK
• There is no automatic entry to a takeover mode

Autonomic Ownership Takeover Manager – Full Network Failure

[Diagram: the network between the sites is inoperable]

• The TS7700 cannot communicate with the remote TS7700
• The TS7700 requests the status of the remote TS7700, but there is no response from the remote TSSC
• There is no automatic entry to a takeover mode
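The three scenarios above reduce to one decision rule: takeover is enabled only when the independent TSSC path positively confirms that the remote TS7700 itself is down. A minimal sketch, with invented names and return strings:

```python
# Sketch of the AOTM decision after a failed ownership request.
# Takeover requires positive confirmation via the TSSC path; silence
# (network down) or a healthy report (partial network failure) does
# not trigger takeover -- that avoids a split-brain situation.

def aotm_decision(remote_ts7700_reachable, tssc_reachable, tssc_says_down):
    if remote_ts7700_reachable:
        return "no takeover needed"
    if not tssc_reachable:
        return "no automatic takeover"      # full network failure
    if tssc_says_down:
        return "enter takeover mode"        # cluster confirmed failed
    return "no automatic takeover"          # partial network failure

print(aotm_decision(False, True, True))     # enter takeover mode
print(aotm_decision(False, True, False))    # no automatic takeover
print(aotm_decision(False, False, False))   # no automatic takeover
```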

Topics

• What is the TS7700 and Why Should I Have One?
• TS7700 Product Overview
• Reclamation
• Host Console Request Support
• Logical Volume Copy Consistency
• Autonomic Ownership Takeover Manager
• Recent Enhancements

Release 1.5 and 1.6 New Functionality

• Release 1.5
  – Library Manager Convergence
  – TS7720 Disk-Only Virtualization
  – TS1130 (3592-E06) Support
  – Workflow Management Enhancements
  – Device Allocation Assist
• Release 1.6
  – Four-Cluster Grid
  – Cluster Families
  – Hybrid Grid
  – Logical WORM
  – Improved Code Install Process
  – Security Enhancements
  – Additional Host Console Requests

Library Manager Convergence

• Integration of the Library Manager
  – Moves logical volume management into the TS7700
  – Moves physical volume/device management into the TS7700
• The TS7700 takes over logical/physical volume management
  – Sends SMC commands directly to the 3584/TS3500
  – Eliminates the use of the 3953 Library Manager controller for attaching the TS7740 to the 3584/TS3500
• Library Manager GUI incorporated into the TS7700 Management Interface
  – Configuration and management panels provided by the Library Manager have been moved to the TS7700 Management Interface

Logical Mount Times        Pre R1.5      R1.5, 1.6
Standalone                 2-5 sec       1-2 sec
Grid                       3-7 sec       2-4 sec

Return-To-Scratch Rate     Pre R1.5      R1.5, 1.6
Standalone                 3.5-4.5/sec   6-7/sec
Grid                       2-2.5/sec     3.5-4.5/sec

Library Manager Convergence – TS7740/TS3500 at R1.4

[Diagram: System z host FICON-attached to TS7740s over a LAN/WAN; a 3953-F05 frame houses the Library Manager, Ethernet switch, and fibre switches; fibre channel connects the TS7740 to the 3592 tape drives in the TS3500 tape library]

Library Manager Convergence – TS7740/TS3500 at R1.6, Option A

[Diagram: as at R1.4, but the TS7740 connects without the Library Manager in the data path; the fibre switches stay in the 3953-F05]

Note: 3953-L05 Library Managers remain in the 3953-F05 if there are control-unit-attached drives still in the library.

Library Manager Convergence – TS7740/TS3500 at R1.6, Option B

[Diagram: the fibre switches move to TS3500 D23 frames; the TS7740 connects to the 3592 tape drives through them]

• FC4748 Remove 4Gb Switch – removes switches from the 3953-F05 (if applicable)
• FC4871 TS7700 BE SW Mounting Hardware – provides mounting hardware for back-end drive fibre switches
• FC4872 TS7700 BE 4Gb Switches – if new switches are required
• FC4873 Reinstall TS7700 BE Switches – if moving switches from the 3953-F05

TS7720 Single-Cluster and Multi-Cluster Grid

[Diagram: System z host FICON-attached to TS7720s over a LAN/WAN]

• Variable cache configurations
  – One 3956-CS7 Cache Controller
  – Three or six 3956-SX7 Cache Drawers
  – Two sizes available – 40 TB and 70 TB (uncompressed capacity)
• Stores hundreds of thousands of logical volumes – almost 300,000 assuming 600 MB host volumes with a compression ratio of 3:1
  – Maximum of 1,000,000 logical volumes
• Same functionality as the TS7740
• Transparent support on z/OS, z/VM, z/VSE & z/TPF
• Standalone and business continuation configurations
• Cache space management
  – Monitor available and used space through the host console command or the TS7720 Management Interface
  – Cache space messages to attached hosts (CBR3750I console messages on z/OS) – Limited and Out of Space warnings

TS1130 Support (3592-E06/EU6)

• All drives in the system must be of the same type
  – 3592-J1A, or 3592-E05 in J1A emulation mode
  – 3592-E05 in native mode
  – 3592-E06/EU6
• Media types/capacities:

Drive Type                           Media   Capacity
3592-J1A, or 3592-E05 in             JJ      60 GB
J1A emulation mode                   JA      300 GB
3592-E05/TS1120                      JJ      100 GB
                                     JA      500 GB
                                     JB      700 GB
3592-E06/TS1130                      JJ      128 GB
                                     JA      640 GB
                                     JB      1000 GB

Workflow Management Enhancements

• Provides the user the ability to adjust the internal management and alert levels managed by the TS7700
• New controls are set and retrieved via the TS7700 z/OS Host Command Line Request function
• Settings are persistent and are maintained across code upgrades, system restarts, etc.
• Keyword "SETTING" added
• Three categories (keywords) of settings added:
  – Alert
  – Cache
  – Throttle
• Refer to the IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request User's Guide on Techdocs for details.

Workflow Management Enhancements – Alert Messages

• Alert level controls
  – Provide a mechanism to present messages to the host console when various metrics have crossed a customer-set boundary
  – Alerts are defined in two levels, warning and critical; each is configured by the customer
  – Messages are sent when the configured metric crosses the warning or critical level, as well as when the system drops back below the threshold level
• Alert conditions which can be managed:
  – Copy crit – critical level of the amount of data needing to be replicated to other TS7700s
  – Copy low – warning level of the amount of data needing to be replicated to other TS7700s
  – Physical drive crit – critical level of available physical back-end drives
  – Physical drive low – warning level of available physical back-end drives
  – Physical scratch crit – critical level of physical scratch
  – Physical scratch low – warning level of physical scratch
  – Residual data crit – critical level of unpremigrated data in the TS7700 cache
  – Residual data warn – warning level of unpremigrated data in the TS7700 cache
• Generates a z/OS message as thresholds are crossed:
  – CBR3750I Message from library Prodlib: AL5006 Available physical scratch volumes of 98 below critical limit of 100 for pool 12.
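The two-level alerting described above (message on crossing a level, and again on dropping back below) can be sketched for one metric. Everything here is invented for illustration; real alerts surface as CBR3750I console messages like the example on the slide:

```python
# Sketch of warning/critical alerting for the physical-scratch metric:
# a message is produced only when the state changes, i.e. on crossing
# a configured level in either direction.

def check_scratch(available, warn_at, crit_at, last_state="ok"):
    """Return (new_state, message-or-None) for one polling interval."""
    if available <= crit_at:
        state = "critical"
    elif available <= warn_at:
        state = "warning"
    else:
        state = "ok"
    msg = None
    if state != last_state:
        msg = (f"Available physical scratch volumes of {available} "
               f"entered {state} state (warn={warn_at}, crit={crit_at})")
    return state, msg

state, msg = check_scratch(98, warn_at=200, crit_at=100)
print(msg)   # crossing into the critical level produces a message
state, msg = check_scratch(250, warn_at=200, crit_at=100, last_state=state)
print(msg)   # dropping back below the threshold also produces one
```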

Workflow Management Enhancements – Cache Controls

• Allows the customer to tailor how the TS7700 manages the data flow through the disk cache within the system to better fit their needs
• Cache controls:
  – Copies to follow Storage Class preference – determines if incoming copies should be treated as PG0 or as defined by Storage Class
    • Disable/Enable
  – Premigration Priority Threshold – threshold at which to begin ramping up the number of physical drives performing premigration
    • Default is 800 GB of resident data
  – Premigration Throttling Threshold – threshold at which to begin throttling host I/O in order to keep unpremigrated data below this level
    • Default is 1000 GB of resident data
  – Recalls Preferred to be removed from cache – treats recalls as PG0 rather than as defined by Storage Class
    • Disable/Enable

Workflow Management Enhancements – Throttle Controls

• Allows the customer to tailor when, and to what levels, the TS7700 paces host I/O or replication via the throttling mechanism
• Throttle controls:
  – Full Cache Copy Throttle – determines if the TS7700 begins to pace host I/O when logical volume replication falls behind, such that the amount of uncopied data within the cache grows
    • Enable/Disable
  – Deferred Copy Throttle – sets the amount of delay the TS7700 will introduce per 256 KB copied when system resources are constrained
    • Default is 125 ms
  – Deferred Copy Throttle Threshold – sets the level at which the DCT is applied
    • Default is 100 MB/s compressed host write rate
  – Immediate Copy Throttling – determines if the TS7700 should restrict host I/O when immediate copies begin to approach the MIH timeout value
    • Enable/Disable
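A per-chunk delay translates directly into a bandwidth ceiling, which makes the Deferred Copy Throttle default easy to reason about. A back-of-the-envelope check (the "per copy task" reading is a simplifying assumption; the real throttle interacts with other workload):

```python
# Approximate ceiling on deferred-copy rate when a fixed delay is
# inserted per 256 KB chunk copied, as the DCT does.
def dct_bandwidth_mbps(delay_ms, chunk_kb=256):
    return (chunk_kb / 1024) / (delay_ms / 1000)   # MB per second

print(dct_bandwidth_mbps(125))   # 2.0 MB/s at the 125 ms default
```

So at the default setting, each throttled deferred-copy stream is held to roughly 2 MB/s, which is why the throttle only engages when resources are constrained (host write rate above the 100 MB/s DCT threshold).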

Device Allocation Assist

• Improves performance in a multi-cluster grid with multiple clusters online to the host
• With new host support applied (APAR OA24966), the host has knowledge of a virtual device's affinity to a cluster
• For a specific mount, the library will preference a specific distributed library:
  – Directs the host to the best distributed library in a multi-cluster grid configuration
  – Maximizes cache hits, minimizes remote cache access, and provides for better workload balancing
  – The host will allocate a device from the ordered list of distributed library IDs provided
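The allocation step can be sketched as follows. The library-supplied ordering is taken as given (the slide lists the factors it weighs); cluster names, device addresses, and the selection loop are invented for illustration:

```python
# Sketch of Device Allocation Assist: the library returns a best-first
# ordered list of distributed libraries, and the host picks an
# available device from the best cluster that has one.

def pick_device(ordered_clusters, devices_by_cluster, online):
    """ordered_clusters: best-first list from the library.
    devices_by_cluster: {cluster: [device addresses]}.
    online: set of device addresses currently available."""
    for cluster in ordered_clusters:
        for dev in devices_by_cluster.get(cluster, []):
            if dev in online:
                return cluster, dev
    return None, None                  # no device available anywhere

devices = {"CL0": ["0100", "0101"], "CL1": ["0120", "0121"]}
# The library orders CL1 first (say, the volume is in CL1's cache),
# so a CL1 device is chosen and a remote mount is avoided.
cluster, dev = pick_device(["CL1", "CL0"], devices, online={"0101", "0120"})
print(cluster, dev)   # CL1 0120
```

Without the ordered list, the host would pick among online devices at random, which is exactly the remote-mount risk the next two slides contrast.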

TS7700 – Without Device Allocation Assistance

[Diagram: host mounts ABC123; virtual tape device subsystems 00-1F on Cluster 0 and 20-3F on Cluster 1; the volume is consistent, non-scratch in both cluster databases, but in cache only at Cluster 1]

• z/OS randomly picks a device address
• This can result in a remote mount

TS7700 – With Device Allocation Assistance

[Diagram: host mounts ABC123 and receives the ordered list CL1, CL0; the volume is consistent, non-scratch in both cluster databases, but in cache only at Cluster 1]

• z/OS asks for an ordered subsystem list for the volume
  – Ordering takes into account cache residency, how busy each cluster is, link latencies, and recall workload
• z/OS picks an available device from the ordered list
• Minimizes remote mounts

Release 1.5 and 1.6 New Functionality

• Release 1.5
  – Library Manager Convergence
  – TS7720 Disk-Only Virtualization
  – TS1130 (3592-E06) Support
  – Workflow Management Enhancements
  – Device Allocation Assist
• Release 1.6
  – Four-Cluster Grid
  – Cluster Families
  – Hybrid Grid
  – Logical WORM
  – Improved Code Install Process
  – Security Enhancements
  – Additional Host Console Requests
• Futures

Four-Cluster Grid – High Availability Everywhere

[Diagram: Clusters 0 and 1 at the HA production site with copy mode RRDD, Clusters 2 and 3 at the HA disaster recovery site (or second production site) with copy mode DDRR, connected over a WAN]

Two Two-Cluster Grids in One!

[Diagram: Clusters 0 and 1 at the production site, Clusters 2 and 3 at the disaster recovery site, connected over a WAN; copy mode DNDN pairs Cluster 0 with Cluster 2, and copy mode NDND pairs Cluster 1 with Cluster 3]

Three Production Sites, One DR Site

[Diagram: Cluster 0 at production site A (copy mode DNND), Cluster 1 at production site B (copy mode NDND), Cluster 2 at production site C (copy mode NNDD), and Cluster 3 at the disaster recovery site, connected over a WAN]

Cluster Families – Cooperative Replication

• Copy management uses family information to optimize long-distance copy links
  – For deferred-mode copies
  – Prioritizes getting one copy to each family before making copies to members within a family
  – Executes a single copy between families, then local copies between family members

[Diagram: Family A at the production site and Family B at the DR site, copy mode RRDD; one family-to-family copy crosses the WAN, then copies are made within each family]

TS7720/TS7740 Hybrid Grid

• A hybrid grid is a multi-cluster TS7700 grid with a mixture of both TS7720s and TS7740s
• Data contained within any TS7700 location can be accessed by any TS7700 in the grid, independent of TS7700 type
• Different combinations satisfy high availability and performance characteristics:
  – TS7740 provides a high-capacity physical tape back end and copy export capability
  – TS7720 provides high performance through read hits due to its larger tape volume cache
  – The grid provides high availability at local and/or remote sites
• A new Automatic Removal Policy manages the data in TS7720 clusters
• No host changes are required to support hybrid grids
• High availability at a lower price
  – Allows an HA solution for recently used content
• Provides a higher cache-hit percentage
  – The TS7720 clusters retain much more cache-resident content
• Provides DR recovery for the newest data electronically
  – Electronically replicate between TS7720 and TS7740
  – Recovery Time Objective potential of seconds for the most accessed data
• Provides a safety net for TS7720 overruns
  – If a TS7720 reaches capacity, production can still run

© 2010 IBM Corporation Hybrid Grid Configurations ƒ Disk-Centric and Cost Effective Hybrid Grid Configurations ƒ Automatic Volume Migration and Cache Space Management of the TS7720’s cache – Volumes are copied from a TS7720 to a TS7740 through normal copy policies – When space is needed, the least recently accessed volumes in the TS7720’s cache that have been copied to a TS7740 are removed from the TS7720’s cache ƒ Migrated volumes remain accessible through the TS7720 – TS7720 uses the grid links to remotely access the volume data in the TS7740
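The removal policy described above amounts to a least-recently-accessed sweep restricted to volumes that already have a TS7740 copy. The sketch below is a hedged illustration with invented field names, not the actual cache-management code.

```python
def select_removals(volumes, space_needed):
    """Pick volumes to remove from a TS7720 cache when space is needed.

    volumes: list of dicts with 'volser', 'size', 'last_access',
    'copied_to_7740'. Only volumes already copied to a TS7740 are
    candidates; they are removed least-recently-accessed first until
    enough space is freed.
    """
    candidates = sorted(
        (v for v in volumes if v["copied_to_7740"]),
        key=lambda v: v["last_access"],      # oldest access first
    )
    removed, freed = [], 0
    for v in candidates:
        if freed >= space_needed:
            break
        removed.append(v["volser"])
        freed += v["size"]
    return removed

vols = [
    {"volser": "L00001", "size": 4, "last_access": 10, "copied_to_7740": True},
    {"volser": "L00002", "size": 4, "last_access": 5, "copied_to_7740": True},
    {"volser": "L00003", "size": 4, "last_access": 1, "copied_to_7740": False},
]
removals = select_removals(vols, 6)   # L00003 is never removed: no 7740 copy yet
```

Note that the oldest volume, L00003, survives because it has not yet been replicated; removal never sacrifices the only copy.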

(Diagram) The host attaches to a TS7720 cluster with a very large cache. Data migration flows over the LAN/WAN to a TS7740 cluster with an intermediate cache and attached tape drives/library; migrated data remains accessible to the host through the TS7720.

© 2010 IBM Corporation Hybrid Four Cluster Grid for HA

(Diagram) An HA Production Site (TS7740 plus TS7720) connects over the WAN to an HA DR Site (TS7740 plus TS7720), Copy Mode = RRDD. The 70TB TS7720 production cache gives a high cache-hit percentage without limiting the total solution to 70TB.

With Device Allocation Assist the production host will prefer allocations to a cluster that has the data in cache

© 2010 IBM Corporation Hybrid Electronic Vaulting

(Diagram) A TS7740 holding 100% of the customer data connects over the WAN to a TS7720 holding the last N-1 days of replicated data; the TS7740 performs a Copy Export every N days.

DR solution with electronic vaulting for the most recently accessed data (the last N-1 days) while using Copy Export for older data (N+ days).

Copy Mode = DD

© 2010 IBM Corporation TS7720 Front End, TS7740 Back-end Hybrid ƒ Three production TS7720 clusters all feeding into a common TS7740 – Each TS7720 primarily replicates only to the common TS7740 – Provides 210TB of HIGH PERFORMANCE production cache when running in balanced mode – The installed TS7740 performance features can be minimal since host connectivity wouldn’t be expected ƒ Production data migrates into the TS7740 – If a TS7720 reaches capacity, the oldest data which has already been replicated to the TS7740 will be removed from the TS7720 cache. – Copy export can be utilized at the TS7740 in order to have a second copy of the migrated data. – Duration between copy exports can be longer since the last N days of data has not yet been migrated

(Diagram) Three production TS7720 clusters (a combined production capacity of 210TB) connect over the LAN/WAN to a common TS7740, which performs Copy Export.

© 2010 IBM Corporation Logical Write Once, Read Many (LWORM) Volumes ƒ Expands TS7700 support into the compliance storage space – Logical volumes can now support non-rewritable storage solutions – Emulates Write Once, Read Many (WORM) physical tape functionality • Logical volume can only be appended to, beginning after the last customer data record • Supported for IBM Standard and ANSI Labeled volumes • Every write from BOT generates a unique ID for tracking and detection of volume replacement – Supported by both TS7720 and TS7740 ƒ Uses the Data Class storage construct to specify that a logical WORM volume is to be allocated ƒ Retention period controlled by the application through a Tape Management System ƒ When data is expired, the volume can become a candidate for re-use as a new instance of a logical WORM volume or as a full read/write volume

(Diagram) LWORM volume lifecycle: Scratch Mount (DC = LWORM) → Non-Rewritable for the Retention Period → subject to being scratched or deleted by z/OS DFSMS RMM → Volume Returned to Scratch → Volume Re-Written from BOT.

© 2010 IBM Corporation Logical Write Once, Read Many (LWORM) Volumes ƒ WWID is generated for each LWORM volume – A 12 byte world unique identifier is generated during scratch mounts when the data class indicates LWORM. – The 12 byte identifier is bound to the volume once a first write occurs. – If no write occurs (host/scratch mismatch), the previously bound WWID, if any, remains. ƒ A write-mount count is also bound to the volume – During fast-ready mounts, the write-mount count is initialized to zero. – Each mount with at least one write, including the scratch mount, will increase the write-mount count by one. ƒ During host mounts, the WWID and write-mount count are surfaced to z/OS host software via the RBL and PRSD Expanded Device Data commands. – Only when the WWID and write-mount counts match will the host allow the mount to succeed.
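The binding rules above can be modeled roughly as follows. The class, method names, and hex-string WWID representation are my own illustrative assumptions; the real identifier formats and state handling live inside the TS7700.

```python
import os

class LwormVolume:
    """Toy model of LWORM WWID binding and write-mount counting."""

    def __init__(self):
        self.wwid = None                # bound only once a first write occurs
        self.write_mount_count = 0
        self._pending_wwid = None
        self._wrote_this_mount = False

    def scratch_mount(self):
        # A candidate 12-byte world-unique ID is generated at scratch mount,
        # and the write-mount count is reset for the fast-ready mount.
        self._pending_wwid = os.urandom(12).hex()
        self.write_mount_count = 0
        self._wrote_this_mount = False

    def private_mount(self):
        self._wrote_this_mount = False

    def write(self):
        if self._pending_wwid is not None:
            self.wwid = self._pending_wwid   # first write binds the WWID
            self._pending_wwid = None
        if not self._wrote_this_mount:
            self.write_mount_count += 1      # at most one bump per mount
            self._wrote_this_mount = True

vol = LwormVolume()
vol.scratch_mount()      # WWID generated but not yet bound
vol.write()              # WWID bound, write-mount count -> 1
vol.private_mount()
vol.write()
vol.write()              # same mount: count stays at 2
```

If no write ever happened after the scratch mount, `vol.wwid` would stay unbound, matching the host/scratch-mismatch rule on the slide.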

RBL = Read Buffered Log PRSD = Prepare to Read Subsystem Data

© 2010 IBM Corporation Logical WORM – Data Class on TS7700 MI

ƒ Setting Logical WORM option for a Data Class causes all clusters in the grid to create the same Data Class name with the Logical WORM option set and the same Logical Volume Size.

© 2010 IBM Corporation Improved Microcode Install Process

ƒ Uses “image install” for installing code rather than individual component installs. ƒ AIX install is integrated into code install. ƒ Reduces time and impact of TS7700 code installations by streamlining existing processes, running processes in parallel and fully enabling available concurrent code load components ƒ This action was taken as a result of field experience with long microcode installs. ƒ It’s important to note that this is a new process so be sure to read and follow the new Installation Instructions! ƒ Code upgrades from 1.4, 1.4A and 1.5 levels supported. (8.4.0.39 is minimum code level)

Task                                     Pre R1.6 Timeline   R1.6 Timeline
Pre-Install Preparation                  1 – 2.5 hours       1 – 2.5 hours
Upgrade/Installation (non-concurrent)    6 – 8.5 hours       2 – 3.5 hours
Post-install (concurrent)                N/A                 1 – 2 hours
Total Projected Timeline                 7 – 11 hours        4 – 8 hours

© 2010 IBM Corporation User Security and User Access Improvements ƒ Provide Centralized Role-Based Access Control (RBAC) – Allow Customer to control all access granted to each local, web specialist, and Remote Support user id – Allow Customer to easily create, change, reset, and revoke passwords for web specialist and Remote Support userids – All RBAC controlled via a remote Customer LDAP server (Tivoli Directory Services or Microsoft Active Directory) via SSPC – Single log in for individual users across all supported products • When a user logs into one supported storage product, and is authenticated, a token is created. This token can then be passed to other supported products to automatically log the user into those products. – External reporting (audit logging) for ‘potentially dangerous’ actions taken by each individual user across all supported products • logins, configuration changes, status changes (such as vary on, vary off, service prep), shutdown, and code updates.

The System Storage Productivity Center (SSPC) is an appliance running an Authentication Services application. Optionally the Authentication Services function could be provided by Tivoli Productivity Center (TPC) software running on a customer-managed server.

© 2010 IBM Corporation IBM Confidential Secure Storage Administration Architecture

(Diagram) Security services: an Authentication Service (SSPC/TPC) provides single sign-on, and a centralized Audit Service (e.g., TSIEM) provides audit logging/reporting. A common user registry (LDAP) gives centralized user management. IBM storage disk products, IBM storage tape products, and non-IBM storage partners each run an Authentication Client that maps authorization roles to users and groups. Advanced capability is available via Tivoli Federated Identity Manager.

TPC - Total Storage Productivity Center SSPC - System Storage Productivity Center TSIEM – Tivoli Security Information Event Manager Authentication – recognized user (valid User ID & Password) Authorization – maps a Role to a recognized user LDAP – Lightweight Directory Access Protocol

© 2010 IBM Corporation Cache Host Overrides ƒ Set through Host Library Request command (z/OS console) ƒ New keywords for the LVOL request – PREFER - update management to PG1 and update to most recently accessed – MIGRATE - update management to PG0 • Migration is not immediate. Volume is only added to the migration candidate list. ƒ Only applies to TS7740

PG0 Volumes: removed when space is needed, largest first. In addition, the smallest files are occasionally removed, to avoid having to manage the many small files that would otherwise tend to remain.

PG1 Volumes: removed when space is needed, after PG0 volumes, in LRU order.

LI REQ, lib_name, LVOL, xxxxxx, PREFER (or MIGRATE)
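The two keywords' effects might be modeled as below. This is an illustrative sketch: the dict fields and candidate queue are invented, and actual migration selection happens internally in the TS7740 later, which is why MIGRATE only queues the volume rather than migrating it.

```python
def apply_override(volume, keyword, migration_candidates, now):
    """Apply an LVOL PREFER or MIGRATE host override to one logical volume."""
    if keyword == "PREFER":
        volume["group"] = "PG1"             # manage as preference group 1
        volume["last_access"] = now         # treated as most recently accessed
    elif keyword == "MIGRATE":
        volume["group"] = "PG0"
        # Migration is not immediate; the volume only joins the candidate list.
        migration_candidates.append(volume["volser"])
    else:
        raise ValueError("keyword must be PREFER or MIGRATE")

vol = {"volser": "L00042", "group": "PG0", "last_access": 0}
queue = []
apply_override(vol, "PREFER", queue, now=100)    # promote to PG1, refresh LRU
apply_override(vol, "MIGRATE", queue, now=101)   # demote to PG0, queue only
```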

© 2010 IBM Corporation Reclaim Maximum Tasks Limit ƒ Set through Host Library Request command (z/OS console) ƒ New keyword RECLAIM ƒ Only applies to TS7740 ƒ Inhibit Reclaim Schedule is still honored ƒ Turn off the limit by setting the value to -1

Number of Available Drives   Maximum Number of Reclaims
3 – 5                        1
6 – 7                        2
8 – 9                        3
10 – 11                      4
12 – 13                      5
14 – 15                      6
16                           7

LI REQ, lib_name, SETTING, RECLAIM, RCLMMAX, x
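The drives-to-reclaims table follows a simple pattern. The closed form below is inferred from the table itself, not a documented IBM formula, so treat it as a mnemonic rather than an interface guarantee.

```python
def max_reclaim_tasks(available_drives):
    """Maximum concurrent reclaim tasks for a given number of available
    back-end drives, matching the published table: roughly one reclaim
    (which consumes a source and a target drive) per spare drive pair."""
    return max(1, available_drives // 2 - 1)

# Reproduce the table's mapping for 3 through 16 available drives.
limits = {d: max_reclaim_tasks(d) for d in range(3, 17)}
```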

© 2010 IBM Corporation Questions?

© 2010 IBM Corporation Trademarks The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.

Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.

For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:

*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®

The following are trademarks or registered trademarks of other companies.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.

* All other products may be trademarks or registered trademarks of their respective companies.

Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography. © 2010 IBM Corporation