IBM i Virtualization and Open Storage

Mike Schambureck, IBM Lab Services, Rochester, MN

Partition Virtualization on POWER

IO Virtualization with Dedicated Adapters vs. IO Virtualization with a Hosting Server LPAR

[Diagram: with dedicated adapters, each LPAR owns its physical adapter and device driver; with a hosting server LPAR, virtual server/client adapter pairs through the hypervisor share one physical adapter and fabric connection, increasing adapter bandwidth use and LPAR density per slot.]

Partition Virtualization concepts / benefits

. Virtualization allows you to use the same physical adapter across several partitions simultaneously
  – For storage
    • Disk
    • Tape
    • Optical
  – For Ethernet

. Benefits:
  – Reduces your hardware costs
  – Better hardware utilization
  – Take advantage of new capabilities

IBM i Host and Client Partitions: Overview

. DASD
  – Hardware assigned to the host LPAR in the HMC
  – Hosting server's DASD can be integrated or SAN
  – DASD virtualized as NWSSTG objects tied to network server descriptions
. Optical
  – DVD drive in the host LPAR virtualized directly (OPTxx)

. Networking
  – Network adapter and virtual Ethernet adapter in the host LPAR
  – Virtual Ethernet adapter in the client LPAR (CMNxx)
[Diagram: the IBM i host's integrated or SAN disks (DDxx) and DVD are presented to the IBM i client over a virtual SCSI connection as NWSSTGs and OPTxx; Ethernet is bridged over the virtual LAN.]

VIO Server and Client Partitions: Overview

. DASD
  – Hardware assigned to the VIOS LPAR in the HMC
  – DASD can be integrated or SAN
  – hdisk# is virtualized as IBM i DD## devices
. Optical
  – DVD drive in the VIOS host LPAR virtualized directly (OPT##)

. Networking
  – Network adapter and virtual Ethernet adapter in the VIOS LPAR
  – Virtual Ethernet adapter in the IBM i client LPAR

[Diagram: VIOS hdisk## and DVD presented to the IBM i client over a virtual SCSI connection as DD## and OPT##; Ethernet bridged over the virtual LAN (CMN##).]

IBM i Innovative Technology

Integrated Server Virtualization concepts / benefits

. Virtualization also allows IBM i to host x86 operating systems
  – For storage
    • Disk (also uses network storage spaces)
    • Tape
    • Optical
  – For Ethernet

. Benefits:
  – Take advantage of IBM i ease of use and legendary reliability
  – Designed to pool resources and optimize their use across a variety of operating systems
  – Centralize storage and server management
  – Take advantage of IBM i save/restore interfaces for x86 data
    • Object level (storage space)
    • File level (Windows only)

Where Do I Start with Virtualization on IBM i on Power Systems?

• Latest version at: http://www.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
• http://www.ibm.com/systems/resources/systems_power_hardware_blades_i_on_blade_readme.pdf
• https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/IBM%20i%20Technology%20Updates/page/IBM%20i%20on%20a%20Flex%20Compute%20Node

Virtual SCSI (vSCSI): IBM i hosting IBM i or VIOS hosting IBM i

[Diagram: a hosting server on System 1 with an FC HBA virtualizes storage through the hypervisor to IBM i clients (Systems 1, 2 and 3), which each see 6B22 device types.]

• Assign storage to the physical adapter in the hosting partition
• Requires 512-byte-per-sector LUNs to be assigned to the host
• Many storage options supported

Minimum: POWER6 with IBM i 6.1.1

vSCSI Storage Mapping

• Storage management allocations are done from both the external storage and the hosting IBM i/VIOS partition
• Storage is assigned to the hosting IBM i/VIOS partition
• Within VIOS, you map the hdisk# (LUN) to the vhost adapter corresponding to the client partition; the client sees a 6B22 device type
• Within an IBM i host, you map storage spaces (NWSSTG) to the network server description (NWSD) tied to the client partition
• Flexible disk sizes
• Load source requirements
• 16 disks per vSCSI adapter – just increased to 32 in i7.1 TR8 / i7.2!
A CL sketch of the IBM i host mapping follows.
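The IBM i host mapping can be scripted with CL. A minimal sketch, assuming a network server description named NWSD1 (created earlier with CRTNWSD TYPE(*GUEST) against the virtual SCSI server adapter) and a 100 GB virtual disk; all names and sizes are placeholders:

  /* Create a 100 GB network server storage space (virtual disk) */
  CRTNWSSTG NWSSTG(CLIENT1D1) NWSSIZE(102400) FORMAT(*OPEN)

  /* Link the storage space to the client's network server description */
  ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(NWSD1)

  /* Vary on the NWSD so the client partition sees the disk (6B22 device type) */
  VRYCFG CFGOBJ(NWSD1) CFGTYPE(*NWS) STATUS(*ON)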

vSCSI Tape and Optical

• The drive is assigned to the hosting partition
• Within VIOS, you map the physical tape or optical drive, or file-backed virtual optical, to the vhost adapter corresponding to the client partition
• IBM i hosting automatically maps optical and tape resources to the client using the vSCSI adapter (the client sees OPTxx and TAPxx devices)
• VIOS has no tape library support with vSCSI adapters; VFC (NPIV) adapters must be used for tape libraries
A VIOS command-line sketch of the optical mapping follows.
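A minimal VIOS sketch of the optical mappings described above, assuming cd0 is the physical DVD, vhost0 is the client's virtual SCSI server adapter, and rootvg holds the virtual media repository; names and sizes are placeholders:

  # Map the physical optical drive to the client's vhost adapter
  mkvdev -vdev cd0 -vadapter vhost0

  # File-backed virtual optical: create a media repository, import an ISO, then load it
  mkrep -sp rootvg -size 20G
  mkvopt -name install_media -file /home/padmin/install.iso
  mkvdev -fbo -vadapter vhost0
  # vtopt0 is the device created by the mkvdev -fbo step above
  loadopt -vtd vtopt0 -disk install_media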

Minimum: POWER6 with IBM i 6.1.1

Create Virtual SCSI Client Adapter

Create the Virtual SCSI Server Adapter

Update the LPAR profile or perform a dynamic LPAR operation

Specify IBM i LPAR

Specify the adapter ID used when creating the client adapter in IBM i

Assigning VIOS Storage to IBM i – SAN Storage

[Diagram: VIOS presents vSCSI storage volumes as vtscsi devices on vhost0/vhost1; each IBM i LPAR sees them as DDxx disk units over its virtual SCSI connection. Maximum of 32* virtual devices per connection.]

1. HMC: Create virtual SCSI server adapters in the VIOS partition profile
2. HMC: Create virtual SCSI client adapters in the client IBM i partition profile
3. VIOS: Assign storage volumes to the IBM i client partitions (HMC or command line)
4. IBM i: Initialize and add the disks to the ASP (from SST)

* Requires IBM i 7.1 TR8 or IBM i 7.2

Use HMC Virtual Storage Management to view storage in VIOS

View on the HMC and VIOS Command Line

Virtual Storage Management – Map Disk to IBM i Client

Option 2 – VIOS Command Line

mkvdev -vadapter vhost0 -vdev hdisk1
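The same mapping, slightly expanded; a sketch assuming hdisk1 should back the client behind vhost0 (the -dev name is an optional, illustrative label):

  # Map hdisk1 to the client's vhost adapter and name the virtual target device
  mkvdev -vadapter vhost0 -vdev hdisk1 -dev ibmi1_disk1

  # Verify the mapping
  lsmap -vadapter vhost0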

IBM i + NPIV (Virtual Fibre Channel (vFC))

[Diagram: VIOS on System 1 with an 8 Gb HBA passes SAN storage through to three IBM i clients over virtual Fibre Channel adapters.]

• The hypervisor assigns 2 unique WWPNs to each virtual Fibre Channel adapter (virtual address example: C001234567890001)
• The host on the SAN is created as an iSeries host type
• Requires 520-byte-per-sector LUNs to be assigned to the iSeries host on DS8K
• Can migrate existing direct-connect LUNs
• DS8100, DS8300, DS8700, DS8800, DS5100, DS5300, V7000, SVC, V3700 and V3500 supported
Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the SAN/tape library to use virtual fibre.

Minimum: POWER6 with IBM i 6.1.1

Requirements for NPIV with VIOS and IBM i Client Partitions

• Must use 8 Gb or 16 Gb Fibre Channel adapters on the Power system and assign them to VIOS partitions
• Must use a Fibre Channel switch to connect the Power system and the storage server
• The Fibre Channel switch must be NPIV-capable
• The storage server must support NPIV as an attachment between VIOS and IBM i
  – Coming up on another slide

NPIV Configuration – Limitations

. Single client adapter per physical port per partition
  – Intended to avoid a single point of failure
  – Documentation only – not enforced

. Maximum of 64 active client connections per physical port
  – It is possible to map more than 64 clients to a single adapter port
  – May be less due to other VIOS resource constraints

. 32K unique WWPN pairs per system platform
  – Removing an adapter does not reclaim WWPNs
    • Can be manually reclaimed through the CLI (mksyscfg, chhwres, …) using the "virtual_fc_adapters" attribute
  – If exhausted, need to purchase an activation code for more
. Device limitations
  – Maximum of 128 visible target ports
    • Not all visible target ports will necessarily be active
    • Redundant paths to a single DS8000 node
    • Device-level port configuration
    • Inactive target ports still require client adapter resources
  – Maximum of 64 target devices
    • Any combination of disk and tape
    • Tape libraries and tape drives are counted separately

Create VFC Client Adapter in IBM i Partition Profile

Need to check the box; specify the VIOS LPAR

VFC Client Adapter Properties

Virtual WWPNs are used to configure hosts on the storage server

Disk and Tape Virtualization with NPIV – Assign Storage

. Use the HMC to assign the IBM i LPAR and VFC adapter pair to a physical FC port (the mapping can also be made from the VIOS command line; see the sketch below)
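A minimal VIOS sketch of that mapping, assuming vfchost0 is the virtual FC server adapter for the IBM i LPAR and fcs0 is an NPIV-capable physical port (both names are placeholders):

  # List physical FC ports and how many NPIV connections the fabric supports
  lsnports

  # Map the virtual FC host adapter to the physical port
  vfcmap -vadapter vfchost0 -fcp fcs0

  # Verify the mapping and the client's virtual WWPNs
  lsmap -npiv -vadapter vfchost0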

Disk and Tape Virtualization with NPIV – Configure SAN

• Complete zoning on your switch using the virtual WWPNs generated for the IBM i LPAR
• Configure a host connection on the SAN tied to the virtual WWPN
• Use the storage or tape library UI and Redbook guidance to assign LUNs or tape drives to the WWPN from the VFC client adapter in the IBM i LPAR

Redundant VIOS with NPIV

. Step 1: Configure virtual and physical FC adapters
  – Best practice is to make the VIOS redundant, or to separate individual VIOS partitions, so that a single hardware failure would not take down both VIOS partitions
. Step 2: Configure the SAN fabric and storage
  – Zone LUNs to the virtual WWPNs
  – Each DASD sees a path through 2 VIOS partitions
[Diagram: IBM i client (IASP and SYSBAS) with VFC client/server adapter pairs through two VIOS partitions.]

• Notes: Up to 8 paths per LUN are supported
• Not all paths have to go through separate VIOS partitions

Connecting IBM i to VIOS Storage – VSCSI vs. NPIV

[Diagram: with VSCSI, IBM i sees generic SCSI disks and VIOS owns the SAN LUNs (DS8000, V7000, XIV, DS3500, …); with NPIV, IBM i sees the real device types (V7000, EMC, DS8000, …) over FCP, and VIOS only passes traffic through its FC HBAs.]

VSCSI:
• All storage subsystems* and internal storage supported
• Storage assigned to VIOS first, then virtualized to IBM i

NPIV:
• Some storage subsystems and some FC tape libraries supported
• Storage mapped directly to the virtual FC adapter in IBM i, which uses an N_Port on the FC adapter in VIOS

* See following charts for the list of IBM supported storage devices

Support for IBM Storage Systems with IBM i

Table as of April 2014. Storage families (columns): DS3200 / DS3400 / DS3500 / DCS3700 / DS3950; DS4700 / DS4800 / DS5020; SVC / Storwize V7000 / V3700 / V3500; XIV; DS8100 / DS8300 / DS8700 / DS8800 / DS8870; DS5100 / DS5300.

Rack / Tower systems
  – IBM i version: 6.1 / 7.1 (DS8000: 5.4 / 6.1 / 7.1); hardware: POWER6/7 (DS8000: POWER5/6/7)
  – IBM i attach: DS3xxx (not DS3200#, DS3500##): VIOS VSCSI; DS4xxx / DS5020: VIOS VSCSI; SVC / Storwize: VIOS VSCSI and NPIV%%; XIV: VIOS VSCSI; DS8000: Direct* or VIOS – VSCSI and NPIV**; DS5100 / DS5300: Direct* or VIOS – VSCSI and NPIV%

Power Blades (BCH)
  – IBM i version: 6.1 / 7.1; hardware: POWER6/7 @, #, ##
  – IBM i attach: VIOS VSCSI for DS3xxx, DS4xxx, SVC / Storwize and XIV; DS8000: VIOS VSCSI and NPIV**; DS5100 / DS5300: VIOS VSCSI and NPIV%

PureFlex nodes
  – IBM i version: 6.1 / 7.1; hardware: POWER7/7+
  – IBM i attach: VIOS VSCSI for V7000; other storage attached behind the V7000

Flex nodes
  – IBM i version: 6.1 / 7.1; hardware: POWER7/7+
  – IBM i attach: DS3xxx, DS4xxx and XIV: VIOS VSCSI; V7000 / V3700 / SVC: VIOS VSCSI or NPIV%%; DS8000: Direct or VIOS – VSCSI and NPIV**; DS5100 / DS5300: Native* or VIOS – VSCSI and NPIV%

Use Disk Magic® to evaluate the SAN's performance and configuration. The legend is on the next slide.

Support for IBM Storage Systems with IBM i

Notes
- This table does not list more detailed considerations, for example required firmware or PTF levels, or configuration and performance considerations
- POWER7 servers require IBM i 6.1 or later
- This table can change over time as additional hardware/software capabilities and options are added
# DS3200 only supports SAS connection; not supported on Rack/Tower servers, which use only Fibre Channel connections; supported on Blades with SAS
## DS3500 has either SAS or Fibre Channel connection. Rack/Tower uses only Fibre Channel. Blades in BCH support either SAS or Fibre Channel. Blades in BCS use only SAS.
### Not supported on IBM i 7.1, but see SCORE System RPQ 846-15284 for exception support
* Supported with Smart Fibre Channel adapters – NOT supported with IOP-based Fibre Channel adapters
** NPIV requires IBM i 6.1.1 or later and requires NPIV-capable HBAs (FC adapters) and switches
@ BCH supports DS3400, DS3500, DS3950; BCS supports DS3200, DS3500
% NPIV requires IBM i 7.1 TR2 (Technology Refresh 2) and the latest firmware released May 2011 or later
%% NPIV requires IBM i 7.1 TR6 (Technology Refresh 6)

For more details, use the System Storage Interoperability Center: www.ibm.com/systems/support/storage/config/ssic/
Note: there are currently some differences between the above table and the SSIC. The SSIC should be updated to reflect the above information.

Set Tagged I/O – Specify Client SCSI Adapter for Load Source

Client Adapter created on previous slides - Client SCSI or Client VFC

Physical CD/DVD or Client SCSI adapter if virtualizing the device.

The PC5250 emulator is used for the IBM i console

. Just like the HMC uses

6B25 Adapter Look & Feel

. Similar in look & feel to other IOPless storage adapters

. Attached device resources have real hardware CCINs

Virtual Ethernet

. PowerVM Hypervisor Ethernet switch
  – Part of every Power server
  – Moves Ethernet data between LPARs
  – Can separate traffic based on VLANs
[Diagram: a hosting server bridges its physical adapter to the virtual adapters (CMN) of Client 1 and Client 2 through the hypervisor's VLAN-aware Ethernet switch.]

. Shared Ethernet Adapter (SEA)
  – Part of the VIO server
  – Logical device

  – Bridges traffic to and from external networks
  – VLAN aware
  – Link aggregation for external networks
  – SEA failover for redundancy
. IBM i Bridge Adapter
  – Bridges traffic to and from external networks
  – Introduced in IBM i 7.1 TR3

SEA Failover Configuration for Redundant VIOSs

[Diagram: two VIOS partitions, each with a SEA bridging a physical Ethernet adapter and virtual Ethernet adapters through the hypervisor to the client network.]

SEA Failover and Link Aggregation

. Create a 2nd VIOS
. Each VIOS has a SEA adapter*
. Each VIOS has a link aggregation
. A control channel is created between the 2 VIOS partitions
  – Note: one SEA adapter must have a lower priority at creation**
. Failover and redundancy
  – VIOS 1A could be taken down for maintenance
  – VIOS 1B would take over the network traffic
  – A broken cable or failed adapter, for example, would not disrupt Ethernet traffic
[Diagram: VIOS 1A (primary) and VIOS 1B (standby), each with a link aggregation of 1 Gb physical ports and a SEA; the control channel uses a separate VLAN (PVID 99) while client traffic uses PVID 1.]
A VIOS command-line sketch of the link aggregation follows.
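A minimal VIOS sketch of the link aggregation, assuming ent0 and ent1 are the physical ports and the switch is configured for 802.3ad; adapter names and the mode are placeholders to match your environment:

  # Aggregate two physical ports into one logical Ethernet adapter
  mkvdev -lnagg ent0,ent1 -attr mode=8023ad

  # The new aggregation device (for example ent5) can then back the SEA;
  # confirm its attributes
  lsdev -dev ent5 -attr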

* The HMC must have "Access external network" checked for the ent2 virtual adapter on each VIOS.

** Only one virtual Ethernet adapter used for the SEA (ent2) can have a priority of "1" on the HMC.

HMC: Hosting partition – bridge Ethernet adapter

. Note the setting "Access external network"
  – Required for a Shared Ethernet Adapter
. Client partitions use the same VLAN ID

Create Virtual Adapter in Client Partition

Needs to match the VLAN ID in VIOS

HMC: VIOS – Create Shared Ethernet Adapter

Select Physical Adapter for SEA

Create Virtual Adapter Control Channel

. A control channel is created to allow a primary VIOS to communicate with a secondary VIOS so that a failover can occur if the primary VIOS is unavailable
. The control channel is a virtual Ethernet adapter pair (one on each VIOS) that is linked to the SEA on that VIOS
. Heartbeat messages are passed from the primary to the secondary VIOS over a separate VLAN (PVID)
. The control channel must be created before the failover SEA is created on the secondary VIOS
  – The operation will fail if the control channel doesn't exist

Create Failover SEA on Secondary VIOS
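A minimal VIOS sketch of creating the failover-capable SEA on the secondary VIOS, assuming ent0 is the physical (or aggregated) adapter, ent2 is the trunk virtual adapter on PVID 1, and ent3 is the control-channel adapter on PVID 99; all names and IDs are placeholders (the primary VIOS uses the same command with its own adapters):

  # Create the SEA with failover enabled and the control channel attached
  mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3

  # Verify the new SEA (for example ent4) and its failover attributes
  lsdev -dev ent4 -attr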

View of Both VLANs from the HMC

VLAN 1 – SEAs
VLAN 99 – Control Channel

Dual SEAs

. Another option is to create Shared Ethernet Adapters (SEAs) in each VIOS and make them peers (not primary/secondary)
  – This is also referred to as "load sharing"
. The HMC does not support this feature yet, so you need to use the VIOS command line
. Set ha_mode=sharing when creating the SEAs from the VIOS command line
. If changing existing SEAs that were previously set to primary/secondary, make sure you change the ha_mode attribute on the primary first
  – chdev -dev entX -attr ha_mode=sharing
  – entX is the name of the Shared Ethernet Adapter

10 Gb Shared Ethernet Adapter Performance

. 10 Gb SEAs put a much greater load on VIOS than 1 Gb SEAs
  – Current recommendation is 2 dedicated processors for VIOS partitions that virtualize 10 Gb SEAs
. Make sure the large send attribute is turned on (at the TCP layer)
  – chdev -dev ent2 -attr large_send=yes -perm
. Make sure the flow control attribute is turned on
  – chdev -dev ent2 -attr flow_ctrl=yes -perm
  – Also need to turn on flow control on the network switch

IBM i bridge adapters

From the IBM i command line interface:
• Create an Ethernet line description for the physical Ethernet resource, and set its Bridge identifier to your chosen bridge name.
• Create an Ethernet line description for the selected virtual Ethernet resource, and set its Bridge identifier to the same bridge name.
  - The VE adapter must have "Use this adapter to access the external network" selected.
• When both line descriptions are varied on, traffic is bridged between the two networks, and any other partitions with virtual Ethernet adapters on the same VLAN as the new virtual Ethernet resource can access the same network as the physical Ethernet resource.
A CL sketch follows.
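A minimal CL sketch of the bridging steps above, assuming CMN03 is the physical Ethernet port, CMN04 is the virtual Ethernet resource, and BRIDGE1 is the chosen bridge identifier (all names are placeholders; verify the bridge-identifier parameter on the CRTLINETH prompts for your release):

  /* Line description over the physical Ethernet port */
  CRTLINETH LIND(ETHPHYS) RSRCNAME(CMN03) BRIDGE(BRIDGE1)

  /* Line description over the virtual Ethernet adapter on the same VLAN */
  CRTLINETH LIND(ETHVIRT) RSRCNAME(CMN04) BRIDGE(BRIDGE1)

  /* Vary both on; traffic is bridged once both are active */
  VRYCFG CFGOBJ(ETHPHYS) CFGTYPE(*LIN) STATUS(*ON)
  VRYCFG CFGOBJ(ETHVIRT) CFGTYPE(*LIN) STATUS(*ON)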

Virtual Ethernet Limits

• Maximum virtual Ethernet adapters per LPAR: 256
• Maximum number of VLANs per virtual adapter: 21 (20 VID, 1 PVID)
• Number of virtual adapters per single SEA sharing a single physical network adapter: 16
• Maximum number of VLAN IDs: 4094
• Maximum number of physical adapters in a link aggregation: 8 primary, 1 backup

Where do you have to run VIOS hosting IBM i?

• Power blades
• Power compute nodes

Take advantage of other VIOS capabilities

PowerVM Active Memory Sharing

. Reduce memory costs by improving memory utilization on Power servers

. Supports over-commitment of logical memory, with overflow going to a paging device
. Intelligently flows memory from one partition to another for increased utilization and flexibility
. Memory from a shared physical memory pool is dynamically allocated among logical partitions as needed to optimize overall memory usage
. Designed for partitions with variable memory requirements
. PowerVM Enterprise Edition on POWER6 and POWER7 processor-based systems
  – Partitions must use VIOS for I/O virtualization
. Make sure it's a good fit for you!
[Diagram: POWER server with a Virtual I/O Server, dedicated and shared memory partitions, a shared memory pool, and a paging device managed by the PowerVM hypervisor (AMS).]


* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

LPAR Suspend/Resume – Customer Value

. Planned CEC outages for maintenance/upgrades
  – Suspend/resume may be used in place of, or in conjunction with, partition mobility.
  – Suspend/resume may require less time and effort than a manual database shutdown and restart, for example.

. Resource balancing for long-running batch jobs
  – e.g., suspend lower-priority and/or long-running workloads to free resources.

Minimum requirements:
• All I/O is virtualized
• HMC V7 R7.3
• FSP firmware: Ax730_xxx
• IBM i 7.1 TR2
• VIOS 2.2.1.0 FP24 SP2

Partition Suspend/Resume

[Diagram: suspending an IBM i client partition on a POWER7 system. Steps: 1) validate the environment for appropriate resources; 2) ask the partition if it is ready for suspend; 3) suspend the partition; 4) move memory and CPU state to the reserved storage pool on the storage subsystem (via the VIOS VASI/mover service); 5) partition suspended.]

Partition suspend/resume is supported on POWER7 with IBM i 7.1 TR2. A sketch of the HMC commands follows.
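A minimal sketch of suspend and resume from the HMC command line, assuming the HMC chlparstate command is available at this level; managed-system and partition names are placeholders (the HMC GUI can be used instead):

  # Suspend the IBM i partition
  chlparstate -o suspend -m POWER7_SYS1 -p IBMI_LPAR1

  # Resume it later
  chlparstate -o resume -m POWER7_SYS1 -p IBMI_LPAR1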

PowerVM Live Partition Mobility

• Move a running partition from one system to another with almost no impact to end users

• Requires POWER7 systems or later, PowerVM Enterprise, and all I/O must be through a Virtual I/O Server
• Requires IBM i 7.1 with TR4 or newer
(Movement of the OS and applications to a different server with no loss of service)

Virtualized storage and Network Infrastructure

Potential benefits
• Eliminate planned outages
• Balance workloads across systems
• Energy savings

Requirements & planning
. Source and destination must be mobility-capable and compatible:
  – Enhanced hardware virtualization capabilities
  – Identical or compatible processors
  – Compatible firmware levels
. Source and destination must be LAN connected – same subnet
. All resources (CPU, memory, I/O adapters) must be virtualized prior to migration
  – The hypervisor handles CPU and memory automatically, as required. Virtual I/O adapters are pre-configured, and SAN-attached disks are accessed through the Virtual I/O Server (VIOS).
. Source and destination VIOS must have symmetrical access to the partition's disks
  – e.g., no internal or VIOS LVM-based disks
. The OS is migration enabled/aware
  – Certain tools/application middleware can benefit from being migration aware also
A sketch of the HMC migration commands follows.
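A minimal sketch of validating and running the migration from the HMC command line, assuming the migrlpar command; managed-system and partition names are placeholders:

  # Validate that the partition can move from the source to the target system
  migrlpar -o v -m SOURCE_SYS -t TARGET_SYS -p IBMI_LPAR1

  # Perform the live migration
  migrlpar -o m -m SOURCE_SYS -t TARGET_SYS -p IBMI_LPAR1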

Live Partition Mobility

[Diagram: migration of an IBM i client partition from POWER7 System #1 to POWER7 System #2.]

Steps shown in the diagram:
1. Validate the environment for appropriate resources
2. Create a shell partition on the target system
3. Create virtual SCSI device definitions on the target
4. Start migrating the memory pages
5. Once enough memory pages have been moved, suspend the source partition
6. Finish the migration and remove the original LPAR definitions
[Diagram: the VIOS mover services (VASI) on both systems copy memory over the network while the SAN-attached storage remains shared.]

Partition Mobility is supported on POWER7 with IBM i 7.1 TR4.

Native attached Storage to IBM i

• No VIOS involved.

• Adapters are cabled to Fibre Channel (FC) switch(es).

• Switches include zoning from the SAN to the IBM i partition.

• Active paths are solid, passive paths are dotted (in the diagram)
• Allows for failover recovery on loss of the primary node

• Requires i7.1 TR6 or newer.

• Supported SANs: DS8000/5100/5300, V7000, V3700, V3500, SVC

Direct attached Storage to IBM i

• No VIOS involved.

• Adapters are cabled directly to the SAN

• Active paths are solid, passive paths are dotted (in the diagram)
• Allows for failover recovery on loss of the primary node
• This ties up host ports on the SAN (i.e., they can't be shared)

• Requires i7.1 TR6 or newer.

• Supported SANs: DS8000/5100/5300, V7000, V3700, V3500, SVC

IBM i Virtualization Enhancements

Columns: GA date / enhancement / IBM i 7.1 / IBM i 6.1 with 6.1.1 machine code / virtualization environment

June 2014 – Technology Refresh 8
  – SR-IOV native Ethernet support: 7.1 only – i virtualization
  – Increase vSCSI disks per host adapter to 32: 7.1 only – i virtualization, VIOS

March 2013 – Technology Refresh 6
  – NPIV attach of SVC, Storwize V7000, V3700, V3500: 7.1 only – VIOS

October 2012 – Technology Refresh 5
  – Large receive offload for layer 2 bridging: 7.1 only – VIOS
  – PowerVM V2.2 refresh with SSP and LPM updates: 7.1 only – VIOS

May 2012 – Technology Refresh 4
  – IBM i Live Partition Mobility: 7.1 only – VIOS
  – HMC Remote Restart PRPQ: 7.1 only – VIOS
  – Performance enhancement for zeroing virtual disk: 7.1 only – i virtualization

December 2011 – Technology Refresh 3
  – PowerVM V2.2 refresh with SSP enhancements: 7.1 only – VIOS

IBM i Virtualization Enhancements (continued)

October 2011 – Technology Refresh 3
  – Ethernet layer-2 bridging: 7.1 only – i virtualization
  – Mirroring with NPIV attached storage: 7.1 only – VIOS
  – VPM enhancements to create IBM i partitions: 7.1 and 6.1 (client only) – i virtualization
  – PowerVM NPIV attachment for DS5000 for blades: 7.1 only – VIOS
  – PowerVM V2.2 refresh with network load balancing: 7.1 and 6.1 – VIOS

May 2011 – Technology Refresh 2
  – Partition suspend and resume: 7.1 only – VIOS
  – IBM i to IBM i virtual tape support: 7.1; 6.1 with APAR II14615 (client only) – i virtualization
  – PowerVM NPIV attachment of DS5000: 7.1 only – VIOS

December 2010 – Technology Refresh 1
  – PowerVM with shared storage pools: 7.1 only – VIOS

September 2010 – Technology Refresh 1
  – Support for embedded media changers: 7.1 only – i virtualization
  – Expanded HBA and switch support for NPIV on blades: 7.1 and 6.1 – VIOS

The End

Thank you!

Trademarks

The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.

Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.

For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:

*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®

The following are trademarks or registered trademarks of other companies.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.

* All other products may be trademarks or registered trademarks of their respective companies.

Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
