IBM i Virtualization and Open Storage
IBM i Virtualization and Open Storage
Mike Schambureck, IBM Lab Services, Rochester, MN

Partition Virtualization on POWER
(Diagram: "IO Virtualization with Dedicated Adapters" shows each LPAR owning its own physical adapter and PCI slot; "IO Virtualization with a Hosting Server" shows a server LPAR owning the physical adapter and serving client LPARs through virtual adapters and the hypervisor's virtual fabric, increasing adapter bandwidth and LPAR density per slot.)

Partition Virtualization concepts / benefits
• Virtualization allows you to use the same physical adapter across several partitions simultaneously
  – For storage: disk, tape, optical
  – For Ethernet
• Benefits:
  – Reduces hardware costs
  – Better hardware utilization
  – Take advantage of new capabilities

IBM i Host and Client Partitions: Overview
• DASD
  – Hardware is assigned to the host LPAR in the HMC
  – The hosting server's DASD can be integrated or SAN
  – DASD is virtualized as network storage space (NWSSTG) objects tied to network server descriptions (a CL sketch follows these overview slides)
• Optical
  – A DVD drive in the host LPAR is virtualized directly (OPTxx)
• Networking
  – Network adapter and virtual Ethernet adapter in the host LPAR
  – Virtual Ethernet adapter in the client LPAR
(Diagram: the IBM i host serves NWSSTGs and OPTxx from its integrated or SAN disks (DDxx) and DVD to the IBM i client over a virtual SCSI connection; CMNxx Ethernet reaches the client over a virtual LAN.)

VIO Server and Client Partitions: Overview
• DASD
  – Hardware is assigned to the VIOS LPAR in the HMC
  – DASD can be integrated or SAN
  – Each hdisk# is virtualized as an IBM i DD## device
• Optical
  – A DVD drive in the host VIOS LPAR is virtualized directly (OPT##)
• Networking
  – Network adapter and virtual Ethernet adapter in the VIOS LPAR
  – Virtual Ethernet adapter in the IBM i client LPAR
(Diagram: VIOS serves hdisk## and the DVD from its integrated or SAN disks to the IBM i client as DD## and OPT## over a virtual SCSI connection; CMN## Ethernet reaches the client over a virtual LAN.)

Integrated Server Virtualization concepts / benefits
• Virtualization also allows IBM i to host x86 operating systems
  – For storage: disk (also uses network storage spaces), tape, optical
  – For Ethernet
• Benefits:
  – Take advantage of IBM i ease of use and legendary reliability
  – Designed to pool resources and optimize their use across a variety of operating systems
  – Centralize storage and server management
  – Take advantage of IBM i save/restore interfaces for x86 data
    • Object level (storage space)
    • File level (Windows only)
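The IBM i host and client overview above describes DASD virtualized as network storage space (NWSSTG) objects tied to a network server description (NWSD). A minimal CL sketch of that host-side setup follows; the object names, the 100 GB size, the CTL01 resource name, and the client partition name are illustrative assumptions, not values from this deck, and the exact CRTNWSD parameters depend on the IBM i release.

    /* NWSD tied to the vSCSI server adapter resource and the client partition */
    CRTNWSD NWSD(ICLIENT1) RSRCNAME(CTL01) TYPE(*GUEST *OPSYS) +
            ONLINE(*NO) PWRCTL(*YES) PARTITION('IBMICLIENT1')
    /* 100 GB storage space for the client's first disk */
    CRTNWSSTG NWSSTG(CLIENT1D1) NWSSIZE(102400) FORMAT(*OPEN)
    /* Link the storage space to the NWSD */
    ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(ICLIENT1)
    /* Vary on to present the disk to the client partition */
    VRYCFG CFGOBJ(ICLIENT1) CFGTYPE(*NWS) STATUS(*ON)

This assumes the matching virtual SCSI server adapter (the CTLxx resource) already exists in the host partition's profile, as shown in the adapter-creation slides later in this section.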
Where Do I Start with Virtualization on IBM i on Power systems?
• Latest version of this presentation at: http://www.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
• http://www.ibm.com/systems/resources/systems_power_hardware_blades_i_on_blade_readme.pdf
• https://www.ibm.com/developerworks/community/wikis/home?lang=en#/wiki/IBM%20i%20Technology%20Updates/page/IBM%20i%20on%20a%20Flex%20Compute%20Node

Virtual SCSI (vSCSI): IBM i hosting IBM i or VIOS hosting IBM i
• Assign storage to the physical adapter in the hosting partition
• Requires 512-byte-per-sector LUNs to be assigned to the host
• Many storage options supported
(Diagram: a hosting server with an FC HBA on a POWER6 hypervisor with IBM i 6.1.1 serves three IBM i clients, each seeing device type 6B22.)

vSCSI Storage Mapping
• Storage management and allocation are done both from the external storage and from IBM i/VIOS
• Storage is assigned to the hosting IBM i/VIOS partition
• Within VIOS, you map each hdisk# (LUN) to the vhost adapter corresponding to the client partition
• Within an IBM i host, you map storage spaces (NWSSTG) to the network server description (NWSD) tied to the client partition
• Flexible disk sizes
• Load source requirements
• 16 disks per vSCSI adapter; increased to 32 in IBM i 7.1 TR8 / 7.2
(Diagram: the hosting server's vSCSI server adapter (vhostXXX with hdisk1/hdisk2, or an NWSD with NWSSTGs) connects through the hypervisor to the vSCSI client adapter in the IBM i client, which sees device type 6B22.)

vSCSI Tape and Optical
• The drive is assigned to the hosting partition
• Within VIOS, you map physical tape, physical optical, or file-backed virtual optical to the vhost adapter corresponding to the client partition (see the command-line sketch after this slide)
• An IBM i host automatically maps optical and tape resources to the client using the vSCSI adapter
• VIOS has no tape library support with vSCSI adapters; VFC adapters must be used
(Diagram: the hosting server maps cd1 and rmt1 through vhostXXX to OPT01 and TAP01 in the IBM i client, on a POWER6 hypervisor with IBM i 6.1.1.)
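The tape and optical slide above maps devices from the VIOS command line; here is a minimal sketch of those mappings, for both physical drives and a file-backed virtual optical image. The device names (cd0, rmt0, vhost0, vtopt0) and the ISO file name are assumptions for illustration.

    lsdev -type optical                       # find the physical DVD device (e.g. cd0)
    mkvdev -vdev cd0 -vadapter vhost0         # map the physical DVD to the client's vhost adapter
    mkvdev -vdev rmt0 -vadapter vhost0        # map a physical tape drive the same way

    mkrep -sp rootvg -size 20G                # one-time: create the virtual media repository
    mkvopt -name ibmi_install.iso -file /home/padmin/ibmi_install.iso -ro
    mkvdev -fbo -vadapter vhost0              # create a file-backed optical device (vtoptN)
    loadopt -vtd vtopt0 -disk ibmi_install.iso    # load the image into the virtual drive
    lsmap -vadapter vhost0                    # verify the virtual target devices

Remember the limitation from the slide: tape libraries cannot be virtualized this way; they require VFC (NPIV) adapters.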
Create Virtual SCSI Client Adapter
(HMC screenshot: update the LPAR profile or perform a dynamic LPAR operation.)

Create the Virtual SCSI Server Adapter
(HMC screenshot: specify the IBM i LPAR and the adapter ID used when creating the client adapter in IBM i.)

Assigning VIOS Storage to IBM i – SAN Storage
• VIOS: Create virtual SCSI server adapters in the VIOS partition profile
• VIOS: Create virtual SCSI client adapters in the client IBM i partition profile
• VIOS: Assign storage volumes to the IBM i client partitions (HMC or command line)
• IBM i: Initialize and add the disks to an ASP (from SST)
• Maximum of 32* virtual devices per connection (* requires IBM i 7.1 TR8 or 7.2)
(Diagram: SAN storage volumes are assigned to VIOS, then mapped as vtscsiXX/vtscsiYY devices on vhost0/vhost1 over virtual SCSI connections to DDxx devices in IBM i LPAR #1 and LPAR #2.)

Use HMC Virtual Storage Management to view storage in VIOS
(HMC screenshot.)

View on the HMC and VIOS Command Line
(HMC and VIOS command-line screenshots.)

Virtual Storage Management – Map Disk to IBM i Client
• Option 2 – VIOS command line: mkvdev -vadapter vhost0 -vdev hdisk1

IBM i + NPIV (Virtual Fibre Channel (vFC))
• The hypervisor assigns 2 unique WWPNs to each virtual Fibre Channel adapter (virtual address example: C001234567890001)
• The host on the SAN is created as an iSeries host type
• Requires 520-byte-per-sector LUNs to be assigned to the iSeries host on DS8K
• Can migrate existing direct-connect LUNs
• DS8100, DS8300, DS8700, DS8800, DS5100, DS5300, V7000, SVC, V3700 and V3500 supported
• Note: an NPIV (N_Port) capable switch is required to connect the VIOS to the SAN/tape library to use virtual fibre
(Diagram: VIOS with an 8 Gb HBA on a POWER6 hypervisor with IBM i 6.1.1 serves three IBM i clients on the same system over virtual Fibre Channel.)

Requirements for NPIV with VIOS and IBM i Client Partitions
• Must use 8 Gb or 16 Gb Fibre Channel adapters on the Power system, assigned to VIOS partitions
• Must use a Fibre Channel switch to connect the Power system and the storage server
• The Fibre Channel switch must be NPIV-capable
• The storage server must support NPIV as an attachment between VIOS and IBM i (see the support chart at the end of this section)

NPIV Configuration – Limitations
• Single client adapter per physical port per partition
  – Intended to avoid a single point of failure
  – Documentation only; not enforced
• Maximum of 64 active client connections per physical port
  – It is possible to map more than 64 clients to a single adapter port
  – May be less due to other VIOS resource constraints
• 32K unique WWPN pairs per system platform
  – Removing an adapter does not reclaim its WWPNs; they can be manually reclaimed through the CLI (mksyscfg, chhwres, ...) using the "virtual_fc_adapters" attribute
  – If exhausted, an activation code must be purchased for more
• Device limitations
  – Maximum of 128 visible target ports
    • Not all visible target ports will necessarily be active (redundant paths to a single DS8000 node, device-level port configuration)
    • Inactive target ports still require client adapter resources
  – Maximum of 64 target devices
    • Any combination of disk and tape
    • Tape libraries and tape drives are counted separately

Create VFC Client Adapter in IBM i Partition Profile
(HMC screenshot: the check box that must be selected and the VIOS LPAR to specify.)

VFC Client Adapter Properties
(HMC screenshot: the virtual WWPNs used to configure hosts on the storage server.)
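The two HMC panels above create the VFC client adapter and expose its virtual WWPNs. If the server side of the pair is mapped from the VIOS command line rather than through the HMC GUI, the commands typically used are lsnports and vfcmap; the vfchost0 and fcs0 names below are assumptions for illustration.

    lsnports                               # list physical FC ports and whether they and the fabric support NPIV
    vfcmap -vadapter vfchost0 -fcp fcs0    # bind the virtual FC server adapter to an NPIV-capable physical port
    lsmap -all -npiv                       # verify the mapping, the client partition, and its login status

lsmap -all -npiv also shows the client WWPNs, which is a quick cross-check before the SAN zoning step that follows.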
Disk and Tape Virtualization with NPIV – Assign Storage
• Use the HMC to assign the IBM i LPAR and VFC adapter pair to a physical FC port

Disk and Tape Virtualization with NPIV – Configure SAN
• Complete zoning on your switch using the virtual WWPNs generated for the IBM i LPAR (the WWPNs can also be listed from the HMC command line; see the sketch at the end of this section)
• Configure a host connection on the SAN tied to the virtual WWPN
• Use the storage or tape library UI and the Redbook to assign LUNs or tape drives to the WWPN from the VFC client adapter in the IBM i LPAR

Redundant VIOS with NPIV
• Step 1: Configure virtual and physical FC adapters
  – Best practice is to make VIOS redundant, separating the individual VIOS partitions so that a single hardware failure cannot take down both VIOS partitions
• Step 2: Configure the SAN fabric and storage
  – Zone LUNs to the virtual WWPNs
  – Each DASD sees a path through 2 VIOS partitions
• Notes:
  – Up to 8 paths per LUN are supported
  – Not all paths have to go through separate VIOS partitions
(Diagram: an IBM i client with IASP and SYSBAS storage uses VFC client and server adapters to two VIOS partitions on POWER6, each with its own physical FC connection.)

Connecting IBM i to VIOS Storage – VSCSI vs. NPIV
• VSCSI
  – All storage subsystems* and internal storage supported
  – Storage is assigned to VIOS first, then virtualized to IBM i
  – IBM i sees generic SCSI disks
• NPIV
  – Some storage subsystems and some FC tape libraries supported
  – Storage is mapped directly to the virtual FC adapter in IBM i, which uses an N_Port on the FC adapter in VIOS
  – IBM i sees the native device types (for example V7000, EMC, DS8000)
* See the following chart for the list of IBM supported storage devices
(Diagram: VSCSI path from generic SCSI disks in IBM i through VIOS FC HBAs to SAN storage such as DS8000, V7000, XIV, and DS3500; NPIV path from IBM i over FCP through VIOS FC HBAs to the SAN.)

Support for IBM Storage Systems with IBM i (table as of April 2014)
IBM i version on POWER6/POWER7: 6.1 / 7.1 for the following families:
• DS3200, DS3400, DS3500, DCS3700, DS3950
• DS4700, DS4800, DS5020
• DS5100, DS5300
• SVC, Storwize V7000, V3700, V3500
• XIV
• DS8100, DS8300, DS8700, DS8800, DS8870
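The Configure SAN slide above zones the switch by the virtual WWPNs generated for the IBM i LPAR. Besides the VFC adapter properties panel, the WWPNs can be listed from the HMC command line; a minimal sketch, assuming a managed system named MYSYS and an IBM i partition named IBMICLIENT1 (the lsnportlogin filter syntax may vary by HMC level).

    lshwres -r virtualio --rsubtype fc --level lpar -m MYSYS    # list virtual FC adapters, including client WWPN pairs
    lsnportlogin -m MYSYS --filter lpar_names=IBMICLIENT1       # show the N_Port login status of the partition's WWPNs

In a redundant-VIOS layout like the one above, running these before zoning confirms that both VFC client adapters (one per VIOS) have their expected WWPN pairs.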