Implementing Sun Solaris on IBM BladeCenter Servers

Total Pages: 16

File Type: PDF, Size: 1020 KB

Front cover: Implementing Sun Solaris on IBM BladeCenter Servers. Install and deploy Solaris 10 on the latest BladeCenter servers. Configure Boot from SAN with Fibre Channel. Management using SNMP and IBM Director.

David Watts, Jim Blue, William Champion, Bradley Thomas. ibm.com/redbooks. Redpaper, International Technical Support Organization, May 2007, REDP-4269-00.

Note: Before using this information and the product it supports, read the information in "Notices."

First Edition (May 2007). This edition applies to Sun Solaris 10 11/06 on IBM BladeCenter servers. © Copyright International Business Machines Corporation 2007. All rights reserved. Note to U.S. Government Users Restricted Rights: use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Notices; Trademarks. Preface: The team that wrote this IBM Redpaper; Become a published author; Comments welcome.

Chapter 1. Technical overview
  1.1 Solaris 10 overview
    1.1.1 Solaris Containers
    1.1.2 DTrace
    1.1.3 Predictive Self-Healing
    1.1.4 ZFS
  1.2 BladeCenter overview
    1.2.1 BladeCenter H chassis
    1.2.2 BladeCenter HT chassis
    1.2.3 Blade servers
    1.2.4 BladeCenter HS21 XM server
    1.2.5 BladeCenter LS21 server
  1.3 AMD Opteron architecture
    1.3.1 64-bit extensions: AMD64
    1.3.2 The benefit of AMD64 64-bit computing
    1.3.3 64-bit memory addressing
  1.4 Application migration
    1.4.1 Application Binary Interface
    1.4.2 Solaris Compatibility Assurance Toolkit
  1.5 IBM software supported on Solaris

Chapter 2. Installation
  2.1 Installation
    2.1.1 Disabling Reboot on NMI
    2.1.2 Patches to the 11/06 version
    2.1.3 Installing Solaris 10
  2.2 Boot from SAN
    2.2.1 Boot from SAN overview
    2.2.2 Topology overview
    2.2.3 Configuration overview
  2.3 Use of a Solaris Installation Server
    2.3.1 Setting up an install server
    2.3.2 Setting up a boot server
    2.3.3 Patching the Install Server for LS21/HS21 installations
    2.3.4 Creating an updated DVD image and installing onto an LS21
    2.3.5 Creating a DHCP and PXE server

Chapter 3. Management
  3.1 Advanced Management Module
    3.1.1 Chassis management
    3.1.2 Management module login
    3.1.3 Advanced Management Module Command-Line Interface
  3.2 IBM Director
    3.2.1 Discovery
    3.2.2 SNMP browser
    3.2.3 Managing MIBs
    3.2.4 Compiling MIBs
    3.2.5 Loading MIBs into memory
    3.2.6 Traps
    3.2.7 Monitors
  3.3 Deploying Solaris with the ServerGuide Scripting Toolkit
  3.4 Altiris Deployment Solution
  3.5 Advanced Setup Utility
    3.5.1 Using the ASU definition files
    3.5.2 Using the ASU command
  3.6 Sun Management Center
  3.7 Sun Connection

Chapter 4. Storage
  4.1 LSI RAID and ServeRAID
  4.2 Fibre Channel
    4.2.1 Attaching to DS4000 storage
    4.2.2 Attaching to DS8000 storage

Abbreviations and acronyms
Related publications
Recommended publications
  • MegaRAID® 1078-Based SAS RAID Controllers User's Guide
    MegaRAID® 1078-based SAS RAID Controllers User's Guide, February 2007, 80-00157-01 Rev. A.

    This document contains proprietary information of LSI Logic Corporation. The information contained herein is not to be used by or disclosed to third parties without the express written permission of an officer of LSI Logic Corporation. LSI Logic products are not intended for use in life-support appliances, devices, or systems. Use of any LSI Logic product in such applications without written consent of the appropriate LSI Logic officer is prohibited.

    Purchase of I2C components of LSI Logic Corporation, or one of its sublicensed Associated Companies, conveys a license under the Philips I2C Patent Rights to use these components in an I2C system, provided that the system conforms to the I2C standard specification as defined by Philips.

    Document 80-00157-01 Rev. A, February 2007. This document describes the current versions of the LSI Logic Corporation MegaRAID SAS RAID controllers and will remain the official reference source for all revisions/releases of these products until rescinded by an update. LSI Logic Corporation reserves the right to make changes to any products herein at any time without notice. LSI Logic does not assume any responsibility or liability arising out of the application or use of any product described herein, except as expressly agreed to in writing by LSI Logic; nor does the purchase or use of a product from LSI Logic convey a license under any patent rights, copyrights, trademark rights, or any other of the intellectual property rights of LSI Logic or third parties.
  • Sun Microsystems Solaris 10 What's New
    Solaris 10 What's New. Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, CA 95054 U.S.A. Part No: 817-0547-15, January 2005.

    Copyright 2005 Sun Microsystems, Inc. All rights reserved. This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers. Parts of the product may be derived from Berkeley BSD systems, licensed from the University of California. UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.

    Sun, Sun Microsystems, the Sun logo, docs.sun.com, AnswerBook, AnswerBook2, SunVTS, Java, J2SE, J2EE, JavaServer, JumpStart, Sun Fire, StarOffice, Sun Blade, Sun Ray, Solstice Enterprise Agents, CacheFS, Sun StorEdge, and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. FireWire is a trademark of Apple Computer, Inc., used under license. Netscape and Netscape Navigator are trademarks or registered trademarks of Netscape Communications Corporation. Mozilla is a trademark or registered trademark of Netscape Communications Corporation in the United States and other countries.
  • Disk Array Data Organizations and RAID
    Guest lecture for 15-440: Disk Array Data Organizations and RAID. Greg Ganger, October 2010.

    Plan for today: Why have multiple disks? (storage capacity, performance capacity, reliability.) The load distribution problem and approaches (disk striping). Fault tolerance (replication, parity-based protection). "RAID" and the disk array matrix. Rebuild.

    Why multi-disk systems? A single storage device may not provide enough storage capacity, performance capacity, or reliability. So, what is the simplest arrangement?

    Just a bunch of disks (JBOD): each disk independently holds its own data (figure: four disks holding blocks A0-A3, B0-B3, C0-C3, and D0-D3). Yes, it's a goofy name; industry really does sell "JBOD enclosures".

    Disk subsystem load balancing: I/O requests are almost never evenly distributed. Some data is requested more than other data, depending on the apps, usage, time, and so on. What is the right data-to-disk assignment policy? The common approach is fixed data placement: your data is on disk X, period (for good reasons too: you bought it, or you're paying more). A fancier approach is dynamic data placement: if some files are accessed a lot, the admin (or even the system) may separate the "hot" files across multiple disks; in this scenario, entire file systems (or even files) are manually moved by the system admin to specific disks.
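    As a concrete illustration of the disk striping named in the plan above, here is a minimal sketch (hypothetical code, not from the lecture) that maps a logical block number onto a disk and a per-disk offset, round-robin across the array:

      # Hypothetical illustration of block-level striping across N disks.
      def stripe_map(logical_block: int, num_disks: int) -> tuple[int, int]:
          """Return (disk index, block offset on that disk)."""
          return logical_block % num_disks, logical_block // num_disks

      # With 4 disks, consecutive logical blocks land on disks 0,1,2,3,0,...
      assert stripe_map(0, 4) == (0, 0)   # block 0 -> disk 0, offset 0
      assert stripe_map(5, 4) == (1, 1)   # block 5 -> disk 1, offset 1

    Sequential access then spreads evenly across all disks, which is exactly the load-distribution motivation the lecture gives.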
  • EMC Host Connectivity Guide for Oracle Solaris
    Dell EMC Host Connectivity Guide for Oracle Solaris, P/N 300-000-607, REV 56, May 2020.

    Copyright © 2007-2020 Dell Inc. or its subsidiaries. All rights reserved. Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS-IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE. Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the USA. Dell EMC, Hopkinton, Massachusetts 01748-9103; 1-508-435-1000 (in North America, 1-866-464-7381); www.DellEMC.com.

    Contents: Preface. Part 1, Connecting Solaris to Dell EMC Storage. Chapter 1, Solaris Operating System: Solaris operating system overview; Multipathing software; MPxIO/STMS ...
  • 6Gb/s MegaRAID SAS RAID Controllers User Guide, February 2013
    6Gb/s MegaRAID® SAS RAID Controllers User Guide, February 2013, 41450-04, Rev. B.

    Revision history:
    - 41450-04, Rev. B, February 2013: Updated the environmental conditions for the RAID controllers. Updated this guide to the new template.
    - 41450-04, Rev. A, August 2012: Added the MegaRAID SAS 9270-8i, SAS 9271-4i, SAS 9271-8i, SAS 9271-8iCC, SAS 9286-8e, SAS 9286CV-8e, and SAS 9286CV-8eCC RAID controllers.
    - 41450-03, Rev. A, April 2012: Added the MegaRAID SAS 9265CV-8i, SAS 9266-4i, SAS 9266-8i, and SAS 9285CV-8e RAID controllers.
    - 41450-02, Rev. E, February 2011: Added the MegaRAID SAS 9260CV-4i, SAS 9260CV-8i, SAS 9265-8i, and SAS 9285-8e RAID controllers.
    - 41450-02, Rev. D, June 2010: Added the MegaRAID SAS 9260-16i, SAS 9280-16i4e, and SAS 9280-24i4e RAID controllers.
    - 41450-02, Rev. C, April 2010: Put this guide in the new template.
    - 41450-02, Rev. B, November 2009: Added the MegaRAID SAS 9240-4i, SAS 9240-8i, SAS 9261-8i, and SAS 9280-4i4e RAID controllers.
    - 41450-02, Rev. A, July 2009: Added the MegaRAID SAS 9260-4i, SAS 9260DE-8i, SAS 9280-8e, and SAS 9280DE-8e RAID controllers.
    - 41450-01, Rev. A, June 2009: Added the MegaRAID SAS 9260-8i RAID controller.
    - 41450-00, Rev. A, March 2009: Initial release of this document.

    LSI, the LSI & Design logo, CacheCade, CacheVault, Fusion-MPT, MegaRAID, and SafeStore are trademarks or registered trademarks. All other brand and product names may be trademarks of their respective companies.
  • RAID-II: Design and Implementation of a Large Scale Disk Array Controller
    NASA-CR-192911. RAID-II: Design and Implementation of a Large Scale Disk Array Controller. R. H. Katz, P. M. Chen, A. L. Drapeau, E. K. Lee, K. Lutz, E. L. Miller, S. Seshan, D. A. Patterson. Report No. UCB/CSD-92-705, October 1992. Computer Science Division (EECS), University of California, Berkeley, Berkeley, California 94720.

    Abstract: We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5" disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19" racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance.
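    A quick back-of-the-envelope check of those design targets (our own illustration; the disk count, capacity, and throughput figures come from the abstract, everything else is assumption):

      # Sanity-check the stated targets: ~100 disks, 40 GB, 40 MB/s, Gb/s LAN.
      disks, total_gb, sustained_mb_s = 100, 40, 40

      per_disk_mb = total_gb * 1000 / disks    # ~400 MB per drive, plausible
      print(per_disk_mb)                       # for early-1990s 3.5" disks

      link_mb_s = 1000 / 8                     # a 1 Gb/s link is 125 MB/s
      print(sustained_mb_s / link_mb_s)        # array sustains ~1/3 of it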
  • ICP Disk Array Controllers
    ICP Disk Array Controllers: GDT8546RZ. Excellence in Controllers.

    GDT8546RZ is a 4-channel PCI RAID controller offering the option of connecting up to four Serial ATA drives, satisfying the demand for inexpensive entry-level RAID solutions. Parallel ATA (IDE) disk drives can be linked with an optional accessory. The controller is based on Serial AT Attachment (SATA) technology. Its compact construction allows the GDT8546RZ to be used in low-profile systems. The controller is suitable not only for high-density and/or entry-level rackmount servers, but also for high-end desktops and workstations.

    64-bit/66 MHz PCI Serial ATA hardware RAID controller:
    - Supports RAID 0, 1, 4, 5, and 10
    - Hot Plug, Auto Hot Plug with SAF-TE support
    - Private or Pool Hot Fix drives
    - Hardware RAID is totally independent of the host and operating system
    - Integrated acoustic alarm

    Versatile connectivity capabilities:
    - Supports four internal independent SATA 1.5 channels
    - Data transfer rate up to 150 MB/sec per channel
    - Connection of one hard drive per channel (point-to-point configuration making cabling redundant)

    High configuration flexibility:
    - ROM-resident setup (ICP RAID Console) with integrated "Express Setup" function (CTRL+<G>)
    - Simultaneous operation of several disk arrays
    - Flexible capacity setting
    - Online capacity expansion
    - Online RAID level migration
    - Complete disk array migration
    - Controller switch and controller upgrade without reconfiguration
    - Automatic adaptation to changes in SATA configuration

    Integrated RAID software:
    - Cable length ...
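    To make the trade-off among the supported RAID levels concrete, here is a small hypothetical sketch (ours, not ICP's) of usable capacity given identical drives:

      # Usable capacity for the RAID levels listed above, given n identical
      # disks. Illustration only; RAID 1 is modeled as a two-disk mirror.
      def usable_capacity_gb(level: int, n_disks: int, disk_gb: float) -> float:
          if level == 0:                    # striping, no redundancy
              return n_disks * disk_gb
          if level == 1:                    # two-disk mirror
              assert n_disks == 2
              return disk_gb
          if level in (4, 5):               # one disk's worth of parity
              return (n_disks - 1) * disk_gb
          if level == 10:                   # striped mirrors
              assert n_disks % 2 == 0
              return n_disks * disk_gb / 2
          raise ValueError("unsupported RAID level")

      print(usable_capacity_gb(0, 4, 500))    # 2000.0: fast, no protection
      print(usable_capacity_gb(5, 4, 500))    # 1500.0: survives one failure
      print(usable_capacity_gb(10, 4, 500))   # 1000.0: survives one per pair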
  • 416 Distributed Systems
    416 Distributed Systems: RAID, February 16, 2018. Thanks to Greg Ganger and Remzi Arpaci-Dusseau for slides.

    Outline: using multiple disks; why have multiple disks?; problem and approaches; RAID levels and performance.

    Motivation: why use multiple disks?
    - Capacity: more disks allow us to store more data.
    - Performance: access multiple disks in parallel; each disk can be working on an independent read or write; overlap seek and rotational positioning time for all.
    - Reliability: recover from disk (or single-sector) failures; will need to store multiple copies of data to recover.

    So, what is the simplest arrangement? Just a bunch of disks (JBOD): each disk holds its own data independently (figure: four disks holding blocks A0-A3, B0-B3, C0-C3, and D0-D3). Yes, it's a goofy name; industry really does sell "JBOD enclosures".

    Disk subsystem load balancing: I/O requests are almost never evenly distributed. Some data is requested more than other data, depending on the apps, usage, time, and so on. What is the right data-to-disk assignment policy? The common approach is fixed data placement: your data is on disk X, period (for good reasons too: you bought it, or you're paying more). Fancier is dynamic data placement: if some of your files are accessed a lot, the admin (or even the system) may separate the "hot" files across multiple disks; in this scenario, entire file systems (or even files) are manually moved by the system admin to specific disks. An alternative is disk striping: stripe all ...
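    Since the outline continues to RAID levels, here is a minimal sketch (hypothetical code, not from the course) of the parity idea behind RAID 4/5: parity is the bytewise XOR of the data blocks, so any single lost block can be rebuilt from the survivors:

      from functools import reduce

      def xor_blocks(blocks: list[bytes]) -> bytes:
          """Bytewise XOR of equally sized blocks (the parity computation)."""
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

      data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data disks
      parity = xor_blocks(data)            # block stored on the parity disk

      # The disk holding data[1] fails: rebuild from survivors plus parity.
      rebuilt = xor_blocks([data[0], data[2], parity])
      assert rebuilt == data[1]

    This is why an array with one parity disk tolerates exactly one failure: XOR recovers one unknown, not two.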
  • IBM XIV Storage System Host Attachment and Interoperability
    Front cover: IBM XIV Storage System Host Attachment and Interoperability. Integrate with DB2, VMware ESX, and Microsoft Hyper-V. Get operating system specifics for host-side tuning. Use XIV with IBM i, N series, and ProtecTIER.

    Bertrand Dufrasne, Bruce Allworth, Desire Brival, Mark Kremkus, Markus Oscheka, Thomas Peralto. ibm.com/redbooks. International Technical Support Organization, March 2013, SG24-7904-02.

    Note: Before using this information and the product it supports, read the information in "Notices."

    Third Edition (March 2013). This edition applies to the IBM XIV Storage System (Machine types 2812-114 and 2810-114) with XIV system software Version 11.1.1. © Copyright International Business Machines Corporation 2012, 2013. All rights reserved. Note to U.S. Government Users Restricted Rights: use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

    Contents: Notices; Trademarks. Preface: The team who wrote this book; Now you can become a published author, too!; Comments welcome; Stay connected to IBM Redbooks. Summary of changes: March 2013, Third Edition.

    Chapter 1. Host connectivity
    - 1.1 Overview
      - 1.1.1 Module, patch panel, and host connectivity
      - 1.1.2 Host operating system support
      - 1.1.3 Downloading the entire XIV support matrix by using the SSIC
      - 1.1.4 Host Attachment Kits
      - 1.1.5 Fibre Channel versus iSCSI access
    - 1.2 Fibre Channel connectivity
      - 1.2.1 Preparation steps
      - 1.2.2 Fibre Channel configurations ...
  • Fork Yeah! The Rise & Development of illumos
    Fork Yeah! The Rise & Development of illumos. Bryan Cantrill, VP, Engineering, [email protected], @bcantrill.

    WTF is illumos?
    - An open source descendant of OpenSolaris...
    - ...which itself was a branch of Solaris Nevada...
    - ...which was the name of the release after Solaris 10...
    - ...and was open but is now closed...
    - ...and is itself a descendant of Solaris 2.x...
    - ...but it can all be called "SunOS 5.x"...
    - ...but not "SunOS 4.x" (that's different).
    - Let's start at (or rather, near) the beginning...

    SunOS: A people's history
    - In the early 1990s, after a painful transition to Solaris, much of the SunOS 4.x engineering talent had left.
    - Problems were compounded by the adoption of an immature SCM, the Network Software Environment (NSE).
    - The engineers revolted: Larry McVoy developed a much simpler variant of NSE called NSElite (ancestor to git).
    - Using NSElite (and later, TeamWare), Roger Faulkner, Tim Marsland, Joe Kowalski and Jeff Bonwick led a sufficiently parallelized development effort to produce Solaris 2.3, "the first version that worked".
    - ...but with Solaris 2.4, management took over day-to-day operations of the release, and quality slipped again.

    Solaris 2.5: Do or die
    - Solaris 2.5 absolutely had to get it right: Sun had new hardware, the UltraSPARC-I, that depended on it.
    - To assure quality, the engineers "took over," with Bonwick installed as the gatekeeper.
    - Bonwick was granted authority to "rip it out if it's broken": an early BDFL model, and a template for later generations of engineering leadership.
    - Solaris 2.5 shipped on schedule and at quality.
  • new/usr/src/Makefile.lint
    A webrev-style diff listing of new/usr/src/Makefile.lint (8706 bytes, Mon May 27 09:44:59 2013), from a change summarized as "XXX Remove nawk(1)". The excerpt shows two parts of the file: the CDDL license header and part of the list of cmd/ subdirectories that are lint targets.

      #
      # CDDL HEADER START
      #
      # The contents of this file are subject to the terms of the
      # Common Development and Distribution License (the "License").
      # You may not use this file except in compliance with the License.
      #
      # You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
      # or http://www.opensolaris.org/os/licensing.
      # See the License for the specific language governing permissions
      # and limitations under the License.
      #
      # When distributing Covered Code, include this CDDL HEADER in each
      # file and include the License file at usr/src/OPENSOLARIS.LICENSE.
      # If applicable, add the following below this CDDL HEADER, with the
      # fields enclosed by brackets "[]" replaced with your own identifying
      # information: Portions Copyright [yyyy] [name of copyright owner]
      #
      # CDDL HEADER END
      #

      # Copyright (c) 2003, 2010, Oracle and/or its affiliates.

      cmd/chgrp \
      cmd/chmod \
      cmd/chown \
      cmd/chroot \
      cmd/clinfo \
      cmd/cmd-crypto \
      cmd/cmd-inet/lib \
      cmd/cmd-inet/lib/netcfgd \
      cmd/cmd-inet/lib/nwamd \
      cmd/cmd-inet/sbin \
      cmd/cmd-inet/usr.bin \
      cmd/cmd-inet/usr.lib/bridged \
      cmd/cmd-inet/usr.lib/dsvclockd \
      cmd/cmd-inet/usr.lib/ilbd \
      cmd/cmd-inet/usr.lib/in.dhcpd \
      cmd/cmd-inet/usr.lib/in.mpathd \
      cmd/cmd-inet/usr.lib/in.ndpd \
      cmd/cmd-inet/usr.lib/inetd \
      cmd/cmd-inet/usr.lib/pppoe \
      cmd/cmd-inet/usr.lib/slpd \
      cmd/cmd-inet/usr.lib/vrrpd \
      cmd/cmd-inet/usr.lib/wpad \
      cmd/cmd-inet/usr.lib/wanboot \
      cmd/cmd-inet/usr.sadm \
      cmd/cmd-inet/usr.sbin \
      cmd/cmd-inet/usr.sbin/ilbadm \
  • Red Hat Enterprise Linux: Your Solaris Alternative
    Red Hat Enterprise Linux: Your Solaris Alternative. www.redhat.com

    Contents: Introduction. Factors that influence operating system choice (new projects; mandated migration). Business requirements to consider (strength of ISV support; application migration considerations; performance; availability and scalability; security). Total cost of ownership (TCO) (features of comparison). Detailed comparison of selected features (filesystems and volume managers: Ext3, Ext4, XFS vs. UFS and ZFS; DTrace vs. SystemTap; software management). Conclusion (platform support; customer value).

    Introduction: There were two primary reasons that IT professionals previously chose the Oracle Sun SPARC platform to power their IT infrastructures: the performance of the hardware and the robustness of the Solaris operating system. As the price, performance, and reliability of industry-standard x86_64 servers have increased to the point where they can meet and exceed these features, the reasons to continue buying SPARC hardware have become less and less compelling. This is particularly true with large, multi-core x86 systems that are designed specifically for Linux®, such as the latest 128-core systems. Similarly, Linux, and in particular Red Hat® Enterprise Linux, has emerged as the operating system of choice to leverage the benefits of open, industry-standard architectures.

    Selecting an operating system for your IT infrastructure has long-term consequences. The selection process must take into account not only the technical features of the current operating system, but also the ability of the operating system to enable and support your future business requirements. While Oracle has quelled some worry over their commitment to Solaris, the move to Solaris 11 will likely be as painful as the move from Solaris 8/9 to Solaris 10, as Solaris 11 is significantly different from Solaris 10.