IBM Smart Analytics Systems

Total Pages: 16

File Type: PDF, Size: 1020 KB

Front cover: IBM Smart Analytics System
Understand IBM Smart Analytics System configuration
Learn how to administer IBM Smart Analytics System
Integrate with existing IT systems

Whei-Jen Chen, Rafael Aielo, Silvio Luiz Correia Ferrari, Zohar Nissare-Houssen
ibm.com/redbooks

International Technical Support Organization
IBM Smart Analytics System
February 2011
SG24-7908-00

Note: Before using this information and the product it supports, read the information in “Notices” on page vii.

First Edition (February 2011)
This edition applies to IBM Smart Analytics System 5600, 7600, and 7700.
© Copyright International Business Machines Corporation 2011. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices  vii
Trademarks  viii
Preface  ix
The team who wrote this book  ix
Acknowledgements  x
Now you can become a published author, too!  xi
Comments welcome  xii
Stay connected to IBM Redbooks  xii

Chapter 1. IBM Smart Analytics System  1
1.1 Overview  2
1.1.1 Architecture  4
1.2 IBM Smart Analytics System portfolio  7
1.2.1 IBM Smart Analytics System 1050 and 2050  8
1.2.2 IBM Smart Analytics System 5600  8
1.2.3 IBM Smart Analytics System 7600 and 7700  10
1.2.4 IBM Smart Analytics System 9600  13
1.2.5 IBM Smart Analytics System family summary  13
1.3 IBM training  14
1.3.1 IBM Professional Certification  15
1.3.2 Information Management Software Services  15
1.3.3 IBM Software Accelerated Value Program  17
1.3.4 Protect your software investment: Ensure that you renew your Software Subscription and Support  17

Chapter 2. Installation and configuration  19
2.1 Planning  20
2.1.1 Smart Analytics System customer worksheet  20
2.1.2 Floor diagram and specification review  25
2.2 Installation of IBM Smart Analytics System  26
2.2.1 Installation at the IBM Customer Solution Center  26
2.2.2 Installation at the customer site  27
2.3 Documentation and support for the IBM Smart Analytics System  28

Chapter 3. High availability  31
3.1 High availability on IBM Smart Analytics System  32
3.2 IBM Tivoli System Automation for Multiplatforms on IBM Smart Analytics System  33
3.3 High availability overview for the core warehouse servers  34
3.4 Managing high availability resources for the core warehouse  39
3.4.1 Monitoring the high availability resources for the core warehouse  41
3.4.2 Starting and stopping resources with the high availability management toolkit  43
3.4.3 Starting and stopping resources with Tivoli SA MP commands  45
3.4.4 Manual node failover for maintenance  46
3.4.5 Manual node failback  48
3.5 High availability for the warehouse application module  51
3.5.1 Starting and stopping high availability resources for warehouse application servers  56
3.5.2 Manual failover warehouse application node  56
3.5.3 Manual failback of the warehouse application node  57
3.6 High availability for the business intelligence module  59
3.6.1 Starting and stopping high availability resources for BI module  65
3.6.2 Manual failover BI type 1 node to BI type 2 node  69
3.6.3 Manual failover BI type 2 node to BI type 1 node  69
3.6.4 Manual failback BI type 1 node  70
3.6.5 Manual failback BI type 2 node  71

Chapter 4. Maintenance  75
4.1 Managing DB2 message logs  76
4.1.1 The db2dback shell script  76
4.1.2 db2support -archive  76
4.1.3 The db2diag utility  77
4.2 Changing the date and time  80
4.3 IBM Smart Analytics System upgrades  82
4.3.1 IBM Smart Analytics System software and firmware stacks  82
4.3.2 The Dynamic System Analysis tool  83
4.3.3 IBM Smart Analytics System Control Console  85
4.4 IBM HealthCheck Service  86
4.5 IBM Smart Analytics System installation report  87
4.6 IBM Smart Analytics System backup and recovery  88
4.6.1 Operating system backup and recovery  89
4.6.2 Database backup and recovery  97

Chapter 5. Monitoring tools  105
5.1 Cluster and operating system monitoring  106
5.1.1 AIX and Linux  106
5.1.2 IBM Systems Director  106
5.2 DB2 monitoring  110
5.2.1 DB2 monitoring utilities  111
5.2.2 DB2 Performance Expert for Linux, UNIX, and Windows  117
5.3 Storage monitoring  122
5.3.1 IBM Remote Support Manager  122
5.3.2 Internal disks  124
5.3.3 SAN switches  124
5.4 Network monitoring  126

Chapter 6. Performance troubleshooting  127
6.1 Global versus local server performance troubleshooting  128
6.1.1 Running performance troubleshooting commands  129
6.1.2 Formatting the command output  135
6.2 Performance troubleshooting at the operating system level  137
6.2.1 CPU, run queue, and load average monitoring  137
6.2.2 Disk I/O and block queue  142
6.2.3 Memory usage  148
6.2.4 Network  150
6.3 DB2 performance troubleshooting  152
6.3.1 CPU consumption  155
6.3.2 I/O usage  169
6.3.3 DB2 memory usage  186
6.3.4 DB2 network usage  193
6.4 Common scenario: Data skew  195
6.4.1 Operating system monitoring  196
6.4.2 DB2 monitoring  196

Chapter 7. Advanced configuration and tuning  203
7.1 Configuration parameters  204
7.1.1 Operating system and kernel parameters  204
7.1.2 DB2 configuration …
Recommended publications
  • 20 Linux System Monitoring Tools Every SysAdmin Should Know, by nixCraft, June 27, 2009 (last updated November 6, 2012)
Need to monitor Linux server performance? Try these built-in commands and a few add-on tools. Most Linux distributions are equipped with tons of monitoring tools. These tools provide metrics that can be used to get information about system activities. You can use these tools to find the possible causes of a performance problem. The commands discussed below are some of the most basic commands when it comes to system analysis and debugging server issues such as:
1. Finding out bottlenecks.
2. Disk (storage) bottlenecks.
3. CPU and memory bottlenecks.
4. Network bottlenecks.

#1: top - Process Activity Command
The top program provides a dynamic real-time view of a running system, i.e. actual process activity. By default, it displays the most CPU-intensive tasks running on the server and updates the list every five seconds.

[Fig. 01: Linux top command]

Commonly Used Hot Keys
The top command provides several useful hot keys:
Hot Key  Usage
t        Displays summary information off and on.
m        Displays memory information off and on.
A        Sorts the display by top consumers of various system resources. Useful for quick identification of performance-hungry tasks on a system.
f        Enters an interactive configuration screen for top. Helpful for setting up top for a specific task.
o        Enables you to interactively select the ordering within top.
r        Issues renice command.
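The hot keys above apply to interactive use; for scripted monitoring, top can also be run in batch mode and parsed. Below is a minimal, hedged sketch (Python, assuming a Linux host with procps top on the PATH; the column layout can vary between top versions, so the header row is located dynamically) that prints the most CPU-hungry processes from a single batch snapshot.

```python
#!/usr/bin/env python3
"""Print the top N CPU consumers from one batch-mode snapshot of `top`.

Sketch only: assumes a Linux system with procps `top` (the -b/-n flags)
and the usual "PID USER ... %CPU %MEM ... COMMAND" process-table header.
"""
import subprocess

def top_cpu_consumers(limit=5):
    # -b: batch mode (no curses), -n 1: a single iteration
    out = subprocess.run(["top", "-b", "-n", "1"],
                         capture_output=True, text=True, check=True).stdout
    lines = out.splitlines()

    # Find the process-table header so we know which column holds %CPU.
    header_idx = next(i for i, l in enumerate(lines) if "PID" in l and "%CPU" in l)
    header = lines[header_idx].split()
    cpu_col = header.index("%CPU")
    cmd_col = header.index("COMMAND")

    rows = []
    for line in lines[header_idx + 1:]:
        fields = line.split()
        if len(fields) <= max(cpu_col, cmd_col):
            continue
        try:
            cpu = float(fields[cpu_col].replace(",", "."))  # some locales print commas
        except ValueError:
            continue
        rows.append((cpu, fields[0], fields[cmd_col]))

    for cpu, pid, cmd in sorted(rows, reverse=True)[:limit]:
        print(f"{cpu:6.1f}%  pid={pid:<8} {cmd}")

if __name__ == "__main__":
    top_cpu_consumers()
```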
  • Java Bytecode Manipulation Framework
Notice
About this document: The following copyright statements and licenses apply to software components that are distributed with various versions of the OnCommand Performance Manager products. Your product does not necessarily use all the software components referred to below. Where required, source code is published at the following location: ftp://ftp.netapp.com/frm-ntap/opensource/ 215-09632_A0_ur001 - Copyright 2014 NetApp, Inc. All rights reserved.

Copyrights and licenses
The following component is subject to the ANTLR License:
• ANTLR, ANother Tool for Language Recognition - 2.7.6, © Copyright ANTLR / Terence Parr 2009

ANTLR License
SOFTWARE RIGHTS
ANTLR 1989-2004, developed by Terence Parr. Partially supported by University of San Francisco & jGuru.com. We reserve no legal rights to the ANTLR--it is fully in the public domain. An individual or company may do whatever they wish with source code distributed with ANTLR or the code generated by ANTLR, including the incorporation of ANTLR, or its output, into commercial software. We encourage users to develop software with ANTLR. However, we do ask that credit is given to us for developing ANTLR. By "credit", we mean that if you use ANTLR or incorporate any source code into one of your programs (commercial product, research project, or otherwise) that you acknowledge this fact somewhere in the documentation, research report, etc. If you like ANTLR and have developed a nice tool with the output, please mention that you developed it using ANTLR. In addition, we ask that the headers remain intact in our source code. As long as these guidelines are kept, we expect to continue enhancing this system and expect to make other tools available as they are completed.
  • AIX Version 7.2: Performance Tools Guide and Reference
AIX Version 7.2 Performance Tools Guide and Reference, IBM

Note: Before using this information and the product it supports, read the information in “Notices” on page 315. This edition applies to AIX Version 7.2 and to all subsequent releases and modifications until otherwise indicated in new editions. © Copyright IBM Corporation 2015, 2018. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
About this document  v
Highlighting  v
Case-sensitivity in AIX  v
ISO 9000  v
Performance Tools Guide and Reference  1
What's new in Performance Tools Guide and Reference  2
CPU Utilization Reporting Tool (curt)  2
Syntax for the curt Command  2
Measurement and Sampling  3
Examples of the curt command  4
Simple performance lock analysis tool (splat)  32
splat command syntax  32
Measurement and sampling  33
Examples of generated reports  35
Hardware performance monitor APIs and tools  50
Performance monitor accuracy  51
Performance monitor context and state  51
Performance monitoring agent  …
The procmon tool  212
Overview of the procmon tool  212
Components of the procmon tool  212
Filtering processes  215
Performing AIX commands on processes  215
Profiling tools  215
The timing commands  215
The prof command  216
The gprof command  217
The tprof command  219
The svmon command  226
Security  227
The svmon configuration file  227
Summary report metrics  227
Report formatting options  228
Segment details and -O options  230
Additional -O options  234
Reports details  238
Remote Statistics Interface API Overview  259
Remote Statistics Interface list of subroutines  260
  • How to Maintain Happy SAS® Users
NESUG 2007, Administration & Support. How to Maintain Happy SAS® Users. Margaret Crevar, SAS Institute Inc., Cary, NC.

ABSTRACT
Today's SAS® environment has high numbers of concurrent SAS processes and ever-growing data volumes. It is imperative to proactively manage system resources and performance to keep your SAS community productive and happy. We have found that ensuring your SAS applications have the proper computer resources is the best way to make sure your SAS users remain happy.

INTRODUCTION
There is one common thread when working with the IT administration staff at a SAS customer's location with regard to what they can do to maintain happy SAS users: ensure that the underlying hardware is properly configured to support the SAS applications. This is not a trivial task, since different SAS applications need the hardware configured differently, and your understanding of how SAS will be used shapes how you evaluate options for the hardware, operating system, and infrastructure (mid-tier) configuration. This is easier for existing SAS customers and more difficult with new SAS customers or new SAS applications at an existing SAS customer site. In this paper we will:
• discuss briefly how SAS works, especially from an I/O perspective
• give some guidance on how to initially configure hardware for SAS usage
• give some guidance on how to monitor the hardware to avoid running out of a computer resource (see the sketch below)
• discuss whether you should run all your SAS components under a single operating system or split them across multiple operating systems
This paper pulls together information that has been presented in recent SAS Global Forum and SUGI papers.
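The paper's core advice is to watch resource headroom before users feel the shortage. As a purely illustrative, hedged sketch (Python; the mount points and the 15% threshold below are assumptions for the example, not values from the paper), a small scheduled check like this could warn when a SAS work filesystem is close to full:

```python
#!/usr/bin/env python3
"""Warn when filesystems used by SAS are running low on free space.

Illustrative sketch only: the paths and the 15% free-space threshold are
assumptions for the example, not recommendations from the paper.
"""
import shutil
import sys

# Hypothetical mount points for SAS data/work space on this host.
WATCHED_PATHS = ["/sasdata", "/saswork"]
MIN_FREE_FRACTION = 0.15  # warn when less than 15% of the filesystem is free

def check(paths):
    low = []
    for path in paths:
        try:
            usage = shutil.disk_usage(path)
        except FileNotFoundError:
            print(f"skipping {path}: not present on this host")
            continue
        free_fraction = usage.free / usage.total
        print(f"{path}: {free_fraction:.1%} free "
              f"({usage.free / 2**30:.1f} GiB of {usage.total / 2**30:.1f} GiB)")
        if free_fraction < MIN_FREE_FRACTION:
            low.append(path)
    return low

if __name__ == "__main__":
    low = check(WATCHED_PATHS)
    if low:
        print("WARNING: low free space on: " + ", ".join(low))
        sys.exit(1)
```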
  • UNIX OS Agent User's Guide
IBM Tivoli Monitoring Version 6.3.0, UNIX OS Agent User's Guide, SC22-5452-00

Note: Before using this information and the product it supports, read the information in “Notices” on page 399. This edition applies to version 6, release 3 of IBM Tivoli Monitoring (product number 5724-C04) and to all subsequent releases and modifications until otherwise indicated in new editions. © Copyright IBM Corporation 1994, 2013. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Tables  vii
Chapter 1. Using the monitoring agent  1
New in this release  2
Components of the monitoring agent  3
User interface options  4
Chapter 2. Requirements for the monitoring agent  5
Enabling the Monitoring Agent for UNIX OS to run as a nonroot user  7
Securing your IBM Tivoli Monitoring installation  7
Setting overall file ownership and permissions for …
Solaris System CPU Workload workspace  28
Solaris Zone Processes workspace  28
Solaris Zones workspace  28
System Details workspace  28
System Information workspace  29
Top CPU-Memory %-VSize Details workspace  30
UNIX OS workspace  30
UNIX Detail workspace  31
Users workspace  31
Chapter 4. Attributes  33
Agent Availability Management Status attributes  36
Agent Active Runtime Status attributes  37
AIX AMS attributes  38
  • Nmon Performance Monitor Splunk App for Unix and Linux Systems Documentation Release 1.8.0
Nmon Performance monitor Splunk app for Unix and Linux systems Documentation, Release 1.8.0. Guilhem Marchand, October 07, 2016.

Contents
1 Overview:  3
1.1 About Nmon Performance monitor for Splunk  3
1.2 Release notes  5
1.3 Known Issues  30
1.4 Support  30
1.5 Issues and enhancement requests  31
1.6 Scripts and Binaries  31
1.7 licence  33
2 Documentation:  35
2.1 Introduction  35
2.2 Deployment Matrix  37
2.3 Deployment topologies  39
2.4 Download  42
2.5 Running on Windows  43
2.6 Deploy to single server instance  43
2.7 Deploy to distributed deployment  46
2.8 Deployment on Search Heads running in SH Pooling mode  55
2.9 Deploying Nmon Performance Monitor in SH Clusters  55
2.10 Deploy to Splunk Cloud  55
2.11 Managing Nmon Central Repositories  59
2.12 rsyslog / nmon-logger deployment  61
2.13 syslog-ng / nmon-logger …
  • pPE52 - AIX Performance Tuning - Part 2 – I/O
10/28/2014. Jaqui Lynch, Enterprise Architect, Forsythe Solutions Group. pPE52 - AIX Performance Tuning - Part 2 – I/O. © Copyright IBM Corporation 2014. Technical University/Symposia materials may not be reproduced in whole or in part without the prior written permission of IBM.

Agenda
• Part 1: CPU, Memory tuning, Starter Set of Tunables
• Part 2: I/O, Volume Groups and File systems, AIO and CIO for Oracle
• Part 3: Network, Performance Tools

I/O

Adapter Priorities affect Performance: check the various Technical Overview Redbooks at http://www.redbooks.ibm.com/

Power8 – S814 and S824 Adapter Slot Priority

I/O Bandwidth – understand adapter differences
• PCIe2 LP 8Gb 4-port Fibre HBA: data throughput 3200 MB/s FDX per port; IOPS 200,000 per port; can run at 2Gb, 4Gb, or 8Gb. See http://www.redbooks.ibm.com/technotes/tips0883.pdf
• PCIe2 8Gb 1- or 2-port Fibre HBA: data throughput 1600 MB/s FDX per port; IOPS up to 142,000 per card
The figures above are approximate, taken from card specifications. Look at the DISK_SUMM tab in the nmon analyser: sum the reads and writes, figure out the average and max, then divide by 1024 to get MB/s (a small sketch of this calculation follows below).

Adapter bandwidth

disk_summ tab in nmon
[Chart: disk total KB/s for host b740ft1, 1/12/2013 – Disk Read KB/s, Disk Write KB/s, and IO/sec plotted over time, 21:00 to 21:29]
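The arithmetic described above (sum reads and writes from the DISK_SUMM data, take the average and maximum, divide by 1024 for MB/s) is easy to script. A minimal sketch follows, assuming the DISK_SUMM tab has been exported to a CSV file with timestamp, read-KB/s, and write-KB/s columns (the file name and column names are assumptions for the example):

```python
#!/usr/bin/env python3
"""Average and peak disk throughput from exported nmon DISK_SUMM data.

Sketch only: assumes the DISK_SUMM tab was saved as a CSV with columns
named 'timestamp', 'read_kb_s' and 'write_kb_s' (names are illustrative).
"""
import csv

def disk_throughput_mb_s(csv_path):
    totals_kb = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Sum reads and writes for each interval, as the slide suggests.
            totals_kb.append(float(row["read_kb_s"]) + float(row["write_kb_s"]))
    if not totals_kb:
        raise ValueError("no data rows found")
    avg_mb = sum(totals_kb) / len(totals_kb) / 1024  # KB/s -> MB/s
    max_mb = max(totals_kb) / 1024
    return avg_mb, max_mb

if __name__ == "__main__":
    avg_mb, max_mb = disk_throughput_mb_s("disk_summ.csv")
    print(f"average throughput: {avg_mb:.1f} MB/s, peak: {max_mb:.1f} MB/s")
    print("compare these against the per-port adapter ratings above")
```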
  • Optimizing AIX 7 Performance: Part 1, Disk I/O Overview and Long-Term Monitoring Tools (sar, nmon, and topas), by Martin Brown and Ken Milberg, October 12, 2010
Learn more about configuring and monitoring AIX 7, based on investigations of the AIX 7 beta compared to the original articles based on AIX 5L. The article covers the support for direct I/O, concurrent I/O, and asynchronous I/O, and best practices for each method of I/O implementation. This three-part series on the AIX® disk and I/O subsystem focuses on the challenges of optimizing disk I/O performance. While disk tuning is arguably less exciting than CPU or memory tuning, it is a crucial component in optimizing server performance. In fact, partly because disk I/O is your weakest subsystem link, you can do more to improve disk I/O performance than on any other subsystem.

Introduction
A critical component of disk I/O tuning involves implementing best practices prior to building your system. Because it is much more difficult to move things around when you are already up and running, it is extremely important that you do things right the first time when planning your disk and I/O subsystem environment. This includes the physical architecture, logical disk geometry, and logical volume and file system configuration. When a system administrator hears that there might be a disk contention issue, the first thing he or she turns to is iostat. iostat, the equivalent of using vmstat for your memory reports, is a quick and dirty way of getting an overview of what is currently happening on your I/O subsystem.
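For the long-term monitoring theme of the article, iostat output is often simply captured at an interval and kept for later analysis. Below is a hedged sketch (Python; it only assumes an iostat binary on the PATH and passes the interval/count arguments accepted by both the AIX and Linux versions; the log path and intervals are assumptions):

```python
#!/usr/bin/env python3
"""Capture periodic iostat snapshots with timestamps for later review.

Sketch only: assumes `iostat <interval> <count>` is available (true on AIX
and most Linux distributions); the log file location is illustrative.
"""
import subprocess
import time
from datetime import datetime

LOG_FILE = "/tmp/iostat_history.log"   # assumed location
INTERVAL_SECONDS = 300                 # one snapshot every 5 minutes
SAMPLES_PER_SNAPSHOT = 3               # `iostat 5 3`: three 5-second samples

def capture_once():
    out = subprocess.run(["iostat", "5", str(SAMPLES_PER_SNAPSHOT)],
                         capture_output=True, text=True, check=True).stdout
    with open(LOG_FILE, "a") as log:
        log.write(f"===== {datetime.now().isoformat(timespec='seconds')} =====\n")
        log.write(out + "\n")

if __name__ == "__main__":
    while True:
        capture_once()
        time.sleep(INTERVAL_SECONDS)
```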
  • Profiling Linux Operations for Performance and Troubleshooting
Profiling Linux Operations for Performance and Troubleshooting, by Tanel Põder, https://tanelpoder.com/ @tanelpoder

About me
• Tanel Põder: a database performance geek (23 years); before that a Unix/Linux geek (27 years)
• Oracle, Hadoop, Spark, cloud databases :)
• Focused on performance & troubleshooting; inventing & hacking stuff, consulting, training
• Co-author of the Expert Oracle Exadata book; co-founder & technical advisor at Gluent; 2 patents in the data virtualization space; working on a secret project ;-)
• Blog: tanelpoder.com; Twitter: twitter.com/TanelPoder; Questions: [email protected]

Agenda
1. A short intro to the Linux task state sampling method
2. Demos
3. More Demos
4. Always-on profiling of production systems

Preferring low-tech tools for high-tech problems
• Why? I do ad-hoc troubleshooting for different customers; there is no time to engineer a solution because the problem is already happening; troubleshooting spans a variety of servers, distros, and installations; old Linux distro/kernel versions; no permission to change anything (including enabling kernel tracing); sometimes no root access.
• Idea: ultra-low-footprint tools that get the most out of already enabled Linux instrumentation: the /proc filesystem! Low-tech tools aren't always "deep" enough or precise enough, but they are quick & easy to try out.

System-level metrics & thread state analysis: Let's sample the threads!

Application thread state analysis tools (these tools also sample and snapshot /proc files; /proc sampling complements, not replaces, other tools):
• Classic Linux tools: ps; top -> (htop, atop, nmon, …)
• Custom /proc sampling tools: 0x.tools pSnapper; 0x.tools xcapture; grep .
A simplified sketch of this /proc sampling idea follows below.
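The low-tech approach the slides describe, sampling thread states straight from /proc, can be illustrated in a few lines. The sketch below is not pSnapper or xcapture; it is an assumed, simplified illustration of the same idea: walk /proc/&lt;pid&gt;/task/&lt;tid&gt;/stat, read each thread's state letter, and summarize the counts (R = running, D = uninterruptible sleep, S = sleeping, and so on).

```python
#!/usr/bin/env python3
"""One sample of Linux thread states taken from the /proc filesystem.

Simplified illustration of /proc task-state sampling; it is not pSnapper
or xcapture, just the same basic idea in a few lines.
"""
import glob
from collections import Counter

def sample_thread_states():
    counts = Counter()
    for stat_path in glob.glob("/proc/[0-9]*/task/[0-9]*/stat"):
        try:
            with open(stat_path) as f:
                line = f.read()
        except OSError:
            continue  # thread exited between glob and read
        # Format: "pid (comm) state ..."; comm may contain spaces or parens,
        # so split on the *last* closing parenthesis.
        state = line.rsplit(")", 1)[1].split()[0]
        counts[state] += 1
    return counts

if __name__ == "__main__":
    for state, count in sample_thread_states().most_common():
        print(f"{state}: {count}")
    # Repeat this in a loop (e.g. once per second) to approximate where
    # threads spend their time: mostly R (on CPU) vs. D (waiting on I/O).
```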
  • AIX 6 Best Practice for SAS Enterprise Business Intelligence (SAS eBI) Users on IBM POWER6
AIX 6 best practice for SAS Enterprise Business Intelligence (SAS eBI) users on IBM POWER6
Joseph Pu, Alfredo Mendoza, Frank Bartucca, Harry Seifert, Frank Battaglia
IBM Corporation, ISV Business Strategy and Enablement, December 2008
© Copyright IBM Corporation, 2008

Table of contents
Abstract  1
Introduction  1
Guidance on configuring the AIX 6 kernel on IBM POWER6  2
AIX system general practices  2
Keep the Hardware Management Console (HMC) and microcode up to date  2
Use JFS2 on 64-bit AIX kernel  2
Enable the POWER simultaneous multithreading hardware  2
Use a Host Ethernet Adapter  2
It is not recommended that SASWORK or UTILLOC point to /var  3
Increase the value of maxuproc to prevent SAS processes from being abnormally terminated or delayed …
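Since the contents above call out maxuproc specifically, a small hedged check can confirm the current setting before SAS workloads ramp up. The sketch below assumes an AIX host where `lsattr -El sys0 -a maxuproc` is available; the 4096 threshold is an assumption for the example, not a value taken from the paper.

```python
#!/usr/bin/env python3
"""Report the AIX maxuproc setting and warn if it looks low for SAS use.

Sketch only: assumes an AIX host (lsattr is an AIX command); the warning
threshold below is illustrative, not a recommendation from the paper.
"""
import subprocess

ASSUMED_MINIMUM = 4096  # illustrative threshold only

def current_maxuproc():
    # Typical output: "maxuproc 16384 Maximum number of PROCESSES allowed per user True"
    out = subprocess.run(["lsattr", "-El", "sys0", "-a", "maxuproc"],
                         capture_output=True, text=True, check=True).stdout
    return int(out.split()[1])

if __name__ == "__main__":
    value = current_maxuproc()
    print(f"maxuproc = {value}")
    if value < ASSUMED_MINIMUM:
        print(f"WARNING: maxuproc below {ASSUMED_MINIMUM}; "
              "SAS processes may be terminated or delayed")
```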
  • Linux System Administration, Paul Cobbaut (lt-1.9)
Linux System Administration, Paul Cobbaut, lt-1.9. Publication date: Fri 17 Apr 2015 01:01:06 AM CEST

Abstract: This book is meant to be used in an instructor-led training. For self-study, the intent is to read this book next to a working Linux computer so you can immediately do every subject, practicing each command. This book is aimed at novice Linux system administrators (and might be interesting and useful for home users that want to know a bit more about their Linux system). However, this book is not meant as an introduction to Linux desktop applications like text editors, browsers, mail clients, multimedia or office applications. More information and free .pdf available at http://linux-training.be. Feel free to contact the author:
• Paul Cobbaut: [email protected], http://www.linkedin.com/in/cobbaut

Contributors to the Linux Training project are:
• Serge van Ginderachter: [email protected], build scripts and infrastructure setup
• Ywein Van den Brande: [email protected], license and legal sections
• Hendrik De Vloed: [email protected], buildheader.pl script

We'd also like to thank our reviewers:
• Wouter Verhelst: [email protected], http://grep.be
• Geert Goossens: [email protected], http://www.linkedin.com/in/geertgoossens
• Elie De Brauwer: [email protected], http://www.de-brauwer.be
• Christophe Vandeplas: [email protected], http://christophe.vandeplas.com
• Bert Desmet: [email protected], http://blog.bdesmet.be
• Rich Yonts: [email protected]

Copyright 2007-2015 Paul Cobbaut. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
  • Nmon Performance Monitor Splunk App for Unix and Linux Systems Documentation Release 1.9.0
Nmon Performance monitor Splunk app for Unix and Linux systems Documentation, Release 1.9.0. Guilhem Marchand, Nov 06, 2019.

Contents
1 Overview:  3
1.1 About Nmon Performance monitor for Splunk  3
1.2 Release notes  6
1.3 Known Issues  39
1.4 Support  39
1.5 Issues and enhancement requests  40
1.6 Scripts and Binaries  41
1.7 licence  42
2 Documentation:  43
2.1 Introduction  43
2.2 Deployment Matrix  45
2.3 Deployment topologies  46
2.4 Download  52
2.5 Running on Windows  53
2.6 Deploy to single server instance  54
2.7 Deploy to distributed deployment  57
2.8 Deploying Nmon Performance Monitor in SH Clusters  65
2.9 Deploy to Splunk Cloud  65
2.10 Managing Nmon Central Repositories  66
2.11 Eventgen testing  68
2.12 Upgrade  70
2.13 Splunk HEC / nmon-logger deployment …